Thread: RE: [GENERAL] urgent: problems with query_limit

RE: [GENERAL] urgent: problems with query_limit

From: Marcus Mascari
Date:
What I would like to see (perhaps it's possible
right now, or a workaround is available) is a method
for yielding an elog() message when the number of
tuples to be returned is over a specified USER limit.

The thrust of this issue is that we are rolling out
a PostgreSQL database which is driven by a
Web environment for most users.  However, some users
will have ODBC access to some of the tables.  One
of these tables has more than a million rows and is
a totally denormalized "sales" table for all
transactions performed.  It is the main table
the various managers will be attacking with ODBC.

However, what we would like to prevent is some
new user, unfamiliar with how to design aggregate
and restrictive queries, from doing a SELECT * FROM sales
and having 1 gig of data shipped across the network
to their Excel, Access, or Crystal Report Writer
application. Instead, we would like to generate
an elog() error in the backend like:

"You are allowed to return a maximum of 20,000 rows."

or some such constraint, as a per-user attribute. We
don't want to restrict the tuples which can
participate in their queries, just the number of rows
returned. So, SELECT SUM(price) FROM sales, would
be okay.  What we don't want is a query limit which,
when reached, silently returns only that many rows
without the user knowing they didn't get all the
data they requested.

Any hints or tips to achieve this functionality?
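
Something along these lines, done client-side through a
cursor, would come close, but only if every application
cooperates (the cursor name and the 20,000 figure are
illustrative; the application has to check the probe
fetch itself and raise the error):

    BEGIN;
    DECLARE guarded CURSOR FOR SELECT * FROM sales;
    -- hand at most the per-user limit to the application
    FETCH 20000 FROM guarded;
    -- probe for one extra row; a non-empty result means
    -- the query exceeded the limit, so discard and complain
    FETCH 1 FROM guarded;
    CLOSE guarded;
    END;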

Marcus "Mike" Mascari
(mascarim@yahoo.com)

--- "Jackson, DeJuan" <djackson@cpsgroup.com> wrote:
> Try: select * from table LIMIT 100;
>
> > Hi
> >
> > We upgraded our system from 6.4 to the new 6.5
> > version.  The set query_limit function is not
> > working anymore in 6.5.
> >
> > db => set query_limit to '100';
> > SET VARIABLE
> > db => select * from table;
> >
> > statement is returning all records from the table.
> > What's wrong here?
> >
> > Herbie



Re: [HACKERS] RE: [GENERAL] urgent: problems with query_limit

From: wieck@debis.com (Jan Wieck)
Date:
Marcus "Mike" Mascari wrote:

> What I would like to see (perhaps it's possible
> right now, or a workaround is available) is a method
> for yielding an elog() message when the number of
> tuples to be returned is over a specified USER limit.
>
> [...]
>
> However, what we would like to prevent is some
> new user, unfamiliar with how to design aggregate
> and restrictive queries, from doing a SELECT * FROM sales
> and having 1 gig of data shipped across the network
> to their Excel, Access, or Crystal Report Writer
> application. Instead, we would like to generate
> an elog() error in the backend like:
>
> "You are allowed to return a maximum of 20,000 rows."

    I don't think that this is easy to implement, because it
    is in conflict with the general design of the PostgreSQL
    optimizer/executor combo.

    If possible (no sorting or grouping), the execution plan
    is set up so that the data is fed directly from the table
    to the client.  So by the time the maximum is exceeded,
    the client has already received that many rows.
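
    Note that the LIMIT clause, the closest built-in tool in
    6.5, truncates silently - exactly the behavior to avoid
    here (the 20,000 figure is illustrative):

        SELECT * FROM sales LIMIT 20000;
        -- returns at most 20,000 rows, with no error or
        -- notice that more rows matched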

    And how should the database know how many rows it WILL
    send before collecting them all?  Well, in the case of a
    simple "SELECT * FROM sales" it could look at the
    statistics.  But for a simple join between "salesorg" and
    "sales" it is only possible to guess how many rows it
    MIGHT produce (the optimizer uses these guesses to decide
    the join order and where indexes might be helpful).  But
    the exact number of rows is only known after the complete
    execution of the plan - and by then the rows have already
    been sent to the client.
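
    For instance, EXPLAIN reports the optimizer's row estimate
    without running the query (a sketch; the exact output and
    the numbers shown are illustrative of the 6.5-era format):

        EXPLAIN SELECT * FROM sales;
        NOTICE:  QUERY PLAN:

        Seq Scan on sales  (cost=40000.00 rows=1000000 width=120)

    The "rows" figure is only an estimate from the statistics,
    not a guarantee of what the query will actually return.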


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#========================================= wieck@debis.com (Jan Wieck) #