Thread: Re: [HACKERS] What about LIMIT in SELECT ?

Re: [HACKERS] What about LIMIT in SELECT ?

From:
"Jeff Hoffmann"
Date:
>Hi, my 2 cents...
>
>I agree completely, LIMIT would be VERY useful in web based apps, which
>is all I run.  It does not matter to me if it is not part of a formal
>standard.  The idea is so common that it is a de facto standard.

i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
be achieved by declaring a cursor and fetching however many records in the
cursor?  it's a very noticeable improvement when you only want the first 20
out of 500 in a 200k record database, at least.
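
A minimal sketch of the cursor idiom being suggested, with hypothetical
names bigtab and key (the same idiom appears later in this thread):

        BEGIN;
        DECLARE c CURSOR FOR SELECT * FROM bigtab ORDER BY key;
        FETCH 20 IN c;
        CLOSE c;
        COMMIT;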

jeff


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Eric Lee Green
Date:
On Tue, 13 Oct 1998, Jeff Hoffmann wrote:
> >I agree completely, LIMIT would be VERY useful in web based apps, which
> >is all I run.  It does not matter to me if it is not part of a formal
> >standard.  The idea is so common that it is a de facto standard.
>
> i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
> be achieved by declaring a cursor and fetching however many records in the
> cursor?  it's a very noticeable improvement when you only want the first 20
> out of 500 in a 200k record database, at least.

The problem with declaring a cursor vs. the "LIMIT" clause is that the
"LIMIT" clause, if used properly by the database engine (along with the
database engine using indexes in "ORDER BY" clauses), allows the database
engine to short-circuit the tail end of the query. That is, if you have 25
names and the last one ends with BEAVIS, the database engine doesn't have
to go through the BUTTHEADS and KENNYs etc.
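
For illustration, the short-circuit described here might apply to a
query like this (customers is a hypothetical table with an index on
name); the engine can stop after reading 25 index entries instead of
fetching and sorting the rest:

        SELECT name FROM customers ORDER BY name LIMIT 25;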

Theoretically a cursor is superior to the "LIMIT" clause because you're
eventually going to want the B's and K's etc. anyhow -- but only in a
stateful environment. In the stateless web environment, a cursor is
useless because the connection can close at any time even when you're
using "persistent" connections (and of course when the connection closes
the cursor closes).

I wanted very badly to use PostgreSQL for a web project I'm working on,
but it just wouldn't do the job :-(.

--
Eric Lee Green         eric@linux-hw.com     http://www.linux-hw.com/~eric
"To call Microsoft an innovator is like calling the Pope Jewish ..."
            -- James Love (Consumer Project on Technology)


Re: [HACKERS] What about LIMIT in SELECT ?

From:
"Marc G. Fournier"
Date:
On Tue, 13 Oct 1998, Eric Lee Green wrote:

> On Tue, 13 Oct 1998, Jeff Hoffmann wrote:
> >I agree completely, LIMIT would be VERY useful in web based apps, which
> >is all I run.  It does not matter to me if it is not part of a formal
> >standard.  The idea is so common that it is a de facto standard.
> >
> > i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
> > be achieved by declaring a cursor and fetching however many records in the
> > cursor?  it's a very noticeable improvement when you only want the first 20
> > out of 500 in a 200k record database, at least.
>
> The problem with declaring a cursor vs. the "LIMIT" clause is that the
> "LIMIT" clause, if used properly by the database engine (along with the
> database engine using indexes in "ORDER BY" clauses), allows the database
> engine to short-circuit the tail end of the query. That is, if you have 25
> names and the last one ends with BEAVIS, the database engine doesn't have
> to go through the BUTTHEADS and KENNYs etc.
>
> Theoretically a cursor is superior to the "LIMIT" clause because you're
> eventually going to want the B's and K's etc. anyhow -- but only in a
> stateful environment. In the stateless web environment, a cursor is
> useless because the connection can close at any time even when you're
> using "persistent" connections (and of course when the connection closes
> the cursor closes).

Ookay, I'm sorry, but you lost me here.  I haven't gotten into using
CURSORs/FETCHs yet, since I haven't needed it...but can you give an example
of what you would want to do using a LIMIT?  I may be missing something,
but what is the difference between using LIMIT to get X records, and
defining a cursor to FETCH X records?

Practical example of *at least* the LIMIT side would be good, so that we
can at least see a physical example of what LIMIT can do that
CURSORs/FETCH can't...

Marc G. Fournier                               scrappy@hub.org
Systems Administrator @ hub.org
scrappy@{postgresql|isc}.org                       ICQ#7615664


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Bruce Momjian
Date:
> Theoretically a cursor is superior to the "LIMIT" clause because you're
> eventually going to want the B's and K's etc. anyhow -- but only in a
> stateful environment. In the stateless web environment, a cursor is
> useless because the connection can close at any time even when you're
> using "persistent" connections (and of course when the connection closes
> the cursor closes).
>
> I wanted very badly to use PostgreSQL for a web project I'm working on,
> but it just wouldn't do the job :-(.

See my other posting mentioning the FAQ item on this subject.  If you
are going after only one table (no joins), and have no ORDER BY, we could
short-circuit the evaluation, but how many queries could use LIMIT in
that case?  Zero, I think.

What we could do is _if_ there is only one table (no joins), and an index
exists that matches the ORDER BY, we could use the index to
short-circuit the query.

I have added this item to the TODO list:

* Allow LIMIT ability on single-table queries that have no ORDER BY or
        a matching index

This looks do-able, and a real win.  Would this make web applications
happier?  If there is an ORDER BY and no index, or a join, I can't
figure out how we would short-circuit the query.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Bruce Momjian
Date:
> Ookay, I'm sorry, but you lost me here.  I haven't gotten into using
> CURSORs/FETCHs yet, since I haven't needed it...but can you give an example
> of what you would want to do using a LIMIT?  I may be missing something,
> but what is the difference between using LIMIT to get X records, and
> defining a cursor to FETCH X records?
>
> Practical example of *at least* the LIMIT side would be good, so that we
> can at least see a physical example of what LIMIT can do that
> CURSORs/FETCH can't...

My guess is that in a web application the transaction is started for
every new page, so you can't have transactions spanning SQL sessions.

LIMIT theoretically would allow you to start up where you left off.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Eric Lee Green
Date:
On Tue, 13 Oct 1998, Marc G. Fournier wrote:
> On Tue, 13 Oct 1998, Eric Lee Green wrote:
> > Theoretically a cursor is superior to the "LIMIT" clause because you're
> > eventually going to want the B's and K's etc. anyhow -- but only in a
> > stateful environment. In the stateless web environment, a cursor is
> > useless because the connection can close at any time even when you're
> Ookay, I'm sorry, but you lost me here.  I haven't gotten into using
> CURSORs/FETCHs yet, since I haven't needed it...but can you give an example
> of what you would want to do using a LIMIT?  I may be missing something,

Whoops! Sorry, I goofed in my post (typing  faster than my brain :-).
What I *MEANT* to say was that this superiority of cursors was not
applicable in a web environment.

> but what is the difference between using LIMIT to get X records, and
> defining a cursor to FETCH X records?

From a logical point of view, none. From an implementation point of
view, it is a matter of speed. Declaring a cursor four times, doing a
query four times, and fetching X records four times takes more time
than just doing a query with a LIMIT clause four times (assuming your
query results in four screenfuls of records).
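
As a concrete sketch of the per-page work being compared (customers is a
hypothetical table; LIMIT offset,count is MySQL's syntax):

        -- page 2 via a cursor: each CGI invocation re-runs the whole
        -- query, then throws away every row it doesn't display
        BEGIN;
        DECLARE c CURSOR FOR SELECT name FROM customers ORDER BY name;
        FETCH 25 IN c;   -- page 1's rows, fetched only to be skipped
        FETCH 25 IN c;   -- the 25 rows actually displayed
        CLOSE c;
        COMMIT;

        -- page 2 via MySQL-style LIMIT: one statement, and the engine
        -- may stop scanning once 50 index entries have been read
        SELECT name FROM customers ORDER BY name LIMIT 25, 25;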

> Practical example of *at least* the LIMIT side would be good, so that we
> can at least see a physical example of what LIMIT can do that
> CURSORs/FETCH can't...

You can do everything with CURSORs/FETCH that you can do with LIMIT.
In a non-web environment, where you have stateful connections, a FETCH
is always going to be faster than a SELECT...LIMIT statement. (Well,
it would be if implemented correctly, but I'll leave that to others to
haggle over). However: In a CGI-type environment, cursors are a huge
performance drain because in the example above you end up doing this
huge query four times, with its results stored in the cursor four
times, and only a few values are ever fetched from the cursor before it
is destroyed by the end of the CGI script.

Whereas with the SELECT...LIMIT paradigm, the database engine does NOT
process the entire huge query; it quits processing once it reaches the
limit.  (Well, at least MySQL does so, if you happen to be using an
"ORDER BY" supported by an index). Obviously doing 1/4th the work four times
is better than doing the whole tamale four times :-}.

--
Eric Lee Green         eric@linux-hw.com     http://www.linux-hw.com/~eric
"To call Microsoft an innovator is like calling the Pope Jewish ..."
            -- James Love (Consumer Project on Technology)


Re: [HACKERS] What about LIMIT in SELECT ?

From:
"Marc G. Fournier"
Date:
On Tue, 13 Oct 1998, Eric Lee Green wrote:

> Whoops! Sorry, I goofed in my post (typing  faster than my brain :-).
> What I *MEANT* to say was that this superiority of cursors was not
> applicable in a web environment.

    S'alright...now please back up your statement with the *why*...

> > but what is the difference between using LIMIT to get X records, and
> > defining a cursor to FETCH X records?
>
> From a logical point of view, none. From an implementation point of
> view, it is a matter of speed. Declaring a cursor four times, doing a
> query four times, and fetching X records four times takes more time
> than just doing a query with a LIMIT clause four times (assuming your
> query results in four screenfuls of records).

    I'm going to be brain-dead here, since, as I've disclaimered
before, I've not used CURSORs/FETCHs as of yet...one person came back
already and stated that, for him, CURSOR/FETCH results were near
instantaneous with a 167k+ table...have you tested the two to ensure that,
in fact, one is/isn't faster than the other?

Marc G. Fournier                               scrappy@hub.org
Systems Administrator @ hub.org
scrappy@{postgresql|isc}.org                       ICQ#7615664


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Eric Lee Green
Date:
On Tue, 13 Oct 1998, Bruce Momjian wrote:
> > Theoretically a cursor is superior to the "LIMIT" clause because you're
> > eventually going to want the B's and K's etc. anyhow -- but only in a
> > stateful environment. In the stateless web environment, a cursor is
> > useless because the connection can close at any time even when you're
> > using "persistent" connections (and of course when the connection closes
> What we could do is _if_ there is only one table (no joins), and an index
> exists that matches the ORDER BY, we could use the index to
> short-circuit the query.

This is exactly what MySQL does in this situation, except that it can use
the ORDER BY to do the short-circuiting even if there is a join involved,
provided all of the elements of the ORDER BY belong to one table. Obviously if
I'm doing an "ORDER BY table1.foo, table2.bar" that isn't going to work!
But "select table1.fsname, table1.lname, table2.receivables from table1, table2
where table2.receivables > 0 and table1.custnum = table2.custnum order by
table1.lname, table1.fsname limit 50" can be short-circuited by fiddling
with the join order -- table1.lname and table1.fsname have to be the first two
things in the join order.

Whether this is feasible in PostgreSQL I have no earthly idea. This would
seem to conflict with the join optimizer.

> happier?  If there is an ORDER BY and no index, or a join, I can't
> figure out how we would short-circuit the query.

If there is an ORDER BY and no index you can't short-circuit the query.
MySQL doesn't either. Under certain circumstances (such as above) you can
short-circuit a join, but it's unclear whether it'd be easy to add such
a capability to PostgreSQL given the current structure of the query
optimizer. (And I certainly am not in a position to tackle it; at the
moment MySQL is sufficing for my project despite the fact that it is
quite limited compared to PostgreSQL, and I need to get my project finished
first.)

--
Eric Lee Green         eric@linux-hw.com     http://www.linux-hw.com/~eric
"To call Microsoft an innovator is like calling the Pope Jewish ..."
            -- James Love (Consumer Project on Technology)


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Eric Lee Green
Date:
On Tue, 13 Oct 1998, Marc G. Fournier wrote:
> On Tue, 13 Oct 1998, Eric Lee Green wrote:
> > Whoops! Sorry, I goofed in my post (typing  faster than my brain :-).
> > What I *MEANT* to say was that this superiority of cursors was not
> > applicable in a web environment.
>
>     S'alright...now please back up your statement with the *why*...

Okay. It is because CGI is a stateless environment. You cannot just keep a
cursor open and walk up and down it, which is the superiority of cursors
(it is always faster to walk up and down a pre-SELECT'ed list than it is
to perform additional SELECTs). You have to destroy it upon exiting the
CGI script (which presumably just fetched 25 items or so to display on
an HTML page -- think DejaNews).

Creating a cursor and destroying a cursor take time. Less time, in a
normal environment, than it would take to make multiple SELECT statements,
which is the superiority of cursors in a normal environment. But, like I
said, CGI isn't normal -- the CGI script exits at the end of displaying 25
items, at which point the cursor is destroyed, thus destroying any benefit
you could have gotten while adding additional overhead.

In addition there is the possibility of short-circuiting the SELECT if
there is a LIMIT clause and there is no ORDER BY clause or the ORDER BY
clause is walking down an index (the latter being a possibility only if
there is no 'join' involved or if the 'join' is simple enough that it can
be done without running afoul of the join optimizer). Cursors, by their
nature, require performing the entire tamale first.

> > From a logical point of view, none. From an implementation point of
> > view, it is a matter of speed. Declaring a cursor four times, doing a
>
> already and stated that, for him, CURSOR/FETCH results were near
> instantaneous with a 167k+ table...have you tested the two to ensure that,
> in fact, one is/isn't faster than the other?

CURSOR/FETCH *SHOULD* be nearly instantaneous, because you're merely
fetching values from a pre-existing query result. As I said, in normal
(non-CGI) use, a cursor is FAR superior to a "LIMIT" clause.

But the question of whether declaring a cursor four times and destroying
it four times takes a sizable amount of time compared to a LIMIT
clause... it's really hard to test, unfortunately, due to the differing
natures of MySQL and PostgreSQL. MySQL starts up a connection very fast
while PostgreSQL takes awhile (has anybody done work on the "herd of
servers"  concept to tackle that?). It is hard, in a CGI environment, to
determine if the poor speed (in terms of number of hits the server can
take) is due to the slow connection startup time or due to the cursor
overhead. I could write a benchmark program that kept the connection open
and did just the cursor timings, but I'm not particularly motivated.

I think RMS has a point when he decries the fact that non-free software is
becoming more available for Linux (MySQL is definitely non-free)... i.e.,
that it takes away people's motivation to improve the free software. The
only good part there is that MySQL is hardly suitable for normal database
work -- it is very much optimized for web serving and other applications
of that sort where speed and CGI-friendliness are more important than
functionality.

--
Eric Lee Green         eric@linux-hw.com     http://www.linux-hw.com/~eric
"To call Microsoft an innovator is like calling the Pope Jewish ..."
            -- James Love (Consumer Project on Technology)


[HACKERS] Alternative to LIMIT in SELECT ?

From:
Marc Howard Zuckman
Date:
I can't speak to the relative efficiencies of the methods, but I do
perform queries that present data subsets to web browsers using postgresql
with the following method:

    1) collect data input; do cgi query; write tuples to temporary file
    2) html page index sent back to browser contains page-specific
        references to temporary file name and tuple range.
    3) Subsequent data retrievals reference temporary file using sed and
        tuple range
    4) temporary file is destroyed 15 min after last access time by a
        background process.
This consumes disk space, but I assume it conserves memory compared to
a cursor/fetch sequence performed in a persistent db connection.
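
A minimal sketch of the retrieval step (3), assuming one tuple per line
in the temporary file and a hypothetical file name:

        # return tuples 26-50 (page 2) of the cached result
        sed -n '26,50p' /tmp/results/session1234.tup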

For a general purpose query, I'm not sure if there is any other
alternative to this method unless you wish to reperform the query
for each retrieved html page.

Marc Zuckman
marc@fallon.classyad.com

_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
_     Visit The Home and Condo MarketPlace              _
_          http://www.ClassyAd.com                  _
_                                  _
_  FREE basic property listings/advertisements and searches.  _
_                                  _
_  Try our premium, yet inexpensive services for a real          _
_   selling or buying edge!                      _
_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Bruce Momjian
Date:
> Whereas with the SELECT...LIMIT paradigm, the database engine does NOT
> process the entire huge query; it quits processing once it reaches the
> limit.  (Well, at least MySQL does so, if you happen to be using an
> "ORDER BY" supported by an index). Obviously doing 1/4th the work four times
> is better than doing the whole tamale four times :-}.

And no join, I assume.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
Tatsuo Ishii
Date:
This might be off-topic, but...

I've found ExecutorLimit() (in executor/execMain.c) is useful for me
especially when issuing an ad hoc query via psql. I personally use
the function with a customized set command.

set query_limit to 'num';

    limit the max number of results returned by the backend

show query_limit;

    display the current query limit

reset query_limit;

    disable the query limit (unlimited number of results allowed)
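
For instance, a psql session using these commands might look like the
following (bigtab and key are hypothetical names, and the backend must
be built with this query_limit support compiled in):

        set query_limit to '20';
        select * from bigtab order by key;  -- at most 20 rows come back
        show query_limit;                   -- NOTICE:  query limit is 20
        reset query_limit;                  -- unlimited again
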
--
Tatsuo Ishii
t-ishii@sra.co.jp

Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
"Thomas G. Lockhart"
Date:
> I've found ExecutorLimit() (in executor/execMain.c) is useful for me
> especially when issuing an ad hoc query via psql. I personally use
> the function with a customized set command.

Looks interesting. So where are the patches? :)

                    - Tom

Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
Tatsuo Ishii
Date:
>> I've found ExecutorLimit() (in executor/execMain.c) is useful for me
>> especially when issuing an ad hoc query via psql. I personally use
>> the function with a customized set command.
>
>Looks interesting. So where are the patches? :)

I'll post patches within 24 hours :-)
--
Tatsuo Ishii
t-ishii@sra.co.jp

Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
Tatsuo Ishii
Date:
>>> I've found ExecutorLimit() (in executor/execMain.c) is useful for me
>>> especially when issuing an ad hoc query via psql. I personally use
>>> the function with a customized set command.
>>
>>Looks interesting. So where are the patches? :)
>
>I'll post patches within 24 hours :-)

Here it is.
--
Tatsuo Ishii
t-ishii@sra.co.jp
----------------------------------------------------------------
*** backend/commands/variable.c.orig    Fri Oct  9 09:56:51 1998
--- backend/commands/variable.c    Wed Oct 14 13:06:15 1998
***************
*** 18,23 ****
--- 18,27 ----
  #ifdef MULTIBYTE
  #include "mb/pg_wchar.h"
  #endif
+ #ifdef QUERY_LIMIT
+ #include "executor/executor.h"
+ #include "executor/execdefs.h"
+ #endif

  static bool show_date(void);
  static bool reset_date(void);
***************
*** 40,45 ****
--- 44,54 ----
  static bool show_ksqo(void);
  static bool reset_ksqo(void);
  static bool parse_ksqo(const char *);
+ #ifdef QUERY_LIMIT
+ static bool show_query_limit(void);
+ static bool reset_query_limit(void);
+ static bool parse_query_limit(const char *);
+ #endif

  extern Cost _cpu_page_wight_;
  extern Cost _cpu_index_page_wight_;
***************
*** 546,551 ****
--- 555,600 ----
  }    /* reset_timezone() */

  /*-----------------------------------------------------------------------*/
+ #ifdef QUERY_LIMIT
+ static bool
+ parse_query_limit(const char *value)
+ {
+   int32 limit;
+
+   if (value == NULL) {
+     reset_query_limit();
+     return(TRUE);
+   }
+   limit = pg_atoi(value, sizeof(int32), '\0');
+   if (limit <= -1) {
+     elog(ERROR, "Bad value for # of query limit (%s)", value);
+   }
+   ExecutorLimit(limit);
+   return(TRUE);
+ }
+
+ static bool
+ show_query_limit(void)
+ {
+   int limit;
+
+   limit = ExecutorGetLimit();
+   if (limit == ALL_TUPLES) {
+     elog(NOTICE, "No query limit is set");
+   } else {
+     elog(NOTICE, "query limit is %d",limit);
+   }
+   return(TRUE);
+ }
+
+ static bool
+ reset_query_limit(void)
+ {
+   ExecutorLimit(ALL_TUPLES);
+   return(TRUE);
+ }
+ #endif
+ /*-----------------------------------------------------------------------*/
  struct VariableParsers
  {
      const char *name;
***************
*** 584,589 ****
--- 633,643 ----
      {
          "ksqo", parse_ksqo, show_ksqo, reset_ksqo
      },
+ #ifdef QUERY_LIMIT
+     {
+         "query_limit", parse_query_limit, show_query_limit, reset_query_limit
+     },
+ #endif
      {
          NULL, NULL, NULL, NULL
      }
*** backend/executor/execMain.c.orig    Thu Oct  1 11:03:58 1998
--- backend/executor/execMain.c    Wed Oct 14 11:24:06 1998
***************
*** 83,94 ****
  #undef ALL_TUPLES
  #define ALL_TUPLES queryLimit

- int            ExecutorLimit(int limit);
-
  int
  ExecutorLimit(int limit)
  {
      return queryLimit = limit;
  }

  #endif
--- 83,98 ----
  #undef ALL_TUPLES
  #define ALL_TUPLES queryLimit

  int
  ExecutorLimit(int limit)
  {
      return queryLimit = limit;
+ }
+
+ int
+ ExecutorGetLimit()
+ {
+     return queryLimit;
  }

  #endif
*** include/executor/executor.h.orig    Fri Oct  9 10:02:07 1998
--- include/executor/executor.h    Wed Oct 14 11:24:07 1998
***************
*** 86,91 ****
--- 86,95 ----
  extern TupleTableSlot *ExecutorRun(QueryDesc *queryDesc, EState *estate, int feature, int count);
  extern void ExecutorEnd(QueryDesc *queryDesc, EState *estate);
  extern HeapTuple ExecConstraints(char *caller, Relation rel, HeapTuple tuple);
+ #ifdef QUERY_LIMIT
+ extern int ExecutorLimit(int limit);
+ extern int ExecutorGetLimit(void);
+ #endif

  /*
   * prototypes from functions in execProcnode.c

Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
Bruce Momjian
Date:
Applied, with one question.


> >>> I've found ExecutorLimit() (in executor/execMain.c) is useful for me
> >>> especially when issuing an ad hoc query via psql. I personally use
> >>> the function with a customized set command.
> >>
> >>Looks interesting. So where are the patches? :)
> >
> >I'll post patches within 24 hours :-)
>
> Here it is.
> + #ifdef QUERY_LIMIT
> + static bool
> + parse_query_limit(const char *value)
> + {
> +   int32 limit;
> +
> +   if (value == NULL) {
> +     reset_query_limit();
> +     return(TRUE);
> +   }

Any idea how 'value' could be null?  I could not see how that would
happen.  I can see how GEQO could have a NULL when you say ON, and no
value.  Same with rplans.



--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
Tatsuo Ishii
Date:
>> + #ifdef QUERY_LIMIT
>> + static bool
>> + parse_query_limit(const char *value)
>> + {
>> +   int32 limit;
>> +
>> +   if (value == NULL) {
>> +     reset_query_limit();
>> +     return(TRUE);
>> +   }
>
>Any idea how 'value' could be null?  I could not see how that would
>happen.

Not sure. I just followed what the other set commands are doing.

>I can see how GEQO could have a NULL when you say ON, and no
>value.  Same with rplans.

Hmm... I think in that case, 'value' would be 'ON', not NULL. Right?
--
Tatsuo Ishii
t-ishii@sra.co.jp

Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
"Thomas G. Lockhart"
Date:
> >> +   if (value == NULL) {
> >> +     reset_query_limit();
> >> +     return(TRUE);
> >> +   }
> >Any idea how 'value' could be null?  I could not see how that would
> >happen.
> Not sure. I just followed what the other set commands are doing.

This is how RESET is implemented.

                    - Tom

Re: [HACKERS] What about LIMIT in SELECT ?

From:
"Matthew N. Dodd"
Date:
On Tue, 13 Oct 1998, Bruce Momjian wrote:
> My guess is that in a web application the transaction is started for
> every new page, so you can't have transactions spanning SQL sessions.
>
> LIMIT theoretically would allow you to start up where you left off.

It really does depend largely on the architecture of the website, doesn't
it?

LIMIT probably allows web site developers a quick and dirty way to do what
should properly be done with a web-DB proxy.  I seem to remember mod_perl
having a solution for this sort of thing.

--
| Matthew N. Dodd  | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |
| winter@jurai.net |      This Space For Rent     | ix86,sparc,m68k,pmax,vax  |
| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage?   |


Re: [HACKERS] What about LIMIT in SELECT ?

From:
"Matthew N. Dodd"
Date:
On Tue, 13 Oct 1998, Eric Lee Green wrote:
> it's really hard to test, unfortunately, due to the differing natures
> of MySQL and PostgreSQL. MySQL starts up a connection very fast while
> PostgreSQL takes awhile (has anybody done work on the "herd of
> servers"  concept to tackle that?).

Is MySQL really all that much faster?  I've got a large number of CLI
utilities that pull data from a DB on a central server and I'm lagging on
scroll speed in my xterms for the most part.  I've yet to see any
measurable lag in connection setup.

My hardware isn't all that fast either.  (Ultra5 client, Ultra1/170E
server.)

--
| Matthew N. Dodd  | 78 280Z | 75 164E | 84 245DL | FreeBSD/NetBSD/Sprite/VMS |
| winter@jurai.net |      This Space For Rent     | ix86,sparc,m68k,pmax,vax  |
| http://www.jurai.net/~winter | Are you k-rad elite enough for my webpage?   |


Re: [HACKERS] What about LIMIT in SELECT ?

From:
jwieck@debis.com (Jan Wieck)
Date:
Eric Lee Green wrote:
>
> On Tue, 13 Oct 1998, Jeff Hoffmann wrote:
> > >I agree completely, LIMIT would be VERY useful in web based apps, which
> > >is all I run.  It does not matter to me if it is not part of a formal
> > >standard.  The idea is so common that it is a de facto standard.
> >
> > i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
> > be achieved by declaring a cursor and fetching however many records in the
> > cursor?  it's a very noticeable improvement when you only want the first 20
> > out of 500 in a 200k record database, at least.
>
> The problem with declaring a cursor vs. the "LIMIT" clause is that the
> "LIMIT" clause, if used properly by the database engine (along with the
> database engine using indexes in "ORDER BY" clauses), allows the database
> engine to short-circuit the tail end of the query. That is, if you have 25
> names and the last one ends with BEAVIS, the database engine doesn't have
> to go through the BUTTHEADS and KENNYs etc.
>
> Theoretically a cursor is superior to the "LIMIT" clause because you're
> eventually going to want the B's and K's etc. anyhow -- but only in a
> stateful environment. In the stateless web environment, a cursor is
> useless because the connection can close at any time even when you're
> using "persistent" connections (and of course when the connection closes
> the cursor closes).

    I'm  missing something. Well it's right that in the stateless
    web environment a cursor has to be declared  and  closed  for
    any  single  CGI  call.  But even if you have a LIMIT clause,
    your CGI must know with which key to start.

    So your query must look like

        SELECT ... WHERE key > 'last processed key' ORDER BY key;

    And your key must be unique (or at least contain no duplicate
    entries)  or you might miss some rows between the pages (have
    100 Brown's in the table and last processed key was  a  Brown
    while using LIMIT).

    In  postgres you could actually do the following (but read on
    below - it's not optimized correctly)

        BEGIN;
        DECLARE c CURSOR FOR SELECT ... WHERE key > 'last' ORDER BY key;
        FETCH 20 IN c;
        (process the 20 rows in CGI)
        CLOSE c;
        COMMIT;

    Having LIMIT looks more elegant and has less overhead in CGI-
    backend   communication.   But  the  cursor  version  is  SQL
    standard and portable.

>
> I wanted very badly to use PostgreSQL for a web project I'm working on,
> but it just wouldn't do the job :-(.

    I've done some tests and what I found out might be a  bug  in
    PostgreSQL's  query  optimizer.  Having a table with 25k rows
    where key is a text field with a unique  index.  Now  I  used
    EXPLAIN for some queries

        SELECT * FROM tab;

        results in a seqscan - expected.

        SELECT * FROM tab ORDER BY key;

        results in a sort->seqscan - I would have
        expected an indexscan!

        SELECT * FROM tab WHERE key > 'G';

        results in an indexscan - expected.

        SELECT * FROM tab WHERE key > 'G' ORDER BY key;

        results in a sort->indexscan - hmmm.

    These  results  stay  the same even if I blow up the table by
    duplicating all rows (now with a non-unique  index)  to  100k
    rows and have them presorted in the table.

    Needless to say that everything is vacuum'd for statistics.

    The  last  one  is  the  query  we  would  need  in  the  web
    environment used over a cursor as in the example  above.  But
    due  to  the  sort,  the backend selects until the end of the
    table, sorts them and then returns only  the  first  20  rows
    (out of the sort's result).

    This  is very painful if the qualification (key > ...) points
    to the beginning of the key list.

    Looking at planner.c I can see that if there is a sortClause
    in  the  parsetree,  the planner creates a sort node and does
    not check at all if there is an index that could be  used
    to  do  it.  In  the  examples  above, the sort is absolutely
    needless because the  index  scan  will  already  return  the
    tuples in the right order :-).

    Somewhere deep in my brain I found a statement that sorting
    sorted data isn't only unnecessary (except if the order
    changes), it is also slower than sorting randomly
    ordered data.

    Can we fix this before the 6.4 release, will it be a post-6.4
    item, or am I doing something wrong here? I think it isn't a fix
    (it's a planner enhancement) so it should  really  be  a  post-6.4
    item.

    For  now, the only possibility is to omit the ORDER BY in the
    query and hope the planner will always generate an index scan
    (because  of  the  qualification  'key  >  ...').  Doing so I
    selected multiple times 20 rows (with the last key qual  like
    a  CGI  would do) in separate transactions.  Using cursor and
    fetch speeds up the access by a factor of 1000!   But  it  is
    unsafe  and thus NOT RECOMMENDED! It's only a test if cursors
    can do the LIMIT job - and they could if the planner would do
    a better job.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

Re: [HACKERS] What about LIMIT in SELECT ?

From:
Oleg Bartunov
Date:
On Wed, 14 Oct 1998, Jan Wieck wrote:

> Date: Wed, 14 Oct 1998 13:09:21 +0200 (MET DST)
> From: Jan Wieck <jwieck@debis.com>
> To: Eric Lee Green <eric@linux-hw.com>
> Cc: jeff@remapcorp.com, hackers@postgreSQL.org
> Subject: Re: [HACKERS] What about LIMIT in SELECT ?
>
> Eric Lee Green wrote:
> >
> > On Tue, 13 Oct 1998, Jeff Hoffmann wrote:
> > > >I agree completely, LIMIT would be VERY useful in web based apps, which
> > > >is all I run.  It does not matter to me if it is not part of a formal
> > > >standard.  The idea is so common that it is a de facto standard.
> > >
> > > i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
> > > be achieved by declaring a cursor and fetching however many records in the
> > > cursor?  it's a very noticeable improvement when you only want the first 20
> > > out of 500 in a 200k record database, at least.
> >
> > The problem with declaring a cursor vs. the "LIMIT" clause is that the
> > "LIMIT" clause, if used properly by the database engine (along with the
> > database engine using indexes in "ORDER BY" clauses), allows the database
> > engine to short-circuit the tail end of the query. That is, if you have 25
> > names and the last one ends with BEAVIS, the database engine doesn't have
> > to go through the BUTTHEADS and KENNYs etc.
> >
> > Theoretically a cursor is superior to the "LIMIT" clause because you're
> > eventually going to want the B's and K's etc. anyhow -- but only in a
> > stateful environment. In the stateless web environment, a cursor is
> > useless because the connection can close at any time even when you're
> > using "persistent" connections (and of course when the connection closes
> > the cursor closes).
>
>     I'm  missing something. Well it's right that in the stateless
>     web environment a cursor has to be declared  and  closed  for
>     any  single  CGI  call.  But even if you have a LIMIT clause,
>     your CGI must know with which key to start.
>
      It is not a problem for the CGI script to know which key to start with.
      Without LIMIT, on every CGI call the backend will do a *FULL* selection,
      and the cursor helps just in fetching a definite number of rows;
      in principle I can do this with a CGI script. Also, a cursor
      returns data back in  ASCII  format (man l declare) and this requires
      an additional job for the backend to convert data from its intrinsic (binary)
      format. A right implementation of LIMIT offset,number_of_rows could be
      a great win and make postgres a superior free database engine for
      Web applications. Many colleagues of mine used mysql instead of
      postgres just because of the lack of LIMIT. Tatsuo posted a patch
      for set query_limit to 'num'; I just tested it and it seems it
      works fine. Now, we need only the possibility to specify an offset,
      say
         set query_limit to 'offset,num'
      ( Tatsuo, how difficult is it to do this ?)
      and the LIMIT problem will be gone.
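
For reference, the MySQL feature being mirrored here takes an offset and
a row count in one clause (tab and key are hypothetical names):

        SELECT * FROM tab ORDER BY key LIMIT 40,20;  -- skip 40 rows, return the next 20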

      I wonder how many useful patches could be hidden from people  :-).

    Regards,

        Oleg

PS.

    Tatsuo, do you have a patch for 6.3.2 ?
        I can't wait for 6.4 :-)
_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83




Re: [HACKERS] What about LIMIT in SELECT ?

From:
"Thomas G. Lockhart"
Date:
>     I've done some tests and what I found out might be a  bug  in
>     PostgreSQL's  query  optimizer.
>         SELECT * FROM tab ORDER BY key;
>         results in a sort->seqscan - I would have
>         expected an indexscan!

Given that a table _could_ be completely unsorted on disk, it is
probably reasonable to suck the data in for a possible in-memory sort
rather than skipping around the disk to pick up individual tuples via
the index. Don't know if vacuum has a statistic on "orderness"...

>         SELECT * FROM tab WHERE key > 'G' ORDER BY key;
>         results in a sort->indexscan - hmmm.
>     The  last  one  is  the  query  we  would  need  in  the  web
>     environment used over a cursor as in the example  above.  But
>     due  to  the  sort,  the backend selects until the end of the
>     table, sorts them and then returns only  the  first  20  rows
> >     (out of the sort's result).

So you are saying that for this last case the sort was unnecessary? Does
the backend traverse the index in the correct order to guarantee that
the tuples are coming out already sorted? Does a hash index give the
same plan (I would expect a sort->seqscan for a hash index)?

                      - Tom

Re: [HACKERS] What about LIMIT in SELECT ?

From:
jwieck@debis.com (Jan Wieck)
Date:
Oleg Bartunov wrote:
>       It is not a problem for the CGI script to know which key to start with.

    Never  meant that would be a problem. A FORM variable will of
    course do this.

>       Without LIMIT, on every CGI call the backend will do a *FULL* selection,
>       and the cursor helps just in fetching a definite number of rows;
>       in principle I can do this with a CGI script. Also, a cursor
>       returns data back in  ASCII  format (man l declare) and this requires
>       an additional job for the backend to convert data from its intrinsic (binary)
>       format. A right implementation of LIMIT offset,number_of_rows could be
>       a great win and make postgres a superior free database engine for
>       Web applications. Many colleagues of mine used mysql instead of

    That's the point I was missing. The offset!

>       postgres just because of the lack of LIMIT. Tatsuo posted a patch
>       for set query_limit to 'num'; I just tested it and it seems it
>       works fine. Now, we need only the possibility to specify an offset,
>       say
>          set query_limit to 'offset,num'
>       ( Tatsuo, how difficult is it to do this ?)
>       and the LIMIT problem will be gone.

    Think you haven't read my posting completely. Even  with  the
    executor  limit,  the  complete scan into the sort is done by
    the backend.  You need to specify ORDER BY to  get  the  same
    list  again  (without  it  the  offset  doesn't  make sense). But
    currently, ORDER BY forces a sort node into the query plan.

    What the executor limit  tells  is  how  many  rows  will  be
    returned  from  the sorted data, not what goes into the sort.
    Filling the sort and sorting the data consumes the most  time
    of the query's execution.

    I  haven't  looked  at  Tatsuo's  patch  very  closely. But if it
    limits the amount of data going into the sort (on ORDER  BY),
    it  will  break it! The requested ordering could be different
    from what the chosen index might return. The used  index  is
    chosen by the planner upon the qualifications given, not the
    ordering wanted.

    So if you select WHERE b = 1 ORDER BY a, then it will use  an
    index on attribute b to match the qualification. The complete
    result of that index scan goes into the sort to  get  ordered
    by  a. If the executor limit now stops filling the sort after the
    limit is exceeded, only the same tuples will go into the sort
    every  time.  But  they have nothing to do with the requested
    order by a.
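
A sketch of this failure mode, using hypothetical table and index names:

        -- index on b only; nothing about this scan orders the rows by a
        CREATE INDEX tab_b_idx ON tab (b);

        SELECT * FROM tab WHERE b = 1 ORDER BY a;

        -- if the executor stopped feeding the sort after, say, 20 tuples,
        -- it would always keep the 20 rows with b = 1 that come first in
        -- the b index -- which are not the 20 smallest values of a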

    What LIMIT first needs is  a  planner  enhancement.  In  file
    backend/optimizer/plan/planner.c  line 284 it must be checked
    if the actual plan is an indexscan, if the indexed attributes
    are  all  the same as those in the given sort clause and that
    the requested sort order (operator) is  what  the  index
    will  return.   If  that  all matches, it can ignore the sort
    clause and return the index scan itself.

    Second enhancement must be the handling of  the  offset.   In
    the  executor,  the  index scan must skip offset index tuples
    before returning the first. But  NOT  if  the  plan  isn't  a
    1-table-index-scan.  In that case the result tuples (from the
    topmost unique/join/whatever node) have to be skipped.

    With these enhancements,  the  index  tuples  to  be  skipped
    (offset)  will still be scanned, but not the data tuples they
    point to. Index scanning might be somewhat faster.

    This all will only speedup simple 1-table-queries,  no  joins
    or  if  the requested order isn't that what the index exactly
    returns.

    Anyway, I'll take a look if I can change the planner to  omit
    the  sort  if  the tests described above are true. I think it
    would be good anyway.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

Re: [HACKERS] What about LIMIT in SELECT ?

From:
jwieck@debis.com (Jan Wieck)
Date:
>
> >         SELECT * FROM tab WHERE key > 'G' ORDER BY key;
> >         results in a sort->indexscan - hmmm.
> >     The  last  one  is  the  query  we  would  need  in  the  web
> >     environment used over a cursor as in the example  above.  But
> >     due  to  the  sort,  the backend selects until the end of the
> >     table, sorts them and then returns only  the  first  20  rows
> >     (out of the sort's result).
>
> So you are saying that for this last case the sort was unnecessary? Does
> the backend traverse the index in the correct order to guarantee that
> the tuples are coming out already sorted? Does a hash index give the
> same plan (I would expect a sort->seqscan for a hash index)?

    Good  point!  As  far as I can see, the planner chooses index
    usage only depending on the WHERE clause.  A  hash  index  is
    only  usable  when  the  given  qualification  uses  = on the
    indexed attribute(s).

    If the sortClause exactly matches the indexed  attributes  of
    the  ONE used btree index and all operators request ascending
    order, I think the index  scan  already  returns  the  correct
    order. Who knows definitely?

    Addition  to  my  last  posting: ... and if the index scan is
    using a btree index ...


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
Bruce Momjian
Date:
> >> + #ifdef QUERY_LIMIT
> >> + static bool
> >> + parse_query_limit(const char *value)
> >> + {
> >> +   int32 limit;
> >> +
> >> +   if (value == NULL) {
> >> +     reset_query_limit();
> >> +     return(TRUE);
> >> +   }
> >
> >Any idea how 'value' could be null?  I could not see how that would
> >happen.
>
> Not sure. I just followed what the other set commands are doing.
>
> >I can see how GEQO could have a NULL when you say ON, and no
> >value.  Same with rplans.
>
> Hmm... I think in that case, 'value' would be 'ON', not NULL. Right?

Yes, I think so, value would be ON.  I will look into it.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] Alternative to LIMIT in SELECT ?

From:
Bruce Momjian
Date:
> > >> +   if (value == NULL) {
> > >> +     reset_query_limit();
> > >> +     return(TRUE);
> > >> +   }
> > >Any idea how 'value' could be null?  I could not see how that would
> > >happen.
> > Not sure. I just followed what the other set commands are doing.
>
> This is how RESET is implemented.

Oh.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] What about LIMIT in SELECT ?

From:
Terry Mackintosh
Date:
On Tue, 13 Oct 1998, Jeff Hoffmann wrote:

> >Hi, my 2 cents...
> >
> >I agree completely, LIMIT would be VERY useful in web based apps, which
> >is all I run.  It does not matter to me if it is not part of a formal
> >standard.  The idea is so common that it is a de facto standard.
>
> i'm not familiar with mysql and using "LIMIT" but wouldn't this same effect
> be achieved by declaring a cursor and fetching however many records in the
> cursor?  it's a very noticeable improvement when you only want the first 20
> out of 500 in a 200k record database, at least.

Yes, while this is an improvement, it still has to do the entire query.
It would be nice if the query could be terminated after a designated number
of rows were found, thus freeing system resources that are otherwise
consumed.
I have seen web users run ridiculous queries, like a search for the
letter 'a' that happens to be a substring search; now the box goes
apeshit for 5 or 10 minutes while it basically gets the whole db as the
search result.  All this before you can do a 'FETCH' (as I understand
FETCH; I will need to read up on it).
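
The sort of runaway query meant here might be the following (articles and
body are hypothetical names); the leading wildcard defeats any index, so
the whole table is scanned before the first row can be fetched:

        SELECT * FROM articles WHERE body LIKE '%a%';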

Note that I do not have any databases that large on my box; I was thinking
back to my VFP/NT experiences.

Have a great day
Terry Mackintosh <terry@terrym.com>          http://www.terrym.com
sysadmin/owner  Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Bruce Momjian
Date:
> >     I've done some tests and what I found out might be a  bug  in
> >     PostgreSQL's  query  optimizer.
> >         SELECT * FROM tab ORDER BY key;
> >         results in a sort->seqscan - I would have
> >         expected an indexscan!
>
> Given that a table _could_ be completely unsorted on disk, it is
> probably reasonable to suck the data in for a possible in-memory sort
> rather than skipping around the disk to pick up individual tuples via
> the index. Don't know if vacuum has a statistic on "orderness"...

Thomas is correct on this.  Vadim has run some tests, and with our
optimized psort() code, the in-memory sort is often faster than using
the index to get the tuple, because you are jumping all over the drive.
I don't remember, but obviously there is a break-even point where
getting X rows using the index on a table of Y rows is faster, but
getting X+1 rows on a table of Y rows is faster by getting all the rows
sequentially and doing the sort.

You would have to pick only certain queries (no joins, index matches
ORDER BY), take the number of rows requested, and the number of rows
selected, and figure out if it is faster to use the index, or a
sequential scan and do the ORDER BY yourself.


Add to this the OFFSET capability.  I am not sure how you are going to
get into the index and start at the n-th entry, unless perhaps you just
sequentially scan the index.

In fact, many queries just get columns already indexed, and we could just
pull the data right out of the index.

I have added this to the TODO list:

    * Pull requested data directly from indexes, bypassing heap data

I think this has to be post-6.4 work, but I think we need to work in
this direction.  I am holding off any cnfify fixes for post-6.4, but a
6.4.1 performance release certainly is possible.


But, you are correct that in certain cases where an index is already being
used in a query, you could just skip the sort IF you used the index to
get the rows from the base table.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] What about LIMIT in SELECT ?

From:
Terry Mackintosh
Date:
On Tue, 13 Oct 1998, Bruce Momjian wrote:

> What we could do is _if_ there is only one table (no joins), and an index
> exists that matches the ORDER BY, we could use the index to
> short-circuit the query.
>
> I have added this item to the TODO list:
>
> * Allow LIMIT ability on single-table queries that have no ORDER BY or
>         a matching index
>
> This looks do-able, and a real win.  Would this make web applications
> happier?  If there is an ORDER BY and no index, or a join, I can't
> figure out how we would short-circuit the query.
>
Yes, this would do for most of my apps.
It may just be my lack of sophistication, but I find that most web apps
are very simple in nature/table layout, and thus queries are often on only
a single table.

Thanks
Terry Mackintosh <terry@terrym.com>          http://www.terrym.com
sysadmin/owner  Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Terry Mackintosh
Date:
On Tue, 13 Oct 1998, Bruce Momjian wrote:

> My guess is that in a web application the transaction is started for
> every new page, so you can't have transactions spanning SQL sessions.
>
> LIMIT theoretically would allow you to start up where you left off.

************           EXACTLY !-)
Plus, it could also be used to limit bogus runaway queries.

Terry Mackintosh <terry@terrym.com>          http://www.terrym.com
sysadmin/owner  Please! No MIME encoded or HTML mail, unless needed.

Proudly powered by R H Linux 4.2, Apache 1.3, PHP 3, PostgreSQL 6.3
-------------------------------------------------------------------
Success Is A Choice ... book by Rick Patino, get it, read it!


Re: [HACKERS] What about LIMIT in SELECT ?

From:
jwieck@debis.com (Jan Wieck)
Date:
> But, you are correct that in certain cases where an index is already being
> used in a query, you could just skip the sort IF you used the index to
> get the rows from the base table.

    Especially in the case where

        SELECT ... WHERE key > 'val' ORDER BY key;

    creates  a Sort->IndexScan plan. The index scan already jumps
    around on the disc to collect the sort's input  and  the  sort
    finally returns exactly the same output (if the used index is
    only on key).

    And this is the case for  large  tables.  The  planner  first
    decides  to  use  an  index  scan due to the WHERE clause and
    later it notices the ORDER BY clause and creates a sort  over
    the scan.

    I'm  actually  hacking  around on it to see what happens if I
    suppress the sort node in some cases.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

RE: [HACKERS] What about LIMIT in SELECT ?

From:
"Taral"
Date:
> Thomas is correct on this.  Vadim has run some tests, and with our
> optimized psort() code, the in-memory sort is often faster than using
> the index to get the tuple, because you are jumping all over the drive.
> I don't remember, but obviously there is a break-even point where
> getting X rows using the index on a table of Y rows is faster, but
> getting X+1 rows on a table of Y rows is faster by getting all the rows
> sequentially and doing the sort.
>
> You would have to pick only certain queries (no joins, index matches
> ORDER BY), take the number of rows requested, and the number of rows
> selected, and figure out if it is faster to use the index, or a
> sequential scan and do the ORDER BY yourself.

Since a sort loads the data into memory anyway, how about speeding up the
sort by using the index? Or does that take up too much memory? (approx 40%
more than the data alone, I think)

Taral


Re: [HACKERS] What about LIMIT in SELECT ?

From:
Bruce Momjian
Date:
> Thomas is correct on this.  Vadim has run some tests, and with our
> optimized psort() code, the in-memory sort is often faster than using
> the index to get the tuple, because you are jumping all over the drive.
> I don't remember, but obviously there is a break-even point where
> getting X rows using the index on a table of Y rows is faster, but
> getting X+1 rows on a table of Y rows is faster by getting all the rows
> sequentially and doing the sort.
>
> You would have to pick only certain queries (no joins, index matches
> ORDER BY), take the number of rows requested, and the number of rows
> selected, and figure out if it is faster to use the index, or a
> sequential scan and do the ORDER BY yourself.
>
> Add to this the OFFSET capability.  I am not sure how you are going to
> get into the index and start at the n-th entry, unless perhaps you just
> sequentially scan the index.
>
> In fact, many queries just get columns already indexed, and we could just
> pull the data right out of the index.
>
> I have added this to the TODO list:
>
>     * Pull requested data directly from indexes, bypassing heap data
>
> I think this has to be post-6.4 work, but I think we need to work in
> this direction.  I am holding off any cnfify fixes for post-6.4, but a
> 6.4.1 performance release certainly is possible.
>
>
> But, you are correct that in certain cases where an index is already being
> used in a query, you could just skip the sort IF you used the index to
> get the rows from the base table.

I have had more time to think about this.  Basically, for pre-sorted
data, our psort code is very fast, because it does not need to sort
anything.  It just moves the rows in and out of the sort memory.  Yes,
it could be removed in some cases, and probably should be, but it is not
going to produce great speedups.

The more general case I will describe below.

First, let's look at a normal query:

    SELECT *
    FROM tab
    ORDER BY col1

This is not going to use an index, and probably should not because it is
faster for large tables to sort them in memory, rather than moving all
over the disk.  For small tables, if the entire table fits in the buffer
cache, it may be faster to use the index, but on a small table the sort
doesn't take very long either, and the buffer cache effectiveness is
affected by other backends using it, so it may be better not to count on
it for a speedup.

However, if you only want the first 10 rows, that is a different story.
We pull all the rows into the backend, sort them, then return 10 rows.
The query, if we could do it, should be written as:

    SELECT *
    FROM tab
    WHERE col1 < some_unknown_value
    ORDER BY col1

In this case, the optimizer looks at the column statistics, and properly
uses an index to pull only a small subset of the table.  This is the
type of behavior people want for queries returning only a few values.

But, unfortunately, we don't know that mystery value.

Now, everyone agrees we need an index matching the ORDER BY to make this
query quick, but we don't know that mystery value, so currently we
execute the whole query, and do a fetch.

What I am now thinking is that maybe we need a way to walk around that
index.  Someone months ago asked how to do that, and we told him he
couldn't, because this not a C-ISAM/dbm type database.  However, if we
could somehow pass into the query the index location we want to start
at, and how many rows we need, that would solve our problem, and perhaps
even allow joined queries to work, assuming the table in the ORDER BY is
in an outer join loop.

    SELECT *
    FROM tab
    WHERE col1 < some_unknown_value
    ORDER BY col1
    USING INDEX tab_idx(452) COUNT 100

where 452 is the 452nd index entry, and COUNT is the number of index rows
you want to process.  The query may return more or fewer than 100 rows if
there is a join and it joins to zero or more than one row in the joined
table, but this seems like perhaps a good way to go at it.  We need to
do it this way because if a single index row returns 4 result rows, and
only two of the four rows fit in the number of rows returned as set by the
user, it is hard to re-start the query at the proper point: you would
have to process the index rows a second time and return just part of
the result, and that is hard.

If the index changes, or rows are added, the results are going to be
unreliable, but that is probably going to be true of any state-less
implementation we can devise.

I think this may be fairly easy to implement.  We could sequentially scan
the index to get to the 452nd row.  That is going to be quick.  We can
pass the 452 into the btree index code, so only a certain range of index
tuples are returned, and the system believes it has processed the entire
query, while we know it hasn't.  This doesn't really work with hash, so we
will not allow it for those indexes.

To make it really easy, we could implement it as a 'SET' command, so we
don't actually have it as part of the query and have to pass it around
through all the modules.  You would do the proper 'SET' before running
the query.  The optimizer would look at the 'SET' value to force index use.

    SET INDEX TO tab_idx START 452 COUNT 100

or

    SET INDEX TO tab_idx FROM 452 COUNT 451

There would have to be some way to signal that the end of the index had
been reached, because returning zero rows is not enough of a guarantee
in a joined SELECT.
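
As a usage sketch (proposed syntax only; tab, tab_idx and col1 are
placeholder names):

    SET INDEX TO tab_idx START 0 COUNT 100
    SELECT * FROM tab ORDER BY col1;

    -- the next page, possibly from a brand-new connection
    SET INDEX TO tab_idx START 100 COUNT 100
    SELECT * FROM tab ORDER BY col1;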

Comments?

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
> > Thomas is correct on this.  Vadim has run some tests, and with our
> > optimized psort() code, the in-memory sort is often faster than using
> > the index to get the tuple, because you are jumping all over the drive.
> > I don't remember, but obviously there is a break-even point where
> > getting X rows using the index on a table of Y rows is faster, but
> > getting X+1 rows on a table of Y rows is faster by getting all the rows
> > sequentially, and doing the sort.
> >
> > You would have to pick only certain queries (no joins, index matches
> > ORDER BY), take the number of rows requested, and the number of rows
> > selected, and figure out if it is faster to use the index, or a
> > sequential scan and do the ORDER BY yourself.
>
> Since a sort loads the data into memory anyway, how about speeding up the
> sort by using the index? Or does that take up too much memory? (approx 40%
> more than the data alone, I think)

Not sure you can do that.  The index points to heap tuples/tids, and
though there are tids in the rows, you can't access them as tids in
memory.


--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] What about LIMIT in SELECT ?

От
jwieck@debis.com (Jan Wieck)
Дата:
> I have had more time to think about this.  Basically, for pre-sorted
> data, our psort code is very fast, because it does not need to sort
> anything.  It just moves the rows in and out of the sort memory.  Yes,
> it could be removed in some cases, and probably should be, but it is not
> going to produce great speedups.

    And I got the time to hack around with this.

    I  hacked  in  a little check into the planner, that compares
    the sortClause against the key field list of  an  index  scan
    and  just  suppresses  the sort node if it exactly matches and
    all sort operators are "<".

    I tested with a 10k row table where key is a text field.  The
    base query is a

        SELECT ... WHERE key > 'val' ORDER BY key;

    The 'val' used is always a key that is close to the first of
    all keys in the table ('' on the first  query  and  the  last
    selected value on subsequent ones).

    Scenario  1  (S1)  uses exactly the above query but processes
    only the first 20 rows  from  the  result  buffer.  Thus  the
    frontend receives nearly the whole table.

    Scenario  2  (S2)  uses a cursor and FETCH 20, but closes the
    cursor and creates a new one for  the  next  selection  (only
    with another 'val'), as would occur in a web application.
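
    In SQL terms, each S2 page looks roughly like this (a sketch;
    'val' is the last key seen on the previous page):

        BEGIN;
        DECLARE c CURSOR FOR
            SELECT ... WHERE key > 'val' ORDER BY key;
        FETCH 20 IN c;
        CLOSE c;
        END;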

    If  there  is  no index on key, the backend will always do a
    Sort->SeqScan, and due  to  the  'val'  close  to  the  lowest
    existing  key  nearly all tuples get scanned and put into the
    sort. S1 here runs about 10 seconds and S2 about  6  seconds.
    The  speedup  in  S2  results  from  the  reduced overhead of
    not sending unwanted tuples to the frontend.

    Now with a btree index  on  key  and  an  unpatched  backend.
    Produced  plan  is  always  a  Sort->IndexScan.   S1 needs 16
    seconds and S2 needs 12 seconds. Again nearly all data is put
    into  the  sort but this time over the index scan and that is
    slower.

    Last with the btree index on key  and  the  patched  backend.
    This  time the plan is a plain IndexScan because the ORDER BY
    clause exactly matches the sort order of the  chosen  index.
    S1  needs  13  seconds  and  S2 less than 0.2!  This dramatic
    speedup comes from the fact, that this time the index scan is
    the  toplevel  executor  node and the executor run is stopped
    after 20 tuples have been selected.

    Analysis of the above timings:

    If there is an ORDER BY clause, using an index  scan  is  the
    clever  way  if  the  indexqual  dramatically reduces the
    amount of data selected and sorted.   I  think  this  is  the
    normal case (who really selects nearly all rows from a 5M row
    table?). So choosing the index path  is  correct.  This  will
    hurt if someone really selects most of the rows and the index
    scan jumps all over the disk.  But here the programmer should use
    an  unqualified  query  to  perform  a  seqscan  and  do  the
    qualification in the frontend application.

    The speedup for the cursor/fetch scenario  is  so  impressive
    that  I'll  create  a  post-6.4 patch. I don't want it in 6.4
    because there is absolutely no query in the whole  regression
    test  where  it  suppresses  the  sort  node.   So  we  have
    absolutely no check that it doesn't break anything.

    For a web application that can use a unique  key  to  select
    the next batch of rows, it will be a big win.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

Re: [HACKERS] What about LIMIT in SELECT ?

От
Tatsuo Ishii
Дата:
>>       postgres just because of lacking LIMIT. Tatsuo posted a patch
>>       for set query_limit to 'num', I just tested it and it seems to
>>       work fine. Now, we only need the possibility to specify an offset,
>>       say
>>          set query_limit to 'offset,num'
>>       ( Tatsuo, how difficult is this to do? )
>>       and the LIMIT problem will be gone.
>
>    Think you haven't read my posting completely. Even  with  the
>    executor  limit,  the  complete scan into the sort is done by
>    the backend.  You need to specify ORDER BY to  get  the  same
>    list  again  (without  it  the  offset  doesn't  make sense). But
>    currently, ORDER BY forces a sort node into the query plan.

I think we have understood your point. set query_limit is just an
easy alternative to using cursor and fetch.

>    I  haven't  looked  at  Tatsuo's  patch  very well. But if it
>    limits the amount of data going into the sort (on ORDER  BY),
>    it  will  break it! The requested ordering could be different
>    from what the chosen index might return. The used  index  is
>    chosen by the planner upon the qualifications given, not the
>    ordering wanted.

I think it limits the final result. When query_limit is set,
the arg "numberTuples" of ExecutePlan() is set to it instead of 0
(this means no limit).

Talking about "offset," it shouldn't be very difficult. I guess all we
have to do is adding a new arg "offset" to ExecutePlan() then making
obvious modifications. (and of course we have to modify set
query_limit syntax but it's trivial)
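
As a sketch, the syntax proposed in the quoted posting above might then
be used like this (hypothetical, not implemented; tab and col1 are
placeholders):

    set query_limit to '100,20';  -- skip 100 rows, then return 20
    select * from tab order by col1;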

However, before going ahead, I would like to ask other hackers about
this direction. This might be convenient for some users, but the
essential performance issue would still remain. In other words, this is a
short-term solution, not an intrinsic one, IMHO.
--
Tatsuo Ishii
t-ishii@sra.co.jp

Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
> > I have had more time to think about this.  Basically, for pre-sorted
> > data, our psort code is very fast, because it does not need to sort
> > anything.  It just moves the rows in and out of the sort memory.  Yes,
> > it could be removed in some cases, and probably should be, but it is not
> > going to produce great speedups.
>
>     And I got the time to hack around with this.
>
>     I  hacked  in  a little check into the planner, that compares
>     the sortClause against the key field list of  an  index  scan
>     and  just  suppresses  the sort node if it exactly matchs and
>     all sort operators are "<".
>
>     I tested with a 10k row table where key is a text field.  The
>     base query is a
>
>         SELECT ... WHERE key > 'val' ORDER BY key;
>
>     The 'val' used is always a key that is close to the first of
>     all keys in the table ('' on the first  query  and  the  last
>     selected value on subsequent ones).

This is good stuff.  I want to think about it for a day.  Sounds very
promising.


--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] What about LIMIT in SELECT ?

От
jwieck@debis.com (Jan Wieck)
Дата:
Tatsuo Ishii wrote:

> I think we have understood your point. set query_limit is just an
> easy alternative to using cursor and fetch.
>
> >    I  haven't  looked  at  Tatsuo's  patch  very well. But if it
> >    limits the amount of data going into the sort (on ORDER  BY),
> >    it  will  break it! The requested ordering could be different
> >    from what the chosen index might return. The used  index  is
> >    chosen by the planner upon the qualifications given, not the
> >    ordering wanted.
>
> I think it limits the final result. When query_limit is set,
> the arg "numberTuples" of ExecutePlan() is set to it instead of 0
> (this means no limit).
>
> Talking about "offset," it shouldn't be very difficult. I guess all we
> have to do is adding a new arg "offset" to ExecutePlan() then making
> obvious modifications. (and of course we have to modify set
> query_limit syntax but it's trivial)

    The offset could become

        FETCH n IN cursor [OFFSET n];

    and

        SELECT ... [LIMIT offset,count];
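
    For example (proposed syntax only; c, tab and col1 are placeholder
    names), rows 101 through 120 could then be requested either way:

        FETCH 20 IN c OFFSET 100;

        SELECT * FROM tab ORDER BY col1 LIMIT 100,20;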

    The  FETCH command already calls ExecutorRun() with the given
    count (the tuple limit). Telling it the offset too is  really
    simple.   And  ExecutorRun()  could  check  if  the  toplevel
    executor node is an index scan. Skipping  tuples  during  the
    index  scan  requires  that  all  qualifications  are in the
    indexqual, so any tuple returned by it will become a  final
    result  row  (as it would be in the simple 1-table queries we
    discussed).  If  that  isn't  the  case,  the  executor  must
    fall back to skipping the final result tuples, and that happens
    after any sort/merge of the complete  result  set has been
    processed.  That would still reduce communication to the client
    and the memory required there to buffer the result  set  (not  a
    bad  thing either).

    ProcessQueryDesc()  in tcop/pquery.c also calls ExecutorRun(),
    but with a constant 0 tuple count. Having offset and count in
    the  parsetree  would  make it work without any state variables
    or a SET command. And it's the only clean way to restrict LIMIT
    to SELECT  queries.  Any LIMIT thrown into ExecutorRun()  from
    another place could badly hurt the rewrite  system.  Remember
    that   non-instead   actions   on   insert/update/delete  are
    processed before the  original  query!  And  what  about  SQL
    functions that get processed during the evaluation of another
    query (a view using an SQL function for count(*))?

    It would be a little better to make the LIMIT values able  to
    be  parameter  nodes. C or PL functions use the prepared plan
    feature  of  the  SPI  manager   for   performance   reasons.
    Especially  the  offset  value  might  need  to  be  a
    parameter there that the executor has to pick  out  first.   If we
    change  the  count  argument of ExecutorRun() to a List *limit,
    this one could be NIL (to mean  the  old  0  count,  0  offset
    behaviour)  or a list of two elements that both can be either
    a Const or a Param of type int4.  Easy for  the  executor  to
    evaluate.

    The   only   places   where   ExecutorRun()   is  called  are
    tcop/pquery.c  (queries  from  frontend),  commands/command.c
    (FETCH  command),  executor/functions.c  (SQL  functions) and
    executor/spi.c (SPI manager). So it is  easy  to  change  the
    call interface too.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

Re: [HACKERS] What about LIMIT in SELECT ?

От
Oleg Bartunov
Дата:
This is a little bit off-topic:
I did some timings with the latest CVS on my real database
(all output redirected to /dev/null); the table contains 8798 records
and 31 columns, and the ORDER BY keys have indices.

1.select count(*) from work_flats;
0.02user 0.00system 0:00.18elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+21minor)pagefaults 0swaps

2.select * from work_flats order by rooms, metro_id;
2.35user 0.25system 0:10.11elapsed 25%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+2799minor)pagefaults 0swaps

3.set query_limit to '150';
SET VARIABLE
select * from work_flats order by rooms, metro_id;
0.06user 0.00system 0:02.75elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+67minor)pagefaults 0swaps

4.begin;
declare tt cursor  for
select * from work_flats order by rooms, metro_id;
fetch 150 in tt;
end;
0.05user 0.01system 0:02.76elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (131major+67minor)pagefaults 0swaps

As you can see, the timings for query_limit and cursor are very similar;
I didn't expect this. So, in principle, an enhanced version of fetch
(with offset) would cover all we need from LIMIT, but query_limit would
still be useful, for example to restrict the load on the server.
Will all the enhancements you discussed go into 6.4?
I'm really interested in testing this stuff because I'm beginning a new
project and everything we discussed here is badly needed.


    Regards,

     Oleg



On Thu, 15 Oct 1998, Jan Wieck wrote:

> Date: Thu, 15 Oct 1998 14:23:43 +0200 (MET DST)
> From: Jan Wieck <jwieck@debis.com>
> To: t-ishii@sra.co.jp
> Cc: jwieck@debis.com, oleg@sai.msu.su, hackers@postgreSQL.org
> Subject: Re: [HACKERS] What about LIMIT in SELECT ?
>
> Tatsuo Ishii wrote:
>
> > I think we have understood your point. set query_limit is just an
> > easy alternative to using cursor and fetch.
> >
> > >    I  haven't  looked  at  Tatsuo's  patch  very well. But if it
> > >    limits the amount of data going into the sort (on ORDER  BY),
> > >    it  will  break it! The requested ordering could be different
> > >    from what the chosen index might return. The used  index  is
> > >    chosen by the planner upon the qualifications given, not the
> > >    ordering wanted.
> >
> > I think it limits the final result. When query_limit is set,
> > the arg "numberTuples" of ExecutePlan() is set to it instead of 0
> > (this means no limit).
> >
> > Talking about "offset," it shouldn't be very difficult. I guess all we
> > have to do is adding a new arg "offset" to ExecutePlan() then making
> > obvious modifications. (and of course we have to modify set
> > query_limit syntax but it's trivial)
>
>     The offset could become
>
>         FETCH n IN cursor [OFFSET n];
>
>     and
>
>         SELECT ... [LIMIT offset,count];
>
>     The  FETCH command already calls ExecutorRun() with the given
>     count (the tuple limit). Telling it the offset too is  really
>     simple.   And  ExecutorRun()  could  check  if  the  toplevel
>     executor node is an index scan. Skipping  tuples  during  the
>     index  scan  requires  that  all  qualifications  are in the
>     indexqual, so any tuple returned by it will become a  final
>     result  row  (as it would be in the simple 1-table queries we
>     discussed).  If  that  isn't  the  case,  the  executor  must
>     fall back to skipping the final result tuples, and that happens
>     after any sort/merge of the complete  result  set has been
>     processed.  That would still reduce communication to the client
>     and the memory required there to buffer the result  set  (not  a
>     bad  thing either).
>
>     ProcessQueryDesc()  in tcop/pquery.c also calls ExecutorRun(),
>     but with a constant 0 tuple count. Having offset and count in
>     the  parsetree  would  make it work without any state variables
>     or a SET command. And it's the only clean way to restrict LIMIT
>     to SELECT  queries.  Any LIMIT thrown into ExecutorRun()  from
>     another place could badly hurt the rewrite  system.  Remember
>     that   non-instead   actions   on   insert/update/delete  are
>     processed before the  original  query!  And  what  about  SQL
>     functions that get processed during the evaluation of another
>     query (a view using an SQL function for count(*))?
>
>     It would be a little better to make the LIMIT values able  to
>     be  parameter  nodes. C or PL functions use the prepared plan
>     feature  of  the  SPI  manager   for   performance   reasons.
>     Especially  the  offset  value  might  need  to  be  a
>     parameter there that the executor has to pick  out  first.   If we
>     change  the  count  argument of ExecutorRun() to a List *limit,
>     this one could be NIL (to mean  the  old  0  count,  0  offset
>     behaviour)  or a list of two elements that both can be either
>     a Const or a Param of type int4.  Easy for  the  executor  to
>     evaluate.
>
>     The   only   places   where   ExecutorRun()   is  called  are
>     tcop/pquery.c  (queries  from  frontend),  commands/command.c
>     (FETCH  command),  executor/functions.c  (SQL  functions) and
>     executor/spi.c (SPI manager). So it is  easy  to  change  the
>     call interface too.
>
>
> Jan
>
> --
>
> #======================================================================#
> # It's easier to get forgiveness for being wrong than for being right. #
> # Let's break this rule - forgive me.                                  #
> #======================================== jwieck@debis.com (Jan Wieck) #
>
>

_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83


Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
OK, I have had my day of thinking, and will address this specific
posting first, because it is the most fundamental concerning the future
direction of the optimization.

>
>     And I got the time to hack around with this.
>
>     I  hacked  in  a little check into the planner, that compares
>     the sortClause against the key field list of  an  index  scan
>     and  just  suppresses  the sort node if it exactly matches and
>     all sort operators are "<".
>
>     I tested with a 10k row table where key is a text field.  The
>     base query is a
>
>         SELECT ... WHERE key > 'val' ORDER BY key;
>
>     The 'val' used is always a key that is close to the first of
>     all keys in the table ('' on the first  query  and  the  last
>     selected value on subsequent ones).
>
>     Scenario  1  (S1)  uses exactly the above query but processes
>     only the first 20 rows  from  the  result  buffer.  Thus  the
>     frontend receives nearly the whole table.

OK.

>
>     Scenario  2  (S2)  uses a cursor and FETCH 20, but closes the
>     cursor and creates a new one for  the  next  selection  (only
>     with another 'val'), as would occur in a web application.
>
>     If  there  is  no index on key, the backend will always do a
>     Sort->SeqScan, and due  to  the  'val'  close  to  the  lowest
>     existing  key  nearly all tuples get scanned and put into the
>     sort. S1 here runs about 10 seconds and S2 about  6  seconds.
>     The  speedup  in  S2  results  from  the  reduced overhead of
>     not sending unwanted tuples to the frontend.

Makes sense.  All rows are processed, but not sent to the client.

>
>     Now with a btree index  on  key  and  an  unpatched  backend.
>     Produced  plan  is  always  a  Sort->IndexScan.   S1 needs 16
>     seconds and S2 needs 12 seconds. Again nearly all data is put
>     into  the  sort but this time over the index scan and that is
>     slower.

VACUUM ANALYZE could affect this.  Because it had no stats, it thought
index use would be faster, but in fact because 'val' was near the lowest
value, it was selecting 90% of the table, and would have been better with
a sequential scan.  pg_statistic's low/high values for a column could
have told that to the optimizer.

I know the good part of the posting is coming.

>     Last with the btree index on key  and  the  patched  backend.
>     This  time the plan is a plain IndexScan because the ORDER BY
>     clause exactly matches the sort order of the  chosen  index.
>     S1  needs  13  seconds  and  S2 less than 0.2!  This dramatic
>     speedup comes from the fact, that this time the index scan is
>     the  toplevel  executor  node and the executor run is stopped
>     after 20 tuples have been selected.

OK, seems like in the S1 case, the use of the psort/ORDER BY code on top
of the index was taking an extra 3 seconds, which is 23%.  That is a
lot more than I thought for the psort code, and shows we could gain a
lot by removing unneeded sorts from queries that are already using
matching indexes.

Just for clarity, added to TODO.  I think everyone is clear on this one,
and its magnitude is a surprise to me:

  * Prevent psort() usage when query already using index matching ORDER BY


>     Analysis of the above timings:
>
>     If there is an ORDER BY clause, using an index  scan  is  the
>     clever  way  if  the  indexqual  dramatically reduces the
>     amount of data selected and sorted.   I  think  this  is  the
>     normal case (who really selects nearly all rows from a 5M row
>     table?). So choosing the index path  is  correct.  This  will
>     hurt if someone really selects most of the rows and the index
>     scan jumps all over the disk.  But here the programmer should use
>     an  unqualified  query  to  perform  a  seqscan  and  do  the
>     qualification in the frontend application.

Fortunately, the optimizer already does the index selection for us, and
guesses pretty well if the index or sequential scan is better.  Once we
implement the above removal of psort(), we will have to change the
timings because now you have to compare index scan against sequential
scan AND psort(), because in the index scan situation, you don't need
the psort(), assuming the ORDER BY matches the index exactly.

>     The speedup for the cursor/fetch scenario  is  so  impressive
>     that  I'll  create  a  post-6.4 patch. I don't want it in 6.4
>     because there is absolutely no query in the whole  regression
>     test  where  it  suppresses  the  sort  node.   So  we  have
>     absolutely no check that it doesn't break anything.
>
>     For a web application that can use a unique  key  to  select
>     the next batch of rows, it will be a big win.

OK, I think the reason the regression test did not show your code being
used is important.

First, most of the tables are small in the regression test, so sequential
scans are faster.  Second, most queries using indexes are either joins,
which do the entire table, or equality tests, like col = 3, where there
is no matching ORDER BY because all the col values are 3.  Again, your
code can't help with these.

The only regression-type code that would use it would be a 'col > 3'
qualification with a col ORDER BY, and there aren't many of those.
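
In other words, something like this hypothetical query:

    SELECT * FROM tab WHERE col > 3 ORDER BY col;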

However, if we think of the actual application you are addressing, it is
a major win.  If we are going after only one row of the index, it is
fast.  If we are going after the entire table, it is faster to
sequential scan and psort().  Your big win is with the partial queries,
where you end up doing a full sequential scan or index scan, then an
ORDER BY, while you really only need a few rows from the query, and if
you deal directly with the index, you can prevent many rows from being
processed.  It is the ability to skip processing those extra rows that
makes it a big win, not so much the removal of the ORDER BY, though that
helps too.

Your solution really is tailored for this 'partial' query application,
and I think it is a big need for certain applications that can't use
cursors, like web apps.  Most other apps have long-time connections to
the database, and are better off with cursors.

I did profiling to improve startup time, because the database
requirements of web apps are different from normal db apps, and we have
to adjust to that.

So, to reiterate, full queries do not benefit as much from the new
code, because sequential scan/psort is faster, or because the index only
retrieves a small number of rows because the qualification of values is
very specific.

Those open-ended, 'give me the rows from 100 to 199' queries really need
your modifications.

OK, we have QUERY_LIMIT, and that allows us to throw any query at the
system, and it will return that many of the first rows for the ORDER BY.
No fancy stuff required.  If we can get a matching index, we may be able
to remove the requirement of scanning all the rows (with Jan's patch),
and that is a big win.  If not, we at least prevent the rows from being
returned to the client.

However, there is the OFFSET issue.  This is really a case where the
user wants to _restart_ the query where they left off.  That is a
different problem.  All of a sudden, we need to evaluate more of the
query, and return a segment from the middle of the result set.

I think we need to decide how to handle such a restart.  Do we
re-evaluate the entire query, skipping all the rows up to OFFSET, and
return the number of rows they requested after OFFSET?  I would think we
don't want to do that, do we?  It would be much easier to code, though.
If it is a single table, skipping forward has to be done anyway, because
we can't just _jump_ to the 100th entry in the index, unless we pass some
_tid_ to the user and expect them to pass that back to restart the query.
I don't think we want to do that.  It is ugly, and the row may have
moved since we started.  So, for a single table, adding a QUERY_OFFSET
would do exactly what we need, with Jan's patches.
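
A sketch of such a restart, where query_offset is hypothetical, modeled
on the existing query_limit variable (tab and col1 are placeholders):

    set query_limit to '100';     -- return 100 rows
    set query_offset to '100';    -- hypothetical: skip the first 100
    select * from tab order by col1;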

For a joined query, I think you will have to do the entire _join_ before
returning anything.

You can't just process all the joins up to the OFFSET location, and you
can't just jump to the 100th index location, because you don't know that
the 100th index location produced the 100th result just returned to the
user.  You have to process the whole query, because with the join you
don't know which data row from each table is going to make which entry
in the final result.  If you are really crafty, and the ORDER BY table is
in the outer part of the join loop, you could start processing the table
that is part of the outer loop in _index_ order, because you know that
the rows processed in index order are going to produce the output in
result order.  You then could process and throw away the results up to
offset, and generate the needed rows and stop.

The other way of doing it is to specify a query limit based on specific
index entries, so you say I want the query returned by the first 20
index entries matching the ORDER BY, or entries 100-199, and the query
is limited to using only those entries in the index.  In that case,
though, in joins, you could return more or fewer rows in the result
depending on the other tables, and that may be unacceptable.  However,
for this case, the advantage is that you don't need to process the rows
from 1 to 99 because you have been told the user only wants rows from
certain index slots.  If the user requests rows 50000-50100, this would
be much faster because you don't have to process the 50000 rows before
returning any data.  However, I question how often people grab stuff
from the center of large data sets.  Seems the QUERY_OFFSET idea may be
easier for users.

I will be commenting on the rest of the optimization postings tomorrow.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
> This is a little bit off-topic:
> I did some timings with the latest CVS on my real database
> (all output redirected to /dev/null); the table contains 8798 records
> and 31 columns, and the ORDER BY keys have indices.
>
> 1.select count(*) from work_flats;
> 0.02user 0.00system 0:00.18elapsed 10%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (131major+21minor)pagefaults 0swaps
>
> 2.select * from work_flats order by rooms, metro_id;
> 2.35user 0.25system 0:10.11elapsed 25%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (131major+2799minor)pagefaults 0swaps
>
> 3.set query_limit to '150';
> SET VARIABLE
> select * from work_flats order by rooms, metro_id;
> 0.06user 0.00system 0:02.75elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (131major+67minor)pagefaults 0swaps
>
> 4.begin;
> declare tt cursor  for
> select * from work_flats order by rooms, metro_id;
> fetch 150 in tt;
> end;
> 0.05user 0.01system 0:02.76elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (131major+67minor)pagefaults 0swaps
>
> As you can see, the timings for query_limit and cursor are very similar;
> I didn't expect this. So, in principle, an enhanced version of fetch
> (with offset) would cover all we need from LIMIT, but query_limit would
> still be useful, for example to restrict the load on the server.
> Will all the enhancements you discussed go into 6.4?
> I'm really interested in testing this stuff because I'm beginning a new
> project and everything we discussed here is badly needed.
>

When you say output to /dev/null, is that on the client or on the backend?
I will assume the client, because of the timings you are reporting.

What is the time of this, which has no ORDER BY?

    select * from work_flats;


As far as I can tell, the timing differences you are seeing are based on
the fact that the data is not being transferred to the client.  This is
the current sole use of query_limit, and a good one.  The web-app need
is to prevent processing of the entire table for just a few rows, and
currently query_limit does not do this, though Jan's patches do.


--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: [HACKERS] What about LIMIT in SELECT ?

От
"Thomas G. Lockhart"
Дата:
> Your solution really is tailored for this 'partial' query application,
> and I think it is a big need for certain applications that can't use
> cursors, like web apps.  Most other apps have long-time connections to
> the database, and are better off with cursors.

And there are persistent web servers available too, to help work around
this "stateless connection problem"? Let's remember that we are solving
a problem which has few requirements for data integrity, and which is
starting to get out of the realm of Postgres' strengths (almost any
scheme can barf data up to a client if it doesn't care whether it is
repeatable or complete).

Neat stuff though :)

                     - Tom

RE: [HACKERS] What about LIMIT in SELECT ?

От
"Hiroshi Inoue"
Дата:
>
> I know the good part of the posting is coming.
>
> >     Last with the btree index on key  and  the  patched  backend.
> >     This  time the plan is a plain IndexScan because the ORDER BY
> >     clause exactly matches the sort order of the  chosen  index.
> >     S1  needs  13  seconds  and  S2 less than 0.2!  This dramatic
> >     speedup comes from the fact, that this time the index scan is
> >     the  toplevel  executor  node and the executor run is stopped
> >     after 20 tuples have been selected.
>
> OK, seems like in the S1 case, the use of the psort/ORDER BY code on top
> of the index was taking an extra 3 seconds, which is 23%.  That is a
> lot more than I thought for the psort code, and shows we could gain a
> lot by removing unneeded sorts from queries that are already using
> matching indexes.
>
> Just for clarity, added to TODO.  I think everyone is clear on this one,
> and its magnitude is a surprise to me:
>
>   * Prevent psort() usage when query already using index matching ORDER BY
>
>

I can't find any reference to descending order cases except my posting.
If we use an index scan to remove sorts in those cases, backward
positioning and scanning are necessary.

> >     Analysis of the above timings:
> >
> >     If there is an ORDER BY clause, using an index  scan  is  the
> >     clever  way  if  the  indexqual  dramatically reduces the
> >     amount of data selected and sorted.   I  think  this  is  the
> >     normal case (who really selects nearly all rows from a 5M row
> >     table?). So choosing the index path  is  correct.  This  will
> >     hurt if someone really selects most of the rows and the index
> >     scan jumps all over the disk.  But here the programmer should use
> >     an  unqualified  query  to  perform  a  seqscan  and  do  the
> >     qualification in the frontend application.
>
> Fortunately, the optimizer already does the index selection for us, and
> guesses pretty well if the index or sequential scan is better.  Once we
> implement the above removal of psort(), we will have to change the
> timings because now you have to compare index scan against sequential
> scan AND psort(), because in the index scan situation, you don't need
> the psort(), assuming the ORDER BY matches the index exactly.
>

Let t be a table with 2 indices, index1(key1,key2) and index2(key1,key3),
i.e. key1 is common to index1 and index2.

And for the query
   select * from t where key1>....;

If the PostgreSQL optimizer chooses [ index scan on index1 ], we can't
remove the sort from the following query.
   select * from t where key1>... order by key1,key3;

Similarly, if [ index scan on index2 ] is chosen, we can't remove the sort
from the following query.
   select * from t where key1>... order by key1,key2;

But in both cases a (clever) optimizer could choose another index for the scan.
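
For clarity, a sketch of that setup ('x' standing in for any value):

    CREATE INDEX index1 ON t (key1, key2);
    CREATE INDEX index2 ON t (key1, key3);

    -- if the planner picked index1 for the scan, this sort can't be removed
    SELECT * FROM t WHERE key1 > 'x' ORDER BY key1, key3;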

Thanks.

Hiroshi Inoue
Inoue@tpf.co.jp



Re: [HACKERS] What about LIMIT in SELECT ?

От
jwieck@debis.com (Jan Wieck)
Дата:
Hiroshi Inoue wrote:

> >   * Prevent psort() usage when query already using index matching ORDER BY
> >
> >
>
> I can't find any reference to descending order cases except my posting.
> If we use an index scan to remove sorts in those cases, backward
> positioning and scanning are necessary.

    I  think  it's  only meant as a reminder that the optimizer
    needs some optimization.

    That topic, and the LIMIT stuff too I think, is past 6.4 work
    and  may  go into a 6.4.1 performance release. So when we are
    after 6.4, we have enough time to work out a  real  solution,
    instead of just throwing in a patch as a quick shot.

    What  we  two did were steps in the same direction. Yours
    covers more situations, but after all, if multiple people have
    the  same  idea,  there  is a good chance that it is the right
    thing to do.

>
> Let t be a table with 2 indices, index1(key1,key2) and index2(key1,key3),
> i.e. key1 is common to index1 and index2.
>
> And for the query
>   select * from t where key1>....;
>
> If the PostgreSQL optimizer chooses [ index scan on index1 ], we can't
> remove the sort from the following query.
>    select * from t where key1>... order by key1,key3;
>
> Similarly, if [ index scan on index2 ] is chosen, we can't remove the sort
> from the following query.
>    select * from t where key1>... order by key1,key2;
>
> But in both cases a (clever) optimizer could choose another index for the scan.

    Right. As I remember, your solution does basically  the  same
    as  mine. It does not change the optimizer's decision about
    the index, or whether an index is used at all.  So  I  assume
    they hook  into  the same position where, depending on the
    order by clause, the sort node is added. And that is at the
    very end of the optimizer.

    What  you  describe above requires changes in upper levels of
    the optimization.  Doing that is far beyond my knowledge of
    the  optimizer.   And  some of your earlier statements let me
    think you aren't familiar enough with  it  either.  We  need
    at least help from others to do it well.

    I  don't want to dive that deep into the optimizer. There was
    a far too long time where the rule system was broken and  got
    out  of  sync with the parser/optimizer capabilities. I fixed
    many things in it for 6.4. My first priority now is not  to
    let such a situation come up again.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
>     Right. As I remember, your solution does basically  the  same
>     as  mine. It does not change the optimizer's decision about
>     the index, or whether an index is used at all.  So  I  assume
>     they hook  into  the same position where, depending on the
>     order by clause, the sort node is added. And that is at the
>     very end of the optimizer.
> 
>     What  you  describe above requires changes in upper levels of
>     the optimization.  Doing that is far beyond my knowledge of
>     the  optimizer.   And  some of your earlier statements let me
>     think you aren't familiar enough with  it  either.  We  need
>     at least help from others to do it well.
> 
>     I  don't want to dive that deep into the optimizer. There was
>     a far too long time where the rule system was broken and  got
>     out  of  sync with the parser/optimizer capabilities. I fixed
>     many things in it for 6.4. My first priority now is not  to
>     let such a situation come up again.

I agree.  Another good thing is that the LIMIT thing will not require a
dump/reload, so it is a good candidate for a minor release.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
> >
> > I know the good part of the posting is coming.
> >
> > >     Last with the btree index on key  and  the  patched  backend.
> > >     This  time the plan is a plain IndexScan because the ORDER BY
> > >     clause exactly matches the sort order of the  chosen  index.
> > >     S1  needs  13  seconds  and  S2 less than 0.2!  This dramatic
> > >     speedup comes from the fact, that this time the index scan is
> > >     the  toplevel  executor  node and the executor run is stopped
> > >     after 20 tuples have been selected.
> >
> > OK, seems like in the S1 case, the use of the psort/ORDER BY code on top
> > of the index was taking an extra 3 seconds, which is 23%.  That is a
> > lot more than I thought for the psort code, and shows we could gain a
> > lot by removing unneeded sorts from queries that are already using
> > matching indexes.
> >
> > Just for clarity, added to TODO.  I think everyone is clear on this one,
> > and its magnitude is a surprise to me:
> >
> >   * Prevent psort() usage when query already using index matching ORDER BY
> >
> >

In a multi-column ORDER BY, the direction of the sorts will have to be
identical too.  That is assumed, I think.  If all are descending, I
think we can traverse the index in reverse order, or can't we do that?
I am not sure, but if we can't, descending would fail, and require a
psort.
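
For example (tab, col1 and col2 are placeholders):

    -- all descending: a backward index traversal could satisfy this
    SELECT * FROM tab ORDER BY col1 DESC, col2 DESC;

    -- mixed directions: no single index traversal matches, so psort is needed
    SELECT * FROM tab ORDER BY col1 DESC, col2;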


> 
> I can't find any reference to descending order cases except my posting.
> If we use an index scan to remove sorts in those cases, backward
> positioning and scanning are necessary.
> 
> > >     Analysis of the above timings:
> > >
> > >     If there is an ORDER BY clause, using an index  scan  is  the
> > >     clever  way  if  the  indexqual  dramatically reduces the
> > >     amount of data selected and sorted.   I  think  this  is  the
> > >     normal case (who really selects nearly all rows from a 5M row
> > >     table?). So choosing the index path  is  correct.  This  will
> > >     hurt if someone really selects most of the rows and the index
> > >     scan jumps all over the disk.  But here the programmer should use
> > >     an  unqualified  query  to  perform  a  seqscan  and  do  the
> > >     qualification in the frontend application.
> >
> > Fortunately, the optimizer already does the index selection for us, and
> > guesses pretty well if the index or sequential scan is better.  Once we
> > implement the above removal of psort(), we will have to change the
> > timings because now you have to compare index scan against sequential
> > scan AND psort(), because in the index scan situation, you don't need
> > the psort(), assuming the ORDER BY matches the index exactly.
> >
> 
> Let t be a table with 2 indices, index1(key1,key2) and index2(key1,key3),
> i.e. key1 is common to index1 and index2.
> 
> And for the query
>   select * from t where key1>....;
> 
> If the PostgreSQL optimizer chooses [ index scan on index1 ], we can't
> remove the sort from the following query.
>     select * from t where key1>... order by key1,key3;
> 
> Similarly, if [ index scan on index2 ] is chosen, we can't remove the sort
> from the following query.
>     select * from t where key1>... order by key1,key2;
> 
> But in both cases a (clever) optimizer could choose another index for the scan.

Yes, the optimizer is going to have to be smart by looking at the ORDER
BY, and nudging the code to favor a certain index.  This is also true in
a join, where we will want to use an index in cases we would normally
not use it, and prefer a certain index over others.


--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] What about LIMIT in SELECT ?

От
jwieck@debis.com (Jan Wieck)
Дата:
>
> I agree.  Another good thing is that the LIMIT thing will not require a
> dump/reload, so it is a good candidate for a minor release.

    That's wrong, sorry.

    The  limit  thing as I implemented it adds 2 new variables to
    the Query structure. Rewrite rules are stored  as  querytrees,
    and those variables would be missing in the existing
    pg_rewrite entries.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
> >
> > I agree.  Another good thing is that the LIMIT thing will not require a
> > dump/reload, so it is a good candidate for a minor release.
> 
>     That's wrong, sorry.
> 
>     The  limit  thing as I implemented it adds 2 new variables to
>     the Query structure. Rewrite rules are stored  as  querytrees,
>     and those variables would be missing in the existing pg_rewrite entries.

Oh, sorry.  I forgot.  That could be tough.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] What about LIMIT in SELECT ?

От
jwieck@debis.com (Jan Wieck)
Дата:
>
> > >
> > > I agree.  Another good thing is that the LIMIT thing will not require a
> > > dump/reload, so it is a good candidate for a minor release.
> >
> >     That's wrong, sorry.
> >
> >     The  limit  thing as I implemented it adds 2 new variables to
> >     the Query structure. Rewrite rules are stored  as  querytrees,
> >     and those variables would be missing in the existing pg_rewrite entries.
>
> Oh, sorry.  I forgot.  That could be tough.

    But it wouldn't hurt to add them now to have them in
    place. The required out-, read- and copyfuncs are in
    my patch too. This  would prevent  dump/load when we
    later add the real LIMIT functionality. And  it does
    not change anything now.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
> >
> > I agree.  Another good thing is that the LIMIT thing will not require a
> > dump/reload, so it is a good candidate for a minor release.
> 
>     That's wrong, sorry.
> 
>     The  limit  thing as I implemented it adds 2 new variables to
>     the Query structure. Rewrite rules are stored  as  querytrees,
>     and those variables would be missing in the existing pg_rewrite entries.

Not sure how to address this.  Perhaps we could write a query as part of
the upgrade that adds these to the existing rules, or we could require
an initdb of all beta users.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
> > 
> > > >
> > > > I agree.  Another good thing is that the LIMIT thing will not require a
> > > > dump/reload, so it is a good candidate for a minor release.
> > > 
> > >     That's wrong, sorry.
> > > 
> > >     The  limit  thing as I implemented it adds 2 new variables to
> > >     the Query structure. Rewrite rules are stored  as  querytrees,
> > >     and those variables would be missing in the existing pg_rewrite entries.
> > 
> > Oh, sorry.  I forgot.  That could be tough.
> 
>     But it wouldn't hurt to add them now to have them in
>     place. The required out-, read- and copyfuncs are in
>     my patch too. This  would prevent  dump/load when we
>     later add the real LIMIT functionality. And  it does
>     not change anything now.
> 

Jan, we found that I am having to require an initdb for the INET/CIDR
type, so if you want stuff to change the views/rules for the limit
addition post 6.4, please send them in and I will apply them.

You clearly have the syntax down, so I think you should go ahead.


--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: [HACKERS] What about LIMIT in SELECT ?

От
jwieck@debis.com (Jan Wieck)
Дата:
> Jan, we found that I am having to require an initdb for the INET/CIDR
> type, so if you want stuff to change the views/rules for the limit
> addition post 6.4, please send them in and I will apply them.
>
> You clearly have the syntax down, so I think you should go ahead.

    This is the part that will enable the post-6.4 addition
    of the LIMIT stuff without an initdb.

    Regression tested.


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#======================================== jwieck@debis.com (Jan Wieck) #

diff -cr src.orig/backend/nodes/copyfuncs.c src/backend/nodes/copyfuncs.c
*** src.orig/backend/nodes/copyfuncs.c    Fri Oct 16 11:53:40 1998
--- src/backend/nodes/copyfuncs.c    Fri Oct 16 13:32:35 1998
***************
*** 1578,1583 ****
--- 1578,1586 ----
          newnode->unionClause = temp_list;
      }

+     Node_Copy(from, newnode, limitOffset);
+     Node_Copy(from, newnode, limitCount);
+
      return newnode;
  }

diff -cr src.orig/backend/nodes/outfuncs.c src/backend/nodes/outfuncs.c
*** src.orig/backend/nodes/outfuncs.c    Fri Oct 16 11:53:40 1998
--- src/backend/nodes/outfuncs.c    Fri Oct 16 13:30:50 1998
***************
*** 259,264 ****
--- 259,268 ----
      appendStringInfo(str, (node->hasSubLinks ? "true" : "false"));
      appendStringInfo(str, " :unionClause ");
      _outNode(str, node->unionClause);
+     appendStringInfo(str, " :limitOffset ");
+     _outNode(str, node->limitOffset);
+     appendStringInfo(str, " :limitCount ");
+     _outNode(str, node->limitCount);
  }

  static void
diff -cr src.orig/backend/nodes/readfuncs.c src/backend/nodes/readfuncs.c
*** src.orig/backend/nodes/readfuncs.c    Fri Oct 16 11:53:40 1998
--- src/backend/nodes/readfuncs.c    Fri Oct 16 13:31:43 1998
***************
*** 163,168 ****
--- 163,174 ----
      token = lsptok(NULL, &length);        /* skip :unionClause */
      local_node->unionClause = nodeRead(true);

+     token = lsptok(NULL, &length);        /* skip :limitOffset */
+     local_node->limitOffset = nodeRead(true);
+
+     token = lsptok(NULL, &length);        /* skip :limitCount */
+     local_node->limitCount = nodeRead(true);
+
      return local_node;
  }

diff -cr src.orig/include/nodes/parsenodes.h src/include/nodes/parsenodes.h
*** src.orig/include/nodes/parsenodes.h    Fri Oct 16 11:53:58 1998
--- src/include/nodes/parsenodes.h    Fri Oct 16 13:35:32 1998
***************
*** 60,65 ****
--- 60,67 ----

      List       *unionClause;    /* unions are linked under the previous
                                   * query */
+     Node       *limitOffset;    /* # of result tuples to skip */
+     Node       *limitCount;        /* # of result tuples to return */

      /* internal to planner */
      List       *base_rel_list;    /* base relation list */
***************
*** 639,644 ****
--- 641,648 ----
      char       *portalname;        /* the portal (cursor) to create */
      bool        binary;            /* a binary (internal) portal? */
      bool        unionall;        /* union without unique sort */
+     Node       *limitOffset;    /* # of result tuples to skip */
+     Node       *limitCount;        /* # of result tuples to return */
  } SelectStmt;

Re: [HACKERS] What about LIMIT in SELECT ?

От
Bruce Momjian
Дата:
> > Jan, we found that I am having to require an initdb for the INET/CIDR
> > type, so if you want stuff to change the views/rules for the limit
> > addition post 6.4, please send them in and I will apply them.
> > 
> > You clearly have the syntax down, so I think you should go ahead.
> 
>     This is the part that will enable the post-6.4 addition
>     of the LIMIT stuff without an initdb.
> 
>     Regression tested.

Applied.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026