Discussion: Suspending SELECTs

Suspending SELECTs

From
Alessandro Baretta
Date:
I am aware that what I am dreaming of is already available through cursors, but
in a web application, cursors are bad boys, and should be avoided. What I would
like to be able to do is to plan a query and run the plan to retrieve a limited
number of rows as well as the executor's state. This way, the burden of
maintaining the cursor "on hold", between activations of the web resource which
uses it, is transferred from the DBMS to the web application server, and, most
importantly, the responsibility for garbage-collecting stale cursors is
implicitly delegated to the garbage-collector of active user sessions. Without
this mechanism, we are left with two equally unpleasant solutions: first, any
time a user instantiates a new session, a new cursor would have to be declared
for all relevant queries, and an ad-hoc garbage collection daemon would have to
be written to periodically scan the database for stale cursors to be closed;
otherwise, instead of using cursors, the web application could resort to
OFFSET-LIMIT queries--no garbage collection issues but pathetic performance and
server load.

Do we have any way out?

Alex

--
*********************************************************************
http://www.barettadeit.com/
Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)
<http://www.asxcaml.org/>

The FreerP Project
<http://www.freerp.org/>

Re: Suspending SELECTs

From
Tom Lane
Date:
Alessandro Baretta <a.baretta@barettadeit.com> writes:
> I am aware that what I am dreaming of is already available through
> cursors, but in a web application, cursors are bad boys, and should be
> avoided. What I would like to be able to do is to plan a query and run
> the plan to retrieve a limited number of rows as well as the
> executor's state. This way, the burden of maintaining the cursor "on
> hold", between activations of the web resource which uses it, is
> transferred from the DBMS to the web application server,

This is a pipe dream, I'm afraid, as the state of a cursor does not
consist exclusively of bits that can be sent somewhere else and then
retrieved.  There are also locks to worry about, as well as the open
transaction itself, and these must stay alive inside the DBMS because
they affect the behavior of other transactions.  As an example, once
the cursor's originating transaction closes, there is nothing to stop
other transactions from modifying or removing rows it would have read.

            regards, tom lane

Re: Suspending SELECTs

From
Alvaro Herrera
Date:
Tom Lane wrote:
> Alessandro Baretta <a.baretta@barettadeit.com> writes:
> > I am aware that what I am dreaming of is already available through
> > cursors, but in a web application, cursors are bad boys, and should be
> > avoided. What I would like to be able to do is to plan a query and run
> > the plan to retrieve a limited number of rows as well as the
> > executor's state. This way, the burden of maintaining the cursor "on
> > hold", between activations of the web resource which uses it, is
> > transferred from the DBMS to the web application server,
>
> This is a pipe dream, I'm afraid, as the state of a cursor does not
> consist exclusively of bits that can be sent somewhere else and then
> retrieved.

I wonder if we could have a way to "suspend" a transaction and restart
it later in another backend.  I think we could do something like this
using the 2PC machinery.

Not that I'm up for coding it; just an idea that crossed my mind.

--
Alvaro Herrera                           Developer, http://www.PostgreSQL.org
Oh, oh, the Galacian girls will do it for pearls,
And the Arrakeen for water! But if you seek dames
That burn like flames, try a daughter of Caladan! (Gurney Halleck)

Re: Suspending SELECTs

From
Tom Lane
Date:
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> I wonder if we could have a way to "suspend" a transaction and restart
> it later in another backend.  I think we could do something like this
> using the 2PC machinery.
> Not that I'm up for coding it; just an idea that crossed my mind.

It's not impossible, perhaps, but it would require an order-of-magnitude
expansion of the 2PC machinery --- the amount of state associated with
an open execution plan is daunting.  I think there are discussions about
this in the archives.

            regards, tom lane

Re: Suspending SELECTs

From
Mark Lewis
Date:
On Mon, 2006-01-16 at 11:13 +0100, Alessandro Baretta wrote:
> I am aware that what I am dreaming of is already available through cursors, but
> in a web application, cursors are bad boys, and should be avoided. What I would
> like to be able to do is to plan a query and run the plan to retrieve a limited
> number of rows as well as the executor's state. This way, the burden of
> maintaining the cursor "on hold", between activations of the web resource which
> uses it, is transferred from the DBMS to the web application server, and, most
> importantly, the responsibility for garbage-collecting stale cursors is
> implicitly delegated to the garbage-collector of active user sessions. Without
> this mechanism, we are left with two equally unpleasant solutions: first, any
> time a user instantiates a new session, a new cursor would have to be declared
> for all relevant queries, and an ad-hoc garbage collection daemon would have to
> be written to periodically scan the database for stale cursors to be closed;
> otherwise, instead of using cursors, the web application could resort to
> OFFSET-LIMIT queries--no garbage collection issues but pathetic performance and
> server load.
>
> Do we have any way out?
>
> Alex

I know that Tom has pretty much ruled out any persistent cursor
implementation in the database, but here's an idea for a workaround in
the app:

Have a pool of connections used for these queries.  When a user runs a
query the first time, create a cursor and remember that this user
session is associated with that particular connection.  When the user
tries to view the next page of results, request that particular
connection from the pool and continue to use the cursor.  Between
requests, this connection could of course be used to service other
users.

This avoids the awfulness of tying up a connection for the entire course
of a user session, but still allows you to use cursors for
performance.

When a user session is invalidated or times out, you remove the mapping
for this connection and close the cursor.  Whenever there are no more
mappings for a particular connection, you can use the opportunity to
close the current transaction (to prevent eternal transactions).

If the site is at all busy, you will need to implement a pooling policy
such as 'do not open new cursors on the connection with the oldest
transaction', which will ensure that all transactions can be closed in a
finite amount of time; the upper bound on the duration of a transaction
is (longest_session_duration * connections in pool).
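
For illustration, a minimal sketch of that bookkeeping in Python, assuming a
DB-API driver along the lines of psycopg2; the class and method names are
invented, and error handling plus the 'oldest transaction' eviction policy
above are omitted:

    import threading
    import psycopg2

    class CursorPool:
        """Map web-session ids to the pooled connection holding their cursor."""
        def __init__(self, dsns):
            self.lock = threading.Lock()
            self.conns = [psycopg2.connect(dsn) for dsn in dsns]
            self.next_conn = 0         # naive round-robin; see policy above
            self.sessions = {}         # session_id -> (connection, cursor)

        def open_cursor(self, session_id, query):
            with self.lock:
                conn = self.conns[self.next_conn % len(self.conns)]
                self.next_conn += 1
                # Named cursor: held server-side inside the open transaction.
                # Assumes session_id is safe to use in an identifier.
                cur = conn.cursor('cur_' + session_id)
                cur.execute(query)
                self.sessions[session_id] = (conn, cur)

        def fetch_page(self, session_id, page_size=20):
            # Route the request back to the connection the cursor lives on.
            conn, cur = self.sessions[session_id]
            return cur.fetchmany(page_size)

        def close_session(self, session_id):
            # Called when the user session is invalidated or times out.
            with self.lock:
                conn, cur = self.sessions.pop(session_id)
                cur.close()
                if all(c is not conn for c, _ in self.sessions.values()):
                    conn.rollback()    # no cursors left: end the transaction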

Limitations:

1. You shouldn't do anything that acquires write locks on the database
using these connections, because the transactions will be long-running.
To mitigate this, use a separate connection pool.

2. Doesn't work well if some queries take a long time to run, because
other users may need to wait for the connection, and another connection
won't do.

3. If this is a busy web site, you might end up with potentially many
thousands of open cursors.  I don't know if this introduces an
unacceptable performance penalty or other bottleneck in the server?

-- Mark Lewis

Re: Suspending SELECTs

From
"Craig A. James"
Date:
Alessandro Baretta <a.baretta@barettadeit.com> writes:
>I am aware that what I am dreaming of is already available through
>cursors, but in a web application, cursors are bad boys, and should be
>avoided. What I would like to be able to do is to plan a query and run
>the plan to retrieve a limited number of rows as well as the
>executor's state. This way, the burden of maintaining the cursor "on
>hold", between activations of the web resource which uses it, is
>transferred from the DBMS to the web application server,

I think you're trying to do something at the wrong layer of your architecture.
This task normally goes in your middleware layer, not your database layer.

There are several technologies that allow you to keep persistent database
sessions open (for example, mod_perl, mod_cgi among others).  If you combine
these with what's called "session affinity" (the ability of a load-balancing
server to route a particular user back to the same persistent session object
every time), then you can create a middleware layer that does exactly what
you need.

Basically, you create a session object that holds all of the state (such as
your cursor, and anything else you need to maintain between requests), and
send back a cookie to the client.  Each time the client reconnects, your
server finds the user's session object using the cookie, and you're ready
to go.

The main trick is that you have to manage your session objects, primarily to
flush the full state to the database, if too much time elapses between
requests, and then be able to re-create them on demand.  Enterprise Java
Beans has a large fraction of its design devoted to this sort of object
management.

There are solutions for this in just about every middleware technology, from
Apache/perl to EJB to CORBA.  Search for "session affinity" and you should
find a lot of information.

Craig

Re: Suspending SELECTs

From
Alessandro Baretta
Date:
Tom Lane wrote:
> Alessandro Baretta <a.baretta@barettadeit.com> writes:
>
>>I am aware that what I am dreaming of is already available through
>>cursors, but in a web application, cursors are bad boys, and should be
>>avoided. What I would like to be able to do is to plan a query and run
>>the plan to retrieve a limited number of rows as well as the
>>executor's state. This way, the burden of maintaining the cursor "on
>>hold", between activations of the web resource which uses it, is
>>transferred from the DBMS to the web application server,
>
>
> This is a pipe dream, I'm afraid, as the state of a cursor does not
> consist exclusively of bits that can be sent somewhere else and then
> retrieved.  There are also locks to worry about, as well as the open
> transaction itself, and these must stay alive inside the DBMS because
> they affect the behavior of other transactions.  As an example, once
> the cursor's originating transaction closes, there is nothing to stop
> other transactions from modifying or removing rows it would have read.

I understand most of these issues, and expected this kind of reply. Please,
allow me to insist that we reason about this problem and try to find a solution. My
reason for doing so is that the future software industry is likely to see more
and more web applications retrieving data from virtually endless databases, and
in such contexts, it is sensible to ask the final client--the web client--to
store the "cursor state", because web interaction is intrinsically asynchronous,
and you cannot count on users logging out when they're done, releasing resources
allocated to them. Think of Google.

Let me propose a possible solution strategy for the problem of "client-side
cursors".
* Let us admit the limitation that a "client-side cursor" can only be declared
in a transaction where no inserts, updates or deletes are allowed, so that such
a transaction is virtually non-existent to other transactions. This allows the
backend to close the transaction and release locks as soon as the cursor is
declared.
* When the cursor state is pushed back to the backend, no new transaction is
instantiated, but the XID of the original transaction is reused. In the MVCC
system, this allows us to achieve a perfectly consistent view of the database at
the instant the original transaction started, unless a VACUUM command has been
executed in the meantime, in which case I would lose track of tuples which would
have been live in the context of the original transaction, but have been updated
or deleted and later vacuumed; however, this does not bother me at all.

Is this not a viable solution?

Alex

--
*********************************************************************
http://www.barettadeit.com/
Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)
<http://www.asxcaml.org/>

The FreerP Project
<http://www.freerp.org/>

Re: Suspending SELECTs

From
Michael Stone
Date:
On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
>I understand most of these issues, and expected this kind of reply. Please,
>allow me to insist that we reason about this problem and try to find a
>solution. My reason for doing so is that the future software industry is
>likely to see more and more web applications retrieving data from virtually
>endless databases, and in such contexts, it is sensible to ask the final
>client--the web client--to store the "cursor state", because web
>interaction is intrinsically asynchronous, and you cannot count on users
>logging out when they're done, releasing resources allocated to them. Think
>of Google.

I don't understand why it is better to rework the db instead of just
having the web middleware keep track of what cursors are associated with
what sessions?

Mike Stone

Re: Suspending SELECTs

From
Alessandro Baretta
Date:
Craig A. James wrote:
>
> Alessandro Baretta <a.baretta@barettadeit.com> writes:
>
> I think you're trying to do something at the wrong layer of your
> architecture.  This task normally goes in your middleware layer, not
> your database layer.

I am developing my applications in Objective Caml, and I have written the
middleware layer myself. I could easily implement a cursor-pooling strategy, but
there is no perfect solution to the problem of guaranteeing that cursors be
closed. Remember that web applications require the user to "open a session" by
connecting to the appropriate HTTP resource, but users are never required to log
out. Hence, in order to eventually reclaim all cursors, I must use a magical
"log-out detection" algorithm, which is usually implemented with a simple
timeout. This guarantees the required property of safety (the population of
cursors does not diverge) but does not guarantee the required property of
liveness (a user connecting to the application, who has opened a session but has
not logged out, and thus possesses a session token, should have access to the
execution context identified by his token).

Alex



--
*********************************************************************
http://www.barettadeit.com/
Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)
<http://www.asxcaml.org/>

The FreerP Project
<http://www.freerp.org/>

Re: Suspending SELECTs

From
Tom Lane
Date:
Alessandro Baretta <a.baretta@barettadeit.com> writes:
> * When the cursor state is pushed back to the backend, no new
> transaction is instantiated, but the XID of the original transaction
> is reused. In the MVCC system, this allows us to achieve a perfectly
> consistent view of the database at the instant the original
> transaction started, unless a VACUUM command has been executed in the
> meantime, in which case I would lose track of tuples which would have
> been live in the context of the original transaction, but have been
> updated or deleted and later vacuumed; however, this does not bother
> me at all.

> Is this not a viable solution?

No.  I'm not interested in "solutions" that can be translated as "you
may or may not get the right answer, and there's no way even to know
whether you did or not".  That might be acceptable for your particular
application but you certainly can't argue that it's of general
usefulness.

Also, I can't accept the concept of pushing the entire execution engine
state out to the client and then back again; that state is large enough
that doing so for every few dozen rows would yield incredibly bad
performance.  (In most scenarios I think it'd be just as efficient for
the client to pull the whole cursor output at the start and page through
it for itself.)  Worse yet: this would represent a security hole large
enough to wheel West Virginia through.  We'd have no reasonable way to
validate the data the client sends back.

Lastly, you underestimate the problems associated with not holding the
locks the cursor is using.  As an example, it's likely that a btree
indexscan wouldn't successfully restart at all, because it couldn't find
where it had been if the index page had been split or deleted meanwhile.
So not running VACUUM is not enough to guarantee the query will still
work.

            regards, tom lane

Re: Suspending SELECTs

From
"Jim C. Nasby"
Date:
On Tue, Jan 17, 2006 at 09:06:53PM +0100, Alessandro Baretta wrote:
> Craig A. James wrote:
> >
> >Alessandro Baretta <a.baretta@barettadeit.com> writes:
> >
> >I think you're trying to do something at the wrong layer of your
> >architecture.  This task normally goes in your middleware layer, not
> >your database layer.
>
> I am developing my applications in Objective Caml, and I have written the
> middleware layer myself. I could easily implement a cursor-pooling
> strategy, but there is no perfect solution to the problem of guaranteeing
> that cursors be closed. Remember that web applications require the user to
> "open a session" by connecting the appropriate HTTP resource, but users as
> never required to log out. Hence, in order to eventually reclaim all
> cursors, I must use magical "log-out detection" algorithm, which is usually
> implemented with a simple timeout. This guarantees the required property of
> safety (the population of cursors does not diverge) but does not guarantee
> the required property of liveness (a user connecting to the application,
> who has opened a session but has not logged out, and thus possesses a
> session token, should have access the execution context identified by his
> token).

With some "AJAX magic", it would probably be pretty easy to create an
application that let you know very quickly if a user left the
application (ie: browsed to another site, or closed the browser).
Essentially, you should be able to set it up so that it will ping the
application server fairly frequently (like every 10 seconds), so you
could drastically reduce the timeout interval.
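
A sketch of the server side of that scheme in Python (names invented; the
client-side JavaScript timer and the URL routing are not shown):

    import time

    PING_INTERVAL = 10   # the page's AJAX timer pings every 10 seconds
    MISSED_PINGS = 3     # declare the user gone after 3 missed pings

    last_ping = {}       # session_id -> time of last heartbeat

    def heartbeat(session_id):
        # Handler for the URL the AJAX timer requests.
        last_ping[session_id] = time.time()

    def reap_sessions(close_session):
        # close_session(sid) would release that session's cursor/connection;
        # call this periodically, e.g. from a background thread.
        cutoff = time.time() - PING_INTERVAL * MISSED_PINGS
        for sid, seen in list(last_ping.items()):
            if seen < cutoff:
                del last_ping[sid]
                close_session(sid)
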
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: Suspending SELECTs

From
mark@mark.mielke.cc
Date:
On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
> I understand most of these issues, and expected this kind of reply. Please,
> allow me to insist that we reason about this problem and try to find a
> solution. My reason for doing so is that the future software industry is
> likely to see more and more web applications retrieving data from virtually
> endless databases, and in such contexts, it is sensible to ask the final
> client--the web client--to store the "cursor state", because web
> interaction is intrinsically asynchronous, and you cannot count on users
> logging out when they're done, releasing resources allocated to them. Think
> of Google.

What is wrong with LIMIT and OFFSET? I assume your results are ordered
in some manner.

Especially with web users, who become bored if the page doesn't flicker
in a way that appeals to them, how could one have any expectation that
the cursor would ever be useful at all?

As a 'general' solution, I think optimizing the case where the same
query is executed multiple times, with only the LIMIT and OFFSET
parameters changing, would be a better bang for the buck. I'm thinking
along the lines of materialized views, for queries executed more than
a dozen times in a short length of time... :-)

In the mean time, I successfully use LIMIT and OFFSET without such an
optimization, and things have been fine for me.
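
In middleware, that optimization can be approximated with a small cache keyed
on the query text, serving LIMIT/OFFSET as list slices. A sketch only,
assuming a DB-API style connection; memory bounds and invalidation on writes
are left out:

    import time

    class PageCache:
        def __init__(self, conn, ttl=60):
            self.conn = conn
            self.ttl = ttl      # seconds a cached result set stays fresh
            self.cache = {}     # (sql, params) -> (fetch_time, rows)

        def page(self, sql, params, limit, offset):
            key = (sql, params)          # params must be a hashable tuple
            hit = self.cache.get(key)
            if hit is None or time.time() - hit[0] > self.ttl:
                cur = self.conn.cursor()
                cur.execute(sql, params)   # run once, without LIMIT/OFFSET
                hit = (time.time(), cur.fetchall())
                self.cache[key] = hit
            return hit[1][offset:offset + limit]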

Cheers,
mark

--
mark@mielke.cc / markm@ncf.ca / markm@nortel.com     __________________________
.  .  _  ._  . .   .__    .  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/    |_     |\/|  |  |_  |   |/  |_   |
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
                       and in the darkness bind them...

                           http://mark.mielke.cc/


Re: Suspending SELECTs

From
Mark Lewis
Date:
> I am developing my applications in Objective Caml, and I have written the
> middleware layer myself. I could easily implement a cursor-pooling strategy, but
> there is no perfect solution to the problem of guaranteeing that cursors be
> closed. Remember that web applications require the user to "open a session" by
> connecting to the appropriate HTTP resource, but users are never required to
> log out. Hence, in order to eventually reclaim all cursors, I must use a
> magical "log-out detection" algorithm, which is usually implemented with a
> simple timeout. This guarantees the required property of safety (the
> population of cursors does not diverge) but does not guarantee the required
> property of liveness (a user connecting to the application, who has opened a
> session but has not logged out, and thus possesses a session token, should
> have access to the execution context identified by his token).

I fail to see the problem here.  Why should "liveness" be a required
property?  If it is simply that you can't promptly detect when a user is
finished with their web session so you can free resources, then remember
that there is no requirement that you dedicate a connection to their
session in the first place.  Even if you're using your own custom
middleware, it isn't a very complicated or conceptually difficult thing
to implement (see my previous post).  Certainly it's simpler than
allowing clients to pass around runtime state.

As far as implementing this sort of thing in the back-end, it would be
really hard with the PostgreSQL versioning model.  Oracle can more
easily (and kind of does) support cursors like you've described because
they implement MVCC differently than PostgreSQL, and in their
implementation you're guaranteed that you always have access to the most
recent x megabytes of historical rows, so even without an open
transaction to keep the required rows around you can still be relatively
sure they'll be around for "long enough".  In PostgreSQL, historical
rows are kept in the tables themselves and periodically vacuumed, so
there is no such guarantee, which means that you would need to either
implement a lot of complex locking for little material gain, or just
hold the cursors in moderately long-running transactions, which leads
back to the solution suggested earlier.

-- Mark Lewis



Re: Suspending SELECTs

From
Frank Wiles
Date:
On Tue, 17 Jan 2006 16:12:59 -0500
mark@mark.mielke.cc wrote:

> In the mean time, I successfully use LIMIT and OFFSET without such an
> optimization, and things have been fine for me.

  Same here.

 ---------------------------------
   Frank Wiles <frank@wiles.org>
   http://www.wiles.org
 ---------------------------------


Re: Suspending SELECTs

From
Josh Berkus
Date:
Alessandro,

> I understand most of these issues, and expected this kind of reply.
> Please, allow me to insist that we reason about this problem and try to
> find a solution. My reason for doing so is that the future software
> industry is likely to see more and more web applications retrieving data
> from virtually endless databases, and in such contexts, it is sensible
> to ask the final client--the web client--to store the "cursor state",
> because web interaction is intrinsically asynchronous, and you cannot
> count on users logging out when they're done, releasing resources
> allocated to them. Think of Google.

I think you're trying to use an unreasonably difficult method to solve a
problem that's already been solved multiple times.  What you want is
called "query caching."   There are about 800 different ways to do this on
the middleware or application layer which are 1000% easier than what
you're proposing.

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

Re: Suspending SELECTs

From
Josh Berkus
Date:
People:

To follow up further, what Alessandro is talking about is known as a
"keyset cursor".   Sybase and SQL Server used to support them; I beleive
that they were strictly read-only and had weird issues with record
visibility.

--
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

Re: Suspending SELECTs

From
Mark Kirkwood
Date:
mark@mark.mielke.cc wrote:
> On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
>
>
> What is wrong with LIMIT and OFFSET? I assume your results are ordered
> in some manner.
>
> Especially with web users, who become bored if the page doesn't flicker
> in a way that appeals to them, how could one have any expectation that
> the cursor would ever be useful at all?
>
> As a 'general' solution, I think optimizing the case where the same
> query is executed multiple times, with only the LIMIT and OFFSET
> parameters changing, would be a better bang for the buck. I'm thinking
> along the lines of materialized views, for queries executed more than
> a dozen times in a short length of time... :-)
>
> In the mean time, I successfully use LIMIT and OFFSET without such an
> optimization, and things have been fine for me.
>

Second that.

I do seem to recall a case where I used a different variant of this
method (possibly a database product that didn't have OFFSET, or maybe
because OFFSET was expensive for the case in point), where the ORDER BY
key for the last record on the page was saved and the query amended to
use it to filter for the "next" screen - e.g.:

1st time in:

SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;

Suppose this displays records for ids 10000 -> 10020.
When the user hits next, the page saves id=10020 in the session state
and executes:

SELECT ... FROM table WHERE ... AND id > 10020 ORDER BY id LIMIT 20;

Clearly you have to be a little careful about whether to use '>' or '>='
depending on whether 'id' is unique or not (to continue using '>' in the
non-unique case, you can just save and use all the members of the
primary key too).
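
Spelled out for a hypothetical two-column sort key -- say people(last_name,
id), with DB-API style placeholders -- the "next page" query becomes the
hand-expanded predicate below. This is only a sketch; as Tom notes later in
the thread, the expansion gets painful as the key grows.

    # Continue after the last row of the previous page; its
    # (last_name, id) values were saved in the session state.
    NEXT_PAGE = """
        SELECT * FROM people
        WHERE last_name > %(ln)s
           OR (last_name = %(ln)s AND id > %(id)s)
        ORDER BY last_name, id
        LIMIT 20
    """

    def next_page(cur, last_name, last_id):
        cur.execute(NEXT_PAGE, {"ln": last_name, "id": last_id})
        return cur.fetchall()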

Cheers

Mark

Re: Suspending SELECTs

From
"Craig A. James"
Date:
Alessandro Baretta wrote:
>> I think you're trying to do something at the wrong layer of your
>> architecture.  This task normally goes in your middleware layer, not
>> your database layer.
>
> I am developing my applications in Objective Caml, and I have written
> the middleware layer myself. I could easily implement a cursor-pooling
> strategy...

You're trying to solve a very hard problem, and you're rewriting a lot of
stuff that's been worked on for years by teams of people.  If there's any way
you can switch to using something like JBOSS, it might save you a lot of
grief and hard work.

I eliminated this problem a different way, using what we call a "hitlist".
Basically, every query becomes a "select into", something like this:

  insert into hitlist_xxxx (select id from ...)

where "xxxx" is your user's id.  Once you do this, it's trivial to return
each page to the user almost instantly using offset/limit, or by adding a
"ROW_NUM" column of some sort.  We manage very large hitlists -- millions of
rows.  Going from page 1 to page 100,000 takes a fraction of a second.

It also has the advantage that the user can come back in a week or a month and the results are still there.

The drawbacks are:

1. Before the user gets the first page, the entire query must complete.
2. You need a way to clean up old hitlists.
3. If you have tens of thousands of users, you'll have a large number of
hitlists, and you have to use tablespaces to ensure that Linux filesystem
directories don't get too large.
4. It takes space to store everyone's data.  (But disk space is so cheap this
isn't much of an issue.)

You can eliminate #3 with a single shared hitlist with a column of UserIDs.
But experience shows that a big shared hitlist doesn't work very well:
Inserts get slower because the UserID column must be indexed, and you can
truncate individual hitlists but you have to delete from a shared hitlist.
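
A sketch of the table-per-user flow in Python (illustrative names only;
assumes user_id has already been validated, since it is spliced into
identifiers):

    def build_hitlist(conn, user_id, search_sql):
        # Run the expensive search once, materializing only the ids.
        cur = conn.cursor()
        cur.execute(f"CREATE TABLE hitlist_{user_id} (id integer)")
        cur.execute(f"INSERT INTO hitlist_{user_id} ({search_sql})")
        conn.commit()

    def hitlist_page(conn, user_id, page, page_size=100):
        # Any page is now a cheap scan of the small hitlist table.
        cur = conn.cursor()
        cur.execute(f"SELECT id FROM hitlist_{user_id} LIMIT %s OFFSET %s",
                    (page_size, page * page_size))
        return cur.fetchall()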

Craig

Re: Suspending SELECTs

From
Tom Lane
Date:
Mark Kirkwood <markir@paradise.net.nz> writes:
> SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;

> Suppose this displays records for ids 10000 -> 10020.
> When the user hits next, the page saves id=10020 in the session state
> and executes:

> SELECT ... FROM table WHERE ... AND id > 10020 ORDER BY id LIMIT 20;

> Clearly you have to be a little careful about whether to use '>' or '>='
> depending on whether 'id' is unique or not (to continue using '>' in the
> non-unique case, you can just save and use all the members of the
> primary key too).

This is actually fairly painful to get right for a multi-column key
at the moment.  It'll be much easier once I finish up the
SQL-spec-row-comparison project.  See this thread for background:
http://archives.postgresql.org/pgsql-performance/2004-07/msg00188.php

            regards, tom lane

Re: Suspending SELECTs

From
Mark Kirkwood
Date:
Tom Lane wrote:
> Mark Kirkwood <markir@paradise.net.nz> writes:
>
>>SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;
>
>
>>Suppose this displays records for ids 10000 -> 10020.
>>When the user hits next, the page saves id=10020 in the session state
>>and executes:
>
>
>>SELECT ... FROM table WHERE ... AND id > 10020 ORDER BY id LIMIT 20;
>
>
>>Clearly you have to be a little careful about whether to use '>' or '>='
>>depending on whether 'id' is unique or not (to continue using '>' in the
>>non-unique case, you can just save and use all the members of the
>>primary key too).
>
>
> This is actually fairly painful to get right for a multi-column key
> at the moment.  It'll be much easier once I finish up the
> SQL-spec-row-comparison project.

Right, I think it was actually an Oracle 7.3 based web app (err...
showing age here...) that I used this technique on.

Cheers

Mark

Re: Suspending SELECTs

From
Alessandro Baretta
Date:
mark@mark.mielke.cc wrote:
> On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
>
>>I understand most of these issues, and expected this kind of reply. Please,
>>allow me to insist that we reason about this problem and try to find a
>>solution. My reason for doing so is that the future software industry is
>>likely to see more and more web applications retrieving data from virtually
>>endless databases, and in such contexts, it is sensible to ask the final
>>client--the web client--to store the "cursor state", because web
>>interaction is intrinsically asynchronous, and you cannot count on users
>>logging out when they're done, releasing resources allocated to them. Think
>>of Google.
>
>
> What is wrong with LIMIT and OFFSET? I assume your results are ordered
> in some manner.

It looks like this is the only possible solution at present--and in the future,
too--but it has a tremendous performance impact on queries returning thousands
of rows.

Alex

--
*********************************************************************
http://www.barettadeit.com/
Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)
<http://www.asxcaml.org/>

The FreerP Project
<http://www.freerp.org/>

Re: Suspending SELECTs

From
Tino Wildenhain
Date:
Alessandro Baretta schrieb:
> mark@mark.mielke.cc wrote:
>
...
>
> It looks like this is the only possible solution at present--and in the
> future, too--but it has a tremendous performance impact on queries
> returning thousands of rows.
>
Well actually one of the better solutions would be persistent cursors
(and therefore connection pooling). I bet this is easier than
fiddling with the problems of offset/limit and inventing even more
complex caching in the application.

Just my 0.02c
++Tino

Re: Suspending SELECTs

From
Alessandro Baretta
Date:
Josh Berkus wrote:
> People:
>
> To follow up further, what Alessandro is talking about is known as a
> "keyset cursor".   Sybase and SQL Server used to support them; I beleive
> that they were strictly read-only and had weird issues with record
> visibility.

I would like to thank everyone for sharing their ideas with me. I democratically
accept the idea that my middleware will have to support the functionality I
would have liked to delegate to PostgreSQL. If I have to implement anything of
this sort--just like Tom--I don't want to spend time on a solution lacking
generality or imposing unacceptable resource requirements under high load. The
keyset-cursor idea is probably the best bet--and BTW, let me specifically thank
Josh for mentioning them.

What I could do relatively easily is instantiate a thread to iteratively scan a
traditional cursor N rows at a time, retrieving only record keys, and finally
send them to the query-cache-manager. The application thread would then scan
through the cursor results by fetching the rows associated with a given "page" of
keys. I would have to keep the full cursor keyset in the application server's
session state, but, hopefully, this is not nearly as bad as storing the entire
recordset.
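
In outline, with invented names, assuming a driver that exposes server-side
cursors (psycopg2's named cursors, say) and an integer key column:

    import threading

    def harvest_keys(conn, key_sql, keyset, batch=1000):
        # Background thread: drain a key-only server-side cursor into the
        # session's keyset cache, N rows at a time.
        cur = conn.cursor('keyset')   # named cursor, held in the server
        cur.execute(key_sql)          # e.g. SELECT id FROM ... ORDER BY ...
        while True:
            rows = cur.fetchmany(batch)
            if not rows:
                break
            keyset.extend(r[0] for r in rows)
        cur.close()
        conn.rollback()               # keys saved; release the transaction

    def page_rows(conn, keyset, page, page_size=20):
        # Application thread: fetch the rows behind one page of keys
        # (in practice on its own pooled connection, not the harvester's).
        keys = keyset[page * page_size:(page + 1) * page_size]
        cur = conn.cursor()
        cur.execute("SELECT * FROM items WHERE id = ANY(%s)", (keys,))
        return cur.fetchall()         # real code would restore keyset order

    # threading.Thread(target=harvest_keys,
    #                  args=(conn, KEY_SQL, keyset)).start()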

Alex



--
*********************************************************************
http://www.barettadeit.com/
Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)
<http://www.asxcaml.org/>

The FreerP Project
<http://www.freerp.org/>

Re: Suspending SELECTs

From
mark@mark.mielke.cc
Date:
On Wed, Jan 18, 2006 at 09:57:50AM +0100, Alessandro Baretta wrote:
> mark@mark.mielke.cc wrote:
> >On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:
> >>I understand most of these issues, and expected this kind of reply.
> >>Please, allow me to insist that we reason about this problem and try to find
> >>a solution. My reason for doing so is that the future software industry
> >>is likely to see more and more web applications retrieving data from
> >>virtually endless databases, and in such contexts, it is sensible to ask
> >>the final client--the web client--to store the "cursor state", because
> >>web interaction is intrinsically asynchronous, and you cannot count on
> >>users logging out when they're done, releasing resources allocated to
> >>them. Think of Google.
> >What is wrong with LIMIT and OFFSET? I assume your results are ordered
> >in some manner.
> It looks like this is the only possible solution at present--and in the
> future, too--but it has a tremendous performance impact on queries
> returning thousands of rows.

In the case of one web user generating one query, I don't see how it would
have a tremendous performance impact on large queries.

You mentioned Google. I don't know how you use Google - but most of the
people I know rarely ever search through the pages. Generally the answer
we want is on the first page. If the ratio of users who search through
multiple pages of results to users who always stop on the first page
is anything significant (better than 2:1?), LIMIT and OFFSET are the
desired approach. Why have complicated magic in an application for a
subset of the users?

If there is to be a change to PostgreSQL to optimize for this case, I
suggest it involve the caching of query plans, executor plans, query
results (materialized views?), LIMIT and OFFSET. If we had all of
this, you would have exactly what you want, while benefiting many
more people than just you. No ugly 'persistent state cursors' or
'import/export cursor state' implementation. People would automatically
benefit, without changing their applications.

Cheers,
mark

--
mark@mielke.cc / markm@ncf.ca / markm@nortel.com     __________________________
.  .  _  ._  . .   .__    .  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/    |_     |\/|  |  |_  |   |/  |_   |
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
                       and in the darkness bind them...

                           http://mark.mielke.cc/


Re: Suspending SELECTs

From
Alessandro Baretta
Date:
mark@mark.mielke.cc wrote:
> On Wed, Jan 18, 2006 at 09:57:50AM +0100, Alessandro Baretta wrote:
>
> If there is to be a change to PostgreSQL to optimize for this case, I
> suggest it involve the caching of query plans, executor plans, query
> results (materialized views?), LIMIT and OFFSET. If we had all of
> this, you would have exactly what you want, while benefiting many
> more people than just you. No ugly 'persistent state cursors' or
> 'import/export cursor state' implementation. People would automatically
> benefit, without changing their applications.

Actually, many of the features you mention (caching executor plans--that is,
imposing the use of prepared queries--caching query results, and materializing
views) I have already implemented in my "middleware". Somehow, apparently, my
intuition on the problem of determining what ought to be delegated to the DBMS
and what to the "middleware" is the opposite of that of most people on this
list. As I mentioned earlier, I democratically accept the position of the
majority--and monarchically I accept Tom's. And, scientifically, I have taken
responsibility for proving myself wrong: I have stated my assumptions,
formulated the hypothesis, designed an experiment capable of disproving it,
and collected the data. Here are the steps and the results of the experiment.

Assumptions: Google defines the "best current practices" in web applications.

Hypothesis: The "best practice" for returning large data sets is to devise an
algorithm (say, persistent cursors, for example) allowing subdivision of the
recordset into pages of a fixed maximum size, in such a way that sequentially
fetching pages of records requires the system to compute each record only once.

Experiment: Given the stated assumption, record the time taken by Google to
retrieve a sequence of pages of results for the same query string. Repeat the
experiment with different settings. Notice that Google actually displays on
the results page the time taken to process the request.

Results: I'll omit the numerical data, which everyone can easily obtain in
only a few minutes by repeating the experiment. I used several query strings
containing very common words ("linux debian", "linux kernel", "linux tux"),
each yielding millions of results. I set Google to retrieve 100 results per
page. Then I ran the query and paged through the data set. The obvious result
is that execution time is a monotonically increasing function of the page
number. This clearly indicates that Google does not use any algorithm of the
proposed kind, but rather an OFFSET/LIMIT strategy, thus disproving the
hypothesis.

It must also be noted that Google refuses to return more than 1000 results
per query, indicating that the strategy they adopted quite apparently cannot
scale indefinitely: on a query returning a potentially flooding dataset, a
user paging through the data would experience a slowdown linear in the number
of pages already fetched, and the DBMS workload would also be linear in the
number of fetched pages.

I do not like this approach, but the fact that Google came up with no better
solution is a clear indication that Tom et al. are more than correct.

Alex

--
*********************************************************************
http://www.barettadeit.com/
Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)
<http://www.asxcaml.org/>

The FreerP Project
<http://www.freerp.org/>

Re: Suspending SELECTs

From
Harry Jackson
Date:
Your experiment made far too many assumptions and the data does not
stand up to scrutiny.

On 1/18/06, Alessandro Baretta <a.baretta@barettadeit.com> wrote:
> Results: I'll omit the numerical data, which everyone can easily obtain in
> only a few minutes by repeating the experiment. I used several query strings
> containing very common words ("linux debian", "linux kernel", "linux tux"),
> each yielding millions of results. I set Google to retrieve 100 results per
> page. Then I ran the query and paged through the data set. The obvious result
> is that execution time is a monotonically increasing function of the page
> number. This clearly indicates that Google does not use any algorithm of the
> proposed kind, but rather an OFFSET/LIMIT strategy, thus disproving the
> hypothesis.

I just ran the same test and I got a different outcome than you. The
last page came back twice as fast as page 4. I noticed no trend in the
speed of the results from each page.

Of course it is probably in cache because it's such a common thing to
search on, so the experiment is pointless.

You cannot jump to conclusions based on a few searches on Google.

> It must also be noted that Google refuses to return more than 1000 results
> per query, indicating that the strategy they adopted quite apparently cannot
> scale indefinitely: on a query returning a potentially flooding dataset, a
> user paging through the data would experience a slowdown linear in the number
> of pages already fetched, and the DBMS workload would also be linear in the
> number of fetched pages.

There are various reasons why Google might want to limit the search
results returned, e.g. to encourage people to narrow their search, or to
prevent screen scrapers from hitting them really hard. Perhaps less
than 0.00000001% of real users (not scrapers) actually dig down to the
10th page, so what's the point?

There are numerous methods that you can use to serve separate result
pages, some of which involve going back to the database and some of which
don't. I prefer not to go back to the database if I can avoid it, and if
all you want to do is provide a few links to further pages of results,
then going back to the database and using offsets is a waste of IO.

--
Harry
http://www.hjackson.org
http://www.uklug.co.uk

Re: Suspending SELECTs

From
mark@mark.mielke.cc
Date:
On Wed, Jan 18, 2006 at 03:41:57PM +0000, Harry Jackson wrote:
> There are various reasons why Google might want to limit the search
> results returned, e.g. to encourage people to narrow their search, or to
> prevent screen scrapers from hitting them really hard. Perhaps less
> than 0.00000001% of real users (not scrapers) actually dig down to the
> 10th page, so what's the point?

I recall a day when Google crashed, apparently due to a Windows virus
that would use Google to obtain email addresses.

As an unsubstantiated theory - this may have involved many, many clients,
all accessing search page results beyond the first page.

I don't see Google optimizing for the multiple-page scenario. Most
people (as I think you agree above) are happy with the first or
second page, and then they are gone. Keeping a cursor for these people as
anything more than an offset into the search criteria would not be
useful.

Cheers,

--
mark@mielke.cc / markm@ncf.ca / markm@nortel.com     __________________________
.  .  _  ._  . .   .__    .  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/    |_     |\/|  |  |_  |   |/  |_   |
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
                       and in the darkness bind them...

                           http://mark.mielke.cc/


Re: Suspending SELECTs

From
August Zajonc
Date:
Alessandro Baretta wrote:
>
> What I could do relatively easily is instantiate a thread to iteratively
> scan a traditional cursor N rows at a time, retrieving only record keys,
> and finally send them to the query-cache-manager. The application thread
> would then scan through the cursor results by fetching the rows
> associated with a given "page" of keys. I would have to keep the full
> cursor keyset in the application server's session state, but, hopefully,
> this is not nearly as bad as storing the entire recordset.
>
> Alex
>
>
>

Alessandro,

I've very much enjoyed reading your thoughts on the problem you're facing
and everyone's responses.

Since you control the middle layer, could you not use a cookie to keep a
cursor open on the middle layer and tie into it on subsequent queries?

If you are concerned with too many connections open, you could time out
the sessions quickly and recreate the cursor if someone came back. If
they waited 5 minutes to make the next query, certainly they could wait
a few extra seconds to offset and reacquire a cursor?

The hitlist idea was also a nice one if the size of the data returned is
not overwhelming and does not need to track the underlying db at all
(ie, no refresh).

Mark had a number of good general suggestions, though, and I'd like to
echo the materialized views as an option that I could see a lot of uses
for (and have worked around in the past with SELECT INTOs and the like).

Interesting stuff.

- August

Re: Suspending SELECTs

From
Alessandro Baretta
Date:
August Zajonc wrote:
> Alessandro Baretta wrote:
>
> Alessandro,
>
> I've very much enjoyed reading your thoughts and the problem your facing
> and everyone's responses.

Thank you for your interest, August.

> Since you control the middle layer, could you not use a cookie to keep a
> cursor open on the middle layer and tie into it on subsequent queries?

I do. The AS/Xcaml calls it a "session key". It is usually passed in a cookie
for websites, and elsewhere--query string or URL--for Intranet/Extranet web
applications. The session information kept by the AS/Xcaml virtual machine
includes cached query results and state information for servlets. I
could--although not terribly easily--also use the Xcaml session manager to
handle allocation of DB connections from the pool, thus allocating one
connection per active session. The session garbage collector would then also
have to manage the recycling of stale DB connections.

> If you are concerned with too many connections open, you could time out
> the sessions quickly and recreate the cursor if someone came back. If
> they waited 5 minutes to make the next query, certainly they could wait
> a few extra seconds to offset and reacquire a cursor?

Yes, I could. Actually, there are quite a few means of handling this issue.
The possible strategies for garbage-collecting resources allocated to a remote
peer are collectively called "failure-detection algorithms" in the theory of
distributed computing. In most cases an "eventually weak failure detector" is
necessary and sufficient to guarantee a number of desirable properties in
asynchronous systems: termination of execution, bounded open connections, and
others.

Yet, it is a theorem that no asynchronous algorithm can be devised to implement
an eventually weak failure detector. This, in turn, implies that no distributed
asynchronous system--i.e. a web application--possesses the above-mentioned
desirable properties. Hence, from a theoretical standpoint, we can either choose
to relax the axioms of the system, allowing synchronicity--a failure detector
based on a timeout explicitly requires the notion of time--or, as I would
prefer, eliminate the need for termination of execution--i.e. explicit
client logout--and bounded open connections, by delegating to the client the
responsibility of maintaining all relevant state information. Under these
assumptions we can live happily in a perfectly asynchronous stateless world like
that of HTTP.

Now, neither of the two solutions is perfect. In an Intranet/Extranet context, I
want to store a significant amount of state information server side, including
cached query results, thus entailing the need for a synchronous failure
detector--let's call it an "implicit logout detector"--to garbage-collect the
stale session files generated by the application. In an open website--no
login--I do not usually want to use sessions, so I prefer to implement the
application so that all relevant state information is maintained by the client.
This model is perfect until we reach the issue of "suspending SELECTs", that is,
limiting the cardinality of the record set to a configurable "page size" and
allowing the user to page through a vast query result.


> The hitlist idea was also a nice one if the size of the data returned is
> not overwhelming and does not need to track the underlying db at all
> (ie, no refresh).

In an open website, immediate refresh is not critical, so long as I can
guarantee some decent property of data consistency. Full consistency cannot be
achieved, as Tom pointed out. I cordially disagree with Tom on the commandment
that "Thou shalt have no property of consistency other than me". Just as we
have two different transaction isolation levels, guaranteeing different degrees
of ACIDity, we could conceivably wish to formalize a weaker notion of
consistency and implement functionality to match it, which would not be
possible under the stronger definition.

> Mark had a number of good general suggestions, though, and I'd like to
> echo the materialized views as an option that I could see a lot of uses
> for (and have worked around in the past with SELECT INTOs and the like).

I already use materialized views. The database design layer of the AS/Xcaml
allows the definition of fragmented materialized views: the view is split into
fragments, that is, equivalence classes of the record set with respect to the
projection of the view signature onto a (small) subset of its columns. Yet,
this actually makes the original problem worse, for materialized view fragments
must be garbage-collected at some point, thus presenting much the same
conceptual difficulties as a cursor-pooling strategy.

Alex


--
*********************************************************************
http://www.barettadeit.com/
Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)
<http://www.asxcaml.org/>

The FreerP Project
<http://www.freerp.org/>