Discussion: how to deal with sparse/to-be populated tables


how to deal with sparse/to-be populated tables

From:
Alfred Perlstein
Date:
We were having some trouble doing updates to our database,
a lot of our database sort of works like this:


dbfunc(data)
        somedatatype    *data;
{
        somedatatype    *existing_row;

        existing_row = exists_in_table(data);

        if (existing_row != NULL) {
                update_table(existing_row, count = count + data->count)
        } else
                insert_into_table(data);

}

Is there anything built into postgresql to accomplish this without
the "double" work that goes on here?

something like:
  update_row_but_insert_if_it_doesn't_exist(data,
    update = 'count = count + data->count');

Meaning, if a row matching the 'new' data exists, update it, otherwise
store our new data as a new record?

It seems like the database has to do an awful lot of extra work
for our application because we haven't figured out how to do this
efficiently.

Any pointers?

thanks,
-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]


Re: [HACKERS] how to deal with sparse/to-be populated tables

From:
Chris Bitmead
Date:
The thing is, in the relational model there isn't a standard
definition of "already exists". For example, when you say
"already exists", I presume you mean that a record with the
same primary key already exists. But not all tables have
primary keys.

There are two things you can do...

1) remember if a record came out of the database in the first place
with a flag. This is what an object database would do.

2) If there is a unique index, instead of checking
whether the record exists with exists_in_table,
attempt to update the record. If you get a database error, THEN
do an insert. This is a common programming technique, often used
with unix system calls: try one option and if it errors try the other.
Don't try to predict yourself whether an error will occur. This
will save one or two database calls depending on whether the record
exists or not (a sketch of the idea follows below).
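
A minimal SQL sketch of this try-and-fall-back idea, using a
hypothetical counts(key, count) table (none of these names come from
the thread).  The error-driven direction is insert-first, since it is
the duplicate key that actually raises an error:

    -- one row per key, with a running count; the unique index is what
    -- makes a duplicate INSERT raise an error instead of silently
    -- adding a second row for the same key
    CREATE TABLE counts (
        key     text    NOT NULL,
        count   integer NOT NULL DEFAULT 0
    );
    CREATE UNIQUE INDEX counts_key_idx ON counts (key);

    -- the client tries the insert first ...
    INSERT INTO counts (key, count) VALUES ('abc', 5);

    -- ... and only if that fails with a duplicate-key error does it
    -- issue the update instead
    UPDATE counts SET count = count + 5 WHERE key = 'abc';

Either order works; which one saves a round trip depends on whether
the key usually exists already.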

Alfred Perlstein wrote:
> 
> We were having some trouble doing updates to our database,
> a lot of our database sort of works like this:
> 
> dbfunc(data)
>         somedatatype    *data;
> {
>         somedatatype    *existing_row;
> 
>         existing_row = exists_in_table(data);
> 
>         if (existing_row != NULL) {
>                 update_table(existing_row, count = count + data->count)
>         } else
>                 insert_into_table(data);
> 
> }
> 
> Is there anything built into postgresql to accomplish this without
> the "double" work that goes on here?
> 
> something like:
>   update_row_but_insert_if_it_doesn't_exist(data,
>     update = 'count = count + data->count');
> 
> Meaning, if a row matching the 'new' data exists, update it, otherwise
> store our new data as a new record?
> 
> It seems like the database has to do an awful amount of extra work
> for our application because we haven't figured out how to do this
> effeciently.
> 
> Any pointers?
> 
> thanks,
> --
> -Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
> 
> ************


Re: [HACKERS] how to deal with sparse/to-be populated tables

From:
Alfred Perlstein
Date:
* Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> [000203 16:32] wrote:
> The thing is, in the relational model there isn't a standard
> defininition of "already exists". For example, when you say
> "already exists", I presume you mean that a record with the
> same primary key already exists. But not all tables have
> primary keys.

I could adapt the tables to use this particular field as a primary
key, but see my questions below about interpreting errors in response
to suggestion #2.

> There are two things you can do...
> 
> 1) remember if a record came out of the database in the first place
> with a flag. This is what an object database would do.

You mean implement an LRU cache outside the database?  I've thought about
this and could actually do it; the thing that bugs me about it is
that I'm essentially trying to outsmart a 10+ year (guessing) old
piece of software with something that I'd have to hack up in a
matter of days.

> 2) If there is a unique index, instead of checking 
> whether the record exists with exists_in_table,
> attempt to update the record. If you get a database error, THEN
> do an insert. This is a common programming technique, often used
> with unix system calls. Try one option and if error try the other.
> Don't try to predict yourself whether an error will occur. This
> will save 1 or two database calls depending on whether it exists
> or not.

This is what I was thinking; the problem then becomes that I'm
not aware of a way to determine the error with
some degree of accuracy, so that I don't mistake:
 insert error because of duplication
with:
 insert error because of database connectivity (or other factors)

Is it possible to do that?  I guess I could parse the error response
from the backend, but maybe there's an easier/more-correct way?

-Alfred


> 
> Alfred Perlstein wrote:
> > 
> > We were having some trouble doing updates to our database,
> > a lot of our database sort of works like this:
> > 
> > dbfunc(data)
> >         somedatatype    *data;
> > {
> >         somedatatype    *existing_row;
> > 
> >         existing_row = exists_in_table(data);
> > 
> >         if (existing_row != NULL) {
> >                 update_table(existing_row, count = count + data->count)
> >         } else
> >                 insert_into_table(data);
> > 
> > }
> > 
> > Is there anything built into postgresql to accomplish this without
> > the "double" work that goes on here?
> > 
> > something like:
> >   update_row_but_insert_if_it_doesn't_exist(data,
> >     update = 'count = count + data->count');
> > 
> > Meaning, if a row matching the 'new' data exists, update it, otherwise
> > store our new data as a new record?
> > 
> > It seems like the database has to do an awful amount of extra work
> > for our application because we haven't figured out how to do this
> > effeciently.
> > 
> > Any pointers?
> > 
> > thanks,
> > --
> > -Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
> > 
> > ************

-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]


Re: [HACKERS] how to deal with sparse/to-be populated tables

From:
Chris Bitmead
Date:
Alfred Perlstein wrote:
> > There are two things you can do...
> >
> > 1) remember if a record came out of the database in the first place
> > with a flag. This is what an object database would do.
> 
> You mean implement an LRU cache outside the database, I've thought about
> this and could actually do it, the thing that bugs me about it is
> that i'm essentially trying to outsmart a 10+ year (guessing) old
> piece of software with something that I'd have to hack up in a
> matter of days.

Well, you only gave a small code snippet, I don't know how your app
works.

But often you retrieve tuples from the database and populate a C struct
or something...

struct Person {
        char *firstname;
        char *lastname;
};

What I'm saying is, if you are already doing something like this, then
just add one more boolean to say if it is a new or existing Person.
If you are not doing anything like this currently then it's not an
option.

Alternatively wait for my ODBMS features :-)

> This is what I was thinking, the problem then becomes that I'm
> not aware of way to determine the error with
> some degree of accuracy so that I don't mistake:
>  insert error because of duplication
> with:
>  insert error because of database connectivity (or other factors)
> 
> Is it possible to do that?  I guess I could parse the error responce
> from the backend, but maybe there's an easier/more-correct way?

Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think
it does, I've never seen one. There should be a way to get error
codes without comparing strings. Should this be on the TODO?


Re: [HACKERS] how to deal with sparse/to-be populated tables

From:
Tom Lane
Date:
Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think
> it does, I've never seen one. There should be a way to get error
> codes without comparing strings. Should this be on the TODO?

It doesn't, there should, and it already is ;-)

In the meantime, looking at the error message string is Alfred's
only option for distinguishing duplicate-record from other errors,
I'm afraid.

A partial answer to his performance concern is to use a rule
(or possibly a trigger) on the database side to reinterpret
"insert into table X" as "either insert or update in table Y,
depending on whether the key is already there".  This wouldn't
buy anything in terms of database cycles, but it would avoid two
rounds of client-to-backend communication and query parsing.

I've never done that myself, but perhaps someone else on the
list has a working example.
        regards, tom lane
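
A minimal sketch of the trigger variant of this, against the
hypothetical counts(key, count) table used earlier (the table, column,
and function names are not from the thread).  A BEFORE INSERT trigger
tries the update itself; if a row was touched it returns NULL so the
original insert is suppressed, otherwise it lets the insert through:

    CREATE FUNCTION counts_merge() RETURNS trigger AS $$
    BEGIN
        -- fold the incoming count into an existing row, if any
        UPDATE counts SET count = count + NEW.count WHERE key = NEW.key;
        IF FOUND THEN
            RETURN NULL;    -- row existed and was updated: skip the INSERT
        END IF;
        RETURN NEW;         -- no such row yet: let the INSERT proceed
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER counts_merge_trig
        BEFORE INSERT ON counts
        FOR EACH ROW EXECUTE PROCEDURE counts_merge();

The application then just issues plain INSERTs.  The unique index on
key still matters: two concurrent inserts of a brand-new key can both
miss in the UPDATE, and the index is what makes the second one fail
instead of creating a duplicate row.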


RE: [HACKERS] how to deal with sparse/to-be populated tables

From:
"Hiroshi Inoue"
Date:
> -----Original Message-----
> From: owner-pgsql-hackers@postgreSQL.org
> [mailto:owner-pgsql-hackers@postgreSQL.org]On Behalf Of Tom Lane
> 
> Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> > Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think
> > it does, I've never seen one. There should be a way to get error
> > codes without comparing strings. Should this be on the TODO?
> 
> It doesn't, there should, and it already is ;-)
> 

Doesn't the following TODO imply it?

* Allow elog() to return error codes, not just messages

Many people have complained about it.
However, it doesn't seem effective without statement-level
rollback.  AFAIK, Vadim has planned it together with savepoint
functionality.

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp


Re: [HACKERS] how to deal with sparse/to-be populated tables

From:
Alfred Perlstein
Date:
* Tom Lane <tgl@sss.pgh.pa.us> [000203 20:58] wrote:
> Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> > Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think
> > it does, I've never seen one. There should be a way to get error
> > codes without comparing strings. Should this be on the TODO?
> 
> It doesn't, there should, and it already is ;-)
> 
> In the meantime, looking at the error message string is Alfred's
> only option for distinguishing duplicate-record from other errors,
> I'm afraid.
> 
> A partial answer to his performance concern is to use a rule
> (or possibly a trigger) on the database side to reinterpret
> "insert into table X" as "either insert or update in table Y,
> depending on whether the key is already there".  This wouldn't
> buy anything in terms of database cycles, but it would avoid two
> rounds of client-to-backend communication and query parsing.
> 
> I've never done that myself, but perhaps someone else on the
> list has a working example.

Actually we have some plpgsql code lying around that does this.
The issue isn't ease of implementation, but actually the speed of
the implementation.  Even parsing the error return isn't as optimal
as an insert_new|update_existing_with_args single op would be.

One of the more frustrating aspects is that we could use the field
that we merge rows on as a primary index; this would allow us to
do an insert, or an update on a failed insert...

however... if we fail to locate the row on the initial query (to
see if it exists) we pay a large penalty because the insert must
be validated to be unique.  This effectively doubles the search.
This is also a problem if we do "update or insert on fail", basically
a double scan is required.

(yes, I just thought about only indexing, and trying the update
first and only on failure doing an insert, however we really can't
determine if the initial update failed because no record matched (ok),
or possibly some other error (ouch))

That's why we can't use this field as a primary index, even though
it is supposed to be unique.

Basically the database seems to force a _double_ lookup; the only
way I see around this is to switch over to a bulk copy, getting
around the double lookup.  However, this will only work for our
special case where there is only a single reader/writer updating
the table at any time; otherwise we need special locking to avoid
races.

Even if this isn't a TODO item, if there's a wish list out there
it'd be nice to see this feature request listed.

I think once the dust settles over here and the need to scale goes
from very scalable to insanely scalable I'm going to have an even
greater interest in learning postgresql internals. :)

thanks,
-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]


Re: [HACKERS] how to deal with sparse/to-be populated tables

From:
Alfred Perlstein
Date:
* Hiroshi Inoue <Inoue@tpf.co.jp> [000203 21:34] wrote:
> > -----Original Message-----
> > From: owner-pgsql-hackers@postgreSQL.org
> > [mailto:owner-pgsql-hackers@postgreSQL.org]On Behalf Of Tom Lane
> > 
> > Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> > > Hmm. Doesn't PostgreSQL have a big list of error codes? I don't think
> > > it does, I've never seen one. There should be a way to get error
> > > codes without comparing strings. Should this be on the TODO?
> > 
> > It doesn't, there should, and it already is ;-)
> > 
> 
> Doens't the following TODO imply it ?
> 
> * Allow elog() to return error codes, not just messages
> 
> Many people have complained about it.
> However,it seems not effective without a functionality of statement
> level rollback.  AFAIK,Vadim has planed it together with savepoint
> functionality.

It would help, but it wouldn't avoid the double searches I seem
to need to do to maintain a unique index.

-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]


Time travel

From:
Chris Bitmead
Date:
From time to time the old Time Travel postgres functionality is
mentioned.  When it is mentioned, somebody usually says "Yeah well
you can implement it just as well with triggers, therefore it's
redundant", and the doco says "New features such as triggers allow
one to mimic the behavior of time travel when desired, without
incurring the overhead when it is not needed (for most users, this
is most of the time)."

This seems to fail to take into account the original design which was
to take advantage of a different style of storage manager, that doesn't
have an undo log. Unless I'm missing something, postgres is indeed still
"incurring the overhead" of time travel, but losing the feature.

In fact, if you have fsync turned on for full safety, the postgres
performance is going to be bad compared to a regular design
storage manager.

On the other hand the postgres storage manager had the advantage of time
travel because it does not update in place.

Now the documentation mentions removing time travel because of
"performance impact, storage size, and a pg_time file which
grows toward infinite size in a short period of time".

Now since I believe the postgres storage manager does not replace
records in place when updated, I can't see how it is different to
having the time travel feature with vacuum configured to remove
all old records immediately. I don't know what the pg_time file
is.

Have I missed something about why taking out time travel has
improved performance, as opposed to simply making immediate
vacuum the default? Clearly the performance of triggers as an
alternative is going to suck very badly, since the postgres
storage manager was built specially from the ground up to
support time travel with its non-update semantics, and it
still has these characteristics.


Re: [HACKERS] how to deal with sparse/to-be populated tables

From:
Tom Lane
Date:
Alfred Perlstein <bright@wintelcom.net> writes:
> (yes, I just thought about only indexing, and trying the update
> first and only on failure doing an insert, however we really can't
> determine if the initial update failed because no record matched(ok),
> or possible some other error (ouch))

Uh ... why not?  "UPDATE 0" is a perfectly recognizable result
signature, it seems like.  (I forget just how that looks at the
libpq API level, but if psql can see it so can you.)

Alternatively, if you think the insert is more likely to be the
right thing, try it first and look to see if you get a "can't
insert duplicate key into unique index" error.

You're right that SQL provides no combination statement that would
allow these sequences to be done with only one index probe.  But
FWIW, I'd think that the amount of wasted I/O would be pretty minimal;
the relevant index pages should still be in the buffer cache when
the second query gets to the backend.
        regards, tom lane
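
To make the "UPDATE 0" point concrete, a short sketch against the
hypothetical counts table from earlier (at the libpq level the
affected-row count is available through PQcmdTuples() on the query
result):

    -- an UPDATE that matches no row is not an error; psql just reports
    -- the command tag "UPDATE 0"
    UPDATE counts SET count = count + 5 WHERE key = 'new-key';
    -- UPDATE 0

    -- only when that happens does the client issue the insert instead
    INSERT INTO counts (key, count) VALUES ('new-key', 5);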


Re: [HACKERS] how to deal with sparse/to-be populated tables

From:
Karl DeBisschop
Date:
>This is what I was thinking, the problem then becomes that I'm
>not aware of way to determine the error with
>some degree of accuracy so that I don't mistake:
> insert error because of duplication
>with:
> insert error because of database connectivity (or other factors)
>
>Is it possible to do that?  I guess I could parse the error responce
>from the backend, but maybe there's an easier/more-correct way?

Not sure what interface you are using.  But for example, Perl will
easily tell the difference.

========================================================================
execute

    $rv = $sth->execute                || die $sth->errstr;
    $rv = $sth->execute(@bind_values)  || die $sth->errstr;

  Perform whatever processing is necessary to execute
  the prepared statement.  An undef is returned if an
  error occurs; a successful execute always returns true
  regardless of the number of rows affected (even if
  it's zero, see below).  It is always important to check
  the return status of execute (and most other DBI
  methods) for errors.

  For a non-select statement, execute returns the number
  of rows affected (if known).  If no rows were affected
  then execute returns "0E0" which Perl will treat as 0
  but will regard as true.  Note that it is not an error
  for no rows to be affected by a statement.  If the
  number of rows affected is not known then execute
  returns -1.
========================================================================

which means the return value will be 0 if the insert is blocked, but
undef if there is a connectivity error.

In other words, failing to insert where a unique index prevents the
insertion is not an error.

PHP is similar.

One trick is to insert all tuples into a temporary table.  Then do an
update using the natural join.  Then do the insert from that same
table.

If you can use a copy to create the temporary table, I think your
performance will be best.

Typically I would index the primary key of the temp table so that the
join proceeds well, but you may want to bench yourself with and
without the index.  I don't think it's needed in the case you
describe.
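
A minimal sketch of this temporary-table approach, again using the
hypothetical counts(key, count) table and a made-up file name; it
assumes nothing else writes to counts between the UPDATE and the
INSERT:

    CREATE TEMP TABLE incoming (key text, count integer);
    COPY incoming FROM '/tmp/batch.dat';  -- or COPY ... FROM STDIN from the client

    -- fold counts for keys that already exist into the main table
    UPDATE counts
       SET count = counts.count + incoming.count
      FROM incoming
     WHERE counts.key = incoming.key;

    -- then add only the keys that are not there yet
    INSERT INTO counts (key, count)
    SELECT incoming.key, incoming.count
      FROM incoming
     WHERE NOT EXISTS (SELECT 1 FROM counts WHERE counts.key = incoming.key);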

-- 
Karl DeBisschop <kdebisschop@alert.infoplease.com>
617.832.0332 (Fax: 617.956.2696)

Information Please - your source for FREE online reference
http://www.infoplease.com  - Your Ultimate Fact Finder
http://kids.infoplease.com - The Great Homework Helper

Netsaint Plugins Development
http://netsaintplug.sourceforge.net


Re: [HACKERS] Time travel

From:
Hannu Krosing
Date:
Chris Bitmead wrote:
> 
> Now in the documentation it mentioned removing time travel because
> "performance impact, storage size, and a pg_time file which
> grows toward infinite size in a short period of time.".

At the time this was written, a 200MB disk was a big disk.

> Now since I believe the postgres storage manager does not replace
> records in place when updated,

Yes, it's true at least for 6.5.3 (I've written a small script that
extracts the old/hidden tuples) and I'm pretty sure it is for 7.x too.
Perhaps it is the removal of pg_time (which I think recorded correspondence
between transaction ids and timestamps) that gives the big performance win.

> I can't see how it is different to
> having the time travel feature with vacuum configured to remove
> all old records immediately. I don't know what the pg_time file
> is.

I guess it could be just an append-only, monotonously growing 'tape'-type file,
suitable for being searched using binary search.  So really not nearly as
much overhead as a regular pg table with two indexes would have.

> Have I missed something about why taking out time travel has
> improved performance, as opposed to simply making immediate
> vacuum the default? Clearly the performance of triggers as an
> alternative is going to suck very badly, since the postgres
> storage manager was built specially from the ground up to
> support time travel with its non-update semantics, and it
> still has these characteristics.

Implementing time-travel with triggers will actually give us double
time-travel, one hidden and one visible ;)