Discussion: disabling autocommit


disabling autocommit

From:
"Matt Van Mater"
Date:
I'm looking to get a little more performance out of my database, and saw in
the docs a section about disabling autocommit by using the BEGIN and COMMIT
keywords.

My problem is this: I enforce unique rows for all data, and occasionally
there is an error where I try to insert a duplicate entry.  I expect to see
these duplicate entries and depend on the DB to enforce the row uniqueness.
When I just run the insert statements without the BEGIN and COMMIT keywords,
only that single insert fails, but if I disable autocommit then all the
inserts fail because of one error.
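
In other words, with a batch like this (table and values are just for
illustration):

  BEGIN;
  INSERT INTO hosts VALUES ('10.0.0.1');
  INSERT INTO hosts VALUES ('10.0.0.1');  -- duplicate: raises a unique violation
  INSERT INTO hosts VALUES ('10.0.0.2');  -- rejected too, transaction is now aborted
  COMMIT;                                 -- effectively rolls back

one duplicate costs me the whole batch.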

As a test I ran about 1000 identical inserts with autocommit on and again
with it off.  I get roughly a 33% speed increase with autocommit off, so
it's definitely a good thing.  The problem is that if I have to parse the
insert statements myself to ensure there are no duplicates, I feel I would
lose the advantage that disabling autocommit gives me and simply spend the
CPU cycles somewhere else.

Is there a way for me to say 'only commit the successful commands and ignore
the unsuccessful ones'?  I know all-or-nothing is the point of this type of
transaction/rollback statement, but I was curious whether there is a way
around it.

Matt



Re: disabling autocommit

From:
"Michael Kleiser"
Date:
I posted a similar question some time ago.  Someone answered that the next
PostgreSQL release would make it possible to use nested transactions.
That is now implemented in version 8.0-beta.

You can set a savepoint before the insert that may fail:
http://developer.postgresql.org/docs/postgres/sql-savepoint.html
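
A minimal sketch (table name is made up):

  BEGIN;
  SAVEPOINT before_insert;
  INSERT INTO mytable VALUES (1);       -- suppose this hits a unique violation
  ROLLBACK TO SAVEPOINT before_insert;  -- transaction is usable again
  INSERT INTO mytable VALUES (2);       -- this one still gets committed
  COMMIT;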

The savepoint probably also needs a round trip between client and server.
If so, that can decrease performance.  You can avoid it by putting the
savepoint, the insert, and the rollback-to-savepoint into a stored
procedure.
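
In PL/pgSQL you cannot issue SAVEPOINT directly, but 8.0's exception
blocks are built on the same subtransaction machinery, so something like
this should do it (untested sketch, names made up):

  CREATE OR REPLACE FUNCTION insert_ignore_dup(v integer) RETURNS void AS $$
  BEGIN
      INSERT INTO mytable VALUES (v);
  EXCEPTION WHEN unique_violation THEN
      NULL;  -- duplicate row: swallow the error and carry on
  END;
  $$ LANGUAGE plpgsql;

The batch then becomes a series of SELECT insert_ignore_dup(...) calls
inside a single BEGIN/COMMIT.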




Re: disabling autocommit

From:
Vivek Khera
Date:
>>>>> "MVM" == Matt Van Mater <nutter_@hotmail.com> writes:

MVM> My problem is this: I enforce unique rows for all data, and
MVM> occasionally there is an error where I try to insert a duplicate
MVM> entry.  I expect to see these duplicate entries and depend on the DB
MVM> to enforce the row uniqueness.  When I just run the insert statements
MVM> without the BEGIN and COMMIT keywords, only that single insert fails,
MVM> but if I disable autocommit then all the inserts fail because of one
MVM> error.

I have a situation like this, but with foreign key dependencies.
Occasionally someone will perform some action that is tracked by my
system, but through a very old, stale link that no longer has an
associated record in the database, so I should skip logging that action.

What I do is make it opportunistic.  First I try to insert my batch of
log records within a transaction.  If the transaction fails on an FK
violation, I run the same batch again, but this time with a SELECT before
each INSERT to ensure the FK violation won't occur.

Only in something like 1 out of 200 batches do I need to retry with the
explicit integrity checks turned on.

However, if duplicates are your expected norm, try just doing the SELECT
prior to the INSERT every time, as sketched below.
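
The check can even be folded into the INSERT itself (table and column
names are made up):

  -- insert the log row only if the referenced user still exists
  INSERT INTO log (user_id, action)
  SELECT 42, 'click'
  WHERE EXISTS (SELECT 1 FROM users WHERE id = 42);

There is still a window where the parent row could vanish before the
INSERT lands, so the opportunistic retry is worth keeping.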

In PG 8.0, I expect to be able to deal with this using nested
transactions.


--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD  +1-301-869-4449 x806
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/