Discussion: Transaction Exception Question


Transaction Exception Question

From
Jon Swinth
Date:
A while back, I ran into a problem that turned out to be intentional
behavior in PostgreSQL.  In a long-running transaction (or any transaction,
for that matter), if an exception is thrown then you have no choice but to
roll back that transaction.  Can someone point me in the right direction in
finding out why this is?  It has bitten me a few times and will limit
PostgreSQL's ability to work in a high-volume operation.

Re: Transaction Exception Question

From
"scott.marlowe"
Date:
On Tue, 13 Aug 2002, Jon Swinth wrote:

> A while back, I ran into a problem that turned out to be intentional
> behavior in PostgreSQL.  In a long-running transaction (or any transaction,
> for that matter), if an exception is thrown then you have no choice but to
> roll back that transaction.  Can someone point me in the right direction in
> finding out why this is?  It has bitten me a few times and will limit
> PostgreSQL's ability to work in a high-volume operation.

Seeing as how the purpose of a transaction is to ensure that either all
the changes to the database are made or none are made, I'm not sure what
should change about this behaviour.

Or were you looking for things like commit / rollback segments?  In
general, instead of using commit / rollback segments, I just do a begin /
end pair around each set of data for which I would have used a commit /
rollback segment.
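
That is, something like this (table names invented for illustration):

    BEGIN;
    -- one unit of work that must succeed or fail as a whole
    UPDATE stock SET qty = qty - 1 WHERE product_id = 42;
    INSERT INTO stock_log (product_id, delta) VALUES (42, -1);
    COMMIT;

    BEGIN;
    -- the next, independent unit of work gets its own transaction
    UPDATE stock SET qty = qty - 1 WHERE product_id = 43;
    INSERT INTO stock_log (product_id, delta) VALUES (43, -1);
    COMMIT;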

Sometimes I think postgresql's tendency to get pedantic about which errors
cause an auto abort is a little bothersome (e.g. an error thrown by a
select or set statement will abort the current transaction), but for
update/delete/insert commands, any single error SHOULD cause the whole
transaction to abort, thus ensuring transactional integrity.


Re: Transaction Exception Question

From
Jon Swinth
Date:
Thanks Scott for your reply.

I don't agree that an insert/update/delete error should automatically abort
the transaction.  You have not allowed for the fact that the error may be
handled.  I will give you an example that makes my case.

Let's say you have an inventory table.  The inventory table has a primary key
of an integer and a unique key of location and product.  The unique key makes
sure that there is only one record for each product against a single
location.  Now imagine that you have a high-volume database with many clients,
and you have a process that attempts to put quantity of a product into a
location.  That process would first select to see if the record already
existed, so it could be updated, and then insert a row when one wasn't found.
Now imagine that this is just part of a long-running transaction and that
multiple clients will want to put more of the same product in the same
location.
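
In rough SQL, the per-location step looks something like this (names
simplified; :loc, :prod, and :added stand in for the client's values):

    SELECT qty FROM inventory
     WHERE location_id = :loc AND product_id = :prod;

    -- if a row came back, add to it:
    UPDATE inventory SET qty = qty + :added
     WHERE location_id = :loc AND product_id = :prod;

    -- if no row came back, create it:
    INSERT INTO inventory (location_id, product_id, qty)
    VALUES (:loc, :prod, :added);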

Here is what happens with PostgreSQL:  Client A runs the process, sees that
there is no record, inserts, and goes on to the next thing to be done in the
same transaction.  Client B runs the process, sees that there is no record
because Client A has not committed, attempts an insert, and blocks until
Client A commits or rolls back.  Client A commits, Client B gets an exception
and is now forced to roll back everything else in the transaction.

Here is what happens with another DB (e.g. Oracle) that doesn't abort the
transaction:  When Client B gets the exception, the exception is caught by
the process, the process selects back the newly created row from Client A,
and the existing record is updated.
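
In other words, the handler just does this (same simplified names as above):

    -- the INSERT failed with a duplicate-key error, so the row must
    -- exist now; catch the error and fall back to an update:
    SELECT qty FROM inventory
     WHERE location_id = :loc AND product_id = :prod;
    UPDATE inventory SET qty = qty + :added
     WHERE location_id = :loc AND product_id = :prod;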

You may not think that this would happen very often, but my experience says
otherwise.  As the number of clients goes up and the DB machine begins to
slow down from the strain, this can happen a great deal.

Just because a statement has been issued that results in an error does not
automatically mean (1) that all the other statements in the transaction are
invalid, nor (2) that the application code does not have a workaround for
that error.  Whether the transaction should be rolled back or not is a
question for the application/client, not the DB.

On Tuesday 13 August 2002 09:40 am, scott.marlowe wrote:
> On Tue, 13 Aug 2002, Jon Swinth wrote:
> > A while back, I ran into a problem that turned out to be intentional
> > behavior in PostgreSQL.  In a long-running transaction (or any
> > transaction, for that matter), if an exception is thrown then you have no
> > choice but to roll back that transaction.  Can someone point me in the
> > right direction in finding out why this is?  It has bitten me a few times
> > and will limit PostgreSQL's ability to work in a high-volume operation.
>
> Seeing as how the purpose of a transaction is to ensure that either all
> the changes to the database are made or none are made, I'm not sure what
> should change about this behaviour.
>
> Or were you looking for things like commit / rollback segments?  In
> general, instead of using commit / rollback segments, I just do a begin /
> end pair around each set of data for which I would have used a commit /
> rollback segment.
>
> Sometimes I think postgresql's tendency to get pedantic about which errors
> cause an auto abort is a little bothersome (e.g. an error thrown by a
> select or set statement will abort the current transaction), but for
> update/delete/insert commands, any single error SHOULD cause the whole
> transaction to abort, thus ensuring transactional integrity.

Re: Transaction Exception Question

From
"Ian Harding"
Date:
Create a record for every location for every item and leave quantity null.  Then it is always an update.
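
Something like this, run once up front (table names made up):

    INSERT INTO inventory (location_id, product_id, qty)
    SELECT l.id, p.id, NULL
      FROM locations l, products p;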

Or,  wait for nested transactions.

I used MSSQL Server and grew extremely tired of the default behaviour, which
is "ignore all errors, just do what you can", and the unbelievable Rube
Goldberg workarounds required to check each error code and then roll back,
but wait, that's only if @@TRANCOUNT is greater than what @@TRANCOUNT was at
the start of this function, unless...  Ugh.

Ian A. Harding
Programmer/Analyst II
Tacoma-Pierce County Health Department
(253) 798-3549
mailto: iharding@tpchd.org

We have only two things to worry about:  That things will never get
back to normal, and that they already have.


>>> Jon Swinth <jswinth@atomicpc.com> 08/13/02 10:42AM >>>
Thanks Scott for your reply.

I don't agree that an insert/update/delete error should automatically abort
the transaction.  You have not allowed for the fact that the error may be
handled.  I will give you an example that makes my case.

Let's say you have an inventory table.  The inventory table has a primary key
of an integer and a unique key of location and product.  The unique key makes
sure that there is only one record for each product against a single
location.  Now imagine that you have a high-volume database with many clients,
and you have a process that attempts to put quantity of a product into a
location.  That process would first select to see if the record already
existed, so it could be updated, and then insert a row when one wasn't found.
Now imagine that this is just part of a long-running transaction and that
multiple clients will want to put more of the same product in the same
location.

Here is what happens with PostgreSQL:  Client A runs the process, sees that
there is no record, inserts, and goes on to the next thing to be done in the
same transaction.  Client B runs the process, sees that there is no record
because Client A has not committed, attempts an insert, and blocks until
Client A commits or rolls back.  Client A commits, Client B gets an exception
and is now forced to roll back everything else in the transaction.

Here is what happens with another DB (e.g. Oracle) that doesn't abort the
transaction:  When Client B gets the exception, the exception is caught by
the process, the process selects back the newly created row from Client A,
and the existing record is updated.

You may not think that this would happen very often, but my experience says
otherwise.  As the number of clients goes up and the DB machine begins to
slow down from the strain, this can happen a great deal.

Just because a statement has been issued that results in an error does not
automatically mean (1) that all the other statements in the transaction are
invalid, nor (2) that the application code does not have a workaround for
that error.  Whether the transaction should be rolled back or not is a
question for the application/client, not the DB.

On Tuesday 13 August 2002 09:40 am, scott.marlowe wrote:
> On Tue, 13 Aug 2002, Jon Swinth wrote:
> > A while back, I ran into a problem that turned out to be intentional
> > behavior in PostgreSQL.  In a long-running transaction (or any
> > transaction, for that matter), if an exception is thrown then you have
> > no choice but to roll back that transaction.  Can someone point me in
> > the right direction in finding out why this is?  It has bitten me a few
> > times and will limit PostgreSQL's ability to work in a high-volume
> > operation.
>
> Seeing as how the purpose of a transaction is to ensure that either all
> the changes to the database are made or none are made, I'm not sure what
> should change about this behaviour.
>
> Or were you looking for things like commit / rollback segments?  In
> general, instead of using commit / rollback segments, I just do a begin /
> end pair around each set of data for which I would have used a commit /
> rollback segment.
>
> Sometimes I think postgresql's tendency to get pedantic about which errors
> cause an auto abort is a little bothersome (e.g. an error thrown by a
> select or set statement will abort the current transaction), but for
> update/delete/insert commands, any single error SHOULD cause the whole
> transaction to abort, thus ensuring transactional integrity.


Re: Transaction Exception Question

From
Jon Swinth
Date:
Thanks for the suggestion, Ian.  It wouldn't be very practical when there are
50K locations and 10K products: pre-creating a row for every combination
would mean 500 million rows.  I'm not sure what MSSQL was doing.  The error
should still be raised by the DB; it should just be up to the
application/client what to do next.

On Tuesday 13 August 2002 11:03 am, Ian Harding wrote:
> Create a record for every location for every item and leave quantity null.
> Then it is always an update.
>
> Or,  wait for nested transactions.
>
> I used MSSQL Server and grew extremely tired of the default behaviour,
> which is "ignore all errors, just do what you can", and the unbelievable
> Rube Goldberg workarounds required to check each error code and then roll
> back, but wait, that's only if @@TRANCOUNT is greater than what @@TRANCOUNT
> was at the start of this function, unless...  Ugh.
>
> Ian A. Harding
> Programmer/Analyst II
> Tacoma-Pierce County Health Department
> (253) 798-3549
> mailto: iharding@tpchd.org
>
> We have only two things to worry about:  That things will never get
> back to normal, and that they already have.
>
> >>> Jon Swinth <jswinth@atomicpc.com> 08/13/02 10:42AM >>>
>
> Thanks Scott for your reply.
>
> I don't agree that an insert/update/delete error should automatically abort
> the transaction.  You have not allowed for the fact that the error may be
> handled.  I will give you an example that makes my case.
>
> Let's say you have an inventory table.  The inventory table has a primary
> key of an integer and a unique key of location and product.  The unique key
> makes sure that there is only one record for each product against a single
> location.  Now imagine that you have a high-volume database with many
> clients, and you have a process that attempts to put quantity of a product
> into a location.  That process would first select to see if the record
> already existed, so it could be updated, and then insert a row when one
> wasn't found.  Now imagine that this is just part of a long-running
> transaction and that multiple clients will want to put more of the same
> product in the same location.
>
> Here is what happens with PostgreSQL:  Client A runs the process, sees that
> there is no record, inserts, and goes on to the next thing to be done in
> the same transaction.  Client B runs the process, sees that there is no
> record because Client A has not committed, attempts an insert, and blocks
> until Client A commits or rolls back.  Client A commits, Client B gets an
> exception and is now forced to roll back everything else in the transaction.
>
> Here is what happens with another DB (e.g. Oracle) that doesn't abort the
> transaction:  When Client B gets the exception, the exception is caught by
> the process, the process selects back the newly created row from Client A,
> and the existing record is updated.
>
> You may not think that this would happen very often, but my experience says
> otherwise.  As the number of clients goes up and the DB machine begins to
> slow down from the strain, this can happen a great deal.
>
> Just because a statement has been issued that results in an error does not
> automatically mean (1) that all the other statements in the transaction are
> invalid, nor (2) that the application code does not have a workaround for
> that error.  Whether the transaction should be rolled back or not is a
> question for the application/client, not the DB.
>
> On Tuesday 13 August 2002 09:40 am, scott.marlowe wrote:
> > On Tue, 13 Aug 2002, Jon Swinth wrote:
> > > A while back, I ran into a problem that turned out to be intentional
> > > behavior in PostgreSQL.  In a long-running transaction (or any
> > > transaction, for that matter), if an exception is thrown then you have
> > > no choice but to roll back that transaction.  Can someone point me in
> > > the right direction in finding out why this is?  It has bitten me a
> > > few times and will limit PostgreSQL's ability to work in a high-volume
> > > operation.
> >
> > Seeing as how the purpose of a transaction is to ensure that either all
> > the changes to the database are made or none are made, I'm not sure what
> > should change about this behaviour.
> >
> > Or were you looking for things like commit / rollback segments?  In
> > general, instead of using commit / rollback segments, I just do a begin /
> > end pair around each set of data for which I would have used a commit /
> > rollback segment.
> >
> > Sometimes I think postgresql's tendency to get pedantic about which
> > errors cause an auto abort is a little bothersome (e.g. an error thrown
> > by a select or set statement will abort the current transaction), but for
> > update/delete/insert commands, any single error SHOULD cause the whole
> > transaction to abort, thus ensuring transactional integrity.

Re: Transaction Exception Question

From
Martijn van Oosterhout
Date:
On Tue, Aug 13, 2002 at 10:42:07AM -0700, Jon Swinth wrote:
> Thanks Scott for your reply.
>
> I don't agree that an insert/update/delete error should automatically abort
> the transaction.  You have not allowed for the fact that the error may be
> handled.  I will give you an example that makes my case.
>
> Let's say you have an inventory table.  The inventory table has a primary
> key of an integer and a unique key of location and product.  The unique key
> makes sure that there is only one record for each product against a single
> location.  Now imagine that you have a high-volume database with many
> clients, and you have a process that attempts to put quantity of a product
> into a location.  That process would first select to see if the record
> already existed, so it could be updated, and then insert a row when one
> wasn't found.  Now imagine that this is just part of a long-running
> transaction and that multiple clients will want to put more of the same
> product in the same location.

Quick question (maybe I'm misunderstanding something), but why are all these
unrelated queries in the same transaction?  If you committed between each
update, your problem would go away.

What you are trying to do could be achieved using LOCKs, but you don't
want that.

--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> There are 10 kinds of people in the world, those that can do binary
> arithmetic and those that can't.

Re: Transaction Exception Question

From
Jon Swinth
Date:
Thanks, Martijn.  What you missed is that there are many clients, each with
their own transaction, trying to do similar things.  What I was trying to
illustrate is that there are occasions where a SQL error may happen that can
be handled and would not need the transaction to be aborted.

In the example I gave, the record is already there but the second client
cannot see it yet (not committed), so it attempts an insert too.  If the first
client is successful and commits, then the second client will get an SQL error
on insert for a duplicate key.  In PostgreSQL this currently requires that the
second client roll back everything in the transaction, when it would be a
simple matter to catch the duplicate-key error, select back the record, and
update it.

On Tuesday 13 August 2002 09:16 pm, Martijn van Oosterhout wrote:
> On Tue, Aug 13, 2002 at 10:42:07AM -0700, Jon Swinth wrote:
> > Thanks Scott for your reply.
> >
> > I don't agree that an insert/update/delete error should automatically
> > abort the transaction.  You have not allowed for the fact that the error
> > may be handled.  I will give you an example that makes my case.
> >
> > Let's say you have an inventory table.  The inventory table has a primary
> > key of an integer and a unique key of location and product.  The unique
> > key makes sure that there is only one record for each product against a
> > single location.  Now imagine that you have a high-volume database with
> > many clients, and you have a process that attempts to put quantity of a
> > product into a location.  That process would first select to see if the
> > record already existed, so it could be updated, and then insert a row
> > when one wasn't found.  Now imagine that this is just part of a
> > long-running transaction and that multiple clients will want to put more
> > of the same product in the same location.
>
> Quick question (maybe I'm misunderstanding something), but why are all
> these unrelated queries in the same transaction?  If you committed between
> each update, your problem would go away.
>
> What you are trying to do could be achieved using LOCKs, but you don't
> want that.


Re: Transaction Exception Question

From
Andrew Sullivan
Date:
On Wed, Aug 14, 2002 at 08:50:32AM -0700, Jon Swinth wrote:
>
> In the example I gave, the record is already there but the second client
> cannot see it yet (not committed), so it attempts an insert too.  If the
> first client is successful and commits, then the second client will get an
> SQL error on insert for a duplicate key.  In PostgreSQL this currently
> requires that the second client roll back everything in the transaction,
> when it would be a simple matter to catch the duplicate-key error, select
> back the record, and update it.

Could you cache the locally-submitted bits from previously in the
transaction, and then resubmit them as part of a new transaction?  I
know that's not terribly efficient, but if you _really_ need
transactions running that long, it may be the only way until
savepoints are added.
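
With savepoints, the duplicate-key case from your example could be retried
without losing the rest of the transaction; in SQL-standard syntax (not in
PostgreSQL yet), something like:

    BEGIN;
    -- ... earlier work in the long-running transaction ...
    SAVEPOINT before_insert;
    INSERT INTO inventory (location_id, product_id, qty)
    VALUES (:loc, :prod, :added);
    -- on a duplicate-key error, undo only the failed statement:
    ROLLBACK TO SAVEPOINT before_insert;
    UPDATE inventory SET qty = qty + :added
     WHERE location_id = :loc AND product_id = :prod;
    COMMIT;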

I wonder, however, if this isn't one of those cases where proper
theory-approved normalisation is the wrong way to go.  Maybe you need
an order-submission queue table to keep contention low on the
(products?  I think that was your example) table.

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110


Re: Transaction Exception Question

From
Jon Swinth
Date:
Thanks Andrew for your reply.

You confused me at first.  I guess the second paragraph was about my issue
with FK triggers taking write locks on parent tables.

You're right that I may need a workaround for transactions being forced to
roll back on exception.  Savepoints may indeed be the answer I am looking for,
although I would like to see them implemented internally in the DB so that
the transaction automatically goes back to the point just before the
exception.  I have already accepted the fact that the DB will be this way for
a while.  The purpose of the original e-mail was to see if things were this
way for technical reasons (which would mean this could be added to the todo
list) or for idealistic reasons.

As for the FK issue: an order queue isn't feasible because of a current
requirement that the customer receive immediate feedback if the credit card
is declined, and I can't contact the credit card company without a concrete
order number (keeping in mind that some customers will hit back on their
browser and try to submit again).  I would eliminate a lot of contention if
I could do the credit card authorization later and just cancel the order.

As for de-normalizing the DB: product is only one of the FK fields in
contention.  There is also order status, carrier, carrier service, inv type,
inv status, and others.  If I have to disable all the FKs to make things
work, why did I insist on a DB with foreign keys in the first place?

I am raising these issues because I think PostgreSQL can be a serious
contender for high-volume applications.  I just don't want to have to trade
good DB and application design for speed.

On Wednesday 14 August 2002 11:12 am, Andrew Sullivan wrote:
> On Wed, Aug 14, 2002 at 08:50:32AM -0700, Jon Swinth wrote:
> > In the example I gave, the record is already there but the second client
> > cannot see it yet (not committed), so it attempts an insert too.  If the
> > first client is successful and commits, then the second client will get
> > an SQL error on insert for a duplicate key.  In PostgreSQL this currently
> > requires that the second client roll back everything in the transaction,
> > when it would be a simple matter to catch the duplicate-key error, select
> > back the record, and update it.
>
> Could you cache the locally-submitted bits from previously in the
> transaction, and then resubmit them as part of a new transaction?  I
> know that's not terribly efficient, but if you _really_ need
> transactions running that long, it may be the only way until
> savepoints are added.
>
> I wonder, however, if this isn't one of those cases where proper
> theory-approved normalisation is the wrong way to go.  Maybe you need
> an order-submission queue table to keep contention low on the
> (products?  I think that was your example) table.
>
> A


Re: Transaction Exception Question

From
Andrew Sullivan
Date:
On Wed, Aug 14, 2002 at 11:40:50AM -0700, Jon Swinth wrote:
> As for the FK issue: an order queue isn't feasible because of a
> current requirement that the customer receive immediate feedback if
> the credit card is declined, and I can't contact the credit card
> company without a concrete order number (keeping in mind that some

Oh, you can get that.  Think of a high-volume transaction system.
You have a "pending" table and a "posted" table.  The order gets
authorised, the charges get approved, and the order number gets
assigned when the order is posted to the "pending" table.  A
separate process moves orders from the "pending" to the "posted"
table, and this latter action is what does updates, if any, to the
"product" table.  Or is this not the problem?  I've only ever had
writers blocking on FK constraints.  Maybe (probably) I'm
misunderstanding the problem here, and your design is different from
what I'm imagining.
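
In outline (all names invented), with only short transactions touching the
hot tables:

    -- web transactions are short: insert, authorise, commit
    CREATE TABLE pending_orders (
        order_no    serial PRIMARY KEY,  -- the concrete order number
        customer_id integer NOT NULL,
        total       numeric NOT NULL
    );

    CREATE TABLE posted_orders (
        order_no    integer PRIMARY KEY,
        customer_id integer NOT NULL,
        total       numeric NOT NULL
    );

    -- a single mover process batches orders across, so it alone
    -- contends for locks on "product" and friends:
    BEGIN;
    LOCK TABLE pending_orders IN EXCLUSIVE MODE;  -- hold off new inserts
    INSERT INTO posted_orders
    SELECT order_no, customer_id, total FROM pending_orders;
    DELETE FROM pending_orders;
    COMMIT;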

(On the general point that the FK implementation is less than
optimal, I totally agree.  I just wish I had an idea on how to make
it better.)

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110


Re: Transaction Exception Question

From
Jon Swinth
Date:
Ah, now I understand where you got FK from.  The transaction exception I run
into most often is caused by a unique key (not the PK, either).  An insert
will block on a potential UK violation when the existing record has been
inserted by a not-yet-complete transaction.  The insert must block until the
other transaction is committed or rolled back.  A commit of the other
transaction results in a UK-violation SQL error and a rollback.  A rollback
of the other transaction results in the insert being successful.
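
The timeline, concretely (made-up values):

    -- transaction 1:
    BEGIN;
    INSERT INTO inventory (location_id, product_id, qty) VALUES (1, 1, 5);

    -- transaction 2, concurrently:
    BEGIN;
    INSERT INTO inventory (location_id, product_id, qty) VALUES (1, 1, 3);
    -- ...blocks here, waiting to see whether transaction 1 commits...

    -- transaction 1:
    COMMIT;
    -- transaction 2 now gets a unique-key violation, and its whole
    -- transaction is aborted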

As for your pending and posted idea, are you proposing not to have FKs on the
pending table?  What do I do when the order fails an FK check when moving
from pending to posted?  The whole point of the transaction is that when I am
done, everything is updated properly or nothing is updated.

Based on what I know of PostgreSQL so far, there are two ways to solve the FK
lock issues.  Both require that the concept of a read lock be added to the
core of PostgreSQL.  One way would be to keep the current trigger-based FKs
and add to the allowed SQL a statement like SELECT ... FOR READ OF "table".
Another would be to implement FKs as part of the schema definitions and do
away with the generated triggers, but that would probably be more work.  Some
debate may be necessary as to what a read lock constitutes.  For FK purposes,
a read lock only needs to prevent the record from being deleted or the
referenced key changed.  Some would say that a read lock should block write
locks but allow other read locks.
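
Under the first proposal, the generated FK trigger would issue something
like this against the parent table (proposed syntax, not implemented; today,
as I understand it, the trigger takes a write lock with SELECT ... FOR
UPDATE):

    -- pin the parent row's existence and key without blocking other readers:
    SELECT 1 FROM product WHERE id = :prod FOR READ OF product;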

On Wednesday 14 August 2002 01:26 pm, Andrew Sullivan wrote:
> On Wed, Aug 14, 2002 at 11:40:50AM -0700, Jon Swinth wrote:
> > As for the FK issue: an order queue isn't feasible because of a
> > current requirement that the customer receive immediate feedback if
> > the credit card is declined, and I can't contact the credit card
> > company without a concrete order number (keeping in mind that some
>
> Oh, you can get that.  Think of a high-volume transaction system.
> You have a "pending" table and a "posted" table.  The order gets
> authorised, the charges get approved, and the order number gets
> assigned when the order is posted to the "pending" table.  A
> separate process moves orders from the "pending" to the "posted"
> table, and this latter action is what does updates, if any, to the
> "product" table.  Or is this not the problem?  I've only ever had
> writers blocking on FK constraints.  Maybe (probably) I'm
> misunderstanding the problem here, and your design is different from
> what I'm imagining.
>
> (On the general point that the FK implementation is less than
> optimal, I totally agree.  I just wish I had an idea on how to make
> it better.)
>
> A


Re: Transaction Exception Question

From
Andrew Sullivan
Date:
On Wed, Aug 14, 2002 at 01:56:57PM -0700, Jon Swinth wrote:
> Ah, now I understand where you got FK from.  The transaction exception I
> run into most often is caused by a unique key (not the PK, either).  An
> insert will block on a potential UK violation when the existing record has
> been inserted by a not-yet-complete transaction.

Ok, so these are just separate issues.  Sorry, I'm especially dim
this week (we're moving offices, is my best excuse).

> As for your pending and posted idea, are you proposing not to have FKs on
> the pending table?  What do I do when the order fails an FK check when
> moving from pending to posted?  The whole point of the transaction is that
> when I am done, everything is updated properly or nothing is updated.

No, you should have the FKs on the pending table.  Hmm.  I see now:
the problem may also be related to the long-running transaction,
because you end up having to hold the lock for the duration.  So
never mind all of what I said.

> Based on what I know of PostgreSQL so far, there are two ways to solve the
> FK lock issues.  Both require that the concept of a read lock be added to
> the core of PostgreSQL.

Yes, I think this is right.  And yes, that lock mechanism would be
valuable.

A

--
----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110