Re: Transaction Exception Question

From    Jon Swinth
Subject Re: Transaction Exception Question
Date
Msg-id  200208131811.g7DIAmM02565@localhost.localdomain
In response to  Re: Transaction Exception Question  ("Ian Harding" <ianh@tpchd.org>)
List    pgsql-general
Thanks for the suggestion Ian.  It wouldn't be very practical with 50K
locations and 10K products.  I'm not sure what MSSQL was doing.  The error
should still be thrown by the DB; it should just be up to the
application/client what to do next.
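
The pattern argued for here, where the DB still raises the duplicate-key
error and the client catches it and retries as an update, can be sketched in
miniature.  This is only an illustration: it uses Python's sqlite3 module as
a stand-in for PostgreSQL, and the inventory table and add_quantity helper
are hypothetical names, not anything from this thread.

```python
import sqlite3

# Stand-in schema: a unique key on (location, product) so only one row
# can exist per pair, mirroring the inventory example in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE inventory (
        id INTEGER PRIMARY KEY,
        location TEXT NOT NULL,
        product TEXT NOT NULL,
        quantity INTEGER,
        UNIQUE (location, product)
    )
""")

def add_quantity(conn, location, product, qty):
    """Insert first; if the unique key fires, fall back to an update.

    The database still raises the error -- the client catches it and
    decides what to do next, instead of losing the whole transaction.
    """
    try:
        conn.execute(
            "INSERT INTO inventory (location, product, quantity) VALUES (?, ?, ?)",
            (location, product, qty),
        )
    except sqlite3.IntegrityError:
        conn.execute(
            "UPDATE inventory SET quantity = quantity + ? "
            "WHERE location = ? AND product = ?",
            (qty, location, product),
        )

add_quantity(conn, "A1", "widget", 5)
add_quantity(conn, "A1", "widget", 3)  # duplicate key -> caught, becomes update
row = conn.execute(
    "SELECT quantity FROM inventory WHERE location = 'A1' AND product = 'widget'"
).fetchone()
print(row[0])  # 8
```

The same shape applies with any driver that surfaces constraint violations
as catchable exceptions; the point is that the error is handled by the
application rather than aborting everything.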

On Tuesday 13 August 2002 11:03 am, Ian Harding wrote:
> Create a record for every location for every item and leave quantity null.
> Then it is always an update.
>
> Or,  wait for nested transactions.
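
The first suggestion above, pre-creating a row for every (location, product)
pair so that every later write is an unconditional update, might look like
the following sketch.  Again sqlite3 stands in for PostgreSQL, and all table,
column, and data names are invented for illustration (the thread's actual
scale, 50K locations by 10K products, is what makes this impractical).

```python
import sqlite3
from itertools import product as cross

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE inventory (
        location TEXT NOT NULL,
        product TEXT NOT NULL,
        quantity INTEGER,            -- left NULL until stock arrives
        PRIMARY KEY (location, product)
    )
""")

# Pre-create one row per (location, product) pair with quantity NULL,
# so later writers only ever UPDATE and can never race on an INSERT.
locations = ["A1", "A2"]           # illustration only
products = ["widget", "gadget"]
conn.executemany(
    "INSERT INTO inventory (location, product) VALUES (?, ?)",
    cross(locations, products),
)

# Every write is now an update; no duplicate-key error is possible.
conn.execute(
    "UPDATE inventory SET quantity = COALESCE(quantity, 0) + 5 "
    "WHERE location = 'A1' AND product = 'widget'"
)
count = conn.execute("SELECT COUNT(*) FROM inventory").fetchone()[0]
print(count)  # 4
```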
>
> I used MSSQL Server and grew extremely tired of the default behaviour, which
> is "ignore all errors, just do what you can", and the unbelievable Rube
> Goldberg workarounds required to check each error code and then roll back,
> but wait, that's only if @@TRANCOUNT > what @@TRANCOUNT was at the start of
> this function, unless...  Ugh.
>
> Ian A. Harding
> Programmer/Analyst II
> Tacoma-Pierce County Health Department
> (253) 798-3549
> mailto: iharding@tpchd.org
>
> We have only two things to worry about:  That things will never get
> back to normal, and that they already have.
>
> >>> Jon Swinth <jswinth@atomicpc.com> 08/13/02 10:42AM >>>
>
> Thanks Scott for your reply.
>
> I don't agree that an insert/update/delete error should automatically abort
> the transaction.  You have not provided for the fact that the error may be
> handled.  I will give you an example that makes my case.
>
> Let's say you have an inventory table.  The inventory table has a primary
> key of an integer and a unique key of location and product.  The unique key
> makes sure that there is only one record for each product against a single
> location.  Now imagine that you have a high-volume database with many
> clients, and you have a process that attempts to put quantity of a product
> into a location.  That process would first select to see if the record
> already existed so it could be updated, and insert a row when it wasn't
> found.  Now imagine that this is just part of a long-running transaction and
> that multiple clients will want to put more of the same product in the same
> location.
>
> Here is what happens with Postgre: Client A runs the process, sees that
> there is no record, inserts, and goes on to the next thing to be done in
> the same transaction.  Client B runs the process, sees that there is no
> record because Client A has not committed, attempts an insert, and blocks
> until Client A commits or rolls back.  Client A commits, Client B gets an
> exception and is now forced to roll back everything else in the transaction.
>
> Here is what happens with another DB (ex. Oracle) that doesn't abort the
> transaction:  When client B gets the exception, the exception is caught by
> the process, the process selects back the newly created row from Client A,
> and the existing record is updated.
>
> You may not think that this would happen very often, but my experience says
> otherwise.  As the number of clients goes up and the DB machine begins to
> slow down from the strain, this can happen a great deal.
>
> Just because a statement has been issued that results in an error does not
> automatically mean that (1) all the other statements in the transaction are
> invalid or (2) the application code does not have a workaround for
> that error.  Whether the transaction should be rolled back or not is a
> question for the application/client, not the DB.
>
> On Tuesday 13 August 2002 09:40 am, scott.marlowe wrote:
> > On Tue, 13 Aug 2002, Jon Swinth wrote:
> > > A while back, I ran into a problem that turned out to be in Postgre on
> > > purpose.  In a long running transaction (or any transaction for that
> > > matter) if an exception is thrown then you have no choice but to
> > > rollback that transaction.  Is there someone that can point me in the
> > > right direction in finding out why this would be?  It has bitten me a
> > > few times and will limit Postgre's ability to work in a high volume
> > > operation.
> >
> > Seeing as how the purpose of a transaction is to ensure that either all
> > the changes to the database are made or none are made, I'm not sure what
> > should change about this behaviour.
> >
> > Or were you looking for things like commit / rollback segments?  In
> > general, instead of using commit / rollback segments I just do a begin /
> > end pair around each set of data that I would have used a commit /
> > rollback segment.
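
The begin/end-pair approach described above, one small transaction per
independent unit of work so that a failure only loses its own batch, can be
sketched like this.  Once more sqlite3 stands in for PostgreSQL, the batch
data is made up, and the failure is simulated with a plain ValueError rather
than a real database error.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None   # manage transactions explicitly
conn.execute("CREATE TABLE log (batch INTEGER, value INTEGER)")

batches = [[1, 2, 3], [4, "bad", 6], [7, 8, 9]]   # middle batch will fail
committed = []
for i, batch in enumerate(batches):
    conn.execute("BEGIN")     # one begin/commit pair per independent unit
    try:
        for v in batch:
            if not isinstance(v, int):
                raise ValueError("bad value")   # simulate a failing statement
            conn.execute("INSERT INTO log VALUES (?, ?)", (i, v))
        conn.execute("COMMIT")
        committed.append(i)
    except ValueError:
        conn.execute("ROLLBACK")   # only this batch's work is lost

print(committed)  # [0, 2] -- the failed batch rolls back alone
```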
> >
> > Sometimes I think postgresql's tendency to get pedantic about which
> > errors cause an auto abort is a little bothersome (i.e. an error thrown
> > by a select or set statement will abort the current transaction) but for
> > update/delete/insert commands, any single error SHOULD cause the whole
> > transaction to abort, thus ensuring transactional integrity.
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>
