Re: Performance issues with compaq server
From | John Gray |
---|---|
Subject | Re: Performance issues with compaq server |
Date | |
Msg-id | 1020944616.1592.103.camel@adzuki |
In reply to | Re: Performance issues with compaq server (Holger Marzen <holger@marzen.de>) |
List | pgsql-general |
On Thu, 2002-05-09 at 12:23, Holger Marzen wrote:
> On 9 May 2002, Doug McNaught wrote:
>
> > Holger Marzen <holger@marzen.de> writes:
> >
> > > Is there a rule in SQL standards that describes what should happen if
> > > some statements in a transaction fail and the program issues a commit?
> >
> > I think PG's is standard behavior; that's kind of the whole point of
> > having transactions.
>
> - rolling back the whole transaction if you do a COMMIT
>   or
> - keeping the changes until the first failing statement and ignoring
>   everything that follows if you do a COMMIT

I can't speak to the phrasing of the standard, but PG behaviour is:

Everything in the transaction is ignored. All the inserts made before the
error, since the BEGIN, are rolled back. All subsequent inserts are ignored
(generating an error that the transaction is in ABORT state). The
subsequent COMMIT will end the transaction (and thus clear the abort state)
but *will not commit anything*.

This means that scripts will work as intended, namely that all or none of
the commands within a transaction block will succeed [*]. So if you split
your insert into blocks and one block fails, you only need to correct that
block and try to load it again.

Regards

John

[*] This is handy for doing things like reindexing or dropping and
recreating a table in a script. The DROP TABLE stands a good chance of
succeeding, whereas the prospect of a syntax error makes the CREATE TABLE
more likely to fail. A subsequent commit of the DROP TABLE alone would not
produce the end result you intended....

--
John Gray
Azuli IT
www.azuli.co.uk
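A minimal psql-style sketch of the behaviour John describes, assuming a
hypothetical table t with a single integer column (not taken from the
thread itself):

    BEGIN;
    INSERT INTO t VALUES (1);        -- succeeds, but only provisionally
    INSERT INTO t VALUES ('oops');   -- fails: not a valid integer
    INSERT INTO t VALUES (2);        -- rejected: the transaction is now in the abort state
    COMMIT;                          -- ends the block but commits nothing;
                                     -- the first INSERT is rolled back as well

In recent PostgreSQL versions the final COMMIT is even reported back as
ROLLBACK, which matches the description above: the block is closed, the
abort state is cleared, and none of the inserts reach the table.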