Re: Optimising inside transactions
| From | Tom Lane |
|---|---|
| Subject | Re: Optimising inside transactions |
| Date | |
| Msg-id | 10113.1023896190@sss.pgh.pa.us |
| In reply to | Optimising inside transactions (John Taylor <postgres@jtresponse.co.uk>) |
| Responses | Re: Optimising inside transactions |
| List | pgsql-novice |
John Taylor <postgres@jtresponse.co.uk> writes:
> I'm running a transaction with about 1600 INSERTs.
> Each INSERT involves a subselect.
> I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about
> 1/2 the time expected.
> Is postgresql optimising the inserts, knowing that it will rollback at the end ?
> If not, why do the queries run faster after the failure ?
Queries after the failure aren't run at all; they're only passed through
the parser's grammar so it can look for a COMMIT or ROLLBACK command.
Normal processing resumes after ROLLBACK. If you were paying attention
to the return codes you'd notice complaints like
regression=# begin;
BEGIN
regression=# select 1/0;
ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero
-- subsequent queries will be rejected like so:
regression=# select 1/0;
WARNING: current transaction is aborted, queries ignored until end of transaction block
*ABORT STATE*
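The abort-and-ignore behaviour shown in that session can be sketched as a toy state machine. This is purely illustrative: `ToyBackend` and its messages are invented here to mimic the observable behaviour, and are not PostgreSQL internals.

```python
# Toy model of the behaviour described above: once a statement inside a
# transaction block fails, later statements are only scanned for
# COMMIT/ROLLBACK and everything else is rejected until the block ends.
# Illustrative sketch only, not PostgreSQL's actual implementation.

class ToyBackend:
    def __init__(self):
        self.in_txn = False    # inside a BEGIN ... COMMIT/ROLLBACK block?
        self.aborted = False   # has a statement in this block failed?

    def execute(self, sql):
        cmd = sql.strip().rstrip(";").upper()
        if self.aborted:
            # Aborted block: only transaction-ending commands are honoured.
            if cmd in ("COMMIT", "ROLLBACK"):
                self.in_txn = False
                self.aborted = False
                return "ROLLBACK"
            return ("WARNING: current transaction is aborted, "
                    "queries ignored until end of transaction block")
        if cmd == "BEGIN":
            self.in_txn = True
            return "BEGIN"
        if cmd in ("COMMIT", "ROLLBACK"):
            self.in_txn = False
            return cmd
        if "1/0" in cmd:  # stand-in for any statement that errors out
            if self.in_txn:
                self.aborted = True
            return "ERROR: division by zero"
        return "OK"

backend = ToyBackend()
print(backend.execute("BEGIN"))       # BEGIN
print(backend.execute("SELECT 1/0"))  # ERROR: division by zero
print(backend.execute("SELECT 2+2"))  # WARNING: ... ignored until end of block
print(backend.execute("ROLLBACK"))    # ROLLBACK
print(backend.execute("SELECT 2+2"))  # OK again outside the block
```

Note that the "queries ignored" path does almost no work, which is the point of Tom's explanation: the cheap rejection, not an optimisation of the INSERTs, is why the remaining statements appear to run faster.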
I'd actually expect much more than a 2:1 speed differential, because the
grammar is not a significant part of the runtime AFAICT. Perhaps you
are including some large amount of communication overhead in that
comparison?
regards, tom lane