Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
| From | Tom Lane |
|---|---|
| Subject | Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit |
| Date | |
| Msg-id | 23588.1205159638@sss.pgh.pa.us |
| In reply to | Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit ("Heikki Linnakangas" <heikki@enterprisedb.com>) |
| Responses | Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit; Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit |
| List | pgsql-performance |
"Heikki Linnakangas" <heikki@enterprisedb.com> writes:
> For 8.4, it would be nice to improve that. I tested that on my laptop
> with a similarly-sized table, inserting each row in a pl/pgsql function
> with an exception handler, and I got very similar run times. According
> to oprofile, all the time is spent in TransactionIdIsInProgress. I think
> it would be pretty straightforward to store the committed subtransaction
> ids in a sorted array, instead of a linked list, and binary search.
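(Purely to illustrate the idea Heikki sketches above: a membership test over a sorted array of committed subtransaction XIDs using bsearch(), instead of walking a linked list. The type and function names below are made up for the example; they are not the actual backend data structures.)

```c
/* Hypothetical sketch of the sorted-array idea; illustrative names only. */
#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

typedef uint32_t XidT;              /* stand-in for TransactionId */

typedef struct
{
    XidT   *xids;                   /* committed subxact IDs, kept sorted */
    size_t  nxids;
} CommittedSubxids;

static int
xid_cmp(const void *a, const void *b)
{
    XidT xa = *(const XidT *) a;
    XidT xb = *(const XidT *) b;

    return (xa > xb) - (xa < xb);
}

/* O(log n) membership test instead of an O(n) list walk. */
bool
subxid_is_committed(const CommittedSubxids *set, XidT xid)
{
    return bsearch(&xid, set->xids, set->nxids,
                   sizeof(XidT), xid_cmp) != NULL;
}

/* Record a commit, keeping the array sorted.  Error handling and
 * smarter in-place insertion are omitted for brevity. */
void
subxid_record_commit(CommittedSubxids *set, XidT xid)
{
    set->xids = realloc(set->xids, (set->nxids + 1) * sizeof(XidT));
    set->xids[set->nxids++] = xid;
    qsort(set->xids, set->nxids, sizeof(XidT), xid_cmp);
}
```

With the XIDs kept sorted, each lookup drops from linear in the number of committed subtransactions to logarithmic, which matters when the check runs once per tuple scanned.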
I think the OP is not complaining about the time to run the transaction
that has all the subtransactions; he's complaining about the time to
scan the table that it emitted. Presumably, each row in the table has a
different (sub)transaction ID and so we are thrashing the clog lookup
mechanism. The slowdown only happens once, because after the first scan the
XMIN_COMMITTED hint bits are set.
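(For anyone not familiar with the hint-bit mechanism referred to here, the visibility test follows roughly this shape; the names are simplified stand-ins for illustration, not the real tqual.c or clog interfaces.)

```c
/* Simplified sketch of the hint-bit pattern described above. */
#include <stdint.h>
#include <stdbool.h>

#define XMIN_COMMITTED_HINT  0x0100   /* cached "inserter committed" flag */

typedef struct
{
    uint32_t xmin;                    /* XID that inserted this tuple */
    uint16_t infomask;                /* per-tuple hint bits */
} TupleHeaderSketch;

/* Stub standing in for the expensive path: the commit-log (and, while
 * the inserting transaction is open, TransactionIdIsInProgress) lookup. */
static bool
clog_xid_did_commit(uint32_t xid)
{
    (void) xid;
    return true;
}

bool
tuple_inserter_committed(TupleHeaderSketch *tup)
{
    /* Fast path: a previous scan already cached the answer. */
    if (tup->infomask & XMIN_COMMITTED_HINT)
        return true;

    /* Slow path: consult the commit log.  With one subtransaction per
     * row, every tuple hits a different XID here, which is what
     * thrashes the clog buffers on the first scan. */
    if (!clog_xid_did_commit(tup->xmin))
        return false;

    /* Cache the result on the tuple so later scans take the fast path. */
    tup->infomask |= XMIN_COMMITTED_HINT;
    return true;
}
```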
This probably ties into the recent discussions about eliminating the
fixed-size allocations for SLRU buffers --- I suspect it would've run
better if it could have scaled up the number of pg_clog pages held in
memory.
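(Some rough arithmetic on the clog pressure, assuming the default 8 kB block size and 2 status bits per transaction; the buffer count and row count below are assumptions for illustration, not quoted constants.)

```c
/* Back-of-the-envelope numbers for the clog thrashing described above. */
#include <stdio.h>

int
main(void)
{
    const int  block_size     = 8192;     /* default BLCKSZ */
    const int  bits_per_xact  = 2;        /* clog stores 2 status bits per XID */
    const int  xacts_per_page = block_size * 8 / bits_per_xact;   /* 32768 */
    const int  assumed_bufs   = 8;        /* assumed fixed SLRU buffer count */
    const long distinct_xids  = 1000000;  /* e.g. one subxact per inserted row */

    printf("one clog page covers %d XIDs\n", xacts_per_page);
    printf("%d buffers cover %d XIDs at once\n",
           assumed_bufs, assumed_bufs * xacts_per_page);
    printf("a scan touching %ld distinct XIDs spans %ld clog pages\n",
           distinct_xids, distinct_xids / xacts_per_page + 1);
    return 0;
}
```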
regards, tom lane