Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
| From | Heikki Linnakangas |
|---|---|
| Subject | Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit |
| Date | |
| Msg-id | 47D54DB2.4020405@enterprisedb.com |
| In reply to | Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit (Craig Ringer <craig@postnewspapers.com.au>) |
| Responses | Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit |
| List | pgsql-performance |
Craig Ringer wrote:
> I'll bang out a couple of examples at work tomorrow to see what I land
> up with, since this is clearly something that can benefit from a neat
> test case.
Here's what I used to reproduce this:
postgres=# BEGIN;
BEGIN
postgres=# CREATE TABLE foo (id int4, t text);
CREATE TABLE
postgres=# CREATE OR REPLACE FUNCTION insertfunc() RETURNS void LANGUAGE plpgsql AS $$
begin
  INSERT INTO foo VALUES (1, repeat('a', 110));
exception when unique_violation THEN
end;
$$;
CREATE FUNCTION
postgres=# SELECT COUNT(insertfunc()) FROM generate_series(1,300000);
 count
--------
 300000
(1 row)
postgres=# EXPLAIN ANALYZE SELECT COUNT(*) FROM foo;
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=13595.93..13595.94 rows=1 width=0) (actual time=239535.904..239535.906 rows=1 loops=1)
   ->  Seq Scan on foo  (cost=0.00..11948.34 rows=659034 width=0) (actual time=0.022..239133.898 rows=300000 loops=1)
 Total runtime: 239535.974 ms
(3 rows)
The oprofile output is pretty damning:
samples  %        symbol name
42148    99.7468  TransactionIdIsCurrentTransactionId
If you put a COMMIT right before "EXPLAIN ANALYZE..." it runs in < 1s.
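Presumably that's because the EXCEPTION clause in insertfunc() opens a subtransaction for every call, so the scan has to compare each uncommitted tuple's xmin against a list of ~300,000 in-progress subtransaction XIDs inside TransactionIdIsCurrentTransactionId; once the transaction commits, none of those XIDs are "current" any more and the per-tuple check is cheap. A minimal sketch of the fast variant (same session as above; the sub-second figure is just what this test showed, not a guarantee):

postgres=# SELECT COUNT(insertfunc()) FROM generate_series(1,300000);
 count
--------
 300000
(1 row)

postgres=# COMMIT;  -- the only change: end the transaction before scanning
COMMIT
postgres=# EXPLAIN ANALYZE SELECT COUNT(*) FROM foo;
...                 -- same plan as above, but now completes in well under a second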
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com