performance regression in 9.2 when loading lots of small tables

From: Jeff Janes
Subject: performance regression in 9.2 when loading lots of small tables
Date:
Msg-id: CAMkU=1x8PYyCa_LiNqY2q6h7B8HouNf10-bG877zLVH8MkrmUA@mail.gmail.com
Responses: Re: performance regression in 9.2 when loading lots of small tables  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
There was a regression introduced in 9.2 that affects the creation and
loading of lots of small tables in a single transaction.

It affects the loading of a pg_dump file which has a large number of
small tables (10,000 schemas, one table per schema, 10 rows per
table).  I did not test other schema configurations, so these
specifics might not be needed to invoke the problem.
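
Spelled out, each of the 10,000 repeated units in the dump amounts to
roughly the following (reconstructed from the generator script at the end
of this mail; foo0 is simply the first schema in the series):

# One repeated unit, reconstructed from the generator script below.
psql <<'EOF'
set client_min_messages=warning;
create schema foo0;
create table foo0.foo0 (k integer, v integer);
insert into foo0.foo0 select * from generate_series(1,10);  -- v is left NULL
EOF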

It causes the loading of a dump with "psql -1 -f " to run at half the
previous speed.  The speed of loading without -1 is unchanged.
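
For concreteness, the comparison is along these lines, using the 10000.dump
file produced by the script at the end of this mail (the target database
name and the use of "time" are illustrative):

# Single-transaction restore vs. plain restore of the same dump.
createdb regtest
time psql -1 -f 10000.dump regtest    # one big transaction: roughly 2x slower on 9.2
dropdb regtest && createdb regtest
time psql -f 10000.dump regtest       # without -1: speed unchanged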

The regression was introduced in 39a68e5c6ca7b41b, "Fix toast table
creation".  Perhaps the slowdown is an inevitable result of fixing the
bug.

The regression was removed from 9_1_STABLE at commit dff178f8017e4412,
"More cleanup after failed reduced-lock-levels-for-DDL feature".  It
is still present in 9_2_STABLE.

I don't really understand what is going on in these patches, but it
seems that either 9_1_STABLE now has a bug that was fixed and then
unfixed, or that 9_2_STABLE is slower than it needs to be.


The dump file I used can be obtained like this:

perl -le 'print "set client_min_messages=warning;"; print "create
schema foo$_; create table foo$_.foo$_ (k integer, v integer); insert
into foo$_.foo$_ select * from generate_series(1,10); " foreach
$ARGV[0]..$ARGV[0]+$ARGV[1]-1' 0 10000 | psql > /dev/null ; pg_dump >
10000.dump


Cheers,

Jeff

