Re: pg_dump and thousands of schemas

From: Tom Lane
Subject: Re: pg_dump and thousands of schemas
Date:
Msg-id: 12147.1338306742@sss.pgh.pa.us
In reply to: Re: pg_dump and thousands of schemas (Tatsuo Ishii <ishii@postgresql.org>)
Responses: Re: pg_dump and thousands of schemas
           Re: pg_dump and thousands of schemas
List: pgsql-performance
Tatsuo Ishii <ishii@postgresql.org> writes:
> So I did a quick test with old PostgreSQL 9.0.2 and current (as of
> commit 2755abf386e6572bad15cb6a032e504ad32308cc). In a fresh initdb-ed
> database I created 100,000 tables, each with two integer
> attributes, one of them a primary key. Creating the tables was
> reasonably fast as expected (18-20 minutes). This created a 1.4GB
> database cluster.

> pg_dump dbname >/dev/null took 188 minutes on 9.0.2, which was the
> pretty long time the customer complained about. And on current? Well, it
> took 125 minutes. ps showed that most of the time was spent in the backend.
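The setup described above can be scripted. Below is a minimal sketch that generates the DDL for the benchmark; the table names (`t0` .. `t99999`) and column names are assumptions, since the original test's naming isn't shown.

```python
def make_ddl(n_tables):
    """Yield CREATE TABLE statements for n_tables small tables,
    each with two integer columns, one of them the primary key.
    Table and column names here are hypothetical."""
    for i in range(n_tables):
        yield (f"CREATE TABLE t{i} "
               f"(id integer PRIMARY KEY, val integer);")

# Example: emit the first statement.
print(next(make_ddl(1)))
```

Piping the output into psql (e.g. `python gen_tables.py | psql dbname`) would populate a fresh database for timing `pg_dump dbname >/dev/null` as in the test above.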

Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
management in the server.  What I fixed so far on the pg_dump side
should be enough to let partial dumps run at reasonable speed even if
the whole database contains many tables.  But if psql is taking
AccessShareLock on lots of tables, there's still a problem.

            regards, tom lane
