Re: pg_dump and thousands of schemas

From: Tatsuo Ishii
Subject: Re: pg_dump and thousands of schemas
Date:
Msg-id: 20120613.104525.2089875055094693929.t-ishii@sraoss.co.jp
In reply to: Re: pg_dump and thousands of schemas  (Jeff Janes <jeff.janes@gmail.com>)
Responses: Re: pg_dump and thousands of schemas  ("Hugo <Nabble>" <hugo.tech@gmail.com>)
List: pgsql-performance
> On Tue, Jun 12, 2012 at 1:54 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:
>>> On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
>>>> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii <ishii@postgresql.org> wrote:
>>>>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
>>>>>> management in the server.  What I fixed so far on the pg_dump side
>>>>>> should be enough to let partial dumps run at reasonable speed even if
>>>>>> the whole database contains many tables.  But if psql is taking
>>>>>> AccessShareLock on lots of tables, there's still a problem.
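
[As an aside, those locks can be observed from a second session while a
dump is running; a minimal query, assuming nothing else in the cluster
is holding relation locks at the time:

SELECT count(*)
FROM pg_locks
WHERE locktype = 'relation' AND mode = 'AccessShareLock';
]
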
>>>>>
>>>>> Ok, I modified the part of pg_dump where a tremendous number of LOCK
>>>>> TABLE statements are issued, replacing them with a single LOCK TABLE
>>>>> that names multiple tables. With 100k tables the LOCK statements took
>>>>> 13 minutes in total; now they take only 3 seconds. Comments?
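
[For illustration, the batching turns many statements into one; the
table names here are invented, pg_dump generates the real list:

-- before: one LOCK TABLE statement per table
LOCK TABLE s1.t1 IN ACCESS SHARE MODE;
LOCK TABLE s1.t2 IN ACCESS SHARE MODE;

-- after: a single statement naming all the tables
LOCK TABLE s1.t1, s1.t2 IN ACCESS SHARE MODE;
]
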
>>>>
>>>> Could you rebase this?  I tried doing it myself, but must have messed
>>>> it up because it got slower rather than faster.
>>>
>>> OK, I found the problem.  In fixing a merge conflict, I had it execute
>>> the query every time it appended a table, rather than just at the end.
>>>
>>> With my proposed patch in place, I find that for a full default dump
>>> your patch is slightly faster with < 300,000 tables, and slightly
>>> slower with > 300,000.  The differences are generally <2% in either
>>> direction.  When it comes to back-patching and partial dumps, I'm not
>>> really sure what to test.
>>>
>>> For the record, there is still quadratic behavior on the server,
>>> albeit with a much smaller constant factor than the ResourceOwner
>>> reassign case.  It is in get_tabstat_entry.  I don't know if it is
>>> worth working on that in isolation--if PG is going to try to
>>> accommodate hundreds of thousands of tables, there probably needs to
>>> be a more general way to limit the memory used by all aspects of the
>>> rel caches.
>>
>> I would like to test your patch, both with and without my own patch.
>> Could you please send me the patches? Or do you have your own git
>> repository?
>
> The main patch is in the commit fest as "Resource Owner reassign Locks
> for the sake of pg_dump"
>
> My re-basing of your patch is attached.

I tested your patches against the current master head. The result was
pretty good. Before, it took 125 minutes (with 9.2devel) to dump 100k
empty tables; now it takes less than 4 minutes!
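
[In case anyone wants to reproduce this, the empty tables can be
created with something along these lines -- a sketch of my assumed
setup; the original report actually spread its tables across thousands
of schemas:

DO $$
BEGIN
  FOR i IN 1..100000 LOOP
    EXECUTE format('CREATE TABLE t%s ()', i);
  END LOOP;
END $$;
]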

$ time pg_dump test >/dev/null

real    3m56.412s
user    0m12.059s
sys    0m3.571s

Good job!

Now I applied the rebased pg_dump patch as well:

real    4m1.779s
user    0m11.621s
sys    0m3.052s

Unfortunately I see no further improvement. My patch's value is
probably for dumps against older backends.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
