Re: improve performance of pg_dump with many sequences
| From | Tom Lane |
|---|---|
| Subject | Re: improve performance of pg_dump with many sequences |
| Date | |
| Msg-id | 2885944.1767029161@sss.pgh.pa.us |
| In reply to | Re: improve performance of pg_dump with many sequences (Nathan Bossart <nathandbossart@gmail.com>) |
| List | pgsql-hackers |
Nathan Bossart <nathandbossart@gmail.com> writes:
> Committed.
In the no-good-deed-goes-unpunished department: pg_dump's use
of pg_get_sequence_data() (nee pg_sequence_read_tuple()) is
evidently responsible for the complaint in bug #19365 [1]
that pg_dump can no longer survive concurrent sequence drops.
Given that that function already silently returns NULLs if the
sequence isn't readable for other reasons, I think it'd be
sane to make it silently return NULL if the sequence isn't
there anymore. Unfortunately, that looks like it'd require
nontrivial restructuring of init_sequence().
Or maybe we could make it not use init_sequence()? For the moment,
a plain try_relation_open() plus a check that the relation is a
sequence should do, but I'm not sure how that'd fit into people's
plans for future improvement of the sequence API.
There are other reasons not to like use of init_sequence in this
code path, too. pg_dump's session will build a SeqTable entry for
every sequence in the database, which there could be a lot of,
and it will acquire RowExclusiveLock on every sequence and hold
that to the end of the dump, which seems likely to be troublesome
from a concurrency standpoint. Since pg_get_sequence_data is a
read-only operation this lock level feels wrong.
BTW, I'm unconvinced that pg_dump behaves sanely when this function
does return nulls. I think the ideal thing would be for it to skip
issuing setval(), but right now it looks like it will issue one with
garbage values.
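A hedged sketch of what the pg_dump side could do instead (column indexes and variable names here are assumed for illustration, not taken from the actual dumpSequenceData code):

```c
/*
 * Hypothetical sketch: if pg_get_sequence_data() returned NULLs,
 * emit no setval() for this sequence rather than one built from
 * garbage values.  Column positions are assumed for illustration.
 */
if (PQgetisnull(res, 0, 0))
{
    /* sequence vanished or is unreadable; skip the setval() call */
}
else
{
    last_value = PQgetvalue(res, 0, 0);
    is_called = (strcmp(PQgetvalue(res, 0, 1), "t") == 0);
    /* ... append "SELECT pg_catalog.setval(...)" to the dump ... */
}
```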
regards, tom lane
[1] https://www.postgresql.org/message-id/19365-6245240d8b926327%40postgresql.org