Re: Dump/Reload pg_statistic to cut time from pg_upgrade?
| From | Tom Lane |
|---|---|
| Subject | Re: Dump/Reload pg_statistic to cut time from pg_upgrade? |
| Date | |
| Msg-id | 26325.1373474786@sss.pgh.pa.us |
| In reply to | Re: Dump/Reload pg_statistic to cut time from pg_upgrade? (Jerry Sievers <gsievers19@comcast.net>) |
| Responses | Re: Dump/Reload pg_statistic to cut time from pg_upgrade? |
| List | pgsql-admin |
Jerry Sievers <gsievers19@comcast.net> writes:
> Kevin Grittner <kgrittn@ymail.com> writes:
>> Jerry Sievers <gsievers19@comcast.net> wrote:
>>> Planning to pg_upgrade some large (3TB) clusters using hard link
>>> method. Run time for the upgrade itself takes around 5 minutes.
>>> Unfortunately the post-upgrade analyze of the entire cluster is going
>>> to take a minimum of 1.5 hours running several threads to analyze all
>>> tables. This was measured in an R&D environment.
At least for some combinations of source and destination server
versions, it seems like it ought to be possible for pg_upgrade to just
move the old cluster's pg_statistic tables over to the new, as though
they were user data. pg_upgrade takes pains to preserve relation OIDs
and attnums, so the key values should be compatible. Except in
releases where we've added physical columns to pg_statistic or made a
non-backward-compatible redefinition of statistics meanings, it seems
like this should Just Work. In cases where it doesn't work, pg_dump
and reload of that table would not work either (even without the
anyarray problem).
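
[Editorial note: a minimal sketch, not from the thread, of why a plain dump-and-reload of pg_statistic runs into the anyarray problem mentioned above. The output path and the schema filter are made up for illustration; it assumes a superuser and an identical pg_statistic layout on both versions.]

```sql
-- Exporting the rows works: anyarray columns (stavalues1..stavalues5) have an
-- output function, so COPY TO can serialize them as text.
COPY (
    SELECT s.*
    FROM pg_statistic s
    JOIN pg_class     c ON c.oid = s.starelid
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
) TO '/tmp/pg_statistic.copy';

-- Reloading is where it breaks down: anyarray has no input function, so the
-- following is rejected on the stavalues* columns -- the "anyarray problem".
-- COPY pg_statistic FROM '/tmp/pg_statistic.copy';
```

Because pg_upgrade preserves relation OIDs and attnums, the starelid/staattnum keys in such a dump would still point at the right objects in the new cluster; the obstacle is getting the anyarray values back in, which is why moving the table at the storage level (or teaching the server to accept the values) would be needed rather than a textual reload.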
regards, tom lane