Re: pg_upgrade on high number tables database issues

From: Jeff Janes
Subject: Re: pg_upgrade on high number tables database issues
Date:
Msg-id: CAMkU=1yMnhpBpPE3__CFPz+pyEm5capExRxo=ncnq5Smj2T2Kg@mail.gmail.com
In reply to: pg_upgrade on high number tables database issues  (Pavel Stehule <pavel.stehule@gmail.com>)
Responses: Re: pg_upgrade on high number tables database issues  (Bruce Momjian <bruce@momjian.us>)
List: pgsql-hackers
On Mon, Mar 10, 2014 at 6:58 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
Hello

I had to migrate our databases from 9.1 to 9.2. We have a high number of databases per cluster (more than 1000) and a high number of tables (and indexes) per database (sometimes more than 10K, exceptionally more than 100K).

I saw two problems:

a) very large files

pg_upgrade_dump_db.sql and pg_upgrade_dump_all.sql end up in the postgres HOME directory. It is not possible to change the directory for these files.

Those files should go into whatever your current directory is when you execute pg_upgrade.  Why not just cd into whatever directory you want them to be in?
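For example, a minimal sketch of that advice (all paths below are hypothetical and must be adjusted to your installation; the option names are pg_upgrade's standard long options):

```shell
# Run pg_upgrade from a dedicated working directory so that
# pg_upgrade_dump_db.sql / pg_upgrade_dump_all.sql are written there
# instead of the postgres user's HOME directory.
# Paths are examples only -- adjust to your installation layout.
mkdir -p /var/tmp/pg_upgrade_work
cd /var/tmp/pg_upgrade_work

/usr/pgsql-9.2/bin/pg_upgrade \
    --old-bindir=/usr/pgsql-9.1/bin \
    --new-bindir=/usr/pgsql-9.2/bin \
    --old-datadir=/var/lib/pgsql/9.1/data \
    --new-datadir=/var/lib/pgsql/9.2/data
```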

b) a very slow first stage of the upgrade: the schema export is very slow, yet without high IO or CPU utilization.

Does just the pg_upgrade executable have low IO and CPU utilization, or does the entire server?

Several bottlenecks in this area were removed in 9.2 and 9.3.  Unfortunately, the worst of those bottlenecks were in the server, so the improvements depend on which version you are upgrading from, and won't help you much upgrading from 9.1.
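One way to see where the time goes (a sketch, assuming the old 9.1 cluster is still running and that the database name and port are placeholders) is to time a schema-only dump of one of the large databases directly, since pg_upgrade's first stage is essentially a schema-only pg_dump of every database:

```shell
# Time a schema-only dump of one large database against the old
# cluster; this approximates pg_upgrade's per-database dump step.
# "big_database" and the port are examples only.
time pg_dump --schema-only --port=5432 big_database > /dev/null
```

If this is slow while the server's IO and CPU stay idle, the bottleneck is likely in the per-object catalog queries on the old server rather than in pg_upgrade itself.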

Cheers,

Jeff

In the pgsql-hackers list, by date sent:

Previous
From: Amit Kapila
Date:
Message: Re: Retain dynamic shared memory segments for postmaster lifetime
Next
From: Robert Haas
Date:
Message: Re: Retain dynamic shared memory segments for postmaster lifetime