Re: pg_dump and thousands of schemas

From: Hugo
Subject: Re: pg_dump and thousands of schemas
Date:
Msg-id: 1343799184312-5718532.post@n5.nabble.com
In reply to: Re: pg_dump and thousands of schemas (Tatsuo Ishii <ishii@postgresql.org>)
List: pgsql-performance
Hi guys,

I just want to let you know that we have created our own solution to dump
and restore our databases. The tool is written in Java and the source is
now on GitHub (MIT license): https://github.com/tig100/JdbcPgBackup

The main approach is to cache all database objects - schemas, tables,
indexes, etc. Instead of having Postgres perform joins between the
pg_catalog tables (which in our case contain thousands of schemas in
pg_namespace and millions of columns in pg_attribute), we do full table
scans of each catalog and then determine which schema or table an object
belongs to by looking it up in a hash map in Java, keyed by schema and
table OIDs. The dump is not transactionally safe, so it should be
performed only on a replica database (with WAL replay disabled), not on a
live database. Some statistics:
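The hash-map join described above can be sketched roughly as follows. This is a minimal illustration, not code from JdbcPgBackup: the record types stand in for rows that the real tool would fetch via JDBC full scans such as `SELECT oid, nspname FROM pg_namespace` and `SELECT oid, relnamespace, relname FROM pg_class`, and all class and method names here are hypothetical.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CatalogJoinSketch {
    // Stand-ins for catalog rows fetched with plain full scans (no SQL joins).
    record Namespace(long oid, String nspname) {}
    record Table(long oid, long relnamespace, String relname) {}

    // Resolve each table's schema in Java instead of joining in PostgreSQL.
    static Map<Long, String> qualifiedNames(List<Namespace> namespaces,
                                            List<Table> tables) {
        // One pass over pg_namespace rows: schema OID -> schema name.
        Map<Long, String> schemaByOid = new HashMap<>();
        for (Namespace ns : namespaces) {
            schemaByOid.put(ns.oid(), ns.nspname());
        }

        // One pass over pg_class rows: each lookup is O(1) in the hash map,
        // so the total cost stays linear even with thousands of schemas.
        Map<Long, String> result = new HashMap<>();
        for (Table t : tables) {
            String schema = schemaByOid.get(t.relnamespace());
            result.put(t.oid(), schema + "." + t.relname());
        }
        return result;
    }

    public static void main(String[] args) {
        List<Namespace> namespaces = List.of(new Namespace(2200L, "public"),
                                             new Namespace(16384L, "tenant_1"));
        List<Table> tables = List.of(new Table(16400L, 16384L, "orders"));
        System.out.println(qualifiedNames(namespaces, tables).get(16400L));
        // prints tenant_1.orders
    }
}
```

The point of this shape is that the server only ever executes cheap sequential scans of single catalogs; the join work, which is what blows up when pg_namespace and pg_attribute are huge, moves into constant-time map lookups on the client.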

Dump 11,000 schemas = 3 hours.
Dump 28,000 schemas = 8 hours.

You can read more about the tool on the github page.

Best regards,
Hugo





In the pgsql-performance list, by date:

Previous
From: Craig James
Date:
Message: Re: ZFS vs. UFS
Next
From: "Kevin Grittner"
Date:
Message: Re: Using ctid column changes plan drastically