Re: Thousands databases or schemas

From: Mark Thornton
Subject: Re: Thousands databases or schemas
Date:
Msg-id: 509BB8B5.9080308@optrak.com
In response to: Thousands databases or schemas (Denis <socsam@gmail.com>)
List: pgsql-performance
On 08/11/12 09:36, Denis wrote:
> We have a web application where we create a schema or a database with a
> number of tables in it for each customer. Now we have about 2600 clients.
>
> The problem we ran into using a separate DB for each client is that
> creating a new DB can take up to 2 minutes, which is absolutely
> unacceptable. Using schemas instead (one DB with a number of schemas
> containing similar tables) solved this problem (schemas are created in a
> couple of seconds), but created two other blocking points:
> 1. Sometimes creating a new table in a schema takes up to 5 seconds. When
> we have to create up to 40 tables in a schema, this takes far too much
> time.
> 2. "pg_dump -n schema_name db_name" takes 30 to 60 seconds, no matter how
> much data is in the schema. Even dumping the table structure alone takes
> at least 30 seconds. According to this thread
> http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-td5709766i60.html,
> pg_dump always analyses ALL the tables in the DB, i.e. in my case more
> than 100 000 tables.
The obvious solution would be to write your own version of pg_dump that
examines only the tables within a schema; you could even start from the
source of the standard pg_dump! Alternatively, you could eliminate the
per-customer schemas/tables altogether and add an extra 'customer' key
column to each table, then modify pg_dump to dump only the rows of each
table matching a given customer id.
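
A minimal sketch of that shared-table layout, assuming a hypothetical
'orders' table (the table name, columns, and customer id 42 are
illustrative, not from the original design):

    -- One shared table for all customers, keyed by customer_id,
    -- instead of a copy of the table in every per-customer schema.
    CREATE TABLE orders (
        customer_id integer     NOT NULL,
        order_id    integer     NOT NULL,
        placed_at   timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (customer_id, order_id)
    );

    -- Export one customer's rows from psql; \copy runs client-side,
    -- much like pg_dump writing plain-text output.
    \copy (SELECT * FROM orders WHERE customer_id = 42) TO 'orders_42.csv' CSV HEADER

Dumping a customer then becomes a WHERE clause instead of a catalog scan
over 100 000+ tables, at the cost of the isolation that separate schemas
gave you.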

Mark Thornton


