Re: Partitioning into thousands of tables?

From: Vick Khera
Subject: Re: Partitioning into thousands of tables?
Date:
Msg-id: AANLkTi=h-13OPfaO17mVWeGrKK7GZt9ZX8mPD2s0nazr@mail.gmail.com
In reply to: Partitioning into thousands of tables?  (Data Growth Pty Ltd <datagrowth@gmail.com>)
List: pgsql-general
On Fri, Aug 6, 2010 at 1:10 AM, Data Growth Pty Ltd
<datagrowth@gmail.com> wrote:
> I have a table of around 200 million rows, occupying around 50G of disk.  It
> is slow to write, so I would like to partition it better.
>

How big do you expect your data to get?  I have two tables partitioned
into 100 subtables using a modulo operator on the PK integer ID
column.  This keeps the row counts for each partition in the 5-million
range, which postgres handles extremely well.  When I do a mass
update/select that causes all partitions to be scanned, it is very
fast at skipping over partitions based on a quick index lookup.
Nothing really gets hammered.
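For concreteness, here is a minimal sketch of that kind of modulo scheme using the inheritance-based partitioning PostgreSQL offered at the time of this thread. The table and column names are hypothetical, and only 4 buckets are shown instead of 100:

-- Parent table; the children hold the actual rows.
CREATE TABLE events (
    id      bigint NOT NULL,
    payload text
);

-- One child per modulo bucket, with a CHECK constraint
-- documenting which IDs it holds.
CREATE TABLE events_p0 (CHECK (id % 4 = 0)) INHERITS (events);
CREATE TABLE events_p1 (CHECK (id % 4 = 1)) INHERITS (events);
CREATE TABLE events_p2 (CHECK (id % 4 = 2)) INHERITS (events);
CREATE TABLE events_p3 (CHECK (id % 4 = 3)) INHERITS (events);

-- Indexes are not inherited, so each child gets its own.
CREATE UNIQUE INDEX events_p0_id ON events_p0 (id);
CREATE UNIQUE INDEX events_p1_id ON events_p1 (id);
CREATE UNIQUE INDEX events_p2_id ON events_p2 (id);
CREATE UNIQUE INDEX events_p3_id ON events_p3 (id);

-- Trigger routes each insert on the parent to the right child.
CREATE OR REPLACE FUNCTION events_route() RETURNS trigger AS $$
BEGIN
    EXECUTE 'INSERT INTO events_p' || (NEW.id % 4)::text
            || ' SELECT ($1).*' USING NEW;
    RETURN NULL;  -- the row is already stored in the child
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_route_trig
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_route();

With this layout, a query like SELECT * FROM events WHERE id = 12345 visits each child only via a cheap index probe, which matches the "quick index lookup" behavior described above; queries that also filter on id % 4 can be pruned outright when constraint_exclusion is enabled. Modern PostgreSQL (10+) can express much the same layout declaratively with PARTITION BY HASH, with no trigger needed.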
