Re: Partitioning into thousands of tables?

From Scott Marlowe
Subject Re: Partitioning into thousands of tables?
Date
Msg-id AANLkTinn-WJgTVHRdqiBG-Y8Q2JM+v2PcnqEpq10L92Y@mail.gmail.com
In response to Re: Partitioning into thousands of tables?  (Joshua Tolley <eggyknap@gmail.com>)
List pgsql-general
On Fri, Aug 6, 2010 at 8:08 AM, Joshua Tolley <eggyknap@gmail.com> wrote:
> On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>>    Is there any significant performance problem associated with partitioning
>>    a table into 2500 sub-tables?  I realise a table scan would be horrendous,
>>    but what if all accesses specified the partitioning criteria "sid".  Such
>>    a scheme would be the simplest to maintain (I think) with the best
>>    localisation of writes.
>
> I seem to remember some discussion on pgsql-hackers recently about the number
> of partitions and its effect on performance, especially planning time.
> Unfortunately I can't find it right now, but in general the conclusion was
> it's bad to have lots of partitions, where "lots" is probably 100 or more.

When it comes to planning time, it's a trade-off.  If you have a
reporting database that routinely runs queries taking 30 seconds to
30 minutes, an extra 10 seconds of planning is no big deal.  If you
need your queries to run in sub-second times, then an extra 10 seconds
is a very big deal.

We partition our stats data at work by day, and keep it around for
years.  So we have 600 to 1000 partitions there.  But any query we
run takes minutes to complete, so a little extra planning time is no
big deal for us.
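For anyone following along, a minimal sketch of the kind of setup being discussed: in the PostgreSQL versions of this era, partitioning is done manually with table inheritance plus CHECK constraints, and the planner prunes partitions via constraint exclusion.  All table and column names below are illustrative, not taken from either poster's actual schema.

```sql
-- Parent table; holds no rows itself in a typical setup.
CREATE TABLE stats (
    sid        integer NOT NULL,
    logged_at  timestamptz NOT NULL,
    value      bigint
);

-- One child table per day.  The CHECK constraint is what lets the
-- planner exclude irrelevant partitions at plan time.
CREATE TABLE stats_2010_08_06 (
    CHECK (logged_at >= '2010-08-06' AND logged_at < '2010-08-07')
) INHERITS (stats);

-- With constraint_exclusion enabled, a query that filters on the
-- partition key only scans the matching children.
SET constraint_exclusion = on;
SELECT count(*)
  FROM stats
 WHERE logged_at >= '2010-08-06' AND logged_at < '2010-08-07';
```

The planning-time cost mentioned above comes from the planner having to examine every child table's constraints for each query, which is why it grows with the number of partitions.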
