Discussion: Partitioning into thousands of tables?


Partitioning into thousands of tables?

From: Data Growth Pty Ltd
Date:
I have a table of around 200 million rows, occupying around 50G of disk.  It is slow to write, so I would like to partition it better.

The table is roughly:
  id: integer  # unique from sequence
  external_id : varchar(255) # unique, used to interface with external systems, not updated (only select or insert)
  sid : integer # secondary partial id, not unique, not updated (only select or insert)
  columns: many, including text, date etc, frequently updated

Currently I have the table partitioned by ranges of id, with the aim of keeping each partition table to around 1G (so needing around 50 such tables).  Almost every select query is referenced by id, so access to an individual record is OK and scanning is avoided.  "external_id" is moved to its own table, tableB (id, external_id), as it has a unique index, which would otherwise result in all 50 sub-tables being searched.
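
(For reference, each of the current partitions is just a standard inheritance child along these lines -- the table name and range boundaries here are illustrative, not the real ones:

CREATE TABLE mytable_p03 (
    CHECK ( id >= 12000000 AND id < 16000000 )
) INHERITS (mytable);

with non-overlapping id ranges and each child carrying its own index on id.)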

The problem is that when I need to update say 1 million records (which happens a lot), it is very slow because "id" is essentially uncorrelated with the updates I need to perform.  Thus all partitions are hammered.

It turns out that writes (inserts and updates) are highly localised to the column "sid" (all update transactions and most batches of transactions share a single value for "sid" or a pair of values for "sid"). So I would like to partition on "sid" instead.  But "sid" currently has around 2500 unique values, with some values being very common, and some rare.  In hindsight I can measure the size of these sub-ranges, but I can't really predict in advance which will prove to be the highly updated or numerically large sub-ranges.  New values are added almost daily (the total is increasing by around 200 per year).

So my questions:

Is there any significant performance problem associated with partitioning a table into 2500 sub-tables?  I realise a table scan would be horrendous, but what if all accesses specified the partitioning criterion "sid"?  Such a scheme would be the simplest to maintain (I think) with the best localisation of writes.

Is there a particular size for a sub-table that I should aim for or avoid?

If, say, 50 tables is much better than 2500, is there a better way to perform the partitioning than writing a giant trigger function such as:

CREATE OR REPLACE FUNCTION my_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
    -- route each row to the subtable for its hard-coded "sid" group
    IF ( NEW.sid IN (1, 7, 14) ) THEN
        INSERT INTO subtable_1 VALUES (NEW.*);
    ELSIF ( NEW.sid IN (2, 3, 31, 32, 1027, 1028, 1029, 1965) ) THEN
        INSERT INTO subtable_2 VALUES (NEW.*);
    ...
    ELSE
        RAISE EXCEPTION 'SID out of range. Fix the my_insert_trigger() function!';
    END IF;
    RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;
which would need to be updated fairly regularly as new values of "sid" are added, and the frequency of existing "sid" groupings changes.  Can partitioning triggers perform a table lookup, or is this a big performance no-no?
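
To make the lookup idea concrete, here is the sort of thing I have in mind -- just a sketch with made-up names ("sid_to_subtable" as the mapping table), not something I have benchmarked:

CREATE TABLE sid_to_subtable (
    sid           integer PRIMARY KEY,
    subtable_name text    NOT NULL
);

CREATE OR REPLACE FUNCTION my_lookup_insert_trigger()
RETURNS TRIGGER AS $$
DECLARE
    target text;
BEGIN
    -- one primary-key lookup per inserted row
    SELECT subtable_name INTO target FROM sid_to_subtable WHERE sid = NEW.sid;
    IF target IS NULL THEN
        RAISE EXCEPTION 'No subtable mapped for sid %', NEW.sid;
    END IF;
    -- dynamic SQL, since the target table name is only known at run time
    EXECUTE 'INSERT INTO ' || quote_ident(target) || ' SELECT ($1).*' USING NEW;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

Maintenance would then mean inserting rows into sid_to_subtable rather than editing the function, but I have no feel for what the per-row cost of the extra SELECT plus EXECUTE would be in practice.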

Or should I adopt a simpler, more stable "level 1" partitioning that would result in very uneven partition sizes (say a simple hash function on "sid"), and then further partition any of those sub-tables that grow too big (a partition within a partition)?
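
For that "level 1" hash idea I picture ordinary 8.4-style children constrained on a modulo expression, roughly like this (again a sketch with invented names; as far as I can tell constraint exclusion would only skip such children if queries repeat the same "sid % 50" expression):

CREATE TABLE mytable_h07 (
    CHECK ( sid % 50 = 7 )
) INHERITS (mytable);
-- ... one such child per bucket, mytable_h00 .. mytable_h49 ...

with the insert trigger computing NEW.sid % 50 to pick the child, and any oversized bucket split again later.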


Section 5.9 of the manual is a bit brief on examples (particularly the maintenance of complex partitions).  So any pointers or tips would be appreciated.

Stephen

Re: Partitioning into thousands of tables?

From: Joshua Tolley
Date:
On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>    Is there any significant performance problem associated with partitioning
>    a table into 2500 sub-tables?  I realise a table scan would be horrendous,
>    but what if all accesses specified the partitioning criteria "sid".  Such
>    a scheme would be the simplest to maintain (I think) with the best
>    localisation of writes.

I seem to remember some discussion on pgsql-hackers recently about the number
of partitions and its effect on performance, especially planning time.
Unfortunately I can't find it right now, but in general the conclusion was
it's bad to have lots of partitions, where "lots" is probably 100 or more.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com


Re: Partitioning into thousands of tables?

From: Tom Lane
Date:
Joshua Tolley <eggyknap@gmail.com> writes:
> On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>> Is there any significant performance problem associated with partitioning
>> a table into 2500 sub-tables?  I realise a table scan would be horrendous,
>> but what if all accesses specified the partitioning criteria "sid".  Such
>> a scheme would be the simplest to maintain (I think) with the best
>> localisation of writes.

> I seem to remember some discussion on pgsql-hackers recently about the number
> of partitions and its effect on performance, especially planning time.
> Unfortunately I can't find it right now, but in general the conclusion was
> it's bad to have lots of partitions, where "lots" is probably 100 or more.

It's in the fine manual: see last para of
http://www.postgresql.org/docs/8.4/static/ddl-partitioning.html#DDL-PARTITIONING-CAVEATS

            regards, tom lane

Re: Partitioning into thousands of tables?

From: "Davor J."
Date:
> I have a table of around 200 million rows, occupying around 50G of disk.  It is slow to write, so I would like to partition it better.
Have you actually tested this? Why do you expect an improvement? I am quite interested.
 
Manual states:
 
"The benefits will normally be worthwhile only when a table would otherwise be very large. The exact point at which a table will benefit from partitioning depends on the application, although a rule of
thumb is that the size of the table should exceed the physical memory of the database server."
 
Unfortunately, this last sentence is not explained. What operations would benefit from partitioning and what operations would not?
 
Another problem is that there is no time complexity information in the manual, and it can hardly be found on the net either.
 
But here is a try based on my limited understanding from the docs (and on which I would appreciate some feedback):
 
- INSERT on an unconstrained, unindexed, etc. (i.e. a plain table): O(1). So the table size "in itself" doesn't play a role. But you probably have indexes. If they are B-trees you would probably be in the range of O(log(n)). (See http://en.wikipedia.org/wiki/B-tree. Unfortunately it doesn't show complexity, but it does say in the heading that "B-tree is optimized for systems that read and write large blocks of data". Also check http://en.wikipedia.org/wiki/B-tree#Insertion) Now 200M or 2M records... I wouldn't expect much improvement.
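
To put rough numbers on that (my own back-of-the-envelope figures, assuming a few hundred index entries per 8 kB page): a b-tree over 200M rows would be about 4 levels deep, while one over a few million rows would be about 3 levels, so partitioning saves roughly one page visit per index descent. That is why I would not expect a dramatic improvement from partitioning alone.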
 
- UPDATES: I read somewhere that indexes use pointers to the data. I suppose your UPDATE constraints are indexed, so there is no need for sequential scans in the implicit SELECT. So partitioning will not give you better performance. The system cache will do its job here.
 
A possible problem would be if your indexes are larger than your available memory. I do not know what impact that would have, and I think it would be nice if someone could clear that up a bit. What impact would it have on SELECT? But in your case, with 200M records, the indexes probably fit well into memory?
 
Regards,
Davor Josipovic

Re: Partitioning into thousands of tables?

From: Vick Khera
Date:
On Fri, Aug 6, 2010 at 1:10 AM, Data Growth Pty Ltd
<datagrowth@gmail.com> wrote:
> I have a table of around 200 million rows, occupying around 50G of disk.  It
> is slow to write, so I would like to partition it better.
>

How big do you expect your data to get?  I have two tables partitioned
into 100 subtables using a modulo operator on the PK integer ID
column.  This keeps the row counts for each partition in the 5-million
range, which postgres handles extremely well.  When I do a mass
update/select that causes all partitions to be scanned, it is very
fast at skipping over partitions based on a quick index lookup.
Nothing really gets hammered.
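
Roughly, each child here looks like this (the names and bucket count are illustrative, not our exact schema):

CREATE TABLE parent_42 (
    CHECK ( id % 100 = 42 )
) INHERITS (parent);
CREATE INDEX parent_42_id_idx ON parent_42 (id);

so even when every partition has to be visited, each visit is just a small index probe.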

Re: Partitioning into thousands of tables?

From: Scott Marlowe
Date:
On Fri, Aug 6, 2010 at 8:08 AM, Joshua Tolley <eggyknap@gmail.com> wrote:
> On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>>    Is there any significant performance problem associated with partitioning
>>    a table into 2500 sub-tables?  I realise a table scan would be horrendous,
>>    but what if all accesses specified the partitioning criteria "sid".  Such
>>    a scheme would be the simplest to maintain (I think) with the best
>>    localisation of writes.
>
> I seem to remember some discussion on pgsql-hackers recently about the number
> of partitions and its effect on performance, especially planning time.
> Unfortunately I can't find it right now, but in general the conclusion was
> it's bad to have lots of partitions, where "lots" is probably 100 or more.

When it comes to planning time, it's a trade-off.  If you have a
reporting database that routinely runs queries that take 30 seconds to
30 minutes, an extra 10 seconds of planning is no big deal.  If you need
your queries to run in sub-second times, then an extra 10 seconds
is a very big deal.

We partition our stats data at work by day, and keep it around for
years.  So, we have 600 to 1000 partitions there.  But any query we
run takes minutes to run, so a little extra planning time is no big
deal there.