partitioning performance question

From	Kevin Kempter
Subject	partitioning performance question
Date
Msg-id	4FD38ED4.3020003@consistentstate.com
Replies	Re: partitioning performance question  (Robert Klemme <shortcutter@googlemail.com>)
List	pgsql-performance
Hi All;

We have a client with a table where large blobs (bytea) are stored.
The table has a key column containing numbers (like 112362), but
unfortunately it's a varchar column, so the blobs are accessed via
queries like:

select * from bigtable where keycol = '217765'

The primary reason we want to partition the table is maintenance: the
table is well over 1.2 TB in size and they have never successfully
vacuumed it. However, I don't want to make performance even worse. The
table does have a serial key. I'm thinking the best options will be to
partition by range on the serial key, or maybe on the keycol character
column using an IN list in the check constraints, thus allowing the
planner to actually leverage the above SQL. I suspect a typecast to
integer in the check constraints will prove to be a bad idea if the
keycol column remains a varchar.

Thoughts?
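For concreteness, here is a rough sketch of the keycol option, assuming inheritance-based partitioning as in PostgreSQL 9.x (declarative partitioning did not exist yet); the child table name and the IN-list values are made up for illustration:

```sql
-- Hypothetical sketch: one child of problemchild, with a CHECK constraint
-- listing the keycol values it holds as varchar literals. Because the
-- constraint compares varchar to varchar (no integer cast), a query like
--   SELECT * FROM problemchild WHERE keycol = '217765'
-- can be pruned by constraint exclusion.
CREATE TABLE problemchild_p1 (
    CHECK (keycol IN ('100000', '100001', '100002'))  -- real lists would be generated
) INHERITS (problemchild);

CREATE INDEX problemchild_p1_keycol ON problemchild_p1 (keycol);

-- Pruning requires constraint_exclusion = 'partition' (the default) or 'on':
SET constraint_exclusion = partition;
```

One practical caveat with this sketch: very long IN lists make the CHECK constraints expensive to plan against, so the per-child value lists would need to stay reasonably small.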

Here's the table:


                                Table "problemchild"
    Column    |           Type           |                    Modifiers
--------------+--------------------------+--------------------------------------------------------------
 keycol       | character varying        |
 blob_data    | bytea                    |
 removed_date | timestamp with time zone |
 alt_key      | bigint                   | not null default nextval('problemchild_alt_key_seq'::regclass)
Indexes:
    "pc_pkey" PRIMARY KEY, btree (alt_key)
    "key2" btree (keycol)


Thanks in advance



