table partitioning & max_locks_per_transaction

From: Brian Karlak
Subject: table partitioning & max_locks_per_transaction
Date:
Msg-id: A4945A16-09BD-42F3-B603-B1558A199A1B@metaweb.com
Replies: Re: table partitioning & max_locks_per_transaction  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-performance
Hello All --

I have implemented table partitioning in order to increase performance
in my database-backed queuing system.  My queue is partitioned by
job_id into separate tables that all inherit from a base "queue" table.
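For reference, the setup looks roughly like this (a minimal sketch; everything
other than the "queue" table and the job_id column is made up for illustration):

    -- Parent table holding no rows of its own
    CREATE TABLE queue (
        job_id   integer NOT NULL,
        payload  text
    );

    -- One child per job, constrained so constraint exclusion can prune it
    CREATE TABLE queue_1234 (
        CHECK (job_id = 1234)
    ) INHERITS (queue);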

Things were working swimmingly until my system started managing
thousands of jobs.  As soon as I had ~1070 queue subtables, queries to
the main queue table would fail with:

    ERROR:  out of shared memory
    HINT:  You might need to increase max_locks_per_transaction.

I found this thread on the archives:

    http://archives.postgresql.org/pgsql-general/2007-08/msg01992.php

Still, I have a few questions/problems:

1) We've already tuned Postgres to use ~2 GB of shared memory -- which
is SHMMAX for our kernel.  If I try to increase
max_locks_per_transaction, Postgres will not start because our shared
memory requirement exceeds SHMMAX.  How can I increase
max_locks_per_transaction without having my shared memory requirements
increase?
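For context, the docs say the lock table is sized for roughly
max_locks_per_transaction * (max_connections + max_prepared_transactions)
lock slots, so one possibility might be to free shared memory elsewhere to
make room.  A hypothetical postgresql.conf sketch (all values made up, not a
recommendation):

    # Trade some buffer cache for lock slots, staying under the same SHMMAX:
    shared_buffers = 1800MB            # reduced to free shared memory
    max_connections = 100
    max_locks_per_transaction = 4096   # room for ~1070 children plus indexes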

2) Why do I need locks for all of my subtables, anyway?  I have
constraint_exclusion on.  The query planner tells me that only three
tables are used by the queries that are failing.  Why are locks
allocated for all of the subtables?  Is there any way to prevent this?
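For what it's worth, the extra locks are visible in pg_locks (a sketch
against the queue table described above; the job_id value is made up):

    BEGIN;
    -- Constraint exclusion prunes children from the plan, but the planner
    -- still takes AccessShareLock on every child in order to plan the query:
    SELECT * FROM queue WHERE job_id = 1234;
    SELECT count(*) FROM pg_locks WHERE mode = 'AccessShareLock';
    COMMIT;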

Many thanks in advance for any and all help anyone can provide!

Brian
