Re: table partitioning & max_locks_per_transaction

From Brian Karlak
Subject Re: table partitioning & max_locks_per_transaction
Date
Msg-id 277C854D-63F2-4353-9756-E151CFD2CB90@metaweb.com
In reply to Re: table partitioning & max_locks_per_transaction  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-performance
Tom --

Thanks for the pointers and advice.  We've started by doubling
max_locks_per_transaction and halving shared_buffers; we'll see how it
goes.
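For anyone following along, that change amounts to something like the
following in postgresql.conf (the concrete values are my illustration,
assuming the ~2GB shared_buffers and default max_locks_per_transaction
described below; both settings require a server restart):

```
# postgresql.conf
shared_buffers = 1GB                # was ~2GB, the kernel SHMMAX limit
max_locks_per_transaction = 128     # was the default of 64 (assumed)
```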

Brian

On Oct 10, 2009, at 7:56 PM, Tom Lane wrote:

> Brian Karlak <zenkat@metaweb.com> writes:
>>     "out of shared memory HINT:  You might need to increase
>> max_locks_per_transaction"
>
> You want to do what it says ...
>
>> 1) We've already tuned postgres to use ~2GB of shared memory -- which
>> is SHMMAX for our kernel.  If I try to increase
>> max_locks_per_transaction, postgres will not start because our shared
>> memory is exceeding SHMMAX.  How can I increase
>> max_locks_per_transaction without having my shared memory
>> requirements
>> increase?
>
> Back off shared_buffers a bit?  2GB is certainly more than enough
> to run Postgres in.
>
>> 2) Why do I need locks for all of my subtables, anyways?  I have
>> constraint_exclusion on.  The query planner tells me that I am only
>> using three tables for the queries that are failing.  Why are all of
>> the locks getting allocated?
>
> Because the planner has to look at all the subtables and make sure
> that they in fact don't match the query.  So it takes AccessShareLock
> on each one, which is the minimum strength lock needed to be sure that
> the table definition isn't changing underneath you.  Without *some*
> lock it's not really safe to examine the table at all.
>
>             regards, tom lane
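Tom's explanation above can be quantified with a little arithmetic (a
sketch of my own, not from the thread; the partition and connection
counts are illustrative). Per the PostgreSQL documentation, the shared
lock table holds roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions) entries, shared across all sessions:

```python
def lock_table_slots(max_locks_per_transaction, max_connections,
                     max_prepared_transactions=0):
    """Approximate number of object-lock slots in shared memory,
    per the formula in the PostgreSQL docs."""
    return max_locks_per_transaction * (
        max_connections + max_prepared_transactions)

# Defaults: 64 locks per transaction, 100 connections.
slots = lock_table_slots(64, 100)   # 6400 shared slots

# Planning a query against a heavily partitioned table takes an
# AccessShareLock on every subtable, even though constraint exclusion
# later prunes most of them from the plan.  A few concurrent queries
# can therefore exhaust the table (counts below are hypothetical):
partitions = 500
concurrent_queries = 20
needed = partitions * concurrent_queries  # 10000 > 6400 slots
print(needed > slots)  # lock table exhausted -> "out of shared memory"
```

This is why raising max_locks_per_transaction (or reducing the number of
partitions a query must consider) is the fix, even when the plan itself
touches only three tables.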


In the pgsql-performance list, by date sent:

Previous
From: Heikki Linnakangas
Date:
Message: Re: vacuumdb command
Next
From: Nimesh Satam
Date:
Message: Using unnest function on multi-dimensional array.