Thread: increasing max_pred_locks_per_transaction, what should I look for?


increasing max_pred_locks_per_transaction, what should I look for?

From: Brian Ferhle
Date:
Hi all,

I've got a situation where I need to increase the max_pred_locks_per_transaction value to allow the addition of a slony
node to complete on postgres 9.1. We had this issue before and we increased the default value from 64 to 128, but now we
have a sync that takes even longer and the value of 128 isn't enough either.

Looking at the activity I saw in pg_locks, I believe I need to set it to at least 256, but with a value this high I want
to make sure that I don't cause other issues with the system. The docs mention that it may need to request more System V
shared memory, but should I also re-calculate other parameters such as work_mem, shared_buffers, etc.?
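For reference, one way to gauge how close the system is to the limit is to count the SIReadLock entries in pg_locks while the sync is running (a sketch; run against the live 9.1 instance, ideally as superuser so all backends are visible):

```sql
-- Count predicate (SIRead) locks held per backend. The total across all
-- backends must fit within roughly
--   max_pred_locks_per_transaction * (max_connections + max_prepared_transactions).
SELECT pid, count(*) AS siread_locks
  FROM pg_locks
 WHERE mode = 'SIReadLock'
 GROUP BY pid
 ORDER BY siread_locks DESC;

-- Overall total across all backends:
SELECT count(*) AS total_siread_locks
  FROM pg_locks
 WHERE mode = 'SIReadLock';
```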

Current System:
Postgres 9.1.3
252 GB system memory
shared_buffers 40GB
work_mem 200MB
max_connections = 300
max_prepared_transactions = 0

free -g
             total       used       free     shared    buffers     cached
Mem:           252        239         12          0          1        221

I have a lot of cached memory, but I just want to make sure this isn't something that cascades out of control and I'm
suddenly having major 'out of memory' issues.

thanks in advance,
- Brian F

Re: increasing max_pred_locks_per_transaction, what should I look for?

From: "Kevin Grittner"
Date:
Brian Ferhle <brianf@consistentstate.com> wrote:

> I've got a situation where I need to increase
> max_pred_locks_per_transaction value to allow the addition of a
> slony node to complete on postgres 9.1. We had this issue before
> and we increased the default value from 64 to 128, but now we have
> a sync that takes even longer and the value for 128 isn't enough
> either.
>
> Looking at the activity I saw in pg_locks, I believe I need to set
> it to at least 256, but a value this high I want to make sure that
> I don't cause other issues with the system. The docs mention that
> it may need to request more System V shared memory, but should I
> also re-calculate other parameters such as work_mem,
> shared_buffers, etc?
>
> Current System:
> Postgres 9.1.3
> 252 GB system memory
> shared_buffers 40GB
> work_mem 200MB
> max_connections = 300
> max_prepared_transactions = 0
>
> free -g
>              total       used       free     shared    buffers
> cached
> Mem:           252        239         12          0          1
>    221
>
> I have a lot of cached memory, but I just want to make sure this
> isn't something that cascades out of control and I'm suddenly
> having major 'out of memory' issues.

Given all the other settings, doubling
max_pred_locks_per_transaction would probably increase the shared
memory needs by something less than 24MB.  As long as the OS isn't
near its limit on shared memory allocations, this would come out of
OS cache without any other significant effects.
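To put rough numbers on that estimate: the predicate lock table is sized at max_pred_locks_per_transaction × (max_connections + max_prepared_transactions) entries. A back-of-the-envelope check (a sketch only; the ~500 bytes per entry is an illustrative assumption, not a measured figure) runs on any server:

```sql
-- Extra predicate-lock slots from raising the setting 128 -> 256,
-- with max_connections = 300 and max_prepared_transactions = 0.
-- Assumes roughly 500 bytes per entry (an illustrative guess).
SELECT (256 - 128) * (300 + 0)                                 AS extra_entries,
       pg_size_pretty(((256 - 128) * (300 + 0) * 500)::bigint) AS approx_extra_memory;
-- extra_entries = 38400; approx_extra_memory comes out around 18 MB,
-- consistent with the "less than 24MB" estimate above.
```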

The other issue that jumps out at me, though, is max_connections =
300.  How many cores are in this machine, and what does the storage
system look like?  You might well benefit from a connection pool
which limits the number of concurrent transactions which are active.
In general, 300 connections is more than what is optimal for both
throughput and latency, and when using serializable transactions you
might see the impact of too many connections sooner than at other
transaction isolation levels.
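As a concrete illustration of that kind of pooling (a sketch only; the database name and pool sizes here are hypothetical and must be tuned for your core count and storage), a transaction-level pool in PgBouncer might look like:

```ini
; pgbouncer.ini sketch -- hypothetical values, tune for your hardware
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 300      ; clients can still open 300 connections...
default_pool_size = 35     ; ...but only this many hit the server at once
```

Note that transaction-level pooling drops per-session state between transactions, so check that it is compatible with how your application and slony connect.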

We have a web application which at any one moment typically has
several thousand users actively hitting it, and we incrementally
changed our connection pool size until we found the "sweet spot"
where we got best performance for that load -- it was with 35
connections for the web application and 6 connections for
replicating data from the 72 sources that feed into the database.
That's on a system with 16 cores, 40 spindles, and a moderate level
of caching (5TB of database and 128GB RAM).

On the other hand, I would really like to improve the heuristic used
for promoting predicate locks from one granularity to another, to
allow more graceful performance degradation when predicate locks
get tight, but I've lacked data on what sorts of workloads hit this.
If you could send me (off list) a copy of your pg_locks data when you
are at or approaching this problem, it would be helpful in adjusting
this.  A rough description of the workload would help, too.
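One way to capture that data while the sync is running (a sketch; the output filename is arbitrary) is from psql:

```sql
-- Dump the full pg_locks view to a CSV while the slony sync is running:
\copy (SELECT * FROM pg_locks) TO 'pg_locks_snapshot.csv' WITH CSV HEADER

-- A summary by mode and lock type is often useful context as well:
SELECT mode, locktype, count(*)
  FROM pg_locks
 GROUP BY mode, locktype
 ORDER BY count(*) DESC;
```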

-Kevin