Discussion: Re: effective_cache_size
Joel Benelli <joel.benelli@gmail.com> writes:
> We have a Linux server 16GB RAM and HD SSD with Postgres 9.5 and
> configuration:
> ...
> effective_cache_size = 12GB
> With these settings the memory usage is between 10 ~ 13GB and when running
> a dump occur errors of "out of memory" preventing new requests. By changing
> the effective_cache_size setting to 6GB, the memory usage is between 8 ~
> 12GB, without errors during the dump.

That seems *very* odd.  The only thing effective_cache_size actually affects is the planner's opinion of the cost of indexscans [1], so that it might choose different query plans after you change the setting.  But there's no obvious reason why that would lead to memory problems in one case and not the other.

I'd suggest looking in the postmaster log to discover exactly which query is failing, and then manually EXPLAIN'ing that query with both settings to see what plans we are talking about.  That might yield some more insight.

regards, tom lane
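Since effective_cache_size can be changed per-session without a restart, that side-by-side comparison can be done in a single psql session; the table and column names below are placeholders, not taken from the original report:

```sql
-- Sketch: compare the planner's choice under both settings.
-- effective_cache_size is a session-settable GUC, so no restart is needed.
SET effective_cache_size = '12GB';
EXPLAIN SELECT * FROM my_table WHERE some_column = 42;   -- plan under 12GB

SET effective_cache_size = '6GB';
EXPLAIN SELECT * FROM my_table WHERE some_column = 42;   -- plan under 6GB
```

If the two EXPLAIN outputs differ (e.g. index scan vs. sequential scan), that difference, not the setting itself, is the place to look for the memory behavior.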
Hi,
We have a Linux server 16GB RAM and HD SSD with Postgres 9.5 and configuration:
max_connections = 155
shared_buffers = 4GB
work_mem = 14MB
maintenance_work_mem = 744MB
effective_io_concurrency = 200
effective_cache_size = 12GB
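As a rough sanity check on these numbers, a back-of-envelope ceiling can be computed from the settings above; this is an assumed simplified formula (each backend may actually use work_mem once per sort/hash node, so the real worst case can be higher), not an exact accounting:

```python
# Back-of-envelope memory estimate from the posted configuration.
# Assumption: one work_mem allocation per connection, which understates
# the true worst case but gives a baseline.
shared_buffers_mb = 4 * 1024       # shared_buffers = 4GB
work_mem_mb = 14                   # work_mem = 14MB
max_connections = 155
maintenance_work_mem_mb = 744      # used e.g. by VACUUM, CREATE INDEX

per_backend_total_mb = max_connections * work_mem_mb   # 2170 MB
ceiling_mb = shared_buffers_mb + per_backend_total_mb + maintenance_work_mem_mb
print(ceiling_mb)  # 7010 MB, well under 16GB on this baseline
```

The gap between this baseline (~7GB) and the observed 10 ~ 13GB is consistent with per-query allocations beyond one work_mem per backend, which is why identifying the failing query's plan matters.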
With these settings, memory usage is between 10 ~ 13GB, and when running a dump, "out of memory" errors occur, preventing new requests. By changing effective_cache_size to 6GB, memory usage stays between 8 ~ 12GB, with no errors during the dump.
From the manual I understood that effective_cache_size only influences the planner, and should not have such a large impact on memory usage. Has anyone studied this setting and can explain why this behavior occurs?
Kind regards