On Wed, Nov 9, 2011 at 2:25 AM, Venkat Balaji <venkat.balaji@verse.in> wrote:
> Hello Everyone,
> I could see the following in the production server (result of the "top"
> command) -
> PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+     COMMAND
> 25265 postgres 15  0 3329m 2.5g 1.9g S  0.0  4.0 542:47.83 postgres: writer process
> Does the "writer process" refer to the bgwriter? We have shared_buffers set
> to 1920 MB (around 1.9 GB).
That process has 2.5G resident (RES), of which 1.9G is shared memory (SHR,
i.e. shared buffers), so the private RAM it's actually using is only ~600MB.
I see no problem.
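To spell out the arithmetic (a minimal sketch; the 2.5g and 1.9g figures come from the top output above):

```python
# Private memory of the writer process = RES - SHR.
# The shared portion (SHR) is mostly shared_buffers, counted once for
# the whole cluster, not once per backend.
res_mb = 2.5 * 1024   # RES: 2.5g resident
shr_mb = 1.9 * 1024   # SHR: 1.9g shared
private_mb = res_mb - shr_mb
print(round(private_mb))  # ~614 MB actually private to the process
```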
> In another similar situation, we have the "postgres writer process" using
> 7 - 8 GB of memory constantly.
I doubt it. Sounds more like you're misreading the output of top.
> pg_tune is suggesting to increase the shared_buffers to 8 GB.
Reasonable.
> If the shared_buffer is not enough, Postgres uses OS cache ?
Not really how things work. The OS uses all spare memory as cache.
PostgreSQL uses shared_buffers as a cache. The OS is much more
efficient about caching in dozens of gigabytes than pgsql is.
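For reference, a postgresql.conf sketch along these lines (the 8GB value is pg_tune's suggestion from this thread; the effective_cache_size figure is an assumption for a 64 GB box, and changing shared_buffers requires a restart):

```
# postgresql.conf -- sketch, not a tuned recommendation
shared_buffers = 8GB            # pg_tune's suggestion (currently 1920MB)
effective_cache_size = 48GB     # planner hint for OS cache size; assumed
                                # value, roughly RAM minus other usage
```

effective_cache_size changes no allocation at all; it only tells the planner how much OS cache it can expect, which is the point above about the kernel doing the bulk of the caching.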
> We have a 64 GB RAM.
> We have decided the following -
> 1. We have 20 databases running in one cluster and all are more or less
> highly active databases.
> 2. We will be splitting the databases across multiple clusters so that
> multiple writer processes work across the databases.
> Please help us if you have any other solutions around this.
You have shown us no actual problem.