Thread: tune memory usage for master/slave nodes in cluster


tune memory usage for master/slave nodes in cluster

From
stanimir petrov
Date:
Hi,
I need to tune my PostgreSQL 8.4 clusters for high performance and proper memory usage.

Description:
The database is partitioned by region. Every region has a master and a slave database. The application's write:read ratio is 1:50.
At the moment I have deployed the app on 3 servers as follows:
Node1: db master2, db slave1, pgpool2, Slony-I replication of master2 to slave2.
Node2: db master3, db slave2, pgpool2, Slony-I replication of master3 to slave3.
Node3: db master1, db slave3, pgpool2, Slony-I replication of master1 to slave1.

There is an Apache server on every node that serves PHP pages. Read queries are sent only to the local slave (which is on the same server, to avoid network traffic).
Now I must configure the PostgreSQL memory parameters for the master and for the slave: the slave must use a lot of memory for reads while still applying the replicated data quickly, and the master must be fast at writing.
I also need advice on max_connections: right now it is set to 250, but I am thinking of decreasing it to 200 because I have pgpool in front of every db server.
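For reference, here is a rough sketch of how I understand the pgpool-II sizing constraint (num_init_children and max_pool are pgpool-II parameters; the numbers below are only illustrative, not my actual config):

    # pgpool.conf (illustrative values)
    num_init_children = 32   # pgpool child processes accepting client connections
    max_pool = 4             # cached backend connections per child
    # PostgreSQL must allow at least num_init_children * max_pool backend
    # connections (plus superuser_reserved_connections):
    # 32 * 4 = 128  <=  max_connections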
The database is not so big right now, but it will grow fast. I expect to have 10^5 simultaneous users eventually, but by then the hardware (and maybe other things) will be changed.
Right now the servers are dual-core 2.6 GHz CPUs with 1.7 GB RAM and a standard disk (the small instance from AWS).

The slave (read-only) has
shared_buffers = 128MB
effective_cache_size = 1GB
max_connections = 250
everything else is default
The master (write-only) has default settings.
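As a starting point I was considering something along these lines (only a sketch; the values are my guesses for a 1.7 GB machine, not tested settings):

    # slave (read-mostly), 1.7 GB RAM -- illustrative guesses
    shared_buffers = 256MB              # roughly 15-25% of RAM
    effective_cache_size = 1GB          # rough estimate of the OS cache
    work_mem = 4MB                      # per sort/hash, per connection
    max_connections = 200               # pgpool limits the real client count

    # master (write-mostly) -- illustrative guesses
    shared_buffers = 256MB
    checkpoint_segments = 16            # spread out checkpoint I/O
    checkpoint_completion_target = 0.7
    wal_buffers = 8MB
    maintenance_work_mem = 128MB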




Re: tune memory usage for master/slave nodes in cluster

From
Greg Smith
Date:
stanimir petrov wrote:
> Right now the servers are dual-core 2.6 GHz CPUs with 1.7 GB RAM and a
> standard disk (the small instance from AWS).

You're never going to be able to tune for writing data fast in an AWS
environment; there just isn't enough disk throughput available.  If this
application really does take off the way you expect it to, don't be
surprised to find you have to move it to real hardware to keep up.
Dedicated database servers tend to have tens of disks in them to keep up
with the sort of load you're expecting, and you just can't get that in a
cloud environment.  You can do some work to improve I/O using multiple
storage instances;
http://blog.endpoint.com/2010/02/postgresql-ec2-ebs-raid0-snapshot.html
is a good introduction to that.

The basic tuning advice you're looking for is available at
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

If you are trying to get faster writes out of AWS hardware, you may have
to turn off synchronous_commit to accomplish that.  That has some
potential lost transaction downsides, but simple disks just can't write
data that fast so it may be the only way to make this work well.
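Something along these lines in postgresql.conf (just a sketch; whether the
durability trade-off is acceptable is for you to decide):

    synchronous_commit = off    # commits return before WAL reaches disk;
                                # a crash can lose the last few transactions,
                                # but the database stays consistent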

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us