Re: Postgres configuration for 64 CPUs, 128 GB RAM...

From: Gavin M. Roy
Subject: Re: Postgres configuration for 64 CPUs, 128 GB RAM...
Msg-id: af1bce590707201718r7ae94471hc42ccdefa8654ba8@mail.gmail.com
In reply to: Postgres configuration for 64 CPUs, 128 GB RAM...  ("Marc Mamin" <M.Mamin@intershop.de>)
List: pgsql-performance
Having done something similar recently, I would recommend adding connection pooling with pgBouncer, in transaction-pooling mode, between your benchmark app and PgSQL.  In our application we have about 2000 clients funneling down to 30 backends, and we are able to sustain a large transactions-per-second volume.  This has been the #1 key to success for us in running on monster hardware.
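As a rough illustration of the setup described above, a minimal pgbouncer.ini might look like the sketch below; the database name, ports, and userlist path are placeholders, not values from the original setup:

```ini
; minimal pgbouncer.ini sketch -- names, ports, and paths are placeholders
[databases]
; route "benchdb" to the local Postgres instance
benchdb = host=127.0.0.1 port=5432 dbname=benchdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is assigned to a client only
; for the duration of a transaction, so ~2000 clients can share few backends
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 30
```

Clients then connect to port 6432 instead of 5432; pgBouncer multiplexes them onto the small backend pool.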

Regards,

Gavin

On 7/17/07, Marc Mamin <M.Mamin@intershop.de> wrote:

Postgres configuration for 64 CPUs, 128 GB RAM...

Hello,

We have the opportunity to benchmark our application on a large server. I have to prepare the Postgres configuration, and I'd appreciate some comments on it, as I am not experienced with servers of this scale. Moreover, the configuration should be fail-proof, as I won't be able to attend the tests.

Our application (Java + Perl) and Postgres will run on the same server; application activity is low while Postgres has large transactions to process.

There is a large gap between our current production server (Linux, 4 GB RAM, 4 CPUs) and the benchmark server; one of the targets of this benchmark is to verify the scalability of our application.

And you have no reason to be envious, as the server doesn't belong to us :-)

Thanks for your comments,

Marc Mamin



Postgres version: 8.2.1

Server Specifications:
----------------------

Sun SPARC Enterprise M8000 Server:

http://www.sun.com/servers/highend/m8000/specs.xml

File system:

http://en.wikipedia.org/wiki/ZFS

Planned configuration:
--------------------------------

# we don't expect more than 150 parallel connections,
# but I suspect a leak in our application that leaves some idle connections open

max_connections=2000
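To confirm the suspected leak, one way (not from the original post) is to count idle backends in pg_stat_activity; on 8.2, an idle session reports the literal string '<IDLE>' in the current_query column, provided stats_command_string is enabled:

```sql
-- Count idle sessions per database/user; on 8.2 an idle backend
-- shows '<IDLE>' in current_query (requires stats_command_string = on).
SELECT datname, usename, count(*) AS idle_sessions
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
GROUP BY datname, usename
ORDER BY idle_sessions DESC;
```

A steadily growing count for one user is a strong hint of connections being opened and never closed.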

ssl = off

#maximum allowed
shared_buffers= 262143

# on our current best production server with 4 GB RAM (not dedicated to Postgres), work_mem is set to 600 MB
# this limitation is probably the bottleneck for our application, as the files in pgsql_tmp grow up to 15 GB
# during large aggregations (we have a locking mechanism to avoid parallel processing of such transactions)
work_mem = 31457280  # (30 GB)
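One caveat worth noting: work_mem is allocated per sort or hash operation per backend, not once globally, so a 30 GB global setting can exhaust RAM under concurrency. A safer pattern (a suggestion, not from the original post) is a modest global value plus a session-level override around the known large aggregations; 8.2 accepts unit suffixes:

```sql
-- Raise work_mem only for the session running the large aggregation;
-- the global setting stays modest for the other ~150 connections.
SET work_mem = '30GB';
-- ... run the large aggregation here ...
RESET work_mem;
```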

# index creation time is also an issue for us; the process blocks other large processes too.
# our largest table so far is 13 GB + 11 GB of indexes
maintenance_work_mem = 31457280  # (30 GB)
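Since maintenance_work_mem is consumed by CREATE INDEX and VACUUM, it can likewise be raised just for the session doing the index build rather than globally; the table and index names below are hypothetical:

```sql
-- Session-level override for a single index build
-- (my_idx / my_table / my_column are placeholder names).
SET maintenance_work_mem = '30GB';
CREATE INDEX my_idx ON my_table (my_column);
RESET maintenance_work_mem;
```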

# more than the max number of tables + indexes expected during the benchmark
max_fsm_relations = 100000

max_fsm_pages = 1800000

# don't know if I should modify this.
# it seems to be sufficient on our production servers
max_stack_depth = 2MB

# vacuum will be done by hand between each test session
autovacuum = off

# required to analyse the benchmark
log_min_duration_statement = 1000

max_prepared_transactions = 100

# seems to be required to drop schemas/roles containing a large number of objects
max_locks_per_transaction = 128
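For reference, the shared lock table is sized from this parameter together with the connection settings above; the arithmetic below uses the documented sizing rule:

```
# The shared lock table holds roughly
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# entries.  With the values above: 128 * (2000 + 100) = 268800 lock slots.
# The pool is shared, so a single DROP SCHEMA ... CASCADE may draw on the
# whole table, not just its own 128-entry average.
```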


# I use the defaults for the bgwriter, as I couldn't find recommendations on those

#bgwriter_delay = 200ms                 # 10-10000ms between rounds
#bgwriter_lru_percent = 1.0             # 0-100% of LRU buffers scanned/round
#bgwriter_lru_maxpages = 5              # 0-1000 buffers max written/round
#bgwriter_all_percent = 0.333           # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5              # 0-1000 buffers max written/round

#WAL

fsync = on

#use default
#wal_sync_method

# we are using 32 on our production system
wal_buffers=64

# we haven't done any testing with this parameter so far, but it shouldn't be a
# relevant point, as our performance focus is on large transactions
commit_delay = 0

#CHECKPOINT

# xlog will be on a separate disk
checkpoint_segments=256

checkpoint_timeout = 5min
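To size that separate xlog disk, the 8.x documentation's rule of thumb is that pg_xlog normally holds no more than 2 * checkpoint_segments + 1 segment files:

```
# With 16 MB segments, pg_xlog normally peaks at
#   (2 * checkpoint_segments + 1) files
#   = (2 * 256 + 1) * 16 MB = 8208 MB, i.e. roughly 8 GB
# so the dedicated xlog disk should comfortably exceed that.
```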


 
