Re: Postgres configuration for 64 CPUs, 128 GB RAM...

From: Jim C. Nasby
Subject: Re: Postgres configuration for 64 CPUs, 128 GB RAM...
Date:
Msg-id: 20070717154759.GB39272@nasby.net
In reply to: Postgres configuration for 64 CPUs, 128 GB RAM...  ("Marc Mamin")
List: pgsql-performance

On Tue, Jul 17, 2007 at 04:10:30PM +0200, Marc Mamin wrote:
> shared_buffers= 262143

You should at least try some runs with this set far, far larger. At
least 10% of memory, but it'd be nice to see what happens with this set
to 50% or higher as well (though don't set it larger than the database
since it'd be a waste).

How big is the database, anyway?
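For example, to see how big the database actually is (the database name below is a placeholder):

```sql
-- Total on-disk size of one database; replace 'yourdb' with the real name:
SELECT pg_size_pretty(pg_database_size('yourdb'));
```

With 8 kB pages, 50% of 128 GB would be shared_buffers = 8388608, versus the
262143 (about 2 GB) above.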

> # on our current best production server with 4GB RAM (not dedicated to
> Postgres), work_mem is set to 600 MB
> # this limitation is probably the bottleneck for our application, as the
> files in pgsql_tmp grow up to 15 GB
> # during large aggregations (we have a locking mechanism to avoid
> parallel processing of such transactions)

Keep in mind that a good filesystem will be caching most of pgsql_tmp if
it can.
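Also note that work_mem can be raised for just the session running the big aggregation instead of globally; something like this (using the 600 MB figure you mentioned, expressed in kB):

```sql
SET work_mem = 614400;   -- 600 MB, for this session only
-- ... run the large aggregation ...
RESET work_mem;
```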

> max_prepared_transaction = 100

Are you using 2PC? If not, there's no reason to touch this (you could
just set it to 0).
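If you're not using PREPARE TRANSACTION anywhere, this frees the shared memory those slots reserve (note the parameter name is max_prepared_transactions, plural):

```ini
max_prepared_transactions = 0
```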

> # I use the defaults for the bgwriter as I couldn't find recommendations on
> those
>
> #bgwriter_delay = 200ms                 # 10-10000ms between rounds
> #bgwriter_lru_percent = 1.0             # 0-100% of LRU buffers scanned/round
> #bgwriter_lru_maxpages = 5              # 0-1000 buffers max written/round
> #bgwriter_all_percent = 0.333           # 0-100% of all buffers scanned/round
> #bgwriter_all_maxpages = 5              # 0-1000 buffers max written/round

You'll probably want to increase both maxpages parameters substantially,
assuming that you've got good IO hardware.
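As a starting point only (these numbers are illustrative, not benchmarked for your hardware):

```ini
bgwriter_lru_maxpages = 100
bgwriter_all_maxpages = 100
```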

> #CHECKPOINT
>
> # xlog will be  on a separate disk
> checkpoint_segments=256
>
> checkpoint_timeout = 5min

The further apart your checkpoints, the better. Might want to look at 10
minutes. I'd also set checkpoint_warning to just a bit below
checkpoint_timeout and watch for warnings to make sure you're not
checkpointing a lot more frequently than you're expecting.
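Something along these lines (illustrative values; watch the logs and adjust):

```ini
checkpoint_timeout = 10min
checkpoint_warning = 540s    # log a warning if checkpoints come faster than this
```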
--
Jim Nasby                                      
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
