Thread: Server crash... trying to figure it out


Server crash... trying to figure it out

From
Wells Oliver
Date:
This has happened twice over the last couple of nights:

2011-05-30 02:08:27 PDT LOG:  server process (PID 29979) was terminated by signal 9: Killed
2011-05-30 02:08:27 PDT LOG:  terminating any other active server processes
2011-05-30 02:08:31 PDT LOG:  all server processes terminated; reinitializing
2011-05-30 02:08:31 PDT FATAL:  could not create shared memory segment: Cannot allocate memory
2011-05-30 02:08:31 PDT DETAIL:  Failed system call was shmget(key=5432001, size=6595420160, 03600).
2011-05-30 02:08:31 PDT HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded
available memory or swap space. To reduce the request size (currently 6595420160 bytes), reduce PostgreSQL's
shared_buffers parameter (currently 786432) and/or its max_connections parameter (currently 203).
    The PostgreSQL documentation contains more information about shared memory configuration.
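[Editor's note: the numbers in the HINT line can be sanity-checked. shared_buffers is reported in blocks, so assuming the default 8 kB block size, the failed request is the buffer pool itself plus roughly 146 MB of per-connection and bookkeeping overhead. A quick sketch:]

```python
# Rough breakdown of the 6,595,420,160-byte shmget() request from the log.
# shared_buffers is reported in blocks (786432 blocks = 6 GB assuming the
# default 8 kB block size); the remainder is overhead that scales with
# settings like max_connections.
BLCKSZ = 8192                      # default PostgreSQL block size, in bytes

shared_buffers_blocks = 786432     # value from the HINT line
requested = 6595420160             # size from the failed shmget() call

buffer_pool = shared_buffers_blocks * BLCKSZ
overhead = requested - buffer_pool

print(f"buffer pool: {buffer_pool / 2**30:.1f} GiB")   # 6.0 GiB
print(f"overhead:    {overhead / 2**20:.1f} MiB")      # ~145.9 MiB
```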

There isn't much server load at the time: the only messages in the log are the annoying autovacuum notices about orphan temp
tables because people haven't closed out their sessions. I'm guessing it's unrelated, but I'm not entirely sure.

The value of shared_buffers is set to 6GB, work_mem is 64MB, maintenance_work_mem is 128MB, effective_cache_size is
4GB.

max_connections is set to 200.

Any ideas here? The system has 12GB of RAM; raising shared_buffers seems reasonable, but 6GB also feels like it should be
sufficient.

Any thoughts appreciated. Thanks!

--
Wells Oliver
Architect, Baseball Systems
619-795-5359
San Diego Padres | 100 Park Boulevard | San Diego CA 92101


Re: Server crash... trying to figure it out

From
"Kevin Grittner"
Date:
Wells Oliver <woliver@padres.com> wrote:

> This has happened twice over the last couple of nights:
>
> 2011-05-30 02:08:27 PDT LOG:  server process (PID 29979) was
> terminated by signal 9: Killed

> 2011-05-30 02:08:31 PDT FATAL:  could not create shared memory
> segment: Cannot allocate memory

> To reduce the request size (currently 6595420160 bytes), reduce
> PostgreSQL's shared_buffers parameter [...]

> The value of shared_buffers is set to 6GB, work_mem is 64MB,
> maintenance_work_mem is 128MB, effective_cache_size is 4GB.
>
> max_connections is set to 200.
>
> Any ideas here? The system has 12GB; raising the shared_buffers
> seems reasonable but 6GB also feels like it should be sufficient.

You're probably overcommitting memory and running afoul of the oom
killer.
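[Editor's note: the "terminated by signal 9" in the log is the signature of the Linux OOM killer. The PostgreSQL documentation on managing kernel resources suggests strict overcommit accounting, so that an oversized allocation fails with an error instead of a backend being killed. An illustrative sysctl fragment (the ratio is a placeholder to tune for your RAM/swap mix):]

```ini
# /etc/sysctl.conf -- strict overcommit accounting (illustrative values)
vm.overcommit_memory = 2    # don't overcommit; allocations fail instead of OOM kills
vm.overcommit_ratio = 80    # percent of RAM (plus swap) the kernel will commit
```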

You've got an actual 12GB, but you can easily allocate up to
shared_buffers + (user_connections * work_mem), which is 18.5 GB.  I
would start by reducing shared_buffers to 3GB or less, and cut
work_mem in half.  Then I would get a connection pooler set up to
funnel 200 client-side connections through a pool of 10 or 20
database connections.  Then I would try edging up work_mem while
monitoring performance and memory usage.
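[Editor's note: Kevin's 18.5 GB figure follows directly from the settings quoted above; a one-liner reproduces it. Note that this is really a floor, since a single complex query can allocate work_mem more than once.]

```python
# Worst-case footprint being described: every connection running a sort or
# hash that uses its full work_mem allowance at the same time.
GB = 1024**3
MB = 1024**2

shared_buffers = 6 * GB
max_connections = 200
work_mem = 64 * MB

worst_case = shared_buffers + max_connections * work_mem
print(f"{worst_case / GB:.1f} GB")   # 18.5 GB, against 12 GB of physical RAM
```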

-Kevin

Re: Server crash... trying to figure it out

From
Wells Oliver
Date:
On May 31, 2011, at 11:31 AM, Kevin Grittner wrote:

> You're probably overcommitting memory and running afoul of the oom
> killer.
>
> You've got an actual 12GB, but you can easily allocate up to
> shared_buffers + (user_connections * work_mem), which is 18.5 GB.  I
> would start by reducing shared_buffers to 3GB or less, and cut
> work_mem in half.  Then I would get a connection pooler set up to
> funnel 200 client-side connections through a pool of 10 or 20
> database connections.  Then I would try edging up work_mem while
> monitoring performance and memory usage.
>
> -Kevin

Thanks for the response- great food for thought. Do you have any recommendations on a pg connection pooler?

--
Wells Oliver
Architect, Baseball Systems
619-795-5359
San Diego Padres | 100 Park Boulevard | San Diego CA 92101


Re: Server crash... trying to figure it out

From
"Kevin Grittner"
Date:
Wells Oliver <woliver@padres.com> wrote:

> Do you have any recommendations on a pg connection pooler?

If the software you're using includes a connection pool, that is
often the best choice; if not (or if it doesn't work well), both pgpool
and pgbouncer have their followings.

In my view the most important things are that the pool have a hard
upper limit on the number of database connections, that it is aware
of database transactions, and that when all connections are busy it
will queue a request to start a new transaction until one of the
existing transactions completes.  A good pooler, configured to the
right size (which is usually at or close to the number of actual
cores on the machine, times two), will improve both throughput and
response time under load.
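[Editor's note: neither pooler is specifically endorsed above, but as one illustration of the advice, a transaction-aware pgbouncer setup sized along these lines might look like the fragment below. Host, database name, file paths, and pool size are placeholder values.]

```ini
; pgbouncer.ini -- illustrative transaction-pooling setup
[databases]
; placeholder connection string; point at your actual database
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; queue clients per transaction, as described above
max_client_conn = 200        ; the 200 application-side connections
default_pool_size = 16       ; roughly 2x core count on an 8-core box
```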

-Kevin