Re: Postgresql shared_buffer and SHMMAX configuration

From: Kevin Grittner
Subject: Re: Postgresql shared_buffer and SHMMAX configuration
Msg-id: 4C3AD8E60200002500033421@gw.wicourts.gov
In reply to: Re: Postgresql shared_buffer and SHMMAX configuration  ("Gnanakumar" <gnanam@zoniac.com>)
List: pgsql-admin

"Gnanakumar" <gnanam@zoniac.com> wrote:

> We have our java servlet code running in Tomcat and the connection
> is managed by pgpool.

If you're running under Tomcat, you might not even need pgpool; see
this recent post:

http://archives.postgresql.org/pgsql-performance/2010-07/msg00154.php
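
For illustration, here is a minimal sketch of doing this with the
Tomcat JDBC pool (the tomcat-jdbc module); the URL, credentials and
sizing numbers are placeholders, not recommendations:

    import org.apache.tomcat.jdbc.pool.DataSource;
    import org.apache.tomcat.jdbc.pool.PoolProperties;

    public class PooledDataSourceFactory {
        public static DataSource create() {
            PoolProperties p = new PoolProperties();
            // Placeholder connection details -- substitute your own.
            p.setUrl("jdbc:postgresql://localhost:5432/mydb");
            p.setDriverClassName("org.postgresql.Driver");
            p.setUsername("appuser");
            p.setPassword("secret");

            // Hard cap on concurrent connections; requests beyond the
            // cap wait up to maxWait milliseconds for a free connection.
            p.setMaxActive(10);
            p.setMaxWait(10000);
            p.setInitialSize(2);
            p.setMinIdle(2);

            return new DataSource(p);
        }
    }

The important part is that maxActive gives you a hard limit on
connections, and maxWait makes excess requests queue instead of
failing immediately.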

> When we did a load test in this environment with 150 connections
> in num_init_children, we got an error on the Java side saying
> "DBACCESS CONNECTION exception no connection could be got",
> while there was no error in the pgpool and PostgreSQL logs.
>
> When we increased this to 420, it was working fine.

You're not going to get errors with your current configuration, but
you're also not going to see optimal performance.  If performance is
"good enough" for your needs, maybe you don't need to care.

>> For starters, you should not be configuring a shared memory
>> maximum of over 18GB for your 15GB machine.
>
>> More like (assuming your "given" settings):
>
>> (250 kB) + (2304 MB * 1.025) + (14.2 kB * 500)
>>  = 2.48384348 * 10^9 bytes
>
>> plus whatever shared memory you need for other processes and a
>> little "slush".  I might just go to something in the 4GB range,
>> unless I know something else needs a lot.
> Is there any documentation link for the above formula?
> 250kB + 8.2kB * shared_buffer + 14.2kB * max_connections was our
> formula.

Actually, I was just using your formula, correcting for an obvious
error -- you took "8.2kB * shared_buffer" to mean that you multiply
the kB of buffer space by 8.2kB.  This is obviously silly, since
there's no such thing as square kB.  The thing to multiply by 8.2kB
would appear to be the *number of 8kB buffers*, which is allowing
for some overhead involved in tracking the buffers.  8.2 / 8 =
1.025.
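
To make the corrected arithmetic concrete, here is a small sketch
that reproduces the estimate quoted above; the per-buffer and
per-connection figures are just the rule-of-thumb numbers from this
thread, not authoritative values:

    public class SharedMemoryEstimate {
        public static void main(String[] args) {
            final double KB = 1024, MB = 1024 * 1024;

            double sharedBuffersMB = 2304;  // shared_buffers, in MB
            int maxConnections = 500;       // max_connections

            // Rule-of-thumb figures from this thread:
            //   fixed overhead   ~ 250 kB
            //   per 8 kB buffer  ~ 8.2 kB, i.e. buffer space * (8.2 / 8) = * 1.025
            //   per connection   ~ 14.2 kB
            double bytes = 250 * KB
                         + sharedBuffersMB * MB * 1.025
                         + 14.2 * KB * maxConnections;

            System.out.printf("Estimated shared memory: %.0f bytes (%.2f GB)%n",
                              bytes, bytes / (1024 * MB));
        }
    }

That prints roughly 2.48 * 10^9 bytes, which is where the figure
above comes from; SHMMAX then needs to be at least that, plus room
for anything else using shared memory.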

>> I would expect these changes in the pgpool and PostgreSQL
>> settings to actually make things worse, although the extra
>> caching from the additional RAM may counteract that to some
>> degree.  Before I could venture a suggestion on what good
>> settings might be, I would need to know more about the database
>> server hardware.  How many CPU cores?  How many disks in what
>> arrangement?
>
> We have our database server running in Amazon EC2.
> It is not a dedicated database server; our Java application is
> also running in the same instance.
>
> Regarding server hardware, CPU cores, etc.: it is 4 virtual cores
> with 2 EC2 Compute Units each.

You said nothing about the effective spindle count.  Benchmarks by
myself and others have suggested that PostgreSQL gets best
performance (both response time and throughput) when a connection
pool limits the number of active queries to somewhere around:

((2 * CPU core count) + (effective spindle count))

Beyond that, resource contention and context switching cause
performance to degrade.  I always try to use a pooling technique
which will limit active queries on the database to some hard limit,
queuing excess queries to run as other queries finish.  You're using
what by all accounts is a good pooling product, but you're not
configuring it to do this for you.  And since Tomcat has a built-in
pool, it would probably be better to use that and not even set up an
intermediate pool; although since you already have one I would
recommend benchmarking both ways to check.
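
As a rough illustration of that sizing rule (the spindle count here
is a placeholder; on EC2 the effective spindle count is hard to pin
down):

    public class PoolSizeEstimate {
        public static void main(String[] args) {
            int cpuCores = 4;           // the 4 virtual cores mentioned above
            int effectiveSpindles = 1;  // placeholder; unknown on EC2

            // Rule of thumb from this thread: limit active queries to about
            // (2 * cores) + spindles, queuing anything beyond that.
            int poolLimit = 2 * cpuCores + effectiveSpindles;

            System.out.println("Suggested hard limit on active queries: "
                               + poolLimit);
        }
    }

With the Tomcat pool sketched earlier, that number would be the
value for setMaxActive(), with setMaxWait() controlling how long
excess requests queue before giving up.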

-Kevin
