Discussion: shared-buffers set to 24GB but the RAM only use 4-5 GB average


shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: FattahRozzaq
Date:
I have configured postgresql.conf with parameters as below:

log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_rotation_age = 1d
log_rotation_size = 1024MB
listen_addresses = '*'
checkpoint_segments = 64
wal_keep_segments = 128
max_connections = 9999
max_prepared_transactions = 9999
checkpoint_completion_target = 0.9
default_statistics_target = 10
maintenance_work_mem = 1GB
effective_cache_size = 64GB
shared_buffers = 24GB
work_mem = 5MB
wal_buffers = 8MB
port = 40003
pooler_port = 40053
gtm_host = 'node03'
gtm_port = 10053

As you can see, I have set the shared_buffers to 24GB, but my server
still only uses 4-5 GB on average.
I have 128GB RAM in a single server.
My database has 2 tables:
- room (3GB size if pg_dump'ed)
- message (17GB if pg_dump'ed)

The backend application is a messaging server; on average there are
40-180 connections to the Postgres server.
The traffic is fairly heavy.

How can I make Postgres-XL effectively utilize the RAM with
max_connections = 9999?


Thanks,
FattahRozzaq


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: Merlin Moncure
Date:
On Mon, Oct 5, 2015 at 9:51 AM, FattahRozzaq <ssoorruu@gmail.com> wrote:
> I have configured postgresql.conf with parameters as below:
>
> [postgresql.conf settings snipped]
>
> As you can see, I have set the shared_buffers to 24GB, but my server
> still only uses 4-5 GB on average.

How did you measure that?

merlin


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: Igor Neyman
Date:

-----Original Message-----
From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of FattahRozzaq
Sent: Monday, October 05, 2015 10:51 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5 GB average

[original message quoted in full; snipped]
____________________________________

Why are you looking at memory consumption?
Are you experiencing performance problems?

Regards,
Igor Neyman


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: Igor Neyman
Date:

-----Original Message-----
From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Igor Neyman
Sent: Monday, October 05, 2015 2:25 PM
To: FattahRozzaq <ssoorruu@gmail.com>; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5 GB average



[previous messages quoted in full; snipped]

_______________________

Also,
Postgres-XL has its own mailing lists:
http://sourceforge.net/p/postgres-xl/mailman/

Regards,
Igor Neyman


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: FattahRozzaq
Date:
@Merlin Moncure, I got the settings from pg_tune, and then changed
shared_buffers to 24GB and effective_cache_size to 64GB.

@Igor Neyman,
Yes, I have a performance problem: the response time for the exact same
query fluctuates seemingly at random, sometimes 11ms and sometimes 100ms.

Any idea on how I should configure Postgres to effectively utilize the
hardware and reduce the response time?
*(RAM=128GB, CPU=24 cores, RAID-10 SSD)


Thanks,
FattahRozzaq
*looking for answer*

On 06/10/2015, Igor Neyman <ineyman@perceptron.com> wrote:
> [previous messages quoted in full; snipped]


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: Scott Marlowe
Date:
On Tue, Oct 6, 2015 at 3:33 AM, FattahRozzaq <ssoorruu@gmail.com> wrote:
> @Merlin Moncure, I got the settings from pg_tune, and then changed
> shared_buffers to 24GB and effective_cache_size to 64GB.
>
> @Igor Neyman,
> Yes, I have a performance problem: the response time for the exact same
> query fluctuates seemingly at random, sometimes 11ms and sometimes 100ms.
>
> Any idea on how I should configure Postgres to effectively utilize the
> hardware and reduce the response time?
> *(RAM=128GB, CPU=24 cores, RAID-10 SSD)

OK I'm gonna copy and paste some stuff from previous messages since
top-posting kinda messed up the formatting.

First, this line:

>> max_connections = 9999

When you are experiencing this problem, how many connections are
there? There's a bell-shaped curve for performance, and the peak is
WAY less than 9999. The IPC / shared-memory performance, etc., will drop
off very quickly after a few dozen, or at most a hundred or so,
connections. If your application layer needs to keep more than a
couple dozen connections open, then it's a REAL good idea to throw a
connection pooler between the app and the db. I recommend pgbouncer, as
it's very easy to set up.
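For illustration, a minimal pgbouncer configuration for this kind of workload might look like the sketch below; the database name, listen port, and pool sizes are assumptions to adapt, not values taken from this thread (only port 40003 comes from the posted config):

```ini
; pgbouncer.ini -- hypothetical sketch, not a tested configuration
[databases]
; route "chatdb" to the local coordinator on the port from the thread
chatdb = host=127.0.0.1 port=40003 dbname=chatdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets thousands of client connections share
; a few dozen real server connections
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 48
```

The application then connects to port 6432 instead of 40003, and Postgres itself can keep max_connections at a couple hundred.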

BUT more important than that, it appears you're looking for a "go
faster" knob, and there may or may not be one for what you're doing.

I'd recommend profiling your db server under load to see what's going
on. What does iostat, iotop, top, etc show you when this is happening?
Are you running out of IO? Memory, CPU? What does "explain analyze
slowquerygoeshere" tell you?

I would recommend you consider reducing shared_buffers unless you have
some concrete proof that 24GB is helping. Big shared_buffers have
maintenance costs that affect write speeds, and slow writing can make
everything kind of back up behind it.  Typically something under 1GB
is fine. PostgreSQL relies on the OS to cache most read data. So
trying to crank up shared_buffers to do the same job is often either
counter-productive or of no real gain.
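Pulling the advice in this message together, a more conservative starting point might look like the following sketch; these values are illustrative and would need to be validated under the actual load, not a definitive recommendation:

```ini
# postgresql.conf -- illustrative starting point, per the advice above
max_connections = 200             # real concurrency bounded by a pooler in front
shared_buffers = 1GB              # start small; raise only with evidence it helps
effective_cache_size = 96GB       # planner hint: most of the 128GB becomes OS cache
work_mem = 8MB                    # per sort/hash node; keep modest with many sessions
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
default_statistics_target = 100   # the original value of 10 is unusually low
```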


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: Merlin Moncure
Date:
On Tue, Oct 6, 2015 at 10:10 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> [Scott's message quoted in full; snipped]

This is spot on.  9999 max_connections is gross overconfiguration
(unless your server has 9999 cores...).  If you need to support a
large number of hopefully-idle clients, you need to immediately
explore pgbouncer.

Also, OP did not answer my previous question correctly: "how did you
measure that?" was asking how you determined that the server was only
using 4-5GB.  Reason for that question is that measuring shared-memory
usage is a little more complex than it looks on the surface.  It's
pretty common for sysadmins unfamiliar with it to under- or over-
count usage.  We need to confirm your measurements with some
diagnostics from utilities like 'top'.
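As a sketch of why naive accounting misleads (assuming Linux and the procps tools): each backend's RSS includes every shared-buffer page that backend has touched, so summing RSS over backends counts the same 24GB repeatedly, while a single process's view under-counts buffers it has not yet touched:

```shell
#!/bin/sh
# Sketch: compare system-wide shared memory (counted once) with the
# naive per-process view (which double-counts shared buffers).

# Resident shared memory for the whole system, counted exactly once:
grep -E '^Shmem:' /proc/meminfo

# Naive view: RSS summed over all postgres backends.  On a busy server
# this sum can exceed physical RAM because shared pages repeat per process.
ps -C postgres -o rss= | awk '{sum += $1} END {printf "postgres RSS sum: %d kB\n", sum}'
```

Comparing the two numbers makes it obvious when htop-style totals are misleading.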

If your performance issues are in fact cache related (which is really
storage), you have the traditional mitigation strategies: prewarm the
cache, buy faster storage etc.

merlin


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: Josh Berkus
Date:
On 10/06/2015 02:33 AM, FattahRozzaq wrote:
> @Merlin Moncure, I got the calculation using pg_tune. And I modified
> the shared_buffers=24GB and the effective_cache_size=64GB

I really need to get Greg to take down pg_tune. It's way out of date.

Probably I should replace it.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: FattahRozzaq
Date:
Your responses are all very valuable.

@Merlin,
I misunderstood the question.
Yes, I didn't measure it. I only monitor RAM and CPU using htop (I also use nmon for disk IO and iftop for network utilization).
Does each connection need a dedicated core?
(I was having 40-80 connections in stable condition, and when the problem happened the connections would be more than 150.)

@Scott,
Just before and during the problem, my server wasn't running out of IO/RAM/CPU.
The SSD IO total usage was 25-50% (I use nmon to monitor disk IO).
The RAM total usage was 4-5GB of 128GB total (monitored with htop).
The CPU was at 100% on 2 cores and 70% on 3 cores; the other 19 cores were under 5% (htop).
The network interface utilization was only 300-400 Mbps of 1Gbps total (iftop).
So maybe the 128GB of RAM will never be fully used by PostgreSQL?

I will test PostgreSQL with pgbouncer under identical logical (OS, software, etc.) and physical resource conditions.


Respect,
FattahRozzaq

Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

From: Scott Marlowe
Date:
On Wed, Oct 7, 2015 at 4:29 AM, FattahRozzaq <ssoorruu@gmail.com> wrote:
> Your responses are all very valuable.
>
> @Merlin,
> I misunderstood the question.
> Yes, I didn't measure it. I only monitor RAM and CPU using htop (I also
> use nmon for disk IO and iftop for network utilization).
> Does each connection need a dedicated core?
> (I was having 40-80 connections in stable condition, and when the problem
> happened the connections would be more than 150.)
>
> @Scott,
> Just before and during the problem, my server wasn't running out of IO/RAM/CPU.
> The SSD IO total usage was 25-50% (I use nmon to monitor disk IO).
> The RAM total usage was 4-5GB of 128GB total (monitored with htop).
> The CPU was at 100% on 2 cores and 70% on 3 cores; the other 19 cores were
> under 5% (htop).
> The network interface utilization was only 300-400 Mbps of 1Gbps total (iftop).
> So maybe the 128GB of RAM will never be fully used by PostgreSQL?

Check what 'vmstat 10' has to say, specifically the in and cs columns,
which are interrupts and context switches per second. What you'll
likely see is them ranging from 10k to 20k normally and spiking to 10 or
100 times that when this is happening. That's the typical symptom that your
OS is spending all its time trying to switch between 1,000 processes
instead of servicing a handful at a time.

You should also see a huge uptick in pg processes that are active at
once, either in top or via pg_stat_activity.
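As a concrete sketch of what to look for: in default `vmstat` output the cs column is field 12, and a spike can be flagged with a little awk. The sample numbers below are invented for illustration, not measurements from this server:

```shell
#!/bin/sh
# Flag context-switch storms in vmstat output (field 12 = cs).
# In practice, pipe in live data:  vmstat 10 | awk 'NR > 2 && $12 > 100000'
sample=' 2  0      0 991736 123456 654321    0    0     5    20  1500  12000  5  2 92  1  0
80  0      0 990112 123456 654321    0    0     6    25  9000 450000 20 70  5  5  0'

printf '%s\n' "$sample" | awk '$12 > 100000 {
    print "context-switch storm: cs=" $12 ", runnable=" $1
}'
```

With the invented sample above, only the second row (cs=450000, 80 runnable processes) trips the threshold, which is the pattern described here: a huge cs spike alongside a pile-up of runnable processes.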

Don't worry about making PostgreSQL use all your RAM; the OS will do
that for you. Worry about getting PostgreSQL to process as many
queries per second as it can, and you do that by using a connection
pooler. I have 350GB databases on machines with more than 512GB of RAM,
and eventually the whole db is in kernel cache and the only IO is when
blocks get written to disk. But the kernel only caches the parts of
the db that get read. If your db isn't reading more than a few dozen
gigabytes, then that's how much memory will be used to cache the db.


Re: shared-buffers set to 24GB but the RAM only use 4-5 GB average

От
Merlin Moncure
Дата:
On Wed, Oct 7, 2015 at 5:29 AM, FattahRozzaq <ssoorruu@gmail.com> wrote:
> Your responses are all very valuable.
>
> @Merlin,
> I misunderstood the question.
> Yes, I didn't measure it. I only monitor RAM and CPU using htop (I also use

Can you be a little more specific?  What values did you look at, and
how did you sum them up?  Assuming your measurement was correct, you
might be looking at a simple prewarm issue in terms of getting shared
buffers filled up.  There are some tactics to warm up shared buffers
(like pg_prewarm), but it's not clear that would be useful in your
case.
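For reference, basic pg_prewarm usage looks roughly like this; the table name comes from the thread, but whether the extension is available (it requires PostgreSQL 9.4+, and Postgres-XL support is an assumption) and whether it would help here are both open questions:

```sql
-- Hypothetical sketch: pull the "message" table into shared buffers.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('message');   -- returns the number of blocks loaded
```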

One cause (especially with older kernels) of low memory utilization is
misconfigured NUMA.  Note this would only affect the backing o/s
cache, not pg's shared buffers.
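One Linux-specific check worth doing here (an assumption about the platform, since the thread never states the OS): with vm.zone_reclaim_mode = 1 the kernel avoids allocating from remote NUMA nodes, which can leave a large fraction of a big RAM bank untouched. The usual recommendation for database servers is:

```ini
# /etc/sysctl.conf -- sketch; apply with "sysctl -p" after editing
# 0 disables zone reclaim so the OS cache can grow across NUMA nodes
vm.zone_reclaim_mode = 0
```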

Very first thing you need to figure out is whether your measured issues
are coming from storage or not.  An iowait % above single digits suggests
they are. With fast SSDs it's pretty difficult to max out storage,
especially when reading data, but it's always the first thing to look
at.  Context-switch issues (as Scott notes) are another major
potential cause of performance variability, as is internal server
contention.  But rule out storage first.

merlin