Discussion: Howto Increased performace ?

From:
"Amrit Angsusingh"
Date:

I used PostgreSQL 7.3.2-1 with RH 9 on a machine with two Xeon 3.0 GHz CPUs
and 4 GB of RAM. I started using the database server about a year and a half
ago, and after optimizing postgresql.conf everything went fine until a couple
of weeks ago, when my database grew to 3.5 GB and there were more than 140
concurrent connections.
The server seems slower during peak periods than before, and there is some
swapping as well. My top and meminfo output are shown below:
14:52:13  up 13 days,  2:50,  2 users,  load average: 5.58, 5.97, 6.11
218 processes: 210 sleeping, 1 running, 0 zombie, 7 stopped
CPU0 states:   7.2% user  55.2% system    0.0% nice   0.0% iowait  36.4% idle
CPU1 states:   8.3% user  56.1% system    0.0% nice   0.0% iowait  34.4% idle
CPU2 states:  10.0% user  57.0% system    0.0% nice   0.0% iowait  32.4% idle
CPU3 states:   6.2% user  55.3% system    0.0% nice   0.0% iowait  37.3% idle
Mem:  4124720k av, 4105916k used,   18804k free,       0k shrd,   10152k buff
                   2900720k actv,  219908k in_d,  167468k in_c
Swap: 20370412k av,  390372k used, 19980040k free                 2781256k cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
   14 root      18   0     0    0     0 SW   54.5  0.0 766:10   1 kscand/HighMem
13304 postgres  17   0  280M 280M  276M D    52.5  6.9   0:10   2 postmaster
12035 postgres  16   0  175M 174M  169M D    33.0  4.3   0:26   3 postmaster
13193 postgres  16   0  128M 127M  124M S    28.4  3.1   0:05   3 postmaster
12137 postgres  16   0  498M 497M  431M D    27.2 12.3   0:34   1 postmaster
   11 root      15   0     0    0     0 SW   13.9  0.0 363:00   2 kswapd
13241 postgres  16   0  318M 318M  314M D     7.3  7.9   0:09   2 postmaster
   13 root      16   0     0    0     0 SW    6.9  0.0  82:17   0 kscand/Normal
13367 postgres  15   0  196M 196M  193M D     6.5  4.8   0:02   2 postmaster
11984 postgres  15   0  305M 305M  301M S     4.9  7.5   2:55   1 postmaster
13331 postgres  16   0  970M 970M  966M S     4.9 24.0   0:22   1 postmaster
12388 postgres  15   0  293M 292M  289M S     3.9  7.2   2:42   3 postmaster
13328 postgres  15   0  276M 276M  272M S     2.7  6.8   0:22   0 postmaster
   26 root      16   0     0    0     0 SW    2.3  0.0  10:12   1 kjournald
11831 postgres  15   0  634M 634M  630M S     1.5 15.7   1:33   3 postmaster
12127 postgres  15   0  117M 116M  114M S     1.1  2.8   0:20   1 postmaster
12002 postgres  15   0  429M 429M  426M S     0.9 10.6   0:24   1 postmaster
12991 postgres  15   0  143M 143M  139M S     0.7  3.5   0:29   1 postmaster
13234 postgres  15   0  288M 288M  284M S     0.7  7.1   0:17   0 postmaster
13337 postgres  15   0  172M 171M  168M S     0.3  4.2   0:06   0 postmaster
13413 root      15   0  1276 1276   856 R     0.3  0.0   0:00   0 top
11937 postgres  15   0  379M 379M  375M S     0.1  9.4   2:59   2 postmaster

Kernel shared memory limits:
[root@data3 root]# cat < /proc/sys/kernel/shmmax
4000000000
[root@data3 root]# cat < /proc/sys/kernel/shmall
300000000

meminfo :
        total:    used:    free:  shared: buffers:  cached:
Mem:  4223713280 4200480768 23232512        0 11497472 3555827712
Swap: 20859301888 303460352 20555841536
MemTotal:      4124720 kB
MemFree:         22688 kB
MemShared:           0 kB
Buffers:         11228 kB
Cached:        3367688 kB
SwapCached:     104800 kB
Active:        3141224 kB
ActiveAnon:     684960 kB
ActiveCache:   2456264 kB
Inact_dirty:    220504 kB
Inact_laundry:  166844 kB
Inact_clean:     94252 kB
Inact_target:   724564 kB
HighTotal:     3276736 kB
HighFree:         3832 kB
LowTotal:       847984 kB
LowFree:         18856 kB
SwapTotal:    20370412 kB
SwapFree:     20074064 kB

Postgresql.conf :
#       Connection Parameters
#
tcpip_socket = true
#ssl = false

#max_connections = 32
max_connections = 180
#superuser_reserved_connections = 2

#port = 5432
#hostname_lookup = false
#show_source_port = false

#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal

#virtual_host = ''

#krb_server_keyfile = ''


#
#       Shared Memory Size
#
#shared_buffers = 64            # min max_connections*2 or 16, 8KB each
shared_buffers = 250000
#max_fsm_relations = 1000       # min 10, fsm is free space map, ~40 bytes
#max_fsm_pages = 10000          # min 1000, fsm is free space map, ~6 bytes
#max_locks_per_transaction = 64 # min 10
#wal_buffers = 8                # min 4, typically 8KB each

#
#       Non-shared Memory Sizes
#
#sort_mem = 1024 # min 64, size in KB
sort_mem = 60000
#vacuum_mem = 8192              # min 1024, size in KB
vacuum_mem = 20072

#       Write-ahead log (WAL)
#
#checkpoint_segments = 3        # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300       # range 30-3600, in seconds
#
#commit_delay = 0               # range 0-100000, in microseconds
commit_delay = 10
#commit_siblings = 5            # range 1-1000
#
#fsync = true
fsync = false
#wal_sync_method = fsync        # the default varies across platforms:
#                               # fsync, fdatasync, open_sync, or open_datasync
#wal_debug = 0                  # range 0-16


#
#       Optimizer Parameters
#
#enable_seqscan = true
#enable_indexscan = true
#enable_tidscan = true
#enable_sort = true
#enable_nestloop = true
#enable_mergejoin = true
#enable_hashjoin = true

#effective_cache_size = 1000    # typically 8KB each
effective_cache_size = 5000
#random_page_cost = 4           # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01          # (same)
#cpu_index_tuple_cost = 0.001   # (same)
#cpu_operator_cost = 0.0025     # (same)

#default_statistics_target = 10 # range 1-1000

#
#       GEQO Optimizer Parameters
#
#geqo = true
#geqo_selection_bias = 2.0      # range 1.5-2.0
#geqo_threshold = 11
#geqo_pool_size = 0             # default based on tables in statement,
                                # range 128-1024
#geqo_effort = 1
#geqo_generations = 0



Please give me any comments about adjusting my machine.
Amrit Angsusingh
nakornsawan , Thailand



From:
Ragnar Hafstað
Date:

On Tue, 2004-12-21 at 16:31 +0700, Amrit Angsusingh wrote:
> I used postgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram
> of 4 Gb. Since 1 1/2 yr. when I started to use the database server after
> optimizing the postgresql.conf everything went fine until a couple of
> weeks ago , my database grew up to 3.5 Gb and there were more than 140
> concurent connections.
...
> shared_buffers = 250000
This is much higher than usually advised on this list.
Try reducing it to 25000.

> effective_cache_size = 5000
And increase this instead to, say, 50000.
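As a rough sanity check (assuming PostgreSQL's default 8 KB page size), the suggested values work out to:

```shell
# Convert the suggested settings from 8 KB pages to megabytes
echo "shared_buffers 25000 pages ~ $((25000 * 8 / 1024)) MB"        # 195 MB
echo "effective_cache_size 50000 pages ~ $((50000 * 8 / 1024)) MB"  # 390 MB
```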


gnari



From:
"Iain"
Date:

Hi,

> #sort_mem = 1024 # min 64, size in KB
> sort_mem = 60000

I think this might be too much. You are allocating up to 60000 KB _per
connection_ here, roughly 10 GB for your maximum of 180 connections.
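A quick worst-case check of that figure, assuming every connection runs one sort at full sort_mem:

```shell
# 60000 KB per connection x 180 connections, converted to GB
echo "$((60000 * 180 / 1024 / 1024)) GB"   # prints "10 GB"
```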

By comparison, I am specifying 4096 (subject to adjustment) for a machine
with a similar spec to yours.

regards
Iain





From:
"Iain"
Date:

Hi,

These are some settings that I am planning to start with for a 4GB RAM dual
opteron system with a maximum of 100 connections:


shared_buffers 8192 (=67MB RAM)
sort_mem 4096 (=400MB RAM for 100 connections)
effective_cache_size 380000(@8KB  =3.04GB RAM)
vacuum_mem 32768 KB
wal_buffers 64
checkpoint_segments 8

In theory, effective cache size is the amount of memory left over for the OS
to cache the filesystem after running all programs and having 100 users
connected, plus a little slack.
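The arithmetic behind those figures can be double-checked like this (shared_buffers and effective_cache_size are in 8 KB pages; sort_mem is in KB per connection):

```shell
echo "shared_buffers: $((8192 * 8 / 1024)) MiB"         # 64 MiB (~67 MB decimal)
echo "sort_mem x 100: $((4096 * 100 / 1024)) MB"        # 400 MB
echo "effective_cache_size: $((380000 * 8 / 1000)) MB"  # 3040 MB, i.e. ~3.04 GB
```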

regards
Iain
----- Original Message -----
From: "Amrit Angsusingh" <>
To: "Iain" <>
Cc: <>
Sent: Monday, December 27, 2004 6:21 PM
Subject: Re: [PERFORM] Howto Increased performace ?


>
>
>>> #sort_mem = 1024 # min 64, size in KB
>>> sort_mem = 60000
>
>> I think this might be too much. You are using 60000KB _per connection_
>> here
>> = 10GB for your maximum of 180 connections.
>>
>> By comparison, I am specifiying 4096 (subject to adjustment) for a
>> machine
>> with a similar spec to yours.
>>
>> regards
>> Iain
>
> I reduced it to
> sort_mem = 8192
> If I increase it further, what result could I expect?
>
> I also reduced
> max_connections to 160
> and
> shared_buffers to 27853
> effective_cache_size = 81920 [what is this for?]
>
> Do you think it is still too much, especially effective_cache_size?
>
> Thanks
> Amrit
>
> Amrit Angsusingh
> Nakornsawan,Thailand


From:
Ragnar Hafstað
Date:

On Mon, 2004-12-27 at 22:31 +0700, Amrit Angsusingh wrote:
>  [  ]
> >
> > These are some settings that I am planning to start with for a 4GB RAM
> > dual
> > opteron system with a maximum of 100 connections:
> >
> >
> > shared_buffers 8192 (=67MB RAM)
> > sort_mem 4096 (=400MB RAM for 100 connections)
> > effective_cache_size 380000(@8KB  =3.04GB RAM)
> > vacuum_mem 32768 KB
> > wal_buffers 64
> > checkpoint_segments 8
> >
> > In theory, effective cache size is the amount of memory left over for the
> > OS
> > to cache the filesystem after running all programs and having 100 users
> > connected, plus a little slack.

> I reduced the connections to 160 and configured as below; there is some
> improvement in speed.
> shared_buffers = 27853 [Should I reduce it to nearly what you use, and what
> would happen?]

At some point, more shared buffers will do less good than leaving the
memory to the OS to use as disk buffers. You might want to experiment
a bit with different values to find what suits your real-life conditions.
> sort_mem = 8192
> vacuum_mem = 16384
> effective_cache_size = 81920 [Should I increase it to more than 200000 ?]
As Iain wrote, this value is an indication of how much memory will be
available to the OS for disk cache.
When all other settings have been made, try to see how much memory your
OS has left under normal conditions, and adjust your setting
accordingly if it differs significantly.
I have seen cases where an incorrect value (too low) influenced the
planner to use sequential scans instead of better index scans,
presumably because of a higher ratio of estimated cache hits.
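For example, using the Cached figure from the meminfo output posted earlier (a sketch only; the right number is whatever your OS actually shows under normal load, divided by the 8 kB page size):

```shell
# Cached value (kB) taken from the meminfo output above; 8 kB per page
cached_kb=3367688
echo "effective_cache_size ~ $((cached_kb / 8)) pages"   # 420961
```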

> Thanks for any comment again.
>
> NB. There has been a huge disaster in my country, the tsunami, and all the
> people in the country, including me, have fallen into deep sorrow.

My condolences.

> Amrit Angsusingh
> Thailand

gnari



From:
"Iain"
Date:

Hi,

These are the /etc/sysctl.conf settings that I am planning to use.
Coincidentally, these are the settings recommended by Oracle. If anything
they would be generous, I think.

file-max 65536 (for 2.2 and 2.4 kernels)
kernel.shmall 134217728 (=128MB)
kernel.shmmax 268435456
fs.file-max 65536

By the way, when you tested your changes, was that on a busy system? I
think that a configuration that gives the best performance (at the client
end) on a machine with just a few connections might not be the configuration
that will give you the best throughput when the system is stressed.

I'm certainly no expert on tuning Linux systems, or even Postgres, but I'd
suggest that you become knowledgeable in the use of the various system
monitoring tools that Linux has, and keep a record of their output so you can
compare as you change your configuration. In the end, though, I think your
aim is to reduce swapping by tuning your memory usage for busy times.

Also, I have heard that some 32-bit Linux kernels (which versions, I'm not
sure) are slow at handling more than 2 GB of memory, so a kernel upgrade
might be worth considering.

regards
Iain


From:
"Iain"
Date:

Hi Amrit,

I'm sorry to hear about the disaster in Thailand. I live in a tsunami-prone
area myself :-(

I think that you have enough information to solve your problem now, but it
will just take some time and testing. When you have eliminated the excessive
swapping and tuned your system as best you can, then you can decide if that
is fast enough for you. More memory might help, but I can't say for sure.
There are many other things to consider. I suggest that you spend some time
reading through the performance and maybe the admin lists.

regards
Iain

----- Original Message -----
From: "Amrit Angsusingh" <>
To: "Iain" <>
Cc: <>
Sent: Tuesday, December 28, 2004 1:48 AM
Subject: Re: [PERFORM] Howto Increased performace ?


>> Hi,
>>
>> These are some settings that I am planning to start with for a 4GB RAM
>> dual
>> opteron system with a maximum of 100 connections:
>>
>>
>> shared_buffers 8192 (=67MB RAM)
>> sort_mem 4096 (=400MB RAM for 100 connections)
>> effective_cache_size 380000(@8KB  =3.04GB RAM)
>> vacuum_mem 32768 KB
>> wal_buffers 64
>> checkpoint_segments 8
>>
>> In theory, effective cache size is the amount of memory left over for the
>> OS
>> to cache the filesystem after running all programs and having 100 users
>> connected, plus a little slack.
>>
>> regards
>> Iain
>
>
> I'm not sure whether putting more RAM in my machine (i.e. 6 GB) would
> increase performance by more than 20%.
> Any comment please,
>
> Amrit Angsusingh
> Comcenter
> Sawanpracharuck Hospital
> Thailand
>
>


From:
"Iain"
Date:

Hi Cosimo,

I had read that before, so you are right. The amount of memory being used
could run much higher than I wrote.

In my case, I know that not all the connections are busy all the time
(this isn't a web application with thousands of users connecting to a pool),
so not all active connections will be doing sorts all the time. As far as I
can tell, sort memory is allocated as needed, so my estimate of 400MB should
still be reasonable, and I have plenty of unaccounted-for memory outside the
effective cache, so it shouldn't be a problem.

Presumably, that memory isn't needed after the result set is built.

If I understand correctly, there isn't any way to limit the total amount of
memory allocated for sorting, which means that you can't specify generous
sort_mem values to help out when there is spare capacity (few connections),
because in the worst case it could cause swapping when the system is busy.
In the not-so-bad case, the effective cache size estimate will just be
completely wrong.

Maybe a global sort memory limit would be a good idea, I don't know.

regards
Iain


> Iain wrote:
>
>> sort_mem 4096 (=400MB RAM for 100 connections)
>
> If I understand correctly, memory usage related to `sort_mem'
> is per connection *and* per sort.
> If every client runs a query with 3 sorts in its plan, you are
> going to need (in theory) 100 connections * 4Mb * 3 sorts,
> which is 1.2 Gb.
>
> Please correct me if I'm wrong...
>
> --
> Cosimo



From:
Cosimo Streppone
Date:

Iain wrote:

> sort_mem 4096 (=400MB RAM for 100 connections)

If I understand correctly, memory usage related to `sort_mem'
is per connection *and* per sort.
If every client runs a query with 3 sorts in its plan, you are
going to need (in theory) 100 connections * 4 MB * 3 sorts,
which is 1.2 GB.
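Spelled out:

```shell
# 100 connections x 4 MB sort_mem x 3 sorts per plan
echo "$((100 * 4 * 3)) MB"   # prints "1200 MB", i.e. ~1.2 GB
```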

Please correct me if I'm wrong...

--
Cosimo