Re: postgresql-8.0.1 performance tuning

From     Martin Fandel
Subject  Re: postgresql-8.0.1 performance tuning
Date
Msg-id   1117615831.7612.113.camel@fandelm.ecommit.de
In response to  Re: postgresql-8.0.1 performance tuning  (John A Meinel <john@arbash-meinel.com>)
List     pgsql-performance
Hi John,

thank you very much for the answer :). I moved the pg_xlog to another
partition and made a symlink to it. Now the database is much faster
than before. A sample select that took 68 seconds before now finishes
in 58 seconds :).
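The mv-and-symlink trick Martin describes can be sketched on throwaway
directories like this (the real commands would target
/var/lib/pgsql/data/pg_xlog, with the server stopped first):

```shell
# Demo of relocating pg_xlog via mv + symlink, using temp dirs as
# stand-ins for /var/lib/pgsql/data and a directory on the other array.
DATA=$(mktemp -d)       # stands in for /var/lib/pgsql/data
NEWHOME=$(mktemp -d)    # stands in for a directory on /dev/sda
mkdir "$DATA/pg_xlog"

mv "$DATA/pg_xlog" "$NEWHOME/pg_xlog"
ln -s "$NEWHOME/pg_xlog" "$DATA/pg_xlog"
# $DATA/pg_xlog is now a symlink pointing at the new location;
# postgres follows it transparently on the next start.
```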

I will test the other changes today as well and will send feedback
after testing. :)

Thanks a lot. I'm still quite confused about tuning the postgresql-db. #:-)

best regards
Martin


On Tuesday, 31.05.2005, at 13:46 -0500, John A Meinel wrote:
> Martin Fandel wrote:
>
> > Hi @ all,
> >
> > i'm trying to tune my postgresql-db but i don't know if the values
> > are set right.
> >
> > I use the following environment for the postgres-db:
> >
> > ######### Hardware ############
> > cpu: 2x P4 3Ghz
> > ram: 1024MB DDR 266Mhz
> >
> > partitions:
> > /dev/sda3              23G  9,6G   13G  44% /
> > /dev/sda1              11G  156M  9,9G   2% /var
> > /dev/sdb1              69G   13G   57G  19% /var/lib/pgsql
> >
> > /dev/sda is in raid 1  (2x 35GB / 10000upm / sca)
> > /dev/sdb is in raid 10 (4x 35GB / 10000upm / sca)
> > ######### /Hardware ############
>
> You probably want to put the pg_xlog directory onto /dev/sda rather than
> having it on /dev/sdb. Keeping it separate from the data usually boosts
> performance a lot. I believe you can just mv it to a different
> directory, and then recreate it as a symlink. (Stop the database first :)
>
> >
> > ######### Config ############
> > /etc/sysctl.conf:
> > kernel.shmall = 786432000
> > kernel.shmmax = 786432000
> >
> Not really sure about these two.
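One thing worth noting about those two sysctls: shmmax is in bytes but
shmall is in pages, so setting both to the same number (as above) makes
shmall far larger than intended. A rough recalculation for this 1 GB
machine; the quarter-of-RAM cap is an assumption for illustration, not a
recommendation from this thread:

```shell
# shmmax: max size of a single shared memory segment, in bytes.
# shmall: total shared memory allowed system-wide, in 4096-byte pages.
RAM_BYTES=$((1024 * 1024 * 1024))   # 1 GB, as in the hardware list
SHMMAX=$((RAM_BYTES / 4))           # cap at a quarter of RAM (assumption)
SHMALL=$((SHMMAX / 4096))           # same limit, expressed in pages
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
```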
>
> > /etc/fstab:
> > /dev/sdb1 /var/lib/pgsql reiserfs
> > acl,user_xattr,noatime,data=writeback 1 2
> >
> Seems decent.
>
> > /var/lib/pgsql/data/postgresql.conf
> > superuser_reserved_connections  = 2
> > shared_buffers                  = 3000
> > work_mem                        = 131072
> > maintenance_work_mem            = 131072
>
> These both seem pretty large. But it depends on how many concurrent
> connections doing sorting/hashing/etc you expect. If you are only
> expecting 1 connection, these are probably fine. Otherwise with 1GB of
> RAM I would probably make work_mem more like 4096/8192.
> Remember, running out of work_mem means postgres will spill to disk,
> slowing that query. Running out of RAM causes the system to swap, making
> everything slow.
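John's sizing point can be made concrete with a rough worst-case
estimate; the 20 concurrent sort/hash operations below are an assumed
number, not something from this thread:

```shell
# work_mem is per sort/hash node, per backend, in kB (8.0 default unit).
WORK_MEM_KB=131072    # the value from the config above (128 MB)
CONCURRENT=20         # assumed concurrent sorts/hashes across backends
TOTAL_MB=$((WORK_MEM_KB * CONCURRENT / 1024))
echo "worst case: ${TOTAL_MB} MB"   # well beyond the 1024 MB of RAM
```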
>
> > max_stack_depth                 = 2048
> > max_fsm_pages                   = 20000
> > max_fsm_relations               = 1000
> > max_files_per_process           = 1000
> > vacuum_cost_delay               = 10
> > vacuum_cost_page_hit            = 1
> > vacuum_cost_page_miss           = 10
> > vacuum_cost_page_dirty          = 20
> > vacuum_cost_limit               = 200
> > bgwriter_delay                  = 200
> > bgwriter_percent                = 1
> > bgwriter_maxpages               = 100
> > fsync                           = true
> > wal_sync_method                 = fsync
> > wal_buffers                     = 64
> > commit_delay                    = 0
> > commit_siblings                 = 5
> > checkpoint_segments             = 256
> > checkpoint_timeout              = 900
> > checkpoint_warning              = 30
> > effective_cache_size            = 10000
> > random_page_cost                = 4
> > cpu_tuple_cost                  = 0.01
> > cpu_index_tuple_cost            = 0.001
> > cpu_operator_cost               = 0.0025
> > geqo                            = true
> > geqo_threshold                  = 12
> > geqo_effort                     = 5
> > geqo_pool_size                  = 0
> > geqo_generations                = 0
> > geqo_selection_bias             = 2.0
> > deadlock_timeout                = 1000
> > max_locks_per_transaction       = 64
> > ######### /Config ############
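One more sizing note on the config block above: shared_buffers and
effective_cache_size are both counted in 8 kB pages in 8.0, so the
values translate to fairly small amounts of memory on a 1 GB machine.
The conversion below is just arithmetic, not a tuning recommendation:

```shell
# Both settings are in 8 kB pages in PostgreSQL 8.0.
SHARED_BUFFERS=3000
EFFECTIVE_CACHE=10000
SB_MB=$((SHARED_BUFFERS * 8 / 1024))     # ~23 MB of shared buffers
EC_MB=$((EFFECTIVE_CACHE * 8 / 1024))    # planner assumes ~78 MB of cache
echo "shared_buffers       = ${SB_MB} MB"
echo "effective_cache_size = ${EC_MB} MB"
```

With most of the 1 GB free for the OS page cache, the real cache is
likely several times larger than what the planner is being told.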
> >
> > ######### Transactions ############
> > we have about 115-300 transactions/min in about 65 tables.
> > ######### /Transactions ############
> >
> > I'm really new to postgres, so I need some guidance for setting these
> > parameters in the postgresql- and system-config. I can't find standard
> > calculations for this. :/ The postgresql documentation doesn't help me
> > pick the best values.
> >
> > The database must be highly available. I configured rsync to sync the
> > complete /var/lib/pgsql directory to my hot standby. On the hot
> > standby I will make the dumps of the database, to take that load off
> > the master-db.
> >
> I don't think an rsync of a running database is completely safe. You
> should probably look more into Slony.
> http://slony.info
>
> It is a single-master asynchronous replication system. I believe it is
> pretty easy to set up, and does what you really want.
>
> > In my tests the synchronization works fine. I synchronised the whole
> > directory and restarted the database on the hot standby. While
> > restarting, postgresql replayed the old (not yet archived) WALs and
> > the database on my hot standby was consistent. Is this solution
> > recommended? Or must I use archived WALs together with real
> > system-snapshots?
> >
> > best regards,
> >
> > Martin Fandel
>
> John
> =:->
>
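The archived-WAL approach Martin asks about maps onto PostgreSQL 8.0's
point-in-time recovery feature: take a filesystem-level base backup
between pg_start_backup() and pg_stop_backup(), and continuously ship
completed WAL segments with archive_command. A minimal sketch, where
the archive path is made up for illustration:

```
# postgresql.conf on the master (/mnt/wal_archive is a hypothetical path):
archive_command = 'cp "%p" /mnt/wal_archive/"%f"'
```

The standby is then rebuilt from the base backup plus the archived
segments; unlike an rsync of a live data directory, this cannot capture
a torn, inconsistent snapshot.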

