Performance problem. Could it be related to 8.3-beta4?

From: Clodoaldo
Subject: Performance problem. Could it be related to 8.3-beta4?
Date:
Msg-id: a595de7a0801041645g36270371h758add5ea9962679@mail.gmail.com
Replies: Re: Performance problem. Could it be related to 8.3-beta4?  (Tom Lane <tgl@sss.pgh.pa.us>)
         Re: Performance problem. Could it be related to 8.3-beta4?  (Greg Smith <gsmith@gregsmith.com>)
List: pgsql-general
I built a new system and installed 8.3-beta4. While testing it I
noticed a big performance gap between the production system running
8.2.5 and the new one. The new one, in spite of having much better
hardware, is much slower.

The same insert query takes 20 minutes in the production system and 2
hours in the new one.
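
A minimal sketch of how such a timing can be taken from psql, using the
\timing meta-command (this is only an illustration, not necessarily how
the figures above were measured):

\timing
insert into usuarios (data, usuario, pontos, wus)
  select ... ;   -- the full statement appears in the explain below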

The machines' specs:
Production: Fedora Core 6, postgresql 8.2.5, AMD XP2600, 2 GB mem, two
7200 rpm IDE disks with pg_xlog alone on the second disk.
New: Fedora 8, postgresql 8.3-beta4, Core Duo 2.33 GHz, 4 GB mem, two
7200 rpm SATA disks in RAID 1.

The relevant (IMHO) lines of postgresql.conf:
Production system:
shared_buffers = 32MB                   # min 128kB or max_connections*16kB
max_prepared_transactions = 150         # can be 0 or more
work_mem = 32MB                         # min 64kB
maintenance_work_mem = 512MB            # min 1MB
max_stack_depth = 4MB                   # min 100kB
vacuum_cost_delay = 1000                        # 0-1000 milliseconds
vacuum_cost_limit = 5000                # 0-10000 credits
max_fsm_pages = 1000000         # min max_fsm_relations*16, 6 bytes each
bgwriter_lru_maxpages = 0               # 0-1000 buffers max written/round
bgwriter_all_maxpages = 0               # 0-1000 buffers max written/round
fsync = off                             # turns forced synchronization on or off
full_page_writes = on                  # recover from partial page writes
wal_buffers = 512kB                     # min 32kB
checkpoint_segments = 60                # in logfile segments, min 1, 16MB
checkpoint_timeout = 15min              # range 30s-1h
effective_cache_size = 512MB
geqo_effort = 8                 # range 1-10
#autovacuum = off                       # enable autovacuum subprocess?

New system's postgresql.conf:
shared_buffers = 32MB                   # min 128kB or max_connections*16kB
#temp_buffers = 8MB                     # min 800kB
max_prepared_transactions = 50          # can be 0 or more
work_mem = 32MB                         # min 64kB
maintenance_work_mem = 768MB            # min 1MB
max_stack_depth = 4MB                   # min 100kB
max_fsm_pages = 1000000                 # min max_fsm_relations*16, 6 bytes
vacuum_cost_delay = 1000                        # 0-1000 milliseconds
vacuum_cost_limit = 5000                # 0-10000 credits
bgwriter_lru_maxpages = 0               # 0-1000 max buffers written/round
fsync = off                             # turns forced synchronization on or off
full_page_writes = off
wal_buffers = 1024kB                    # min 32kB
checkpoint_segments = 60                # in logfile segments, min 1, 16MB
checkpoint_timeout = 10min              # range 30s-1h
effective_cache_size = 768MB
geqo_effort = 8                 # range 1-10
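
To rule out one of the servers silently running values other than what is
in the file, the effective settings can be compared directly; a minimal
sketch against pg_settings (present in both 8.2 and 8.3), listing only the
parameters shown above:

select name, setting
  from pg_settings
 where name in ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                'fsync', 'full_page_writes', 'wal_buffers',
                'checkpoint_segments', 'checkpoint_timeout',
                'vacuum_cost_delay', 'vacuum_cost_limit',
                'effective_cache_size')
 order by name;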


Explain in the production server:

fahstats=# explain
fahstats-# insert into usuarios (
fahstats(#   data,
fahstats(#   usuario,
fahstats(#   pontos,
fahstats(#   wus
fahstats(#   )
fahstats-#   select
fahstats-#     (select data_serial from data_serial) as data,
fahstats-#     ui.usuario_serial as usuario,
fahstats-#     sum(pontos) as pontos,
fahstats-#     sum(wus) as wus
fahstats-#   from usuarios_temp as ut inner join usuarios_indice as ui
fahstats-#     on ut.usuario = ui.usuario_nome and ut.n_time = ui.n_time
fahstats-#   group by data, ui.usuario_serial
fahstats-#   ;
                                              QUERY PLAN
-------------------------------------------------------------------------------------------------------
 Subquery Scan "*SELECT*"  (cost=300078.40..324227.64 rows=878154 width=20)
   ->  HashAggregate  (cost=300078.40..313250.71 rows=878154 width=12)
         InitPlan
           ->  Seq Scan on data_serial  (cost=0.00..31.40 rows=2140 width=4)
         ->  Merge Join  (cost=101968.41..249646.83 rows=5040017 width=12)
               Merge Cond: ((ut.n_time = ui.n_time) AND ((ut.usuario)::text = "inner"."?column4?"))
               ->  Index Scan using usuarios_temp_ndx on usuarios_temp ut  (cost=0.00..55060.07 rows=883949 width=26)
               ->  Sort  (cost=101968.41..104163.79 rows=878154 width=22)
                     Sort Key: ui.n_time, (ui.usuario_nome)::text
                     ->  Seq Scan on usuarios_indice ui  (cost=0.00..15276.54 rows=878154 width=22)
(10 rows)

The same explain in the new server:

 Subquery Scan "*SELECT*"  (cost=313715.35..337863.02 rows=878097 width=20)
   ->  HashAggregate  (cost=313715.35..326886.81 rows=878097 width=12)
         InitPlan
           ->  Seq Scan on data_serial  (cost=0.00..34.00 rows=2400 width=4)
         ->  Merge Join  (cost=101809.80..257473.94 rows=5620741 width=12)
               Merge Cond: ((ut.n_time = ui.n_time) AND ((ut.usuario)::text = (ui.usuario_nome)::text))
               ->  Index Scan using usuarios_temp_ndx on usuarios_temp ut  (cost=0.00..52883.91 rows=883856 width=23)
               ->  Sort  (cost=101809.80..104005.04 rows=878097 width=19)
                     Sort Key: ui.n_time, ui.usuario_nome
                     ->  Seq Scan on usuarios_indice ui  (cost=0.00..15123.97 rows=878097 width=19)
(10 rows)
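
Since the estimated costs are nearly identical on the two servers, plain
EXPLAIN does not show where the extra time goes. A sketch of the obvious
next step is EXPLAIN ANALYZE on the select part alone, so it can be
repeated without inserting rows:

explain analyze
  select
    (select data_serial from data_serial) as data,
    ui.usuario_serial as usuario,
    sum(pontos) as pontos,
    sum(wus) as wus
  from usuarios_temp as ut inner join usuarios_indice as ui
    on ut.usuario = ui.usuario_nome and ut.n_time = ui.n_time
  group by data, ui.usuario_serial;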


I thought the new RAID 1 array might have some problem, but it is in fact faster:

iostat -d -k in the new server:

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.00         0.01         0.00       1105        145
sdb              35.26       442.22       992.55   39105956   87771701
md1             264.72       442.22       990.33   39105169   87574680
md0               0.00         0.01         0.00        724        108
md2               0.00         0.01         0.00        516          5

iostat -d -k in the production server:

Device:            tps    kB_read/s    kB_wrtn/s    kB_read     kB_wrtn
hda               1.64         5.89       328.40   13789188   769151093
hdc              60.32       194.32       537.77  455123494  1259522944
dm-0             82.50         5.86       328.39   13736257   769116252
dm-1              0.01         0.02         0.01      51212       34832
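
One more check that can be run on both servers is whether the tables
involved have been vacuumed and analyzed recently, since stale statistics
could change runtime behaviour even when the plans look alike; a sketch
using pg_stat_user_tables (the last_vacuum/last_analyze columns exist in
8.2 and later):

select relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
  from pg_stat_user_tables
 where relname in ('usuarios', 'usuarios_temp', 'usuarios_indice');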


Any idea about what could be wrong or how to investigate it?

Regards, Clodoaldo Pinto Neto
