Re: Index tuple killing code committed

From: Tatsuo Ishii
Subject: Re: Index tuple killing code committed
Date:
Msg-id: 20030903.001317.74752038.t-ishii@sra.co.jp
In response to: Index tuple killing code committed  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Index tuple killing code committed  (Tatsuo Ishii <t-ishii@sra.co.jp>)
           Re: Index tuple killing code committed  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
I found the following mail in my mail archive and tried the same test
with current 7.4.  Contrary to my expectation, 7.4 showed some
performance degradation over continuous pgbench runs:

$ pgbench -c 5 -t 1000 -n test
tps = 57.444037 (including connections establishing)
tps = 57.455300 (excluding connections establishing)

$ pgbench -c 5 -t 1000 -n test
tps = 54.125785 (including connections establishing)
tps = 54.134871 (excluding connections establishing)

$ pgbench -c 5 -t 1000 -n test
tps = 51.116465 (including connections establishing)
tps = 51.124878 (excluding connections establishing)

$ pgbench -c 5 -t 1000 -n test
tps = 50.410659 (including connections establishing)
tps = 50.420215 (excluding connections establishing)

$ pgbench -c 5 -t 1000 test
tps = 46.791980 (including connections establishing)
tps = 46.799837 (excluding connections establishing)

Any idea?

The data was initialized with pgbench -i -s 10.
--
Tatsuo Ishii

> From: Tom Lane <tgl@sss.pgh.pa.us>
> To: pgsql-hackers@postgresql.org
> Date: Fri, 24 May 2002 16:42:55 -0400
>
> Per previous discussion, I have committed changes that cause the btree
> and hash index methods to mark index tuples "killed" the first time they
> are fetched after becoming globally dead.  Subsequently the killed
> entries are not returned out of indexscans, saving useless heap fetches.
> (I haven't changed rtree and gist yet; they will need some internal
> restructuring to do this efficiently.  Perhaps Oleg or Teodor would like
> to take that on.)
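
[The mechanism described above can be sketched as a toy model. This is an
illustration only, not PostgreSQL source: the names IndexEntry, index_scan,
and the killed flag are stand-ins for the hint-bit machinery in the btree
code.]

```python
# Toy model of killed index tuples (illustrative only, not PostgreSQL code).
# The first index scan that fetches a heap tuple and finds it globally dead
# marks the index entry "killed"; later scans skip killed entries and so
# avoid the useless heap fetch.

class IndexEntry:
    def __init__(self, key, heap_tid):
        self.key = key            # indexed value
        self.heap_tid = heap_tid  # position of the row version in the heap
        self.killed = False       # hint: this entry points at a dead tuple

def index_scan(index, heap, key):
    """Return (visible tuples, number of heap fetches) for `key`."""
    results, heap_fetches = [], 0
    for entry in index:
        if entry.key != key or entry.killed:
            continue              # killed entries cost no heap access
        heap_fetches += 1
        row = heap[entry.heap_tid]
        if row["dead"]:
            entry.killed = True   # kill on first sight of a dead tuple
        else:
            results.append(row)
    return results, heap_fetches

heap = [{"val": 10, "dead": True}, {"val": 11, "dead": False}]
index = [IndexEntry(key=1, heap_tid=0), IndexEntry(key=1, heap_tid=1)]

r1, fetches1 = index_scan(index, heap, 1)  # 2 heap fetches, kills entry 0
r2, fetches2 = index_scan(index, heap, 1)  # 1 heap fetch: entry 0 skipped
```

[Both scans return the same one visible row, but the second scan does half
the heap fetches, which is where the pgbench improvement comes from.]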
> 
> This seems to make a useful improvement in pgbench results.  Yesterday's
> CVS tip gave me these results:
> 
> (Running postmaster with "-i -F -B 1024", other parameters at defaults,
> and pgbench initialized with "pgbench -i -s 10 bench")
> 
> $ time pgbench -c 5 -t 1000 -n bench
> tps = 26.428787(including connections establishing)
> tps = 26.443410(excluding connections establishing)
> real    3:09.74
> $ time pgbench -c 5 -t 1000 -n bench
> tps = 18.838304(including connections establishing)
> tps = 18.846281(excluding connections establishing)
> real    4:26.41
> $ time pgbench -c 5 -t 1000 -n bench
> tps = 13.541641(including connections establishing)
> tps = 13.545646(excluding connections establishing)
> real    6:10.19
> 
> Note the "-n" switches here to prevent vacuums between runs; the point
> is to observe the degradation as more and more dead tuples accumulate.
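
[Why "-n" matters can be sketched with a toy MVCC model (my illustration,
not from the thread): every pgbench UPDATE leaves the old row version
behind as a dead tuple, and without a VACUUM between runs each scan in the
next run has more versions to wade through.]

```python
# Toy MVCC model (illustrative, not PostgreSQL code): an update appends a
# new row version and leaves the old one dead; without vacuum, seqscan
# cost grows run over run because dead versions accumulate.

def run_updates(table, n_updates):
    for _ in range(n_updates):
        live = next(row for row in table if not row["dead"])
        live["dead"] = True                            # old version dies
        table.append({"val": live["val"] + 1, "dead": False})

def seqscan_cost(table):
    return len(table)   # a seqscan reads every version, dead or alive

table = [{"val": 0, "dead": False}]
costs = []
for _ in range(3):      # three back-to-back "pgbench runs", no vacuum
    run_updates(table, 1000)
    costs.append(seqscan_cost(table))
# costs == [1001, 2001, 3001]: each run is slower than the last
```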
> 
> With the just-committed changes I get (starting from a fresh start):
> 
> $ time pgbench -c 5 -t 1000 -n bench
> tps = 28.393271(including connections establishing)
> tps = 28.410117(excluding connections establishing)
> real    2:56.53
> $ time pgbench -c 5 -t 1000 -n bench
> tps = 23.498645(including connections establishing)
> tps = 23.510134(excluding connections establishing)
> real    3:33.89
> $ time pgbench -c 5 -t 1000 -n bench
> tps = 18.773239(including connections establishing)
> tps = 18.780936(excluding connections establishing)
> real    4:26.84
> 
> The remaining degradation is actually in seqscan performance, not
> indexscan --- unless one uses a much larger -s setting, the planner will
> think it ought to use seqscans for updating the "branches" and "tellers"
> tables, since those nominally have just a few rows; and there's no way
> to avoid scanning lots of dead tuples in a seqscan.  Forcing indexscans
> helps some in the former CVS tip:
> 
> $ PGOPTIONS="-fs" time pgbench -c 5 -t 1000 -n bench
> tps = 28.840678(including connections establishing)
> tps = 28.857442(excluding connections establishing)
> real     2:53.9
> $ PGOPTIONS="-fs" time pgbench -c 5 -t 1000 -n bench
> tps = 25.670674(including connections establishing)
> tps = 25.684493(excluding connections establishing)
> real     3:15.7
> $ PGOPTIONS="-fs" time pgbench -c 5 -t 1000 -n bench
> tps = 22.593429(including connections establishing)
> tps = 22.603928(excluding connections establishing)
> real     3:42.7
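
[For context, the "-fs" passed through PGOPTIONS above is the old backend
debug flag that forbids sequential-scan plans. On a modern server the
equivalent session setting would be spelled as follows (my hedged
translation, not from the thread):]

```shell
# Modern equivalent of the old "-fs" backend flag: disable seqscan
# plans for the session via PGOPTIONS.
PGOPTIONS="-c enable_seqscan=off" pgbench -c 5 -t 1000 -n bench
```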
> 
> and with the changes I get:
> 
> $ PGOPTIONS=-fs time pgbench -c 5 -t 1000 -n bench
> tps = 29.445004(including connections establishing)
> tps = 29.463948(excluding connections establishing)
> real     2:50.3
> $ PGOPTIONS=-fs time pgbench -c 5 -t 1000 -n bench
> tps = 30.277968(including connections establishing)
> tps = 30.301363(excluding connections establishing)
> real     2:45.6
> $ PGOPTIONS=-fs time pgbench -c 5 -t 1000 -n bench
> tps = 30.209377(including connections establishing)
> tps = 30.230646(excluding connections establishing)
> real     2:46.0
> 
> 
> This is the first time I have ever seen repeated pgbench runs without
> substantial performance degradation.  Not a bad result for a Friday
> afternoon...
> 
>             regards, tom lane
> 


In the pgsql-hackers list, by sent date:

Previous
From: "Alexander Schulz"
Date:
Subject: Re: Win32 native port
Next
From: "Jeroen T. Vermeulen"
Date:
Subject: Re: C++ and libpq