Discussion: UPDATE performance degradation (6.5.1)


UPDATE performance degradation (6.5.1)

From: Oleg Bartunov
Date: Tue, 27 Jul 1999 12:51:07 +0400 (MSD)
Hi,

after I got the DBI logging to work, I ran several tests and noticed
performance degradation when doing sequential updates of *one* row.

I have 5 processes updating the same row, using
LOCK TABLE hits IN SHARE ROW EXCLUSIVE MODE

When I run 200 requests I get about 16 req/sec, which is quite enough
for my purposes. I expected the same speed if I simply increased the
number of requests, but it decreases: for 2,000 requests I get about
10 req/sec, and for 20,000 about 2.5 req/sec!
I see no reason for such performance degradation - as it stands there
is no way to use postgres for logging on a 24*7*365 Web site. Probably
this is a very specific case, where several processes update only one
row, but again, I see no reason for such big degradation. The hits
table itself contains only 1 row!
I'll try to eliminate httpd and perl from my test bench so that only
postgres is tested. I don't have such a tool right now - probably
someone has already written one? What tool can I use to test
concurrent updates?
Regards,    Oleg
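For reference, a harness for this kind of test does not need to be large. The sketch below is an illustration only, not a real benchmark tool: Python's bundled sqlite3 stands in for postgres so it runs without a server, and a threading.Lock plays the role of the table-level lock. Swapping in a real database driver would change only the connect call and the SQL.

```python
# Minimal concurrent-update harness: several threads repeatedly update
# the same single row, and we measure aggregate requests per second.
# sqlite3 is only a stand-in for the real database; the lock serializes
# writers the way LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE would.
import sqlite3
import threading
import time

def run_benchmark(workers: int = 5, requests_per_worker: int = 40) -> float:
    conn = sqlite3.connect(":memory:", check_same_thread=False)
    conn.execute("CREATE TABLE hits (msg_id INTEGER PRIMARY KEY, count INTEGER)")
    conn.execute("INSERT INTO hits VALUES (1, 0)")
    conn.commit()

    lock = threading.Lock()

    def worker() -> None:
        for _ in range(requests_per_worker):
            with lock:  # one updater at a time, like the table lock
                conn.execute("UPDATE hits SET count = count + 1 WHERE msg_id = 1")
                conn.commit()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start

    # Sanity check: no update was lost under concurrency.
    (count,) = conn.execute("SELECT count FROM hits WHERE msg_id = 1").fetchone()
    assert count == workers * requests_per_worker
    return (workers * requests_per_worker) / elapsed

if __name__ == "__main__":
    print(f"{run_benchmark():.1f} req/sec")
```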


This is my home machine: Linux 2.2.10, postgres 6.5.1.
Load is about 2-2.5.

Typical output of ps:

11:21[om]:/usr/local/apache/logs>psg disc
 1036  ?  S   24:17 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK
 1040  ?  R   24:09 /usr/local/pgsql/bin/postgres localhost httpd discovery idle
 1042  ?  S   24:02 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK
 1044  ?  R   23:51 /usr/local/pgsql/bin/postgres localhost httpd discovery idle
 1046  ?  S   23:49 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK
 1048  ?  S   23:47 /usr/local/pgsql/bin/postgres localhost httpd discovery LOCK

I see only one process with SELECT; this is what I expected when using
IN SHARE ROW EXCLUSIVE MODE. Right?

_____________________________________________________________
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83




Re: [HACKERS] UPDATE performance degradation (6.5.1)

From: Oleg Bartunov
Date:
Probably I have found the problem. After running my test, which had
become very slow, I looked at /usr/local/pgsql/data/base/discovery:

-rw-------   1 postgres users     5070848 Jul 27 16:14 hits
-rw-------   1 postgres users     1409024 Jul 27 16:14 hits_pkey

This is for a table with one row, after a lot of updates.
Too much. A vacuum analyze on this table was good medicine!
Is this a design problem?
Regards,    Oleg

On Tue, 27 Jul 1999, Oleg Bartunov wrote:

> Date: Tue, 27 Jul 1999 12:51:07 +0400 (MSD)
> From: Oleg Bartunov <oleg@sai.msu.su>
> To: pgsql-hackers@postgreSQL.org
> Subject: [HACKERS] UPDATE performance degradation (6.5.1)




Re: [HACKERS] UPDATE performance degradation (6.5.1)

From: Tom Lane
Date: Tue, 27 Jul 1999 10:39:40 -0400
Oleg Bartunov <oleg@sai.msu.su> writes:
> Probably I have found the problem. After running my test, which had
> become very slow, I looked at /usr/local/pgsql/data/base/discovery

> -rw-------   1 postgres users     5070848 Jul 27 16:14 hits
> -rw-------   1 postgres users     1409024 Jul 27 16:14 hits_pkey

> This is for table with one row  after a lot of updates.
> Too much. vacuum analyze this table was a good  medicine !

If the table contains only one row, why are you bothering with an
index on it?

> Is this a design problem ? 

Only that space in tables and indexes can't be re-used until vacuum.
I'm not sure if there's any good way around that or not...
        regards, tom lane
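The behavior Tom describes can be illustrated with a toy model of non-overwriting storage (my own sketch, not the actual 6.5 heap code): every UPDATE appends a fresh tuple version and merely marks the old one dead, so the file keeps growing, and each update must step over ever more dead versions until a vacuum compacts them away.

```python
# Toy model of a non-overwriting table: UPDATE appends a new version and
# marks the old one dead; dead versions occupy space (and slow scans)
# until vacuum() rewrites the heap with only the live tuples.
class ToyTable:
    def __init__(self) -> None:
        self.heap = []  # list of [live?, value] tuple versions

    def insert(self, value) -> None:
        self.heap.append([True, value])

    def update(self, new_value) -> None:
        for slot in self.heap:   # must scan past dead versions too:
            if slot[0]:          # this is why repeated updates get slower
                slot[0] = False
        self.heap.append([True, new_value])

    def vacuum(self) -> None:
        self.heap = [slot for slot in self.heap if slot[0]]

t = ToyTable()
t.insert(0)
for i in range(1000):
    t.update(i)
print(len(t.heap))  # 1001: one live version plus 1000 dead ones
t.vacuum()
print(len(t.heap))  # 1: vacuum reclaimed the dead versions
```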


Re: [HACKERS] UPDATE performance degradation (6.5.1)

From: Oleg Bartunov
Date:
On Tue, 27 Jul 1999, Tom Lane wrote:

> Date: Tue, 27 Jul 1999 10:39:40 -0400
> From: Tom Lane <tgl@sss.pgh.pa.us>
> To: Oleg Bartunov <oleg@sai.msu.su>
> Cc: pgsql-hackers@postgreSQL.org
> Subject: Re: [HACKERS] UPDATE performance degradation (6.5.1) 
> 
> Oleg Bartunov <oleg@sai.msu.su> writes:
> > Probably I have found the problem. After running my test, which had
> > become very slow, I looked at /usr/local/pgsql/data/base/discovery
> 
> > -rw-------   1 postgres users     5070848 Jul 27 16:14 hits
> > -rw-------   1 postgres users     1409024 Jul 27 16:14 hits_pkey
> 
> > This is for table with one row  after a lot of updates.
> > Too much. vacuum analyze this table was a good  medicine !
> 
> If the table contains only one row, why are you bothering with an
> index on it?
> 

This table with one row is just for testing. In production it will
contain many thousands of msg_ids. I haven't tested yet what happens
if I populate the table with thousands of rows, but I can imagine how
slow the updates will be. Ooh.


> > Is this a design problem ? 
> 
> Only that space in tables and indexes can't be re-used until vacuum.
> I'm not sure if there's any good way around that or not...

So I need a cron job to vacuum the database. I'm curious how mysql
works so fast and has no problems in a Web environment. I know some
sites with mysql logging and millions of updates every day.
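Such a cron job can be as simple as the entry below; the schedule is an example only, and the paths just follow the ones used elsewhere in this thread:

```shell
# crontab for the postgres user: vacuum the discovery database nightly
15 3 * * * /usr/local/pgsql/bin/psql discovery -c 'vacuum analyze'
```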
Oleg

18:54[om]:/usr/local/apache/comps/discovery/db>psql discovery -c 'select * from hits'
msg_id|count|first_access                |last_access
------+-----+----------------------------+----------------------------
  1463|44417|Tue 27 Jul 10:30:18 1999 MSD|Tue 27 Jul 18:44:31 1999 MSD
   123|58814|Mon 26 Jul 22:54:54 1999 MSD|Tue 27 Jul 10:29:54 1999 MSD
     4|  219|Mon 26 Jul 22:48:48 1999 MSD|Mon 26 Jul 22:49:02 1999 MSD
     2|  418|Mon 26 Jul 22:47:28 1999 MSD|Mon 26 Jul 22:48:12 1999 MSD
     1|  211|Mon 26 Jul 22:46:44 1999 MSD|Mon 26 Jul 22:47:09 1999 MSD
    13|    1|Sat 24 Jul 23:56:57 1999 MSD|
  1464|    1|Tue 27 Jul 18:17:51 1999 MSD|
(7 rows)

and after vacuum analyze:

-rw-------   1 postgres users        8192 Jul 27 18:54 hits
-rw-------   1 postgres users     1703936 Jul 27 18:54 hits_pkey

Why is hits_pkey so big? I have only 7 rows in the table.


> 
>             regards, tom lane
> 




Re: [HACKERS] UPDATE performance degradation (6.5.1)

From: Tom Lane
Date:
Oleg Bartunov <oleg@sai.msu.su> writes:
> and after vacuum analyze:
> -rw-------   1 postgres users        8192 Jul 27 18:54 hits
> -rw-------   1 postgres users     1703936 Jul 27 18:54 hits_pkey
> Why is hits_pkey so big? I have only 7 rows in the table.

Looks like vacuum reclaims the extra space in the table itself,
but does not do so with indexes.  Ugh.

I've thought for some time that vacuum ought to drop and rebuild
indexes instead of trying to update them.  This might be another
reason for doing that...
        regards, tom lane
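Tom's drop-and-rebuild idea can be sketched with the same kind of toy model (again an illustration of the idea, not the real btree code): if the index only ever accumulates one entry per tuple version, then rebuilding it from the live tuples is what discards the dead entries.

```python
# Toy model of index bloat: every tuple version ever written left an
# entry behind, and updating the index in place never removed them.
# Rebuilding from the live tuples starts from a clean, minimal index.
def rebuild_index(live_tuples):
    """Drop-and-rebuild: one entry per live tuple version only."""
    return {key: pos for pos, (key, _value) in enumerate(live_tuples)}

index_entries = []                     # append-only "index file"
for version in range(1000):            # 1000 updates of one logical row
    index_entries.append(("msg_id=1", version))
print(len(index_entries))              # 1000 entries, all but one dead

live_tuples = [("msg_id=1", 999)]      # the single surviving version
fresh_index = rebuild_index(live_tuples)
print(len(fresh_index))                # 1 entry after the rebuild
```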