Thread: postgres getting slow under heavy load though autovacuum is enabled

postgres getting slow under heavy load though autovacuum is enabled

From:
"tamanna madaan"
Date:

Hi

I am using postgres 8.1.2 with slony 1.1.5 for replication between two nodes. A very high number of db operations (2.8 million inserts, 1.4 million updates and 450,000 deletes) is done on the db in one transaction, and this is repeated 5-6 times a day at an interval of, say, 2 hours. This process runs for 5 consecutive days. It is observed that the db is getting very slow over time. The number of dead tuples in the pg_listener, sl_log_1 and sl_seqlog tables keeps increasing, even though I have autovacuum enabled and the slony-related tables (sl_log_1, sl_seqlog etc.) are configured not to be processed by autovacuum. Please let me know what could be the reason for the increasing dead tuples in these tables and postgres getting slow.

Thanks..
Tamanna

Re: postgres getting slow under heavy load though autovacuum is enabled

From:
Grzegorz Jaśkiewicz
Date:
On Tue, Jun 9, 2009 at 9:37 AM, tamanna
madaan <tamanna.madan@globallogic.com> wrote:
> Hi
>
> I am using postgres 8.1.2 with slony 1.1.5 for replication between two
> nodes. A very high number of db operations (2.8 million inserts, 1.4
> million updates and 450,000 deletes) is done on the db in one
> transaction, and this is repeated 5-6 times a day at an interval of,
> say, 2 hours. This process runs for 5 consecutive days. It is observed
> that the db is getting very slow over time. The number of dead tuples
> in pg_listener, sl_log_1 and sl_seqlog keeps increasing, even though I
> have autovacuum enabled and the slony-related tables (sl_log_1,
> sl_seqlog etc.) are configured not to be processed by autovacuum.
> Please let me know what could be the reason for the increasing dead
> tuples in these tables and postgres getting slow.

Increase the FSM size in the config.
Upgrade to the newest release of the 8.1.x branch.
The Slony tables should perhaps be vacuumed too.
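A sketch of what that advice translates to; the values are illustrative only, and the schema name "_mycluster" is a placeholder (Slony keeps its tables in a schema named "_<clustername>"):

```
# postgresql.conf (8.1 branch): the free space map must be large enough
# to track all pages freed between vacuums, or freed space is never
# reused and tables keep growing.
max_fsm_pages = 2000000       # illustrative; check VACUUM VERBOSE output
max_fsm_relations = 10000
```

```sql
-- Manual cleanup of the tables that were excluded from autovacuum
-- ("_mycluster" is a placeholder for the actual Slony cluster schema):
VACUUM ANALYZE _mycluster.sl_log_1;
VACUUM ANALYZE _mycluster.sl_seqlog;
VACUUM ANALYZE pg_catalog.pg_listener;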



--
GJ

Re: postgres getting slow under heavy load though autovacuum is enabled

From:
Bill Moran
Date:
In response to "tamanna madaan" <tamanna.madan@globallogic.com>:
>
> I am using postgres 8.1.2 with slony 1.1.5 for replication between two
> nodes. A very high number of db operations (2.8 million inserts, 1.4
> million updates and 450,000 deletes) is done on the db in one
> transaction, and this is repeated 5-6 times a day at an interval of,
> say, 2 hours. This process runs for 5 consecutive days. It is observed
> that the db is getting very slow over time. The number of dead tuples
> in pg_listener, sl_log_1 and sl_seqlog keeps increasing, even though I
> have autovacuum enabled and the slony-related tables (sl_log_1,
> sl_seqlog etc.) are configured not to be processed by autovacuum.
> Please let me know what could be the reason for the increasing dead
> tuples in these tables and postgres getting slow.

The slony docs state, and I quote:
"Long running transactions are Evil"
http://slony.info/documentation/slonyadmin.html#BESTPRACTICES

I'm going to guess that a single transaction with multiple millions of
tuple changes is about as evil as it gets.

This is a known shortcoming of Slony.  You're going to need a carefully
tuned vacuum, a well-designed schema, and adequate hardware to keep a
system like that healthy.  That is, if you can't figure out a way to
avoid the huge transactions.
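Avoiding the huge transactions amounts to splitting the change set into separately committed batches. A minimal sketch of that batching logic (the helper names and the injected apply_batch callback are hypothetical; real loading code would commit each batch through psycopg2 or similar):

```python
def chunked(rows, batch_size):
    """Yield successive slices of at most batch_size rows."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]


def load_in_batches(rows, batch_size, apply_batch):
    """Apply rows in independent transactions of batch_size rows each.

    apply_batch(batch) is expected to BEGIN/COMMIT one transaction per
    call, so Slony sees many small sl_log_1 batches instead of one huge
    one.  It is injected here so the batching logic stays testable.
    """
    batches = 0
    for batch in chunked(rows, batch_size):
        apply_batch(batch)  # one COMMIT per batch
        batches += 1
    return batches
```

The trade-off is losing all-or-nothing atomicity for the whole load, which is usually acceptable for bulk imports that can be resumed.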

Considering you mention nothing about your hardware, it's possible that
you simply don't have enough.  We do some huge transactions over Slony
(although not into the millions per transaction) but we have enough
free RAM, free disk space, and free CPU cycles to clean up after it so
it's not hurting us.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: postgres getting slow under heavy load though autovacuum is enabled

From:
Dimitri Fontaine
Date:
Bill Moran <wmoran@potentialtech.com> writes:

> In response to "tamanna madaan" <tamanna.madan@globallogic.com>:
>> [...]
>
> The slony docs state, and I quote:
> "Long running transactions are Evil"
> http://slony.info/documentation/slonyadmin.html#BESTPRACTICES
>
> I'm going to guess that a single transaction with multiple millions of
> tuple changes is about as evil as it gets.
>
> This is a known shortcoming of Slony.  You're going to need a carefully
> tuned vacuum, a well-designed schema, and adequate hardware to keep a
> system like that healthy.  That is, if you can't figure out a way to
> avoid the huge transactions.

Ok, if we're going to ask about the environment leading to the problem,
I have to point this out:
  http://wiki.postgresql.org/wiki/Londiste_Tutorial#Londiste_is_eating_all_my_CPU_and_lag_is_raising

With Skytools (Londiste), you can set pgq_lazy_fetch so that the
replicas use cursors to consume big batches of events, and it runs
smoothly.
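For illustration, the setting goes in the consumer's configuration file; the job name and value here are made up, so check the Skytools documentation for the defaults:

```
# londiste.ini (illustrative)
[londiste]
job_name = replica_to_node2

# Fetch events from the queue through a cursor, this many rows at a
# time, instead of pulling the whole batch into memory at once:
pgq_lazy_fetch = 500
```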

HTH, Regards,
--
dim

Re: postgres getting slow under heavy load though autovacuum is enabled

From:
Vick Khera
Date:
2009/6/9 Grzegorz Jaśkiewicz <gryzman@gmail.com>:
> The slony tables perhaps should be vacuumed too.
>

slony vacuums its own tables as necessary.
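One way to check whether that built-in vacuuming is actually keeping up is the pgstattuple contrib module (assuming it is installed; "_mycluster" is again a placeholder for the real Slony cluster schema):

```sql
-- dead_tuple_count and dead_tuple_percent in the output show whether
-- dead rows are accumulating faster than they are being vacuumed:
SELECT * FROM pgstattuple('pg_catalog.pg_listener');
SELECT * FROM pgstattuple('_mycluster.sl_log_1');
```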