Thread: Deteriorating performance when loading large objects


Deteriorating performance when loading large objects

From:
"Vegard Bønes"
Date:
Hi,

I have a problem with large objects in postgresql 8.1: The performance of
loading large objects into a database goes way down after a few days of
operation.

I have a cron job kicking in twice a day, which generates and loads around
6000 large objects of 3.7MB each. Each night, old data is deleted, so
there are never more than 24000 large objects in the database.
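
The loading itself boils down to something like the following for each object
(simplified; the real table and column names are different):

  -- illustrative only: server-side lo_import reads the file on the database host
  INSERT INTO generated_data (produced_at, data_ref)
  VALUES (now(), lo_import('/path/to/generated/file.dat'));

so each file becomes one large object, and its OID is stored in an ordinary
table that references it.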

If I start loading on a freshly installed database, the load time is around
13 minutes, including generating the data to be stored. If I let the database
run for a few days, this takes much longer: after one or two days the load
time climbs to almost an hour, with logs indicating that the extra time is
spent solely on transferring the large objects from file to database.

Turning autovacuum on or off seems to have no effect on this.

I have only made the following changes to the default postgresql.conf file:
max_fsm_pages = 25000000
vacuum_cost_delay = 10
checkpoint_segments = 256

So, my question for you is: Why does this happen, and what can I do about it?

Regards,

Vegard Bønes



Re: Deteriorating performance when loading large objects

From:
Ivan Voras
Date:
Vegard Bønes wrote:
> Hi,
>
> I have a problem with large objects in postgresql 8.1: The performance of
> loading large objects into a database goes way down after a few days of
> operation.
>
> I have a cron job kicking in twice a day, which generates and loads around
> 6000 large objects of 3.7MB each. Each night, old data is deleted, so
> there is never more than 24000 large object in the database.

> So, my question for you is: Why does this happen, and what can I do about it?

Try putting a "vacuumdb -zf" command as a cron job after the data is
deleted.
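
For example, something along these lines in the postgres user's crontab (the
time and database name are only placeholders; schedule it right after your
delete job finishes):

  # full vacuum + analyze of the whole database after the nightly delete
  30 3 * * *  vacuumdb -zf mydb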



Re: Deteriorating performance when loading large objects

From:
Tom Lane
Date:
"=?iso-8859-1?Q?Vegard_B=F8nes?=" <vegard.bones@met.no> writes:
> I have a problem with large objects in postgresql 8.1: The performance of
> loading large objects into a database goes way down after a few days of
> operation.

> I have a cron job kicking in twice a day, which generates and loads around
> 6000 large objects of 3.7MB each. Each night, old data is deleted, so
> there is never more than 24000 large object in the database.

Are you sure you're deleting the large objects themselves (ie,
lo_unlink), and not just deleting some references to them?

A manual "vacuum verbose" on pg_largeobject might be informative.

            regards, tom lane

Re: Deteriorating performance when loading large objects

From:
"Vegard Bønes"
Date:
> "=?iso-8859-1?Q?Vegard_B=F8nes?=" <vegard.bones@met.no> writes:
>> I have a problem with large objects in postgresql 8.1: The performance
>> of loading large objects into a database goes way down after a few
>> days of operation.
>
>> I have a cron job kicking in twice a day, which generates and loads
>> around 6000 large objects of 3.7MB each. Each night, old data is
>> deleted, so there is never more than 24000 large object in the
>> database.
>
> Are you sure you're deleting the large objects themselves (ie,
> lo_unlink), and not just deleting some references to them?
>
> A manual "vacuum verbose" on pg_largeobject might be informative.


I do call lo_unlink, via a trigger function. Also, a SELECT count(distinct
loid) FROM pg_largeobject yields the same count as the corresponding query
on the table that references the large objects.
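
For reference, the trigger is more or less the following (names simplified;
the real table and column differ):

  CREATE FUNCTION remove_blob() RETURNS trigger AS
  $$
  BEGIN
      -- OLD.data_ref holds the OID of the large object the deleted row points to
      PERFORM lo_unlink(OLD.data_ref);
      RETURN OLD;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER remove_blob_trigger
      BEFORE DELETE ON generated_data
      FOR EACH ROW EXECUTE PROCEDURE remove_blob();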

Running VACUUM VERBOSE pg_largeobject took quite some time. Here's the
output:

INFO:  vacuuming "pg_catalog.pg_largeobject"
INFO:  index "pg_largeobject_loid_pn_index" now contains 11060658 row
versions in 230587 pages
DETAIL:  178683 index pages have been deleted, 80875 are currently reusable.
CPU 0.92s/0.10u sec elapsed 199.38 sec.
INFO:  "pg_largeobject": found 0 removable, 11060658 nonremovable row
versions in 6849398 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 84508215 unused item pointers.
0 pages are entirely empty.
CPU 0.98s/0.10u sec elapsed 4421.17 sec.
VACUUM

I will try to run VACUUM FULL ANALYZE after the next delete tonight, as
suggested by Ivan Voras in another post. But as I understand it, this will
put an exclusive lock on whatever table is being vacuumed, so it is not
really an option for the database in question, as it needs to be
accessible 24 hours a day.

Is there any other possible solution to this?

As a side note, I have noticed that loading times seem to have stabilized
at just above an hour.

Regards,

Vegard Bønes




Re: Deteriorating performance when loading large objects

From:
Tom Lane
Date:
"=?iso-8859-1?Q?Vegard_B=F8nes?=" <vegard.bones@met.no> writes:
> Running VACUUM VERBOSE pg_largeobject took quite some time. Here's the
> output:

> INFO:  vacuuming "pg_catalog.pg_largeobject"
> INFO:  index "pg_largeobject_loid_pn_index" now contains 11060658 row
> versions in 230587 pages
> DETAIL:  178683 index pages have been deleted, 80875 are currently reusable.
> CPU 0.92s/0.10u sec elapsed 199.38 sec.
> INFO:  "pg_largeobject": found 0 removable, 11060658 nonremovable row
> versions in 6849398 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 84508215 unused item pointers.
> 0 pages are entirely empty.
> CPU 0.98s/0.10u sec elapsed 4421.17 sec.
> VACUUM

Hmm ... although you have no dead rows now, the very large number of
unused item pointers suggests that there were times in the past when
pg_largeobject didn't get vacuumed often enough.  You need to look at
your vacuuming policy.  If you're using autovacuum, it might need to have
its parameters adjusted.  Otherwise, how often are you vacuuming, and
are you doing it as superuser?
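
For instance (8.1 settings; the values below are only illustrative starting
points, not a recommendation for your particular workload):

  autovacuum = on
  autovacuum_naptime = 60                # seconds between autovacuum checks
  autovacuum_vacuum_threshold = 500      # base number of dead rows before a table is vacuumed
  autovacuum_vacuum_scale_factor = 0.1   # plus this fraction of the table's rows
  autovacuum_vacuum_cost_delay = 10      # keep the cost-based delay modest so vacuum keeps up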

> I will try to run VACUUM FULL ANALYZE after the next delete tonight, as
> suggested by Ivan Voras in another post.

Actually, a CLUSTER might be more effective.

            regards, tom lane

Re: Deteriorating performance when loading large objects

From:
Mario Weilguni
Date:
Tom Lane wrote:
> "=?iso-8859-1?Q?Vegard_B=F8nes?=" <vegard.bones@met.no> writes:
>
>> Running VACUUM VERBOSE pg_largeobject took quite some time. Here's the
>> output:
>>
>
>
>> INFO:  vacuuming "pg_catalog.pg_largeobject"
>> INFO:  index "pg_largeobject_loid_pn_index" now contains 11060658 row
>> versions in 230587 pages
>> DETAIL:  178683 index pages have been deleted, 80875 are currently reusable.
>> CPU 0.92s/0.10u sec elapsed 199.38 sec.
>> INFO:  "pg_largeobject": found 0 removable, 11060658 nonremovable row
>> versions in 6849398 pages
>> DETAIL:  0 dead row versions cannot be removed yet.
>> There were 84508215 unused item pointers.
>> 0 pages are entirely empty.
>> CPU 0.98s/0.10u sec elapsed 4421.17 sec.
>> VACUUM
>>
>
> Hmm ... although you have no dead rows now, the very large number of
> unused item pointers suggests that there were times in the past when
> pg_largeobject didn't get vacuumed often enough.  You need to look at
> your vacuuming policy.  If you're using autovacuum, it might need to have
> its parameters adjusted.  Otherwise, how often are you vacuuming, and
> are you doing it as superuser?
>
>
>> I will try to run VACUUM FULL ANALYZE after the next delete tonight, as
>> suggested by Ivan Voras in another post.
>>
>
> Actually, a CLUSTER might be more effective.
>
>             regards, tom lane
>
>

Does CLUSTER really help here? On my 8.2 database, I get:
CLUSTER pg_largeobject_loid_pn_index on pg_largeobject ;
ERROR:  "pg_largeobject" is a system catalog


Has this changed in >= 8.3?

Re: Deteriorating performance when loading large objects

From:
Vegard Bønes
Date:
Tom Lane wrote:
>> INFO:  vacuuming "pg_catalog.pg_largeobject"
>> INFO:  index "pg_largeobject_loid_pn_index" now contains 11060658 row
>> versions in 230587 pages
>> DETAIL:  178683 index pages have been deleted, 80875 are currently reusable.
>> CPU 0.92s/0.10u sec elapsed 199.38 sec.
>> INFO:  "pg_largeobject": found 0 removable, 11060658 nonremovable row
>> versions in 6849398 pages
>> DETAIL:  0 dead row versions cannot be removed yet.
>> There were 84508215 unused item pointers.
>> 0 pages are entirely empty.
>> CPU 0.98s/0.10u sec elapsed 4421.17 sec.
>> VACUUM
>
> Hmm ... although you have no dead rows now, the very large number of
> unused item pointers suggests that there were times in the past when
> pg_largeobject didn't get vacuumed often enough.  You need to look at
> your vacuuming policy.  If you're using autovacuum, it might need to have
> its parameters adjusted.  Otherwise, how often are you vacuuming, and
> are you doing it as superuser?
>
>> I will try to run VACUUM FULL ANALYZE after the next delete tonight, as
>> suggested by Ivan Voras in another post.
>
> Actually, a CLUSTER might be more effective.
>
>             regards, tom lane
>

I have autovacuum turned on, but the high number of unused item pointers
may be related to some experimentation I did earlier on the same database,
where I vacuumed only after the nightly delete. I have, however, seen the
same performance degradation on a database that had only ever been
vacuumed by autovacuum.

I am a bit unsure about which parameters to adjust in order to maintain
good performance for bulk loading. Do you have any suggestions?

Also, the VACUUM FULL ANALYZE has been running for 15 hours now, blocking
today's data load. It will be interesting to see how the database performs
once it has completed. Is there any point in trying CLUSTER instead if this
does not help?


Regards,

Vegard Bønes