Re: Disk space consumed by pk not returned after vacuum or reindex

From: Kevin Johnson
Subject: Re: Disk space consumed by pk not returned after vacuum or reindex
Date:
Msg-id: 452531AB.2050709@noaa.gov
In response to: Re: Disk space consumed by pk not returned after vacuum or reindex  (Bruno Wolff III <bruno@wolff.to>)
Responses: Re: Disk space consumed by pk not returned after vacuum or reindex
List: pgsql-admin
Thank you for the suggestion, Bruno.  The clustering did the trick in reducing the current disk usage; however, the disk space eventually gets consumed once more.  I fear that we may just need to upgrade to a newer version of Postgres to help alleviate the index bloat!
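One way to watch that regrowth between maintenance runs is to snapshot the index's page count from pg_class periodically.  A minimal sketch, with 'mytable' and 'mytable_pkey' as placeholder names and the size estimated from relpages at the default 8 kB block size:

-- Refresh pg_class statistics for the table and its indexes, then
-- check the pk index's on-disk pages ('mytable_pkey' is a placeholder).
VACUUM ANALYZE mytable;
SELECT relname, relpages, relpages::bigint * 8192 AS approx_bytes
  FROM pg_class
 WHERE relname = 'mytable_pkey';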


Bruno Wolff III wrote:
On Wed, Sep 13, 2006 at 09:53:16 -0400, Kevin Johnson <Kevin.Johnson@noaa.gov> wrote: 
We have a database which consistently consumes more and more of the disk space in its lvol until it reaches 100%.  So far, we have tried to run a full vacuum on the database, with limited success.  Eventually, we had to drop and reload the database with the same data inside.  That brought the disk usage down to 73%, but it then began to creep back toward 100%.  After some research, I was able to use the pg_class catalog to find that the items expanding the quickest are primary key (btree) indexes.  I attempted to run a REINDEX on one of the tables whose pk takes up the largest amount of space.  The usage according to pg_class dropped dramatically; however, the disk space was not returned to the system.  So I attempted another full vacuum afterwards, and still nothing was returned to the system.  These tables are updated extremely frequently (although their total number of rows stays close to constant), which is my guess as to why the pk indexes grow so rapidly in disk usage.  Unfortunately, our PostgreSQL knowledge is limited, and I was wondering if anyone has experienced something similar or knows what else we can do to return this disk space back to the system?
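A rough sketch of that kind of pg_class check, purely for illustration ('mytable' below is a placeholder, and sizes are estimated from relpages at the default 8 kB block size):

-- Largest relations (tables and indexes) by page count; run a VACUUM
-- or ANALYZE first so relpages is reasonably current.
SELECT relname, relkind, relpages,
       relpages::bigint * 8192 AS approx_bytes
  FROM pg_class
 ORDER BY relpages DESC
 LIMIT 20;

-- Rebuild all indexes on the worst offender; the table is locked
-- while the REINDEX runs.
REINDEX TABLE mytable;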
This is possibly index bloat due to new keys always being larger than existing keys.  This was fixed in later releases.  There is still some potential for bloat due to fragmentation, but I believe that has a constant bound.
You might try using the cluster command.  I think that will both clean up the indexes and remove dead rows, and do it faster than a vacuum full followed by a reindex.  The downside is that the table will be unavailable during the cluster, which might be a deal breaker for you.
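For reference, a minimal sketch of that approach in the pre-8.3 syntax current at the time ('mytable' and 'mytable_pkey' are placeholders); the table is exclusively locked until the command finishes:

-- Rewrite the table in primary key order; this also rebuilds its
-- indexes and drops dead rows.
CLUSTER mytable_pkey ON mytable;
-- Re-analyze afterwards so the planner sees the new physical ordering.
ANALYZE mytable;

Unlike a plain vacuum, the rewrite goes into fresh relation files and the old ones are removed, so the freed space should actually show up at the filesystem level.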

-- 
Kevin Johnson
Raytheon
AWIPS Sr. Systems Engineer
NWS Network Control Facility
p: 301.713.9362x325
f: 301.713.1905
