Re: Unable to Vacuum Large Defragmented Table

From: Igal Sapir
Subject: Re: Unable to Vacuum Large Defragmented Table
Date:
Msg-id: CA+zig08tLL1DXzyZ0_R7DrkyNuteMT3qTn32OYbSYfa7jMtLLg@mail.gmail.com
In reply to: Re: Unable to Vacuum Large Defragmented Table (David Rowley <david.rowley@2ndquadrant.com>)
Responses: Re: Unable to Vacuum Large Defragmented Table
List: pgsql-general
David,

On Sun, Apr 7, 2019 at 7:28 PM David Rowley <david.rowley@2ndquadrant.com> wrote:
On Mon, 8 Apr 2019 at 14:19, Igal Sapir <igal@lucee.org> wrote:
>
> On Sun, Apr 7, 2019 at 6:20 PM David Rowley <david.rowley@2ndquadrant.com> wrote:
>>
>> On Mon, 8 Apr 2019 at 10:09, Igal Sapir <igal@lucee.org> wrote:
>> >
>> > I have a table for which pg_relation_size() shows only 31MB, but pg_total_relation_size() shows a whopping 84GB.
>> >
>> > The database engine is running inside a Docker container, with the data mounted as a volume from a partition on the host's file system.
>> >
>> > When I try to run `VACUUM FULL`, the disk usage goes up until it reaches the full capacity of the partition (about 27GB of free space), at which point it fails.
>>
>> That sort of indicates that the table might not be as bloated as you
>> seem to think it is.  Remember that variable length attributes can be
>> toasted and stored in the relation's toast table.
>
>
> I think that you're on to something here.  The table has a JSONB column which has possibly been toasted.
>
> I have deleted many rows from the table itself though, and still fail to reclaim disk space.  Is there something else I should do to delete the toasted data?

The toast data is part of the data. It's just stored out of line: since
there's a hard limit of just under 8kB per tuple, and tuples cannot
span multiple pages, PostgreSQL internally breaks large values into
chunks, possibly compresses them, and stores them in the toast table.
This can occur for any variable-length type.

This means if you want to remove the toast data, then you'll need to
remove the data from the main table, either in the form of deleting
rows or updating them to remove the toasted values.
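The split David describes can be measured directly. A quick way to see how much of the total footprint is out-of-line TOAST data versus indexes (here `my_table` is a placeholder for the actual table name):

```sql
-- Heap vs. TOAST vs. indexes for one table
SELECT pg_size_pretty(pg_relation_size('my_table'))        AS heap,
       pg_size_pretty(pg_total_relation_size('my_table')
                      - pg_relation_size('my_table')
                      - pg_indexes_size('my_table'))       AS toast,
       pg_size_pretty(pg_indexes_size('my_table'))         AS indexes,
       pg_size_pretty(pg_total_relation_size('my_table'))  AS total;
```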

However, I have now deleted about 50,000 rows more and the table has only 119,688 rows.  The pg_relation_size() still shows 31MB and pg_total_relation_size() still shows 84GB.

It doesn't make sense that after deleting about 30% of the rows the values here do not change.

Attempting to copy the data to a different table results in the out-of-disk error as well, so that is in line with your assessment.  But it actually just shows the problem: the new table to which the data was copied (though the copy failed due to running out of disk) shows 0 rows, but pg_total_relation_size() for that table shows 27GB.  So now I have an "empty" table that takes up 27GB of disk space.

This is mostly transient data, so I don't mind deleting rows, but if some day this could happen in production then I have to know how to deal with it without losing all of the data.
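One caveat behind the failure above: VACUUM FULL rewrites the table, so it needs enough free disk for a complete second copy (heap plus TOAST plus indexes), which is why it dies with only 27GB free. If the rows worth keeping are small, a copy-and-truncate workaround can avoid the full second copy. A sketch with placeholder names (`my_table`, `created_at`), which requires exclusive access to the table and only works if the kept rows fit in the remaining free space:

```sql
-- Copy only the live rows aside, then truncate the bloated table.
-- TRUNCATE swaps in a new, empty file; the old heap, TOAST, and
-- index files are released at commit.
BEGIN;
CREATE TABLE my_table_keep AS
    SELECT * FROM my_table
     WHERE created_at >= now() - interval '7 days';
TRUNCATE my_table;
INSERT INTO my_table SELECT * FROM my_table_keep;
DROP TABLE my_table_keep;
COMMIT;
```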

Thanks,

Igal
