Re: vacuum to prevent wraparound

From: Laurenz Albe
Subject: Re: vacuum to prevent wraparound
Date:
Msg-id: f853768ada725f311efc72fd15a6efd918d429f1.camel@cybertec.at
In reply to: vacuum to prevent wraparound  (Marcos Pegoraro <marcos@f10.com.br>)
List: pgsql-general
On Tue, 2023-06-06 at 11:36 -0300, Marcos Pegoraro wrote:
> I have a replica server using Postgres 14.4.
> The replica is fed using Publication/Subscription, and I have triggers for auditing,
> so every record that comes from production is audited.
> Some months ago I changed the way I was auditing and replaced that audit table
> with a new one, but didn't remove the old table. So I have both the new (AUDIT) and
> the old (SYS_AUDIT) tables.
> Then last night I received a message that, to prevent wraparound, Postgres would
> do an aggressive vacuum on (SYS_AUDIT), and that took several hours to complete,
> on a table that has not been used for 6 or 8 months. Why?

Because otherwise the rows in that table would suffer data corruption at some point.
See https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

PostgreSQL doesn't know that you don't need these data any more.
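
As a quick illustration (assuming the old audit table's name in the catalogs is
sys_audit), you can check how close a table is to the anti-wraparound threshold
like this:

  -- compare the age of the table's oldest unfrozen transaction ID with the
  -- threshold that triggers an anti-wraparound autovacuum
  SELECT relname,
         age(relfrozenxid) AS xid_age,
         current_setting('autovacuum_freeze_max_age')::bigint AS freeze_max_age
  FROM pg_class
  WHERE relname = 'sys_audit';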

> I know it would be good to run vacuum on that table, since it has lots of dead tuples,
> but that table is not used anymore, so why vacuum it?

This is not about dead tuples at all.

> Do I have to immediately drop that huge table that is not used anymore, because it could
> stop the server to prevent a wraparound some day?

No, you don't have to drop it.

If you know that the table will not change any more, you can run

  VACUUM (FREEZE) tablename;

That will be as intense as the anti-wraparound autovacuum that caused your concern,
but afterwards you will never again have a long-running, intense autovacuum run on
that table.
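
For example, a one-off freeze of the old audit table and a follow-up check
could look like this (again assuming the table name sys_audit):

  -- freeze all rows once; VERBOSE just reports progress
  VACUUM (FREEZE, VERBOSE) sys_audit;

  -- afterwards the XID age should be close to zero, so the table will not
  -- come near autovacuum_freeze_max_age again as long as it is not modified
  SELECT age(relfrozenxid) FROM pg_class WHERE relname = 'sys_audit';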

Yours,
Laurenz Albe


