Discussion: pg_repack in cluster


pg_repack in cluster

From
dagamier
Date:
I have two databases that are clustered. One is my primary (DB1) and the
other is my secondary (DB2). Both have the same tables and schemas. Could
I use pg_repack against each of these separately (I want to do this at a
"by table" level) to clean up dead space that hasn't been returned?
Should I be doing it this way? I would assume that dead space in DB1
would also exist in DB2, since deletes are performed as part of the
replication process, but the reclamation of space back to the system
wouldn't take place? Just curious whether the fact that the database is
being replicated would prevent the needed steps from occurring, as it's
my understanding that you can't drop a table that's being replicated
between two instances.
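
For illustration, the kind of per-table run I have in mind would look
roughly like this (the database and table names below are placeholders,
and the pg_repack extension is assumed to already be installed in the
target database):

    # Show what would be repacked without actually doing anything
    pg_repack --dry-run -d mydb -t myschema.big_table
    # Rebuild the table (and its indexes) online, one table at a time
    pg_repack -d mydb -t myschema.big_table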






Re: pg_repack in cluster

From
Adrian Klaver
Date:
On 1/10/20 10:01 AM, dagamier wrote:
> I have two databases that are clustered. One is my primary (DB1) and the
> other is my secondary (DB2). Both have the same tables and schemas. Could
> I use pg_repack against each of these separately (I want to do this at a
> "by table" level) to clean up dead space that hasn't been returned?
> Should I be doing it this way? I would assume that dead space in DB1
> would also exist in DB2, since deletes are performed as part of the
> replication process, but the reclamation of space back to the system
> wouldn't take place? Just curious whether the fact that the database is
> being replicated would prevent the needed steps from occurring, as it's
> my understanding that you can't drop a table that's being replicated
> between two instances.

Why not use VACUUM?
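
Something along these lines as a starting point (table and database
names are placeholders):

    # Plain VACUUM runs without an exclusive lock; it marks dead space as
    # reusable within the table rather than returning it to the OS.
    psql -d mydb -c 'VACUUM (VERBOSE, ANALYZE) myschema.big_table;'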



-- 
Adrian Klaver
adrian.klaver@aklaver.com



Re: pg_repack in cluster

From
dagamier
Date:
Can't afford the downtime of a "vacuum full", and the autovacuum process
is not cleaning up all the dead space tied up by the database. pg_repack
works great on my test database at reclaiming disk space (in my tests I
got back 46% of my disk space) due to all the deletions that happen in
our tables. I think part of the issue is that some of these tables are
quite large, which keeps autovacuum from cleaning up as well as it
should. pg_repack lets me essentially do a "vacuum full" online (I
thought I read it rebuilds the indexes as well), and this would greatly
improve overall performance on our systems.
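
One rough way to see how far behind autovacuum is on a given database
(the database name below is a placeholder):

    # Tables with the most dead tuples, and when autovacuum last ran on them
    psql -d mydb -c "
      SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
      FROM pg_stat_user_tables
      ORDER BY n_dead_tup DESC
      LIMIT 10;"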






Re: pg_repack in cluster

From
Adrian Klaver
Date:
On 1/13/20 7:41 AM, dagamier wrote:
> Can't afford the downtime of a "vacuum full", and the autovacuum process
> is not cleaning up all the dead space tied up by the database. pg_repack
> works great on my test database at reclaiming disk space (in my tests I
> got back 46% of my disk space) due to all the deletions that happen in
> our tables. I think part of the issue is that some of these tables are
> quite large, which keeps autovacuum from cleaning up as well as it
> should. pg_repack lets me essentially do a "vacuum full" online (I
> thought I read it rebuilds the indexes as well), and this would greatly
> improve overall performance on our systems.

I believe more information is needed to come up with an answer (a rough
set of commands to gather it follows the list):

1) What version of Postgres?

2) What replication are you using, logical or binary?

3) Is the tied-up space needed by the OS or other programs? If not,
plain VACUUM will mark it as reusable by the database.

4) What are the sizes of the tables and the deletion churn rate?
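
Most of that can be gathered with something like the following (the
database name is a placeholder; pg_stat_replication is only meaningful
when run on the primary):

    # 1) Server version
    psql -d mydb -c 'SELECT version();'
    # 2) Replication connections as seen from the primary
    psql -d mydb -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'
    # 3/4) Table sizes plus insert/update/delete churn since stats were last reset
    psql -d mydb -c "
      SELECT relname,
             pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
             n_tup_ins, n_tup_upd, n_tup_del, n_dead_tup
      FROM pg_stat_user_tables
      ORDER BY pg_total_relation_size(relid) DESC
      LIMIT 10;"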




-- 
Adrian Klaver
adrian.klaver@aklaver.com