Thread: how to shrink pg_attribute table in some database


how to shrink pg_attribute table in some database

From: Artem Tomyuk
Hi, gurus!

Does anyone know the best way to shrink pg_attribute in size?
Would a backup and restore of this table clean out the "dead" records in it?
Or is a full vacuum the only thing that can help in this situation?

Thanks!
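
A minimal sketch of how to check how big pg_attribute actually is and how much of it is dead; the size functions and the pg_stat_sys_tables view are stock PostgreSQL, only the query itself is illustrative:

-- Heap size of pg_attribute alone and together with its indexes/TOAST.
SELECT pg_size_pretty(pg_relation_size('pg_catalog.pg_attribute'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('pg_catalog.pg_attribute')) AS total_size;

-- Rough live-vs-dead tuple picture and when autovacuum last got to it.
SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_sys_tables
WHERE relname = 'pg_attribute';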

Re: how to shrink pg_attribute table in some database

From: Keith


On Mon, Mar 26, 2018 at 9:35 AM, Artem Tomyuk <admin@leboutique.com> wrote:
> Hi, gurus!
>
> Does anyone know the best way to shrink pg_attribute in size?
> Would a backup and restore of this table clean out the "dead" records in it?
> Or is a full vacuum the only thing that can help in this situation?
>
> Thanks!

VACUUM FULL would be the way to go if bloat is the issue you're trying to fix. Trying to back up & restore a single system catalog is not recommended.

Keith
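
Spelled out, that suggestion is a one-liner; keep in mind that VACUUM FULL takes an ACCESS EXCLUSIVE lock on the catalog while it rewrites it, so anything that needs to read pg_attribute waits until it finishes. A sketch, not a prescription:

-- Rewrites pg_attribute into a new file containing only live rows;
-- holds an ACCESS EXCLUSIVE lock on the catalog for the whole run.
VACUUM FULL VERBOSE pg_catalog.pg_attribute;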

Re: how to shrink pg_attribute table in some database

From: Artem Tomyuk
Thanks for the quick response, Keith.

Can you explain your point in more detail: why is it a bad idea to back up and restore pg_attribute?


Re: how to shrink pg_attribute table in some database

From: Artem Tomyuk
Right now pg_attribute has bloated to 300 GB, and we have only 260 GB of free space on disk.
In a normal situation pg_attribute takes about 50 MB for our DB...
Can we assume that if we run VACUUM FULL on pg_attribute, it will only take an additional ~50 MB on disk?

Thanks


Re: how to shrink pg_attribute table in some database

From: hubert depesz lubaczewski
On Mon, Mar 26, 2018 at 05:15:14PM +0300, Artem Tomyuk wrote:
> Right now pg_attribute has bloated to 300 GB, and we have only 260 GB of
> free space on disk.
> In a normal situation pg_attribute takes about 50 MB for our DB...
> Can we assume that if we run VACUUM FULL on pg_attribute, it will only
> take an additional ~50 MB on disk?

You can estimate by doing:

create table z as select * from pg_attribute;

and then checking size of z.

bloat there suggests that you have huge churn in tables - like: create
table, drop table, lots of times.

Best regards,

depesz
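
Written out, that estimate is just a scratch copy of the live rows; z is only a throwaway name, so drop it once you have the number:

-- Copy only the live rows, check how much space they need, then clean up.
CREATE TABLE z AS SELECT * FROM pg_attribute;
SELECT pg_size_pretty(pg_relation_size('z'));
DROP TABLE z;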



Re: how to shrink pg_attribute table in some database

From: Artem Tomyuk
We can't; it generates huge I/O spikes.

But....

A few hours ago I manually started VACUUM VERBOSE on pg_attribute; it has now finished and here is the output:

INFO:  "pg_attribute": found 554728466 removable, 212058 nonremovable row versions in 44550921 out of 49326696 pages
DETAIL:  178215 dead row versions cannot be removed yet.
There were 53479 unused item pointers.
0 pages are entirely empty.
CPU 1097.53s/1949.50u sec elapsed 6337.86 sec.
Query returned successfully with no result in 01:47:3626 hours.

What do you think?

SELECT count(*) on pg_attribute returns 158340 rows.

So as I understand it, VACUUM FULL will create a new pg_attribute and write only that number of "valid" rows into it, but it will still scan the 300 GB old table? So the time estimate will be about the same as for a regular VACUUM?
 



Re: how to shrink pg_attribute table in some database

From: hubert depesz lubaczewski
On Mon, Mar 26, 2018 at 05:33:19PM +0300, Artem Tomyuk wrote:
> We can't; it generates huge I/O spikes.
> 
> But....
> 
> A few hours ago I manually started VACUUM VERBOSE on pg_attribute; it has
> now finished and here is the output:
> 
> INFO:  "pg_attribute": found 554728466 removable, 212058 nonremovable row versions in 44550921 out of 49326696 pages
> DETAIL:  178215 dead row versions cannot be removed yet.
> There were 53479 unused item pointers.
> 0 pages are entirely empty.
> CPU 1097.53s/1949.50u sec elapsed 6337.86 sec.
> Query returned successfully with no result in 01:47:3626 hours.
> 
> What do you think?
> 
> SELECT count(*) on pg_attribute returns 158340 rows.
> 
> So as I understand it, VACUUM FULL will create a new pg_attribute and
> write only that number of "valid" rows into it, but it will still scan
> the 300 GB old table? So the time estimate will be about the same as for
> a regular VACUUM?

more or less, yes.

the thing is - find and fix whatever is causing this insane churn of
tables/attributes.

Best regards,

depesz
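
One way to hunt for that churn, as a rough sketch: watch what shows up in temporary schemas while the workload runs (the query only touches the standard catalogs):

-- Relations currently living in pg_temp_* schemas; a steady stream of
-- new entries here points at heavy temp-table creation.
SELECT n.nspname, c.relname, c.relkind
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg\_temp\_%';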



Re: how to shrink pg_attribute table in some database

From: Artem Tomyuk
Thanks,

As we understand it, there are too few autovacuum workers configured in postgresql.conf (autovacuum_max_workers).

One more question: what impact would this have on streaming replication? Will VACUUM FULL create extra WAL files while it runs?

Thanks a lot
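
If too few workers really is part of the problem, the knobs look roughly like this; the values are only examples, and ALTER SYSTEM needs PostgreSQL 9.4 or newer (otherwise edit postgresql.conf directly):

-- Example values only; tune for the actual workload.
ALTER SYSTEM SET autovacuum_max_workers = 6;           -- default is 3; changing it requires a restart
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;  -- lets each worker do more per cycle
SELECT pg_reload_conf();                               -- applies the reloadable settings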





Re: how to shrink pg_attribute table in some database

From: hubert depesz lubaczewski
On Mon, Mar 26, 2018 at 06:32:25PM +0300, Artem Tomyuk wrote:
> Thanks,
> 
> As we understand it, there are too few autovacuum workers configured in
> postgresql.conf (autovacuum_max_workers).

that is a contributing factor. but the sheer size of this table means
that something is doing something very bad in your application.

> 
> One more question: what impact would this have on streaming replication?
> Will VACUUM FULL create extra WAL files while it runs?

Yes it will - roughly the size of the new table.

Best regards,

depesz
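
A rough way to see how much WAL a rewrite like that actually produces, assuming PostgreSQL 10 or newer (older releases spell these functions pg_current_xlog_location() / pg_xlog_location_diff()); the '0/12345678' literal is only a placeholder for the LSN noted beforehand:

-- Note the WAL position, run the rewrite, then diff against the noted LSN.
SELECT pg_current_wal_lsn();                          -- record this value first
VACUUM FULL pg_catalog.pg_attribute;
SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), '0/12345678'::pg_lsn)) AS wal_generated;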



Re: how to shrink pg_attribute table in some database

From: Artem Tomyuk
There was a broken query that created a temp table every time and then, in a second query within the same connection, tried to insert some data into that table. That second query could never run because of a syntax error (yes, human factor), and the whole thing was repeated every minute by cron, 24/7.
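
To illustrate how that pattern eats pg_attribute (the table name and columns below are made up): every CREATE TEMP TABLE inserts one pg_attribute row per column, including the hidden system columns, and all of those rows become dead once the table is dropped or the session ends.

-- Hypothetical shape of the broken cron job: the temp table is created,
-- the follow-up statement fails, the session exits, and another batch of
-- freshly dead pg_attribute rows is left behind on every run.
CREATE TEMP TABLE scratch_rows (id int, payload text);
-- ... second statement fails here with a syntax error ...
DROP TABLE scratch_rows;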





Re: how to shrink pg_attribute table in some database

From: Tom Lane
Artem Tomyuk <admin@leboutique.com> writes:
> There was a broken query that created a temp table every time and then,
> in a second query within the same connection, tried to insert some data
> into that table. That second query could never run because of a syntax
> error (yes, human factor), and the whole thing was repeated every minute
> by cron, 24/7.

One temp table per minute doesn't explain this amount of bloat, unless
autovacuum had failed to process pg_attribute for a really really long
time.  It would be good to figure out why that happened.

            regards, tom lane
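
A back-of-the-envelope check of that, using only numbers already in the thread plus an assumed ~20 pg_attribute rows per temp table:

-- One temp table per minute is 1,440 tables a day; at ~20 catalog rows
-- each that is ~28,800 dead pg_attribute rows per day, so reaching the
-- ~554.7M removable row versions reported above would take decades.
SELECT 24 * 60                      AS temp_tables_per_day,
       24 * 60 * 20                 AS dead_rows_per_day,
       554728466 / (24 * 60 * 20)   AS days_to_reach_reported_bloat;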


Re: how to shrink pg_attribute table in some database

From: Artem Tomyuk
Our monitoring shows that this bloat happened within 72 hours.
After the query was fixed, the DB stopped growing in size... so any other ideas?
