Discussion: Question on full vacuum clearing waste space

Question on full vacuum clearing waste space

From: Wenjun Che
Date:
Hi

I am testing VACUUM FULL with PostgreSQL 10.10 on AWS RDS. I noticed that for some tables the number of wasted bytes stays at a few MB after I run VACUUM FULL. I double-checked that there are no long-running transactions, no orphaned prepared transactions, and no abandoned replication slots.
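
For reference, checks along these lines should cover those three cases (a rough sketch, not necessarily the exact queries I used):

-- long-running or idle-in-transaction backends
SELECT pid, state, now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC;

-- orphaned prepared transactions
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- replication slots still pinning an old xmin
SELECT slot_name, active, xmin, catalog_xmin FROM pg_replication_slots;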

Here is the output from VACUUM FULL for one of the tables:

VACUUM(FULL, ANALYZE, VERBOSE) app_events_users
vacuuming "app_events_users"
"app_events_users": found 0 removable, 1198881 nonremovable row versions in 13369 pages
analyzing "licensing.app_events_users"
"app_events_users": scanned 13369 of 13369 pages, containing 1198881 live rows and 0 dead rows; 30000 rows in sample, 1198881 estimated total rows

What else can prevent VACUUM FULL from reclaiming all of the wasted space?

Thank you 

Re: Question on full vacuum clearing waste space

From: Mohamed Wael Khobalatte
Date:

On Sat, Jun 6, 2020 at 11:24 PM Wenjun Che <wenjun@openfin.co> wrote:
> Hi
>
> I am testing VACUUM FULL with PostgreSQL 10.10 on AWS RDS. I noticed that for some tables the number of wasted bytes stays at a few MB after I run VACUUM FULL. I double-checked that there are no long-running transactions, no orphaned prepared transactions, and no abandoned replication slots.
>
> Here is the output from VACUUM FULL for one of the tables:
>
> VACUUM(FULL, ANALYZE, VERBOSE) app_events_users
> vacuuming "app_events_users"
> "app_events_users": found 0 removable, 1198881 nonremovable row versions in 13369 pages
> analyzing "licensing.app_events_users"
> "app_events_users": scanned 13369 of 13369 pages, containing 1198881 live rows and 0 dead rows; 30000 rows in sample, 1198881 estimated total rows
>
> What else can prevent VACUUM FULL from reclaiming all of the wasted space?
>
> Thank you

What "waste query" are you running? Those tend to be estimates only. VACUUM FULL clearly did its job, judging from the log you shared.
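
Most of the popular bloat queries estimate expected row width from pg_stats, so a few MB of apparent waste on a table this size is within the noise. If you want an exact figure, the on-disk size itself is not an estimate; something along these lines (schema-qualified name taken from your log) reads it directly:

SELECT pg_size_pretty(pg_table_size('licensing.app_events_users'));           -- heap plus TOAST/FSM/VM
SELECT pg_size_pretty(pg_total_relation_size('licensing.app_events_users'));  -- everything, including indexes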

Re: Question on full vacuum clearing waste space

From: Wenjun Che
Date:
Thank you for the quick response.

I ran the script from https://wiki.postgresql.org/wiki/Show_database_bloat, which shows the "app_events_users" table with 3751936 as wastedbytes.

On Sun, Jun 7, 2020 at 12:32 AM Mohamed Wael Khobalatte <mkhobalatte@grubhub.com> wrote:
> On Sat, Jun 6, 2020 at 11:24 PM Wenjun Che <wenjun@openfin.co> wrote:
>> Hi
>>
>> I am testing VACUUM FULL with PostgreSQL 10.10 on AWS RDS. I noticed that for some tables the number of wasted bytes stays at a few MB after I run VACUUM FULL. I double-checked that there are no long-running transactions, no orphaned prepared transactions, and no abandoned replication slots.
>>
>> Here is the output from VACUUM FULL for one of the tables:
>>
>> VACUUM(FULL, ANALYZE, VERBOSE) app_events_users
>> vacuuming "app_events_users"
>> "app_events_users": found 0 removable, 1198881 nonremovable row versions in 13369 pages
>> analyzing "licensing.app_events_users"
>> "app_events_users": scanned 13369 of 13369 pages, containing 1198881 live rows and 0 dead rows; 30000 rows in sample, 1198881 estimated total rows
>>
>> What else can prevent VACUUM FULL from reclaiming all of the wasted space?
>>
>> Thank you
>
> What "waste query" are you running? Those tend to be estimates only. VACUUM FULL clearly did its job, judging from the log you shared.


--
Wenjun Che
VP of Engineering | OpenFin

Move Fast.  Break Nothing.
www.openfin.co | @openfintech

Re: Question on full vacuum clearing waste space

From: Adrian Klaver
Date:
On 6/7/20 6:06 AM, Wenjun Che wrote:
> Thank you for the quick response.
> 
> I ran the script from
> https://wiki.postgresql.org/wiki/Show_database_bloat, which shows
> the "app_events_users" table with 3751936 as wastedbytes.

https://bucardo.org/check_postgres/check_postgres.pl.html#bloat

"Please note that the values computed by this action are not precise, 
and should be used as a guideline only. Great effort was made to 
estimate the correct size of a table, but in the end it is only an 
estimate. The correct index size is even more of a guess than the 
correct table size, but both should give a rough idea of how bloated 
things are."
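
As a rough sanity check with the numbers from this thread: the table is 13369 pages of 8 kB, so the 3751936 "wasted" bytes the script reports come to only a few percent of the relation, which is plausibly just estimation slack rather than reclaimable space. The arithmetic, if you want Postgres to do it:

SELECT pg_size_pretty(13369 * 8192::bigint) AS table_size,        -- roughly 104 MB
       pg_size_pretty(3751936::bigint)      AS reported_waste,    -- roughly 3.6 MB
       round(100.0 * 3751936 / (13369 * 8192), 1) AS waste_pct;   -- roughly 3.4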

> 
> On Sun, Jun 7, 2020 at 12:32 AM Mohamed Wael Khobalatte 
> <mkhobalatte@grubhub.com <mailto:mkhobalatte@grubhub.com>> wrote:
> 
> 
>     On Sat, Jun 6, 2020 at 11:24 PM Wenjun Che <wenjun@openfin.co
>     <mailto:wenjun@openfin.co>> wrote:
> 
>         Hi
> 
>         I am testing full vacuum with pg 10.10 on AWS RDS.  I noticed
>         for some tables, the number of waste bytes stays at a few MB
>         after I run full vacuum.  I double-checked that there are no
>         long running transactions, no orphaned prepared transactions and
>         no abandoned replication slots.
> 
>         Here is output from full vacuum for one of the tables:
> 
>         VACUUM(FULL, ANALYZE, VERBOSE) app_events_users
>         vacuuming "app_events_users"
>         "app_events_users": found 0 removable, 1198881 nonremovable row
>         versions in 13369 pages
>         analyzing "licensing.app_events_users"
>         "app_events_users": scanned 13369 of 13369 pages, containing
>         1198881 live rows and 0 dead rows; 30000 rows in sample, 1198881
>         estimated total rows
> 
>         What else can prevent full vacuum from reclaiming all waste space ?
> 
>         Thank you 
> 
> 
>     What "waste query" are you running? Those tend to be estimates only.
>     Vacuum Full clearly did its job from that log you shared.
> 
> 
> 
> -- 
> Wenjun Che
> VP of Engineering | OpenFin
> wenjun@openfin.co <mailto:wenjun@openfin.co>
> 
> *Move Fast.  Break Nothing.*
> www.openfin.co <http://www.openfin.co> | @openfintech


-- 
Adrian Klaver
adrian.klaver@aklaver.com