Discussion: vacuum full 100 mins plus?


vacuum full 100 mins plus?

From:
Patrick Hatcher
Date:



Should I be concerned that my vacuum process has taken upwards of 100+
minutes to complete?  I dropped all indexes and increased vacuum_mem before
starting.
Looking at the output below, it appears that a vacuum full hasn't been done
on this table for quite some time.  Would I be better off exporting the
data, vacuuming the table, and reimporting the data?  I cannot drop the
table due to views attached to it.


mdc_oz=# set vacuum_mem = 10240;
SET
mdc_oz=# vacuum full verbose cdm.cdm_ddw_Tran_item;
INFO:  vacuuming "cdm.cdm_ddw_tran_item"
INFO:  "cdm_ddw_tran_item": found 15322404 removable, 10950460 nonremovable
row versions in 934724 pages
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 233 to 308 bytes long.
There were 1081 unused item pointers.
Total free space (including removable row versions) is 4474020460 bytes.
544679 pages are or will become empty, including 0 at the end of the table.
692980 pages containing 4433398408 free bytes are potential move
destinations.
CPU 29.55s/4.13u sec elapsed 107.82 sec.


TIA
Patrick Hatcher


Re: vacuum full 100 mins plus?

From:
Patrick Hatcher
Date:





Answered my own question.  I gave up the vacuum full after 150 mins.  I was
able to export to a file, vacuum full the empty table, and reimport in less
than 10 mins.  I suspect the empty item pointers and the sheer number of
removable rows were causing an issue.
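
For reference, the reload cycle described above comes down to roughly the
following (the file path is hypothetical, and TRUNCATE is assumed as the
emptying step, since the table itself can't be dropped):

mdc_oz=# \copy cdm.cdm_ddw_tran_item to '/tmp/tran_item.copy'
mdc_oz=# TRUNCATE TABLE cdm.cdm_ddw_tran_item;
mdc_oz=# VACUUM FULL VERBOSE cdm.cdm_ddw_tran_item;  -- near-instant on the empty table
mdc_oz=# \copy cdm.cdm_ddw_tran_item from '/tmp/tran_item.copy'
mdc_oz=# ANALYZE cdm.cdm_ddw_tran_item;              -- refresh planner statistics after the reload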







Re: vacuum full 100 mins plus?

From:
Josh Berkus
Date:
Patrick,

> Answered my own question.  I gave up the vacuum full after 150 mins.  I was
> able to export to a file, vacuum full the empty table, and reimport in less
> than 10 mins.  I suspect the empty item pointers and the sheer number of
> removable rows were causing an issue.

Yeah.  If you have a table that hasn't been vacuumed in a month, it's often
faster to clean it out and reimport it.

I've seen vacuums take up to 3 hours in really bad cases.

--
-Josh Berkus
 Aglio Database Solutions
 San Francisco


Re: vacuum full 100 mins plus?

From:
Christopher Browne
Date:
A long time ago, in a galaxy far, far away, PHatcher@macys.com (Patrick Hatcher) wrote:
> Answered my own question.  I gave up the vacuum full after 150 mins.  I was
> able to export to a file, vacuum full the empty table, and reimport in less
> than 10 mins.  I suspect the empty item pointers and the sheer number of
> removable rows were causing an issue.

In that case, you'd be a little better off if the steps were (see the
sketch after the list):
 - drop indices;
 - copy table to file (perhaps via pg_dump -t my_table);
 - truncate the table, or drop-and-recreate, both of which make
   it unnecessary to do _any_ vacuum of the result;
 - recreate indices, probably with SORT_MEM set high, to minimize
   paging to disk
 - analyze the table (no need to vacuum if you haven't created any
   dead tuples)
 - cut SORT_MEM back down to "normal" sizes
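
A rough sketch of that recipe against the table from this thread (the index
name, indexed column, dump file, and sort_mem values are all hypothetical):

$ pg_dump -a -t cdm_ddw_tran_item mdc_oz > tran_item.sql  # data only; TRUNCATE keeps the DDL
mdc_oz=# DROP INDEX cdm.tran_item_idx;          -- repeat for each index on the table
mdc_oz=# TRUNCATE TABLE cdm.cdm_ddw_tran_item;  -- leaves no dead tuples, so no vacuum needed
mdc_oz=# SET sort_mem = 65536;                  -- ~64MB, to speed the index builds
$ psql mdc_oz < tran_item.sql                   # reload the rows into the empty table
mdc_oz=# CREATE INDEX tran_item_idx ON cdm.cdm_ddw_tran_item (tran_id);
mdc_oz=# ANALYZE cdm.cdm_ddw_tran_item;         -- statistics only; nothing to vacuum yet
mdc_oz=# SET sort_mem = 1024;                   -- back to the normal setting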
--
output = reverse("gro.gultn" "@" "enworbbc")
http://www3.sympatico.ca/cbbrowne/spreadsheets.html
Signs  of a   Klingon  Programmer  #6: "Debugging?   Klingons  do  not
debug. Our software does not coddle the weak."

Re: vacuum full 100 mins plus?

From:
Tom Lane
Date:
Christopher Browne <cbbrowne@acm.org> writes:
> A long time ago, in a galaxy far, far away, PHatcher@macys.com (Patrick Hatcher) wrote:
>> Answered my own question.  I gave up the vacuum full after 150 mins.  I was
>> able to export to a file, vacuum full the empty table, and reimport in less
>> than 10 mins.  I suspect the empty item pointers and the sheer number of
>> removable rows were causing an issue.

> In that case, you'd be a little better off if the steps were:
>  - drop indices;
>  - copy table to file (perhaps via pg_dump -t my_table);
>  - truncate the table, or drop-and-recreate, both of which make
>    it unnecessary to do _any_ vacuum of the result;
>  - recreate indices, probably with SORT_MEM set high, to minimize
>    paging to disk
>  - analyze the table (no need to vacuum if you haven't created any
>    dead tuples)
>  - cut SORT_MEM back down to "normal" sizes

Rather than doing all this manually, you can just CLUSTER on any handy
index.  In 7.5, another possibility is to issue one of the forms of
ALTER TABLE that force a table rewrite.
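
A sketch of both alternatives (the index and column names are hypothetical;
CLUSTER uses the 7.x-era syntax, and retyping a column to its own type is
one ALTER TABLE form that forced a full table rewrite in that era):

mdc_oz=# CLUSTER tran_item_idx ON cdm.cdm_ddw_tran_item;  -- rewrites the table in index order
mdc_oz=# ANALYZE cdm.cdm_ddw_tran_item;
-- or, in 7.5/8.0, force a rewrite without picking an index:
mdc_oz=# ALTER TABLE cdm.cdm_ddw_tran_item ALTER COLUMN tran_id TYPE integer;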

The range of usefulness of VACUUM FULL is really looking narrower and
narrower to me.  I can foresee a day when we'll abandon it completely.

            regards, tom lane

Re: vacuum full 100 mins plus?

From:
Mike Rylander
Date:
Tom Lane wrote:

> Christopher Browne <cbbrowne@acm.org> writes:
>> A long time ago, in a galaxy far, far away, PHatcher@macys.com (Patrick
>> Hatcher) wrote:
>>> Answered my own question.  I gave up the vacuum full after 150 mins.  I
>>> was able to export to a file, vacuum full the empty table, and reimport
>>> in less than 10 mins.  I suspect the empty item pointers and the sheer
>>> number of removable rows were causing an issue.
>
>> In that case, you'd be a little better off if the steps were:
>>  - drop indices;
>>  - copy table to file (perhaps via pg_dump -t my_table);
>>  - truncate the table, or drop-and-recreate, both of which make
>>    it unnecessary to do _any_ vacuum of the result;
>>  - recreate indices, probably with SORT_MEM set high, to minimize
>>    paging to disk
>>  - analyze the table (no need to vacuum if you haven't created any
>>    dead tuples)
>>  - cut SORT_MEM back down to "normal" sizes
>
> Rather than doing all this manually, you can just CLUSTER on any handy
> index.  In 7.5, another possibility is to issue one of the forms of
> ALTER TABLE that force a table rewrite.
>
> The range of usefulness of VACUUM FULL is really looking narrower and
> narrower to me.  I can foresee a day when we'll abandon it completely.

I would love to see this 10 lb sledgehammer go away when we have enough tiny
screwdrivers and needlenose pliers to make it obsolete!

>
> regards, tom lane
>