VACUUM FULL pg_largeobject without (much) downtime?

From: Adam Hooper
Subject: VACUUM FULL pg_largeobject without (much) downtime?
Date:
Msg-id: CAMWjz6GF9TM+vWM_0ymQYPi4Xk_bv2nYaREMWR1EcsqBS404vw@mail.gmail.com
Replies: Re: VACUUM FULL pg_largeobject without (much) downtime?  (Bill Moran <wmoran@potentialtech.com>)
List: pgsql-general
Hi list,

We run a website. We once stored all sorts of files in pg_largeobject,
which grew to 266GB. This is on an m1.large on Amazon EC2 on a single,
magnetic, non-provisioned-IO volume. In that context, 266GB is a lot.

We've since moved all but 60GB of that data to S3. We plan to reduce
that to 1GB by deleting old, unused data. Of course, pg_largeobject
will still occupy 266GB on disk, because autovacuum only marks dead
space as reusable; it never returns it to the operating system.
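For context, the footprint we're trying to reclaim can be measured directly with a standard size query (nothing site-specific here):

```sql
-- Total on-disk size of pg_largeobject, including its index.
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));
```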

We want to move our entire database to an SSD volume, with as little
downtime as possible. My tentative plan:

1. Use CREATE TABLESPACE and pg_repack to move user tables to a temporary volume
2. Take down Postgres, copy system-table files to the new volume, and
start up Postgres from the new volume
3. Use pg_repack to move everything to the new volume
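Step 1 would look roughly like this; the tablespace path and table name below are placeholders, not our real names:

```sql
-- Create a tablespace on the temporary volume (path is an assumption).
CREATE TABLESPACE tmp_move LOCATION '/mnt/tmpvol/pg';
-- Then, from the shell, repack each user table (and its indexes) into it:
--   pg_repack -d overview --table=documents --tablespace=tmp_move --moveidx
```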

This plan won't work: step 2 will be too slow because pg_largeobject
still takes up 266GB. We tested `VACUUM FULL pg_largeobject` on our
staging database: it took two hours, during which pg_largeobject was
locked. While pg_largeobject is locked, much of our website doesn't
work.
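For anyone reproducing the test: VACUUM FULL takes an ACCESS EXCLUSIVE lock, so even reads of large objects block. The lock is visible from a second session with a plain pg_locks query:

```sql
-- Run in another session while VACUUM FULL pg_largeobject is executing.
SELECT locktype, mode, granted
FROM pg_locks
WHERE relation = 'pg_largeobject'::regclass;
```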

How can we move our database without much downtime? Is there a way to
`VACUUM FULL` pg_largeobject without locking it for very long? Aside
from that problem, is the rest of my upgrade plan sane?

For what it's worth, here's some info from VACUUM VERBOSE:

overview=# VACUUM (VERBOSE, ANALYZE) pg_largeobject;
INFO:  vacuuming "pg_catalog.pg_largeobject"
INFO:  scanned index "pg_largeobject_loid_pn_index" to remove 1112630
row versions
DETAIL:  CPU 3.38s/9.89u sec elapsed 69.02 sec.
INFO:  "pg_largeobject": removed 1112630 row versions in 374889 pages
DETAIL:  CPU 7.48s/2.22u sec elapsed 150.44 sec.
INFO:  index "pg_largeobject_loid_pn_index" now contains 29373858 row
versions in 370470 pages
DETAIL:  1112630 index row versions were removed.
279489 index pages have been deleted, 276070 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.06 sec.
INFO:  "pg_largeobject": found 1112622 removable, 231974 nonremovable
row versions in 3189820 out of 34522175 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 10318306 unused item pointers.
0 pages are entirely empty.
CPU 69.81s/28.83u sec elapsed 1402.53 sec.
INFO:  analyzing "pg_catalog.pg_largeobject"
INFO:  "pg_largeobject": scanned 30000 of 34522175 pages, containing
25085 live rows and 0 dead rows; 25085 rows in sample, 24203398
estimated total rows
VACUUM

Enjoy life,
Adam

--
Adam Hooper
+1-613-986-3339
http://adamhooper.com

