Re: Freeze avoidance of very large table.

From: Kevin Grittner
Subject: Re: Freeze avoidance of very large table.
Date:
Msg-id: 2011829201.2201963.1429726992897.JavaMail.yahoo@mail.yahoo.com
In response to: Re: Freeze avoidance of very large table.  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Freeze avoidance of very large table.  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
Robert Haas <robertmhaas@gmail.com> wrote:

> I just tested "pgbench -i -s 40 -n" followed by "VACUUM" or
> alternatively followed by "VACUUM FREEZE".  The VACUUM generated
> 4641kB of WAL.  The VACUUM FREEZE generated 515MB of WAL - that
> is, 113 times more.
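
(For anyone wanting to reproduce that measurement, here is a minimal
sketch.  The scratch database name "bench" is an assumption, and it
uses the PostgreSQL 10+ function names pg_current_wal_lsn() and
pg_wal_lsn_diff(); on the 9.x servers current when this was written,
the equivalents were pg_current_xlog_location() and
pg_xlog_location_diff().)

  $ pgbench -i -s 40 -n bench
  $ psql -d bench <<'SQL'
  SELECT pg_current_wal_lsn() AS wal_before \gset
  VACUUM FREEZE;   -- or plain VACUUM for the comparison run
  SELECT pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), :'wal_before'))
         AS wal_generated;
  SQL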

Essentially a bulk load.  OK, so if you bulk load data and then
vacuum it before updating 100% of it, this approach will generate a
lot more WAL than we currently do.  Of course, if you don't VACUUM
FREEZE after a bulk load and are then engaged in a fairly normal
OLTP workload with peak and off-peak cycles, you are currently
almost certain to hit a point during peak OLTP load where
anti-wraparound vacuums begin sequentially scanning all tables,
rewriting them in place, with full WAL logging.  Incidentally, this
tends to flush a lot of your "hot" data out of cache, increasing
disk reads.  The first time I hit this "interesting" experience in
production it was so devastating, and generated so many user
complaints, that I never again considered a bulk load complete until
I had run VACUUM FREEZE on it -- although I was sometimes able to
defer that to an off-peak window.
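
For reference, the point where that kicks in is controlled by
autovacuum_freeze_max_age (200 million transactions by default).  A
catalog query along these lines -- just an illustration, not part of
any test above -- shows how close each table is to forcing such a
scan:

  SELECT c.oid::regclass AS table_name,
         age(c.relfrozenxid) AS xid_age,
         current_setting('autovacuum_freeze_max_age')::int
           - age(c.relfrozenxid) AS xids_until_forced_scan
    FROM pg_class c
   WHERE c.relkind = 'r'
   ORDER BY xid_age DESC
   LIMIT 10;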

In other words, for the production environments I managed, the only
value of that number is in demonstrating the importance of using
unlogged COPY followed by VACUUM FREEZE for bulk loading, and of
capturing a fresh base backup upon completion.  A better way to use
pgbench to measure the WAL size cost might be to initialize, run
VACUUM FREEZE to set a long-term baseline, and then do a
reasonable-length run with crontab running VACUUM FREEZE
periodically (including once after the run is complete), versus
doing the same with plain VACUUM (followed by a VACUUM FREEZE at the
end?).  Comparing the total WAL sizes generated following the
initial load and VACUUM FREEZE would give a more accurate picture of
the impact on an OLTP load, I think.
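
Something like the following shell sketch is what I have in mind;
the database name, run length, client counts, and five-minute freeze
interval are all arbitrary placeholders:

  pgbench -i -s 40 -n bench
  psql -d bench -c "VACUUM FREEZE"                # long-term baseline
  psql -d bench -c "SELECT pg_current_wal_lsn()"  # record start LSN
  pgbench -T 1800 -c 16 -j 4 -n bench &           # OLTP-style run
  while kill -0 $! 2>/dev/null; do                # periodic freeze, as
      sleep 300                                   # cron would do; use
      psql -d bench -c "VACUUM FREEZE"            # plain VACUUM for the
  done                                            # second run
  psql -d bench -c "VACUUM FREEZE"                # once more at the end
  psql -d bench -c "SELECT pg_current_wal_lsn()"  # record end LSN
  # WAL cost of the run = pg_wal_lsn_diff(end_lsn, start_lsn)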

> We'll still come out ahead if those tuples are going to stick
> around long enough that they would have eventually gotten frozen
> anyway, but if they get deleted again the loss is pretty
> significant.

Perhaps my perception is biased by having worked in an environment
where the vast majority of tuples (both in terms of tuple count and
byte count) were never updated and were only eligible for deletion
after a period of years.  Our current approach is pretty bad in
such an environment, at least if you try to leave all vacuuming to
autovacuum.  I'll admit that we were able to work around the
problems by running VACUUM FREEZE every night for most databases.
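
That workaround amounted to nothing more elaborate than an off-peak
crontab entry along these lines (the 03:00 slot is only an example):

  # freeze all databases nightly, during the off-peak window
  0 3 * * *  vacuumdb --all --freeze --quiet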

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


