Re: Freeze avoidance of very large table.

From: Robert Haas
Subject: Re: Freeze avoidance of very large table.
Msg-id: CA+TgmoZSEc0ShOsa-v5+mQ2u+uaoKOsid-zu4X=hAxyq=GY8XA@mail.gmail.com
In reply to: Re: Freeze avoidance of very large table.  (Kevin Grittner <kgrittn@ymail.com>)
List: pgsql-hackers
On Wed, Apr 22, 2015 at 2:23 PM, Kevin Grittner <kgrittn@ymail.com> wrote:
> Robert Haas <robertmhaas@gmail.com> wrote:
>> I just tested "pgbench -i -s 40 -n" followed by "VACUUM" or
>> alternatively followed by "VACUUM FREEZE".  The VACUUM generated
>> 4641kB of WAL.  The VACUUM FREEZE generated 515MB of WAL - that
>> is, 113 times more.
>
> Essentially a bulk load.  OK, so if you bulk load data and then
> vacuum it before updating 100% of it, this approach will generate a
> lot more WAL than we currently do.  Of course, if you don't VACUUM
> FREEZE after a bulk load and then are engaged in a fairly normal
> OLTP workload with peak and off-peak cycles, you are currently
> almost certain to hit a point during peak OLTP load where you begin
> to sequentially scan all tables, rewriting them in place, with WAL
> logging.  Incidentally, this tends to flush a lot of your "hot"
> data out of cache, increasing disk reads.  The first time I hit
> this "interesting" experience in production it was so devastating,
> and generated so many user complaints, that I never again
> considered a bulk load complete until I had run VACUUM FREEZE on it
> -- although I was sometimes able to defer that to an off-peak
> window of time.
>
> In other words, for the production environments I managed, the only
> value of that number is in demonstrating the importance of using
> unlogged COPY followed by VACUUM FREEZE for bulk-loading and
> capturing a fresh base backup upon completion.  A better way to use
> pgbench to measure WAL size cost might be to initialize, VACUUM
> FREEZE to set a "long term baseline", and do a reasonable length
> run with crontab running VACUUM FREEZE periodically (including
> after the run was complete) versus doing the same with plain VACUUM
> (followed by a VACUUM FREEZE at the end?).  Comparing the total WAL
> sizes generated following the initial load and VACUUM FREEZE would
> give a more accurate picture of the impact on an OLTP load, I
> think.
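For reference, a minimal sketch of the bulk-load recipe Kevin describes,
assuming "unlogged COPY" means copying into a table created or truncated
in the same transaction under wal_level = minimal (which lets the copied
data skip WAL); big_table and the file path are placeholders:

    BEGIN;
    TRUNCATE big_table;   -- same-transaction TRUNCATE is what permits the
                          -- FREEZE option and the WAL skip under minimal WAL
    COPY big_table FROM '/path/to/data.csv' WITH (FORMAT csv, FREEZE);
    COMMIT;
    VACUUM FREEZE big_table;   -- freeze anything the COPY did not
    -- ...then take the fresh base backup mentioned above.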

Sure, the comparison you describe would be a better test.  But I'm pretty sure the impact
will still be fairly substantial.
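To reproduce the numbers upthread, the WAL volume around a single command
can be measured roughly as below; the sketch assumes a scratch database
named bench, no concurrent activity (autovacuum and checkpoints also
write WAL), and uses the current function names pg_current_wal_lsn() and
pg_wal_lsn_diff() (spelled pg_current_xlog_location() and
pg_xlog_location_diff() on the releases current when this was written):

    pgbench -i -s 40 -n bench
    before=$(psql -At bench -c "SELECT pg_current_wal_lsn()")
    psql bench -c "VACUUM"        # rerun with "VACUUM FREEZE" to compare
    after=$(psql -At bench -c "SELECT pg_current_wal_lsn()")
    psql -At bench -c "SELECT pg_size_pretty(pg_wal_lsn_diff('$after', '$before'))"

Kevin's fuller comparison would wrap a timed pgbench run between the two
LSN samples, once with periodic VACUUM FREEZE and once with plain VACUUM.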

>> We'll still come out ahead if those tuples are going to stick
>> around long enough that they would have eventually gotten frozen
>> anyway, but if they get deleted again the loss is pretty
>> significant.
>
> Perhaps my perception is biased by having worked in an environment
> where the vast majority of tuples (both in terms of tuple count and
> byte count) were never updated and were only eligible for deletion
> after a period of years.  Our current approach is pretty bad in
> such an environment, at least if you try to leave all vacuuming to
> autovacuum.  I'll admit that we were able to work around the
> problems by running VACUUM FREEZE every night for most databases.
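A hypothetical crontab entry for that nightly approach, using vacuumdb's
--freeze option (the schedule is illustrative):

    30 2 * * *  vacuumdb --all --freeze --quiet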

Yeah.  And that nightly approach breaks down when you have very big databases with a
high XID consumption rate, because the mostly-no-op VACUUM FREEZE runs
for longer than you can tolerate.  I'm not saying we don't need to fix
this problem; we clearly do.  I'm just saying that we've got to be
careful not to harm other scenarios in the process.
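One way to see that point coming is to watch XID age directly; a standard
catalog query along these lines (the ordering and LIMIT are illustrative)
shows how far each table is from a forced anti-wraparound vacuum:

    SELECT c.relname,
           age(c.relfrozenxid) AS xid_age,
           current_setting('autovacuum_freeze_max_age')::int
               - age(c.relfrozenxid) AS xids_until_forced_vacuum
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY xid_age DESC
    LIMIT 10;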

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


