Discussion: Resume vacuum and autovacuum from interruption and cancellation


Resume vacuum and autovacuum from interruption and cancellation

From
Masahiko Sawada
Date:
Hi all,

A long-running vacuum can sometimes be cancelled by the administrator,
and autovacuums can be cancelled by concurrent processes. Even if it
retries after cancellation, since it always restarts from the first
block of the table, it can end up vacuuming blocks again that were
vacuumed last time. We have the visibility map to skip scanning
all-visible blocks, but when the table is large and often modified, we
are more likely to reclaim garbage from blocks other than the ones we
processed last time than by scanning from the first block.

So I'd like to propose making vacuum save its progress and resume
vacuuming based on it. The mechanism I'm thinking of is simple: vacuum
periodically reports the current block number to the stats collector.
If the table has indexes, it reports after each heap vacuum pass,
whereas it reports every certain number of blocks (e.g. 1024 blocks =
8MB) if there are no indexes. We can see that value in the new column
vacuum_resume_block of pg_stat_all_tables. I'm going to add one VACUUM
command option, RESUME, and one new reloption, vacuum_resume. If the
option is true, vacuum fetches the block number from the stats
collector before starting and begins vacuuming from that block. I
wonder if we could make it true by default for autovacuums, but it
must be false for aggressive vacuums.

If we start vacuuming from a block other than the first, we can update
neither relfrozenxid nor relminmxid. And we might not even be able to
update the relation statistics.

Comments and feedback are very welcome.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



Re: Resume vacuum and autovacuum from interruption and cancellation

From
Masahiko Sawada
Date:
On Wed, Jun 12, 2019 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> Hi all,
>
> Long-running vacuum could be sometimes cancelled by administrator. And
> autovacuums could be cancelled by concurrent processes. Even if it
> retries after cancellation, since it always restart from the first
> block of table it could vacuums blocks again that we vacuumed last
> time. We have visibility map to skip scanning all-visible blocks but
> in case where the table is large and often modified, we're more likely
> to reclaim more garbage from blocks other than we processed last time
> than scanning from the first block.
>
> So I'd like to propose to make vacuums save its progress and resume
> vacuuming based on it. The mechanism I'm thinking is simple; vacuums
> periodically report the current block number to the stats collector.
> If table has indexes, reports it after heap vacuum whereas reports it
> every certain amount of blocks (e.g. 1024 blocks = 8MB) if no indexes.
> We can see that value on new column vacuum_resume_block of
> pg_stat_all_tables. I'm going to add one vacuum command option RESUME
> and one new reloption vacuum_resume. If the option is true vacuums
> fetch the block number from stats collector before starting and start
> vacuuming from that block. I wonder if we could make it true by
> default for autovacuums but it must be false when aggressive vacuum.
>
> If we start to vacuum from not first block, we can update neither
> relfrozenxid nor relfrozenxmxid. And we might not be able to update
> even relation statistics.
>

Attached is the first version of the patch. I've also registered this
item for the next commitfest.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

Attachments

Re: Resume vacuum and autovacuum from interruption and cancellation

From
Rafia Sabih
Date:
On Tue, 16 Jul 2019 at 13:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> On Wed, Jun 12, 2019 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > Hi all,
> >
> > Long-running vacuum could be sometimes cancelled by administrator. And
> > autovacuums could be cancelled by concurrent processes. Even if it
> > retries after cancellation, since it always restart from the first
> > block of table it could vacuums blocks again that we vacuumed last
> > time. We have visibility map to skip scanning all-visible blocks but
> > in case where the table is large and often modified, we're more likely
> > to reclaim more garbage from blocks other than we processed last time
> > than scanning from the first block.
> >
> > So I'd like to propose to make vacuums save its progress and resume
> > vacuuming based on it. The mechanism I'm thinking is simple; vacuums
> > periodically report the current block number to the stats collector.
> > If table has indexes, reports it after heap vacuum whereas reports it
> > every certain amount of blocks (e.g. 1024 blocks = 8MB) if no indexes.
> > We can see that value on new column vacuum_resume_block of
> > pg_stat_all_tables. I'm going to add one vacuum command option RESUME
> > and one new reloption vacuum_resume. If the option is true vacuums
> > fetch the block number from stats collector before starting and start
> > vacuuming from that block. I wonder if we could make it true by
> > default for autovacuums but it must be false when aggressive vacuum.
> >
> > If we start to vacuum from not first block, we can update neither
> > relfrozenxid nor relfrozenxmxid. And we might not be able to update
> > even relation statistics.
> >

Sounds like an interesting idea, but does it really help? Because if
vacuum was interrupted previously, wouldn't it already know the dead
tuples, etc. in the next run quite quickly, as the VM and FSM were
already updated for those pages in the previous run?

A few minor things I noticed in the first look,
+/*
+ * When a table has no indexes, save the progress every 8GB so that we can
+ * resume vacuum from the middle of table. When table has indexes we save it
+ * after the second heap pass finished.
+ */
+#define VACUUM_RESUME_BLK_INTERVAL 1024 /* 8MB */
Discrepancy with the memory unit here.

/* No found valid saved block number, resume from the first block */
Can be better framed.

--
Regards,
Rafia Sabih



Re: Resume vacuum and autovacuum from interruption and cancellation

From
Masahiko Sawada
Date:
On Thu, Aug 8, 2019 at 10:42 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:
>
> On Tue, 16 Jul 2019 at 13:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Wed, Jun 12, 2019 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > >
> > > Hi all,
> > >
> > > Long-running vacuum could be sometimes cancelled by administrator. And
> > > autovacuums could be cancelled by concurrent processes. Even if it
> > > retries after cancellation, since it always restart from the first
> > > block of table it could vacuums blocks again that we vacuumed last
> > > time. We have visibility map to skip scanning all-visible blocks but
> > > in case where the table is large and often modified, we're more likely
> > > to reclaim more garbage from blocks other than we processed last time
> > > than scanning from the first block.
> > >
> > > So I'd like to propose to make vacuums save its progress and resume
> > > vacuuming based on it. The mechanism I'm thinking is simple; vacuums
> > > periodically report the current block number to the stats collector.
> > > If table has indexes, reports it after heap vacuum whereas reports it
> > > every certain amount of blocks (e.g. 1024 blocks = 8MB) if no indexes.
> > > We can see that value on new column vacuum_resume_block of
> > > pg_stat_all_tables. I'm going to add one vacuum command option RESUME
> > > and one new reloption vacuum_resume. If the option is true vacuums
> > > fetch the block number from stats collector before starting and start
> > > vacuuming from that block. I wonder if we could make it true by
> > > default for autovacuums but it must be false when aggressive vacuum.
> > >
> > > If we start to vacuum from not first block, we can update neither
> > > relfrozenxid nor relfrozenxmxid. And we might not be able to update
> > > even relation statistics.
> > >
>
> Sounds like an interesting idea, but does it really help? Because if
> vacuum was interrupted previously, wouldn't it already know the dead
> tuples, etc in the next run quite quickly, as the VM, FSM is already
> updated for the page in the previous run.

Since tables are modified even during vacuum, if vacuum runs again
after an interruption it could need to vacuum again the part of the
table that has already been cleaned by the last vacuum. But the rest
of the table is likely to have more garbage in many cases. Therefore I
think this would be helpful, especially for a case where the table is
large and heavily updated. Even if the table has not been dirtied
since the last vacuum, it can skip already-vacuumed pages by looking
at the VM or the last vacuumed block. I think this doesn't make things
worse than today's vacuum in many cases.

>
> A few minor things I noticed in the first look,

Thanks for reviewing the patch.

> +/*
> + * When a table has no indexes, save the progress every 8GB so that we can
> + * resume vacuum from the middle of table. When table has indexes we save it
> + * after the second heap pass finished.
> + */
> +#define VACUUM_RESUME_BLK_INTERVAL 1024 /* 8MB */
> Discrepancy with the memory unit here.
>

Fixed.

> /* No found valid saved block number, resume from the first block */
> Can be better framed.

Fixed.

Attached is the updated version of the patch.


Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

Attachments

RE: Resume vacuum and autovacuum from interruption and cancellation

From
"Jamison, Kirk"
Date:
On Monday, August 19, 2019 10:39 AM (GMT+9), Masahiko Sawada wrote:
> Fixed.
> 
> Attached the updated version patch.

Hi Sawada-san,

I haven't tested it with heavily updated large tables, but I think the patch
is reasonable as it helps to shorten the execution time of vacuum by removing
the redundant vacuuming and prioritizing reclaiming the garbage instead.
I'm not sure if it's commonly reported to have problems even when we repeat
vacuuming the already-vacuumed blocks, but I think it's a reasonable improvement.

I skimmed the patch and have a few comments. If they seem apt, feel
free to follow them, but it's also OK if you don't.
1.
>+     <entry>Block number to resume vacuuming from</entry>
Perhaps you could drop "from".

2.
>+      <xref linkend="pg-stat-all-tables-view"/>. This behavior is helpful
>+      when to resume vacuuming from interruption and cancellation.The default
when resuming vacuum run from interruption and cancellation.
There should also be space between period and "The".

3.
>+      set to true. This option is ignored if either the <literal>FULL</literal>,
>+      the <literal>FREEZE</literal> or <literal>DISABLE_PAGE_SKIPPING</literal>
>+      option is used.
..if either of the <literal>FULL</literal>, <literal>FREEZE</literal>, or <literal>DISABLE_PAGE_SKIPPING</literal>
options is used.
 

4.
>+                next_fsm_block_to_vacuum,
>+                next_block_to_resume;
Clearer one would be "next_block_to_resume_vacuum"?
You may disregard if that's too long.

5.
>+    Assert(start_blkno <= nblocks);    /* both are the same iif it's empty */
iif -> if there are no blocks / if nblocks is 0

6.
>+     * If not found a valid saved block number, resume from the
>+     * first block.
>+     */
>+    if (!found ||
>+        tabentry->vacuum_resume_block >= RelationGetNumberOfBlocks(onerel))
This describes when vacuum_resume_block >= RelationGetNumberOfBlocks..., doesn't it?
Perhaps a better framing would be
"If the saved block number is found invalid,...",

7.
>+    bool        vacuum_resume;        /* enables vacuum to resuming from last
>+                                     * vacuumed block. */
resuming --> resume


Regards,
Kirk Jamison

Re: Resume vacuum and autovacuum from interruption and cancellation

From
Masahiko Sawada
Date:
On Tue, Aug 27, 2019 at 2:55 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:
>
> On Monday, August 19, 2019 10:39 AM (GMT+9), Masahiko Sawada wrote:
> > Fixed.
> >
> > Attached the updated version patch.
>
> Hi Sawada-san,
>
> I haven't tested it with heavily updated large tables, but I think the patch
> is reasonable as it helps to shorten the execution time of vacuum by removing
> the redundant vacuuming and prioritizing reclaiming the garbage instead.
> I'm not sure if it's commonly reported to have problems even when we repeat
> vacuuming the already-vacuumed blocks, but I think it's a reasonable improvement.
>
> I skimmed the patch and have few comments. If they deem fit, feel free to
> follow, but it's also ok if you don't.

Thank you for reviewing this patch! I've attached the updated patch,
incorporating all your comments and some improvements.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

Attachments

Re: Resume vacuum and autovacuum from interruption and cancellation

From
Alvaro Herrera
Date:
Apparently this patch now has a duplicate OID.  Please do use random
OIDs >8000 as suggested by the unused_oids script.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Resume vacuum and autovacuum from interruption and cancellation

From
Ibrar Ahmed
Date:


On Thu, Sep 26, 2019 at 1:53 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
Apparently this patch now has a duplicate OID.  Please do use random
OIDs >8000 as suggested by the unused_oids script.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



I have updated the patch to use OIDs > 8000.


--
Ibrar Ahmed
Attachments

Re: Resume vacuum and autovacuum from interruption and cancellation

From
Robert Haas
Date:
On Thu, Aug 8, 2019 at 9:42 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:
> Sounds like an interesting idea, but does it really help? Because if
> vacuum was interrupted previously, wouldn't it already know the dead
> tuples, etc in the next run quite quickly, as the VM, FSM is already
> updated for the page in the previous run.

+1. I don't deny that a patch like this could sometimes save
something, but it doesn't seem like it would save all that much all
that often. If your autovacuum runs are being frequently cancelled,
that's going to be a big problem, I think. And as Rafia says, even
though you might do a little extra work reclaiming garbage from
subsequently-modified pages toward the beginning of the table, it
would be unusual if they'd *all* been modified. Plus, if they've
recently been modified, they're more likely to be in cache.

I think this patch really needs a test scenario or demonstration of
some kind to prove that it produces a measurable benefit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Resume vacuum and autovacuum from interruption and cancellation

From
Masahiko Sawada
Date:
On Sat, 2 Nov 2019 at 02:10, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Thu, Aug 8, 2019 at 9:42 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:
> > Sounds like an interesting idea, but does it really help? Because if
> > vacuum was interrupted previously, wouldn't it already know the dead
> > tuples, etc in the next run quite quickly, as the VM, FSM is already
> > updated for the page in the previous run.
>
> +1. I don't deny that a patch like this could sometimes save
> something, but it doesn't seem like it would save all that much all
> that often. If your autovacuum runs are being frequently cancelled,
> that's going to be a big problem, I think.

I've observed cases where a user wants to cancel a very long-running
autovacuum (sometimes an anti-wraparound one) to do DDL or other
maintenance work. If the table is very large, autovacuum could take a
long time and might not reclaim enough garbage.

> And as Rafia says, even
> though you might do a little extra work reclaiming garbage from
> subsequently-modified pages toward the beginning of the table, it
> would be unusual if they'd *all* been modified. Plus, if they've
> recently been modified, they're more likely to be in cache.
>
> I think this patch really needs a test scenario or demonstration of
> some kind to prove that it produces a measurable benefit.

Okay. A simple test could be to cancel a long-running vacuum on a
large table that is being updated, rerun vacuum, and then see how much
garbage remains on that table. I'll test it.

--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Resume vacuum and autovacuum from interruption and cancellation

From
btkimurayuzk
Date:
+    VACOPT_RESUME = 1 << 8        /* resume from the previous point */

I think this unused enum value is not needed.

Regards,

Yu Kimura




Re: Resume vacuum and autovacuum from interruption and cancellation

From
Masahiko Sawada
Date:
On Tue, 5 Nov 2019 at 15:57, Masahiko Sawada
<masahiko.sawada@2ndquadrant.com> wrote:
>
> On Sat, 2 Nov 2019 at 02:10, Robert Haas <robertmhaas@gmail.com> wrote:
> >
> > On Thu, Aug 8, 2019 at 9:42 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:
> > > Sounds like an interesting idea, but does it really help? Because if
> > > vacuum was interrupted previously, wouldn't it already know the dead
> > > tuples, etc in the next run quite quickly, as the VM, FSM is already
> > > updated for the page in the previous run.
> >
> > +1. I don't deny that a patch like this could sometimes save
> > something, but it doesn't seem like it would save all that much all
> > that often. If your autovacuum runs are being frequently cancelled,
> > that's going to be a big problem, I think.
>
> I've observed the case where user wants to cancel a very long running
> autovacuum (sometimes for anti-wraparound) for doing DDL or something
> maintenance works. If the table is very large autovacuum could take a
> long time and they might not reclaim garbage enough.
>
> > And as Rafia says, even
> > though you might do a little extra work reclaiming garbage from
> > subsequently-modified pages toward the beginning of the table, it
> > would be unusual if they'd *all* been modified. Plus, if they've
> > recently been modified, they're more likely to be in cache.
> >
> > I think this patch really needs a test scenario or demonstration of
> > some kind to prove that it produces a measurable benefit.
>
> Okay. A simple test could be that we cancel a long running vacuum on a
> large table that is being updated and rerun vacuum. And then we see
> the garbage on that table. I'll test it.
>

Attached is the updated version of the patch.

I've measured the effect of this patch. In the test, I simulate the
case where an autovacuum running on a table that is being updated is
canceled in the middle of vacuum, and then autovacuum is rerun (or
resumed) on the table. Since the vacuum resume block is saved after
heap vacuum, I set maintenance_work_mem so that vacuum on that table
needs two or more heap vacuum passes. In other words,
maintenance_work_mem is used up at least once during autovacuum. The
detailed steps are:

1.  Make the table dirty for 15 min
2.  Run vacuum with vacuum delays
3.  After the first heap vacuum, cancel it
4.  Rerun vacuum (or, with the patch, resume vacuum)

From step #2 through step #4 the table is being updated in the
background. I used pgbench and the \random command, so the table is
updated uniformly.

I've measured the dead-tuple percentage of the table. In these tests,
how long step #4 took and how much garbage was collected at step #4
are what matter.

1. Canceled vacuum after processing about 20% of table at step #2.
1-1. HEAD
After making dirtied (after step #1): 6.96%
After cancellation (after step #3): 6.13%

At step #4, vacuum reduced it to 4.01% and took 12m 49s. The vacuum
efficiency is 0.16%/m (2.12% down in 12.8min).

1-2. Patched (resume vacuum)
After making dirtied (after step #1): 6.92%
After cancellation (after step #3): 5.84%

At step #4, vacuum reduced it to 4.32% and took 10m 26s. The vacuum
efficiency is 0.14%/m.

------
2. Canceled vacuum after processing about 40% of table at step #2.
2-1. HEAD
After making dirtied (after step #1): 6.97%
After cancellation (after step #3): 4.56%

At step #4, vacuum reduced it to 1.91% and took 8m 15s. The vacuum
efficiency is 0.32%/m.

2-2. Patched (resume vacuum)
After making dirtied (after step #1): 6.97%
After cancellation (after step #3): 4.46%

At step #4, vacuum reduced it to 1.94% and took 6m 30s. The vacuum
efficiency is 0.38%/m.

-----
3. Canceled vacuum after processing about 70% of table at step #2.
3-1. HEAD
After making dirtied (after step #1): 6.97%
After cancellation (after step #3): 4.73%

At step #4, vacuum reduced it to 2.32% and took 8m 11s. The vacuum
efficiency is 0.29%/m.

3-2. Patched (resume vacuum)
After making dirtied (after step #1): 6.96%
After cancellation (after step #3): 4.73%

At step #4, vacuum reduced it to 3.25% and took 4m 12s. The vacuum
efficiency is 0.35%/m.

According to those results, it seems that the further toward the tail
of the table we resume vacuum from, the better the efficiency. Since
the table was being updated uniformly even during autovacuum, it was
more efficient to restart autovacuum from the last position rather
than from the beginning of the table. I think those results show some
benefit from this patch, but I'm concerned that it might be difficult
for users to judge when to use this option. In practice the efficiency
depends entirely on the dispersion of updated pages, and this test
made pages dirty uniformly, which is not a common situation. So if we
want this feature, I think we should enable resuming automatically
when we can be reasonably sure that resuming is better. For example,
we could remember both the last vacuumed block and how many
vacuum-able pages seem to exist after it, and decide to resume vacuum
if we can expect to process more pages that way.

Regards

--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Вложения

Re: Resume vacuum and autovacuum from interruption and cancellation

From
Ibrar Ahmed
Date:
The following review has been posted through the commitfest application:
make installcheck-world:  tested, failed
Implements feature:       tested, failed
Spec compliant:           not tested
Documentation:            not tested

Please fix the regression test cases.

The new status of this patch is: Waiting on Author

Re: Resume vacuum and autovacuum from interruption and cancellation

From
David Steele
Date:
On 2/28/20 8:56 AM, Masahiko Sawada wrote:
> 
> According to those results, it's thought that the more we resume
> vacuum from the tail of the table, the efficiency is good. Since the
> table is being updated uniformly even during autovacuum it was more
> efficient to restart autovacuum from last position rather than from
> the beginning of the table. I think that results shows somewhat the
> benefit of this patch but I'm concerned that it might be difficult for
> users when to use this option. In practice the efficiency completely
> depends on the dispersion of updated pages, and that test made pages
> dirty uniformly, which is not a common situation. So probably if we
> want this feature, I think we should automatically enable resuming
> when we can basically be sure that resuming is better. For example, we
> remember both the last vacuumed block and how many vacuum-able pages
> seems to exist from there, and we decide to resume vacuum if we can
> expect to process more many pages.

I have to say I'm a bit confused by the point of this patch. I get
that resuming partway through is faster, but that's only true because
the entire table is not being vacuumed?

If as you say:

 > If we start to vacuum from not first block, we can update neither
 > relfrozenxid nor relfrozenxmxid. And we might not be able to update
 > even relation statistics.

Then we'll still need to vacuum the entire table before we can be sure 
the oldest xid has been removed/frozen. If we could do those updates on 
a resume then that would change my thoughts on the feature a lot.

What am I missing?

I'm marking this Returned with Feedback due to the concerns expressed
up-thread (and mine) and because the patch has been Waiting on Author
for nearly the entire CF.

Regards,
-- 
-David
david@pgmasters.net