Re: [HACKERS] Block level parallel vacuum
| From | Amit Kapila |
|---|---|
| Subject | Re: [HACKERS] Block level parallel vacuum |
| Date | |
| Msg-id | CAA4eK1LQ+YGjmSS-XqhuAa6eb=Xykpx1LiT7UXJHmEKP=0QtsA@mail.gmail.com |
| In reply to | Re: [HACKERS] Block level parallel vacuum (Amit Kapila <amit.kapila16@gmail.com>) |
| Responses | Re: [HACKERS] Block level parallel vacuum |
| List | pgsql-hackers |
On Fri, Nov 22, 2019 at 2:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Wed, Nov 20, 2019 at 11:01 AM Masahiko Sawada
> <masahiko.sawada@2ndquadrant.com> wrote:
> >
> > I've attached the latest version patch set. The patch set includes all
> > discussed points regarding index AM options as well as shared cost
> > balance. Also I added some test cases used all types of index AM.
> >
>
> I have reviewed the first patch and made a number of modifications
> that include adding/modifying comments, made some corrections and
> modifications in the documentation. You can find my changes in
> v33-0001-delta-amit.patch.
>
I have continued my review of this patch series and reviewed/hacked
the second patch. I have added/modified comments, changed the function
ordering in the file to make it consistent, and made a few other changes.
You can find my changes in v33-0002-delta-amit.patch. Are you
working on the review comments given recently? If you have not started
yet, it might be better to prepare a patch atop the v33 version, as
I am also going to work on this patch series; that way it will be easy
to merge changes. OTOH, if you are already working on those, that is
fine; I can merge any remaining changes with your new patch.
Either way, please let me know.
Few more comments on v33-0002-Add-parallel-option-to-VACUUM-command.patch:
---------------------------------------------------------------------------------------------------------------------------
1.
+ * leader process re-initializes the parallel context while keeping recorded
+ * dead tuples so that the leader can launch parallel workers again in the next
+ * time.
In this sentence, it is not clear to me why we need to keep the
recorded dead tuples while re-initializing the parallel context. The
next time the workers are launched, they should process a new set of
dead tuples, no?
2.
lazy_parallel_vacuum_or_cleanup_indexes()
{
..
+	/*
+	 * Increment the active worker count. We cannot decrement until the
+	 * all parallel workers finish.
+	 */
+	pg_atomic_add_fetch_u32(VacuumActiveNWorkers, 1);
+
+	/*
+	 * Join as parallel workers. The leader process alone does that in
+	 * case where no workers launched.
+	 */
+	if (lps->leaderparticipates || lps->pcxt->nworkers_launched == 0)
+		vacuum_or_cleanup_indexes_worker(Irel, nindexes, stats, lps->lvshared,
+										 vacrelstats->dead_tuples);
+
+	/*
+	 * Here, the indexes that had been skipped during parallel index vacuuming
+	 * are remaining. If there are such indexes the leader process does vacuum
+	 * or cleanup them one by one.
+	 */
+	nindexes_remains = nindexes -
+		pg_atomic_read_u32(&(lps->lvshared->nprocessed));
+	if (nindexes_remains > 0)
+	{
+		int		i;
+#ifdef USE_ASSERT_CHECKING
+		int		nprocessed = 0;
+#endif
+
+		for (i = 0; i < nindexes; i++)
+		{
+			bool	processed = !skip_parallel_index_vacuum(Irel[i],
+															lps->lvshared->for_cleanup,
+															lps->lvshared->first_time);
+
+			/* Skip the already processed indexes */
+			if (processed)
+				continue;
+
+			if (lps->lvshared->for_cleanup)
+				lazy_cleanup_index(Irel[i], &stats[i],
+								   vacrelstats->new_rel_tuples,
+								   vacrelstats->tupcount_pages < vacrelstats->rel_pages);
+			else
+				lazy_vacuum_index(Irel[i], &stats[i], vacrelstats->dead_tuples,
+								  vacrelstats->old_live_tuples);
+#ifdef USE_ASSERT_CHECKING
+			nprocessed++;
+#endif
+		}
+#ifdef USE_ASSERT_CHECKING
+		Assert(nprocessed == nindexes_remains);
+#endif
+	}
+
+	/*
+	 * We have completed the index vacuum so decrement the active worker
+	 * count.
+	 */
+	pg_atomic_sub_fetch_u32(VacuumActiveNWorkers, 1);
..
}
Here, it seems that we can increment/decrement VacuumActiveNWorkers
even when no work is performed by the leader backend. How about
moving the increment/decrement inside
vacuum_or_cleanup_indexes_worker? That way, the counter is touched in
that function only when we are actually doing an index vacuum or
cleanup. After doing that, the other increment/decrement of
VacuumActiveNWorkers in heap_parallel_vacuum_main can
be removed.
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com