Re: Microvacuum support for Hash Index

From: Ashutosh Sharma
Subject: Re: Microvacuum support for Hash Index
Date:
Msg-id: CAE9k0PkoEzu3Ef4su4FyXcdTtwHapowP=_P0_=iQmSwVQOz6hQ@mail.gmail.com
In reply to: Re: Microvacuum support for Hash Index  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
Hi Amit,

Thanks for showing your interest and reviewing my patch. I have
started looking into your review comments. I will share the updated
patch in a day or two.

With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com

On Fri, Oct 28, 2016 at 4:42 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Mon, Oct 24, 2016 at 2:21 PM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:
>> Hi All,
>>
>> I have added a microvacuum support for hash index access method and
>> attached is the v1 patch for the same.
>>
>
> This is important functionality for hash indexes, as we already have
> the same functionality for other index types such as btree.
>
>> The patch basically takes care
>> of the following things:
>>
>> 1. Firstly, it changes the marking of dead tuples from
>> 'tuple-at-a-time' to 'page-at-a-time' during a hash index scan. For
>> this, we accumulate the heap TIDs and offsets of all the hash index
>> tuples flagged by kill_prior_tuple during the scan, and then mark the
>> accumulated items as LP_DEAD either while stepping from one page to
>> another (for scans in both the forward and backward direction), at
>> the end of the hash index scan, or during a rescan.
>>
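
For illustration, a rough sketch of the "mark everything when leaving the
page" step described in point 1 above, modeled on btree's _bt_killitems().
It assumes the scan opaque keeps an array of page offsets
(so->killedItems[]) and a counter (so->numKilled) that are filled in
whenever the executor sets kill_prior_tuple; those field names and the
LH_PAGE_HAS_DEAD_TUPLES flag are assumptions for the sketch, not
necessarily what the patch uses:

    /* Called when the scan leaves a page, at endscan, or at rescan. */
    static void
    _hash_kill_items_sketch(IndexScanDesc scan, Buffer buf)
    {
        HashScanOpaque  so = (HashScanOpaque) scan->opaque;
        Page            page = BufferGetPage(buf);
        HashPageOpaque  opaque = (HashPageOpaque) PageGetSpecialPointer(page);
        int             i;

        /* Mark every remembered item LP_DEAD in one pass over the page. */
        for (i = 0; i < so->numKilled; i++)
            ItemIdMarkDead(PageGetItemId(page, so->killedItems[i]));

        /* Tell future insertions that this page has reclaimable space. */
        opaque->hasho_flag |= LH_PAGE_HAS_DEAD_TUPLES;

        /* Hint-style change: no WAL record, just mark the buffer dirty. */
        MarkBufferDirtyHint(buf, true);

        so->numKilled = 0;
    }

(A real implementation would also re-verify the remembered items if the
buffer pin had been dropped in between, as _bt_killitems() does.)
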
>> 2. Secondly, when inserting a tuple into the hash index, if enough
>> space is not found on the current page, it ensures that we first
>> clean up any dead tuples found on the current hash index page before
>> moving to the next page in the bucket chain or going for a bucket
>> split. This increases page reusability and reduces the number of
>> page splits, thereby reducing the overall size of the hash index.
>>
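
And a sketch of the insertion-side behaviour in point 2: before walking to
the next overflow page or splitting the bucket, try to reclaim space from
items already marked dead on the current page. The helper name is
hypothetical; _hash_vacuum_one_page() follows the call signature quoted in
comment 2 below:

    /*
     * Illustrative helper for the insertion path: returns true if pruning
     * the LP_DEAD items on 'buf' freed enough room for an item of size
     * 'itemsz', so the caller can stay on the current page instead of
     * moving to the next page in the bucket chain or splitting the bucket.
     */
    static bool
    _hash_try_reclaim_space(Relation rel, Buffer metabuf, Buffer buf,
                            Buffer bucket_buf, Size itemsz)
    {
        Page            page = BufferGetPage(buf);
        HashPageOpaque  opaque = (HashPageOpaque) PageGetSpecialPointer(page);

        if ((opaque->hasho_flag & LH_PAGE_HAS_DEAD_TUPLES) == 0)
            return false;       /* nothing marked dead on this page */

        /* Physically remove the dead items and defragment the page. */
        _hash_vacuum_one_page(rel, metabuf, buf, bucket_buf,
                              (buf == bucket_buf));

        return PageGetFreeSpace(page) >= itemsz;
    }
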
>
> Few comments on patch:
>
> 1.
> +static void
> +hash_xlog_vacuum_one_page(XLogReaderState *record)
> +{
> + XLogRecPtr lsn = record->EndRecPtr;
> + xl_hash_vacuum *xldata = (xl_hash_vacuum *) XLogRecGetData(record);
> + Buffer bucketbuf = InvalidBuffer;
> + Buffer buffer;
> + Buffer metabuf;
> + Page page;
> + XLogRedoAction action;
>
> While replaying the delete/vacuum record on a standby, it can conflict
> with some already running queries.  Basically, the replay can remove
> rows that are still visible to queries on the standby.  You need to
> resolve such conflicts similar to what we do for btree delete records
> (refer to btree_xlog_delete).
>
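
For reference, a minimal sketch of the kind of conflict resolution being
suggested, modeled on btree_xlog_delete(); the latestRemovedXid field is
an assumption about what the xl_hash_vacuum record would need to carry:

    static void
    hash_xlog_vacuum_one_page(XLogReaderState *record)
    {
        xl_hash_vacuum *xldata = (xl_hash_vacuum *) XLogRecGetData(record);

        /*
         * On a hot-standby server, replaying the removal of index tuples
         * can conflict with queries whose snapshots still see the
         * pointed-to heap rows, so such queries must be cancelled first.
         */
        if (InHotStandby)
        {
            RelFileNode rnode;

            XLogRecGetBlockTag(record, 0, &rnode, NULL, NULL);
            ResolveRecoveryConflictWithSnapshot(xldata->latestRemovedXid,
                                                rnode);
        }

        /* ... then redo the page changes as the original function does ... */
    }
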
> 2.
> + /*
> + * Write-lock the meta page so that we can decrement
> + * tuple count.
> + */
> + _hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);
> +
> + _hash_vacuum_one_page(rel, metabuf, buf, bucket_buf,
> +  (buf == bucket_buf) ? true : false);
> +
> + _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);
>
> It seems the meta page lock is acquired here for longer than required,
> and it is also not needed when there are no deletable items on the
> page. You can take the meta page lock just before decrementing the count.
>
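
A rough sketch of the narrower locking being suggested, moving the meta
page lock inside _hash_vacuum_one_page() so it is taken only when
something was actually deleted and only around the count adjustment (the
deletable/ndeletable/metap locals are assumed to be declared earlier, and
WAL logging and critical-section handling are omitted):

    /* inside _hash_vacuum_one_page(), after collecting deletable offsets */
    if (ndeletable > 0)
    {
        /* Prune the LP_DEAD items from the bucket/overflow page first. */
        PageIndexMultiDelete(page, deletable, ndeletable);

        /*
         * Touch the meta page only now, and only long enough to decrement
         * the tuple count; when nothing was deletable the meta page is
         * never locked at all.
         */
        _hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_WRITE);
        metap = HashPageGetMeta(BufferGetPage(metabuf));
        metap->hashm_ntuples -= ndeletable;
        _hash_chgbufaccess(rel, metabuf, HASH_WRITE, HASH_NOLOCK);
    }
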
> 3.
>   Assert(offnum <= maxoff);
> +
>
> Spurious whitespace.  There are some other similar spurious whitespace
> changes in the patch; remove them as well.
>
>
>
> --
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com


