Re: [HACKERS] Page Scan Mode in Hash Index

From: Ashutosh Sharma
Subject: Re: [HACKERS] Page Scan Mode in Hash Index
Date:
Msg-id: CAE9k0P=4_JJ1Qawrgu8Mo4BD2LW4cPW6u++NVDV-5s=pmrdO1A@mail.gmail.com
In response to: Re: [HACKERS] Page Scan Mode in Hash Index  (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: [HACKERS] Page Scan Mode in Hash Index  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Sat, Aug 19, 2017 at 11:37 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Aug 11, 2017 at 6:51 PM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:
>>>
>>> 7.
>>> _hash_kill_items(IndexScanDesc scan)
>>> {
>>> ..
>>> + /*
>>> + * If page LSN differs it means that the page was modified since the
>>> + * last read. killedItems could be not valid so LP_DEAD hints apply-
>>> + * ing is not safe.
>>> + */
>>> + page = BufferGetPage(buf);
>>> + if (PageGetLSN(page) != so->currPos.lsn)
>>> + {
>>> + _hash_relbuf(rel, buf);
>>> + return;
>>> + }
>>> ..
>>> }
>>>
>>> How does this check cover the case of unlogged tables?
>>
>> Thanks for putting that point. It doesn't cover the case for unlogged
>> tables. As suggested by you in one of your email in this mailing list, i am
>> now not allowing vacuum to release lock on current page before acquiring
>> lock on next page for unlogged tables. This will ensure that scan is always
>> behind vacuum if they are running on the same bucket simultaneously.
>> Therefore, there is danger in marking tuples as dead for unlogged pages even
>> if they are not having any lsn.
>>
>

Once again, thank you for reviewing my patches.

> In the last line, I guess you wanted to say "there is *no* danger
> ..."?

Yes, I meant that, because it ensures that the scan will always follow VACUUM.

> Today, while again thinking about this part of the patch
> (_hash_kill_items) it occurred to me that we can't rely on a pin on an
> overflow page to guarantee that it is not modified by Vacuum.
> Consider a case where vacuum started vacuuming the bucket before the
> scan and then in-between scan overtakes it.  Now, it is possible that
> even if a scan has a pin on a page (and no lock), vacuum might clean
> that page, if that happens, then we can't prevent the reuse of TID.
> What do you think?
>

I think you are talking about the non-MVCC scan case, because in case
of MVCC scans, even if we have released both the pin and the lock on a
page, VACUUM can't remove tuples from that page if it is still visible
to some concurrently running transaction (the MVCC scan in our case).
So, I don't think it can happen for MVCC scans; however, it can happen
for non-MVCC scans, and to handle that I think it is better that we
drop the idea of allowing the scan to overtake VACUUM (done by
0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index_v5.patch).

However, B-Tree handles this in _bt_drop_lock_and_maybe_pin(), where it
releases both the pin and the lock on a page if the scan uses an MVCC
snapshot, and otherwise releases just the lock on the page.

> Few other comments:
>
> 1.
> + * On failure exit (no more tuples), we return FALSE with pin
> + * pin held on bucket page but no pins or locks held on overflow
> + * page.
>   */
>  bool
>  _hash_next(IndexScanDesc scan, ScanDirection dir)
>
> In the above part of comment 'pin' is used twice.

Okay, I will remove the extra 'pin' from the comment in my next version of the patch.

>
> 0003-Improve-locking-startegy-during-VACUUM-in-Hash-Index_v5
>
> 2.
> - * not at all by the rearrangement we are performing here.  To prevent
> - * any concurrent scan to cross the squeeze scan we use lock chaining
> - * similar to hasbucketcleanup.  Refer comments atop hashbucketcleanup.
> + * not at all by the rearrangement we are performing here.
>
> In _hash_squeezebucket, we still use lock chaining, so removing the
> above comment doesn't seem like a good idea.  I think you should copy
> part of a comment from hasbucketcleanup starting from "There can't be
> any concurrent .."

Okay, I will correct it in my next version of the patch.

>
> 3.
> _hash_freeovflpage()
> {
> ..
>
> * Concurrency issues are avoided by using lock chaining as
> * described atop hashbucketcleanup.
> ..
> }
>
> After fixing #2, you need to change the function name in above comment.
>

Sure, I will correct that in my next version of the patch.

With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com


