Re: [HACKERS] GSoC 2017: weekly progress reports (week 4) and patch for hash index

From: Amit Kapila
Subject: Re: [HACKERS] GSoC 2017: weekly progress reports (week 4) and patch for hash index
Date:
Msg-id: CAA4eK1+7gpjvkSu2OZUofEN-BxbXYz1gubeB+tbthwBCZpPi7w@mail.gmail.com
In reply to: Re: [HACKERS] GSoC 2017: weekly progress reports (week 4) and patch for hash index  (Alexander Korotkov <a.korotkov@postgrespro.ru>)
Responses: Re: [HACKERS] GSoC 2017: weekly progress reports (week 4) and patch for hash index
List: pgsql-hackers
On Thu, Jan 25, 2018 at 7:29 PM, Alexander Korotkov
<a.korotkov@postgrespro.ru> wrote:
> On Sat, Jan 20, 2018 at 4:24 PM, Amit Kapila <amit.kapila16@gmail.com>
> wrote:
>>
>> On Fri, Sep 29, 2017 at 8:20 PM, Alexander Korotkov
>> <a.korotkov@postgrespro.ru> wrote:
>> > +1,
>> > Very nice idea!  Locking hash values directly seems to be superior over
>> > locking hash index pages.
>> > Shubham, do you have any comment on this?
>> >
>>
>> As Shubham seems to be running out of time, I thought of helping him
>> by looking into the above-suggested idea.  I think one way to lock a
>> particular hash value is we can use TID of heap tuple associated with
>> each index entry (index entry for the hash index will be hash value).
>
>
> Sorry, I didn't get what do you particularly mean.  If locking either TID of
> associated heap tuple or TID of hash index tuple, then what will we lock
> in the case when nothing found?  Even if we found nothing, we have
> to place some lock according to search key in order to detect cases when
> somebody has inserted the row which we might see according to that search
> key.
>

Okay, but if you use the hash value as the lock tag (which is possible), how
will we deal with things like page splits?  Even if we include the block
number/page/bucket number corresponding to the hash value in the lock tag
along with the hash value, it still doesn't appear to work.  I think using
page-level locks for the index makes sense, especially because it will be
convenient to deal with page splits.  Also, since predicate locks stay in
memory, creating too many of them doesn't sound like a good strategy, even
though we have a way to promote them to the next level (page), as that
promotion has its own cost.
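
To make the page-level idea concrete, here is a rough sketch (not an actual
patch; the function name and surrounding details are made up for
illustration) of how a hash index scan could take an SIREAD lock on the
bucket's primary page using the existing PredicateLockPage() API:

#include "postgres.h"

#include "access/hash.h"
#include "storage/bufmgr.h"
#include "storage/predicate.h"

/*
 * Sketch only: lock the bucket page that the scan's hash value maps to.
 * The lock is taken whether or not any matching tuple is found, which is
 * what lets us detect a later insert of a row the scan would have seen.
 */
static void
hash_scan_predicate_lock(Relation rel, Buffer bucket_buf, Snapshot snapshot)
{
	PredicateLockPage(rel, BufferGetBlockNumber(bucket_buf), snapshot);
}

Because the lock tag is just the bucket's block number, the "nothing found"
case mentioned above is covered for free, and we don't need to encode the
hash value in the tag at all.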

>>
>> However, do we really need it for implementing predicate locking for
>> hash indexes?  If we look at the "Index AM implementations" section of
>> README-SSI, it doesn't seem to be required.  Basically, if we look at
>> the strategy of predicate locks in btree [1], it seems to me locking
>> at page level for hash index seems to be a right direction as similar
>> to btree, the corresponding heap tuple read will be locked.
>
>
> Btree uses leaf-pages locking because it supports both range searches
> and exact value searches.  And it needs to detect overlaps between
> these two kinds of searches.  Therefore, btree locks leaf-pages in both
> cases.
>

Also, using page-level locks probably makes it easier to deal with index
operations like page splits.
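
For the split case, the existing PredicateLockPageSplit() call that btree
already uses should be sufficient.  Roughly (again, only a sketch; the
function and variable names are illustrative):

#include "postgres.h"

#include "access/hash.h"
#include "storage/bufmgr.h"
#include "storage/predicate.h"

/*
 * Sketch only: after a bucket split, predicate locks held on the old
 * bucket's primary page must also cover the new bucket, because tuples
 * a reader could have seen may now be routed there.
 */
static void
hash_split_predicate_locks(Relation rel, Buffer obuf, Buffer nbuf)
{
	PredicateLockPageSplit(rel,
						   BufferGetBlockNumber(obuf),
						   BufferGetBlockNumber(nbuf));
}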

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

