Re: pg_verify_checksums failure with hash indexes

From: Dilip Kumar
Subject: Re: pg_verify_checksums failure with hash indexes
Date:
Msg-id: CAFiTN-sz7onsbuaGBRviFWegJf+Mt_570=r-=Xri_AhMz78B6Q@mail.gmail.com
In response to: Re: pg_verify_checksums failure with hash indexes  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: pg_verify_checksums failure with hash indexes  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Sat, Sep 1, 2018 at 8:22 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Aug 30, 2018 at 7:27 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> We have previously changed this define in 620b49a1 with the intent to
>> allow many non-unique values in hash indexes without worrying to reach
>> the limit of the number of overflow pages.  I think this didn't occur
>> to us that it won't work for smaller block sizes.  As such, I don't
>> see any problem with the suggested fix.  It will allow us the same
>> limit for the number of overflow pages at 8K block size and a smaller
>> limit at smaller block size.  I am not sure if we can do any better
>> with the current design.  As it will change the metapage, I think we
>> need to bump HASH_VERSION.
>
> I wouldn't bother bumping HASH_VERSION.  First, the fix needs to be
> back-patched, and you certainly can't back-patch a HASH_VERSION bump.
> Second, you should just pick a formula that gives the same answer as
> now for the cases where the overrun doesn't occur, and some other
> sufficiently-small value for the cases where an overrun currently does
> occur.  If you do that, you're not changing the behavior in any case
> that currently works, so there's really no reason for a version bump.
> It just becomes a bug fix at that point.
>

I think if we compute it with the formula below, which I suggested upthread:

#define HASH_MAX_BITMAPS                       Min(BLCKSZ / 8, 1024)

then for a BLCKSZ of 8K and bigger it will remain at the same value,
where it does not overrun.  And for smaller BLCKSZ, I think it will
still give sufficient space for the hash map.  If BLCKSZ is 1K, then
sizeof(HashMetaPageData) + sizeof(HashPageOpaque) = 968, which is very
close to the BLCKSZ.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

