Re: pg_verify_checksums failure with hash indexes

From: Amit Kapila
Subject: Re: pg_verify_checksums failure with hash indexes
Date:
Msg-id: CAA4eK1LtF4VmU4mx_+i72ff1MdNZ8XaJMGkt2HV8+uSWcn8t4A@mail.gmail.com
In reply to: Re: pg_verify_checksums failure with hash indexes  (Dilip Kumar <dilipbalaut@gmail.com>)
Responses: Re: pg_verify_checksums failure with hash indexes
List: pgsql-hackers
On Wed, Aug 29, 2018 at 4:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Wed, Aug 29, 2018 at 3:39 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >> SHOW block_size ;
> >>  block_size
> >> ────────────
> >>  4096
> >>
> >> CREATE TABLE foo(val text);
> >> INSERT INTO foo VALUES('bernd');
> >>
> >> CREATE INDEX ON foo USING hash(val);
> >> ERROR:  index "foo_val_idx" contains corrupted page at block 0
> >> HINT:  Please REINDEX it.
> >>
> >> I have no idea whether this could be related, but I thought it
> >> wouldn't harm to share this here.
> >>
> >
> > This issue seems different from the one that got fixed in this
> > thread.  The reason for this issue is that the size of the
> > hashm_mapp array in HashMetaPageData is 4096 bytes, irrespective of
> > the block size.  So when the block size is big enough (i.e. 8192)
> > there is no problem, but when you set it to 4096, the hashm_mapp of
> > the meta page overwrites the special space of the meta page.  That's
> > the reason it's showing a corrupted page while checking the hash page.
>

Your analysis appears correct to me.

> Just to verify this I hacked it like below and it worked.  I
> think we need a more thoughtful value for HASH_MAX_BITMAPS.
>
> diff --git a/src/include/access/hash.h b/src/include/access/hash.h
..
> -#define HASH_MAX_BITMAPS                       1024
> +#define HASH_MAX_BITMAPS                       Min(BLCKSZ / 8, 1024)
>

We previously changed this define in 620b49a1 with the intent of
allowing many non-unique values in hash indexes without worrying about
reaching the limit on the number of overflow pages.  I think it didn't
occur to us that this won't work for smaller block sizes.  As such, I
don't see any problem with the suggested fix.  It will allow us the
same limit on the number of overflow pages at an 8K block size, and a
smaller limit at smaller block sizes.  I am not sure we can do any
better with the current design.  As it will change the metapage, I
think we need to bump HASH_VERSION.

Robert, others, any thoughts?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


In the pgsql-hackers list, by date:

Previous
From: Fabien COELHO
Date:
Message: Re: pg_verify_checksums and -fno-strict-aliasing
Next
From: Etsuro Fujita
Date:
Message: Extra word in src/backend/optimizer/README