On Thu, Dec 22, 2016 at 2:04 AM, Mithun Cy <mithun.cy@enterprisedb.com> wrote:
> There seems to be a basic bug in _hash_doinsert().
>
> + * XXX this is useless code if we are only storing hash keys.
> + */
> + if (itemsz > HashMaxItemSize((Page) metap))
> + ereport(ERROR,
> + (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
> + errmsg("index row size %zu exceeds hash maximum %zu",
> + itemsz, HashMaxItemSize((Page) metap)),
> + errhint("Values larger than a buffer page cannot be indexed.")));
>
> "metap" (HashMetaPage) is not a Page (they are entirely different
> structures), so the cast above is not legal. Attached is a patch that
> corrects this by passing the actual Page that stores the HashMetaPage.
So, let's see here. HashMaxItemSize uses the page only to call
PageGetPageSize, which reads page->pd_pagesize_version, stored 18
bytes into the structure. Instead, we're passing it a pointer to
HashMetaPageData, where at that same offset it finds hashm_bsize,
which on my system has the value 8152. That's close enough to the
actual page size of 8192 that the check erroneously fails to fire.
It seemed like a better idea to declare a new variable for this rather
than to reuse an existing one of the correct type that happens to be
idle during this part of the function, so I committed and back-patched
the fix that way.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company