Thread: Out of overflow pages. Out of luck.

Out of overflow pages. Out of luck.

From: John Frank
Date:
List: pgsql-general

Does anyone have experience hacking the HASH index code to allow more
overflow pages?

I get the following when indexing a table with about 300M entries:

db=# \d table1
       Table "table1"
 Attribute |     Type     | Modifier
-----------+--------------+----------
 field1    | varchar(256) |
 field2    | integer      |
 field3    | float8       |

db=# create index table1_field1 on table1 using hash(field1);
ERROR:  HASH: Out of overflow pages.  Out of luck.

This also happens for field2.

I looked at the postgresql-7.0.3 source, in src/include/access/hash.h:

 * The reason that the size is restricted to NCACHED (32) is because
 * the bitmaps are 16 bits: upper 5 represent the splitpoint, lower 11
 * indicate the page number within the splitpoint. Since there are
 * only 5 bits to store the splitpoint, there can only be 32 splitpoints.
 * Both spares[] and bitmaps[] use splitpoints as their indices, so there
 * can only be 32 of them.
 */
#define NCACHED                 32
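
To make the limit concrete, here is a minimal standalone C sketch
(illustrative only, not the actual PostgreSQL code; the macro and
function names are made up) of how a 16-bit overflow-page address
splits into a 5-bit splitpoint and an 11-bit page number:

#include <stdio.h>
#include <stdint.h>

#define SPLITPOINT_BITS 5                        /* upper 5 bits  */
#define PAGE_BITS       11                       /* lower 11 bits */
#define MAX_SPLITPOINTS (1 << SPLITPOINT_BITS)   /* 32 == NCACHED */
#define PAGES_PER_SPLIT (1 << PAGE_BITS)         /* 2048          */

/* Pack a (splitpoint, page) pair into the 16-bit address layout
 * described in the hash.h comment above.  Hypothetical helper. */
static uint16_t pack_ovfl_addr(unsigned splitpoint, unsigned page)
{
    return (uint16_t) ((splitpoint << PAGE_BITS) | page);
}

int main(void)
{
    uint16_t addr = pack_ovfl_addr(31, 2047);  /* largest address */

    printf("splitpoint=%u page=%u addr=0x%04x\n",
           (unsigned) (addr >> PAGE_BITS),
           (unsigned) (addr & (PAGES_PER_SPLIT - 1)),
           (unsigned) addr);
    printf("hard ceiling: %d splitpoints * %d pages = %d addresses\n",
           MAX_SPLITPOINTS, PAGES_PER_SPLIT,
           MAX_SPLITPOINTS * PAGES_PER_SPLIT);
    return 0;
}

Since spares[] and bitmaps[] are indexed by splitpoint, neither can
grow past 32 entries no matter how large the table gets, which is the
wall this index build hits.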

Is there a way around this?!  If not, what a horrific limitation.

Thanks!  John

Re: Out of overflow pages. Out of luck.

From: Tom Lane
Date:
John Frank <jrf@segovia.mit.edu> writes:
> Does anyone have experience hacking the HASH index code to allow more
> overflow pages?

Er, why not use btree instead?  The hash index code isn't nearly as well
debugged or maintained as btree.
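
For the original example, that would be (reusing the poster's table
and column names; btree is also PostgreSQL's default access method,
so the "using btree" clause can be omitted):

db=# create index table1_field1 on table1 using btree(field1);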

Of course, if you want to adopt the hash stuff as a personal project,
be my guest --- fixes are always appreciated.  But it's got a number of
known problems, like being subject to deadlocks under concurrent access.

            regards, tom lane