Feature Request: bigtsvector

From: CPT
Subject: Feature Request: bigtsvector
Date:
Msg-id: 55810C7D.6000801@novozymes.com
List: pgsql-general
Hi all;

We are running a multi-TB bioinformatics system on PostgreSQL and, in places, use a denormalized schema with many tsvectors aggregated together for centralized searching.  This is very important to the performance of the system.  These vectors aggregate many documents (sometimes tens of thousands), many of which contain large numbers of references to other documents.  It isn't uncommon to have tens of thousands of lexemes.  The tsvectors hold mixed document-id and natural-language search information (all of which comes from the same documents).
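For illustration, the aggregation pattern described above might be sketched like this (table and column names here are invented for the example, not taken from the original post; PostgreSQL has no built-in tsvector aggregate in core, but one can be defined over tsvector_concat, the function behind the || operator):

```sql
-- Per-document search vectors (hypothetical schema).
CREATE TABLE documents (
    id       bigint PRIMARY KEY,
    group_id bigint,
    body     text,
    body_tsv tsvector   -- e.g. to_tsvector('english', body)
);

-- Denormalized, centralized search table: one large aggregated
-- tsvector covering many documents per group.
CREATE TABLE search_index (
    group_id bigint PRIMARY KEY,
    agg_tsv  tsvector
);

-- Custom aggregate built on tsvector_concat (i.e. the || operator).
CREATE AGGREGATE tsvector_agg (tsvector) (
    sfunc = tsvector_concat,
    stype = tsvector
);

INSERT INTO search_index (group_id, agg_tsv)
SELECT group_id, tsvector_agg(body_tsv)
FROM documents
GROUP BY group_id;
```

With tens of thousands of documents per group, the aggregated vector is where the 1MB tsvector limit discussed below starts to bite.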

Recently we have started hitting the 1MB limit on tsvector size.  We have found it possible to patch PostgreSQL to allow larger tsvectors, but this changes the on-disk layout.  How likely is it that the tsvector size limit could be increased in future versions to allow vectors up to the toastable size (1GB logical)?  I can't imagine we are the only ones with such a problem.  Since, I think, changing the on-disk layout might not be such a good idea, maybe it would be worth considering a new bigtsvector type instead?
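One way to gauge how close existing rows are to the limit is to inspect lexeme counts and stored sizes (a sketch only; the table and column names are illustrative, while length(tsvector) and pg_column_size() are standard PostgreSQL functions):

```sql
-- Find the largest aggregated vectors, closest to the 1MB limit.
SELECT group_id,
       length(agg_tsv)         AS lexeme_count,  -- number of lexemes
       pg_column_size(agg_tsv) AS stored_bytes   -- on-disk size (possibly compressed)
FROM search_index
ORDER BY pg_column_size(agg_tsv) DESC
LIMIT 10;
```

Note that pg_column_size() reports the stored (possibly TOAST-compressed) size, so the in-memory vector can be larger than the figure shown.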

Btw, we've been very impressed with the extent that PostgreSQL has
tolerated all kinds of loads we have thrown at it.

Regards,
CPT

