Re: JSONB performance enhancement for 9.6

From: Tom Smith
Subject: Re: JSONB performance enhancement for 9.6
Date:
Msg-id: CAKwSVFHNNDNjx4ipga2EO1hL5ihFKpoOvdR=8Q+4kwqs-W3yDQ@mail.gmail.com
In reply to: Re: JSONB performance enhancement for 9.6  (Thomas Kellerer <spam_eater@gmx.net>)
Responses: Re: JSONB performance enhancement for 9.6  (Bill Moran <wmoran@potentialtech.com>)
           Re: JSONB performance enhancement for 9.6  (Arthur Silva <arthurprs@gmail.com>)
List: pgsql-general
Hi, thanks for everyone's responses.

The issue is not just compression, but the lack of "indexing" or "segmentation" when a
single doc has, say, 2000 top-level keys (with multiple levels of subkeys).
Right now, if I query for one key, the whole doc
has to be uncompressed and loaded first, and only then searched for the single key.
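
For concreteness, a minimal sketch of that access pattern (the table and key names here are hypothetical):

    -- hypothetical table holding one large document per row
    CREATE TABLE docs (id int PRIMARY KEY, doc jsonb);

    -- fetching a single key still detoasts and decompresses the whole doc first
    SELECT doc ->> 'A1' FROM docs WHERE id = 42;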

Compared to the traditional way of storing each top-level key in a separate column, this is a huge overhead when a table scan is required. Some kind of "keyed/slotted" storage for the doc could
help (for illustration, all keys starting with 'A' would have their own storage unit, and so on,
so when I search for key "A1" only that unit would be unpacked and traversed to get the "A1" value). It is as if PostgreSQL predefined 26 columns/slots for the whole doc: an internal index
within each doc for fast retrieval of individual field values.
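
To make the idea concrete, here is a rough manual approximation using existing features (all names are hypothetical; this is a sketch of the access pattern, not a proposal for the actual storage format):

    -- one jsonb "slot" per leading letter of the top-level key
    CREATE TABLE docs_slotted (
        id     int PRIMARY KEY,
        slot_a jsonb,  -- keys starting with 'A'
        slot_b jsonb,  -- keys starting with 'B'
        -- slot_c .. slot_y omitted here for brevity
        slot_z jsonb   -- keys starting with 'Z'
    );

    -- looking up "A1" detoasts only slot_a, not the entire document
    SELECT slot_a ->> 'A1' FROM docs_slotted WHERE id = 42;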

Someone mentioned a plan on the roadmap for this route, but I'd like to know if it is in the 9.6 plan.

The URL below mentions a similar issue; I am not sure whether it has been completely resolved.

http://stackoverflow.com/questions/26259374/jsonb-performance-degrades-as-number-of-keys-increase

The URL below also discusses the potential issue.

https://www.reddit.com/r/PostgreSQL/comments/36rdlr/improving_select_performance_with_jsonb_vs_hstore/

Thanks



On Sun, Nov 29, 2015 at 7:35 AM, Thomas Kellerer <spam_eater@gmx.net> wrote:
Tom Smith wrote on 29.11.2015 at 03:27:
Hello:

Is there a plan for 9.6 to resolve the issue of very slow query/retrieval of jsonb fields
when there is a large number (maybe several thousand) of top-level keys?
Currently, if I save a large JSON document with thousands of top-level keys and query/retrieve
field values, the whole document has to be decompressed and loaded into memory
before searching for the specific field key/value.

Thanks in advance

If you are concerned about the compression overhead, then why don't you use (or try) JSON instead?
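
For what it's worth, compression alone can be switched off per column without changing the type; a minimal sketch, assuming the hypothetical docs table from the example above:

    -- store the jsonb column uncompressed (still TOASTed out-of-line when large)
    ALTER TABLE docs ALTER COLUMN doc SET STORAGE EXTERNAL;

This avoids the decompression step on each access, though the whole value is still read in order to extract a single key.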







