Re: WIP Incremental JSON Parser

From: Nico Williams
Subject: Re: WIP Incremental JSON Parser
Date:
Msg-id: ZZXvjd9gSNlYWaRG@ubby
In reply to: Re: WIP Incremental JSON Parser (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: WIP Incremental JSON Parser (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Tue, Jan 02, 2024 at 10:14:16AM -0500, Robert Haas wrote:
> It seems like a pretty significant savings no matter what. Suppose the
> backup_manifest file is 2GB, and instead of creating a 2GB buffer, you
> create a 1MB buffer and feed the data to the parser in 1MB chunks.
> Well, that saves 2GB less 1MB, full stop. Now if we address the issue
> you raise here in some way, we can potentially save even more memory,
> which is great, but even if we don't, we still saved a bunch of memory
> that could not have been saved in any other way.
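
(For concreteness, here is a sketch of that chunked-feeding pattern in
Python; `parse_file()`, `feed()`, and `finish()` are hypothetical
stand-ins, not the patch's actual API:)

    CHUNK_SIZE = 1024 * 1024      # a 1MB working buffer instead of a 2GB one

    def parse_file(path, parser):
        # 'parser' is any object exposing a push-style feed()/finish()
        # pair.  Only one chunk is resident at a time, so peak memory is
        # CHUNK_SIZE plus whatever state the parser itself keeps.
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                parser.feed(chunk)
        return parser.finish()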

You could also build a streaming incremental parser.  That is, one that
outputs a path and a leaf value (where leaf values are scalar values:
`null`, `true`, `false`, numbers, and strings).  If the caller is doing
something JSONPath-like, it can probably free almost all allocations
immediately and even terminate the parse early.

Nico
-- 


