Re: WIP Incremental JSON Parser
| From | Robert Haas |
|---|---|
| Subject | Re: WIP Incremental JSON Parser |
| Date | |
| Msg-id | CA+TgmoYLi7RjjbWvBHa_9+2rVTuO4C1xPHJqzCdNUnHxR3-NdA@mail.gmail.com |
| In reply to | Re: WIP Incremental JSON Parser (Andrew Dunstan <andrew@dunslane.net>) |
| Replies | Re: WIP Incremental JSON Parser |
| List | pgsql-hackers |
On Wed, Jan 3, 2024 at 6:57 AM Andrew Dunstan <andrew@dunslane.net> wrote:
> Yeah. One idea I had yesterday was to stash the field names, which in
> large JSON docs tend to be pretty repetitive, in a hash table instead of
> pstrduping each instance. The name would be valid until the end of the
> parse, and would only need to be duplicated by the callback function if
> it were needed beyond that. That's not the case currently with the
> parse_manifest code. I'll work on using a hash table.
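
To make the quoted idea concrete, a minimal sketch of field-name interning with PostgreSQL's dynahash API might look like the following. The function names, table size, and key-length cap are illustrative assumptions, not taken from any posted patch:

```c
/*
 * A minimal sketch of the field-name interning idea, using
 * PostgreSQL's dynahash API.  The names, table size, and key-length
 * cap are illustrative assumptions, not from the patch.
 */
#include "postgres.h"
#include "utils/hsearch.h"

/* Hypothetical cap on key length; longer names would be truncated. */
#define JSON_FIELD_NAME_MAX 64

typedef struct JsonFieldNameEntry
{
	char		name[JSON_FIELD_NAME_MAX];	/* hash key */
} JsonFieldNameEntry;

static HTAB *fieldname_hash = NULL;

static void
init_fieldname_hash(void)
{
	HASHCTL		ctl;

	ctl.keysize = JSON_FIELD_NAME_MAX;
	ctl.entrysize = sizeof(JsonFieldNameEntry);
	ctl.hcxt = CurrentMemoryContext;	/* lives until end of parse */

	fieldname_hash = hash_create("json field names", 256, &ctl,
								 HASH_ELEM | HASH_STRINGS | HASH_CONTEXT);
}

/*
 * Return a stable copy of the field name.  hash_search() copies each
 * distinct name into the table only the first time it is seen;
 * repeated names cost one lookup instead of one pstrdup.
 */
static const char *
intern_field_name(const char *name)
{
	JsonFieldNameEntry *entry;

	entry = hash_search(fieldname_hash, name, HASH_ENTER, NULL);
	return entry->name;
}
```

The returned pointer stays valid for the life of the table's memory context, i.e. until the end of the parse, which is the property the quoted text relies on.
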
IMHO, this is not a good direction. Anybody who is parsing JSON
probably wants to discard the duplicated labels and convert other
heavily duplicated strings to enum values or something (e.g. if every
record has {"color":"red"} or {"color":"green"}). So the hash table
lookups will cost something but won't really save anything over just
freeing the memory that is no longer needed, and will probably be
more expensive.
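
For contrast, here is a sketch of the consuming side described above: a scalar callback in the style of pg_parse_json's JsonSemAction that maps a heavily duplicated string value straight to an enum, so the parser's copy of the token is never retained. RecordParseState, FIELD_COLOR, and record_scalar are hypothetical names invented for this sketch:

```c
/*
 * Hypothetical consumer callback.  RecordParseState, FIELD_COLOR, and
 * record_scalar are invented for illustration; the callback signature
 * follows common/jsonapi.h.
 */
#include "postgres.h"
#include "common/jsonapi.h"

typedef enum ItemColor
{
	COLOR_UNKNOWN,
	COLOR_RED,
	COLOR_GREEN
} ItemColor;

typedef struct RecordParseState
{
	enum { FIELD_NONE, FIELD_COLOR } current_field;
	ItemColor	color;
} RecordParseState;

static ItemColor
color_from_string(const char *value)
{
	if (strcmp(value, "red") == 0)
		return COLOR_RED;
	if (strcmp(value, "green") == 0)
		return COLOR_GREEN;
	return COLOR_UNKNOWN;
}

static JsonParseErrorType
record_scalar(void *state, char *token, JsonTokenType tokentype)
{
	RecordParseState *ps = (RecordParseState *) state;

	if (ps->current_field == FIELD_COLOR && tokentype == JSON_TOKEN_STRING)
		ps->color = color_from_string(token);	/* token not retained */

	return JSON_SUCCESS;
}
```

If the consumer works this way, an interned copy of "red" or "green" in the parser's hash table is paid for but never used, which is the objection being made here.
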
> The parse_manifest code does seem to pfree the scalar values it no
> longer needs fairly well, so maybe we don't need to do anything there.
Hmm. This makes me wonder: have you measured how much actual leakage there is?
--
Robert Haas
EDB: http://www.enterprisedb.com