Re: Making jsonb_agg() faster
From | Tom Lane
---|---
Subject | Re: Making jsonb_agg() faster
Date | 
Msg-id | 2466173.1756498180@sss.pgh.pa.us
In reply to | Re: Making jsonb_agg() faster (Chao Li <li.evan.chao@gmail.com>)
List | pgsql-hackers
Chao Li <li.evan.chao@gmail.com> writes:
> A few more suggestions for pushJsonbValue():
> ...
> To push WJB_BEGIN_OBJECT and WJB_END_OBJECT, we can directly call
> pushJsonbValueScalar(), because once entering pushJsonbValue(), they will
> meet the check of (seq != WJB_ELEM && seq != WJB_VALUE). Directly calling
> pushJsonbValueScalar() will save one level of recursion.

I'm not excited about that idea, because I think it'd be quite confusing
for some of the calls in those stanzas to be to pushJsonbValueScalar
while others are to pushJsonbValue.  I don't think the recursive-push
paths are particularly hot, so giving up readability to make them faster
doesn't seem like the right tradeoff.

There's certainly room to argue that the separation between
pushJsonbValue and pushJsonbValueScalar is poorly thought out and could
be done better.  But I don't have a concrete idea about what it could
look like instead.

> And for pushJsonbValueScalar():
> -		(*pstate)->size = 4;
> +		ppstate->size = 4;	/* initial guess at array size */
> Can we do lazy allocation? Initially assume size = 0, and only allocate
> memory when pushing the first element? This way, we won't allocate
> memory for empty objects and arrays.

Optimizing for the empty-array or empty-object case surely seems like the
wrong thing; how often will that apply?

I actually think that the initial allocation could stand to be a good bit
larger, maybe 64 or 256 or so entries, to reduce the number of repallocs.
I did experiment with that a little bit and could not show any definitive
speedup on the test case I was using ... but 4 entries seems miserly
small.

			regards, tom lane
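For readers skimming the archive, the structural question in the first half of the message can be illustrated with a condensed paraphrase of the dispatch logic. This is a sketch, not the actual src/backend/utils/adt/jsonb_util.c source: the object/array "stanzas" are abbreviated and the jbvBinary unpacking path is elided.

```c
/* Condensed sketch (not verbatim source) of the pushJsonbValue() dispatch
 * under discussion; assumes the backend headers for the Jsonb types. */
#include "postgres.h"
#include "utils/jsonb.h"

static JsonbValue *pushJsonbValueScalar(JsonbParseState **pstate,
										JsonbIteratorToken seq,
										JsonbValue *scalarVal);

JsonbValue *
pushJsonbValue(JsonbParseState **pstate, JsonbIteratorToken seq,
			   JsonbValue *jbval)
{
	/*
	 * The "stanzas" Tom refers to: a raw object or array supplied as
	 * WJB_ELEM/WJB_VALUE is flattened into a recursive series of pushes.
	 */
	if (jbval && (seq == WJB_ELEM || seq == WJB_VALUE) &&
		jbval->type == jbvObject)
	{
		pushJsonbValue(pstate, WJB_BEGIN_OBJECT, NULL);
		/* ... recursively push each key/value pair ... */
		return pushJsonbValue(pstate, WJB_END_OBJECT, NULL);
	}
	/* (similar stanza for jbvArray omitted) */

	/*
	 * WJB_BEGIN_OBJECT and WJB_END_OBJECT fail the (seq == WJB_ELEM ||
	 * seq == WJB_VALUE) test, so they drop straight through to the scalar
	 * path.  Calling pushJsonbValueScalar() directly from the stanzas above
	 * would skip this one extra call level, which is the suggestion Tom is
	 * declining on readability grounds.
	 */
	if (!jbval || (seq != WJB_ELEM && seq != WJB_VALUE) ||
		jbval->type != jbvBinary)
		return pushJsonbValueScalar(pstate, seq, jbval);

	/* ... otherwise unpack the jbvBinary value and push its parts ... */
	return NULL;				/* not reached in this sketch */
}
```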
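On the allocation question, the tradeoff is the usual grow-by-doubling one. Below is a minimal standalone sketch of the pattern; `GrowableArray`, `initArray`, and `appendElem` are hypothetical names used only for illustration, since the real code grows the pairs/elems array inside JsonbParseState.

```c
/* Standalone illustration of the up-front-allocate-then-double pattern
 * discussed above; names are hypothetical, not the JsonbParseState layout. */
#include "postgres.h"
#include "utils/jsonb.h"

typedef struct GrowableArray
{
	JsonbValue *elems;			/* palloc'd buffer */
	int			nelems;			/* entries in use */
	int			size;			/* allocated capacity */
} GrowableArray;

static void
initArray(GrowableArray *a)
{
	/*
	 * Up-front allocation, as in the code under discussion: the capacity
	 * starts at 4 even if nothing is ever appended.  Tom suggests a larger
	 * initial value (64 or 256) to cut down on repallocs; Chao Li's lazy
	 * variant would instead start at 0 and allocate on the first append.
	 */
	a->size = 4;				/* initial guess at array size */
	a->nelems = 0;
	a->elems = palloc(sizeof(JsonbValue) * a->size);
}

static void
appendElem(GrowableArray *a, const JsonbValue *v)
{
	if (a->nelems >= a->size)
	{
		/* Doubling growth: appending N entries costs O(log N) repallocs. */
		a->size *= 2;
		a->elems = repalloc(a->elems, sizeof(JsonbValue) * a->size);
	}
	a->elems[a->nelems++] = *v;
}
```

With doubling growth from an initial capacity of 4, building a container of around 1000 entries takes roughly eight repallocs; starting at 256 cuts that to two, while only the lazy allocate-on-first-push scheme avoids the allocation for empty containers, the case Tom considers not worth optimizing for.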