Re: bad estimation together with large work_mem generates terrible slow hash joins
| From | Tom Lane |
|---|---|
| Subject | Re: bad estimation together with large work_mem generates terrible slow hash joins |
| Date | |
| Msg-id | 18330.1410449299@sss.pgh.pa.us |
| In reply to | Re: bad estimation together with large work_mem generates terrible slow hash joins ("Tomas Vondra" <tv@fuzzy.cz>) |
| Responses | Re: bad estimation together with large work_mem generates terrible slow hash joins |
| List | pgsql-hackers |
"Tomas Vondra" <tv@fuzzy.cz> writes:
> On 11 September 2014, 16:11, Tom Lane wrote:
>> Ah. Well, that would mean that we need a heuristic for deciding when to
>> increase the number of buckets versus the number of batches ... seems
>> like a difficult decision.
> That's true, but that's not the aim of this patch. The patch simply
> increases the number of buckets if the load happens to get too high, and
> does not try to decide between increasing nbuckets and nbatch.
On further thought, increasing nbuckets without changing the batch
boundaries would not get us out of an out-of-work_mem situation; in fact,
it makes memory consumption worse, not better (assuming you count the
bucket headers towards work_mem ;-)).
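To put a rough number on that overhead (an illustrative back-of-the-envelope calculation, not figures from the thread): each bucket header is one pointer, so on a 64-bit build doubling nbuckets from 1M to 2M adds about 8 MB that would be charged against work_mem.

```c
/* Illustrative only: bucket-header overhead when the headers are counted
 * toward work_mem.  Assumes 8-byte pointers; no PostgreSQL internals used. */
#include <stdio.h>

int main(void)
{
    long nbuckets = 1L << 20;                 /* 1M buckets */
    long header_bytes = nbuckets * 8L;        /* one pointer per bucket header */

    printf("bucket headers now:     %ld MB\n", header_bytes / (1024 * 1024));
    printf("after doubling buckets: %ld MB\n", (2 * header_bytes) / (1024 * 1024));
    return 0;
}
```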
So in principle, the rule seems like it ought to go "if load (defined as
max bucket chain length, I imagine?) gets too high, but we are still
well below work_mem, increase nbuckets; else increase nbatch". And
perhaps we reset nbuckets again for the next batch, not sure. If we
are dealing with an out-of-work_mem situation then only increasing nbatch
would be a suitable response.
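To make that rule concrete, here is a minimal sketch of the decision it describes. This is not PostgreSQL code; every name in it (the struct, its fields, the thresholds, the doubling steps) is hypothetical:

```c
/* A minimal sketch of the heuristic described above; not PostgreSQL code.
 * All names, fields, and thresholds are hypothetical. */
#include <stddef.h>

#define CHAIN_LENGTH_LIMIT 10   /* hypothetical "load too high" threshold */
#define GROW_HEADROOM 0.75      /* stay "well below" work_mem when growing */

typedef struct HashState
{
    size_t nbuckets;            /* current number of buckets */
    size_t nbatch;              /* current number of batches */
    size_t max_chain_length;    /* longest bucket chain seen so far */
    size_t space_used;          /* tuples + bucket headers, in bytes */
    size_t work_mem_bytes;      /* memory budget */
} HashState;

/* Decide whether to grow nbuckets or nbatch for the current batch. */
static void
adjust_hash_table(HashState *ht)
{
    if (ht->space_used > ht->work_mem_bytes)
    {
        /* Out of work_mem: only more batches actually reduces memory use. */
        ht->nbatch *= 2;        /* caller would then spill tuples to later batches */
    }
    else if (ht->max_chain_length > CHAIN_LENGTH_LIMIT)
    {
        /* Doubling nbuckets adds nbuckets * sizeof(pointer) of headers,
         * so only grow while that still leaves comfortable headroom. */
        size_t extra = ht->nbuckets * sizeof(void *);

        if (ht->space_used + extra < (size_t) (ht->work_mem_bytes * GROW_HEADROOM))
            ht->nbuckets *= 2;  /* caller would then rebucket tuples in place */
        else
            ht->nbatch *= 2;
    }
}
```

The open question above about resetting nbuckets would correspond to the caller reinitializing nbuckets whenever it advances to the next batch.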
Because of the risk that increasing nbuckets would itself lead to a
work_mem violation, I don't think it's sane to ignore the interaction
entirely, even in a first patch.
regards, tom lane