Re: bad estimation together with large work_mem generates terrible slow hash joins
From: Tomas Vondra
Subject: Re: bad estimation together with large work_mem generates terrible slow hash joins
Date:
Msg-id: 69c67586b7c33a741710a0b0b8f53a11.squirrel@sq.gransy.com
In reply to: Re: bad estimation together with large work_mem generates terrible slow hash joins (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: bad estimation together with large work_mem generates terrible slow hash joins
List: pgsql-hackers
On 11 September 2014, 16:11, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Thu, Sep 11, 2014 at 9:59 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Robert Haas <robertmhaas@gmail.com> writes:
>>>> (3) It allows the number of batches to increase on the fly while the
>>>> hash join is in process.
>
>>> Pardon me for not having read the patch yet, but what part of (3)
>>> wasn't there already?
>
>> EINSUFFICIENTCAFFEINE.
>
>> It allows the number of BUCKETS to increase, not the number of
>> batches. As you say, the number of batches could already increase.
>
> Ah. Well, that would mean that we need a heuristic for deciding when to
> increase the number of buckets versus the number of batches ... seems
> like a difficult decision.

That's true, but that's not the aim of this patch. The patch simply increases
the number of buckets if the load happens to get too high; it does not try to
decide between increasing nbuckets and nbatch. It's true that we can often get
better performance with more batches, but the cases I was able to inspect were
caused either by (a) underestimates resulting in an inappropriate nbucket
count, or (b) caching effects. This patch solves (a), without even trying to
fix (b).

regards
Tomas
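P.S. To make the "grow the buckets on the fly" idea concrete, here is a
minimal standalone C sketch (illustration only, not the actual patch code;
the struct, function, and constant names are invented for this sketch):
whenever the average number of tuples per bucket exceeds a target load,
double nbuckets and re-link the already-inserted tuples.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define NTUP_PER_BUCKET 1       /* target load factor (assumed value) */

    typedef struct HashTuple
    {
        struct HashTuple *next;     /* next tuple in the same bucket */
        uint32_t          hashvalue;
        /* ... tuple payload would go here ... */
    } HashTuple;

    typedef struct HashTable
    {
        HashTuple **buckets;
        int         nbuckets;       /* assumed to be a power of two */
        long        ntuples;        /* tuples inserted so far */
    } HashTable;

    /* Double nbuckets and redistribute the already-inserted tuples. */
    static void
    increase_num_buckets(HashTable *ht)
    {
        int         new_nbuckets = ht->nbuckets * 2;
        HashTuple **new_buckets = calloc(new_nbuckets, sizeof(HashTuple *));
        int         i;

        if (new_buckets == NULL)
        {
            fprintf(stderr, "out of memory\n");
            exit(1);
        }

        for (i = 0; i < ht->nbuckets; i++)
        {
            HashTuple *tuple = ht->buckets[i];

            while (tuple != NULL)
            {
                HashTuple *next = tuple->next;
                int        b = tuple->hashvalue & (new_nbuckets - 1);

                tuple->next = new_buckets[b];
                new_buckets[b] = tuple;
                tuple = next;
            }
        }

        free(ht->buckets);
        ht->buckets = new_buckets;
        ht->nbuckets = new_nbuckets;
    }

    /* Insert a tuple; grow the bucket array if the load gets too high. */
    static void
    hash_insert(HashTable *ht, HashTuple *tuple)
    {
        int b = tuple->hashvalue & (ht->nbuckets - 1);

        tuple->next = ht->buckets[b];
        ht->buckets[b] = tuple;
        ht->ntuples++;

        if (ht->ntuples > (long) ht->nbuckets * NTUP_PER_BUCKET)
            increase_num_buckets(ht);
    }

The point of the sketch is that the decision is purely local: it only reacts
to the observed load factor, and never weighs nbuckets growth against nbatch
growth.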