Re: [HACKERS] Partition-wise aggregation/grouping

From: Robert Haas
Subject: Re: [HACKERS] Partition-wise aggregation/grouping
Date:
Msg-id: CA+Tgmoa33cn8KBpYzWMNOR_2g_hgkdCe0LFF5KO09qxDpRxBSw@mail.gmail.com
In reply to: Re: [HACKERS] Partition-wise aggregation/grouping  (Jeevan Chalke <jeevan.chalke@enterprisedb.com>)
List: pgsql-hackers
On Thu, Feb 8, 2018 at 8:05 AM, Jeevan Chalke
<jeevan.chalke@enterprisedb.com> wrote:
> 0003 - 0006 are refactoring patches as before.

I have committed 0006 with some modifications.  In particular, [1] I
revised the comments and formatting; [2] I made cost_merge_append()
add cpu_tuple_cost * APPEND_CPU_COST_MULTIPLIER in lieu of, rather
than in addition to, cpu_operator_cost; and [3] I modified the
regression test so that the overall plan shape didn't change.

[2] was proposed upthread, but not adopted.  I had the same thought
while reading the patch (having forgotten the previous discussion) and
that seemed like a good enough reason to do it according to the
previous proposal.  If there is a good reason to think MergeAppend
needs that extra cost increment to be fairly costed, I don't see it on
this thread.

[3] was also remarked upon upthread -- Ashutosh mentioned that the
change in plan shape was "sad" but there was no further discussion of
the matter.  I also found it sad; hence the change.  This is, by the
way, an interesting illustration of how partition-wise join could
conceivably lose.  Up until now I've thought that it seemed to be a
slam dunk to always win or at least break even, but if you've got a
relatively unselective join, such that the output is much larger than
either input, then doing the join partition-wise means putting all of
the output rows through an Append node, whereas doing it the normal
way means putting only the input rows through Append nodes.  If the
smaller number of rows being joined at one time doesn't help -- e.g.
all of the inner rows across all partitions fit in a tiny little hash
table -- then we're just feeding more rows through the Append for no
gain.  Not a common case, perhaps, but not impossible.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

