Re: Default setting for enable_hashagg_disk
From | Melanie Plageman
Subject | Re: Default setting for enable_hashagg_disk
Date |
Msg-id | CAAKRu_YsUV2jwR4WKv=UHoZDh3DJm-5BGfkh=_CZnChFBSYMwA@mail.gmail.com
In reply to | Re: Default setting for enable_hashagg_disk (Jeff Davis <pgsql@j-davis.com>)
List | pgsql-hackers
On Wed, Jun 10, 2020 at 10:39 AM Jeff Davis <pgsql@j-davis.com> wrote:
> On Tue, 2020-06-09 at 18:20 -0700, Melanie Plageman wrote:
> > So, I was catching up on email and noticed the last email in this
> > thread.
> >
> > I think I am not fully understanding what
> > enable_groupingsets_hash_disk
> > does. Is it only for testing?
>
> It's mostly for testing. I could imagine cases where it would be useful
> to force groupingsets to use the disk, but I mainly wanted the setting
> there for testing the grouping sets hash disk code path.
>
> > Using the tests you added to src/test/regress/sql/groupingsets.sql, I
> > did get a plan that looks like hashagg is spilling to disk (goes
> > through
>
> I had something that worked as a test for a while, but then when I
> tweaked the costing, it started using the Sort path (therefore not
> testing my grouping sets hash disk code at all) and a bug crept in. So
> I thought it would be best to have a more forceful knob.
>
> Perhaps I should just get rid of that GUC and use the stats trick?
You could also throw in that other trick that is used in groupingsets.sql
and make some of the grouping columns unhashable and some unsortable, so
you know the planner will not pick only the Sort path and do just a
GroupAgg.
This slight modification of my previous example will probably yield
consistent results:
SET enable_hashagg_disk = true;
SET enable_groupingsets_hash_disk = false;
SET work_mem = '64kB';
SET enable_hashagg = true;

DROP TABLE IF EXISTS gs_hash_1;
CREATE TABLE gs_hash_1 AS
    SELECT g % 1000 AS g1000, g % 100 AS g100, g % 10 AS g10, g,
           g::text::xid AS g_unsortable, g::bit(4) AS g_unhashable
    FROM generate_series(0, 199999) g;
ANALYZE gs_hash_1;

ALTER TABLE gs_hash_1 SET (autovacuum_enabled = 'false');
UPDATE pg_class SET reltuples = 10 WHERE relname = 'gs_hash_1';

EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF)
SELECT g1000, g100, g10
FROM gs_hash_1
GROUP BY GROUPING SETS ((g1000, g100), (g10, g_unhashable), (g100, g_unsortable));
                           QUERY PLAN
----------------------------------------------------------------
 MixedAggregate (actual rows=201080 loops=1)
   Hash Key: g100, g_unsortable
   Group Key: g1000, g100
   Sort Key: g10, g_unhashable
     Group Key: g10, g_unhashable
   Peak Memory Usage: 109 kB
   Disk Usage: 13504 kB
   HashAgg Batches: 10111
   ->  Sort (actual rows=200000 loops=1)
         Sort Key: g1000, g100
         Sort Method: external merge  Disk: 9856kB
         ->  Seq Scan on gs_hash_1 (actual rows=200000 loops=1)
While we are on the topic of the tests, I was wondering if you had
considered making a user-defined type with a lot of padding so that the
tests could use fewer rows. I did this for adaptive hashjoin, and it
helped me with iteration time.

I don't know if that would still be the kind of test you are looking for,
since a user probably wouldn't have a couple hundred really fat untoasted
tuples, but I just thought I would check whether that would be useful.
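For what it's worth, one cheap way to approximate the padded-tuple idea
without defining a custom type (just a sketch; the table and column names
here are illustrative, not from any patch) is to add a filler column so
that far fewer rows are needed to exceed work_mem:

```sql
-- Hypothetical sketch: fatten each tuple so the hash table spills
-- after only a few thousand rows instead of 200k.
-- wide_pad is pure filler (~1kB per row, short enough to stay untoasted
-- inline is not guaranteed, so keep it modest).
CREATE TABLE gs_hash_padded AS
    SELECT g % 1000 AS g1000, g % 100 AS g100, g % 10 AS g10, g,
           repeat('x', 1024) AS wide_pad
    FROM generate_series(0, 4999) g;
ANALYZE gs_hash_padded;
```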
--
Melanie Plageman