Thread: [PATCH] Add extra statistics to explain for Nested Loop

[PATCH] Add extra statistics to explain for Nested Loop

From: e.sokolova@postgrespro.ru
Date:
Hi, hackers.
For some distributions of data in tables, different loops in a nested loop
join can take different amounts of time and process different numbers of
entries. This makes the average statistics returned by explain analyze not
very useful for a DBA.
To fix this, here is a patch that adds printing of min and max statistics
for time and rows across all loops of a Nested Loop to EXPLAIN ANALYZE.
Please don't hesitate to share any thoughts on this topic!
-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Pavel Stehule
Date:


On Fri, 16 Oct 2020 at 9:43, <e.sokolova@postgrespro.ru> wrote:
Hi, hackers.
For some distributions of data in tables, different loops in a nested loop
join can take different amounts of time and process different numbers of
entries. This makes the average statistics returned by explain analyze not
very useful for a DBA.
To fix this, here is a patch that adds printing of min and max statistics
for time and rows across all loops of a Nested Loop to EXPLAIN ANALYZE.
Please don't hesitate to share any thoughts on this topic!

+1

This is a great feature - the current limited format can sometimes be pretty messy.

Pavel

--
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Julien Rouhaud
Date:
On Fri, 16 Oct 2020 at 16:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:


On Fri, 16 Oct 2020 at 9:43, <e.sokolova@postgrespro.ru> wrote:
Hi, hackers.
For some distributions of data in tables, different loops in a nested loop
join can take different amounts of time and process different numbers of
entries. This makes the average statistics returned by explain analyze not
very useful for a DBA.
To fix this, here is a patch that adds printing of min and max statistics
for time and rows across all loops of a Nested Loop to EXPLAIN ANALYZE.
Please don't hesitate to share any thoughts on this topic!

+1

This is a great feature - the current limited format can sometimes be pretty messy.

+1, this can be very handy! 

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Anastasia Lubennikova
Date:
On 16.10.2020 12:07, Julien Rouhaud wrote:
On Fri, 16 Oct 2020 at 16:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:


On Fri, 16 Oct 2020 at 9:43, <e.sokolova@postgrespro.ru> wrote:
Hi, hackers.
For some distributions of data in tables, different loops in a nested loop
join can take different amounts of time and process different numbers of
entries. This makes the average statistics returned by explain analyze not
very useful for a DBA.
To fix this, here is a patch that adds printing of min and max statistics
for time and rows across all loops of a Nested Loop to EXPLAIN ANALYZE.
Please don't hesitate to share any thoughts on this topic!

+1

This is a great feature - the current limited format can sometimes be pretty messy.

+1, this can be very handy! 

Cool.
I have added your patch to the commitfest, so it won't get lost.
https://commitfest.postgresql.org/30/2765/

I will review the code next week.  Unfortunately, I cannot give any feedback about usability of this feature.

User visible change is:

-               ->  Nested Loop (actual rows=N loops=N)
+              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)

Pavel, Julien, could you please say if it looks good?

-- 
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Pavel Stehule
Date:


On Sat, 17 Oct 2020 at 0:11, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:
On 16.10.2020 12:07, Julien Rouhaud wrote:
On Fri, 16 Oct 2020 at 16:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:


On Fri, 16 Oct 2020 at 9:43, <e.sokolova@postgrespro.ru> wrote:
Hi, hackers.
For some distributions of data in tables, different loops in a nested loop
join can take different amounts of time and process different numbers of
entries. This makes the average statistics returned by explain analyze not
very useful for a DBA.
To fix this, here is a patch that adds printing of min and max statistics
for time and rows across all loops of a Nested Loop to EXPLAIN ANALYZE.
Please don't hesitate to share any thoughts on this topic!

+1

This is a great feature - the current limited format can sometimes be pretty messy.

+1, this can be very handy! 

Cool.
I have added your patch to the commitfest, so it won't get lost.
https://commitfest.postgresql.org/30/2765/

I will review the code next week.  Unfortunately, I cannot give any feedback about usability of this feature.

User visible change is:

-               ->  Nested Loop (actual rows=N loops=N)
+              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)


This interface is ok - there is not too much space for creativity. I can imagine displaying variance or average - but I am afraid of very bad performance impacts.

Regards

Pavel


Pavel, Julien, could you please say if it looks good?

-- 
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Julien Rouhaud
Date:
On Sat, Oct 17, 2020 at 12:15 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:
>
> On Sat, 17 Oct 2020 at 0:11, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:
>>
>> On 16.10.2020 12:07, Julien Rouhaud wrote:
>>
>> On Fri, 16 Oct 2020 at 16:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>>>
>>>
>>>
>>> On Fri, 16 Oct 2020 at 9:43, <e.sokolova@postgrespro.ru> wrote:
>>>>
>>>> Hi, hackers.
>>>> For some distributions of data in tables, different loops in nested loop
>>>> joins can take different time and process different amounts of entries.
>>>> It makes average statistics returned by explain analyze not very useful
>>>> for DBA.
>>>> To fix it, here is the patch that add printing of min and max statistics
>>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.
>>>> Please don't hesitate to share any thoughts on this topic!
>>>
>>>
>>> +1
>>>
>>> This is great feature - sometimes it can be pretty messy current limited format
>>
>>
>> +1, this can be very handy!
>>
>> Cool.
>> I have added your patch to the commitfest, so it won't get lost.

Thanks!  I'll also try to review it next week.

>> https://commitfest.postgresql.org/30/2765/
>>
>> I will review the code next week.  Unfortunately, I cannot give any feedback about usability of this feature.
>>
>> User visible change is:
>>
>> -               ->  Nested Loop (actual rows=N loops=N)
>> +              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)
>
>
> This interface is ok - there is not too much space for creativity.

Yes, I also think it's ok. We should also consider usability for tools
like explain.depesz.com; I don't know if the current output is best.
I'm adding Depesz and Pierre, who are both working on this kind of
tool, for additional input.

> I can imagine displaying variance or average - but I am afraid about very bad performance impacts.

The original counter (rows here) is already an average, right?
Variance could be nice too.  Instrumentation already spams
gettimeofday() calls for nested loops, so I don't think that computing
the variance would add that much overhead?
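
Just to illustrate that point - purely a sketch, untested, and the struct and
function names below are invented rather than actual Instrumentation fields -
Welford's online algorithm only needs a couple of extra doubles per node and a
few arithmetic operations per completed loop:

/* Hypothetical per-node state: loop count, running mean, sum of squares */
typedef struct LoopVarianceState
{
    double      nloops;         /* completed loops so far */
    double      mean;           /* running mean of per-loop tuple counts */
    double      m2;             /* sum of squared deviations from the mean */
} LoopVarianceState;

/* Called once per completed loop with that loop's tuple count */
static void
loop_variance_update(LoopVarianceState *v, double tuples)
{
    double      delta = tuples - v->mean;

    v->nloops += 1;
    v->mean += delta / v->nloops;
    v->m2 += delta * (tuples - v->mean);    /* Welford's online update */
}

/* Sample variance across loops (needs at least two loops) */
static double
loop_variance(const LoopVarianceState *v)
{
    return (v->nloops > 1) ? v->m2 / (v->nloops - 1) : 0.0;
}

InstrEndLoop() would feed it each completed loop's tuple count, so the extra
cost per loop stays tiny compared to the timing calls themselves.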



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Pavel Stehule
Date:


On Sat, 17 Oct 2020 at 6:26, Julien Rouhaud <rjuju123@gmail.com> wrote:
On Sat, Oct 17, 2020 at 12:15 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:
>
> On Sat, 17 Oct 2020 at 0:11, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:
>>
>> On 16.10.2020 12:07, Julien Rouhaud wrote:
>>
>> On Fri, 16 Oct 2020 at 16:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>>>
>>>
>>>
>>> On Fri, 16 Oct 2020 at 9:43, <e.sokolova@postgrespro.ru> wrote:
>>>>
>>>> Hi, hackers.
>>>> For some distributions of data in tables, different loops in nested loop
>>>> joins can take different time and process different amounts of entries.
>>>> It makes average statistics returned by explain analyze not very useful
>>>> for DBA.
>>>> To fix it, here is the patch that add printing of min and max statistics
>>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.
>>>> Please don't hesitate to share any thoughts on this topic!
>>>
>>>
>>> +1
>>>
>>> This is great feature - sometimes it can be pretty messy current limited format
>>
>>
>> +1, this can be very handy!
>>
>> Cool.
>> I have added your patch to the commitfest, so it won't get lost.

Thanks!  I'll also try to review it next week.

>> https://commitfest.postgresql.org/30/2765/
>>
>> I will review the code next week.  Unfortunately, I cannot give any feedback about usability of this feature.
>>
>> User visible change is:
>>
>> -               ->  Nested Loop (actual rows=N loops=N)
>> +              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)
>
>
> This interface is ok - there is not too much space for creativity.

Yes, I also think it's ok. We should also consider usability for tools
like explain.depesz.com; I don't know if the current output is best.
I'm adding Depesz and Pierre, who are both working on this kind of
tool, for additional input.

> I can imagine displaying variance or average - but I am afraid about very bad performance impacts.

The original counter (rows here) is already an average, right?
Variance could be nice too.  Instrumentation already spams
gettimeofday() calls for nested loops, so I don't think that computing
the variance would add that much overhead?

It wouldn't be any problem to write a benchmark for the worst case and test it.


Re: [PATCH] Add extra statistics to explain for Nested Loop

From: hubert depesz lubaczewski
Date:
On Sat, Oct 17, 2020 at 12:26:08PM +0800, Julien Rouhaud wrote:
> >> -               ->  Nested Loop (actual rows=N loops=N)
> >> +              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)
> > This interface is ok - there is not too much space for creativity.
> Yes I also think it's ok. We should also consider usability for tools
> like explain.depesz.com, I don't know if the current output is best.
> I'm adding Depesz and Pierre which are both working on this kind of
> tool for additional input.

Thanks for the heads up. This will definitely need some changes on my side,
but they should be easy to handle.

Best regards,

depesz




Re: [PATCH] Add extra statistics to explain for Nested Loop

From: "David G. Johnston"
Date:
On Fri, Oct 16, 2020 at 3:11 PM Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:
User visible change is:


-               ->  Nested Loop (actual rows=N loops=N)
+              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)

I'd be inclined to append both new fields to the end.

(actual rows=N loops=N min_rows=N max_rows=N)

rows * loops is still an important calculation.

Why not just add total_rows while we are at it - last in the listing?

(actual rows=N loops=N min_rows=N max_rows=N total_rows=N) 
 
David J.

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Julien Rouhaud
Date:
On Sat, Oct 17, 2020 at 6:11 AM Anastasia Lubennikova
<a.lubennikova@postgrespro.ru> wrote:
>
> On 16.10.2020 12:07, Julien Rouhaud wrote:
>
> On Fri, 16 Oct 2020 at 16:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>>
>>
>>
>> On Fri, 16 Oct 2020 at 9:43, <e.sokolova@postgrespro.ru> wrote:
>>>
>>> Hi, hackers.
>>> For some distributions of data in tables, different loops in nested loop
>>> joins can take different time and process different amounts of entries.
>>> It makes average statistics returned by explain analyze not very useful
>>> for DBA.
>>> To fix it, here is the patch that add printing of min and max statistics
>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.
>>> Please don't hesitate to share any thoughts on this topic!
>>
>>
>> +1
>>
>> This is great feature - sometimes it can be pretty messy current limited format
>
>
> +1, this can be very handy!
>
> Cool.
> I have added your patch to the commitfest, so it won't get lost.
> https://commitfest.postgresql.org/30/2765/
>
> I will review the code next week.  Unfortunately, I cannot give any feedback about usability of this feature.
>
> User visible change is:
>
> -               ->  Nested Loop (actual rows=N loops=N)
> +              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)

Thanks for working on this feature!  Here are some comments on the patch.

First, cosmetic issues.  There are a lot of whitespace problems: the new
code is space-indented while it should be tab-indented.  Also, there
are 3 patches included with some fixups; could you instead push a
single patch?

It also misses some modifications in the regression tests.  For instance:

diff --git a/src/test/regress/expected/partition_prune.out
b/src/test/regress/expected/partition_prune.out
index 50d2a7e4b9..db0b167ef4 100644
--- a/src/test/regress/expected/partition_prune.out
+++ b/src/test/regress/expected/partition_prune.out
@@ -2065,7 +2065,7 @@ select explain_parallel_append('select avg(ab.a)
from ab inner join lprt_a a on
          Workers Planned: 1
          Workers Launched: N
          ->  Partial Aggregate (actual rows=N loops=N)
-               ->  Nested Loop (actual rows=N loops=N)
+               ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)
                      ->  Parallel Seq Scan on lprt_a a (actual rows=N loops=N)

You should update the explain_parallel_append() plpgsql function
created in that test file to make sure that both "rows" and the two
new counters are changed to "N".  There might be other similar changes
needed.


Now, for the changes themselves.  For the min/max time, you're
aggregating "totaltime - instr->firsttuple".  Why remove the startup
time from the loop execution time?  I think it should be kept.
Also, in explain.c you display the counters in the "Nested Loop" node,
but this should be done in the inner plan node instead, as that is the
one being looped over.  So the condition should probably be "nloops > 1"
rather than testing whether it's a NestLoop.
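
To be concrete, here is a rough, untested sketch of the per-loop bookkeeping I
would expect in InstrEndLoop(), right after the existing accumulation of
total/ntuples/nloops, keeping the startup time inside each loop's total and
using the min_t/max_t/min_tuples/max_tuples names that appear in the patch:

    /*
     * Sketch only: extra bookkeeping after the existing
     *   instr->total += totaltime;
     *   instr->ntuples += instr->tuplecount;
     *   instr->nloops += 1;
     * Each loop's whole time is kept, startup included.
     */
    if (instr->nloops == 1 || totaltime < instr->min_t)
        instr->min_t = totaltime;
    if (totaltime > instr->max_t)   /* max_t starts at 0, no init needed */
        instr->max_t = totaltime;

    if (instr->nloops == 1 || instr->tuplecount < instr->min_tuples)
        instr->min_tuples = instr->tuplecount;
    if (instr->tuplecount > instr->max_tuples)
        instr->max_tuples = instr->tuplecount;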

I'm switching the patch to WoA.



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Pierre Giraud
Date:

On 17/10/2020 at 06:26, Julien Rouhaud wrote:
> On Sat, Oct 17, 2020 at 12:15 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:
>>
>> On Sat, 17 Oct 2020 at 0:11, Anastasia Lubennikova <a.lubennikova@postgrespro.ru> wrote:
>>>
>>> On 16.10.2020 12:07, Julien Rouhaud wrote:
>>>
>>> On Fri, 16 Oct 2020 at 16:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:
>>>>
>>>>
>>>>
>>>> On Fri, 16 Oct 2020 at 9:43, <e.sokolova@postgrespro.ru> wrote:
>>>>>
>>>>> Hi, hackers.
>>>>> For some distributions of data in tables, different loops in nested loop
>>>>> joins can take different time and process different amounts of entries.
>>>>> It makes average statistics returned by explain analyze not very useful
>>>>> for DBA.
>>>>> To fix it, here is the patch that add printing of min and max statistics
>>>>> for time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.
>>>>> Please don't hesitate to share any thoughts on this topic!
>>>>
>>>>
>>>> +1
>>>>
>>>> This is great feature - sometimes it can be pretty messy current limited format
>>>
>>>
>>> +1, this can be very handy!
>>>
>>> Cool.
>>> I have added your patch to the commitfest, so it won't get lost.
> 
> Thanks!  I'll also try to review it next week.
> 
>>> https://commitfest.postgresql.org/30/2765/
>>>
>>> I will review the code next week.  Unfortunately, I cannot give any feedback about usability of this feature.
>>>
>>> User visible change is:
>>>
>>> -               ->  Nested Loop (actual rows=N loops=N)
>>> +              ->  Nested Loop (actual min_rows=0 rows=0 max_rows=0 loops=2)
>>
>>
>> This interface is ok - there is not too much space for creativity.
> 
> Yes I also think it's ok. We should also consider usability for tools
> like explain.depesz.com, I don't know if the current output is best.
> I'm adding Depesz and Pierre which are both working on this kind of
> tool for additional input.

Same for me and PEV2. It should be fairly easy to parse this new format.

> 
>> I can imagine displaying variance or average - but I am afraid about very bad performance impacts.
> 
> The original counter (rows here) is already an average right?
> Variance could be nice too.  Instrumentation will already spam
> gettimeofday() calls for nested loops, I don't think that computing
> variance would add that much overhead?

So it's an average value - and, it should be mentioned, a rounded one, which I
found a bit tricky to figure out.



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Andres Freund
Date:
Hi,

On 2020-10-16 10:42:43 +0300, e.sokolova@postgrespro.ru wrote:
> For some distributions of data in tables, different loops in nested loop
> joins can take different time and process different amounts of entries. It
> makes average statistics returned by explain analyze not very useful for
> DBA.
> To fix it, here is the patch that add printing of min and max statistics for
> time and rows across all loops in Nested Loop to EXPLAIN ANALYSE.
> Please don't hesitate to share any thoughts on this topic!

Interesting idea!

I'm a bit worried that further increasing the size of struct
Instrumentation will increase the overhead of EXPLAIN ANALYZE further -
in some workloads we spend a fair bit of time in code handling that. It
would be good to try to find a few bad cases, and see what the overhead is.

Unfortunately your patch is pretty hard to look at - you seem to have
included your incremental hacking efforts?

> From 7871ac1afe7837a6dc0676a6c9819cc68a5c0f07 Mon Sep 17 00:00:00 2001
> From: "e.sokolova" <e.sokolova@postgrespro.ru>
> Date: Fri, 4 Sep 2020 18:00:47 +0300
> Subject: Add min and max statistics without case of
>  parallel workers. Tags: commitfest_hotfix.

> From ebdfe117e4074d268e3e7c480b98d375d1d6f62b Mon Sep 17 00:00:00 2001
> From: "e.sokolova" <e.sokolova@postgrespro.ru>
> Date: Fri, 11 Sep 2020 23:04:34 +0300
> Subject: Add case of parallel workers. Tags:
>  commitfest_hotfix.

> From ecbf04d519e17b8968103364e89169ab965b41d7 Mon Sep 17 00:00:00 2001
> From: "e.sokolova" <e.sokolova@postgrespro.ru>
> Date: Fri, 18 Sep 2020 13:35:19 +0300
> Subject: Fix bugs. Tags: commitfest_hotfix.

> From 7566a98bbc33a24052e1334b0afe2cf341c0818f Mon Sep 17 00:00:00 2001
> From: "e.sokolova" <e.sokolova@postgrespro.ru>
> Date: Fri, 25 Sep 2020 20:09:22 +0300
> Subject: Fix tests. Tags: commitfest_hotfix.

Greetings,

Andres Freund



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: e.sokolova@postgrespro.ru
Date:
<rjuju123@gmail.com> wrote:
> You should update the explain_parallel_append() plpgsql function
> created in that test file to make sure that both "rows" and the two
> new counters are changed to "N".  There might be other similar changes
> needed.

Thank you for looking into this issue. I made the necessary changes to the 
tests following your advice.

> Now, for the changes themselves.  For the min/max time, you're
> aggregating "totaltime - instr->firsttuple".  Why removing the startup
> time from the loop execution time?  I think this should be kept.

I think that's a fair remark. I fixed it.

> Also, in explain.c you display the counters in the "Nested loop" node,
> but this should be done in the inner plan node instead, as this is the
> one being looped on.  So the condition should probably be "nloops > 1"
> rather than testing if it's a NestLoop.

Condition "nloops > 1" is not the same as checking if it's NestLoop. 
This condition will lead to printing extra statistics for nodes with 
different types of join, not only Nested Loop Join. If this statistic is 
useful for other plan nodes, it makes sense to change the condition.
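
For clarity, the two variants under discussion would look roughly like this in
explain.c (show_extra_loop_stats is only a placeholder name for whatever
prints the extra statistics, not an existing function):

    /* current patch: only Nested Loop nodes get the extra statistics */
    if (IsA(planstate->plan, NestLoop))
        show_extra_loop_stats(planstate, es);   /* placeholder */

    /* suggested alternative: any node that was executed more than once */
    if (nloops > 1)
        show_extra_loop_stats(planstate, es);   /* placeholder */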

<andres@anarazel.de> wrote:
> I'm a bit worried that further increasing the size of struct
> Instrumentation will increase the overhead of EXPLAIN ANALYZE further -
> in some workloads we spend a fair bit of time in code handling that. It
> would be good to try to find a few bad cases, and see what the overhead 
> is.

Thank you for this comment; I will try to figure it out. Do you have 
any examples of large overhead caused by this struct? I think I need 
a sample to know which way to think.

Thank you all for the feedback. I hope the new version of my patch will 
be more correct and useful.
Please don't hesitate to share any thoughts on this topic!
-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Tomas Vondra
Date:
Hello Ekaterina,

This seems like an interesting and useful improvement. I did a quick review
of the patch - attached is a 0002 patch with a couple of minor changes (the
0001 is just your v1 patch, to keep cfbot happy).

1) There are a couple of changes to follow project code style (e.g. brackets
after "if" on a separate line, no brackets around single-line blocks,
etc.). I've reverted some unnecessary whitespace changes. Minor stuff.

2) I don't think InstrEndLoop needs to check if min_t == 0 before
initializing it in the first loop. It certainly has to be 0, no? The same
goes for min_tuples. I also suggest adding a comment explaining that we
don't have to initialize the max values.

3) In ExplainNode, in the part processing per-worker stats, I think some
of the fields are incorrectly referencing planstate->instrument instead
of using the 'instrument' variable from WorkerInstrumentation.


In general, I agree with Andres that this might add overhead to explain
analyze, although I doubt it's going to be measurable. But maybe try
doing some measurements for common and worst cases.

I wonder if we should have another EXPLAIN option enabling this.
I.e. by default we'd not collect/print this, and users would have to
pass some option to EXPLAIN. Or maybe we could tie this to VERBOSE?

Also, why print this only for nested loop, and not for all nodes with
(nloops > 1)? I see there was some discussion why checking nodeTag is
necessary to identify NL, but that's not my point.


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Georgios Kokolatos
Date:
Hi,

I noticed that this patch fails on the cfbot.
Because of this, I changed the status to 'Waiting on Author'.

Cheers,
//Georgios

The new status of this patch is: Waiting on Author

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: e.sokolova@postgrespro.ru
Date:
Tomas Vondra wrote on 2020-10-31 04:20:

> seems like an interesting and useful improvement. I did a quick review
> of the patch - attached is a 0002 patch with a couple minor changes 
> (the
> 0001 is just your v1 patch, to keep cfbot happy).

  Thank you for your review and changes!

> 3) In ExplainNode, in the part processing per-worker stats, I think 
> some
> of the fields are incorrectly referencing planstate->instrument instead
> of using the 'instrument' variable from WorkerInstrumentation.

It's correct behavior, because struct WorkerInstrumentation contains a 
struct Instrumentation member named instrument, and planstate->instrument 
is a struct Instrumentation too.

> I wonder if we should have another option EXPLAIN option enabling this.
> I.e. by default we'd not collect/print this, and users would have to
> pass some option to EXPLAIN. Or maybe we could tie this to VERBOSE?

It's a good idea. Now the additional statistics are only printed when 
VERBOSE is set.

The new version of this patch prints extra statistics for all cases of 
multiple loops, not only for Nested Loop. I also fixed the example by 
adding VERBOSE.

Please don't hesitate to share any thoughts on this topic!
-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Michael Christofides
Date:
> New version of this patch prints extra statistics for all cases of
> multiple loops, not only for Nested Loop. Also I fixed the example by
> adding VERBOSE.
>
> Please don't hesitate to share any thoughts on this topic!

Thanks a lot for working on this! I really like the extra details, and
including them only with VERBOSE sounds good.

> rows * loops is still an important calculation.
>
> Why not just add total_rows while we are at it - last in the listing?
>
> (actual rows=N loops=N min_rows=N max_rows=N total_rows=N)

This total_rows idea from David would really help us too, especially
in the cases where the actual rows is rounded down to zero. We make an
explain visualisation tool, and it'd be nice to show people a better
total than loops * actual rows. It would also help the accuracy of
some of our tips, that use this number.
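
To give a made-up example of why: a node reported as "(actual rows=0
loops=1000)" could have produced anything from zero up to a few hundred rows
in total, because the displayed rows value is the per-loop average rounded to
a whole number; a total_rows field would show the real figure directly
instead of forcing tools to estimate it as loops * rows = 0.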

Apologies if this input is too late to be helpful.

Cheers,
Michael



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Yugo NAGATA
Date:
Hello,

On Thu, 12 Nov 2020 23:10:05 +0300
e.sokolova@postgrespro.ru wrote:

> New version of this patch prints extra statistics for all cases of 
> multiple loops, not only for Nested Loop. Also I fixed the example by 
> adding VERBOSE.

I think these extra statistics are good because they are useful for a DBA
to understand the explained plan. I reviewed this patch. Here are a few
comments:

1) 
postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;
                                                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..2752.00 rows=991 width=8) (actual time=0.021..17.651 rows=991 loops=1)
   Output: a.i, b.j
   Join Filter: (a.i = b.j)
   Rows Removed by Join Filter: 99009
   ->  Seq Scan on public.b  (cost=0.00..2.00 rows=100 width=4) (actual time=0.009..0.023 rows=100 loops=1)
         Output: b.j
   ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 min_time=0.065 max_time=0.163 min_rows=1000 rows=1000 max_rows=1000 loops=100)
         Output: a.i
 Planning Time: 0.066 ms
 Execution Time: 17.719 ms
(10 rows)

I don't like this format where the extra statistics appear on the same
line as the existing information, because the output format then differs
depending on whether the plan node's loops > 1 or not. It also makes the
line too long. Other information reported by VERBOSE doesn't change
the existing row format either; it just adds extra rows for the new information.

Instead, it seems better to me to add extra rows for the new statistics
without changing the existing row format, as other VERBOSE information does,
like below.

   ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000  loops=100)
         Output: a.i
         Min Time: 0.065 ms
         Max Time: 0.163 ms
         Min Rows: 1000
         Max Rows: 1000

or, like Buffers,

   ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000  loops=100)
         Output: a.i
         Loops: min_time=0.065 max_time=0.163 min_rows=1000 max_rows=1000

and so on. What do you think about it?

2)
In a parallel scan, the extra statistics are not reported correctly.

postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;
                                                                                   QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..2463.52 rows=991 width=8) (actual time=0.823..25.651 rows=991 loops=1)
   Output: a.i, b.j
   Workers Planned: 2
   Workers Launched: 2
   ->  Nested Loop  (cost=0.00..1364.42 rows=413 width=8) (actual time=9.426..16.723 min_time=0.000 max_time=22.017 min_rows=0 rows=330 max_rows=991 loops=3)
         Output: a.i, b.j
         Join Filter: (a.i = b.j)
         Rows Removed by Join Filter: 33003
         Worker 0:  actual time=14.689..14.692 rows=0 loops=1
         Worker 1:  actual time=13.458..13.460 rows=0 loops=1
         ->  Parallel Seq Scan on public.a  (cost=0.00..9.17 rows=417 width=4) (actual time=0.049..0.152 min_time=0.000 max_time=0.202 min_rows=0 rows=333 max_rows=452 loops=3)
               Output: a.i
               Worker 0:  actual time=0.040..0.130 rows=322 loops=1
               Worker 1:  actual time=0.039..0.125 rows=226 loops=1
         ->  Seq Scan on public.b  (cost=0.00..2.00 rows=100 width=4) (actual time=0.006..0.026 min_time=0.012 max_time=0.066 min_rows=100 rows=100 max_rows=100 loops=1000)
               Output: b.j
               Worker 0:  actual time=0.006..0.024 min_time=0.000  max_time=0.000 min_rows=0 rows=100 max_rows=0 loops=322
               Worker 1:  actual time=0.008..0.030 min_time=0.000  max_time=0.000 min_rows=0 rows=100 max_rows=0 loops=226
 Planning Time: 0.186 ms
 Execution Time: 25.838 ms
(20 rows)

This reports the max/min rows or time of the inner scan as 0 in parallel workers,
and as a result only the leader process's values are accounted for. To fix this,
we would change InstrAggNode as below.

@@ -167,6 +196,10 @@ InstrAggNode(Instrumentation *dst, Instrumentation *add)
        dst->nloops += add->nloops;
        dst->nfiltered1 += add->nfiltered1;
        dst->nfiltered2 += add->nfiltered2;
+       dst->min_t = Min(dst->min_t, add->min_t);
+       dst->max_t = Max(dst->max_t, add->max_t);
+       dst->min_tuples = Min(dst->min_tuples, add->min_tuples);
+       dst->max_tuples = Max(dst->max_tuples, add->max_tuples);


3)
There are garbage lines and I could not apply this patch.

diff --git a/src/test/regress/expected/timetz.out b/src/test/regress/expected/timetz.out
index 038bb5fa094..5294179aa45 100644


Regards,
Yugo Nagata

-- 
Yugo NAGATA <nagata@sraoss.co.jp>



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Julien Rouhaud
Date:
On Thu, Jan 28, 2021 at 8:38 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:
>
> postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;
>                                                                                 QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  Nested Loop  (cost=0.00..2752.00 rows=991 width=8) (actual time=0.021..17.651 rows=991 loops=1)
>    Output: a.i, b.j
>    Join Filter: (a.i = b.j)
>    Rows Removed by Join Filter: 99009
>    ->  Seq Scan on public.b  (cost=0.00..2.00 rows=100 width=4) (actual time=0.009..0.023 rows=100 loops=1)
>          Output: b.j
>    ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 min_time=0.065 max_time=0.163 min_rows=1000 rows=1000 max_rows=1000 loops=100)
>          Output: a.i
>  Planning Time: 0.066 ms
>  Execution Time: 17.719 ms
> (10 rows)
>
> I don't like this format where the extra statistics appear in the same
> line of existing information because the output format differs depended
> on whether the plan node's loops > 1 or not. This makes the length of a
> line too long. Also, other information reported by VERBOSE doesn't change
> the exiting row format and just add extra rows for new information.
>
> Instead, it seems good for me to add extra rows for the new statistics
> without changint the existing row format as other VERBOSE information,
> like below.
>
>    ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000  loops=100)
>          Output: a.i
>          Min Time: 0.065 ms
>          Max Time: 0.163 ms
>          Min Rows: 1000
>          Max Rows: 1000
>
> or, like Buffers,
>
>    ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000  loops=100)
>          Output: a.i
>          Loops: min_time=0.065 max_time=0.163 min_rows=1000 max_rows=1000
>
> and so  on. What do you think about it?

It's true that the current output is a bit long, which isn't really
convenient to read.  Using one of those alternative formats would also
have the advantage of not breaking compatibility with tools that
process those entries.  I personally prefer the 2nd option with the
extra "Loops:" line.  For the non-text format, should we keep the current
format?

> 2)
> In parallel scan, the extra statistics are not reported correctly.
>
> postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;
>                                                                                    QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  Gather  (cost=1000.00..2463.52 rows=991 width=8) (actual time=0.823..25.651 rows=991 loops=1)
>    Output: a.i, b.j
>    Workers Planned: 2
>    Workers Launched: 2
>    ->  Nested Loop  (cost=0.00..1364.42 rows=413 width=8) (actual time=9.426..16.723 min_time=0.000 max_time=22.017 min_rows=0 rows=330 max_rows=991 loops=3)
>          Output: a.i, b.j
>          Join Filter: (a.i = b.j)
>          Rows Removed by Join Filter: 33003
>          Worker 0:  actual time=14.689..14.692 rows=0 loops=1
>          Worker 1:  actual time=13.458..13.460 rows=0 loops=1
>          ->  Parallel Seq Scan on public.a  (cost=0.00..9.17 rows=417 width=4) (actual time=0.049..0.152 min_time=0.000 max_time=0.202 min_rows=0 rows=333 max_rows=452 loops=3)
>                Output: a.i
>                Worker 0:  actual time=0.040..0.130 rows=322 loops=1
>                Worker 1:  actual time=0.039..0.125 rows=226 loops=1
>          ->  Seq Scan on public.b  (cost=0.00..2.00 rows=100 width=4) (actual time=0.006..0.026 min_time=0.012 max_time=0.066 min_rows=100 rows=100 max_rows=100 loops=1000)
>                Output: b.j
>                Worker 0:  actual time=0.006..0.024 min_time=0.000  max_time=0.000 min_rows=0 rows=100 max_rows=0 loops=322
>                Worker 1:  actual time=0.008..0.030 min_time=0.000  max_time=0.000 min_rows=0 rows=100 max_rows=0 loops=226
>  Planning Time: 0.186 ms
>  Execution Time: 25.838 ms
> (20 rows)
>
> This reports max/min rows or time of inner scan as 0 in parallel workers,
> and as a result only the leader process's ones are accounted. To fix this,
> we would change InstrAggNode as below.
>
> @@ -167,6 +196,10 @@ InstrAggNode(Instrumentation *dst, Instrumentation *add)
>         dst->nloops += add->nloops;
>         dst->nfiltered1 += add->nfiltered1;
>         dst->nfiltered2 += add->nfiltered2;
> +       dst->min_t = Min(dst->min_t, add->min_t);
> +       dst->max_t = Max(dst->max_t, add->max_t);
> +       dst->min_tuples = Min(dst->min_tuples, add->min_tuples);
> +       dst->max_tuples = Max(dst->max_tuples, add->max_tuples);

Agreed.

> 3)
> There are garbage lines and I could not apply this patch.
>
> diff --git a/src/test/regress/expected/timetz.out b/src/test/regress/expected/timetz.out
> index 038bb5fa094..5294179aa45 100644

I'm also switching the patch to "waiting on author" in the commitfest app.



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Yugo NAGATA
Date:
On Mon, 1 Feb 2021 13:28:45 +0800
Julien Rouhaud <rjuju123@gmail.com> wrote:

> On Thu, Jan 28, 2021 at 8:38 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:
> >
> > postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;
> >                                                                                 QUERY PLAN
> > ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> >  Nested Loop  (cost=0.00..2752.00 rows=991 width=8) (actual time=0.021..17.651 rows=991 loops=1)
> >    Output: a.i, b.j
> >    Join Filter: (a.i = b.j)
> >    Rows Removed by Join Filter: 99009
> >    ->  Seq Scan on public.b  (cost=0.00..2.00 rows=100 width=4) (actual time=0.009..0.023 rows=100 loops=1)
> >          Output: b.j
> >    ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 min_time=0.065 max_time=0.163 min_rows=1000 rows=1000 max_rows=1000 loops=100)
> >          Output: a.i
> >  Planning Time: 0.066 ms
> >  Execution Time: 17.719 ms
> > (10 rows)
> >
> > I don't like this format where the extra statistics appear in the same
> > line of existing information because the output format differs depended
> > on whether the plan node's loops > 1 or not. This makes the length of a
> > line too long. Also, other information reported by VERBOSE doesn't change
> > the exiting row format and just add extra rows for new information.
> >
> > Instead, it seems good for me to add extra rows for the new statistics
> > without changint the existing row format as other VERBOSE information,
> > like below.
> >
> >    ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000  loops=100)
> >          Output: a.i
> >          Min Time: 0.065 ms
> >          Max Time: 0.163 ms
> >          Min Rows: 1000
> >          Max Rows: 1000
> >
> > or, like Buffers,
> >
> >    ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.005..0.091 rows=1000  loops=100)
> >          Output: a.i
> >          Loops: min_time=0.065 max_time=0.163 min_rows=1000 max_rows=1000
> >
> > and so  on. What do you think about it?
> 
> It's true that the current output is a bit long, which isn't really
> convenient to read.  Using one of those alternative format would also
> have the advantage of not breaking compatibility with tools that
> process those entries.  I personally prefer the 2nd option with the
> extra "Loops:" line .  For non text format, should we keep the current
> format?

For the non-text format, I think "Max/Min Rows" and "Max/Min Times" are a bit
too simple and the meaning is unclear. Instead, similar to the style of "Buffers",
does it make sense to use "Max/Min Rows in Loops" and "Max/Min Times in Loops"?

Regards,
Yugo Nagata

-- 
Yugo NAGATA <nagata@sraoss.co.jp>



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: e.sokolova@postgrespro.ru
Date:
Thank you all for your feedback and revisions.
I attach a new version of the patch with some changes and fixes. 
Here's a list of the major changes:
1) New format of the extra statistics. They are now printed on a line 
separate from the main statistics.

Julien Rouhaud wrote on 2021-02-01 08:28:
> On Thu, Jan 28, 2021 at 8:38 PM Yugo NAGATA <nagata@sraoss.co.jp> 
> wrote:
>> 
>> postgres=# explain (analyze, verbose) select * from a,b where a.i=b.j;
>>          QUERY PLAN
>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>  Nested Loop  (cost=0.00..2752.00 rows=991 width=8) (actual 
>> time=0.021..17.651 rows=991 loops=1)
>>    Output: a.i, b.j
>>    Join Filter: (a.i = b.j)
>>    Rows Removed by Join Filter: 99009
>>    ->  Seq Scan on public.b  (cost=0.00..2.00 rows=100 width=4) 
>> (actual time=0.009..0.023 rows=100 loops=1)
>>          Output: b.j
>>    ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) 
>> (actual time=0.005..0.091 min_time=0.065 max_time=0.163 min_rows=1000 
>> rows=1000 max_rows=1000 loops=100)
>>          Output: a.i
>>  Planning Time: 0.066 ms
>>  Execution Time: 17.719 ms
>> (10 rows)
>> 
>> I don't like this format where the extra statistics appear in the same
>> line of existing information because the output format differs 
>> depended
>> on whether the plan node's loops > 1 or not. This makes the length of 
>> a
>> line too long. Also, other information reported by VERBOSE doesn't 
>> change
>> the exiting row format and just add extra rows for new information.
>> 
>> Instead, it seems good for me to add extra rows for the new statistics
>> without changint the existing row format as other VERBOSE information,
>> like below.
>> 
>>    ->  Seq Scan on public.a  (cost=0.00..15.00 rows=1000 width=4) 
>> (actual time=0.005..0.091 rows=1000  loops=100)
>>          Output: a.i
>>          Loops: min_time=0.065 max_time=0.163 min_rows=1000 
>> max_rows=1000
>> 
>> and so  on. What do you think about it?
> 

2) Correction of the case of parallel scan

>> In parallel scan, the extra statistics are not reported correctly.
>> 
>> This reports max/min rows or time of inner scan as 0 in parallel 
>> workers,
>> and as a result only the leader process's ones are accounted. To fix 
>> this,
>> we would change InstrAggNode as below.
>> 
> 

3) Added extra statistics about the total number of rows (total_rows). 
There were many requests for this here.

Please don't hesitate to share any thoughts on this topic.

-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Justin Pryzby
Date:
> diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
> index afc45429ba4..723eccca013 100644
> --- a/src/backend/commands/explain.c
> +++ b/src/backend/commands/explain.c
> @@ -1589,29 +1589,82 @@ ExplainNode(PlanState *planstate, List *ancestors,
>          double        startup_ms = 1000.0 * planstate->instrument->startup / nloops;
>          double        total_ms = 1000.0 * planstate->instrument->total / nloops;
>          double        rows = planstate->instrument->ntuples / nloops;
> +        double        total_rows = planstate->instrument->ntuples;
> +        double        min_r = planstate->instrument->min_tuples;
> +        double        max_r = planstate->instrument->max_tuples;
> +        double        min_t_ms = 1000.0 * planstate->instrument->min_t;
> +        double        max_t_ms = 1000.0 * planstate->instrument->max_t;
>  
>          if (es->format == EXPLAIN_FORMAT_TEXT)
>          {
> -            if (es->timing)
> -                appendStringInfo(es->str,
> -                                 " (actual time=%.3f..%.3f rows=%.0f loops=%.0f)",
> -                                 startup_ms, total_ms, rows, nloops);
> +            if (nloops > 1 && es->verbose)
> +            {
> +                if (es->timing)
> +                {
> +                    appendStringInfo(es->str,
> +                                     " (actual time=%.3f..%.3f rows=%.0f loops=%.0f)\n",
> +                                     startup_ms, total_ms, rows, nloops);
> +                    ExplainIndentText(es);
> +                                     "Loop: min_time=%.3f max_time=%.3f min_rows=%.0f max_rows=%.0f total_rows=%.0f",
total_rows=%.0f",
> +                                     min_t_ms, max_t_ms, min_r, max_r, total_rows);

Lines with "colon" format shouldn't use equal signs, and should use two spaces
between fields.  See:
https://www.postgresql.org/message-id/20200619022001.GY17995@telsasoft.com
https://www.postgresql.org/message-id/20200402054120.GC14618@telsasoft.com
https://www.postgresql.org/message-id/20200407042521.GH2228%40telsasoft.com


> +                }
> +                else
> +                {
> +                    appendStringInfo(es->str,
> +                                     " (actual rows=%.0f loops=%.0f)\n",
> +                                     rows, nloops);
> +                    ExplainIndentText(es);
> +                    appendStringInfo(es->str,
> +                                     "Loop: min_rows=%.0f max_rows=%.0f total_rows=%.0f",
> +                                     min_r, max_r, total_rows);
> +                }
> +            }
>              else
> -                appendStringInfo(es->str,
> -                                 " (actual rows=%.0f loops=%.0f)",
> -                                 rows, nloops);
> +            {
> +                if (es->timing)
> +                    appendStringInfo(es->str,
> +                                     " (actual time=%.3f..%.3f rows=%.0f loops=%.0f)",
> +                                     startup_ms, total_ms, rows, nloops);
> +                else
> +                    appendStringInfo(es->str,
> +                                     " (actual rows=%.0f loops=%.0f)",
> +                                     rows, nloops);
> +            }
>          }
>          else

Since this is now on a separate line, the "if (nloops > 1 && es->verbose)"
can be after the existing "if (es->timing)", and shouldn't need its own
"if (es->timing)".  It should conditionally add a separate line, rather than
duplicating the "(actual.*" line.

> -            if (es->timing)
> +            if (nloops > 1 && es->verbose)

In non-text mode, I think you should not check "nloops > 1".  Rather, print the
field as 0.

The whole logic is duplicated for parallel workers.  This could instead be a
function, called from both places.  I think this would handle the computation
as well as the output.  This would make the patch shorter.
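
Something like this is what I mean - only a sketch, with a made-up function
name and not the exact patch code:

static void
show_loop_info(Instrumentation *instrument, bool timing, ExplainState *es)
{
    double      nloops = instrument->nloops;
    double      min_r = instrument->min_tuples;
    double      max_r = instrument->max_tuples;
    double      total_r = instrument->ntuples;
    double      min_t_ms = 1000.0 * instrument->min_t;
    double      max_t_ms = 1000.0 * instrument->max_t;

    if (nloops <= 1 || !es->verbose)
        return;

    if (es->format == EXPLAIN_FORMAT_TEXT)
    {
        /* min/max/total rows first, timing appended only when enabled */
        appendStringInfoChar(es->str, '\n');
        ExplainIndentText(es);
        appendStringInfo(es->str,
                         "Loop Min Rows: %.0f  Max Rows: %.0f  Total Rows: %.0f",
                         min_r, max_r, total_r);
        if (timing)
            appendStringInfo(es->str, "  Min Time: %.3f  Max Time: %.3f",
                             min_t_ms, max_t_ms);
    }
    else
    {
        ExplainPropertyFloat("Loop Min Rows", NULL, min_r, 0, es);
        ExplainPropertyFloat("Loop Max Rows", NULL, max_r, 0, es);
        ExplainPropertyFloat("Loop Total Rows", NULL, total_r, 0, es);
        if (timing)
        {
            ExplainPropertyFloat("Loop Min Time", "ms", min_t_ms, 3, es);
            ExplainPropertyFloat("Loop Max Time", "ms", max_t_ms, 3, es);
        }
    }
}

Both the per-node and the per-worker code paths could then just call that
with their respective Instrumentation pointer.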

> +                        ExplainPropertyFloat("Min Time", "ms",
> +                                             min_t_ms, 3, es);
> +                        ExplainPropertyFloat("Max Time", "ms",
> +                                             max_t_ms, 3, es);

I think the labels in non-text format should say "Loop Min Time" or similar.

> diff --git a/src/include/executor/instrument.h b/src/include/executor/instrument.h
> index aa8eceda5f4..93ba7c83461 100644
> --- a/src/include/executor/instrument.h
> +++ b/src/include/executor/instrument.h
> @@ -66,7 +66,13 @@ typedef struct Instrumentation
>      /* Accumulated statistics across all completed cycles: */
>      double        startup;        /* total startup time (in seconds) */
>      double        total;            /* total time (in seconds) */
> +    double        min_t;            /* time of fastest loop (in seconds) */
> +    double        max_t;            /* time of slowest loop (in seconds) */
>      double        ntuples;        /* total tuples produced */
> +    double        min_tuples;        /* min counter of produced tuples for all
> +                                 * loops */
> +    double        max_tuples;        /* max counter of produced tuples for all
> +                                 * loops */

And these variables should have a loop_ prefix like loop_min_t ?

-- 
Justin



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: e.sokolova@postgrespro.ru
Date:
Thank you for working on this issue. Your comments helped me make this 
patch more correct.

> Lines with "colon" format shouldn't use equal signs, and should use two 
> spaces
> between fields.
Done. Now the extra line looks like "Loop min_rows: %.0f  max_rows: %.0f  
total_rows: %.0f" or "Loop min_time: %.3f  max_time: %.3f  min_rows: 
%.0f  max_rows: %.0f  total_rows: %.0f".

> Since this is now on a separate line, the "if (nloops > 1 && 
> es->verbose)"
> can be after the existing "if (es->timing)", and shouldn't need its own
> "if (es->timing)".  It should conditionally add a separate line, rather 
> than
> duplicating the "(actual.*" line.
> 
>> -            if (es->timing)
>> +            if (nloops > 1 && es->verbose)
The new version of the patch contains this correction. It helped make the 
patch shorter.

> In non-text mode, think you should not check "nloops > 1".  Rather, 
> print the
> field as 0.
The fields will not be zeros. The new line would almost repeat the line with 
the main statistics.

> I think the labels in non-text format should say "Loop Min Time" or 
> similar.
> 
> And these variables should have a loop_ prefix like loop_min_t ?
These are good ideas. I changed them.

I attach a new version of this patch. I hope it has become better.
Please don't hesitate to share any thoughts on this topic.

-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: vignesh C
Date:
On Wed, Apr 14, 2021 at 4:57 PM <e.sokolova@postgrespro.ru> wrote:
>
> Thank you for working on this issue. Your comments helped me make this
> patch more correct.
>
> > Lines with "colon" format shouldn't use equal signs, and should use two
> > spaces
> > between fields.
> Done. Now extra line looks like "Loop min_rows: %.0f  max_rows: %.0f
> total_rows: %.0f" or "Loop min_time: %.3f  max_time: %.3f  min_rows:
> %.0f  max_rows: %.0f  total_rows: %.0f".
>
> > Since this is now on a separate line, the "if (nloops > 1 &&
> > es->verbose)"
> > can be after the existing "if (es->timing)", and shouldn't need its own
> > "if (es->timing)".  It should conditionally add a separate line, rather
> > than
> > duplicating the "(actual.*" line.
> >
> >> -                    if (es->timing)
> >> +                    if (nloops > 1 && es->verbose)
> New version of patch contains this correction. It helped make the patch
> shorter.
>
> > In non-text mode, think you should not check "nloops > 1".  Rather,
> > print the
> > field as 0.
> The fields will not be zeros. New line will almost repeat the line with
> main sttistics.
>
> > I think the labels in non-text format should say "Loop Min Time" or
> > similar.
> >
> > And these variables should have a loop_ prefix like loop_min_t ?
> There are good ideas. I changed it.
>
> I apply new version of this patch. I hope it got better.
> Please don't hesitate to share any thoughts on this topic.

The patch does not apply on HEAD, so I'm changing the status to "Waiting
on Author":
1 out of 2 hunks FAILED -- saving rejects to file
src/test/regress/expected/partition_prune.out.rej
patching file src/test/regress/sql/partition_prune.sql
Hunk #1 FAILED at 467.
Hunk #2 succeeded at 654 (offset -3 lines).
1 out of 2 hunks FAILED -- saving rejects to file
src/test/regress/sql/partition_prune.sql.rej

Please post a new patch rebased on head.

Regards,
Vignesh



Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Ekaterina Sokolova
Date:
Hi, hackers.

Here is the new version of the patch, which adds printing of min, max and total statistics for time and rows across all loops to EXPLAIN ANALYZE.

1) Please add VERBOSE to display the extra statistics.
2) The format of the extra statistics is:

   a) FORMAT TEXT

Loop min_time: N  max_time: N  min_rows: N  max_rows: N  total_rows: N
Output: ...

    b) FORMAT JSON

...
"Actual Total Time": N,
"Loop Min Time": N,
"Loop Max Time": N,
"Actual Rows": N,
"Loop Min Rows": N,
"Loop Max Rows": N,
"Loop Total Rows": N,
"Actual Loops": N,
...

I hope you find this patch useful.
Please don't hesitate to share any thoughts on this topic!
-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Justin Pryzby
Date:
Hi, and sorry to take such a long break from this patch.

On Wed, Apr 14, 2021 at 02:27:36PM +0300, e.sokolova@postgrespro.ru wrote:
> diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c
> index b62a76e7e5a..bf8c37baefd 100644
> --- a/src/backend/commands/explain.c
> +++ b/src/backend/commands/explain.c
> @@ -1615,6 +1615,11 @@ ExplainNode(PlanState *planstate, List *ancestors,
>          double        startup_ms = 1000.0 * planstate->instrument->startup / nloops;
>          double        total_ms = 1000.0 * planstate->instrument->total / nloops;
>          double        rows = planstate->instrument->ntuples / nloops;
> +        double        loop_total_rows = planstate->instrument->ntuples;
> +        double        loop_min_r = planstate->instrument->min_tuples;
> +        double        loop_max_r = planstate->instrument->max_tuples;
> +        double        loop_min_t_ms = 1000.0 * planstate->instrument->min_t;
> +        double        loop_max_t_ms = 1000.0 * planstate->instrument->max_t;
>  
>          if (es->format == EXPLAIN_FORMAT_TEXT)
>          {
> @@ -1626,6 +1631,19 @@ ExplainNode(PlanState *planstate, List *ancestors,
>                  appendStringInfo(es->str,
>                                   " (actual rows=%.0f loops=%.0f)",
>                                   rows, nloops);
> +            if (nloops > 1 && es->verbose)
> +            {
> +                appendStringInfo(es->str, "\n");
> +                ExplainIndentText(es);
> +                if (es->timing)
> +                    appendStringInfo(es->str,
> +                                     "Loop min_time: %.3f  max_time: %.3f  min_rows: %.0f  max_rows: %.0f  total_rows: %.0f",
> +                                     loop_min_t_ms, loop_max_t_ms, loop_min_r, loop_max_r, loop_total_rows);

Now that I see it, I think it should say it with spaces, and not underscores,
like
| Loop Min Time: %.3f  Max Time: %.3f ...

"Memory Usage:" already has spaces in its fields names, so this is more
consistent, and isn't doing anything new.

I think the min/max/total should be first, and the timing should follow, if
enabled.  The "if(timing)" doesn't even need to duplicate the output, it could
append just the timing part.

I refactored this all into a separate function.  I don't see why we'd repeat
these.

+               double          loop_total_rows = planstate->instrument->ntuples;
+               double          loop_min_r = planstate->instrument->min_tuples;
+               double          loop_max_r = planstate->instrument->max_tuples;
+               double          loop_min_t_ms = 1000.0 * planstate->instrument->min_t;
+               double          loop_max_t_ms = 1000.0 * planstate->instrument->max_t;

I realize the duplication doesn't originate with your patch.  But because of
the duplication, there can be inconsistencies; for example, you wrote "ms" in
one place and "s" in another.  Maybe you copied from before
f90c708a048667befbf6bbe5f48ae9695cb89de4 (an issue I reported the first time I
was looking at this patch).

I think the non-text format timing stuff needs to be within "if (timing)".

I'm curious to hear what you and others think of the refactoring.

It'd be nice if there's a good way to add a test case for verbose output
involving parallel workers, but the output is unstable ...

-- 
Justin

Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Lukas Fittl
Date:
On Sun, Nov 21, 2021 at 8:55 PM Justin Pryzby <pryzby@telsasoft.com> wrote:
I'm curious to hear what you and others think of the refactoring.

It'd be nice if there's a good way to add a test case for verbose output
involving parallel workers, but the output is unstable ...

I've reviewed this patch, and it works as expected - the refactoring changes by Justin also appear to make sense to me.

I've briefly thought whether this needs documentation (currently the patch includes none), but there does not appear to be a good place to add documentation about this from a quick glance, so it seems acceptable to leave this out given the lack of more detailed EXPLAIN documentation in general.

The one item that still feels a bit open to me is benchmarking, based on Andres' comment a while ago:

On Mon, Oct 19, 2020 at 4:20 PM Andres Freund <andres@anarazel.de> wrote:
I'm a bit worried that further increasing the size of struct
Instrumentation will increase the overhead of EXPLAIN ANALYZE further -
in some workloads we spend a fair bit of time in code handling that. It
would be good to try to find a few bad cases, and see what the overhead is.

Whilst no specific bad cases were provided, I wonder if even a simple pgbench with auto_explain (and log_analyze=1) would be a way to test this?

The overhead of the Instrumentation struct size should show regardless of whether a plan actually includes a Nested Loop.

Thanks,
Lukas

--
Lukas Fittl

Re: [PATCH] Add extra statistics to explain for Nested Loop

From: Ekaterina Sokolova
Date:
Hi, hackers.

I attach the new version of the patch.

Justin Pryzby <pryzby@telsasoft.com> wrote:
> I'm curious to hear what you and others think of the refactoring.
Thank you so much. With your changes, the patch has become more 
understandable and readable.

> It'd be nice if there's a good way to add a test case for verbose 
> output
> involving parallel workers, but the output is unstable ...
Done!

Lukas Fittl <lukas@fittl.com> wrote:
> I've briefly thought whether this needs documentation (currently the 
> patch includes none),
> but there does not appear to be a good place to add documentation about 
> this from a
> quick glance, so it seems acceptable to leave this out given the lack 
> of more detailed
> EXPLAIN documentation in general.
You're right! I added a feature description to the patch header.

> Whilst no specific bad cases were provided, I wonder if even a simple 
> pgbench with
> auto_explain (and log_analyze=1) would be a way to test this?
I wanted to measure the overhead, but couldn't decide on the right approach.
Thanks for the idea of using auto_explain.
I enabled it and ran pgbench 10 times (1 client, 1 thread).
I'm not sure I chose the right way to measure the overhead, so any
suggestions are welcome.
The current results are in the attached file overhead_v0.txt.

Please feel free to share your suggestions and comments. Regards,

-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Julien Rouhaud
Date:
Hi,

On Thu, Feb 03, 2022 at 12:59:03AM +0300, Ekaterina Sokolova wrote:
>
> I'm attaching the new version of the patch.
>
> I wanted to measure the overhead, but couldn't decide on the right approach.
> Thanks for the idea of using auto_explain.
> I enabled it and ran pgbench 10 times (1 client, 1 thread).
> I'm not sure I chose the right way to measure the overhead, so any
> suggestions are welcome.
> The current results are in the attached file overhead_v0.txt.
>
> Please feel free to share your suggestions and comments. Regards,
>

>    | master latency (ms) | master tps |  | new latency (ms) |   new tps
> --------------------------------------------------------------------------
>  1 |        2.462        | 406.190341 |  |       4.485      | 222.950527
>  2 |        3.877        | 257.89813  |  |       4.141      | 241.493395
>  3 |        3.789        | 263.935811 |  |       2.837      | 352.522297
>  4 |        3.53         | 283.310196 |  |       5.510      | 181.488203
>  5 |        3.413        | 292.997363 |  |       6.475      | 154.432999
>  6 |        3.757        | 266.148564 |  |       4.073      | 245.507218
>  7 |        3.752        | 266.560043 |  |       3.901      | 256.331385
>  8 |        4.389        | 227.847524 |  |       4.658      | 214.675196
>  9 |        4.341        | 230.372282 |  |       4.220      | 236.983672
> 10 |        3.893        | 256.891104 |  |       7.059      | 141.667139
> --------------------------------------------------------------------------
> avg|        3.7203       | 275.215136 |  |       4.03       | 224.8052031
>
>
> master/new latency | 0.92315 |
> master/new tps     | 1.22424 |
>
> new/master latency | 1.08325 |
> new/master tps     | 0.81683 |

The overhead is quite significant (at least for OLTP-style workloads).

I think this should be done with a new InstrumentOption, like
INSTRUMENT_LOOP_DETAILS or something like that, and set it only where
appropriate.  Otherwise you will have to pay that overhead even if you won't
use the new fields at all.  That could be EXPLAIN (ANALYZE, VERBOSE OFF), but
also simply using pg_stat_statements, which doesn't seem acceptable.

One problem is that some extensions (like pg_stat_statements) can rely on
INSTRUMENT_ALL but may or may not care about those extra counters.  Maybe we
should remove that alias and instead provide two (like INSTRUMENT_ALL_VERBOSE
and INSTRUMENT_ALL_SOMETHINGELSE, I don't have any bright name right now) so
that authors can decide what they need, instead of silently having such
extensions ruin performance for no reason.
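
For illustration, the kind of split being suggested could look something like
this (flag names and bit values are placeholders only, not something taken from
the patch):

typedef enum InstrumentOption
{
    INSTRUMENT_NONE = 0,
    INSTRUMENT_TIMER = 1 << 0,          /* needs timer (+ row counts) */
    INSTRUMENT_BUFFERS = 1 << 1,        /* needs buffer usage */
    INSTRUMENT_ROWS = 1 << 2,           /* needs row count */
    INSTRUMENT_WAL = 1 << 3,            /* needs WAL usage */
    INSTRUMENT_LOOP_DETAILS = 1 << 4,   /* needs per-loop min/max counters */

    /* statement-level instrumentation, i.e. what INSTRUMENT_ALL means today */
    INSTRUMENT_ALL_STMT = INSTRUMENT_TIMER | INSTRUMENT_BUFFERS |
        INSTRUMENT_ROWS | INSTRUMENT_WAL,
    /* everything, including the per-node extras */
    INSTRUMENT_ALL_VERBOSE = INSTRUMENT_ALL_STMT | INSTRUMENT_LOOP_DETAILS
} InstrumentOption;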

About the implementation itself:

+static void show_loop_info(Instrumentation *instrument, bool isworker,
+                          ExplainState *es);

I think this should be done as a separate refactoring commit.

+   /*
+    * this is first loop
+    *
+    * We only initialize the min values. We don't need to bother with the
+    * max, because those are 0 and the non-zero values will get updated a
+    * couple lines later.
+    */
+   if (instr->nloops == 0)
+   {
+       instr->min_t = totaltime;
+       instr->min_tuples = instr->tuplecount;
+   }
+
+   if (instr->min_t > totaltime)
+       instr->min_t = totaltime;
+
+   if (instr->max_t < totaltime)
+       instr->max_t = totaltime;
+
    instr->ntuples += instr->tuplecount;
+
+   if (instr->min_tuples > instr->tuplecount)
+       instr->min_tuples = instr->tuplecount;
+
+   if (instr->max_tuples < instr->tuplecount)
+       instr->max_tuples = instr->tuplecount;
+
    instr->nloops += 1;

Why do you need to initialize min_t and min_tuples but not max_t and
max_tuples while both will initially be 0 and possibly updated afterwards?

I think you should either entirely remove this if (instr->nloops == 0) part, or
handle it with an else block.



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Greg Stark
Date:
This patch got some very positive feedback and some significant amount
of work earlier in the release cycle. The feedback from Julien earlier
this month seemed pretty minor.

Ekaterina, is there any chance you'll be able to work on this this
week and do you think it has a chance of making this release? Julien,
do you think it's likely to be possible to polish for this release?

Otherwise I guess we should move it to the next CF but it seems a
shame given how much work has been done and how close it is.



-- 
greg



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Justin Pryzby
Date:
> > +static void show_loop_info(Instrumentation *instrument, bool isworker,
> > +                          ExplainState *es);
> >
> > I think this should be done as a separate refactoring commit.

Right - the 0001 patch I sent seems independently beneficial, and makes the
changes in 0002 more apparent.  My 0001 could also be applied after the feature
freeze and before branching for v16.



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Julien Rouhaud
Date:
Hi,

On Mon, Mar 28, 2022 at 03:09:12PM -0400, Greg Stark wrote:
> This patch got some very positive feedback and some significant amount
> of work earlier in the release cycle. The feedback from Julien earlier
> this month seemed pretty minor.
> 
> Ekaterina, is there any chance you'll be able to work on this this
> week and do you think it has a chance of making this release? Julien,
> do you think it's likely to be possible to polish for this release?

Most of the comments I have are easy to fix.  But I think that the real problem
is the significant overhead shown by Ekaterina that for now would apply even if
you don't consume the new stats, for instance if you have pg_stat_statements.
And I'm still not sure of what is the best way to avoid that.



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Ekaterina Sokolova
Date:
Hi, hackers. Thank you for your attention to this topic.

Julien Rouhaud wrote:
> +static void show_loop_info(Instrumentation *instrument, bool isworker,
> +                           ExplainState *es);
> 
> I think this should be done as a separate refactoring commit.
Sure. I split the patch, so Justin's refactoring commit is now separate.
I also brought it up to date a bit.

> Most of the comments I have are easy to fix.  But I think that the real 
> problem
> is the significant overhead shown by Ekaterina that for now would apply 
> even if
> you don't consume the new stats, for instance if you have 
> pg_stat_statements.
> And I'm still not sure of what is the best way to avoid that.
I took your advice about InstrumentOption. Now INSTRUMENT_EXTRA exists,
so there is currently no overhead during a basic load. Operations using
INSTRUMENT_ALL still carry overhead (because INSTRUMENT_EXTRA is part of
INSTRUMENT_ALL), but it is much less significant than before. I attach new
overhead statistics collected by pgbench with auto_explain enabled.
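
For illustration, the idea is that only EXPLAIN (ANALYZE) asks for the extra
counters when it builds its instrumentation options, roughly like this (the
INSTRUMENT_EXTRA line is the addition; the surrounding flags mirror what
ExplainOnePlan() already sets, so treat this as a sketch rather than the exact
patch):

int         instrument_option = 0;

if (es->analyze && es->timing)
    instrument_option |= INSTRUMENT_TIMER;
else if (es->analyze)
    instrument_option |= INSTRUMENT_ROWS;
if (es->buffers)
    instrument_option |= INSTRUMENT_BUFFERS;
if (es->wal)
    instrument_option |= INSTRUMENT_WAL;
if (es->analyze)
    instrument_option |= INSTRUMENT_EXTRA;  /* new: per-loop min/max counters */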

> Why do you need to initialize min_t and min_tuples but not max_t and
> max_tuples while both will initially be 0 and possibly updated 
> afterwards?
We need this initialization for the min values; there is a comment about it
above that block of code.

I am convinced that the latest changes have improved the patch. I'll be
pleased to hear your thoughts on this.

-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Justin Pryzby
Date:
This message lost track of the email headers, so CFBOT isn't processing the new
patches, which I'm attempting to remedy now.
https://www.postgresql.org/message-id/flat/ae576cac3f451d318374f2a2e494aab1@postgrespro.ru


Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Julien Rouhaud
Date:
Hi,

On Fri, Apr 01, 2022 at 11:46:47PM +0300, Ekaterina Sokolova wrote:
>
> > Most of the comments I have are easy to fix.  But I think that the real
> > problem
> > is the significant overhead shown by Ekaterina that for now would apply
> > even if
> > you don't consume the new stats, for instance if you have
> > pg_stat_statements.
> > And I'm still not sure of what is the best way to avoid that.
> I took your advice about InstrumentOption. Now INSTRUMENT_EXTRA exists,
> so there is currently no overhead during a basic load. Operations using
> INSTRUMENT_ALL still carry overhead (because INSTRUMENT_EXTRA is part of
> INSTRUMENT_ALL), but it is much less significant than before. I attach new
> overhead statistics collected by pgbench with auto_explain enabled.

Can you give a bit more details on your bench scenario?  I see contradictory
results, where the patched version with more code is sometimes way faster,
sometimes way slower.  If you're using pgbench
default queries (including write queries) I don't think that any of them will
hit the loop code, so it's really a best case scenario.  Also write queries
will make tests less stable for no added value wrt. this code.

Ideally you would need a custom scenario with a single read-only query
involving a nested loop or something like that to check how much overhead you
really get when you cumulate those values.  I will try to
>
> > Why do you need to initialize min_t and min_tuples but not max_t and
> > max_tuples while both will initially be 0 and possibly updated
> > afterwards?
> We need this initialization for the min values; there is a comment about it
> above that block of code.

Sure, but if we're going to have a branch for nloops == 0, I think it would be
better to avoid redundant / useless instructions, something like:

if (nloops == 0)
{
    min_t = totaltime;
    min_tuple = tuplecount;
}
else
{
    if (min_t...)
    ...
}

While on that part of the patch, there's an extra blank line between the max_t
and min_tuples processing.
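
Spelled out with the field names from the quoted hunk, that suggested shape
could look like the following (an illustration only, not the actual patch):

if (instr->nloops == 0)
{
    /* first loop: seed both the min and the max values */
    instr->min_t = totaltime;
    instr->max_t = totaltime;
    instr->min_tuples = instr->tuplecount;
    instr->max_tuples = instr->tuplecount;
}
else
{
    if (instr->min_t > totaltime)
        instr->min_t = totaltime;
    if (instr->max_t < totaltime)
        instr->max_t = totaltime;
    if (instr->min_tuples > instr->tuplecount)
        instr->min_tuples = instr->tuplecount;
    if (instr->max_tuples < instr->tuplecount)
        instr->max_tuples = instr->tuplecount;
}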



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Greg Stark
Date:
This is not passing regression tests due to some details of the plan
output - marking Waiting on Author:

diff -w -U3 c:/cirrus/src/test/regress/expected/partition_prune.out
c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out
--- c:/cirrus/src/test/regress/expected/partition_prune.out 2022-04-05
17:00:25.433576100 +0000
+++ c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out
2022-04-05 17:18:30.092203500 +0000
@@ -2251,10 +2251,7 @@
          Workers Planned: 2
          Workers Launched: N
          ->  Parallel Seq Scan on public.lprt_b (actual rows=N loops=N)
-               Loop Min Rows: N  Max Rows: N  Total Rows: N
                Output: lprt_b.b
-               Worker 0:  actual rows=N loops=N
-               Worker 1:  actual rows=N loops=N
    ->  Materialize (actual rows=N loops=N)
          Loop Min Rows: N  Max Rows: N  Total Rows: N
          Output: lprt_a.a
@@ -2263,10 +2260,8 @@
                Workers Planned: 1
                Workers Launched: N
                ->  Parallel Seq Scan on public.lprt_a (actual rows=N loops=N)
-                     Loop Min Rows: N  Max Rows: N  Total Rows: N
                      Output: lprt_a.a
-                     Worker 0:  actual rows=N loops=N
-(24 rows)
+(19 rows)

 drop table lprt_b;
 delete from lprt_a where a = 1;



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Justin Pryzby
Date:
On Tue, Apr 05, 2022 at 05:14:09PM -0400, Greg Stark wrote:
> This is not passing regression tests due to some details of the plan
> output - marking Waiting on Author:

It's unstable due to parallel workers.
I'm not sure what the usual workaround is here.
Maybe set parallel_leader_participation=no for this test.

> diff -w -U3 c:/cirrus/src/test/regress/expected/partition_prune.out
> c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out
> --- c:/cirrus/src/test/regress/expected/partition_prune.out 2022-04-05
> 17:00:25.433576100 +0000
> +++ c:/cirrus/src/test/recovery/tmp_check/results/partition_prune.out
> 2022-04-05 17:18:30.092203500 +0000
> @@ -2251,10 +2251,7 @@
>           Workers Planned: 2
>           Workers Launched: N
>           ->  Parallel Seq Scan on public.lprt_b (actual rows=N loops=N)
> -               Loop Min Rows: N  Max Rows: N  Total Rows: N
>                 Output: lprt_b.b
> -               Worker 0:  actual rows=N loops=N
> -               Worker 1:  actual rows=N loops=N
>     ->  Materialize (actual rows=N loops=N)
>           Loop Min Rows: N  Max Rows: N  Total Rows: N
>           Output: lprt_a.a
> @@ -2263,10 +2260,8 @@
>                 Workers Planned: 1
>                 Workers Launched: N
>                 ->  Parallel Seq Scan on public.lprt_a (actual rows=N loops=N)
> -                     Loop Min Rows: N  Max Rows: N  Total Rows: N
>                       Output: lprt_a.a
> -                     Worker 0:  actual rows=N loops=N
> -(24 rows)
> +(19 rows)



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Ekaterina Sokolova
Date:
Hi, hackers!

We started a discussion about the overhead and how to measure it correctly.

Julien Rouhaud wrote:
> Can you give a bit more details on your bench scenario?  I see 
> contradictory
> results, where the patched version with more code is sometimes way 
> faster,
> sometimes way slower.  If you're using pgbench
> default queries (including write queries) I don't think that any of 
> them will
> hit the loop code, so it's really a best case scenario.  Also write 
> queries
> will make tests less stable for no added value wrt. this code.
> 
> Ideally you would need a custom scenario with a single read-only query
> involving a nested loop or something like that to check how much 
> overhead you
> really get when you cumulate those values.
I created two custom scenarios. The first one uses the VERBOSE flag, so it
displays the extra statistics. The second one doesn't use the new feature but
doesn't disable it either (so the data is still collected).
I attach the pgbench scripts to this letter.

The main conclusions are:
1) displaying the additional statistics costs no more than 4.5%;
2) collecting the data costs no more than 1.5%.
I think testing on another machine would be very helpful, so if you get a
chance, I'd be happy if you shared your observations.

Some fixes:

> Sure, but if we're going to have a branch for nloops == 0, I think it 
> would be
> better to avoid redundant / useless instructions
Right. Done.

Justin Pryzby wrote:
> Maybe set parallel_leader_participation=no for this test.
Thanks for reporting the issue and for the advice. I set
parallel_leader_participation = off. I hope this helps solve the problem of
inconsistent output.

If you have any comments on this topic or want to share your 
impressions, please write to me.
Thank you very much for your contribution to the development of this 
patch.

-- 
Ekaterina Sokolova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Attachments

Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Julien Rouhaud
Date:
Hi,

On Fri, Jun 24, 2022 at 08:16:06PM +0300, Ekaterina Sokolova wrote:
>
> We started a discussion about the overhead and how to measure it correctly.
>
> Julien Rouhaud wrote:
> > Can you give a bit more details on your bench scenario?
> > [...]
> > Ideally you would need a custom scenario with a single read-only query
> > involving a nested loop or something like that to check how much
> > overhead you
> > really get when you cumulate those values.
> I created two custom scenarios. The first one uses the VERBOSE flag, so it
> displays the extra statistics. The second one doesn't use the new feature but
> doesn't disable it either (so the data is still collected).
> I attach the pgbench scripts to this letter.

I don't think that this scenario is really representative of the problem I was
mentioning, as you're only testing the overhead using the EXPLAIN (ANALYZE)
command, which doesn't say much about normal query execution.

I did a simple benchmark using a scale 50 pgbench on a pg_stat_statements
enabled instance, and the following scenario:

SET enable_mergejoin = off;
SELECT count(*) FROM pgbench_accounts JOIN pgbench_tellers on aid = tid;

(which forces a nested loop) and compared the results of this patch as-is and
with pg_stat_statements fixed to not request INSTRUMENT_EXTRA, something like:

diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 049da9fe6d..9a2177e438 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -985,7 +985,7 @@ pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)
            MemoryContext oldcxt;

            oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
-           queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL, false);
+           queryDesc->totaltime = InstrAlloc(1, (INSTRUMENT_ALL & ~INSTRUMENT_EXTRA), false);
            MemoryContextSwitchTo(oldcxt);
        }
    }

It turns out that having pg_stat_statements with INSTRUMENT_EXTRA indirectly
requested by INSTRUMENT_ALL adds a ~27% overhead.

I'm not sure that I actually believe these results, but they're really
consistent, so maybe that's real.

Anyway, even if the overhead was only 1.5% like in your own benchmark, that
still wouldn't be acceptable.  Such a feature is in my opinion very welcome,
but it shouldn't add *any* overhead outside of EXPLAIN (ANALYZE, VERBOSE).

Note that this was done using a "production build" (so with -O2, without asserts
and such).  Doing the same on a debug build (and a scale 20 pgbench), the
overhead is about 1.75%, which is closer to your result.  What configure
options did you use for your benchmark?

Also, I don't think it's acceptable to ask every single extension that
currently relies on INSTRUMENT_ALL to be patched to drop some random
INSTRUMENT_XXX flags to avoid this overhead.  So as I mentioned previously, I
think we should keep INSTRUMENT_ALL to mean something like "all instrumentation
that gives metrics at the statement level", and have INSTRUMENT_EXTRA be
outside of INSTRUMENT_ALL.  Maybe this new category should have a global flag
to request all of them, and maybe there should be some additional alias to grab
all categories.

While at it, INSTRUMENT_EXTRA doesn't really seem like a nice name either since
there's no guarantee that the next time someone adds a new instrument option
for per-node information, she will want to combine it with this one.  Maybe
INSTRUMENT_MINMAX_LOOPS or something like that?



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Julien Rouhaud
Date:
On Sat, Jul 30, 2022 at 08:54:33PM +0800, Julien Rouhaud wrote:
>
> It turns out that having pg_stat_statements with INSTRUMENT_EXTRA indirectly
> requested by INSTRUMENT_ALL adds a ~27% overhead.
>
> I'm not sure that I actually believe these results, but they're really
> consistent, so maybe that's real.
>
> Anyway, even if the overhead was only 1.5% like in your own benchmark, that
> still wouldn't be acceptable.  Such a feature is in my opinion very welcome,
> but it shouldn't add *any* overhead outside of EXPLAIN (ANALYZE, VERBOSE).

I did the same benchmark this morning, this time trying to stop all background
jobs and other things on my machine that could interfere with the results, and
using longer and more numerous runs. I now get a reproducible ~1% overhead,
which is way more believable.  Not sure what happened yesterday, as I got
reproducible numbers doing the same benchmark twice; I guess that's the fun of
doing performance tests on a development machine.

Anyway, 1% is in my opinion still too much overhead for extensions that won't
get any extra information.



Re: [PATCH] Add extra statistics to explain for Nested Loop

From:
Andrey Lepikhov
Date:
On 31/7/2022 10:49, Julien Rouhaud wrote:
> On Sat, Jul 30, 2022 at 08:54:33PM +0800, Julien Rouhaud wrote:
> Anyway, 1% is in my opinion still too much overhead for extensions that won't
> get any extra information.
I have read the whole thread and still can't understand something: what
valuable data can I find in these extra statistics if there is no
parameterized node in the plan?
Also, thinking about min/max time in the EXPLAIN output, I guess it would be
needed only in rare cases. Usually the execution time will correlate with
the number of tuples scanned, won't it? So maybe skip the time boundaries in
the instrument structure?
In my experience, knowing the total number of tuples bubbled up from a
parameterized node is enough to decide on further optimizations. Maybe
simplify this feature down to a single total_rows field in the case of
nloops > 1 and in the presence of parameters?
And finally: if someone wants a lot of additional statistics, why not provide
them through an extension? That would only require adding a hook at the point
where a node's explanation is produced, plus some effort to make
instrumentation extensible. But here, honestly, I don't have code or concrete
ideas so far.

-- 
regards,
Andrey Lepikhov
Postgres Professional