Re: Logging parallel worker draught

From: Imseih (AWS), Sami
Subject: Re: Logging parallel worker draught
Msg-id: D04977E3-9F54-452C-A4C4-CDA67F392BD1@amazon.com
In reply to: Re: Logging parallel worker draught  (Benoit Lobréau <benoit.lobreau@dalibo.com>)
Responses: Re: Logging parallel worker draught  (Tomas Vondra <tomas.vondra@enterprisedb.com>)
List: pgsql-hackers
> I believe both cumulative statistics and logs are needed. Logs excel in 
> pinpointing specific queries at precise times, while statistics provide 
> a broader overview of the situation. Additionally, I often encounter 
> situations where clients lack pg_stat_statements and can't restart their 
> production promptly.

I agree that logging will be very useful here. 
Cumulative stats/pg_stat_statements can be handled in a separate discussion.

> log_temp_files exhibits similar behavior when a query involves multiple
> on-disk sorts. I'm uncertain whether this is something we should or need
> to address. I'll explore whether the error message can be made more
> informative.


> [local]:5437 postgres@postgres=# SET work_mem to '125kB';
> [local]:5437 postgres@postgres=# SET log_temp_files TO 0;
> [local]:5437 postgres@postgres=# SET client_min_messages TO log;
> [local]:5437 postgres@postgres=# WITH a AS ( SELECT x FROM
> generate_series(1,10000) AS F(x) ORDER BY 1 ) , b AS (SELECT x FROM
> generate_series(1,10000) AS F(x) ORDER BY 1 ) SELECT * FROM a,b;
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.20", size
> 122880 => First sort
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.19", size 140000
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.23", size 140000
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.22", size
> 122880 => Second sort
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.21", size 140000

That is true.

Users should also be able to control whether they want this logging overhead.
The best answer is a new GUC that is off by default.
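As a rough sketch of what that could look like (the GUC name, category, and
placement here are my assumptions, not anything settled in this thread), an
in-core boolean GUC entry in src/backend/utils/misc/guc_tables.c might be
shaped like this:

```c
/* Hypothetical sketch only: name and category are illustrative,
 * not an actual proposal text or committed PostgreSQL source. */
{
    {"log_parallel_worker_draught", PGC_SUSET, LOGGING_WHAT,
        gettext_noop("Logs when a query launches fewer parallel "
                     "workers than planned."),
        NULL
    },
    &log_parallel_worker_draught,
    false,    /* off by default, as suggested above */
    NULL, NULL, NULL
},
```

Making it PGC_SUSET (rather than PGC_USERSET) would keep the decision to pay
the logging overhead in the hands of the administrator; that trade-off is
likewise just one option.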

I am also not sure we want to log draught only. I think it's important
not only to see which operations are in a parallel worker draught, but also
to log operations that are using 100% of their planned workers.
This will help the DBA tune queries that are eating up the parallel workers.
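For reference, the planned-versus-launched gap this logging would surface is
already visible per Gather node in EXPLAIN ANALYZE output; a minimal session
(table name and sizes are arbitrary, and the actual worker counts depend on
configuration and available workers) might look like:

```sql
CREATE TABLE big AS SELECT g AS x FROM generate_series(1, 5000000) g;
SET max_parallel_workers_per_gather = 4;
EXPLAIN (ANALYZE, COSTS OFF) SELECT count(*) FROM big;
-- The Gather node reports "Workers Planned: N" and "Workers Launched: M".
-- M < N indicates a draught; M = N is a query consuming all planned workers.
```

The proposed logging would capture the same information for queries that are
not being run interactively under EXPLAIN.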

Regards,

Sami


In the pgsql-hackers list, by date sent:

Previous
From: Tom Lane
Message: Re: Making aggregate deserialization (and WAL receive) functions slightly faster
Next
From: Konstantin Knizhnik
Message: Can concurrent create index concurrently block each other?