Re: optimizing number of workers

From: Greg Hennessy
Subject: Re: optimizing number of workers
Date:
Msg-id: CA+mZaON_Ku7tfC-oX=tRX+PaD-dpo_FTEhXFi_FjGaGb2Ed0gw@mail.gmail.com
In reply to: Re: optimizing number of workers  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
If I run "alter table allwise set (parallel_workers = 64);" then I can get 64 workers. I wonder
whether the code that checks rel_parallel_workers deals with the default algorithm not
allocating enough parallel workers.


On Mon, Jul 14, 2025 at 2:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Greg Hennessy <greg.hennessy@gmail.com> writes:
>> Postgres has chosen to use only a small fraction of the CPUs I have on
>> my machine. Given the query returns an answer in about 8 seconds, it may be
>> that Postgresql has allocated the proper number of workers. But if I wanted
>> to try to tweak some config parameters to see if using more workers
>> would give me an answer faster, I don't seem to see any obvious knobs
>> to turn. Are there parameters that I can adjust to see if I can increase
>> throughput? Would adjusting parallel_setup_cost or parallel_tuple_cost
>> be likely to help?

See the bit about

             * Select the number of workers based on the log of the size of
             * the relation.  This probably needs to be a good deal more
             * sophisticated, but we need something here for now.  Note that

in compute_parallel_worker().  You can move things at the margins by
changing min_parallel_table_scan_size, but that logarithmic behavior
will constrain the number of workers pretty quickly.  You'd have to
change that code to assign a whole bunch of workers to one scan.

(No, I don't know why it's done like that.  There might be related
discussion in our archives, but finding it could be difficult.)

                        regards, tom lane
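[For context, the logarithmic heuristic Tom points at can be sketched as follows. This is a simplified Python rendering of the scan-size portion of PostgreSQL's compute_parallel_worker(), ignoring index pages and other details: one worker once the table reaches min_parallel_table_scan_size, one more each time the table size triples, capped by max_parallel_workers_per_gather.]

```python
def compute_parallel_worker(heap_pages,
                            min_parallel_table_scan_size=1024,  # default: 8MB in 8kB pages
                            max_parallel_workers_per_gather=2):
    """Simplified sketch of the planner's worker-count heuristic."""
    # Tables below the minimum scan size get no parallel workers at all.
    if heap_pages < min_parallel_table_scan_size:
        return 0
    # One worker to start, plus one more each time the size triples.
    workers = 1
    threshold = max(min_parallel_table_scan_size, 1)
    while heap_pages >= threshold * 3:
        workers += 1
        threshold *= 3
    return min(workers, max_parallel_workers_per_gather)
```

[With the defaults, even a table of 100 million pages (roughly 760GB) only rates about 11 workers before the max_parallel_workers_per_gather cap, which is why lowering min_parallel_table_scan_size moves things only at the margins and the per-table parallel_workers override is the practical knob.]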
