Re: optimizing number of workers
From | Weck, Luis
---|---
Subject | Re: optimizing number of workers
Date |
Msg-id | IA3PR10MB811375110A03F754BA29A0048D4BA@IA3PR10MB8113.namprd10.prod.outlook.com
In response to | optimizing number of workers (Greg Hennessy <greg.hennessy@gmail.com>)
Responses | Re: optimizing number of workers
List | pgsql-general
From: Greg Hennessy <greg.hennessy@gmail.com>
Date: Thursday, July 10, 2025 at 4:40 PM
To: pgsql-general@lists.postgresql.org <pgsql-general@lists.postgresql.org>
Subject: optimizing number of workers
Having just received a shiny new dual-CPU machine to use as a PostgreSQL server, I'm trying to make a reasonable effort to configure it correctly. The hardware has 128 cores, and I am running a VM with Red Hat 9 and PostgreSQL 16.9.

In postgresql.conf I have:

max_worker_processes = 90                # (change requires restart)
max_parallel_workers_per_gather = 72     # gsh 26 oct 2022
max_parallel_maintenance_workers = 72    # gsh 12 jun 2025
max_parallel_workers = 72                # gsh 12 jun 2025
max_logical_replication_workers = 72     # gsh 12 jun 2025
max_sync_workers_per_subscription = 72   # gsh 12 jun 2025
autovacuum_max_workers = 12              # max number of autovacuum subprocesses

When I do a simple count of a large table (large being 1.8 billion entries), I get about 10 workers used.

prod_v1_0_0_rc1=# explain (analyze, buffers) select count(*) from gaiadr3.gaia_source;
                                       QUERY PLAN
------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=14379796.81..14379796.82 rows=1 width=8) (actual time=16702.806..16705.479 rows=1 loops=1)
   Buffers: shared hit=2507481
   ->  Gather  (cost=14379795.78..14379796.79 rows=10 width=8) (actual time=16702.513..16705.470 rows=11 loops=1)
         Workers Planned: 10
         Workers Launched: 10
         Buffers: shared hit=2507481
         ->  Partial Aggregate  (cost=14379785.78..14379785.79 rows=1 width=8) (actual time=16691.820..16691.821 rows=1 loops=11)
               Buffers: shared hit=2507481
               ->  Parallel Index Only Scan using gaia_source_nest128 on gaia_source  (cost=0.58..13926632.85 rows=181261171 width=0) (actual time=0.025..9559.644 rows=164700888 loops=11)
                     Heap Fetches: 0
                     Buffers: shared hit=2507481
 Planning:
   Buffers: shared hit=163
 Planning Time: 14.898 ms
 Execution Time: 16705.592 ms

Postgres has chosen to use only a small fraction of the CPUs I have on my machine. Given that the query returns an answer in about 8 seconds, it may be that PostgreSQL has allocated the proper number of workers. But if I wanted to tweak some config parameters to see if using more workers would give me an answer faster, I don't see any obvious knobs to turn. Are there parameters I can adjust to see if I can increase throughput? Would adjusting parallel_setup_cost or parallel_tuple_cost be likely to help?
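Regarding parallel_setup_cost and parallel_tuple_cost: lowering them mainly makes the planner more willing to choose a parallel plan at all; the number of workers it plans for a scan is derived from the size of the table or index, not from those costs. You can still experiment with them at the session level before touching postgresql.conf. A rough sketch (the table is just the one from your query, and the values are only illustrative):

  -- Session-only experiment; nothing persists after you disconnect.
  SET parallel_setup_cost = 100;    -- default 1000
  SET parallel_tuple_cost = 0.01;   -- default 0.1
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM gaiadr3.gaia_source;
  RESET ALL;                        -- back to the configured values

Compare "Workers Planned" in the output to see whether anything actually changed once parallelism looks cheaper.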
I believe you can also decrease min_parallel_table_scan_size (default 8MB) and min_parallel_index_scan_size (default 512kB). The planner derives the number of workers from the relation size relative to these thresholds (roughly one worker at the threshold, plus one more each time the size triples), so lowering them makes it plan more workers for the same table, up to the max_parallel_workers_per_gather cap.
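For example, again at the session level and with purely illustrative values:

  SET min_parallel_table_scan_size = '1MB';    -- default 8MB
  SET min_parallel_index_scan_size = '64kB';   -- default 512kB
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM gaiadr3.gaia_source;

There is also a per-table storage parameter that bypasses the size-based calculation entirely:

  ALTER TABLE gaiadr3.gaia_source SET (parallel_workers = 32);

Either way, the planned workers are still capped by max_parallel_workers_per_gather, and the workers actually launched by max_parallel_workers and max_worker_processes.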