Re: [HACKERS] Block level parallel vacuum

From: Masahiko Sawada
Subject: Re: [HACKERS] Block level parallel vacuum
Date:
Msg-id: CA+fd4k7=xXaPrk8616W8s=Qi73VvRV-p+M-tc=sR5DoS7x+D4A@mail.gmail.com
In response to: Re: [HACKERS] Block level parallel vacuum  (Mahendra Singh <mahi6run@gmail.com>)
List: pgsql-hackers
On Sat, 30 Nov 2019 at 22:11, Mahendra Singh <mahi6run@gmail.com> wrote:
>
> On Sat, 30 Nov 2019 at 19:18, Sergei Kornilov <sk@zsrv.org> wrote:
>>
>> Hello
>>
>> Is it possible to change the order of index processing by the parallel leader? In the v35 patchset I see the following order:
>> - start parallel processes
>> - the leader and the parallel workers process the index list and possibly skip some entries
>> - after that, the parallel leader rechecks the index list and processes the skipped indexes
>> - WaitForParallelWorkersToFinish
>>
>> I think it would be better to:
>> - start parallel processes
>> - the parallel leader goes through the index list and processes only indexes with skip_parallel_index_vacuum = true
>> - the parallel workers process indexes with skip_parallel_index_vacuum = false
>> - the parallel leader then joins in on the remaining parallel-safe index processing
>> - WaitForParallelWorkersToFinish
>>
>> This would give a shorter running time and better load balancing across the leader and workers when there are a few non-parallel and a few parallel indexes (see the sketch below).
>> (If the current order is expected and required for some reason, we need a comment in the code.)
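>>
>> Something like the following rough, untested sketch of the leader's logic (LaunchParallelWorkers and WaitForParallelWorkersToFinish are the existing parallel-context calls; skip_parallel_index_vacuum, vacuum_one_index, parallel_vacuum_index and the variable names are made up here for illustration, not taken from the v35 patch):
>>
>>     /* Start the workers first. */
>>     LaunchParallelWorkers(lps->pcxt);
>>
>>     /*
>>      * While the workers consume the parallel-safe indexes, the leader
>>      * handles the indexes that must be vacuumed serially.
>>      */
>>     for (int i = 0; i < nindexes; i++)
>>     {
>>         if (skip_parallel_index_vacuum(Irel[i]))
>>             vacuum_one_index(Irel[i], vacrelstats);
>>     }
>>
>>     /* Then the leader joins the remaining parallel-safe index processing. */
>>     parallel_vacuum_index(Irel, nindexes, lps, vacrelstats);
>>
>>     /* Finally, wait for all workers to finish. */
>>     WaitForParallelWorkersToFinish(lps->pcxt);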
>>
>> Also, a few notes on vacuumdb:
>> It seems we need a version check, at least in vacuum_one_database and prepare_vacuum_command, similar to the SKIP_LOCKED or DISABLE_PAGE_SKIPPING features (see the sketch below).
>> Discussion question: will the difference between the --parallel and --jobs parameters be confusing? We need a better description for these options (see the example below).
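>>
>> For the version check, something mirroring the existing SKIP_LOCKED test in vacuum_one_database might look like this (a sketch only: the vacopts->parallel_workers field and the 130000 cutoff are assumptions, since the option is not committed yet):
>>
>>     if (vacopts->parallel_workers >= 0 && PQserverVersion(conn) < 130000)
>>     {
>>         PQfinish(conn);
>>         pg_log_error("cannot use the \"%s\" option on server versions older than PostgreSQL %s",
>>                      "parallel", "13");
>>         exit(1);
>>     }
>>
>> And on --parallel vs --jobs: for example, a hypothetical
>>
>>     vacuumdb --jobs=4 --parallel=2 postgres
>>
>> would open 4 connections and vacuum 4 tables at once, with each VACUUM additionally allowed up to 2 parallel workers for its indexes, so the two options multiply and are easy to confuse.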
>
>
> While testing with different server configuration settings, I am getting an error (ERROR:  no unpinned buffers available) in parallel vacuum, but normal vacuum works fine.
>
> Test Setup:
> max_worker_processes = 40
> autovacuum = off
> shared_buffers = 128kB
> max_parallel_workers = 40
> max_parallel_maintenance_workers = 40
> vacuum_cost_limit = 2000
> vacuum_cost_delay = 10
>
> Table description: the table has 16 indexes (14 btree, 1 hash, 1 BRIN) and 1,000,000 tuples in total. I delete all the tuples, then run the vacuum command.
> Run the attached .sql file (test_16_indexes.sql):
> $ ./psql postgres
> postgres=# \i test_16_indexes.sql
>
> Restart the server and run vacuum.
> Case 1) normal vacuum:
> postgres=# vacuum test ;
> VACUUM
> Time: 115174.470 ms (01:55.174)
>
> Case 2) parallel vacuum using 10 parallel workers:
> postgres=# vacuum (parallel 10) test ;
> ERROR:  no unpinned buffers available
> CONTEXT:  parallel worker
> postgres=#
>
> This error comes from the 128kB shared buffer. I think that because I launched 10 parallel workers and they are all working in parallel, the shared buffer is too small, and so I get this error.
>

Thank you for testing!

> Is this expected behavior with a small shared buffer size, or should we try to come up with a solution for this? Please let me know your thoughts.

I think it's normal behavior when the shared buffer is not large
enough. Since 10 processes in total were processing different pages at
the same time, and you set shared_buffers to a very small value, the
shared buffer pool fills up easily, and you got the appropriate error.
So in this case I think we should either increase the shared buffer
size or decrease the parallel degree. I suspect you could get this
error even by vacuuming 10 different tables concurrently instead.
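As a rough calculation: with the default 8kB block size, shared_buffers =
128kB is only 16 buffers in total. A leader plus 10 workers is 11
backends, and each backend pins at least one buffer while it works on a
page (and can hold more than one pin at a time), so it is easy to reach a
state where no unpinned buffer is left to evict, which is exactly the
error you saw.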

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


