Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower.
From | David Gould |
---|---|
Subject | Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower. |
Date | |
Msg-id | 20151030072704.1a4db344@engels |
In response to | Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower. (Alvaro Herrera <alvherre@2ndquadrant.com>) |
Responses | Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower. |
List | pgsql-bugs |
On Fri, 30 Oct 2015 10:46:46 -0300 Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

> daveg@sonic.net wrote:
>
> > With more than a few tens of thousands of tables in one database
> > autovacuuming slows down radically and becomes ineffective. Increasing the
> > number of autovacuum workers makes the slowdown worse.
>
> Yeah, you need to decrease autovacuum_vacuum_cost_delay if you want to
> make them go faster. (As more workers are started, the existing ones
> slow down. The intent is that the I/O bandwidth allocation is kept
> constant regardless of how many workers there are.)

The cost delays are all 0. We care about bloat, not bandwidth. In any case,
the workers are not actually vacuuming: they are waiting on the
VacuumScheduleLock and requesting fresh snapshots from the stats collector.

Basically, there is a loop in do_autovacuum() that looks like:

    ... build list of all tables to vacuum ...
    for tab in tables_to_vacuum:
        lock(VacuumScheduleLock)
        skip = false
        for worker in autovacuum_workers:
            if worker.working_on == tab:
                skip = true
        if skip or very_expensive_check_to_see_if_already_vacuumed(tab):
            unlock(VacuumScheduleLock)
            continue
        unlock(VacuumScheduleLock)
        actually_vacuum(tab)

Since all the workers are working on the same list, they all compete to
vacuum the next item on the list.

-dg

--
David Gould              daveg@sonic.net
If simplicity worked, the world would be overrun with insects.
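[Editor's note: the scheduling loop described above can be sketched with a small simulation. This is not PostgreSQL source; the counters `lock_acquisitions` and `expensive_rechecks` are hypothetical stand-ins for VacuumScheduleLock traffic and the per-table re-check, used only to show that overhead grows with workers x tables while the useful work stays fixed.]

```python
def run_autovacuum_round(num_workers, tables):
    """Crude model: every worker walks the SAME candidate list,
    takes a global lock per entry, and skips entries another
    worker already claimed."""
    claimed = set()          # tables some worker has claimed
    lock_acquisitions = 0    # stand-in for VacuumScheduleLock traffic
    expensive_rechecks = 0   # stand-in for the per-table re-check

    # Coarse interleaving: each worker scans the whole list in turn.
    for worker in range(num_workers):
        for tab in tables:
            lock_acquisitions += 1        # lock(VacuumScheduleLock)
            if tab in claimed:
                continue                  # another worker has it: skip
            expensive_rechecks += 1       # re-check table state
            claimed.add(tab)              # this worker vacuums it
    return lock_acquisitions, expensive_rechecks

tables = ["t%d" % i for i in range(10_000)]
locks_1, checks_1 = run_autovacuum_round(1, tables)
locks_4, checks_4 = run_autovacuum_round(4, tables)
# Lock traffic scales with workers * tables, while the number of
# tables actually vacuumed (== rechecks in this model) is unchanged,
# so extra workers add contention rather than spreading the work.
```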