Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower.
From | David Gould |
---|---|
Subject | Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower. |
Date | |
Msg-id | 20160318150818.32982178@engels |
In response to | Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower. (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower.; Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower. |
List | pgsql-bugs |
On Fri, 18 Mar 2016 09:39:34 -0400 Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
> > I actually wonder if instead of doing all this the hard way in C whether
> > we should just use SPI for each worker to build its list of tables. The
> > big advantage that would provide is the ability for users to customize
> > the scheduling, but I suspect it'd make the code simpler too.
>
> By that you mean "user can write a SQL query that determines autovacuum
> targets"?  -1.  That would bring us back to the bad old days where a
> poorly-thought-out vacuum cron job would miss tables and lead to a
> database shutdown.  Not to mention SQL injection risks.
>
> If we need to improve autovac's strategy, let's do that, but not by
> deeming it the user's problem.

I have some thoughts on a different approach. In short, the stats collector
already knows what needs vacuuming, because the queries that create dead
tuples tell it. I'm considering having the stats collector maintain a queue
of vacuum work and having autovacuum workers request work from the stats
collector. When I have something more concrete I'll post it on hackers.

-dg

--
David Gould              510 282 0869         daveg@sonic.net
If simplicity worked, the world would be overrun with insects.
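[Editorial note] The proposal above amounts to a producer/consumer queue between the stats machinery and the autovacuum workers. Below is a minimal, self-contained C sketch of that shape; it is purely illustrative and not PostgreSQL code. The names (VacQueue, vac_enqueue, vac_dequeue), the local Oid typedef, and the fixed capacity are all hypothetical; a real implementation would live in the stats collector and use its messaging rather than an in-process array.

```c
/*
 * Illustrative sketch only: a single FIFO of "tables that need vacuuming",
 * fed by the stats side and drained by autovacuum workers.  Names and the
 * fixed capacity are hypothetical, not PostgreSQL internals.
 */
#include <stdio.h>
#include <stdbool.h>

#define VAC_QUEUE_SIZE 8

typedef unsigned int Oid;       /* table identifier, as in PostgreSQL */

typedef struct
{
    Oid   items[VAC_QUEUE_SIZE];
    int   head;                 /* next slot to dequeue */
    int   tail;                 /* next slot to enqueue */
    int   count;                /* number of queued tables */
} VacQueue;

/* Conceptually: stats side noticed a table accumulating dead tuples. */
static bool
vac_enqueue(VacQueue *q, Oid relid)
{
    if (q->count == VAC_QUEUE_SIZE)
        return false;           /* queue full; caller retries later */
    q->items[q->tail] = relid;
    q->tail = (q->tail + 1) % VAC_QUEUE_SIZE;
    q->count++;
    return true;
}

/* Conceptually: an autovacuum worker asking for its next target. */
static bool
vac_dequeue(VacQueue *q, Oid *relid)
{
    if (q->count == 0)
        return false;           /* nothing to vacuum right now */
    *relid = q->items[q->head];
    q->head = (q->head + 1) % VAC_QUEUE_SIZE;
    q->count--;
    return true;
}

int
main(void)
{
    VacQueue    q = {0};
    Oid         relid;

    /* Stats side: three tables reported dead tuples. */
    vac_enqueue(&q, 16384);
    vac_enqueue(&q, 16390);
    vac_enqueue(&q, 16402);

    /* Worker side: pull work until the queue is empty. */
    while (vac_dequeue(&q, &relid))
        printf("worker would vacuum table with oid %u\n", relid);

    return 0;
}
```

The point of this arrangement is that each worker pulls one already-identified target at a time rather than every worker rescanning the statistics for all tables, which is what would let additional workers add throughput instead of contention.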