Re: autovacuum next steps, take 2
| From | Matthew T. O'Connor |
|---|---|
| Subject | Re: autovacuum next steps, take 2 |
| Date | |
| Msg-id | 45E3A636.4050602@zeut.net |
| In reply to | Re: autovacuum next steps, take 2 (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: autovacuum next steps, take 2 |
| List | pgsql-hackers |
Tom Lane wrote:
> BTW, to what extent might this whole problem be simplified if we adopt
> chunk-at-a-time vacuuming (compare current discussion with Galy Lee)?
> If the unit of work has a reasonable upper bound regardless of table
> size, maybe the problem of big tables starving small ones goes away.

So if we adopted chunk-at-a-time, then perhaps each worker could process the list of tables in OID order (or some other unique, stable order) and do one chunk per table that needs vacuuming. That way an equal amount of bandwidth is given to all tables. That does sound simpler. Is chunk-at-a-time a realistic option for 8.3?

Matt
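[A minimal sketch of the per-worker loop being discussed, assuming chunk-at-a-time vacuuming is available. The helpers and the CHUNK_PAGES bound are illustrative names, not actual PostgreSQL APIs: each worker walks the tables needing vacuum in OID order and does one bounded chunk per table per pass, so a large table cannot starve the small ones.]

```c
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int Oid;

#define CHUNK_PAGES 1000        /* upper bound on work done per chunk (assumed knob) */

/* Assumed helpers, provided elsewhere in this sketch. */
extern size_t get_tables_sorted_by_oid(Oid *tables, size_t max);
extern bool   table_needs_vacuum(Oid table);
extern bool   vacuum_one_chunk(Oid table, int max_pages); /* returns true when table is done */

static void
worker_main_loop(void)
{
    Oid     tables[1024];
    size_t  ntables = get_tables_sorted_by_oid(tables, 1024);
    bool    all_done;

    do
    {
        all_done = true;
        for (size_t i = 0; i < ntables; i++)
        {
            if (!table_needs_vacuum(tables[i]))
                continue;

            /* Do one bounded chunk, then move on to the next table,
             * giving every table an equal share of vacuum bandwidth. */
            if (!vacuum_one_chunk(tables[i], CHUNK_PAGES))
                all_done = false;
        }
    } while (!all_done);
}
```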