Re: Speeding up an in-progress wraparound-preventing vacuum
From | Vincent de Phily
---|---
Subject | Re: Speeding up an in-progress wraparound-preventing vacuum
Date |
Msg-id | 2115723.l95MbpnbDW@moltowork
In reply to | Re: Speeding up an in-progress wraparound-preventing vacuum (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: Speeding up an in-progress wraparound-preventing vacuum
List | pgsql-general
On Tuesday 09 December 2014 16:56:39 Tom Lane wrote:
> Vincent de Phily <vincent.dephily@mobile-devices.fr> writes:
> > It reads about 8G of the table (often doing a similar number of writes,
> > but not always), then starts reading the pkey index and the second index
> > (only 2 indexes on this table), reading both of them fully (some writes
> > as well, but not as many as for the table), which takes around 8h.
> >
> > And the cycle apparently repeats: process a few more GB of the table,
> > then go reprocess both indexes fully. A rough estimate is that it spends
> > ~6x more time (re)processing the indexes as it does processing the table
> > (looking at data size alone the ratio would be 41x, but the indexes go
> > faster). I'm probably lucky to only have two indexes on this table.
> >
> > Is that the expected behaviour ?
>
> Yes.  It can only remember so many dead tuples at a time, and it has
> to go clean the indexes when the dead-TIDs buffer fills up.

Fair enough. And I guess it scans the whole index each time because the dead
tuples are spread all over ?

What happens when vacuum is killed before it had time to go through the index
with its dead-TID buffer ? Surely the index isn't irreversibly bloated; and
whatever is done then could be done in the normal case ? It still feels like
a lot of wasted IO.

> You could increase maintenance_work_mem to increase the size of that
> buffer.

Will do, thanks.

-- 
Vincent de Phily
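The trade-off behind Tom's advice can be sketched numerically: vacuum of that era stored each dead tuple pointer as a 6-byte ItemPointer in a buffer sized by maintenance_work_mem, and triggered a full index-cleaning pass each time the buffer filled. A rough back-of-envelope sketch (the dead-tuple counts below are illustrative, not figures from this thread):

```python
import math

def index_scan_passes(dead_tuples, maintenance_work_mem_bytes):
    """Estimate how many full index-cleaning passes a vacuum needs.

    Assumes the classic 6-bytes-per-dead-TID accounting of pre-PG17
    vacuum: the dead-TID buffer holds maintenance_work_mem / 6 entries,
    and every time it fills, both indexes are scanned in full.
    """
    tids_per_pass = maintenance_work_mem_bytes // 6  # 6-byte ItemPointer each
    return math.ceil(dead_tuples / tids_per_pass)

# Hypothetical workload: 500M dead tuples in the table.
# Default-ish 64MB buffer -> many index passes:
print(index_scan_passes(500_000_000, 64 * 1024 * 1024))   # 45 passes
# Raising maintenance_work_mem to 1GB -> far fewer:
print(index_scan_passes(500_000_000, 1024 * 1024 * 1024)) # 3 passes
```

This is why a larger maintenance_work_mem helps so much here: index-cleaning cost scales with the number of buffer fills, not with the amount of dead data.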