Re: Eagerly scan all-visible pages to amortize aggressive vacuum
From:        Robert Haas
Subject:     Re: Eagerly scan all-visible pages to amortize aggressive vacuum
Date:
Msg-id:      CA+TgmobrKkj2zC68H2ugC_GizmpnvU9JowQPBW=54N+c+ruGbQ@mail.gmail.com
In reply to: Re: Eagerly scan all-visible pages to amortize aggressive vacuum (Robert Treat <rob@xzilla.net>)
Responses:   Re: Eagerly scan all-visible pages to amortize aggressive vacuum
List:        pgsql-hackers
On Tue, Feb 4, 2025 at 2:57 PM Robert Treat <rob@xzilla.net> wrote:
> > Yea, I thought that counting them as failures made sense because we
> > did fail to freeze them. However, now that you mention it, we didn't
> > fail to freeze them because of age, so maybe we don't want to count
> > them as failures. I don't expect us to have a bunch of contended
> > all-visible pages, so I think the question is about what makes it more
> > clear in the code. What do you think? Should I reset was_eager_scanned
> > to false if we don't get the cleanup lock?
>
> I feel like if we are making the trade-off in resources to attempt
> eager scanning, and we weren't making progress for whatever reason
> (and in the lock failure cases, wouldn't some of those be things that
> would prevent us from freezing?) then it would generally be ok to bias
> towards bailing sooner rather than later.

Failures to acquire cleanup locks are, hopefully, rare, so it may not
matter that much. Having said that, if we skip a page because we can't
acquire a cleanup lock on it, I think that means that it was already
present in shared_buffers, which means that we didn't have to do an
I/O to get it. Since I think the point of the failure cap is mostly to
limit wasted I/O, I would lean toward NOT counting such cases as
failures.

--
Robert Haas
EDB: http://www.enterprisedb.com
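For illustration, here is a minimal sketch of the alternative being
discussed: not charging a cleanup-lock miss against the eager-scan
failure cap. The was_eager_scanned flag and the surrounding control
flow are assumptions modeled on the patch under discussion, not the
committed code; ConditionalLockBufferForCleanup() is the existing
bufmgr API, which returns false when another backend holds a pin on
the buffer.

    /*
     * Sketch only: one way the heap-scanning loop could decline to
     * count a cleanup-lock miss as an eager-scan failure.
     */
    if (!ConditionalLockBufferForCleanup(buf))
    {
        /*
         * Failing the conditional lock implies another backend has the
         * page pinned, so it was already resident in shared_buffers and
         * no read I/O was spent fetching it.  Since the failure cap
         * exists to bound wasted I/O, treat the page as not eagerly
         * scanned so the miss does not count toward the cap.
         */
        was_eager_scanned = false;

        /* ... fall through to the existing no-cleanup-lock handling ... */
    }

Under this sketch the failure counter is only advanced for pages that
were eagerly read but could not be frozen, which matches the stated
purpose of the cap.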