On Tue, Apr 21, 2015 at 04:36:53PM -0400, Robert Haas wrote:
> > Keep in mind there's a disconnect between dirtying a page and writing it
> > to storage. A page could remain dirty for a long time in the buffer
> > cache. This writing of sequential pages would occur at checkpoint time
> > only, which seems the wrong thing to optimize. If some other process
> > needs to evict pages to make room to read some other page in, surely
> > it's going to try one page at a time, not write "many sequential dirty
> > pages."
>
> Well, for a big sequential scan, we use a ring buffer, so we will
> typically be evicting the pages that we ourselves read in moments
> before. So in this case we would do a lot of sequential writes of
> dirty pages.
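
A toy model makes Robert's point concrete. This is not PostgreSQL's actual ring-buffer code (that lives in the buffer-access-strategy logic in freelist.c); the names `RING_SIZE` and `scan_with_ring` are illustrative assumptions. The sketch shows why reusing a small ring of buffers during a sequential scan turns dirty-page evictions into sequential writes:

```python
# Toy model (NOT PostgreSQL code): a sequential scan cycling through a
# small ring of buffers. Each new block evicts the oldest buffer; if
# that buffer is dirty, it gets written out at eviction time. Because
# blocks enter the ring in block-number order, evicted dirty pages come
# out in the same sequential order.
RING_SIZE = 4  # illustrative; real bulk-read rings are larger

def scan_with_ring(num_blocks, dirty_blocks):
    ring = []    # (block_no, is_dirty), oldest first
    writes = []  # block numbers written at eviction time
    for blk in range(num_blocks):
        if len(ring) == RING_SIZE:
            old_blk, old_dirty = ring.pop(0)
            if old_dirty:
                writes.append(old_blk)
        ring.append((blk, blk in dirty_blocks))
    # drain remaining dirty buffers at end of scan
    for blk, dirty in ring:
        if dirty:
            writes.append(blk)
    return writes

# A scan that dirties every page it reads (e.g. pruning as it goes)
# emits its writes in strictly increasing block order:
print(scan_with_ring(10, set(range(10))))  # prints [0, 1, 2, ..., 9]
```

Even when only some pages are dirtied, the writes still come out in ascending block order, since eviction order follows read order.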

Ah, yes, this again supports the prune-then-skip approach, rather than
pruning only the first X% of prunable pages seen.
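
The thread does not define either policy in code, so the following sketch is an assumption about what the two alternatives mean: "first X%" prunes whatever prunable pages fall in the leading X% of the table, while "prune-then-skip" spends a comparable dirtying budget spread evenly across the whole scan. Both function names and parameters are hypothetical:

```python
# Hedged sketch: two candidate pruning policies, as one plausible
# reading of the thread. Neither is PostgreSQL's implementation.

def prune_first_x_percent(prunable, total_pages, x):
    """Prune only prunable pages in the leading x% of the table,
    concentrating all dirtied pages at the front."""
    cutoff = total_pages * x / 100.0
    return [p for p in prunable if p < cutoff]

def prune_then_skip(prunable, budget, skip):
    """Prune a prunable page, then skip the next `skip` prunable
    pages, spreading the same dirtying budget across the table."""
    pruned = []
    i = 0
    while i < len(prunable) and len(pruned) < budget:
        pruned.append(prunable[i])
        i += skip + 1
    return pruned
```

Under this reading, both policies dirty a similar number of pages, but the prefix policy clusters them while the skip policy spreads them out; since the ring buffer already writes evicted dirty pages sequentially in either case, clustering the prunes at the front of the table buys nothing extra.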
-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +