On Thu, Dec 5, 2013 at 3:50 PM, Josh Berkus <josh@agliodbs.com> wrote:
> There are fairly well-researched algorithms for block-based sampling
> which correct for the skew introduced by looking at consecutive rows in
> a block. In general, a minimum sample size of 5% is required, and the
> error is no worse than that of our current system. However, the idea
> was shot down at the time, partly because I think other hackers didn't
> get the math.
I think that this certainly warrants revisiting. The benefits would be
considerable.
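To make the block-sampling point concrete, here's a toy standalone
simulation (not backend code; NBLOCKS, row_value(), and the append-only
data layout are all invented for illustration). It models a table whose
rows are perfectly correlated within a block and shows why reading the
first 5% of the table is biased while reading a random 5% of blocks is
not, even though the within-block correlation still has to be accounted
for in the error math:

#include <stdio.h>
#include <stdlib.h>

#define NBLOCKS        10000
#define ROWS_PER_BLOCK 100
#define SAMPLE_BLOCKS  500      /* 5% of the blocks, per the quoted minimum */

/* Invented data layout: an append-only table where values grow with
 * physical position, so all rows in a block are identical -- the worst
 * case for within-block correlation. */
static double row_value(int block, int row)
{
    (void) row;
    return (double) block;
}

int main(void)
{
    double true_sum = 0.0, naive_sum = 0.0, block_sum = 0.0;

    for (int b = 0; b < NBLOCKS; b++)
        for (int r = 0; r < ROWS_PER_BLOCK; r++)
            true_sum += row_value(b, r);

    /* "Sampling" consecutive rows: just read the first 5% of the table. */
    for (int b = 0; b < SAMPLE_BLOCKS; b++)
        for (int r = 0; r < ROWS_PER_BLOCK; r++)
            naive_sum += row_value(b, r);

    /* Block sampling: 5% of blocks chosen uniformly at random.  The
     * estimate is unbiased, but because rows within a block are
     * correlated the effective sample size is nearer SAMPLE_BLOCKS than
     * SAMPLE_BLOCKS * ROWS_PER_BLOCK -- that's the skew the estimators
     * in the literature have to correct for. */
    srand(42);
    for (int i = 0; i < SAMPLE_BLOCKS; i++)
    {
        int b = rand() % NBLOCKS;
        for (int r = 0; r < ROWS_PER_BLOCK; r++)
            block_sum += row_value(b, r);
    }

    printf("true mean         = %8.1f\n",
           true_sum / ((double) NBLOCKS * ROWS_PER_BLOCK));
    printf("first-5%% mean     = %8.1f  (biased by physical order)\n",
           naive_sum / ((double) SAMPLE_BLOCKS * ROWS_PER_BLOCK));
    printf("random-block mean = %8.1f  (unbiased, wider error bars)\n",
           block_sum / ((double) SAMPLE_BLOCKS * ROWS_PER_BLOCK));
    return 0;
}

Presumably the interesting part of the math is quantifying exactly how
much that correlation shrinks the effective sample size.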
Has anyone ever thought about having ANALYZE opportunistically
piggy-back on other full-table scans? That doesn't really help Greg,
because his complaint is mostly that a fresh ANALYZE is too expensive,
but it could be an interesting, albeit risky, approach.
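I haven't thought through a mechanism, but one natural fit is reservoir
sampling (Vitter's Algorithm R): a single pass over a stream of tuples
of unknown count, fixed memory, and no change to the scan's I/O pattern.
A minimal standalone sketch, with a plain loop standing in for the
driving seqscan and RESERVOIR_SIZE invented for illustration:

#include <stdio.h>
#include <stdlib.h>

#define RESERVOIR_SIZE 300      /* rows retained for statistics (invented) */

int main(void)
{
    int  reservoir[RESERVOIR_SIZE];
    long seen = 0;

    srand(2013);

    /* Stand-in for the driving seqscan handing us one tuple at a time;
     * a real hook would see whatever rows the scan was reading anyway. */
    for (long t = 0; t < 1000000; t++)
    {
        int value = (int) t;    /* the "tuple" */

        if (seen < RESERVOIR_SIZE)
            reservoir[seen] = value;            /* fill phase */
        else
        {
            /* Keep each of the 'seen + 1' tuples equally likely to be
             * in the sample: replace a random slot with probability
             * RESERVOIR_SIZE / (seen + 1). */
            long j = (long) (((double) rand() / ((double) RAND_MAX + 1.0))
                             * (double) (seen + 1));
            if (j < RESERVOIR_SIZE)
                reservoir[j] = value;
        }
        seen++;
    }

    printf("kept %d of %ld rows; e.g. %d, %d, %d\n",
           RESERVOIR_SIZE, seen, reservoir[0], reservoir[1], reservoir[2]);
    return 0;
}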
Opportunistically/unpredictably acquiring a ShareUpdateExclusiveLock
would be kind of weird, for one thing, but if a full-table scan really
is very expensive, would it be so unreasonable to attempt to amortize
that cost?
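For what it's worth, the unpredictability could be bounded by only ever
taking the lock when it's immediately available and falling back to a
plain scan otherwise. The pattern in miniature, with a pthreads trylock
standing in for a conditional relation-lock attempt (run_seqscan() and
the piggyback flag are hypothetical, not backend code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t analyze_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical: run the scan, optionally feeding its tuples to a
 * statistics sampler on the side. */
static void run_seqscan(int piggyback_stats)
{
    printf("scanning%s\n", piggyback_stats ? " + sampling" : "");
}

int main(void)
{
    if (pthread_mutex_trylock(&analyze_lock) == 0)
    {
        /* Lock was free: this scan also pays down the ANALYZE debt. */
        run_seqscan(1);
        pthread_mutex_unlock(&analyze_lock);
    }
    else
    {
        /* Someone else holds it: plain scan, zero added latency. */
        run_seqscan(0);
    }
    return 0;
}

Since we never wait for the lock, the query that triggered the scan
pays nothing beyond the cost of the sampling itself.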
--
Peter Geoghegan