On Thu, Jul 30, 2009 at 1:24 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
>> The timings vary by up to 2.5% between runs, so that's the noise
>> level. Five runs of each (alternating between the two) last night
>> give an average performance of 1.89% faster for the patched version.
>> Combining that with yesterday's results starts to give me pretty good
>> confidence that the patch is beneficial for this database with this
>> configuration. I haven't found any database or configuration where it
>> hurts. (For most tests, adding up the results gave a net difference
>> measured in thousandths of a percent.)
>
>> Is that good enough, or is it still worth the effort of constructing
>> the artificial case where it might *really* shine? Or should I keep
>> running with the "real" database a few more nights to get a big enough
>> sample to further increase the confidence level with this test?
>
> I think we've pretty much established that it doesn't make things
> *worse*, so I'm sort of inclined to go ahead and apply it. The
> theoretical advantage of eliminating O(N^2) search behavior seems
> like reason enough, even if it takes a ridiculous number of tables
> for that to become significant.
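(Not part of the patch itself, but for anyone following along: the O(N^2) pattern Tom describes arises when each of N objects is located by a fresh linear scan. A minimal Python sketch of the general idea, with hypothetical names, is:)

```python
# Hypothetical illustration only, not PostgreSQL code. Looking up each
# of N names by a linear scan costs O(N) per lookup, O(N^2) in total;
# building a hash index once makes each lookup O(1), O(N) overall.

def find_all_linear(names, wanted):
    # list.index() scans from the front each time -> O(N^2) total
    return [names.index(w) for w in wanted]

def find_all_hashed(names, wanted):
    # one O(N) pass builds the index, then each lookup is O(1)
    index = {name: i for i, name in enumerate(names)}
    return [index[w] for w in wanted]

names = [f"table_{i}" for i in range(1000)]
assert find_all_linear(names, names) == find_all_hashed(names, names)
```

With a few thousand objects the quadratic version is already visibly slower, which matches the observation that it "takes a ridiculous number of tables" before the difference shows up in a real dump.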
That makes sense to me, but OTOH if Kevin's willing to do more testing
on some artificial cases, particularly the interleaved-index-names
case, I think those results would be interesting too. We already know
that the slowness of dump + restore is a big issue, so any data we can
gather to understand it better (and perhaps eventually design further
improvements) seems like it would be time well spent.
...Robert