> Well, it's right at the instant of creation, but I think that's much too
> simplistic a way of looking at it. Tables are generally created with
> the intention of putting data into them. It's a reasonable assumption
> that the table will shortly have some rows in it.
>
> Now, any particular estimate like 1000 is obviously going to be wrong.
> The point I'm trying to make is that the optimizer is more likely to
> generate a sane plan if it assumes that the table contains a moderate
> number of rows. We have seen gripes time and time again from people
> who made a table, didn't bother to do a vacuum, and got horribly slow
> nested-loop plans from the optimizer because it assumed their table
> was empty. With a nonzero initial estimate, the optimizer will choose
> a plan that might be somewhat inefficient if the table really is small;
> but it won't be seriously unusable if the table is large.
>
> Once you've done a vacuum, of course, the whole question is moot.
> But I think the system's behavior would be more robust if it assumed
> that a never-yet-vacuumed table contained some rows, not no rows.
True, but the new optimizer code already favors merge joins over nested
loops, because it prefers plans that produce ordered results over
unordered ones like nested loop. That should fix the problem.
--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026