Discussion: tsearch query plan
Hello,

I've just started using the tsearch2 system. I'm VERY impressed by the speed. I've got one question about query planning. Is it understandable to have the query plan estimation be off by a couple of orders of magnitude? Or, is it the fact that the cost estimation is small to begin with that the error between the actual and the estimated is "normal"?

Here is my explain analyze run immediately after a vacuum full analyze:

kjv=# vacuum full analyze;
VACUUM
kjv=# explain analyze select * from kjv where idxFTI @@ 'corinth'::tsquery;
                                                     QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Index Scan using idxfti_idx on kjv  (cost=0.00..125.44 rows=32 width=193) (actual time=0.796..1.510 rows=6 loops=1)
   Index Cond: (idxfti @@ '\'corinth\''::tsquery)
   Filter: (idxfti @@ '\'corinth\''::tsquery)
 Total runtime: 1.679 ms
(4 rows)

Thanks!
Mark
Mark,

> I've got one question about query planning. Is it understandable to
> have the query plan estimation be off by a couple of orders of
> magnitude? Or, is it the fact that the cost estimation is small to
> begin with that the error between the actual and the estimated is
> "normal"?

Well, your example is not "a couple orders of magnitude". 6 vs. 32 is actually pretty good accuracy. Now, 6 vs. 192 would potentially be a problem, let alone 32 vs. 13,471.

--
-Josh Berkus
Aglio Database Solutions
San Francisco
On Apr 8, 2004, at 6:33 PM, Josh Berkus wrote:
> Mark,
>
>> I've got one question about query planning. Is it understandable to
>> have the query plan estimation be off by a couple of orders of
>> magnitude? Or, is it the fact that the cost estimation is small to
>> begin with that the error between the actual and the estimated is
>> "normal"?
>
> Well, your example is not "a couple orders of magnitude". 6 vs. 32 is
> actually pretty good accuracy.
>
> Now, 6 vs 192 would potentially be a problem, let alone 32 vs 13,471.

I guess I was looking more at the cost estimate and not so much at the rows estimate. I agree that the row estimate wasn't too bad. But the cost estimate seems way out of line. I'm somewhat new to examining explain analyze output, and I'm looking at this as more of an education, since the speed is certainly good anyway. I just expected the cost estimate to be more in line, especially immediately after an analyze.

-Mark
Mark Lubratt <mark.lubratt@indeq.com> writes:
> I guess I was looking more at the cost estimate and not so much at the
> rows estimate. I agree that the row estimate wasn't too bad. But the
> cost estimate seems way out of line.

The cost estimates are not even in the same units as the actual runtime. Cost is measured on an arbitrary scale in which 1 unit = 1 sequential disk block fetch. It is unknown what this might equate to on your machine ... but it's quite unlikely that it's 1.0 millisecond.

The thing to look at when considering EXPLAIN results is whether the ratios of different cost estimates are proportional to the actual runtimes.

			regards, tom lane
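[A small sketch of the "compare ratios, not absolute numbers" idea Tom describes. The index-scan figures (cost 125.44, 1.679 ms) come from Mark's EXPLAIN ANALYZE above; the sequential-scan figures are made up purely for illustration and do not come from the thread.]

```python
# Planner cost units are not milliseconds, so a single plan's cost can't be
# compared against its runtime. What matters is whether the *ratio* of two
# plans' estimated costs tracks the ratio of their actual runtimes.

def ratio_check(plans):
    """plans: list of (name, estimated_cost, actual_ms); first entry is the baseline."""
    base_name, base_cost, base_ms = plans[0]
    results = {}
    for name, cost, ms in plans[1:]:
        cost_ratio = cost / base_cost
        time_ratio = ms / base_ms
        results[name] = (cost_ratio, time_ratio)
        print(f"{name} vs. {base_name}: "
              f"cost ratio {cost_ratio:.1f}x, time ratio {time_ratio:.1f}x")
    return results

plans = [
    ("index scan", 125.44, 1.679),            # from the thread
    ("seq scan (hypothetical)", 1254.4, 16.8), # invented for illustration
]
ratios = ratio_check(plans)
```

In this toy case a 10x cost ratio lines up with a roughly 10x runtime ratio, which is the proportionality a well-calibrated planner aims for; a large mismatch between the two ratios (not between cost and milliseconds) is what would suggest mistuned cost parameters or bad statistics.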