Re: 500x speed-down: Wrong statistics!
| From | Alessandro Baretta |
|---|---|
| Subject | Re: 500x speed-down: Wrong statistics! |
| Date | |
| Msg-id | 43C4D315.8010505@barettadeit.com |
| In reply to | Re: 500x speed-down: Wrong statistics! (Tom Lane <tgl@sss.pgh.pa.us>) |
| List | pgsql-performance |
Tom Lane wrote:
> Alessandro Baretta <a.baretta@barettadeit.com> writes:
>> I have no clue as to how or why the statistics were wrong
>> yesterday--as I vacuum-analyzed continuously out of lack of any better
>> idea--and I was stupid enough to re-timestamp everything before
>> selecting from pg_stats.
>
> Too bad. I would be interested to find out how, if the stats were
> up-to-date, the thing was still getting the row estimate so wrong.
> If you manage to get the database back into its prior state please
> do send along the pg_stats info.

I have some more information on this issue, which clears PostgreSQL's planner of all suspicion. I am observing severe corruption of the bookkeeping fields managed by the xdbs rule/trigger "complex". I am unable to pinpoint the cause right now, but the effect is that after running a few hours' test on the end-user application (which never interacts directly with the xdbs_* fields, and thus cannot possibly mangle them), most tuples (the older ones, apparently) get their timestamps set to NULL.

Before vacuum-analyzing the table, yesterday's statistics were in effect, and the planner used the appropriate indexes. Now, after vacuum-analyzing the table, the pg_stats row for the xdbs_modified field no longer exists (!), and the planner has reverted to the Nested Loop / Seq Scan join strategy. Hence, all the vacuum-analyzing I was doing while complaining about the planner was actually collecting completely screwed-up statistics, and this is why the ALTER TABLE ... SET STATISTICS 1000 did not help at all!

Ok, I plead guilty and ask for the clemency of the court. I'll pay my debt to society with a long term of pl/pgsql code debugging...

Alex
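[Editor's note: for readers hitting the same symptom, a minimal sketch of how one might confirm this from pg_stats before blaming the planner. The table name some_table is a placeholder; only the column name xdbs_modified appears in this thread, and the exact pg_stats behavior for an all-NULL column may vary by version.]

    -- Inspect what ANALYZE actually collected for the timestamp column.
    -- A null_frac near 1 (or, as reported in this thread, no row at all)
    -- means the planner has nothing useful to estimate selectivity with.
    SELECT attname, null_frac, n_distinct, most_common_vals
    FROM pg_stats
    WHERE tablename = 'some_table'
      AND attname = 'xdbs_modified';

    -- Raising the statistics target only helps once the underlying data
    -- is sane again; after repairing the NULL timestamps, re-run:
    ALTER TABLE some_table ALTER COLUMN xdbs_modified SET STATISTICS 1000;
    ANALYZE some_table;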