Re: A question on planner choices
| From | Edoardo Panfili |
|---|---|
| Subject | Re: A question on planner choices |
| Date | |
| Msg-id | 4E4EC99D.5090009@aspix.it |
| In reply to | Re: A question on planner choices (Scott Marlowe <scott.marlowe@gmail.com>) |
| Responses | Re: A question on planner choices |
| List | pgsql-general |
On 19/08/11 22:15, Scott Marlowe wrote:
> On Fri, Aug 19, 2011 at 1:05 PM, Edoardo Panfili <edoardo@aspix.it> wrote:
>> [1] Plan for the first query
>> ------------------------------------------------------------------------------------------------------
>> Sort  (cost=20.45..20.46 rows=1 width=931) (actual time=4457.775..4457.786 rows=76 loops=1)
>>    Sort Key: cartellino.id
>>    Sort Method: quicksort  Memory: 74kB
>>    ->  Hash Join  (cost=8.32..20.44 rows=1 width=931) (actual time=243.679..4457.658 rows=76 loops=1)
>>          Hash Cond: (cartellino.idspecie = principale.id)
>>          ->  Nested Loop  (cost=0.00..9.81 rows=614 width=886) (actual time=4.094..4439.024 rows=18370 loops=1)
>
> The row estimate here is off by a factor of 30 or so. In this case a
> different join method would likely work better. It might be that
> cranking up stats for the columns involved will help, but if that
> doesn't change the estimates then we might need to look elsewhere.
>
> What's your work_mem and random_page_cost?

work_mem = 1MB
random_page_cost = 4

I am using an SSD, but the production system uses a standard hard disk.

I also tried

set default_statistics_target = 10000;
vacuum analyze cartellino;
vacuum analyze specie;  -- the base table for specienomi
vacuum analyze confini_regioni;

but it is still 4617.023 ms.

Edoardo
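A minimal sketch of what "cranking up stats for the columns involved" can look like with per-column targets instead of a session-wide default_statistics_target. The table and column names here are assumptions taken from the Hash Cond in the plan above (cartellino.idspecie / principale.id); the columns actually driving the misestimate may be different:

-- Per-column statistics target: persists across sessions, unlike
-- SET default_statistics_target, and avoids bloating stats everywhere.
-- Column name is assumed from the Hash Cond in the plan above.
alter table cartellino alter column idspecie set statistics 1000;
analyze cartellino;

-- Settings worth experimenting with per session before touching postgresql.conf:
set work_mem = '16MB';        -- 1MB is low for hash joins over ~18k rows
set random_page_cost = 2.0;   -- plausible on SSD; keep 4 for the spinning-disk production box

Re-running EXPLAIN ANALYZE after this shows whether the rows=614 estimate moves toward the actual 18370; if it does not, the misestimate may come from correlated columns, which higher statistics targets alone will not fix.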