Re: Sequential scans
From | Simon Riggs
---|---
Subject | Re: Sequential scans
Date | |
Msg-id | 1178175693.3633.38.camel@silverbirch.site
In reply to | Re: Sequential scans (Heikki Linnakangas <heikki@enterprisedb.com>)
Responses | Re: Sequential scans
 | Re: Sequential scans
List | pgsql-hackers
On Wed, 2007-05-02 at 23:59 +0100, Heikki Linnakangas wrote:
> Umm, you naturally have just one entry per relation, but we were talking
> about how many entries the table needs to hold. Your patch had a
> hard-coded value of 1000 which is quite arbitrary.

We need to think about the interaction with partitioning here. People will ask whether we would recommend that individual partitions of a large table be larger or smaller than a particular size, to allow these optimizations to kick in.

My thinking is that database designers would attempt to set the partition size larger than the sync scan limit, whatever it is. That means:

- they wouldn't want the limit to vary as cache increases, so we *do* need a GUC to control the limit. My suggestion now would be large_scan_threshold, since it affects both caching and sync scans.

- so there will be lots of partitions, and a hardcoded limit of 1000 would not be sufficient. A new GUC, or a link to an existing one, is probably required.

-- 
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com
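The single shared knob being discussed could be sketched roughly as below. This is a minimal C sketch only: the GUC name large_scan_threshold, its units, and its default are assumptions taken from this thread, not what PostgreSQL actually shipped.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical GUC, expressed in 8 kB heap pages. The name and the
 * default are illustrative, lifted from the suggestion in this mail;
 * they are not an actual PostgreSQL parameter. */
static int64_t large_scan_threshold = 131072;   /* ~1 GB of 8 kB pages */

/* A scan counts as "large" -- i.e. eligible both for synchronized
 * scanning and for a cache-limited buffer strategy -- when the
 * relation exceeds the threshold. Designers sizing partitions just
 * above this value would keep both optimizations active per partition. */
static bool
scan_is_large(int64_t rel_pages)
{
    return rel_pages > large_scan_threshold;
}
```

With one threshold driving both behaviors, raising or lowering the GUC moves the caching and sync-scan cutoffs together, which is the point of unifying them.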