Re: [HACKERS] Solution for LIMIT cost estimation
| From | Tom Lane |
|---|---|
| Subject | Re: [HACKERS] Solution for LIMIT cost estimation |
| Date | |
| Msg-id | 9411.950485411@sss.pgh.pa.us |
| In reply to | Re: [HACKERS] Solution for LIMIT cost estimation (Don Baccus <dhogaza@pacifier.com>) |
| Replies | Re: [HACKERS] Solution for LIMIT cost estimation |
| List | pgsql-hackers |
Don Baccus <dhogaza@pacifier.com> writes:
>> The optimizer's job would be far simpler if no-brainer rules like
>> "indexscan is always better" worked.
> Yet the optimizer currently takes the no-brainer point-of-view that
> "indexscan is slow for tables much larger than the disk cache, therefore
> treat all tables as though they're much larger than the disk cache".
Ah, you haven't seen the (as-yet-uncommitted) optimizer changes I'm
working on ;-)
What I still lack is a believable approximation curve for cache hit
ratio vs. table-size-divided-by-cache-size. Anybody seen any papers
about that? I made up a plausible-shaped function but it'd be nice to
have something with some actual theory or measurement behind it...
(Of course the cache size is only a magic number in the absence of any
hard info about what the kernel is doing --- but at least it will
optimize big tables differently than small ones now.)
regards, tom lane