Re: Erroneous cost estimation for nested loop join

From: Bruce Momjian
Subject: Re: Erroneous cost estimation for nested loop join
Msg-id: 20151203014210.GA12766@momjian.us
In reply to: Re: Erroneous cost estimation for nested loop join  (KAWAMICHI Ryoji <kawamichi@tkl.iis.u-tokyo.ac.jp>)
Responses: Re: Erroneous cost estimation for nested loop join  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Mon, Nov 30, 2015 at 04:29:43PM +0900, KAWAMICHI Ryoji wrote:
> 
> 
> Robert Haas <robertmhaas@gmail.com> wrote:
> > 
> > - If we're sequential scanning a small table, let's say less than 1/4
> > of shared_buffers, which is the point where synchronized scans kick
> > in, then assume the data is coming from shared_buffers.
> > - If we're scanning a medium-sized table, let's say less than
> > effective_cache_size, then assume the data is coming from the OS
> > cache.  Maybe this is the same cost as the previous case, or maybe
> > it's slightly more.
> > - Otherwise, assume that the first effective_cache_size pages are
> > coming from cache and the rest has to be read from disk.  This is
> > perhaps unrealistic, but we don't want the cost curve to be
> > discontinuous.
> 
> I think this improvement is very reasonable, and I expect it will be merged 
> into the current optimizer code.
> 
> 
> > A problem with this sort of thing, of course, is that it's really hard
> > to test a proposed change broadly enough to be certain how it will
> > play out in the real world.
> 
> That’s the problem we’re really interested in and trying to tackle.
> 
> For example, after extensive experiments, I'm confident my modification of 
> the cost model is effective in our environment, but I can't tell whether it 
> is also effective, or unfortunately harmful, in other environments.
> 
> And I think that, in the Postgres community, there must be (perhaps buried) 
> knowledge of how to judge the effectiveness of cost-model modifications, 
> because someone must have considered something like that at each commit.
> I'm interested in that, and would like to contribute to finding a better 
> way to improve the optimizer through cost-model refinement.
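For concreteness, the three-tier heuristic Robert sketched above might look roughly like this (an illustrative Python sketch only; the function name, thresholds, and per-page costs are placeholders for discussion, not PostgreSQL's actual costsize.c logic):

```python
# Illustrative sketch of the three-tier caching heuristic quoted above.
# All names and per-page costs here are placeholders -- PostgreSQL's real
# cost logic lives in costsize.c and differs.

def estimated_seq_scan_cost(table_pages, shared_buffers_pages,
                            effective_cache_size_pages,
                            seq_page_cost=1.0, cached_page_cost=0.1):
    # Tier 1: small table (< 1/4 of shared_buffers, the point where
    # synchronized scans kick in) -- assume data comes from shared_buffers.
    if table_pages < shared_buffers_pages / 4:
        return table_pages * cached_page_cost
    # Tier 2: medium table (< effective_cache_size) -- assume data comes
    # from the OS cache (same cost here; could be made slightly higher).
    if table_pages < effective_cache_size_pages:
        return table_pages * cached_page_cost
    # Tier 3: assume the first effective_cache_size pages are cached and
    # the rest are read from disk; this keeps the cost curve continuous
    # at the tier-2/tier-3 boundary.
    cached = effective_cache_size_pages
    uncached = table_pages - cached
    return cached * cached_page_cost + uncached * seq_page_cost
```

Note that at table_pages == effective_cache_size_pages the tier-3 formula reduces to the tier-2 value, so the curve has no discontinuity there.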

No one mentioned the random_page_cost docs, so I will quote them here:
http://www.postgresql.org/docs/9.5/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS
	Random access to mechanical disk storage is normally much more
	expensive than four times sequential access. However, a lower default
	is used (4.0) because the majority of random accesses to disk, such as
	indexed reads, are assumed to be in cache. The default value can be
	thought of as modeling random access as 40 times slower than
	sequential, while expecting 90% of random reads to be cached.

	If you believe a 90% cache rate is an incorrect assumption for your
	workload, you can increase random_page_cost to better reflect the true
	cost of random storage reads. Correspondingly, if your data is likely
	to be completely in cache, such as when the database is smaller than
	the total server memory, decreasing random_page_cost can be
	appropriate. Storage that has a low random read cost relative to
	sequential, e.g. solid-state drives, might also be better modeled with
	a lower value for random_page_cost.
 

What we don't have is a way to know how much is in the cache, not only at
planning time but also at execution time.  (Those times are often
different for prepared queries.)  I think that is the crux of what has
to be addressed here.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription                             +


