Re: optimizer cost calculation problem
| From | Tom Lane |
|---|---|
| Subject | Re: optimizer cost calculation problem |
| Date | |
| Msg-id | 25900.1049151887@sss.pgh.pa.us |
| In reply to | optimizer cost calculation problem (Tatsuo Ishii <t-ishii@sra.co.jp>) |
| Responses | Re: optimizer cost calculation problem, Re: optimizer cost calculation problem |
| List | pgsql-hackers |
Tatsuo Ishii <t-ishii@sra.co.jp> writes:
> Kenji Sugita has identified a problem with cost_sort() in costsize.c.
> In the following code fragment, sortmembytes is defined as long. So
>         double        nruns = nbytes / (sortmembytes * 2);
> may cause an integer overflow if sortmembytes exceeds 2^30, which in
> turn makes the optimizer produce a wrong query plan (this actually
> happened in a large PostgreSQL installation that has tons of memory).
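To see the failure mode concretely, here is a minimal standalone sketch, assuming a platform where long is 32 bits; the variable names echo the quoted fragment, but the values and the promote-to-double variant are only illustrative, not the actual costsize.c change:

    #include <stdio.h>

    int
    main(void)
    {
        /* 1.5 GB of sort memory expressed in kB, like sort_mem */
        long    sort_mem = 1536 * 1024;
        /* 1610612736 bytes: fits a 32-bit long, but exceeds 2^30 */
        long    sortmembytes = sort_mem * 1024;
        /* pretend the input to the sort is about 10 GB */
        double  nbytes = 1e10;

        /*
         * sortmembytes * 2 exceeds 2^31-1 and wraps (formally undefined
         * behavior), typically giving a negative divisor and nonsense nruns.
         */
        double  nruns_buggy = nbytes / (sortmembytes * 2);
        /* doing the arithmetic in double avoids the overflow */
        double  nruns_fixed = nbytes / (2.0 * (double) sortmembytes);

        printf("buggy nruns = %g, fixed nruns = %g\n",
               nruns_buggy, nruns_fixed);
        return 0;
    }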
I find it really really hard to believe that it's wise to run with
sort_mem exceeding 2 gig ;-).  Does that installation have so much
RAM that it can afford to run multiple many-Gb sorts concurrently?
This is far from being the only place that multiplies SortMem by 1024.
My inclination is that a safer fix is to alter guc.c's entry for
SortMem to establish a maximum value of INT_MAX/1024 for the variable.
Probably some of the other GUC variables like shared_buffers ought to
have overflow-related maxima established, too.
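A rough sketch of what that ceiling buys us, not the actual guc.c table entry (the clamp_sort_mem() helper and the requested value are made up for illustration): with a maximum of INT_MAX/1024 kB, multiplying the setting by 1024 can never overflow an int.

    #include <limits.h>
    #include <stdio.h>

    /* ceiling such that value_in_kB * 1024 can never exceed INT_MAX */
    #define SORT_MEM_MAX_KB (INT_MAX / 1024)    /* 2097151 kB, just under 2 GB */

    /* illustrative only: guc.c would enforce this via the variable's maximum */
    static int
    clamp_sort_mem(int requested_kb)
    {
        if (requested_kb > SORT_MEM_MAX_KB)
            return SORT_MEM_MAX_KB;
        return requested_kb;
    }

    int
    main(void)
    {
        int     sort_mem = clamp_sort_mem(4 * 1024 * 1024);   /* ask for 4 GB */
        long    sortmembytes = (long) sort_mem * 1024;

        printf("sort_mem = %d kB, sortmembytes = %ld\n",
               sort_mem, sortmembytes);
        return 0;
    }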
        regards, tom lane