Re: Auto-tuning work_mem and maintenance_work_mem

From: Jim Nasby
Subject: Re: Auto-tuning work_mem and maintenance_work_mem
Date:
Msg-id: 525C579D.7050105@nasby.net
In response to: Re: Auto-tuning work_mem and maintenance_work_mem (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On 10/14/13 8:18 AM, Robert Haas wrote:
> On Sat, Oct 12, 2013 at 3:07 AM, Magnus Hagander <magnus@hagander.net> wrote:
>> On Oct 11, 2013 10:23 PM, "Josh Berkus" <josh@agliodbs.com> wrote:
>>> On 10/11/2013 01:11 PM, Bruce Momjian wrote:
>>>> In summary, I think we need to:
>>>>
>>>> *  decide on new defaults for work_mem and maintenance_work_mem
>>>> *  add an initdb flag to allow users/packagers to set shared_buffers?
>>>> *  add an autovacuum_work_mem setting?
>>>> *  change the default for temp_buffers?
>>>
>>> If we're changing defaults, bgwriter_lru_maxpages and vacuum_cost_limit
>>> could also use a bump; those thresholds were set for servers with < 1GB
>>> of RAM.
>>
>> Uh, those are there to limit io and not memory, right? More memory isn't the
>> reason to increase them, more io is. For people deploying on modern server
>> hardware then yes it's often low, but for all those deploying in virtualized
>> environments with io performance reminding you of the 1990s, I'm not so
>> sure it is...
>
> bgwriter_lru_maxpages is clearly related to the size of
> shared_buffers, although confusingly it is expressed as a number of
> buffers, while shared_buffers is expressed as a quantity of memory.  I
> think we might have done better to call the GUC
> bgwriter_lru_maxpercent and make it a percentage of shared buffers.
>

Also, more memory generally means more filesystem cache, which means you can do more vacuum work per round.
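
As a point of reference for the percentage framing above, here is a quick illustrative query (my sketch, not something
from the thread) that expresses the current bgwriter_lru_maxpages as a percentage of shared_buffers; the raw
pg_settings values for both are counts of 8kB buffers, so they compare directly. On a stock 9.3 install (100 pages
against 128MB) it comes out around 0.6%:

-- Sketch only: what fraction of shared_buffers can the bgwriter's
-- LRU scan clean per round at the current setting?
SELECT round(bgw.setting::numeric / sb.setting::numeric * 100, 2)
       AS lru_maxpages_pct_of_shared_buffers
FROM pg_settings bgw, pg_settings sb
WHERE bgw.name = 'bgwriter_lru_maxpages'
  AND sb.name = 'shared_buffers';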

FWIW, on our 512G servers...

cnuapp_prod@postgres11.obr=# select name, setting from pg_settings where name ~ 'vacuum_cost';
             name              | setting
-------------------------------+---------
 autovacuum_vacuum_cost_delay  | 10
 autovacuum_vacuum_cost_limit  | -1
 vacuum_cost_delay             | 10
 vacuum_cost_limit             | 2000
 vacuum_cost_page_dirty        | 10
 vacuum_cost_page_hit          | 1
 vacuum_cost_page_miss         | 10
(7 rows)

The page_dirty cost is intentionally the same as the page_miss cost (rather than the default of 20) because writes to
the SAN are generally far cheaper than reads that actually hit spindles. Of course, with the amount of FS cache we have
(512G minus at most 8G of shared buffers), reads are very likely to hit the FS cache, but tuning these settings while
watching IO stats has shown them to be minimally disruptive.
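
For anyone who wants to experiment with the same numbers, a minimal sketch follows; all of the vacuum_cost_* GUCs are
session-settable, so you can trial them with SET before putting them in postgresql.conf (the autovacuum_* variants need
a config reload):

-- Sketch only: trial the cost settings from the table above in the
-- current session, then make them permanent in postgresql.conf if
-- they behave.
SET vacuum_cost_delay = 10;        -- milliseconds
SET vacuum_cost_limit = 2000;      -- default is 200
SET vacuum_cost_page_hit = 1;
SET vacuum_cost_page_miss = 10;
SET vacuum_cost_page_dirty = 10;   -- default is 20; SAN writes are cheap here

Then run a manual VACUUM on a large table while watching IO stats to confirm the impact.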
 
-- 
Jim C. Nasby, Data Architect                       jim@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net


