Re: Should we increase the default vacuum_cost_limit?

From: David Rowley
Subject: Re: Should we increase the default vacuum_cost_limit?
Date:
Msg-id: CAKJS1f9Rg_dms4JsyNWiisS3BseHjhNB7LWfFJtviZMkoTyj7A@mail.gmail.com
In response to: Re: Should we increase the default vacuum_cost_limit?  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Should we increase the default vacuum_cost_limit?
List: pgsql-hackers
On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The second patch is a delta that rounds off to the next smaller unit
> if there is one, producing a less noisy result:
>
> regression=# set work_mem = '30.1GB';
> SET
> regression=# show work_mem;
>  work_mem
> ----------
>  30822MB
> (1 row)
>
> I'm not sure if that's a good idea or just overthinking the problem.
> Thoughts?

I don't think you're overthinking it.  I often have to look at such
settings, and I'm probably not unique in that when I glance at 30822MB
I can see it's roughly 30GB, whereas when I look at 31562138kB I'm
either counting digits or reaching for a calculator.  This is going to
reduce the time it takes a human to process the pg_settings output, so
I think it's a good idea.
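
For what it's worth, here's a minimal Python sketch of the "round off to
the next smaller unit" idea, purely as an illustration (it's not the
logic in Tom's patch; the parse/show helpers are made up), which
reproduces the 30822MB output above for an input of 30.1GB:

# Illustration only: not the actual guc.c change, just the rounding idea.
# Memory units, each 1024x the previous; values are kept internally in kB.
UNITS = ["kB", "MB", "GB", "TB"]

def parse(setting):
    """Return the value in kB, rounding a fractional input off to a whole
    number of the next smaller unit (e.g. '30.1GB' -> 30822MB -> 31561728kB)."""
    for i, unit in enumerate(UNITS):
        if setting.endswith(unit):
            value = float(setting[:-len(unit)])
            if value != int(value) and i > 0:
                # Fractional value: round to the next smaller unit.
                return round(value * 1024) * 1024 ** (i - 1)
            return round(value * 1024 ** i)
    raise ValueError("unrecognized unit in %r" % setting)

def show(value_kb):
    """Display the value in the largest unit that divides it evenly."""
    for i in reversed(range(len(UNITS))):
        if value_kb % 1024 ** i == 0:
            return "%d%s" % (value_kb // 1024 ** i, UNITS[i])

print(show(parse("30.1GB")))    # prints 30822MB
print(show(parse("2GB")))       # whole values are unaffected: prints 2GB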

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

