Re: Should we increase the default vacuum_cost_limit?

From: Julien Rouhaud
Subject: Re: Should we increase the default vacuum_cost_limit?
Date:
Msg-id: CAOBaU_a2tLyonOMJ62=SiDmo84Xo1fy81YA8K=B+=OtTc3sYSQ@mail.gmail.com
In reply to: Re: Should we increase the default vacuum_cost_limit?  (David Rowley <david.rowley@2ndquadrant.com>)
Responses: Re: [Suspect SPAM] Re: Should we increase the default vacuum_cost_limit?
Re: [Suspect SPAM] Re: Should we increase the default vacuum_cost_limit?
List: pgsql-hackers
On Mon, Mar 11, 2019 at 10:03 AM David Rowley
<david.rowley@2ndquadrant.com> wrote:
>
> On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > The second patch is a delta that rounds off to the next smaller unit
> > if there is one, producing a less noisy result:
> >
> > regression=# set work_mem = '30.1GB';
> > SET
> > regression=# show work_mem;
> >  work_mem
> > ----------
> >  30822MB
> > (1 row)
> >
> > I'm not sure if that's a good idea or just overthinking the problem.
> > Thoughts?
>
> I don't think you're overthinking it.  I often have to look at such
> settings, and I'm probably not unique in that when I glance at 30822MB
> I can see that's roughly 30GB, whereas when I look at 31562138kB, I'm
> either counting digits or reaching for a calculator.  This is going to
> reduce the time it takes for a human to process the pg_settings
> output, so I think it's a good idea.

Definitely, rounding off will spare people from wasting time checking
what the actual value is.
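For illustration, here is a minimal Python sketch of the rounding rule Tom's second patch demonstrates: a fractional amount like '30.1GB' is rounded off to a whole number of the next smaller unit (MB), so SHOW can display it compactly. All names here are illustrative only; the real logic lives in PostgreSQL's C GUC-parsing code, not in these functions.

```python
# Illustrative sketch only -- names and structure are assumptions,
# not PostgreSQL's actual implementation.
UNIT_KB = {"kB": 1, "MB": 1024, "GB": 1024 ** 2}

def parse_round(value: str) -> int:
    """Parse e.g. '30.1GB' into whole kB; a fractional amount is
    rounded off to the next smaller unit (fractional GB -> whole MB)."""
    num, factor = float(value[:-2]), UNIT_KB[value[-2:]]
    if num != int(num) and factor > 1:
        factor //= 1024            # drop one unit: GB -> MB, MB -> kB
        num = round(num * 1024)    # 30.1 GB -> 30822 MB
    return int(num * factor)

def show(kb: int) -> str:
    """Display in the largest unit that divides the value evenly,
    mimicking how SHOW picks a compact representation."""
    for unit, factor in sorted(UNIT_KB.items(), key=lambda kv: -kv[1]):
        if kb % factor == 0:
            return f"{kb // factor}{unit}"

print(show(parse_round("30.1GB")))  # -> 30822MB, as in Tom's example
```

With this rule, '30.1GB' comes out as 30822MB rather than the noisier 31562138kB that plain kB-truncation would produce.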

