Re: Bgwriter and pg_stat_bgwriter.buffers_clean aspects

From: Dmitry Koterov
Subject: Re: Bgwriter and pg_stat_bgwriter.buffers_clean aspects
Date:
Msg-id: d7df81620901060323v1cd89b9bm2714c0d522b987e2@mail.gmail.com
In reply to: Re: Bgwriter and pg_stat_bgwriter.buffers_clean aspects  (Greg Smith <gsmith@gregsmith.com>)
Responses: Re: Bgwriter and pg_stat_bgwriter.buffers_clean aspects  (Greg Smith <gsmith@gregsmith.com>)
List: pgsql-performance
Thank you.

But why is buffers_backend so high? As I understood from your article, buffers_backend counts the writes performed directly by backends, e.g. when an INSERT has to flush a dirty buffer to disk because there is no space left in shared buffers for new data. I suppose these flushes slow operations down greatly, and I really do see this: in my environment an INSERT usually completes in 1-2 ms, but sometimes it takes 5-6 seconds or even more (10 seconds), which hits the statement_timeout barrier and fails the whole transaction.

My main goal is to minimize INSERT/UPDATE time or, at least, to make it more predictable.

Could you please advise how to achieve this?
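[Editor's note: for readers following along, the counters discussed in this thread can be inspected with a query along these lines (column names as of PostgreSQL 8.3, when pg_stat_bgwriter was introduced):]

```sql
-- Background writer / checkpoint activity counters:
--   checkpoints_timed:  checkpoints triggered by checkpoint_timeout
--   checkpoints_req:    checkpoints forced by filling checkpoint_segments
--   buffers_checkpoint: buffers written during checkpoints
--   buffers_clean:      buffers written by the background writer
--   buffers_backend:    buffers written directly by backends
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend
FROM pg_stat_bgwriter;
```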


On Mon, Dec 29, 2008 at 1:04 AM, Greg Smith <gsmith@gregsmith.com> wrote:
On Fri, 26 Dec 2008, Dmitry Koterov wrote:

checkpoint_timeout = 1min

Your system is having a checkpoint every minute.  You can't do that and expect the background writer to do anything useful.  As shown in your stats, all the dirty buffers are getting written out by those constant checkpoints.


What I am trying to achieve is that all write operations are performed asynchronously and mostly flushed to disk before a CHECKPOINT occurs, so that the CHECKPOINT is cheap thanks to the bgwriter's work.

The background writer only tries to write out things that haven't been accessed recently, because the tests we did suggested the duplicated writes from any other approach negated the benefits of writing them earlier.  So it's not possible to get all the buffers clean before the checkpoint starts; the ones that have been recently used can't be written except during a checkpoint.

What does work instead is to spread the checkpoint writes over a long period, such that they are an asynchronous trickle of smaller writes. For that to work, you need to set checkpoint_timeout to a fairly long period (at least the default of 5 minutes if not longer) and checkpoint_segments to something fairly large.  You can know the segments are large enough when most of the checkpoints show up in the checkpoints_timed count.
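[Editor's note: the advice above might translate into settings along these lines; the values are illustrative assumptions, not prescribed in the thread, and should be tuned to your write volume:]

```ini
# postgresql.conf -- hypothetical starting point, not from the original message
checkpoint_timeout = 5min          # at least the default; longer spreads writes further
checkpoint_segments = 16           # raise until most checkpoints are time-driven
checkpoint_completion_target = 0.9 # spread checkpoint I/O across the interval (8.3+)
```

[You can verify the effect afterwards by checking that checkpoints_timed grows much faster than checkpoints_req in pg_stat_bgwriter.]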

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

In the pgsql-performance list, by date sent:

Previous
From: Tom Lane
Subject: Re: bad selectivity estimates for CASE
Next
From: "Robert Haas"
Subject: Re: bad selectivity estimates for CASE