Re: Waiting on ExclusiveLock on extension 9.3, 9.4 and 9.5

From: Jeff Janes
Subject: Re: Waiting on ExclusiveLock on extension 9.3, 9.4 and 9.5
Date:
Msg-id: CAMkU=1zVE8O-dDrWK4hagDJp80a=iDpM2rAmYdfbzccpe5418g@mail.gmail.com
In reply to: Re: Waiting on ExclusiveLock on extension 9.3, 9.4 and 9.5  (Tom Dearman <tom.dearman@gmail.com>)
List: pgsql-general
On Mon, Nov 2, 2015 at 7:32 AM, Tom Dearman <tom.dearman@gmail.com> wrote:
> Thanks for the prompt replies so far. I have done some more investigation so
> that I can clearly answer some of the questions.
>
> The original shared-buffers was 8G and I have done another run on Friday
> using this old value instead of my more recent 1G limit.  There was no
> noticeable improvement.  I also installed the extension pg_buffercache and
> following some articles such as:
>
>  http://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/
>
> using:
>
> SELECT pg_size_pretty(count(*) * 8192) as ideal_shared_buffers
> FROM pg_class c
> INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
> INNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname =
> current_database())
> WHERE usagecount >= 3;
>
>
> My system under load is using just over 500M of shared_buffers at usage
> count >= 3.  Our system is very write heavy, with all of the big tables
> written to but not read from (at least during the load test run).  Although
> our db will grow (under load) to 100G in a few hours and keep growing, the
> data in shared_buffers - according to my observations above - seems low.  We
> have the WAL on a different disk from the main tables.

What fraction of that is dirty?
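Something like this against pg_buffercache should show it (assuming the
same 8kB block size as your query above):

  SELECT isdirty, count(*) AS buffers,
         pg_size_pretty(count(*) * 8192) AS size
  FROM pg_buffercache
  GROUP BY isdirty;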

Is your data loading done by INSERT statements in a loop?  Or by COPY?
INSERT in a loop will insert rows into a chosen buffer one by one,
increasing the usage_count each time.  As soon as the buffer is full,
it becomes completely cold in reality, but it is still very hot
according to the usage count.
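
Roughly, the two load paths look like this (table and file names are
made up for illustration):

  -- row at a time: each INSERT lands on the current tail page and
  -- bumps that buffer's usage_count (up to the maximum of 5)
  INSERT INTO big_table (id, payload) VALUES (1, '...');

  -- bulk load: COPY goes through a bulk-write ring of buffers, so
  -- freshly filled pages don't end up looking artificially hot
  COPY big_table (id, payload) FROM '/tmp/data.csv' WITH (FORMAT csv);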

> We have the
> following representative TPS for the disks (from SAR) when under our load
> test:
>
> 06:34:01 PM      DEV      tps  rd_sec/s   wr_sec/s  avgrq-sz  avgqu-sz   await  svctm  %util
> 06:35:01 PM   dev8-0   176.15     25.89    8773.98     49.96      0.83    4.73   3.79  66.85
> 06:35:01 PM   dev8-1   174.74      0.13    8746.49     50.06      0.81    4.64   3.82  66.81
> 06:35:01 PM   dev8-2     0.00      0.00       0.00      0.00      0.00    0.00   0.00   0.00
> 06:35:01 PM   dev8-5     1.41     25.76      27.49     37.74      0.02   14.96   2.31   0.33
> 06:35:01 PM  dev8-16    15.02      0.00    3994.82    265.95      1.27   84.88   2.76   4.14
> 06:35:01 PM  dev8-17    15.02      0.00    3994.82    265.95      1.27   84.88   2.76   4.14


I'm no expert in `sar -d`, but isn't 84.88 pretty high for await?


> As far as the checkpoint goes, it does happen every 5 minutes and takes
> about 4.5 mins which corresponds to the 0.9 checkpoint_completion_target we
> have set.
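
That is consistent with settings along the lines of:

  checkpoint_timeout = 5min
  checkpoint_completion_target = 0.9    # 0.9 * 5min = 4.5min of spread writes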

So, any idea what is happening at 20-minute intervals?

Cheers,

Jeff

