Re: [HACKERS] Increase Vacuum ring buffer.

From: Sokolov Yura
Subject: Re: [HACKERS] Increase Vacuum ring buffer.
Msg-id: 535409cd89927f551bd106cacd2149f4@postgrespro.ru
In reply to: Re: [HACKERS] Increase Vacuum ring buffer.  (Sokolov Yura <funny.falcon@postgrespro.ru>)
Responses: Re: [HACKERS] Increase Vacuum ring buffer.  (Sokolov Yura <funny.falcon@postgrespro.ru>)
List: pgsql-hackers
On 2017-07-27 11:53, Sokolov Yura wrote:
> On 2017-07-26 20:28, Sokolov Yura wrote:
>> On 2017-07-26 19:46, Claudio Freire wrote:
>>> On Wed, Jul 26, 2017 at 1:39 PM, Sokolov Yura
>>> <funny.falcon@postgrespro.ru> wrote:
>>>> On 2017-07-24 12:41, Sokolov Yura wrote:
>>>> test_master_1/pretty.log
>>> ...
>>>> time   activity      tps  latency   stddev      min      max
>>>> 11130     av+ch      198    198ms    374ms      7ms   1956ms
>>>> 11160     av+ch      248    163ms    401ms      7ms   2601ms
>>>> 11190     av+ch      321    125ms    363ms      7ms   2722ms
>>>> 11220     av+ch     1155     35ms    123ms      7ms   2668ms
>>>> 11250     av+ch     1390     29ms     79ms      7ms   1422ms
>>> 
>>> vs
>>> 
>>>> test_master_ring16_1/pretty.log
>>>> time   activity      tps  latency   stddev      min      max
>>>> 11130     av+ch       26   1575ms    635ms    101ms   2536ms
>>>> 11160     av+ch       25   1552ms    648ms     58ms   2376ms
>>>> 11190     av+ch       32   1275ms    726ms     16ms   2493ms
>>>> 11220     av+ch       23   1584ms    674ms     48ms   2454ms
>>>> 11250     av+ch       35   1235ms    777ms     22ms   3627ms
>>> 
>>> That's a huge change in latency for the worse
>>> 
>>> Are you sure that's the ring buffer's doing and not some methodology 
>>> snafu?
>> 
>> Well, I tuned postgresql.conf so that there are no such
>> catastrophic slowdowns on the master branch (with default
>> settings such slowdowns happen quite frequently).
>> bgwriter_lru_maxpages = 10 (instead of the default 200) was one
>> such tuning change.
>> 
>> Probably there is some magic "border" that triggers this
>> behavior. Tuning postgresql.conf shifted the master branch to the
>> "good side" of this border, and the faster autovacuum crossed it
>> back to the "bad side".
>> 
>> Probably backend_flush_after = 2MB (instead of the default 0) is
>> also part of this border. I haven't tried benchmarking without this
>> option yet.
>> 
>> Anyway, given that checkpoint and autovacuum interference can be
>> this noticeable, the checkpoint clearly should affect the autovacuum
>> cost mechanism, IMHO.
>> 
>> With regards,
> 
> I'll do two runs with the default postgresql.conf (except
> shared_buffers and maintenance_work_mem) to find out the behavior
> with default settings.
> 
> Then I'll try to investigate checkpoint cooperation with
> autovacuum to fix the behavior with the "tuned" postgresql.conf.
> 
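
For reference, a postgresql.conf sketch of the "tuned" run quoted above
(only bgwriter_lru_maxpages and backend_flush_after come from this
thread; the shared_buffers and maintenance_work_mem values are
placeholders, not the exact numbers used):

shared_buffers = 8GB              # placeholder; non-default in both runs
maintenance_work_mem = 1GB        # placeholder; non-default in both runs
bgwriter_lru_maxpages = 10        # lowered from the default
backend_flush_after = 2MB         # default is 0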

I've accidentally lost the results of this run, so I will rerun it.

This is what I remember:
- even with default settings, autovacuum runs 3 times faster:
9000s on master vs. 3000s with the increased ring buffer.
So the xlog fsync really does slow autovacuum down (see the
sketch of the write-back path below);
- but concurrent transactions slow down (not as extremely as in the
previous test, but still significantly).
I cannot draw a pretty table now, because I lost the results. I'll do
it after the re-run completes.
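
To illustrate where the xlog fsync cost comes from: before a dirty page
can be evicted from vacuum's ring buffer, WAL has to be flushed up to
that page's LSN. A rough sketch of that write-back path (simplified
from FlushBuffer() in bufmgr.c; write_page() here is only a stand-in
for the smgr layer, not a real function):

/*
 * Simplified sketch of dirty-buffer eviction, as it matters for
 * vacuum's ring buffer.  Not the real FlushBuffer() code.
 */
static void
evict_dirty_buffer(BufferDesc *buf)
{
    XLogRecPtr  page_lsn = BufferGetLSN(buf);

    /*
     * WAL-before-data: the WAL describing the page's changes must be
     * durable before the page itself goes to disk.  With a small ring,
     * vacuum keeps re-evicting pages it has just dirtied, so it pays
     * this XLogFlush() (and its fsync) itself; with a larger ring the
     * write-back is mostly left to the bgwriter and checkpointer.
     */
    XLogFlush(page_lsn);

    write_page(buf);            /* stand-in for smgrwrite() */
}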

Could someone suggest how to make checkpoint and autovacuum cooperate,
so that autovacuum is slowed down a bit while a checkpoint is running?
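
One direction I have in mind (only a sketch: checkpoint_in_progress()
is a hypothetical flag the checkpointer would have to publish in shared
memory, and the factor 4 is arbitrary) is to make vacuum_delay_point()
nap longer while a checkpoint is running, for example:

/*
 * Hypothetical variant of vacuum_delay_point() (commands/vacuum.c),
 * simplified -- the real function does a bit more.
 * checkpoint_in_progress() does not exist today; it stands for some
 * shared-memory flag set by the checkpointer.  The factor 4 is
 * arbitrary, just to show the idea.
 */
void
vacuum_delay_point(void)
{
    CHECK_FOR_INTERRUPTS();

    if (VacuumCostActive && !InterruptPending &&
        VacuumCostBalance >= VacuumCostLimit)
    {
        int         msec;

        msec = VacuumCostDelay * VacuumCostBalance / VacuumCostLimit;
        if (msec > VacuumCostDelay * 4)
            msec = VacuumCostDelay * 4;

        /* nap longer while the checkpointer is writing and fsyncing */
        if (checkpoint_in_progress())
            msec *= 4;

        pg_usleep(msec * 1000L);

        VacuumCostBalance = 0;

        /* might have gotten an interrupt while sleeping */
        CHECK_FOR_INTERRUPTS();
    }
}

But I am not sure this is the right mechanism, hence the question.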

With regards,
-- 
Sokolov Yura aka funny_falcon
Postgres Professional: https://postgrespro.ru
The Russian Postgres Company


