Re: WAL insert delay settings

From: Tomas Vondra
Subject: Re: WAL insert delay settings
Date:
Msg-id: 97de8c71-9577-72b6-4ed4-5debb0900b22@2ndquadrant.com
In reply to: Re: WAL insert delay settings  (Andres Freund <andres@anarazel.de>)
Responses: Re: WAL insert delay settings  (Peter Eisentraut <peter.eisentraut@2ndquadrant.com>)
List: pgsql-hackers

On 2/14/19 10:36 AM, Andres Freund wrote:
> 
> 
> On February 14, 2019 10:31:57 AM GMT+01:00, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
>>
>>
>> On 2/14/19 10:06 AM, Andres Freund wrote:
>>> Hi,
>>>
>>> On 2019-02-14 10:00:38 +0100, Tomas Vondra wrote:
>>>> On 2/13/19 4:31 PM, Stephen Frost wrote:
>>>>> Greetings,
>>>>>
>>>>> * Peter Eisentraut (peter.eisentraut@2ndquadrant.com) wrote:
>>>>>> Bulk operations like CREATE INDEX, ALTER TABLE, or bulk loads can create
>>>>>> a lot of WAL.  A lot of WAL at once can cause delays in replication.
>>>>>
>>>>> Agreed, though I think VACUUM should certainly be included in this.
>>>>>
>>>>
>>>> Won't these two throttling criteria interact in undesirable and/or
>>>> unpredictable way? With the regular vacuum throttling (based on
>>>> hit/miss/dirty) it's possible to compute rough read/write I/O limits.
>>>> But with the additional sleeps based on amount-of-WAL, we may sleep for
>>>> one of two reasons, so we may not reach either limit. No?
>>>
>>> Well, it'd be max rates for either, if done right. I think we only
>>> should start adding delays for WAL logging if we're exceeding the WAL
>>> write rate.
>>
>> Not really, I think. If you add additional sleep() calls somewhere, that
>> may affect the limits in vacuum, making it throttle before reaching the
>> derived throughput limits.
> 
> I don't understand. Obviously, if you have two limits, the scarcer
> resource can limit full use of the other resource. That seems OK? The
> thing I think we need to be careful about is not to limit in a way,
> e.g. by adding sleeps even when below the limit, that a WAL limit
> causes throttling of normal IO before the WAL limit is reached.
> 

With the vacuum throttling, rough I/O throughput maximums can be
computed by counting the number of pages you can read/write between
sleeps. For example, with the defaults (200 credits, 20ms sleeps, miss
cost 10 credits) this means 20 pages/round, with 50 rounds/second, so
about 8MB/s. But this is based on the assumption that the work between
sleeps takes almost no time - that's not perfect, but it generally works.

But if you add extra sleep() calls somewhere (say because there's also
a limit on WAL throughput), it will affect how fast VACUUM works in
general. It will still do the cost-based throttling, but it may never
reach those limits. Say you do another 20ms sleep somewhere. Suddenly
it only does 25 rounds/second, and the actual write limit drops to 4 MB/s.
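
Just to make that arithmetic explicit, here is a trivial back-of-the-envelope
model (a standalone sketch only, not the actual vacuum code path - the
variable names merely mirror the vacuum_cost_* GUCs, and the extra sleep is a
hypothetical WAL-based delay):

#include <stdio.h>

int
main(void)
{
    double  cost_limit = 200.0;     /* vacuum_cost_limit, credits per round */
    double  cost_delay = 0.020;     /* vacuum_cost_delay, sleep per round (s) */
    double  cost_page_miss = 10.0;  /* vacuum_cost_page_miss, credits per page */
    double  page_size = 8192.0;     /* bytes per page */

    /* pages processed per round, and rounds per second */
    double  pages_per_round = cost_limit / cost_page_miss;
    double  rounds_per_sec = 1.0 / cost_delay;

    printf("baseline limit: %.1f MB/s\n",
           pages_per_round * rounds_per_sec * page_size / 1e6);

    /* add a hypothetical extra 20ms sleep per round (WAL-based throttling) */
    double  extra_sleep = 0.020;

    rounds_per_sec = 1.0 / (cost_delay + extra_sleep);

    printf("with extra sleep: %.1f MB/s\n",
           pages_per_round * rounds_per_sec * page_size / 1e6);

    return 0;
}

That prints roughly 8 MB/s with the defaults and roughly 4 MB/s once the
extra sleep is added, which is the effect I'm worried about.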

> 
>>> That's obviously more complicated than the stuff we do for
>>> the current VACUUM throttling, but I can't see two such systems
>>> interacting well. Also, the current logic just doesn't work well when
>>> you consider IO actually taking time, and/or process scheduling effects
>>> on busy systems.
>>>
>>
>> True, but making it even less predictable is hardly an improvement.
> 
> I don't quite see the problem here. Could you expand?
> 

All I'm saying is that you can currently estimate how many reads/writes
vacuum does. With the extra sleeps (due to the additional throttling
mechanism) that will be harder.


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

