Re: pgbench vs. wait events

From: Jeff Janes
Subject: Re: pgbench vs. wait events
Msg-id: CAMkU=1xZUMTAbcgYUk1x4g3Fcxu18pPfR1A3w4NRYapcSNiPTg@mail.gmail.com
In reply to: Re: pgbench vs. wait events  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Fri, Oct 7, 2016 at 11:14 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

>> Another strategy that may work is actually intentionally waiting/buffering
>> some few ms between flushes/fsync,

> We do that before attempting to write if the user has set the
> "commit_delay" and "commit_siblings" GUC parameters.
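The commit_delay/commit_siblings rule can be sketched as a toy Python model (the real logic lives in PostgreSQL's C code; the function name and parameters here are illustrative):

```python
def should_delay(commit_delay_us, commit_siblings, active_xacts):
    """Decide whether to sleep before flushing WAL.

    Sleep only if commit_delay is nonzero and at least commit_siblings
    other transactions are currently active, so the delay has a chance
    of letting additional commits piggyback on the same fsync.
    """
    return commit_delay_us > 0 and active_xacts >= commit_siblings


# A lone committer never delays, no matter how commit_delay is set.
print(should_delay(commit_delay_us=100, commit_siblings=5, active_xacts=1))
```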

If you have a fast, high-resolution timer, then one thing you can do is keep track of when the previous xlog sync finished. Then, instead of commit_delay being an absolute amount of time to sleep, it would mean "wait until that amount of time has passed since the previous sync finished."  You would set it based on the RPM of your drive, so that the time spent sleeping to allow more work to arrive from other processes is time that would have been lost to rotational delay anyway.

But I dropped this, because it would be hard to tune, hard to implement in a cross-platform way, and because anyone with such high-performance needs is just going to buy a nonvolatile write cache and be done with it.

> Now here, we can't buffer the fsync requests, as currently we are doing
> both writes and fsync under one lock.  However, if we can split the
> work such that writes are done under one lock and fsync under a separate
> lock, then probably we can try to buffer fsync requests and, after
> fsyncing the current pending requests, recheck whether there are
> more pending requests and try to flush them.

What I implemented at one point was:

(Already holding the lock before getting here:)
Write out everything that is ready to be written.
Update the shared structure to reflect the written-up-to point.
Drop the lock.
fsync.
Take the lock again.
Update the shared structure to reflect the flushed-up-to point.
Drop the lock again.

This way, multiple processes could all be waiting on the kernel's fsync response, rather than on each other's locks.  What I was hoping would happen is that while one process, having written everything that was ready and called fsync, was waiting for the platter to come around to the write head, other processes could make more data ready, write that additional data, and call an fsync of their own, and the kernel would be smart enough to amalgamate them.  But the kernel evidently was not that smart, and performance did not improve.
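The locking scheme above can be outlined as a toy Python model (the actual implementation was in PostgreSQL's C code; the class and field names here are illustrative):

```python
import threading


class XLogState:
    """Toy model of shared WAL state with separate written/flushed
    positions, where the lock is dropped for the fsync itself."""

    def __init__(self):
        self.lock = threading.Lock()
        self.written_upto = 0   # how far the log has been written
        self.flushed_upto = 0   # how far the log has been fsynced

    def write_and_flush(self, ready_upto, do_fsync):
        with self.lock:
            # Write out everything that is ready, record the new position.
            write_upto = max(ready_upto, self.written_upto)
            self.written_upto = write_upto
        # fsync with the lock released, so other backends can keep
        # writing (and queueing their own fsyncs) meanwhile.
        do_fsync()
        with self.lock:
            # Record how far the log is now known durable.
            self.flushed_upto = max(self.flushed_upto, write_upto)


state = XLogState()
state.write_and_flush(100, do_fsync=lambda: None)
print(state.written_upto, state.flushed_upto)
```

The point of the split is visible in `write_and_flush`: the shared positions are only touched under the lock, but the slow fsync call sits between the two critical sections rather than inside one.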

Cheers,

Jeff
