Re: Allow a per-tablespace effective_io_concurrency setting

From: Tomas Vondra
Subject: Re: Allow a per-tablespace effective_io_concurrency setting
Date:
Msg-id: 55E78D51.9050004@2ndquadrant.com
In reply to: Re: Allow a per-tablespace effective_io_concurrency setting  (Andres Freund <andres@anarazel.de>)
Responses: Re: Allow a per-tablespace effective_io_concurrency setting  (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers

On 09/03/2015 12:23 AM, Andres Freund wrote:
> On 2015-09-02 14:31:35 -0700, Josh Berkus wrote:
>> On 09/02/2015 02:25 PM, Tomas Vondra wrote:
>>>
>>> As I explained, spindles have very little to do with it - you need
>>> multiple I/O requests per device, to get the benefit. Sure, the DBAs
>>> should know how many spindles they have and should be able to determine
>>> optimal IO depth. But we actually say this in the docs:
>>
>> My experience with performance tuning is that values above 3 have no
>> real effect on how queries are executed.
>
> I saw pretty much the opposite - the benefits were seldom
> significant below 30 or so. Even on single disks.

That's a bit surprising, especially considering that e_i_c=30 means 
prefetching ~100 pages, if I'm doing the math right.
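For the record, the arithmetic: a minimal sketch of the harmonic-sum formula I believe is used to turn e_i_c into a prefetch distance (treat the exact formula as my assumption here, not a quote from the source):

```python
# Assumed formula: the expected number of pages kept in flight for a
# given effective_io_concurrency (e_i_c) is
#
#     sum(e_i_c / i for i in 1..e_i_c)
#
# i.e. e_i_c times the e_i_c-th harmonic number.

def prefetch_pages(eic: int) -> float:
    """Expected number of prefetched pages for a given e_i_c."""
    return sum(eic / i for i in range(1, eic + 1))

print(round(prefetch_pages(30)))  # ~120 pages in flight, i.e. on the
                                  # order of the ~100 mentioned above
```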

AFAIK the queue depth for SATA drives is generally 32 (so prefetching 100 
pages should not make a difference), 256 for SAS drives, and ~1000 for 
most current RAID controllers.

I'm not entirely surprised that values beyond 30 make a difference, but 
I am surprised that you seldom saw significant improvements below that 
value.

No doubt there are workloads like that, but I'd expect them to be quite 
rare, not as prevalent as you claim.

> Which actually isn't that surprising - to actually be beneficial
> (that is, turn an IO bound workload into a CPU bound one) the
> prefetched buffer needs to have been read in by the time it's
> needed. In many queries, processing a single heap page takes far
> less time than prefetching the data from storage, even on good SSDs.
>
> Therefore what you actually need is a queue of prefetches for the
> next XX buffers, so that between starting a prefetch and actually
> needing the buffer enough time has passed that the data is
> completely read in. And the point is that that's the case even for a
> single rotating disk!

So instead of asking "How many blocks do I need to prefetch to saturate 
the devices?" you're asking "How many blocks do I need to prefetch to 
never actually wait for the I/O?"

I do like this view, but I'm not really sure how we could determine the 
right value - it seems very dependent on hardware and workload.
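To illustrate that dependency (the numbers below are purely hypothetical, 
just to show the shape of the trade-off): to never stall, the queue of 
in-flight prefetches has to cover the device latency divided by the 
per-page processing time.

```python
import math

def min_prefetch_queue(io_latency_us: float, per_page_cpu_us: float) -> int:
    """Smallest number of in-flight prefetches that hides the I/O
    latency entirely (round up: a partially covered page still stalls)."""
    return math.ceil(io_latency_us / per_page_cpu_us)

# Illustrative figures only: an SSD with ~100us random-read latency and
# ~10us of CPU work per heap page would need ~10 requests in flight;
# slower storage or cheaper per-page work pushes the required depth up.
print(min_prefetch_queue(100, 10))  # 10
```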

For spinning drives the speedup comes from reordering random seeks into 
a more optimal path (thanks to NCQ/TCQ); on SSDs it comes from using the 
parallel channels (and possibly faster access to the same block).

I guess the best thing we could do at this level is simply keep the 
on-device queues fully saturated, no?

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


