Re: 8K recordsize bad on ZFS?

From	Greg Stark
Subject	Re: 8K recordsize bad on ZFS?
Date
Msg-id	j2w407d949e1005101301r9a1c0f7dk9995279fcd7c26a8@mail.gmail.com
In response to	Re: 8K recordsize bad on ZFS?  (Josh Berkus <josh@agliodbs.com>)
Responses	Re: 8K recordsize bad on ZFS?  (Josh Berkus <josh@agliodbs.com>)
List	pgsql-performance
On Mon, May 10, 2010 at 8:30 PM, Josh Berkus <josh@agliodbs.com> wrote:
> Ivan,
>
>> Other things could have influenced your result - 260 MB/s vs 300 MB/s is
>> close enough to be influenced by data position on (some of) the drives.
>> (I'm not saying anything about the original question.)
>
> You misread my post.  It's *87mb/s* vs. 300mb/s.  I kinda doubt that's
> position on the drive.

That still is consistent with it being caused by the files being
discontiguous. Copying them moved all the blocks to be contiguous and
sequential on disk and might have had the same effect even if you had
left the settings at 8kB blocks. You described it as "overloading the
array/drives with commands" which is probably accurate but sounds less
exotic if you say "the files were fragmented, causing lots of seeks
that saturated the drives' iops capacity". How many iops were you
doing before and after, anyway?
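As a rough back-of-envelope on that iops question (using the throughput figures quoted in the thread, and assuming the worst case where every block read is a separate random i/o):

```python
# Back-of-envelope: implied iops if each block read costs a seek.
# Figures from the thread: ~87 MB/s with 8 kB blocks before the copy,
# ~300 MB/s with 128 kB blocks after; both are approximate.
KB_PER_MB = 1024

def implied_iops(throughput_mb_s: float, block_kb: float) -> float:
    """iops needed to sustain the given throughput at the given block size."""
    return throughput_mb_s * KB_PER_MB / block_kb

fragmented = implied_iops(87, 8)     # every 8 kB block fetched separately
contiguous = implied_iops(300, 128)  # mostly sequential 128 kB reads

print(round(fragmented))  # ~11136 iops
print(round(contiguous))  # ~2400 iops
```

Roughly 11,000 iops is far beyond what a small spindle array can do randomly, which is consistent with fragmentation (not the recordsize itself) being the bottleneck.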

That said, this doesn't change much. The point remains that with
8kB blocks ZFS is susceptible to files becoming discontiguous and
sequential i/o performing poorly, whereas with 128kB blocks hopefully
that would happen less. Of course, with 128kB blocks updates become a
whole lot more expensive too.
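For reference, the recordsize tradeoff discussed above is a per-dataset ZFS property. A minimal sketch (the dataset name tank/pgdata is hypothetical; note that a changed recordsize only applies to files written after the change, so existing files keep their old block size until rewritten):

```shell
# Inspect the current recordsize of the dataset holding the database
zfs get recordsize tank/pgdata

# Switch to 128 kB records (the ZFS default); only affects new writes
zfs set recordsize=128k tank/pgdata

# Or match PostgreSQL's 8 kB page size, trading sequential-scan
# throughput for cheaper partial-block updates
zfs set recordsize=8k tank/pgdata
```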


--
greg
