Re: Raid 10 chunksize

From: Merlin Moncure
Subject: Re: Raid 10 chunksize
Date:
Msg-id: b42b73150904021058v2b390b04k53aeabc58eb4e769@mail.gmail.com
In reply to: Re: Raid 10 chunksize  (Scott Carey <scott@richrelevance.com>)
Responses: Re: Raid 10 chunksize  (Scott Carey <scott@richrelevance.com>)
List: pgsql-performance
On Wed, Mar 25, 2009 at 12:16 PM, Scott Carey <scott@richrelevance.com> wrote:
> On 3/25/09 1:07 AM, "Greg Smith" <gsmith@gregsmith.com> wrote:
>> On Wed, 25 Mar 2009, Mark Kirkwood wrote:
>>> I'm thinking that the raid chunksize may well be the issue.
>>
>> Why?  I'm not saying you're wrong, I just don't see why that parameter
>> jumped out as a likely cause here.
>>
>
> If postgres is random reading or writing at 8k block size, and the raid
> array is set with 4k block size, then every 8k random i/o will create TWO
> disk seeks since it gets split to two disks.   Effectively, iops will be cut
> in half.
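
[Not part of the original mail: Scott's counting argument can be sketched as simple arithmetic, assuming I/O requests are aligned to their own size. With a 4k chunk, every aligned 8k request straddles a chunk boundary and touches two stripe units; with a chunk at least as large as the request, an aligned request always fits in one.]

```python
def chunks_touched(offset, io_size, chunk_size):
    """Number of raid chunks (stripe units) an I/O request spans.

    Illustrative sketch only: assumes a simple striped layout where
    chunk N starts at byte offset N * chunk_size.
    """
    first = offset // chunk_size
    last = (offset + io_size - 1) // chunk_size
    return last - first + 1

# An 8k-aligned 8k request on 4k chunks always spans two chunks
# (hence Scott's "two disk seeks" claim):
assert chunks_touched(0, 8192, 4096) == 2
assert chunks_touched(8192, 8192, 4096) == 2

# With the chunk size at or above the request size (and aligned I/O),
# the request never straddles a chunk boundary:
assert chunks_touched(0, 8192, 8192) == 1
assert chunks_touched(57344, 8192, 65536) == 1
```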

I disagree.  The two 4k raid chunks that make up an 8k block are
likely to be adjacent on disk and read sequentially, so this will only
cost two seeks in special cases.  Now, if the PostgreSQL block size is
_smaller_ than the raid chunk size, random writes can get expensive
(especially for raid 5), because the whole raid chunk has to be read
in and written back out.  But I think this is mainly a theoretical
problem.
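
[Not part of the original mail: the raid 5 cost mentioned above is the classic "small write penalty", sketched below as a simplified model. The function name and disk counts are illustrative, not from the thread.]

```python
def raid5_write_ios(write_size, chunk_size, data_disks):
    """Rough count of physical I/Os for one logical write on RAID 5.

    Simplified model: a write smaller than a full stripe must read the
    old data and old parity to recompute parity, then write both back
    (2 reads + 2 writes).  A full-stripe write can compute parity from
    the new data alone, so it needs no reads.
    """
    stripe = chunk_size * data_disks
    if write_size >= stripe:
        return data_disks + 1  # N data writes + 1 parity write
    return 4  # read old data, read old parity, write data, write parity

# A random 8k PostgreSQL block write into a 4-data-disk array with 4k
# chunks (16k stripe) pays the 4-I/O small-write penalty:
assert raid5_write_ios(8192, 4096, 4) == 4

# A 64k write covering whole stripes avoids the read-back entirely:
assert raid5_write_ios(65536, 4096, 4) == 5
```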

I'm going to go out on a limb and say that for block sizes within one
or two powers of two of each other, it doesn't matter a whole lot.
SSDs might be different because of the 'erase' block, which might be
128k, but I bet the drive handles this in such a fashion that you
wouldn't really notice it when dealing with different block sizes in
pg.

merlin

In pgsql-performance by date:

Previous
From: Craig Ringer
Date:
Message: Re: Very specialised query
Next
From: James Mansion
Date:
Message: Re: Raid 10 chunksize