Re: Opinions on SSDs

From	Scott Whitney
Subject	Re: Opinions on SSDs
Date
Msg-id	27963212.272424.1376350086735.JavaMail.root@mail.int.journyx.com
In reply to	Re: Opinions on SSDs  (Charles Sprickman <spork@biglist.com>)
List	pgsql-admin
I tested the hybrid approach during my months-long testing and performance work, and I was a bit underwhelmed.

That said, what _I_ personally really needed was an increase in peak iops. Using spindles for "static" data (OS, some logs, and such) worked fine, but no matter how I split up the pg stuff (logs, data, etc.), I couldn't get anywhere above about 15% of the speed of the true SSD solution.

And, on my end, I'm using IBM ServeRAID cards (8 and 10 series) with battery backup, so I didn't bother with the EPL stuff. Maybe I should, but when you're talking that many moving parts, I have my doubts that the EPL stuff would do anything under the control of a true hardware RAID card with a BBU (as opposed to kernel-based software-RAID-on-a-chip).

Also, so far I haven't seen any "marketing BS." I mean, my drives are rated at either 30k (write) and 40k (read) iops, or maybe 35/45. I forget. Somewhere in that range. So, IN THEORY (think "in marketing speak"), with a 6-drive config (RAID 10, 2+2 plus 2 hot spares), IN THEORY I'm running 70,000ish iops on those 2 online drives. From what I've seen, while I'm sure this is a bit overblown on the marketing side, it's well within 20% of the marketed iops.
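Just to make the back-of-the-envelope arithmetic above explicit, here's a minimal sketch. The per-drive rating (~35k write iops) and the two-stripe RAID 10 layout are taken from the paragraph; the exact numbers are approximate:

```python
# Rough sketch of the RAID 10 iops arithmetic above. Assumptions: per-drive
# rating of ~35k write iops (the middle of the 30k-35k range mentioned), and
# a 6-drive layout of two mirrored pairs (2+2) plus two hot spares, so two
# stripes carry the load.
rated_write_iops = 35_000   # per drive, from the spec sheet (approximate)
stripes = 2                 # 2+2 RAID 10: data striped across two mirrors

# Writes hit both members of a mirror, so a mirrored pair delivers roughly
# one drive's worth of write iops; striping across pairs multiplies that.
theoretical_write_iops = rated_write_iops * stripes
print(theoretical_write_iops)                # 70000, the "in theory" figure

# "Well within 20% of the marketed iops" puts the observed floor around:
print(int(theoretical_write_iops * 0.8))     # 56000
```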

Again, just for math's sake, I WAS running 3-4k REQUESTED* iops about 20% of the time, and sometimes I'd spike to 7k. My old RAID array was capable of about 1,050 iops or so, and I was having I/O wait like mad. I have not had that problem a single time since migrating my servers to a pure SSD solution.

* When I say "requested iops," understand that if my drives have a hard limit of 1,050 iops they can actually handle, the 1,051st request enters a wait state. If another one comes in, I have 1,052 (not really, but you get the point for simplification) I/O requests needing service. So when they stack up to a reported 3k, that might not mean I'm actually requesting 3,000 iops at any given time (I might only NEED 1,200 right now); there are simply 3,000 in the queue waiting to be serviced. There's a bit of calculus involved if you truly cared about determining what the actual wait queue is at any given second.
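The backlog effect described above can be sketched in a few lines. The 1,050-iops ceiling is from the message; the 1,200-iops sustained demand is a hypothetical number chosen to show how the reported figure inflates to 3k:

```python
# Sketch of how a service ceiling inflates "requested" iops. Hypothetical
# workload: the array services at most 1,050 requests/sec while the workload
# genuinely needs 1,200/sec, so 150 requests join the queue every second.
CAPACITY = 1050   # what the old spindle array could actually service per second
DEMAND = 1200     # what the workload actually needs per second (hypothetical)

queue = 0
for second in range(20):
    queue += DEMAND                  # new requests arrive this second
    queue -= min(queue, CAPACITY)    # the array drains as many as it can

# After 20 seconds the backlog is 20 * (1200 - 1050) = 3,000 requests:
# a reported "3k iops" even though only 1,200/sec are genuinely needed.
print(queue)   # 3000
```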






On Aug 12, 2013, at 4:15 PM, Lonni J Friedman wrote:

> On Mon, Aug 12, 2013 at 1:05 PM, Bruce Momjian <bruce@momjian.us> wrote:
>> On Mon, Aug 12, 2013 at 08:33:04AM -0700, Joshua D. Drake wrote:
>>>> 1) Has anyone had experience with Intel 520 SSDs?  Are they reliable?
>>>> When they fail, do they fail nicely (ie, failure detected and bad drive
>>>> removed from RAID array) or horribly (data silently corrupted...) ?
>>>
>>> I don't recall if the 520s have powerloss protection but you will
>>> want to check that.
>>
>> I am pretty sure they don't.  The only options are the Intel 320 and
>> 710, I think.  Here is a blog about it:
>>
>>        http://blog.2ndquadrant.com/intel_ssds_lifetime_and_the_32/
>>
>> Look for "Enhanced Power Loss Data Protection".  Intel does not make it
>> easy to find all drives that have it --- you have to look at each spec
>> sheet.
>
> The S3700 series also has power loss data protection:
> http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3700-series.html

And the much more affordable S3500 series:

http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-dc-s3500-series.html

The 320 and 710 are still available, but the prices are totally jacked up, which I assume means it's all old stock and people who need exact replacements are the main market at this point.

We run 4 320s on an SSD-only box and it's been amazing.  We're using ZFS so no hardware RAID, but it does allow us to pull members of each mirrored pair out one by one to take care of both pre-emptive replacement and array growth (started with 160GB drives, on the first refresh moved to 250GB on one pair).  Wear indicator on the replaced drives was at 98%, so those got moved to another box for some quick scratch storage.  The next replacement we'll probably cycle the old SSDs in as ZIL on other (non-db) servers and bring in these new Intel S3500s.
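The one-at-a-time mirror swap described above looks roughly like this in ZFS; this is a sketch, and the pool name (`tank`) and device names are placeholders, not the actual setup:

```shell
# Hypothetical sketch of the one-by-one mirror refresh described above.
# Pool name (tank) and device names (ada2, ada4) are placeholders.
zpool set autoexpand=on tank   # let the vdev grow once both members are bigger
zpool replace tank ada2 ada4   # swap one mirror member; resilver copies the data
zpool status tank              # wait for resilver to finish before the next swap
```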

Another non-traditional and cheap option is to combine some decent spinny drives with SSDs.  The slave to our main all-SSD box is a hybrid with 4 10K Raptors paired with two small Intel 320s as ZFS ZIL.  The ZIL "absorbs" the sync writes, so we get SSD-like random write performance but with the data also safe on the traditional spinny drives.  pgbench on that setup did something like 15K TPS; I've got graphs of that laying around somewhere if anyone's interested.
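For anyone curious what that hybrid layout looks like, a minimal sketch follows; pool and device names are placeholders, not the real config:

```shell
# Hypothetical sketch of the hybrid layout described: four 10K disks as two
# mirrored pairs, with two small SSDs as a mirrored ZIL (SLOG) device.
zpool create tank mirror da0 da1 mirror da2 da3
zpool add tank log mirror ada0 ada1   # sync writes land on the SSD log first
```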

The budget hybrid SSD approach is apparently an odd setup, as I've not seen anyone else discuss it. :)

Charles

>
> --
> Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin




In the pgsql-admin list, by date sent:

Previous
From: Charles Sprickman
Date:
Message: Re: Opinions on SSDs
Next
From: Natalie Wenz
Date:
Message: vacuum freeze performance, wraparound issues