Re: ZFS vs. UFS

From: Craig James
Subject: Re: ZFS vs. UFS
Date:
Msg-id: CAFwQ8rd_bp+91u2aGOKPhBX4QuQDKmF0Z6M+Ngouftwb6WZ8jg@mail.gmail.com
In response to: Re: ZFS vs. UFS  (Laszlo Nagy <gandalf@shopzeus.com>)
List: pgsql-performance


On Tue, Jul 31, 2012 at 1:50 AM, Laszlo Nagy <gandalf@shopzeus.com> wrote:

>> Which Intel RAID controller is that?  All of the ones on the motherboard are pretty much useless if that's what you have. Those are slower than software RAID, and they add driver issues you could otherwise avoid.  Better to connect the drives to the non-RAID ports or configure the controller in JBOD mode first.
>>
>> Using one of the better RAID controllers, one of Dell's good PERC models for example, is one of the biggest hardware upgrades you could make to this server.  If your database is mostly read traffic, it won't matter very much.  Write-heavy loads really benefit from a good RAID controller's write cache.
>
> Actually, it is a PERC with write-cache and BBU.

Last time I checked, "PERC" was a meaningless name.  Dell put that label on a variety of different controllers ... some were quite good, some were terrible.  The latest PERC controllers are pretty good.  If your machine is a few years old, the PERC controller may be a piece of junk.

Craig
 

>> ZFS will heavily use server RAM for caching by default, much more so than UFS.  Make sure you check into that, and leave enough RAM for the database to run too.  (Doing *some* caching that way is good for Postgres; you just don't want *all* the memory to be used for that.)
>
> Right now, the size of the database is below 5GB, so I guess it will fit into memory. I'm concerned about data safety and availability. I have been in a situation where the RAID card went wrong and I was not able to recover the data because I could not get an identical RAID card in time. I have also been in a situation where the system was crashing twice a day and we didn't know why. (As it turned out, it was a bug in the "stable" kernel, and we could not identify it for two weeks.) However, we had to do fsck after every crash. With a 10TB disk array, that was extremely painful. ZFS is much better: short recovery time, and it is RAID-card independent. So I think I have answered my own question - I'm going to use ZFS for better availability, even if it leads to poorer performance. (That was the original question: how bad is it to use ZFS for PostgreSQL instead of the native UFS.)
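
If the ARC does need reining in, a minimal sketch of capping it looks roughly like this, assuming FreeBSD (suggested by the UFS comparison, but not stated in the thread); the sizes are placeholders, not recommendations:

    # /boot/loader.conf -- cap the ZFS ARC so RAM is left for PostgreSQL
    # (2 GB shown; the value is a placeholder, expressed in bytes)
    vfs.zfs.arc_max="2147483648"

    # postgresql.conf -- give the database its own explicit share of RAM
    shared_buffers = 1GB
    effective_cache_size = 3GB

After a reboot the effective cap can be checked with sysctl vfs.zfs.arc_max.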

>> Moving disks to another server is a very low-probability fix for a broken system.  The disks are a likely place for the actual failure to happen in the first place.
>
> Yes, but we don't have to worry about that. raidz2 + hot spare is safe enough. The RAID card is the only single point of failure.
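
Just to make that layout concrete, a pool like that would be created along these lines (a sketch only; da0 through da6 are placeholder device names):

    # six-disk raidz2 vdev plus one hot spare
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 spare da6
    # verify the layout and spare assignment
    zpool status tank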
>> I like to think more in terms of "how can I create a real-time replica of this data?" to protect databases, and the standby server for that doesn't need to be an expensive system.  That said, there is no reason to set things up so that they only work with that Intel RAID controller, given that it's not a very good piece of hardware anyway.
>
> I'm not sure how to create a real-time replica. This database is updated frequently. There is always a process that reads/writes into the database. I was thinking about using slony to create slave databases. I have no experience with that. We have a 100Mbit connection. I'm not sure how much bandwidth we need to maintain a real-time slave database. It might be a good idea.
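
Slony would work; for comparison, a sketch of PostgreSQL's built-in streaming replication (available since 9.0) needs roughly the following, with placeholder host names, user, and values rather than tuned settings:

    # primary: postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3
    wal_keep_segments = 64

    # primary: pg_hba.conf -- let the standby connect (address is a placeholder)
    host  replication  replicator  192.0.2.10/32  md5

    # standby: recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com user=replicator password=secret'

Only WAL traffic crosses the link, so for a database under 5GB with a moderate write rate a 100Mbit connection is rarely the limiting factor.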

> I'm sorry, I feel I'm being off-topic.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
