Discussion: coping with failing disks
Greetings,

we are setting up a new database server with a fair number of disks for our in-house PostgreSQL-based "data warehouse". We are considering using separate sets of disks: an index tablespace on SSDs, and a tablespace for tables that are used as temporary tables (but which, for various reasons, are ordinary tables as far as PostgreSQL is concerned). The storage for those should be as fast as possible, possibly sacrificing reliability for speed.

If we set up the SSDs for the indexes as a non-redundant RAID 0, it is quite likely that this volume will fail at some point. Theoretically, this shouldn't hurt us too much, as we would just have to rebuild the indexes from the existing, unharmed data. But is it that simple in practice? Would the consistency of the database be affected if all indexes are suddenly gone?

The same goes for the temporary tables. If the storage for those becomes unavailable, only the currently running queries should be affected. But how can we tell PostgreSQL to simply forget about those tables and consider the remaining database consistent? We can afford some downtime, obviously.

thanks, Joachim
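[For readers following along: the setup described above can be sketched roughly as follows. Tablespace paths, table, and index names here are illustrative, not from the original post. The usual recovery path after losing an index-only volume is to restore the tablespace location on working storage and rebuild the indexes with REINDEX:]

```sql
-- Illustrative setup: an index tablespace on the (hypothetical) SSD mount point.
CREATE TABLESPACE fast_ix LOCATION '/mnt/ssd_raid0/pg_ix';

-- Keep the table on reliable storage, place only its index on the fast volume.
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE fast_ix;

-- After losing the volume: once the tablespace directory is available again,
-- rebuild the affected indexes individually or per table.
REINDEX INDEX orders_customer_idx;
REINDEX TABLE orders;
```

[Note that REINDEX locks the table being reindexed, so rebuilding everything after a failure implies some downtime, which the original post says is acceptable.]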
On Thu, Sep 2, 2010 at 8:16 AM, Joachim Worringen <joachim.worringen@iathh.de> wrote:
> Would the consistency of the database be affected if all indices are
> suddenly gone?

The unique constraint is implemented as a unique index. So I'd say "yeah, you could break your consistency".

Why not purchase a robust RAM/SSD disk system designed for DB use rather than hacking one up on the cheap?
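[To illustrate the point above (table and column names are made up): declaring a UNIQUE or PRIMARY KEY constraint causes PostgreSQL to create a unique index to enforce it, so while those indexes are missing, duplicates can no longer be detected:]

```sql
-- Each uniqueness constraint is enforced by an automatically created unique index.
CREATE TABLE accounts (
    id    integer PRIMARY KEY,  -- enforced by index "accounts_pkey"
    email text    UNIQUE        -- enforced by index "accounts_email_key"
);

-- "\d accounts" in psql lists both indexes; if they live on a lost
-- index tablespace, the constraints are unenforceable until REINDEXed.
```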
On 02.09.2010 16:32, Vick Khera wrote:
> On Thu, Sep 2, 2010 at 8:16 AM, Joachim Worringen
> <joachim.worringen@iathh.de> wrote:
>> Would the consistency of the database be affected if all indices are
>> suddenly gone?
>
> The unique constraint is implemented as a unique index. So I'd say
> "yeah, you could break your consistency".

True. But we could use a separate index tablespace only for our own indexes - what happens if the storage for this goes away?

> Why not purchase a robust RAM/SSD disk system designed for DB use
> rather than hacking one up on the cheap?

15k SAS drives and Intel SLC SSDs are not really on the cheap side (especially in the quantities needed to fill a storage array), but we still want to get the most out of them. Things like RAMSan (Texas Memory Systems) are currently considered overkill here. Anything else (no, not Fusion-io)? It's a matter of trade-off between performance and availability.

thanks, Joachim
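[The temp-table half of the original question can be sketched the same way; names and paths are illustrative, and this is one possible approach rather than a confirmed answer from the thread. If the pseudo-temporary tables live in their own tablespace and nothing else references them, losing that storage can be handled by dropping and recreating the tables once the tablespace location exists again:]

```sql
-- Illustrative: a dedicated tablespace for expendable working tables.
CREATE TABLESPACE scratch LOCATION '/mnt/fast_raid0/pg_scratch';

CREATE TABLE staging_load (id integer, payload text) TABLESPACE scratch;

-- After losing the volume: only sessions using these tables at failure
-- time are affected. Drop the orphaned tables and recreate them;
-- the rest of the database remains consistent.
DROP TABLE staging_load;
CREATE TABLE staging_load (id integer, payload text) TABLESPACE scratch;
```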