Re: RAMFS with Postgres

From: Christopher Browne
Subject: Re: RAMFS with Postgres
Date:
Msg-id: m38xzxku87.fsf@knuth.cbbrowne.com
In reply to: Re: RAMFS with Postgres ("vinita bansal" <sagivini@hotmail.com>)
List: pgsql-general
Quoth alexs@advfn.com (Alex Stapleton):
> On 21 Jul 2005, at 17:02, Scott Marlowe wrote:
>
>> On Thu, 2005-07-21 at 02:43, vinita bansal wrote:
>>
>>> Hi,
>>>
>>> My application is database intensive. I am using 4 processes since
>>> I have 4 processors on my box. There are times when all 4 processes
>>> write to the database at the same time, and times when all of them
>>> read all at once. The database is definitely not read-only. Out of
>>> the entire database, there are a few tables which are accessed most
>>> of the time, and they are the ones which seem to be the bottleneck.
>>> I am trying to get as much performance improvement as possible by
>>> putting some of these tables in RAM, so that they don't have to be
>>> read from/written to hard disk as they will be directly available
>>> in RAM. Here's where Slony comes into the picture, since we'll have
>>> to maintain a copy of the database somewhere before running our
>>> application (everything in RAM will be lost if there's a power
>>> failure or anything else goes wrong).
>>>
>>> My concern is how good Slony is. How much time does it take to
>>> replicate the database? If the time taken to replicate is much more
>>> than the perf. improvement we are getting by putting tables in
>>> memory, then there's no point in going in for such a solution. Do I
>>> have an alternative?
>>>
>>
>> My feeling is that you may be going about this the wrong way.  Most
>> likely the issue so far has been I/O contention.  Have you tested
>> your application using a fast, battery backed caching RAID
>> controller on top of, say, a 10 disk RAID 1+0 array?  Or even RAID
>> 0 with another machine as the slony slave?
>
> Isn't that slightly cost prohibitive? Even basic memory has
> enormously fast access/throughput these days, and for a fraction of
> the price.
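For concreteness, the "tables in RAM" setup being debated can be sketched roughly as below. This is a hypothetical illustration, not a recommendation: it assumes PostgreSQL 8.0+ tablespaces, and the mount point, database, and table names are made up.

```
# Hypothetical sketch of RAM-backed tables (run as root / postgres).
# Everything placed here vanishes on power failure -- hence the need
# for a disk-backed Slony replica.
mkdir -p /mnt/pg_ram
mount -t tmpfs -o size=2g tmpfs /mnt/pg_ram
chown postgres:postgres /mnt/pg_ram

psql -d mydb -c "CREATE TABLESPACE ram_space LOCATION '/mnt/pg_ram';"
psql -d mydb -c "ALTER TABLE hot_table SET TABLESPACE ram_space;"
```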

Actually, the real question is whether or not *data loss* is "cost
prohibitive."

If you can accept significant risk of data loss, then there are plenty
of optimizations available.

If the cost of data loss is high enough, then building some form of
disk array is likely to be the answer.

No other answer than "beefing up disk" will speed things up without
introducing much greater risks of data loss.
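As a concrete illustration of that trade-off, the kind of optimization available when data loss is acceptable looks like this postgresql.conf fragment (illustrative only, not a recommendation):

```
# fsync = off skips the flush that makes commits durable; a crash can
# corrupt the whole cluster, not merely lose recent transactions.
fsync = off

# A milder, still-durable trade: group several commits into one flush
# by delaying briefly (commit_delay is in microseconds).
#commit_delay = 100
#commit_siblings = 5
```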

>> Slony, by the way, is quite capable, but using a RAMFS master and a
>> Disk drive based slave is kind of a recipe for disaster in ANY
>> replication system under heavy load, since it is quite possible
>> that the master could get very far ahead of the slave, since Slony
>> is asynchronous replication.  At some point you could have more
>> data waiting to be replicated than your ramfs can hold and have
>> some problems.
>>
>> If a built in RAID controller with battery backed caching isn't
>> enough, you might want to look at a large, external storage array
>> then.  many hosting centers offer these as a standard part of their
>> package, so rather than buying one, you might want to just rent
>> one, so to speak.
>
> Again with the *money*. RAM = cheap. Disks = expensive. At least when
> you look at speed/$. You're right about replicating to disk and to
> RAM though; that is pretty likely to result in horrible problems if
> you don't keep load down. For some workloads, though, I can see it
> working. As long as the total amount of data doesn't get larger than
> your RAMFS, it could probably survive.

Memory does *zero* to improve the speed of committing transactions
onto disk, and therefore every dollar spent on memory contributes
nothing to that purpose.

More disks can (in some sense) help achieve the goal of committing
more transactions in less time, and is therefore a potentially useful
strategy for increasing transactions per second.

The fact that RAM might be pretty cheap doesn't mean it helps commit
transactions to disk faster, and it is therefore entirely a red
herring.
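One way to settle this is to measure transactions per second directly rather than argue from component prices; pgbench (in PostgreSQL's contrib) is the usual tool. The database name and scale factor below are arbitrary:

```
# Hypothetical sketch: measure commit throughput with pgbench.
createdb bench
pgbench -i -s 10 bench          # initialize, scale factor 10
pgbench -c 4 -t 1000 bench      # 4 clients, like the 4 processes above
# Re-run after each change (RAM tables, RAID, fsync settings) and
# compare the reported tps figures.
```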
--
output = ("cbbrowne" "@" "gmail.com")
http://cbbrowne.com/info/linuxdistributions.html
The Dalai  Lama walks up to  a hot dog  vendor and says, "Make  me one
with everything."
