Discussion: Asynchronous commit | Transaction loss at server crash


Asynchronous commit | Transaction loss at server crash

From
Balkrishna Sharma
Date:
Hello,
Couple of questions:
1. For the 'Asynchronous commit' mode, I know that WAL transactions not flushed to permanent storage will be lost in the event of a server crash. Is it possible to know, in any shape or form, which non-flushed transactions were lost? I guess not, but wanted to confirm.

2. If the above is true, then for my application 'Asynchronous commit' is not an option. In that case, how is it possible to increase the speed of 'Synchronous commit'? Can an SSD rather than an HDD make a difference? Can throwing RAM at it have an impact? Is there some test somewhere of how much RAM will help to beef up the write process (for synchronous commit)?

I need to support several hundred concurrent updates/inserts from an online form with pretty low latency (maybe a couple of milliseconds at most). Think of a save to the database at every 'tab-out' in an online form.

Thanks,
-Bala




Re: Asynchronous commit | Transaction loss at server crash

From
Scott Marlowe
Date:
On Thu, May 20, 2010 at 10:54 AM, Balkrishna Sharma <b_ki@hotmail.com> wrote:
> Hello,
> Couple of questions:
> 1. For the 'Asynchronous commit' mode, I know that WAL transactions not
> flushed to permanent storage will be  lost in event of a server crash. Is it

That's not exactly correct.  Only transactions whose WAL records haven't
yet been flushed to disk may be lost, and that is typically a small
number of transactions.  Transactions written to the WAL but not yet to
the main data store will NOT be lost; crash recovery replays them from the WAL.

However, this may still not be an acceptable case for your usage.
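It's also worth noting that this doesn't have to be an all-or-nothing choice:
synchronous_commit can be turned off per session or per transaction, so only
the traffic that can tolerate a small loss window runs asynchronously. A rough
sketch (the table and column names here are made up for illustration):

-- autosave on each 'tab-out': can tolerate losing the last few hundred
-- milliseconds of commits on a crash (with default settings)
BEGIN;
SET LOCAL synchronous_commit TO off;
UPDATE form_draft SET value = 'x@y.z' WHERE session_id = 42 AND field = 'email';
COMMIT;

-- final submit: default synchronous commit, waits for the WAL flush
-- before reporting success
BEGIN;
INSERT INTO form_submission (session_id, submitted_at) VALUES (42, now());
COMMIT;

SET LOCAL only lasts for the current transaction, so everything else keeps
the default behaviour.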

Re: Asynchronous commit | Transaction loss at server crash

From
Scott Marlowe
Date:
On Thu, May 20, 2010 at 10:54 AM, Balkrishna Sharma <b_ki@hotmail.com> wrote:
> I need to support several hundreds of concurrent update/inserts from an
> online form with pretty low latency (maybe couple of milliseconds at max).
> Think of a save to database at every 'tab-out' in an online form.

You can get nearly the same performance by using a RAID controller
with battery backed cache without the same danger of losing
transactions.

installation on Sun Solaris for version 8.4

From
Sherry.CTR.Zhu@faa.gov
Date:

All,

  I downloaded the file for the Sun Solaris 8.4 version and extracted it. Can someone tell me where the configure script is? Which Unix account should run this script? Your help is very appreciated.

15.5. Installation Procedure
1. Configuration

The first step of the installation procedure is to configure the source tree for your system and choose the options you would like. This is done by running the configure script. For a default installation simply enter:

./configure

This script will run a number of tests to determine values for various system dependent variables and detect any quirks of your operating system, and finally will create several files in the build tree to record what it found. (You can also run configure in a directory outside the source tree if you want to keep the build directory separate.)

The default configuration will build the server and utilities, as well as all client applications and interfaces that require only a C compiler. All files will be installed under /usr/local/pgsql by default.

You can customize the build and installation process by supplying one or more of the following command line options to configure:

--prefix=PREFIX



Thanks much!

Xuefeng Zhu (Sherry)
Crown Consulting Inc. -- Oracle DBA
AIM Lab Data Team
(703) 925-3192

Re: installation on Sun Solaris for version 8.4

From
Scott Marlowe
Date:
On Thu, May 20, 2010 at 11:56 AM,  <Sherry.CTR.Zhu@faa.gov> wrote:
>
> All,
>
>   I downloaded the file for Sun solaris 8.4 version, and extracted.  Can
> someone tell me where the configure script is?  Which unix account should
> run this script?  You help is very appreciated.

It looks like you're compiling from source, so you can run this from
any account, really.

./configure
make
sudo make install

After that you can create a service account for it (just a regular
user account is fine, really) and use that to run initdb and pg_ctl:

sudo adduser postgres
sudo mkdir /usr/local/pgsql/data
sudo chown postgres.postgres /usr/local/pgsql/data
sudo su - postgres
initdb -D /usr/local/pgsql/data
pg_ctl -D /usr/local/pgsql/data start

OR something like that.  I'm a RedHat / Ubuntu guy so I'm not sure
what command in Solaris is used to create an account, but I'm sure you
do, so just substitute it up there where I ran adduser.

Re: installation on Sun Solaris for version 8.4

From
Scott Marlowe
Date:
And don't include

pgsql-admin-owner@postgresql.org

in your cc list etc...  Just pgsql-admin is plenty.

Re: Asynchronous commit | Transaction loss at server crash

From
Balkrishna Sharma
Date:
Good suggestion. Thanks.
What's your take on SSDs? I read somewhere that moving the WAL to an SSD helps as well.


Re: Asynchronous commit | Transaction loss at server crash

From
Scott Marlowe
Date:
SSDs and battery-backed cache kind of do the same thing, in that they
reduce random access times to close to 0.  However, most SSDs are still
not considered reliable due to their internal caching systems.  Hard
drives and BBU RAID are proven solutions; SSDs are still not really
there just yet in terms of proven reliability.

On Thu, May 20, 2010 at 1:02 PM, Balkrishna Sharma <b_ki@hotmail.com> wrote:
> Good suggestion. Thanks.
> What's your take on SSD ? I read somewhere that moving the WAL to SSD helps
> as well.



--
When fascism comes to America, it will be intolerance sold as diversity.

Re: Asynchronous commit | Transaction loss at server crash

From
Balkrishna Sharma
Date:
What if we don't rely on the SSD's cache, i.e. use a write-through setting and not write-back? Is the performance gain then not significant enough to justify an SSD?


Re: Asynchronous commit | Transaction loss at server crash

From
Scott Marlowe
Date:
The design of an SSD is such that it cannot run without caching.  It has
to cache in order to arrange what gets written out, because it cannot
write small blocks one at a time; it needs to write large chunks
together at once.

On Thu, May 20, 2010 at 2:10 PM, Balkrishna Sharma <b_ki@hotmail.com> wrote:
> What if we don't rely on the cache of SSD, i.e. have write-through setting
> and not write-back. Is the performance gain then not significant to justify
> SSD ?



--
When fascism comes to America, it will be intolerance sold as diversity.

Re: Asynchronous commit | Transaction loss at server crash

From
Balkrishna Sharma
Date:
But if we have a write-through setting, a failure before the cache can write to disk will result in an incomplete transaction (i.e. the host will know that the transaction was incomplete). Right?

Two things I need for my system are:
1. An unsuccessful transaction with a notification back that it was unsuccessful is OK, but reporting a transaction as successful and then not being able to write it to the database is not acceptable (ever).
2. My write time (random access time) should be as small as possible.

Can an SSD with a write-through cache achieve this?

Thanks for your inputs.



Re: Asynchronous commit | Transaction loss at server crash

From
Scott Marlowe
Date:
On Thu, May 20, 2010 at 2:26 PM, Balkrishna Sharma <b_ki@hotmail.com> wrote:
> But if we have write-through setting, failure before the cache can write to
> disk will result in incomplete transaction (i.e. host will know that the
> transaction was incomplete). Right ?
> Two things I need for my system is:
> 1. Unsuccessful transactions with a notification back that it is
> unsuccessful is ok but telling it is a successful transaction and not being
> able to write to database is not acceptable (ever).
> 2. My write time (random access time) should be as minimal as possible.
> Can a SSD with write-thru cache achieve this ?
> Thanks for your inputs.

Not at present.  The write cache in an SSD cannot be disabled, because
it has to aggregate a bunch of writes together.  So, it reads say
128k, changes x%, then writes it back out.  During this period, power
loss could result in those writes being lost, even though the SSD will
have reported success already.  There are some that supposedly have a
big enough capacitor to finish flushing this write cache, but I have
seen no definitive tests with pgsql showing that this actually keeps
your data safe in the event of power loss during a write.

A battery-backed caching RAID controller CAN be depended on, because
they have been tested and shown to do the right thing.

Re: Asynchronous commit | Transaction loss at server crash

From
Jesper Krogh
Date:
On 2010-05-20 22:26, Balkrishna Sharma wrote:
> But if we have write-through setting, failure before the cache can write to disk will result in incomplete
> transaction (i.e. host will know that the transaction was incomplete). Right?
>
> Two things I need for my system is:
> 1. Unsuccessful transactions with a notification back that it is unsuccessful is ok but telling it is a successful
> transaction and not being able to write to database is not acceptable (ever).
> 2. My write time (random access time) should be as minimal as possible.
> Can a SSD with write-thru cache achieve this?
>

A battery-backed RAID controller is not that expensive (in the range of
1 or 2 SSD disks), and it is (more or less) a silver bullet for the task you describe.

SSDs "might" solve the problem, but come with a huge range of unknowns
at the moment:

* Wear over time.
* Degraded performance in write-through mode.
* Degrading performance over time.
* Writeback mode not robust to power failures.

Backing your system (SSDs) with a UPS and trusting it fully
could solve most of the problems (running in writeback mode).
But comparing the complexity, I would say that the battery-backed
RAID controller is way easier to get right.

... if you had a huge dataset you were doing random reads into and
couldn't beef up your system with more memory (cheaply), SSDs might
be a good solution for that.

--
Jesper


Re: Asynchronous commit | Transaction loss at server crash

From
Greg Smith
Date:
Jesper Krogh wrote:
> A Battery Backed raid controller is not that expensive. (in the range
> of 1 or 2 SSD disks).
> And it is (more or less) a silverbullet to the task you describe.

Maybe even less; in order to get an SSD that's reliable at all in terms
of good crash recovery, you have to buy a fairly expensive one.  Also, and
this is really important, you really don't want to deploy onto a single
SSD and put critical system files there.  Their failure rates are not
that low.  You need to put them into a RAID-1 setup and budget for two
of them, which brings you right back to the cost of a battery-backed controller.

Also, it's questionable whether an SSD is even going to be faster than
standard disks for the sequential WAL writes anyway, once a non-volatile
write cache is available.  Sequential writes are the area where the gap
in performance between SSDs and spinning disks is the smallest.

> Plugging your system (SSD's) with an UPS and trusting it fully
> could solve most of the problems (running in writeback mode).

UPS batteries fail, and people accidentally knock out server power
cords.  It's a pretty bad server that can't survive someone tripping
over the cord while it's busy, and that's the situation the "use a UPS"
idea doesn't improve.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: Asynchronous commit | Transaction loss at server crash

From
Rosser Schwarz
Date:
On Thu, May 20, 2010 at 4:04 PM, Greg Smith <greg@2ndquadrant.com> wrote:

> Also, it's questionable whether a SSD is even going to be faster than
> standard disks for the sequential WAL writes anyway, once a non-volatile
> write cache is available.  Sequential writes to SSD are the area where the
> gap in performance between them and spinning disks is the smallest.

Yeah, at this point, the only place I'd consider using an SSD in
production is as a tablespace for indexes.  Their win is huge for
random I/O, and indexes can always be rebuilt.  Data, not so much.
Transaction logs, even less.
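Something along these lines, for instance (a rough sketch; the mount point,
table and index names are made up):

-- assumes an empty directory on the SSD, owned by the postgres OS user
CREATE TABLESPACE ssd_indexes LOCATION '/ssd/pg_indexes';

-- keep only rebuildable objects there; table data and WAL stay on proven storage
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE ssd_indexes;

If the SSD dies, the indexes can simply be rebuilt on other storage.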

rls

--
:wq

Re: Asynchronous commit | Transaction loss at server crash

From
Greg Smith
Date:
Balkrishna Sharma wrote:
> I need to support several hundreds of concurrent update/inserts from
> an online form with pretty low latency (maybe couple of milliseconds
> at max). Think of a save to database at every 'tab-out' in an online form.

I regularly see 2000 - 4000 small write transactions per second on
systems with a battery-backed write cache and a moderate disk array
attached.  2000 TPS = 0.5 ms, on average.  Note however that it's
extremely difficult to bound the worst-case behavior possible here
anywhere near that tight.  Under a benchmark load I can normally get
even an extremely tuned Linux configuration to occasionally pause for
1-3 seconds at commit time, when the OS write cache is full, a
checkpoint is finishing, and the client doing the commit is stuck
waiting for that.  They're rare but you should expect to see that
situation sometimes.

We know basically what causes that and how to make it less likely to
happen in a real application.  But the possibility is still there, and
if your design cannot tolerate an occasional latency uptick you may be
disappointed because that's very, very hard to guarantee with the
workload you're expecting here.  There are plenty of ideas for how to
tune in that direction both at the source code level and by carefully
selecting the OS/filesystem combination used, but that's not a very well
explored territory.  The checkpoint design in the database has known
weaknesses in this particular area, and they're impossible to solve just
by throwing hardware at the problem.
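For reference, the knobs usually involved in that kind of tuning can be
inspected straight from SQL; a quick sketch using the standard 8.x-era
parameter names:

-- checkpoint frequency and spreading, plus the commit behaviour discussed above
SELECT name, setting, unit
  FROM pg_settings
 WHERE name IN ('checkpoint_segments',
                'checkpoint_timeout',
                'checkpoint_completion_target',
                'wal_buffers',
                'synchronous_commit');

Raising checkpoint_completion_target spreads checkpoint writes out over more
of the interval, which is the usual first step against those commit-time pauses.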

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.us


Re: Asynchronous commit | Transaction loss at server crash

From
Jesper Krogh
Date:
On 2010-05-21 00:04, Greg Smith wrote:
> Jesper Krogh wrote:
>> A Battery Backed raid controller is not that expensive. (in the
>> range of 1 or 2 SSD disks). And it is (more or less) a silverbullet
>> to the task you describe.
>
> Maybe even less; in order to get a SSD that's reliable at all in
> terms of good crash recovery, you have buy a fairly expensive one.
> Also, and this is really important, you really don't want to deploy
> onto a single SSD and put critical system files there.  Their failure
> rates are not that low.  You need to put them into a RAID-1 setup and
> budget for two of them, which brings you right back to

I'm currently building an HP D2700 box with 25 x X25-M SSDs; I have added
an LSI 8888ELP RAID controller with 256MB BBWC and two separate UPSes
for the two independent PSUs on the D2700 (in the pricing numbers that
wasn't a huge part of it).

It has to do with the application. It consists of around 1TB of data that
is accessed fairly rarely and on a more or less random basis. A web application
is connected that tries to deliver, say, 200 random rows from a main table,
traversing to connected tables for information for each of them, so
an individual page can easily add up to 1000+ random reads (just
for confirming row information).

What we have done so far is to add quite a big amount of code that tries to
collapse the data structure and cache each row for the view, so the 1000+
gets down to the order of 200, but it raises the complexity of the application,
which isn't a good thing either.

I still haven't got the application onto it, let alone 12 months of production
usage on top, but so far I'm really looking forward to seeing it, because for
this application it seems like a very good fit.

And about the disk wear: as long as they don't all blow up at the same time,
I don't mind having to change a disk every now and then, so it'll
be really interesting to see if the 20GB/disk/day (the X25-M is specced for)
is going to be something that really matters in my hands.

I plan on putting the xlog and WAL archive on a fibre-channel slice, so they
essentially don't count toward the above numbers.

I don't know if bonnie is accurate in that range, but the last run delivered
over 500K random 4KB reads/s, and it saturated the 2 x 3Gbps SAS links
out of the controller in sequential reads/writes.

That was over something like 10 runs.

> Also, it's questionable whether a SSD is even going to be faster than
> standard disks for the sequential WAL writes anyway, once a
> non-volatile write cache is available.  Sequential writes to SSD are
> the area where the gap in performance between them and spinning disks
> is the smallest.


They are not in a totally different ballpark from spinning disks, but they
require much less "intelligent logic" in the OS/filesystem for read-ahead,
block I/O and elevator scheduling...

>> Plugging your system (SSD's) with an UPS and trusting it fully
>> could solve most of the problems (running in writeback mode).
>
> UPS batteries fail, and people accidentally knock out over server
> power cords.  It's a pretty bad server that can't survive someone
> tripping over the cord while it's busy, and that's the situation the
> "use a UPS" idea doesn't improve.

Mounted in a rack with "a lot" of cable binders. Keep in mind that
it should only need power for a few ms before the volatile cache is
flushed.

But I totally agree with you, it is a matter of what applications you're
building on top.

... and we do back up to tape every night, so the "worst case" is not that
the system blows up. It is more:
* The system ends up not performing any better due to "something unknown".
or
* The system ends up taking way too much work on the system administration
side in changing worn disks and rebuilding arrays and such.

This is not the type of system where a "single lost transaction" matters;
it is more in the analytics/data-mining category, where last week's backup is
more or less as good as today's.

Jesper
--
Jesper