Re: sustained update load of 1-2k/sec

From: Alex Turner
Subject: Re: sustained update load of 1-2k/sec
Date:
Msg-id: 33c6269f0508191231b12f5b@mail.gmail.com
In reply to: Re: sustained update load of 1-2k/sec  (Ron <rjpeace@earthlink.net>)
Responses: Re: sustained update load of 1-2k/sec
List: pgsql-performance
Don't forget that Ultra 320 is the speed of the bus, not of each drive.
No matter how many honking 15k disks you put on a 320MB/sec bus, you can
only get 320MB/sec total, and only so many outstanding I/Os in flight on
the bus.

Not so with SATA! Each drive is on its own bus, and you are only
limited by the speed of your PCI-X bus, which can be as high as
800MB/sec at 133MHz/64-bit.
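
(Rough arithmetic, if you want to sanity-check that figure: 64 bits x
133MHz is about 1064MB/sec of raw PCI-X bandwidth, so ~800MB/sec is a
believable real-world number once you subtract protocol overhead.)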

It's cheap and it's fast - all you have to do is pay for the
enclosure, which can be a bit pricey, but there are some nice 24-bay
and even 40-bay enclosures out there for SATA.

Yes, a 15k RPM drive will give you better seek times and better peak
throughput, but put them all on a single U320 bus and you won't see
much return past a stripe width of 3 or 4 drives.
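
(Quick back-of-the-envelope, assuming each 15k drive streams roughly
90MB/sec: 320MB/sec divided by 90MB/sec is about 3.5 drives, so the
shared U320 bus - not the disks - becomes the bottleneck right around
there.)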

If it's raw transactions per second, it's all about the xlog, baby -
and that's sequential writes. If it's data-warehouse style, it's all
about large block reads - and those are sequential reads.
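
To make that concrete - this is only a rough sketch, the mount point is
made up, and the numbers are 8.0-era starting points rather than tuned
values - the usual recipe is to give the xlog its own mirrored pair and
space the checkpoints out:

  # with the postmaster stopped, move the WAL onto its own spindles
  mv $PGDATA/pg_xlog /mnt/wal_raid1/pg_xlog
  ln -s /mnt/wal_raid1/pg_xlog $PGDATA/pg_xlog

  # postgresql.conf - keep fsync on, but give the WAL some room
  fsync = true
  wal_buffers = 64            # default of 8 x 8kB pages is tiny at this write rate
  checkpoint_segments = 64    # more 16MB segments between checkpoints
  checkpoint_timeout = 900    # seconds; fewer checkpoint storms on the data array

That keeps the xlog a steady sequential stream on its own array while
the data files soak up the random writes elsewhere.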

Alex Turner
NetEconomist
P.S. Sorry if I'm a bit punchy, I've been up since yesterday with
server upgrade nightmares that continue ;)

On 8/19/05, Ron <rjpeace@earthlink.net> wrote:
> Alex mentions a nice setup, but I'm pretty sure I know how to beat
> that IO subsystem's HW performance by at least 1.5x or 2x.  Possibly
> more.  (No, I do NOT work for any vendor I'm about to discuss.)
>
> Start by replacing the WD Raptors with Maxtor Atlas 15K II's.
> At 5.5ms average access, 97.4MB/s outer track throughput, 85.9MB/s
> average, and 74.4 MB/s inner track throughput, they have the best
> performance characteristics of any tested shipping HDs I know
> of.  (Supposedly the new SAS versions will _sustain_ ~98MB/s, but
> I'll believe that only if I see it under independent testing).
> In comparison, the numbers on the WD740GD are 8.1ms average access,
> 71.8, 62.9, and 53.9 MB/s outer, average and inner track throughputs
> respectively.
>
> Be prepared to use as many of them as possible (read: as many as you
> can afford) if you want to maximize transaction rates, particularly for
> the small transactions this application seems to involve.
>
> Next, use a better RAID card.  The TOL enterprise stuff (Xyratex,
> Engino, Dot-hill) is probably too expensive, but in the commodity
> market benchmarks indicate that Areca's 1GB buffer RAID cards
> currently outperform all the other commodity RAID stuff.
>
> 9 Atlas II's per card in a RAID 5 set, or 16 per card in a RAID 10
> set, should max the RAID card's throughput and come very close to, if
> not attain, the real-world peak bandwidth of the 64b 133MHz PCI-X
> bus they are plugged into.  Say somewhere in the 700-800MB/s range.
>
> Repeat the above for as many independent PCI-X buses as you have for
> a very fast commodity RAID IO subsystem.
>
> Two such configured cards used in the same manner as mentioned by
> Alex should easily attain 1.5x - 2x the transaction numbers mentioned
> by Alex unless there's a bottleneck somewhere else in the system design.
>
> Hope this helps,
> Ron Peacetree
>
> At 08:40 AM 8/19/2005, Alex Turner wrote:
> >I have managed tx speeds that high from postgresql going even as high
> >as 2500/sec for small tables, but it does require a good RAID
> >controller card (yes I'm even running with fsync on).  I'm using 3ware
> >9500S-8MI with Raptor drives in multiple RAID 10s.  The box wasn't too
> >$$$ at just around $7k.  I have two independent controllers on two
> >independent PCI buses to give max throughput, one with a 6 drive RAID
> >10 and the other with two 4 drive RAID 10s.
> >
> >Alex Turner
> >NetEconomist
> >
> >On 8/19/05, Mark Cotner <mcotner@yahoo.com> wrote:
> > > Hi all,
> > > I bet you get tired of the same ole questions over and
> > > over.
> > >
> > > I'm currently working on an application that will poll
> > > thousands of cable modems per minute and I would like
> > > to use PostgreSQL to maintain state between polls of
> > > each device.  This requires a very heavy amount of
> > > updates in place on a reasonably large table (100k-500k
> > > rows, ~7 columns mostly integers/bigint).  Each row
> > > will be refreshed every 15 minutes, or at least that's
> > > how fast I can poll via SNMP.  I hope I can tune the
> > > DB to keep up.
> > >
> > > The app is threaded and will likely have well over 100
> > > concurrent db connections.  Temp tables for storage
> > > aren't a preferred option since this is designed to be
> > > a shared nothing approach and I will likely have
> > > several polling processes.
> > >
> > > Here are some of my assumptions so far . . .
> > >
> > > HUGE WAL
> > > Vacuum hourly if not more often
> > >
> > > I'm getting 1700tx/sec from MySQL and I would REALLY
> > > prefer to use PG.  I don't need to match the number,
> > > just get close.
> > >
> > > Is there a global temp table option?  In memory tables
> > > would be very beneficial in this case.  I could just
> > > flush it to disk occasionally with an insert into blah
> > > select from memory table.
> > >
> > > Any help or creative alternatives would be greatly
> > > appreciated.  :)
> > >
> > > 'njoy,
> > > Mark
> > >
> > >
> > > --
> > > Writing software requires an intelligent person,
> > > creating functional art requires an artist.
> > > -- Unknown
> > >
> > >
