Re: How clustering for scale out works in PostgreSQL

From: Jim Nasby
Subject: Re: How clustering for scale out works in PostgreSQL
Msg-id: 5231DB62.4080507@nasby.net
In reply to: Re: How clustering for scale out works in PostgreSQL (Kevin Grittner <kgrittn@ymail.com>)
Responses: Re: How clustering for scale out works in PostgreSQL (Kevin Grittner <kgrittn@ymail.com>)
List: pgsql-performance
On 8/31/13 9:44 AM, Kevin Grittner wrote:
> bsreejithin <bsreejithin@gmail.com> wrote:
>
>> What I posted is about a new setup that's going to come
>> up..Discussions are on whether to setup DB cluster to handle 1000
>> concurrent users.
>
> I previously worked for Wisconsin Courts, where we had a single
> server which handled about 3000 web users collectively generating
> hundreds of web hits per second, in turn generating thousands of queries per
> second, while at the same time functioning as a replication target
> from 80 sources sending about 20 transactions per second which
> modified data (many having a large number of DML statements per
> transaction) against a 3 TB database.  The same machine also hosted
> a transaction repository for all modifications to the database,
> indexed for audit reports and ad hoc queries; that was another 3
> TB.  Each of these was running on a 40-drive RAID.
>
> Shortly before I left we upgraded from a machine with 16 cores and
> 256 GB RAM to one with 32 cores and 512 GB RAM, because there is
> constant growth in both database size and load.  Performance was
> still good on the smaller machine, but monitoring showed we were
> approaching saturation.  We had started to see some performance
> degradation on the old machine, but were able to buy time by
> reducing the size of the web connection pool (in the Java
> application code) from 65 to 35.  Testing different connection pool
> sizes showed that pool size to be optimal for our workload on that
> machine; your ideal pool size can only be determined through
> testing.
>
> You can poke around in this application here, if you like:
> http://wcca.wicourts.gov/
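
(As an aside, a quick-and-dirty way to find that sweet spot is to sweep
client counts with pgbench. This is just a sketch, not what Kevin actually
ran; the scale factor and run length are placeholders you'd tune to your
hardware:)

    # Initialize a pgbench database at scale factor 100 (~1.5GB)
    pgbench -i -s 100 bench
    # Run the same 5-minute workload at several client counts and
    # watch where throughput levels off or starts to drop
    for c in 10 20 35 50 65; do
        pgbench -c $c -j 4 -T 300 bench | grep tps
    done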

Just to add another data point...

We run multiple ~2TB databases that see an average workload of ~700 transactions per second, with peaks well above
4000 TPS. This is on servers with 512G of memory and varying numbers of cores.
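
(If anyone wants to check their own numbers, one way to get average TPS is
to diff the counters in pg_stat_database; the 60-second window here is
arbitrary:)

    # Sum commits+rollbacks across all databases, wait, sample again;
    # the delta divided by the interval is your average TPS
    psql -At -c "SELECT sum(xact_commit + xact_rollback) FROM pg_stat_database"
    sleep 60
    psql -At -c "SELECT sum(xact_commit + xact_rollback) FROM pg_stat_database"
    # (second number - first number) / 60 = average TPS over the window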

We probably wouldn't need such beefy hardware for this, except that our IO performance (as seen by the server) is
pretty pathetic, there are some flaws in the data model (which I inherited), and Rails likes to do some things that
are patently stupid. Were it not for those issues we could probably get by with 256G or even less.

Granted, the servers we're running on cost around $30k a pop and there's a SAN behind them. But by the time you get to
that kind of volume you should be able to afford good hardware... if not, you should be rethinking your business
model! ;)

If you set up some form of replication it's very easy to move to larger servers as you grow. I'm sure that when Kevin
moved their database it was a complete non-event.
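
(For the archives, the moving parts of a basic streaming-replication setup
look roughly like this; primary.example.com, the replicator role, and
/pgdata are all placeholders:)

    # On the primary (postgresql.conf), roughly:
    #   wal_level = hot_standby    # called 'replica' on newer releases
    #   max_wal_senders = 5
    # ...plus a 'replication' line for the standby in pg_hba.conf.

    # On the new box: clone the primary; -R writes the recovery
    # settings (primary_conninfo etc.) so it starts as a standby
    pg_basebackup -h primary.example.com -U replicator -D /pgdata -R -P

    # When you're ready to cut over, promote the standby and
    # repoint the application at it
    pg_ctl promote -D /pgdata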
--
Jim C. Nasby, Data Architect                       jim@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net

