Discussion: PostgreSQL survey
Hi community,
We have a mission-critical system on Oracle and would like to migrate it to PostgreSQL. We would like to know from this community:
1. Anyone using PostgreSQL for enterprise mission critical system ?
2. How big are the servers you are running PostgreSQL, Is there anyone using more than 32 cores or 256GB memory ?
3. What OS you are using to run this mission critical system on PostgreSQL ? Linux, Unix ?
4. Who provides PostgreSQL support ? Do you have any support contract with a third party company ? If so, how much is the monthly support fee ?
Thank you for your help in deciding on the migration.
Best Regards,
cesarmk
Cesar,

> 1. Anyone using PostgreSQL for enterprise mission critical system ?

Many people, including:

Caixa Bank
The Chicago Futures Exchange
Afilias
The Federal Aviation Administration
The French Social Security office

> 2. How big are the servers you are running PostgreSQL, Is there anyone
> using more than 32 cores or 256GB memory ?

Current PostgreSQL will run on this size of machine, but in general will fail to take advantage of the large number of cores and large amounts of memory; that is, performance at 64 cores and 512GB of memory will generally not be substantially better than at half that. This is one of the primary focuses of PostgreSQL 9.2 development, and probably 9.3 development as well. PostgreSQL scales quite well up to 32 cores on most workloads, however.

> 3. What OS you are using to run this mission critical system on PostgreSQL
> ? Linux, Unix ?

Most of our users run on Linux. However, I know of mission-critical systems running on Solaris, AIX, and even Windows.

> 4. Who provides PostgreSQL support ? Do you have any support contract with
> a third party company ? If so, how much is the monthly support fee ?

The following companies provide 24/7 support for PostgreSQL, depending on your part of the world:

- EnterpriseDB
- 2ndQuadrant
- Fujitsu
- Red Hat (with JBoss only, AFAIK)
- Credativ
- S.R.A. Japan
- Command Prompt
- Cybertec.AT

There are probably additional support companies of which I am unaware. Fees for support are usually annual, and are usually around 10% to 20% of the cost of an Oracle server support contract.

--Josh Berkus

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Cesar Massaki Kamiya wrote:

> 1. Anyone using PostgreSQL for enterprise mission critical system ?

Josh mentioned a few. I'm aware of others, but don't want to speak for anything beyond my own experience.

The Wisconsin Court System is using PostgreSQL for everything from filing appeals to the State Supreme Court (the court has adopted a rule that the appeal must be submitted electronically), to case management for the Circuit Courts, to the daily operation of various court agencies (Board of Bar Examiners, Office of Lawyer Regulation, etc.). We have about 3000 directly connected users, dozens of web applications getting millions of hits per day, and electronic interfaces to many business partners.

We have been very happy with PostgreSQL. It is faster and more reliable than the commercial software from which we converted. It has more features and requires fewer resources to manage. Support from the community (on the mailing lists) is far superior to what we got under a paid contract with the commercial product. With open source, we have been able to "scratch our own itches" by adding features we needed -- something which is just not possible with most commercial software. The new extensions support, and the related PGXN site, are fantastic. I haven't seen a down side to PostgreSQL compared to any other product in any area which matters to our shop. Frankly, if PostgreSQL and all commercial products cost the same, my first choice would be PostgreSQL.

> 2. How big are the servers you are running PostgreSQL, Is there
> anyone using more than 32 cores or 256GB memory ?

Our biggest server, which has just gone into production, is 32 cores with 256GB RAM. We are able to comfortably support several TB of databases running tens of millions of database transactions per day on servers with 16 cores and 128GB RAM.

In benchmarking the latest development code, containing features targeted for next year's performance-oriented release, I was seeing over 500,000 tps for a read-only transaction load and over 30,000 tps for a mixed load including a lot of updates. They are not done adding performance features for the next release, though. :-)

> 3. What OS you are using to run this mission critical system on
> PostgreSQL ? Linux, Unix ?

We started out running PostgreSQL on Windows, but it didn't make sense to use an OS which was so much less reliable (at least in our experience) than the database itself. We converted it all to Linux. No regrets there, either.

> 4. Who provides PostgreSQL support ? Do you have any support
> contract with a third party company ? If so, how much is the
> monthly support fee ?

We have a team of four DBAs to support the 200 databases we run, spread out over 80 locations. We're able to handle most issues. Where we need additional help, the community support on the mailing lists is fantastic. As Josh mentioned, there are several great companies offering contract support for those who are more comfortable with that.

I hope that is of some help. If you have any questions, just ask.

-Kevin
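[Editorial note: throughput comparisons like the read-only vs. mixed-load numbers above are typically produced with pgbench, the benchmark tool bundled with PostgreSQL. A minimal sketch follows; the scale factor, client count, and duration are illustrative assumptions, not the settings actually used in the benchmarks described above.]

```shell
# Minimal pgbench sketch of a read-only vs. mixed-load comparison.
# Scale factor, client count, and duration are illustrative guesses.

createdb bench
pgbench -i -s 100 bench          # initialize test tables at scale factor 100

# Read-only workload: -S runs the built-in SELECT-only script.
pgbench -S -c 32 -j 8 -T 60 bench

# Mixed workload: the default TPC-B-like script, heavy on UPDATEs.
pgbench -c 32 -j 8 -T 60 bench
```

Both runs report a transactions-per-second figure, which makes the two workloads directly comparable on the same hardware.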
> -----Original Message-----
> From: pgsql-advocacy-owner@postgresql.org [mailto:pgsql-advocacy-
> owner@postgresql.org] On Behalf Of Kevin Grittner
> Sent: Monday, December 12, 2011 3:43 PM
> To: cesarmk@gmail.com; pgsql-advocacy@postgresql.org
> Subject: Re: [pgsql-advocacy] PostgreSQL survey
>
> > 2. How big are the servers you are running PostgreSQL, Is there
> > anyone using more than 32 cores or 256GB memory ?
>
> Our biggest server, which has just gone into production, is 32 cores
> with 256GB RAM. We are able to comfortably support several TB of
> databases running tens of millions of database transactions per day
> on servers with 16 cores and 128GB RAM. In benchmarking the latest
> development code, containing features targeted for next year's
> performance-oriented release, I was seeing over 500,000 tps for a
> read-only transaction load and over 30,000 tps for a mixed load
> including a lot of updates. They are not done adding performance
> features for the next release, though. :-)

Sorry to derail the thread - but 500k tps on read and 30k tps on mixed workload of a single server - wow... Do you have a comparison for the workload against 9.1? I'm curious about the factor of improvement.

Thanks,
Brad.
> 1. Anyone using PostgreSQL for enterprise mission critical system ?
I've worked at two companies that run their mission critical applications on PostgreSQL.
> 2. How big are the servers you are running PostgreSQL, Is there
> anyone using more than 32 cores or 256GB memory ?
Ours are likely about half that size and work wonderfully.
> 3. What OS you are using to run this mission critical system on
> PostgreSQL ? Linux, Unix ?
I've seen both RHEL and Debian.
> 4. Who provides PostgreSQL support ? Do you have any support
> contract with a third party company ? If so, how much is the
> monthly support fee ?
We've got impeccable support from the mailing lists. It is tough to find a DBA who knows PostgreSQL. In general, my experience has been that there are far fewer warts on PostgreSQL than on Oracle.
Nik
>> 1. Anyone using PostgreSQL for enterprise mission critical system ?

xTuple has over 300 paying customers running their mission critical ERP system on PostgreSQL - and thousands more community users running the free xTuple PostBooks edition.

>> 2. How big are the servers you are running PostgreSQL, Is there
>> anyone using more than 32 cores or 256GB memory ?

Most of our DBs are likely smaller than what you're looking for here.

>> 3. What OS you are using to run this mission critical system on
>> PostgreSQL ? Linux, Unix ?

Most often Linux (RHEL, Ubuntu, CentOS, SuSE, more) ... but also Windows and Mac OS X.

>> 4. Who provides PostgreSQL support ? Do you have any support
>> contract with a third party company ? If so, how much is the
>> monthly support fee ?

We include support for the database in our ERP support, as we leverage the PostgreSQL backend (pl/pgsql, triggers, view-based APIs, etc.) for most of the application business logic.

Regards,
Ned

--
Ned Lilly
President and CEO
xTuple
119 West York Street
Norfolk, VA 23510
tel. 757.461.3022 x101
email: ned@xtuple.com
www.xtuple.com
On 12/12/2011 03:42 PM, Kevin Grittner wrote:

> Our biggest server, which has just gone into production, is 32 cores
> with 256GB RAM. We are able to comfortably support several TB of
> databases running tens of millions of database transactions per day
> on servers with 16 cores and 128GB RAM.

I think around 16 cores (two CPU sockets) and 64 to 128GB of RAM is the sweet spot for PostgreSQL up to version 9.1. I have customers with servers going up to 48 cores and 256GB of RAM...they really don't improve that much yet though.

Part of this isn't just Postgres, it's the hardware. Check out my "Bottom-Up Database Benchmarking" talk slides at http://www.2ndquadrant.com/en/talks/ and stare carefully at the "DDR3 Era" results. The servers that hit the highest memory throughput there are the 4 X 6172 (48 cores, AMD) and 4 X E7540 (48 HT cores, Intel) servers. But trace those curves back to where they start. Until you clear 6 active cores, they're sometimes significantly slower than the smaller boxes.

That's the trade-off in the current architecture; the systems with lots of cores and RAM segment things such that no one core can really achieve great speeds on its own. Those big servers are only worthwhile when the workload is always heavy. If it drops to only a few processes...you'd do better with one of the two-socket Intel boxes, like the 2 X X5560 (8 cores!) and 2 X E5620 (16 HT cores) shown there. It's kind of embarrassing when I discover my $250 i7-870 at home outruns a customer's 48-core beast when running a single-core job, because I get 10GB/s per core while they get 5GB/s.

If you really do prioritize for always busy (like Kevin's workload), the bigger systems can still make perfect sense. There's no denying that they can hit major throughput when enough processes are running. And the scalability improvements coming in PostgreSQL 9.2 will help CPU-bound systems go even faster. Just make sure you're really CPU bound, though. I'm having one of these discussions right now with a customer who is I/O bound specifically on seeking; they really need to re-prioritize their budget toward fewer cores and less RAM than they'd planned for, and use SSD storage instead.

I'm working on a white paper right now about how long it takes to populate all the RAM usefully on a big system. It's not pretty seeing how long a server with 256GB of RAM but regular storage takes to return to typical performance after a reboot; the answer can be measured in hours sometimes.

> 3. What OS you are using to run this mission critical system on PostgreSQL ? Linux, Unix ?

Most of my customers are on Linux, with the same basic line-up Josh Berkus already listed as other platforms. The only major platform I have multiple customers on that I haven't seen mentioned yet is FreeBSD. The main reason to pick Linux over FreeBSD is general popularity and broader hardware support. You still need to be pretty careful what server hardware you use for FreeBSD, and it's tougher to hire people who know it well than Linux. Major reasons to choose FreeBSD include DTrace (which still has advantages over SystemTap on Linux, especially in ease of use and available sample code), and ZFS.

As someone who provides an answer to question (4), I don't want to expand this into an extended ad. Here's a list of places with notable lists or stories about serious PostgreSQL deployments, or close-to-standard commercial versions of PostgreSQL, and only one of them happens to mention me twice:

http://www.postgresql.org/about/users/
http://en.wikipedia.org/wiki/PostgreSQL#Prominent_users
http://www.enterprisedb.com/success-stories/customers
http://www.2ndquadrant.com/en/case-studies/

I think our case studies have something interesting to say about good ways to approach deploying an open-source stack; I wouldn't mention them otherwise.

--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
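[Editorial note: one common Linux workaround for the slow post-reboot warm-up Greg describes is to stream the cluster's data files through the filesystem cache before opening the system to users. A rough sketch follows; the data directory path is an assumption for a typical Debian-style install, and later PostgreSQL releases offer the pg_prewarm contrib module as a more targeted alternative.]

```shell
# Rough sketch: warm the OS page cache after a reboot by reading the
# PostgreSQL data files once. PGDATA below is an assumed default path;
# adjust it for your installation.
PGDATA=${PGDATA:-/var/lib/postgresql/9.1/main}

if [ -d "$PGDATA/base" ]; then
    # Reading every file pulls its pages into the kernel page cache,
    # so subsequent random reads are served from RAM instead of disk.
    tar -cf /dev/null -C "$PGDATA" base
fi

# Show how much memory is now being used as buffers/cache.
free -m
```

This only warms the OS cache, not PostgreSQL's shared_buffers, but with large amounts of RAM the OS cache is usually where most of the working set lives anyway.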
Sorry for the late reply. I have been swamped with work this week and haven't gotten to this until now. (Development and preparing to deliver PostgreSQL training in Malaysia.)

On Fri, Dec 9, 2011 at 1:42 PM, Cesar Massaki Kamiya <cesarmk@gmail.com> wrote:

> Hi community,
>
> We have a mission critical system on Oracle, and would like to migrate to
> PostgreSQL. Would like to know from this community:
>
> 1. Anyone using PostgreSQL for enterprise mission critical system ?

It's hard to know how many people are using LedgerSMB in production or how many paying customers exist between Command Prompt and us. I would consider (as Ned does!) accounting and ERP to be mission critical. However, our largest customers are midsized businesses, which limits the size and scope of what you are talking about. The largest is a decent-sized financial services company.

> 2. How big are the servers you are running PostgreSQL, Is there anyone using
> more than 32 cores or 256GB memory ?

We don't have any businesses deploying anything that big. I think our largest deployment is either 4 or 8 cores, and it performs very well under a very complex load (the accounting app is web-based, so we have to do some less than optimal things performance-wise to ensure that the state of accounting access gets tracked correctly between sessions, and yet PostgreSQL is rarely if ever the bottleneck).

> 3. What OS you are using to run this mission critical system on PostgreSQL ?
> Linux, Unix ?

Linux.

> 4. Who provides PostgreSQL support ? Do you have any support contract with
> a third party company ? If so, how much is the monthly support fee ?

I don't have anyone else to add to the list at present.

Best Wishes,
Chris Travers
"Nicholson, Brad (Toronto, ON, CA)" wrote:

> 500k tps on read and 30k tps on mixed workload of a single server -
> wow... Do you have a comparison for the workload against 9.1? I'm
> curious about the factor of improvement.

Unfortunately, I was only able to grab a few days on this box before it was put into production, and a large enough battery of tests to have high confidence in the results took about 20 hours to run. I focused on the impact of specific proposed patches against current 9.2 development HEAD; unfortunately, I didn't have time to do a run comparing to the 9.1 production release. :-(

I may be able to get the machine for an occasional window of time on a few more weekends. With dedicated time on that machine being a fairly precious resource, I need to pick the tests to run pretty carefully. Perhaps a 9.1 to 9.2 comparison will be a good one as we near release time next year. Before that, I'm inclined to think that any time I can grab would be more valuable evaluating proposed patches.

With all of that said, I'd bet that Robert Haas has some overall numbers from the big machine he's been able to use. Robert?

-Kevin
On Dec 9, 2011, at 3:42 PM, Cesar Massaki Kamiya wrote:

> 1. Anyone using PostgreSQL for enterprise mission critical system ?

Enova Financial runs all of our OLTP and a large portion of our reporting on Postgres. Our largest OLTP database is ~1.8TB, and the last time I measured it (almost 2 years ago) it averaged 640TPS and peaked at over 4kTPS.

Downtime on that database costs the company well over $100k/hour.

> 2. How big are the servers you are running PostgreSQL, Is there anyone using more than 32 cores or 256GB memory ?

We don't currently have anything over 32 cores, but we have several servers with 1/2TB of memory. The vast majority of that memory is used by the filesystem cache.

> 3. What OS you are using to run this mission critical system on PostgreSQL ? Linux, Unix ?

Linux.

> 4. Who provides PostgreSQL support ? Do you have any support contract with a third party company ? If so, how much is the monthly support fee ?

We use Command Prompt for our formal support contract, but we have also used consulting services from PgX and 2nd Quadrant.

As others have mentioned, support on the mailing list is generally excellent, and you will have a challenge hiring someone who is highly knowledgeable in Postgres (I would estimate the pool of people who could be considered experts in Postgres and would consider a job offer in the US to be less than 1000). Your best bet may be to find someone who is experienced in a number of other RDBMSes and is willing to learn Postgres. Just make sure to bump up their compensation as they become experienced, or you risk losing them (my rule of thumb is that PG knowledge is worth ~25% more than comparable Oracle knowledge).

--
Jim C. Nasby, Database Architect   jim@nasby.net
512.569.9461 (cell)                http://jim.nasby.net
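[Editorial note: Jim's observation that most of that half-terabyte of memory is filesystem cache is easy to verify on Linux. A quick sketch, assuming a Linux system where /proc/meminfo is available:]

```shell
# Quick Linux-only check: what fraction of RAM the kernel is currently
# using as filesystem (page) cache, read straight from /proc/meminfo.
cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "filesystem cache: $((cached_kb * 100 / total_kb))% of RAM"
```

On a dedicated database server with large RAM and modest shared_buffers, this percentage is typically high, which matches Jim's description.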