Discussion: SSDs
Tried harder to find info on the write cycles: found some CFs that claim 2 million cycles, and found the Mtron SSDs, which claim to have very advanced wear levelling and a suitably long lifetime as a result, even assuming the underlying flash can do only 100k writes.

The 'consumer' Mtrons are not shabby on the face of it and not too expensive, and the pro models are even faster.

But ... the spec PDF shows really high performance for average access, stream read *and* write, and random read ... and absolutely pants performance for random write. Like 130/s, for 0.5k and 4k writes.

It's so pants it looks like a misprint, and it doesn't seem to square with the review on Tom's Hardware: http://www.tomshardware.com/2007/11/21/mtron_ssd_32_gb/page7.html

Even there, the database I/O rate does seem lower than you might hope, and this *might* be because the random reads are very, very fast and the random writes ... aren't. Which is a shame, because that's exactly the bit I'd hoped was fast.

So, more work to do somewhere.
My colleague has tested a single Mtron Mobo and a set of four. He also mentioned the write performance was pretty bad compared to a Western Digital Raptor. He had a solution for that, however: just plug the SSD into a RAID controller with decent cache performance (his favourites are the Areca controllers) and the "bad" write performance is masked by the controller's cache. It would probably be really nice if you could get controllers tuned for SSDs, so they use less cache for reads and more for writes.

Best regards,

Arjen
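Arjen's cache-masking argument can be sketched with some back-of-the-envelope arithmetic. Except for the 130 IOPS random-write figure quoted from the spec PDF above, all numbers here (cache size, burst rate) are illustrative assumptions, not Mtron or Areca specifications:

```python
# How long can a battery-backed write-back cache absorb a random-write
# burst before the SSD's slow random writes become the bottleneck?
CACHE_MB = 256                   # assumed controller cache size
WRITE_KB = 4                     # typical database page write
SSD_RANDOM_WRITE_IOPS = 130      # from the spec sheet quoted above
BURST_IOPS = 5000                # assumed application burst rate

cache_slots = CACHE_MB * 1024 // WRITE_KB        # 4k writes the cache can hold
fill_rate = BURST_IOPS - SSD_RANDOM_WRITE_IOPS   # net cache growth per second
seconds_until_full = cache_slots / fill_rate

print(f"cache absorbs {cache_slots} writes")
print(f"burst sustainable for ~{seconds_until_full:.0f}s before the drive's "
      f"{SSD_RANDOM_WRITE_IOPS} IOPS shows through")
```

The takeaway: the cache hides slow writes only for bursts shorter than this window; a sustained random-write workload still ends up throttled to the SSD's native rate.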
On Wed, Apr 2, 2008 at 1:16 AM, James Mansion <james@mansionfamily.plus.com> wrote:
> [snip]

If flash SSD random write were as good as random read, a single flash SSD could replace a stack of 15k disks in terms of IOPS (!). Unfortunately, the random write performance of flash SSD is indeed grim. There are some technical reasons for this that are basically fundamental trade-offs in how flash works, and the electronic processes involved. Unfortunately, even with 10% write / 90% read workloads this makes flash a non-starter for 'OLTP' systems (exactly the sort of workload where you would want the super seek times).

A major contributing factor is that decades of optimization and research have gone into disk-based systems, which are pretty similar in terms of read and write performance. Since flash just behaves differently, these optimizations don't carry over. Read this paper for a good explanation of this [pdf]: http://tinyurl.com/357zux

My personal opinion is that these problems will prove correctable, due to improvements in flash technology, improvement of filesystems and RAID controllers with respect to flash, and the introduction of other non-volatile memory. So the SSD is coming... it's inevitable, just not as soon as some of us had hoped.

merlin
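Merlin's point about mixed workloads follows from the harmonic-mean nature of mixed-operation throughput: the slow operation dominates. A minimal sketch, using assumed round-number IOPS figures (only the 130 writes/s comes from the spec sheet discussed above):

```python
def effective_iops(read_iops: float, write_iops: float,
                   write_fraction: float) -> float:
    """Throughput of a workload where each op is a read or a write.

    Average time per op is the weighted sum of per-op times, so the
    combined rate is the weighted harmonic mean of the two rates.
    """
    read_fraction = 1.0 - write_fraction
    return 1.0 / (read_fraction / read_iops + write_fraction / write_iops)

# Assumed figures: very fast random reads, the 130 IOPS random writes
# from the spec sheet, versus a single 15k disk with similar read/write rates.
flash = effective_iops(read_iops=10000, write_iops=130, write_fraction=0.10)
disk = effective_iops(read_iops=200, write_iops=180, write_fraction=0.10)
print(f"flash SSD: {flash:.0f} IOPS, 15k disk: {disk:.0f} IOPS")
```

Under these assumptions a 10% write mix drags the SSD from 10,000 IOPS down to roughly 1,200: still ahead of one disk, but nowhere near "a stack of 15k disks", which is exactly the gap Merlin describes.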
What can be set as max of postgreSQL shared_buffers and work_mem
2008-04-03
bitaoxiao
There is NO MAX....
It is according to your hardware you have, and the db you have.
2008/4/3 bitaoxiao <bitaoxiao@gmail.com>:
> What can be set as max of postgreSQL shared_buffers and work_mem?
On Thu, Apr 3, 2008 at 4:10 AM, sathiya psql <sathiya.psql@gmail.com> wrote:
> There is NO MAX....
>
> It is according to your hardware you have, and the db you have.

Not entirely true. On a 32-bit OS / software, the limit is just under 2 Gig. I'd imagine that the limit on 64-bit hardware / software is therefore something around 2^63 minus some small number, which is, for all practical purposes, unlimited.
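For reference, the settings under discussion live in postgresql.conf. A hedged example; the values are illustrative starting points only, not recommendations for any particular machine:

```ini
# postgresql.conf -- example values only; tune to your RAM and workload.
# On 32-bit builds, total shared memory (and so shared_buffers) must stay
# under the ~2GB per-process limit discussed in this thread.
shared_buffers = 1GB     # often ~25% of RAM on a dedicated server
work_mem = 32MB          # per sort/hash, per backend -- multiply out carefully
```

Note that work_mem is allocated per sort or hash operation, so a single complex query across many backends can consume many multiples of it.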
On 04/04/2008, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> Not entirely true. on 32 bit OS / software, the limit is just under 2
> Gig.

Where do you get that figure from? There's an architectural (theoretical) limitation of RAM at 4GB, but with PAE (which pretty much any CPU since the Pentium Pro offers) one can happily address 64GB on 32-bit. Or are you talking about some Postgres limitation?

Cheers,
Andrej

--
Please don't top post, and don't use HTML e-Mail :} Make your quotes concise.
http://www.american.edu/econ/notes/htmlmail.htm
On Fri, 4 Apr 2008, Andrej Ricnik-Bay wrote:
> On 04/04/2008, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> Not entirely true. on 32 bit OS / software, the limit is just under 2
>> Gig.
>
> Or are you talking about some Postgres limitation?

Since the original question was:

> What can be set as max of postgreSQL shared_buffers and work_mem?

that would be a "Yes."

Matthew

--
I quite understand I'm doing algebra on the blackboard and the usual response is to throw objects... If you're going to freak out... wait until party time and invite me along
-- Computer Science Lecturer
On Thu, Apr 3, 2008 at 11:16 AM, Andrej Ricnik-Bay <andrej.groups@gmail.com> wrote:
> Where do you get that figure from?
> [snip]
> Or are you talking about some Postgres limitation?

Note I was talking about running 32-bit PostgreSQL (on either 32- or 64-bit hardware; it doesn't matter), where the limit we've seen in the perf group over the years has been just under 2G. I'm extrapolating that on 64-bit hardware, 64-bit PostgreSQL's limit would be similar, i.e. 2^63-x, where x is some small number that keeps us just under 2^63.

So, experience and reading here for a few years is where I get that number from. But feel free to test it. It'd be nice to know you could get a >2 Gig shared buffer on 32-bit PostgreSQL in some environment.
Andrej Ricnik-Bay wrote:
> On 04/04/2008, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> Not entirely true. on 32 bit OS / software, the limit is just under 2
>> Gig.

That depends on the OS. On Linux it's AFAIK closer to 3GB, because less address space is consumed by the kernel, though I think free application address space might be further reduced with truly *massive* amounts of RAM. There are patches (the "4GB/4GB" patches) that do dodgy address-space mapping to support a full 4GB application address space.

> Where do you get that figure from?
>
> There's an architectural (theoretical) limitation of RAM at 4GB,
> but with the PAE (that pretty much any CPU since the Pentium Pro
> offers) one can happily address 64GB on 32-bit.

The OS can address more than 4GB of physical RAM with PAE, yes. However, AFAIK no single process may directly use more than (4GB - kernel address space requirements) of RAM without using special extensions like address space windowing. Of course, processes still benefit from the extra RAM indirectly through bigger disk caches, less competition with other processes for free physical RAM, etc.

As Pg uses a multiprocess model, I imagine individual backends can make use of a large amount of RAM (as work_mem etc.), though the address space consumed by the shared memory will limit how much each can use.

There's a decent, if Microsoft-specific, article about PAE here:
http://www.microsoft.com/whdc/system/platform/server/PAE/pae_os.mspx

--
Craig Ringer
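Craig's distinction between physical and virtual address space can be made concrete with a quick calculation, assuming the common Linux 3G/1G user/kernel split (splits vary by kernel configuration, so these figures are illustrative):

```python
GB = 1 << 30

# 32-bit per-process virtual address space under a typical Linux 3G/1G split.
total_va = 4 * GB       # everything a 32-bit pointer can address
kernel_va = 1 * GB      # top of the address space reserved for the kernel
usable_va = total_va - kernel_va

print(f"per-process usable address space: {usable_va // GB} GB")
# PAE raises how much *physical* RAM the OS can map (up to 64GB),
# but each process is still confined to this 32-bit virtual window,
# which is why shared_buffers hits a wall well below total RAM.
```

This is the gap between Andrej's 64GB PAE figure and Scott's ~2GB observation: PAE widens physical addressing, not a single process's virtual window, and shared memory plus code and heap must all fit inside that window.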