Discussion: high performance write to disk
Hello. In order to improve the performance of my database, I added SAN SSD disks in RAID 10 to the setup, but I see that the performance of the database is the same. Which postgresql.conf parameters do you recommend changing to make better use of the disks for writes? Thank you very much.
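(For context: the write-related postgresql.conf settings usually reviewed in this situation look roughly like the sketch below. The values are illustrative assumptions for a 9.2-era server with 128GB RAM, not recommendations from this thread; they need tuning against the actual workload.)

```ini
# Illustrative write-tuning sketch for PostgreSQL 9.2 -- values are assumptions
shared_buffers = 8GB                  # a fraction of the 128GB RAM
checkpoint_segments = 64              # spread out checkpoint I/O (9.2 parameter)
checkpoint_completion_target = 0.9    # smooth checkpoint writes over the interval
wal_buffers = 16MB                    # larger WAL buffer for write-heavy loads
synchronous_commit = off              # ONLY if losing the last few commits on crash is acceptable
```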
On Wed, Sep 4, 2013 at 10:22 AM, Jeison Bedoya Delgado <jeisonb@audifarma.com.co> wrote:
> Hello. In order to improve the performance of my database, I added SAN SSD
> disks in RAID 10 to the setup, but I see that the performance of the
> database is the same. Which postgresql.conf parameters do you recommend
> changing to make better use of the disks for writes?

Going to need some more detailed information. In particular: transaction rates, hardware specifics, database version, definition of 'slow', etc etc etc.

merlin
Hi Merlin, thanks for your interest. I'm using version 9.2.2 on a machine with 128GB RAM and 32 cores, and my database is 400GB. When I say slow I mean that queries, and likewise a backup with pg_dump, take the same time now that I have SSDs in RAID 10 as they did before with 10K disks in RAID 10. Is that behavior normal, or can reading and writing be improved? Thank you very much.

On 04/09/2013 10:27 a.m., Merlin Moncure wrote:
> On Wed, Sep 4, 2013 at 10:22 AM, Jeison Bedoya Delgado
> <jeisonb@audifarma.com.co> wrote:
>> Hello. In order to improve the performance of my database, I added SAN
>> SSD disks in RAID 10 to the setup, but I see that the performance of the
>> database is the same. Which postgresql.conf parameters do you recommend
>> changing to make better use of the disks for writes?
>
> Going to need some more detailed information. In particular:
> transaction rates, hardware specifics, database version, definition of
> 'slow', etc etc etc.
>
> merlin

--
Sincerely,
JEISON BEDOYA DELGADO
Servers and Communications Administrator
AUDIFARMA S.A.
On 4.9.2013 20:52, Jeison Bedoya Delgado wrote:
> Hi Merlin, thanks for your interest. I'm using version 9.2.2 on a machine
> with 128GB RAM and 32 cores, and my database is 400GB. When I say slow I
> mean that queries, and likewise a backup with pg_dump, take the same time
> now that I have SSDs in RAID 10 as they did before with 10K disks in
> RAID 10.
>
> Is that behavior normal, or can reading and writing be improved?

SSDs are great at random I/O, but not that great for sequential I/O (better than spinning drives, but you'll often run into other bottlenecks, for example CPU). I'd bet this is what you're seeing. pg_dump is a heavily sequential workload (read the whole table from start to end, write a huge dump to the disk). A good RAID array with 10k SAS drives can give you very good performance (I'd say ~500MB/s reads and writes for 6 drives in RAID10). I don't think pg_dump will produce the data much faster.

Have you done any tests (e.g. using fio) to measure the performance of the two configurations? There might be some hardware issue, but without benchmarks it's difficult to judge. Can you run the fio tests now? The code is here:

http://freecode.com/projects/fio

and there is even a basic example:

http://git.kernel.dk/?p=fio.git;a=blob_plain;f=examples/ssd-test.fio

And how exactly are you running pg_dump? Also collect some basic stats next time it's running, for example a few samples from

vmstat 5
iostat -x -k 5

and watch in top how much CPU it's using.

Tomas
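(A minimal fio job along the lines of the ssd-test.fio example above might look like the sketch below; the directory path, sizes, and runtime are placeholder assumptions to adjust for the array under test.)

```ini
; Sketch of a sequential read/write fio job -- values are assumptions
[global]
ioengine=libaio
direct=1              ; bypass the page cache
bs=1M                 ; large blocks for sequential throughput
size=4G
runtime=60
directory=/path/to/test   ; placeholder: a directory on the SSD array

[seq-read]
rw=read

[seq-write]
stonewall             ; run after seq-read finishes, not concurrently
rw=write
```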
On Wed, Sep 4, 2013 at 2:19 PM, Tomas Vondra <tv@fuzzy.cz> wrote:
> SSDs are great at random I/O, but not that great for sequential I/O
> (better than spinning drives, but you'll often run into other
> bottlenecks, for example CPU).
>
> I'd bet this is what you're seeing. pg_dump is a heavily sequential
> workload (read the whole table from start to end, write a huge dump to
> the disk). A good RAID array with 10k SAS drives can give you very good
> performance (I'd say ~500MB/s reads and writes for 6 drives in RAID10).
> I don't think pg_dump will produce the data much faster.
>
> Have you done any tests (e.g. using fio) to measure the performance of
> the two configurations? There might be some hardware issue, but without
> benchmarks it's difficult to judge.
>
> Can you run the fio tests now? The code is here:
>
> http://freecode.com/projects/fio
>
> and there is even a basic example:
> http://git.kernel.dk/?p=fio.git;a=blob_plain;f=examples/ssd-test.fio
>
> And how exactly are you running pg_dump? Also collect some basic
> stats next time it's running, for example a few samples from
>
> vmstat 5
> iostat -x -k 5
>
> and watch in top how much CPU it's using.

Yeah. Also, some basic stats would be nice: for example, how much data is getting written out and how long is it taking? We need to establish a benchmark for 'slow'.

merlin
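(As a quick way to get such a baseline number before reaching for fio, a rough sequential-write sanity check can be done with dd. This is a sketch; the target path is a placeholder and should point at a directory on the array under test.)

```shell
# Write 1 GB sequentially; conv=fdatasync forces a flush before dd exits,
# so the reported throughput includes the actual disk write, not just the
# page cache. Replace /tmp/ddtest with a path on the SSD array.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm -f /tmp/ddtest
```

dd prints the elapsed time and MB/s on stderr when it finishes; comparing that figure between the old 10K array and the SSD array shows whether the hardware change is visible at all at the sequential-I/O level.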