On Fri, Aug 28, 2009 at 2:08 AM, Greg Smith<gsmith@gregsmith.com> wrote:
>
> This sort of workload involves random I/O rather than sequential. On
> regular hard drives this normally happens at a tiny fraction of the speed
> because of how the disk has to seek around. Typically a single drive
> capable of 50-100MB/s on sequential I/O will only do 1-2MB/s on a completely
> random workload. You look like you're getting somewhere in the middle
> there, on the low side which doesn't surprise me.
>
> The main two things you can do to improve this on the database side:
>
> -Increase checkpoint_segments, which reduces how often updated data has to
> be flushed to disk
>
> -Increase shared_buffers in order to hold more of the working set of data in
> RAM, so that more reads are satisfied by the database cache and less data
> gets evicted to disk.
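
A minimal sketch of what those two changes might look like in postgresql.conf. The values are illustrative starting points, not tuned recommendations; a common rule of thumb sizes shared_buffers at roughly 25% of RAM on a dedicated server:

```
# postgresql.conf -- illustrative values, adjust for your hardware
checkpoint_segments = 32    # default is 3; more segments = less frequent checkpoints
shared_buffers = 1GB        # e.g. ~25% of RAM on a dedicated 4GB box
```

Both settings require a server restart to take effect (shared_buffers in particular cannot be changed with a reload).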
After that you have to start looking at hardware. Something as
simple as putting the indexes, the WAL, and the base tables each on
their own drive can make a big difference.
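
For the index part of that split, one approach is a tablespace on the second drive. The mount point and table/index names below are hypothetical, just to show the shape of it:

```sql
-- Assumes a separate disk is mounted at /mnt/fastdisk (hypothetical path)
-- and the postgres user can write there.
CREATE TABLESPACE idx_space LOCATION '/mnt/fastdisk/pg_indexes';

-- New indexes can then be placed on that drive explicitly:
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE idx_space;
```

Moving the WAL is done at the filesystem level rather than in SQL: stop the server, move the pg_xlog directory to the dedicated drive, and leave a symlink in its place.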