Ok, a quick view on the system, and some things that may be important to note:
> Our deployment machine is a Dell PowerEdge T420 with a Perc H710 RAID
> controller configured in this way:
>
> * VD0: two 15k SAS disks (ext4, OS partition, WAL partition,
> RAID1)
> * VD1: ten 10k SAS disks (XFS, Postgres data partition, RAID5)
>
Well... RAID5 usually has the worst write performance of any common RAID level, ever. Have you tested this in
another RAID configuration? RAID10 is usually the best bet.
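To put rough numbers on the difference: a small random write on RAID5 costs four disk I/Os (read old data, read old parity, write new data, write new parity), while on RAID10 it costs two (one write per mirror side). A back-of-the-envelope sketch, where the per-disk IOPS figure is just an illustrative assumption for a 10k SAS drive, not a measurement:

```python
# Rough small-random-write IOPS estimate (rules of thumb, not a benchmark).
disks = 10
iops_per_disk = 140  # assumed ballpark for a single 10k SAS drive

# RAID5: 4 I/Os per logical write (read data, read parity, write data, write parity)
raid5_write_iops = disks * iops_per_disk / 4

# RAID10: 2 I/Os per logical write (one write to each mirror side)
raid10_write_iops = disks * iops_per_disk / 2

print(raid5_write_iops, raid10_write_iops)
```

Even with this crude model, the same ten disks deliver roughly twice the random-write throughput as RAID10, which is why it matters so much for WAL-heavy and checkpoint-heavy workloads.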
>
>
> This system has the following configuration:
>
> * Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-48-generic x86_64)
> * 128GB RAM (DDR3, 8x16GB @1600Mhz)
> * two Intel Xeon E5-2640 v2 @2Ghz
> * Dell Perc H710 with 512MB RAM (Write cache: "WriteBack", Read
> cache: "ReadAhead", Disk cache: "disabled"):
> * VD0 (OS and WAL partition): two 15k SAS disks (ext4, RAID1)
> * VD1 (Postgres data partition): ten 10k SAS disks (XFS,
> RAID5)
> * PostgreSQL 9.4 (updated to the latest available version)
> * moved pg_stat_tmp to RAM disk
>
>
[...]> versions.
>
You did not mention any Postgres configuration at all. If you left the default checkpoint_segments=3, that would be
I/O hell for your disk controller, with the RAID5 making things worse. Can you show us the values of:
checkpoint_segments
shared_buffers
work_mem
maintenance_work_mem
effective_io_concurrency
I would start from there, make a few changes, and check again. First of all I would change the RAID level and run
those tests again.
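For convenience, all of those can be pulled in one query against pg_settings (plain SQL, works on 9.4):

```sql
-- Report the relevant settings and their units in one go
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('checkpoint_segments', 'shared_buffers', 'work_mem',
               'maintenance_work_mem', 'effective_io_concurrency');
```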
Cheers.
Gerardo