Re: Performances issues with SSD volume ?

From: Thomas SIMON
Subject: Re: Performances issues with SSD volume ?
Date:
Msg-id: 556C2060.5020105@neteven.com
In response to: Re: Performances issues with SSD volume ?  (Glyn Astill <glynastill@yahoo.co.uk>)
List: pgsql-admin
Hi everyone,

Coming back to you after my server switch: the results are very positive!

To answer your last question, Glyn: I figured out where the reads were coming
from. They are the WAL senders streaming to the slave, so they are "normal".
There are no more significant reads when nothing is being shipped to the slave.
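
For anyone wanting to double-check the same thing, here is a rough sketch (not
what I actually ran) that lists the WAL sender PIDs from pg_stat_replication
and reads their cumulative I/O counters from /proc/<pid>/io. It assumes
psycopg2, a local superuser connection, and enough privileges (postgres user or
root) to read the /proc files:

import psycopg2

def wal_sender_io(dsn="dbname=postgres user=postgres"):
    # List the wal sender processes from pg_stat_replication, then read each
    # one's cumulative I/O counters from /proc/<pid>/io (Linux only).
    conn = psycopg2.connect(dsn)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT pid, client_addr, state FROM pg_stat_replication")
            for pid, client_addr, state in cur.fetchall():
                counters = {}
                with open("/proc/%d/io" % pid) as f:
                    for line in f:
                        key, value = line.split(":")
                        counters[key.strip()] = int(value)
                print("wal sender %d -> %s (%s): read %.1f MiB, written %.1f MiB"
                      % (pid, client_addr, state,
                         counters["read_bytes"] / 1048576.0,
                         counters["write_bytes"] / 1048576.0))
    finally:
        conn.close()

if __name__ == "__main__":
    wal_sender_io()

If the read_bytes figures for those PIDs roughly match what sar reports for the
volume, the reads really are the WAL senders.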

It seems that the kernel change combined with the RAID optimisations has
resolved the problem.
There are no more crazy load peaks or I/O-bound problems on the sdd partition;
SSD performance is great for now, and the pgBadger total query duration has
decreased by about 35% since I switched servers.
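
For the record, here is a rough, untested sketch of how such a spot-check can
be scripted instead of waiting for the next sar interval; the device name
"sdd" and the 10-second interval are only placeholders to adjust:

import time

def sample(device):
    # Return (sectors read, sectors written, ms spent doing I/O) for one
    # device, taken from the cumulative counters in /proc/diskstats.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[5]), int(fields[9]), int(fields[12])
    raise ValueError("device %s not found in /proc/diskstats" % device)

def watch(device="sdd", interval=10):
    # Print rd_sec/s, wr_sec/s and %util for the device, like the sar
    # columns quoted further down in this thread.
    prev = sample(device)
    while True:
        time.sleep(interval)
        cur = sample(device)
        rd_sec = (cur[0] - prev[0]) / float(interval)
        wr_sec = (cur[1] - prev[1]) / float(interval)
        util = (cur[2] - prev[2]) / (interval * 1000.0) * 100
        print("%s  rd_sec/s %9.2f  wr_sec/s %9.2f  %%util %6.2f"
              % (time.strftime("%H:%M:%S"), rd_sec, wr_sec, util))
        prev = cur

if __name__ == "__main__":
    watch()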

So I guess this topic is solved now. Thanks to the many contributors who
helped me, and special thanks to Glyn, who spent a lot of time answering me
and helped me a lot!

Thomas

On 26/05/2015 17:42, Glyn Astill wrote:
> ----- Original Message -----
>
>> From: Glyn Astill <glynastill@yahoo.co.uk>
>> To: Thomas SIMON <tsimon@neteven.com>
>> Cc: "pgsql-admin@postgresql.org" <pgsql-admin@postgresql.org>
>> Sent: Tuesday, 26 May 2015, 16:27
>> Subject: Re: [ADMIN] Performances issues with SSD volume ?
>>
>>>   From: Thomas SIMON <tsimon@neteven.com>
>>> To: Glyn Astill <glynastill@yahoo.co.uk>
>>> Cc: "pgsql-admin@postgresql.org"
>> <pgsql-admin@postgresql.org>
>>> Sent: Tuesday, 26 May 2015, 15:44
>>> Subject: Re: [ADMIN] Performances issues with SSD volume ?
>>>
>>> I can't do bonnie++ now, because I've already prepared the server as a
>>> slave, and I don't have enough disk space available (the program says the
>>> file size should be double the RAM for good results).
>>>
>>> My current sar output on the production (HDD) server is:
>>> 14:25:01  DEV                   tps   rd_sec/s   wr_sec/s  avgrq-sz  avgqu-sz   await  svctm   %util
>>> 11:05:01  vg_data-lv_data    954.37    2417.35   18941.77     22.38     31.10   32.46   0.34   32.70
>>> 11:15:01  vg_data-lv_data   1155.79    8716.91   21995.77     26.57     25.15   21.74   0.40   46.70
>>> 11:25:01  vg_data-lv_data   1250.62    6478.67   23450.07     23.93     39.77   31.78   0.41   51.34
>>> 11:35:01  vg_data-lv_data    842.48    2051.11   17120.92     22.76     15.63   18.53   0.29   24.04
>>> 11:45:01  vg_data-lv_data    666.21    1403.32   14174.47     23.38     10.11   15.12   0.24   15.79
>>> 11:55:01  vg_data-lv_data    923.51    6763.36   15337.58     23.93     13.07   14.14   0.35   32.63
>>> 12:05:01  vg_data-lv_data    989.86    9148.71   16252.59     25.66     19.42   19.56   0.45   44.21
>>> 12:15:01  vg_data-lv_data   1369.24    8631.93   24737.60     24.37     35.04   25.54   0.45   61.33
>>> 12:25:01  vg_data-lv_data   1776.12    7070.01   39851.34     26.42     74.81   42.05   0.44   77.29
>>> 12:35:01  vg_data-lv_data   1529.15    6635.80   85865.14     60.49     54.11   35.34   0.48   72.89
>>> 12:45:01  vg_data-lv_data   1187.43    4528.74   40366.95     37.81     36.07   30.36   0.39   45.81
>>> 12:55:01  vg_data-lv_data    984.48    3520.06   21539.36     25.45     17.91   18.17   0.31   30.20
>>> 13:05:01  vg_data-lv_data    926.54    6304.44   16688.94     24.82     17.36   18.69   0.41   38.05
>>> 13:15:01  vg_data-lv_data   1232.46    7199.65   29852.49     30.06     40.17   32.53   0.42   51.60
>>> 13:25:01  vg_data-lv_data   1223.46    3945.05   27448.15     25.66     31.07   25.31   0.35   42.65
>>> 13:35:01  vg_data-lv_data   1126.91    2811.70   22067.19     22.08     24.33   21.55   0.32   36.00
>>> 13:45:01  vg_data-lv_data    833.33    1805.26   17274.43     22.90     24.40   29.25   0.30   25.41
>>> 13:55:02  vg_data-lv_data   1085.88    7616.75   19140.67     24.64     17.48   16.06   0.39   42.15
>>> 14:05:01  vg_data-lv_data    691.52    3852.50   13125.53     24.55      7.75   11.15   0.30   20.74
>>> 14:15:01  vg_data-lv_data   1288.88    5390.41   24171.07     22.94     33.31   25.76   0.36   46.31
>>> 14:25:01  vg_data-lv_data   1592.88    3637.77   29836.89     21.02     76.45   47.94   0.40   63.28
>>> 14:35:01  vg_data-lv_data   1652.78    9502.87   31587.68     24.86     58.97   35.58   0.44   72.46
>>> 14:45:01  vg_data-lv_data   1623.82    6249.52   34148.46     24.88     53.47   32.83   0.40   65.19
>>> 14:55:01  vg_data-lv_data   1330.44    6516.11   26828.59     25.06     55.66   41.81   0.42   55.46
>>> Average:  vg_data-lv_data   1176.55    5508.02   26324.37     27.06     33.86   28.72   0.39   45.59
>>
>> So the I/O is read-heavy; it would be interesting to see why that might be,
>> and some insight into the running queries would go a long way there.
>>
>
> Scratch that, I've obviously misaligned the output when reading it and was
> looking at writes when I thought I was looking at reads.  It would still be
> nice to see what the reads are, though.
>
>
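
As an aside, since wrapped sar output like the block quoted above is easy to
misread, here is a rough sketch of parsing "sar -d -p" output by its header
row rather than by eye. It reads sar text on stdin; the column names
(rd_sec/s, wr_sec/s) match the sysstat version quoted above and may differ on
newer releases:

import sys

def parse_sar(lines):
    # Locate the header row (the one containing "DEV"), remember where each
    # column sits, then yield (timestamp, device, rd_sec/s, wr_sec/s) for
    # every data row with the same number of fields.
    cols = None
    for line in lines:
        fields = line.split()
        if "DEV" in fields:
            cols = {name: idx for idx, name in enumerate(fields)}
            continue
        if cols is None or len(fields) != len(cols):
            continue
        yield (fields[0], fields[cols["DEV"]],
               float(fields[cols["rd_sec/s"]]), float(fields[cols["wr_sec/s"]]))

if __name__ == "__main__":
    for ts, dev, rd, wr in parse_sar(sys.stdin):
        total = rd + wr
        pct_read = 100.0 * rd / total if total else 0.0
        print("%-9s %-18s rd_sec/s %9.2f  wr_sec/s %9.2f  (%2.0f%% reads)"
              % (ts, dev, rd, wr, pct_read))

Piping live "sar -d -p 60" output (or an archived sa file) through it gives
the read/write split per device without lining the columns up by hand.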


