Re: [PERFORM] Postgres performance issue

From: Scott Marlowe
Subject: Re: [PERFORM] Postgres performance issue
Date:
Msg-id: CAOR=d=2CA03xqgL9ayUzAnomMafzWwW9zVKbmqiBsJFo8vOs4Q@mail.gmail.com
In reply to: Re: [PERFORM] Postgres performance issue  (Scott Marlowe <scott.marlowe@gmail.com>)
List: pgsql-performance
On Thu, May 4, 2017 at 8:36 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, May 4, 2017 at 8:10 AM, Junaid Malik <junaid.malik@confiz.com> wrote:
>> Hello Guys,
>>
>> We are facing a problem related to the performance of Postgres. Indexes are
>> not being utilized and Postgres is giving priority to seq scans. I read many
>> articles on Postgres performance and found that we need to set the
>> random_page_cost value to the same as seq_page_cost because we are using SSD
>> drives. We are running a copy of the Discourse forum; you can read more about
>> Discourse at meta.discourse.org. Details of the server hardware and
>> Postgres version are given below.
>>
>> I am attaching my Postgres configuration file; kindly review it
>> and suggest changes so that I can improve the performance of the whole
>> system. Currently queries are taking a lot of time. I can also share the
>> schema and queries in detail too.
>>
>> Thanks
>>
>>
>>
>>  Postgres Version : 9.5.4
>>
>>  Server Hardware details :
>>    Dedicated machine
>>    16 physical cores / 32 logical cores
>>    RAM : 64 GB
>>    RAM Type : DDR3
>>    Drive Type : SSD
>>    Raid Controller : MegaRAID SAS 2108 Raid Card
>>    Configured RAID : 10
>>    No. of Drives : 4
>>    File System : XFS
>
> Please read this page
> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems
>

OK so here's my quick critique of your conf file.

max_connections = 2000

If you really need to handle 2000 connections, put a connection pooler
like pgbouncer in there to do it. 2000 active connections can swamp
most modern servers pretty quickly.
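
For reference, a minimal pgbouncer.ini sketch (the database name,
addresses, and pool sizes below are illustrative assumptions, not
tuned values; auth_type/auth_file are omitted for brevity):

  [databases]
  ; hypothetical name; point this at your actual Discourse database
  discourse = host=127.0.0.1 port=5432 dbname=discourse

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  ; transaction pooling lets ~2000 clients share a small server-side pool
  pool_mode = transaction
  max_client_conn = 2000
  default_pool_size = 20

Point the app at port 6432 and you can then drop max_connections in
postgresql.conf to something the hardware can actually service.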

shared_buffers = 20GB

This is fairly high; in my experience it's probably a bit much on a
64G machine. It likely isn't hurting performance much, though.
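
For what it's worth, the usual starting rule of thumb is around 25% of
RAM (a generic guideline, not something measured against your workload):

  # 64GB RAM * 0.25 = 16GB
  shared_buffers = 16GB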

work_mem = 10GB                # min 64kB

This is insanely high. A lot of folks look at work_mem and think it's
a total number. It's not. It's per sort / operation. I.e. if 100
people run queries that each have 3 sorts, they COULD allocate
100*3*10G = 3000G of RAM. Further, this is the kind of setting that
only becomes dangerous under heavy-ish loads. If you normally handle
3 or 4 users at a time, you'll never see a problem. Then someone
points a new site at your discourse instance, 10,000 people show up,
and bam, the server goes unresponsive.
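
A safer pattern (the numbers here are illustrative, not tuned for your
workload) is a modest global default, raised per-session only for
queries you know need big sorts:

  # postgresql.conf: applies per sort/hash operation, per query
  work_mem = 32MB

Then, in the one session running a known-heavy report:

  SET work_mem = '256MB';
  -- run the big query here
  RESET work_mem;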

#effective_io_concurrency = 1

Given your SSD RAID, you can probably look at raising this to 5 or 10 or so.
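
Something like this (a starting guess for a 4-drive SSD RAID 10, not a
benchmarked value):

  # lets the planner issue more concurrent reads during bitmap heap scans
  effective_io_concurrency = 8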


That's all I'm getting from your postgresql.conf. Not sure what your
usage pattern is, but on something like a forum it's likely there's
no heavy transactional load; it's mostly reads, etc.

As for indexes getting used or not: if you have a small db right now,
seq scans are likely as fast as index scans because there's just not
as much to read. OTOH, if you have a decent sized db (a couple gig to
a couple hundred gig) and indexes are getting ignored, they may not
be usable due to data type or collation mismatches. In short, we
need a much more detailed post of what you're doing, and how you're
measuring performance and index usage and all that.
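
For example, running one of your slow queries under EXPLAIN is the
usual first step (the table and column names here are made up;
substitute your own):

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT *
  FROM topics
  WHERE category_id = 7
  ORDER BY created_at DESC
  LIMIT 30;

If that shows a Seq Scan where you expect an Index Scan, post the full
plan along with the relevant table and index definitions.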

The more information you can post the better generally.

