Thread: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]


[GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]

From:
ajmcello
Date:

I am trying to optimize and tune my server for the fastest simple queries with the highest possible number of connections to the server. Basically, a SELECT of an item from a table takes 30 minutes on a machine with SSD drives. The table has 900K entries and 12 columns. Using that SELECT query, I can then make about 500 simultaneous connections to the server before errors start to occur. So I can live with 500, but the slow query gives me grief.

I have a GiST index on the table. I tried btree, but according to my performance tests, GiST was faster, so I went with GiST.
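
For reference, here is a minimal sketch of the two index definitions being compared (the table and column names are assumptions, since the actual schema was not posted); note that a GiST index on a plain text column needs an operator class such as gist_trgm_ops from the pg_trgm extension:

CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- btree: the usual choice for equality and range lookups
CREATE INDEX items_name_btree ON items USING btree (name);
-- GiST: useful for similarity and pattern matching via pg_trgm
CREATE INDEX items_name_gist ON items USING gist (name gist_trgm_ops);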

The system drive and the pgsql drive are separate, and I can separate them further if need be. Total RAM is 56GB, and I added 32GB of swap.

Is there anything I can do to speed up the SELECT statement?

Here is what I have:

[sysctl.conf]
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.rp_filter=1
net.ipv4.tcp_syncookies=1
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv4.conf.all.accept_redirects=0
net.ipv6.conf.all.accept_redirects=0
net.ipv4.conf.all.accept_source_route=0
net.ipv6.conf.all.accept_source_route=0
net.ipv4.conf.all.log_martians=1
kernel.sysrq=0
kernel.shmmax=2147483999999
kernel.shmall=2097159999999
# 32GB: kernel.shmmax=17179869184
# 32GB: kernel.shmall=4194304
kernel.shmmni=999999999
kernel.shmmin=1
kernel.shmseg=10
# semmsl, semmns, semopm, semmni
kernel.sem=250 32000 100 128
fs.file-max=65536
kern.maxfiles=50000
kern.maxfilesperproc=50000
net.ipv4.ip_local_port_range=1024 65535
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_fin_timeout=10
net.ipv4.tcp_tw_reuse=1
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_max_syn_backlog=4096
net.ipv4.tcp_syncookies=1
kernel.sched_migration_cost_ns=5000000
kernel.sched_autogroup_enabled=0
vm.swappiness=10


[postgresql.conf]
max_connections = 100000
max_files_per_process = 1000000
shared_buffers = 24GB
max_locks_per_transaction  = 1000
effective_cache_size = 50GB
work_mem = 100MB
maintenance_work_mem = 2GB
log_min_duration_statement = 10000
checkpoint_completion_target = 0.9
wal_buffers = 32MB
default_statistics_target = 100
listen_addresses = '*'
port = 5432
ssl = off
wal_sync_method = fdatasync
synchronous_commit = on
fsync = off
wal_level = minimal
#client_min_messages = fatal
#log_min_messages = fatal
#log_min_error_statement = fatal
datestyle = 'iso, mdy'
debug_pretty_print = off
debug_print_parse = off
debug_print_plan = off
debug_print_rewritten = off
default_text_search_config = 'pg_catalog.english'
enable_bitmapscan = on
enable_hashagg = on
enable_hashjoin = on
enable_indexonlyscan = on
enable_indexscan = on
enable_material = on
enable_mergejoin = on
enable_nestloop = on
enable_seqscan = on
enable_sort = on
enable_tidscan = on
from_collapse_limit = 8
geqo = on
geqo_threshold = 12
log_checkpoints = off
log_connections = off
log_disconnections = off
log_duration = off
log_executor_stats = off
log_hostname = off
log_parser_stats = off
log_planner_stats = off
log_replication_commands = off
log_statement_stats = off
log_timezone = 'UTC'
max_wal_size = 1GB
min_wal_size = 80MB
shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen'
stats_temp_directory = 'pg_stat_tmp'
timezone = 'US/Pacific'
track_activities = on
track_counts = on
track_io_timing = off


Thanks in advance.





Re: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]

From:
"Charles Clavadetscher"
Date:
Hello

> -----Original Message-----
> From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of ajmcello
> Sent: Friday, 30 December 2016 05:54
> To: POSTGRES <pgsql-general@postgresql.org>
> Subject: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]
>
>
> I am trying to optimize and tune my server for fastest simple queries with highest connection to server possible.
> Basically, a SELECT item from table takes 30 minutes on a machine with SSD drives. The table has 900K entries and 12
> columns.  Using that SELECT query, I then have the ability to make about 500 simultaneous connections to the server
> before errors start to occur. So, I can live with 500, but the slow query gives me grief.

From previous posts of other users, I assume that in order to get help you will need to provide some more information.
Here are the questions that come to my mind.

What errors do you get from the server when you reach the 500 connections?

How long does it take to run the query without heavy load, e.g. just one user connected?

\timing on
query

What does the execution plan look like? There you may see whether the index is used at all.

EXPLAIN ANALYZE query;
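
For illustration only (the table, column, and timings here are invented, since the real query was not posted), a plan like the following, showing a sequential scan that filters away almost all of the 900K rows, would explain the slowness:

EXPLAIN ANALYZE SELECT item FROM items WHERE name = 'foo';
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Seq Scan on items  (cost=0.00..35000.00 rows=1 width=32) (actual time=1795000.102..1800000.245 rows=1 loops=1)
   Filter: (name = 'foo'::text)
   Rows Removed by Filter: 899999
 Planning time: 0.210 ms
 Execution time: 1800000.350 ms

If the index were being used, you would expect an "Index Scan" or "Bitmap Heap Scan" node instead.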

> I have a GIST index for the table. I tried btree, but according to performance tests, GIST was faster. So I went
> with GIST.
>
> The system drive and pgsql drive are separate. I can separate them further if need to be. Total ram is 56GB. I added
> 32GB of swap.
>
> Is there anything I can do to speed up the SELECT statement?

Could you provide the statement and the table definition?
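
In psql, the table definition including all its indexes can be listed with (table name assumed):

\d+ items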

> Here is what I have:
>
> [SNIP: sysctl.conf settings quoted in full above]
>

Here is some helpful information on tuning your PostgreSQL settings:
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

I am not really a big expert, but 100'000 max_connections and a work_mem of 100MB seem to me to be a problem:

From the link mentioned right above:

"This size (work_mem) is applied to each and every sort done by each user, and complex queries can use multiple working
memorysort buffers. Set it to 50MB, and have 30 users submitting queries, and you are soon using 1.5GB of real memory." 

This is:
SELECT * FROM pg_size_pretty((50.0*(2^20)*30.0)::BIGINT);
-[ RECORD 1 ]--+--------
pg_size_pretty | 1500 MB

Applied to your settings:

SELECT * FROM pg_size_pretty((100.0*(2^20)*100000.0)::BIGINT);
-[ RECORD 1 ]--+--------
pg_size_pretty | 9766 GB

This could explain the errors you get from the server. You may be trying to use much more memory than you have.
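
As a side note, work_mem does not have to be raised globally; it can be set per session just for the heavy queries, which is usually safer:

SET work_mem = '100MB';
-- run the heavy query here, then go back to the configured default:
RESET work_mem;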

Regards
Charles

> [SNIP: postgresql.conf settings quoted in full above]




FW: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]

From:
"Charles Clavadetscher"
Date:
Forwarding to list.

-----Original Message-----
From: ajmcello [mailto:ajmcello78@gmail.com]
Sent: Friday, 30 December 2016 07:05
To: Charles Clavadetscher <clavadetscher@swisspug.org>
Subject: Re: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]

There are no connections except one CLI session when running the query. After that finishes, I get "connection refused" or "cannot connect to server" errors as the load increases with the number of server connections. But I'm more interested in tuning the server for better query response time. Is there anything in the configuration that would help?

Sent from my iPhone

> On Dec 29, 2016, at 7:35 PM, Charles Clavadetscher <clavadetscher@swisspug.org> wrote:
> [SNIP]



Re: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]

From:
ajmcello
Date:
Reducing work_mem shaved about 12 minutes off the query time. Thanks for the suggestion. I lowered it to 10MB instead of 100MB.
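
For reference, on 9.4 and later this kind of change can also be applied without editing postgresql.conf by hand:

ALTER SYSTEM SET work_mem = '10MB';
SELECT pg_reload_conf(); -- existing sessions pick up the new default on reload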



> On Dec 29, 2016, at 8:07 PM, Charles Clavadetscher <clavadetscher@swisspug.org> wrote:
> [SNIP]


Re: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]

From:
Amitabh Kant
Date:


On Fri, Dec 30, 2016 at 12:06 PM, ajmcello <ajmcello78@gmail.com> wrote:
Reducing work_mem shaved about 12 minutes off the query time. Thanks for the suggestion. I lowered it to 10MB instead of 100MB.

[SNIP]



The number of connections that you are attempting from Postgres is way too high. You should use a connection pooler like PgBouncer and reduce the number of connections at the Postgres level.
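
A minimal illustrative pgbouncer.ini, assuming a local database named mydb (the values here are placeholders to adapt, not recommendations):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 50

Clients connect to port 6432 instead of 5432, and PgBouncer multiplexes them onto a small pool of real server connections, so max_connections in postgresql.conf can then be lowered drastically.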

Amitabh