Discussion: PgBouncer and Node.js application "Query read timeout" error
List,

I am using PgBouncer 1.23.1 (RHEL 9.4) along with PostgreSQL 16 (RHEL 9.4) for connection pooling.
We are running a Node.js application that throws query timeout errors. The development team suspects this behaviour appeared after the PgBouncer deployment, but we are not sure.
The error thrown in the Node.js logs is as follows:

Is this due to PgBouncer config issues or Node.js pool config issues? (The dev team also reports that the Node.js application restarts frequently after running for quite a few hours; that may be a code-level, memory-leak, or GC issue, but I need to rule out any connection with the PgBouncer deployment.)
For reference, here are the current PgBouncer config params and Node.js params. Any hints to rule out PgBouncer config issues would be much appreciated.
pgbouncer.ini
[databases]
rpt_db = host=dbmain.mydomain.com port=5444 dbname=rpt_db
postgres = host=dbmain.mydomain.com port=5444 dbname=postgres
keycloak = host=dbmain.mydomain.com port=5444 dbname=keycloak
[users]
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 5444
auth_type = md5
auth_file = /usr/local/etc/pgbouncer.users
admin_users = myuser
stats_users = myuser
pool_mode = transaction
max_prepared_statements = 120
server_reset_query = DISCARD ALL
server_reset_query_always = 1
ignore_startup_parameters = extra_float_digits, options, statement_timeout, idle_in_transaction_session_timeout
max_client_conn = 5000
default_pool_size = 50
min_pool_size = 30
reserve_pool_size = 10
reserve_pool_timeout = 5
max_db_connections = 130
max_user_connections = 180
server_lifetime = 3600
server_idle_timeout = 600
Node.js application pg-pool-config.js:
import pkg from 'pg';
import { config } from 'dotenv';
import dblogger from '../helpers/dblogger.js';
const { Pool } = pkg;
config();
const { DB_HOST, DB_NAME_MGT, DB_NAME_TSP, DB_NAME_RDB, DB_USER, DB_PASSWORD, DB_PORT } = process.env;
const poolOptions = {
  max: 10,
  min: 2,
  idleTimeoutMillis: 600000,      // close idle clients after 10 min
  connectionTimeoutMillis: 10000, // give up acquiring a connection after 10 s
  //statement_timeout: 60000,     // query execution limit, 1 min (currently disabled)
  acquire: 20000,                 // note: not a node-postgres Pool option (Sequelize naming)
  maxUses: 1000                   // recycle a connection after 1000 queries
};
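For what it's worth, node-postgres also exposes a client-side query_timeout, distinct from the commented-out statement_timeout. As far as I know the driver sends statement_timeout as a startup parameter, and the pgbouncer.ini above lists it in ignore_startup_parameters, so it would be dropped at the pooler. A minimal sketch of the timeout-related options (values illustrative, not a recommendation):

```javascript
// Sketch of node-postgres timeout-related pool options (values illustrative).
const poolOptions = {
  max: 10,
  min: 2,
  idleTimeoutMillis: 600000,       // close idle clients after 10 min
  connectionTimeoutMillis: 10000,  // give up acquiring a connection after 10 s
  query_timeout: 60000,            // client-side: abort the query read after 60 s
  statement_timeout: 60000,        // server-side limit; sent at connection startup,
                                   // so ignore_startup_parameters above would drop it
  maxUses: 1000                    // recycle a connection after 1000 queries
};
console.log(Object.keys(poolOptions).length);
```

query_timeout is enforced entirely in the client, so it behaves the same with or without PgBouncer; for a server-side limit through the pooler, something like ALTER ROLE ... SET statement_timeout on the database side may be more reliable.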
[...]
TOP OUTPUT ON Pgbouncer VM
last pid: 17171; load averages: 0.45, 0.39, 0.40 up 7+07:19:36 10:34:48
23 processes: 1 running, 20 sleeping, 2 stopped
CPU: 0.5% user, 0.0% nice, 3.6% system, 0.0% interrupt, 95.9% idle
Mem: 13M Active, 1352M Inact, 1415M Wired, 1024M Buf, 13G Free
Swap: 7068M Total, 7068M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
828 pgbouncer 1 48 0 25M 12M kqread 0 43.6H 20.45% pgbouncer
217 root 3 20 0 53M 28M select 6 6:30 0.05% vmtoolsd
TOP OUTPUT ON NODE JS VM
top - 10:35:52 up 195 days, 5:37, 5 users, load average: 0.89, 0.86, 0.91
Tasks: 612 total, 8 running, 604 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.9 us, 0.3 sy, 0.0 ni, 96.3 id, 0.0 wa, 0.1 hi, 0.3 si, 0.0 st
MiB Mem : 63785.5 total, 50434.0 free, 8909.5 used, 5429.2 buff/cache
MiB Swap: 4044.0 total, 4032.4 free, 11.6 used. 54875.9 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4120 root 20 0 1140028 621676 36864 S 18.6 1.0 51895:18 node
1914526 root 20 0 21.0g 251816 49152 S 9.3 0.4 4:16.81 node /v
TOP OUTPUT ON DB SERVER VM
[root@db1 ~]# grep -c ^processor /proc/cpuinfo
16
TOP
top - 10:37:22 up 407 days, 15:24, 1 user, load average: 9.75, 10.65, 9.09
Tasks: 500 total, 2 running, 498 sleeping, 0 stopped, 0 zombie
%Cpu(s): 16.8 us, 8.8 sy, 0.0 ni, 49.7 id, 23.2 wa, 0.4 hi, 1.1 si, 0.0 st
MiB Mem : 31837.6 total, 351.4 free, 10369.1 used, 30018.5 buff/cache
MiB Swap: 8060.0 total, 6908.2 free, 1151.8 used. 21468.5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1842514 postgres+ 20 0 8961888 2.4g 2.4g R 26.8 7.7 0:00.86 postgres
1827606 postgres+ 20 0 21208 7808 7808 S 26.1 0.0 5:17.38 pgbackrest
1827611 postgres+ 20 0 30596 18220 8064 D 25.1 0.1 3:54.74 pgbackrest
If any more inputs are required, let me know.
Thank you,
Krishane
Attachments
On Wed, 2026-04-08 at 10:51 +0530, KK CHN wrote:
> List, I am using pgbouncer(PgBouncer 1.23.1 RHEL 9.4) along with Postgres16(RHEL 9.4)
> for connection pooling.
>
> Running a nodejs application which is throwing some errors related to query timeout
> which the development team suspect after pgbouncer deployment this behaviour appears,
> but not sure
>
> The error which is thrown from the nodejs logs as follows..
>
> [image showing an error "Query read timeout"]
>
> Is this due to pgbouncer config issues or nodejs pool config issues ?
>
> for reference here the pgbouncer config params and node js params at present.
>
> pgbouncer.ini
>
> [...]
> [pgbouncer]
> pool_mode = transaction
> default_pool_size = 50
> min_pool_size = 30
> reserve_pool_size = 10
> reserve_pool_timeout = 5
> max_db_connections = 130
> max_user_connections = 180
> server_lifetime = 3600
> server_idle_timeout = 600
> [...]

The only way I can imagine that pgBouncer is leading to timeouts on the client side
is if client sessions are waiting, because all connections are in use.

You can run SHOW POOLS in the pgBouncer console to see if there are any "cl_waiting".
If that is the case, you should configure the Node.js pools smaller, so that no
connection has to wait.

Yours,
Laurenz Albe
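For reference, the pooler's virtual "pgbouncer" admin database can be queried with psql from the pooler host (listen_port and admin user taken from the config above; column list abridged):

```
$ psql -h 127.0.0.1 -p 5444 -U myuser pgbouncer -c "SHOW POOLS;"
 database | user   | cl_active | cl_waiting | sv_active | sv_idle | maxwait | ...
```

A non-zero cl_waiting together with a growing maxwait would indicate clients queuing for a pooled connection.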
On Wed, Apr 8, 2026 at 11:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:
On Wed, 2026-04-08 at 10:51 +0530, KK CHN wrote:
> List, I am using pgbouncer(PgBouncer 1.23.1 RHEL 9.4) along with Postgres16(RHEL 9.4)
> for connection pooling.
>
> Running a nodejs application which is throwing some errors related to query timeout
> which the development team suspect after pgbouncer deployment this behaviour appears,
> but not sure
>
> The error which is thrown from the nodejs logs as follows..
>
> [image showing an error "Query read timeout"]
>
> Is this due to pgbouncer config issues or nodejs pool config issues ?
>
> for reference here the pgbouncer config params and node js params at present.
>
> pgbouncer.ini
>
> [...]
> [pgbouncer]
> pool_mode = transaction
> default_pool_size = 50
> min_pool_size = 30
> reserve_pool_size = 10
> reserve_pool_timeout = 5
> max_db_connections = 130
> max_user_connections = 180
> server_lifetime = 3600
> server_idle_timeout = 600
> [...]
> The only way I can imagine that pgBouncer is leading to timeouts on the client side
> is if client sessions are waiting, because all connections are in use.
>
> You can run SHOW POOLS in the pgBouncer console to see if there are any "cl_waiting".
> If that is the case, you should configure the Node.js pools smaller, so that no
> connection has to wait.
Configure the Node.js pools smaller? I don't follow the logic here; why is reducing the pool size advised?
Does increasing the pool size beyond 10 adversely affect connection establishment from the Node.js application? Since the DB has PgBouncer in front with default_pool_size = 50 there, aren't we free to increase the Node.js application pool size, and wouldn't that help with the query timeouts? Or are there hidden factors involved? Could you elaborate?
Thank you,
Krishane
On Fri, 2026-04-10 at 12:37 +0530, KK CHN wrote:
> > The only way I can imagine that pgBouncer is leading to timeouts on the client side
> > is if client sessions are waiting, because all connections are in use.
> >
> > You can run SHOW POOLS in the pgBouncer console to see if there are any "cl_waiting".
> > If that is the case, you should configure the Node.js pools smaller, so that no
> > connection has to wait.
>
> Configuring Node.js pools smaller ? I couldn't get the logic here why advised to reduce the pool size ?
>
> Increasing pool size more than 10 adversely affects the connection establishment from
> Node.js application ? Since DB is having Pgbouncer infront and
> default_pool_size = 50 there , don't we have the freedom to increase node.js application
> pool size and it will help the query timeout ? or any hidden facts involved could you elaborate ..

There are no hidden facts.

You asked if pgBouncer could lead to query timeouts. I answered that the only way
a query could take significantly longer with pgBouncer is if the client has to wait
for a pooled connection, that is, if the connection pool in pgBouncer is smaller than
the number of clients that want to run a transaction concurrently.

BEFORE you take any action, you should figure out if that is the case by looking at
SHOW POOLS. Since you didn't report any findings, I assume that you launched your
volley of questions before doing the required investigation. Bad idea.

IF (and only if) there are indeed client sessions waiting for a pooled connection,
you could react in two different ways:

- increase the pgBouncer pool size
- reduce the application server pool size

Obvious, isn't it?

Increasing the pgBouncer pool size is a good idea only if your database machine can
stomach the additional load. If not, reducing the application server pool size is the
correct measure.

If the application load is too much for the database, somebody has to suffer.
If you choose to overload the database, everybody will be unhappy. If you throttle
the load in the connection pool, only some clients will be unhappy (those that have
to wait and get a timeout).

You will have to diagnose what the exact problem is.

Yours,
Laurenz Albe
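To put rough numbers on that trade-off: clients can only queue at the pooler when the combined Node.js pools are able to exceed PgBouncer's pool. A back-of-the-envelope sketch (the instance count here is an assumption for illustration; the other values come from the configs above):

```javascript
// Worst-case concurrent client connections vs. the PgBouncer pool.
// appInstances is hypothetical; poolMax and pgbouncerPoolSize correspond to
// poolOptions.max and default_pool_size in the configs quoted above.
const appInstances = 3;        // assumed number of Node.js processes
const poolMax = 10;            // poolOptions.max per process
const pgbouncerPoolSize = 50;  // default_pool_size in pgbouncer.ini

const worstCaseClients = appInstances * poolMax;
const clientsCanQueue = worstCaseClients > pgbouncerPoolSize;

console.log(`${worstCaseClients} clients vs pool of ${pgbouncerPoolSize}; ` +
            `queueing possible: ${clientsCanQueue}`);
```

With the settings above, a single app instance (max: 10) cannot saturate default_pool_size = 50; it would take six or more such instances hitting the same database/user pair before cl_waiting could appear at the pooler.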