RE: huge memory of Postgresql backend process

From: James Pang (chaolpan)
Subject: RE: huge memory of Postgresql backend process
Date:
Msg-id: CO1PR11MB51857DF935A6F3085E676846D6479@CO1PR11MB5185.namprd11.prod.outlook.com
In reply to: RE: huge memory of Postgresql backend process  ("James Pang (chaolpan)" <chaolpan@cisco.com>)
Responses: Re: huge memory of Postgresql backend process  (David Rowley <dgrowleyml@gmail.com>)
List: pgsql-bugs
Hi,
   Based on this test in our environment, we use only one prepared statement:

PREPARE updmetadata(varchar,varchar,varchar,varchar,bigint,varchar) AS
  UPDATE xxxxx SET xxxx = $1, xxxxx = $2, KEYURL = $3, xxxx = $4
  WHERE xxxx = $5 AND xxxxx = $6;   -- $5 and $6 are the primary key

It runs fast.

Start a new connection:   RES=12894k
   PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
1837663 postgres  20   0   45.5g  16000  11936 S   0.0   0.0   0:00.02 postgres: xxxxxxxxxxx(50048) idle

After creating a server-side prepared statement for this SQL and running it several times, we start to see the
server-side cached statement, and memory quickly increases to 72800k:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
1837663 postgres  20   0   45.6g  72800  12000 S   0.0   0.1   0:00.17 postgres: xxxxxxxxx(50048) idle

--after that, further executions of the prepared statement cause no more growth.

--each additional prepared SQL statement adds several MB.

   Memory allocation for server-side prepared statements depends on the partition count: with 256 partitions, the first
statement initially asks for about 60MB at once, then several more MB for each following statement, depending on the SQL.
Is this kind of memory allocation expected? Or could the allocation be fine-tuned for a "many connections" use case, to
avoid out-of-memory issues?
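One way to see where this memory lives is from inside the session itself (a sketch, assuming PostgreSQL 14 or later, where the pg_backend_memory_contexts view is available):

```sql
-- After EXECUTE-ing the prepared statement a few times, list the
-- largest memory contexts of the current backend.  The CachedPlan /
-- CacheMemoryContext rows should account for most of the growth
-- reported above.
SELECT name, ident, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
```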
   Another interesting point: when using psql .. -h localhost, memory allocation is much lower (only several MB) in the
same test. The same SQL against the same table runs fine locally, while a remote connection needs much more memory.

Thanks,

James

-----Original Message-----
From: James Pang (chaolpan)
Sent: Thursday, September 8, 2022 10:08 PM
To: Tom Lane <tgl@sss.pgh.pa.us>
Cc: PostgreSQL mailing lists <pgsql-bugs@lists.postgresql.org>
Subject: RE: huge memory of Postgresql backend process

   When we rebuild the partitioned tables as non-partitioned tables of the same data size, the same test load uses only
24-25MB per backend. auto_explain shows that both DELETE and UPDATE use partition pruning, so the SQL runs very fast.
But process memory still increases quickly: with hundreds of connections doing similar UPDATE/DELETE tests on multiple
partitioned tables, server physical memory gets used up and many "out of memory" errors are dumped to the PG log. Yet
the dumped memory context stats show only tens of MB in use, not 160MB.
   It looks like the PostgreSQL backend suddenly allocates one large chunk of memory, based on some estimation rule,
when updating/deleting from partitioned tables, without actually using that much, and we found it is never freed back
to the operating system even after the backend has been idle for a long time. In the output of the OS pmap command,
almost all of the memory shows as "xxxxx  rw---   [ anon ]".
 Maybe the memory allocated from the OS is overestimated compared with its real usage?
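Since the affected backends are remote sessions, their memory context stats can also be dumped from another session without attaching a debugger (a sketch, assuming PostgreSQL 14+, which added pg_log_backend_memory_contexts; the filter shown is only illustrative):

```sql
-- From a separate superuser session, ask every other client backend
-- to write its memory context statistics to the server log.
SELECT pid, pg_log_backend_memory_contexts(pid)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
  AND pid <> pg_backend_pid();
```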

Thanks,

James

-----Original Message-----
From: Tom Lane <tgl@sss.pgh.pa.us>
Sent: Thursday, September 8, 2022 9:48 PM
To: James Pang (chaolpan) <chaolpan@cisco.com>
Cc: PostgreSQL mailing lists <pgsql-bugs@lists.postgresql.org>
Subject: Re: huge memory of Postgresql backend process

"James Pang (chaolpan)" <chaolpan@cisco.com> writes:
>   We just switched from Oracle to PG v13; all of these tables are partitioned (256 HASH partitions). When we run
> UPDATE/DELETE ... WHERE ..., memory increases very quickly to more than 130MB, and there is not much data in these
> tables. When we switched to PG v14.5 and tested again, we still saw a similar memory consumption issue.

I see no particular reason to think this is a bug.  If you have a lot of partitions, there is going to be a lot of
metadata for them.
Moreover, HASH partitioning is effectively incapable of being pruned, so that every query is going to touch every
partition.
(IMO, hash partitioning is basically never a good idea.)

            regards, tom lane
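To make the pruning point concrete, a minimal sketch with hypothetical table names: the planner can prune a hash-partitioned table only for an equality condition on the partition key; any other predicate has to visit every partition.

```sql
CREATE TABLE t (id bigint, val text) PARTITION BY HASH (id);
CREATE TABLE t_p0 PARTITION OF t FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE t_p1 PARTITION OF t FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE t_p2 PARTITION OF t FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE t_p3 PARTITION OF t FOR VALUES WITH (MODULUS 4, REMAINDER 3);

-- Equality on the partition key can be pruned to a single partition:
EXPLAIN (COSTS OFF) SELECT * FROM t WHERE id = 42;
-- A range predicate scans all four partitions, because hash values
-- carry no ordering information:
EXPLAIN (COSTS OFF) SELECT * FROM t WHERE id > 42;
```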


