Discussion: DB running out of memory issues after upgrade


DB running out of memory issues after upgrade

From
Nagaraj Raj
Date:
After upgrading Postgres from v9.6.9 to v9.6.11, the DB has been running into out-of-memory issues; the workload has not changed before or after the upgrade.

Spec: 16 GB RAM, 4 vCores

Has any bug like this been reported, or are there suggestions on how to fix this issue? I appreciate the response!

I can see the error logs below, and because of this the database frequently goes into recovery mode:


2020-02-17 22:34:32 UTC::@:[20467]:LOG: server process (PID 32731) was terminated by signal 9: Killed
2020-02-17 22:34:32 UTC::@:[20467]:DETAIL: Failed process was running: select info_starttime,info_starttimel,info_conversationid,info_status,classification_type,intentname,confidencescore,versions::text,messageid from salesdb.liveperson.intents where info_status='CLOSE' AND ( 1=1 ) AND ( 1=1 )
2020-02-17 22:34:32 UTC::@:[20467]:LOG: terminating any other active server processes
2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:WARNING: terminating connection because of crash of another server process
2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:WARNING: terminating connection because of crash of another server process
2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:WARNING: terminating connection because of crash of another server process
2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:WARNING: terminating connection because of crash of another server process
2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-02-17 22:34:32 UTC::@:[20467]:LOG: archiver process (PID 30799) exited with exit code 1
[... the same WARNING/DETAIL/HINT triple repeats here for roughly ninety more backends (devops_user, bi_user, and digitaladmin sessions on salesdb and postgres); trimmed for brevity ...]
2020-02-17 22:34:33 UTC::@:[20467]:LOG: all server processes terminated; reinitializing
2020-02-17 22:34:33 UTC::@:[19633]:LOG: database system was interrupted; last known up at 2020-02-17 22:33:33 UTC
2020-02-17 22:34:33 UTC::@:[19633]:LOG: database system was not properly shut down; automatic recovery in progress
2020-02-17 22:34:33 UTC::@:[19633]:LOG: redo starts at 15B0/D5FCA110
2020-02-17 22:34:34 UTC:(54556):digitaladmin@salesdb:[19637]:FATAL: the database system is in recovery mode
2020-02-17 22:34:34 UTC:(54557):digitaladmin@salesdb:[19639]:FATAL: the database system is in recovery mode
2020-02-17 22:34:34 UTC:(58713):devops_user@salesdb:[19638]:FATAL: the database system is in recovery mode
2020-02-17 22:34:34 UTC:(58714):devops_user@salesdb:[19644]:FATAL: the database system is in recovery mode
2020-02-17 22:34:35 UTC::@:[19633]:LOG: invalid record length at 15B0/E4C32288: wanted 24, got 0
2020-02-17 22:34:35 UTC::@:[19633]:LOG: redo done at 15B0/E4C32260
2020-02-17 22:34:35 UTC::@:[19633]:LOG: last completed transaction was at log time 2020-02-17 22:34:31.864309+00
2020-02-17 22:34:35 UTC::@:[19633]:LOG: checkpoint starting: end-of-recovery immediate

Thank you.

Re: DB running out of memory issues after upgrade

From
Tomas Vondra
Date:
On Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:
>After upgrading Postgres from v9.6.9 to v9.6.11, the DB has been running
>into out-of-memory issues; the workload has not changed before or after
>the upgrade.
>
>Spec: 16 GB RAM, 4 vCores
>
>Has any bug like this been reported, or are there suggestions on how to
>fix this issue? I appreciate the response!
>

This bug report (in fact, we don't know if it's a bug, but OK) is
woefully incomplete :-(

The server log is mostly useless, unfortunately - it just says a bunch
of processes were killed (by OOM killer, most likely) so the server has
to restart. It tells us nothing about why the backends consumed so much
memory etc.

What would help us is knowing how much memory the backend killed by the
OOM killer was consuming, which should be in dmesg.
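
For instance, on a typical Linux host something like the following should
pull the OOM killer's records out of the kernel log (a sketch; the exact
message wording varies across kernel versions):

  $ dmesg | grep -i -E 'out of memory|killed process'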

And then MemoryContextStats output - you need to connect to a backend
consuming a lot of memory using gdb (before it gets killed) and do

  (gdb) p MemoryContextStats(TopMemoryContext)
  (gdb) q

and show us the output printed into the server log. If it's a backend
running a query, it'd also help to know its execution plan.
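
A slightly fuller version of that session, assuming you can attach to the
backend while it is still alive (use the PID of the backend you see
growing, e.g. from ps or pg_stat_activity; note MemoryContextStats prints
into the server log, not into gdb):

  $ gdb -p 32731
  (gdb) p MemoryContextStats(TopMemoryContext)
  (gdb) q

For the plan, running the query from the log message under plain EXPLAIN
is the safe option (EXPLAIN ANALYZE would execute the query, which may
just reproduce the OOM):

  salesdb=> EXPLAIN SELECT ... FROM salesdb.liveperson.intents WHERE info_status = 'CLOSE';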

It would also help to know the non-default configuration, i.e. the stuff
tweaked in postgresql.conf.
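
One way to pull that out of the running server (a sketch; the pg_settings
view and its source column are available in 9.6):

  select name, setting, source
    from pg_settings
   where source not in ('default', 'override');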

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 



Re: DB running out of memory issues after upgrade

From
Nagaraj Raj
Date:
Below are the configurations in the .conf file, which are the same before and after the upgrade:

show max_connections = 1743
show shared_buffers = "4057840kB"
show effective_cache_size = "8115688kB"
show maintenance_work_mem = "259MB"
show checkpoint_completion_target = "0.9"
show wal_buffers = "16MB"
show default_statistics_target = "100"
show random_page_cost = "1.1"
show effective_io_concurrency = "200"
show work_mem = "4MB"
show min_wal_size = "256MB"
show max_wal_size = "2GB"
show max_worker_processes = "8"
show max_parallel_workers_per_gather = "2"


Here are some system logs:

2020-02-16 21:01:17 UTC         [-]The database process was killed by the OS due to excessive memory consumption. 
2020-02-16 13:41:16 UTC         [-]The database process was killed by the OS due to excessive memory consumption. 


I identified one simple select that is consuming a lot of memory; here is the query plan:



"Result  (cost=0.00..94891854.11 rows=3160784900 width=288)"
"  ->  Append  (cost=0.00..47480080.61 rows=3160784900 width=288)"
"        ->  Seq Scan on msghist  (cost=0.00..15682777.12 rows=3129490000 width=288)"
"              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)"
"        ->  Seq Scan on msghist msghist_1  (cost=0.00..189454.50 rows=31294900 width=288)"
"              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)"

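If the 'CLOSE' filter were selective, an expression index matching the
filter might avoid these sequential scans (a sketch; the index name is made
up, and on 9.6 any inheritance children need their own index). With ~3
billion matching rows estimated above, though, the result-set size looks
like a bigger problem than the scan itself.

  -- Sketch: index the exact expression from the plan's filter.
  -- Only worthwhile if 'CLOSE' rows are a small fraction of the table.
  CREATE INDEX CONCURRENTLY msghist_info_status_idx
      ON msghist (((data -> 'info') ->> 'status'));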


Thanks,



On Tuesday, February 18, 2020, 09:59:37 AM PST, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:


On Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:
>after upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade. 
>
>spec: RAM 16gb,4vCore
>Any bug reported like this or suggestions on how to fix this issue? I appreciate the response..!! 
>

This bug report (in fact, we don't know if it's a bug, but OK) is
woefully incomplete :-(

The server log is mostly useless, unfortunately - it just says a bunch
of processes were killed (by OOM killer, most likely) so the server has
to restart. It tells us nothing about why the backends consumed so much
memory etc.

What would help us is knowing how much memory was the backend (killed by
OOM) consuming, which should be in dmesg.

And then MemoryContextStats output - you need to connect to a backend
consuming a lot of memory using gdb (before it gets killed) and do

  (gdb) p MemoryContextStats(TopMemoryContext)
  (gdb) q

and show us the output printed into server log. If it's a backend
running a query, it'd help knowing the execution plan.

It would also help knowing the non-default configuration, i.e. stuff
tweaked in postgresql.conf.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: DB running out of memory issues after upgrade

From
Merlin Moncure
Date:
On Tue, Feb 18, 2020 at 12:10 PM Nagaraj Raj <nagaraj.sf@yahoo.com> wrote:
>
> Below are the configurations in the .conf file, which are the same before and after the upgrade:
>
> show max_connections = 1743
> show shared_buffers = "4057840kB"
> show effective_cache_size = "8115688kB"
> show maintenance_work_mem = "259MB"
> show checkpoint_completion_target = "0.9"
> show wal_buffers = "16MB"
> show default_statistics_target = "100"
> show random_page_cost = "1.1"
> show effective_io_concurrency = "200"
> show work_mem = "4MB"
> show min_wal_size = "256MB"
> show max_wal_size = "2GB"
> show max_worker_processes = "8"
> show max_parallel_workers_per_gather = "2"

This smells like the OOM killer for sure. How did you arrive at some of
these values, in particular max_connections and effective_cache_size?
How much memory is in this server?
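
For scale, a rough back-of-the-envelope check run on the server itself
(a sketch; pg_size_bytes exists from 9.6, and a single query can use
several work_mem-sized allocations, so the real ceiling is higher):

  -- Worst case if every allowed connection ran one work_mem-sized
  -- sort/hash at the same time, on top of shared_buffers.
  SELECT pg_size_pretty(current_setting('max_connections')::bigint
                        * pg_size_bytes(current_setting('work_mem'))) AS work_mem_worst_case,
         current_setting('shared_buffers') AS shared_buffers;

With max_connections = 1743 and work_mem = 4MB that is roughly 6.8 GB of
potential work_mem alone, before shared_buffers (~3.9 GB), on a 16 GB box.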

merlin



Re: DB running out of memory issues after upgrade

From
"Peter J. Holzer"
Date:
On 2020-02-18 18:10:08 +0000, Nagaraj Raj wrote:
> Below are the configurations in the .conf file, which are the same before and after the upgrade:
>
> show max_connections = 1743
[...]
> show work_mem = "4MB"

This is an interesting combination: So you expect a large number of
connections but each one should use very little RAM?

[...]

> here is some sys logs,
>
> 2020-02-16 21:01:17 UTC         [-]The database process was killed by the OS
> due to excessive memory consumption.
> 2020-02-16 13:41:16 UTC         [-]The database process was killed by the OS
> due to excessive memory consumption.

The oom-killer produces a huge block of messages which you can find with
dmesg or in your syslog. It looks something like this:

Feb 19 19:06:53 akran kernel: [3026711.344817] platzangst invoked oom-killer: gfp_mask=0x15080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), nodemask=(null), order=1, oom_score_adj=0
Feb 19 19:06:53 akran kernel: [3026711.344819] platzangst cpuset=/ mems_allowed=0-1
Feb 19 19:06:53 akran kernel: [3026711.344825] CPU: 7 PID: 2012 Comm: platzangst Tainted: G           OE    4.15.0-74-generic #84-Ubuntu
Feb 19 19:06:53 akran kernel: [3026711.344826] Hardware name: Dell Inc. PowerEdge R630/02C2CP, BIOS 2.1.7 06/16/2016
Feb 19 19:06:53 akran kernel: [3026711.344827] Call Trace:
Feb 19 19:06:53 akran kernel: [3026711.344835]  dump_stack+0x6d/0x8e
Feb 19 19:06:53 akran kernel: [3026711.344839]  dump_header+0x71/0x285
...
Feb 19 19:06:53 akran kernel: [3026711.344893] RIP: 0033:0x7f292d076b1c
Feb 19 19:06:53 akran kernel: [3026711.344894] RSP: 002b:00007fff187ef240 EFLAGS: 00000246 ORIG_RAX: 0000000000000038
Feb 19 19:06:53 akran kernel: [3026711.344895] RAX: ffffffffffffffda RBX: 00007fff187ef240 RCX: 00007f292d076b1c
Feb 19 19:06:53 akran kernel: [3026711.344896] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011
Feb 19 19:06:53 akran kernel: [3026711.344897] RBP: 00007fff187ef2b0 R08: 00007f292d596740 R09: 00000000009d43a0
Feb 19 19:06:53 akran kernel: [3026711.344897] R10: 00007f292d596a10 R11: 0000000000000246 R12: 0000000000000000
Feb 19 19:06:53 akran kernel: [3026711.344898] R13: 0000000000000020 R14: 0000000000000000 R15: 0000000000000000
Feb 19 19:06:53 akran kernel: [3026711.344899] Mem-Info:
Feb 19 19:06:53 akran kernel: [3026711.344905] active_anon:14862589 inactive_anon:1133875 isolated_anon:0
Feb 19 19:06:53 akran kernel: [3026711.344905]  active_file:467 inactive_file:371 isolated_file:0
Feb 19 19:06:53 akran kernel: [3026711.344905]  unevictable:0 dirty:3 writeback:0 unstable:0
...
Feb 19 19:06:53 akran kernel: [3026711.344985] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Feb 19 19:06:53 akran kernel: [3026711.344997] [  823]     0   823    44909        0   106496      121             0 lvmetad
Feb 19 19:06:53 akran kernel: [3026711.344999] [ 1354]     0  1354    11901        3   135168      112             0 rpcbind
Feb 19 19:06:53 akran kernel: [3026711.345000] [ 1485]     0  1485    69911       99   180224      159             0 accounts-daemon
...
Feb 19 19:06:53 akran kernel: [3026711.345345] Out of memory: Kill process 25591 (postgres) score 697 or sacrifice child
Feb 19 19:06:53 akran kernel: [3026711.346563] Killed process 25591 (postgres) total-vm:71116948kB, anon-rss:52727552kB, file-rss:0kB, shmem-rss:3023196kB

The most interesting lines are usually the last two: In this case they
tell us that the process killed was a postgres process and it occupied
about 71 GB of virtual memory at that time. That was clearly the right
choice since the machine has only 64 GB of RAM. Sometimes it is less
clear, and then you might want to scroll through the (usually long) list
of processes to see whether other processes are using suspicious amounts
of RAM, or whether there are simply more of them than you would expect.


> I identified one simple select that is consuming a lot of memory; here is
> the query plan:
>
>
> "Result  (cost=0.00..94891854.11 rows=3160784900 width=288)"
> "  ->  Append  (cost=0.00..47480080.61 rows=3160784900 width=288)"
> "        ->  Seq Scan on msghist  (cost=0.00..15682777.12 rows=3129490000 width=288)"
> "              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)"
> "        ->  Seq Scan on msghist msghist_1  (cost=0.00..189454.50 rows=31294900 width=288)"
> "              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)"

So: How much memory does that use? It produces a huge number of rows
(more than 3 billion) but it doesn't do much with them, so I wouldn't
expect the postgres process itself to use much memory. Are you sure it's
the postgres process, and not the application, that uses a lot of memory?
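
If it does turn out to be the application buffering the whole result,
reading through a server-side cursor lets it fetch in batches instead
(a sketch, reusing the table and filter from the plan above; the cursor
name is made up):

  -- Stream the result in chunks instead of materializing ~3 billion
  -- rows on the client.
  BEGIN;
  DECLARE msghist_cur CURSOR FOR
      SELECT * FROM msghist
      WHERE (data -> 'info') ->> 'status' = 'CLOSE';
  FETCH FORWARD 10000 FROM msghist_cur;  -- repeat until no rows come back
  CLOSE msghist_cur;
  COMMIT;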

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
