Hi,
yes, sorry, somehow I forgot the description of the task...
In the case that went wrong, I used postgres_fdw to compare data on the local and remote databases using "select all from remote except select all from local".
I selected the same table on remote and local, which has ~200M rows and a total size of ~20GB. I needed to see all the differences because we were getting some erratic ones... Estimates from previous limited queries were that only approx. 1 to 3% of rows differ, so I decided to try to select them all and look for patterns...
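For reference, the comparison was shaped roughly like this. This is a minimal sketch, not the actual statements: the table name (big_table), server name (remote_srv), host, credentials, and columns are all placeholders.

```sql
-- Hypothetical setup: names and columns stand in for the real ones.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER remote_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'mydb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_srv
    OPTIONS (user 'compare_user', password 'secret');

-- Foreign table pointing at the same table on the remote side.
CREATE FOREIGN TABLE big_table_remote (
    id   bigint,
    data text
) SERVER remote_srv
  OPTIONS (schema_name 'public', table_name 'big_table');

-- Rows present on the remote but missing (or different) locally.
SELECT * FROM big_table_remote
EXCEPT
SELECT * FROM big_table;
```

With ~200M rows on each side, EXCEPT forces both full result sets through the local backend, which is where the memory pressure described below comes from.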
The test instance (on Google Compute Engine) had 4 CPUs and 26 GB of RAM. As for the OOM killer, I used the default setting on Debian 8 without any changes, so
/proc/sys/vm/overcommit_memory = 0
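As a sysctl fragment, that default corresponds to the following (0 is heuristic overcommit; a stricter alternative would be 2, which disables overcommit so allocations fail instead of invoking the OOM killer):

```
# /etc/sysctl.conf -- Debian 8 default, unchanged in these tests
vm.overcommit_memory = 0
```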
Monitoring was done by Telegraf on the local instance, with InfluxDB + Grafana on another instance.
Nothing else was running on that instance, and PostgreSQL there contained only this one huge table.
After ~25 minutes of running, all memory was used. As I mentioned, in the first case PostgreSQL crashed; in the second test (in which I lowered work_mem from 24MB to 8MB and increased shared_buffers to 8GB to see if that would help) the whole instance crashed and would not start any more. The 500GB SSD was almost empty, so there were no problems with disk space.
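For clarity, the settings for the second run as a postgresql.conf sketch (only the values mentioned above; everything else was left as it was):

```
# postgresql.conf -- second test run
work_mem = 8MB          # lowered from 24MB used in the first run
shared_buffers = 8GB    # raised, to see if it would help
```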
Since I did not have time to fiddle with it, I just dropped the crashed instance and used Ansible to create a new one.
Thanks