Hi,
Here is the current situation:
In the testing environment,
even when my customer reduced shared_buffers from 1024MB to 712MB or 512MB,
the total memory consumption stayed almost the same.
I think that PG always uses as much memory as it can.
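For reference, this is how we confirmed which value was actually in effect after each change (as far as I know, a new shared_buffers value only takes effect after a server restart):

    SHOW shared_buffers;

    -- pending_restart shows whether a changed value is still waiting for a restart
    SELECT name, setting, unit, pending_restart
    FROM pg_settings
    WHERE name = 'shared_buffers';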
For a query or an insert, my understanding is:
First, the data is pulled into the private memory of the backend process that is serving the client.
Then the backend process pushes the data into shared memory, here into shared_buffers.
If shared_buffers is not big enough to hold all of the result data, part of the data will be in shared_buffers,
and the rest will remain in the backend process's private memory.
Is my understanding right?
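Separately, to see what actually ends up in shared_buffers, I was planning to check with the pg_buffercache extension, along these lines (a sketch based on the documentation example; it assumes the extension is available on the server):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Top 10 relations by number of 8kB buffers currently held in shared_buffers
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;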
Best regards