How is memory allocated/used by Postgresql Database connections

From: "McKinzie, Alan (Alan)"
Date:
I am trying to understand how memory is allocated/used by our Postgresql Database connections.  From reading the documentation it appears that work_mem and temp_buffers are the 2 largest contributors (and work_mem usage can grow) to the memory utilized by the Database connections.  In addition, if I understand correctly, work_mem and temp_buffers each have their own pool, and thus database connections use (when needed) and return memory to these pools.  I have not read this anywhere, but based on observation it appears that once these pools grow, they never release any of the memory (e.g. they do not shrink in size if some of the memory has not been used for a given period of time).
 
With that said, are there any mechanisms available to determine how much work_mem and temp_buffers memory has been allocated by the Postgres database (a breakdown by database connection/process would be very useful as well)?  Also, which postgres process manages the memory pools for work_mem and temp_buffers?
 
FYI – I am using smem (on a Linux server) to monitor the memory allocated to our Database connections.  In an attempt to lower our memory footprint, I lowered our setting for work_mem from 1MB down to 500kB (in addition, I enabled log_temp_files to see the SQL statements that now use temp files for sorting and hash operations).  As I expected, the memory used by the connections that were doing large sorts went down in size.  However, one of those DB connections dramatically increased in memory usage with this change: it went from approx. 6MB up to 37MB.  Are temp_buffers used in conjunction with some sorting operations that use temp_files (and thus this connection allocated several temp_buffers)?  I thought the temp_buffers parameter was the amount of memory that would be allocated for a session (i.e. there is no multiplying factor like there is for work_mem)?
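
For anyone who wants to reproduce this, the logging change looks roughly like the following (a sketch, not our exact commands; it assumes superuser access, and the value is a size threshold in kB, with 0 logging every temp file):

  -- Log every temp file created by this session (0 = no size threshold)
  SET log_temp_files = 0;

  -- Confirm the settings discussed in this thread
  SHOW work_mem;
  SHOW temp_buffers;
  SHOW log_temp_files;

  -- To apply it cluster-wide, set log_temp_files in postgresql.conf and reload:
  SELECT pg_reload_conf();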
 
We are using Postgres 9.0.13
  shared_buffers = 800MB
  work_mem = 1MB      
  temp_buffers = 8MB   (our applications do not use temp tables)
  effective_cache_size = 1500MB
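
These can also be read back from a live session; a rough sketch:

  -- List the memory-related settings and where each value came from
  SELECT name, setting, unit, source
    FROM pg_settings
   WHERE name IN ('shared_buffers', 'work_mem', 'temp_buffers',
                  'effective_cache_size');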
 
Thanks,
Alan
 

Re: How is memory allocated/used by Postgresql Database connections

From: Tom Lane
Date:
"McKinzie, Alan (Alan)" <alanmck@avaya.com> writes:
> I am trying to understand how memory is allocated/used by our Postgresql
> Database connections.  From reading the documentation it appears that
> work_mem and temp_buffers are the 2 largest contributors (and work_mem
> usage can grow) to the memory utilized by the Database connections.  In
> addition, if I understand correctly, work_mem and temp_buffers each have
> their own pool, and thus database connections use (when needed) and return
> memory to these pools.  I have not read this anywhere, but based on
> observation it appears that once these pools grow, they never release any
> of the memory (e.g. they do not shrink in size if some of the memory has
> not been used for a given period of time).

Temp buffers, once used within a particular backend process, are kept
for the life of that process.  Memory consumed for work_mem will be
released back to libc at the end of the query.  The net effect of that
is platform-dependent --- my experience is that glibc on Linux is able
to give memory back to the OS, but on other platforms the process memory
size doesn't shrink.
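
To illustrate the first point, a rough sketch (the temp table and row count here are made up):

  -- temp_buffers are allocated lazily, the first time a session touches a
  -- temporary table, and are then kept until the backend exits.
  CREATE TEMP TABLE scratch AS
    SELECT g AS id FROM generate_series(1, 100000) g;
  SELECT count(*) FROM scratch;  -- reads pull pages into this session's temp buffers
  DROP TABLE scratch;            -- the backend's memory does not shrink after this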

> With that said, are there any mechanisms available to determine how much
> work_mem and temp_buffers memory has been allocated by the Postgres
> database (a breakdown by database connection/process would be very useful
> as well)?  Also, which postgres process manages the memory pools for
> work_mem and temp_buffers?

There's no "pool"; these allocations are process-local.

> FYI – I am using smem (on a Linux server) to monitor the memory allocated
> to our Database connections.  In an attempt to lower our memory footprint,
> I lowered our setting for work_mem from 1MB down to 500kB (in addition, I
> enabled log_temp_files to see the SQL statements that now use temp files
> for sorting and hash operations).  As I expected, the memory used by the
> connections that were doing large sorts went down in size.  However, one
> of those DB connections dramatically increased in memory usage with this
> change: it went from approx. 6MB up to 37MB.

Keep in mind that work_mem is the max per sort or hash operation, so a
complex query could consume a multiple of that.  The most obvious theory
about your result is that the work_mem change caused the planner to
switch to another plan that involved more sorts or hashes than before.
But without a lot more detail than this, we can only speculate.
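
One way to check is to run the suspect query under both settings and compare the plans; something along these lines (the query itself is a placeholder):

  SET work_mem = '1MB';
  EXPLAIN ANALYZE SELECT ...;   -- substitute the query behind the 37MB backend

  SET work_mem = '500kB';
  EXPLAIN ANALYZE SELECT ...;   -- look for changed join/sort nodes and for
                                -- "Sort Method: external merge  Disk: ..." lines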

> Are temp_buffers used in conjunction with some sorting operations that use
> temp_files (and thus this connection allocated several temp_buffers)?

No, they're only used in connection with temp tables.

            regards, tom lane


Re: How is memory allocated/used by Postgresql Database connections

From: Jeff Janes
Date:
On Thu, Jul 25, 2013 at 6:23 AM, McKinzie, Alan (Alan)
<alanmck@avaya.com> wrote:

> FYI – I am using smem (on a linux server) to monitor the memory allocated to
> our Database connections.  In an attempt to lower our memory footprint, I
> lowered our setting for work_mem from 1MB down to 500kB (in addition I
> enabled log_temp_files to see the SQL statements that now use temp files for
> sorting and hash operations).

1MB is already pretty small.  If you have a lot of connections all
using temp space at the same time, you should probably consider using
a connection pooler to limit that number and then increasing work_mem,
rather than decreasing it.
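
As a rough starting point for sizing that, check how many backends could
plausibly be using work_mem at the same time, e.g.:

  SHOW max_connections;
  SELECT count(*) FROM pg_stat_activity;   -- one row per connected backend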

Cheers,

Jeff