Re: Memory usage and configuration settings

From: Merlin Moncure
Subject: Re: Memory usage and configuration settings
Msg-id: CAHyXU0xasfZNgHLhZoQNRw+xbB8n6s7CKxHAKGNtOHGLBu-abA@mail.gmail.com
In reply to: Memory usage and configuration settings (Mike C <smith.not.western@gmail.com>)
List: pgsql-general
On Mon, Mar 5, 2012 at 6:37 AM, Mike C <smith.not.western@gmail.com> wrote:
> Hi,
>
> I have been using table 17-2, Postgres Shared Memory Usage
> (http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
> to calculate approximately how much memory the server will use. I'm
> using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
> Database is approximately 5GB, and is a mixture of read/write.
> Postgres is occasionally being killed by the linux oom-killer. I am
> trying to understand how much memory postgres could use, and how to
> change the configuration to bring it down to a level that won't get it
> killed.
>
> Key configuration values are:
>
> max_connections = 350
> shared_buffers = 4GB
> temp_buffers = 24MB
> max_prepared_transactions = 211
> work_mem = 16MB
> maintenance_work_mem = 131MB
> wal_buffers = -1
> wal_keep_segments = 128
> checkpoint_segments = 64
> effective_cache_size = 4GB
> autovacuum_max_workers = 4
>
> which I have interpreted to be:
>
> max_locks_per_transaction = 64
> max_connections = 350
> autovacuum_max_workers = 4
> max_prepared_transactions = 211 (I've since realised this can be 0; I
> use prepared statements, not 2PC)
> shared_buffers = 4294967296
> wal_block_size = 8192
> wal_buffers = 16777216 (actually -1, but following the documentation's
> min(16MB, shared_buffers/32) it comes out to 16MB).
> and wal segment size = 16777216, block_size = 8192
>
> And using the equations on the kernel resources page, I get:
>
> Connections = (1800 + 270 * max_locks_per_transaction) * max_connections
>             = (1800 + 270 * 64) * 350
>             = 6,678,000
>
> Autovacuum Workers = (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers
>                    = (1800 + 270 * 64) * 4
>                    = 76,320
>
> Prepared Transactions = (770 + 270 * max_locks_per_transaction) * max_prepared_transactions
>                       = (770 + 270 * 64) * 211
>                       = 3,808,550
>
> Shared Disk Buffers = (block_size + 208) * shared_buffers
>                     = (8192 + 208) * 4294967296
>                     = 36,077,725,286,400 (~33 TB)
>
> WAL Buffers = (wal_block_size + 8) * wal_buffers
>             = (8192 + 8) * 16777216
>             = 137,573,171,200 (~128 GB)
>
> Fixed Space Requirements = 788,480
>
> Overall = 36,215,309,808,950 bytes (~33.2 TB!)
>
> 33.2TB doesn't seem right, and while I know the equations are just
> approximations, this seems too much. What have I done wrong? I read a
> prior thread about this on the pgsql lists which seemed to indicate
> the equations for shared disk and WAL buffers should be divided by
> the block_size 8192, and looking at it closer, I wonder if the
> equation for both should just be overhead + buffer?
>
> Also what is the relationship between memory and work_mem (set to 16M
> in my case). I understand work_mem is per sort, and in most cases our
> queries only have a single sort. Does this mean an additional 16M per
> sorting client (350 * 16M = 5.6GB), or presumably it only uses the
> work memory as it needs it (i.e. does it preallocate all 16M for each
> sort, or on an as-needed basis depending on the size of sorted data?)
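For reference, work_mem is a per-sort (and per-hash) ceiling, allocated
as needed rather than preallocated, and a single query can reach it more
than once if the plan contains several such nodes. A rough worst-case
sketch (the nodes-per-query figure is the assumption from the mail above):

    # Worst-case work_mem exposure: every backend busy at once, each
    # running a plan with some number of sort/hash nodes at work_mem
    # apiece. Memory is grabbed as needed, so this is a ceiling.
    work_mem = 16 * 1024**2     # 16MB
    backends = 350              # max_connections, all busy
    nodes_per_query = 1         # assumption; complex plans can have more

    print(f"~{work_mem * backends * nodes_per_query / 1024**3:.1f} GB")  # ~5.5 GB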
>
> Are there any other ways to calculate the worst case memory usage of a
> given postgres configuration?
>
> My gut feeling is to reduce shared_buffer to 1GB or less and reduce
> connections to ~150-200 (to reduce worst case work_mem impact).

One easy thing to neglect when doing memory counting is backend
private memory.  Each postgres process typically eats around 1mb and
this will grow if your processes are long-lived as the backend starts
to cache various structures.  As a rule of thumb I tend to use 4mb per
backend (you can confirm this yourself in top by subtracting SHR from RES).
In absolutely pathological cases (like heavy plpgsql backends with a
lot of tables and views) it can be worse.  4mb * 350 = 1.4gb...so
you're cutting things fairly close.
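One way to put a number on that, rather than eyeballing top, is to sum
resident-minus-shared across the postgres processes; a sketch, assuming
Linux's /proc/<pid>/statm layout (size resident shared text lib data dt,
in pages):

    import os

    PAGE = os.sysconf("SC_PAGE_SIZE")   # typically 4096 on x86-64
    total_private = 0

    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                if b"postgres" not in f.read():
                    continue
            with open(f"/proc/{pid}/statm") as f:
                _, resident, shared = map(int, f.read().split()[:3])
        except OSError:
            continue   # process went away mid-scan
        total_private += (resident - shared) * PAGE   # RES - SHR

    print(f"backend private memory: ~{total_private / 1024**2:.0f} MB")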

Looking at your postgresql.conf, your memory settings for
shared_buffers are more aggressive than the often-suggested 25% rule,
but I bet it's the backend memory that's pushing you over the edge.

Rather than lowering max_connections, I'd consider (carefully) using
pgbouncer to reduce the overall backend count.  Or you can reduce
shared_buffers; in your case I'd probably cut it to 1GB if it were me.
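Back-of-envelope, the trade looks like this (the 4mb-per-backend figure
is the rule of thumb above; the pooled server-connection count is a
made-up example, not a recommendation):

    GB, MB = 1024**3, 1024**2

    def budget(shared_buffers, backends, per_backend=4 * MB, work_mem=16 * MB):
        # shared memory plus, per busy backend, private overhead and one
        # work_mem-sized sort (per the original mail's single-sort queries)
        return shared_buffers + backends * (per_backend + work_mem)

    print(f"current: ~{budget(4 * GB, 350) / GB:.1f} GB")  # ~10.8 GB on an 8GB box
    print(f"pooled:  ~{budget(1 * GB, 40) / GB:.1f} GB")   # shared_buffers=1GB, ~40 server conns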

merlin
