Re: PostgreSQL 9.2 - pg_dump out of memory when backuping a database with 300000000 large objects

From: Sergey Klochkov
Subject: Re: PostgreSQL 9.2 - pg_dump out of memory when backuping a database with 300000000 large objects
Msg-id: 524AAE12.8@iqbuzz.ru
In reply to: Re: PostgreSQL 9.2 - pg_dump out of memory when backuping a database with 300000000 large objects  (Giuseppe Broccolo <giuseppe.broccolo@2ndquadrant.it>)
List: pgsql-admin
No, it did not make any difference. And after looking through pg_dump.c
and pg_dump_sort.c, I cannot tell how it possibly could. See the
stacktrace that I've sent to the list.

Thanks.

On 01.10.2013 15:01, Giuseppe Broccolo wrote:
> Maybe you can improve your database's performance by tuning some parameters:
>>
>> PostgreSQL configuration:
>>
>> listen_addresses = '*'          # what IP address(es) to listen on;
>> port = 5432                             # (change requires restart)
>> max_connections = 500                   # (change requires restart)
> Set it to 100, the PostgreSQL default
>> shared_buffers = 16GB                  # min 128kB
> This value should not be higher than 8GB
>> temp_buffers = 64MB                     # min 800kB
>> work_mem = 512MB                        # min 64kB
>> maintenance_work_mem = 30000MB          # min 1MB
> Given 96GB of RAM, you could set this up to 4800MB
>> checkpoint_segments = 70                # in logfile segments, min 1,
>> 16MB each
>> effective_cache_size = 50000MB
> Given 96GB of RAM, you could set this up to 80GB
>>
>
> Hope it can help.
>
> Giuseppe.
>
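
For reference, here is a minimal postgresql.conf sketch that pulls together the values suggested in this thread. The numbers assume the 96GB RAM machine described above and are only a starting point; note that these parameters tune the server itself, not pg_dump, so they do not change pg_dump's own per-process memory usage.

    max_connections = 100           # PostgreSQL default; raise only if you truly need more sessions
    shared_buffers = 8GB            # common upper bound recommended for 9.2-era servers
    temp_buffers = 64MB
    work_mem = 512MB                # per sort/hash operation, per session; multiply by connection count
    maintenance_work_mem = 4800MB   # roughly 5% of 96GB, as suggested above
    checkpoint_segments = 70        # 16MB each
    effective_cache_size = 80GB     # planner hint only; no memory is actually allocated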

--
Sergey Klochkov
klochkov@iqbuzz.ru

