Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

From Sergey Klochkov
Subject Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects
Date
Msg-id 524C3163.1050502@iqbuzz.ru
In reply to Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects  (Alejandro Brust <alejandrob@pasteleros.org.ar>)
List pgsql-admin
I tried it out. It did not make any difference.

On 01.10.2013 23:30, Alejandro Brust wrote:
> Did you perform any vacuumdb / reindexdb before the pg_dump?
>
>
> On 01/10/2013 09:49, Magnus Hagander wrote:
>> On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov <klochkov@iqbuzz.ru> wrote:
>>> Hello All,
>>>
>>> While trying to back up a database of relatively modest size (160 GB), I ran
>>> into the following issue:
>>>
>>> When I run
>>> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>>>
>>> File /path/to/mydb.dmp does not appear (yes, I've checked permissions and so
>>> on). pg_dump just consumes memory until it eats up all available RAM
>>> (96 GB total on the server, >64 GB available) and is killed by the OOM
>>> killer.
>>>
>>> According to pg_stat_activity, pg_dump runs the following query
>>>
>>> SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
>>> AS rolname, lomacl FROM pg_largeobject_metadata
>>>
>>> until it is killed.
>>>
>>> strace shows that pg_dump is constantly reading a large amount of data from
>>> a UNIX socket. I suspect that it is the result of the above query.
>>>
>>> There are >300000000 large objects in the database. Please don't ask me why.
>>>
>>> I tried googling this and found mentions of pg_dump being killed by the OOM
>>> killer, but I failed to find anything related to such a huge number of
>>> large objects.
>>>
>>> Is there any method of working around this issue?
>> I think this problem comes from the fact that pg_dump treats each
>> large object as its own item. See getBlobs(), which allocates a
>> BlobInfo struct for each LO (and a DumpableObject if there are any,
>> but that's just one).
>>
>> I assume the query (from that file):
>> SELECT oid, lomacl FROM pg_largeobject_metadata
>>
>> returns 300000000 rows, which are then looped over?
>>
>> I ran into a similar issue a few years ago with a client using a
>> 32-bit version of pg_dump, and worked around it by moving to
>> 64-bit. Unfortunately, I did not have time to look at the underlying
>> issue.
>>
>>
>
>
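The per-object allocation pattern Magnus describes can be made concrete
with a minimal libpq sketch. This is an illustration modeled on pg_dump's
getBlobs()/BlobInfo, not the literal 9.2 source; error handling is omitted
and the helper name get_blobs is hypothetical:

#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

/* One in-memory entry per large object, as in pg_dump's BlobInfo.
 * Field layout is simplified; the real struct differs in detail. */
typedef struct
{
    unsigned int oid;   /* large object OID */
    char *rolname;      /* owner role name, copied per row */
    char *lomacl;       /* ACL string, copied per row */
} BlobInfo;

static BlobInfo *
get_blobs(PGconn *conn, int *nblobs)
{
    /* PQexec materializes the whole result set on the client, so the
     * PGresult alone holds all 300000000 rows before the loop starts. */
    PGresult *res = PQexec(conn,
                           "SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles "
                           "WHERE oid = lomowner) AS rolname, lomacl "
                           "FROM pg_largeobject_metadata");
    int ntups = PQntuples(res);

    /* One struct per row, plus two copied strings each, all kept in
     * memory at once for the duration of the dump. */
    BlobInfo *binfo = malloc((size_t) ntups * sizeof(BlobInfo));
    for (int i = 0; i < ntups; i++)
    {
        binfo[i].oid = (unsigned int) strtoul(PQgetvalue(res, i, 0), NULL, 10);
        binfo[i].rolname = strdup(PQgetvalue(res, i, 1));
        binfo[i].lomacl = strdup(PQgetvalue(res, i, 2));
    }
    PQclear(res);
    *nblobs = ntups;
    return binfo;
}

Even at a conservative ~100-200 bytes per entry (struct, two copied
strings, plus the row buffered inside the PGresult), 300000000 large
objects works out to tens of GB, which matches the OOM behavior above and
explains why a 32-bit pg_dump exhausts its 4 GB address space far sooner
than a 64-bit build exhausts physical RAM.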

--
Sergey Klochkov
klochkov@iqbuzz.ru


