Large PG_DUMPs going into memory?

From: Mike Rogers
Subject: Large PG_DUMPs going into memory?
Date:
Msg-id: OE29d92bXCH5n00zepD00003ac2@hotmail.com
In response to: Patch for memory leaks in index scan  (Dmitry Tkach <dmitry@openratings.com>)
Responses: Re: Large PG_DUMPs going into memory?  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-bugs
I am dumping a rather large table (about 895,000 rows) from a PostgreSQL database.
Running 'pg_dump -d database -u --table=tbl -f ./tbl.sql' produces a 425MB file.
The problem is that whether the database is remote or local [I have tried
running it on the DB server itself with the same result], pg_dump takes up a good
half gigabyte of RAM for the duration of the dump.  Why does it load all of
this into memory on the client machine rather than write it out as it comes in?
On one server, this has caused the machine to swap out a good 250MB to disk,
running up the system load to very high numbers.  The database server seems
to serve it as any other request, but the client server that is running
pg_dump seems to be struggling.
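    Would the COPY-format path behave differently?  A sketch of what I mean,
with 'database' and 'tbl' standing in for the real names:

    # pg_dump in its default mode (no -d) writes COPY data rather than INSERT statements
    pg_dump --table=tbl -f ./tbl.copy.sql database

    # or a straight COPY through psql, redirected to a file
    psql -c "COPY tbl TO STDOUT" database > ./tbl.copy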
    My main question is not so much that it does this, but why it is done this way.
Isn't it more efficient to dump it out to the file or STDOUT (if -f isn't
specified) rather than load the entire result set into memory?  It's not
sorting it or anything.
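    By "write it out as it comes in" I mean something along the lines of a
cursor-based loop that only ever holds one batch of rows on the client.  This is
just a rough sketch of the idea, not taken from the pg_dump source, and the batch
size is arbitrary:

    psql database <<'SQL'
    BEGIN;
    DECLARE dump_cur CURSOR FOR SELECT * FROM tbl;
    -- repeat the FETCH until it returns zero rows, writing each batch out as it arrives
    FETCH 1000 FROM dump_cur;
    CLOSE dump_cur;
    COMMIT;
    SQL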

Any help would be appreciated.
--
Mike
