Re: problems with large objects dump
| From | Tom Lane |
|---|---|
| Subject | Re: problems with large objects dump |
| Date | |
| Msg-id | 1269.1349993793@sss.pgh.pa.us |
| In reply to | Re: problems with large objects dump (Sergio Gabriel Rodriguez <sgrodriguez@gmail.com>) |
| Responses | Re: problems with large objects dump |
| List | pgsql-performance |
Sergio Gabriel Rodriguez <sgrodriguez@gmail.com> writes:
> I tried with PostgreSQL 9.2: the process, which used to take almost a day
> and a half, was significantly reduced to 6 hours; before it started failing,
> it used to take four hours. My question now is: how long should the backup
> take for a 200 GB database with 80% large objects?
It's pretty hard to say without knowing a lot more info about your system
than you provided. One thing that would shed some light is if you spent
some time finding out where the time is going --- is the system
constantly I/O busy, or is it CPU-bound, and if so in which process,
pg_dump or the connected backend?
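One minimal sketch of checking the backend side of that (my addition, not part of the original reply) is to poll pg_stat_activity while the dump runs; the column names below assume PostgreSQL 9.2, and filtering on application_name = 'pg_dump' assumes pg_dump reports that name, which it normally does via fallback_application_name:

```sql
-- Sketch: run repeatedly during the dump to see whether the backend serving
-- pg_dump is actively executing ('active') or mostly waiting on the client.
-- Assumes 9.2 column names (pid, state, waiting; 'waiting' was replaced by
-- wait_event in 9.6).
SELECT pid,
       application_name,
       state,
       waiting,                          -- true if blocked on a lock
       now() - query_start AS running_for,
       query
FROM   pg_stat_activity
WHERE  application_name = 'pg_dump';     -- assumes pg_dump reports this name
```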
Also, how many large objects is that? (If you don't know already,
"select count(*) from pg_largeobject_metadata" would tell you.)
regards, tom lane