Re: Regarding db dump with Fc taking very long time to completion
| From | Luca Ferrari |
|---|---|
| Subject | Re: Regarding db dump with Fc taking very long time to completion |
| Date | |
| Msg-id | CAKoxK+504bWyW_CnJ_vdyfMAgSstHVtPQvnB0oMnE9nVjaLn8w@mail.gmail.com |
| In reply to | Regarding db dump with Fc taking very long time to completion (Durgamahesh Manne <maheshpostgres9@gmail.com>) |
| Responses | Re: Regarding db dump with Fc taking very long time to completion |
| List | pgsql-general |
On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne <maheshpostgres9@gmail.com> wrote:

> Logical dump of that table is taking more than 7 hours to be completed
>
> I need to reduce to dump time of that table that has 88GB in size

Good luck! I see two possible solutions to the problem:
1) use a physical backup and switch to incremental backups (e.g., pgbackrest);
2) partition the table and back up single pieces, if possible (constraints?), keeping in mind it will become harder to maintain (adding partitions, and so on).

Are all of the 88 GB written during a bulk process? I guess not, so with partitioning you could avoid locking the whole dataset and reduce contention (and thus time).

Luca
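[Editor's note: a minimal sketch of the partition-and-dump idea suggested above. The table name `big_table`, the partition key `created_at`, and the monthly ranges are assumptions for illustration only; the thread does not show the actual schema.]

```sql
-- Hypothetical schema: partition the large table by month so that
-- old, static partitions only ever need to be dumped once.
CREATE TABLE big_table_part (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (created_at);

-- One partition per month (names and ranges are placeholders).
CREATE TABLE big_table_2019_07 PARTITION OF big_table_part
    FOR VALUES FROM ('2019-07-01') TO ('2019-08-01');
CREATE TABLE big_table_2019_08 PARTITION OF big_table_part
    FOR VALUES FROM ('2019-08-01') TO ('2019-09-01');

-- One-time migration of the existing rows into the partitioned table.
INSERT INTO big_table_part SELECT * FROM big_table;
```

Since each partition is an ordinary table, it can be dumped on its own, e.g. `pg_dump -Fc -t big_table_2019_08 dbname -f big_table_2019_08.dump`, so only the partitions that actually received writes need to be re-dumped on each backup run.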