Re: Fastest way to duplicate a quite large database

From: Louis Battuello
Subject: Re: Fastest way to duplicate a quite large database
Date:
Msg-id: B4335D14-F696-42CF-9FBB-69D1717D5C43@etasseo.com
In reply to: Re: Fastest way to duplicate a quite large database  (Edson Richter <edsonrichter@hotmail.com>)
List: pgsql-general
> On Apr 12, 2016, at 10:51 AM, Edson Richter <edsonrichter@hotmail.com> wrote:
>
> Same machine, same cluster - just different database name.
>
> Best regards,
>
> Edson Carlos Ericksson Richter
>
> On 12/04/2016 11:46, John R Pierce wrote:
>> On 4/12/2016 7:25 AM, Edson Richter wrote:
>>>
>>> I have a database "Customer" with about 60Gb of data.
>>> I know I can backup and restore, but this seems too slow.
>>>
>>> Is there any other option to duplicate this database as "CustomerTest" as fast as possible (even faster than backup/restore)? Better if done in one operation (something like "copy database A to B").
>>> I would like to run this every day, overnight, with minimal impact, to prepare a test environment based on production data.


Not sure how fast “fast” is for your system. You could try:

create database customer_test with template customer;

I’m able to duplicate a 20 GB database in a couple of minutes with the above command.

A couple of caveats:

1. No active connections to customer are allowed during the create.
2. You’ll likely have to recreate the search_path setting and reissue CONNECT grants on the newly created database.
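
Putting the pieces together, a nightly refresh might look like the sketch below. It must run from a maintenance connection to some other database (e.g. postgres), and the names customer_test and test_role are illustrative, not from the thread; the termination step handles caveat 1, the trailing statements caveat 2.

```sql
-- Caveat 1: the template database must have no other active connections,
-- so forcibly end any sessions still attached to "customer".
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE datname = 'customer';

-- Discard yesterday's copy, then clone at the file level; this is what
-- makes it much faster than dump/restore.
DROP DATABASE IF EXISTS customer_test;
CREATE DATABASE customer_test WITH TEMPLATE customer;

-- Caveat 2: per-database settings and database-level grants are not
-- carried over by the template copy, so reapply them.
ALTER DATABASE customer_test SET search_path = public;
GRANT CONNECT ON DATABASE customer_test TO test_role;
```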


