On 7/26/22 9:29 AM, Ron wrote:
> On 7/26/22 10:22, Adrian Klaver wrote:
>> On 7/26/22 08:15, Rama Krishnan wrote:
>>> Hi Adrian
>>>
>>>
>>> What is size of table?
>>>
> >>> I'm having two databases, for example:
>>>
> >>> 01. Cricket 320G
> >>> 02. badminton 250G
>>
>> So you are talking about an entire database not a single table, correct?
>
> In a private email, he said that this is what he's trying:
> pg_dump -h endpoint -U postgres -Fd -d cricket | aws cp -
> s3://dump/cricket.dump
>
> It failed for obvious reasons.
From what I gather it did not fail; it just took a long time. I'm not
sure adding -j to the above will improve things, as I'm pretty sure the
choke point is still going to be aws cp.
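For what it's worth, -j only works with the directory format (-Fd), and
-Fd writes multiple files into a directory, so it cannot be streamed to
stdout at all. If streaming is really wanted, it would have to use the
custom format (-Fc), which produces a single stream. A sketch, treating
"endpoint" and the bucket path as placeholders:

```shell
# Stream a compressed custom-format dump straight to S3.
# -Fc (custom format) emits one stream, so it can go to stdout;
# -Fd (directory format) cannot be piped this way.
pg_dump -h endpoint -U postgres -Fc -d cricket \
  | aws s3 cp - s3://dump/cricket.dump
```

Note this is still a single-threaded dump and a single upload stream.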
Rama, if you have the space, would it not be better to dump locally
using -Fc to get a compressed format and then upload that to s3 as a
separate process?
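Something along these lines (the hostname, local path, and bucket path
here are placeholders):

```shell
# Step 1: dump locally in compressed custom format (-Fc).
pg_dump -h endpoint -U postgres -Fc -d cricket -f /backup/cricket.dump

# Step 2: once the dump completes, upload to S3 as a separate step.
aws s3 cp /backup/cricket.dump s3://dump/cricket.dump
```

Splitting it this way also makes -j usable if you want it: dump to a
directory with -Fd -j N, then upload the directory with aws s3 sync.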
--
Adrian Klaver
adrian.klaver@aklaver.com