Discussion: pg_dump fatal error


pg_dump fatal error

From: "drum.lucas@gmail.com"
Date:
Hi guys

I'm doing a copy (pg_dump) from a slave server to a test server.
The DB size is 1.7 TB, and I get the error at 1.4 TB.

Command:
ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump --exclude-table-data='junk.*' --format=custom db_live" | /usr/pgsql-9.2/bin/pg_restore --dbname=uat_temp --exit-on-error

Error:
|postgres|pg_restore| CONTEXT:  COPY ja_feedlog, line 1
STATEMENT:  COPY ja_feedlog (id, clientid, mobiuserid, customerid, invoiceid, description, gtime, jobid, log_type, serialised_data, push_status, requestorid, the_geom, admin_read, visitid) FROM stdin;

LOG:  could not send data to client: Broken pipe
FATAL:  connection to client lost 

What can cause that kind of error? I've tried more than once, actually.

Thanks
Lucas 

Re: pg_dump fatal error

From: "drum.lucas@gmail.com"
Date:

ERROR:  unexpected message type 0x58 during COPY from stdin

On 12 February 2016 at 07:30, drum.lucas@gmail.com <drum.lucas@gmail.com> wrote:
[...]

Re: pg_dump fatal error

From: Tom Lane
Date:
"drum.lucas@gmail.com" <drum.lucas@gmail.com> writes:
> I'm doing a copy (pg_dump) from a slave server to a test server.
> The DB size is 1.7 TB, and I get the error at 1.4 TB.

Since you're using ssh, I wonder if this is a renegotiation problem.
Try changing your sshd configuration to disable renegotiation.

            regards, tom lane
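
For reference, renegotiation in OpenSSH is controlled by the RekeyLimit
option, documented in ssh_config(5) for the client and (on newer OpenSSH
releases) in sshd_config(5) for the server.  A rough sketch of such a
change; the 4G threshold is only an illustrative value:

# client side: raise the rekey threshold for this transfer only
ssh -o RekeyLimit=4G postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump --exclude-table-data='junk.*' --format=custom db_live" | /usr/pgsql-9.2/bin/pg_restore --dbname=uat_temp --exit-on-error

# server side: in sshd_config on servidorslave01, then reload sshd
RekeyLimit 4G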


Re: pg_dump fatal error

From: Steve Crawford
Date:
I believe this error can occur when the input to pg_restore terminates or is truncated unexpectedly. I'm going to guess a failure in the SSH piece.
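
One way to test that guess is to surface the exit status of each stage of
the pipeline; a bash-specific sketch (pipefail and PIPESTATUS are bash
features):

set -o pipefail
ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump --exclude-table-data='junk.*' --format=custom db_live" | /usr/pgsql-9.2/bin/pg_restore --dbname=uat_temp --exit-on-error
# PIPESTATUS holds the exit status of every stage of the last pipeline
echo "ssh/pg_dump: ${PIPESTATUS[0]}, pg_restore: ${PIPESTATUS[1]}"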

Cheers,
Steve

On Thu, Feb 11, 2016 at 10:33 AM, drum.lucas@gmail.com <drum.lucas@gmail.com> wrote:

ERROR:  unexpected message type 0x58 during COPY from stdin

[...]


Re: pg_dump fatal error

From: "drum.lucas@gmail.com"
Date:


On 12 February 2016 at 07:53, Steve Crawford <scrawford@pinpointresearch.com> wrote:
I believe this error can occur when the input to pg_restore terminates or is truncated unexpectedly. I'm going to guess a failure in the SSH piece.

[...]

Tom/Steve...

Should I change it on both servers?
I couldn't find anything about ssh renegotiation... do you guys have a link?

Cheers

Re: pg_dump fatal error

From: Pierre Chevalier Géologue
Date:
Hello,

What about trying to use a temporary file?

On 11/02/2016 19:30, drum.lucas@gmail.com wrote:
> I'm doing a copy (pg_dump) from a slave server to a test server.
> The DB size is 1.7 TB, and I get the error at 1.4 TB.
> Command:
>     ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump
>     --exclude-table-data='junk.*' --format=custom db_live"
>     | /usr/pgsql-9.2/bin/pg_restore --dbname=uat_temp --exit-on-error
>

Something like:

ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump --exclude-table-data='junk.*' --format=custom db_live" > bigdump
/usr/pgsql-9.2/bin/pg_restore --dbname=uat_temp --exit-on-error bigdump

(no guarantee at all, I'm just blindly guessing!)


I usually do pg_dumps using the plain SQL format: it allows you to debug
such nasty surprises.  It sometimes helps.  In your case, I would do
something like:

ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump --exclude-table-data='junk.*' db_live" > bigdump.sql
# just to have a look at the dump:
less bigdump.sql
/usr/pgsql-9.2/bin/psql --dbname=uat_temp -f bigdump.sql

À+
Pierre
--
____________________________________________________________________________
Pierre Chevalier
PChGEI: Pierre Chevalier Géologue Et Informaticien
     Mesté Duran
     32100 Condom
   Tél+fax  :    09 75 27 45 62
                 06 37 80 33 64
   Émail  :   pierrechevaliergeolCHEZfree.fr
   icq#   :   10432285
   jabber: pierre.chevalier1967@jabber.fr
   http://pierremariechevalier.free.fr/pierre_chevalier_geologue
____________________________________________________________________________


Re: pg_dump fatal error

From: Pierre Chevalier Géologue
Date:
Hi,

(sorry, I screwed up *again* and replied to Lucas only; Lucas, I hope you
don't mind that I bring this conversation back to the community list)


On 11/02/2016 20:34, drum.lucas@gmail.com wrote:
> Hi there!
>
> hmm...
> Yep... this time I'm going to do it differently.
>
> I'll run pg_dump on the local machine, then copy it over the network to
> the new server, and then restore...
>
> that should work :)

Yes, I think that's quite a safe way to go.
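
For the copy over the network, something resumable is worth having at this
size; a sketch assuming rsync is available (the destination host and path
are placeholders):

# --partial keeps a half-transferred file so an interrupted copy can resume
rsync --partial --progress bigdump.sql postgres@testserver01:/tmp/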


> Can I use /usr/pgsql-9.2/bin/pg_dump --exclude-table-data='junk.*' >
> bigdump.sql to write a BIG sql file?  Like 1.7 TB?

If you have enough disk space available and your filesystem allows you
to have big files (which I'm pretty sure it does), that should not be
any more difficult than writing a smaller file.
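
If disk space is a worry, a plain-format dump also compresses well; a
sketch, assuming gzip is installed on the dumping host:

# compress on the remote side so less data crosses the wire
ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump --exclude-table-data='junk.*' db_live | gzip" > bigdump.sql.gz
# restore by streaming the decompressed SQL into psql
zcat bigdump.sql.gz | /usr/pgsql-9.2/bin/psql --dbname=uat_temp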

I did similar tricks in the past, and I hit some bugs...  Most of them
were related to stupid stuff like a '\n' in a varchar, which was
misinterpreted at restore time, or some weird châràĉtêrs getting in the
way.  The way I dealt with this was to split the dump file into two
parts: the structure, and the data (all the COPY statements, or INSERTs,
depending).  I did that splitting with combinations of grep, tail,
head, etc.  Quite tedious, but it worked well.
Nowadays, there is an enhancement to pg_dump which lets you dump the
structure of the database and the tables' contents directly into
separate files.  This is *very* handy.

In your case, that would be:

# dump the structure (small file):
ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump -s --exclude-table-data='junk.*' db_live" > schema.sql
# dump the data (much bigger file):
ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump -a --exclude-table-data='junk.*' db_live" > data.sql
# just to have a look at the dumps:
less schema.sql
less data.sql
# restore the database structure:
/usr/pgsql-9.2/bin/psql --dbname=uat_temp -f schema.sql
# and now the data contents:
/usr/pgsql-9.2/bin/psql --dbname=uat_temp -f data.sql

Now, if you encounter more bugs while restoring, the error messages will
point you to the specific lines in the sql files.  If you wish, you may
split your data dump into several ones, for instance one per schema,
stuff like that.  It greatly eases the process of figuring out where the
restore went wrong.
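
The per-schema split can be done directly with pg_dump's -n/--schema
switch; a sketch (the schema names here are made up):

# one data dump per schema; repeat for each schema of interest
ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump -a -n public db_live" > data_public.sql
ssh postgres@servidorslave01 "/usr/pgsql-9.2/bin/pg_dump -a -n billing db_live" > data_billing.sql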

À+
Pierre


On 11/02/2016 20:57, Pierre Chevalier Géologue wrote:
> [...]



Re: pg_dump fatal error

From: Pierre Chevalier Géologue
Date:
(still trying to get this thread back on the list... sorry for the delay:
I had been overlooking list messages arriving in my main Inbox...)

On 14/02/2016 09:04, drum.lucas@gmail.com wrote:
> Thanks Pierre.
>
> I'm running pg_dump on the local machine and then I'll copy the file
> over the network...
> It's still copying, but once it's done and the whole process has been
> successful, I'll let you guys know.

Normally, the pg_dump -s covering just the structure should be very
fast to run, and so should its restore.  The other one, the pg_dump -a,
will generate a *much* larger output file and take *much* longer to
restore.

So it would be better to start with the structure: if any error shows up
at this stage, it will be faster (if not easier) to track down properly.
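
A sketch of that order, adding psql's ON_ERROR_STOP variable so the
structure load stops at the first error instead of ploughing on:

# load the structure first; abort on the first error
/usr/pgsql-9.2/bin/psql --dbname=uat_temp -v ON_ERROR_STOP=1 -f schema.sql
# only if that succeeds, load the data
/usr/pgsql-9.2/bin/psql --dbname=uat_temp -v ON_ERROR_STOP=1 -f data.sql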

À+
Pierre
--
____________________________________________________________________________
Pierre Chevalier
PChGEI: Pierre Chevalier Géologue Et Informaticien
Partenaire DALIBO
     Mesté Duran
     32100 Condom
   Tél+fax  :    09 75 27 45 62
                 06 37 80 33 64
   Émail  :   pierrechevaliergeolCHEZfree.fr
   icq#   :   10432285
   jabber: pierre.chevalier1967@jabber.fr
   http://pierremariechevalier.free.fr/pierre_chevalier_geologue
____________________________________________________________________________