Re: parallel dump fails to dump large tables

From: Shanker Singh
Subject: Re: parallel dump fails to dump large tables
Date:
Msg-id: 961471F4049EF94EAD4D0165318BD88162590269@Corp-MBXE3.iii.com
In reply to: Re: parallel dump fails to dump large tables  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
If I exclude the large tables (>30GB) from the parallel dump, it does succeed, and a normal dump also succeeds. So I am
not sure if the network is at fault. Is there any other option that might help make parallel dump usable for large
tables?
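
For reference, the kind of invocation being compared is roughly the following; the job count, output directory, and the
excluded table (taken from the log quoted below) are placeholders:

    # parallel directory-format dump, skipping the large table -- this completes
    pg_dump -Fd -j 8 --exclude-table=iiirecord.varfield -f /backup/iii_dump iii

    # the same dump without the exclusion -- aborts with
    # "pg_dump: [parallel archiver] a worker process died unexpectedly"
    pg_dump -Fd -j 8 -f /backup/iii_dump iii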

thanks
shanker

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Saturday, February 14, 2015 9:00 AM
To: rod@iol.ie
Cc: Shanker Singh; pgsql-general@postgresql.org
Subject: Re: [GENERAL] parallel dump fails to dump large tables

"Raymond O'Donnell" <rod@iol.ie> writes:
> On 14/02/2015 15:42, Shanker Singh wrote:
>> Hi,
>> I am having problem using parallel pg_dump feature in postgres
>> release 9.4. The size of the table is large(54GB). The dump fails
>> with the
>> error: "pg_dump: [parallel archiver] a worker process died
>> unexpectedly". After this error the pg_dump aborts. The error log
>> file gets the following message:
>>
>> 2015-02-09 15:22:04 PST [8636]: [2-1]
>> user=pdroot,db=iii,appname=pg_dump
>> STATEMENT:  COPY iiirecord.varfield (id, field_type_tag, marc_tag,
>> marc_ind1, marc_ind2, field_content, field_group_id, occ_num,
>> record_id) TO stdout;
>> 2015-02-09 15:22:04 PST [8636]: [3-1]
>> user=pdroot,db=iii,appname=pg_dump
>> FATAL:  connection to client lost

> There's your problem - something went wrong with the network.

I'm wondering about SSL renegotiation failures as a possible cause of the disconnect --- that would explain why it only
happens on large tables.

            regards, tom lane
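
If SSL renegotiation is indeed the cause, a couple of things worth trying on 9.4 are disabling renegotiation for the
dump connection, or bypassing SSL entirely; a rough sketch (job count, paths, and connection details are placeholders):

    # check the server's current renegotiation threshold
    psql -d iii -c 'SHOW ssl_renegotiation_limit;'

    # disable renegotiation just for the dump session
    # (the setting exists in 9.4; it was removed in later releases)
    PGOPTIONS='-c ssl_renegotiation_limit=0' pg_dump -Fd -j 8 -f /backup/iii_dump iii

    # or skip SSL for the dump if the network path is trusted
    pg_dump -Fd -j 8 -f /backup/iii_dump "dbname=iii sslmode=disable"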

