Re: [GENERAL] pg_dump pg_restore hanging in CentOS for large data

From: Adrian Klaver
Subject: Re: [GENERAL] pg_dump pg_restore hanging in CentOS for large data
Date:
Msg-id: ca2a7c93-0ee0-a936-1a61-33ce2f4db57f@aklaver.com
In response to: [GENERAL] pg_dump pg_restore hanging in CentOS for large data  (Sridevi B <sridevi17@gmail.com>)
List: pgsql-general
On 03/17/2017 12:27 AM, Sridevi B wrote:
Ccing the list.
Please reply to the list also; it puts more eyes on the problem.

> Hi Adrian,
>
>  Sorry for delay. Please find my answers inline.
>
> Thanks,
> Sridevi
>
>
>
>
>
> On Thu, Mar 16, 2017 at 2:28 AM, Adrian Klaver
> <adrian.klaver@aklaver.com <mailto:adrian.klaver@aklaver.com>> wrote:
>
>     On 03/14/2017 09:48 AM, Sridevi B wrote:
>
>         Hi ,
>
>            I am facing an issue with backup/restore for data sizes
>         larger than *2 GB*. It works fine for *1 GB*.
>
>
>
>         Below are the details for issue:
>
>
>
>         Description:
>
>
>
>         The command pg_dump hangs at "saving large objects" and the
>         process gets terminated after some time.
>
>         The command pg_restore hangs at "executing BLOB" and gets
>         terminated after some time.
>
>


>     When you refer to BLOB do you mean large objects:
>
>     https://www.postgresql.org/docs/9.2/static/largeobjects.html
>     <https://www.postgresql.org/docs/9.2/static/largeobjects.html>
>
>     or something else? *[Sridevi] yes, internally it refers to large
>     objects*.
>
>

>         Expected: pg_dump/pg_restore should work for large data sizes,
>         up to at least 20 GB.
>


>
>     What data size are you talking about, the entire dump file or an
>     object in the file?

>     *[Sridevi] I am talking about the entire dump file, which is >3 GB
>     in size.*

>
>
>
>
>         PostgreSQL version number you are running: postgres92-9.2.9-1.x86_64
>
>          How you installed PostgreSQL:
>               Backup - Linux RHEL, installed using rpm.
>               Restore - CentOS 7.2, installed using yum.
>
>          Operating system and version:
>               Backup - Red Hat Enterprise Linux Server release 5.4 (Tikanga)
>               Restore - centos-release-7-2.1511.el7.centos.2.10.x86_64
>
>
>         What program you're using to connect to PostgreSQL:
>         pg_dump/pg_restore
>         using shell script
>
>

>     What are the scripts? *[Sridevi] - We are using Linux scripts that
>     start/stop the application processes during the postgres
>     backup/restore. The scripts also handle additional
>     application-specific details and internally invoke the postgres
>     backup and restore commands.*
>
>
>         Is there anything relevant or unusual in the PostgreSQL server
>         logs?:
>
>                   pg_dump verbose log: stuck after: pg_dump: saving
>         large objects
>
>                   pg_restore verbose log: stuck after: pg_restore:
>         restoring large objects
>
>                   Sometimes: pg_restore: processing item 4376515 BLOB 4993394
>                              pg_restore: executing BLOB 4993394
>
>         For questions about any kind of error:
>
>
>
>         What you were doing when the error happened / how to cause the
>         error:
>         Tried pg_dump using the split option and restore. The same
>         issue still exists.
>
>

>     Can you explain split and restore?

>
>     *[Sridevi]* The split option of pg_dump splits the dump output into
> multiple files based on size; on restore, the files are concatenated
> back together and the data is restored.
>  I am referring to the link below for split and restore.
>
>     http://www.postgresql-archive.org/large-database-problems-with-pg-dump-and-pg-restore-td3236910.html
>
>  I tried the commands below:
> *Backup:* /opt/postgres/9.2/bin/pg_dump -v -c -h localhost -p 5432 -U
> ${db_user} -w -Fc ${db_name} | split -b 1000m -
> /opt/backups/${dump_file_name}
> *Restore:* cat /opt/backups/${dump_file_name}* |
> /opt/postgres/9.2/bin/pg_restore | /opt/postgres/9.2/bin/psql
> ${db_name} -h localhost -p 5432 -U ${db_user} -w
>  The restore gets stuck at the error message below and the process is
> terminated.
>                   could not send data to client: Broken pipe
>                   connection to client lost



So all of this is happening on the same host, correct?

I do not see anything that is large object specific in the error above.

What is the error message you get at the terminal when you do not use
the split/cat method?
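For comparison, here is a sketch of the same backup/restore without the split/cat pipeline (the paths and the ${db_user}/${db_name} variables are taken from the commands you posted; adjust as needed):

```shell
# Dump straight to one custom-format archive file, no split:
/opt/postgres/9.2/bin/pg_dump -v -Fc -h localhost -p 5432 \
    -U "${db_user}" -w -f /opt/backups/dump_file "${db_name}"

# Let pg_restore connect directly to the target database (-d) instead
# of piping its SQL output through psql:
/opt/postgres/9.2/bin/pg_restore -v -c -h localhost -p 5432 \
    -U "${db_user}" -w -d "${db_name}" /opt/backups/dump_file
```

With -d, pg_restore issues the commands over its own connection, so there is one less pipe in the chain that can break.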

Have you checked the ulimit settings as suggested by Tom Lane?
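For what it's worth, the limits can be checked from the same shell the backup/restore scripts run in, since the postgres processes they launch inherit them:

```shell
# Show all per-process limits for the current shell.
ulimit -a

# The ones most often implicated with multi-GB dumps:
ulimit -f   # maximum file size -- a 2 GB cap here would match your symptom
ulimit -v   # maximum virtual memory
```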

>
> The EXACT TEXT of the error message you're getting, if there is one:
> (Copy and paste the message to the email, do not send a screenshot)
>
> -          No specific error; pg_dump/pg_restore get terminated for
> data >2 GB
>
>
> Regards,
>
> Sridevi
>
>
>
>     --
>     Adrian Klaver
>     adrian.klaver@aklaver.com <mailto:adrian.klaver@aklaver.com>
>
>


--
Adrian Klaver
adrian.klaver@aklaver.com


In the pgsql-general list, by date sent:

Previous
From: Alexander Farber
Date:
Message: [GENERAL] Generating JSON-encoded list of object out of joined tables
Next
From: Steve Clark
Date:
Message: [GENERAL] psql - looking in wrong place for socket