Thread: [GENERAL] pg_dump pg_restore hanging in CentOS for large data

[GENERAL] pg_dump pg_restore hanging in CentOS for large data

From: Sridevi B
Date:

Hi,

   I am facing an issue with backup/restore for data sizes of more than 2GB. It works fine for 1GB.

 

Below are the details of the issue:

 
Description:

 

The pg_dump command hangs at "saving large objects" and the process gets terminated after some time.

 

The pg_restore command hangs at "executing BLOB" and gets terminated after some time.

Expected: pg_dump/pg_restore should work for large data, at a minimum up to 20GB.

 

PostgreSQL version number you are running: postgres92-9.2.9-1.x86_64

 How you installed PostgreSQL:

      Backup - Linux RHEL, installed using rpm.
      Restore - CentOS 7.2, installed using yum.

 Operating system and version:

      Backup - Red Hat Enterprise Linux Server release 5.4 (Tikanga)
      Restore - centos-release-7-2.1511.el7.centos.2.10.x86_64
 
What program you're using to connect to PostgreSQL: pg_dump/pg_restore invoked from a shell script
 
Is there anything relevant or unusual in the PostgreSQL server logs?:
      pg_dump verbose log: stuck after: pg_dump: saving large objects
      pg_restore verbose log: stuck after: pg_restore: restoring large objects
      Sometimes: pg_restore: pg_restore: processing item 4376515 BLOB 4993394
                 pg_restore: executing BLOB 4993394
 
For questions about any kind of error:
 
What you were doing when the error happened / how to cause the error: Tried pg_dump with the split option and then restore. The same issue still exists.
 
The EXACT TEXT of the error message you're getting, if there is one: (Copy and paste the message to the email, do not send a screenshot)

- No specific error; pg_dump/pg_restore gets terminated for data >2GB


Regards,

Sridevi

Re: [GENERAL] pg_dump pg_restore hanging in CentOS for large data

From: Adrian Klaver
Date:
On 03/14/2017 09:48 AM, Sridevi B wrote:
> Hi,
>
>    I am facing an issue with backup/restore for data sizes of more than
> 2GB. It works fine for 1GB.
>
>
>
> Below are the details of the issue:
>
>
>
> Description:
>
>
>
> The pg_dump command hangs at "saving large objects" and the process
> gets terminated after some time.
>
>
>
> The pg_restore command hangs at "executing BLOB" and gets
> terminated after some time.

When you refer to BLOB do you mean large objects:

https://www.postgresql.org/docs/9.2/static/largeobjects.html

or something else?
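
For a quick check, the number and total size of large objects in the
source database can be queried directly - a sketch, assuming a database
named mydb and a superuser role; substitute your own names:

    # One row per large object in pg_largeobject_metadata (9.0+).
    psql -h localhost -p 5432 -U postgres -d mydb \
         -c "SELECT count(*) FROM pg_largeobject_metadata;"

    # Rough total volume of the large object data.
    psql -h localhost -p 5432 -U postgres -d mydb \
         -c "SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));"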

>
> Expected: pg_dump/pg_restore should work for large data, at a minimum up to 20GB.

What data size are you talking about, the entire dump file or an object
in the file?

>
>
>
> PostgreSQL version number you are running: postgres92-9.2.9-1.x86_64
>
>  How you installed PostgreSQL:
>
>       Backup - Linux RHEL, installed using rpm.
>
>              Restore - CentOS 7.2, installed using yum.
>
>  Operating system and version:
>
>       Backup - Red Hat Enterprise Linux Server release 5.4 (Tikanga)
>
>              Restore - centos-release-7-2.1511.el7.centos.2.10.x86_64
>
>
>
> What program you're using to connect to PostgreSQL: pg_dump/pg_restore
> invoked from a shell script

What are the scripts?

>
> Is there anything relevant or unusual in the PostgreSQL server logs?:
>
>           pg_dump verbose log: stuck after: pg_dump: saving large objects
>
>           pg_restore verbose log: stuck after: pg_restore: restoring large objects
>
>           Sometimes: pg_restore: pg_restore: processing item 4376515 BLOB 4993394
>
>                      pg_restore: executing BLOB 4993394
>
> For questions about any kind of error:
>
>
>
> What you were doing when the error happened / how to cause the error:
> Tried pg_dump with the split option and then restore. The same issue still exists.

Explain split and restore?

>
>
>
> The EXACT TEXT of the error message you're getting, if there is one:
> (Copy and paste the message to the email, do not send a screenshot)
>
> - No specific error; pg_dump/pg_restore gets terminated for
> data >2GB
>
>
> Regards,
>
> Sridevi
>


--
Adrian Klaver
adrian.klaver@aklaver.com


Re: [GENERAL] pg_dump pg_restore hanging in CentOS for large data

From: Tom Lane
Date:
Sridevi B <sridevi17@gmail.com> writes:
>    I am facing an issue with backup/restore for data sizes of more than 2GB.

I'm suspicious that you're running into some limit external to postgres
itself.  A couple of likely possibilities:

* maybe you've got a ulimit active on the server or the pg_dump or
pg_restore process.

* if you're using SSL encryption on the connection, you might be running
into known bugs in openssl's management of renegotiation.  This is made
more likely by the fact that you're running such an old OS release (and
PG 9.2.9 is not exactly up to date either).  We changed the default
value of ssl_renegotiation_limit to zero around 9.2.14, and you might
want to set that explicitly in your 9.2.9 server.
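
As a quick way to check the first point: the limits a running process is
actually under can be read from /proc - a sketch, Linux-specific,
assuming pg_dump is currently running:

    # Limits of the shell that launches pg_dump/pg_restore.
    ulimit -a

    # Effective limits of an already-running pg_dump; the
    # "Max file size" row is the one to watch for a hard 2GB cutoff.
    cat /proc/$(pgrep -n pg_dump)/limits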

            regards, tom lane


Re: [GENERAL] pg_dump pg_restore hanging in CentOS for large data

From: Adrian Klaver
Date:
On 03/17/2017 12:27 AM, Sridevi B wrote:
Ccing the list.
Please reply to the list as well; it puts more eyes on the problem.

> Hi Adrian,
>
>  Sorry for the delay. Please find my answers inline.
>
> Thanks,
> Sridevi
>
>
>
>
>
> On Thu, Mar 16, 2017 at 2:28 AM, Adrian Klaver
> <adrian.klaver@aklaver.com <mailto:adrian.klaver@aklaver.com>> wrote:
>
>     On 03/14/2017 09:48 AM, Sridevi B wrote:
>
>         Hi,
>
>            I am facing an issue with backup/restore for data sizes of more than
>         2GB. It works fine for 1GB.
>
>
>
>         Below are the details of the issue:
>
>
>
>         Description:
>
>
>
>         The pg_dump command hangs at "saving large objects" and the
>         process gets terminated after some time.
>
>
>
>         The pg_restore command hangs at "executing BLOB" and gets
>         terminated after some time.
>
>


>     When you refer to BLOB do you mean large objects:
>
>     https://www.postgresql.org/docs/9.2/static/largeobjects.html
>
>     or something else? [Sridevi] Yes, internally it refers to large
>     objects.
>
>

>         Expected: pg_dump/pg_restore should work for large data, at a
>         minimum up to 20GB.
>

>
>     What data size are you talking about, the entire dump file or an
>     object in the file?

>     [Sridevi] I am talking about the entire dump file,
>     which is of size >3GB.

>
>
>
>
>         PostgreSQL version number you are running: postgres92-9.2.9-1.x86_64
>
>          How you installed PostgreSQL:
>
>               Backup - Linux RHEL, installed using rpm.
>
>                      Restore - CentOS 7.2, installed using yum.
>
>          Operating system and version:
>
>               Backup - Red Hat Enterprise Linux Server release 5.4 (Tikanga)
>
>                      Restore - centos-release-7-2.1511.el7.centos.2.10.x86_64
>
>
>         What program you're using to connect to PostgreSQL:
>         pg_dump/pg_restore
>         invoked from a shell script
>
>

>     What are the scripts? [Sridevi] We are using Linux shell scripts,
>     which start/stop the application processes during the Postgres
>     backup/restore. The scripts also take care of additional details
>     specific to the application, and internally invoke the Postgres
>     backup and restore commands.


>
>
>         Is there anything relevant or unusual in the PostgreSQL server
>         logs?:
>
>                   pg_dump verbose log: stuck after: pg_dump: saving large objects
>
>                   pg_restore verbose log: stuck after: pg_restore: restoring large objects
>
>                   Sometimes: pg_restore: pg_restore: processing item 4376515 BLOB 4993394
>
>                               pg_restore: executing BLOB 4993394
>
>         For questions about any kind of error:
>
>
>
>         What you were doing when the error happened / how to cause the
>         error:
>         Tried pg_dump with the split option and then restore. The same
>         issue still exists.
>
>

>     Explain split and restore?

>
>     [Sridevi] The split option pipes the pg_dump output through split,
> which breaks the dump into multiple files based on size; the restore
> then concatenates all the files and restores the data.
>  I am referring to the link below for split and restore:
>
>     http://www.postgresql-archive.org/large-database-problems-with-pg-dump-and-pg-restore-td3236910.html
>
>  I tried the commands below:
> Backup: /opt/postgres/9.2/bin/pg_dump -v -c -h localhost -p 5432 -U
> ${db_user} -w -Fc ${db_name} | split -b 1000m -
> /opt/backups/${dump_file_name}
> Restore: cat /opt/backups/${dump_file_name}* |
> /opt/postgres/9.2/bin/pg_restore | /opt/postgres/9.2/bin/psql
> ${db_name} -h localhost -p 5432 -v -U ${db_user} -w
>  The restore gets stuck at the error message below and the process gets
> terminated:
>                   could not send data to client: Broken pipe
>                   connection to client lost



So all this is happening on the same host, correct?

I do not see anything that is large object specific in the error above.

What is the error message you get at the terminal when you do not use
the split/cat method?

Have you checked the ulimit settings as suggested by Tom Lane?
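
If it helps isolate the failing stage: running the pipeline with
pipefail and stderr captured will show which command actually dies - a
sketch, assuming the same paths and variables as the commands above:

    #!/bin/bash
    # Fail the pipeline if any stage fails, not just the last one.
    set -o pipefail

    cat /opt/backups/${dump_file_name}* \
        | /opt/postgres/9.2/bin/pg_restore 2>restore.err \
        | /opt/postgres/9.2/bin/psql ${db_name} -h localhost -p 5432 \
              -U ${db_user} -w 2>psql.err

    # PIPESTATUS holds the exit code of every stage: cat, pg_restore, psql.
    echo "exit codes: ${PIPESTATUS[@]}"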

>
> The EXACT TEXT of the error message you're getting, if there is one:
> (Copy and paste the message to the email, do not send a screenshot)
>
> - No specific error; pg_dump/pg_restore gets terminated for
> data >2GB
>
>
> Regards,
>
> Sridevi
>
>
>
>     --
>     Adrian Klaver
>     adrian.klaver@aklaver.com <mailto:adrian.klaver@aklaver.com>
>
>


--
Adrian Klaver
adrian.klaver@aklaver.com


Re: [GENERAL] pg_dump pg_restore hanging in CentOS for large data

From: Sridevi B
Date:
Hi Tom,
Please find my answers below.

   -> ulimit: Verified in /etc/security/limits.conf - it is configured as 20000. Should I increase the limit and try again? Please let me know.
        root - nofile 20000
        postgres - nofile 20000
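
Note that nofile only caps the number of open file descriptors; a hard
cutoff at 2GB would come from the file-size limit instead. A sketch of
checking both in the shell that runs the backup script:

    ulimit -n   # open files (the nofile setting above)
    # Max file size, in 1024-byte blocks; exactly 2097152 would be a
    # 2GB cap, while "unlimited" rules this possibility out.
    ulimit -f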
       
-> ssl_renegotiation_limit: It is set to 512MB, but it is commented out in postgresql.conf.
 We are using CiscoSSL in our product.

# - Security and Authentication -

#authentication_timeout = 1min  # 1s-600s
#ssl = off    # (change requires restart)
#ssl_ciphers = 'ALL:!' # allowed SSL ciphers
     # (change requires restart)
#ssl_renegotiation_limit = 512MB
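
Following Tom's suggestion, a minimal sketch of setting it explicitly -
the data directory path is illustrative; this parameter takes effect on
a reload, so no restart is needed:

    # In postgresql.conf, uncomment and set:
    #   ssl_renegotiation_limit = 0   # disable SSL renegotiation entirely

    # Reload the configuration and verify:
    /opt/postgres/9.2/bin/pg_ctl -D /var/lib/pgsql/9.2/data reload
    psql -h localhost -p 5432 -U postgres -c "SHOW ssl_renegotiation_limit;"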


Thanks,
Sridevi


On Thu, Mar 16, 2017 at 3:18 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Sridevi B <sridevi17@gmail.com> writes:
>    I am facing an issue with backup/restore for data sizes of more than 2GB.

I'm suspicious that you're running into some limit external to postgres
itself.  A couple of likely possibilities:

* maybe you've got a ulimit active on the server or the pg_dump or
pg_restore process.

* if you're using SSL encryption on the connection, you might be running
into known bugs in openssl's management of renegotiation.  This is made
more likely by the fact that you're running such an old OS release (and
PG 9.2.9 is not exactly up to date either).  We changed the default
value of ssl_renegotiation_limit to zero around 9.2.14, and you might
want to set that explicitly in your 9.2.9 server.

                        regards, tom lane