Discussion: performance database for backup/restore


performance database for backup/restore

From:
Jeison Bedoya
Date:
Hi people, I have a 400 GB database running on a server with 128 GB of RAM,
32 cores, and storage on a SAN over Fibre Channel. The problem is that a
backup with pg_dumpall takes about 5 hours, and the subsequent restore takes
about 17 hours. Is that a normal time for this process on that machine, or
can I do something to optimize the backup/restore process?

This is my current configuration

Postgres version 9.2.2
max_connections = 1000
shared_buffers = 4096MB
work_mem = 2048MB
maintenance_work_mem = 2048MB
checkpoint_segments = 103

The other parameters are at their defaults.

Thank you very much.

--
Sincerely,


JEISON BEDOYA DELGADO
Server and Communications Administrator
AUDIFARMA S.A.






Re: performance database for backup/restore

From:
Evgeny Shishkin
Date:
On May 21, 2013, at 5:18 PM, Jeison Bedoya <jeisonb@audifarma.com.co> wrote:

> Hi people, I have a 400 GB database running on a server with 128 GB of RAM, 32 cores, and storage on a SAN over
> Fibre Channel. The problem is that a backup with pg_dumpall takes about 5 hours, and the subsequent restore takes
> about 17 hours. Is that a normal time for this process on that machine, or can I do something to optimize the
> backup/restore process?
>

I'd recommend dumping with

pg_dump --format=c

It will compress the output and later you can restore it in parallel with

pg_restore -j 32 (for example)

Right now you cannot dump in parallel; wait for the 9.3 release, or maybe someone will backport it to the 9.2 pg_dump.
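For reference, once 9.3 is out, parallel dump is expected to require the directory format rather than the custom one. A minimal sketch of what that would look like (the database name "mydb", the output path, and the job count are only placeholders):

    pg_dump --format=directory --jobs=32 --file=/backup/mydb.dir mydb

Until then, the custom-format dump suggested above remains the practical option on 9.2.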

Also, during the restore you can speed things up a little more by disabling fsync and synchronous_commit.
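To make the whole cycle concrete, a minimal sketch of the dump and parallel restore being suggested; "mydb", "mydb_restored", and the dump path are placeholders:

    # dump in custom format (compressed, restorable in parallel)
    pg_dump --format=custom --file=/backup/mydb.dump mydb

    # restore into a freshly created database, using multiple jobs
    createdb mydb_restored
    pg_restore --jobs=32 --dbname=mydb_restored /backup/mydb.dump

Note that synchronous_commit can be turned off per session, while fsync has to be changed in postgresql.conf and the server reloaded, then re-enabled afterwards.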

> This is my current configuration
>
> Postgres version 9.2.2
> connections 1000
> shared buffers 4096MB
> work_mem = 2048MB
> maintenance_work_mem = 2048MB
> checkpoint_segments = 103
>
> the other params are by default.
>
> Thankyou very much
>
> --
> Atentamente,
>
>
> JEISON BEDOYA DELGADO
> Adm. Servidores y Comunicaciones
> AUDIFARMA S.A.
>
>



Re: performance database for backup/restore

From:
Steve Crawford
Date:
On 05/21/2013 06:18 AM, Jeison Bedoya wrote:
> Hi people, I have a 400 GB database running on a server with 128 GB of RAM,
> 32 cores, and storage on a SAN over Fibre Channel. The problem is that a
> backup with pg_dumpall takes about 5 hours, and the subsequent restore takes
> about 17 hours. Is that a normal time for this process on that machine, or
> can I do something to optimize the backup/restore process?
It would help to know what you are trying to accomplish, e.g. setting up a
test/dev server, testing disaster recovery, deploying to a new server, etc.
Also, are you dumping to a file and then restoring from that file, or dumping
through a pipe straight into the restore?

If you use the custom format in pg_dump *and* are dumping to a file
*and* restoring via pg_restore, you can set the -j flag to somewhat
fewer than the number of cores (though with 32 cores I can't say where
the sweet spot might be) to allow pg_restore to run things like index
recreation in parallel and help your restore speed.

You can also *temporarily* disable fsync while rebuilding the database -
just be sure to turn it back on afterward.
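As a rough illustration of that temporary change (this edits postgresql.conf directly, since ALTER SYSTEM does not exist yet in 9.2; the data directory path is a placeholder):

    # in postgresql.conf, for the duration of the restore only
    fsync = off
    synchronous_commit = off

    # reload the server to pick up the change, restore, then revert and reload again
    pg_ctl reload -D /path/to/data

Re-enabling fsync immediately after the restore is the important part; running normal traffic with fsync off risks data corruption on a crash.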

Copying the files is not the recommended method for backups but may work
for certain cases. One is when you can shut down the database so the
whole directory is quiescent while you copy the files. Also, depending
on your SAN features, you *might* be able to do a snapshot of the
running PostgreSQL data directory and use that.
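A minimal sketch of the shut-down-and-copy approach (the data directory path and backup destination are placeholders):

    pg_ctl stop -D /var/lib/pgsql/9.2/data -m fast
    tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz -C /var/lib/pgsql/9.2 data
    pg_ctl start -D /var/lib/pgsql/9.2/data

The SAN-snapshot variant depends entirely on the storage vendor's tooling, so there is no generic command to show; the key requirement is that the snapshot be atomic across the whole data directory, including pg_xlog.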

>
>
>
> Postgres version 9.2.2 ...
...has a nasty security issue. Upgrade. Now.

Cheers,
Steve



Re: performance database for backup/restore

From:
"ktm@rice.edu"
Date:
On Tue, May 21, 2013 at 05:28:31PM +0400, Evgeny Shishkin wrote:
>
> On May 21, 2013, at 5:18 PM, Jeison Bedoya <jeisonb@audifarma.com.co> wrote:
>
> > Hi people, I have a 400 GB database running on a server with 128 GB of RAM, 32 cores, and storage on a SAN over
> > Fibre Channel. The problem is that a backup with pg_dumpall takes about 5 hours, and the subsequent restore takes
> > about 17 hours. Is that a normal time for this process on that machine, or can I do something to optimize the
> > backup/restore process?
> >
>
> I'd recommend you to dump with
>
> pg_dump --format=c
>
> It will compress the output and later you can restore it in parallel with
>
> pg_restore -j 32 (for example)
>
> Right now you can not dump in parallel, wait for 9.3 release. Or may be someone will back port it to 9.2 pg_dump.
>
> Also during restore you can speed up a little more by disabling fsync and synchronous_commit.
>

If you have the space and I/O capacity, avoiding the compress option will be
much faster. The current zlib-based compression is very CPU intensive and
limits your dump rate. On a system that we have, a dump without compression
takes 20m and with compression 2h20m. The parallel restore makes a big
difference as well.
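If disk space allows, the dump can keep the custom format (so the parallel restore still works) while skipping compression entirely via pg_dump's -Z option; the paths and database name below are placeholders:

    pg_dump --format=custom -Z 0 --file=/backup/mydb_nocomp.dump mydb

Whether this helps depends on where the bottleneck is: on a CPU-bound dump it can be a large win, at the cost of a dump file roughly the size of the table data.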

Regards,
Ken


Re: performance database for backup/restore

From:
Jeff Janes
Date:
2013/5/21 Jeison Bedoya <jeisonb@audifarma.com.co>
Hi people, I have a 400 GB database running on a server with 128 GB of RAM, 32 cores, and storage on a SAN over Fibre Channel. The problem is that a backup with pg_dumpall takes about 5 hours, and the subsequent restore takes about 17 hours. Is that a normal time for this process on that machine, or can I do something to optimize the backup/restore process?

How many database objects do you have?  A few large objects will dump and restore faster than a huge number of smallish objects.
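A quick way to get a feel for the object count, if useful (run via psql; "mydb" is a placeholder):

    psql -d mydb -c "SELECT relkind, count(*) FROM pg_class GROUP BY relkind ORDER BY 2 DESC;"

Tables show up as relkind 'r' and indexes as 'i'; counts in the tens of thousands usually point at per-object overhead rather than raw data volume.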

Where is your bottleneck?  "top" should show you whether it is CPU or IO.

I can pg_dump about 6GB/minute to /dev/null using all defaults with a small number of large objects.
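For anyone wanting to reproduce that kind of measurement, a rough sketch (database name is a placeholder; writing to /dev/null isolates dump speed from the write side):

    time pg_dump mydb > /dev/null

Comparing that figure against the dump rate to the real backup target helps separate pg_dump itself from the destination storage.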

Cheers,

Jeff

Re: performance database for backup/restore

From:
Jeison Bedoya
Date:
Hi Jeff, thanks for your answer. When you say "database objects", are you talking about the tables? I have 1782 tables in my database.

Hmm, my bottleneck is not the CPU, because top doesn't show anything like that; memory usage is at 30%, and I/O doesn't look like the problem either, because the Fibre Channel SAN has a capacity of 8 GB and the I/O transfer to the disks does not go above 1 GB.

Can you explain to me again how you get 6 GB/min with pg_dump?

Thanks.
Sincerely,


JEISON BEDOYA DELGADO
Server and Communications Administrator
AUDIFARMA S.A.
On 21/05/2013 11:11 a.m., Jeff Janes wrote:
2013/5/21 Jeison Bedoya <jeisonb@audifarma.com.co>
Hi people, I have a 400 GB database running on a server with 128 GB of RAM, 32 cores, and storage on a SAN over Fibre Channel. The problem is that a backup with pg_dumpall takes about 5 hours, and the subsequent restore takes about 17 hours. Is that a normal time for this process on that machine, or can I do something to optimize the backup/restore process?

How many database objects do you have?  A few large objects will dump and restore faster than a huge number of smallish objects.

Where is your bottleneck?  "top" should show you whether it is CPU or IO.

I can pg_dump about 6GB/minute to /dev/null using all defaults with a small number of large objects.

Cheers,

Jeff
