Thread: Problems restoring big tables

Problems restoring big tables

From: Arnau
Date:
Hi all,

   I have to restore a database whose dump in custom format (-Fc) is
about 2.3GB. To speed up the restore, I first restored everything except
the contents of a few tables where most of the data is stored (I built
a list file with pg_restore -l). The server is a Debian box running
PostgreSQL 8.1.4. When I try to restore those tables' contents I get
this error:

$ time pg_restore -v -d espsm_asme -O -L
espsm_asme_components_statistics_data.list espsm_asme-20070105-0619.custom
pg_restore: connecting to database for restore
pg_restore: implied data-only restore
pg_restore: restoring data for table "statistics_operators"
pg_restore: ERROR:  out of memory
DETAIL:  Failed on request of size 32.
CONTEXT:  COPY statistics_operators, line 25663678: "137320348  58618027
        20060804220356-1-93-3096        \N      2006-08-04 22:03:56+02
1       34675522993  5755     71      1       6       \N"
pg_restore: [archiver (db)] error returned by PQendcopy: ERROR:  out of
memory
DETAIL:  Failed on request of size 32.
CONTEXT:  COPY statistics_operators, line 25663678: "137320348  58618027
        20060804220356-1-93-3096        \N      2006-08-04 22:03:56+02
1       34675522993  5755     71      1       6       \N"
pg_restore: *** aborted because of error

real    23m16.490s
user    1m55.203s
sys     0m5.672s

I don't know how to solve this. The server has 4GB of RAM and plenty of
free disk space.

asme@SMLDB005:~/asme_restore$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              28G  2.8G   24G  11% /
tmpfs                 2.0G     0  2.0G   0% /dev/shm
/dev/sdb1              34G  161M   32G   1% /dblog
/dev/sdc1             135G  6.2G  122G   5% /srv
tmpfs                  10M   44K   10M   1% /dev

asme@SMLDB005:~/asme_restore$ cat /proc/sys/kernel/shmmax
163840000


Any idea about how to fix this?


Regards
--
Arnau

Re: Problems restoring big tables

From: Tom Lane
Date:
Arnau <arnaulist@andromeiberica.com> writes:
>    I have to restore a database whose dump in custom format (-Fc) is
> about 2.3GB. To speed up the restore, I first restored everything except
> the contents of a few tables where most of the data is stored (I built
> a list file with pg_restore -l).

I think you've outsmarted yourself by creating indexes and foreign keys
before loading the data.  That's *not* the way to make it faster.

> pg_restore: ERROR:  out of memory
> DETAIL:  Failed on request of size 32.
> CONTEXT:  COPY statistics_operators, line 25663678: "137320348  58618027

I'm betting you ran out of memory for deferred-trigger event records.
It's best to load the data and then establish foreign keys ... indexes
too.  See
http://www.postgresql.org/docs/8.2/static/populate.html
for some of the underlying theory.  (Note that pg_dump/pg_restore
gets most of this stuff right already; it's unlikely that you will
improve matters by manually fiddling with the load order.  Instead,
think about increasing maintenance_work_mem and checkpoint_segments,
which pg_restore doesn't risk fooling with.)
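
For instance, something along these lines in postgresql.conf, followed
by a reload (the values are purely illustrative, pick ones that suit
your hardware):

maintenance_work_mem = 524288    # in kB on 8.1, i.e. 512MB; speeds up index and FK builds
checkpoint_segments = 30         # fewer forced checkpoints during the bulk load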

            regards, tom lane

Re: Problems restoring big tables

From: Arnau
Date:
Hi all,

>>    I have to restore a database whose dump in custom format (-Fc) is
>> about 2.3GB. To speed up the restore, I first restored everything except
>> the contents of a few tables where most of the data is stored (I built
>> a list file with pg_restore -l).
>
> I think you've outsmarted yourself by creating indexes and foreign keys
> before loading the data.  That's *not* the way to make it faster.

   I put it badly when I said I wanted to speed up the restore. What I
really meant is this: I have to migrate the database from one server to
another, which means stopping my production environment. The big tables
aren't really needed in production right away, as they only hold
statistical data. So the plan was to restore the important tables first
and load the statistics at the very end.

   So what's the way to do this?
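
   Roughly what I had in mind is the following (core.list and stats.list
are just scratch names, and I'm not sure this ordering is right):

$ pg_restore -l espsm_asme-20070105-0619.custom > full.list
$ grep -v statistics full.list > core.list     # everything except the statistics tables
$ grep statistics full.list > stats.list       # only the big statistics tables
$ pg_restore -d espsm_asme -O -L core.list espsm_asme-20070105-0619.custom
$ pg_restore -d espsm_asme -O -L stats.list espsm_asme-20070105-0619.custom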

>
>> pg_restore: ERROR:  out of memory
>> DETAIL:  Failed on request of size 32.
>> CONTEXT:  COPY statistics_operators, line 25663678: "137320348  58618027
>
> I'm betting you ran out of memory for deferred-trigger event records.
> It's best to load the data and then establish foreign keys ... indexes
> too.  See
> http://www.postgresql.org/docs/8.2/static/populate.html
> for some of the underlying theory.  (Note that pg_dump/pg_restore
> gets most of this stuff right already; it's unlikely that you will
> improve matters by manually fiddling with the load order.  Instead,
> think about increasing maintenance_work_mem and checkpoint_segments,
> which pg_restore doesn't risk fooling with.)

Thank you very much
--
Arnau