Re: Restoring large tables with COPY
| From | Tom Lane |
|---|---|
| Subject | Re: Restoring large tables with COPY |
| Date | |
| Msg-id | 17277.1008086130@sss.pgh.pa.us |
| In reply to | Restoring large tables with COPY (Marko Kreen <marko@l-t.ee>) |
| Replies | Re: Restoring large tables with COPY, Re: Restoring large tables with COPY |
| List | pgsql-hackers |
Marko Kreen <marko@l-t.ee> writes:
> Maybe I am missing something obvious, but I am unable to load
> larger tables (~300k rows) with COPY command that pg_dump by
> default produces.
I'd like to find out what the problem is, rather than work around it
with such an ugly hack.
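The workaround under discussion appears to be splitting pg_dump's single large COPY block into smaller batches so that each batch commits in its own transaction. A minimal sketch of that idea (the function name and batch size are hypothetical, not part of any PostgreSQL tool):

```python
def split_copy(lines, batch_size):
    """Given the lines of one COPY block from a pg_dump text dump
    (the COPY header, the data rows, and the terminating '\\.'),
    yield smaller COPY blocks of at most batch_size data rows each."""
    header, *rows = lines
    assert rows[-1] == "\\.", "COPY block must end with the \\. terminator"
    rows = rows[:-1]
    for i in range(0, len(rows), batch_size):
        # Repeat the COPY header and terminator for every batch, so
        # each batch is a complete, independently loadable block.
        yield [header] + rows[i:i + batch_size] + ["\\."]
```

Feeding each resulting block to psql separately keeps any single transaction's footprint bounded, at the cost of losing the all-or-nothing restore of one big COPY.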
> 1) Too few WAL files.
> - well, increase the wal_files (eg to 32),
What PG version are you running? 7.1.3 or later should not have a
problem with WAL file growth.
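The `wal_files` setting mentioned in the quoted mail lives in `postgresql.conf`. A sketch of the kind of change being suggested (the value 32 comes from the quoted mail, not from any recommendation here):

```
# postgresql.conf (7.1-era WAL settings; value is illustrative)
wal_files = 32    # pre-allocate additional WAL segment files
```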
> 2) Machine runs out of swap, PostgreSQL seems to keep whole TX
> in memory.
That should not happen either. Could we see the full schema of the
table you are having trouble with?
> Or shortly: during pg_restore the resource requirements are an
> order of magnitude higher than during pg_dump,
We found some client-side memory leaks in pg_restore recently; is that
what you're talking about?
regards, tom lane