Re: Restoring large tables with COPY

From: Marko Kreen
Subject: Re: Restoring large tables with COPY
Date:
Msg-id: 20011211161936.GA32526@l-t.ee
In reply to: Re: Restoring large tables with COPY  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Tue, Dec 11, 2001 at 10:55:30AM -0500, Tom Lane wrote:
> Marko Kreen <marko@l-t.ee> writes:
> > Maybe I am missing something obvious, but I am unable to load
> > larger tables (~300k rows) with the COPY command that pg_dump
> > produces by default.
> 
> I'd like to find out what the problem is, rather than work around it
> with such an ugly hack.
> 
> > 1) Too few WAL files.
> >    - well, increase the wal_files (eg to 32),
> 
> What PG version are you running?  7.1.3 or later should not have a
> problem with WAL file growth.

7.1.3
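
For reference, the wal_files knob mentioned above is just a
postgresql.conf setting; a minimal sketch, with 32 being the value from
my earlier mail rather than a tuned recommendation:

    # postgresql.conf (7.1.x): pre-create extra WAL segments so the
    # backend does not have to create them one at a time in the middle
    # of a large COPY.
    wal_files = 32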

> > 2) Machine runs out of swap, PostgreSQL seems to keep whole TX
> >    in memory.
> 
> That should not happen either.  Could we see the full schema of the
> table you are having trouble with?

Well, there are several such tables; I will reproduce it, then send
the schema.  I guess it's the first one, but maybe not.  postgres gets
killed by the Linux OOM handler, so I can't tell from the messages
which one it was.  (Hmm, I should probably run it as psql -q -a > log.)
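
Something along these lines is what I mean, to pin down which table it
dies on (the file and database names are made up here, adjust to taste):

    # -a echoes every statement as it is sent, -q suppresses the rest
    # of the chatter; the last COPY visible in the log is the suspect
    # table.
    psql -q -a -d mydb < db.dump > restore.log 2>&1
    tail -n 5 restore.log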

> > Or shortly: during pg_restore the resource requirements are an
> > order of magnitude higher than during pg_dump,
> 
> We found some client-side memory leaks in pg_restore recently; is that
> what you're talking about?

No, it's the postgres process that's memory-hungry; it happens
with "psql < db.dump" too.

If I run a dump that's produced with "pg_dump -m 5000", then it loops
between 20M and 10M, which is much better.  (The 10M depends on
shared_buffers, I guess.)
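
The -m flag above is from my patched pg_dump, not the stock one; the
same effect can be faked by hand by splitting the COPY data and loading
it in chunks.  A rough sketch, where the table/database names and the
5000-line chunk size are only illustrative:

    # Dump one table's data rows, split them into 5000-line chunks, and
    # load each chunk as its own COPY so the backend never has to chew
    # through the whole table in one go.
    psql -c "COPY bigtable TO stdout" mydb > bigtable.data
    split -l 5000 bigtable.data chunk_
    for f in chunk_*; do
        psql -c "COPY bigtable FROM stdin" mydb < "$f"
    done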

-- 
marko


