Re: dump of 700 GB database
From: karsten vennemann
Subject: Re: dump of 700 GB database
Date:
Msg-id: E27976924EA445D3A387FEC2C72D7D8C@snuggie
In reply to: Re: dump of 700 GB database (Scott Marlowe <scott.marlowe@gmail.com>)
List: pgsql-general
> Note that cluster on a randomly ordered large table can be
> prohibitively slow, and it might be better to schedule a
> short downtime to do the following (pseudo code):
>
> alter table tablename rename to old_tablename;
> create table tablename like old_tablename;
> insert into tablename select * from old_tablename
>   order by clustered_col1, clustered_col2;

That sounds like a great idea if it saves time.

>> (creating and moving over FK references as needed.)
>>
>> shared_buffers=160MB, effective_cache_size=1GB,
>> maintenance_work_mem=500MB, wal_buffers=16MB,
>> checkpoint_segments=100
>
> What's work_mem set to?

work_mem = 32MB

> What ubuntu? 64 or 32 bit?

It's 32 bit. I don't know whether a 4 GB file isn't too small for a dump
of a database that was originally 350 GB, nor why pg_restore fails...

> Have you got either a file system or a set of pg tools limited to
> 4Gig file size?

Not sure what the problem is on my server - I'm still trying to figure
out what makes pg_restore fail...
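For reference, a minimal sketch of that rename-and-reload approach spelled
out as runnable SQL, assuming a hypothetical table "mytable" that should be
physically ordered by col1, col2 (table and column names are placeholders,
not from this thread):

-- short downtime: rewrite the table in the desired physical order
BEGIN;
ALTER TABLE mytable RENAME TO old_mytable;
-- LIKE copies column definitions and defaults; indexes, constraints,
-- and FK references must be recreated on the new table afterwards
CREATE TABLE mytable (LIKE old_mytable INCLUDING DEFAULTS);
INSERT INTO mytable
    SELECT * FROM old_mytable
    ORDER BY col1, col2;
COMMIT;
-- then recreate indexes/constraints/FKs and drop old_mytable

This avoids running CLUSTER in place on a randomly ordered table, at the
cost of needing roughly double the disk space while both copies exist.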