Re: URGENT: pg_dump error

From: Dmitry Tkach
Subject: Re: URGENT: pg_dump error
Date:
Msg-id: 3E49482C.6050002@openratings.com
In response to: URGENT: pg_dump error  (jerome <jerome@gmanmi.tv>)
List: pgsql-general
I suspect your problem is that the output file is too large (if you are on ext2, you cannot have files larger than 2 gig in the filesystem).
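If you're not sure which filesystem the dump is landing on, df can tell you (assuming GNU coreutils; -T prints the filesystem type):

df -T .

If the Type column says ext2, this limit is the likely culprit.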
Try this:
pg_dump mydatabase -t mytable | gzip -f > sample.gz
or
pg_dump mydatabase -t mytable | split -C 2000m - sample.
or even
pg_dump mydatabase -t mytable | gzip -f | split -b 2000m - sample.gz.
...
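As a sanity check on the gzip route, you can test the compressed file without extracting it (gzip -t only verifies integrity, it writes nothing):

gzip -t sample.gz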

The first case should work unless even the compressed file is larger than 2 gig; either of the other two will work regardless of the output size
(as long as it fits on your disk, of course).
In the last two cases, it will create several files, named sample.aa, sample.ab... or sample.gz.aa, sample.gz.ab,
etc...
To 'reassemble' them later, you'll need something like:

cat sample.* | psql mydatabase                 # for the split-only case (no gzip)
or
cat sample.gz.* | gunzip -f | psql mydatabase  # for the split + gzip case
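
For the first (single gzip file) case, the restore would be along the lines of:

gunzip -c sample.gz | psql mydatabase

Note that cat sample.* concatenates the pieces in the right order because the shell expands the glob in sorted order, which matches the suffixes split generates.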

I hope it helps...

Dima

jerome wrote:
> i tried to do pg_dump
>
> pg_dump mydatabase -t mytable > sample
>
> it always results in
>
> KILLED
>
> can anyone tell me what should i do...
>
> TIA

