pg_dump problem

From: Matthew
Subject: pg_dump problem
Date:
Msg-id: 183FA749499ED311B6550000F87E206C0C94DE@srv.ctlno.com
List: pgsql-hackers
We back up our Postgres 7.0.2 databases nightly via a cron script on a
remote box that calls pg_dump -f filename dbname, or something to that
effect.  About a week ago we started running out of disk space without
knowing it, and since pg_dump doesn't report any errors, the cron script
didn't report anything to us either.  The result is a week's worth of
corrupt backups, which is clearly a problem.

FYI: the database server is Red Hat Linux 6.1, Postgres 7.0.2 from RPM,
Athlon 900 w/ 256M RAM,
and the backup server is Red Hat Linux 6.1, Postgres 7.0.2 client RPMs, P133 w/ 32M RAM.

> When the filesystem fills, pg_dump continues attempting to write data
> which is then lost.  As we are running pg_dump in a cron job, we would
> like it to fail (return a non-zero error code) if there are any filesystem
> errors.  I realize that for stdout the return value should always be true,
> but for the -f option I would like to see the checks done.
>
> Taking a look at the source for pg_dump, I see that the return values from
> the calls to fputs are not being checked.  If I write a wrapper for fputs
> that checks the error code and sets an error flag which becomes the return
> value of the program, would that be an acceptable patch?  Or should I check
> the return value of each of the 17 separate calls individually?  Would a
> patch for this bug be accepted against 7.0.3, or should I write it against
> 7.1 CVS?
> 
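For concreteness, something along these lines is what I have in mind.  The
names checked_fputs and write_failed are just illustrative, not from the
actual pg_dump source:

#include <stdio.h>
#include <string.h>
#include <errno.h>

/* Illustrative global flag; set whenever a write fails, and used as the
 * program's exit status at the end of the dump. */
static int write_failed = 0;

/* Sketch of a checked wrapper around fputs().  fputs() returns EOF on
 * error (e.g. ENOSPC when the filesystem is full), so the failure is
 * recorded instead of the data being silently dropped. */
static int
checked_fputs(const char *s, FILE *fp)
{
    int ret = fputs(s, fp);

    if (ret == EOF)
    {
        fprintf(stderr, "write failed: %s\n", strerror(errno));
        write_failed = 1;
    }
    return ret;
}

int
main(void)
{
    FILE *out = fopen("dump.sql", "w");

    if (out == NULL)
    {
        fprintf(stderr, "could not open output file: %s\n", strerror(errno));
        return 1;
    }

    checked_fputs("-- dump output would go here\n", out);

    /* fclose() can also fail when buffered data cannot be flushed. */
    if (fclose(out) == EOF)
    {
        fprintf(stderr, "close failed: %s\n", strerror(errno));
        write_failed = 1;
    }

    /* Non-zero exit status so a cron job can detect the failure. */
    return write_failed ? 1 : 0;
}

The same pattern would presumably need to cover fclose (and any fprintf
calls) as well, since a full filesystem can also surface when buffered
data is flushed.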
Thanks,

Matt O'Connor



