We back up our PostgreSQL 7.0.2 databases nightly via a cron script on a
remote box that calls pg_dump -f filename dbname, or something to that
effect. About a week ago we started running out of disk space without
knowing it, and since pg_dump doesn't report any errors, the cron script
didn't report anything to us either. The result is that we have a week's
worth of corrupt backups, which is clearly a problem.
FYI: the database server is Red Hat Linux 6.1, PostgreSQL 7.0.2 from RPM,
Athlon 900 w/ 256M,
and the backup server is Red Hat 6.1, PostgreSQL 7.0.2 client RPMs, P133 w/ 32M.
> When the filesystem fills, pg_dump continues attempting to write data
> which is then lost. As we are running pg_dump in a cron job, we would
> like it to fail (return a non-zero error code) if there are any filesystem
> errors. I realize that for stdout the return value should always be true,
> but for the -f option I would like to see the checks done.
> Taking a look at the source for pg_dump I see that the return values from
> the calls to fputs are not being checked. If I write a wrapper for fputs
> that checks the error code and sets an error flag, which will be the return
> value of the program would that be an acceptable patch? Or should I check
> the return value of each of the 17 separate calls individually? Would a
> patch for this bug be accepted against 7.0.3 or should I write it against
> 7.1 CVS?
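
A minimal sketch of the wrapper approach described above, outside the actual
pg_dump source. The names (checked_fputs, write_failed) are hypothetical and
not part of pg_dump; the point is only that a failed fputs (or fclose) sets a
flag that is turned into a non-zero exit code so cron can see the failure:

    /* Sketch only: fputs wrapper that remembers write failures. */
    #include <stdio.h>

    static int write_failed = 0;   /* set when any write error occurs */

    static int
    checked_fputs(const char *s, FILE *fp)
    {
        int rc = fputs(s, fp);

        if (rc == EOF)
            write_failed = 1;      /* e.g. ENOSPC when the filesystem fills */
        return rc;
    }

    int
    main(int argc, char **argv)
    {
        FILE *out = fopen(argc > 1 ? argv[1] : "dump.sql", "w");

        if (out == NULL)
        {
            perror("fopen");
            return 1;
        }

        checked_fputs("-- dump output would go here\n", out);

        /* fclose flushes buffered data, so its result must be checked too */
        if (fclose(out) == EOF)
            write_failed = 1;

        return write_failed ? 1 : 0;   /* non-zero exit lets cron notice */
    }

Note that checking fclose (or fflush) matters as much as checking fputs,
since a full filesystem may only be detected when buffered data is flushed.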