pg_dump problem

From: Matthew <matt(at)ctlno(dot)com>
To: "'pgsql-hackers(at)postgresql(dot)org'" <pgsql-hackers(at)postgresql(dot)org>
Cc: "JONATHAN LOUIS GRIMM (E-mail)" <flymolo(at)eatel(dot)net>
Subject: pg_dump problem
Date: 2000-12-29 21:22:36
Message-ID: 183FA749499ED311B6550000F87E206C0C94DE@srv.ctlno.com
Lists: pgsql-hackers

We back up our PostgreSQL 7.0.2 databases nightly via a cron script on a
remote box that calls pg_dump -f filename dbname, or something to that
effect. About a week ago we started running out of disk space without
knowing it, and because pg_dump doesn't report any errors, the cron script
couldn't alert us. The result is that we have a week's worth of corrupt
backups, which is clearly a problem.

FYI: the database server is Red Hat Linux 6.1, PostgreSQL 7.0.2 from RPM,
on an Athlon 900 with 256 MB of RAM; the backup server is Red Hat 6.1 with
the PostgreSQL 7.0.2 client RPMs, on a P133 with 32 MB.

> When the filesystem fills, pg_dump continues attempting to write data,
> which is then lost. Since we run pg_dump from a cron job, we would like
> it to fail (return a non-zero exit status) if there are any filesystem
> errors. I realize that when writing to stdout the return value should
> always be true, but for the -f option I would like to see the checks done.
>
> Taking a look at the pg_dump source, I see that the return values from
> the calls to fputs are not being checked. If I write a wrapper for fputs
> that checks the return value and sets an error flag, which then becomes
> the program's exit status, would that be an acceptable patch? Or should
> I check the return value of each of the 17 separate calls individually?
> Would a patch for this bug be accepted against 7.0.3, or should I write
> it against 7.1 CVS?
>
Thanks,

Matt O'Connor
