Re: does wal archiving block the current client connection?

From: Tom Arthurs <tarthurs(at)jobflash(dot)com>
To: Jeff Frost <jeff(at)frostconsultingllc(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: does wal archiving block the current client connection?
Date: 2006-05-15 17:33:01
Message-ID: 4468BB4D.6060309@jobflash.com
Lists: pgsql-admin pgsql-hackers

What might be more bulletproof would be to make the archive command
copy the file to an intermediate local directory, then have a
daemon/cron job that wakes up once a minute or so, checks for new
files, and copies them to the network mount. You may want to use
something like lsof to make sure the archive command has finished and
closed the file before moving it to the network drive. A rough sketch
of the idea is below.
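For example, something along these lines (the staging directory, the
destination path, and the script names are all made up, just to
illustrate the two-stage approach):

# stage 1: the archive_command script -- touches only local disk
SPOOL=/var/lib/pgsql/wal_spool
# copy to a temp name, then rename, so a complete file appears atomically
/bin/cp "$1" "$SPOOL/$2.tmp" && /bin/mv "$SPOOL/$2.tmp" "$SPOOL/$2"

# stage 2: cron job, run every minute -- pushes finished segments out
SPOOL=/var/lib/pgsql/wal_spool
DEST=/mnt/pgbackup/pitr
for f in "$SPOOL"/*; do
    [ -f "$f" ] || continue
    # skip anything some process still has open (belt and braces;
    # the rename in stage 1 already guarantees a complete file)
    lsof "$f" >/dev/null 2>&1 && continue
    /bin/mv "$f" "$DEST/" || exit 1
done

Since stage 1 never touches the network, a dead mount can only stall
the cron job, and the segments just accumulate locally until the mount
comes back.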

This is what I do, and I've never had a failure of the archive command.
I've had a few network errors on the network drive (I use NFS), which I
fixed at my leisure with no problems for the PostgreSQL server.
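
Jeff asks below about forking and using waitpid to get a timeout
shorter than the CIFS one; you can get much the same effect from the
shell by backgrounding the cp and killing it after a fixed wait. A
minimal sketch, reusing the variables from his script (the 30-second
limit is an arbitrary assumption):

# wrap the copy so a dead CIFS mount fails fast instead of hanging
TIMEOUT=30
/bin/cp -f "$FULLPATH" "$PITRDESTDIR/$FILENAME" &
CPPID=$!
i=0
while kill -0 "$CPPID" 2>/dev/null; do
    if [ "$i" -ge "$TIMEOUT" ]; then
        # note: a cp stuck in uninterruptible I/O on a dead mount
        # may not actually die until the mount itself times out
        kill "$CPPID"
        wait "$CPPID"
        exit 1
    fi
    i=$((i + 1))
    sleep 1
done
wait "$CPPID"   # recover cp's real exit status
exit $?

Returning nonzero quickly at least lets PostgreSQL retry the segment
later instead of tying up the backend.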

Jeff Frost wrote:
> I've run into a problem with a PITR setup at a client. The problem is
> that whenever the CIFS NAS device that we're mounting at /mnt/pgbackup
> has problems, it seems that the current client connection gets blocked
> and this eventually builds up to a "sorry, too many clients already"
> error. I'm wondering if this is expected behavior with the archive
> command and if I should build in some more smarts to my archive
> script. Maybe I should fork and waitpid such that I can use a manual
> timeout shorter than whatever the CIFS timeout is so that I can return
> an error in a reasonable amount of time?
>
> Has anyone else seen this problem? Restarting the NAS device fixes
> the problem, but it would be much preferable if postgres could soldier
> along without the NAS for a little while before we resuscitate it. We
> don't have an NFS or rsync server available in this environment
> currently, though I suppose setting up an rsync server for windows on
> the NAS would be a possibility.
>
> Any suggestions much appreciated.
>
> Currently the script is fairly simple and just does a 'cp' and then a
> 'gzip', although we do use cp -f to copy over a possible previously
> failed 'cp'. The script is below:
>
> . /usr/local/lib/includes.sh
>
> FULLPATH="$1"
> FILENAME="$2"
>
> #
> # Make sure we have pgbackup dir mounted
> #
> checkpgbackupmount
>
> if ! /bin/cp -f "$FULLPATH" "$PITRDESTDIR/$FILENAME"; then
>     die "Could not cp $FULLPATH to $PITRDESTDIR/$FILENAME"
> fi
>
> #
> # Make sure the gzip worked, otherwise roll back
> #
> if ! /usr/bin/gzip -f "$PITRDESTDIR/$FILENAME"; then
>     /bin/rm -f "$PITRDESTDIR/$FILENAME"
>     die "Could not /usr/bin/gzip $PITRDESTDIR/$FILENAME"
> fi
>
> exit 0
>
>
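
For reference, a script like this gets hooked up in postgresql.conf via
archive_command, where PostgreSQL substitutes %p with the segment's
path and %f with its file name -- matching the script's $1 and $2 (the
script path below is made up):

# postgresql.conf: %p = path to the WAL segment, %f = its file name
archive_command = '/usr/local/bin/pitr_archive.sh %p %f'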
