
Re: does wal archiving block the current client connection?

From: Tom Arthurs <tarthurs(at)jobflash(dot)com>
To: Jeff Frost <jeff(at)frostconsultingllc(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: does wal archiving block the current client connection?
Date: 2006-05-15 17:33:01
Message-ID:
Lists: pgsql-admin pgsql-hackers
What might be more bulletproof would be to make the archive command 
copy the file to an intermediate local directory, then have a 
daemon/cron job that wakes up once a minute or so, checks for new 
files, and copies them to the network mount.  You may want to use 
something like lsof to make sure the archive command has finished and 
closed the file before moving it to the network drive.
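The two-stage scheme above might look something like the following. This is only a minimal sketch: the staging and destination paths, the WAL file name, and the exact lsof check are illustrative assumptions, not the poster's actual setup.

```shell
#!/bin/sh
# Hypothetical directories standing in for the local staging area and
# the network mount (assumptions for illustration only).
STAGING=$(mktemp -d)
DEST=$(mktemp -d)

# Pretend the archive command already staged a WAL segment locally,
# e.g. via: archive_command = 'cp "%p" $STAGING/"%f"'
echo "wal data" > "$STAGING/000000010000000000000001"

# Cron/daemon mover: copy fully written files to the mount, then
# delete the local copy.
for f in "$STAGING"/*; do
    [ -f "$f" ] || continue
    # Skip files some process still holds open (e.g. a copy in
    # progress); requires lsof to be installed.
    if command -v lsof >/dev/null && lsof -- "$f" >/dev/null 2>&1; then
        continue
    fi
    cp "$f" "$DEST/" && rm -f "$f"
done
```

Because the mover runs outside PostgreSQL, a hung or failed network mount only delays the mover, not the archive command, so backends are never blocked waiting on the NAS.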

This is what I do, and I've never had a failure of the archive command. 
I have had a few network errors on the network drive (I use NFS), which 
I fixed at my leisure with no problems for the PostgreSQL server.

Jeff Frost wrote:
> I've run into a problem with a PITR setup at a client.  The problem is 
> that whenever the CIFS NAS device that we're mounting at /mnt/pgbackup 
> has problems, it seems that the current client connection gets blocked 
> and this eventually builds up to a "sorry, too many clients already" 
> error.  I'm wondering if this is expected behavior with the archive 
> command and if I should build in some more smarts to my archive 
> script.  Maybe I should fork and waitpid such that I can use a manual 
> timeout shorter than whatever the CIFS timeout is so that I can return 
> an error in a reasonable amount of time?
> Has anyone else seen this problem?  Restarting the NAS device fixes 
> the problem but it would be much preferable if postgres could soldier 
> along without the NAS for a little while before we resuscitate it.  We 
> don't have an NFS or rsync server available in this environment 
> currently, though I suppose setting up an rsync server for windows on 
> the NAS would be a possibility.
> Any suggestions much appreciated.
> Currently the script is fairly simple and just does a 'cp' and then a 
> 'gzip', although we do use cp -f to copy over a possible previously 
> failed 'cp'. Script is below:
> . /usr/local/lib/
> #
> # Make sure we have pgbackup dir mounted
> #
> checkpgbackupmount
> if [ $? -ne 0 ]; then
>         die "pgbackup dir is not mounted"
> fi
> /bin/cp -f "$FULLPATH" "$PITRDESTDIR/$FILENAME"
> if [ $? -ne 0 ]; then
>         die "Could not cp $FULLPATH to $PITRDESTDIR/$FILENAME"
> fi
> /usr/bin/gzip -f "$PITRDESTDIR/$FILENAME"
> #
> # Make sure it worked, otherwise roll back
> #
> if [ $? -ne 0 ]; then
>         /bin/rm -f "$PITRDESTDIR/$FILENAME"
>         die "Could not /usr/bin/gzip $PITRDESTDIR/$FILENAME"
> fi
> exit 0
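The fork-and-waitpid idea from the question could be sketched in plain shell as below. This is a hedged sketch, not the original script: the 30-second deadline, the function name, and the usage line are illustrative assumptions. The point is that a hung CIFS mount makes the archive command fail fast (PostgreSQL then retries the WAL segment later) instead of blocking backends until connections pile up.

```shell
#!/bin/sh
# run_with_timeout SECONDS CMD [ARGS...]
# Returns CMD's exit status, or nonzero if CMD was still running after
# SECONDS and had to be killed.
run_with_timeout() {
    deadline=$1; shift
    "$@" &                     # the real work, e.g. cp onto the CIFS mount
    cmd_pid=$!
    # Watchdog: kill the command if it outlives the deadline.
    ( sleep "$deadline"; kill "$cmd_pid" 2>/dev/null ) &
    watchdog_pid=$!
    wait "$cmd_pid"            # 128+SIGTERM here means the watchdog fired
    status=$?
    kill "$watchdog_pid" 2>/dev/null
    return "$status"
}

# The script's copy step could then become something like (hypothetical):
# run_with_timeout 30 /bin/cp -f "$FULLPATH" "$PITRDESTDIR/$FILENAME" \
#         || die "Could not cp $FULLPATH to $PITRDESTDIR/$FILENAME"
```

One caveat: killing the watchdog subshell may leave its sleep running until the deadline passes; that is harmless here. On systems with GNU coreutils, the timeout(1) utility provides the same behavior without a hand-rolled watchdog.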
