WAL archiving and backup TAR

From: torrez <torrez(at)unavco(dot)org>
To: pgsql-admin(at)postgresql(dot)org
Cc: Damian Torrez <torrez(at)unavco(dot)org>
Subject: WAL archiving and backup TAR
Date: 2009-06-19 15:43:28
Message-ID: D60C6790-693C-4ACF-8F87-6A6B77F7C1F8@unavco.org

Hello,
I'm implementing WAL archiving and PITR on my production DB.
I've set up my tar backups, WAL archives, and pg_xlog all to be stored
on a separate disk from my DB.
I'm at the point where I'm running 'SELECT pg_start_backup('xxx');'.

Here's the command I've run for my tar:

time tar -czf /pbo/podbackuprecovery/tars/pod-backup-${CURRDATE}.tar.gz \
  /pbo/pod > /pbo/podbackuprecovery/pitr_logs/backup-tar-log-${CURRDATE}.log 2>&1

The problem is that this tar took just over 25 hours to complete. I
expected this to be a long process since my DB is about 100 GB, but 25
hours seems excessive. Does anyone have ideas on how to cut down this
time?
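One thing I'm considering is lowering gzip's compression level, since the default (-6) trades a lot of CPU time for a modest ratio gain. A rough sketch on throwaway sample data (the /tmp paths here are placeholders for illustration, not my real layout):

```shell
# Rough sketch: compare gzip -1 (fastest) against -9 (best ratio).
# /tmp/gz_demo and sample.dat are made-up stand-ins for the real data.
mkdir -p /tmp/gz_demo
yes "some fairly compressible sample text" | head -c 1000000 > /tmp/gz_demo/sample.dat

# -1 is fastest, -9 is smallest; the default is -6
time gzip -1 -c /tmp/gz_demo/sample.dat > /tmp/gz_demo/fast.gz
time gzip -9 -c /tmp/gz_demo/sample.dat > /tmp/gz_demo/best.gz
ls -l /tmp/gz_demo
```

On 100 GB the wall-clock difference between -1 and the default can be large, at the cost of a somewhat bigger archive.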

Are there limitations in tar or gzip at the size I'm working with? Or,
as a colleague suggested, since tar and gzip are single-threaded, could
they be bottlenecking on one CPU (we run multiple cores)? When I run
top, gzip sits at about 12% CPU and tar at about 0.4%; together that's
roughly 1/8 of total capacity, which works out to one fully utilized
core on our 8-core server.
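If single-threading really is the bottleneck, one workaround I've been sketching is splitting the archive by top-level subdirectory and compressing the pieces in parallel with xargs -P. This is a toy illustration only (the /tmp paths, the a/b/c/d layout, and the 4-way split are all made up):

```shell
# Toy sketch: one tar.gz per subdirectory, compressed in parallel.
# /tmp/pod_demo stands in for the real data directory.
mkdir -p /tmp/pod_demo/a /tmp/pod_demo/b /tmp/pod_demo/c /tmp/pod_demo/d
for d in a b c d; do
  head -c 200000 /dev/urandom > /tmp/pod_demo/$d/data.bin
done

# Run up to 4 tar|gzip jobs at once, one per subdirectory
ls /tmp/pod_demo | xargs -P 4 -I{} \
  tar -czf /tmp/pod-part-{}.tar.gz -C /tmp/pod_demo {}

ls -l /tmp/pod-part-*.tar.gz
```

The catch is that a restore then means extracting every part. Alternatively, pigz (if installed) is a multi-threaded drop-in for gzip, so something like `tar -cf - /pbo/pod | pigz > backup.tar.gz` would keep a single archive while using all cores.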

After making the .conf changes I restarted my DB, and normal
transactions continue while the tar/gzip runs.

Your help is very much appreciated.

--Dom Torrez
torrez(at)unavco(dot)org
