On Tue, Jun 12, 2012 at 4:54 PM, Shaun Thomas <sthomas(at)optionshouse(dot)com> wrote:
> Hey everyone,
> I was wondering if anyone has found a way to get pg_basebackup to be...
> faster. Currently we do our backups something like this:
> tar -c -I pigz -f /db/backup_yyyy-mm-dd.tar.gz -C /db pgdata
> Which basically calls pigz to do parallel compression because with RAIDs and
> ioDrives all over the place, it's the compression that's the bottleneck.
> Otherwise, only one of our 24 CPUs is actually doing anything.
> I can't seem to find anything like this for pg_basebackup. It just uses its
> internal compression method. I could see this being the case for pg_dump,
> but pg_basebackup just produces regular tar.gz files. Is there any way to
> either fake a parallel compression here, or should this be a feature request
> for pg_basebackup?
If you have a single tablespace, you can have pg_basebackup write its
output to stdout and pipe that through pigz.
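A minimal sketch of that pipeline (the output path, thread count, and the restore step are illustrative assumptions, not from the original post):

```shell
# Stream the base backup as a single tar archive to stdout ("-D -" with
# tar format "-Ft"), and let pigz do the parallel compression.
# Only works when the cluster has a single tablespace.
pg_basebackup -D - -Ft | pigz -p 24 > /db/backup_yyyy-mm-dd.tar.gz

# Restore side (illustrative): decompress in parallel and unpack.
# pigz -d -c /db/backup_yyyy-mm-dd.tar.gz | tar -x -C /db/pgdata
```

With multiple tablespaces pg_basebackup refuses to write tar output to stdout, since each tablespace becomes its own tar file; in that case a directory-format backup compressed afterwards is the fallback.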