From: Magnus Hagander <magnus(at)hagander(dot)net>
To: sthomas(at)optionshouse(dot)com
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance of pg_basebackup
Date: 2012-06-12 14:57:38
Message-ID: CABUevEx7Y5RjpJHUR20Xe0OpxD=0QKghu2tyx11EJPk0-15Q_g@mail.gmail.com
Lists: pgsql-performance
On Tue, Jun 12, 2012 at 4:54 PM, Shaun Thomas <sthomas(at)optionshouse(dot)com> wrote:
> Hey everyone,
>
> I was wondering if anyone has found a way to get pg_basebackup to be...
> faster. Currently we do our backups something like this:
>
> tar -c -I pigz -f /db/backup_yyyy-mm-dd.tar.gz -C /db pgdata
>
> Which basically calls pigz to do parallel compression because with RAIDs and
> ioDrives all over the place, it's the compression that's the bottleneck.
> Otherwise, only one of our 24 CPUs is actually doing anything.
>
> I can't seem to find anything like this for pg_basebackup. It just uses its
> internal compression method. I could see this being the case for pg_dump,
> but pg_basebackup just produces regular tar.gz files. Is there any way to
> either fake a parallel compression here, or should this be a feature request
> for pg_basebackup?
If you have a single tablespace you can have pg_basebackup write the
output to stdout and then pipe that through pigz.
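A minimal sketch of that pipeline (the output path, date, and thread count are placeholders, assuming pigz is installed and the cluster has a single tablespace; `-D -` with tar format sends the backup to stdout):

```shell
# Stream a tar-format base backup to stdout and compress it in parallel
# with pigz instead of pg_basebackup's built-in single-threaded gzip.
pg_basebackup -F tar -D - | pigz -p 8 > /db/backup_2012-06-12.tar.gz
```

`-p 8` caps pigz at eight compression threads; left unset, pigz uses all available cores.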
--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Next message: Shaun Thomas, 2012-06-12 15:00:35, "Re: Performance of pg_basebackup"
Previous message: Shaun Thomas, 2012-06-12 14:54:28, "Performance of pg_basebackup"