From: Cédric Villemain <cedric(at)2ndquadrant(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Cc: "Jehan-Guillaume (ioguix) de Rorthais" <ioguix(at)free(dot)fr>, Tatsuo Ishii <ishii(at)postgresql(dot)org>, klaussfreire(at)gmail(dot)com, sfrost(at)snowman(dot)net
Subject: Re: Implementing incremental backup
Date: 2013-06-22 13:58:35
Message-ID: 201306221558.39966.cedric@2ndquadrant.com
Lists: pgsql-hackers
On Saturday, 22 June 2013 at 01:09:20, Jehan-Guillaume (ioguix) de Rorthais wrote:
> On 20/06/2013 03:25, Tatsuo Ishii wrote:
> >> On Wed, Jun 19, 2013 at 8:40 PM, Tatsuo Ishii <ishii(at)postgresql(dot)org> wrote:
> >>>> On Wed, Jun 19, 2013 at 6:20 PM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >>>>> * Claudio Freire (klaussfreire(at)gmail(dot)com) wrote:
> [...]
>
> >> The only bottleneck here, is WAL archiving. This assumes you can
> >> afford WAL archiving at least to a local filesystem, and that the WAL
> >> compressor is able to cope with WAL bandwidth. But I have no reason to
> >> think you'd be able to cope with dirty-map updates anyway if you were
> >> saturating the WAL compressor, as the compressor is more efficient on
> >> amortized cost per transaction than the dirty-map approach.
> >
> > Thank you for detailed explanation. I will think more about this.
>
> Just for the record, I have been mulling over this idea for a good many
> months. I even talked about it with Dimitri Fontaine some weeks ago
> over some beers :)
>
> My idea came from a customer who, during a training session, explained
> to me the difference between differential and incremental backups in Oracle.
>
> My approach would have been to create a standalone tool (say
> pg_walaggregate) which takes a bunch of WAL files from the archives and
> merges them into a single big file, keeping only the very last version
> of each page after aggregating all their changes. The resulting file,
> aggregating all the changes from the given WAL files, would be the
> "differential backup".
>
> A differential backup resulting from a bunch of WAL between W1 and Wn
> would allow recovering to the time of Wn much faster than replaying all
> the WALs between W1 and Wn, and would save a lot of space.
>
> I was hoping to find some time to dig around this idea, but as the
> subject rose here, then here are my 2¢!
Something like this, maybe:

./pg_xlogdump -b \
  ../data/pg_xlog/000000010000000000000001 \
  ../data/pg_xlog/000000010000000000000005 | \
  grep 'backup bkp' | awk '{print ($5,$9)}'
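Deduplicating the output of a pipeline like the one above would give the set of pages a differential backup needs. A small sketch, assuming a line format that merely mimics pg_xlogdump's backup-block details (the exact output varies between versions, so the regex and sample lines here are illustrative only):

```python
# Collect the distinct (relation, block) pairs mentioned in
# pg_xlogdump-style "backup bkp" lines. Sample lines are invented
# to resemble the real output; adjust the regex to your version.
import re

def dirty_blocks(lines):
    seen = set()
    for line in lines:
        m = re.search(r"rel (\S+); blk (\d+)", line)
        if m:
            seen.add((m.group(1), int(m.group(2))))
    return seen

sample = [
    "backup bkp #0; rel 1663/16384/1259; blk 0; hole: offset 52, length 32",
    "backup bkp #0; rel 1663/16384/1259; blk 0; hole: offset 52, length 32",
    "backup bkp #0; rel 1663/16384/2619; blk 7; hole: offset 0, length 0",
]
print(len(dirty_blocks(sample)))  # 2: the duplicate page is counted once
```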
--
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation