
Base Backup Streaming (was: Sync Rep Design)

From: Dimitri Fontaine <dimitri(at)2ndQuadrant(dot)fr>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Josh Berkus <josh(at)postgresql(dot)org>, Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, Simon Riggs <simon(at)2ndQuadrant(dot)com>, greg(at)2ndQuadrant(dot)com, Hannu Krosing <hannu(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Base Backup Streaming (was: Sync Rep Design)
Date: 2011-01-02 12:47:17
Message-ID: m24o9rtqui.fsf_-_@2ndQuadrant.fr
Lists: pgsql-hackers

Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com> writes:
> BTW, there's a bunch of replication related stuff that we should work to
> close, that are IMHO more important than synchronous replication. Like
> making the standby follow timeline changes, to make failovers smoother, and
> the facility to stream a base-backup over the wire. I wish someone worked on
> those...

So, we've been talking about base backup streaming at conferences and we
have a working prototype.  We even have one of the needed pieces in core
now: the pg_read_binary_file() function.  What we're still missing is an
overall design and some integration effort.  Let's design first.

I propose the following new pg_ctl command to initiate the cloning:

 pg_ctl clone [-D datadir] [-s on|off] [-t filename]  "primary_conninfo"

As far as users are concerned, that would be the only novelty.  Once the
command finishes successfully, they would edit postgresql.conf and start
the service as usual.  A basic recovery.conf file is created from the
given options: standby_mode is driven by -s and defaults to off, and
trigger_file is given by -t and defaults to being omitted.  Of course,
the primary_conninfo given on the command line is what ends up in the
recovery.conf file.
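To make the option handling concrete, here is a minimal sketch of how the
clone command could assemble the recovery.conf contents.  The function name
and exact formatting are my own illustrative assumptions, not part of the
proposal; only the option semantics above are from the design.

```python
# Hypothetical sketch of how "pg_ctl clone" might build recovery.conf.
# The function name and defaults below are illustrative assumptions.

def build_recovery_conf(primary_conninfo, standby_mode="off", trigger_file=None):
    """Build the contents of a basic recovery.conf file.

    standby_mode is driven by -s and defaults to off; trigger_file is
    given by -t and is simply omitted when not provided.
    """
    lines = [
        "standby_mode = '%s'" % standby_mode,
        "primary_conninfo = '%s'" % primary_conninfo,
    ]
    if trigger_file is not None:
        lines.append("trigger_file = '%s'" % trigger_file)
    return "\n".join(lines) + "\n"

print(build_recovery_conf("host=primary port=5432", standby_mode="on",
                          trigger_file="/tmp/pg_failover"))
```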

That alone would allow making base backups both for recovery purposes
and for preparing a standby.

To support this new tool, the simplest approach would be to just copy
what I've been doing in the prototype: run a query to get the primary's
file listing (per tablespace, which the prototype does not do yet), then
fetch each file's content over the wire as bytea.  That means there's no
further backend support code to write.
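The client-side fetch loop that approach implies could be sketched roughly
as follows.  The chunk size and helper names are assumptions of mine; only
pg_ls_dir() and pg_read_binary_file() are actual server-side functions, and
the filename/offset/length form of the latter is the piece now in core.

```python
# Sketch of the query-driven fetch loop: list files on the primary, then
# pull each one back in bytea chunks through pg_read_binary_file().
# CHUNK and the function names are illustrative assumptions.

CHUNK = 8 * 1024 * 1024  # bytes fetched per round trip (arbitrary choice)

def chunk_ranges(file_size, chunk=CHUNK):
    """Yield (offset, length) pairs covering a file of file_size bytes."""
    offset = 0
    while offset < file_size:
        yield (offset, min(chunk, file_size - offset))
        offset += chunk

def fetch_file(cursor, path, file_size, out):
    """Stream one file from the primary into the local file object 'out'."""
    for offset, length in chunk_ranges(file_size):
        cursor.execute(
            "SELECT pg_read_binary_file(%s, %s, %s)",
            (path, offset, length))
        out.write(cursor.fetchone()[0])
```

Fetching in bounded chunks keeps each result set small, at the cost of one
round trip per chunk per file.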

  https://github.com/dimitri/pg_basebackup

We could instead have a backend function prepare a tar archive and
stream it using the COPY protocol, with some compression support, but
that's more complex to code now and harder to parallelize down the road.
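For contrast, the tar-based alternative moves the archiving work into the
backend, leaving the client to unpack a stream.  The sketch below shows only
that client-side unpacking; how the bytes would actually arrive over COPY is
deliberately elided, with an in-memory archive standing in for the stream.

```python
# Illustrative client-side unpacking for the COPY-streamed-tar alternative.
# An in-memory archive stands in for the data stream from the backend.
import io
import tarfile

def unpack_stream(fileobj, destdir):
    """Extract a (possibly compressed) tar stream into destdir."""
    # "r|*" reads the archive strictly as a stream, without seeking.
    with tarfile.open(fileobj=fileobj, mode="r|*") as tar:
        tar.extractall(destdir)

# Stand-in for a streamed base backup: one small file in a gzipped tar.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="PG_VERSION")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)
```

The streaming read mode matters here: the client never needs the whole
archive on disk before it can start writing out files.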

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support

