From: Magnus Hagander <magnus(at)hagander(dot)net>
Subject: Streaming base backups
Attached is an updated streaming base backup patch, based off the work already
started. It includes support for tablespaces, permissions, progress reporting,
and some actual documentation of the protocol changes (the user-interface
documentation is going to depend on exactly what the frontend client will look
like, so I'm holding off on that one for a while).
The basic implementation adds a new command, BASE_BACKUP, to the replication
mode. It initiates a base backup, streams the contents of the data directory
and all tablespaces (in a tar-compatible format), and then ends the base
backup, all in a single operation.
Other than the basic implementation, there is a small refactoring of
pg_start_backup() and pg_stop_backup(), splitting each into a "backend
function" that is easier to call internally and a "user-facing function" that
remains identical to the previous one. I've also added a pg_abort_backup()
internal-only function to get out of backup mode in a safer way after a crash
(so it can be called from error handlers). Also, the walsender needs a
resource owner in order to call pg_start_backup().
I've implemented a frontend for this in pg_streamrecv, based on the assumption
that we wanted to include this in bin/ for 9.1 - and that it seems like a
reasonable place to put it. This can obviously be moved elsewhere if we want to.
That code needs a lot more cleanup, but I wanted to make sure I got the backend
patch out for review quickly. You can find the current WIP branch for
pg_streamrecv on my github page at https://github.com/mhagander/pg_streamrecv,
in the branch "baserecv". I'll be posting that as a separate patch once it's
been a bit more cleaned up (it does work now if you want to test it, though).
Some remaining thoughts and must-dos:
* Compression: Do we want to be able to compress the backups server-side? Or
  defer that to whenever we get compression in libpq? (You can still tunnel it
  through, for example, SSH to get compression if you want to.) My thinking is
  we could instead implement compression of the tar files in pg_streamrecv
  (probably easier, possibly more useful?)
* Windows support (need to implement readlink)
* Tar code is copied from pg_dump and modified. Should we try to factor it out
  into port/? There are changes in the middle of it, so it can't be shared with
  the current calling points; it would need a refactor. I think it's not worth
  it, given how simple the code is.
Improvements I want to add, but that aren't required for basic operation:
* Stefan mentioned it might be useful to put some posix_fadvise() calls in the
  process that streams all the files out. Seems useful, as long as that doesn't
  kick the pages out of the cache *completely*, for other backends as well.
  Do we know if that is the case?
* Include all the necessary WAL files in the backup. This way we could generate
  a tar file that would work on its own - right now, you still need to set up
  log archiving (or use streaming replication) to get the remaining logfiles
  from the master. This is fine for replication setups, but not for backups.
  This would also require us to block recycling of WAL files during the backup.
* Suggestion from Heikki: don't put backup_label in $PGDATA during the backup;
  rather, include it only in the tar file. That way, if you crash during the
  backup, the master doesn't start recovery from the backup_label, leading to
  failure to start up in the worst case.
* Suggestion from Heikki: perhaps at some point we're going to need a full
bison grammar for walsender commands.
* Relocation of tablespaces (can at least partially be done client-side)