Point in time recovery: archiving WAL files

From: Marc Munro <marc(at)bloodnok(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Point in time recovery: archiving WAL files
Date: 2002-02-28 00:40:07
Message-ID: 1014856808.19487.2.camel@bloodnok.com
Lists: pgsql-hackers

We need to archive WAL files and I am unsure of the right approach.
How can this be done without completely blocking the backend that
happens to get the task?
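
For concreteness, the operation every option below boils down to is
copying a completed WAL segment out of pg_xlog to some archive
location. A minimal sketch in plain POSIX C (the function name, the
copy-to-a-temporary-name-then-rename convention and the buffer size are
assumptions for illustration, not existing PostgreSQL code):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: copy one finished WAL segment to the archive.
 * Copies into a temporary name and renames at the end, so a crash
 * mid-copy never leaves a truncated file that looks complete. */
static int
archive_wal_segment(const char *src, const char *dst)
{
    char    tmp[1024];
    char    buf[8192];
    ssize_t n;
    int     in, out;

    snprintf(tmp, sizeof(tmp), "%s.tmp", dst);

    if ((in = open(src, O_RDONLY)) < 0)
        return -1;
    if ((out = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0600)) < 0)
    {
        close(in);
        return -1;
    }

    while ((n = read(in, buf, sizeof(buf))) > 0)
    {
        if (write(out, buf, n) != n)
        {
            n = -1;             /* treat a short write as an error */
            break;
        }
    }

    if (n < 0 || fsync(out) < 0 || rename(tmp, dst) < 0)
    {
        close(in);
        close(out);
        unlink(tmp);
        return -1;
    }

    close(in);
    close(out);
    return 0;
}

The point of the options below is simply who runs this copy, and what
happens to the session while it runs.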

I can see a number of options but lack the depth of PostgreSQL knowledge
to be able to choose between them. No doubt some of you will see other
options.

1) Just let the backend get on with it.
This will effectively stop the user's session while the copy occurs.
Bad idea.

2) Have the backend spawn a child process to do this.
Will the backend wait for its child before closing down? Will
aborting the backend kill the archiving child? This just seems wrong to
me.

3) Have the backend spawn a disconnected (nohup) process.
This seems dangerous to me but I can't put my finger on why.

4) Have the backend tell the postmaster to archive the file. The
postmaster will spawn a dedicated process to make it happen.
I think I like this but I don't know how to do it yet.

5) Have a dedicated archiver process. Have backends tell it to get on
with the job.
This is Oracle's approach. I see no real benefit over option 4
except that we don't have to keep spawning new processes (there is a
rough sketch of this shape after the list). On a personal level I want
to be different from Oracle.

6) I have completely missed the point about backends
Please be gentle.
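
To make options 4 and 5 more concrete, here is a rough sketch of the
dedicated-archiver shape: backends do nothing but nudge a long-lived
process, which then scans for finished segments and copies them with
the helper sketched earlier. The choice of SIGUSR1, the to_archive/
marker-file convention and the directory names are all assumptions for
illustration, not how PostgreSQL actually signals between processes.

#include <dirent.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* From the earlier sketch; link the two together to build this. */
extern int archive_wal_segment(const char *src, const char *dst);

static volatile sig_atomic_t wakeup = 0;

/* Backends (or the postmaster) send SIGUSR1 once a WAL segment is
 * complete; the handler only sets a flag, the main loop does the work. */
static void
wakeup_handler(int sig)
{
    wakeup = 1;
}

int
main(void)
{
    signal(SIGUSR1, wakeup_handler);

    for (;;)
    {
        if (!wakeup)
        {
            /* Sleep until a backend signals us.  (The classic
             * check-then-pause race is ignored in this sketch.) */
            pause();
            continue;
        }
        wakeup = 0;

        /* Assumed convention: whoever finishes a segment drops a marker
         * of the same name into to_archive/.  Copy each named segment
         * out of pg_xlog/, then remove its marker. */
        DIR *dir = opendir("to_archive");
        if (dir == NULL)
            continue;

        struct dirent *de;
        while ((de = readdir(dir)) != NULL)
        {
            char src[1024], dst[1024], marker[1024];

            if (de->d_name[0] == '.')
                continue;
            snprintf(src, sizeof(src), "pg_xlog/%s", de->d_name);
            snprintf(dst, sizeof(dst), "wal_archive/%s", de->d_name);
            snprintf(marker, sizeof(marker), "to_archive/%s", de->d_name);

            if (archive_wal_segment(src, dst) == 0)
                unlink(marker);
        }
        closedir(dir);
    }
}

The only difference between options 4 and 5 in this picture is whether
the loop lives in a process the postmaster spawns per file or in one
process that stays up for the life of the cluster.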

Any and all feedback welcomed. Thanks.

--
Marc marc(at)bloodnok(dot)com
