Re: [HACKERS] redolog - for discussion

From: jwieck(at)debis(dot)com (Jan Wieck)
To: vadim(at)krs(dot)ru (Vadim Mikheev)
Cc: jwieck(at)debis(dot)com, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] redolog - for discussion
Date: 1998-12-02 17:11:50
Message-ID: m0zlFod-000EBPC@orion.SAPserv.Hamburg.dsh.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Thread:
Lists: pgsql-hackers

Vadim wrote:

>
> Jan Wieck wrote:
> >
> > At this time, a logfile switch is done (only if the
> > actual database is really logged) and the sequence number
> > of the new logfile plus the current datetime remembered.
> > The behaviour of pg_dump's backend changes. It will see a
> > snapshot of this time (implemented in tqual code) in any
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Note, that I'm implementing multi-version concurrency control
> (MVCC) for 6.5: pg_dump will have to run all queries
> in one transaction in SERIALIZED mode to get snapshot of
> transaction' begin time...

Sounds good and would make things easier. I'll keep my hands
off from the tqual code and wait for that.

But what about sequence values while in SERIALIZED
transaction mode. Sequences get overwritten in place! And for
a dump/restore/recover it is important, that the sequences
get restored ALL at once in the state they where.

>
> > subsequent command and it is totally unable to do
> > anything that would update the database.
> >
> > Until the final END BACKUP is given, no VACUUM or DROP
> > TABLE etc. commands can be run. If they are issued, the
> > command will be delayed until pg_dump finished.
>
> Vacuum will not be delete records in which any active
> backend is interested - don't worry.

That's the vacuum part, but I still need to delay DROP
TABLE/VIEW/SEQUENCE until the backup is complete.

>
> ...
>
> >
> > All that might look very complicated, but the only commands
> ^^^^^^^^^^^^^^^^
> Yes -:)
> We could copy/move pg_dump' stuff into backend...
> This way pg_dump will just execute one command
>
> ALTER DATABASE ONLINE BACKUP; -- as I understand
>
> - backend will do all what it need and pg_dump just
> write backend' output to a file.
>
> I think that it would be nice to have code in backend to
> generate CREATE statements from catalog and extend EXPLAIN
> to handle something like EXPLAIN TABLE xxx etc.
> We could call EXPLAIN for all \dXXXX in psql and
> when dumping schema in pg_dump.
>
> Comments?

Indeed :-)

If we have serialized transaction that covers sequences, only
BEGIN and END BACKUP must remain. BEGIN to force the logfile
switch and END to flag that dump is complete and backend can
update pg_database.

So you want to put major parts of pg_dump's functionality
into the backend. Hmmm - would be cool. And it would give us
a chance to include tests for most of the dump related code
in regression.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#======================================== jwieck(at)debis(dot)com (Jan Wieck) #

In response to

Responses

Browse pgsql-hackers by date

  From Date Subject
Next Message Joost Kraaijeveld 1998-12-02 19:59:30 NT port
Previous Message John Polstra 1998-12-02 17:03:10 RE: New Linux/libc5 CVSup client