Re: Catalog/Metadata consistency during changeset extraction from wal

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: Catalog/Metadata consistency during changeset extraction from wal
Date: 2012-06-21 14:39:21
Message-ID: CA+U5nML-i6MtpCwo8O=1icZVE-wU+JLZ1+qgQKc3p-ugNhoiqg@mail.gmail.com
Lists: pgsql-hackers

On 21 June 2012 12:41, Andres Freund <andres(at)2ndquadrant(dot)com> wrote:

> 3)
> Multi-Versioned catalog
>
> Below are two possible implementation strategies for that concept
>
> Advantages:
> * Decoding is done on the master in an asynchronous fashion
> * low overhead during normal DML execution, not much additional code in that
> path
> * can be very efficient if architecture/version are the same
> * version/architecture compatibility can be done transparently by falling back
> to textual versions on mismatch
>
> Disadvantages:
> * decoding probably has to happen on the master which might not be what people
> want performancewise
>
> 3a)
> Change the system catalogs to be versioned
>
> Advantages.
> * catalog access is easy
> * might be interesting for other users
>
> Disadvantages:
> * catalog versioning is complex to implement
> * space overhead for all users, even without using logical replication
> * I can't see -hackers signing off

Hmm, there's all sorts of stuff mixed up there in your description.

ISTM we should maintain a lookup table on the target system that has
the minimal required information in it.

There is no need to version the whole catalog. (Complete overkill - I
would oppose it ;-)

If we keep the lookup table on the target as a normal table, we can
insert new rows into it as changes occur. If we need to perform
recovery, the earlier-version rows will still be there and we just use
those. Versioning is easy to implement: just use the LSN as an
additional key in the table, then look up by key and LSN, roughly as
in the sketch below. If a transaction that makes DDL changes aborts,
its changes are automatically backed out, since its inserts never
become visible.
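
Something like this, purely as a sketch (table, column and parameter
names are invented here, using pg_lsn for concreteness; not a
worked-out design):

    CREATE TABLE relname_lookup (
        origin_relid oid    NOT NULL,  -- OID of the relation on the origin
        valid_from   pg_lsn NOT NULL,  -- LSN at which this version became current
        nspname      name   NOT NULL,
        relname      name   NOT NULL,
        PRIMARY KEY (origin_relid, valid_from)
    );

    -- resolve a relation as it looked at the LSN of the change being applied
    SELECT nspname, relname
      FROM relname_lookup
     WHERE origin_relid = $1
       AND valid_from <= $2          -- $2 = LSN of the decoded change
     ORDER BY valid_from DESC
     LIMIT 1;

DDL on the origin just inserts a new row stamped with the new LSN;
older rows stay around so a restart or recovery can still resolve
earlier changes.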

Only keep the lookup table when logical replication is in use, so
there is zero overhead otherwise. We just need to set up the initial
state carefully, so it matches what's in the database, but that sounds
OK.
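
Setting up the initial state could be as simple as snapshotting the
relevant catalog entries and stamping them with the starting LSN,
along these lines (again only a rough sketch, covering relation names
only; pg_current_wal_lsn() just stands in for whatever reads the
current insert position):

    -- seed the lookup table from the catalog state at the starting point;
    -- conceptually this is the origin's catalog state as of the LSN the
    -- initial copy was taken at
    INSERT INTO relname_lookup (origin_relid, valid_from, nspname, relname)
    SELECT c.oid, pg_current_wal_lsn(), n.nspname, c.relname
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace;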

So I don't see any of the disadvantages you list there. It's just darn
simple, and hence will probably work. It's also a very similar
solution to the other lookups the apply process needs to keep in
memory.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
