Re: AS OF queries

From: konstantin knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: AS OF queries
Date: 2017-12-23 20:53:19
Message-ID: 33157CC7-79E2-446B-A7C2-7F3979BBF0FD@postgrespro.ru
Lists: pgsql-hackers


On Dec 23, 2017, at 2:08 AM, Greg Stark wrote:

> On 20 December 2017 at 12:45, Konstantin Knizhnik
> <k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
>
>> It seems to me that it will be not so difficult to implement them in
>> Postgres - we already have versions of tuples.
>> Looks like we only need to do three things:
>> 1. Disable autovacuum (autovacuum = off)
>
> "The Wheel of Time turns, and Ages come and pass, leaving memories
> that become legend. Legend fades to myth, and even myth is long
> forgotten when the Age that gave it birth comes again"
>
> I think you'll find it a lot harder to get this to work than just
> disabling autovacuum. Notably HOT updates can get cleaned up (and even
> non-HOT updates can now leave tombstone dead line pointers iirc) even
> if vacuum hasn't run.
>

Yeah, I suspected that just disabling autovacuum was not enough.
I have heard about microvacuum and HOT updates, but do not know too much about them.
This is why I was a little surprised when my test did not show any loss of updated versions.
Maybe it is because of vacuum_defer_cleanup_age.
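
For reference, the settings I am talking about look like this in postgresql.conf (my actual test configuration is not shown here, the value is just a placeholder):

autovacuum = off                     # do not let autovacuum reclaim old row versions
vacuum_defer_cleanup_age = 1000000   # keep versions from the last N transactions

Both parameters can only be set in postgresql.conf or on the server command line, so a reload is needed after changing them.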

> We do have the infrastructure to deal with that. c.f.
> vacuum_defer_cleanup_age. So in _theory_ you could create a snapshot
> with xmin older than recent_global_xmin as long as it's not more than
> vacuum_defer_cleanup_age older. But the devil will be in the details.
> It does mean that you'll be making recent_global_xmin move backwards
> which it has always been promised to *not* do

But what if I just forbid changing recent_global_xmin?
If it stays pinned at FirstNormalTransactionId and never changes,
will that protect all versions from being deleted?
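
To illustrate what I mean, here is a very rough sketch, paraphrasing from memory how the horizon is computed today (the names compute_prune_horizon, pin_horizon and oldest_running_xmin are invented for this example, they are not real symbols):

#include "postgres.h"
#include "access/transam.h"

extern int vacuum_defer_cleanup_age;    /* existing GUC */

/*
 * Rough sketch, not actual PostgreSQL code: if the horizon used by HOT
 * pruning and vacuum is pinned at FirstNormalTransactionId, no committed
 * version is ever considered dead.
 */
static TransactionId
compute_prune_horizon(TransactionId oldest_running_xmin, bool pin_horizon)
{
    TransactionId horizon;

    if (pin_horizon)
        return FirstNormalTransactionId;    /* nothing ever becomes prunable */

    horizon = oldest_running_xmin - vacuum_defer_cleanup_age;
    if (!TransactionIdIsNormal(horizon))
        horizon = FirstNormalTransactionId;

    return horizon;
}

As far as I remember the existing code already clamps the horizon to FirstNormalTransactionId when vacuum_defer_cleanup_age pushes it below that, so pinning would just make this the permanent answer.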

>
> Then there's another issue that logical replication has had to deal
> with -- catalog changes. You can't start looking at tuples that have a
> different structure than the current catalog unless you can figure out
> how to use the logical replication infrastructure to use the old
> catalogs. That's a huge problem to bite off and probably can just be
> left for another day if you can find a way to reliably detect the
> problem and raise an error if the schema is inconsistent.

Yes, catalog changes are another problem for time travel.
I do not know any suitable way to handle several different catalog snapshots in one query.
But I think there are many cases where time travel will still be useful even without the possibility of schema changes.
The question is how we should handle such catalog changes if they happen. Ideally we should not allow travelling back beyond that point.
Unfortunately it is not so easy to implement.
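
At least detecting the situation looks doable. Something along these lines could reject a query when the relation's pg_class row was changed at or after the point we are travelling to (just a sketch with an invented function name; a real check would also have to look at pg_attribute and the other catalogs, and handle xid wraparound):

#include "postgres.h"
#include "access/htup_details.h"
#include "access/transam.h"
#include "utils/syscache.h"

/*
 * Sketch only, not actual PostgreSQL code: raise an error if the relation's
 * pg_class row comes from a transaction at or after the AS OF snapshot's
 * xmin, i.e. the table definition may differ from the one the old tuples
 * were written under.
 */
static void
check_catalog_unchanged(Oid relid, TransactionId asof_xmin)
{
    HeapTuple     tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
    TransactionId catalog_xmin;

    if (!HeapTupleIsValid(tup))
        elog(ERROR, "cache lookup failed for relation %u", relid);

    catalog_xmin = HeapTupleHeaderGetXmin(tup->t_data);
    ReleaseSysCache(tup);

    if (TransactionIdIsNormal(catalog_xmin) &&
        TransactionIdFollowsOrEquals(catalog_xmin, asof_xmin))
        ereport(ERROR,
                (errmsg("relation with OID %u was altered after the requested point in time",
                        relid)));
}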

>
> Postgres used to have time travel. I think it's come up more than once
> in the pasts as something that can probably never come back due to
> other decisions made. If more decisions have made it possible again
> that will be fascinating.
>
> --
> greg
