From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Noah Misch <noah(at)leadboat(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Kevin Grittner <kgrittn(at)gmail(dot)com>
Subject: Re: snapshot too old issues, first around wraparound and then more.
Date: 2021-06-16 16:11:45
Message-ID: 20210616161144.GK20766@tamriel.snowman.net
Lists: pgsql-hackers

Greetings,

* Greg Stark (stark(at)mit(dot)edu) wrote:
> I think Andres's point earlier is the one that stands out the most for me:
>
> > I still think that's the most reasonable course. I actually like the
> > feature, but I don't think a better implementation of it would share
> > much if any of the current infrastructure.
>
> That makes me wonder whether ripping the code out early in the v15
> cycle wouldn't be a better choice. It would make it easier for someone
> to start work on a new implementation.
>
> There is the risk that the code would already be gone and no new
> implementation would have appeared by the release of v15, but it
> sounds like people are leaning towards ripping it out at that point
> anyway.
>
> Fwiw I too think the basic idea of the feature is actually awesome.
> There are tons of use cases where you might have one long-lived
> transaction working on a dedicated table (or even a schema) that will
> never look at the rapidly mutating tables in another schema and never
> trigger the error even though those tables have been vacuumed many
> times over during its run-time.

I've long felt that the appropriate approach to addressing that is to
improve on VACUUM and find a way to do better than just having the
'xmax < global min' conditional decide whether we can mark a given
tuple as no longer visible to anyone.
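
To make that concrete, here's a rough C sketch of the rule I mean (the
type and function names are invented for illustration; the real
visibility code in PostgreSQL also has to deal with wraparound, hint
bits, multixacts and plenty more):

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Invented, heavily simplified tuple header info, for illustration only. */
typedef struct TupleInfo
{
    TransactionId xmin;             /* inserting transaction */
    TransactionId xmax;             /* deleting transaction, 0 if never deleted */
    bool          xmax_committed;   /* has the deleter committed? */
} TupleInfo;

/*
 * The coarse rule: a tuple is removable only if it was deleted by a
 * committed transaction that is older than the oldest xmin of any
 * snapshot still open anywhere in the cluster.
 */
static bool
tuple_dead_to_all(const TupleInfo *tup, TransactionId global_xmin)
{
    if (tup->xmax == 0 || !tup->xmax_committed)
        return false;               /* never deleted, or deletion not final */

    return tup->xmax < global_xmin; /* deleter predates every open snapshot */
}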

I'm not sure that all of the snapshot-too-old use cases could be solved
that way, nor am I even sure it's actually possible to make VACUUM
smarter in that way without introducing other problems or having to
track much more information than we do today.  Still, it'd sure be nice
if we could address the use-case you outline above while also not
introducing query failures if that transaction does happen to decide to
go look at some other table.  Naturally, the tuples in that rapidly
mutating table which *would* be visible to the long-running transaction
would have to be kept around to make things work, but if the table is
rapidly mutating then there are very likely lots of tuples in it that
the long-running transaction can't see, and which nothing else can see
either, and those could be vacuumed.
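
As a rough illustration of what "smarter" might look like (again with
invented names, reusing the TupleInfo sketch above, and glossing over
in-progress xid lists, subtransactions, wraparound and the real cost of
tracking every snapshot), the check would become per-snapshot instead
of a single global comparison:

/* Invented, simplified view of a snapshot, for illustration only. */
typedef struct SnapshotInfo
{
    TransactionId xmin;     /* all xids before this are finished for us */
    TransactionId xmax;     /* xids at or after this are invisible to us */
    /* the in-progress xid array is omitted for brevity */
} SnapshotInfo;

/* Assumes the inserting transaction committed; ignores in-progress xids. */
static bool
tuple_visible_to(const TupleInfo *tup, const SnapshotInfo *snap)
{
    if (tup->xmin >= snap->xmax)
        return false;               /* inserted after the snapshot was taken */
    if (tup->xmax == 0 || !tup->xmax_committed)
        return true;                /* not (visibly) deleted */
    return tup->xmax >= snap->xmax; /* deleted, but only after the snapshot */
}

/*
 * A tuple could be removed once no tracked snapshot can see it, which is
 * what would let the rapidly mutating table be cleaned up even while the
 * long-running transaction keeps the versions it actually needs.
 */
static bool
tuple_dead_to_all_snapshots(const TupleInfo *tup,
                            const SnapshotInfo *snaps, int nsnaps)
{
    for (int i = 0; i < nsnaps; i++)
    {
        if (tuple_visible_to(tup, &snaps[i]))
            return false;           /* some snapshot still needs this version */
    }
    return true;                    /* dead to every snapshot we track */
}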

Thanks,

Stephen
