From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Noah Misch <noah(at)leadboat(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Kevin Grittner <kgrittn(at)gmail(dot)com>
Subject: Re: snapshot too old issues, first around wraparound and then more.
Date: 2021-06-16 16:00:57
Message-ID: 709179.1623859257@sss.pgh.pa.us
Lists: pgsql-hackers

Greg Stark <stark(at)mit(dot)edu> writes:
> Fwiw I too think the basic idea of the feature is actually awesome.
> There are tons of use cases where you might have one long-lived
> transaction working on a dedicated table (or even a schema) that will
> never look at the rapidly mutating tables in another schema and never
> trigger the error even though those tables have been vacuumed many
> times over during its run-time.

I agree that's a great use-case. I don't like this implementation though.
I think if you want to set things up like that, you should draw a line
between the tables it's okay for the long transaction to touch and those
it isn't, and then any access to the latter should predictably draw an
error. I really do not like the idea that it might work anyway, because
then if you accidentally break the rule, you have an application that just
fails randomly ... probably only on the days when the boss wants that
report *now* not later.
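
[Editor's note: a minimal sketch of the scenario under discussion, for readers who
haven't used the feature. It assumes old_snapshot_threshold has been enabled in
postgresql.conf (the setting only takes effect after a server restart); the table
names report_data and hot_queue are invented for illustration.]

    -- postgresql.conf (requires restart):
    --   old_snapshot_threshold = '1min'

    -- Session 1: the long-lived transaction, touching only report_data
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM report_data;      -- snapshot is taken here

    -- Session 2, meanwhile: heavy churn on an unrelated table
    UPDATE hot_queue SET payload = payload || 'x';
    VACUUM hot_queue;                       -- dead rows may now be pruned even
                                            -- though session 1's snapshot is older

    -- Session 1, more than old_snapshot_threshold later:
    SELECT count(*) FROM report_data;       -- still fine: those pages are unchanged
    SELECT count(*) FROM hot_queue;         -- ERROR:  snapshot too old

[The disagreement is about the last line: whether a query straying outside the
tables the long transaction was meant to touch should fail only in this
timing-dependent way, or should fail deterministically.]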

regards, tom lane
