Re: Do we need to handle orphaned prepared transactions in the server?

From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Ants Aasma <ants(at)cybertec(dot)at>
Cc: Hamid Akhtar <hamid(dot)akhtar(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Do we need to handle orphaned prepared transactions in the server?
Date: 2020-01-22 10:12:29
Message-ID: CAMsr+YHis66Wj1Q3WzY=geeApLWaxOG=dZ4CTm3o0_f6j6Cabw@mail.gmail.com
Lists: pgsql-hackers

On Wed, 22 Jan 2020 at 16:45, Ants Aasma <ants(at)cybertec(dot)at> wrote:

> The intended use case of two phase transactions is ensuring atomic
> durability of transactions across multiple database systems.

Exactly. I was trying to find a good way to say this.

It doesn't make much sense to embed a 2PC resolver in Pg unless it's
acting as an XA coordinator or similar. And it generally doesn't make
sense for the distributed transaction coordinator to reside alongside
one of the data sources being managed anyway, especially where
failover and HA are in the picture.
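
For reference, the server-side flow such a coordinator drives against
each participating Postgres node is just this (the GID 'txn-42' is an
illustrative placeholder):

    BEGIN;
    -- ... do this node's share of the distributed work ...
    PREPARE TRANSACTION 'txn-42';  -- phase 1: persist the xact

    -- later, once every participant has prepared successfully:
    COMMIT PREPARED 'txn-42';      -- phase 2
    -- or, if any participant failed to prepare:
    ROLLBACK PREPARED 'txn-42';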

I *can* see it being useful, albeit rather heavyweight, to implement
an XA coordinator on top of PostgreSQL, mostly for HA and replication
reasons. But generally you'd use separate postgres instances for the
HA coordinator and for the DB(s) in which the 2PC txns are being
managed. While you could run them in the same instance, that'd mostly
be for toy-scale PoC/demo/testing use.

So I don't really see the point of doing anything with 2PC xacts
within Pg proper. Resolving them is the job of the app that prepared
them, and if that app is unable to do so for some reason, there's no
generally-correct action the server can take on its own without
administrator intervention.
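
And the administrator can already find and clean up leftover prepared
xacts by hand. A minimal sketch (the age threshold and GID are just
illustrative):

    -- list prepared xacts that have been sitting around for a while
    SELECT gid, prepared, owner, database
    FROM pg_prepared_xacts
    WHERE prepared < now() - interval '1 hour';

    -- then decide each one's fate case by case:
    ROLLBACK PREPARED 'txn-42';    -- or COMMIT PREPARED 'txn-42';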
