Re: [BDR] Best practice to automatically abort a DDL operation when one node is down

From: Sylvain MARECHAL <marechal(dot)sylvain2(at)gmail(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: [BDR] Best practice to automatically abort a DDL operation when one node is down
Date: 2016-01-15 08:54:47
Message-ID: 5698B3D7.1070905@gmail.com
Lists: pgsql-general


> I am using BDR with two nodes 1 and 2.
> If I issue a DDL operation in node 1 when node 2 is down, for example:
> CREATE TABLE test (i int PRIMARY KEY); (1)
>
> all other transactions fail with the following error:
> Database is locked against DDL operations
>
> The problem is that the (1) DDL request will wait indefinitely,
> meaning all transactions will continue to fail until the DDL operation
> is manually aborted (for example, doing CTRL C in psql to abort the
> "CREATE TABLE").
>
> What is the best practice to make sure the DDL operation will fail,
> possibly after a timeout, if one of the nodes is down? I could check
> the state of the node before issuing the DDL operation, but this
> solution is far from perfect, as the node may fail right afterwards.
>

Answering myself: I guess no magic SQL command exists for this; I have
to cancel the request with pg_cancel_backend() (in fact, that is what
the documentation says -- I was wondering whether something could detect
this situation automatically and abort the request).

If using a blocking API, this means one should fork the task and monitor
it, to decide whether it should be canceled if it takes too much time
(check whether one of the nodes is down; if so, cancel the request and
retry it later when the node is up again).
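The monitoring side of that idea can be expressed as a simple watchdog
query, run periodically from a separate connection. This is only a sketch,
and the 60-second threshold is an arbitrary value I picked for illustration:

```sql
-- Watchdog sketch: cancel any active statement (e.g. a DDL stuck waiting
-- for the BDR DDL lock) that has been running longer than 60 seconds.
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '60 seconds'
  AND pid <> pg_backend_pid();
```

A real watchdog would of course also check node health before deciding
whether to cancel, rather than canceling on elapsed time alone.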

--
Sylvain
