Re: Patroni vs pgpool II

From: Tatsuo Ishii <ishii(at)sraoss(dot)co(dot)jp>
To: jgdr(at)dalibo(dot)com
Cc: inzamam(dot)shafiq(at)hotmail(dot)com, cyberdemn(at)gmail(dot)com, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Patroni vs pgpool II
Date: 2023-04-07 09:04:05
Message-ID: 20230407.180405.1284467321161872462.t-ishii@sranhm.sra.co.jp
Lists: pgsql-general

> And I believe that's part of what Cen was complaining about:
>
> «
> It is basically a daemon glued together with scripts for which you are
> entirely responsible. Any small mistake in the failover scripts and the
> cluster enters a broken state.
> »
>
> If you want to build something clean, including fencing, you'll have to
> handle/dev it by yourself in scripts

That's a design decision. This gives maximum flexibility to users.

Please note that we provide step-by-step installation/configuration
documents which have been used by production systems.

https://www.pgpool.net/docs/44/en/html/example-cluster.html
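To make the "scripts you are responsible for" point concrete, here is a minimal, hypothetical sketch of the kind of failover_command script the tutorial above wires up. Pgpool-II does expand placeholders such as %d (failed node id), %h (failed host), %m (new main node id) and %H (new main host) into script arguments, but everything else below (hostnames, data directory path, printing the ssh command instead of running it) is an assumption for illustration, not a production script.

```shell
#!/bin/sh
# Hypothetical failover_command sketch. In pgpool.conf it might be wired as:
#   failover_command = '/etc/pgpool/failover.sh %d %h %m %H'
# Hostnames and paths below are illustrative assumptions.
failover() {
    failed_node_id="$1"   # %d: id of the node pgpool declared down
    failed_host="$2"      # %h: its hostname
    new_main_id="$3"      # %m: id of the candidate new main node
    new_main_host="$4"    # %H: its hostname

    # If a standby failed, the primary is still up: nothing to promote.
    if [ "$failed_node_id" != "0" ]; then
        echo "standby $failed_host (node $failed_node_id) detached; no promotion"
        return 0
    fi

    # The primary failed: promote the candidate. A real script would run this
    # over ssh; the sketch only prints the command so it stays side-effect free.
    echo "ssh postgres@$new_main_host pg_ctl promote -D /var/lib/pgsql/data"
}

# Sample invocation: standby node 1 failed, node 0 remains main.
failover 1 replica1 0 primary1
# -> standby replica1 (node 1) detached; no promotion
```

Any unhandled corner case in such a script (a wrong node id test, an ssh timeout) is exactly the kind of "small mistake" the complaint above refers to.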

>> However I am not sure STONITH is always mandatory.
>
> Sure, it really depends on how much risk you can accept and how much complexity
> you can afford. Some clusters can live with a 10-minute split brain, while
> others cannot survive a 5-second split brain.
>
>> I think that depends on what you want to avoid by using fencing. If the
>> purpose is to avoid having two primary servers at the same time, Pgpool-II
>> achieves that as described above.
>
> How could you be so sure?
>
> See https://www.alteeve.com/w/The_2-Node_Myth
>
> «
> * Quorum is a tool for when things are working predictably
> * Fencing is a tool for when things go wrong

I think the article does not apply to Pgpool-II.

-------------------------------------------------------------------
3-Node

When node 1 stops responding, node 2 declares it lost, reforms a
cluster with the quorum node, node 3, and is quorate. It begins
recovery by mounting the filesystem under NFS, which replays journals
and cleans up, then starts NFS and takes the virtual IP address.

Later, node 1 recovers from its hang. At the moment of recovery, it
has no concept that time has passed and so has no reason to check to
see if it is still quorate or whether its locks are still valid. It
just finished doing whatever it was doing at the moment it hung.

In the best case scenario, you now have two machines claiming the same
IP address. At worst, you have uncoordinated writes to storage and you
corrupt your data.
-------------------------------------------------------------------

> Later, node 1 recovers from its hang.

Pgpool-II does not allow automatic recovery. Once node 1 hangs and is
recognized as "down" by the other nodes, it will not be used again
without manual intervention. Thus the disaster described above will
not happen with Pgpool-II.
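In other words, "down" is sticky. The policy can be sketched as a toy state machine (this is only an illustration of the behavior, not pgpool code; in a real cluster the manual re-attach step is the pcp_attach_node command):

```shell
# Toy model of pgpool's sticky "down" status (illustration only, not pgpool code).
NODE_STATUS="up"

detect_failure()  { NODE_STATUS="down"; }   # health check marks the node down
node_recovers()   { :; }                    # the node coming back changes nothing by itself
operator_attach() { NODE_STATUS="up"; }     # stands in for a manual pcp_attach_node run

detect_failure
node_recovers
echo "$NODE_STATUS"    # still "down": the recovered node receives no traffic

operator_attach
echo "$NODE_STATUS"    # "up" again, but only after manual intervention
```

The key difference from the NFS scenario in the article is the second step: recovery of the hung node, on its own, never returns it to service.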

Best regards,
--
Tatsuo Ishii
SRA OSS LLC
English: http://www.sraoss.co.jp/index_en/
Japanese: http://www.sraoss.co.jp
