From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Global snapshots
Date: 2018-05-04 19:09:19
Message-ID: CA+Tgmob=fJo-pwGsnQT+PBoHyhNM4Giw3y5LXBm0YLhWvKEm1g@mail.gmail.com
Lists: pgsql-hackers
On Tue, May 1, 2018 at 5:02 PM, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
> Yes, that's totally possible. On both systems you need:
Cool.
> * set track_global_snapshots='on' -- this will start writing each
> transaction's commit sequence number to an SLRU.
> * set global_snapshot_defer_time to 30 seconds, for example -- this
> will delay oldestXmin advancement for the specified amount of time,
> preserving old tuples.
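For concreteness, the two settings above would presumably look like this in postgresql.conf (a sketch based only on the GUC names in this thread; exact units and defaults come from the patch itself):

```
track_global_snapshots = on        # log each commit's CSN to an SLRU
global_snapshot_defer_time = 30    # seconds to delay oldestXmin advancement
```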
So, is the idea that we'll definitely find out about any remote
transactions within 30 seconds, and then after we know about remote
transactions, we'll hold back OldestXmin some other way?
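The mechanism being asked about can be sketched abstractly: instead of advancing oldestXmin to the current minimum, the horizon reports the value it had defer_time seconds ago, so tuples still visible to recently exported global snapshots survive. This is an illustrative model only (class and method names are hypothetical, not from the patch):

```python
import time

class DeferredXminHorizon:
    """Hypothetical model: hold the xmin horizon back by defer_time seconds."""

    def __init__(self, defer_time):
        self.defer_time = defer_time
        self.history = []  # (timestamp, xmin) pairs, oldest first

    def report(self, xmin, now=None):
        """Record the locally computed oldestXmin at time `now`."""
        now = time.time() if now is None else now
        self.history.append((now, xmin))

    def oldest_xmin(self, now=None):
        """Return the xmin that was current defer_time seconds ago."""
        now = time.time() if now is None else now
        cutoff = now - self.defer_time
        result = None
        for ts, xmin in self.history:
            if ts > cutoff:
                break
            result = xmin
        # If no entry is old enough yet, hold back to the earliest known xmin.
        return result if result is not None else self.history[0][1]

h = DeferredXminHorizon(30)
h.report(100, now=0)
h.report(200, now=20)
h.report(300, now=40)
print(h.oldest_xmin(now=45))  # 100: the horizon as it stood 30+ seconds earlier
```

The point of the sketch is the question in this mail: the defer window only buys time; once a remote transaction is actually known, something else must pin the horizon for as long as that transaction runs.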
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company