Re: foreign key locks, 2nd attempt

From: Noah Misch <noah(at)leadboat(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: foreign key locks, 2nd attempt
Date: 2012-03-14 22:10:00
Message-ID: 20120314221000.GG27122@tornado.leadboat.com
Lists: pgsql-hackers

On Wed, Mar 14, 2012 at 01:23:14PM -0400, Robert Haas wrote:
> On Tue, Mar 13, 2012 at 11:42 PM, Noah Misch <noah(at)leadboat(dot)com> wrote:
> > More often than that; each 2-member mxid takes 4 bytes in an offsets file and
> > 10 bytes in a members file.  So, more like one fsync per ~580 mxids.  Note
> > that we already fsync the multixact SLRUs today, so any increase will arise
> > from the widening of member entries from 4 bytes to 5.  The realism of this
> > test is attractive.  Nearly-static parent tables are plenty common, and this
> > test will illustrate the impact on those designs.
>
> Agreed. But speaking of that, why exactly do we fsync the multixact SLRU today?

Good question. So far, I can't think of a reason. "nextMulti" is critical,
but we already fsync it with pg_control. We could delete the other multixact
state data at every startup and set OldestVisibleMXactId accordingly.
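
For reference, the "~580 mxids per fsync" figure quoted above is just those
entry sizes divided into the standard 8192-byte SLRU page; a rough check in
SQL:

    -- 4 bytes/mxid in the offsets file + 2 * 5 bytes/mxid in the members
    -- file = 14 bytes/mxid, so roughly one page write (and, today, one
    -- fsync) per 8192/14 mxids:
    SELECT round(8192 / (4 + 2 * 5.0));  -- => 585

The exact cadence depends on when SLRU pages get written back, of course,
but that gives the order of magnitude.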

> > You still have HEAP_XMAX_{INVALID,COMMITTED} to reduce the pressure on mxid
> > lookups, so I think something more sophisticated is needed to exercise that
> > cost.  Not sure what.
>
> I don't think HEAP_XMAX_COMMITTED is much help, because committed !=
> all-visible. HEAP_XMAX_INVALID will obviously help, when it happens.

True. The patch (see ResetMultiHintBit()) also replaces a multixact xmax with
the updater xid when all transactions of the multixact have ended. You would
need a test workload with long-running multixacts that delay such replacement.
However, the workloads that come to mind are the very workloads for which this
patch eliminates lock waits; they wouldn't illustrate a worst-case.
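
For example (table and column names invented; FOR KEY SHARE being the
patch's new clause):

    -- Session 1: long-lived transaction holding a key-share lock
    BEGIN;
    SELECT * FROM parent WHERE id = 1 FOR KEY SHARE;
    -- ... sits idle for a long time ...

    -- Session 2: non-key update of the same row.  Its xmax becomes a
    -- multixact (session 1's locker + session 2's updater) that
    -- ResetMultiHintBit() cannot reduce to a plain xid until session 1
    -- ends.
    UPDATE parent SET description = 'x' WHERE id = 1;
    COMMIT;

Under current code, session 2 simply blocks, so a benchmark built this way
would mostly be measuring the lock waits the patch removes.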

> >> This isn't exactly a test case, but from Noah's previous comments I
> >> gather that there is a theoretical risk of mxid consumption running
> >> ahead of xid consumption.  We should try to think about whether there
> >> are any realistic workloads where that might actually happen.  I'm
> >> willing to believe that there aren't, but not just because somebody
> >> asserts it.  The reason I'm concerned about this is because, if it
> >> should happen, the result will be more frequent anti-wraparound
> >> vacuums on every table in the cluster.  Those are already quite
> >> painful for some users.
> >
> > Yes.  Pre-release, what can we really do here other than have more people
> > thinking about ways it might happen in practice?  Post-release, we could
> > suggest monitoring methods or perhaps have VACUUM emit a WARNING when a table
> > is using more mxid space than xid space.
>
> Well, post-release, the cat is out of the bag: we'll be stuck with
> this whether the performance characteristics are acceptable or not.
> That's why we'd better be as sure as possible before committing to
> this implementation that there's nothing we can't live with. It's not
> like there's any reasonable way to turn this off if you don't like it.

I disagree; we're only carving in stone the FOR KEY SHARE and FOR KEY UPDATE
syntax additions. We could even avoid doing that by not documenting them. A
later major release could implement them using a completely different
mechanism or even reduce them to aliases, KEY SHARE = SHARE and KEY UPDATE =
UPDATE. To be sure, let's still do a good job the first time.
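
To spell out what that would mean, the new user-visible clauses amount to
just this (invented table again):

    -- A later major release could, at worst, treat these as plain
    -- FOR SHARE and FOR UPDATE:
    SELECT * FROM parent WHERE id = 1 FOR KEY SHARE;
    SELECT * FROM parent WHERE id = 1 FOR KEY UPDATE;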
