Re: Planner performance extremely affected by an hanging transaction (20-30 times)?

From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Planner performance extremely affected by an hanging transaction (20-30 times)?
Date: 2013-09-25 17:53:13
Message-ID: 20130925175312.GB5578@awork2.anarazel.de
Lists: pgsql-performance

On 2013-09-25 00:06:06 -0700, Jeff Janes wrote:
> > On 09/20/2013 03:01 PM, Jeff Janes wrote:
> > > 3) Even worse, asking if a given transaction has finished yet can be a
> > > serious point of system-wide contention, because it takes the
> > > ProcArrayLock, once per row which needs to be checked. So you have 20
> > > processes all fighting over the ProcArrayLock, each doing so 1000
> > > times per query.

That should be gone in master; we don't use SnapshotNow anymore, which is
what had the TransactionIdIsInProgress() calls you're probably referring
to. The lookups discussed in this thread now use the statement's
snapshot, and each of those snapshots carries its own copy of the
currently running transactions.
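
For the archive, here's a rough sketch of what "its own copy" buys you
(simplified, not the actual tqual.c code; real XID comparisons also have
to handle wraparound, which this ignores): the visibility check only
scans memory attached to the snapshot, so it never needs to take
ProcArrayLock.

/*
 * Simplified sketch of an MVCC-snapshot visibility check.  The snapshot
 * carries the XIDs that were running when it was taken, so asking "was
 * this transaction still in progress?" is a scan of local memory rather
 * than a trip through ProcArrayLock.
 */
typedef unsigned int TransactionId;

typedef struct SnapshotData
{
    TransactionId  xmin;    /* all XIDs below this had finished */
    TransactionId  xmax;    /* all XIDs at/above this treated as running */
    TransactionId *xip;     /* in-progress XIDs copied at snapshot time */
    int            xcnt;    /* number of entries in xip */
} SnapshotData;

/* Returns 1 if xid was still running when the snapshot was taken. */
static int
xid_in_snapshot(TransactionId xid, const SnapshotData *snapshot)
{
    int i;

    if (xid < snapshot->xmin)
        return 0;               /* finished before the snapshot was taken */
    if (xid >= snapshot->xmax)
        return 1;               /* started after the snapshot was taken */

    for (i = 0; i < snapshot->xcnt; i++)
    {
        if (snapshot->xip[i] == xid)
            return 1;           /* was running at snapshot time */
    }
    return 0;                   /* not in the copy, so it had finished */
}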

> > Why do we need a procarraylock for this? Seems like the solution would
> > be not to take a lock at all; the information on transaction commit is
> > in the clog, after all.

More clog accesses would hardly improve the situation.
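
To illustrate why, here's a self-contained sketch (plain C, not
PostgreSQL source): commit status is kept as two bits per XID in a
shared structure, so a per-row status lookup still means reading shared
memory under some lock, and you would mostly be trading one contention
point for another.

#include <pthread.h>
#include <stdint.h>

#define STATUS_IN_PROGRESS  0x0
#define STATUS_COMMITTED    0x1
#define STATUS_ABORTED      0x2

#define MAX_XIDS 4096

/* Two bits of commit status per transaction, shared between backends. */
static uint8_t         status_bits[MAX_XIDS / 4];
static pthread_mutex_t status_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fetch a transaction's status; every call touches the shared structure. */
static int
xid_get_status(uint32_t xid)
{
    int byteno = (xid % MAX_XIDS) / 4;
    int shift  = ((xid % MAX_XIDS) % 4) * 2;
    int status;

    pthread_mutex_lock(&status_lock);
    status = (status_bits[byteno] >> shift) & 0x3;
    pthread_mutex_unlock(&status_lock);

    return status;
}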

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
