Re: Planner performance extremely affected by an hanging transaction (20-30 times)?

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, Bartłomiej Romański <br(at)sentia(dot)pl>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Planner performance extremely affected by an hanging transaction (20-30 times)?
Date: 2013-09-25 06:48:42
Message-ID: CAMkU=1w0kr9SLcuZYuz6t7kvMiep5RpwD5OEADaDQQTc-cyqdw@mail.gmail.com
Lists: pgsql-performance

On Tue, Sep 24, 2013 at 3:35 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Kevin Grittner <kgrittn(at)ymail(dot)com> writes:
> > Are we talking about the probe for the end (or beginning) of an
> > index? If so, should we even care about visibility of the row
> > related to the most extreme index entry? Should we even go to the
> > heap during the plan phase?
>
> Consider the case where some transaction inserted a wildly out-of-range
> value, then rolled back. If we don't check validity of the heap row,
> we'd be using that silly endpoint value for planning purposes ---
> indefinitely.

Would it really be indefinite? Would it be any different from the case where
someone inserted a wild value, committed it, then deleted the row and
committed the deletion? It seems like eventually the histogram would have to
get rebuilt with the ability to shrink the range.

To get really complicated, it could stop at an in-progress tuple and use
its value for immediate purposes, but suppress storing it in the histogram
(storing only committed, not in-progress, values).
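The idea above can be sketched roughly as follows. This is not PostgreSQL
source code, just a minimal illustration of the suggested policy: when probing
the index for its extreme value, an in-progress tuple's value could serve the
immediate planning estimate, while only a committed value would be stored as
the histogram endpoint. The tuple states and function names here are
hypothetical.

```python
# Hypothetical sketch of the proposed endpoint-probe policy; tuple states
# and the probe function are illustrative, not PostgreSQL internals.
COMMITTED, IN_PROGRESS, ABORTED = "committed", "in_progress", "aborted"

def probe_max(index_desc):
    """index_desc: (value, state) pairs, sorted descending by value,
    as an index scan from the high end would return them.

    Returns (immediate_estimate, histogram_endpoint)."""
    immediate = None
    histogram = None
    for value, state in index_desc:
        if state == ABORTED:
            continue  # dead tuple: never usable for anything
        if immediate is None:
            # First non-aborted value: usable for the current plan,
            # even if the inserting transaction is still in progress.
            immediate = value
        if state == COMMITTED:
            # Only a committed value becomes the stored endpoint.
            histogram = value
            break
    return immediate, histogram

# An in-progress outlier influences the immediate estimate (1000)
# but not the stored histogram endpoint (100):
print(probe_max([(1000, IN_PROGRESS), (500, ABORTED), (100, COMMITTED)]))
```

Under this policy a rolled-back outlier stops affecting estimates as soon as
it is seen to be aborted, while an uncommitted one affects only the plan being
built right now, not the persisted statistics.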

Cheers,

Jeff
