Re: poor performance when recreating constraints on large tables

From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: poor performance when recreating constraints on large tables
Date: 2011-06-08 21:57:37
Message-ID: BANLkTi=3w8+sDbYZeDz7Mpj_QMEErEURVQ@mail.gmail.com

---------- Forwarded message ----------
From: Claudio Freire <klaussfreire(at)gmail(dot)com>
Date: Wed, Jun 8, 2011 at 11:57 PM
Subject: Re: [PERFORM] poor performance when recreating constraints on
large tables
To: Samuel Gendler <sgendler(at)ideasculptor(dot)com>

On Wed, Jun 8, 2011 at 9:57 PM, Samuel Gendler
<sgendler(at)ideasculptor(dot)com> wrote:
> Sure, but if it is a query that is slow enough for a time estimate to be
> useful, odds are good that stats that are that far out of whack would
> actually be interesting to whoever is looking at the time estimate, so
> showing some kind of 'N/A' response once things have gotten out of whack
> wouldn't be unwarranted.  Not that I'm suggesting that any of this is a
> particularly useful exercise.  I'm just playing with the original thought
> experiment suggestion.

There's a trick to get exactly that:

Do an EXPLAIN, fetch the expected row count for the result set, create
a dummy sequence, and add a dummy field "nextval(...) as progress" to
the result set.

You probably won't get to read the progress column itself, but that
doesn't matter. Open another transaction and query the sequence there;
sequences are nontransactional.
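
E.g., from a second session (same made-up sequence name as above):

  -- sequence state is visible across transactions
  SELECT last_value FROM progress_seq;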

All the smarts about figuring out the expected result set's size remain
in the application, which is fine by me.
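
For instance, the client could pull the estimate out of EXPLAIN (the
JSON format needs 9.0+) and do the division itself:

  EXPLAIN (FORMAT JSON) SELECT * FROM some_big_table;
  -- read "Plan Rows" from the JSON output, then compute
  -- progress_pct = 100.0 * last_value / plan_rows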
