
Re: Stopgap solution for table-size-estimate updating problem

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Stopgap solution for table-size-estimate updating problem
Date: 2004-11-29 17:44:42
Lists: pgsql-hackers
Greg Stark <gsstark(at)mit(dot)edu> writes:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
>> I'm unsure why you feel you need a knob to defeat this.

> Simply put because the optimizer isn't infallible.

And one of the main reasons that it's fallible is because it sometimes
uses grossly obsolete statistics.  We can fix the first-order problems
in this line with the proposed changes.  (Obsolete pg_statistic contents
are an issue too, but they usually have only second-order effects on
plan choices.)

> And some mistakes are more
> costly than others. Continuing to use a plan that worked fine after an
> incremental change to the table is unlikely to cause pain

We're not talking about "incremental" changes; those would be unlikely
to result in a plan change in any case.  The cases that are causing pain
are where the table size has changed by an order of magnitude and the
planner failed to notice.
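[The failure mode described here can be sketched with a toy cost model. This is illustrative only, not PostgreSQL's actual cost equations; the page counts, row counts, and the random-access penalty of 4 are invented numbers.]

```python
# Toy planner: pick the cheaper of a sequential scan and an index scan.
# Costs are made-up units, NOT PostgreSQL's real cost formulas.

def seqscan_cost(pages):
    # a sequential scan reads every page once
    return pages

def indexscan_cost(matching_rows):
    # each matching row costs a random page fetch (toy penalty of 4)
    return 4 * matching_rows

def choose_plan(est_pages, matching_rows):
    seq = seqscan_cost(est_pages)
    idx = indexscan_cost(matching_rows)
    return ("seqscan", seq) if seq <= idx else ("indexscan", idx)

# Stale stats: the planner still believes the table is 10 pages, so a
# seqscan looks cheap (cost 10 vs. 400 for the index scan).
plan, _ = choose_plan(est_pages=10, matching_rows=100)

# But the table has since grown 100x to 1000 pages, so the plan chosen
# from the stale estimate actually costs:
actual = seqscan_cost(1000) if plan == "seqscan" else indexscan_cost(100)
print(plan, actual)            # seqscan 1000

# With a fresh size estimate the planner would switch plans:
print(choose_plan(1000, 100))  # ('indexscan', 400)
```

The point is that no incremental change produces this flip: the seqscan stays cheapest until the table is many times its estimated size, which is exactly the order-of-magnitude case where a planner working from stale relpages keeps a now-terrible plan.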

> You're going to say the opposite is also possible but it's not really true. A
> DML change that doesn't trigger an execution plan change isn't going to cause
> a disproportionate change in the execution time of queries.

Nonsense.  You're assuming incremental changes (ie, only a small
fractional change in table size), but we are getting killed by
non-incremental cases.  If the plan cost estimates are such that a small
fractional change in table size will cause the planner to switch to a
hugely worse plan, then you're living on the edge of disaster anyway.
Or are you telling me that every time you VACUUM or ANALYZE, you
immediately hand-inspect the plans for every query you use?

A further point is that only VACUUM can decrease the table size, and
VACUUM already updates these stats anyway.  The only "loss of control"
involved here is prevention of a plan change in response to a
significant increase in table size.  Overestimates of result size
usually don't produce as horrible plans as underestimates, so the
downside doesn't seem as large as you make it out to be.
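[The asymmetry claimed here can also be sketched with a toy model. Again the numbers and the quadratic/linear cost shapes are invented for illustration; they are not PostgreSQL's join costing.]

```python
# Toy join planner: why a row-count underestimate tends to hurt more
# than an overestimate. Invented cost shapes, NOT real PostgreSQL costs.

def nestloop_cost(rows):
    # toy: cheap for tiny inputs, quadratic in the row count
    return rows * rows

def hashjoin_cost(rows):
    # toy: linear, with a fixed per-row overhead of 10
    return 10 * rows

def choose_join(est_rows):
    nl, hj = nestloop_cost(est_rows), hashjoin_cost(est_rows)
    return ("nestloop", nl) if nl <= hj else ("hashjoin", hj)

TRUE_ROWS = 10_000

def actual_cost(plan):
    return nestloop_cost(TRUE_ROWS) if plan == "nestloop" else hashjoin_cost(TRUE_ROWS)

# Underestimate (planner thinks 5 rows): nestloop looks cheapest,
# but its true cost explodes quadratically.
under_plan, _ = choose_join(5)
print(under_plan, actual_cost(under_plan))   # nestloop 100000000

# Overestimate (planner thinks 1,000,000 rows): it picks the robust
# hashjoin, whose true cost is merely somewhat pessimistic.
over_plan, _ = choose_join(1_000_000)
print(over_plan, actual_cost(over_plan))     # hashjoin 100000
```

Underestimates push the planner toward plans that are only safe for small inputs, while overestimates push it toward plans whose cost degrades gracefully, which is why suppressing a plan change after the table grows is less dangerous than it first appears.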

> For a production OLTP system I would want to be able to control when
> the plans change. In an ideal world I would even want to inspect and
> test them before they go live.

This is pure fantasy.  It certainly has nothing to do with the current
state of nor future directions for the planner, and you haven't even
convinced me that it's a desirable goal.  What you are describing is a
brittle, inflexible system that is much more likely to break under
unforeseen circumstances than it is to perform well reliably.

			regards, tom lane
