
Re: Stopgap solution for table-size-estimate updating problem

From: Greg Stark <gsstark(at)mit(dot)edu>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Stopgap solution for table-size-estimate updating problem
Date: 2004-11-29 17:21:06
Lists: pgsql-hackers
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> I'm unsure why you feel you need a knob to defeat this.  The only time
> when the plan would change from what you think of as the hand-tuned
> case is when the physical table size is greatly different from what it
> was when you analyzed.  The entire point of wanting to make this change
> is exactly that in that situation the plan *does* need to change.

Simply put, because the optimizer isn't infallible. And some mistakes are more
costly than others. Continuing to use a plan that worked fine after an
incremental change to the table is unlikely to cause pain, whereas changing
plans opens a Pandora's box of potential catastrophic failures.

Imagine a scenario where the system was running fine using nested loops and
index scans but the user deletes a few records (at 9am just as the site is
hitting peak usage and before I'm awake) and suddenly the planner decides to
use sequential scans and hash joins. The resulting plan may be far too slow
and crash the application. This is especially likely if the original plan
estimates were off.

You're going to say the opposite is also possible, but it's not really true. A
DML change that doesn't trigger an execution-plan change isn't going to cause
a disproportionate change in the execution time of queries. It's going to
cause a change in execution time proportionate to the change in the data.

If the user doubles the number of records in the table (something I can
predict the likelihood of) it probably means the query will take twice as
long. Now there may be a faster plan out there but failing to find it just
means the query will take twice as long.

If the user halves the number of records and the planner tries to be clever
and switches plans, then it might be right, but it might be wrong. And the
potential damage if it's wrong is unbounded. It could just take twice as long,
but it could take 1,000 times as long or worse.
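This asymmetry can be illustrated with a toy cost model (a sketch with made-up
constants, not PostgreSQL's actual cost equations): near the crossover point,
a modest change in the outer row count flips which join plan looks cheaper,
and if the underlying estimates are off, the "cheaper" plan can be the wrong
one by an arbitrary margin.

```python
# Toy cost comparison (NOT PostgreSQL's real cost model; all constants
# are invented for illustration). It shows how a change in table size
# can flip the planner's preferred join method at a crossover point.

def nested_loop_cost(outer_rows, inner_lookup_cost=4.0):
    """Index nested loop: roughly one index probe per outer row."""
    return outer_rows * inner_lookup_cost

def hash_join_cost(outer_rows, inner_rows, per_row=1.0, setup=500.0):
    """Hash join: scan both inputs plus a fixed hash-build overhead."""
    return setup + (outer_rows + inner_rows) * per_row

inner_rows = 10_000
for outer_rows in (100, 1_000, 10_000):
    nl = nested_loop_cost(outer_rows)
    hj = hash_join_cost(outer_rows, inner_rows)
    choice = "nested loop" if nl < hj else "hash join"
    print(f"{outer_rows:>6} outer rows: "
          f"nested loop={nl:>8.0f}  hash join={hj:>8.0f}  -> {choice}")
```

With these numbers the preference flips between 1,000 and 10,000 outer rows;
a mis-estimate near that boundary picks the losing plan, and nothing bounds
how bad the loss is.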

For a production OLTP system I would want to be able to control when the plans
change. In an ideal world I would even want to inspect and test them before
they go live. The last thing I want is for them to change spontaneously when
I'm not expecting it.
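For what it's worth, the closest thing to such a knob today is coarse and
manual (and not what this thread is proposing): the enable_* planner settings,
which can be scoped to a single transaction. The table and column names below
are hypothetical:

```sql
-- A coarse, session-scoped workaround, not a real plan-pinning facility.
BEGIN;
SET LOCAL enable_seqscan  = off;  -- discourage sequential scans
SET LOCAL enable_hashjoin = off;  -- discourage hash joins
EXPLAIN
SELECT *
FROM   orders o
JOIN   customers c ON c.id = o.customer_id;
COMMIT;
```

These settings only penalize plan types rather than forbidding them, so the
planner can still fall back to them when no alternative exists.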

