From: Neil Conway <neilc(at)samurai(dot)com>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: detecting poor query plans
Date: 2003-11-26 21:10:01
Message-ID: 87u14qltue.fsf@mailbox.samurai.com
Lists: pgsql-hackers

Greg Stark <gsstark(at)mit(dot)edu> writes:
> At least for all the possible plans of a given query at a specific
> point in time the intention is that the cost be proportional to the
> execution time.
Why is this relevant?

Given a cost X at a given point in time, the system needs to derive an
"expected runtime" Y and compare Y with the actual runtime. I think
that producing Y from an arbitrary X involves so many parameters as
to be practically impossible for us to compute with any degree of
accuracy.
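To make the proposal concrete: the simplest version of what Greg is suggesting would assume a single linear scale factor from cost units to milliseconds, and flag plans whose actual runtime overshoots the prediction. This is only an illustrative sketch (not PostgreSQL code); the names, the scale factor, and the tolerance threshold are all hypothetical:

```python
def expected_runtime_ms(plan_cost, ms_per_cost_unit):
    """Naive linear model: predicted runtime = cost * scale factor."""
    return plan_cost * ms_per_cost_unit

def is_poor_plan(plan_cost, actual_ms, ms_per_cost_unit, tolerance=3.0):
    """Flag a plan if its actual runtime exceeds the prediction by
    more than `tolerance` times (an arbitrary cutoff for illustration)."""
    predicted = expected_runtime_ms(plan_cost, ms_per_cost_unit)
    return actual_ms > predicted * tolerance

# A plan costed at 1000 units, calibrated at 0.05 ms/unit, predicts 50 ms.
# An actual runtime of 200 ms (4x the prediction) would be flagged;
# 100 ms (2x) would not.
print(is_poor_plan(1000.0, 200.0, 0.05))  # True
print(is_poor_plan(1000.0, 100.0, 0.05))  # False
```

The objection above is precisely that no single `ms_per_cost_unit` and `tolerance` pair works across workloads, cache states, and hardware conditions.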
> That's a valid point. The ms/cost factor may not be constant over
> time. However I think in the normal case this number will tend
> towards a fairly consistent value across queries and over time.
That may be true in the "normal case", but it doesn't seem very
helpful to me: in general, the mapping from plan costs to execution
time can vary wildly over time. Spewing "hints" to the log whenever
the system's workload changes, a checkpoint occurs, or the system's
RAID array hiccups doesn't sound like a useful feature to me.
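One could imagine patching over this by letting the ms/cost calibration drift with observed queries, e.g. via an exponential moving average, rather than fixing it as a constant. This sketch (purely hypothetical, not anything proposed in the thread) also shows the failure mode being described: a single checkpoint-slowed query drags the calibration away from its steady-state value:

```python
class CostCalibrator:
    """Track ms-per-cost-unit as an exponential moving average over
    completed queries, so the estimate follows system conditions."""

    def __init__(self, alpha=0.1, initial_ratio=0.05):
        self.alpha = alpha          # smoothing factor: weight of each new sample
        self.ratio = initial_ratio  # current ms-per-cost-unit estimate

    def observe(self, plan_cost, actual_ms):
        """Fold a finished query's observed ms/cost ratio into the estimate."""
        observed = actual_ms / plan_cost
        self.ratio = (1 - self.alpha) * self.ratio + self.alpha * observed
        return self.ratio

cal = CostCalibrator()
cal.observe(1000.0, 50.0)   # matches the prior: ratio stays at 0.05
cal.observe(1000.0, 500.0)  # one checkpoint-slowed query (0.5 ms/unit)
print(round(cal.ratio, 4))  # 0.095 -- nearly double the steady-state value
```

Any such scheme either adapts slowly (and logs spurious hints during every hiccup) or adapts quickly (and loses the stable baseline the comparison needs), which is the fundamental issue being raised.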
> On further thought the real problem is that these numbers are only
> available when running with "explain" on. As shown recently on one
> of the lists, the cost of the repeated gettimeofday calls can be
> substantial.
That sounds more like an implementation detail than the "real problem"
to me -- I think this proposed feature has more fundamental issues,
as described above.
-Neil