From: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>, Julian Markwort <julian(dot)markwort(at)uni-muenster(dot)de>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, marius(dot)timmer(at)uni-muenster(dot)de, arne(dot)scheffer(at)uni-muenster(dot)de
Subject: Re: [FEATURE PATCH] pg_stat_statements with plans
Date: 2017-03-04 13:18:16
Message-ID: 206d380c-adea-712d-131e-03d3b3cbf7d0@2ndquadrant.com
Lists: pgsql-hackers
On 1/25/17 12:43, Simon Riggs wrote:
> On 25 January 2017 at 17:34, Julian Markwort
> <julian(dot)markwort(at)uni-muenster(dot)de> wrote:
>
>> Analogous to this, a bad_plan is saved when the time has been exceeded
>> by a factor greater than 1.1.
> ...and the plan differs?
>
> Probably best to use some stat math to calculate deviation, rather than fixed %.
Yeah, it seems to me too that this needs somewhat deeper analysis.  I
don't see offhand why a 10% deviation in execution time would be a
reasonable threshold for "good" or "bad".  A deviation-based approach
like the one you allude to would be better.
The other problem is that this measures execution time, which can vary
for reasons other than the plan.  I would have expected that the cost
numbers are tracked somehow.
There is also the issue of generic vs specific plans, which this
approach might be papering over.
Needs more thought.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services