A pessimistic planner

From: Stuart Bishop <stuart(at)stuartbishop(dot)net>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: A pessimistic planner
Date: 2014-11-21 05:07:27
Message-ID: CADmi=6PBw=vUD5S7fgLaMvdGRpmxSqJ4FMap8-4cZGyW-4JYgQ@mail.gmail.com
Lists: pgsql-performance

Another day, another timing-out query rewritten to force a more stable
query plan.
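
(For illustration, the kind of rewrite I mean looks roughly like the
sketch below; the table and column names are made up. On 9.x a CTE acts
as an optimization fence, so materializing the selective half of the
join first pins that part of the plan instead of leaving it to the
estimates.)

  -- Hypothetical schema, purely for illustration.
  -- Original form: the planner is free to pick a join strategy that is
  -- great when the estimates are right and disastrous when they are not.
  --   SELECT o.id, o.total
  --   FROM orders o
  --   JOIN customers c ON c.id = o.customer_id
  --   WHERE c.region = 'APAC'
  --     AND o.created_at > now() - interval '1 day';

  -- Rewritten with a CTE, which 9.x treats as an optimization fence:
  -- the recent orders are materialized first, so the outer plan stays
  -- stable even when the estimates drift.
  WITH recent_orders AS (
      SELECT id, total, customer_id
      FROM orders
      WHERE created_at > now() - interval '1 day'
  )
  SELECT r.id, r.total
  FROM recent_orders r
  JOIN customers c ON c.id = r.customer_id
  WHERE c.region = 'APAC';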

While I know that the planner almost always chooses a good plan, I
tend to think it is trying too hard. 99% of the queries might be 10%
faster, but the 1% that time out make my users cross and my life
difficult. I'd much rather have a system that is less efficient
overall but stable, with a very low rate of timeouts.

I was wondering: should the planner be much more pessimistic, trusting
in Murphy's Law and assuming the worst case is the likely case? Would
this give me a much more consistent system, or would it consistently
grind to a halt doing full table scans? Do we actually know the worst
cases, and would it be a relatively easy task to update the planner so
this behavior could optionally be enabled per transaction or across a
system? Is it a Boolean choice between pessimistic and optimistic, or
is pessimism a dial?
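
(For what it's worth, the closest I can get today is flipping planner
GUCs inside the transaction, which amounts to per-query pessimism by
hand; illustrative only, with a made-up table name:)

  BEGIN;
  -- Discourage the plan shape that blows up when the row estimates
  -- are wrong.
  SET LOCAL enable_nestloop = off;
  -- And cap the damage if the planner still picks a bad plan.
  SET LOCAL statement_timeout = '30s';
  SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day';
  COMMIT;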

--
Stuart Bishop <stuart(at)stuartbishop(dot)net>
http://www.stuartbishop.net/
