Re: Cached Query Plans (was: global prepared statements)

From: PFC <lists(at)peufeu(dot)com>
To: "Csaba Nagy" <nagy(at)ecircle-ag(dot)com>, "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
Cc: "Dawid Kuroczko" <qnex42(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Cached Query Plans (was: global prepared statements)
Date: 2008-04-14 15:08:25
Message-ID: op.t9lwcbszcigqcu@apollo13.peufeu.com
Lists: pgsql-hackers

On Mon, 14 Apr 2008 16:17:18 +0200, Csaba Nagy <nagy(at)ecircle-ag(dot)com> wrote:

> On Mon, 2008-04-14 at 16:10 +0200, Csaba Nagy wrote:
>> ... or plan the query with the actual parameter value you get, and also
>> record the range of the parameter values you expect the plan to be valid
>> for. If at execution time the parameter happens to be out of that range,
> replan, and possibly add a new subplan covering the extra range. This
>> could still work with prepared queries (where you don't get any
>> parameter values to start with) by estimating the most probable
>> parameter range (whatever that could mean), and planning for that.
>
> More on that: recording the presumptions under which the (cached!) plan
> is thought to be valid would also facilitate setting up dependencies
> against statistics, to be checked when you analyze tables... and if the
> key value which you depend on with your query changed, the analyze
> process could possibly replan it in the background.

LOL, it started with the idea of making small queries faster, and now the
brain juice is pouring.
Those "Decision" nodes could potentially lead to lots of decisions (ahem).
What if you have 10 conditions in the WHERE clause, plus some join
conditions? With even two candidate ranges per condition, that would
already make 2^10 = 1024 possible plan combinations...

Consider several types of queries:

- The small, quick query that returns one or a few rows: in this case,
planning overhead is large relative to execution time, but I would venture
to guess that the plan always ends up being the same.
- The query that takes a while: in this case, planning overhead is
negligible compared to execution time, so it is better to replan every
time with the actual parameter values.
- The complex query that still executes fast because it doesn't process a
lot of rows and postgres finds a good plan (for instance, a well optimized
search query). These would benefit from reducing the planning overhead,
but they also typically end up with many different plans depending on the
search parameters. Besides, such queries are likely to be dynamically
generated. So, would it be worth adding all those features just to
optimize them? I don't know...
