AgentM <agentm(at)themactionfaction(dot)com> writes:
> On Aug 31, 2006, at 11:18 , mark(at)mark(dot)mielke(dot)cc wrote:
>> I'm attempting to understand why prepared statements would be used for
>> long enough for tables to change to a point that a given plan will
>> change from 'optimal' to 'disastrous'.
> Scenario: A web application maintains a pool of connections to the
> database. If the connections have to be regularly restarted due to a
> postgres implementation detail (stale plans), then that is a database
> problem.
The two major complaints that I've seen are
* plpgsql's prepared plans don't work at all for scenarios involving
temp tables that are created and dropped in each use of the function:
the cached plan refers to the dropped table, so the plan would have to
be regenerated on every successive call. Right now we tell people they
have to use EXECUTE, which is painful and gives up unnecessary amounts
of performance (because it might well be useful to cache a plan for
the lifespan of the table).
* for parameterized queries, a generic plan gives up too much
performance compared to one generated for the specific constant
parameter values actually supplied.
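The EXECUTE workaround for the temp-table case looks roughly like the
following (a minimal sketch, not from the thread; the table and
function names are invented):

```sql
-- Hypothetical illustration of the temp-table problem and the
-- EXECUTE workaround; names are invented for this sketch.
CREATE OR REPLACE FUNCTION summarize() RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    -- The temp table is created and dropped on every call, so a
    -- cached plan referencing it would be invalid on the next call.
    CREATE TEMP TABLE scratch (id integer);
    INSERT INTO scratch VALUES (1), (2), (3);

    -- EXECUTE builds the query plan afresh each time, avoiding the
    -- stale reference at the cost of re-planning on every call.
    EXECUTE 'SELECT count(*) FROM scratch' INTO n;

    DROP TABLE scratch;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```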
Neither of these problems has anything to do with statistics getting
stale.
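The generic-plan complaint can be sketched like this (a hedged
example; the table, column, and skew are invented assumptions, not
from the thread):

```sql
-- Hypothetical illustration: suppose "status" is heavily skewed,
-- say 99% of rows are 'done' and 1% are 'open'.
PREPARE q(text) AS SELECT * FROM orders WHERE status = $1;

-- The prepared statement's generic plan must assume some average
-- selectivity for $1, so it may settle on a sequential scan even
-- when the value actually supplied is rare:
EXECUTE q('open');

-- Planned with the constant visible, the planner can consult the
-- column statistics for 'open' and choose an index scan instead:
SELECT * FROM orders WHERE status = 'open';
```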
regards, tom lane