| From: | Ben <bench(at)silentmedia(dot)com> | 
|---|---|
| To: | Richard Huxton <dev(at)archonet(dot)com> | 
| Cc: | Martijn van Oosterhout <kleptog(at)svana(dot)org>, PostgreSQL general <pgsql-general(at)postgresql(dot)org> | 
| Subject: | Re: a question for the way-back machine | 
| Date: | 2006-12-14 17:35:47 | 
| Message-ID: | Pine.LNX.4.64.0612140932270.6762@localhost.localdomain | 
| Lists: | pgsql-general | 
Interesting. Is that plan cached for the life of the session doing the 
inserts, the life of the trigger, or until the database is restarted?
I guess I'm trying to figure out how to get the plan to re-cache, without 
making it entirely dynamic.
On Thu, 14 Dec 2006, Richard Huxton wrote:
> Ben wrote:
>>> When you insert a tuple, it needs to be inserted into the index, yes. 
>>> There
>>> is no way an insert can cause a sequential scan, except by some trigger
>>> defined on the table.
>> 
>> Actually, as it happens, there *is* a trigger defined on the table to fire 
>> before insert, but it too uses an index scan, at least according to 
>> explain. Though, you'd think if it actually was using an index scan, that 
>> would be showing up in pg_stat_user_tables, which it isn't. Might the fact 
>> that the trigger is a plpgsql function be throwing it off and keeping it 
>> from using more recent planner stats?
>
> The query-plan for the function will be compiled the first time it is 
> called in a session. From that point on, it is fixed. It seems that is 
> the source of your seq-scans.
>
> You can use the EXECUTE statement to construct a dynamic version of the 
> query, which will be planned every time it is run.
>
> --
>  Richard Huxton
>  Archonet Ltd
>
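A minimal sketch of the dynamic approach Richard describes, as a `BEFORE INSERT` trigger in plpgsql. The table and column names (`events`, `name`, `id`) are assumptions for illustration, not from the thread; `EXECUTE ... INTO` requires PostgreSQL 8.1 or later:

```sql
-- Hypothetical example: a BEFORE INSERT trigger whose lookup is
-- re-planned on every call via EXECUTE.
CREATE OR REPLACE FUNCTION check_duplicate() RETURNS trigger AS $$
DECLARE
    existing_id integer;
BEGIN
    -- A static query here would be planned the first time the function
    -- runs in a session, and that plan would be reused thereafter.
    -- EXECUTE forces the query to be planned on every invocation,
    -- picking up current planner statistics.
    EXECUTE 'SELECT id FROM events WHERE name = '
            || quote_literal(NEW.name)
            || ' LIMIT 1'
    INTO existing_id;

    IF existing_id IS NOT NULL THEN
        RETURN NULL;  -- suppress the insert
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_before_insert
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE check_duplicate();
```

Note the trade-off: re-planning every call answers the "how to re-cache" question at the cost of per-call planning overhead, whereas the static form only picks up a fresh plan in a new session.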