"Iain" <iain(at)mst(dot)co(dot)jp> writes:
> I can only tell you (roughly) how it works with Oracle,
Which unfortunately has little to do with how it works with Postgres.
This "latches" stuff is irrelevant to us.
In practice, any repetitive planning in PG is going to be consulting
catalog rows that it draws from the backend's local catalog caches.
After the first read of a given catalog row, the backend won't need
to re-read it unless the associated table has a schema update. (There
are some other cases, like a VACUUM FULL of the catalog the rows came
from, but in practice catalog cache entries don't change often in most
scenarios.)  We need to place only one lock per table referenced in order
to interlock against schema updates; not one per catalog row used.
The upshot of all this is that any sort of shared plan cache is going to
create substantially more contention than exists now --- and that's not
even counting the costs of managing the cache, ie deciding when to throw
cached plans away.
A backend-local plan cache would avoid the contention issues, but would
of course not allow amortizing planning costs across multiple backends.
I'm personally dubious that sharing planning costs is a big deal.
Simple queries generally don't take that long to plan. Complicated
queries do, but I think the reusability odds go down with increasing
query complexity.

			regards, tom lane