Re: Caching of Queries

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Iain" <iain(at)mst(dot)co(dot)jp>
Cc: "Jim C(dot) Nasby" <decibel(at)decibel(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Caching of Queries
Date: 2004-09-28 03:17:40
Message-ID: 9617.1096341460@sss.pgh.pa.us
Lists: pgsql-performance

"Iain" <iain(at)mst(dot)co(dot)jp> writes:
> I can only tell you (roughly) how it works with Oracle,

Which unfortunately has little to do with how it works with Postgres.
This "latches" stuff is irrelevant to us.

In practice, any repetitive planning in PG is going to be consulting
catalog rows that it draws from the backend's local catalog caches.
After the first read of a given catalog row, the backend won't need
to re-read it unless the associated table has a schema update. (There
are some other cases, like a VACUUM FULL of the catalog the rows came
from, but in practice catalog cache entries don't change often in most
scenarios.) We need to place only one lock per table referenced in order
to interlock against schema updates, not one per catalog row used.
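
For the curious, a quick sketch of how to watch this from SQL (the
"orders" table here is just a made-up example): pg_locks shows the
per-table locks a backend holds while its transaction is open.

    BEGIN;
    SELECT count(*) FROM orders;
    -- Still inside the same transaction, list this backend's
    -- relation-level locks:
    SELECT relation::regclass AS rel, mode
      FROM pg_locks
     WHERE pid = pg_backend_pid() AND relation IS NOT NULL;
    -- Expect an AccessShareLock on "orders" (plus any index used),
    -- not a lock per catalog row the planner consulted.
    COMMIT;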

The upshot of all this is that any sort of shared plan cache is going to
create substantially more contention than exists now --- and that's not
even counting the costs of managing the cache, ie deciding when to throw
away entries.

A backend-local plan cache would avoid the contention issues, but would
of course not allow amortizing planning costs across multiple backends.
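
PREPARE already gives you that backend-local cache, for what it's
worth: the statement is planned once and the plan stays private to
the session. A minimal sketch, with made-up names:

    PREPARE fetch_one (int) AS
        SELECT * FROM orders WHERE id = $1;
    EXECUTE fetch_one(42);   -- plan built at PREPARE time is reused
    EXECUTE fetch_one(43);   -- no replanning within this session
    DEALLOCATE fetch_one;

Other backends never see this plan; each session must PREPARE its own.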

I'm personally dubious that sharing planning costs is a big deal.
Simple queries generally don't take that long to plan. Complicated
queries do, but I think the reusability odds go down with increasing
query complexity.
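
A rough way to see planning cost by itself (plain EXPLAIN plans but
does not execute, so psql's \timing is an approximate measure of
parse-plus-plan time; table names again made up):

    \timing on
    -- Simple query: planning is nearly instantaneous.
    EXPLAIN SELECT * FROM orders WHERE id = 42;
    -- Many-way join: the planner must weigh many more join orders.
    EXPLAIN SELECT *
      FROM orders a
      JOIN orders b ON a.id = b.id
      JOIN orders c ON b.id = c.id
      JOIN orders d ON c.id = d.id;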

regards, tom lane
