dev(at)archonet(dot)com (Richard Huxton) writes:
> If you could "pin" data in the cache it would run quicker, but at the
> cost of everything else running slower.
> Suggested steps:
> 1. Read the configuration/tuning guide at:
> 2. Post a sample query/explain analyse that runs very slowly when not
> cached.
> 3. If needs be, you can write a simple timed script that performs a
> query. Or, the autovacuum daemon might be what you want.
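For step 3, a minimal timing harness might look like the sketch below. This is my illustration, not something from Richard's post; the psycopg2 usage in the comment, the connection string, and the table name are all assumptions.

```python
import time

def time_query(run_query, repeats=3):
    """Time repeated executions of a query callable.

    The first run is typically the cold (uncached) case; later runs
    show the warm-cache behaviour discussed in this thread.
    Returns a list of elapsed times in seconds, one per run.
    """
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    return timings

# Hypothetical usage against PostgreSQL (names and DSN are assumed):
#   import psycopg2
#   conn = psycopg2.connect("dbname=test")
#   cur = conn.cursor()
#   timings = time_query(lambda: cur.execute("SELECT count(*) FROM big_table"))
#   print(timings)  # first element usually much larger than the rest
```

Comparing the first element of the returned list against the rest makes the cold-versus-warm difference visible without any guesswork.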
I don't think this case will be anywhere near so simple to resolve.
I have seen this phenomenon occur when satisfying a query requires
pulling a moderate number of blocks into memory for a moderate number
of rows.
Let's say you need 2000 rows, which fit into 400 blocks.
The first time the query runs, it needs to pull those 400 blocks off
disk, which requires 400 reads of 8K of data. That can easily take a
few seconds of I/O.
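The arithmetic behind that estimate can be sketched as follows; the 8 ms per random read is an assumed figure for a disk of that era, not something stated above.

```python
# Back-of-the-envelope cost of the cold (uncached) case described above.
rows = 2000
blocks = 400
block_size_kb = 8      # PostgreSQL's default block size
seek_ms = 8.0          # assumed average random-read latency per block

data_kb = blocks * block_size_kb          # total data pulled off disk
cold_time_s = blocks * seek_ms / 1000.0   # total random-I/O time

print(f"{data_kb} KB read, ~{cold_time_s:.1f} s cold")
# → 3200 KB read, ~3.2 s cold
```

With a different assumed latency the total scales linearly, but the point stands: a few hundred scattered block reads easily adds up to seconds of I/O.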
The second time, not only are those blocks in the OS filesystem
cache, they are probably also in PostgreSQL's shared buffer cache, so
the I/O overhead disappears.
There's very likely no problem with the table statistics; they are
leading to the right query plan, which happens to need to do 5 seconds
of I/O to pull the data into memory.
What is essentially required is the "prescient caching algorithm,"
where the postmaster must consult /dev/esp in order to get a
prediction of what blocks it may need to refer to in the next sixty
seconds.
(format nil "~S(at)~S" "cbbrowne" "cbbrowne.com")
"Normally, we don't do people's homework around here, but Venice is a
very beautiful city, so I'll make a small exception."
--- Robert Redelmeier compromises his principles
pgsql-performance mailing list