What happens when PG scans a table that is too big to fit in the cache?
Won't the whole cache get thrashed and swapped out to disk?
Shouldn't there be a way to lock some tables in the PG cache?
How about characterizing some of the RAM by usage: sequential scans, indexes,
and small frequently used tables?
Tom Lane wrote:
> PG is *not* any smarter about the usage patterns of its disk buffers
> than the kernel is; it uses a simple LRU algorithm that is surely no
> brighter than what the kernel uses. (We have looked at smarter buffer
> recycling rules, but failed to see any performance improvement.) So the
> notion that PG can do a better job of cache management than the kernel
> is really illusory. About the only advantage you gain from having data
> directly in PG buffers rather than kernel buffers is saving the CPU
> effort needed to move data across the userspace boundary --- which is
> not zero, but it's sure a lot less than the time spent for actual I/O.
> So my take on it is that you want shared_buffers fairly small, and let
> the kernel do the bulk of the heavy lifting for disk cache. That's what
> it does for a living, so let it do what it does best. You only want
> shared_buffers big enough so you don't spend too many CPU cycles shoving
> data back and forth between PG buffers and kernel disk cache. The
> default shared_buffers setting of 64 is surely too small :-(, but my
> feeling is that values in the low thousands are enough to get past the
> knee of that curve in most cases.
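To make the quoted advice concrete, here is a minimal postgresql.conf sketch. The value of 2000 is only an illustration of the "low thousands" Tom mentions (shared_buffers counts 8 KB pages, so 2000 pages is about 16 MB); the right number for any given box is an assumption to be tested, not a recommendation.

```
# postgresql.conf (sketch, values illustrative)

# shared_buffers is measured in 8 KB disk pages.
# The default of 64 (512 KB) is too small; "low thousands"
# gets past the knee of the curve per the discussion above.
shared_buffers = 2000        # ~16 MB of PG-managed buffers

# Leave the bulk of RAM to the kernel's disk cache rather
# than growing shared_buffers further.
```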
pgsql-performance by date:
Previous: Shridhar Daithankar, 2003-04-10 09:59:05, Re: Caching (was Re: choosing the right platform)
Next: Tom Lane, 2003-04-10 14:40:15, Re: Caching (was Re: choosing the right platform)