On Thu, 27 Jun 2002, Jan Wieck wrote:
> Since none of us actually has done real benchmarks in this area, we are
> all just debating out of the blue. So please don't take that personally,
Well, I have a pretty good knowledge of how Unix operating systems work
internally (I've been a NetBSD developer for about six years now), so
it's not just out of the blue. However, I will always bow to hard data.
> Sure, the optimum will depend on the application and its usage profile.
> But that's fine tuning, not a rough rule of thumb for general purpose,
> and I think we were looking for the latter.
Good. That's about all I can give.
> > I'd say, at a rough estimate, go for a number of buffers 2-3 times the
> > maximum number of connections you allow. Or less if you anticipate
> > rarely ever having that many connections.
> Here I disagree. The more shared buffer cache you have, the bigger the
> percentage of your database that neither causes read()'s nor memory
> copying from the OS buffer cache.
Certainly. But overall, you will cache fewer distinct blocks, because
many of them will be buffered twice. When you copy a block from the
OS buffer cache into shared memory, the copy still exists in the OS
buffer cache. So that block now occupies memory twice.
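A toy model of that effect, under the simplifying assumption that every
block postgres reads stays resident in the OS page cache (the sizes and
block numbers here are arbitrary, not real PostgreSQL figures):

```python
# Toy illustration of double buffering: every page pulled into
# shared buffers via read() also lands in the OS page cache,
# so the same block occupies memory twice.
os_cache = set()        # blocks held by the OS buffer cache
shared_buffers = set()  # blocks held in postgres shared memory

def read_block(blkno):
    """Simulate postgres fetching a block it does not yet have."""
    os_cache.add(blkno)        # read() faults the block into the OS cache
    shared_buffers.add(blkno)  # then it is copied into shared memory

for blk in range(100):
    read_block(blk)

duplicated = os_cache & shared_buffers
print(len(duplicated))  # 100: every block is buffered twice
```

In this worst case, the memory spent on shared buffers adds nothing to
the set of distinct blocks cached; it only duplicates what the OS
already holds.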
For most workloads, in the long run, that will force you to do disk
I/O that you would not have had to do otherwise. A single disk I/O
is far more expensive than hundreds of copies between the OS buffer
cache and postgres' shared memory.
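A back-of-envelope calculation supports the "hundreds of copies" claim.
The seek time and memory bandwidth below are rough period-appropriate
assumptions, not measurements:

```python
# Rough cost comparison: one disk seek vs. copying an 8 KB page
# between the OS buffer cache and shared memory.
seek_time_s = 8e-3       # ~8 ms average seek + rotational latency
copy_bandwidth = 500e6   # ~500 MB/s memcpy bandwidth (conservative)
page_size = 8192         # postgres page size in bytes

copy_time_s = page_size / copy_bandwidth
copies_per_seek = seek_time_s / copy_time_s
print(round(copies_per_seek))  # 488 -- hundreds of copies per disk I/O
```

So avoiding even an occasional extra disk read is worth a great many
buffer-cache-to-shared-memory copies.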
Draw your own conclusions.
Curt Sampson <cjs(at)cynic(dot)net> +81 90 7737 2974 http://www.netbsd.org
Don't you know, in this new Dark Age, we're all light. --XTC