> For example, if you have 1G of RAM on the box, you can't
> configure a cache of 900 meg and expect things to work well.
> This is because the OS and associated other stuff running on
> the box will use ~300megs. The system will page as a result.
Overcommitting memory leads to thrashing, yes; that is also my experience.
> The only sure fire way I know of to find the absolute maximum
> cache size that can be safely configured is to experiment with
> larger and larger sizes until paging occurs, then back off a bit.
Yeah, I know the trial-and-error method. But I have also learned that
reading the manuals and documentation often helps.
So after skimming the various PostgreSQL tuning materials, I came
across formulas to calculate a good starting point for shared memory
size, and the recommendation to verify with shared-memory information
tools that this size is okay.
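To make the "starting point" concrete, here is a minimal sketch of the kind of formula the tuning guides suggest. The 25% fraction is the common rule of thumb, not an official number, and the function name is my own:

```python
# Rough starting point for shared_buffers, using the common rule of
# thumb from the tuning guides: roughly 25% of physical RAM, so the
# OS and other processes keep their share and the box does not page.

def shared_buffers_start(ram_mb: int, fraction: float = 0.25) -> int:
    """Return a suggested shared_buffers value in MB."""
    if ram_mb <= 0:
        raise ValueError("ram_mb must be positive")
    return int(ram_mb * fraction)

if __name__ == "__main__":
    # The 1 GB box from the quoted example: suggests 256 MB, leaving
    # headroom for the ~300 MB the OS and other software need.
    print(shared_buffers_start(1024))
```

From there one would experiment upward or downward while watching for paging, as suggested above.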
And THAT is exactly the challenge of this thread: I am searching for
tools to check shared memory usage on Windows. ipcs is not available.
And neither Magnus nor Dave, both main contributors of the win32 port
of PostgreSQL, and both far wiser about Windows internals than me,
know of any :(
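Lacking ipcs, the closest I can get is to ask the server itself for its configured size and the OS process list for actual memory use; a crude sketch, assuming psql is on the PATH and the backend image is named postgres.exe:

```shell
:: Show the configured shared buffer size from the server itself
psql -U postgres -c "SHOW shared_buffers;"

:: List the postgres backends and their working-set sizes (Windows cmd)
tasklist /FI "IMAGENAME eq postgres.exe"
```

This shows per-process working sets, not the shared segment as ipcs would, so it is only an approximation.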
The challenge below that: I maintain a win32 PostgreSQL server, which
gets slow every 3-4 weeks. After restarting it runs perfectly again,
for another 3-4 weeks. The Oracle guys at the same customer solved a
similar problem by simply restarting Oracle every night. But that
would not be good enough for my sense of honour :)
Thanks for your thoughts,
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Python: the only language with more web frameworks than keywords.