Check out pg_fincore. Still kind of risky on a production server, but does an excellent job of measuring page access on Linux.
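For a rough standalone illustration of what pg_fincore reports: it asks the kernel, via the mincore(2) syscall, which pages of a relation's underlying file are resident in the OS page cache. A hedged Python sketch of the same idea (Linux and CPython assumed; pg_fincore itself is a C extension, and the helper name here is mine, not part of its API):

```python
# Sketch of the idea behind pg_fincore: ask the kernel which pages of a
# file are resident in the OS page cache, via mincore(2). Linux/CPython
# assumed; `cached_pages` is an illustrative name, not pg_fincore's API.
import ctypes
import ctypes.util
import mmap
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def cached_pages(path):
    """Return (resident_pages, total_pages) for one file."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        if size == 0:
            return (0, 0)
        # A private (copy-on-write) mapping: readable, never writes back
        # to the file, and gives ctypes a buffer it can take an address of.
        m = mmap.mmap(fd, size, access=mmap.ACCESS_COPY)
        buf = (ctypes.c_char * size).from_buffer(m)
        npages = (size + mmap.PAGESIZE - 1) // mmap.PAGESIZE
        vec = (ctypes.c_ubyte * npages)()  # one residency byte per page
        rc = libc.mincore(ctypes.c_void_p(ctypes.addressof(buf)),
                          ctypes.c_size_t(size), vec)
        err = ctypes.get_errno()
        del buf      # release the exported pointer so close() succeeds
        m.close()
        if rc != 0:
            raise OSError(err, os.strerror(err))
        # The low bit of each byte says whether that page is in core.
        return (sum(b & 1 for b in vec), npages)
    finally:
        os.close(fd)
```

Run over the segment files of a relation, this approximates the per-relation residency numbers pg_fincore gives you without installing the extension — though it still maps production files, so the same caution applies.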
----- Original Message -----
> Baron Schwartz's recent post on working set size got me thinking.
> I'm well aware of how I can tell when my database's working set
> exceeds available memory (cache hit rate plummets, performance
> collapses), but it's less clear how I could predict when this might
> happen.
> Baron's proposed method for defining working set size is interesting.
> > Quantifying the working set size is probably best done as a
> > percentile over time. We can define the 1-hour 99th percentile
> > working set size as the portion of the data to which 99% of the
> > accesses are made over an hour, for example.
> I'm not sure whether it would be possible to calculate that today in
> Postgres. Does anyone have any advice?
> Best regards,
> Peter van Hardenberg
> San Francisco, California
> "Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
> Sent via pgsql-performance mailing list
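Baron's "1-hour 99th percentile" definition is mechanical enough to sketch, given an access trace from one window. The input is hypothetical — stock Postgres exposes no per-page access trace, and the function name is mine:

```python
# Toy rendering of the "1-hour 99th percentile working set": given a
# trace of page accesses from one window, find the smallest number of
# distinct pages that account for 99% of the accesses.
# The trace itself is a hypothetical input; stock Postgres doesn't
# expose per-page access logs.
from collections import Counter

def working_set_size(accesses, pct=0.99):
    """Smallest count of distinct pages covering `pct` of all accesses."""
    counts = Counter(accesses)
    total = sum(counts.values())
    covered = 0
    # Take the hottest pages first; the cumulative hit count crosses
    # pct * total exactly at the percentile working set size.
    for i, n in enumerate(sorted(counts.values(), reverse=True), start=1):
        covered += n
        if covered >= pct * total:
            return i
    return len(counts)
```

For example, a trace where one page takes 98 of 100 hits and two other pages take one hit each has a 99th-percentile working set of two pages, even though three pages were touched — which is the point of the percentile definition: cold stragglers don't inflate the number.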