Determining working set size

From: Peter van Hardenberg <pvh(at)pvh(dot)ca>
To: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Determining working set size
Date: 2012-03-26 07:11:02
Message-ID: CAAcg=kUN57mRjKXcVHO_7on+924CyrcPep9wMHbuCSwWxRZ-8A@mail.gmail.com

Baron Schwartz's recent post [1] on working set size got me to thinking.
I'm well aware of how I can tell when my database's working set
exceeds available memory (cache hit rate plummets, performance
collapses), but it's less clear how I could predict when this might
occur.
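
For concreteness, the hit rate I'm watching is the cumulative buffer
hit ratio from pg_stat_database (a sketch; note these counters
accumulate since the last stats reset, so the ratio lags behind
recent behavior):

```sql
SELECT datname,
       blks_hit,
       blks_read,
       -- fraction of block requests satisfied from shared_buffers
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4)
         AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```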

Baron's proposed method for defining working set size is interesting. Quoth:

> Quantifying the working set size is probably best done as a percentile over time.
> We can define the 1-hour 99th percentile working set size as the portion of the data
> to which 99% of the accesses are made over an hour, for example.

I'm not sure whether it would be possible to calculate that today in
Postgres. Does anyone have any advice?
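
The closest thing I'm aware of is the pg_buffercache contrib
extension, which gives a point-in-time snapshot of what's resident in
shared_buffers (not the OS cache, and not a percentile over time) --
something like:

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Relations with the most pages currently resident in shared_buffers,
-- restricted to the current database (reldatabase = 0 covers shared
-- catalogs). Assumes the default 8 kB block size.
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS resident
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

But sampling that over an hour and reducing it to a 99th-percentile
figure would have to be scripted externally, as far as I can tell.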

Best regards,
Peter

[1]: http://www.fusionio.com/blog/will-fusionio-make-my-database-faster-percona-guest-blog/

--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
