
Re: Determining working set size

From: Joshua Berkus <josh(at)agliodbs(dot)com>
To: Peter van Hardenberg <pvh(at)pvh(dot)ca>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Determining working set size
Date: 2012-03-27 19:58:22
Lists: pgsql-performance

Check out pg_fincore.  Still kind of risky on a production server, but does an excellent job of measuring page access on Linux.
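For context, a rough sketch of what this looks like in practice (the table name is a placeholder, and the exact columns pgfincore returns vary by version); the second query is the usual cache-hit-ratio check from pg_stat_database that the original post alludes to:

```sql
-- Install the extension, then ask the OS page cache about one relation.
-- pgfincore() reports, per segment, how many of the relation's OS pages
-- are currently resident in the Linux page cache.
CREATE EXTENSION pgfincore;
SELECT * FROM pgfincore('my_table');  -- 'my_table' is a placeholder

-- Buffer-cache hit ratio for the current database (Postgres-side only,
-- so it misses OS cache hits -- which is why pg_fincore is interesting).
SELECT blks_hit::float / nullif(blks_hit + blks_read, 0) AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```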

----- Original Message -----
> Baron Swartz's recent post [1] on working set size got me to
> thinking.
> I'm well aware of how I can tell when my database's working set
> exceeds available memory (cache hit rate plummets, performance
> collapses), but it's less clear how I could predict when this might
> occur.
> Baron's proposed method for defining working set size is interesting.
> Quoth:
> > Quantifying the working set size is probably best done as a
> > percentile over time.
> > We can define the 1-hour 99th percentile working set size as the
> > portion of the data
> > to which 99% of the accesses are made over an hour, for example.
> I'm not sure whether it would be possible to calculate that today in
> Postgres. Does anyone have any advice?
> Best regards,
> Peter
> [1]:
> --
> Peter van Hardenberg
> San Francisco, California
> "Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
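Baron's percentile definition is concrete enough to sketch directly: given a trace of page accesses over some window, count accesses per page, take pages hottest-first, and stop once they cover the target fraction of accesses. This is a toy illustration of the definition, not something Postgres produces today; getting a real per-page access trace is exactly the open question in the post.

```python
from collections import Counter

def working_set_size(accesses, pct=0.99):
    """Smallest number of distinct pages that together receive
    at least `pct` of all accesses in the trace (the percentile
    working set from Baron's post)."""
    counts = Counter(accesses)      # accesses per page
    needed = pct * len(accesses)    # accesses the hot set must cover
    covered = 0
    # Walk pages hottest-first, accumulating coverage.
    for i, (_page, n) in enumerate(
            sorted(counts.items(), key=lambda kv: -kv[1]), start=1):
        covered += n
        if covered >= needed:
            return i
    return len(counts)

# Toy 1-hour trace: page 1 is hot (900 hits), pages 2-101 get one hit each.
trace = [1] * 900 + list(range(2, 102))
print(working_set_size(trace))  # -> 91 pages cover 99% of accesses
```

With a skewed trace like this, the 99th-percentile working set (91 pages) is noticeably smaller than the number of distinct pages touched (101), which is the point of defining it as a percentile rather than as "everything touched in the window."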

