
Re: Monitoring buffercache...

From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: Kevin Kempter <kevin(at)consistentstate(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Monitoring buffercache...
Date: 2008-11-25 04:34:03
Message-ID:
Lists: pgsql-performance
On Mon, 24 Nov 2008, Kevin Kempter wrote:

> Currently I run this each 15min via cron:
> insert into buffercache_stats select now(), isdirty, count(*) as buffers,
> (count(*) * 8192) as memory from pg_buffercache group by 1,2;

This query isn't going to save the information you need to figure out if 
shared_buffers is working effectively for you.  You'll need the usage 
count information (if you're on 8.3) and a notion of what tables it's 
caching large amounts of data from to do that.  What's going to happen 
with the above is that you'll watch shared_buffers grow to fill whatever 
size you've allocated it, and then the only useful information you'll be 
saving is what percentage of that happens to be dirty.  If it happens that 
the working set of everything you touch is smaller than shared_buffers, 
you'll find that out, but you don't need this query to figure that 
out--just look at the amount of shared memory the postgres processes are 
using with ipcs or top and you can find where that peaks at.
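As a sketch of the kind of query that does capture usage counts per
relation (assuming the contrib/pg_buffercache module is installed and
using the 8.3 column names; the join through pg_class.relfilenode is
one common way to map buffers back to tables in the current database):

```sql
-- Summarize buffer cache contents per relation, highest consumers first
SELECT c.relname,
       count(*) AS buffers,
       avg(b.usagecount) AS avg_usage_count,
       sum(CASE WHEN b.isdirty THEN 1 ELSE 0 END) AS dirty_buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

Something along these lines shows both what is occupying shared_buffers
and whether those pages are being reused (high average usage count) or
just passing through.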

I've got some queries that I find more useful, along with a general
suggested methodology for figuring out if you've sized the buffers
correctly, in my "Inside the PostgreSQL Buffer Cache" presentation at

> Does anyone have any suggestions per a better approach or maybe a way to
> improve the performance for the above query ?

It's possible to obtain this data in a rather messy but faster way by not 
taking all those locks.  Someone even submitted a patch to do just that:

I wouldn't recommend doing that, but it's really the only way to make 
this run faster.

It's nice to grab a snapshot of the buffer cache every now and then just 
to see what tends to accumulate high usage counts and such.  I predict 
that trying to collect it all the time will leave you overwhelmed with 
data that's hard to analyze and act on.  I'd suggest that a snapshot per 
hour, spread across one normal day every week, would be more than enough 
data to figure out how your system is behaving.  If you want something 
worth saving every 15 minutes, you should save a snapshot of the data in
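For the hourly snapshots, one workable shape (a sketch only; the table
and column names here are made up for illustration, not from the
original message) is a history table fed by the per-relation summary,
run from cron:

```sql
-- Hypothetical history table for hourly buffer cache snapshots
CREATE TABLE buffercache_relation_stats (
    snap_time   timestamptz,
    relname     text,
    buffers     integer,
    avg_usage   numeric
);

-- Schedule this INSERT once an hour during the day you're sampling
INSERT INTO buffercache_relation_stats
SELECT now(), c.relname, count(*), avg(b.usagecount)
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname;
```

Keeping relname and usage count per snapshot makes it possible to see
which tables hold onto buffers over time, which the dirty/clean totals
alone can't tell you.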

* Greg Smith gsmith(at)gregsmith(dot)com Baltimore, MD

