
Re: [PATCHES] Automatic adjustment of bgwriter_lru_maxpages

From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Cc: daveg <daveg(at)sonic(dot)net>
Subject: Re: [PATCHES] Automatic adjustment of bgwriter_lru_maxpages
Date: 2007-05-16 03:02:44
Lists: pgsql-hackers pgsql-patches pgsql-performance
On Tue, 15 May 2007, Jim C. Nasby wrote:

> Moving to -performance.

No, really, moved to performance now.

> On Mon, May 14, 2007 at 09:55:16PM -0700, daveg wrote:
>> What is the current thinking on bg_writer settings for systems such as
>> 4 core Opteron with 16GB or 32GB of memory and heavy batch workloads?

First off, the primary purpose of both background writers is to keep the 
individual client backends from stalling to wait for disk I/O.  If you're 
running a batch workload, and there isn't a specific person waiting for a 
response, the background writer isn't as critical to worry about.

As Jim already said, tuning the background writer well really requires a 
look at the usage profile of your buffer pool and some thinking about your 
I/O capacity just as much as it does your CPU/memory situation.

For the first part, I submitted a patch that updates the 
contrib/pg_buffercache module to show the usage count information of your 
buffer cache.  The LRU writer only writes things with a usage_count of 0, 
so taking some snapshots of that data regularly will give you an idea 
whether you can usefully use it or whether you'd be better off making the 
all scan more aggressive.  It's a simple patch that only affects a contrib 
module you can add and remove easily; I would characterize it as pretty 
safe to apply even to a production system as long as you're doing the 
initial tests off-hours.  The patch is at

And the usual summary query I run after installing it in a database is:

select usagecount, count(*), isdirty from pg_buffercache 
group by isdirty, usagecount order by isdirty, usagecount;
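
If you just want a rough sense of how much of the cache is dirty overall, 
a variation along these lines (my own quick sketch, assuming the patched 
module is installed and you're using the standard 8KB block size) works 
too:

select isdirty, count(*) as buffers, 
       pg_size_pretty(count(*) * 8192) as total_size  -- 8192 = 8KB blocks
from pg_buffercache group by isdirty order by isdirty;

The dirty buffer total there is the backlog the background writers have to 
work through, which is useful context for the I/O math below.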

As for the I/O side of things, I'd suggest you compute a worst-case 
scenario for how many disk writes will happen if every buffer the 
background writer comes across is dirty and base your settings on what 
you're comfortable with there.  Say you kept the default interval of 200ms 
but increased the maximum pages value to 1000; each writer could 
theoretically push 1000 x 8KB x 5/second = 40MB/s worth of data to disk. 
Since these are database writes that have to be interleaved with reads, 
the sustainable rate here is not as high as you might think.  You might 
get a useful performance boost just pushing the max numbers from the 
defaults up into the couple-hundred range--with the amount of RAM you have 
probably dedicated to the buffer cache, even the default small percentages 
will cover a lot of ground and might need to be increased.  I like 250 as 
a round number because it makes for at most an even 10MB a second flowing 
out per writer.  I wouldn't go too high on the max writes per
pass unless you're in a position to run some good tests to confirm you're 
not actually making things worse.
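
To put that in postgresql.conf terms, here's a rough sketch of what I'm 
describing (illustrative values only--the percent lines are just the 
defaults, and you should sanity-check all of these against your own I/O 
tests before relying on them):

bgwriter_delay = 200ms          # 5 passes per second
bgwriter_lru_maxpages = 250     # worst case 250 x 8KB x 5/sec = ~10MB/s
bgwriter_all_maxpages = 250     # same ceiling for the all scan
bgwriter_lru_percent = 1.0      # defaults; consider raising only after
bgwriter_all_percent = 0.333    # watching the pg_buffercache snapshots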

* Greg Smith gsmith(at)gregsmith(dot)com Baltimore, MD
