

From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: Benjamin Krajmalnik <kraj(at)servoyant(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Configuration for a new server.
Date: 2011-02-02 01:16:26
Message-ID: 4D48B06A.6020100@2ndquadrant.com
Lists: pgsql-performance
Benjamin Krajmalnik wrote:
>
> There are approximately 50 tables which get updated with almost 100% 
> records updated every 5 minutes -- what is a good number of autovacuum 
> processes to have on these?  The current server I am replacing only 
> has 3 of them but I think I may gain a benefit from having more.
>
>
> Watch pg_stat_user_tables and you can figure this out for your 
> workload.  There are no generic answers in this area.
>
> What in particular should I be looking at to help me decide?
>


The information reported that's related to vacuuming.  If you don't have 
enough workers, you'll see the dead row counts climb without the "last 
autovacuum time" values on enough tables updating often enough to suggest 
it's keeping up.  If you see >20% dead rows on lots of tables, and 
they're not being processed by AV and having their timestamps updated, 
that's your sign that you don't have enough workers.
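A sketch of that kind of check, using the standard columns in 
pg_stat_user_tables (the 20% threshold here is just the figure mentioned 
above, not a hard rule):

```sql
-- Tables where dead rows exceed 20% of the total, plus when
-- autovacuum last processed each one.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / (n_live_tup + n_dead_tup), 1)
           AS pct_dead,
       last_autovacuum
FROM pg_stat_user_tables
WHERE (n_live_tup + n_dead_tup) > 0
  AND n_dead_tup > 0.2 * (n_live_tup + n_dead_tup)
ORDER BY pct_dead DESC;
```

If the same tables keep showing up here run after run, with stale 
last_autovacuum timestamps, that's the "not keeping up" pattern.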


> You'd probably be better off decreasing the delay rather than pushing 
> up the other two parameters.  It's easy to tell if you did it right or 
> not; just look at pg_stat_bgwriter.  If buffers_backend is high 
> relative to the others, that means the multiplier or delay is wrong.  
> Or if maxwritten_clean is increasing fast, that means 
> bgwriter_lru_maxpages is too low.
>
> checkpoints_timed = 261
>
> checkpoints_req = 0
>
> buffers_checkpoint = 49,058,438
>
> buffers_clean = 3,562,421
>
> maxwritten_clean = 243
>
> buffers_backend = 11,774,254
>
> buffers_alloc = 42,816,578
>

See how buffers_backend is much larger than buffers_clean, even though 
maxwritten_clean is low?  That means the background writer isn't running 
often enough to keep up with cleaning things, even though it does a lot 
of work when it does kick in.  In your situation I'd normally do a first 
pass by cutting bgwriter_lru_maxpages to 1/4 of what it is now and 
cutting bgwriter_delay to 1/4 as well (to 50ms), then seeing how the 
proportions change.  You can probably cut the multiplier, too, yet still 
see more pages written by the cleaner.
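As a postgresql.conf sketch of that first pass -- the starting values 
shown in the comments are hypothetical, since your current settings 
weren't posted; use 1/4 of whatever yours actually are:

```
bgwriter_delay = 50ms            # was 200ms; run 4x as often
bgwriter_lru_maxpages = 100      # was (say) 400; 1/4 per round
#bgwriter_lru_multiplier = 2.0   # can likely be reduced as well
```

Running four times as often with a quarter of the per-round page limit 
keeps the same overall ceiling but spreads the cleaning work out, so the 
cleaner is awake more of the time that backends need buffers.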

I recommend saving a snapshot of this data, with a timestamp, any time 
you make a change to one of the background writer or checkpoint timing 
parameters:

select now(),* from pg_stat_bgwriter;

That way you have a new baseline to compare against.  These numbers 
aren't very useful as a single sample, but once you have two of them 
with timestamps you can compute all sorts of fun statistics from the pair.
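One way to keep those baselines around is to store them in a table; the 
table name here is just an example:

```sql
-- Create a history table from the first snapshot:
CREATE TABLE bgwriter_snapshot AS
    SELECT now() AS sample_time, * FROM pg_stat_bgwriter;

-- After each tuning change, append another row:
INSERT INTO bgwriter_snapshot
    SELECT now(), * FROM pg_stat_bgwriter;

-- Compare the two most recent samples:  elapsed time and how many
-- buffers each writer handled in between.
SELECT max(sample_time) - min(sample_time)              AS elapsed,
       max(buffers_checkpoint) - min(buffers_checkpoint) AS ckpt,
       max(buffers_clean)      - min(buffers_clean)      AS clean,
       max(buffers_backend)    - min(buffers_backend)    AS backend
FROM (SELECT * FROM bgwriter_snapshot
      ORDER BY sample_time DESC LIMIT 2) AS last_two;
```

Dividing each delta by the elapsed time gives you write rates you can 
compare across settings changes.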


-- 
Greg Smith   2ndQuadrant US    greg(at)2ndQuadrant(dot)com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
