Re: Database size Vs performance degradation

From: "Dave North" <DNorth(at)signiant(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Database size Vs performance degradation
Date: 2008-07-30 17:39:09
Message-ID: 35FABCF85D99464FB00BC5123DC2A70A05244E0C@s228130hz1ew09.apptix-01.savvis.net
Lists: pgsql-performance

> -----Original Message-----
> From: Greg Smith [mailto:gsmith(at)gregsmith(dot)com]
> Sent: July 30, 2008 12:48 PM
> To: Dave North
> Cc: pgsql-performance(at)postgresql(dot)org
> Subject: Re: [PERFORM] Database size Vs performance degradation
>
> On Wed, 30 Jul 2008, Dave North wrote:
>
> > One observation I've made on the DB system is the disk I/O seems
> > dreadfully slow...we're at around 75% I/O wait sometimes
> and the read
> > rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for
> > un-cached reads).
>
> This is typically what happens when you are not buffering
> enough of the right information in RAM, so that lots of
> small reads and writes to the disk involve lots of seeking.
> You'll only get a couple of MB/s out of a disk if it has to
> move all over the place to retrieve the blocks you asked for.

I could totally see that, except that on another identical server the
MAX rate I was able to get under no load was 20MB/sec, which just seems
awfully low for 10K RPM disks to me (granted, I'm not a performance
analysis expert by any stretch).
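
For reference, the un-cached figures above came from hdparm's buffered
read test; roughly the following, with /dev/sda standing in for the
actual device:

    # -t times buffered ("un-cached") reads from the disk itself;
    # -T adds the cache-only figure for comparison. Run a few times
    # on an otherwise idle box for stable numbers.
    hdparm -tT /dev/sda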

>
> Setting shared_buffers too low makes this more likely to
> happen, because PostgreSQL has to constantly read and write
> out random blocks to make space to read new ones in its
> limited work area. The OS buffers some of that, but not as
> well as if the database server has a bit more RAM for itself
> because then the blocks it most uses won't leave that area.

OK, that makes sense: a "specialist" cache will provide more benefit
than a "general" cache. Got it.

>
> > And if so, would I be better having a higher shared_buffers rather
> > than relying so much on OS cache?
>
> The main situation where making shared_buffers too high is a
> problem on
> 8.1 involves checkpoints writing out too much information at
> once. You didn't mention changing checkpoint_segments on
> your system; if it's at its default of 3, your system is
> likely continuously doing tiny checkpoints, which might be
> another reason why you're seeing so much scattered seek
> behavior above. Something >30 would be more appropriate for
> checkpoint_segments on your server.

It appears ours is currently set to 12, but this is something I'll have
a play with as well.

>
> I'd suggest re-tuning as follows:
>
> 1) Increase shared_buffers to 10,000, test. Things should be
> a bit faster.
>
> 2) Increase checkpoint_segments to 30, test. What you want
> to watch for here is whether there are periods where the
> server seems to freeze for a couple of seconds. That's a
> "checkpoint spike". If this happens, reduce
> checkpoint_segments to some sort of middle ground; some
> people never get above 10 before it's a problem.
>
> 3) Increase shared_buffers in larger chunks; as long as you
> don't see any problematic spikes, you might usefully keep
> going until it's set to at least 100,000 before improvements
> level off.
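
Translating those steps into postgresql.conf terms as I understand
them (a first-pass sketch, not gospel; in 8.1 shared_buffers is in
8kB pages, so 10000 is about 80MB):

    # First pass per Greg's steps (PostgreSQL 8.1)
    shared_buffers = 10000        # step 1: ~80MB, then test
    checkpoint_segments = 30      # step 2: watch for checkpoint spikes
    # step 3: keep raising shared_buffers in larger chunks if no
    # spikes appear, up to at least 100000 (~800MB)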

Do you happen to know if these are "reload" or "restart" tunable
parameters? I think I've read somewhere that they're restart
parameters (assuming I've set SHMMAX high enough, of course).
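
If I'm reading the docs right (my reading, not gospel), pg_settings
answers this directly: its "context" column shows 'postmaster' for
restart-only parameters and 'sighup' for ones a reload picks up:

    -- Check whether these need a reload or a full restart.
    -- context = 'postmaster' => restart; 'sighup' => reload is enough.
    SELECT name, context
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'checkpoint_segments',
                   'effective_cache_size');

From what I can see, shared_buffers is postmaster context (so a
restart, hence the SHMMAX caveat), while checkpoint_segments only
needs a reload.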

>
> > I spent several hours reading info on this list and other
> places and
> > it's highly inconclusive about having high or low shared buffs Vs
> > letting the OS disk cache handle it.
>
<SNIP good reading info>
>
> The other parameter I hope you're setting correctly for your
> system is effective_cache_size, which should be at least 2GB
> for your server (exact sizing depends on how much RAM is
> leftover after the Tomcat app is running).

Now, this is interesting. Just from top and vmstat, I'm seeing the OS
cache sit around 2-3GB pretty consistently with everything running
under full load. So it seems I should be able to pretty safely set
this to 2GB as you suggest.
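
If I have the units right for 8.1 (effective_cache_size is specified
in 8kB pages there, not in memory units), 2GB would look something
like this:

    # postgresql.conf (8.1): effective_cache_size is in 8kB pages,
    # so 2GB = 2 * 1024 * 1024 / 8 = 262144 pages
    effective_cache_size = 262144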

>
> All this is something to consider in parallel with the vacuum
> investigation you're doing. It looks like your autovacuum
> isn't anywhere close to aggressive enough for your workload,
> which is not unusual at all for 8.1, and that may actually be
> the majority of your problem.

Yeah, I've pretty well convinced myself that it is. We have 2 tables
that see this add/remove pattern, where we add something like 100,000
rows per table and then delete around 75,000 per night. What I've just
done is add entries to pg_autovacuum to change vac_scale_factor down
to 0.2 for both of these tables. By my calcs, this will lower the
vacuum threshold for these tables from 221,000 to 111,000 tuples each.
Even this may be too high with max_fsm_pages at 400,000, but I can go
lower if needed. As per other threads, my only real metric for
measuring this is the overall database size, since the better
autovacuum messaging is an 8.3 enhancement.
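
For the record, the arithmetic is the usual threshold formula,
vac_base_thresh + vac_scale_factor * reltuples: if memory serves, the
8.1 defaults are a base of 1000 and a scale factor of 0.4, so a table
of roughly 550,000 rows vacuums at about 221,000 dead tuples, and
dropping the scale factor to 0.2 gives about 111,000. The catalog
entries look roughly like this (the table name is made up, and -1
means "use the system default"):

    -- Per-table autovacuum tuning in 8.1: one row per table in the
    -- pg_autovacuum system catalog; -1 falls back to the GUC default
    INSERT INTO pg_autovacuum
        (vacrelid, enabled,
         vac_base_thresh, vac_scale_factor,
         anl_base_thresh, anl_scale_factor,
         vac_cost_delay, vac_cost_limit)
    VALUES
        ('my_busy_table'::regclass, true,
         -1, 0.2,    -- keep default base threshold, lower scale factor
         -1, -1,     -- leave the analyze settings alone
         -1, -1);    -- default cost-based vacuum delay settings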

I'm starting with just changing the autovac parameters to see how that
affects things. I'm reluctant to change multiple parameters in one
shot (although they all make sense logically), just in case one goes
awry ;)

I have to say, I've learnt a whole load from you folks here this
morning...very enlightening. I'm now moving on to your site, Greg! :)

Cheers

Dave

>
> --
> * Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com
> Baltimore, MD
>
