Re: sudden spurt in swap utilization (was:cpu bound postgresql setup.)

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Rajesh Kumar Mallah" <mallah(dot)rajesh(at)gmail(dot)com>
Cc: "Yeb Havinga" <yebhavinga(at)gmail(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: sudden spurt in swap utilization (was:cpu bound postgresql setup.)
Date: 2010-06-25 16:41:51
Message-ID: 4C2496000200002500032B21@gw.wicourts.gov
Lists: pgsql-performance

Rajesh Kumar Mallah <mallah(dot)rajesh(at)gmail(dot)com> wrote:

> pgtune suggests the following:
> (current values are in brackets with reason); (*) indicates a
> significant difference from the current value.

Different people have come to different conclusions on some of these
settings. I believe that's probably because differences in hardware
and workloads actually make the best choice different in different
environments, and it's not always clear how to characterize that to
make the best choice. If you get conflicting advice on particular
settings, I would strongly recommend testing to establish what works
best for your actual workload on your hardware and OS.

That said, my experience suggests...

> default_statistics_target = 50 # pgtune wizard 2010-06-25
> (current 100 via default)

Higher values add a little bit to the planning time of complex
queries, but reduce the risk of choosing a bad plan. I would
recommend leaving this at 100 unless you notice problems with long
plan times.
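If only a handful of columns have skewed distributions, another
option is to leave the global default alone and raise the target
just for those columns. A sketch (the table and column names here
are illustrative, not from Rajesh's schema):

```sql
-- Raise the statistics target for one skewed column instead of
-- changing default_statistics_target globally, then re-analyze
-- so the new target takes effect.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;
```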

> (*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25
> (16MB via default)

Yeah, I'd boost this to 1GB.

> checkpoint_completion_target = 0.9 # pgtune wizard 2010-06-25
> (0.5 via default)

I'd change this one by itself, and probably after some of the other
tuning is done, so you can get a good sense of "before" and "after".
I'm guessing that 0.9 would be better, but I would test it.
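To make that before/after comparison concrete, you can change just
this one setting and turn on checkpoint logging, then watch the
server log for checkpoint timing and write volume. A minimal
postgresql.conf sketch:

```ini
# Change checkpoint_completion_target by itself, and log checkpoint
# activity so "before" and "after" can actually be compared.
checkpoint_completion_target = 0.9
log_checkpoints = on
```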

> (*) effective_cache_size = 22GB # pgtune wizard 2010-06-25
> (18GB , specified)

Unless you're running other processes on the box which consume a lot
of RAM, 18GB is probably lower than ideal, although this setting
isn't too critical -- it doesn't affect actual RAM allocation; it
just gives the optimizer a hint about how much might get cached. A
higher setting encourages index use; a lower setting encourages
table scans.

> work_mem = 192MB # pgtune wizard 2010-06-25
> (256MB , specified)

With 300 connections, I think that either of these could lead you to
experience intermittent bursts of extreme swapping. I'd drop it to
somewhere in the 16MB to 32MB range until I had a connection pool
configured such that it was actually keeping the number of active
connections much lower.
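Note that work_mem can also be raised per session, so the global
setting can stay modest while known-heavy queries still get a large
allowance. For example:

```sql
-- Keep the global work_mem low; raise it only in the session that
-- runs a known large sort or hash.
SET work_mem = '256MB';
-- ... run the large reporting query here ...
RESET work_mem;
```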

> (*) wal_buffers = 8MB # pgtune wizard 2010-06-25
> (64kB , via default)

Sure, I'd boost this.

> checkpoint_segments = 16 # pgtune wizard 2010-06-25
> (30 , specified)

If you have the disk space for the 30 segments, I wouldn't reduce
it.

> shared_buffers = 7680MB # pgtune wizard 2010-06-25
> (4096 MB , specified)

This one is perhaps the most sensitive to workload. Anywhere
between 1GB and 8GB might be best for you. Greg Smith has some
great advice on how to tune this for your workload.
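One way to inform that tuning is the contrib module pg_buffercache,
which shows what is actually occupying shared_buffers (you'll need
to install the contrib module first; this query is a sketch and
reports only relations in the current database):

```sql
-- Top consumers of shared_buffers, by relation.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```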

> (*) max_connections = 80 # pgtune wizard 2010-06-25
> (300 , ;-) specified)
>
> when I reduce max_connections I start getting errors; I will check
> concurrent connections again during business hours.

That's probably a good number to get to, but you have to reduce the
number of actual connections before you set the limit that low.
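A connection pooler such as pgbouncer is the usual way to do that:
the application keeps its 300 connections, but they are multiplexed
onto a much smaller set of server connections. A hypothetical
pgbouncer.ini sketch (database name and sizes are illustrative):

```ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 300
default_pool_size = 30
```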

> a lot of our connections are in the <IDLE> in transaction state

If any of these stay in that state for more than a minute or two,
you need to address that if you want to get your connection count
under control. If any of them persist for hours or days, you need
to fix it to avoid bloat which can kill performance.
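You can spot the offenders from pg_stat_activity. On 8.x (and up to
9.1) idle transactions show up in the current_query column:

```sql
-- List sessions sitting idle in an open transaction, oldest first.
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
-- (On 9.2 and later, the equivalent check is
--  WHERE state = 'idle in transaction', using the pid and
--  query columns instead.)
```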

-Kevin
