Re: Extreme high load averages

From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Martin Foster <martin(at)ethereal-realms(dot)org>
Cc: Shridhar Daithankar <shridhar_daithankar(at)persistent(dot)co(dot)in>, PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Extreme high load averages
Date: 2003-07-07 19:35:17
Message-ID: Pine.LNX.4.33.0307071330560.4823-100000@css120.ihs.com
Lists: pgsql-novice pgsql-performance

On Sun, 6 Jul 2003, Martin Foster wrote:

> Shridhar Daithankar wrote:
> >
> > It gives a hint to postgresql of how much file system cache is available in the
> > system.
> >
> > You have 1GB memory and your application requirement does not exceed 400MB. So
> > the OS can use roughly 600MB for file system cache. In that case you can set this
> > parameter to 400MB to leave room for other applications in the FS cache.
> >
> > IIRC, BSD needs sysctl tuning to make more memory available for FS cache;
> > otherwise it maxes out at 300MB.
> >
> > Roughly this setting should be (total memory - application
> > requirement) * (0.7 to 0.8).
> >
> > I guess the high kernel load you are seeing is due to increased interaction
> > between postgresql and the OS when data is swapped to/from shared memory. If the
> > OS cache does well, postgresql should reduce this interaction as well.
> >
> >
> > BTW, since you have IDE disks, heavy disk activity can eat CPU as well. Is
> > your disk bandwidth totally maxed out? Check with vmstat or whatever
> > equivalent you have on BSD.
> >
> > Shridhar
> >
> >
>
> I changed the value of effective_cache_size to 512.
> The database restarted without any problems and load averages seem to be
> a bit lower as a result.

I would try a few things. First off, effective_cache_size is measured
in 8k blocks, so 512 would be a setting of 4 Megs. Probably a
little low. If you average 512 Megs free, that would be a setting of 65536.
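To spell out the arithmetic (assuming the default 8 kB block size): 512 MB is 524288 kB, and 524288 / 8 = 65536 blocks. A quick shell check:

```shell
# effective_cache_size is counted in 8 kB blocks;
# convert 512 MB of free memory into blocks:
echo $((512 * 1024 / 8))    # prints 65536
```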

Note that the higher the effective_cache_size, the more the planner will
favor index scans, and the lower it is, the more it will favor sequential scans.

Generally speaking, index scans cost in CPU terms, while seq scans cost in
I/O time.

Since you're reporting low CPU usage, I'm guessing you're getting a lot of
seq scans.

Do you have any type mismatches anywhere that could be the culprit? Are you
running vacuum and analyze regularly? Any tables that are good
candidates for clustering?

A common problem is a table like this:

create table test (info text, id int8 primary key);
insert into test values ('ted',1);
.. a few thousand more inserts;
vacuum full;
analyze;
select * from test where id=1;

will result in a seq scan, always, because the 1 by itself is
autoconverted to int4, which doesn't match int8 automatically. This
query:

select * from test where id=1::int8;

will cast the 1 to an int8 so the index can be used.
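A quick way to confirm which plan you are actually getting is EXPLAIN. A minimal sketch, assuming the test table above exists on your server (test_pkey is the index name PostgreSQL generates by default for the primary key):

```sql
-- Compare the two plans; run against the test table above.
EXPLAIN select * from test where id=1;        -- expect: Seq Scan on test
EXPLAIN select * from test where id=1::int8;  -- expect: Index Scan using test_pkey
```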
