
Re: select on 22 GB table causes "An I/O error occured while sending to the backend." exception

From: Bill Moran <wmoran(at)collaborativefusion(dot)com>
To: henk de wit <henk53602(at)hotmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: select on 22 GB table causes "An I/O error occured while sending to the backend." exception
Date: 2008-08-27 13:01:23
Message-ID: 20080827090123.c0e8e638.wmoran@collaborativefusion.com
Lists: pgsql-performance
In response to henk de wit <henk53602(at)hotmail(dot)com>:

> > What do your various logs (pgsql, application, etc...) have to say?
> 
> There is hardly anything helpful in the pgsql log. The application log
> doesn't mention anything either. We log a great deal of information in
> our application, but there's nothing out of the ordinary there, although
> there's of course always a chance that somewhere we missed something.

There should be something in a log somewhere.  Someone suggested the OOM
killer might be getting you; if so, there should be something in one of
the system logs.

If you can't find anything, then you need to beef up your logging.  Try
increasing the amount of information that gets logged by PG by tweaking
the postgresql.conf settings.  Then run iostat, vmstat and top in an
endless loop, dumping their output to files (I recommend running date(1)
between each run, otherwise you can't correlate the output with the time
of occurrence ;)

While you've got all this extra logging going and you're waiting for the
problem to happen again, do an audit of your postgresql.conf settings for
memory usage and see if they actually add up.  How much RAM does the
system have?  How much of it is free?  How much of that are you eating
with shared_buffers?  How much sort_mem did you tell PG it has?  Have
you told PG that it has more memory than the machine actually has?
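The back-of-envelope arithmetic looks roughly like this (every number
below is a made-up example, not a recommendation; plug in your own
values from postgresql.conf):

```shell
# Rough memory audit: shared_buffers plus per-backend sort memory
# should fit comfortably inside physical RAM.
ram_mb=4096               # total RAM on the machine (assumed)
shared_buffers_mb=1024    # shared_buffers (assumed)
sort_mem_mb=32            # sort_mem per sort, per backend (assumed)
max_connections=100       # max_connections (assumed)

# Worst case: every backend running one sort at the same time.
worst_case_mb=$(( shared_buffers_mb + sort_mem_mb * max_connections ))
echo "worst case: ${worst_case_mb} MB of ${ram_mb} MB RAM"

if [ "$worst_case_mb" -gt "$ram_mb" ]; then
    echo "WARNING: PG may use more memory than the machine has"
fi
```

With these example numbers the worst case is 4224 MB on a 4096 MB box,
which is exactly the kind of over-commitment that invites the OOM killer.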

I've frequently recommended installing pg_buffercache and using mrtg
or something similar to graph various values from it and other easily
accessible statistics in PG and the operating system.  The overhead of
collecting and graphing those values is minimal, and having the data
from those graphs can often be the little red arrow that points you to
the solution to problems like these.  Not to mention the historical
data generally tells you months ahead of time when you're going to
need to scale up to bigger hardware.
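As one sketch of the kind of value worth graphing (this assumes the
pg_buffercache contrib module is installed, and the database name is a
placeholder):

```shell
# Hypothetical sampling query for mrtg-style graphing: how many shared
# buffers each relation currently occupies, largest consumers first.
SQL="
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
"

# Only attempt the query if psql is on the PATH; 'mydb' is a placeholder.
if command -v psql >/dev/null 2>&1; then
    psql -d mydb -c "$SQL"
fi
```

Polling something like that every few minutes and feeding the counts to
mrtg gives you exactly the sort of historical graph described above.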

On a side note, what version of PG are you using?  If it was in a
previous email, I missed it.

Hope this helps.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
