Database size Vs performance degradation

From: "Dave North" <DNorth(at)signiant(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Database size Vs performance degradation
Date: 2008-07-30 12:09:10
Message-ID: 35FABCF85D99464FB00BC5123DC2A70A051D0FB8@s228130hz1ew09.apptix-01.savvis.net
Lists: pgsql-performance

Morning folks,
Long time listener, first time poster. I'm having an interesting
performance problem which I'll try to describe below, in the hope of
getting some enlightenment. First, the environment:

Postgres 8.1.8
shared_buffers = 2000
max_fsm_pages = 400000
Redhat Enterprise 4
Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1
Also running on the server are a Tomcat web server and other ancillaries
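
A note on units, in case it matters: on 8.1, shared_buffers is
expressed in 8kB pages, so the 2000 above works out to roughly 16MB,
if I have that right. From psql:

  SHOW shared_buffers;
  -- on 8.1 this reports the raw page count (2000), i.e. 2000 * 8kB = ~16MB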

Now, the problem. We have an application that continually writes a
bunch of data to a few tables which is then deleted by a batch job each
night. We're adding around 80,000 rows to one table per day and
removing around 75,000 that are deemed to be "unimportant". Now, the
problem we see is that after a period of time, database access becomes
very 'slow' and the load average on the machine gets up to around 5.
When this happens, the application using the DB basically grinds to a
halt. Checking the stats, the DB size is around 7.5GB; no tables or
indexes look to be 'bloated' (we have been running Postgres since 7.3
and know the classic index bloat problem well), and autovacuum has been
running solidly.
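
For concreteness, the size checks I'm referring to are along these
lines (run from psql; this is just an illustration rather than our
exact queries):

  -- overall database size
  SELECT pg_size_pretty(pg_database_size(current_database()));

  -- ten largest tables by on-disk size
  SELECT relname,
         pg_size_pretty(pg_relation_size(oid)) AS on_disk
  FROM   pg_class
  WHERE  relkind = 'r'
  ORDER  BY pg_relation_size(oid) DESC
  LIMIT  10;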

We had this problem around a month ago and again yesterday. Because the
application needs reasonably high availability, we couldn't run a full
vacuum, so what we did was a dump and reload onto another system. What I
found was that after the load, the DB size was around 2.7GB - a decrease
of roughly 5GB. After reloading this back onto the main system, the
world was good again.
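
For what it's worth, the full vacuum we couldn't afford would have been
something along these lines, which takes an exclusive lock on the table
for the duration (the table name is just a placeholder):

  VACUUM FULL VERBOSE big_table;   -- big_table is a placeholder name
  REINDEX TABLE big_table;         -- 8.1's VACUUM FULL can leave indexes bloated
  -- CLUSTER some_index ON big_table is another way to rewrite a table
  -- compactly, but it also takes an exclusive lock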

One observation I've made on the DB system is that disk I/O seems
dreadfully slow... we're at around 75% I/O wait at times, and the read
rates seem quite slow (hdparm reports around 2.2MB/sec - 20MB/sec for
un-cached reads). I've also observed that the OS is using all of the
remaining memory (around 3GB) for its page cache, which is probably the
best it can do with the available memory.
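
In case it's useful, this is the sort of query I can run to see how
much is being served from shared_buffers versus being pushed out to the
kernel and disk (just an illustration; note that blks_hit only counts
shared-buffer hits, so reads satisfied by the OS cache still show up as
blks_read):

  -- requires stats_block_level = on in 8.1 for these counters to be populated
  SELECT datname, blks_read, blks_hit,
         round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
  FROM   pg_stat_database;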

Now, clearly we need to examine whether the application really needs to
write and remove so much data, but my main question is:

Why does the size of the database, with so much "unused" space, seem to
impact performance so much? If (in this case) the extra 5GB of space is
essentially "unallocated", does it factor into any of the caching or
performance metrics that the DBMS uses? And if so, would I be better
off with a higher shared_buffers setting rather than relying so much on
the OS cache?
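
To make that last question concrete, the sort of change I'm wondering
about would look like this in postgresql.conf (the numbers are purely
illustrative guesses for a 4GB box that also runs Tomcat, not settings
I've tested; changing shared_buffers needs a server restart):

  shared_buffers = 50000          # ~400MB; 8.1 units are 8kB pages (2000 = ~16MB today)
  effective_cache_size = 262144   # ~2GB, also in 8kB pages; hints the planner about the OS cache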

Yes, I know we need to upgrade to 8.3 but that's going to take some time
:)

Many thanks in advance.

Dave

___
Dave North
dnorth(at)signiant(dot)com
Signiant - Making Media Move
Visit Signiant at: www.signiant.com
