
postgresql meltdown on PlanetMath.org

From: Aaron Krowne <akrowne@vt.edu>
To: pgsql-performance@postgresql.org
Cc: akrowne@vt.edu
Subject: postgresql meltdown on PlanetMath.org
Date: 2003-03-16 06:01:25
Message-ID: 20030316060125.GD19570@vt.edu
Lists: pgsql-performance
Hi all.

As the topic suggests, I am having fairly critical troubles with
postgresql on PlanetMath.org (a site which I run).  You can go there and
try to pull up some entries and you will see the problem: everything is
incredibly slow.

It is hard to pinpoint when this began happening, but I've tried a
variety of optimizations to fix it, all of which have failed.

First: the machine.  The machine is not too spectacular, but it is not
so bad that the performance currently witnessed should be happening.  It
is a dual PIII-650 with 512MB of RAM and a 20GB IDE drive (yes, DMA is
on).  There is plenty of free space on the drive.

Now, the optimizations I have tried:

- Using hash indices everywhere.  A few months ago, I did this, and
  there was a dramatic and instant speed-up.  However, this began
  degenerating.  I also noticed in the logs that deadlocks were
  happening all over the place.  The server response time was
  intolerable, so I figured the deadlocks might have something to do
  with it, and eliminated all hash indices, replacing them with normal
  BTree indices (a sketch of that rebuild is below, after this list).

- Going back to BTrees yielded a temporary respite, but soon enough the
  server was back to taking half a minute to pull up an already-cached
  entry, which is of course crazy.

- I then tried increasing the machine's shared memory max to 75% of the
  physical memory, and scaled postgresql's buffers accordingly.  This
  also sped things up for a while, but again resulted in eventual
  degeneration.  Even worse, there were occasional crashes from running
  out of memory that (according to my calculations) shouldn't have been
  happening.

- Lastly, I tried reducing the shared memory max and limiting postgresql
  to more conservative values, although still not to the out-of-the-box
  values.  Right now the shared memory max on the system is 128 MB,
  postgres's shared buffers are at 64 MB, sort_mem is at 16 MB, and
  effective_cache_size is at 10 MB (see the settings sketch after this
  list).
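
For reference, the hash-to-BTree swap in the first item looked roughly
like the following.  This is only a sketch; the table and index names
("objects", "objects_name_hash_idx") are hypothetical, not the actual
PlanetMath schema:

    -- Hypothetical example of replacing a hash index with a btree index.
    BEGIN;
    DROP INDEX objects_name_hash_idx;
    -- CREATE INDEX defaults to USING btree, so no USING clause is needed.
    CREATE INDEX objects_name_idx ON objects (name);
    COMMIT;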
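
And the current "conservative" settings, spelled out.  This assumes a
Linux kernel and the PostgreSQL 7.x convention that shared_buffers and
effective_cache_size are counted in 8 kB pages and sort_mem in kB; the
exact file locations may differ on your system:

    # /etc/sysctl.conf -- kernel shared memory ceiling (128 MB)
    kernel.shmmax = 134217728

    # postgresql.conf
    shared_buffers = 8192          # 8192 x 8 kB = 64 MB
    sort_mem = 16384               # in kB: 16 MB per sort
    effective_cache_size = 1280    # 1280 x 8 kB = 10 MB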

For perspective, the size of the PlanetMath database dump is 24 MB.  It
should be able to fit in memory easily, so I'm not sure what I'm doing
wrong regarding the caching.
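
One check worth noting (a rough sketch; pg_class.relpages is only
refreshed by VACUUM or ANALYZE, so the figures may be stale) is whether
the on-disk relation sizes actually line up with that 24 MB figure,
since relpages counts 8 kB pages:

    -- Largest relations by on-disk pages (8 kB each).
    SELECT relname, relpages, relpages * 8 AS approx_kb
      FROM pg_class
     ORDER BY relpages DESC
     LIMIT 10;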

For the most trivial request, Postgresql takes up basically all the CPU
for the duration of the request.  The load average of the machine is
above 1 at all times, sometimes as high as the 30s.  None of this
happens without postgres running, so it is definitely the culprit.

The site averages about one hit every twenty seconds.  This should not
be an overwhelming load, especially for what is just pulling up cached
information 99% of the time.

Given this scenario, can anyone advise?  I am particularly puzzled as to
why everything I tried initially helped, but always degenerated rather
rapidly to a near standstill.  It seems to me that everything should be
able to be cached in memory with no problem; perhaps I need to force
this more explicitly.

My next step, if I cannot fix this, is to try mysql =(

Anyway, whoever helps would be doing a great service to many who use
PlanetMath =)  It'd be much appreciated.  

Aaron Krowne



