Matthew Nuzum wrote:
>>Jack Orenstein <jao(at)geophile(dot)com> writes:
>>>I'm looking at one case in which two successive transactions, each
>>>updating a handful of records, take 26 and 18 *seconds* (not msec) to
>>>complete. These transactions normally complete in under 30 msec.
>>None of this is necessarily going to fix matters for an installation
>>that has no spare I/O capacity, though. And from the numbers you're
>>quoting I fear you may be in that category. "Buy faster disks" may
>>be the only answer ...
> I had a computer once that had an out-of-the-box hard drive configuration
> that provided horrible disk performance. I found a tutorial at O'Reilly
> that explained how to use hdparm to dramatically speed up disk performance
> on Linux. I've noticed on other computers I've set up recently that hdparm
> seems to be used by default out of the box to give good performance.
> Maybe your computer is using all of it's I/O capacity because it's using PIO
> mode or some other non-optimal method of accessing the disk.
There's certainly some scope there. I have an SGI Octane whose SCSI 2
disks were set up by default with no write buffer and a CTQ depth of zero
:/ The IDE drivers in Linux may not be detecting your IDE chipset
correctly and stepping down to a slower mode; however unlikely, there may
be something odd going on, so you could check hdparm out. Ensure the
cables are correct too, and that they aren't crushed or twisted too
badly.... I digress...
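A minimal sketch of that hdparm check, assuming an IDE disk at /dev/hda (the device name, and the `is_pio_mode` helper, are my assumptions, not from the original post):

```shell
# Diagnostic commands to run as root on the real box:
#
#   hdparm -i /dev/hda        # look for the active transfer mode (marked with *)
#   hdparm -Tt /dev/hda       # rough cached/buffered read throughput
#   hdparm -d1 -c1 /dev/hda   # try enabling DMA and 32-bit I/O if supported
#
# A tiny hypothetical helper to spot a disk stuck in PIO mode,
# given a "modes" line from `hdparm -i` output (active mode is starred):
is_pio_mode() {
    # succeeds (exit 0) if the starred/active mode is a PIO mode
    echo "$1" | grep -q '\*pio'
}
```

If the active mode is PIO rather than a UDMA mode, the drive is doing programmed I/O and throughput will be a small fraction of what the hardware can manage.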
Assuming you're running with optimal schema and index design (ie you're
not doing extra work unnecessarily), and your backend has
better-then-default config options set-up (plenty of tips around here),
then disk arrangement is critical to smoothing the ride.
Taking things to a relative extreme, we implemented a set-up with issues
that sounded similar to yours. It was resolved by first optimising
everything but the hardware, then finally optimising the hardware. This
served us well because it meant we squeezed as much as possible out of the
available hardware before finally throwing more at it, getting us the best
possible returns (plus further post-optimisation on the new hardware).
First tip would be to take your pg_xlog and put it on another disk (and
channel). Next, if you're running a journalled fs, get that journal off
onto another disk (and channel). Finally, get as many disks as you can for
the data store and spread the load across the spindles. You're aiming here to
distribute the contention and disk I/O more evenly to remove the
congestion. sar and iostat help out as part of the analysis.
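The pg_xlog move is usually done with a symlink so the backend still finds it under $PGDATA; a minimal sketch (the paths are assumptions, and the server must be stopped before moving anything):

```shell
# Relocate pg_xlog onto its own disk/channel, leaving a symlink behind.
# Run only with the postmaster stopped.
move_xlog() {
    pgdata=$1     # e.g. /var/lib/pgsql/data  (assumed path)
    target=$2     # e.g. /mnt/disk2           (mount on the second disk)
    mv "$pgdata/pg_xlog" "$target/pg_xlog"
    ln -s "$target/pg_xlog" "$pgdata/pg_xlog"
}

# While the load runs, watch per-device utilisation to confirm the
# contention actually moved off the data spindles:
#   iostat -x 5
#   sar -d 5
```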
You say you're using IDE, for which I'd highly recommend switching to
SCSI and multiple controllers, because IDE isn't great for lots of other
reasons. Obviously budgets count, and playing with SCSI certainly
stretches them. We took a total of 8 disks across 2 SCSI 160 channels and
split the drives up into a number of software RAID arrays: RAID1 mirrors
for the OS, pg_xlog, data disk journal and swap, and the rest became a
RAID5 array for the data. You could instead implement your DATA disk as
RAID1+0 if you wanted more perf at the cost of free space. Anyway, it's
certainly not the fastest config out there, but it made all the
difference to this particular application. In fact, we had so much free
I/O that we recently installed another app on there (based on mysql, sorry)
which runs concurrently, and it itself now runs 4 times faster than it
originally did...
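A hedged sketch of that kind of layout with Linux software RAID (the device names and array numbers are assumptions, not the exact config described), plus the capacity trade-off between RAID5 and RAID1+0 worked out:

```shell
# mdadm commands for a similar split (run as root, destroys data on the disks):
#
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # OS mirror
#   mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1  # pg_xlog mirror
#   mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[efgh]1       # data array
#
# Usable capacity for n disks of size s, illustrating the RAID5 vs RAID1+0
# trade-off mentioned above:
raid5_capacity()  { echo $(( ($1 - 1) * $2 )); }  # one disk's worth lost to parity
raid10_capacity() { echo $(( $1 / 2 * $2 )); }    # half the disks mirror the other half
```

With four 100 GB disks, RAID5 gives 300 GB usable while RAID1+0 gives only 200 GB, but RAID1+0 avoids the RAID5 parity write penalty, hence "more perf at the cost of free space".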
YMMV, just my 2p.
www.dsvr.co.uk Development Designer Servers Ltd
In response to, on the pgsql-performance list:
Previous: Tom Lane, 2004-05-11 14:41:48, "Re: Configuring PostgreSQL to minimize impact of checkpoints"
Next: jao, 2004-05-11 16:52:32, "Re: Configuring PostgreSQL to minimize impact of checkpoints"