
Re: database contest results

From: Brian Hurt <bhurt(at)janestcapital(dot)com>
To: mdean <mdean(at)xn1(dot)com>
Cc: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: database contest results
Date: 2006-08-30 13:03:39
Lists: pgsql-advocacy
mdean wrote:

> Jeff Davis wrote:
>> On Wed, 2006-08-30 at 00:10 +0200, Andreas Pflug wrote:
> Guys, what I need is alternative benchmark data, where postgresql 
> comes out better on standard assumptions, etc., than in this example.  
> Can this be done?
> Michael
I think it can be done, it's just a lot of work.  At a minimum, I'd say 
1) Reliability of the final configuration needs to be tested with a "pull 
the plug" test, probably repeated a number of times.  If the database 
doesn't survive this, it is disqualified as not meeting minimum 
reliability requirements.

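To make the "pull the plug" idea concrete, here's a toy sketch of the same principle, using SQLite as a stand-in (so it runs anywhere) and an abrupt process exit instead of an actual power cut.  A child process commits one transaction, leaves a second uncommitted, and dies without any clean shutdown; the parent reopens the database and checks that committed work survived and uncommitted work vanished.  The function and table names are made up for illustration:

```python
import os
import sqlite3
import subprocess
import sys
import tempfile

# Child script: commit one row, leave a second uncommitted, then die
# abruptly via os._exit() -- no close(), no rollback, no cleanup.
CHILD = r"""
import os, sqlite3, sys
conn = sqlite3.connect(sys.argv[1])
conn.execute("CREATE TABLE IF NOT EXISTS t (v TEXT)")
conn.execute("INSERT INTO t VALUES ('committed')")
conn.commit()                       # durable point: COMMIT fsyncs by default
conn.execute("INSERT INTO t VALUES ('uncommitted')")
os._exit(1)                         # simulate the plug being pulled
"""

def pull_the_plug_test(db_path):
    """Crash a writer mid-transaction, then report what survived."""
    subprocess.run([sys.executable, "-c", CHILD, db_path])
    conn = sqlite3.connect(db_path)
    rows = [r[0] for r in conn.execute("SELECT v FROM t")]
    conn.close()
    return rows

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        survivors = pull_the_plug_test(os.path.join(d, "crash.db"))
        # Only the committed row should be visible after recovery.
        assert survivors == ["committed"]
```

A real test against a database server is the same shape, just with the machine's power cord in place of `os._exit()`, and repeated many times under load.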
2) Multi-threaded access to the database is needed- at least dozens of 
threads doing asynchronous changes (updates, inserts, deletes, selects) 
to the database using transactions.  Non-transactional databases need 
not apply.
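The multi-threaded workload can be sketched the same way -- again with SQLite standing in for a real server so the example is self-contained, and with invented names throughout.  Each worker thread opens its own connection and applies changes inside explicit transactions; the final count only comes out right if every update was atomic:

```python
import os
import sqlite3
import tempfile
import threading

def worker(db_path, n_ops):
    """One client: its own connection, each change in its own transaction."""
    conn = sqlite3.connect(db_path, timeout=30)   # wait out lock contention
    for _ in range(n_ops):
        cur = conn.cursor()
        cur.execute("BEGIN IMMEDIATE")            # take the write lock up front
        cur.execute("UPDATE counter SET n = n + 1")
        conn.commit()
    conn.close()

def run_workload(n_threads=8, n_ops=25):
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE counter (n INTEGER)")
    conn.execute("INSERT INTO counter VALUES (0)")
    conn.commit()
    conn.close()
    threads = [threading.Thread(target=worker, args=(path, n_ops))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    conn = sqlite3.connect(path)
    (total,) = conn.execute("SELECT n FROM counter").fetchone()
    conn.close()
    return total

if __name__ == "__main__":
    # 8 threads x 25 transactional increments: no update may be lost.
    assert run_workload() == 8 * 25
```

A real benchmark would mix inserts, deletes, and selects rather than a single counter, but the pass/fail criterion is the same: under concurrent load, no committed change may be lost or half-applied.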

3) The database needs to be tuned by people who actually know how to 
tune the database.  This is the classic "Postgres is slow!" mistake- 
they run a default configuration.  This also means the clients run on a 
different machine, and the specs of the database machine are a) 
reasonable, and b) known to the tuners, so they can actually use the 
capabilities of the machine.  At this point, I'd say a reasonable lower 
bound would be a 64-bit CPU, at least 4G of memory, and 6-8 SATA drives 
in a RAID 1+0 configuration.
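For concreteness, a hypothetical starting point for such a machine might look like the fragment below.  These are illustrative values only, not the tuned configuration this point calls for, and exact parameter names and unit syntax vary by PostgreSQL version:

```
# postgresql.conf -- illustrative starting values for a dedicated
# 4G-RAM server; a real test team would tune from here, not stop here
shared_buffers = 1GB          # roughly 25% of RAM for the buffer cache
work_mem = 16MB               # per-sort/per-hash working memory
maintenance_work_mem = 256MB  # vacuum and index builds
effective_cache_size = 3GB    # planner hint: OS cache + shared_buffers
checkpoint_segments = 32      # spread out checkpoint I/O
```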

Note that in any sort of "real" environment, which includes a small 
webserver app that I actually care about, these requirements will 
reflect reality.  Sooner or later the plug is going to get pulled on my 
database- power outage, the magic smoke being released from some 
hardware, something will happen which will make the database die 
uncleanly, and it will need to recover.  And sooner or later I'm going 
to have more than 1 person accessing the database at a time- at which 
point transactions will be a lifesaver (as a side note, this is why I 
picked Postgres over Mysql).  Even more so, when performance is most 
important is when I have lots of people hitting the DB simultaneously- 
my website just got slashdotted or what have you.  And finally, no 
matter which database I end up picking, I'm going to put some time into 
learning that database, including how to tune it.

The problem here is the cost- both in hardware (we're talking ~$3-4K for 
the DB server alone), but even more so in time.  Time to set up the 
database, time for the knowledgeable people to come out of the woodwork 
and help you configure the database (and possibly the application), time 
to unplug each database multiple times, etc.  Not even a long weekend is 
enough time, you're looking at weeks, if not months.  More time than 
most journalists are willing to invest just to write a "Database 
performance shootout!  Details inside!" article.

