Re: MySQL and PostgreSQL speed compare

From: Alfred Perlstein <bright(at)wintelcom(dot)net>
To: Jarmo Paavilainen <netletter(at)comder(dot)com>
Cc: MYSQL <mysql(at)lists(dot)mysql(dot)com>, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: MySQL and PostgreSQL speed compare
Date: 2000-12-29 12:50:57
Message-ID: 20001229045056.P19572@fw.wintelcom.net
Lists: pgsql-general

* Jarmo Paavilainen <netletter(at)comder(dot)com> [001229 04:23] wrote:
> Hi,
>
> I wrote a small benchmark, just to test my classes, and was surprised
> by the speed differences.
>
> So one more entry to the flame war (or the start of a new one) about which
> one is faster, PostgreSQL or MySQL.
>
> Well, I expected MySQL to be the faster one, but not by this much.
>
> Inserts on MySQL : 0.71sec/1000 rows
> Inserts on PostgreSQL: 10.78sec/1000 rows (15 times slower?)
> Inserts on PostgreSQL*: 1.59sec/1000 rows (2 times slower?)
>
> Modify on MySQL : 0.67sec/1000 rows
> Modify on PostgreSQL: 10.20sec/1000 rows (15 times slower?)
> Modify on PostgreSQL*: 1.61sec/1000 rows (2 times slower?)
>
> Delete on MySQL : 1.04sec/1000 rows
> Delete on PostgreSQL: 20.40sec/1000 rows (almost 20 times slower?)
> Delete on PostgreSQL*: 7.20sec/1000 rows (7 times slower?)
>
> Searches were almost the same (MySQL was faster on some, PostgreSQL on
> others); sorting and reading sorted entries from the database were the
> same. But insert/modify/delete...
>
> "PostgreSQL*" is postgres whith queries inside transactions. But as long as
> transactions are broken in PostgreSQL you cant use them in real life (if a
> query fails inside a transactions block, PostgreSQL "RollBack"s the whole
> transaction block, and thats broken. You can not convince me of anything
> else).

Well, I'm not going to try to convince you because you seem to have
made up your mind already; however, for anyone else watching, there's
nothing majorly broken about the 'all or nothing' approach in
PostgreSQL. In fact, it's very handy.

The all-or-nothing rollback doesn't happen when a query merely fails
to modify or return any rows, only when there's a genuine error in
the code, like inserting duplicate values into a column that should
be unique, or somehow sending malformed SQL to the server
mid-transaction. This is actually a pretty convenient feature,
because it stops a programmer's mistake from proceeding to trash more
data and backs out what has already been done.
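
To see the principle in action without a running PostgreSQL server, here is a minimal sketch using Python's sqlite3 module as a stand-in (PostgreSQL aborts the transaction automatically, whereas here the rollback is issued explicitly after the error; the table and column names are purely illustrative):

```python
import sqlite3

# In-memory database with a UNIQUE constraint, standing in for a
# PostgreSQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.commit()

try:
    conn.execute("INSERT INTO accounts (name) VALUES ('alice')")
    # Duplicate value in a UNIQUE column -- a genuine error, not just
    # a query that happened to match zero rows.
    conn.execute("INSERT INTO accounts (name) VALUES ('alice')")
    conn.commit()
except sqlite3.IntegrityError:
    # Back out everything done since the transaction began, so the
    # mistake can't leave half-applied changes behind.
    conn.rollback()

rows = conn.execute("SELECT count(*) FROM accounts").fetchone()[0]
print(rows)  # 0 -- the first insert was rolled back along with the failed one
```

The point is exactly the "all or nothing" behaviour described above: the first, perfectly valid INSERT is undone together with the failing one, leaving the data consistent.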

The fact that MySQL doesn't support transactions at all severely
limits its utility for applications that need data consistency. It
also makes it very dangerous to try any new queries on a database,
because one can't just issue a rollback after a test run.

> Then I thought that maybe it would even up if I made more than one
> simultaneous call. So I rewrote the utility so that it forked itself
> several times. With PostgreSQL I could not try the test with transactions
> activated (transactions are broken in PostgreSQL, and the test shows it
> clearly). PostgreSQL maxed out my CPU with 5 connections; MySQL used
> around 75% with 20 connections. At five connections MySQL was 5 times
> faster, with 20 connections it was 4 times faster.
>
> I do not claim that this is accurate; maybe my classes are broken or
> something, or the test might be totally wrong. But *I think* it simulates
> quite well an ordinary webserver running the database locally (on the same
> server as the www-server).
>
> The setup is:
>
> PII 450MHz with 256MByte memory.
> Linux Redhat 6.0 (almost out of box).
> MySQL, latest .rpm (a few weeks ago).
> PostgreSQL, from CVS tree (HEAD, a few weeks ago).
> MySQL on a SCSI disk.
> PostgreSQL on an IDE disk. I moved the "data" dir to the SCSI disk and
> tested. Surprise, surprise: it was slower! Well, PostgreSQL was as nice as
> MySQL towards the CPU when it was on the SCSI disk.
> Used gcc to compile PostgreSQL, using only --prefix when
> ./configure(-ing).
>
> If you'd like to run the test (or view the code), download the DBA-Test
> and AFW packages from my site (www.comder.com). No fancy configure scripts
> exist, so you have to modify the code to make it run on your system.
>
> Comments? Reasons for the result? What was wrong with the test?

A lot of things went wrong here. First off, you didn't contact the
developers to let them know ahead of time and discuss tuning the
system. Both the MySQL and PostgreSQL developers deserve a chance
to recommend tuning for your application/benchmark, or to ask that
you delay your benchmark until bug X or Y is addressed.

I'd also point out that while updates and inserts are important (they
sure are for us), you admit that PostgreSQL achieves the same speed
as MySQL when doing searches.

Most sites that I know of serve dynamic content and perform
selects for the most part.

Some other flaws:

You have an admitted imbalance between the disk systems but don't go
into any details.

You probably didn't tune PostgreSQL worth a damn. I don't see any
mention of raising the amount of shared memory allocated to
PostgreSQL. I also imagine you may have run the test many times
against PostgreSQL without vacuuming the database.
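
For anyone repeating the benchmark, those two steps might look like this (a sketch for the PostgreSQL of that era; the buffer count and paths are illustrative, not recommendations):

```shell
# Raise the shared-memory allocation when starting the postmaster
# (-B sets the number of shared disk buffers; the default is tiny).
postmaster -B 3072 -D /usr/local/pgsql/data

# Reclaim dead rows and refresh planner statistics between runs.
psql testdb -c "VACUUM ANALYZE;"
```

Without a VACUUM between runs, every UPDATE and DELETE pass leaves dead tuples behind, so later runs scan ever more garbage and look slower than a tuned installation would.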

Telling both development communities:
> MySQL, latest .rpm (a few weeks ago).
> PostgreSQL, from CVS tree (HEAD, a few weeks ago).
doesn't tell us much; maybe there's some bug in the code that
needed work.

> I do not want to start a flame war. Just need help to get PostgreSQL up to
> speed or MySQL to support sub-selects.

I think your time would be better spent actually implementing the
features you want rather than posting broken and biased benchmarks
that do more harm than good.

bye,
--
-Alfred Perlstein - [bright(at)wintelcom(dot)net|alfred(at)freebsd(dot)org]
"I have the heart of a child; I keep it in a jar on my desk."
