Re: SV: MySQL and PostgreSQL speed compare

From: Lamar Owen <lamar(dot)owen(at)wgcr(dot)org>
To: Jarmo Paavilainen <netletter(at)comder(dot)com>
Cc: MYSQL <mysql(at)lists(dot)mysql(dot)com>, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: SV: MySQL and PostgreSQL speed compare
Date: 2000-12-29 21:08:44
Message-ID: 3A4CFD5C.C4F567A3@wgcr.org
Lists: pgsql-general

Jarmo Paavilainen wrote:
> I run both MySQL and PostgreSQL as they are (minimum switches, no tuning, as
> default as it can be). That is MySQL as the .rpm installed it
> (--datadir --pid-file --skip-locking) and PostgreSQL with -i -S -D. That's
> the way most people would be running them anyway. And default should be good
> enough for this test (simple queries, few rows (max 1000) per table).

Comment to the list as a whole: believe it or not, most PostgreSQL
newbies who are not DBA's by profession really DO run with the default
settings. Maybe benchmarking with both the default and the recommended
settings (which are not clearly and concisely documented as being the
_recommended_ settings) would have its uses. But just benchmarking with
the default settings doesn't in and of itself invalidate the results.

But, then again, if the default settings are so bad performance-wise,
why _are_ they the default anyway? There should be a good reason, of
course, but I think the defaults could or should be revisited to see
whether they still make sense.

> > > > Well I expected MySQL to be the faster one, but this much.

The MySQL crowd used to claim an 'order of magnitude' performance
difference. A difference of only a factor of two is an improvement.

> The idea was to run as recommended and as default as possible. But with the
> latest (alpha/beta/development) code.

While I can't fault the use of the default settings, as stated above --
really, very few people are going to run the BETA CODE! Anyone willing to
install a beta is just as likely to do the recommended tuning. If you are
going to use the default settings, then use the latest NON-BETA
releases.

> I'll test that. Even though it feels like tweaking PostgreSQL away from what
> is considered safe by the PostgreSQL developers. If it were safe, it would
> be the default.

While the reasoning here sounds broken to an experienced PostgreSQL
user or developer, I can definitely see his point.

> > > > transaction block, and thats broken. You can not convince me of
> > > > anything else).

> > > They are not as functionally complete as they could be, I'll give you
> > > that.

> Thanks, I think ;-)

FWIW, I prefer the PostgreSQL transaction block behavior. And it is not
difficult at all to work around -- but, I do see the utility of having
savepoints -- and I am sure we will have those at some point in time.
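
Just to illustrate what I mean by the transaction block behavior (the
table and column names here are made up for the example):

    BEGIN;
    INSERT INTO users (userid) VALUES ('lamar');
    -- Suppose the next statement fails for any reason (bad syntax,
    -- duplicate key, whatever):
    INSERT INTO users (userid) VALUES ('lamar');
    -- PostgreSQL now treats the whole block as aborted and ignores
    -- further queries until the block is closed; the first INSERT is
    -- not kept either.
    ROLLBACK;

The workaround is simply to catch the error, roll back, and retry in a
new transaction; savepoints would let you roll back to just before the
failing statement instead of redoing the whole block.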

> What if I do a SELECT to check for a row. Then I do an INSERT. But between
> SELECT and INSERT someone else inserted a row. NO, I do not think that "good
> programming" will solve this.

Neither will putting the SELECT and INSERT inside a transaction block,
unless you lock the table -- or use something like a UNIQUE INDEX to
prevent duplicate inserts. Or use a trigger.

It sounds like you are trying to prevent duplicate inserts -- something
like a BBS system which needs guaranteed unique user id's. My
experience is that a UNIQUE INDEX is the ONLY practical way to do this,
as the application code cannot possibly prevent an insert which violates
the uniqueness, thanks to the race condition between the SELECT and the
INSERT -- again, assuming that you don't want to lock the whole table
(and who wants to put a bottleneck like that into the system!).
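
To make that concrete (the table, column, and index names below are just
illustrations):

    -- Let the database enforce the uniqueness instead of the application:
    CREATE UNIQUE INDEX users_userid_key ON users (userid);

    -- Two sessions can both run the SELECT, both see no row, and both
    -- try the INSERT; whichever INSERT runs second gets a duplicate-key
    -- error instead of silently creating a duplicate user id. The
    -- application just catches that error and reports "id taken".
    SELECT userid FROM users WHERE userid = 'lamar';
    INSERT INTO users (userid) VALUES ('lamar');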

Of course, if you want uniqueness AND case-insensitive user id's, you
need a UNIQUE INDEX on lower(user-id), not just UNIQUE on user-id.
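
Again purely as an illustration (same made-up names as above):

    -- 'Lamar' and 'lamar' now collide, so user id's stay unique
    -- regardless of case:
    CREATE UNIQUE INDEX users_userid_lower_key ON users (lower(userid));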

Now, as to the multiuser aspects of your benchmarks, you should never
have issued results when the two RDBMS's were running on non-identical
hardware (since PostgreSQL had its data on the IDE disk, and MySQL's was
on the SCSI disk, that qualifies as a _massive_ oversight that
completely invalidates your results).

Although, think about it for a minute: if PostgreSQL is that close to
MySQL's performance in the SINGLE USER case, despite the known extra
overhead for transactions, then things are much better than they used
to be.

It's in the multiuser case that PostgreSQL _really_ shines anyway --
that is, given hardware that can handle the multiuser case in a sane
fashion (and IDE isn't sane hardware for multiuser benchmarking). And I
say that knowing that my (lightly loaded) production database server is
running IDE drives -- I don't need a benchmark-grade system to serve 25
users.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
