Re: more anti-postgresql FUD

From: Thomas Kellerer <spam_eater(at)gmx(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: more anti-postgresql FUD
Date: 2006-10-13 21:11:13
Message-ID: egovdh$tcb$1@sea.gmane.org
Lists: pgsql-general pgsql-hackers

alexei(dot)vladishev(at)gmail(dot)com wrote on 11.10.2006 16:54:
> Do a simple test to see my point:
>
> 1. create table test (id int4, aaa int4, primary key (id));
> 2. insert into test values (0,1);
> 3. Execute "update test set aaa=1 where id=0;" in an endless loop

As others have pointed out, committing the data is a vital step when testing
the performance of a relational/transactional database.

What's the point of updating an infinite number of records and never committing
them? Or were you running in autocommit mode?
Of course MySQL will be faster if you don't have transactions. Just as a plain
text file will be faster than MySQL.
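To make the cost explicit: with autocommit on, every statement is implicitly its own transaction, so each iteration of the quoted loop is equivalent to the sketch below (table and column names taken from the quoted test; the per-commit disk flush is what dominates the timing):

```sql
-- What autocommit does implicitly for every single UPDATE:
BEGIN;
UPDATE test SET aaa = 1 WHERE id = 0;
COMMIT;  -- each COMMIT forces the transaction log to be flushed to disk
```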

You are claiming that this test simulates the load that your application
puts on the database server. Does this mean that you never commit data when
running on MySQL?

This test also shows (in my opinion) that a multi-db application written
against the lowest common denominator simply won't perform equally well on all
platforms. I'm pretty sure the same test would also show very bad performance
on an Oracle server.
It simply ignores the basic optimizations that one should apply in a
transactional system (like batching updates, committing transactions, etc.).
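A minimal sketch of the batching mentioned above, reusing the table from the quoted test — many updates share one transaction, so the commit cost is paid once per batch instead of once per row:

```sql
-- Batched version: one COMMIT (one log flush) amortized over many updates.
BEGIN;
UPDATE test SET aaa = 1 WHERE id = 0;
UPDATE test SET aaa = 2 WHERE id = 0;
-- ... more updates in the same transaction ...
COMMIT;
```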

Just my 0.02€
Thomas
