Re: more anti-postgresql FUD

From: "Dann Corbit" <DCorbit(at)connx(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: more anti-postgresql FUD
Date: 2006-10-13 21:41:09
Message-ID: D425483C2C5C9F49B5B7A41F8944154757DC8E@postal.corporate.connx.com
Lists: pgsql-general pgsql-hackers

> -----Original Message-----
> From: pgsql-general-owner(at)postgresql(dot)org [mailto:pgsql-general-
> owner(at)postgresql(dot)org] On Behalf Of Thomas Kellerer
> Sent: Friday, October 13, 2006 2:11 PM
> To: pgsql-general(at)postgresql(dot)org
> Subject: Re: [GENERAL] more anti-postgresql FUD
>
> alexei(dot)vladishev(at)gmail(dot)com wrote on 11.10.2006 16:54:
> > Do a simple test to see my point:
> >
> > 1. create table test (id int4, aaa int4, primary key (id));
> > 2. insert into test values (0,1);
> > 3. Execute "update test set aaa=1 where id=0;" in an endless loop
>
> As others have pointed out, committing the data is a vital step when
> testing the performance of a relational/transactional database.
>
> What's the point of updating an infinite number of records and never
> committing them? Or were you running in autocommit mode?
> Of course MySQL will be faster if you don't have transactions. Just as a
> plain text file will be faster than MySQL.
>
> You are claiming that this test simulates the load that your application
> puts on the database server. Does this mean that you never commit data
> when running on MySQL?
>
> This test also proves (in my opinion) that any multi-db application using
> the lowest common denominator simply won't perform equally well on all
> platforms. I'm pretty sure the same test would also show very bad
> performance on an Oracle server.
> It simply ignores the basic optimizations that one should do in a
> transactional system (like batching updates, committing transactions, etc.).
>
> Just my 0.02€
> Thomas
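The batching point above can be made concrete with a small sketch. This uses Python's bundled sqlite3 module purely as a stand-in for any transactional engine (the table mirrors the `test` table from the quoted thread; the loop count is arbitrary):

```python
import sqlite3

# In-memory database as a stand-in for a transactional server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, aaa INTEGER)")
conn.execute("INSERT INTO test VALUES (0, 1)")
conn.commit()

# Anti-pattern: commit after every single update.
# Each commit pays the full durability cost for one row.
for i in range(1000):
    conn.execute("UPDATE test SET aaa = ? WHERE id = 0", (i,))
    conn.commit()

# Better: let many updates accumulate in one transaction, commit once.
# The per-transaction overhead is amortized over the whole batch.
for i in range(1000):
    conn.execute("UPDATE test SET aaa = ? WHERE id = 0", (i,))
conn.commit()

print(conn.execute("SELECT aaa FROM test WHERE id = 0").fetchone()[0])
```

On a real server with durable commits (fsync to disk), the gap between the two loops is far larger than sqlite's in-memory mode can show, which is exactly why a commit-per-row benchmark measures commit latency rather than update throughput.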

In a situation where a ludicrously high volume of update transactions is expected, a tool like MonetDB would probably be a good fit:
http://monetdb.cwi.nl/

It's basically the freely available counterpart to Oracle's TimesTen:
http://www.oracle.com/database/timesten.html

As an in-memory database, its high speed will require heaps and gobs of RAM, but in return you will be able to do transactions 10x faster than anything else can.

It might be interesting some day to add fragmented in-RAM column stores (like MonetDB uses) to PostgreSQL for highly transactional tables.
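The column layout alluded to above can be sketched in a few lines. This is only a toy illustration in Python (names and data invented; MonetDB's actual binary association tables, BATs, are far more sophisticated):

```python
from array import array

# Row-wise layout: one tuple per row, how a heap table is usually pictured.
rows = [(0, 1), (1, 5), (2, 9)]

# Column-wise layout: one contiguous in-RAM array per column,
# roughly analogous to a MonetDB BAT per attribute.
col_id = array("i", [0, 1, 2])
col_aaa = array("i", [1, 5, 9])

# An in-place UPDATE touches only the affected column's memory,
# and a scan of one column never reads the bytes of the others.
col_aaa[0] = 42
print(sum(col_aaa))
```

The appeal for hot, heavily updated tables is that each column is a dense array in RAM, so both point updates and column scans stay cache-friendly.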

> ---------------------------(end of broadcast)---------------------------
> TIP 2: Don't 'kill -9' the postmaster
