Re: large dataset with write vs read clients

From: Florian Weimer <fw(at)deneb(dot)enyo(dot)de>
To: mladen(dot)gogala(at)vmsinfo(dot)com
Cc: Greg Smith <greg(at)2ndquadrant(dot)com>, Aaron Turner <synfinatic(at)gmail(dot)com>, pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: large dataset with write vs read clients
Date: 2010-10-10 11:45:01
Message-ID: 871v7y2rwi.fsf@mid.deneb.enyo.de
Lists: pgsql-performance

* Mladen Gogala:

> I have a logical problem with asynchronous commit. The "commit"
> command should instruct the database to make the outcome of the
> transaction permanent. The application should wait to see whether the
> commit was successful or not. Asynchronous behavior in the commit
> statement breaks the ACID rules and should not be used in a RDBMS
> system.

That's a bit over the top. It may make sense to use PostgreSQL even
if the file system doesn't guarantee ACID by keeping multiple
checksummed copies of the database files. Asynchronous commits offer
yet another trade-off here.

Some people use RDBMSs mostly for the *M* (management) part, to get a
consistent administration experience across multiple applications. And even with
asynchronous commits, PostgreSQL will maintain a consistent state of
the database.
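
For illustration, asynchronous commit in PostgreSQL can be enabled per
transaction through the synchronous_commit setting, so only transactions
that can tolerate the weaker durability guarantee opt into it (a minimal
sketch; the table and column names are hypothetical):

    BEGIN;
    -- applies to this transaction only; other sessions keep synchronous commit
    SET LOCAL synchronous_commit TO off;
    INSERT INTO events (payload) VALUES ('low-value row');
    COMMIT;  -- returns before the WAL record is flushed to disk

If the server crashes, the most recently committed asynchronous
transactions may be lost, but the database itself remains in a
consistent state after recovery.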
