Re: [Pgreplication-general] DBMIRROR and INSERT transactions

From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Ezra Nugroho <ezran(at)goshen(dot)edu>
Cc: Hervé Piedvache <herve(at)elma(dot)fr>, Michael Loftis <mloftis(at)wgops(dot)com>, <pgsql-general(at)postgresql(dot)org>, <pgreplication-general(at)gborg(dot)postgresql(dot)org>
Subject: Re: [Pgreplication-general] DBMIRROR and INSERT transactions
Date: 2003-04-03 17:58:14
Message-ID: Pine.LNX.4.33.0304031056080.20023-100000@css120.ihs.com
Lists: pgsql-general

On 31 Mar 2003, Ezra Nugroho wrote:

> Try doing it without replication, check the time.
>
> I think your problem has nothing to do with replication. It is simply
> because you have one huge one-shot transaction. Each time you run
> something in a transaction, the db needs to perform the SQL in a
> rollback-able segment instead of in permanent storage. That means you
> are eating virtual memory like nuts...
> After a while, page swapping happens so frequently that your
> performance drops.
>
> Do you really have to run those 320 000 inserts in a transaction?

Sorry, but you're probably thinking of other databases. While running
truly huge transactions in PostgreSQL has some issues, the one you
describe does not exist. In PostgreSQL, all transactions happen in
permanent storage all the time. So no, it won't eat virtual memory like
nuts or fill a rollback segment, since PostgreSQL doesn't have those. It
will drive up the storage requirements on the main store, and may produce
lots of dead tuples if you're updating / replacing loads of tuples, but
for pure inserts, 100,000 in a batch is no big deal at all.
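For what it's worth, a batch like the one described is just a single transaction wrapping many inserts (table and column names below are made up for illustration; this needs a live PostgreSQL database to run):

```sql
BEGIN;
INSERT INTO mirror_test (id, val) VALUES (1, 'a');
INSERT INTO mirror_test (id, val) VALUES (2, 'b');
-- ... the other ~320 000 inserts ...
COMMIT;

-- If raw load speed matters, COPY is usually much faster than
-- row-by-row INSERTs for bulk loading:
-- COPY mirror_test (id, val) FROM STDIN;
```

Thanks to MVCC, the new rows are written straight to the table's heap; on ROLLBACK they simply become dead tuples for VACUUM to reclaim, rather than living in a separate undo area.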
