Re: Moving from MySQL to PGSQL....some questions (multilevel

From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: "Karl O(dot) Pinc" <kop(at)meme(dot)com>
Cc: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Moving from MySQL to PGSQL....some questions (multilevel
Date: 2004-03-04 04:48:58
Message-ID: 20040304044858.GA16533@wolff.to
Lists: pgsql-general

On Wed, Mar 03, 2004 at 17:22:44 -0600,
"Karl O. Pinc" <kop(at)meme(dot)com> wrote:
>
> To make it fast, you'd want to keep the max(id2) value on the table
> keyed by id1. Your trigger would update the max(id2) value as well
> as alter the row being inserted. To keep from having problems with
> concurrent inserts, you'd need to perform all inserts inside
> serialized transactions. The only problem I see is that there's
> a note in the documentation that says that postgresql's serialization
> doesn't always work. Anybody know if it would work in this case?
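
For concreteness, a rough sketch of that approach might look something
like the following (untested; the table and column names here are made
up for illustration):

    -- Per-id1 counter table holding the current max(id2).
    CREATE TABLE id2_counters (
        id1     integer PRIMARY KEY,
        max_id2 integer NOT NULL DEFAULT 0
    );

    CREATE TABLE items (
        id1 integer NOT NULL,
        id2 integer NOT NULL,
        PRIMARY KEY (id1, id2)
    );

    -- Trigger function: bump the counter for this id1 and use the new
    -- value as id2 for the row being inserted.
    -- (Assumes a counter row for NEW.id1 already exists.)
    CREATE FUNCTION assign_id2() RETURNS trigger AS '
    BEGIN
        UPDATE id2_counters SET max_id2 = max_id2 + 1
            WHERE id1 = NEW.id1;
        SELECT max_id2 INTO NEW.id2 FROM id2_counters
            WHERE id1 = NEW.id1;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER items_assign_id2 BEFORE INSERT ON items
        FOR EACH ROW EXECUTE PROCEDURE assign_id2();

Note that the UPDATE takes a row-level lock on the counter row, so a
second insert for the same id1 will wait for the first to commit (and,
under serializable isolation, may then fail with a serialization error
and have to be retried).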

There was a discussion about predicate locking some time ago (I think
last summer). Postgres doesn't do predicate locking, so it is possible
for two parallel serializable transactions to get results that aren't
consistent with either transaction occurring entirely before the other.
I think the particular example was two parallel transactions each
inserting some rows and then counting them: the counts you get won't
match what any serial ordering of the two transactions would produce.
This might be what the note you saw in the documentation is referring
to.
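
Roughly, the case looked like this (hypothetical table, both sessions
in serializable mode; this is the behavior without predicate locking):

    -- Setup: CREATE TABLE t (v integer);  -- starts empty

    -- Session A:
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO t VALUES (1);

    -- Session B (concurrently):
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO t VALUES (2);

    -- Session A:
    SELECT count(*) FROM t;   -- returns 1, sees only its own insert
    COMMIT;

    -- Session B:
    SELECT count(*) FROM t;   -- returns 1, sees only its own insert
    COMMIT;

In any serial ordering, whichever transaction ran second would have
counted 2 rows, so both counting 1 is a result no serial schedule could
produce.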
