Re: serialization errors

From: Ryan VanderBijl <rvbijl-pgsql(at)vanderbijlfamily(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: serialization errors
Date: 2003-01-30 19:00:42
Message-ID: 20030130190042.GA926@vanderbijlfamily.com
Lists: pgsql-general

> > I'm open to suggestions of a better way to store tree structures in a
> > database ...
>
> Not a better way, per se, but a different way:
> http://searchdatabase.techtarget.com/tip/1,289483,sid13_gci537290,00.html
>
> If you only have two levels, the adjacency list model will work very
> well. It works well when a maximum depth can be specified.
>
> The order field might be better expressed as a non-primary key column.

That article looks interesting ... I'll have to take a look at it in more
detail.

BTW, the node_id is the primary key, and the node_order is a unique
constraint.

Also, BTW, the insert command looks more like:

INSERT INTO tree_node(parent_id, node_name, node_order)
VALUES(
1,
'document',
(SELECT COALESCE(MAX(node_order),0)+1 FROM tree_node WHERE parent_id = 1)
);

>
> > However, back to serialization issue ... I'm going to take one [snip]
> > more stab at convincing you a unique constraint error should be flagged
> > as a serialization error (a "serializable error: unique constraint
> > violated"?)
>
> No it shouldn't. You have attempted to insert duplicate primary keys.
> When you use select max with concurrent transactions, both transactions
> will receive the same value for select max. The second one to insert
> will be inserting a duplicate primary key.
>
> PostgreSQL should say: "unique constraint violated", which it does.

I guess I'm starting to sound like a broken record here, but I'm struggling
to understand why it should say "unique constraint violated" instead of
reporting a serialization error.

BEGIN;
i = SELECT MAX(node_order)+1 FROM tree_node WHERE parent_id = 1;
INSERT INTO tree_node(parent_id, node_name, node_order)
VALUES(1, 'name', i);

Why does it make more sense to get a unique constraint violation at this point?
As far as that transaction is concerned, this is completely and perfectly
correct. When I use a serializable transaction, I would expect all queries
to be internally consistent.
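The interleaving I have in mind can be restaged in a few lines. This is only a sketch: it uses SQLite on a single connection so it runs standalone (real PostgreSQL serializable transactions are the actual subject), but it shows both writers deriving the same MAX before either one inserts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tree_node (
        node_id    INTEGER PRIMARY KEY,
        parent_id  INTEGER,
        node_name  TEXT NOT NULL,
        node_order INTEGER NOT NULL,
        UNIQUE (parent_id, node_order)
    )
""")

def next_order(parent_id):
    # Both "transactions" run this step before either has inserted ...
    return conn.execute(
        "SELECT COALESCE(MAX(node_order), 0) + 1"
        "  FROM tree_node WHERE parent_id = ?",
        (parent_id,),
    ).fetchone()[0]

i1 = next_order(1)
i2 = next_order(1)  # ... so both derive the same node_order.

conn.execute("INSERT INTO tree_node (parent_id, node_name, node_order)"
             " VALUES (1, 'first', ?)", (i1,))
try:
    conn.execute("INSERT INTO tree_node (parent_id, node_name, node_order)"
                 " VALUES (1, 'second', ?)", (i2,))
except sqlite3.IntegrityError as e:
    # The second writer is the one that gets "unique constraint violated".
    print("second insert failed:", e)
```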

Now, if because of a concurrently committed transaction, this would
violate a unique constraint, and the database can't serialize the transaction,
then I should get a serialization error. Then I try again, and one of two
things happens:
1. I'm not smart enough to requery the max id, in which case on try two
I get the unique constraint violated error.
2. I requery the max id, and there is no unique constraint violation.

The other benefit of having this return a "serializable error", which
I neglected to mention last time, is that the user doesn't strictly
have to put a retry limiter in the loop. Currently, if I receive a unique
constraint violated error, there are two reasons it may have happened:
a) someone else committed an entry
b) the query I'm attempting is wrong

So, the way it is currently, I have to special-case the unique constraint
violation. If I receive that error consistently, I don't know if it is
simply because of extremely high activity, or if I have a bug and am
executing a stupid query.

Anyway, now that I know that I can get a unique constraint violated error
in addition to a serialization error, I've added the special-case code, and
am (un?)happily retrying my queries, with a retry limit.
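That retry loop looks roughly like the sketch below. SQLite again stands in for PostgreSQL, so IntegrityError plays the role of the unique-constraint failure (in PostgreSQL you would catch the serialization error as well); the retry limit is the part I was describing:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tree_node (
        node_id    INTEGER PRIMARY KEY,
        parent_id  INTEGER,
        node_name  TEXT NOT NULL,
        node_order INTEGER NOT NULL,
        UNIQUE (parent_id, node_order)
    )
""")

MAX_RETRIES = 3

def insert_child(parent_id, name):
    """Insert a child node, retrying on a unique-constraint failure."""
    for attempt in range(MAX_RETRIES):
        try:
            # MAX is re-derived on every attempt, inside the INSERT itself.
            conn.execute("""
                INSERT INTO tree_node (parent_id, node_name, node_order)
                VALUES (?, ?, (SELECT COALESCE(MAX(node_order), 0) + 1
                                 FROM tree_node WHERE parent_id = ?))
            """, (parent_id, name, parent_id))
            return
        except sqlite3.IntegrityError:
            continue  # a concurrent insert took our slot; retry with a fresh MAX
    raise RuntimeError("still failing after %d attempts" % MAX_RETRIES)

insert_child(1, "chapter-1")
insert_child(1, "chapter-2")
print(conn.execute(
    "SELECT node_name, node_order FROM tree_node"
    " WHERE parent_id = 1 ORDER BY node_order"
).fetchall())  # → [('chapter-1', 1), ('chapter-2', 2)]
```

The limit matters for exactly the reason above: without it, a genuinely buggy query that always violates the constraint would retry forever.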

Thanks!

Ryan
--
Ryan VanderBijl rvbijl(at)vanderbijlfamily(dot)com
