Re: pg_dump and DEFAULT column values

From: "Peter Darley" <pdarley(at)Kinesis-CEM(dot)com>
To: "Eric Ridge" <ebr(at)tcdi(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_dump and DEFAULT column values
Date: 2001-11-08 21:31:13
Message-ID: NNEAICKPNOGDBHNCEDCPIEAOCBAA.pdarley@kinesis-cem.com
Lists: pgsql-general

If users will only ever reach the database through your application, you
might want to move that logic into the client application or into the INSERT
statement itself, rather than having the database do it. You could have
something like:
INSERT INTO Table (Field, Field2, Field3)
    VALUES ((SELECT MAX(Field) FROM Table) + 1, 'Value2', 17);
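
One wrinkle with that: MAX() returns NULL when the table is empty, so the
very first insert may need a guard, e.g. (same placeholder table and column
names as above):

INSERT INTO Table (Field, Field2, Field3)
    VALUES (COALESCE((SELECT MAX(Field) FROM Table), 0) + 1, 'Value2', 17);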

Thanks,
Peter Darley

-----Original Message-----
From: pgsql-general-owner(at)postgresql(dot)org
[mailto:pgsql-general-owner(at)postgresql(dot)org] On Behalf Of Eric Ridge
Sent: Wednesday, November 07, 2001 9:46 PM
To: Tom Lane
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] pg_dump and DEFAULT column values

> But if you allow applications to change the field, then all bets about
> sort order are off anyway, no? It's far from clear exactly what
> semantics you think you are guaranteeing here.

The only thing I'm trying to guarantee is that new records appear last
(or first, depending on sort direction). Once the user "physically"
re-sorts the records (by changing the value of this field) then I don't
care what the old values are. Only that new records are greater than
the last existing value.

> In any case it seems like changing the initially-assigned field value
> is an infrequent operation, and that you ought to make *that* be the
> expensive and not-safe-for-parallelism case, not row insertion.

About as frequent as insertions. As records are added, they're manually
sorted. So every INSERT (eventually) causes a manual UPDATE for all
records.

In most cases I know, before I do the INSERT, how the record needs to be
sorted, so I can set this value accordingly. But other cases involve a bulk
insert process and a manual re-sorting process.

> (Perhaps this is a wrong assumption on my part, in which case ignore
> what follows.) I'd still be inclined to use a sequence for insertion,
> and to enforce consistency on update with an AFTER UPDATE trigger that
> does something like
> if old.fld != new.fld then
> begin
> lock mytable;
> select setval('seq', max(fld)) from table;
> end;
> (Untested, but I hope the idea is clear.)

Very clear. However, this table is going to receive many UPDATEs that
aren't related to this sorting field. Would the overall overhead of
executing the trigger on every UPDATE outweigh the overhead of doing it
in a BEFORE INSERT trigger instead:

begin
    if NEW.fld is null then  -- user didn't specify a value
        lock mytable;
        NEW.fld := select max(fld)+1 from table;
    end if;
end;
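
For what it's worth, spelled out as an actual PL/pgSQL trigger it would be
something like this (untested sketch; mytable, fld, and the function name
are just placeholders, and COALESCE covers the empty-table case):

-- older releases may need this declared as "returns opaque" instead
create function set_fld() returns trigger as '
begin
    if new.fld is null then  -- user did not specify a value
        lock table mytable in exclusive mode;
        select coalesce(max(fld), 0) + 1 into new.fld from mytable;
    end if;
    return new;
end;
' language 'plpgsql';

create trigger mytable_set_fld before insert on mytable
    for each row execute procedure set_fld();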

I guess it depends on the frequency of updates vs. the frequency of
inserts. I'm not sure, at this point, which will be greater.
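
For comparison, the UPDATE-side trigger quoted above would come out roughly
like this (equally untested; 'seq' stands for whatever sequence feeds the
column on INSERT):

create function sync_fld_seq() returns trigger as '
begin
    if new.fld <> old.fld then
        lock table mytable in exclusive mode;
        perform setval(''seq'', (select max(fld) from mytable));
    end if;
    return new;
end;
' language 'plpgsql';

create trigger mytable_sync_fld after update on mytable
    for each row execute procedure sync_fld_seq();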

eric

