Re: Duplicate rows

From: Samer Abukhait <abukhait(at)gmail(dot)com>
To: Bob Pawley <rjpawley(at)shaw(dot)ca>
Cc: Postgre General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Duplicate rows
Date: 2005-11-14 22:48:18
Message-ID: 7d215b0c0511141448v1a6b6588j68358ff19e308105@mail.gmail.com
Lists: pgsql-general

So what exactly is the problem?

What is keeping you from adding the primary key on fluid_id?
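For example (assuming pipe.fluid_id has no duplicate values yet, since the ALTER would otherwise fail):

ALTER TABLE pipe ADD PRIMARY KEY (fluid_id);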

In the trigger, you could use an EXISTS check to see whether the row is already
there before inserting. And I don't think you need the loop: you can do the same
work once per row.
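
Something along these lines (an untested sketch, reusing the table and column
names from your message below):

create or replace function base() returns trigger as $$
BEGIN
    -- copy only rows flagged 'ip', and only if pipe doesn't already have them
    IF NEW.contain = 'ip'
       AND NOT EXISTS (SELECT 1 FROM pipe WHERE fluid_id = NEW.fluid_id) THEN
        INSERT INTO pipe (fluid_id) VALUES (NEW.fluid_id);
    END IF;
    RETURN NULL;
END;
$$ language plpgsql;

create trigger trig1 after insert on process
for each row execute procedure base();

With the primary key in place, anything that still slips through raises an error
instead of landing in the table as a duplicate.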

On 11/12/05, Bob Pawley <rjpawley(at)shaw(dot)ca> wrote:
>
> I have the following working: process.fluid_id is transferred to
> pipe.fluid_id when the column process.contain has the value 'ip'.
> There is no transfer when the contain column holds other values. Success
> so far.
>
> How do I keep the table pipe from being populated with duplicate rows? Among
> other reasons not to have duplicate rows, I want to make pipe.fluid_id a
> primary key.
>
> Bob
>
> CREATE TABLE pipe ( fluid_id int4 NOT NULL);
> CREATE TABLE process( fluid_id int4 NOT NULL, process varchar, contain
> varchar) ;
>
> create or replace function base() returns trigger as $$
> DECLARE
> myrow RECORD;
> BEGIN
>
> for myrow in select * from process where contain = 'ip' loop
> insert into pipe(fluid_id) values (myrow.fluid_id);
> if not found then
> null;  -- null is the plpgsql no-op; "do nothing" is not valid here
> end if;
> end loop;
> return NULL;
> END;
> $$ language plpgsql;
>
> create trigger trig1 after insert on process
> for each row execute procedure base();
>
> insert into process (fluid_id, process, contain)
> values ('1', 'water3', 'ip');
>
