From: Patrik Kudo <kudo@partitur.se>
To: Allan Kelly <allan.kelly@buildstore.co.uk>
Cc: "pgsql-sql@postgresql.org" <pgsql-sql@postgresql.org>
Subject: Re: duplicates
Date: 2000-04-07 11:27:38
Message-ID: 38EDC62A.77BBB73D@partitur.se
Lists: pgsql-sql

Hi Allan
First, I'd suggest creating a unique index on the column in question, so that
the database enforces uniqueness for you. But since you already have duplicate
tuples, you can do the following:
delete from subscribers where exists (
  select 1 from subscribers s
  where s.user_name = subscribers.user_name
  and s.oid > subscribers.oid
);
This should delete every row that has a duplicate with a higher oid, keeping
only one row per user_name. I'm not 100% certain it'll do the job right,
though, so PLEASE try it out on a test table first =)
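Once the duplicates are gone, something along these lines should keep new ones
out (untested, and the index name is just a placeholder):
-- hypothetical index name; rejects any insert/update that would
-- create a second row with the same user_name
create unique index subscribers_user_name_key on subscribers (user_name);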
Regards,
Patrik Kudo
Allan Kelly wrote:
>
> I have a system bug which means that I have occasional duplicate entries in
> my 'subscribers' table. I'm trying to find a query which performs something
> like a 'where user_name is not unique' query.
>
> At the moment we use this kludge:
>
> select count(*), user_name from subscribers
> group by user_name order by count;
>
> (I'm surprised I can't add 'where count > 1' but it seems conditionals on
> aggregate fields are not allowed).
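Side note: a condition on an aggregate goes in a HAVING clause rather than in
WHERE, so a query along these lines (just a sketch of the syntax) would list
only the duplicated user names:
select count(*), user_name from subscribers
group by user_name
having count(*) > 1; -- only groups with more than one row pass the filter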
>
> This gives me a very long list with the 'not unique' entries at the bottom, e.g.
>
> count | user_name
> ------+------------------
>     1 | bill.hicks
> [ ..cut 9 zillion results.. ]
>     1 | margaret.thatcher
>     4 | linus.torvalds
>     9 | bill.gates
>
> I then have to do
>
> select oid from subscribers where user_name = 'linus.torvalds';
>
> and delete all but one of the records. Is there a better way to do this?
> And yes, we're working on a system fix to avoid the problem in the 1st place!
>
> TIA, al.