From: | Jim Mercer <jim(at)reptiles(dot)org> |
---|---|
To: | Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> |
Cc: | Charles Martin <martin_pgsql(at)yahoo(dot)com>, pgsql-general(at)postgreSQL(dot)org |
Subject: | Re: [GENERAL] please help me recover from duplicate key in unique index |
Date: | 2000-01-05 00:21:28 |
Message-ID: | 20000104192127.O4188@reptiles.org |
Lists: | pgsql-general |
On Tue, Jan 04, 2000 at 06:47:39PM -0500, Bruce Momjian wrote:
> Try:
>
> SELECT oid, *
> FROM table
> GROUP BY oid
> HAVING count(*) > 1
>
> that should show your duplicates, and you can remove them using
> table.oid = _bad_number_.
ah, each row has a unique oid, so grouping by oid puts every row in its own
group and that query will never show a count above 1 -- it won't find the duplicates.
i would do something like:
SELECT keyfield FROM tablename GROUP BY keyfield HAVING COUNT(*) > 1;
this will produce the list of keyfield values that are not unique in tablename.
then, for each of those entries, you want to do a:
SELECT oid, keyfield, <other fields> FROM tablename WHERE keyfield = <value>;
then you can select which duplicate you want to nuke, and do:
DELETE FROM tablename WHERE OID = 999999;
(i've found that creating a non-unique index on oid, for performance's sake, will
improve the performance of duplicate nukes on really big tables)
mind you, the above process on a large table is gonna be slow anyways.
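the three steps above can be sketched end-to-end as a runnable script -- this is
just an illustration using python's sqlite3 so it can run anywhere, with sqlite's
implicit rowid standing in for postgres's oid, and the tablename/keyfield names
are the same placeholders used in the mail, not a real schema:

```python
# sketch of the dedup workflow: find non-unique keys, inspect each
# group of duplicates by rowid, delete all but the lowest rowid.
# sqlite's rowid plays the role of postgres's oid here.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tablename (keyfield TEXT, payload TEXT)")
cur.executemany(
    "INSERT INTO tablename VALUES (?, ?)",
    [("a", "first"), ("b", "only"), ("a", "second copy"), ("c", "only")],
)

# step 1: list the keyfield values that appear more than once
dupes = cur.execute(
    "SELECT keyfield FROM tablename GROUP BY keyfield HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [('a',)]

# step 2: for each duplicated key, fetch the candidate rows with their rowids
for (key,) in dupes:
    rows = cur.execute(
        "SELECT rowid, keyfield, payload FROM tablename "
        "WHERE keyfield = ? ORDER BY rowid",
        (key,),
    ).fetchall()
    # step 3: keep the first (lowest rowid), nuke the rest one by one
    for rowid, _, _ in rows[1:]:
        cur.execute("DELETE FROM tablename WHERE rowid = ?", (rowid,))

remaining = cur.execute("SELECT COUNT(*) FROM tablename").fetchone()[0]
print(remaining)  # 3
```

in postgres itself you'd run the same SELECTs and DELETEs by hand (or in a
script over a database connection), eyeballing each group before deciding which
oid to delete rather than automatically keeping the lowest one.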
--
[ Jim Mercer jim(at)reptiles(dot)org +1 416 506-0654 ]
[ Reptilian Research -- Longer Life through Colder Blood ]
[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]