Re: DELETE ERROR: tuple concurrently updated

From: Михаил Кечинов <kechinoff(at)gmail(dot)com>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: DELETE ERROR: tuple concurrently updated
Date: 2009-12-29 14:51:50
Message-ID: c8f0e27c0912290651o61f7a40ci3e73163bbc4755a4@mail.gmail.com
Lists: pgsql-general

OK, here is what I did:

1. Drop primary key CASCADE (foreign keys too).
2. Reindex table.
3. Delete duplicate rows.
4. Create primary key.
5. Create foreign keys.
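The steps above can be sketched in SQL. The table, constraint, and column names (`document`, `pkey_document`, `numdoc`, `ref_search_document_vid`) are taken from this thread; the foreign-key definition in step 5 is an assumption based on the NOTICE quoted below and would need adjusting to the real schema. The ctid-based delete in step 3 is one common way to drop duplicates while keeping a single row per key, not necessarily the exact statement used here.

```sql
-- 1. Drop the primary key; CASCADE also drops the foreign keys
--    that depend on its index.
ALTER TABLE document DROP CONSTRAINT pkey_document CASCADE;

-- 2. Rebuild the table's indexes (including its TOAST index).
REINDEX TABLE document;

-- 3. Delete duplicate rows, keeping one row per numdoc.
--    ctid identifies a physical row version; for each duplicated
--    numdoc this removes every row except the lowest-ctid one.
DELETE FROM document a
USING document b
WHERE a.numdoc = b.numdoc
  AND a.ctid > b.ctid;

-- 4. Recreate the primary key.
ALTER TABLE document
  ADD CONSTRAINT pkey_document PRIMARY KEY (numdoc);

-- 5. Recreate the foreign keys that CASCADE dropped
--    (referencing column assumed; adjust to the actual schema).
ALTER TABLE ref_search_document_vid
  ADD CONSTRAINT fk_search_document_vid_numdoc
  FOREIGN KEY (numdoc) REFERENCES document (numdoc);
```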

Thanks for the help.

On 29 December 2009 at 15:24, Михаил Кечинов <kechinoff(at)gmail(dot)com> wrote:

> Good. Now I get this error:
>
> docs=# REINDEX TABLE document;
> ERROR: could not create unique index "pkey_document"
> DETAIL: Table contains duplicated values.
>
> So I have a primary key, and I have some rows with the same "numdoc", even
> though "numdoc" is the primary key and must be unique.
>
> I can't drop the pkey because some tables reference it with foreign keys:
>
> docs=# alter table document drop constraint pkey_document;
> NOTICE: constraint fk_search_document_vid_numdoc on table
> ref_search_document_vid depends on index pkey_document
> ERROR: cannot drop constraint pkey_document on table document because
> other objects depend on it
> HINT: Use DROP ... CASCADE to drop the dependent objects too.
>
> So I can only drop the pkey together with the other fkeys, which is bad. Is
> there any way to clean up the pkey? When I try to delete a duplicate
> document, I get that error: tuple concurrently updated
>
> ...
>
> 2009/12/29 Greg Stark <gsstark(at)mit(dot)edu>
>
>> On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов <kechinoff(at)gmail(dot)com>
>> wrote:
>> > When I try to delete one row from database (for example):
>> > delete from document where numdoc = 901721617
>> > I have this error:
>> > ERROR: tuple concurrently updated
>> > SQL state: XX000
> > I know that no one else is deleting this row at the same time.
> > What does this error mean?
>>
>> So this error can only come from a normal SQL-level delete if there is
>> associated TOAST data which is being deleted as well. In which case
>> that TOAST data must be already marked deleted -- which shouldn't be
>> possible.
>>
>> It sounds like you have a database where some writes from earlier
>> transactions reached the database and others didn't. That can happen
>> if you take an inconsistent backup (without using pg_start_backup())
>> or if the drive you're using confirmed writes before crashing but
>> didn't actually write them.
>>
>> You might be able to get somewhat further by reindexing the TOAST
>> table for this table. To do so, run "REINDEX TABLE document". But note
>> that you could run into further errors from the missing TOAST data.
>>
>> --
>> greg
>>
>
>
>
> --
> Михаил Кечинов
> http://www.mkechinov.ru
>

--
Михаил Кечинов
http://www.mkechinov.ru
