Re: Deduplication and transaction isolation level

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: François Beausoleil <francois(at)teksol(dot)info>
Cc: Forums postgresql <pgsql-general(at)postgresql(dot)org>
Subject: Re: Deduplication and transaction isolation level
Date: 2013-09-25 13:04:18
Message-ID: CAHyXU0zfsrnvXavLvgTUszm65AHwYwmaokEns16DWPfBiqizwg@mail.gmail.com
Lists: pgsql-general

On Tue, Sep 24, 2013 at 10:19 PM, François Beausoleil
<francois(at)teksol(dot)info> wrote:
> Hi all!
>
> I import many, many rows of data into a table, from three or more computers, 4 times per hour. I have a primary key, and the query I use to import the data is supposed to dedup before inserting, but I still get primary key violations.
>
> The import process is:
>
> * Load CSV data into temp table
> * INSERT INTO dest SELECT DISTINCT ON (pkey) * FROM temp WHERE NOT EXISTS (SELECT 1 FROM dest WHERE dest.pkey = temp.pkey)
>
> I assumed (erroneously) that this would guarantee no duplicate data could make it into the database. The primary key violations are proving me wrong.

Right. Transactions A and B are interleaved: they both run the same
check against the same id at the same time. Both checks pass because
neither transaction has committed yet, so neither sees the other's
pending insert. This problem is not solvable by adjusting the
isolation level.
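
Schematically (A and B are two concurrent import sessions; the "..."
stands for your insert query above):

A: BEGIN;
A: INSERT INTO dest ... ;  -- NOT EXISTS check passes, row inserted
B: BEGIN;
B: INSERT INTO dest ... ;  -- check also passes: A's row is uncommitted
                           -- and invisible to B; the insert then blocks
                           -- on the conflicting unique index entry
A: COMMIT;
                           -- B's INSERT now fails:
                           -- ERROR: duplicate key value violates
                           -- unique constraint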

Typical solutions might be to:
A. Lock the table while inserting (sketched after this list)
B. Retry the transaction after an error
C. Import the records into a staging table, then do the deduplication
check when moving them from the staging table into the destination
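
A sketch of (A), assuming the table names from your example (EXCLUSIVE
mode blocks concurrent writers but still allows reads):

BEGIN;
LOCK TABLE dest IN EXCLUSIVE MODE;
-- no other session can insert until we commit,
-- so the NOT EXISTS check cannot race
INSERT INTO dest
SELECT DISTINCT ON (pkey) *
FROM temp
WHERE NOT EXISTS (SELECT 1 FROM dest WHERE dest.pkey = temp.pkey);
COMMIT;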

merlin
