Re: WIP: generalized index constraints

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: WIP: generalized index constraints
Date: 2009-07-06 11:28:58
Message-ID: 407d949e0907060428l47d4e4a3r805159e2443ff178@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jul 6, 2009 at 11:56 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> How will you cope with a large COPY? Surely there can be more than one
> concurrent insert from any backend?

He only needs to track inserts for the window during which they're
actively being inserted into the index. Once they're in the index
he'll find them with the ordinary index scan. In other words this is
all a proxy for the way btree locks index pages while it checks for a
unique-key violation.
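
To make that concrete, here's a rough sketch of the kind of
shared-memory array of in-flight insertions I have in mind. All of
the names, types, and the fixed-size layout are illustrative only,
not taken from the patch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative stand-in for a heap tuple id (block, offset). */
    typedef struct { uint32_t blkno; uint16_t offnum; } TidLite;

    typedef struct
    {
        bool    in_use;     /* slot holds an insert still in progress */
        TidLite tid;        /* heap TID of the tuple being inserted */
    } InFlightSlot;

    #define MAX_IN_FLIGHT 64
    static InFlightSlot in_flight[MAX_IN_FLIGHT]; /* imagined in shared memory */

    /*
     * Check our candidate tuple against everything currently mid-insert.
     * Tuples whose index entries are already in place are found by the
     * ordinary index scan instead, so only this window needs covering.
     */
    static bool
    conflicts_with_in_flight(TidLite mytid,
                             bool (*values_conflict)(TidLite, TidLite))
    {
        for (int i = 0; i < MAX_IN_FLIGHT; i++)
        {
            if (!in_flight[i].in_use)
                continue;
            if (in_flight[i].tid.blkno == mytid.blkno &&
                in_flight[i].tid.offnum == mytid.offnum)
                continue;   /* skip our own entry */
            if (values_conflict(mytid, in_flight[i].tid))
                return true;
        }
        return false;
    }

The values_conflict callback stands in for whatever operator the
constraint uses to compare the two tuples' indexed values.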

I'm a bit concerned about the use of tids. You might have to look at
a lot of heap pages to check for conflicts, though I suppose they're
almost certainly all in shared buffers. Also, it sounds like you're
anticipating the possibility of dead entries in the array; if so, you
also need to store the xmin to protect against a tuple that has been
vacuumed and had its line pointer reused since. But I don't see the
necessity for that anyway, since you can just clean up the entry on
abort.
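
For illustration, if stale entries were allowed to linger, each slot
would also have to carry the inserter's xid, and a mismatch with the
tuple's actual xmin would mark the entry as stale (again, the names
here are mine, not the patch's):

    /* Slot extended with the inserting transaction's xid so that a
     * vacuumed-and-reused line pointer can be detected: if the stored
     * xmin no longer matches the tuple's actual xmin, the TID now
     * points at some unrelated tuple and the entry must be ignored. */
    typedef struct
    {
        bool     in_use;
        TidLite  tid;
        uint32_t xmin;      /* xid of the inserting transaction */
    } InFlightSlotX;

    static bool
    slot_is_stale(const InFlightSlotX *slot, uint32_t tuple_xmin)
    {
        return slot->xmin != tuple_xmin;
    }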

--
greg
http://mit.edu/~gsstark/resume.pdf
