Re: [HACKERS] Re: ALTER TABLE DROP COLUMN

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jan Wieck <wieck(at)debis(dot)com>
Cc: Don Baccus <dhogaza(at)pacifier(dot)com>, Hannu Krosing <hannu(at)tm(dot)ee>, PostgreSQL Development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] Re: ALTER TABLE DROP COLUMN
Date: 2000-02-29 05:40:19
Message-ID: 27878.951802819@sss.pgh.pa.us
Lists: pgsql-hackers

wieck(at)debis(dot)com (Jan Wieck) writes:
> So it's definitely some kind of "accept duplicates for now
> but check for final dups on this key later".

> But that requires another index scan later. We can remember
> the relation and index OIDs (to get back the relation and
> index in question) plus the CTID of the inserted/updated
> tuple to get back the key values (remembering the key
> itself could blow up memory). Then do an index scan under
> current (statement end/XACT commit) visibility to check
> whether more than one tuple satisfies HeapTupleSatisfies().

> It'll be expensive compared to the current UNIQUE
> implementation, which does it on the fly during btree
> insert (doesn't it?). But it's the only way I see.

How about:

1. During INSERT into a unique index, notice whether any other index
entries have the same key. If so, add that key value to a queue of
possibly-duplicate keys to check later.

2. At commit, or whenever consistency should be checked, scan the
queue. For each entry, use the index to look up all the matching
tuples, and check that only one will be valid if the transaction
commits.

This avoids a full index scan in the normal case, although it could
be pretty slow in the update-every-tuple scenario...
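The two-step scheme above can be sketched as a toy model (this is an illustrative sketch, not PostgreSQL source; the class and method names are invented for the example). Inserts into the "index" always succeed, but any key that collides goes into a possibly-duplicate queue, and only those keys are re-scanned at commit to verify that at most one entry is still live:

```python
# Toy model of deferred uniqueness checking: accept duplicates during
# insert, remember the colliding keys, and re-check only those keys at
# commit time. All names here are hypothetical, for illustration only.
from collections import defaultdict

class DeferredUniqueIndex:
    def __init__(self):
        # key -> list of live tuple ids (stand-in for index entries)
        self.entries = defaultdict(list)
        # possibly-duplicate keys queued for the commit-time check
        self.dup_queue = set()

    def insert(self, key, tuple_id):
        # Step 1: if other entries already have this key, queue it for
        # a later check instead of raising an error immediately.
        if self.entries[key]:
            self.dup_queue.add(key)
        self.entries[key].append(tuple_id)

    def delete(self, key, tuple_id):
        # e.g. an UPDATE invalidating the old tuple version
        self.entries[key].remove(tuple_id)

    def check_at_commit(self):
        # Step 2: scan only the queued keys; for each, verify that no
        # more than one entry would remain valid after commit.
        for key in self.dup_queue:
            if len(self.entries[key]) > 1:
                raise ValueError(f"duplicate key value {key!r}")
        self.dup_queue.clear()

idx = DeferredUniqueIndex()
idx.insert("a", 1)
idx.insert("a", 2)     # transient duplicate: queued, not rejected
idx.delete("a", 1)     # old tuple version goes away before commit
idx.check_at_commit()  # passes: only one live entry for "a" remains
```

The payoff matches the closing remark: keys that never collide are skipped entirely at commit, so the common case costs nothing extra, while an update-every-tuple workload queues every key and pays for a full set of re-scans.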

regards, tom lane
