Re: Inserting heap tuples in bulk in COPY

From: Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Inserting heap tuples in bulk in COPY
Date: 2011-08-13 09:01:57
Message-ID: CAEZATCU3CgZm1o471Yfjbx975pg+0Aob_En9hTUERuboQYUKrQ@mail.gmail.com
Lists: pgsql-hackers

On 12 August 2011 23:19, Heikki Linnakangas
<heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
>>> Triggers complicate this. I believe it is only safe to group tuples
>>> together like this if the table has no triggers. A BEFORE ROW trigger
>>> might run a SELECT on the table being copied to, and check if some of
>>> the tuples we're about to insert exist. If we run BEFORE ROW triggers
>>> for a bunch of tuples first, and only then insert them, none of the
>>> trigger invocations will see the other rows as inserted yet.
>>> Similarly, if we run AFTER ROW triggers after inserting a bunch of
>>> tuples, the trigger for each of the insertions would see all the
>>> inserted rows. So at least for now, the patch simply falls back to
>>> inserting one row at a time if there are any triggers on the table.

I guess DEFAULT values could also suffer from a similar problem to
BEFORE ROW triggers though (in theory a DEFAULT could be based on a
function that SELECTs from the table being copied to).
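A hypothetical sketch of such a case (the names items, seq, and next_seq are illustrative, not from the thread): a column default calls a function that reads the table being copied into.

```sql
-- Illustrative only: a DEFAULT whose value depends on rows already
-- present in the target table.
CREATE TABLE items (
    id  serial PRIMARY KEY,
    seq integer
);

-- The function SELECTs from the very table the COPY targets.
CREATE FUNCTION next_seq() RETURNS integer
LANGUAGE sql AS $$
    SELECT coalesce(max(seq), 0) + 1 FROM items;
$$;

ALTER TABLE items ALTER COLUMN seq SET DEFAULT next_seq();
```

If COPY evaluated the defaults for a whole batch before inserting any of its rows, each row in the batch would see the same max(seq), since the earlier rows of the batch are not yet inserted, and the computed defaults would collide.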

Regards,
Dean
