Re: Reducing the memory footprint of large sets of pending triggers

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Simon Riggs <simon(at)2ndQuadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Reducing the memory footprint of large sets of pending triggers
Date: 2008-10-25 13:51:47
Message-ID: 878wscrde4.fsf@oxford.xeocode.com
Lists: pgsql-hackers

Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> Simon Riggs <simon(at)2ndQuadrant(dot)com> writes:
>> A much better objective would be to remove duplicate trigger calls, so
>> there isn't any build up of trigger data in the first place. That would
>> apply only to immutable functions. RI checks certainly fall into that
>> category.
>
> They're hardly "duplicates": each event is for a different tuple.
>
> For RI checks, once you get past a certain percentage of the table it'd
> be better to throw away all the per-tuple events and do a full-table
> verification a la RI_Initial_Check(). I've got no idea about a sane
> way to make that happen, though.

One idea I had was to accumulate the data in something like a tuplestore and
then perform the RI check as a join between a Materialize node and the target
table. Then we could use any join type, whether a hash join, nested loop, or
merge join, depending on how many rows there are on each side and how many
distinct values they contain.
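A rough sketch of the idea (in Python, with hypothetical names, not the actual executor code): buffer the pending foreign-key values as they accumulate, then validate them in one set-based pass, here hash-join-style, instead of firing one index probe per queued trigger event:

```python
def check_ri_batched(pending_fk_keys, pk_table_keys):
    """Validate queued foreign-key values in a single set-based pass.

    pending_fk_keys: values accumulated from modified tuples
                     (the tuplestore analog).
    pk_table_keys:   keys present in the referenced table.

    Build a hash table on the referenced-key side, then probe it with
    each pending value (the hash-join analog); NULL FK values are
    skipped, matching SQL's MATCH SIMPLE semantics.  Returns the list
    of violating keys.
    """
    pk_hash = set(pk_table_keys)          # build side
    return [k for k in pending_fk_keys    # probe side
            if k is not None and k not in pk_hash]

# Per-event checking would probe the PK index once per queued tuple;
# batching lets a planner-chosen join strategy (hash, merge, nestloop)
# do the work based on the sizes of the two sides.
violations = check_ri_batched([1, 2, 5, None], [1, 2, 3, 4])
```

Here `violations` comes back as `[5]`: the only non-NULL pending value with no match on the referenced side.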

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
