Re: Reducing the memory footprint of large sets of pending triggers

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Reducing the memory footprint of large sets of pending triggers
Date: 2008-10-25 13:36:17
Message-ID: 1224941777.15085.86.camel@ebony.2ndQuadrant
Lists: pgsql-hackers


On Sat, 2008-10-25 at 08:48 -0400, Tom Lane wrote:
> Simon Riggs <simon(at)2ndQuadrant(dot)com> writes:
> > A much better objective would be to remove duplicate trigger calls, so
> > there isn't any build up of trigger data in the first place. That would
> > apply only to immutable functions. RI checks certainly fall into that
> > category.
>
> They're hardly "duplicates": each event is for a different tuple.

That's what makes it hard: we may find the same trigger parameter values, but on different tuples.

> For RI checks, once you get past a certain percentage of the table it'd
> be better to throw away all the per-tuple events and do a full-table
> verification a la RI_Initial_Check(). I've got no idea about a sane
> way to make that happen, though.

Me neither, yet.
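For illustration only, the threshold idea above could be sketched as follows. This is a hypothetical sketch, not PostgreSQL's actual after-trigger queue: the names (TriggerQueue, per_tuple_check, full_table_check) and the 10% threshold are invented for the example. The point is that once pending per-tuple events exceed some fraction of the table, the queue can be discarded in favor of one set-based verification, the way RI_Initial_Check() validates a new FK constraint with a single query.

```python
# Hypothetical sketch of the "switch to a full-table check" idea discussed
# above. All names and the threshold value are illustrative, not
# PostgreSQL internals.

class TriggerQueue:
    def __init__(self, table_size, threshold=0.1):
        self.events = []              # pending per-tuple trigger events
        self.table_size = table_size
        self.threshold = threshold    # fraction of table triggering switch
        self.use_full_check = False

    def add_event(self, tuple_id):
        if self.use_full_check:
            return                    # per-tuple events are now redundant
        self.events.append(tuple_id)
        # Past a certain fraction of the table, drop the per-tuple queue:
        # one set-based verification beats N individual checks, and the
        # queue no longer grows with the number of modified tuples.
        if len(self.events) > self.threshold * self.table_size:
            self.events.clear()
            self.use_full_check = True

    def fire(self, per_tuple_check, full_table_check):
        if self.use_full_check:
            full_table_check()        # a la RI_Initial_Check(): one big query
        else:
            for t in self.events:
                per_tuple_check(t)

q = TriggerQueue(table_size=1000)
for i in range(200):                  # 200 > 10% of 1000, so it switches over
    q.add_event(i)
print(q.use_full_check)               # -> True; per-tuple queue was discarded
print(len(q.events))                  # -> 0
```

The hard part, as the thread notes, is not this bookkeeping but deciding when the switch is safe and cheaper, since the full-table check rescans rows that were never touched.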

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support
