Re: Commit every N rows in PL/pgsql

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Len Walter <len(dot)walter(at)gmail(dot)com>
Cc: Alban Hertroys <dalroi(at)solfertje(dot)student(dot)utwente(dot)nl>, pgsql-general(at)postgresql(dot)org
Subject: Re: Commit every N rows in PL/pgsql
Date: 2010-05-27 00:51:39
Message-ID: 5491.1274921499@sss.pgh.pa.us
Lists: pgsql-general

Len Walter <len(dot)walter(at)gmail(dot)com> writes:
>>> I need to populate a new column in a Postgres 8.3 table. The SQL would be
>>> something like "update t set col_c = col_a + col_b". Unfortunately, this
>>> table has 110 million rows, so running that query runs out of memory.
>>
>> That's unusual; what error do you get?

> Here it is:
> ...
> AfterTriggerEvents: 2642403328 total in 327 blocks; 10176 free (319
> chunks); 2642393152 used

And there's the problem. Evidently you have an AFTER trigger on the
table, and the queued events for that trigger are overrunning memory.
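
(To check which triggers are attached, assuming the table is named
"t" as in the quoted query, you can run

    SELECT tgname FROM pg_trigger WHERE tgrelid = 't'::regclass;

or just look at the trigger list in psql's \d output for the table.)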

Figuring out what to do about that would depend on what the trigger's
for, but usually the best bet is to drop the trigger and do whatever
it's doing in some bulk fashion instead of retail.
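
In rough outline, something like this (the table name comes from the
quoted query, but the trigger name and the bulk step are placeholders,
since we don't know what the trigger does; disabling the trigger works
as well as dropping it for this purpose):

    BEGIN;
    -- "t_after_trg" is a placeholder; substitute the real trigger name
    ALTER TABLE t DISABLE TRIGGER t_after_trg;
    UPDATE t SET col_c = col_a + col_b;
    -- ...then do whatever the trigger would have done, as one bulk
    -- statement rather than once per modified row...
    ALTER TABLE t ENABLE TRIGGER t_after_trg;
    COMMIT;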

(We have a TODO item to allow the trigger queue to spill to disk instead
of always being kept in memory; but it's fairly low on the priority
list, because quite frankly once the event list gets this large, you'd
not want to wait around for all the triggers to execute anyway ...)

regards, tom lane
