Re: Excessive memory used for INSERT

From: Tom Lane <tgl@sss.pgh.pa.us>
To: Torsten Zuehlsdorff <mailinglists@toco-domains.de>
Cc: pgsql-performance@postgresql.org, Alessandro Ipe <Alessandro.Ipe@meteo.be>
Subject: Re: Excessive memory used for INSERT
Date: 2014-12-17 15:41:31
Message-ID: 7359.1418830891@sss.pgh.pa.us
Lists: pgsql-performance

Torsten Zuehlsdorff <mailinglists@toco-domains.de> writes:
> How many rows is "(SELECT * FROM upsert)" returning? Without knowing
> more i would guess, that the result-set is very big and that could be
> the reason for the memory usage.

Result sets are not ordinarily accumulated on the server side.

Alessandro didn't show the trigger definition, but my guess is that it's
an AFTER trigger, which means that a trigger event record is accumulated
in server memory for each inserted/updated row. If you're trying to
update a huge number of rows in one command (or one transaction, if it's
a DEFERRED trigger), you'll eventually run out of memory for the event
queue.
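
For illustration only (the actual trigger definition wasn't posted), an
AFTER ROW trigger of roughly this shape would behave that way; the table
and function names here are made up:

    -- Each row touched by the statement queues one event record in
    -- backend memory.  The queue is only drained when the trigger
    -- actually fires, at end of statement (or end of transaction,
    -- if the trigger is deferred).
    CREATE FUNCTION log_change() RETURNS trigger AS $$
    BEGIN
        -- ... do the logging here ...
        RETURN NULL;  -- return value is ignored for AFTER ROW triggers
    END
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER log_changes
        AFTER INSERT OR UPDATE ON my_table
        FOR EACH ROW EXECUTE PROCEDURE log_change();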

An easy workaround is to make it a BEFORE trigger instead. This isn't
really nice from a theoretical standpoint; but as long as you make sure
there are no other BEFORE triggers that might fire after it, it'll work
well enough.
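
Again with made-up names, the rewrite would look about like this:

    -- A BEFORE ROW trigger fires as each row is processed, so no
    -- event records pile up.  Note the function must now return NEW;
    -- returning NULL from a BEFORE ROW trigger silently skips the row.
    CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
    BEGIN
        -- ... same logging as before ...
        RETURN NEW;
    END
    $$ LANGUAGE plpgsql;

    DROP TRIGGER log_changes ON my_table;
    CREATE TRIGGER log_changes
        BEFORE INSERT OR UPDATE ON my_table
        FOR EACH ROW EXECUTE PROCEDURE log_change();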

Alternatively, you might want to reconsider the concept of updating
hundreds of millions of rows in a single operation ...
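
For instance (a sketch only, with invented names and an integer key),
you could chunk the work by key range so each statement queues only a
bounded number of trigger events:

    DO $$
    DECLARE
        lo bigint := 0;
        hi bigint;
    BEGIN
        SELECT max(id) INTO hi FROM my_table;
        WHILE lo < hi LOOP
            UPDATE my_table
               SET val = val + 1   -- stand-in for the real update
             WHERE id > lo AND id <= lo + 100000;
            lo := lo + 100000;
        END LOOP;
    END $$;

That bounds the per-statement queue of an ordinary AFTER trigger, since
the queued events are processed and freed at the end of each UPDATE.
It does not help with a DEFERRED trigger, whose events live until
commit; for that you'd have to issue the batches as separate
transactions from the client.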

regards, tom lane
