Re: Memory exhaustion during bulk insert

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Xin Wang <andywx(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Memory exhaustion during bulk insert
Date: 2009-04-15 14:04:34
Message-ID: 9306.1239804274@sss.pgh.pa.us
Lists: pgsql-hackers

Xin Wang <andywx(at)gmail(dot)com> writes:
> I searched the mailinglist archive and noticed that a patch to improve
> bulk insert performance is committed in Nov 2008. The log message said
> "(the patch) keeps the current target buffer pinned and make it work
> in a small ring of buffers to avoid having bulk inserts trash the whole
> buffer arena."
> However, I do not know much about the code below the heapam layer. Can that
> patch solve my problem (the version I use is 8.3.5)?

No. You have a memory leak to fix. I suspect you need to be paying
attention to evaluating the successive tuple values in a short-term
memory context that you can reset on each cycle. There are other
possibilities though --- looking at the memory map produced on an
out-of-memory error would help narrow down the problem. (If the thing
"hangs up" without producing such an error, that's the *first* problem
to solve. It could be that it's not so much hanging up as going into
swap hell; in which case I'd suggest running the postmaster under a more
restrictive ulimit, so that it fails before starting to swap.)
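The short-term-context approach suggested above can be sketched against PostgreSQL's memory-context API (the function, its row source, and the context name below are illustrative, not from the original code; the three sizing constants are those of the 8.3-era API). This is a sketch requiring the backend headers, not a standalone program:

```c
#include "postgres.h"
#include "utils/memutils.h"

/*
 * Sketch only: a hypothetical backend loop over source rows.
 * The per-tuple context is reset on each cycle, so anything
 * palloc'd while evaluating one row's values is freed before
 * the next row -- preventing the per-row leak from accumulating.
 */
void
bulk_insert_loop(void)
{
    MemoryContext per_tuple_ctx;
    MemoryContext oldctx;

    per_tuple_ctx = AllocSetContextCreate(CurrentMemoryContext,
                                          "per-tuple scratch",
                                          ALLOCSET_DEFAULT_MINSIZE,
                                          ALLOCSET_DEFAULT_INITSIZE,
                                          ALLOCSET_DEFAULT_MAXSIZE);

    while (have_more_rows())    /* hypothetical row source */
    {
        /* free everything left over from the previous cycle */
        MemoryContextReset(per_tuple_ctx);

        oldctx = MemoryContextSwitchTo(per_tuple_ctx);

        /* evaluate this tuple's values and insert it here;
         * transient palloc() memory lands in per_tuple_ctx */

        MemoryContextSwitchTo(oldctx);
    }

    MemoryContextDelete(per_tuple_ctx);
}
```

The key point is that nothing allocated inside the loop needs to be freed individually; the reset at the top of each cycle reclaims it wholesale.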
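For the swap-hell case, the ulimit suggestion amounts to capping the server's address space in the shell that starts it, so a runaway allocation fails with an out-of-memory error (and prints the memory map) instead of driving the machine into swap. A minimal sketch; the 1 GB figure is arbitrary and should be set below physical RAM:

```shell
# Cap per-process virtual memory (value in kB for bash's ulimit -v)
ulimit -v 1048576            # ~1 GB address-space limit
pg_ctl -D "$PGDATA" start    # postmaster and its backends inherit it
```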

regards, tom lane
