On 10/11/07, henk de wit <henk53602(at)hotmail(dot)com> wrote:
> I'm running into a problem with PostgreSQL 8.2.4 (running on 32 bit Debian
> Etch/2x dual core C2D/8GB mem). The thing is that I have a huge transaction
> that does 2 things: 1) delete about 300.000 rows from a table with about 15
> million rows and 2) do some (heavy) calculations and re-insert a little more
> than 300.000 new rows.
> My problem is that this consumes huge amounts of memory. The transaction
> runs for about 20 minutes and during that transaction memory usage peaks to
> about 2GB. Over time, the more rows that are involved in this transaction,
> the higher the peak memory requirements.
How is the memory consumed? How are you measuring it? I assume you
mean the postgres process that is running the query uses the memory.
If so, which tool(s) are you using, and what's the output that shows it?
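One quick way to watch a single backend is plain ps on its PID (in 8.2 you can find the PID in pg_stat_activity.procpid). A minimal sketch — $$, the current shell's own PID, stands in for the backend PID so the example runs anywhere:

```shell
# Substitute the backend's PID (from pg_stat_activity.procpid in 8.2);
# $$ is just a stand-in so this example is runnable as-is.
PID=$$
# RSS = resident set size (KB), VSZ = virtual size (KB).
# Note that RSS also counts shared_buffers pages the backend has
# touched, so per-backend numbers can look inflated.
ps -o pid,rss,vsz,comm -p "$PID"
```

Sampling that in a loop during the transaction would show whether the growth is steady per-row or happens in bursts.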
I believe that large transactions with foreign keys are known to cause
problems like this: the pending foreign-key check events are queued in
the backend's memory for the life of the transaction.
> Lately we increased our shared_buffers to 1.5GB, and during this transaction
> we reached the process memory limit, causing an out of memory and a rollback
> of the transaction:
How much memory does this machine have? You do realize that
shared_buffers is not a generic PostgreSQL memory pool, but is used
explicitly to cache data pages from disk. If you need to sort or
materialize data, that is done with memory allocated from the backend's
heap (bounded per operation by work_mem).
If you've given all your memory to shared_buffers, there might not be
enough left over for that per-backend work.
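For what it's worth, on an 8GB box a more conventional split looks something like this (illustrative numbers only, not tuned advice for your workload):

```ini
# postgresql.conf -- illustrative values for an 8GB machine
shared_buffers = 1GB          # shared disk-page cache for all backends
work_mem = 32MB               # per sort/hash, per backend -- can multiply!
maintenance_work_mem = 256MB  # for VACUUM, CREATE INDEX, etc.
```

The point is that shared_buffers is a fixed shared allocation, while work_mem can be claimed several times over by a single complex query, so the two have to be budgeted together.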
How much swap have you got configured?
Lastly, what does explain <your query here> say?
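Something along these lines would show whether the planner is doing anything surprising (table and predicate here are hypothetical; substitute your actual statements):

```sql
-- Hypothetical names; use your real delete/insert statements.
EXPLAIN DELETE FROM big_table WHERE batch_date < '2007-09-01';

-- EXPLAIN ANALYZE actually executes the statement, so guard it
-- with a transaction you roll back:
BEGIN;
EXPLAIN ANALYZE DELETE FROM big_table WHERE batch_date < '2007-09-01';
ROLLBACK;
```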
pgsql-performance mailing list