Re: Huge amount of memory consumed during transaction

From: henk de wit <henk53602(at)hotmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Huge amount of memory consumed during transaction
Date: 2007-10-11 23:21:35
Message-ID: BAY124-W594E8378868281E1035467F5A70@phx.gbl
Lists: pgsql-performance

> How is the memory consumed? How are you measuring it? I assume you
> mean the postgres process that is running the query uses the memory.
> If so, which tool(s) are you using and what's the output that shows it
> being used?

It's periodically measured and recorded by a script; the relevant parts are:

GET_VSZ="ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \$5}'
| sort -n | tail -n1";
GET_RSS="ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \$6}'
| sort -n | tail -n1";

From this I draw graphs using Cacti. I just checked a recent transaction; during this transaction, which involved about 900,000 rows, VSZ peaked at 2.36GB, with RSS peaking at 2.27GB. This memory usage is on top of shared_buffers, which had been set back to 320MB. Shortly after the transaction finished, memory usage indeed dropped back to a near-steady 320MB.
(btw, I mistyped the number of rows involved in the original post; the 2GB memory usage is for 900,000 rows, not 300,000.)
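For reference, the awk/sort pipeline in the script above can be exercised against a canned `ps aux` snippet; the process lines and numbers below are invented for illustration:

```shell
# Two fake `ps aux` lines (columns: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND)
sample='postgres  4211  0.1  4.0  2474000 2381000 ?  Ss  10:00  0:01 postgres: writer process
postgres  4298 12.0 28.0  2476000 2383000 ?  Rs  10:02  0:09 postgres: henk db SELECT'

# Same idea as GET_VSZ: column 5 is VSZ in KB; sort numerically, keep the largest.
max_vsz=$(printf '%s\n' "$sample" | awk '{print $5}' | sort -n | tail -n1)
echo "$max_vsz"   # -> 2476000
```

So the script records, per sample interval, the single largest VSZ (and RSS) among the matching backends, which is what ends up in the Cacti graphs.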

After some more digging, I found out that the immense increase in memory usage started fairly recently (but before I increased my shared_buffers; that change merely triggered the out-of-memory exception).

E.g. for a transaction involving 300,000 rows a few weeks back, memory usage stayed at a rather moderate 546MB/408MB (VSZ/RSS, including the 320MB of shared_buffers), and for some 800,000 rows it peaked at 'only' 631MB/598MB. When I graph "rows involved" against "memory usage" there is a direct relation; apart from a few exceptions, it's clear that the more rows are involved, the more memory is consumed.
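Taking the figures above at face value, a back-of-envelope estimate of the per-row overhead (peak RSS minus shared_buffers, divided by rows touched) can be sketched like this; the inputs are just the numbers quoted in this thread:

```python
def per_row_kb(peak_rss_gb, shared_buffers_mb, rows):
    """Rough per-row memory overhead in KB: (peak RSS - shared_buffers) / rows."""
    return (peak_rss_gb * 1024 * 1024 - shared_buffers_mb * 1024) / rows

# Recent transaction: 2.27GB RSS peak, 320MB shared_buffers, 900,000 rows
print(round(per_row_kb(2.27, 320, 900_000), 2))  # -> 2.28 (KB per row)
```

A couple of KB per row would indeed put a multi-million-row transaction well past an 8GB box, which matches the observed behaviour.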

I'll have to check what exactly was changed in the PG installation recently, but even with the more moderate memory consumption it's clear that PG eventually runs out of memory as more and more rows are involved.

> I believe that large transactions with foreign keys are known to cause
> this problem.

As far as I can see there are no, or nearly no, foreign keys involved in the transaction I'm having problems with.

> How much memory does this machine have?

It's in the original post: 8GB ;)

> If you've given all your memory to shared_buffers, there might not be
> any left.

I have of course not given all memory to shared_buffers. I tried to apply the rule of thumb of setting it to 1/4 of total memory, and to be a little conservative, even a little less than that. 1/4 of 8GB is 2GB, so I started with 1.5GB. All other queries and small transactions run fine (we're talking about thousands upon thousands of queries, hundreds of distinct ones). It's this one huge transaction that occupies so much memory.
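For concreteness, the 1/4-of-RAM rule of thumb described above would look something like this in postgresql.conf; the value is illustrative only (as mentioned, it has since been set back to 320MB):

```ini
# postgresql.conf -- illustrative setting for an 8GB machine
shared_buffers = 1536MB   # a conservative take on the 1/4-of-8GB rule of thumb
```

Note that shared_buffers only bounds the shared buffer cache; per-backend memory for sorts, hashes, and pending trigger events comes on top of it, which is where the per-transaction growth shows up.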

> Lastly, what does explain <your query here> say?

I can't easily test that right now, and it'll be a huge explain anyway (the query is almost 500 lines :X). I'll try to get one, though.

