Re: Postgres Connections Requiring Large Amounts of Memory

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dawn Hollingsworth <dmh(at)airdefense(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org, Ben Scherrey <scherrey(at)proteus-tech(dot)com>
Subject: Re: Postgres Connections Requiring Large Amounts of Memory
Date: 2003-06-17 19:38:02
Message-ID: 14875.1055878682@sss.pgh.pa.us
Lists: pgsql-performance

Dawn Hollingsworth <dmh(at)airdefense(dot)net> writes:
> I attached gdb to a connection using just over 400MB (according to top)
> and ran "MemoryContextStats(TopMemoryContext)"
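
[The quoted diagnostic amounts to attaching a debugger to the live backend
and calling PostgreSQL's internal MemoryContextStats() function. A sketch
of such a session follows; the PID is hypothetical, and the statistics are
printed to the backend's stderr, which usually ends up in the server log:]

```text
$ gdb -p 12345          # attach to the backend PID reported by top
(gdb) call MemoryContextStats(TopMemoryContext)
(gdb) detach
(gdb) quit
```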

Hmm.  This only seems to account for about 5 meg of space, which means
either that lots of space is being used and released, or that the leak
is coming from direct malloc calls rather than palloc.  I doubt the
latter though; we don't use too many direct malloc calls.

On the former theory, could it be something like updating a large
number of tuples in one transaction in a table with foreign keys?
The pending-triggers list could have swelled up and then gone away
again.
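
[A minimal illustration of that pattern, on a hypothetical schema: each row
modified inside the transaction can queue a pending after-trigger event for
the foreign-key check, and the whole list is held in backend memory until
commit, after which it is released:]

```sql
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child  (id integer PRIMARY KEY,
                     parent_id integer REFERENCES parent(id));

BEGIN;
-- A mass update can balloon the pending-triggers list,
-- one queued FK-check event per modified row, until commit.
UPDATE child SET parent_id = parent_id;
COMMIT;
```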

The large number of SPI Plan contexts seems a tad fishy, and even more
so the fact that some of them are rather large.  They still only account
for a couple of meg, so they aren't directly the problem, but perhaps
they are related to the problem.  I presume these came from either
foreign-key triggers or something you've written in PL functions.  Can
you tell us more about what you use in that line?

			regards, tom lane
