I've encountered a memory leak when I use a PL/pgSQL function that
creates and drops a temporary table. I couldn't find any similar report in
the mailing list archives. I'd like to ask whether this is a PostgreSQL bug.
Maybe I should post this to pgsql-bugs or pgsql-general, but the discussion
is likely to involve PostgreSQL internals, so let me post it here.
The steps to reproduce the problem are as follows. Please find attached two
files to use for this.
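Since the attachments are not reproduced inline, here is a hypothetical sketch of what myfunc.sql might contain, based on the description in this mail (the function signature, column definition, and return type are assumptions; "mytable", "cnt", and the COUNT(*) query are taken from the text):

```sql
-- Hypothetical reconstruction of myfunc.sql; the real attachment may differ.
CREATE OR REPLACE FUNCTION myfunc() RETURNS integer AS $$
DECLARE
    cnt integer;
BEGIN
    CREATE TEMPORARY TABLE mytable (id integer);
    SELECT COUNT(*) INTO cnt FROM mytable;
    DROP TABLE mytable;
    RETURN cnt;
END;
$$ LANGUAGE plpgsql;
```

The ecpg program (myfunc.pgc) would then presumably call SELECT myfunc() in a loop, one transaction per call.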
$ psql -d postgres -f myfunc.sql
$ ecpg myfunc.pgc
$ cc -I<pgsql_inst_dir>/include myfunc.c -o myfunc -L<pgsql_inst_dir>/lib -lecpg
As the program myfunc runs, the process's VSZ and RSS keep growing; the
growth does not level off even after 50,000 transactions.
The cause of the memory increase appears to be CacheMemoryContext. When I
attached to postgres with gdb and ran "call
MemoryContextStats(TopMemoryContext)" several times, the size of
CacheMemoryContext kept increasing.
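An inspection session of this kind looks roughly like the following (the pid is that of the backend process serving the connection; note that MemoryContextStats() writes its report to the server's stderr, not to the gdb console):

```
$ gdb -p <backend_pid>
(gdb) call MemoryContextStats(TopMemoryContext)
(gdb) detach
```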
By the way, when I replace "SELECT COUNT(*) INTO cnt FROM mytable" in the
PL/pgSQL function with "INSERT INTO mytable VALUES(1)", the memory stops
increasing. So, the memory leak seems to occur when SELECT is used.
I found a workaround -- adding "IF NOT EXISTS" to the CREATE TEMPORARY
TABLE statement prevents the memory increase. But why? What is wrong with
my program? I'd like to know:
Q1: Is this a bug in PostgreSQL?
Q2: If yes, is a fix planned for an upcoming minor release?
Q3: If this is not a bug but expected behavior, is it documented anywhere?
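For concreteness, the workaround mentioned above turns the CREATE statement into the following (the column definition here is a placeholder, matching nothing in particular):

```sql
CREATE TEMPORARY TABLE IF NOT EXISTS mytable (id integer);
```

Why this one-word change avoids the CacheMemoryContext growth is exactly what the questions above are asking about.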
Attachment: application/octet-stream (795 bytes)
Attachment: application/octet-stream (233 bytes)