
Re: C client memory usage grows

From: Douglas Trainor <trainor(at)uic(dot)edu>
To: "Patrick L(dot) Nolan" <pln(at)cosmic(dot)stanford(dot)edu>
Cc: pgsql-interfaces(at)postgresql(dot)org
Subject: Re: C client memory usage grows
Date: 2002-07-15 20:48:40
Lists: pgsql-interfaces
You created the memory leak.  Don't you want to have something
like PQclear(res) -- and what about checking for errors?


"PQclear Frees the storage associated with the PGresult. Every query result should be freed via PQclear when it is no longer needed.

    void PQclear(PQresult *res);

You can keep a PGresult object around for as long as you need it; it does not go away
when you issue a new query, nor even if you close the connection. To get rid of it,
you must call PQclear. Failure to do this will result in memory leaks in the frontend application."
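Putting that advice together, a minimal sketch of how the quoted loop could be restructured: every PGresult is freed with PQclear before the next FETCH, and each result status is checked. The connection string, the `exec_check` helper, and the table/cursor names are placeholders for illustration, not part of the original program.

```c
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Hypothetical helper: run a command, check its status, free the result. */
static void exec_check(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        exit(1);
    }
    PQclear(res);
}

int main(void)
{
    /* Connection parameters are placeholders. */
    PGconn *conn = PQconnectdb("dbname=mydb");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    exec_check(conn, "BEGIN WORK");
    exec_check(conn, "DECLARE mycur BINARY CURSOR FOR SELECT * FROM mytable");

    for (;;) {
        PGresult *res = PQexec(conn, "FETCH 8192 FROM mycur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int nrows = PQntuples(res);
        if (nrows == 0) {           /* cursor exhausted */
            PQclear(res);
            break;
        }
        for (int i = 0; i < nrows; i++) {
            /* Extract data from row i here. */
        }
        PQclear(res);   /* the fix: free each batch before fetching the next */
    }

    exec_check(conn, "CLOSE mycur");
    exec_check(conn, "COMMIT");
    PQfinish(conn);
    return 0;
}
```

With PQclear called once per FETCH, client memory stays bounded by one batch of rows regardless of table size.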

"Patrick L. Nolan" wrote:

> I'm writing a C program that uses libpq to read a big table.
> First I tried a dumb query like "SELECT * FROM mytable".
> It ran out of memory after fetching about 9 million rows.
> Tom Lane suggested that I should use a cursor to fetch
> data in more manageable chunks.  I have tried that, and it
> doesn't really seem to cure the problem.  My program's
> memory usage grows steadily, no matter how many rows
> I FETCH at a time.
> The relevant portion of my program looks sort of like
> this:
>   res = PQexec(conn, "BEGIN WORK");
>   res = PQexec(conn, "DECLARE mycur BINARY CURSOR FOR SELECT * FROM mytable");
>   while (1) {
>     res = PQexec(conn, "FETCH 8192 FROM mycur");
>     nrows = PQntuples(res);
>     if (nrows <= 0) break;
>     for (i = 0; i < nrows; i++) {
>       /* Extract data from row */
>     }
>   }
>   res = PQexec(conn, "COMMIT");
> I have experimented with other values of the number of rows in
> the FETCH command, and it doesn't seem to make much difference
> in speed or memory usage.  The size of the client grows from
> 4 MB to 72 MB over about a minute.  On a sufficiently large
> table it will continue to grow until it dies.
> I don't do any mallocs at all in my code, so it's libpq that
> uses all the memory.
> It acts as if each FETCH operation opens a whole new set of
> buffers, ignoring the ones that were used before.  I suppose
> you might need that for reverse fetching and such, but it
> works like a memory leak in my application.  Is there some way
> around this?
> Details: This is postgres 7.1 on Red Hat Linux 7.1.
> --
> *   Patrick L. Nolan                                          *
> *   W. W. Hansen Experimental Physics Laboratory (HEPL)       *
> *   Stanford University                                       *
