Re: Controlling memory of session

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Richard Huxton <dev(at)archonet(dot)com>
Cc: James Im <im-james(at)hotmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Controlling memory of session
Date: 2007-01-17 15:06:00
Message-ID: 9406.1169046360@sss.pgh.pa.us
Lists: pgsql-general

Richard Huxton <dev(at)archonet(dot)com> writes:
> James Im wrote:
>> What am I missing to limit the memory taken by session to 1MB?

> You can't. In particular, work_mem is memory *per sort*, so actual
> usage can be several times that. If you're trying to get PG to run in
> 64MB or something like that, I think you're going to be disappointed.

Yeah. I think the working RAM per backend is approaching a megabyte
these days just for behind-the-scenes overhead (catalog caches and
so forth), before you expend even one byte on per-query structures
that work_mem would affect.
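
To illustrate the per-sort point, here's a minimal sketch (the
database and table names are made up, not from this thread): a single
query containing both a hash join and a sort can claim up to work_mem
for each step, so its peak can be a multiple of the setting.

# Sketch: work_mem caps each sort/hash step separately, not the
# whole query.  "mydb", t1, and t2 are hypothetical.
psql -d mydb -c "
SET work_mem = '1MB';
EXPLAIN ANALYZE
SELECT t1.id, t2.val
FROM t1 JOIN t2 ON t1.id = t2.id   -- hash join: up to work_mem
ORDER BY t2.val;                   -- sort: up to another work_mem
"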

Something else to consider: I dunno what tool you were using on Windows
to look at memory usage or how it counts shared memory, but on Unix a
lot of process-monitoring tools tend to count shared memory against
every process that has touched it, which leads to artificially bloated
numbers. The default PG shared memory block size these days is on the
order of ten megabytes, I think; if a backend has touched any
significant fraction of that since it started, that could dwarf the
backend's true private workspace size.
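
If you want to see this effect for yourself, here's one way on Linux
(a sketch for the Unix case above; the pid is made up): compare a
backend's reported RSS with the resident portion of its shared-memory
mapping.

# ps counts shared pages the process has touched in its RSS:
PID=12345                      # pid of one backend (hypothetical)
ps -o pid,rss,vsz -p "$PID"
# The SysV shared-memory segment shows up as a /SYSV* mapping in
# smaps; its Rss is the part being counted against every backend:
grep -A3 SYSV /proc/"$PID"/smaps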

If you're concerned about total memory footprint for a pile of backends,
usually the right answer is to put some connection-pooling software in
front of them, not try to hobble each backend to work in a tiny amount
of space.
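
For instance, a minimal pooler setup might look like this (PgBouncer
is just one such tool, my example rather than a recommendation from
this thread, and every value here is illustrative): many client
connections share a small, fixed pool of real backends, which caps
total backend memory far more effectively than squeezing work_mem.

# Sketch only: paths, names, and numbers are hypothetical.
cat > /tmp/pgbouncer.ini <<'EOF'
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
auth_file = /tmp/userlist.txt
; release the server connection between transactions
pool_mode = transaction
; hundreds of client connections ...
max_client_conn = 500
; ... funneled into ten real backends
default_pool_size = 10
EOF
# even with auth_type=trust the user must be listed:
echo '"appuser" ""' > /tmp/userlist.txt
# clients now connect to port 6432 instead of 5432
pgbouncer -d /tmp/pgbouncer.ini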

regards, tom lane
