I had read that before, so you are right. The amount of memory being used
could run much higher than I wrote.
In my case, I know that not all the connections are busy all the time
(this isn't a web application with thousands of users connecting to a pool),
so not all active connections will be doing sorts all the time. As far as I
can tell, sort memory is allocated as needed, so my estimate of 400MB should
still be reasonable, and I have plenty of unaccounted-for memory outside the
effective cache, so it shouldn't be a problem.
Presumably, that memory isn't needed after the result set is built.
If I understand correctly, there isn't any way to limit the total amount of
memory allocated for sorting, which means that you can't specify generous
sort_mem values to help out when there is spare capacity (few connections),
because in the worst case it could cause swapping when the system is busy.
In the not-so-bad case, the effective cache size estimate will just be
completely wrong.
Maybe a global sort memory limit would be a good idea, I don't know.
> Iain wrote:
>> sort_mem 4096 (=400MB RAM for 100 connections)
> If I understand correctly, memory usage related to `sort_mem'
> is per connection *and* per sort.
> If every client runs a query with 3 sorts in its plan, you are
> going to need (in theory) 100 connections * 4Mb * 3 sorts,
> which is 1.2 Gb.
> Please correct me if I'm wrong...
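The arithmetic in the quoted reply can be sketched as follows (a
back-of-the-envelope calculation only; the connection count, sort_mem value,
and sorts-per-query figure are the assumptions from this thread, not measured
values):

```python
# Rough PostgreSQL sort memory estimate, using the figures from this thread.
SORT_MEM_KB = 4096       # sort_mem setting: memory budget per sort operation
CONNECTIONS = 100        # assumed number of connections
SORTS_PER_QUERY = 3      # assumed worst case: sorts in one query plan

# Optimistic estimate: each connection runs at most one sort at a time.
optimistic_mb = CONNECTIONS * SORT_MEM_KB / 1024

# Worst case: every connection runs a plan with several concurrent sorts.
worst_case_mb = CONNECTIONS * SORTS_PER_QUERY * SORT_MEM_KB / 1024

print(optimistic_mb)   # 400.0  (the 400MB estimate)
print(worst_case_mb)   # 1200.0 (the 1.2GB worst case)
```

Since sort memory is only allocated while a sort is actually running, real
usage should fall somewhere between these two figures.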