
Re: optimization ideas for frequent, large(ish) updates

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Marinos J(dot) Yannikos" <mjy(at)geizhals(dot)at>
Cc: Jeff Trout <jeff(at)jefftrout(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: optimization ideas for frequent, large(ish) updates
Date: 2004-02-16 03:28:48
Message-ID: 17527.1076902128@sss.pgh.pa.us
Lists: pgsql-performance
"Marinos J. Yannikos" <mjy(at)geizhals(dot)at> writes:
> Jeff Trout wrote:
>> Remember that it is going to allocate 800MB per sort.

> I didn't know that it always allocates the full amount of memory
> specified in the configuration

It doesn't ... but it could use *up to* that much before starting to
spill to disk.  If you are certain your sorts won't use that much,
then you could set the limit lower, hm?
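For instance, the per-operation limit can be kept modest in the server
configuration; the value below is purely illustrative, not a recommendation
from this thread:

```
# postgresql.conf excerpt (illustrative value):
sort_mem = 8192    # KB allowed per sort/hash operation, per backend
```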

Also keep in mind that sort_mem controls hash table size as well as sort
size.  The hashtable code is not nearly as accurate as the sort code
about honoring the specified limit exactly.  So you really oughta figure
that you could need some multiple of sort_mem per active backend.
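As a back-of-envelope check of why that matters, here is the worst-case
arithmetic; the backend count and per-backend multiplier are hypothetical
figures chosen for illustration, not numbers from this thread:

```python
# Worst-case workspace memory if every backend runs concurrent
# sort/hash operations at the configured limit. All figures are
# hypothetical, for illustration only.
sort_mem_mb = 800       # per-operation limit discussed above
active_backends = 100   # assumed number of concurrent backends
ops_per_backend = 2     # assumed sorts/hashes active at once per backend

worst_case_mb = sort_mem_mb * active_backends * ops_per_backend
print(worst_case_mb)    # 160000 MB, i.e. about 156 GB
```

With numbers like these, the worst case far exceeds typical RAM, which is
why the limit has to be sized for the whole workload, not a single sort.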

			regards, tom lane

Next: David Teran, 2004-02-16 16:51:37, "select max(id) from aTable is very slow"
Previous: Marinos J. Yannikos, 2004-02-16 02:53:15, "Re: optimization ideas for frequent, large(ish) updates"
