Re: optimization ideas for frequent, large(ish) updates

From: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
To: "Marinos J(dot) Yannikos" <mjy(at)geizhals(dot)at>
Cc: josh(at)agliodbs(dot)com, pgsql-performance(at)postgresql(dot)org
Subject: Re: optimization ideas for frequent, large(ish) updates
Date: 2004-02-15 04:51:41
Message-ID: 402EFADD.4040100@familyhealth.com.au
Lists: pgsql-performance

> 800MB is correct, yes... There are usually only 10-30 postgres processes
> active (imagine 5-10 people working on the web front-end while cron
> jobs access the db occasionally). Very few queries can use such large
> amounts of memory for sorting, but they do exist.

But remember that means that if you have 4 people doing 2 sorts each at
the same time, postgres could use up to 6.4GB of RAM. The sort_mem
parameter means that if a sort needs more memory than that limit, it
will spill to temporary files on disk.
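A quick back-of-envelope check of that worst case, using the numbers from
this thread (800MB sort_mem, 4 users, 2 concurrent sorts each) -- the key
point being that sort_mem is allocated per sort, per backend, not shared:

```python
# Worst-case sort memory: sort_mem is per sort, per backend process.
sort_mem_mb = 800        # per-sort limit from the thread
concurrent_users = 4     # users sorting at the same time
sorts_per_query = 2      # sorts running concurrently in each session

peak_mb = sort_mem_mb * concurrent_users * sorts_per_query
print(peak_mb)  # 6400 MB, i.e. 6.4GB
```

This is why a large sort_mem is usually set per-session (SET sort_mem = ...)
for the few queries that need it, rather than globally.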

Chris
