Re: 2GB or not 2GB

From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: josh(at)agliodbs(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: 2GB or not 2GB
Date: 2008-05-29 15:45:14
Message-ID: 1212075914.26576.8.camel@jd-laptop
Lists: pgsql-performance

On Wed, 2008-05-28 at 16:59 -0700, Josh Berkus wrote:
> Folks,

> shared_buffers: according to witnesses, Greg Smith presented at East that
> based on PostgreSQL's buffer algorithms, buffers above 2GB would not
> really receive significant use. However, Jignesh Shah has tested that on
> workloads with large numbers of connections, allocating up to 10GB
> improves performance.

I have seen multiple production systems where raising shared_buffers to
6-8GB helps. What I don't know, and what I am guessing Greg is referring
to, is whether it helps as much as the jump to 2GB does. That is, the
rate of improvement falls off even while absolute performance keeps
going up (much like adding more CPUs).
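
For illustration only (the 6GB figure below is hypothetical, not a
recommendation), the setting in question lives in postgresql.conf and
only takes effect after a restart:

    # postgresql.conf -- hypothetical sizing for a machine with RAM to spare
    shared_buffers = 6GB    # 8.2+ accepts memory units; older releases want a page count
    # changing shared_buffers requires a server restart, not just a reload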

>
> sort_mem: My tests with 8.2 and DBT3 seemed to show that, due to
> limitations of our tape sort algorithm, allocating over 2GB for a single
> sort had no benefit. However, Magnus and others have claimed otherwise.
> Has this improved in 8.3?

I have never seen work_mem (there is no sort_mem, Josh) do any good above
1GB. Of course, I would never willingly use that much work_mem unless
there was a really good reason that involved a guarantee of not calling
me at 3:00am.
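
As a minimal sketch of that "really good reason" case (values
hypothetical): bump work_mem for the one session running the big sort,
rather than raising it server-wide:

    -- psql, 8.2/8.3: raise sort memory for this session only
    SET work_mem = '1GB';
    -- ... run the large sort or report here ...
    RESET work_mem;  -- drop back to the server default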

>
> So, can we have some test evidence here? And workload descriptions?
>

It's all "tune now, buddy" :P

Sincerely,

Joshua D. Drake
