
Re: 2GB or not 2GB

From: "Jignesh K(dot) Shah" <J(dot)K(dot)Shah(at)Sun(dot)COM>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: 2GB or not 2GB
Date: 2008-05-29 03:01:37
Lists: pgsql-performance

Greg Smith wrote:
> On Wed, 28 May 2008, Josh Berkus wrote:
>> shared_buffers: according to witnesses, Greg Smith presented at East
>> that, based on PostgreSQL's buffer algorithms, buffers above 2GB would
>> not really receive significant use.  However, Jignesh Shah has tested
>> that on workloads with large numbers of connections, allocating up to
>> 10GB improves performance.
> Lies!  The only upper limit for non-Windows platforms I mentioned was
> that those recent tests at Sun suggested a practical limit in the low
> multi-GB range.
> I've run with 4GB usefully for one of the multi-TB systems I manage;
> the main index on the most frequently used table is 420GB, and anything
> I can do to keep the most popular parts of that pegged in memory seems
> to help.  I haven't tried to isolate the exact improvement going from
> 2GB to 4GB with benchmarks, though.
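
For reference, shared_buffers normally has to be set in postgresql.conf and
takes effect only after a server restart, so any of the sizes being compared
here (2GB, 4GB, 10GB) are start-time choices; a quick way to confirm what the
running server actually got is:

    -- Show the shared_buffers value the server is currently running with;
    -- changing it (e.g. shared_buffers = 4GB in postgresql.conf) requires
    -- a server restart.
    SHOW shared_buffers;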
Yep, it's always the index that seems to benefit from high cache hits. In
one of the recent tests, what I ended up doing was writing a select
count(*) from trade where t_id >= $1 and t_id < SOMEMAX just to kick in an
index scan and get the index into memory first. So the bigger the buffer
pool, the better the index hit rate in it, and the better the performance.
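
A minimal sketch of that kind of warm-up query, assuming the trade table is
indexed on t_id (the range bounds below are only placeholders for whatever
$1 and SOMEMAX were in the actual test):

    -- Range predicate on the indexed t_id column, intended to kick in an
    -- index scan and pull the relevant index pages into shared_buffers
    -- before the main run.  The bounds are placeholder values.
    SELECT count(*)
    FROM trade
    WHERE t_id >= 1
      AND t_id < 1000000;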


