Making the most of memory?

From: Joshua Fielek <jfielek(at)centriccrm(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Making the most of memory?
Date: 2008-01-23 04:11:48
Message-ID: 4796BE84.9030803@centriccrm.com
Lists: pgsql-performance
Hey folks --

For starters, I am fairly new to database tuning and still learning the
ropes. I understand the concepts, but I'm still getting a feel for the
real-world impact of some of the configuration options for postgres.

We have an application that has been having some performance issues under
postgres 8.1.9 and, more recently, 8.2.5. The upgrade to 8.2.5 gained us a
nice little performance increase just from the improved query
optimization, but we are still having other performance issues.

The database itself is not that large -- a pg_dump of the database as
plain-text SQL is only about 110MB. I haven't checked the exact size of
the actual database, but the entire data directory, including logs and
config files, comes to about 385MB, which is smaller than the available
memory. This is a single database with a relatively small number of client
connections (50 or so) making a fair number of small queries. This is not
a massive data effort by any means at this time, but it will be growing.
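
In case it matters, I believe the on-disk size of the database itself
(minus logs and config files) can be checked directly -- pg_database_size()
and pg_size_pretty() should be built in as of 8.1. Something along these
lines is what I have in mind (just a sketch):

-- Rough sketch: report the on-disk size of the current database.
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
WHERE datname = current_database();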

We currently have about 4GB (of 8GB total) available for Postgres. We will
be moving to a server that will have about 24GB (of 32GB total) available
for the database, with the current server becoming a hot backup, probably
kept in sync with Slony or something similar.

I've been monitoring the memory usage of postgres on the current system,
and it seems like the postgres processes never allocate more than about
400MB in total, with only about 80-90MB of shared memory in use. It seems
to me that, since we have a very large chunk of memory relative to the
database size, we should be able to keep the entire database in memory.
How can we be sure we're getting the most out of the memory we're
allocating to postgres? What can we do to improve memory usage, with
performance first and foremost, on both the larger and smaller systems?
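
If it would help to see what is actually sitting in shared_buffers, my
understanding is that the contrib/pg_buffercache module can show that (it
has to be installed separately from contrib). A sketch of the kind of
query I have in mind -- column names per the 8.1/8.2 contrib docs, so
treat it as untested:

-- Sketch: top 10 relations by number of 8kB buffers cached in shared_buffers.
SELECT c.relname,
       count(*)            AS buffers,
       count(*) * 8 / 1024 AS approx_mb
FROM pg_buffercache b
JOIN pg_class c    ON c.relfilenode = b.relfilenode
JOIN pg_database d ON d.oid = b.reldatabase
                  AND d.datname = current_database()
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;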

Here are the salient config items for the 8GB system:

max_connections = 200        # realistically we expect 50-150 open
shared_buffers = 38000
sort_mem = 1048576
work_mem = 32000
maintenance_work_mem = 32000
max_fsm_pages = 480001        # probably too large for the max_fsm_* 
max_fsm_relations = 20000    # items; one Db with ~400 tables.
effective_cache_size = 212016    # ~2GB, could probably double this
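
For concreteness, here is the rough direction I was considering for the
8GB box -- purely a sketch for discussion, not something I've tested, and
I'd appreciate a sanity check on the numbers. (I'm also assuming sort_mem,
which as far as I know is just the pre-8.0 name for work_mem, can simply
be dropped.)

# Tentative values for the 8GB system (8kB pages / kB, as above):
shared_buffers = 262144           # ~2GB; may need kernel SHMMAX raised
work_mem = 16384                  # ~16MB per sort/hash step, per backend
maintenance_work_mem = 262144     # ~256MB for VACUUM, CREATE INDEX
effective_cache_size = 524288     # ~4GB; the doubling mentioned above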


Thanks,
J
-- 
Joshua J. Fielek
Sr. Software Engineer
Concursive Corporation
223 East City Hall Ave., Suite 212
Norfolk, VA 23510
Phone  : (757) 627-3002x6656
Mobile : (757) 754-4462
Fax    : (757) 627-8773
Email  : jfielek(at)concursive(dot)com
http://www.concursive.com
