
Re: Making the most of memory?

From: Rich <rhdyes(at)gmail(dot)com>
To: "Joshua Fielek" <jfielek(at)centriccrm(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Making the most of memory?
Date: 2008-01-23 19:06:02
Lists: pgsql-performance
Josh, what about the rest of your system? What operating system? What's
your hardware setup: drives, RAID? What indexes do you have set up for
these queries? There are other factors that could cause slow queries.
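A related quick check is the shared-buffer hit ratio, which PostgreSQL tracks per database in pg_stat_database as blks_hit and blks_read. A minimal sketch of the arithmetic in Python (the counter values here are made-up examples, not from Joshua's system):

```python
# Buffer-cache hit ratio from pg_stat_database counters.
# blks_hit  = blocks found already in PostgreSQL's shared buffers
# blks_read = blocks that had to be requested from the OS / disk
def hit_ratio(blks_hit: int, blks_read: int) -> float:
    total = blks_hit + blks_read
    return blks_hit / total if total else 0.0

# Hypothetical counters, for illustration only:
print(f"{hit_ratio(9_500_000, 500_000):.2%}")  # 95.00%
```

A consistently low ratio under a steady workload would suggest the working set is not staying in shared buffers; a high one means the OS cache and shared buffers are already serving most reads.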

On Jan 22, 2008 11:11 PM, Joshua Fielek <jfielek(at)centriccrm(dot)com> wrote:
> Hey folks --
> For starters, I am fairly new to database tuning and I'm still learning
> the ropes. I understand the concepts but I'm still learning the real
> world impact of some of the configuration options for postgres.
> We have an application that has been having some issues with performance
> within postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a
> nice little performance increase just off the improved query
> optimization, but we are still having other performance issues.
> The database itself is not that large -- a pg_dump to a plain-text SQL
> file is only about 110MB. I haven't checked the exact size of the actual
> database, but the entire data directory, including logs and config
> files, is about 385MB -- smaller than the available
> database with a relatively small number of client connections (50 or so)
> making a fair number of smaller queries. This is not a massive data
> effort by any means at this time, but it will be growing.
> We have available currently ~4GB (8GB total) for Postgres. We will be
> moving to a server that will have about 24GB (32GB total) available for
> the database, with the current server becoming a hot backup, probably
> with slony or something similar to keep the databases in sync.
> I've been monitoring the memory usage of postgres on the current system
> and it seems like none of the backend processes ever allocate more than
> about 400MB total and about 80-90MB shared memory. It seems to me that since
> we have a very large chunk of memory relative to the database size we
> should be loading the entire database into memory. How can we be sure
> we're getting the most out of the memory we're allocating to postgres?
> What can we do to improve the memory usage, looking for performance
> first and foremost, on both the larger and smaller systems?
> Here are the salient config items for the 8GB system:
> max_connections = 200        # realistically we expect 50-150 open
> shared_buffers = 38000
> sort_mem = 1048576
> work_mem = 32000
> maintenance_work_mem = 32000
> max_fsm_pages = 480001        # probably too large for the max_fsm_*
> max_fsm_relations = 20000    # items; one Db with ~400 tables.
> effective_cache_size = 212016    # ~2GB, could probably double this
> Thanks,
> J
> --
> Joshua J. Fielek
> Sr. Software Engineer
> Concursive Corporation
> 223 East City Hall Ave., Suite 212
> Norfolk, VA 23510
> Phone  : (757) 627-3002x6656
> Mobile : (757) 754-4462
> Fax    : (757) 627-8773
> Email  : jfielek(at)concursive(dot)com
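The page-count settings in the quoted config are in 8kB buffer pages, so it helps to translate them into bytes before comparing against physical RAM. A small sketch, assuming the default 8kB block size:

```python
BLOCK_SIZE = 8192  # default PostgreSQL page size in bytes

def pages_to_mb(pages: int) -> float:
    """Convert a page-count setting (shared_buffers etc.) to megabytes."""
    return pages * BLOCK_SIZE / (1024 ** 2)

# Values from the posted config:
print(f"shared_buffers       ~ {pages_to_mb(38000):.0f} MB")   # ~297 MB
print(f"effective_cache_size ~ {pages_to_mb(212016):.0f} MB")  # ~1656 MB
```

Note that at 8kB pages the "~2GB" comment on effective_cache_size actually works out closer to 1.7GB, and shared_buffers is roughly 300MB, which is in line with the shared memory footprint Joshua observed.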
