From: Steve <cheetah(at)tanabi(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Question about memory allocations
Date: 2007-04-10 19:28:48
Message-ID: Pine.GSO.4.64.0704101458290.17955@kittyhawk.tanabi.org

Hey there;

I'm trying to tune the memory usage of a new machine that has a -lot- of
memory in it (32 gigs). We're upgrading from a machine that had 16 gigs
of RAM, and the database is around 130-some gigs on disk. Our largest
tables have on the order of 10 million rows.

Problem is, the Postgres documentation isn't always clear about what the
different memory settings are used for, and it's definitely not clear
about what 'useful values' would be for them. Further, searching online
turns up a lot of random advice, and most of the configuration
information out there is for pre-8.1 versions that don't have all these
new and strange settings :)

This machine exists only for the database. With that in mind, a few
questions.

- I've set up a configuration (I'll show the important values below), and
I'm wondering if there's any way I can actually see how memory is
distributed and used inside the DB.
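
(The closest I've found so far is reading the configured values back out
of pg_settings, which shows settings but not actual usage -- e.g.:

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name LIKE '%mem%' OR name LIKE '%buffer%';

If there's something that shows the memory actually in use at runtime,
that's what I'm after.)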

- What is temp_buffers used for exactly? Does it matter for, say, nested
queries or anything specific? Is there any case where setting this to a
large number actually -helps-?
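
(To make that concrete, the pattern I have in mind is a session that
builds temporary tables -- my understanding is that temp_buffers caps the
per-session memory used to buffer them. Table and column names here are
made up:

    BEGIN;
    CREATE TEMPORARY TABLE scratch AS
        SELECT id, total FROM orders WHERE flagged;
    CREATE INDEX scratch_id_idx ON scratch (id);
    -- ... then query scratch repeatedly in this session ...
    COMMIT;
)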

- Do the full_page_writes and wal_buffers settings matter AT ALL on a
machine where fsync = off?

- What does wal_buffers control, and does increasing its value actually
help anything?

- Any idea if this is a smart configuration for this machine? It's a
Red Hat Enterprise Linux machine (kernel 2.6.18) with 8 dual-core 64-bit
AMD processors, 32 gigs of RAM, and 4x 176 (or whatever the exact number
is) gig SCSI hard drives in a stripe. Only the values I have modified
are shown; everything else is left at default:

shared_buffers = 16GB
temp_buffers = 128MB
max_prepared_transactions = 0

# This value will probably set off cries of 'use this as a SET command
# instead of a big global value'; however, there are more big queries
# than small ones and the number of simultaneous users is very small, so
# for now this can be set globally big. If it shows improvement I'll
# implement it as per-session SET commands later (sketched just below).
#
# Question: does this mean 2 gigs will be immediately allocated to every
# query, or is this just how big the working memory is allowed to grow
# per transaction?
work_mem = 2GB
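# (If the per-query route wins, I assume it would look roughly like this
# from the application, rather than a big global value:
#     SET work_mem = '256MB';
#     -- ... run the big query ...
#     RESET work_mem;
# )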

maintenance_work_mem = 4GB
max_stack_depth = 16MB

# VACUUM suggested I make this 'over 3600000' on the old machine, so I
# use this value; if needing it this big is a symptom of another problem,
# I'd be interested to know :)
max_fsm_pages = 5000000
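# (For reference, the 'over 3600000' suggestion came from the hint that a
# database-wide VACUUM VERBOSE prints at the end, comparing the number of
# page slots needed against max_fsm_pages:
#     VACUUM VERBOSE;
# )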

# For a lot of reasons, it doesn't make any sense to use fsync for this
# DB. Read-only during the day, backed up daily, UPS'd, etc.
fsync = off
full_page_writes = off
wal_buffers = 512MB

# Leaving this low makes the DB complain ('checkpoints are occurring too
# frequently'), but I'm not sure what's reasonable.
checkpoint_segments = 128

random_page_cost = 1.5
cpu_tuple_cost = 0.001
cpu_index_tuple_cost = 0.0005
cpu_operator_cost = 0.00025
effective_cache_size = 8GB

default_statistics_target = 100
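
For what it's worth, my way of sanity-checking the planner values above
has just been to diff plans and timings before and after a change, along
these lines (table and column names made up):

    SET random_page_cost = 1.5;
    EXPLAIN ANALYZE
        SELECT customer_id, count(*)
          FROM orders
         GROUP BY customer_id;

If there's a smarter way to validate these cost settings, I'm all ears.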

Thanks for all your help!

Steve
