Postgres configuration for 64 CPUs, 128 GB RAM...

From: "Marc Mamin" <M(dot)Mamin(at)intershop(dot)de>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Postgres configuration for 64 CPUs, 128 GB RAM...
Date: 2007-07-17 14:10:30
Message-ID: CA896D7906BF224F8A6D74A1B7E54AB301750B12@JENMAIL01.ad.intershop.net
Lists: pgsql-performance



Hello,

We have the opportunity to benchmark our application on a large server. I
have to prepare the Postgres configuration and I'd appreciate some
comments on it, as I am not experienced with servers of such a scale.
Moreover, the configuration should be fail-proof, as I won't be able to
attend the tests.

Our application (Java + Perl) and Postgres will run on the same server;
conveniently, the application activity is low whenever Postgres has large
transactions to process.

There is a large gap between our current production server (Linux, 4 GB
RAM, 4 CPUs) and the benchmark server; one of the targets of this
benchmark is to verify the scalability of our application.

And you have no reason to be envious, as the server doesn't belong to us :-)

Thanks for your comments,

Marc Mamin

Postgres version: 8.2.1

Server Specifications:
----------------------

Sun SPARC Enterprise M8000 Server:

http://www.sun.com/servers/highend/m8000/specs.xml

File system:

http://en.wikipedia.org/wiki/ZFS

Planned configuration:
----------------------

# we don't expect more than 150 parallel connections, but I suspect a
# leak in our application that leaves some idle connections open

max_connections=2000
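
(To track down those leaked connections during the tests, I intend to
check pg_stat_activity for idle backends; a minimal query, assuming the
8.2 catalog where idle backends report '<IDLE>' as their current query:

  SELECT procpid, usename, backend_start, query_start
  FROM pg_stat_activity
  WHERE current_query = '<IDLE>';
)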

ssl = off

# maximum allowed
shared_buffers= 262143
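
(That is 262143 x 8 kB pages, i.e. just under 2 GB of shared buffers.
I assume the ceiling we hit here is the kernel's shared memory limit
(SHMMAX) rather than anything in Postgres itself.)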

# on our current best production server with 4 GB RAM (not dedicated to
# Postgres), work_mem is set to 600 MB
# this limitation is probably the bottleneck for our application, as the
# files in pgsql_tmp grow up to 15 GB during large aggregations (we have
# a locking mechanism to avoid parallel processing of such transactions)
work_mem = 31457280 # (30 GB)
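
(I am aware that work_mem is allocated per sort/hash operation, so
several operations in one query could each claim up to 30 GB. An
alternative, since 8.2 accepts units, would be to keep a modest global
value and raise it only inside the large transactions, e.g.

  SET work_mem = '30GB';
  -- ... run the large aggregation ...
  RESET work_mem;

relying on our locking mechanism to guarantee that only one such
transaction runs at a time.)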

# index creation time is also an issue for us; it blocks other large
# processes too
# our largest table so far is 13 GB + 11 GB of indexes
maintenance_work_mem = 31457280 # (30 GB)
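
(Same idea here: instead of a 30 GB global setting, I could raise it
just around the index builds, e.g.

  SET maintenance_work_mem = '30GB';
  CREATE INDEX big_idx ON big_table (some_col);  -- names invented for illustration
  RESET maintenance_work_mem;
)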

# more than the max number of tables + indexes expected during the
# benchmark
max_fsm_relations = 100000

max_fsm_pages = 1800000
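
(I intend to verify these FSM numbers during the tests: a database-wide
VACUUM VERBOSE prints at the end how many page slots and relations the
free space map actually needs, so I can adjust max_fsm_pages if 1800000
turns out to be too small:

  VACUUM VERBOSE;
  -- the last lines report the required number of FSM page slots
)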

# don't know if I should modify this; it seems to be sufficient on our
# production servers
max_stack_depth = 2MB

# vacuum will be done by hand between each test session
autovacuum = off
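
(Concretely, between sessions I plan to run something like

  VACUUM ANALYZE;

on each database, so every test starts without dead tuples and with
fresh statistics.)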

# required to analyse the benchmark
log_min_duration_statement = 1000

max_prepared_transactions = 100

# seems to be required to drop schemas/roles containing a large number
# of objects
max_locks_per_transaction = 128
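
(As I understand the documentation, the shared lock table holds about
max_locks_per_transaction * (max_connections + max_prepared_transactions)
entries, i.e. 128 * (2000 + 100) = 268800 here, and a single backend may
take more than 128 locks as long as the table isn't full.)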

# I use the defaults for the bgwriter, as I couldn't find
# recommendations on those:

#bgwriter_delay = 200ms          # 10-10000ms between rounds
#bgwriter_lru_percent = 1.0      # 0-100% of LRU buffers scanned/round
#bgwriter_lru_maxpages = 5       # 0-1000 buffers max written/round
#bgwriter_all_percent = 0.333    # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5       # 0-1000 buffers max written/round

#WAL

fsync = on

#use default
#wal_sync_method

# we are using 32 on our production system
wal_buffers=64
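
(That is 64 x 8 kB = 512 kB of WAL buffers.)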

# we didn't do any testing with this parameter until now, but this
# shouldn't be a relevant point, as our performance focus is on large
# transactions
commit_delay = 0

#CHECKPOINT

# xlog will be on a separate disk
checkpoint_segments=256
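
(If my arithmetic is right, 256 segments can grow to about
(2 * 256 + 1) * 16 MB, i.e. roughly 8 GB of xlog files, so we will size
that disk accordingly.)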

checkpoint_timeout = 5min
