Re: Postgres configuration for 64 CPUs, 128 GB RAM...

From: "Gavin M(dot) Roy" <gmr(at)ehpg(dot)net>
To: "Marc Mamin" <M(dot)Mamin(at)intershop(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres configuration for 64 CPUs, 128 GB RAM...
Date: 2007-07-21 00:18:53
Message-ID: af1bce590707201718r7ae94471hc42ccdefa8654ba8@mail.gmail.com
Lists: pgsql-performance

Having done something similar recently, I would recommend that you look at
adding connection pooling with pgBouncer, in transaction-pooling mode,
between your benchmark app and PostgreSQL. In our application we have about
2000 clients funneling down to 30 backends and are able to sustain a large
transactions-per-second volume. This has been the #1 key to success for us
in running on monster hardware.
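
For illustration, a minimal pgbouncer.ini along those lines; the database
name, ports, and the auth file path below are placeholders rather than our
actual values:

  [databases]
  appdb = host=127.0.0.1 port=5432 dbname=appdb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432          ; clients connect here instead of 5432
  auth_type = md5
  auth_file = userlist.txt    ; placeholder path
  pool_mode = transaction     ; server connection released at commit/rollback
  max_client_conn = 2000      ; all application clients
  default_pool_size = 30      ; actual PostgreSQL backends per database/user
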
Regards,

Gavin

On 7/17/07, Marc Mamin <M(dot)Mamin(at)intershop(dot)de> wrote:
>
>
> Postgres configuration for 64 CPUs, 128 GB RAM...
>
> Hello,
>
> We have the opportunity to benchmark our application on a large server. I
> have to prepare the Postgres configuration and I'd appreciate some comments
> on it as I am not experienced with servers of such a scale. Moreover the
> configuration should be fail-proof as I won't be able to attend the tests.
>
> Our application (java + perl) and Postgres will run on the same server,
> though the application activity is low while Postgres has large
> transactions to process.
>
> There is a large gap between our current production server (Linux, 4GB
> RAM, 4 cpus) and the benchmark server; one of the targets of this benchmark
> is to verify the scalability of our application.
>
> And you have no reason to be envious as the server doesn't belong to us :-)
>
> Thanks for your comments,
>
> Marc Mamin
>
>
>
>
> Postgres version: 8.2.1
>
>
> Server Specifications:
> ----------------------
>
> Sun SPARC Enterprise M8000 Server:
>
> http://www.sun.com/servers/highend/m8000/specs.xml
>
> File system:
>
> http://en.wikipedia.org/wiki/ZFS
>
>
> Planned configuration:
> --------------------------------
>
> # we don't expect more than 150 parallel connections,
> # but I suspect a leak in our application that leaves some idle
> # connections open
>
> max_connections = 2000
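>
> # (leaked idle connections can be spotted during a run with a query such
> #  as the following; in 8.2, idle backends report '<IDLE>' as their
> #  current query:
> #    SELECT procpid, usename, backend_start
> #    FROM pg_stat_activity
> #    WHERE current_query = '<IDLE>';
> # )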
>
> ssl = off
>
> # maximum allowed
> shared_buffers = 262143
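>
> # (262143 buffers * 8 kB blocks = ~2 GB, just under a signed 32-bit byte
> #  count, which is presumably the limit referred to above)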
>
> # on our current best production server with 4GB RAM (not dedicated to
> # Postgres), work_mem is set to 600 MB
> # this limitation is probably the bottleneck for our application, as the
> # files in pgsql_tmp grow up to 15 GB
> # during large aggregations (we have a locking mechanism to avoid
> # parallel processing of such transactions)
> work_mem = 31457280 # (30 GB)
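>
> # (work_mem is allocated per sort/hash operation, so several concurrent
> #  operations could each claim this amount; it can also be raised for a
> #  single session only instead of globally, e.g.:
> #    SET work_mem = '30GB';   -- before the large aggregation
> #    RESET work_mem;          -- afterwards
> #  8.2 accepts such unit suffixes for memory parameters)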
>
> # index creation time is also an issue for us; the process also blocks
> # other large processes.
> # our largest table so far is 13 GB + 11 GB indexes
> maintenance_work_mem = 31457280 # (30 GB)
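>
> # (8.2 also offers CREATE INDEX CONCURRENTLY, which builds an index
> #  without blocking concurrent writes, at the cost of a slower build;
> #  a sketch with placeholder names:
> #    SET maintenance_work_mem = '30GB';
> #    CREATE INDEX CONCURRENTLY some_idx ON some_big_table (some_col);
> # )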
>
> # more than the max number of tables + indexes expected during the
> # benchmark
> max_fsm_relations = 100000
>
> max_fsm_pages = 1800000
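>
> # (the summary at the end of a database-wide "VACUUM VERBOSE;" reports
> #  how many page slots and relations are actually needed, which can be
> #  checked between test sessions to validate these two values)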
>
> # don't know if I should modify this.
> # seems to be sufficient on our production servers
> max_stack_depth = 2MB
>
> # vacuum will be done by hand between each test session
> autovacuum = off
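>
> # (i.e. between sessions, something like:
> #    VACUUM ANALYZE;
> #  to reclaim dead rows and refresh planner statistics in one pass)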
>
>
> # required to analyse the benchmark
> log_min_duration_statement = 1000
>
> max_prepared_transactions = 100
>
> # seems to be required to drop schemas/roles containing a large number
> # of objects
> max_locks_per_transaction = 128
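>
> # (the shared lock table is sized for about max_locks_per_transaction *
> #  (max_connections + max_prepared_transactions) locks, i.e.
> #  128 * (2000 + 100) = 268800 lockable objects with these settings)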
>
>
>
> # I use the defaults for the bgwriter as I couldn't find recommendations
> # on those
>
> #bgwriter_delay = 200ms          # 10-10000ms between rounds
> #bgwriter_lru_percent = 1.0      # 0-100% of LRU buffers scanned/round
> #bgwriter_lru_maxpages = 5       # 0-1000 buffers max written/round
> #bgwriter_all_percent = 0.333    # 0-100% of all buffers scanned/round
> #bgwriter_all_maxpages = 5       # 0-1000 buffers max written/round
>
> #WAL
>
> fsync = on
>
> #use default
> #wal_sync_method
>
> # we are using 32 on our production system
> wal_buffers = 64
>
> # we haven't done any testing with this parameter so far, but it
> # shouldn't be a relevant point, as our performance focus is on large
> # transactions
> commit_delay = 0
>
> #CHECKPOINT
>
> # xlog will be on a separate disk
> checkpoint_segments = 256
>
> checkpoint_timeout = 5min
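>
> # (checkpoint_warning, 30s by default, logs a message whenever segment-
> #  driven checkpoints occur closer together than that; e.g.
> #    checkpoint_warning = 300
> #  would flag any benchmark phase that fills the 256 segments in under
> #  five minutes)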
>
