Re: Postgres configuration for 64 CPUs, 128 GB RAM...

From: "Strong, David" <david(dot)strong(at)unisys(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Postgres configuration for 64 CPUs, 128 GB RAM...
Date: 2007-07-17 17:43:44
Message-ID: B6419AF36AC8524082E1BC17DA2506E803CE97FE@USMV-EXCH2.na.uis.unisys.com
Lists: pgsql-performance

>"Marc Mamin" <M(dot)Mamin(at)intershop(dot)de> writes:
>
>> We have the oppotunity to benchmark our application on a large
server. I
>> have to prepare the Postgres configuration and I'd appreciate some
>> comments on it as I am not experienced with servers of such a scale.
>> Moreover the configuration should be fail-proof as I won't be able to
>> attend the tests.
>
>I really think that's a recipe for disaster. Even on a regular machine
you
>need to treat tuning as an on-going feedback process. There's no such
thing >as
>a fail-proof configuration since every application is different.
>
>On an exotic machine like this you're going to run into unique problems
>that
>nobody here can anticipate with certainty.
>
>--
> Gregory Stark
> EnterpriseDB http://www.enterprisedb.com
>

Marc,

You're getting a lot of good advice for your project. Let me echo the
others in recommending an upgrade to Postgres 8.2.4, which will bring
added performance and scalability benefits.

Others have mentioned that you have to be data driven, and
unfortunately that is true. All you can really do is pick a reasonable
starting point and run a test to establish a baseline number. Then
monitor, make small changes, and test again. That's the only way you're
going to find the best configuration for your system, and it will take
time and effort.
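
Purely as an illustration of a starting point (every value below is an
assumption to be validated against your own workload, not a
recommendation), a first cut at postgresql.conf for a 64-CPU, 128 GB
host might look something like:

  # Hypothetical first-cut settings -- all guesses to refine by testing
  shared_buffers = 8GB           # 8.2 accepts memory units
  work_mem = 32MB                # applies per sort/hash, so stay modest
  maintenance_work_mem = 1GB
  effective_cache_size = 96GB    # rough estimate of available OS cache
  checkpoint_segments = 64
  wal_buffers = 1MB
  max_connections = 200

The point is only to have a defensible baseline to measure against;
your monitoring will tell you where to move from there.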

In addition, everything involved in your testing must scale - not just
Postgres. For example, if your driver hardware or driver software does
not scale, you won't be able to generate enough throughput for your
application or Postgres. The same goes for all of your networking
equipment and any other hardware or software that might be involved in
the test environment. So you really have to monitor at all levels,
i.e. don't just focus on the database platform.
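
For example (assuming a Solaris host; it's worth checking the flags
against your man pages), you might watch each layer with something
like:

  # CPU, run queue and memory pressure, sampled every 5 seconds
  vmstat 5
  # extended per-device disk statistics with descriptive names
  iostat -xn 5
  # per-LWP microstate accounting, to spot CPU-bound backends
  prstat -mL 5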

I took a quick look at the Sun M8000 server link you provided. I don't
know the system specifically, so I might be mistaken, but it looks like
it is configured with 4 sockets per CPU board and 4 CPU boards per
system, and each CPU board looks like it can take 128GB RAM. In this
case, you will have to keep an eye on how Solaris is binding
(affinitizing) processes to CPU cores and/or boards. Any time a process
is moved to a new CPU board, it's likely that there will be a number of
cache invalidations to move the data the process was working on from
the old board to the new board. In addition, the moved process may
still continue to refer to memory it allocated on the old board, which
can be quite expensive. Typically, the more CPU cores and CPU boards
you have, the more likely this is to happen.

I'm no Solaris expert, so I don't know if there is a better way of
doing this, but you might consider using the psrset or pbind commands
to bind Postgres backend processes to a specific CPU core or range of
cores. If choosing a range of cores, they should be on the same CPU
board. Again, through monitoring, you'll have to determine how many CPU
cores each backend really needs, and then how best to spread the
backends out over the CPU boards.
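
As a rough sketch of what that might look like (the core numbers and
the pid are made up for illustration; check the syntax on your Solaris
release):

  # create a processor set from 8 cores on one board;
  # psrset prints the new set's id
  psrset -c 0 1 2 3 4 5 6 7
  # bind an existing backend (pid 12345 is hypothetical) to set 1
  psrset -b 1 12345
  # or pin a backend to a single core instead
  pbind -b 4 12345

Keeping all the cores of a set on one board is what avoids the
cross-board traffic described above.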

Good luck.

David
