Re: Re: New server to improve performance on our large and busy DB - advice? (v2)

From: "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Re: New server to improve performance on our large and busy DB - advice? (v2)
Date: 2010-01-20 22:26:05
Message-ID: hj7vta$5ft$1@news.hub.org
Lists: pgsql-performance

> * A database that is of small to medium size (5 - 10 GB)?
> * Around 10 clients that perform constant write operations to the database
> (UPDATE/INSERT)
> * Around 10 clients that occasionally read from the database
> * Around 6000 tables in your database
> * A problem with tuning it all
> * Migration to new hardware and/or OS
>
> Is this all correct?

Actually, the tablespace is very large, over 500GB. However, the actual
production DB is 200GB.

> First thing that is noticeable is that you seem to have way too few drives
> in the server - not because of disk space required but because of speed.
> You didn't say what type of drives you have and you didn't say what you
> would consider desirable performance levels, but off hand (because of the
> "10 clients perform constant writes" part) you will probably want at least
> 2x-4x more drives.

> With only 4 drives, RAID 10 is the only thing usable here.

What would be the optimum RAID level and number of disks?

> > 2) Which Windows OS would you recommend? (currently 2008 x64 Server)
>
> Would not recommend Windows OS.

We may be stuck, as my client is only considering Red Hat Linux (still
waiting to find out which version). If it turns out that this limitation
gives no better than a marginal improvement, then there is no incentive
to create more complications in what is basically a Windows shop (although
the project manager is a Linux advocate).

> Most importantly, you didn't say what you would consider desirable
> performance. The hardware and the setup you described will work, but not
> necessarily fast enough.

Once again, it seems as though we are down to the number of drives...

> Have you tried decreasing random_page_cost in postgresql.conf? Or setting
> (as a last resort) enable_seqscan = off?

In critical code sections, we do - we have stored procedures and code
segments which save the current enable_seqscan value, set it to off (local
to the transaction), then restore it after the code has run. Our current
"planner cost" values are all defaults. Is this what you would choose for an
Intel Core 2 Quad with 48 GB RAM?
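For what it's worth, the transaction-local pattern we use is roughly the
following (table and column names here are placeholders, not our real
schema). Since SET LOCAL only lasts until the end of the current
transaction, the explicit save/restore step may actually be unnecessary:

```sql
BEGIN;

-- Applies only to this transaction; reverts automatically on COMMIT
-- or ROLLBACK, so no manual save/restore of the old value is needed.
SET LOCAL enable_seqscan = off;

-- Critical query that must use an index scan (placeholder example).
SELECT *
FROM some_large_table
WHERE indexed_col = 42;

COMMIT;  -- enable_seqscan reverts to its previous setting here
```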

# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#effective_cache_size = 128MB
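On a box with 48 GB of RAM, the defaults above are generally considered far
too conservative - in particular effective_cache_size, which is only a hint
to the planner and allocates no memory. A common starting point might look
something like this (the exact numbers are illustrative only, not a tuned
recommendation for this workload):

```
# - Planner Cost Constants - (illustrative starting point, 48 GB RAM)
seq_page_cost = 1.0
random_page_cost = 2.0        # below the 4.0 default when data is mostly cached
cpu_tuple_cost = 0.01
cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025
effective_cache_size = 32GB   # roughly 2/3 of physical RAM; planner hint only
```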

Thanks for the help,

Carlo
