Re: SAN vs Internal Disks

From: "Bryan Murphy" <bryan(dot)murphy(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: SAN vs Internal Disks
Date: 2007-09-07 17:56:20
Message-ID: bd8531800709071056g740cb44cld04cd489bc4e6343@mail.gmail.com
Lists: pgsql-performance

We are currently running our database against a SAN share. It looks like this:

2 x RAID 10 (4 disk SATA 7200 each)

RAID Group 0 contains the tables + indexes
RAID Group 1 contains the log files + backups (pg_dump)

Our database server connects to the SAN via iSCSI over Gig/E using
jumbo frames. The file system is XFS (noatime).

I believe our RAID controller is an Areca. Whatever it is, it has the
option of adding a battery-backed cache, but I have not yet been able to
convince my boss that we need it.
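
For what it's worth, the case for the battery is easy to make with a quick
fsync-latency test: without a battery-backed write cache every synchronous
commit has to wait for the platters, so syncs/sec is capped by rotational
latency. A minimal sketch in Python (the path is just an example, point it
at a file on the array in question):

#!/usr/bin/env python
# Rough fsync-latency probe: times repeated 8 kB write+fsync cycles to
# estimate how many synchronous commits/sec the volume can sustain.
# TEST_FILE is a hypothetical path, point it at the array to test.
import os
import time

TEST_FILE = "/san/pgdata/fsync_probe.dat"   # example path only
ITERATIONS = 1000
BLOCK = b"\0" * 8192                        # one Postgres-sized page

fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT, 0o600)
try:
    start = time.time()
    for _ in range(ITERATIONS):
        os.lseek(fd, 0, os.SEEK_SET)
        os.write(fd, BLOCK)
        os.fsync(fd)                        # force it past the OS cache
    elapsed = time.time() - start
finally:
    os.close(fd)
    os.unlink(TEST_FILE)

print("%d fsyncs in %.2fs -> %.0f syncs/sec" %
      (ITERATIONS, elapsed, ITERATIONS / elapsed))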

Maintenance is nice: we can easily mess around with the drive shares,
expand and contract them, snapshot them, yadda yadda yadda. All
things which we NEVER do to our database anyway. :)

Performance, however, is a mixed bag. It handles concurrency very
well. We have a number of shares (production shares, data shares, log
file shares, backup shares, etc. etc.) spread across the two raid
groups and it handles them with aplomb.
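
Concretely, the kind of load I mean looks like a handful of threads doing
random 8k reads at the same time. A rough sketch of how you could measure
it (the path and sizes are only examples; use a file much bigger than RAM
so the reads actually hit the disks and not the page cache):

#!/usr/bin/env python
# Quick concurrent random-read probe: several threads each do random
# 8 kB reads from a large file and the aggregate reads/sec is reported.
import os
import random
import threading
import time

PATH = "/san/pgdata/bigfile"    # hypothetical path, use a file >> RAM
THREADS = 8
READS_PER_THREAD = 500
BLOCK = 8192

size = os.path.getsize(PATH)

def worker():
    f = open(PATH, "rb")
    try:
        for _ in range(READS_PER_THREAD):
            f.seek(random.randrange(0, size - BLOCK))
            f.read(BLOCK)
    finally:
        f.close()

start = time.time()
threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

total = THREADS * READS_PER_THREAD
print("%d random reads from %d threads in %.1fs -> %.0f reads/sec" %
      (total, THREADS, elapsed, total / elapsed))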

Throughput, however, kinda sucks. I just can't get the kind of
throughput out of it that I was hoping for. When our memory cache is blown,
the database can be downright painful for the next few minutes as
everything gets paged back into the cache.
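
To put a rough number on it, a dumb sequential-read test is enough: stream
a file that's bigger than RAM (or drop the page cache first), see what MB/s
comes back, and compare against a local array. A minimal sketch, path again
just an example:

#!/usr/bin/env python
# Crude sequential-read throughput check: streams a large file in 1 MB
# chunks and reports MB/s.  Use a file bigger than RAM (or drop the page
# cache first) so the numbers reflect the disks, not memory.
import sys
import time

path = sys.argv[1] if len(sys.argv) > 1 else "/san/pgdata/bigfile"  # example
CHUNK = 1024 * 1024

total = 0
start = time.time()
f = open(path, "rb")
try:
    while True:
        buf = f.read(CHUNK)
        if not buf:
            break
        total += len(buf)
finally:
    f.close()
elapsed = time.time() - start

mb = total / 1048576.0
print("read %.0f MB in %.1fs -> %.1f MB/s" % (mb, elapsed, mb / elapsed))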

I'd love to try a single 8-disk RAID 10 with a battery-backed controller
wired up directly to our database server, but given the size of our
company and limited funds,
it won't be feasible any time soon.

Bryan

On 9/7/07, Matthew Schumacher <matt(dot)s(at)aptalaska(dot)net> wrote:
> I'm getting a SAN together to consolidate my disk space usage for my
> servers. It's iSCSI based and I'll be PXE booting my servers from it.
> The idea is to keep spares on hand for one system (the SAN) and not have
> to worry about spares for each specific storage system on each server.
> This also makes growing filesystems and such pretty simple. Redundancy
> is also good since I'll have two iSCSI switches plugged into two Cisco
> Ethernet switches and two different RAID controllers on the JBOD. I'll
> start plugging my servers into each switch for further redundancy. In
> the end I could lose disks, Ethernet switches, cables, iSCSI switches,
> a RAID controller, whatever, and it keeps on moving.
>
> That said, I'm not putting my Postgres data on the SAN. The DB server
> will boot from the SAN and use it for its OS, but there are six 15k SAS
> disks in it set up with RAID 10 that will be used for the Postgres data
> mount. The machine is a Dell 2950 and uses an LSI RAID card.
>
> The end result is a balance of cost, performance, and reliability. I'm
> using iSCSI for the cost, reliability, and ease of use, but where I need
> performance I'm sticking with local disks.
>
> schu
>
