
Re: ZFS vs. UFS

From: Laszlo Nagy <gandalf(at)shopzeus(dot)com>
To: Greg Smith <greg(at)2ndQuadrant(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: ZFS vs. UFS
Date: 2012-07-31 08:50:11
Message-ID: 50179C43.5030607@shopzeus.com
Lists: pgsql-performance
> Which Intel RAID controller is that?  All of the ones on the 
> motherboard are pretty much useless if that's what you have. Those are 
> slower than software RAID and it's going to add driver issues you 
> could otherwise avoid.  Better to connect the drives to the non-RAID 
> ports or configure the controller in JBOD mode first.
>
> Using one of the better RAID controllers, one of Dell's good PERC 
> models for example, is one of the biggest hardware upgrades you could 
> make to this server.  If your database is mostly read traffic, it 
> won't matter very much.  Write-heavy loads really benefit from a good 
> RAID controller's write cache.
Actually, it is a PERC with write-cache and BBU.
>
> ZFS will heavily use server RAM for caching by default, much more so 
> than UFS.  Make sure you check into that, and leave enough RAM for the 
> database to run too.  (Doing *some* caching that way is good for 
> Postgres; you just don't want *all* the memory to be used for that)
Right now, the size of the database is below 5GB. So I guess it will fit 
into memory. I'm concerned about data safety and availability. I have 
been in a situation where the RAID card failed and I was not able to 
recover the data because I could not get an identical RAID card in time. 
I have also been in a situation where the system crashed twice a day 
and we didn't know why. (As it turned out, it was a bug in the 
"stable" kernel, and we could not identify it for two weeks.) However, 
we had to run fsck after every crash. With a 10TB disk array, that was 
extremely painful. ZFS is much better: short recovery time, and it is 
RAID card independent. So I think I have answered my own question - I'm 
going to use ZFS to have better availability, even if it leads to poor 
performance. (That was the original question: how bad is it to use ZFS 
for PostgreSQL instead of the native UFS.)
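On the RAM caching point above: on FreeBSD the ARC can be capped with a
loader tunable, so ZFS and PostgreSQL don't fight over memory. A config
sketch (the 4G value is purely illustrative, to be sized against
shared_buffers plus what the OS needs):

```shell
# /boot/loader.conf -- limit the ZFS ARC so PostgreSQL keeps enough RAM
# (4G is an illustrative value, not a recommendation for this box)
vfs.zfs.arc_max="4G"
```

The change takes effect at the next boot; the current ARC size can be
checked afterwards with sysctl kstat.zfs.misc.arcstats.size.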
>
> Moving disks to another server is a very low probability fix for a 
> broken system.  The disks are a likely place for the actual failure to 
> happen in the first place.
Yes, but we don't have to worry about that. raidz2 + hot spare is safe 
enough. The RAID card is the only single point of failure.
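The layout I have in mind is something like this (pool name and device
names are illustrative, not our actual disks):

```shell
# raidz2 across six disks plus a hot spare (names illustrative)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 spare da6
# separate dataset for the cluster; recordsize=8k matches the
# PostgreSQL 8kB page size
zfs create -o recordsize=8k tank/pgdata
```

With raidz2 any two disks can fail before data is at risk, and the hot
spare can take over a failed disk, so no single drive is critical.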
> I like to think more in terms of "how can I create a real-time replica 
> of this data?" to protect databases, and the standby server for that 
> doesn't need to be an expensive system.  That said, there is no reason 
> to set things up so that they only work with that Intel RAID 
> controller, given that it's not a very good piece of hardware anyway.
I'm not sure how to create a real-time replica. This database is updated 
frequently; there is always a process reading from and writing to it. I 
was thinking about using Slony to create slave databases, but I have no 
experience with that. We have a 100Mbit connection, and I'm not sure how 
much bandwidth we need to maintain a real-time slave database. It might 
be a good idea.
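Since 9.0, streaming replication is built into PostgreSQL itself, which
may be simpler than Slony for a plain standby. A rough setup sketch
based on the documentation (host names, the repl user, and paths are
made up for illustration):

```shell
# On the primary, postgresql.conf:
#   wal_level = hot_standby
#   max_wal_senders = 3
# On the primary, pg_hba.conf (the repl role must exist with the
# REPLICATION privilege):
#   host  replication  repl  192.168.1.0/24  md5
#
# On the standby: take a base backup, then point it at the primary.
pg_basebackup -h primary.example.com -U repl -D /usr/local/pgsql/data
cat > /usr/local/pgsql/data/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=repl'
EOF
```

As for bandwidth: 100 Mbit is roughly 12 MB/s, so unless the cluster
generates WAL faster than that at peak, the link should be enough for a
real-time standby.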

I'm sorry, I feel I'm being off-topic.
