Re: SSD performance

From: Jeff <threshar(at)torgo(dot)978(dot)org>
To: Scott Carey <scott(at)richrelevance(dot)com>
Cc: "david(at)lang(dot)hm" <david(at)lang(dot)hm>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: SSD performance
Date: 2009-02-04 13:06:12
Message-ID: AC1A7311-E5D3-4F0B-9F67-7AE32F360E05@torgo.978.org
Lists: pgsql-performance


On Feb 3, 2009, at 1:43 PM, Scott Carey wrote:

> I don’t think write caching on the disks is a risk to data integrity
> if you are configured correctly.
> Furthermore, these drives don’t use the RAM for write cache, they
> only use a bit of SRAM on the controller chip for that (and respect
> fsync), so write caching should be fine.
>
> Confirm that NCQ is on (a quick check in dmesg), I have seen
> degraded performance when the wrong SATA driver is in use on some
> linux configs, but your results indicate it's probably fine.
>

As it turns out, there's a bug (or at least a quirk) with the controller
in the Mac Pro under the Ubuntu drivers: the controller falls back into a
"works, but not as well as it could" mode, so NCQ is effectively
disabled. I haven't seen a workaround yet. I'm not sure whether this
problem exists on other distros (I used Ubuntu because I just wanted to
try a live CD). I also read some material from Intel suggesting that in
a lot of cases NCQ won't make that much difference on these drives,
because the thing can respond so fast anyway.
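For anyone checking their own setup, NCQ status shows up in the kernel log and in per-device sysfs; a quick sketch (sda is a placeholder device name):

```shell
# Look for the NCQ depth the AHCI driver negotiated,
# e.g. "ata1.00: ... NCQ (depth 31/32)"
dmesg | grep -i ncq

# Effective queue depth for one device; a value of 1 means
# NCQ is effectively off for that disk
cat /sys/block/sda/device/queue_depth
```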

> How much RAM is in that machine?
>

8GB

> Some suggested tests if you are looking for more things to try :D
> -- What effect does the following tuning have:
>
> Turn the I/O scheduler to ‘noop’ ( echo noop > /sys/block/<devices>/
> queue/scheduler) I’m assuming the current was cfq, deadline may
> also be interesting, anticipatory would have comically horrible
> results.

I only tested noop. If you think about it, it is the most logical one,
as an SSD really does not need an elevator at all: there is no
rotational latency or moving of the arm that the elevator was designed
to cope with.
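For anyone repeating the test: the scheduler switch is per device and does not survive a reboot. A minimal sketch (sda is a placeholder device name):

```shell
# Show the available schedulers; the active one is bracketed,
# e.g. "noop anticipatory deadline [cfq]"
cat /sys/block/sda/queue/scheduler

# Switch to noop for this run (needs root; not persistent)
echo noop > /sys/block/sda/queue/scheduler
```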

but, here are the results:
scale 50, 100 clients, 10x txns: 1600tps (a noticeable improvement!)
scale 1500, 100 clients, 10x txns: 434tps

I'm going to try to get some results for Raptors. There was another
post earlier today that got higher (though not ridiculously higher)
tps, but it required 14 15k disks instead of 2.
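For reference, numbers like the above come from a standard pgbench run; a sketch of the likely invocation (the database name is an assumption, not stated in the thread):

```shell
# Initialize the pgbench tables at scale factor 50
pgbench -i -s 50 pgbench

# 100 concurrent clients, 10 transactions per client,
# reporting transactions per second when done
pgbench -c 100 -t 10 pgbench
```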

>
> Tune upward the readahead value ( blockdev --setra <value> /dev/
> <device>) -- try 16384 (8MB) This probably won’t help that much
> for a pgbench tune, its more for large sequential scans in other
> workload types, and more important for rotating media.
> Generally speaking with SSD’s, tuning the above values does less
> than with hard drives.
>

Yeah, I don't think RA will help pgbench, and for my workloads it is
rather useless as they tend to be tons of random IO.
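For completeness, readahead can be inspected and set per device with blockdev; a quick sketch (sda is a placeholder device name):

```shell
# Show current readahead in 512-byte sectors (256 = 128KB is a common default)
blockdev --getra /dev/sda

# Raise it to 16384 sectors (8MB); mainly helps large sequential
# scans, and matters less on SSDs than on rotating media
blockdev --setra 16384 /dev/sda
```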

I've got some Raptors here too; I'll post numbers Wednesday or Thursday.

--
Jeff Trout <jeff(at)jefftrout(dot)com>
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/
