Stef Telford wrote:
> Hello Mark,
> Okay, so, take all of this with a pinch of salt, but I have the
> same config (pretty much) as you, with checkpoint_segments raised to
> 192. The 'test' database server is a Q8300, 8GB RAM, 2 x 7200rpm SATA
> disks on the motherboard controller, which I then striped together
> with LVM: lvcreate -n data_lv -i 2 -I 64 -L 60G mylv (expandable
> under LVM2). That gives me a stripe size of 64 kB. Running pgbench
> with the same scaling factors:
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> number of clients: 24
> number of transactions per client: 12000
> number of transactions actually processed: 288000/288000
> tps = 1398.907206 (including connections establishing)
> tps = 1399.233785 (excluding connections establishing)
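For reference, the output above looks like pgbench's standard report; a minimal sketch of the commands that would produce a run like it (the database name "pgbench" is an assumption — adjust to your setup):

```shell
# Initialize a pgbench database at scaling factor 100
# (creates ~10 million rows in pgbench_accounts).
pgbench -i -s 100 pgbench

# Run the TPC-B-like test: 24 clients, 12000 transactions per client.
pgbench -c 24 -t 12000 pgbench
```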
> It's also running ext4dev, but this is the 'playground' server,
> not the real iron (and I dread doing that on the real iron). In short,
> I think that the chunksize/stripesize is killing you. Personally, I
> would go for 64 or 128 .. that's just my 2c .. feel free to
> ignore/scorn/laugh as applicable ;)
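Since the thread is about RAID 10 chunk size, here is a hedged sketch of creating an md RAID 10 array with the suggested 64 kB chunk. The device names are placeholders and the command is destructive — illustration only, not a recommendation for any particular hardware:

```shell
# Create a 4-disk RAID 10 array with a 64 kB chunk size.
# WARNING: destroys existing data on the listed partitions.
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=64 \
      /dev/sd[bcde]1

# Verify the chunk size the array was actually built with.
mdadm --detail /dev/md0 | grep -i chunk
```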
Stef - I suspect that your (quite high) tps is because your SATA disks
are not honoring the fsync() request for each commit. SCSI/SAS disks
tend, by default, to flush their cache on fsync(); ATA/SATA disks tend
not to. Some filesystems (e.g. xfs) will try to work around this with
write-barrier support, but it depends on the disk firmware.
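One quick sanity check is to measure synchronous 8 kB writes (the PostgreSQL block size) with GNU dd's oflag=dsync. A 7200 rpm disk that really flushes on each write can manage at most a couple of hundred such writes per second; thousands per second suggest the drive is acknowledging from its volatile cache. A minimal sketch (the path and count are arbitrary):

```shell
# Write 500 blocks of 8 kB, syncing each write to stable storage
# before issuing the next (O_DSYNC). dd prints the achieved rate;
# dividing it by 8 kB gives writes/sec, an upper bound on commit TPS.
dd if=/dev/zero of=/tmp/fsync_test bs=8k count=500 oflag=dsync
```

hdparm -W /dev/sdX can additionally report whether the drive's volatile write cache is enabled.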
Thanks for your reply!
[pgsql-performance mailing list]