Re: Using pgiosim realistically

From: Jeff <threshar(at)torgo(dot)978(dot)org>
To: John Rouillard <rouilj(at)renesys(dot)com>
Cc: Jeff <threshar(at)torgo(dot)dyndns-server(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Using pgiosim realistically
Date: 2011-05-16 17:54:06
Message-ID: CF55DDA4-5AC1-47AE-84E8-08868D1BF494@torgo.978.org
Lists: pgsql-performance


On May 16, 2011, at 1:06 PM, John Rouillard wrote:

>> that is a #define in pgiosim.c
>
> So which is a better test: modifying the #define to allow specifying
> 200-300 1GB files, or using 64 files but increasing each of my files
> to 2-3GB, for a total file size two or three times the memory in my
> server (96GB)?
>

I tend to make 10G chunks with dd and run pgiosim over them:
dd if=/dev/zero of=bigfile bs=1M count=10240
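
For your 96GB box that means roughly 200-300GB of files to stay 2-3x
over memory. A rough sketch (the file count and the bigfile.$i names
are just examples I picked, adjust to taste):

# 20 x 10G files = ~200GB, about 2x the RAM on a 96GB box
for i in $(seq 1 20); do
    dd if=/dev/zero of=bigfile.$i bs=1M count=10240
done
pgiosim bigfile.*

20 files also keeps you under the 64-file #define, so no rebuild needed.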

>> the -w param to pgiosim has it rewrite blocks out as it runs. (it is
>> a percentage).
>
> Yup, I was running with that and getting low enough numbers that I
> switched to pure read tests. It looks like I just need multiple
> threads so I can have multiple reads/writes in flight at the same
> time.
>

Yep - you need multiple threads to get the maximum throughput out of your I/O.
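
If the pgiosim build you have doesn't expose a thread-count option, a
crude stand-in (an untested sketch) is to launch several copies against
the same files and add up the throughput each one reports:

# run 4 pgiosim processes concurrently, wait for all to finish
for i in 1 2 3 4; do
    pgiosim bigfile.* &
done
wait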

--
Jeff Trout <jeff(at)jefftrout(dot)com>
http://www.stuarthamm.net/
http://www.dellsmartexitin.com/
