From: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
To: stange(at)rentec(dot)com
Cc: "Dave Cramer" <pg(at)fastcrypt(dot)com>, "Greg Stark" <gsstark(at)mit(dot)edu>, "Joshua Marsh" <icub3d(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Hardware/OS recommendations for large databases (
Date: 2005-11-18 13:46:58
Message-ID: BFA31B52.14012%llonergan@greenplum.com
Lists: pgsql-performance
Alan,
On 11/18/05 5:41 AM, "Alan Stange" <stange(at)rentec(dot)com> wrote:
>
> That's interesting, as I occasionally see more than 110MB/s of
> postgresql IO on our system. I'm using a 32KB block size, which has
> been a huge win in performance for our usage patterns. 300GB database
> with a lot of turnover. A vacuum analyze now takes about 3 hours, which
> is much shorter than before. Postgresql 8.1, dual opteron, 8GB memory,
> Linux 2.6.11, FC drives.
300GB / 3 hours = 27MB/s.
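As a quick sanity check on that arithmetic (a sketch; the 300 GB database size and 3-hour vacuum time are the figures quoted above):

```python
# Effective throughput implied by the quoted figures.
size_gb = 300   # database size from the message above
hours = 3       # vacuum analyze run time

mb_per_s = size_gb * 1024 / (hours * 3600)
print(round(mb_per_s, 1))  # roughly 28 MB/s, in line with the ~27 MB/s figure
```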
If you are using the 2.6 Linux kernel, you may be fooled into thinking you
burst more than your actual net I/O, because the way I/O statistics are
reported by tools like iostat and vmstat changed.
The only meaningful stats are (size of data) / (time to process data). Do a
sequential scan of one of your large tables that you know the size of, then
divide by the run time and report it.
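A minimal sketch of that measurement (the table name `big_table` is a placeholder, and since it needs a live database the psql steps are shown as comments while the arithmetic is code):

```python
# Net sequential-scan throughput = (size of data) / (time to process data).

def throughput_mb_per_s(size_bytes, elapsed_seconds):
    """I/O rate in MB/s for a scan that read size_bytes in elapsed_seconds."""
    return size_bytes / (1024 * 1024) / elapsed_seconds

# In psql (pg_relation_size() is available from PostgreSQL 8.1 on):
#   SELECT pg_relation_size('big_table');   -- on-disk size in bytes
#   \timing
#   SELECT count(*) FROM big_table;         -- forces a full sequential scan
# Then feed the two numbers in, e.g. a 10 GB table scanned in 100 seconds:
print(throughput_mb_per_s(10 * 1024**3, 100))  # 102.4 MB/s
```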
I'm compiling some new test data to make my point now.
Regards,
- Luke