| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Shridhar Daithankar <shridhar_daithankar(at)persistent(dot)co(dot)in> |
| Cc: | Oleg Lebedev <oleg(dot)lebedev(at)waterford(dot)org>, Mary Edie Meredith <maryedie(at)osdl(dot)org>, Jenny Zhang <jenny(at)osdl(dot)org>, pgsql-performance <pgsql-performance(at)postgresql(dot)org> |
| Subject: | Re: TPC-R benchmarks |
| Date: | 2003-09-29 16:33:39 |
| Message-ID: | 25270.1064853219@sss.pgh.pa.us |
| Lists: | pgsql-performance |
Shridhar Daithankar <shridhar_daithankar(at)persistent(dot)co(dot)in> writes:
> Also if you have fast disk drives, you can reduce random page cost to 2 or 1.5.
Note however that most of the people who have found smaller
random_page_cost to be helpful are in situations where most of their
data fits in RAM. Reducing the cost towards 1 simply reflects the fact
that there's no sequential-fetch advantage when grabbing data that's
already in RAM.
When benchmarking with data sets considerably larger than available
buffer cache, I rather doubt that small random_page_cost would be a good
idea. Still, you might as well experiment to see.
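The experiment Tom suggests can be run per-session without touching postgresql.conf. A minimal sketch (the table and query here are placeholders, not from the thread; the default `random_page_cost` in PostgreSQL of this era was 4):

```sql
-- Compare planner choices under different random_page_cost values.
-- SET is session-local, so this is safe to try on a live system.
SET random_page_cost = 4;      -- default
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

SET random_page_cost = 1.5;    -- favors index scans more aggressively
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

RESET random_page_cost;        -- back to the configured default
```

If the data set is mostly cached, the lower setting will often switch the plan from a sequential scan to an index scan and run faster; on a data set much larger than the buffer cache, the seq-scan plan may win, which is the point of the caution above.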
regards, tom lane