Josh, thanks for your response and insight. At the moment, my database
is trivially small (a dump size of ~40MB, growing by a few hundred kB a
day, though that will accelerate as usage increases). Most of this is to
better understand PG and filesystems. A couple of short follow-ups:
Josh Berkus wrote:
> I'd follow up with a more "realistic" test such as DBT2 or Jan-TPCW.
I'll look into those. I'm also looking at generating a long transaction
list from my existing application and replaying it to see what
performance looks like with some realistic queries.
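In case it's useful, here's roughly the shape of that replay idea; the
table and statements below are made up, but pgbench's custom-script
mode (-f) is real:

    # capture statements from the live application for a while
    # (postgresql.conf):
    log_statement = 'all'

    # distill the log into a script of representative statements,
    # then replay it with multiple clients:
    $ cat replay.sql
    SELECT balance FROM accounts WHERE id = 42;              -- placeholder
    UPDATE accounts SET balance = balance + 1 WHERE id = 42; -- placeholder
    $ pgbench -n -c 8 -t 1000 -f replay.sql mydb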
>> * These are all with a default postgresql.conf. My plan was to test it
>> out of the box to determine filesystem/array setup first and then tune
> You're potentially masking some underlying issues.
Is work_mem the most critical of these, or what else should I look at?
With a DB as small as mine currently is, I expect performance would
skyrocket once available memory exceeds the database size. Are there
any other tricks for keeping the whole DB in memory?
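For reference, here are the knobs I plan to start with when I do tune;
the values below are illustrative guesses for a dedicated box with ~2GB
of RAM, not recommendations:

    # postgresql.conf -- illustrative starting points only
    shared_buffers = 32768         # 8kB pages, ~256MB of buffer cache
    work_mem = 16384               # kB per sort/hash, per backend
    effective_cache_size = 196608  # 8kB pages, ~1.5GB; hints the planner
                                   # about the OS cache
    checkpoint_segments = 16       # spread checkpoint I/O out
    wal_buffers = 64               # 8kB pages of WAL buffer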
> Maybe ... depends on the server. In practice, I've found that 95% of OS
> files get cache semi-permanently in RAM and there's very little disk
> activity associated with the OS files. As a result, putting the XLog on
> the disk with the OS files works fine.
I'll try a 4-disk RAID 10 with the xlog on the OS partition next and
see how that works. Since pgbench doesn't do much realistic writing,
though, I expect I'll only see a minor improvement?
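For the record, my plan for relocating the xlog is the usual
stop/move/symlink dance (paths below are made up for illustration):

    # with the server stopped, move pg_xlog to the OS disk and
    # symlink it back into the data directory
    $ pg_ctl -D /var/lib/pgsql/data stop
    $ mv /var/lib/pgsql/data/pg_xlog /os-disk/pg_xlog
    $ ln -s /os-disk/pg_xlog /var/lib/pgsql/data/pg_xlog
    $ pg_ctl -D /var/lib/pgsql/data start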
> Hey, there is no such thing as enough testing so your study is very
When I finish, I will post my spreadsheet for reference. Thanks,