| From: | Yeb Havinga <yebhavinga(at)gmail(dot)com> |
|---|---|
| To: | Samuel Gendler <sgendler(at)ideasculptor(dot)com> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: write barrier question |
| Date: | 2010-08-18 20:25:27 |
| Message-ID: | 4C6C41B7.4060709@gmail.com |
| Lists: | pgsql-performance |
Samuel Gendler wrote:
> When running pgbench on a db which fits easily into RAM (10% of RAM =
> -s 380), I see transaction counts a little less than 5K. When I go to
> 90% of RAM (-s 3420), transaction rate dropped to around 1000 ( at a
> fairly wide range of concurrencies). At that point, I decided to
> investigate the performance impact of write barriers.
At 90% of RAM you're probably reading data as well, not only writing.
Watching iostat -xk 1 or vmstat 1 during a test should confirm this. To
find the maximum database size that fits comfortably in RAM you could
try out http://github.com/gregs1104/pgbench-tools - my experience with
it is that it takes less than 10 minutes to set up and run, and after some
time you get rewarded with nice pictures! :-)
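For example, something along these lines (just a sketch - the database
name and client count are made up, adjust for your setup):

  # terminal 1: initialize pgbench tables at the scale under test, then run
  pgbench -i -s 3420 bench
  pgbench -c 16 -T 300 bench

  # terminal 2: compare read traffic (rkB/s) against write traffic (wkB/s)
  iostat -xk 1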
regards,
Yeb Havinga