From: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: write barrier question
Date: 2010-08-18 22:06:43
Message-ID: AANLkTims18LE6dxiYOTM-1Qv8T1zOEUE6+QsfSpQxbGi@mail.gmail.com
Lists: pgsql-performance
On Wed, Aug 18, 2010 at 1:25 PM, Yeb Havinga <yebhavinga(at)gmail(dot)com> wrote:
> Samuel Gendler wrote:
>>
>> When running pgbench on a db which fits easily into RAM (10% of RAM =
>> -s 380), I see transaction counts a little less than 5K. When I go to
>> 90% of RAM (-s 3420), the transaction rate dropped to around 1000 (at a
>> fairly wide range of concurrencies). At that point, I decided to
>> investigate the performance impact of write barriers.
>
> At 90% of RAM you're probably reading data as well, not only writing.
> Watching iostat -xk 1 or vmstat 1 during a test should confirm this. To find
> the maximum database size that fits comfortably in RAM you could try out
> http://github.com/gregs1104/pgbench-tools - my experience with it is that it
> takes less than 10 minutes to setup and run and after some time you get
> rewarded with nice pictures! :-)
Yes. I've intentionally sized it at 90% precisely so that I am
reading as well as writing, which is what an actual production
environment will resemble.
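For anyone reproducing this setup, the sizing arithmetic above can be sketched as a small script. It assumes roughly 16 MB of database per pgbench scale-factor unit (a common rule of thumb; measure on your own install to be sure) and a hypothetical 60 GB machine, which lands close to the -s 380 / -s 3420 figures quoted in the thread. The pgbench and iostat invocations at the end are left as comments since they need a live PostgreSQL instance.

```shell
#!/bin/sh
# Sketch: pick pgbench scale factors targeting a fraction of RAM.
# Assumption: ~16 MB of on-disk data per scale-factor unit.
ram_mb=61440              # hypothetical 60 GB machine
mb_per_scale=16

# 10% of RAM -> working set fits in cache (write-dominated test)
scale_10=$(( ram_mb / 10 / mb_per_scale ))
# 90% of RAM -> working set forces physical reads (mixed test)
scale_90=$(( ram_mb * 9 / 10 / mb_per_scale ))

echo "10% of RAM: pgbench -i -s $scale_10"
echo "90% of RAM: pgbench -i -s $scale_90"

# Then, while the benchmark runs in another terminal:
#   pgbench -c 16 -T 300 bench
#   iostat -xk 1    # watch r/s vs w/s to confirm reads appear at 90%
```

With these assumptions the script prints scale factors of 384 and 3456, in the same ballpark as the values discussed above; the exact numbers depend on the machine's actual RAM and measured bytes-per-scale.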