From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Initial 9.2 pgbench write results
Date: 2012-02-26 01:03:30
Message-ID: CAMkU=1wKz5LVrB8z0CGrbdPB2J-agFg_GtL+d3HGCN_KGnw8SA@mail.gmail.com
Lists: pgsql-hackers
On Tue, Feb 14, 2012 at 12:25 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> On 02/14/2012 01:45 PM, Greg Smith wrote:
>>
>> scale=1000, db is 94% of RAM; clients=4
>> Version TPS
>> 9.0 535
>> 9.1 491 (-8.4% relative to 9.0)
>> 9.2 338 (-31.2% relative to 9.1)
>
>
> A second pass through this data noted that the maximum number of buffers
> cleaned by the background writer is <=2785 in 9.0/9.1, while it goes as
> high as 17345 in 9.2.
There is something strange about the data for Set 4 (9.1) at scale 1000.
The buf_alloc count varies a lot from run to run in that series
(by a factor of 60 from max to min), yet the TPS barely varies at all.
How can that be? If a transaction needs a page that is not in the
cache, it must allocate a buffer. So the only thing that could lower
the allocation count would be a higher cache hit rate, right? How
could there be so much variation in the cache hit rate from run to run
at the same scale?
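To make the puzzle concrete, here is a back-of-the-envelope sketch (not
anything measured in the thread) of the implied hit rate. It assumes,
purely for illustration, that each pgbench transaction touches roughly a
fixed number of distinct pages, so buf_alloc ~= transactions *
pages_per_txn * (1 - hit_rate); all the numbers below are made up:

```python
# Hypothetical sanity check: what cache hit rate would a given
# buf_alloc count imply, if TPS (and so pages touched) is held fixed?
# Assumption (not from the thread): each transaction touches roughly
# the same number of distinct pages.

def implied_hit_rate(buf_alloc, transactions, pages_per_txn):
    """Hit rate implied by buf_alloc over a run of known length."""
    return 1.0 - buf_alloc / (transactions * pages_per_txn)

txns = 500 * 600   # e.g. ~500 TPS over a 10-minute run (illustrative)
pages = 10         # assumed distinct pages touched per transaction

low  = implied_hit_rate(40_000, txns, pages)       # quiet run
high = implied_hit_rate(40_000 * 60, txns, pages)  # 60x more allocations

print(f"hit rate with few allocations: {low:.4f}")
print(f"hit rate with 60x allocations: {high:.4f}")
```

Under those assumptions a 60x swing in buf_alloc at constant TPS would
mean the hit rate collapsing from roughly 99% to roughly 20% between
runs, which is exactly what seems hard to explain at a fixed scale.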
Cheers,
Jeff