From: Mark Kirkwood <markir(at)paradise(dot)net(dot)nz>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Luke Lonergan <llonergan(at)greenplum(dot)com>, PGSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Doug Rady <drady(at)greenplum(dot)com>, Sherry Moore <sherry(dot)moore(at)sun(dot)com>
Subject: Re: Bug: Buffer cache is not scan resistant
Date: 2007-03-05 05:03:54
Message-ID: 45EBA4BA.7070507@paradise.net.nz
Lists: pgsql-hackers
Tom Lane wrote:
> "Luke Lonergan" <llonergan(at)greenplum(dot)com> writes:
>> The issue is summarized like this: the buffer cache in PGSQL is not "scan
>> resistant" as advertised.
>
> Sure it is. As near as I can tell, your real complaint is that the
> bufmgr doesn't attempt to limit its usage footprint to fit in L2 cache;
> which is hardly surprising considering it doesn't know the size of L2
> cache. That's not a consideration that we've ever taken into account.
>
To add a little to this - setting aside the scan-resistant point for the
moment... cranking shared_buffers down to less than the L2 cache size
seems to help *any* sequential scan immensely, even on quite modest HW.
For example, on a PIII 1.26GHz with 512KB L2 cache and 2GB RAM, running
SELECT count(*) FROM lineitem (a table of about 11GB):
Shared_buffers Elapsed
-------------- -------
400MB 101 s
128KB 74 s
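For anyone wanting to try the same comparison, a rough reproduction
sketch follows - it assumes a TPC-H-style lineitem table is already
loaded, and the 128kB value matches the experiment above:

```sql
-- Rough reproduction sketch (assumes a TPC-H-style lineitem table is loaded).
-- In postgresql.conf, set:
--     shared_buffers = 128kB      -- below the 512KB L2 cache
-- then restart the server and, in psql:
\timing on
SELECT count(*) FROM lineitem;    -- full sequential scan of ~11GB
-- Repeat with shared_buffers = 400MB and compare the elapsed times.
```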
When I've profiled this activity, I've seen a lot of time spent
searching for/allocating a new buffer for each page being fetched.
Obviously having fewer of them to search through will help, but keeping
less than an L2 cache's worth of 'em seems to help a whole lot!
Cheers
Mark