On Thu, Jun 30, 2011 at 12:31 AM, Jim Nasby <jim(at)nasby(dot)net> wrote:
> Would it be reasonable to keep a second-level cache that stores individual XIDs instead of blocks? That would provide protection for XIDs that are extremely common but don't fit well with the pattern of XID ranges we're caching. I would expect this to happen if you had a transaction that touched a bunch of data (i.e., a bulk load or update) some time ago (so the other XIDs around it are less likely to be interesting) but is not old enough to have been frozen yet. Obviously you couldn't keep too many XIDs in this secondary cache, but if you're just trying to prevent certain pathological cases then hopefully you wouldn't need to keep that many.
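For illustration, the two-level scheme described above might look something like this. This is a hypothetical sketch, not code from any patch; the type names, sizes (XIDS_PER_BLOCK, NBLOCKS, NSINGLE), and the round-robin replacement policy are all assumptions made up for the example:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define XIDS_PER_BLOCK 100   /* width of a cached XID range (assumed) */
#define NBLOCKS 4            /* first level: cached range starts (assumed) */
#define NSINGLE 8            /* second level: individual hot XIDs (assumed) */

typedef struct
{
    TransactionId block_start[NBLOCKS]; /* ranges known all-visible */
    int           nblocks;
    TransactionId single[NSINGLE];      /* hot XIDs outside any cached range */
    int           nsingle;
    int           next_single;          /* round-robin replacement slot */
} XidCache;

/* true if xid is covered by either cache level */
static bool
cache_lookup(const XidCache *c, TransactionId xid)
{
    for (int i = 0; i < c->nblocks; i++)
        if (xid >= c->block_start[i] &&
            xid < c->block_start[i] + XIDS_PER_BLOCK)
            return true;
    for (int i = 0; i < c->nsingle; i++)
        if (c->single[i] == xid)
            return true;
    return false;
}

/* remember an individual XID that missed the block-level cache */
static void
cache_remember_single(XidCache *c, TransactionId xid)
{
    if (cache_lookup(c, xid))
        return;
    if (c->nsingle < NSINGLE)
        c->single[c->nsingle++] = xid;
    else
    {
        /* evict the oldest single-XID entry, round-robin */
        c->single[c->next_single] = xid;
        c->next_single = (c->next_single + 1) % NSINGLE;
    }
}
```

The point of the second array is that a lone hot XID (say, from an old bulk load) occupies one slot instead of forcing a whole range block into the first-level cache.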
Maybe, but I think that's probably still papering over the problem.
I'd really like to find an algorithm that bounds how often we can
flush a page out of the cache to some number of tuples significantly
greater than 100. The one I suggested yesterday has that property,
for example, although it may have other problems I'm not thinking of.
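One generic way to get that kind of bound (a hypothetical sketch only; the algorithm referred to above is not described here, and MIN_LOOKUPS_PER_EVICTION is an assumed name and value) is to refuse to evict any cached page until a minimum number of lookups have been served since the last eviction, so a page can be flushed at most once per that many tuple-visibility checks:

```c
#include <stdbool.h>

#define MIN_LOOKUPS_PER_EVICTION 1000  /* "significantly greater than 100" */

typedef struct
{
    long lookups_since_eviction;
} EvictionGovernor;

/* call on every cache lookup, hit or miss */
static void
governor_note_lookup(EvictionGovernor *g)
{
    g->lookups_since_eviction++;
}

/*
 * A miss may replace a cached page only if enough lookups have been
 * served since the last eviction; otherwise the miss is handled
 * without disturbing the cache.  This caps the eviction rate at one
 * per MIN_LOOKUPS_PER_EVICTION lookups.
 */
static bool
governor_may_evict(EvictionGovernor *g)
{
    if (g->lookups_since_eviction >= MIN_LOOKUPS_PER_EVICTION)
    {
        g->lookups_since_eviction = 0;
        return true;
    }
    return false;
}
```

Under this policy an adversarial access pattern can still cause misses, but it cannot cause cache thrashing faster than the configured rate.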