Re: Protecting against unexpected zero-pages: proposal

From: Aidan Van Dyk <aidan(at)highrise(dot)ca>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Protecting against unexpected zero-pages: proposal
Date: 2010-11-09 21:23:39
Message-ID: AANLkTin2+w7UW-xVvTs2N4+uoNZMMg2SVqophoR7TkCR@mail.gmail.com
Lists: pgsql-hackers

On Tue, Nov 9, 2010 at 3:25 PM, Greg Stark <gsstark(at)mit(dot)edu> wrote:

> Then we might have to get rid of hint bits. But they're hint bits for
> a metadata file that already exists, creating another metadata file
> doesn't solve anything.

Is there any way to instrument the writes of dirty buffers from
shared memory, and see how many of the pages normally being written are
not backed by WAL (hint-only updates)? Just "dumping" those buffers
without writing them would allow at least *checksums* to go through
without losing all the benefits of the hint bits.

I've got a hunch (with no proof) that the penalty of not writing them
will be borne largely by small database installs. Large OLTP databases
probably won't have pages without a WAL'ed change and hint bits set,
and large data-warehouse ones will probably VACUUM FREEZE big tables
on load to avoid the huge write penalty the first time they scan the
tables...

</waving hands>

--
Aidan Van Dyk                                             Create like a god,
aidan(at)highrise(dot)ca                                       command like a king,
http://www.highrise.ca/                                   work like a slave.
