full_page_write and also compressed logging

From: James Mansion <james(at)mansionfamily(dot)plus(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: full_page_write and also compressed logging
Date: 2008-04-18 19:55:31
Message-ID: 4808FCB3.7090904@mansionfamily.plus.com
Lists: pgsql-performance
Has there ever been any analysis of the redundant write overhead of full
page writes?

I'm wondering if one could regard an 8k page as 64 128-byte paragraphs, or
32 256-byte paragraphs, each represented by a bit in a word. When a page is
dirtied by changes, some record is kept of which paragraphs are affected.
Then you could incrementally dump the pre-images of newly dirtied
paragraphs as you go, and the write cost would be much lower for the case
of scattered updates.

(I was also wondering about just doing pre-images based on changed byte
ranges, but the approach above is probably faster, doesn't dump the same
range twice, and may fit the existing flow more directly.)
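A minimal sketch of the paragraph-bitmap idea, in illustrative Python
rather than PostgreSQL internals (the names mark_dirty and
preimage_paragraphs are my own, not anything in the server):

```python
# Illustrative sketch: track which 128-byte "paragraphs" of an 8 kB page
# have been dirtied, one bit per paragraph, so only their pre-images need
# logging instead of the whole page.

PAGE_SIZE = 8192
PARA_SIZE = 128
N_PARAS = PAGE_SIZE // PARA_SIZE  # 64 paragraphs -> fits in one 64-bit word

def mark_dirty(bitmap: int, offset: int, length: int) -> int:
    """Set a bit for every paragraph overlapped by a changed byte range."""
    first = offset // PARA_SIZE
    last = (offset + length - 1) // PARA_SIZE
    for p in range(first, last + 1):
        bitmap |= 1 << p
    return bitmap

def preimage_paragraphs(page: bytes, bitmap: int):
    """Yield (paragraph_no, pre-image bytes) for each dirtied paragraph."""
    for p in range(N_PARAS):
        if bitmap & (1 << p):
            yield p, page[p * PARA_SIZE:(p + 1) * PARA_SIZE]
```

On this sketch, a scattered 20-byte update touches at most two paragraphs,
so at most 256 bytes of pre-image get dumped rather than the full 8 kB
page, and because the bitmap deduplicates, the same paragraph is never
dumped twice.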

Also - has any attempt been made to push log writes through a cheap
compressor, such as zlib on its lowest setting, or one like Jeff Bonwick's
for ZFS
(http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/os/compress.c)?

That would work well for largely textual tables (and I suspect a lot of
integer data too).
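To make the cost/benefit concrete, here is an illustrative sketch (plain
Python zlib, not anything wired into the server) of compressing a log
record at zlib's cheapest setting; the record contents are made up:

```python
import zlib

# Illustrative sketch: a repetitive, textual log-record payload, compressed
# with zlib at level 1 (the lowest, fastest setting) before being written.
record = b"INSERT INTO messages VALUES ('hello world, hello world');" * 20

compressed = zlib.compress(record, 1)      # cheapest zlib setting
restored = zlib.decompress(compressed)     # round-trips losslessly
```

Textual payloads like this compress substantially even at the lowest
setting, which is the kind of data the suggestion is aimed at.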

James

