There's an xlogdump project on pgfoundry. However, it suffers from perennial
bitrot, since it has to maintain its own table of xlog record types and its
own code to decode each one.
Earlier I modified xlogdump to generate a CSV-loadable data set so I could do
some basic analysis and see which types of operations generate the most WAL
traffic. But I found it had bitrotted and needed some attention to bring it
up to date.
Now I wanted to repeat that analysis to measure the effect HOT has had on WAL
traffic, and again I find it has bitrotted, not least because of HOT itself
of course...
I think this module should be rewritten to depend more closely on the Postgres
source files. What I'm doing now is making an SRF in the style of the
pageinspect module which will read an arbitrary WAL file and generate records.
This has a big disadvantage compared to the original approach, namely that you
need a functioning Postgres instance of the same version to dissect WAL files.
But it also has a big advantage, namely that it will always be in sync. It
will just use the same RmgrTable to find the rm_name and call the rm_desc
method to decode the record. The result might not be quite as dense, since
the rm_desc functions are meant for debugging messages. We could address that
sometime with a new method if we wanted to.
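To illustrate the dispatch idea, here is a minimal, self-contained sketch. The
real RmgrTable lives in the backend and its rm_desc callbacks take Postgres
types, so this mock uses hypothetical stand-in types (MockRecord,
MockRmgrData, describe_record are all invented for illustration); only the
pattern of looking up the rmgr by id and delegating to its rm_desc callback
matches what the proposed SRF would do.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical mock of the resource-manager table: the real RmgrTable
 * maps an rmid to a name and an rm_desc callback that formats a
 * record's contents for human consumption. */
typedef struct MockRecord
{
    int rmid;   /* which resource manager wrote this record */
    int info;   /* record-type info bits (mocked as a plain int) */
} MockRecord;

typedef struct MockRmgrData
{
    const char *rm_name;
    void (*rm_desc)(char *buf, size_t len, const MockRecord *rec);
} MockRmgrData;

static void
heap_desc(char *buf, size_t len, const MockRecord *rec)
{
    snprintf(buf, len, "insert: info %d", rec->info);
}

static void
xact_desc(char *buf, size_t len, const MockRecord *rec)
{
    snprintf(buf, len, "commit: info %d", rec->info);
}

static const MockRmgrData MockRmgrTable[] = {
    {"Heap", heap_desc},
    {"Transaction", xact_desc},
};

/* Decode one record the way the proposed SRF would: look up the
 * rmgr by rmid and delegate the formatting to its rm_desc callback,
 * so the decoder never needs its own per-record-type knowledge. */
static void
describe_record(const MockRecord *rec, char *buf, size_t len)
{
    const MockRmgrData *rm = &MockRmgrTable[rec->rmid];
    int n = snprintf(buf, len, "%s: ", rm->rm_name);

    rm->rm_desc(buf + n, len - n, rec);
}
```

Because the table and the callbacks come from the server itself, a new record
type picked up at compile time is decoded with no change to the dump code,
which is exactly the bitrot-avoidance being argued for above.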
I'm thinking of actually dropping it directly into the pageinspect contrib
module. It's not quite an exact fit, but it doesn't seem to deserve its own
contrib module, and it's likely to suffer the same bitrot problem if it lives
on its own.
Incidentally, I would like to call xlog.c:RecordIsValid(), which is currently
a static function. Any objection to exporting it? It doesn't depend on any
external xlog.c state.
Ask me about EnterpriseDB's On-Demand Production Tuning