
Re: GDQ implementation

From: Hannu Krosing <hannu(at)2ndquadrant(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-cluster-hackers(at)postgresql(dot)org
Subject: Re: GDQ implementation
Date: 2010-05-20 23:04:17
Message-ID: 1274396657.12930.713.camel@hvost
Lists: pgsql-cluster-hackers
On Thu, 2010-05-20 at 20:51 +0100, Simon Riggs wrote:
> On Tue, 2010-05-18 at 01:53 +0200, Hannu Krosing wrote:
> > On Mon, 2010-05-17 at 14:46 -0700, Josh Berkus wrote:
> > > Jan, Marko, Simon,
> > > 
> > > I'm concerned that doing anything about the write overhead issue was 
> > > discarded almost immediately in this discussion.  
> > 
> > The only thing we can do about write overhead _on_master_ is to trade
> > it for transaction boundary reconstruction on the slave (or on a
> > special intermediate node), effectively implementing a "logical WAL"
> > in addition to (or as an extension of) the current WAL.
> That does sound pretty good to me.
> Fairly easy to make the existing triggers write XLOG_NOOP WAL records
> directly rather than writing to a queue table, which also gets logged to
> WAL. We could just skip the queue table altogether.
> Even better would be extending WAL format to include all the information
> you need, so it gets written to WAL just once.

Maybe it is also possible (less intrusive / easier to implement) to add
some things to WAL that have met resistance as general trigger-based
features, such as a "logical representation" of DDL. We already have the
equivalent of minimal ON COMMIT/ON ROLLBACK triggers in the form of
commit/rollback records in WAL.

Also, if we use extended WAL as the GDQ, then there should be a
possibility to write WAL in a form that supports only the "logical"
(plus, of course, Durability) features, but not full backup and
WAL-based replication.

And a possibility to have "user-defined" WAL records for specific tasks
would also be a nice and PostgreSQL-ish extensibility feature.
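As a rough sketch of what that extensibility could mean (everything here is invented, a toy model only): an extension registers a record kind plus a replay callback, and the recovery/consumer loop dispatches records it does not recognize as core to the registered handler.

```python
# Hypothetical sketch of "user-defined" WAL records: an extension
# registers a record kind and a replay callback; the replay loop
# dispatches each record to its handler.  All names are invented.
REPLAY_CALLBACKS = {}

def register_wal_record(kind, replay_fn):
    """Extension hook: claim a record kind and supply its replay logic."""
    REPLAY_CALLBACKS[kind] = replay_fn

def replay(records, state):
    """Apply a stream of (kind, payload) records to some consumer state."""
    for kind, payload in records:
        REPLAY_CALLBACKS[kind](state, payload)
    return state

# A toy consumer: a queue fed purely through custom WAL records,
# with no queue table on the master at all.
register_wal_record("queue_put", lambda st, p: st.append(p))
print(replay([("queue_put", "ev1"), ("queue_put", "ev2")], []))
# -> ['ev1', 'ev2']
```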

> > > This is not a trivial 
> > > issue for performance; it means that each row which is being tracked by 
> > > the GDQ needs to be written to disk a minimum of 4 times (once to WAL, 
> > > once to table, once to WAL for queue, once to queue).  
> > 
> > In reality the WAL record for the main table is forced to disk most
> > times in the same WAL write as the WAL record for the queue. And the
> > actual queue page does not reach disk at all if queue rotation is
> > fast.
> Josh, you really should do some measurements to show the overheads. Not
> sure you'll get people just to accept that assertion otherwise.
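Pending real measurements, here is the back-of-envelope arithmetic behind the two positions above, as a small model (the assumptions are mine: WAL records for the same transaction share one flush, and a fast-rotating queue's pages are recycled before a checkpoint ever writes them out):

```python
# Toy model of the per-row write counts discussed above.  A row change
# tracked by a trigger-based queue can touch four things: the
# main-table WAL record, the main-table page, the queue-table WAL
# record, and the queue-table page.
def physical_writes(shared_wal_flush, queue_page_reaches_disk):
    wal_flushes = 1 if shared_wal_flush else 2   # one fsync or two
    pages = 1 + (1 if queue_page_reaches_disk else 0)  # main page always
    return wal_flushes + pages

print(physical_writes(False, True))   # Josh's worst case
# -> 4
print(physical_writes(True, False))   # piggybacked flush, fast rotation
# -> 2
```

Which of the two numbers you actually see is exactly what the measurements would have to show.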

Hannu Krosing
PostgreSQL Scalability and Availability 
   Services, Consulting and Training

