Re: 500 tpsQL + WAL log implementation

From: "Curtis Faith" <curtis(at)galtair(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Curtis Faith" <curtis(at)galtair(dot)com>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: 500 tpsQL + WAL log implementation
Date: 2002-11-12 14:21:43
Message-ID: DMEEJMCDOJAKPPFACMPMAECBCFAA.curtis@galtair.com
Lists: pgsql-hackers

Tom Lane wrote:
> What can you do *without* using a raw partition?
>
> I dislike that idea for two reasons: portability and security. The
> portability disadvantages are obvious. And in ordinary system setups
> Postgres would have to run as root in order to write on a raw partition.
>
> It occurs to me that the same technique could be used without any raw
> device access. Preallocate a large WAL file and apply the method within
> it. You'll have more noise in the measurements due to greater
> variability in the physical positioning of the blocks --- but it's
> rather illusory to imagine that you know the disk geometry with any
> accuracy anyway. Modern drives play a lot of games under the hood.

A write to a raw device file is immediate and completes with minimal system
overhead. I'll test a file-based approach using a write followed by an
immediate fdatasync() and see whether it approaches the speed of raw
partition access. I suspect we'll get decent performance, perhaps only 10% to
15% slower.
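
Concretely, the file-based test I have in mind looks roughly like the sketch
below (not PostgreSQL code; the file name, sector size, and write count are
placeholders, and the file is assumed to be preallocated). It just times
sector-sized pwrite() calls, each followed by fdatasync():

/*
 * Rough micro-benchmark sketch: time NWRITES sector-sized writes, each
 * followed by fdatasync(), against a preallocated file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define SECTOR   512
#define NWRITES  1000

int
main(void)
{
    char       *buf = malloc(SECTOR);
    int         fd, i;
    struct timeval start, stop;
    double      secs;

    /* O_DSYNC at open time would be an alternative to explicit fdatasync() */
    fd = open("walbench.dat", O_WRONLY);
    if (fd < 0 || buf == NULL)
    {
        perror("setup");
        return 1;
    }
    memset(buf, 'x', SECTOR);

    gettimeofday(&start, NULL);
    for (i = 0; i < NWRITES; i++)
    {
        /* advance one sector per write, as in the staggered-offset test */
        if (pwrite(fd, buf, SECTOR, (off_t) i * SECTOR) != SECTOR ||
            fdatasync(fd) != 0)
        {
            perror("write/sync");
            return 1;
        }
    }
    gettimeofday(&stop, NULL);

    secs = (stop.tv_sec - start.tv_sec) +
           (stop.tv_usec - start.tv_usec) / 1000000.0;
    printf("%d synced writes in %.3f s (%.0f writes/s)\n",
           NWRITES, secs, NWRITES / secs);
    return 0;
}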

As you mention, there is nothing exact about the technique, so we should be
able to get similar improvements with a file-based system. I've been able to
get over 1,500 writes per second confirmed to disk using raw partition
writes, each slightly offset ahead of the other, yet only somewhere between
500 and 650 per second on a sustained basis using the technique I described,
because of the noise in the geometry measurements and the variable timing of
the writes themselves.
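
To put those numbers in perspective, assuming a 10,000 RPM drive for the sake
of argument: the disk completes roughly 10,000 / 60, or about 167, rotations
per second, so the conventional one-commit-per-rotation approach tops out
around 167 synced writes per second. 1,500 confirmed writes per second would
then correspond to roughly nine writes landing per rotation, and 500 to 650
sustained to roughly three or four.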

>
> This scares me quite a bit too. The reason that the existing
> implementation maxes out at one WAL write per rotation is that for small
> transactions it's having to repeatedly write the same disk sector. You
> could only get around that by writing multiple versions of the same WAL
> page at different disk locations. Reliably reconstructing what data to
> use is not something that I'm prepared to accept on a handwave...

I'm pretty sure this could be done very reliably, at the cost of slightly
slower reads during redo after a failure.

I figured that whenever a transaction wrote to the log, it would set the log
offset marker for new transactions to force the next transaction to use a new
block. This wastes space, but the waste could be partially offset by using
writes smaller than the 8K block size, aligned on disk sector boundaries
(512 bytes for my disk). This also makes it fairly easy to ensure that the
log can be reconstructed in order, since there would be no partial block
writes to worry about.
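
As a concrete sketch of the offset marker idea (the names here are made up
for illustration, not anything in the current code): after a transaction
writes its log data, the shared insert position would simply be rounded up to
the next sector boundary so the next transaction starts a fresh sector.

/*
 * Hypothetical helper, not existing PostgreSQL code: round the shared
 * insert position up to the next physical-sector boundary so the next
 * transaction starts on a fresh sector and no sector is ever rewritten
 * in place.
 */
#include <stdint.h>

#define SECTOR_SIZE 512         /* physical sector size; 512 on my disk */

static uint64_t
advance_to_next_sector(uint64_t insert_offset)
{
    /* round up to a SECTOR_SIZE boundary (SECTOR_SIZE is a power of two) */
    return (insert_offset + SECTOR_SIZE - 1) & ~((uint64_t) (SECTOR_SIZE - 1));
}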

I believe that 4 to 8 full rotations' worth of usable blocks could be
maintained, with blocks written to the lowest-offset tracks first unless no
free blocks of sufficient size were available. This would probably result in
90% to 95% utilization of the blocks (disregarding waste inside the blocks
themselves). When the lowest-offset track filled up sufficiently, another
empty track would be added to the usable blocks list and the lowest-offset
track taken off it.
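
A rough sketch of the bookkeeping I have in mind (all names and the
replacement details are illustrative, and I'm assuming uniform track sizes,
which real zoned drives don't actually have):

#include <stdint.h>
#include <string.h>

#define WINDOW_TRACKS 6         /* 4 to 8 rotations' worth of usable blocks */

typedef struct
{
    uint32_t    track_no;       /* physical track (or track-sized region) */
    uint32_t    sectors_total;  /* sectors per track (assumed uniform) */
    uint32_t    sectors_used;   /* sectors already handed out */
} TrackSlot;

typedef struct
{
    TrackSlot   window[WINDOW_TRACKS];  /* ordered lowest offset first */
    uint32_t    next_empty_track;       /* next track to pull in */
} TrackWindow;

/*
 * Allocate from the lowest-offset track with enough room; if none has
 * room, retire the lowest-offset track and pull in the next empty one.
 */
static TrackSlot *
pick_track(TrackWindow *tw, uint32_t sectors_needed)
{
    int         i;

    for (i = 0; i < WINDOW_TRACKS; i++)
    {
        TrackSlot  *slot = &tw->window[i];

        if (slot->sectors_total - slot->sectors_used >= sectors_needed)
            return slot;
    }

    /* shift the window down and append a fresh, empty track at the high end */
    memmove(&tw->window[0], &tw->window[1],
            (WINDOW_TRACKS - 1) * sizeof(TrackSlot));
    tw->window[WINDOW_TRACKS - 1].track_no = tw->next_empty_track++;
    tw->window[WINDOW_TRACKS - 1].sectors_used = 0;
    return &tw->window[WINDOW_TRACKS - 1];
}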

This would ensure that a read of 4 to 8 tracks (the exact number would be
fixed for any given installation) could reconstruct the order of the WAL log,
since at no time would blocks be out of order beyond that range.
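
On the recovery side, reconstruction then amounts to reading the whole
fixed-size window and sorting its sectors by the logical position each one
carries in its header; a sketch (structure and field names made up for
illustration):

#include <stdint.h>
#include <stdlib.h>

/*
 * Each sector-sized log block carries its logical WAL position in a
 * small header, so physical order within the window doesn't matter.
 */
typedef struct
{
    uint64_t    log_seq;        /* logical WAL position of this block */
    uint32_t    data_len;       /* valid bytes in data[] */
    char        data[512 - 12]; /* rest of the 512-byte sector */
} WalSector;

static int
cmp_log_seq(const void *a, const void *b)
{
    uint64_t    la = ((const WalSector *) a)->log_seq;
    uint64_t    lb = ((const WalSector *) b)->log_seq;

    return (la < lb) ? -1 : (la > lb) ? 1 : 0;
}

/*
 * Sort all sectors read from the 4-to-8-track window into logical order
 * before replay; never-written sectors (log_seq == 0) would be filtered
 * out by the caller.
 */
static void
order_window(WalSector *sectors, size_t nsectors)
{
    qsort(sectors, nsectors, sizeof(WalSector), cmp_log_seq);
}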

Disk space is much cheaper than CPU and memory, so I think a logging system
that used as much as three or four times the space but was three or four
times faster would be a worthwhile improvement for systems where update or
insert volume is very heavy. Obviously, this needs to be an option, not the
default configuration.

- Curtis
