
Re: wal_buffers

From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Robert Haas'" <robertmhaas(at)gmail(dot)com>, "'Peter Geoghegan'" <peter(at)2ndquadrant(dot)com>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: wal_buffers
Date: 2012-08-31 05:12:20
Message-ID: 003301cd8737$333bcac0$99b36040$
Lists: pgsql-hackers
On Thursday, August 30, 2012 7:14 PM Robert Haas wrote:
On Wed, Aug 29, 2012 at 10:25 PM, Peter Geoghegan <peter(at)2ndquadrant(dot)com> wrote:
> On 19 February 2012 05:24, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>> I have attached tps scatterplots.  The obvious conclusion appears to
>>> be that, with only 16MB of wal_buffers, the buffer "wraps around" with
>>> some regularity: we can't insert more WAL because the buffer we need
>>> to use still contains WAL that hasn't yet been fsync'd, leading to
>>> long stalls.  More buffer space ameliorates the problem.
>> Incidentally, I wondered if we could further improve group commit
>> performance by implementing commit_delay with a WaitLatch call, and
>> setting the latch in the event of WAL buffers wraparound (or rather, a
>> queued wraparound request - a segment switch needs WALWriteLock, which
>> the group commit leader holds for a relatively long time during the
>> delay). I'm not really sure how significant a win this might be,
>> though. There could be other types of contention, which could be
>> considerably more significant. I'll try and take a look at it next
>> week.

> I have a feeling that one of the big bottlenecks here is that we force
> an immediate fsync when we reach the end of a segment.  I think it was
> originally done that way to keep the code simple, and it does
> accomplish that, but it's not so hot for performance.  More generally,
> I think we really need to split WALWriteLock into two locks, one to
> protect the write position and the other to protect the flush
> position.  I think we're often ending up with a write (which is
> usually fast) waiting for a flush (which is often much slower) when in
> fact those things ought to be able to happen in parallel.

  This is a really good idea, splitting WALWriteLock into two locks,
  but in that case do we need separate handling for the OPEN_SYNC method,
  where write and flush happen together?

  And more about WAL: do you have any suggestions regarding the idea of a
  WALWriter in case the XLog buffers are nearly full?

With Regards,
Amit Kapila.

