Re: Why Wal_buffer is 64KB

From: "Pierre C" <lists(at)peufeu(dot)com>
To: "Jaime Casanova" <jcasanov(at)systemguards(dot)com(dot)ec>, "Tadipathri Raghu" <traghu(dot)dba(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Why Wal_buffer is 64KB
Date: 2010-03-25 18:14:38
Message-ID: op.u94yaoixeorkce@localhost
Lists: pgsql-performance


If you do large transactions, which emit large quantities of xlog, be
aware that while the previous xlog segment is being fsynced, no new writes
happen to the next segment. If you use large wal_buffers (more than 16 MB),
these buffers can absorb xlog data while the previous segment is being
fsynced, which allows higher throughput. However, large wal_buffers also
mean the COMMIT of a small transaction may find lots of data in the
buffers that no one has written or synced yet, which isn't good. If you use
dedicated spindle(s) for the xlog, you can make the walwriter extremely
aggressive (write every 5 ms, for instance) and use fdatasync. That way,
xlog gets written on almost every rotation of the disk. I've found this
configuration gives increased throughput without compromising latency, but
you need to test it yourself; it depends on your whole system.
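
To make that concrete, something like the following postgresql.conf
settings is what I mean; this is only a sketch, assuming a dedicated xlog
spindle, and the exact values are illustrative and need tuning on your own
hardware:

    # Large wal_buffers so new xlog can accumulate while the
    # previous segment is being fsynced (values illustrative)
    wal_buffers = 32MB

    # Very aggressive walwriter: flush the buffers every 5 ms
    wal_writer_delay = 5ms

    # fdatasync instead of a full fsync on the xlog
    wal_sync_method = fdatasync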
