Re: measuring the impact of increasing WAL segment size

From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: measuring the impact of increasing WAL segment size
Date: 2017-08-15 01:37:05
Message-ID: 20170815013705.tiwvvs7ikujxt3sh@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

Thanks for running this!

On 2017-08-15 03:27:00 +0200, Tomas Vondra wrote:
> Granted - this chart does not show latency, so it's not a complete
> picture.

That'd be quite useful to see here, too.

> Also, if you care about raw OLTP performance you're probably already running
> on flash, where this does not seem to be an issue. It's also not an issue if
> you have RAID controller with write cache, which can absorb those writes.
> And of course, those machines have reasonable dirty_background_bytes values
> (like 64MB or less).

The problem is that dirty_background_bytes = 64MB is *not* actually a
generally reasonable config, because it makes temp-table, disk-sort, and
similar operations flush far too aggressively.
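For reference, this is the kernel writeback knob in question; the commands below are just a sketch of how one would inspect and set it on Linux (the 64MB value is the one from the quoted mail, not a recommendation):

```shell
# Inspect the current background-writeback thresholds (Linux)
sysctl vm.dirty_background_bytes vm.dirty_bytes

# Illustrative: the 64MB setting discussed above (64 * 1024 * 1024 bytes)
sudo sysctl -w vm.dirty_background_bytes=67108864
```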

> b) The "flushing enabled" case seems to be much more sensitive to WAL
> segment size increases. It seems the throughput drops a bit (by 10-20%), for
> some segment sizes, and then recovers. The behavior seems to be smooth (not
> just a sudden drop for one segment size) but the value varies depending on
> the scale, test type (tpc-b / simple-update).

That's interesting. I presume you've not tested with separate data /
xlog disks?
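For anyone wanting to reproduce with that split, a sketch of the setup (paths are illustrative; note that at the time of this thread the WAL segment size was fixed at build time, so varying it meant rebuilding):

```shell
# Place WAL on its own device at cluster-creation time (example paths)
initdb -D /data/pgdata -X /waldisk/pg_wal

# Segment size was a compile-time option back then (size in MB, power of 2)
./configure --with-wal-segsize=64
```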

Greetings,

Andres Freund
