Re: checkpointer continuous flushing - V16

From: Andres Freund <andres(at)anarazel(dot)de>
To: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
Cc: PostgreSQL Developers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: checkpointer continuous flushing - V16
Date: 2016-02-19 12:18:00
Message-ID: 20160219121800.dz7nhwyildyoo2cd@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2016-02-19 10:16:41 +0100, Fabien COELHO wrote:
> Below the results of a lot of tests with pgbench to exercise checkpoints on
> the above version when fetched.

Wow, that's a great test series.

> Overall comments:
> - sorting & flushing is basically always a winner
> - benchmarking with short runs on large databases is a bad idea
> the results are very different if a longer run is used
> (see andres00b vs andres00c)

Based on these results I think 32 will be a good default for
checkpoint_flush_after? There are a few cases where 64 proved
beneficial, and some where 32 is better. I've seen 64 perform a bit
better in some cases here too, but the differences were not large.

I gather that you didn't play with
backend_flush_after/bgwriter_flush_after, i.e. you left them at their
default values? Especially backend_flush_after can have a significant
positive or negative performance impact.
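For reference, the flush-related settings sit next to each other in postgresql.conf; a sketch with illustrative values (the per-setting comments are assumptions about typical sizing, not figures from this thread):

```
# Sketch of the *_flush_after knobs (values illustrative):
checkpoint_flush_after = 32     # pages written by checkpoints; 32 pages = 256kB
bgwriter_flush_after   = 64     # writeback hints for the background writer
backend_flush_after    = 0      # writeback hints for ordinary backends; 0 disables
```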

> 16 GB 2 cpu 8 cores
> 200 GB RAID1 HDD, ext4 FS
> Ubuntu 12.04 LTS (precise)

That's with 12.04's standard kernel?

> postgresql.conf:
> shared_buffers = 1GB
> max_wal_size = 1GB
> checkpoint_timeout = 300s
> checkpoint_completion_target = 0.8
> checkpoint_flush_after = { none, 0, 32, 64 }

Did you re-initdb between the runs?

I've seen massively varying performance differences due to autovacuum
triggered analyzes. It's not completely deterministic when those run,
and on larger-scale clusters an analyze can take ages, while holding a
snapshot.
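Recreating the cluster between runs would remove that source of variance; a hypothetical sketch (data directory, database name, and scale factor are illustrative, not from this thread):

```shell
# Re-initdb between benchmark runs so autovacuum/analyze state and
# bloat from the previous run don't carry over (paths illustrative).
pg_ctl stop -D "$PGDATA" -m fast
rm -rf "$PGDATA"
initdb -D "$PGDATA"
pg_ctl start -D "$PGDATA" -w
createdb bench
pgbench -i -s 1000 bench    # rebuild the test data from scratch
```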

> Hmmm, interesting: maintenance_work_mem seems to have some influence on
> performance, although it is not too consistent between settings, probably
> because as the memory is used to its limit the performance is quite
> sensitive to the available memory.

That's probably because of differing behaviour of autovacuum/vacuum,
which sometimes will have to do several scans of the tables if there are
too many dead tuples.
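A back-of-the-envelope calculation shows why the limit bites (assuming, as a simplification, that vacuum tracks each dead tuple as a 6-byte item pointer): one pass can cover roughly maintenance_work_mem divided by 6 bytes' worth of dead tuples, and anything beyond that forces another round of index scans.

```shell
# Dead tuples covered per vacuum pass at 64MB maintenance_work_mem,
# assuming ~6 bytes per dead-tuple item pointer (a simplification).
echo $((64 * 1024 * 1024 / 6))   # roughly 11 million tuples per pass
```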

Regards,

Andres
