From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, pgsql-hackers(at)postgresql(dot)org, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: Different compression methods for FPI
Date: 2021-06-16 07:18:23
Message-ID: YMmlvyVyAFlxZ+/H@paquier.xyz
Lists: pgsql-hackers

On Wed, Jun 16, 2021 at 09:39:57AM +0900, Michael Paquier wrote:
> What I'd like us to finish with here is one new compression method, able
> to cover a large range of cases as mentioned upthread, from
> low-CPU/low-compression to high-CPU/high-compression. It does not
> seem like a good idea to be stuck with an algo that only specializes
> in one or the other, for example.

So, I have been playing with that. The first thing I did before
running any benchmark was to check the logic of the patch, which I
have heavily cleaned up. This is still WIP (see the various XXX), and
it still includes all the compression methods we are discussing here,
but it now allows controlling the compression level and is in much
better shape. So that will help.

Attached are two patches. The first is the WIP version I have
simplified (there were many things I found confusing, from the set of
header dependencies added across the code to unnecessary code, the
split of the patches in the series as mentioned upthread, etc.), which
I have used for the benchmarks. The second patch is a tweak to grab
getrusage() stats for the lifetime of a backend.

The benchmark I have used is rather simple, as follows, with a value
of shared_buffers large enough to fit all the pages of the relation.
I then mounted the instance on a tmpfs while adapting wal_compression*
for each test. This generates a fixed amount of FPWs, large enough to
reduce any noise and still let any difference show:
#!/bin/bash
psql <<EOF
-- Change your configuration here; wal_compression_level is the GUC
-- added by the WIP patch.
SET wal_compression = zstd;
SET wal_compression_level = 20;
SELECT pg_backend_pid();
DROP TABLE IF EXISTS aa, results;
CREATE TABLE aa (a int);
CREATE TABLE results (phase text, position pg_lsn);
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
ALTER TABLE aa SET (fillfactor = 50);
-- Record the LSN around each phase to measure the WAL volume generated.
INSERT INTO results VALUES ('pre-insert', pg_current_wal_lsn());
INSERT INTO aa VALUES (generate_series(1,7000000)); -- 484MB
SELECT pg_size_pretty(pg_relation_size('aa'::regclass));
SELECT pg_prewarm('aa'::regclass);
CHECKPOINT;
INSERT INTO results VALUES ('pre-update', pg_current_wal_lsn());
-- The UPDATE touches every page of the freshly-checkpointed relation,
-- so it generates one FPW per page.
UPDATE aa SET a = 7000000 + a;
CHECKPOINT;
INSERT INTO results VALUES ('post-update', pg_current_wal_lsn());
SELECT * FROM results;
EOF
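
The fpi_size numbers below are derived from the LSNs recorded in the
results table. A minimal sketch of that computation, assuming the
results table populated by the script above:
psql <<EOF
-- WAL volume generated by the UPDATE phase, dominated by FPWs.
SELECT pg_size_pretty(
         pg_wal_lsn_diff(
           (SELECT position FROM results WHERE phase = 'post-update'),
           (SELECT position FROM results WHERE phase = 'pre-update')))
       AS fpi_size;
EOF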

The set of results, with various compression levels used, gives me the
following, where user_diff and sys_diff are the CPU seconds reported
by getrusage() for the backend (see also compression_results.sql
attached):
       wal_compression        | user_diff  | sys_diff | rel_size | fpi_size
------------------------------+------------+----------+----------+----------
 lz4 level=1                  |  24.219464 | 0.427996 | 429 MB   | 574 MB
 lz4 level=65535 (speed mode) |  24.154747 | 0.524067 | 429 MB   | 727 MB
 off                          |  24.069323 | 0.635622 | 429 MB   | 727 MB
 pglz                         |  36.123642 | 0.451949 | 429 MB   | 566 MB
 zlib level=1 (default)       |  27.454397 | 2.25989  | 429 MB   | 527 MB
 zlib level=9                 |  31.962234 | 2.160444 | 429 MB   | 527 MB
 zstd level=0                 |  24.766077 | 0.67174  | 429 MB   | 523 MB
 zstd level=20                | 114.429589 | 0.495938 | 429 MB   | 520 MB
 zstd level=3 (default)       |  25.218323 | 0.475974 | 429 MB   | 523 MB
(9 rows)
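
As an independent cross-check of fpi_size, the FPI volume can also be
read directly from the WAL segments with pg_waldump --stats, which
reports per-record-type totals including FPI bytes. A sketch, with
placeholder segment names:
# Substitute the first and last WAL segments produced by the run.
pg_waldump --stats 000000010000000000000010 000000010000000000000042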

There are a couple of things that stand out here:
- zlib has a much higher user CPU time than zstd and lz4, so we could
just let this one go.
- Everything is better than pglz, which is not a surprise.
- The level does not really influence the compression achieved (see
the sketch after this list for a quick way to verify that):
-- lz4 aims at being fast, so its default is actually the best
compression it can do. Using a much higher acceleration level reduces
the effect of compression to zero.
-- zstd has a high CPU consumption at high levels (level > 20 is
classified as ultra, which I have not tested), without helping much
with the amount of data compressed.
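
For a quick double-check of that speed/ratio trade-off outside
Postgres, the stock CLI tools ship benchmark modes; a sketch, using
any large file:
zstd -b1 -e19 somefile  # benchmark zstd levels 1..19 (20+ needs --ultra)
lz4 -b1 somefile        # benchmark lz4; it optimizes for speed, not ratio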

It seems to me that this would leave LZ4 or zstd as the obvious
choices, and that we don't really need to care about the compression
level, so let's just stick with the defaults without any extra GUCs.
Between the remaining two I would be tempted to choose LZ4. That's
consistent with what TOAST can use now. And even if it is a bit worse
than pglz in terms of compression in this case, it shows a CPU usage
close to the "off" case, which is nice (the sys_diff for lz4 with
level=1 is a bit suspicious, by the way). zstd has merits as well at
its default level.

In the end, I am not surprised by this result: LZ4 is designed to be
faster, while zstd compresses more and eats more CPU. Modern
compression algos are nice.
--
Michael

Attachment Content-Type Size
v10-0001-Add-more-options-for-wal_compression.patch text/x-diff 37.0 KB
v10-0002-Add-tweak-to-test-CPU-usage-within-a-session-for.patch text/x-diff 2.1 KB
compression_results.sql application/sql 1.5 KB
