Re: wal_compression=zstd

From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Justin Pryzby <pryzby(at)telsasoft(dot)com>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: wal_compression=zstd
Date: 2022-03-09 08:06:57
Lists: pgsql-hackers

On Sat, Mar 05, 2022 at 07:26:39PM +0900, Michael Paquier wrote:
> Repeatability and randomness of data counts, we could have for example
> one case with a set of 5~7 int attributes, a second with text values
> that include random data, up to 10~12 bytes each to count on the tuple
> header to be able to compress some data, and a third with more
> repeatable data, like one attribute with an int column populate
> with generate_series(). Just to give an idea.

And that's what I did as of the attached set of tests:
- Cluster on tmpfs.
- max_wal_size, min_wal_size at 2GB and shared_buffers at 1GB, aka
large enough to include the full data set in memory.
- Rather than using Justin's full patch set, I have just patched the
code in xloginsert.c to switch the level.
- One case with a table that uses one int attribute, with rather
repetitive data worth 484MB.
- One case with a table using (int, text), where the text data is made
of 10~11 bytes of random data, worth ~340MB.
- Use pg_prewarm to load the data into shared buffers. With the
cluster mounted on a tmpfs that should not matter though.
- Both tables have a fillfactor at 50 to give room to the updates.
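The exact script is in the attached tarball; as a rough sketch (table names and row counts here are placeholders for illustration, not tuned to reproduce the exact sizes above), the two data sets could be built like:

```sql
-- Case 1: one int column with repetitive data, fillfactor 50
-- to leave room for the HOT-less full-table UPDATE.
CREATE TABLE t_int (a int) WITH (fillfactor = 50);
INSERT INTO t_int SELECT generate_series(1, 7000000);

-- Case 2: (int, text) where the text is 10~11 bytes of random data.
CREATE TABLE t_text (a int, b text) WITH (fillfactor = 50);
INSERT INTO t_text
  SELECT i, substr(md5(random()::text), 1, 10 + (i % 2))
  FROM generate_series(1, 5000000) AS i;

-- Load both relations into shared buffers before the measured UPDATE.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('t_int');
SELECT pg_prewarm('t_text');
```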

I have measured the CPU usage with a toy extension, also attached,
called pg_rusage. It is a simple wrapper around upstream's
pg_rusage.c, providing two SQL functions to initialize a rusage
snapshot and print its data; these are called just before and after
the FPIs are generated (aka the UPDATE query that rewrites the whole
table in the script).
The quickly-hacked test script and the results are in test.tar.gz, for
reference. The toy extension is pg_rusage.tar.gz.

Here are the results I compiled, as of results_format.sql in the
tarball attached:
             descr             | rel_size | fpi_size | time_s
-------------------------------+----------+----------+--------
 int column no compression     | 429 MB   | 727 MB   |  13.15
 int column zstd default level | 429 MB   | 523 MB   |  14.23
 int column zstd level 1       | 429 MB   | 524 MB   |  13.94
 int column zstd level 10      | 429 MB   | 523 MB   |  23.46
 int column zstd level 19      | 429 MB   | 523 MB   | 103.71
 int column lz4 default level  | 429 MB   | 575 MB   |  13.37
 int/text no compression       | 344 MB   | 558 MB   |  10.08
 int/text lz4 default level    | 344 MB   | 463 MB   |  10.29
 int/text zstd default level   | 344 MB   | 415 MB   |  11.48
 int/text zstd level 1         | 344 MB   | 418 MB   |  11.25
 int/text zstd level 10        | 344 MB   | 415 MB   |  20.59
 int/text zstd level 19        | 344 MB   | 413 MB   |  62.64
(12 rows)

I did not expect zstd to be this slow at a level of ~10, actually.
The runtime (elapsed CPU time) got severely impacted at level 19,
which I ran just for fun to see how it would compare to level 10.
There is only a slight difference between the default level and level
1: the compressed size barely changes, and neither does the CPU
usage.

While on it, attached is an updated patch that I have tweaked before
running my own tests.

In the end, I'd still like to think that we'd better stick with the
default level for this parameter, which is also what upstream
suggests. So I would like to move on with that for this patch.

Attachment Content-Type Size
test.tar.gz application/gzip 1.3 KB
pg_rusage.tar.gz application/gzip 1005 bytes
v2-0001-add-wal_compression-zstd.patch text/x-diff 7.9 KB
