Re: libpq compression

From: Daniil Zakhlystov <usernamedt(at)yandex-team(dot)ru>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Konstantin Knizhnik <knizhnik(at)garret(dot)ru>
Cc: pryzby(at)telsasoft(dot)com, x4mmm(at)yandex-team(dot)ru, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: libpq compression
Date: 2021-02-08 19:23:53
Message-ID: 470E411E-681D-46A2-A1E9-6DE11B5F59F3@yandex-team.ru
Lists: pgsql-hackers

Hi everyone,

I’ve been experimenting with an on-the-fly compression switch lately and have some updates.

> On Dec 22, 2020, at 10:42 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> Hmm, I assumed that if the compression buffers were flushed on the
> sending side, and if all the data produced on the sending side were
> transmitted to the receiver, the receiving side would then return
> everything up to the point of the flush. However, now that I think
> about it, there's no guarantee that any particular compression library
> would actually behave that way. I wonder what actually happens in
> practice with the libraries we care about?

> I'm not sure about the details, but the general idea seems like it
> might be worth considering. If we choose a compression method that is
> intended for streaming compression and decompression and whose library
> handles compression flushes sensibly, then we might not really need to
> go this way to make it work. But, on the other hand, this method has a
> certain elegance that just compressing everything lacks, and might
> allow some useful flexibility. On the third hand, restarting
> compression for every new set of messages might really hurt the
> compression ratio in some scenarios. I'm not sure what is best.

Earlier in the thread, we discussed introducing a new message type (CompressedMessage),
so I came up with two main approaches to sending compressed data:

1. Sending the compressed message type without the message length, followed by continuous compressed data.
2. Sending the compressed data packed into messages with a specified length (pretty much like CopyData).

The first approach allows sending raw compressed data without any additional framing, but it has some downsides:
- to determine where the compressed data ends, the receiver has to decompress all of it
- in most cases (at least with ZSTD and zlib), the sender has to end the compression stream so that the decompressor
can detect the end of the compressed data on the receiving side. After that, a new compression context has to be
initialized (in the case of ZSTD, a new frame started), which may hurt the compression ratio (see the sketch after this list).
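
To make the ZSTD part concrete, here is a minimal sketch of the difference between flushing and ending a stream, using the plain ZSTD streaming API. This is not code from the patch; compress_chunk and the "last" flag are just illustrative names:

#include <stdbool.h>
#include <zstd.h>

/*
 * ZSTD_e_flush makes everything produced so far decodable but keeps the
 * current frame open, so the receiver cannot tell where the data ends
 * without decompressing it.  ZSTD_e_end writes the frame epilogue, after
 * which a new frame (and a fresh compression history) has to be started.
 */
static size_t
compress_chunk(ZSTD_CCtx *cctx, ZSTD_outBuffer *out, ZSTD_inBuffer *in,
               bool last)
{
    return ZSTD_compressStream2(cctx, out, in,
                                last ? ZSTD_e_end : ZSTD_e_flush);
}

With the first approach, every compressed run has to finish with ZSTD_e_end; with the second approach, ZSTD_e_flush per chunk is enough, because the chunk boundary is carried by the message header.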

The second approach has some overhead because it requires framing the compressed data into length-prefixed messages (chunks),
but I see the following advantages (a small framing sketch follows this list):
- CompressedMessage is sent like any other Postgres protocol message, and the size of the compressed payload is always known
from the message header, so the receiver does not have to decompress the data just to find where it ends
- this approach does not require resetting the compression context, so compression can continue even if there are
uncompressed messages between two CompressedMessage messages
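
The framing itself is trivial and mirrors CopyData. Below is a standalone sketch, not the patch's actual buffering code; the 'z' type byte for CompressedMessage is hypothetical, just to have something concrete:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/*
 * Frame one chunk of already-compressed bytes the same way CopyData is
 * framed: a type byte plus a 4-byte length that counts itself, followed by
 * the payload.  Returns the number of bytes written to dst, which must have
 * room for payload_len + 5 bytes.
 */
static size_t
frame_compressed_chunk(char *dst, const char *payload, uint32_t payload_len)
{
    uint32_t netlen = htonl(payload_len + 4);   /* length counts itself */

    dst[0] = 'z';               /* hypothetical CompressedMessage type byte */
    memcpy(dst + 1, &netlen, 4);
    memcpy(dst + 5, payload, payload_len);
    return 1 + 4 + payload_len;
}

The receiver can then hand exactly payload_len bytes to the decompressor, regardless of where the compression stream's own boundaries fall.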

So I’ve implemented the second approach with the following compression criterion:
if the message type is CopyData or DataRow, compress it; otherwise, send the message uncompressed.
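
In code, the criterion boils down to something like this (illustrative only; 'd' and 'D' are the protocol type bytes of CopyData and DataRow):

#include <stdbool.h>
#include <stdint.h>

/*
 * Decide whether a message body should go into a CompressedMessage chunk.
 * A size threshold, as discussed at the end of this mail, would slot in
 * here as an extra condition, e.g. "&& msg_len > COMPRESS_THRESHOLD".
 */
static bool
should_compress_message(char msg_type, uint32_t msg_len)
{
    (void) msg_len;             /* unused with the current criterion */
    return msg_type == 'd' ||   /* CopyData */
           msg_type == 'D';     /* DataRow */
}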

I’ve compared this approach with permanent compression in the following scenarios:
- pg_restore of IMDB database (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/2QYZBT)
- pgbench "host=x dbname=testdb port=5432 user=testuser compression=zstd:1" --builtin tpcb-like -t 400 --jobs=64 --client=600

The detailed report with CPU/memory/network load is available here:
https://docs.google.com/document/d/13qEUpIjh2NNOOW_8NZOFUohRSEIro0R2xdVs-2Ug8Ts

pg_restore of IMDB database test results

Chunked compression with only CopyData or DataRow compression (second approach):
time:
real 2m27.947s
user 0m45.453s
sys 0m3.113s
RX bytes diff, human: 1.8837M
TX bytes diff, human: 1.2810G

Permanent compression:
time:
real 2m15.810s
user 0m42.822s
sys 0m2.022s
RX bytes diff, human: 2.3274M
TX bytes diff, human: 1.2761G

Without compression:
time:
real 2m38.245s
user 0m18.946s
sys 0m2.443s
RX bytes diff, human: 5.6117M
TX bytes diff, human: 3.8227G

Also, I’ve run pgbench tests and measured the CPU load. Since chunked compression compressed only the CopyData and DataRow messages,
it demonstrated lower CPU usage compared to permanent compression; the full report with graphs
is available in the Google doc above.

Pull request with the second approach implemented:
https://github.com/postgrespro/libpq_compression/pull/7

Also, in this pull request, I’ve made the following changes:
- extracted the general-purpose streaming compression API into a separate structure (ZStream) so it can be used in other places
without tx_func and rx_func; maybe the other compression patches can make use of it? (a rough sketch of what I have in mind follows this list)
- refactored ZpqStream
- moved the SSL and ZPQ buffered read data checks into a separate function, pqReadPending
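
To illustrate what I mean by a transport-independent streaming compression API, here is a very rough sketch; the names and signatures are only illustrative, not necessarily what ended up in the pull request:

#include <stddef.h>

/*
 * The caller owns the buffers and the transport; a ZStream only transforms
 * bytes.  Each call reports how much input was consumed and how much output
 * was produced, so the caller can loop and send/receive at its own pace,
 * without tx_func/rx_func callbacks.
 */
typedef struct ZStream ZStream;

extern ZStream *zs_create(int algorithm, int level);
extern void zs_free(ZStream *zs);

extern int  zs_write(ZStream *zs, const void *src, size_t src_size,
                     size_t *src_processed,
                     void *dst, size_t dst_size, size_t *dst_processed);

extern int  zs_read(ZStream *zs, const void *src, size_t src_size,
                    size_t *src_processed,
                    void *dst, size_t dst_size, size_t *dst_processed);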

What do you think of the results above? I think that the implemented approach is viable, but maybe I missed something in my tests.
We could also choose a different compression criterion (for example, compress only messages longer than X bytes);
I am not sure the current criterion provides the best results.

Thanks,

Daniil Zakhlystov
