Re: libpq compression

From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Daniil Zakhlystov <usernamedt(at)yandex-team(dot)ru>, Konstantin Knizhnik <knizhnik(at)garret(dot)ru>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: libpq compression
Date: 2021-02-22 20:40:40
Message-ID: 20210222204040.n43uevzsptbh2blt@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2021-02-22 14:48:25 -0500, Robert Haas wrote:
> So, if I read these results correctly, on the "pg_restore of IMDB
> database" test, we get 88% of the RX bytes reduction and 99.8% of the
> TX bytes reduction for 90% of the CPU cost. On the "pgbench" test,
> which probably has much smaller packets, chunked compression gives us
> no bandwidth reduction and in fact consumes slightly more network
> bandwidth -- which seems like it has to be an implementation defect,
> since we should always be able to fall back to sending the
> uncompressed packet if the compressed one is larger, or will be after
> adding the wrapper overhead. But with the current code, at least, we
> pay about a 30% CPU tax, and there's no improvement. The permanent
> compression imposes a whopping 90% CPU tax, but we save about 33% on
> TX bytes and about 14% on RX bytes.

It'd be good to fix the bandwidth increase issue, of course. But other
than that I'm not really bothered by transactional workloads like
pgbench not saving much / increasing overhead (within reason) compared
to bulkier operations. With packets as small as the default pgbench
workloads use, it's hard to use generic compression methods and save
space. While we could improve upon that even in the packet-oriented
case, it doesn't seem like an important use case to me.
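
To make the fallback Robert mentions concrete, here's a minimal,
self-contained sketch of the idea - compress a message, but send the raw
bytes whenever the compressed form plus framing overhead wouldn't actually
be smaller. zlib is used purely as an example codec, and the
wrapper-overhead constant and example query are made up; this isn't the
patch's actual code:

/*
 * Sketch only: send the compressed form of a message when it's actually
 * a net win, otherwise fall back to the raw bytes.  The wrapper overhead
 * constant and the example query are made up; not the patch's code.
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define WRAPPER_OVERHEAD 5		/* assumed size of the compressed-message frame */

int
main(void)
{
	const char *msg = "SELECT abalance FROM pgbench_accounts WHERE aid = 42;";
	uLong		len = strlen(msg);
	uLongf		clen = compressBound(len);
	Bytef		out[128];

	if (compress(out, &clen, (const Bytef *) msg, len) == Z_OK &&
		clen + WRAPPER_OVERHEAD < len)
		printf("send compressed: %lu bytes\n", (unsigned long) clen);
	else
		printf("send raw: %lu bytes\n", (unsigned long) len);
	return 0;
}

For a typical pgbench-sized query the compressed form usually isn't
smaller, so a check like this would just send the raw bytes rather than
paying the bandwidth increase seen above.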

For pgbench-like workloads the main network-level issue is latency - and
compression won't help with that (if anything it's more likely to hurt). If you
instead pipeline queries, compression can of course help significantly -
but then you'd likely get compression benefits in the chunked case,
right?

IMO by *far* the most important use for fe/be compression is compressing
the WAL stream, so it'd probably be good to analyze the performance of
that while running a few different workloads (perhaps just pgbench
initialization, and an r/w workload).

I don't think we're planning to turn compression on by default - it's so
use-case dependent whether network bandwidth or CPU is the scarce
resource - so I think causing *some* unhelpful overhead isn't
prohibitive. It'd be good to improve, but I'd also be ok with deferring
that.

> But there's a subtler way in which the permanent compression approach
> could be winning, which is that the compressor can retain state over
> long time periods. In a single pgbench response, there's doubtless
> some opportunity for the compressor to find savings, but an individual
> response doesn't likely include all that much duplication. But just
> think about how much duplication there is from one response to the
> next. The entire RowDescription message is going to be exactly the
> same for every query. If you can represent that in just a couple of
> bytes, I think that figures to be a pretty big win. If I had to
> guess, that's likely why the permanent compression approach seems to
> deliver a significant bandwidth savings even on the pgbench test,
> while the chunked approach doesn't. Likewise in the other direction:
> the query doesn't necessarily contain a lot of internal duplication,
> but it duplicates the previous query to a very large extent. It would
> be interesting to know whether this theory is correct, and whether
> anyone can spot a flaw in my reasoning.

I would assume this is correct.
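
A toy way to see the effect: with a long-lived stream, the compressor's
history window carries over from one message to the next, so a second,
nearly identical message compresses to a handful of bytes. The sketch
below uses zlib's streaming API purely as an example codec, with made-up
message contents - again not the patch's code:

/*
 * Toy illustration of stateful vs. per-message compression: the deflate
 * stream keeps its history across chunks, so a repeated message shrinks
 * to almost nothing.  Message contents are made up.
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

static size_t
compress_chunk(z_stream *zs, const char *msg, Bytef *out, size_t outsz)
{
	zs->next_in = (Bytef *) msg;
	zs->avail_in = strlen(msg);
	zs->next_out = out;
	zs->avail_out = outsz;
	/* Z_SYNC_FLUSH emits a decodable chunk but keeps the history window */
	deflate(zs, Z_SYNC_FLUSH);
	return outsz - zs->avail_out;
}

int
main(void)
{
	/* stand-ins for two consecutive, nearly identical responses */
	const char *msg1 = "RowDescription: aid int4, bid int4, abalance int4, filler bpchar";
	const char *msg2 = "RowDescription: aid int4, bid int4, abalance int4, filler bpchar";
	Bytef		out[256];
	z_stream	zs = {0};

	deflateInit(&zs, Z_DEFAULT_COMPRESSION);
	printf("first message:  %zu compressed bytes\n",
		   compress_chunk(&zs, msg1, out, sizeof(out)));
	printf("second message: %zu compressed bytes\n",
		   compress_chunk(&zs, msg2, out, sizeof(out)));
	deflateEnd(&zs);
	return 0;
}

Compressing each message from scratch starts with an empty window every
time and can't exploit that cross-message redundancy.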

> If it is, that doesn't necessarily mean we can't use the chunked
> approach, but it certainly makes it less appealing. I can see two ways
> to go. One would be to just accept that it won't get much benefit in
> cases like the pgbench example, and mitigate the downsides as well as
> we can. A version of this patch that caused a 3% CPU overhead in cases
> where it can't compress would be far more appealing than one that
> causes a 30% overhead in such cases (which seems to be where we are
> now).

I personally don't think it's worth caring about pgbench. But there are
obviously also other cases where "stateful" compression could be
better...

Greetings,

Andres Freund
