Re: libpq compression

From: Daniil Zakhlystov <usernamedt(at)yandex-team(dot)ru>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>
Subject: Re: libpq compression
Date: 2020-10-29 09:07:42
Message-ID: 4337C3F1-EB00-4A8F-9809-84654B127B2E@yandex-team.ru
Lists: pgsql-hackers

Hi,

> On Oct 29, 2020, at 12:27 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> The protocol sounds to me like there's no way to enable/disable
> compression in an existing connection. To me it seems better to have an
> explicit, client initiated, request to use a specific method of
> compression (including none). That allows enabling compression for bulk
> work, and disabling it in other cases (e.g. for security sensitive
> content, or for content unlikely to compress well).
>
> I think that would also make cross-version handling easier, because a
> newer client driver can send the compression request and handle the
> error, without needing to reconnect or such.
>
> Most importantly, I think such a design is basically a necessity to make
> connection poolers work in a sensible way.

Could you please clarify your point about connection poolers? By "a sensible way", do you mean that in some cases a pooler could simply forward the compressed stream without parsing it?
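
For context, the way I read the proposal is roughly the following. The message name and layout below are purely hypothetical (this is not an existing protocol message), just to illustrate how a pooler could track per-client compression state without decompressing the payload itself:

/*
 * Hypothetical wire format for an explicit, client-initiated compression
 * request sent on an already-established connection:
 *
 *   Byte1('z')   message type ("set compression"), hypothetical
 *   Int32        message length
 *   String       algorithm name, e.g. "zstd", "zlib", or "none"
 *
 * The server would answer with an acknowledgement or an ErrorResponse, so a
 * newer client can fall back without reconnecting, and a pooler can switch
 * between forwarding raw bytes and recompressing per backend by watching
 * these messages rather than inspecting the compressed stream.
 */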

>> +/*
>> + * Index of used compression algorithm in zpq_algorithms array.
>> + */
>> +static int zpq_algorithm_impl;
>
> This is just odd API design imo. Why doesn't the dispatch work based on
> an argument for zpq_create() and the ZpqStream * for the rest?
>
> What if there's two libpq connections in one process? To servers
> supporting different compression algorithms? This isn't going to fly.

I agree. Moving zpq_algorithm_impl into the ZpqStream struct seems like an easy fix for this issue.
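
Something along these lines is what I have in mind; the field names and the exact zpq_create() signature are my assumptions, not the actual patch:

/*
 * Sketch only: each stream carries its own algorithm, so two libpq
 * connections in one process can use different compression methods.
 */
typedef struct ZpqStream
{
	int			algorithm;		/* index into zpq_algorithms[], per stream */
	zpq_tx_func tx_func;
	zpq_rx_func rx_func;
	void	   *arg;			/* e.g. MyProcPort on the backend side */
	/* ... per-stream compression/decompression state ... */
} ZpqStream;

ZpqStream *
zpq_create(int algorithm, zpq_tx_func tx_func, zpq_rx_func rx_func, void *arg)
{
	ZpqStream  *zs = malloc(sizeof(ZpqStream));

	if (zs == NULL)
		return NULL;

	zs->algorithm = algorithm;	/* no global zpq_algorithm_impl needed */
	zs->tx_func = tx_func;
	zs->rx_func = rx_func;
	zs->arg = arg;
	return zs;
}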

>> + /* initialize compression */
>> + if (zpq_set_algorithm(compression_algorithm))
>> + PqStream = zpq_create((zpq_tx_func)secure_write, (zpq_rx_func)secure_read, MyProcPort);
>> + }
>> + return 0;
>> +}
>
> Why is zpq a wrapper around secure_write/read? I'm a bit worried this
> will reduce the other places we could use zpq.

Maybe we can just split PqStream into PqCompressStream and PqDecompressStream? It looks like they can work independently.
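
For illustration, a buffer-to-buffer interface would also decouple zpq from secure_read()/secure_write() entirely, so it could be reused elsewhere. The names and signatures here are hypothetical, just to show the idea:

/*
 * Hypothetical split: independent compressor and decompressor objects that
 * operate on caller-supplied buffers instead of wrapping I/O callbacks.
 */
typedef struct ZpqCompressStream ZpqCompressStream;
typedef struct ZpqDecompressStream ZpqDecompressStream;

/* Compress src into dst; returns bytes written to dst, sets *src_processed. */
extern ssize_t zpq_compress(ZpqCompressStream *cs,
							void *dst, size_t dst_size,
							const void *src, size_t src_size,
							size_t *src_processed);

/* Decompress src into dst; returns bytes written to dst, sets *src_processed. */
extern ssize_t zpq_decompress(ZpqDecompressStream *ds,
							  void *dst, size_t dst_size,
							  const void *src, size_t src_size,
							  size_t *src_processed);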


Daniil Zakhlystov
