|From:||Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>|
|To:||"Iwata, Aya" <iwata(dot)aya(at)jp(dot)fujitsu(dot)com>, 'Dmitry Dolgov' <9erthalion6(at)gmail(dot)com>|
|Cc:||Michael Paquier <michael(at)paquier(dot)xyz>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, "rharwood(at)redhat(dot)com" <rharwood(at)redhat(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, "g(dot)smolkin(at)postgrespro(dot)ru" <g(dot)smolkin(at)postgrespro(dot)ru>, PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>|
|Subject:||Re: libpq compression|
On 09.01.2019 13:25, Iwata, Aya wrote:
>>> I agree with the critiques from Robbie Harwood and Michael Paquier
>>> that the way in that compression is being hooked into the existing
>>> architecture looks like a kludge. I'm not sure I know exactly how it
>>> should be done, but the current approach doesn't look natural; it
>>> looks like it was bolted on.
>> After some time spent reading this patch and investigating the different points
>> mentioned in the discussion, I tend to agree with that. As far as I can see, this is
>> probably the biggest disagreement here, and it keeps things from progressing.
>> I'm interested in this feature, so if Konstantin doesn't mind, I'll post in
>> the near future (after I wrap up the current CF) an updated patch I'm working
>> on right now to propose another way of incorporating compression. For now
>> I'm moving the patch to the next CF.
> This thread seems to have stalled.
> In the last e-mail, Dmitry suggested posting a patch that implements the feature in another way, and as far as I can see, he has not posted that patch yet. (It may be because the author has not responded.)
> I understand there is a big disagreement here; however, the status is "Needs review".
> There has been no review since the author updated the patch to v9, so I will do one.
> About the patch: please rebase it onto current master. I could not test it as-is.
> About the documentation: there are typos; please check them. I am waiting for a reviewer of the wording because I am not so good at English.
> When you add a new protocol message, it needs the "Length of message contents in bytes, including self." field.
> The patch advertises the supported compression algorithm as a single Byte1. I think it would be better to provide it as a list, like the NegotiateProtocolVersion message does.
> I took a quick look at the code changes.
> + nread = conn->zstream
> + ? zpq_read(conn->zstream, conn->inBuffer + conn->inEnd,
> + conn->inBufSize - conn->inEnd, &processed)
> + : pqsecure_read(conn, conn->inBuffer + conn->inEnd,
> + conn->inBufSize - conn->inEnd);
> How about combining these into a helper (or a #define macro)? The same logic appears in two places.
> Have you considered memory control?
> Typically a compression algorithm keeps its dictionary in memory, so I think a reset or some release method is needed.
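For what it's worth, the duplicated dispatch quoted above could be folded into a single helper rather than a macro. The sketch below is self-contained: the stub readers stand in for the real zpq_read()/pqsecure_read(), and the helper name pq_buffered_read plus the simplified PGconn layout are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

/* Simplified stand-in for libpq's PGconn; only the fields used here. */
typedef struct PGconn
{
    void   *zstream;            /* non-NULL when compression is active */
    char    inBuffer[64];
    size_t  inEnd;
} PGconn;

/* Stub for zpq_read(): pretends to decompress 3 bytes. */
static ssize_t
zpq_read(void *zs, char *dst, size_t size, size_t *processed)
{
    (void) zs;
    *processed = 3;
    memcpy(dst, "zpq", 3);      /* assumes size >= 3 */
    return 3;
}

/* Stub for pqsecure_read(): pretends to read 3 raw bytes. */
static ssize_t
pqsecure_read(PGconn *conn, char *dst, size_t size)
{
    (void) conn;
    (void) size;
    memcpy(dst, "raw", 3);      /* assumes size >= 3 */
    return 3;
}

/* Hypothetical helper replacing the two identical ?: call sites. */
static ssize_t
pq_buffered_read(PGconn *conn, char *dst, size_t size)
{
    size_t processed = 0;

    return conn->zstream
        ? zpq_read(conn->zstream, dst, size, &processed)
        : pqsecure_read(conn, dst, size);
}
```

A static function keeps the type checking that a #define macro would lose.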
Thank you for the review.
Please find attached a rebased version of the patch.
I fixed all the issues you reported except using a list of supported
compression algorithms: that would require an extra round of communication
between client and server to decide which compression algorithm to use.
I am still not sure whether it is a good idea to let the user
explicitly specify the compression algorithm.
Right now the streaming compression algorithm used is hardcoded and depends
on the --use-zstd or --use-zlib configuration options.
If client and server were built with the same options, then they are
able to use compression.
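That build-time coupling could look roughly like the sketch below. HAVE_LIBZSTD and HAVE_LIBZ are assumed stand-ins for whatever the --use-zstd/--use-zlib options actually define, and zpq_compatible() is an invented illustration of the "same build options" requirement, not code from the patch.

```c
#include <assert.h>
#include <string.h>

/* Sketch of the build-time selection; HAVE_LIBZSTD / HAVE_LIBZ are
 * assumed stand-ins for what --use-zstd / --use-zlib actually define. */
#if defined(HAVE_LIBZSTD)
#define ZPQ_ALGORITHM "zstd"
#elif defined(HAVE_LIBZ)
#define ZPQ_ALGORITHM "zlib"
#else
#define ZPQ_ALGORITHM ""        /* no compression compiled in */
#endif

/* Illustration of the requirement: compression is usable only when
 * both sides were built with the same (non-empty) algorithm. */
static int
zpq_compatible(const char *client_algo, const char *server_algo)
{
    return client_algo[0] != '\0'
        && strcmp(client_algo, server_algo) == 0;
}
```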
Concerning memory control: there is a call to zpq_free(PqStream) in the
socket_close() function which deallocates all memory used by the
compressor; it is guarded by an "if (zs != NULL)" check, so it is a
no-op when no compressed stream was created.
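A minimal sketch of that cleanup path, with an invented PqStream layout (the real zpq_free() in the patch holds the actual zstd/zlib stream state and will differ):

```c
#include <assert.h>
#include <stdlib.h>

/* Invented stand-in for the patch's PqStream state; the real struct
 * holds the zstd/zlib stream and its dictionary. */
typedef struct PqStream
{
    void *dict;                 /* compression dictionary / window */
} PqStream;

/* Sketch of the cleanup done from socket_close(): NULL-safe, matching
 * the "if (zs != NULL)" guard mentioned above. */
static void
zpq_free(PqStream *zs)
{
    if (zs != NULL)
    {
        free(zs->dict);
        free(zs);
    }
}
```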
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company