On Tue, Jan 15, 2013 at 7:46 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Compressing every small packet seems like it'd be overkill and might
>> surprise people by actually reducing performance in the case of lots of
>> small requests.
> Yeah, proper selection and integration of a compression method would be
> critical, which is one reason that I'm not suggesting a plugin for this.
> You couldn't expect any-random-compressor to work well. I think zlib
> would be okay though when making use of its stream compression features.
> The key thing there is to force a stream buffer flush (too lazy to look
> up exactly what zlib calls it, but they have the concept) exactly when
> we're about to do a flush to the socket. That way we get cross-packet
> compression but don't have a problem with the compressor failing to send
> the last partial message when we need it to.
Just a "stream flush bit" (or stream reset bit) in the packet header
would do. The first packet of any stream would be marked, and that's it.
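For reference, the zlib operation being described is a sync flush (Z_SYNC_FLUSH): it emits everything buffered so far, aligned so the receiver can decode it immediately, while leaving the stream open for cross-packet compression. A minimal sketch of the idea, using Python's zlib bindings rather than PostgreSQL code:

```python
# Sketch: cross-packet stream compression with a sync flush forced at
# each point where we would flush to the socket. Hypothetical demo,
# not actual PostgreSQL protocol code.
import zlib

comp = zlib.compressobj()
decomp = zlib.decompressobj()

# Two small protocol messages sent in separate socket flushes.
messages = [b"SELECT 1;", b"SELECT 2;"]

for msg in messages:
    # Compress the message, then force a sync flush exactly where we
    # would flush to the socket: zlib emits all buffered output on a
    # byte boundary the decompressor can consume right away.
    chunk = comp.compress(msg) + comp.flush(zlib.Z_SYNC_FLUSH)

    # The peer recovers the full message from this chunk alone --
    # no trailing partial message is stuck inside the compressor,
    # yet the dictionary is shared across packets.
    assert decomp.decompress(chunk) == msg
```

The history built up across earlier packets is retained after each sync flush, so later messages still compress against it; only Z_FINISH would terminate the stream.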