Re: Client/Server compression?

From: "Arguile" <arguile(at)lucentstudios(dot)com>
To: "PostgresSQL Hackers Mailing List" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Client/Server compression?
Date: 2002-03-14 20:03:39
Message-ID: LLENKEMIODLDJNHBEFBOKEFFEGAA.arguile@lucentstudios.com
Lists: pgsql-hackers

Bruce Momjian wrote:
>
> Greg Copeland wrote:
> > Well, it occurred to me that if a large result set were to be identified
> > before transport between a client and server, a significant amount of
> > bandwidth may be saved by using a moderate level of compression.
> > Especially with something like result sets, which I tend to believe may
> > lend itself well toward compression.
>
> I should have said compressing the HTTP protocol, not FTP.
>
> > This may be of value for users with low bandwidth connectivity to their
> > servers or where bandwidth may already be at a premium.
>
> But don't slow links do the compression themselves, like PPP over a
> modem?

Yes, but that's packet-level compression. You'll never get close to the
results you can achieve by compressing the set as a whole.
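To illustrate the difference, here's a rough sketch (Python, nothing
PostgreSQL-specific) that compresses a synthetic result set both as a whole
and in independent ~1500-byte chunks, roughly modelling compression that
can't see across packet boundaries. The row data and chunk size are made up
purely for illustration:

    import zlib

    # Fake result set: many similar rows, as query results often are.
    rows = ["%d\tuser_%d\t2002-03-14\tsome repetitive text\n" % (i, i % 50)
            for i in range(10000)]
    payload = "".join(rows).encode()

    # Compress the whole set at once.
    whole = len(zlib.compress(payload, 6))

    # Compress each ~1500-byte chunk on its own, with no shared
    # history between chunks.
    size = 1500
    chunks = [payload[i:i + size] for i in range(0, len(payload), size)]
    per_chunk = sum(len(zlib.compress(c, 6)) for c in chunks)

    print("original:  %d bytes" % len(payload))
    print("whole set: %d bytes" % whole)
    print("per chunk: %d bytes" % per_chunk)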

Speaking of HTTP, it's fairly common for web servers (Apache has mod_gzip)
to gzip content before sending it to the client (which decompresses it
transparently), especially when dealing with somewhat static content (so it
can be cached compressed). This can provide great bandwidth savings.
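For the web case the win mostly comes from compressing once and serving the
cached result many times. A minimal sketch of that general idea (the file
paths and cache location are made up, and this is not how mod_gzip itself is
implemented):

    import gzip, os

    def cached_gzip(path, cache_dir="/tmp/gzcache"):
        os.makedirs(cache_dir, exist_ok=True)
        cached = os.path.join(cache_dir, os.path.basename(path) + ".gz")
        # Recompress only if the cached copy is missing or older than
        # the source file; otherwise reuse the cached copy as-is.
        if (not os.path.exists(cached)
                or os.path.getmtime(cached) < os.path.getmtime(path)):
            with open(path, "rb") as src, gzip.open(cached, "wb") as dst:
                dst.write(src.read())
        return cached  # send with "Content-Encoding: gzip"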

I'm sceptical of the benefit such compression would provide in this setting,
though. We're dealing with result sets that would have to be compressed
every time (no caching), which might be a bit expensive on a database
server. Having it as a default-off option for psql might be nice, but I
wonder if it's worth the time, effort, and CPU cycles.
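For what it's worth, you can get a rough feel for the CPU cost by timing
zlib on a synthetic result set like the one above; the numbers are purely
illustrative and will vary with data, compression level, and hardware:

    import time, zlib

    payload = ("1\tuser_1\t2002-03-14\tsome repetitive text\n" * 10000).encode()
    start = time.perf_counter()
    compressed = zlib.compress(payload, 6)
    elapsed = time.perf_counter() - start
    print("%d -> %d bytes in %.1f ms"
          % (len(payload), len(compressed), elapsed * 1000))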
