Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows)

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "Douglas McNaught" <doug(at)mcnaught(dot)org>, "Stephen R(dot) van den Berg" <srb(at)cuci(dot)nl>, "Alvaro Herrera" <alvherre(at)commandprompt(dot)com>, lar(at)quicklz(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows)
Date: 2009-01-05 19:02:44
Message-ID: 87iqotshij.fsf@oxford.xeocode.com
Lists: pgsql-hackers


"Robert Haas" <robertmhaas(at)gmail(dot)com> writes:

> Regardless of whether we do that or not, no one has offered any
> justification of the arbitrary decision not to compress columns >1MB,

Er, yes, there was discussion before the change, for instance:

http://archives.postgresql.org/pgsql-hackers/2007-08/msg00082.php

And do you have any response to this point?

I think the right value for this setting is going to depend on the
environment. If the system is starved for cpu cycles then you won't want to
compress large data. If it's starved for i/o bandwidth but has spare cpu
cycles then you will.

http://archives.postgresql.org/pgsql-hackers/2009-01/msg00074.php

> and at least one person (Peter) has suggested that it is exactly
> backwards. I think he's right, and this part should be backed out.

Well, the original code had a threshold above which we *always* compressed even
if it saved only a single byte.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning
