On Mon, Jan 7, 2013 at 10:16 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Takeshi Yamamuro <yamamuro(dot)takeshi(at)lab(dot)ntt(dot)co(dot)jp> writes:
>> The attached is a patch to improve compression speeds with loss of
>> compression ratios in backend/utils/adt/pg_lzcompress.c.
> Why would that be a good tradeoff to make? Larger stored values require
> more I/O, which is likely to swamp any CPU savings in the compression
> step. Not to mention that a value once written may be read many times,
> so the extra I/O cost could be multiplied many times over later on.
I disagree. pg compression is slow enough that it's almost never a net
win in practice; I turn it off.
> Another thing to keep in mind is that the compression area in general
> is a minefield of patents. We're fairly confident that pg_lzcompress
> as-is doesn't fall foul of any, but any significant change there would
> probably require more research.
A minefield of *expired* patents. Fast LZ-based compression is used
all over the place -- for example, by Lucene.
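The tradeoff under discussion can be sketched with Python's zlib standing
in for pg_lz (zlib is not the algorithm in the patch, just a convenient
illustration): lower compression levels run faster but produce larger
output, and both directions round-trip losslessly.

```python
# Illustration of the speed-vs-ratio tradeoff, using zlib as a stand-in
# for pg_lz. Lower levels favor speed; higher levels favor ratio.
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 2000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    comp = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - t0) * 1e3
    assert zlib.decompress(comp) == data  # lossless round trip
    print(f"level={level} size={len(comp)} time={elapsed_ms:.2f} ms")
```

Whether the CPU saved at a low level outweighs the extra I/O from the
larger stored value is exactly the question Tom raises above.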