
Re: Improve compression speeds in pg_lzcompress.c

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Takeshi Yamamuro <yamamuro(dot)takeshi(at)lab(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Improve compression speeds in pg_lzcompress.c
Date: 2013-01-07 20:18:57
Message-ID: CAHyXU0w+N5_h9WUZhzvXXpLjiLfq8Pd=PUeBT_XpPT9uHheefA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Jan 7, 2013 at 10:16 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Takeshi Yamamuro <yamamuro(dot)takeshi(at)lab(dot)ntt(dot)co(dot)jp> writes:
>> The attached is a patch to improve compression speeds with loss of
>> compression ratios in backend/utils/adt/pg_lzcompress.c.
>
> Why would that be a good tradeoff to make?  Larger stored values require
> more I/O, which is likely to swamp any CPU savings in the compression
> step.  Not to mention that a value once written may be read many times,
> so the extra I/O cost could be multiplied many times over later on.

I disagree.  pg compression is so awful it's almost never a net win.
I turn it off.
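[Editorial note: the speed-versus-ratio tradeoff being argued here can be seen with any general-purpose compressor. The sketch below uses Python's zlib as a stand-in illustration; it is not pg_lzcompress, and the timings are machine-dependent. Lower levels compress faster but leave the output larger, which is exactly the axis Takeshi's patch moves along.]

```python
import time
import zlib

# Illustration only: zlib's levels expose the same speed/ratio tradeoff
# discussed in this thread (this is NOT pg_lzcompress or the patch itself).
data = b"the quick brown fox jumps over the lazy dog " * 20000

for level in (1, 6, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    # Lower level: faster compression, larger output; higher level: the reverse.
    print(f"level={level}: ratio={len(out) / len(data):.4f}, "
          f"time={elapsed * 1000:.1f} ms")
```

Whether the faster, larger setting wins overall depends on whether the workload is CPU-bound (compression time dominates) or I/O-bound (the extra stored bytes dominate), which is the crux of the disagreement above.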

> Another thing to keep in mind is that the compression area in general
> is a minefield of patents.  We're fairly confident that pg_lzcompress
> as-is doesn't fall foul of any, but any significant change there would
> probably require more research.

A minefield of *expired* patents.  Fast LZ-based compression is used
all over the place -- for example by Lucene.

lz4 is another example.

merlin
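[Editorial note: the "fast LZ" family mentioned above (lz4, the compressors used in Lucene's codecs) all descend from LZ77-style match/literal coding. The following is a naive, hypothetical sketch of that idea, not lz4's actual format or algorithm; real implementations replace the brute-force search below with a hash table over the next few input bytes, which is where their speed comes from.]

```python
def lz_compress(data, window=4096, min_match=3, max_match=255):
    """Toy LZ77-style compressor: emit (offset, length) back-references
    for repeats, single-byte literals otherwise. Sketch only."""
    i, tokens = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        # Brute-force search of the sliding window for the longest match.
        # (Real fast-LZ codecs like lz4 use a hash table here instead.)
        for j in range(max(0, i - window), i):
            length = 0
            while (length < max_match and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            tokens.append((best_off, best_len))
            i += best_len
        else:
            tokens.append(data[i:i + 1])
            i += 1
    return tokens

def lz_decompress(tokens):
    """Rebuild the input by copying back-references byte by byte
    (byte-wise copy handles overlapping matches correctly)."""
    out = bytearray()
    for t in tokens:
        if isinstance(t, tuple):
            off, length = t
            for _ in range(length):
                out.append(out[-off])
        else:
            out += t
    return bytes(out)
```

The ratio/speed knobs are visible even in this sketch: a smaller window or a larger min_match means less searching (faster) but fewer matches found (worse ratio), the same kind of tradeoff the proposed patch makes inside pg_lzcompress.c.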


pgsql-hackers by date

Next: From: Tom Lane, Date: 2013-01-07 20:41:00, Subject: Re: Improve compression speeds in pg_lzcompress.c
Previous: From: Pavel Stehule, Date: 2013-01-07 19:31:18, Subject: Re: json api WIP patch
