From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Ashwin Agrawal <aagrawal(at)pivotal(dot)io>
Cc: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Zedstore - compressed in-core columnar storage
Date: 2019-04-11 14:54:20
Message-ID: 11257.1554994460@sss.pgh.pa.us
Lists: pgsql-hackers
Ashwin Agrawal <aagrawal(at)pivotal(dot)io> writes:
> Thank you for trying it out. Yes, noticed for certain patterns pg_lzcompress() actually requires much larger output buffers. Like for one 86 len source it required 2296 len output buffer. Current zedstore code doesn’t handle this case and errors out. LZ4 for same patterns works fine, would highly recommend using LZ4 only, as anyways speed is very fast as well with it.
You realize of course that *every* compression method has some inputs that
it makes bigger. If your code assumes that compression always produces a
smaller string, that's a bug in your code, not the compression algorithm.
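
[For illustration, not part of the original thread: the point above can be sketched outside PostgreSQL. This is a minimal Python sketch using zlib rather than pg_lzcompress/LZ4; the one-byte flag format and function names are hypothetical. Incompressible input (e.g. 86 random bytes, like the case Ashwin hit) grows under compression because of header overhead, so callers must either size the output buffer for the worst case or fall back to storing the data uncompressed.]

```python
import os
import zlib

def compress_maybe(raw: bytes) -> bytes:
    """Compress, but fall back to storing raw bytes when the
    compressed form would be no smaller. A hypothetical one-byte
    flag records which path was taken."""
    packed = zlib.compress(raw)
    if len(packed) < len(raw):
        return b"\x01" + packed      # genuinely compressed
    return b"\x00" + raw             # incompressible: store verbatim

def decompress_maybe(blob: bytes) -> bytes:
    """Reverse compress_maybe using the flag byte."""
    flag, payload = blob[:1], blob[1:]
    return zlib.decompress(payload) if flag == b"\x01" else payload

raw = os.urandom(86)                 # random data does not compress
assert len(zlib.compress(raw)) > len(raw)   # naive compression expands it
blob = compress_maybe(raw)
assert len(blob) <= len(raw) + 1     # worst case bounded: one flag byte
assert decompress_maybe(blob) == raw
```

The same discipline applies regardless of algorithm: LZ4's worst-case expansion is smaller than pg_lzcompress's for some inputs, but no lossless compressor can shrink every input, so the storage format must budget for expansion.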
regards, tom lane