Luke Lonergan wrote:
> On 2/26/06 10:37 AM, "Jim C. Nasby" <jnasby(at)pervasive(dot)com> wrote:
> > So the cutover point (on your system with very fast IO) is 4:1
> > compression (is that 20 or 25%?).
> Actually the size of the gzipped binary file on disk was 65 MB, compared to
> 177.5 MB uncompressed, so the compressed size is 37% of the original, i.e. a 2.73:1 ratio.
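The two figures quoted above are just two ways of stating the same arithmetic; a quick sketch (using the sizes from the message):

```python
# Sizes reported in the message: 65 MB gzipped vs. 177.5 MB raw.
compressed_mb = 65.0
uncompressed_mb = 177.5

fraction = compressed_mb / uncompressed_mb  # compressed size as a share of the original
ratio = uncompressed_mb / compressed_mb     # conventional N:1 compression ratio

print(f"{fraction:.0%}")  # → 37%
print(f"{ratio:.2f}:1")   # → 2.73:1
```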
I doubt our algorithm would give the same compression (though I haven't
actually measured it). The LZ implementation we use is designed for very
high speed at the cost of a weaker compression ratio.
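That speed-versus-ratio trade-off is easy to see with any general-purpose compressor. The sketch below uses Python's `zlib` at levels 1 and 9 as a stand-in (it is not PostgreSQL's pglz, just an illustration of the same trade-off); the sample payload is made up:

```python
import time
import zlib

# Repetitive sample payload: compresses well at any level (illustrative only).
data = b"PostgreSQL TOAST stores large field values out of line. " * 5000

for level in (1, 9):  # 1 = fastest, 9 = best compression
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - t0
    print(f"level {level}: {len(data)} -> {len(out)} bytes in {elapsed * 1000:.2f} ms")
```

Level 1 typically finishes in a fraction of the time while producing somewhat larger output, which is the same trade pglz makes much more aggressively.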
> No, unfortunately not. O'Reilly's jobs data have 65K rows, so that would
> work. How do we implement LZW compression on toasted fields? I've never
> done it!
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.