Re: Compression and on-disk sorting

From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, Zeugswetter Andreas DCP SD <ZeugswetterA(at)spardat(dot)at>, Greg Stark <gsstark(at)mit(dot)edu>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Rod Taylor <pg(at)rbt(dot)ca>, "Bort, Paul" <pbort(at)tmwsystems(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Compression and on-disk sorting
Date: 2006-05-19 13:30:31
Message-ID: 20060519133031.GF17873@svana.org
Lists: pgsql-hackers

On Fri, May 19, 2006 at 09:03:31AM -0400, Tom Lane wrote:
> Martijn van Oosterhout <kleptog(at)svana(dot)org> writes:
> > I'm seeing 250,000 blocks being cut down to 9,500 blocks. That's almost
> > unbelievable. What's in the table?
>
> Yeah, I'd tend to question the test data being used. gzip does not do
> that well on typical text (especially not at the lower settings we'd
> likely want to use).

However, postgres tables are often highly compressible; 10-to-1 is not
that uncommon. pg_proc and pg_index compress by about that much, for
example. Indexes compress even more: a few on my system compress
25-to-1, though that could just be slack space, and the record is
37-to-1 (pg_constraint_conname_nsp_index).

The only table on my test system over 32KB that doesn't reach 2-to-1
compression with gzip -3 is one of the toast tables.

So getting 25-to-1 is a lot, but possibly not that extreme.
pg_statistic, which is about as close to random data as you're going to
get on a postgres system, compresses 5-to-1.
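For anyone who wants to reproduce this kind of measurement, here is a
minimal sketch that compresses each relation file in a database
directory with zlib at level 3 (roughly what gzip -3 does) and prints
the ratio. The data-directory path and database OID are assumptions for
illustration, and mapping file names back to relation names would still
need a pg_class/relfilenode lookup:

    # sketch only: measure per-file compression ratios with zlib level 3
    import os
    import zlib

    PGDATA = "/var/lib/postgresql/data"   # assumed data directory
    DB_OID = "16384"                      # assumed database OID (see pg_database)

    base = os.path.join(PGDATA, "base", DB_OID)
    for name in sorted(os.listdir(base)):
        path = os.path.join(base, name)
        # skip non-files and anything under 32KB, as in the numbers above
        if not os.path.isfile(path) or os.path.getsize(path) < 32 * 1024:
            continue
        raw = open(path, "rb").read()
        packed = zlib.compress(raw, 3)    # level 3, comparable to gzip -3
        print("%s: %d -> %d (%.1f-to-1)"
              % (name, len(raw), len(packed), len(raw) / len(packed)))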

Have a nice day,
--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
