Re: page compression

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: page compression
Date: 2011-01-02 23:36:02
Message-ID: 1294011362.2090.4214.camel@ebony
Lists: pgsql-hackers

On Tue, 2010-12-28 at 09:10 -0600, Andy Colson wrote:

> I know its been discussed before, and one big problem is license and
> patent problems.

Would like to see a design for that. There are a few different ways we
might want to do that, and I'm interested to see if it's possible to get
compressed pages to be indexable as well.

For example, if you compress 2 pages into 8kB then you do one I/O and
out pop two buffers. That would work nicely with ring buffers.
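
Something like this, just as a hand-wavy sketch (the 4-byte length
header, BLCKSZ and decompress_block_pair() are all assumptions for
illustration, not anything in the tree):

#include <stdint.h>
#include <string.h>
#include <zlib.h>

#define BLCKSZ 8192

/*
 * Sketch only: one 8kB on-disk block holds two compressed pages,
 * stored as a 4-byte compressed length followed by the payload.
 * One read, two decompressed buffers pop out.
 */
static int
decompress_block_pair(const unsigned char *block,
                      unsigned char out[2][BLCKSZ])
{
    uint32_t      clen;
    uLongf        dlen = 2 * BLCKSZ;
    unsigned char scratch[2 * BLCKSZ];

    memcpy(&clen, block, sizeof(clen));       /* hypothetical header */
    if (clen > BLCKSZ - sizeof(clen))
        return -1;                            /* corrupt length */
    if (uncompress(scratch, &dlen, block + sizeof(clen), clen) != Z_OK ||
        dlen != 2 * BLCKSZ)
        return -1;

    memcpy(out[0], scratch, BLCKSZ);          /* buffer for page 1 */
    memcpy(out[1], scratch + BLCKSZ, BLCKSZ); /* buffer for page 2 */
    return 0;
}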

Or you might try to have pages > 8kB in one block, which would mean
decompressing every time you access the page. That wouldn't be much of a
problem if we were just seq scanning.

Or you might want to compress the whole table at once, so it can only be
read by seq scan. Efficient, but no indexes.

It would be interesting to explore pre-populating the compression
dictionary with some common patterns.
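
With zlib that would mean calling deflateSetDictionary() right after
deflateInit(), before the first deflate(). Rough shape below; the
dictionary bytes are completely made up, just to show the idea:

#include <string.h>
#include <zlib.h>

/* Made-up dictionary: byte patterns assumed to recur on heap pages. */
static const unsigned char page_dict[] =
    "\x00\x00\x00\x00\x20\x00\x20\x04";

static int
compress_page_with_dict(const unsigned char *page, unsigned int len,
                        unsigned char *out, unsigned int outlen)
{
    z_stream zs;
    int      rc;
    uLong    produced;

    memset(&zs, 0, sizeof(zs));
    if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
        return -1;

    /* Pre-populate the dictionary; must precede the first deflate(). */
    deflateSetDictionary(&zs, page_dict, sizeof(page_dict) - 1);

    zs.next_in = (Bytef *) page;
    zs.avail_in = len;
    zs.next_out = out;
    zs.avail_out = outlen;

    rc = deflate(&zs, Z_FINISH);
    produced = zs.total_out;
    deflateEnd(&zs);
    return (rc == Z_STREAM_END) ? (int) produced : -1;
}

The reader would then have to feed the same bytes to
inflateSetDictionary() when inflate() returns Z_NEED_DICT, so the
dictionary effectively becomes part of the on-disk format.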

Anyway, interesting topic.

--
Simon Riggs http://www.2ndQuadrant.com/books/
PostgreSQL Development, 24x7 Support, Training and Services
