Re: Converting MySQL tinyint to PostgreSQL

From: Lincoln Yeoh <lyeoh@pop.jaring.my>
To: "Jim C. Nasby" <decibel@decibel.org>, Ron Mayer <rm_pg@cheapcomplexdevices.com>
Cc: pgsql-general@postgresql.org
Subject: Re: Converting MySQL tinyint to PostgreSQL
Date: 2005-07-18 17:32:30
Message-ID: 5.2.1.1.1.20050719010818.0c319d50@localhost
Lists: pgsql-general

I believe that one should leave such on-the-fly disk compression to the
OS. PostgreSQL already does compression for TOAST.

However, padding for alignment may well be a waste on disk - disks being
so much slower than CPUs (I'm not sure that still holds once the data is
in memory). Maybe there should be an option to reorder columns so that
less space is wasted.
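A minimal sketch of the padding issue (illustrative Python, with alignment values that mirror typical PostgreSQL types - the authoritative values live in pg_type.typalign): each field must start at an offset that is a multiple of its type's alignment, so ordering wide columns first can eliminate pad bytes.

```python
# Hypothetical example: how column order affects on-disk row size
# when each field must start at an offset aligned to its type.
# Alignments here mirror common PostgreSQL values:
# int8 -> 8, int4 -> 4, bool -> 1.

def padded_size(columns):
    """columns: list of (size, alignment) tuples; returns total bytes."""
    offset = 0
    for size, align in columns:
        offset += (-offset) % align  # pad up to the next aligned offset
        offset += size
    return offset

bad  = [(1, 1), (8, 8), (1, 1), (4, 4)]   # bool, int8, bool, int4
good = [(8, 8), (4, 4), (1, 1), (1, 1)]   # widest columns first

print(padded_size(bad))   # 24 bytes: 10 of them are padding
print(padded_size(good))  # 14 bytes: no padding at all
```

The same 14 bytes of data take 24 bytes in the unlucky ordering, which is the kind of waste an automatic column-reordering option could avoid.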

At 05:47 PM 7/17/2005 -0500, Jim C. Nasby wrote:

>On Sat, Jul 16, 2005 at 03:18:24PM -0700, Ron Mayer wrote:
>
> > If that were practical, even more radical I/O saving tricks might be
> > possible beyond removing alignment bytes - like some compression algorithm.
>
>True, though there's a few issues with zlib compression. First, you have
>to be able to pull specific pages out of the files on disk. Right now
>that's trivial; you just read bytes xxx - yyy. With compression things
>are more difficult, because you no longer have a fixed page size.
>
>[...] Another factor is that more complex compression methods will be
>much more CPU intensive.
>
>FWIW, the way Oracle handles compression is as a one-time operation.
>When you tell it to compress a table it will re-write the entire table,
>compressing it as it goes. But any pages that get changed after that
>will end up uncompressed. Of course in a data warehouse environment
>that's perfectly acceptable.
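The fixed-page-size point above can be sketched quickly (illustrative Python, not PostgreSQL source; names and the offset map are hypothetical): with uniform pages, locating page N is pure arithmetic, while variably-sized compressed pages need a separate lookup structure.

```python
# Sketch: why fixed-size pages make random page reads trivial.
# PostgreSQL's default block size is 8192 bytes (BLCKSZ).

PAGE_SIZE = 8192

def fixed_page_offset(page_no):
    # Page N always starts at N * PAGE_SIZE: one multiply, one seek.
    return page_no * PAGE_SIZE

def compressed_page_offset(page_no, offsets):
    # With compression, page sizes vary, so an extra offset map
    # (maintained on every write) is needed to find a page at all.
    return offsets[page_no]

print(fixed_page_offset(100))                      # 819200
print(compressed_page_offset(2, [0, 3100, 5900]))  # 5900
```

The extra indirection (and keeping that map consistent as pages are rewritten at different compressed sizes) is one reason per-page compression is harder than it first looks.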

