Leonardo Francalanci wrote:
> > We have an FAQ item about this.
> Damn! I didn't see that one! Sorry...
> > Long data values are automatically compressed.
> The reason I'm asking is:
> we have a system that stores 200,000,000 rows per month
> (other tables store 10,000,000 rows per month)
> Every row has 400 columns of integers + 2 columns (date+integer) as index.
> Our system compresses rows before writing them to a binary file on disk.
> Data don't usually need to be updated/removed.
> We usually access all columns of a row (hence compression on a per-row basis
> makes sense).
> Is there any way to compress data on a per-row basis? Maybe with
> a User-Defined type?
Ah, we only compress long row values, which integers would not be. I
don't see any way to compress an entire row even with a user-defined
type unless you put multiple values into a single column and compress
those as a single value. In fact, if you used an array or some special
data type it would become a long value and would be automatically compressed.
However, as integers, there would have to be a lot of duplicate values
before compression would be a win.
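A rough way to check that last point is to pack a row of integers into a
single binary value and compress it, with zlib standing in for PostgreSQL's
internal LZ compressor (the actual algorithm and ratios differ; this is
only a sketch of the principle):

```python
import struct
import zlib

def compressed_size(values):
    """Pack a list of ints into one binary blob (as an array or
    user-defined type column would be stored) and report the raw
    and zlib-compressed sizes."""
    raw = struct.pack(f"{len(values)}i", *values)
    return len(raw), len(zlib.compress(raw))

# A row of 400 distinct integers: little redundancy to exploit.
raw_d, comp_d = compressed_size(list(range(400)))

# A row dominated by repeated values: compresses far better.
raw_r, comp_r = compressed_size([7] * 400)

print(f"distinct: {raw_d} -> {comp_d} bytes")
print(f"repeated: {raw_r} -> {comp_r} bytes")
```

Both rows start at 1600 bytes (400 four-byte integers), but only the
highly repetitive one shrinks dramatically, which is why compression is
a win only when the integer data contains many duplicates.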
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073