From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew McMillan <andrew(at)catalyst(dot)net(dot)nz>
Cc: Aarmel <pgadmin(at)animated(dot)net(dot)au>, "pgsql-novice(at)postgresql(dot)org" <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Max Tuple Size
Date: 2001-04-05 14:38:34
Message-ID: 16655.986481514@sss.pgh.pa.us
Lists: pgsql-novice

Andrew McMillan <andrew(at)catalyst(dot)net(dot)nz> writes:
> In 7.0.3 you can use LZTEXT, which gives good compression for most
> strings; fitting 50k in with that should be no problem if it is
> English (or other) language text. I've successfully stuffed over 200k
> into an LZTEXT field when it is especially compressible.
That seems overly optimistic to me; I'd not expect LZTEXT to give more
than about a factor of 2 compression on average.
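(Editor's note: the gap between these two estimates is easy to demonstrate. LZTEXT used PostgreSQL's internal PGLZ compressor, which is not exposed to Python, so the sketch below uses zlib as a stand-in for an LZ-family algorithm; the exact ratios are illustrative assumptions, not PGLZ measurements.)

```python
import zlib

# Assumption: zlib stands in for PGLZ; both are LZ-family compressors,
# but their exact ratios differ.
english = (b"In 7.0.3 the LZTEXT type compresses its contents transparently, "
           b"so how much text fits in a tuple depends on how compressible "
           b"the text is rather than on its raw length alone. Ordinary prose "
           b"has only modest redundancy for an LZ compressor to exploit.")
repetitive = b"spam and eggs " * 4000   # ~56K of highly repetitive data

def ratio(data: bytes) -> float:
    """Original size divided by zlib-compressed size."""
    return len(data) / len(zlib.compress(data))

print(f"ordinary prose: {ratio(english):.1f}x")    # modest, roughly 1-2x
print(f"repetitive data: {ratio(repetitive):.0f}x")  # dramatic, 100x or more
```

Ordinary prose compresses only modestly, consistent with Tom's factor-of-2 estimate, while highly repetitive data shrinks by orders of magnitude, which is how 200k of "especially compressible" text could fit under the old tuple-size limit.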
> In 7.1 the limit is increased through arcane magic to (I think) around
> 2GB, possibly more, if you can make assumptions like "it won't be
> indexed". Even in 7.0.3 you can only index fields up to blocksize/3.
Just for the record, the hard upper limit on field size in 7.1 is 1GB
(in practice you probably don't want to go past a few megabytes).
As Andrew says, if the data is to be indexed then the limit is lower,
since btree still has a one-third-page record-size limit. However, that
limit applies after LZ compression, so in practice you could index fields
with widths up to perhaps 2/3 blocksize --- say 20K if you set BLCKSZ = 32K.
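(Editor's note: the arithmetic behind that "say 20K" figure can be sketched as follows, assuming the roughly 2x average compression factor mentioned above.)

```python
# btree caps each index record at one third of a page, but the cap applies
# to the post-compression size, so the raw text can be larger.
BLCKSZ = 32 * 1024            # compile-time block size, here 32K
btree_limit = BLCKSZ // 3     # post-compression record-size cap (~10.9K)
compression = 2               # assumed average LZ compression factor
max_indexable = btree_limit * compression  # ~21.8K of raw text, i.e. "say 20K"

print(btree_limit, max_indexable)
```

With the default BLCKSZ of 8K the same arithmetic gives a cap of roughly 5K of raw text per indexed field.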
regards, tom lane