From: "Roland Giesler" <roland(at)giesler(dot)za(dot)net>
To: "'Rainer Bauer'" <usenet(at)munnin(dot)com>, <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Large Objects in table
Date: 2005-11-20 06:22:08
Message-ID: TAXNET01XRPmcarcR4Z000000cc@frontdoor.taxpoint.co.za
Lists: pgsql-novice
Rainer Bauer wrote:
> How big are their reports?
Relatively small, I'd say about 100K, 500K maybe at the most.
> If they are not too big, why don't you just use "bytea"?
> See chapter "8.4. Binary Data Types"
I'm using Druid to design and maintain my database, and pg complains when I
try to use bytea (Druid generates a script). I've also now read about the
TEXT type. It seems that (by using TOAST technology) TEXT columns can hold
up to 1GB of data, but I'm not so sure how that works, especially if the
report is some or other binary object (possibly a Java file, or at least
non-text (binary) data).
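For what it's worth, here is a minimal sketch of what the bytea route might look like (the table and column names are made up for illustration, not from any existing schema):

```sql
-- Hypothetical table storing each report inline as binary data.
-- PostgreSQL automatically moves bytea values larger than roughly
-- 2KB into the TOAST table, up to the 1GB per-value limit, so
-- reports of 100K-500K need no special handling.
CREATE TABLE reports (
    id       serial PRIMARY KEY,
    filename text   NOT NULL,
    data     bytea  NOT NULL
);
```

From a client API the binary content would normally be passed as a bound parameter (e.g. libpq's PQexecParams with a binary format) rather than escaped by hand into the SQL string.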
> PS: I was struggling along the same lines, because when
> starting with postgres the documentation seems to suggest to
> use the lo type for BLOB's (Chapter 28.
> Large Objects). But I really could not find any reason to use
> them if the objects are small (up to 200 KB).
Yes, I find the documentation unclear on this. Even the FAQ on "Large
Objects" is too brief to be of real value, imho.
> Can any experienced Postgres user perhaps shed some light on this?
Would be great, yes.
Roland