From: "Poul L. Christiansen" <poulc(at)cs(dot)auc(dot)dk>
To: Thomas Weholt <thomas(at)cintra(dot)no>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Storing big chunks of text, variable length
Date: 2001-04-24 16:07:48
Message-ID: 3AE5A4D4.F03FBEB7@cs.auc.dk
Lists: pgsql-novice
The TEXT type in PostgreSQL 7.1 can hold up to 1GB of text and AFAIK
performs quite well.
Previous versions had a text limit of 8-32KB, so upgrade to 7.1 if you
haven't already.
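As a sketch of what this advice amounts to (the table and column names here are hypothetical, not from the thread), a chunked-document table using plain TEXT columns in 7.1 could look like:

```sql
-- Hypothetical schema: store each XML fragment as a TEXT value
-- (up to 1GB per value in PostgreSQL 7.1).
CREATE TABLE doc_chunks (
    doc_id   integer NOT NULL,  -- which document the chunk belongs to
    chunk_no integer NOT NULL,  -- position of the chunk within the document
    body     text    NOT NULL,  -- the chunk itself
    PRIMARY KEY (doc_id, chunk_no)
);

-- Rebuild a whole document, or fetch a subset of chunks, in order:
SELECT body FROM doc_chunks WHERE doc_id = 1 ORDER BY chunk_no;
```

The composite key keeps chunk retrieval an index scan, so pulling all or some chunks of one document stays cheap even when the table holds many documents.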
HTH,
Poul L. Christiansen
Thomas Weholt wrote:
>
> Hi,
>
> What would be the most efficient way, performance-wise, of storing a lot of
> rather big chunks of text in separate records in PostgreSQL? I'm dividing
> huge XML documents into smaller bits and placing the bits into separate
> records. Requests want all or just some of the records, and the document is
> re-built based on the request. So everything is heavily IO-based.
>
> What would be the best way to do this? Large objects, the binary blob
> feature of PostgreSQL, or something else?
>
> The chunks can be anything from a few lines to entire documents of
> several megabytes (OK, that's an extreme example, but still...).
>
> Best regards,
> Thomas
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
> http://www.postgresql.org/search.mpl