At 01:36 PM 26-05-2000 -0400, Tom Lane wrote:
>Barry Lind <barry(at)xythos(dot)com> writes:
>> Does this also mean that if you are using large objects that you really
>> won't be able to store large numbers of large objects in a database?
>> (If I am correct, each large object creates two files, one for the large
>object and one for its index.)
Wow! For my webmail app that would be really bad; fortunately I went the
filesystem route, storing the actual emails on disk and keeping only the
path in the database.
In theory, if BLOBs were handled better, storing them in the database
would be quite nice, but right now BLOBs don't seem to be helpful.
>There's never been much enthusiasm among the core developers for large
>objects at all --- we see them as a poor substitute for allowing large
>values directly. (The "TOAST" work scheduled for 7.1 will finally
>resolve that issue, I hope.) So no one's felt like working on improving
>the large-object implementation.
On the practical side, say I want to insert or read a large amount of
information into or from a TOAST field. How should I do it?
Is there a pipe-style interface I can continuously write to or read from?
My worry is that if it's just an ordinary INSERT/SELECT, it will take a
lot of memory to insert or select big values.
So if lots of people are inserting or reading 1 MB email attachments at
the same time, it'll get nasty. For other apps with really big values it
could become unmanageable.
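For what it's worth, the existing large-object interface does already behave like a pipe: libpq exposes lo_open/lo_read/lo_write/lo_lseek, so a client can move data in fixed-size pieces instead of holding the whole value in memory. The chunking pattern I have in mind could be sketched like this (chunk_ranges is a hypothetical helper of my own, not any PostgreSQL API; the 64 kB chunk size is an arbitrary assumption):

```python
def chunk_ranges(total_bytes, chunk_size):
    """Yield (offset, length) pairs covering total_bytes in chunk_size pieces.

    These are the ranges a client could feed to successive lo_read()
    calls (or to SQL substring()) so that only one chunk of a big
    attachment is ever in memory at a time.
    """
    offset = 0
    while offset < total_bytes:
        length = min(chunk_size, total_bytes - offset)
        yield (offset, length)
        offset += length

# Example: a 1 MB email attachment read in 64 kB pieces
ranges = list(chunk_ranges(1000000, 64 * 1024))
```

Each (offset, length) pair would drive one read call in a loop, writing the chunk straight out to the client, so peak memory stays at one chunk regardless of how many users are pulling attachments at once.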