Chris Bitmead <chris(at)bitmead(dot)com> writes:
> Will anybody want to use this when TOAST comes to be?
I think it's worth doing --- existing users of large objects will
probably not want to move all their code overnight. The core developers
have mostly felt they had more pressing problems to work on, but if
someone wants to contribute a better implementation of large objects
I have no objection...
>> Attached is a patch which implements the following strategy for large object handling:
>> 1. There's new system table: pg_largeobject.
>> 2. All large objects are stored in files, not relations.
>> 3. Large objects are stored under $PGDATA/base/$DATABASE/lo in hashed directories.
>> The hashing density can be tuned in config.h.in.
>> 4. Searches in pg_largeobject always use an index scan.
However, that is the wrong way to go about it. The really fatal
objection is that you have given up transactional semantics for large
objects --- if you don't keep the data in a relation then how will you
roll back an aborted write? A lesser objection is that you are working
hard to create a poor substitute for indexing that Postgres already has
perfectly good mechanisms for. Having to tune a config parameter by
guessing how many LOs I will have doesn't strike me as attractive.
The approach that's been discussed in the past is to retain the existing
relation-based storage mechanism for large objects, but to combine all
the LOs of a database into one relation by adding an additional column
that is the LO identifier number. By indexing this single relation on
LO identifier + chunk number (two columns), access should be just as
fast as for any other scheme you might come up with.
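A minimal sketch of that single-relation scheme, in SQL. The table and column names here (and the LO id 12345) are illustrative, not taken from any actual patch:

```sql
-- One relation holds every large object in the database,
-- keyed by LO identifier plus chunk number.
CREATE TABLE pg_largeobject (
    loid   oid  NOT NULL,   -- large object identifier
    pageno int4 NOT NULL,   -- chunk number within the object
    data   bytea            -- the chunk itself
);

-- The two-column index Tom describes: any chunk of any LO is
-- reachable by an ordinary index scan, and because the data lives
-- in a regular relation, writes roll back with the transaction.
CREATE UNIQUE INDEX pg_largeobject_loid_pn_index
    ON pg_largeobject (loid, pageno);

-- Reading a byte range of one LO becomes an ordinary indexed query:
SELECT data
  FROM pg_largeobject
 WHERE loid = 12345 AND pageno BETWEEN 0 AND 3
 ORDER BY pageno;
```

Note how this sidesteps both objections above: no hashing parameter to tune, and transactional semantics come for free from ordinary relation storage.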
regards, tom lane
pgsql-patches by date