Re: Are large objects well supported? Are they considered very stable to use?

From: "Cary O'Brien" <cobrien(at)Radix(dot)Net>
To: chris(dot)bitmead(at)bigfoot(dot)com
Cc: pgsql-hackers(at)hub(dot)org
Subject: Re: Are large objects well supported? Are they considered very stable to use?
Date: 1999-03-30 04:52:58
Message-ID: 199903300452.XAA05758@saltmine.radix.net
Lists: pgsql-hackers


I'd stay away from PostgreSQL large objects for now.

Two big problems:

1) Minimum size is 16K
2) They all end up in the same directory as your regular
tables.

If you need to store a lot of files in the 10-30K size range, I'd
suggest first trying the unix file system, but hash them into some
sort of subdirectory structure so you don't end up with too many
files in any one directory. 256 per directory is nice, so give each
file a 32 bit id, store the id and the key information in postgresql,
and when you need file 0x12345678, go to 12/34/56/12345678.txt. You
could be smarter about the hashing so the bins fill evenly. Either
way you can spread the load out over different file systems with
soft links.
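
For what it's worth, here's a minimal C sketch of that id-to-path
mapping; the id_to_path() name and the .txt suffix are just
placeholders for whatever you actually use:

    /* Sketch: map a 32 bit id to a hashed path like 12/34/56/12345678.txt */
    #include <stdio.h>
    #include <stdint.h>

    static void id_to_path(uint32_t id, char *buf, size_t len)
    {
        /* top three bytes become the subdirectory levels */
        snprintf(buf, len, "%02x/%02x/%02x/%08x.txt",
                 (id >> 24) & 0xff, (id >> 16) & 0xff,
                 (id >> 8) & 0xff, id);
    }

    int main(void)
    {
        char path[64];
        id_to_path(0x12345678u, path, sizeof path);
        printf("%s\n", path);   /* prints 12/34/56/12345678.txt */
        return 0;
    }

Each directory level then holds at most 256 entries, which is the
spread you want.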

If space is at a premium, and your files are compressible, you can
do what we did on one project: group the files into batches of,
say, about 32k (i.e. keep adding files till the aggregate gets over
32k), store start and end offsets for each file in the database, and
gzip each batch. gzip -d -c can tear through whatever your 32K compresses
down to pretty quickly, and a little bit of C or perl can discard the unwanted
leading part of the file pretty quickly too. You can store the batches
themselves hashed as described above.
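
A rough C sketch of the extraction side, assuming the start/end
offsets you stored are byte positions in the *uncompressed* batch;
the extract() helper and its arguments are made up for illustration:

    /* Sketch: pull one file out of a gzipped batch by streaming
     * "gzip -d -c" through popen() and skipping the leading bytes. */
    #include <stdio.h>

    static int extract(const char *batch_path, long start, long end, FILE *out)
    {
        char cmd[512];
        snprintf(cmd, sizeof cmd, "gzip -d -c %s", batch_path);
        FILE *in = popen(cmd, "r");
        if (!in)
            return -1;

        int c;
        long pos = 0;
        /* discard the unwanted leading part of the decompressed stream */
        while (pos < start && (c = getc(in)) != EOF)
            pos++;
        /* copy the bytes belonging to the requested file */
        while (pos < end && (c = getc(in)) != EOF) {
            putc(c, out);
            pos++;
        }
        pclose(in);
        return (pos == end) ? 0 : -1;
    }

Since the batches are only ~32K uncompressed, the skip-and-copy loop
is cheap even though it reads from the start of the batch every time.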

Have fun,
Drop me a line if I can help.
-- cary
cobrien(at)radix(dot)net
