On 12/13/07, Campbell, Lance <lance(at)uiuc(dot)edu> wrote:
> I am looking at the possibility of storing files in some of my database
> tables. My concern is obviously performance. I have configured PostgreSQL
> to take advantage of Linux file caching. So my PostgreSQL does not have a
> large setting for shared_buffers even though I have 24G of memory.
That was the recommended approach before 8.0. From 8.0 onward, it is
advantageous to give PostgreSQL a larger shared_buffers setting. You
should still leave some room for the kernel cache.
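As a rough illustration (the numbers here are assumptions for a
dedicated 24GB machine, not a recommendation), a post-8.0 configuration
might look like:

```ini
# postgresql.conf -- hypothetical sketch for a 24GB dedicated server.
# Give PostgreSQL a substantial buffer pool, but leave most of RAM
# to the kernel page cache, which effective_cache_size hints at.
shared_buffers = 4GB
effective_cache_size = 16GB
```

The right split depends on workload; the old advice of a tiny
shared_buffers mainly applied to the 7.x buffer manager.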
By "storing files", I assume you mean a lot of data imported from
files. The pros and cons of storing large amounts of data as
PostgreSQL tuples have been debated before; you might want to search
the list archives.
My opinion is that PostgreSQL is fine up to a point (let's say 10,000
tuples x 2KB), above which I would merely store references to
file-system objects. Managing these objects can be painful, especially
in a cluster of more than one machine, but at least access to them is
fast.
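The reference-only approach I describe can be sketched like this (the
table and column names are hypothetical):

```sql
-- Hypothetical sketch: store only a reference to a file-system object,
-- not the bytes themselves. The application reads/writes the file
-- directly and must keep the path and the row in sync.
CREATE TABLE attachments (
    id       serial PRIMARY KEY,
    relpath  text NOT NULL,   -- path relative to a shared storage root
    bytes    bigint,          -- size, denormalized for quick listings
    sha1     char(40)         -- optional checksum for integrity checks
);
```

The painful part is exactly that synchronization: deletes, backups and
multi-machine access all have to be handled outside the database.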
> What data type should I use for fields that hold files?
PostgreSQL has two ways of storing "large amounts of data" in a single
tuple: variable-length columns, and blobs.
Blobs are divided into chunks and stored in separate tables, one tuple
per chunk, indexed by offset, and PostgreSQL allows random access to
the data. The downside is that they take up more room, they're slower
to create, slower to read from end to end, and I believe there are
some operations (such as function expressions) that don't work on
them. Some replication products, including (the last time I looked)
Slony, do not support replicating blobs. Blobs are not deprecated, I
think, but they feel like they are.
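A minimal blob example, assuming a superuser and a file readable by
the server process (the table name is made up):

```sql
-- Hypothetical sketch: store a file as a large object (blob).
-- lo_import() runs server-side and returns the new object's OID,
-- which we keep in an ordinary column.
CREATE TABLE docs (id serial PRIMARY KEY, content oid);
INSERT INTO docs (content) VALUES (lo_import('/tmp/report.pdf'));
-- Random access goes through the lo_* API (lo_open, lo_lseek, loread).
-- Note: dropping the row does NOT delete the object; you must call
-- lo_unlink(content) yourself or the blob is orphaned.
```

That manual lo_unlink step is one of the management headaches I mean.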
Variable-length columns such as bytea and text support a system called
TOAST, which allows the first couple of kilobytes of the data to be
stored in-place in the tuple, and the overflow to be stored elsewhere
and potentially compressed. This system is much faster and tighter
than blobs, but does not offer random I/O.
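The bytea equivalent of the example above (again, names are
hypothetical; the file bytes would be bound as a parameter from the
client, since there is no general server-side "read this file into
bytea" for non-superusers):

```sql
-- Hypothetical sketch: the same file stored inline as bytea.
-- TOAST kicks in automatically once the value exceeds roughly 2KB:
-- it is compressed and/or moved out of line, with no schema change.
CREATE TABLE docs_inline (id serial PRIMARY KEY, content bytea);
INSERT INTO docs_inline (content) VALUES ($1);  -- bytes bound by client
-- The whole value comes back in one piece; there is no seek/read API:
SELECT octet_length(content) FROM docs_inline WHERE id = 1;
```

Because every read detoasts the full value, bytea suits
write-once/read-whole files, while blobs suit seek-heavy access.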
> Is there anything that I should be aware of when putting files into a field
> in a table?
Backup dumps will grow in proportion to the size of your data.
PostgreSQL is no speed demon at loading/storing data, so this might
turn out to be the Achilles' heel.
> When PostgreSQL accesses a table that has fields that contain files does it
> put the fields that contain the files into the shared_buffers memory area?
I believe so.