> We have an application that is supposed to sign documents and store them
I developed a relatively simple "file archive" with PostgreSQL (a web
application with JSF for the user interface). The main structure is one
table with some "keyword" fields and three blob fields (because exactly
three files belong to one record). I have to deal with millions of files
(95% are about 2-5 KB, 5% are larger than 1 MB).
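A minimal sketch of such a table (all table and column names here are assumptions; the post does not give the actual schema, and "blob fields" in PostgreSQL are typically bytea columns) might look like:

```sql
-- Hypothetical schema sketch: one record = some keyword fields plus
-- exactly three binary documents stored inline as bytea.
CREATE TABLE document_archive (
    id         serial PRIMARY KEY,
    keyword1   text,
    keyword2   text,
    keyword3   text,
    file1      bytea NOT NULL,   -- e.g. the signed document
    file2      bytea NOT NULL,   -- e.g. the detached signature
    file3      bytea NOT NULL,   -- e.g. accompanying metadata
    created_at timestamptz DEFAULT now()
);

-- Plain b-tree indexes on the keyword fields keep the web searches
-- fast as the table grows into millions of rows.
CREATE INDEX ON document_archive (keyword1);
CREATE INDEX ON document_archive (keyword2);
```

With the files inline in the row, a search plus fetch is a single indexed query and never touches the file system.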
The great advantage is that I don't have to "communicate" with the file
system (try to open a directory with 300T files on a Windows system...
it's horrible, even on the command line).
The database is now 12 GB, but searches through the web interface take
at most 5 seconds (most are faster). The one disadvantage is the backup
(I use pg_dump once a week, which takes about 10 hours). For now this is
acceptable to me, but I want to look at Slony or port everything to a
Linux machine.
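For the weekly backup, pg_dump's custom format compresses the dump and lets pg_restore restore tables selectively, which may help with a 10-hour window (a generic sketch, not the poster's actual command; the database name and paths are made up):

```shell
# Custom-format dump (-Fc): compressed on the fly, and restorable
# table-by-table or in parallel with pg_restore.
pg_dump -Fc -f /backup/archive.dump archive_db

# Restore, e.g. after porting to the Linux machine:
pg_restore -d archive_db /backup/archive.dump
```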
pgsql-performance by date