Re: Large objects performance

From: Ulrich Cech <ulrich-news2(at)cech-privat(dot)de>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large objects performance
Date: 2007-04-22 09:01:19
Message-ID: 462B245F.9030100@cech-privat.de
Lists: pgsql-performance

Hello Alexandre,

> We have an application supposed to sign documents and store them
> somewhere.

I developed a relatively simple "file archive" with PostgreSQL (a web
application with JSF for the user interface). The main structure is one
table with some "key word" fields and 3 blob fields (because exactly 3
files belong to one record). I have to deal with millions of files (95%
are about 2-5 KB, 5% are larger than 1 MB).
The great advantage is that I don't have to "communicate" with the file
system (try to open a directory with 300,000 files on a Windows
system... it's horrible, even on the command line).
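A minimal sketch of such a one-table layout (the actual table and column
names aren't given in the thread, so these are illustrative; I'm also
assuming bytea columns rather than PostgreSQL large objects):

```sql
-- Hypothetical schema: one row per record, three bytea columns for the
-- three files that belong together. All names are made up for the sketch.
CREATE TABLE file_archive (
    id        serial PRIMARY KEY,
    customer  text NOT NULL,      -- example "key word" field
    doc_date  date NOT NULL,      -- example "key word" field
    keyword   text,               -- example "key word" field
    file_a    bytea,              -- file 1 of the record
    file_b    bytea,              -- file 2 of the record
    file_c    bytea               -- file 3 of the record
);

-- Indexes on the key-word fields keep searches fast even with
-- millions of rows; the blob columns themselves are never indexed.
CREATE INDEX file_archive_customer_idx ON file_archive (customer);
CREATE INDEX file_archive_doc_date_idx ON file_archive (doc_date);
```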

The database is now 12 GB, but searching through the web interface takes
at most 5 seconds (most searches are faster). The one disadvantage is
the backup (I use pg_dump once a week, which needs about 10 hours). For
now this is acceptable to me, but I want to look at Slony or port
everything to a Linux machine.
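One reason searches stay fast despite the blob columns: a search only
touches the indexed key-word fields, and the (potentially large) files
are fetched for a single chosen row. A sketch, assuming a hypothetical
table like file_archive(customer, doc_date, file_a, file_b, file_c):

```sql
-- Search by indexed key words only; the blob columns are not read here,
-- so the result set stays small and cheap.
SELECT id, customer, doc_date
FROM file_archive
WHERE customer = 'ACME'
  AND doc_date >= DATE '2007-01-01';

-- Fetch the large columns only for the one record the user opens.
SELECT file_a, file_b, file_c
FROM file_archive
WHERE id = 12345;
```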

Ulrich
