
Re: Large objects performance

From: Ulrich Cech <ulrich-news(at)cech-privat(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large objects performance
Date: 2007-04-21 07:27:03
Message-ID: 4629BCC7.1090900@cech-privat.de
Lists: pgsql-performance
Hello Alexandre,

> We have an application that is supposed to sign documents and store
> them somewhere.

I developed a relatively simple "file archive" with PostgreSQL (a web 
application with JSF for the user interface). The main structure is one 
table with some "key word fields" and three blob fields (because exactly 
three files belong to one record). I have to deal with millions of files 
(95% are about 2-5 KB, 5% are larger than 1 MB).
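Such a table could be sketched roughly as follows. The table and column 
names here are my own illustration, not the actual schema, and I'm 
assuming the blobs are stored as bytea columns:

```sql
-- Illustrative sketch only: all names and types are assumptions.
CREATE TABLE file_archive (
    id        serial PRIMARY KEY,
    keyword1  text,    -- "key word fields" used for searching
    keyword2  text,
    file1     bytea,   -- exactly three files belong to one record
    file2     bytea,
    file3     bytea
);

-- Indexing the key word fields keeps searches fast as the table grows.
CREATE INDEX file_archive_keyword1_idx ON file_archive (keyword1);
```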
The great advantage is that I don't have to talk to the file system (try 
opening a directory with 300,000 files on a Windows system... it's 
horrible, even on the command line).

The database is now 12 GB, but a search through the web interface takes 
at most 5 seconds (most searches are faster). The one disadvantage is 
the backup (I use pg_dump once a week, which takes about 10 hours). For 
now this is acceptable for me, but I want to look at Slony or port 
everything to a Linux machine.
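A weekly dump like the one described could be scheduled roughly like 
this. The database name, output path, and schedule below are 
assumptions for illustration, not taken from the post:

```
# Hypothetical crontab entry: Sundays at 02:00, dump the archive database.
# -Fc writes pg_dump's compressed custom format, which pg_restore can
# restore selectively and which compresses the blob data on disk.
0 2 * * 0  pg_dump -Fc -f /backup/archive.dump archive_db
```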

Ulrich

