
pgsql BLOB issues

From: Jeremy Andrus <jeremy(at)jeremya(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: pgsql BLOB issues
Date: 2003-04-28 02:30:23
Message-ID: 200304272230.23583@jeremy-rokks
Lists: pgsql-performance
Hello,

  I have a database that contains a large amount of Large Object data 
(>500MB). I am using this database to store images for an e-commerce 
website, so I have a simple accessor script written in Perl to dump out 
a blob based on a virtual 'path' stored in a table (and associated with 
the large object's OID). This system seemed to work wonderfully until I 
put more than ~500MB of binary data into the database. 
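
  For concreteness, the accessor boils down to something along these 
lines. This is only a simplified sketch, not the exact script: the table 
name, column names, and connection details are placeholders, and it uses 
the pg_lo_* large-object methods of newer DBD::Pg (older releases exposed 
the same libpq calls through $dbh->func). Large-object calls have to run 
inside a transaction, hence AutoCommit is off.

#!/usr/bin/perl
# Simplified accessor sketch -- 'images', 'path', 'image_oid', and the
# connection parameters are placeholders, not the real names.
use strict;
use warnings;
use DBI;

my $vpath = shift or die "usage: $0 <virtual-path>\n";

# Large-object operations only work inside a transaction.
my $dbh = DBI->connect('dbi:Pg:dbname=shop', 'user', 'pass',
                       { AutoCommit => 0, RaiseError => 1 });

# Map the virtual path to the large object's OID.
my ($oid) = $dbh->selectrow_array(
    'SELECT image_oid FROM images WHERE path = ?', undef, $vpath);
die "no image stored for $vpath\n" unless defined $oid;

# Open the large object read-only and stream it to stdout in chunks.
my $fd = $dbh->pg_lo_open($oid, $dbh->{pg_INV_READ});
binmode STDOUT;
my ($buf, $n);
while (($n = $dbh->pg_lo_read($fd, $buf, 16384)) > 0) {
    print substr($buf, 0, $n);
}
$dbh->pg_lo_close($fd);
$dbh->commit;
$dbh->disconnect;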

  Now, every time I run the accessor script (via the web OR the command 
line), the postmaster process gobbles up my CPU resources (usually >30% 
for a single process - and it's a 1GHz processor with 1GB of RAM!), and 
the script takes a very long time to completely dump out the data.

  I have the same issue with an import script that reads files from the 
hard drive and puts them into Large Objects in the database. Importing 
now takes a very long time, whereas before it ran extremely fast. 
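
  The import side is equally simple. Again as a rough sketch with 
placeholder names (same caveats as above), it just creates a large object 
from each file and records its OID against the virtual path:

#!/usr/bin/perl
# Simplified import sketch -- table/column names and connection details
# are placeholders, not the real ones.
use strict;
use warnings;
use DBI;

my ($vpath, $file) = @ARGV;
die "usage: $0 <virtual-path> <file>\n" unless defined $file;

my $dbh = DBI->connect('dbi:Pg:dbname=shop', 'user', 'pass',
                       { AutoCommit => 0, RaiseError => 1 });

# pg_lo_import reads the local file, creates a new large object holding
# its contents, and returns the new object's OID.
my $oid = $dbh->pg_lo_import($file);
die "large-object import of $file failed\n" unless defined $oid;

# Record the path -> OID mapping so the accessor script can find it.
$dbh->do('INSERT INTO images (path, image_oid) VALUES (?, ?)',
         undef, $vpath, $oid);

$dbh->commit;
$dbh->disconnect;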

  Are there any known issues in PostgreSQL involving databases with a 
lot of binary data? I am using PostgreSQL v7.2.3 on a Linux system.

Thanks,

	-Jeremy

-- 
------------------------
Jeremy C. Andrus
http://www.jeremya.com/
------------------------

