Re: Large Objects

From: Jeroen Schaap <jeroen(at)rulffk(dot)medfac(dot)leidenuniv(dot)nl>
To: pgsql-general(at)postgresql(dot)org
Cc: David Giffin <david(at)agent911(dot)com>
Subject: Re: Large Objects
Date: 1998-11-20 09:07:37
Message-ID: 36553159.F238ACE5@rulffk.medfac.leidenuniv.nl
Lists: pgsql-general

David,

> I'm working on integrating Large Objects into a database. I'm curious
> about a couple of different things. 1) Can you view large objects from the
> psql interface? 2) Is there a way to reference the large object from a
> standard query or do I have to create a method of running a query then
> opening the data from the large object? 3) Is there anyone that already
> has a working model of this in Perl?

1) What exactly do you mean by 'view'?
A large object is referenced by its OID (object ID). You can just dump
the object to a file and view the file. There is no MIME typing
(<blink><blink> any echo? :-) of objects, so no automatic viewing can be
done. Most of the time this really is no problem at all, since the
writer of the interface also creates the database and tables.
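For example, dumping an object to a server-side file so you can view it
looks something like this (table and column names are made up here;
substitute your own):

```sql
-- 'data' holds the oid of a large object; lo_export writes the
-- object's contents to a file on the server, which you can then view.
SELECT lo_export(data, '/tmp/myobject.dat')
FROM mytable
WHERE id = 1;
```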

2) There are lots of ways to grab a large object. I've had some trouble
with the proper Perl/C/libpq methods, so I use lo_import($file) and
lo_export($oid,$file) in the SQL queries. Then I open the file in Perl
and I do something with it. Does the job, and network portability is
implemented by NFS (:-<, but I didn't have the time at hand to do it
really right. This method works at least...).
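The import direction works the same way from plain SQL (again with
made-up table and column names):

```sql
-- lo_import reads a server-side file into a new large object and
-- returns its oid, which we store in the 'data' column.
INSERT INTO mytable (number, filenumber, data)
VALUES (1, 1, lo_import('/home/me/data'));
```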

3) Yes. At least there is me, but I'd guess there are a whole lot of
folks out there.

For good measure, here is a trimmed-down example (largely copied from
the pgperl manpage):

Enjoy, and any feedback welcome....

Jeroen

#! /usr/bin/perl

# ExtractFromFile
# Jeroen Schaap 7/10/98
#
# Perl script to extract ranges of data from database
# The database has been implemented in
# PostgreSQL. Here the extension Pg will be used.
#
# I will adopt the 'new style'. See man Pg
#

use Pg; # use postgres extension

#Some 'constants'.
$outputfile = $ARGV[0];
$number = $ARGV[1]; #Record number sought (was undefined below)
$filenumber = $ARGV[2]; #File number sought (was undefined below)
$tempfile = "/home/me/data"; #Standard filename for datafiles

#Connect to the database
$dbname = 'me';
$tablename_input = 'mytable'; # This contains the records sought for
$conn = Pg::connectdb("dbname=$dbname"); #Connects to $dbname
check_result(PGRES_CONNECTION_OK,$conn->status,"Connected to $dbname");

#Well, we may do our stuff now. That is, opening file to start with.
($numberofbins,$cyclelength,$pulsedelay,$thefile,@thedata)
= get_record($number,$filenumber,$tempfile);

#Do something with your data. The fun resides in the subs:

#Subroutine for checking status of postgres-command/connection. Just a
#copy of the manpage
sub check_result {

my $cmp = shift;
my $ret = shift;
my $msg = shift; #import the passed variables
my $errmsg;

if ("$cmp" eq "$ret") { #compare
print "$msg\n";
} else {
$errmsg = $conn->errorMessage;
print "Error trying\n\t$msg\nError\n\t$errmsg\n";
exit;
}
}

#Subroutine for getting the values from the database.
sub get_record{
my $number = shift;
my $filenumber = shift;
my $filename = shift;
my $excstr;
my @thedata;
my @output;

# Get those values from the table
$excstr="SELECT length, pulseperiod, pulsedelay, data".
" FROM $tablename_input".
" WHERE number=$number".
" AND filenumber=$filenumber;";
$result = $conn->exec($excstr);
check_result(PGRES_TUPLES_OK, $result->resultStatus,
"Fetched record from $tablename_input.");

#Check on the number of records, should be one.
if (($ntuples=$result->ntuples)){
#OK, we have an answer. Is it just one?
if ($ntuples>1) {
#Ohoh, double entries!!!
print "Multiple entries! It's recommended that you delete the
copies of".
" the original entry. \n".
" If you want to take the first entry, type \"yes\":";
$answer=<STDIN>; chop($answer);
($answer =~ /^yes/) || die "Multiple entries.".
" Repair database and try again.\n";
} else {
print "Encountered valid entry....";
}
} else { #SQL returned nothing at all!! Boo!
die "Couldn't find an entry $number - $filenumber. Bye!\n";
}

#Before opening file to write to, check if it possible at all!
print "Checking $filename for caching....";

until ( open(TMPFILE, ">$filename")){ #As long as it is not valid.....
print "Couldn't open $filename. Gimme another!!";
$answer=<STDIN>; chop($answer);
if (! ($answer =~ /^\//)){ #We only want filenames referenced from root,
#otherwise PostgreSQL cannot find that stuff!!
print "File must be referenced from / (root).";
} else { #We can write this file
$filename=$answer; #Keep $answer
}
} #We have a writable $filename now!
print "OK";
close (TMPFILE);

#Now get the values we were waiting for.
@output=($result->getvalue(0,0), $result->getvalue(0,1),
$result->getvalue(0,2),$filename);

##################################################
###### Here is the actual lo_export statement ####
##################################################

#It's time to get the data
$result=$conn->exec("SELECT lo_export(data, \'$filename\') from muaexp
".
"WHERE number=$number AND electrode=$electrode ".
"AND filenumber=$filenumber;") || #
die "Could not open our datafile!\n";
check_result(PGRES_TUPLES_OK, $result->resultStatus,
"Fetched datafile from $tablename_input.");
#Count the rows
open (TMPFILE, $filename);
while (<TMPFILE>){
push(@thedata,$_);
}
close(TMPFILE);

$_=@thedata; #number of rows read
push(@output,@thedata); #add rows of data to output
@output; #Return values to main routine
}

##################################################
##### The End ###################################
