
Re: Writing transactions from pgtksh

From: Andreas Kretzer <andi(at)kretzer-berlin(dot)de>
To: Postgres Interface List <pgsql-interfaces(at)postgresql(dot)org>
Subject: Re: Writing transactions from pgtksh
Date: 2002-04-09 08:24:25
Message-ID:
Lists: pgsql-interfaces
Denis Chavez wrote:

> Hi!
> I'm using pgtksh to develop a Postgres client application. I need to use
> transactions, but
> the Postgres Programmer's Guide just tells me to use large objects and
> doesn't give any
> example.
> Any idea on how to use this pg_lo* with transactions in pgtksh?

I'm using normal Tcl/Tk together with libpgtcl (or libpgtcl.dll under Windows).
It should effectively be the same thing.

The documentation states that you must enclose your large object operations
in a transaction -- and this is definitely true! So let's take a code fragment
that creates a large object from a client-local file ($db is my connection handle):

proc create_lobject {db file} {
   # let's open the file
   if {[catch {set fd [open $file r]}]} {
      puts stderr "couldn't open file '$file'"
      return -1
   }
   fconfigure $fd -translation binary -encoding binary -eofchar {}

   # now create the object
   set oid [pg_lo_creat $db "INV_READ|INV_WRITE"]
   if {[catch {set oid [expr {$oid * 1}]}]} {
      # not a numeric value - creation failed
      puts stderr "couldn't create large object"
      close $fd
      return -1
   }

   # start with a transaction and open the object
   set res [pg_exec $db "BEGIN TRANSACTION;"]
   pg_result $res -clear;    # don't care for the result - this should always work

   set lobj_fd [pg_lo_open $db $oid w]
   # one should check $lobj_fd for a successful open - I don't do this for now :-(

   # now sequentially read the file and write to the large object like
   # you would do with a regular file
   while {![eof $fd]} {
      set buf [read $fd 1024]
      set len [string length $buf];    # byte count, since the channel is binary
      pg_lo_write $db $lobj_fd $buf $len
   }

   # finish everything
   pg_lo_close $db $lobj_fd
   close $fd

   set res [pg_exec $db "COMMIT TRANSACTION;"]
   pg_result $res -clear

   return $oid
}

Remember to store the returned OID in some descriptive data set. The
large object can only be recovered by its OID. Reading works exactly
like writing - except that you don't create the object before opening it.
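For completeness, the reading side could look roughly like this. This is only a sketch following the same pattern as above (error handling omitted, and it assumes libpgtcl's `pg_lo_read conn fd bufVar len` form, which reads into the named variable and returns the number of bytes read):

```tcl
proc read_lobject {db oid file} {
   # open the client-local destination file in binary mode
   set fd [open $file w]
   fconfigure $fd -translation binary -encoding binary -eofchar {}

   # large object I/O must happen inside a transaction
   set res [pg_exec $db "BEGIN TRANSACTION;"]
   pg_result $res -clear

   # open the existing object by its OID - no creation step here
   set lobj_fd [pg_lo_open $db $oid r]

   # copy chunks from the large object to the file until nothing is left
   while {1} {
      set len [pg_lo_read $db $lobj_fd buf 1024]
      if {$len <= 0} break
      puts -nonewline $fd $buf
   }

   pg_lo_close $db $lobj_fd
   close $fd

   set res [pg_exec $db "COMMIT TRANSACTION;"]
   pg_result $res -clear
}
```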

ATTENTION: This works under Linux :-) but may give you some trouble
under Windows :-(
Under Windows you can't really import binary files, as they get re-encoded
every time. Even the 'fconfigure' command can't do anything against
it (I'm still searching for the reason ...). One consequence is that you
can't figure out the real length of what you read from the file, and
in addition 'buf' contains weird chars that don't make up the
real binary representation of the file (mainly because of some UTF-8
chars or whatever).

There are two other methods to read and write large objects:
pg_lo_import and pg_lo_export, which do all of the above in one
step. I encountered the same problems with binary files on Windows,
so I just don't use them anymore.
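For reference, the one-step variants look like this (a sketch; the file paths are just examples, and these calls also need to run inside a transaction):

```tcl
# import: client-local file -> new large object, returns the new OID
set res [pg_exec $db "BEGIN TRANSACTION;"]
pg_result $res -clear
set oid [pg_lo_import $db "/tmp/picture.jpg"]
set res [pg_exec $db "COMMIT TRANSACTION;"]
pg_result $res -clear

# export: existing large object -> client-local file
set res [pg_exec $db "BEGIN TRANSACTION;"]
pg_result $res -clear
pg_lo_export $db $oid "/tmp/copy.jpg"
set res [pg_exec $db "COMMIT TRANSACTION;"]
pg_result $res -clear
```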

Hope this helps

