
computing and updating the size of a table with large objects

From: "Marco Bizzarri" <marco(dot)bizzarri(at)gmail(dot)com>
To: PgSQL-General <pgsql-general(at)postgresql(dot)org>
Subject: computing and updating the size of a table with large objects
Date: 2007-08-31 16:54:32
Message-ID: 3f0d61c40708310954h7fcfdf4ajed2637abb9f8817@mail.gmail.com
Lists: pgsql-general
Hi all.

I have a table with large objects; here is its definition:

PAFlow-emmebi=# \d documenti
           Table "public.documenti"
 Column  |         Type          | Modifiers
---------+-----------------------+--------------
 id_doc  | character varying(50) | not null
 file    | oid                   |
 kind    | integer               | not null
 size    | bigint                |
Indexes:
    "documenti_pkey" PRIMARY KEY, btree (id_doc, kind)

I need to populate the size attribute with the size of the large object
referenced by the file column.

My first attempt was:

SELECT  lo_lseek(lo_open(file, 262144), 0, 2) AS size FROM documenti ;

This opens the large object (262144 is the INV_READ flag) and passes the
descriptor to lo_lseek, which seeks to the end (whence = 2, i.e. SEEK_END);
the offset it returns is the size.

Now I could wrap this in an UPDATE to do the job. My worry is that the
update could exhaust resources (the documenti table is quite large), since
every lo_open() leaves its descriptor open until the end of the
transaction.
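One way to sidestep descriptor handling entirely, assuming the connected
role may read the pg_largeobject system catalog (true for superusers;
access can be restricted in some setups), is to sum the stored page chunks
directly. This is only a sketch, untested against the schema above:

    -- Sketch: compute each object's size from the pg_largeobject
    -- catalog instead of opening a descriptor per row.
    UPDATE documenti d
    SET size = (SELECT sum(octet_length(p.data))
                FROM pg_largeobject p
                WHERE p.loid = d.file);

This touches no large object descriptors at all, at the cost of scanning
the catalog once per row.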

I thought I could use something like this:

SELECT  lo_lseek(lo_open(file, 262144), 0, 2) AS size, lo_close(0)
FROM documenti ;

which is quite nasty, since it relies on the side effects happening in the
proper order, but it keeps only one descriptor open for the whole query.
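An alternative that avoids depending on evaluation order would be to wrap
the open/seek/close sequence in a small server-side function. A minimal
PL/pgSQL sketch (the name lo_size is my own invention, not a built-in):

    -- Hypothetical helper: returns the byte length of a large object.
    -- Each call opens exactly one descriptor and closes it immediately.
    CREATE OR REPLACE FUNCTION lo_size(obj oid) RETURNS bigint AS $$
    DECLARE
        fd integer;
        sz bigint;
    BEGIN
        fd := lo_open(obj, 262144);  -- 262144 = INV_READ
        sz := lo_lseek(fd, 0, 2);    -- whence 2 = SEEK_END
        PERFORM lo_close(fd);
        RETURN sz;
    END;
    $$ LANGUAGE plpgsql;

    -- The update then becomes:
    -- UPDATE documenti SET size = lo_size(file);

Since each call closes its own descriptor, the UPDATE never holds more
than one open at a time, regardless of table size.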

Does anyone have any other suggestions?

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/
