Re: Inspection of row types in pl/pgsql and pl/sql

From: "Florian G(dot) Pflug" <fgp(at)phlo(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, Postgresql-Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Inspection of row types in pl/pgsql and pl/sql
Date: 2009-11-14 20:23:24
Message-ID: 4AFF11BC.8050200@phlo.org
Lists: pgsql-hackers

Tom Lane wrote:
> "Florian G. Pflug" <fgp(at)phlo(dot)org> writes:
>> Tom Lane wrote:
>>> Perhaps it would help if we looked at some specific use-cases
>>> that people need, rather than debating abstractly. What do you
>>> need your generic trigger to *do*?
>
>> I need to build a global index table of all values of a certain
>> type together with a pointer to the row and table that contains
>> them. Since all involved tables have an "id" column, storing that
>> pointer is the easy part. The hard part is collecting all those
>> values in an insert/update/delete trigger so that I can update the
>> global index accordingly.
>
> So in this case it seems like you don't actually need any
> polymorphism at all; the target columns are always of a known
> datatype. You just don't want to commit to their names. I wonder
> though why you're willing to pin down the name of the "id" column but
> not the name of the data column.

There might be more than one column (or none at all) of the type to be
indexed. I need to process all such columns (each of them produces a
separate record in the index table). Plus, this schema is relatively
volatile - new fields are added about once a month or so.

>> Currently, a set of plpgsql functions generate a separate trigger
>> function for each table. Yuck!
>
> Would you be happy with an approach similar to what Andrew mentioned,
> ie, you generate CREATE TRIGGER commands that list the names of the
> target column(s) as TG_ARGV arguments? The alternative to that seems
> to be that you iterate at runtime through all the table columns to
> see which ones are of the desired type. Which might be less trouble
> to set up, but the performance penalty of figuring out
> basically-unchanging information again on every single tuple update
> seems awful high.

Hm.. I had hoped to get away without any need to modify the trigger
definitions if the schema changes. But having a function that does "DROP
TRIGGER; CREATE TRIGGER..." is already a huge improvement over having
one that does "CREATE FUNCTION...".
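For illustration, the generated pair might look roughly like this (table,
trigger and column names are just placeholders here, and the trigger
function itself would stay shared across all tables):

    DROP TRIGGER IF EXISTS maintain_global_index ON some_table;
    CREATE TRIGGER maintain_global_index
        AFTER INSERT OR UPDATE OR DELETE ON some_table
        FOR EACH ROW
        EXECUTE PROCEDURE maintain_global_index_trigger('indexed_col1', 'indexed_col2');

Inside the shared trigger function the column names then show up as
TG_ARGV[0], TG_ARGV[1], ..., with TG_NARGS giving their count.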

I've now played around with the
    EXECUTE 'SELECT $1.' || quote_ident(fieldname) USING NEW/OLD
trick, and simply look up the matching fields with
    SELECT attname
    FROM pg_attribute
    WHERE
        attrelid = TG_RELID AND
        atttypid IN (...) AND
        attname NOT IN ('referenced_by', 'self') AND
        attnum > 0 AND NOT attisdropped
This at least gives me a working proof-of-concept implementation of the
trigger.
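
Roughly, the proof-of-concept body looks like the sketch below (the
function name and the ::text cast, which only serves to keep the sketch
type-agnostic, are placeholders; the type oid list is elided as above, and
the actual maintenance of the global index table is only hinted at in
comments):

    CREATE OR REPLACE FUNCTION maintain_global_index_trigger() RETURNS trigger AS $body$
    DECLARE
        col name;
        val text;
    BEGIN
        FOR col IN
            SELECT attname
            FROM pg_attribute
            WHERE attrelid = TG_RELID AND
                  atttypid IN (...) AND
                  attname NOT IN ('referenced_by', 'self') AND
                  attnum > 0 AND NOT attisdropped
        LOOP
            IF TG_OP IN ('UPDATE', 'DELETE') THEN
                -- fetch the old value and drop its entry from the global index
                EXECUTE 'SELECT ($1.' || quote_ident(col) || ')::text' INTO val USING OLD;
            END IF;
            IF TG_OP IN ('INSERT', 'UPDATE') THEN
                -- fetch the new value and insert/update its entry in the global index
                EXECUTE 'SELECT ($1.' || quote_ident(col) || ')::text' INTO val USING NEW;
            END IF;
        END LOOP;
        RETURN NULL;
    END;
    $body$ LANGUAGE plpgsql;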

Still, doing that SELECT seems rather silly since NEW and OLD already
contain the required information. So I still believe that having
something like record_name() and record_types() would be useful. And at
least these functions have less of an issue with the type system...
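
Just to sketch what I have in mind (these functions do not exist; names,
signatures and return types are pure speculation on my part), the catalog
query above could then be replaced by something along the lines of:

    -- hypothetical: record_names(rec) returning name[] and
    -- record_types(rec) returning the matching type oids as oid[]
    FOR i IN 1 .. array_upper(record_names(NEW), 1) LOOP
        IF record_types(NEW)[i] = <oid of the indexed type> THEN
            EXECUTE 'SELECT ($1.' || quote_ident(record_names(NEW)[i]) || ')::text'
                INTO val USING NEW;
        END IF;
    END LOOP;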

best regards,
Florian Pflug
