Re: Shell Script Execution

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jerry Thompson <jthomp_mp(at)accessus(dot)net>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Shell Script Execution
Date: 2002-04-25 17:14:27
Message-ID: 2970.1019754867@sss.pgh.pa.us
Lists: pgsql-novice

Jerry Thompson <jthomp_mp(at)accessus(dot)net> writes:
> Is it possible to fire off a system-level shell script based upon a
> table trigger? I have a table that will be accessed very infrequently.
> But when that happens, I want to be able to run a script that will
> update some files on my system.

You could use untrusted plperl or pltcl to fire off such a script,
or there is Peter Eisentraut's plsh hack (check archives for a URL,
I don't recall it offhand).
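
As an illustration of the first approach, here is a minimal sketch of a trigger function in untrusted plperl (plperlu) that shells out after a row change. The function and script names are invented for the example, and this assumes plperlu has been installed in the database:

```sql
-- Sketch only: assumes CREATE LANGUAGE plperlu has been run,
-- and /usr/local/bin/update_files.sh is your own script.
CREATE FUNCTION run_update_script() RETURNS trigger AS '
    system("/usr/local/bin/update_files.sh");
    return;
' LANGUAGE plperlu;

CREATE TRIGGER mytable_update_files
    AFTER INSERT OR UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE run_update_script();
```

Note that the script runs immediately, inside the transaction, which is exactly the rollback hazard described below.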

But the reason this isn't discussed much is that it's usually a bad
idea. If the transaction that updated the table later rolls back
due to an error, your external files are now out of sync with the
database contents. This is Not Good.

If you really need derived files, one way to handle them is to send
a NOTIFY from a trigger or rule to a waiting background process
that'll read the tables and update the files. With this method,
nothing happens to the files until you commit. There are still risks
of being out of sync if your update process fails for some reason ...
but the possible failure mode is "my outside files are out of date",
not "my outside files contain bogus data that never got committed at all".
This is usually not so bad. It's also easy to recover from: you just
force a run of the update process.
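
The NOTIFY approach might look roughly like this; the trigger function, trigger, and notification name are all illustrative:

```sql
-- Sketch only: the trigger merely signals; no files are touched here.
-- NOTIFY is delivered to listeners only when the transaction commits.
CREATE FUNCTION notify_file_updater() RETURNS trigger AS '
BEGIN
    NOTIFY file_update;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER mytable_notify
    AFTER INSERT OR UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE notify_file_updater();

-- The background daemon connects, issues LISTEN file_update; and
-- regenerates the derived files each time a notification arrives.
```

Duplicate notifications from the same transaction are collapsed, so a multi-row update still wakes the daemon only once per commit.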

regards, tom lane
