I've got a largish database which once a month churns out some
invoices. Once those invoices are created, there is zero business logic
reason to ever modify the underlying data. A few hundred thousand
database rows go into the building of each month's invoices. New data
is added all the time, and used at the end of each month.
I'm considering building a protective mechanism, and am seeking feedback
on the idea. The approach would be to add a new column named "ro" to
each table at the invoice level and below, then have a trigger deny any
write where "ro" is true, and probably raise a huge stink. As invoices
are mailed each month, all the supporting data would have "ro" set to true.
The idea is to protect years and years of archival data from an
inadvertent write (such as from an underspecified WHERE clause, or a
software bug). Ideally the mechanism would never be triggered. To
corrupt data would require two acts -- changing the "ro" column, then
issuing an update.
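For concreteness, here is a minimal sketch of what I have in mind, using a hypothetical "invoice_lines" table with an "id" primary key (the table and column names are placeholders, not my real schema):

```sql
-- Add the flag; new rows default to writable.
ALTER TABLE invoice_lines ADD COLUMN ro boolean NOT NULL DEFAULT false;

-- Trigger function: refuse any UPDATE or DELETE against a row
-- whose "ro" flag is already true.
CREATE OR REPLACE FUNCTION deny_ro_write() RETURNS trigger AS $$
BEGIN
    IF OLD.ro THEN
        RAISE EXCEPTION 'attempt to modify read-only archival row (id=%)', OLD.id;
    END IF;
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;   -- allow the delete of a still-writable row
    END IF;
    RETURN NEW;       -- allow the update of a still-writable row
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER invoice_lines_ro_guard
    BEFORE UPDATE OR DELETE ON invoice_lines
    FOR EACH ROW EXECUTE PROCEDURE deny_ro_write();
```

Note that because the check is on OLD.ro, even flipping "ro" back to false is itself blocked once the row is flagged; an accident would have to first disable or drop the trigger, which gives the two-act protection described above.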
I'm seeking feedback on the need, the approach, performance issues, and
any instances of core database support for such a concept. I do see an
Oracle feature that seems somewhat on target. I am using Postgres, in a
mostly database-independent manner.