On Tue, May 31, 2011 at 7:03 PM, Greg Stark <gsstark(at)mit(dot)edu> wrote:
> On Tue, May 31, 2011 at 1:07 PM, Alvaro Herrera
> <alvherre(at)commandprompt(dot)com> wrote:
>> Excerpts from Ross J. Reedstrom's message of mar may 31 14:02:04 -0400 2011:
>>> Follows from one of the practical maxims of databases: "The data is
>>> always dirty." Being able to have the constraints enforced at least for
>>> new data allows you to at least fence the bad data, and have a shot at
>>> fixing it all.
>> Interesting point of view. I have to admit that I didn't realize I was
>> allowing that, even though I have wished for it in the past myself.
> What happens when there's bad data that the new transaction touches in
> some minor way? For example updating some other column of the row or
> just locking the row?
Updating some other column should fail unless the constraint is
satisfied for the resulting row, I think. The rule should be simple
and easy to understand: old row versions aren't checked, but new
ones must satisfy all constraints, whether validated or not.

There's no question that this feature has a certain amount of foot-gun
potential. But it's also really useful. And there are plenty of
people who know how to use a gun safely, without shooting themselves
in the foot. We shouldn't aim for the lowest common denominator.
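To make the rule concrete, here's a sketch of the intended semantics
using the NOT VALID syntax under discussion (table and constraint
names are made up for illustration; this assumes CHECK constraints
support NOT VALID, as they eventually did):

```sql
-- Hypothetical table with a pre-existing "dirty" row.
CREATE TABLE orders (id int PRIMARY KEY, qty int);
INSERT INTO orders VALUES (1, -5);

-- Add the constraint without scanning existing rows:
ALTER TABLE orders
  ADD CONSTRAINT qty_positive CHECK (qty > 0) NOT VALID;

-- Old row versions are left alone, but every new or updated row
-- must satisfy the constraint:
INSERT INTO orders VALUES (2, -1);        -- fails: violates qty_positive
UPDATE orders SET id = 10 WHERE id = 1;   -- fails too: resulting row still has qty = -5

-- Once the bad data has been repaired, validate in a single pass:
UPDATE orders SET qty = 5 WHERE qty <= 0;
ALTER TABLE orders VALIDATE CONSTRAINT qty_positive;
```

The point of the two failing statements is that the check applies to
the resulting row, not to whether the row being touched predates the
constraint.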
> What about things like cluster or table

> Also I think NOT NULL might be used in the join elimination patch.
> Make sure it understands the "valid" flag and doesn't drop joins that
> aren't needed. It would be nice to have this for unique constraints as
> well which would *definitely* need to have the planner understand
> whether they're valid or not.