Re: creating CHECK constraints as NOT VALID

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: "Ross J(dot) Reedstrom" <reedstrm(at)rice(dot)edu>, Kevin Grittner <kevin(dot)grittner(at)wicourts(dot)gov>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: creating CHECK constraints as NOT VALID
Date: 2011-05-31 23:03:56
Message-ID: BANLkTi=N7DifwrK+imUFw8FVOgFgg_OdLg@mail.gmail.com
Lists: pgsql-hackers

On Tue, May 31, 2011 at 1:07 PM, Alvaro Herrera
<alvherre(at)commandprompt(dot)com> wrote:
> Excerpts from Ross J. Reedstrom's message of Tue May 31 14:02:04 -0400 2011:
>
>> Follows from one of the practical maxims of databases: "The data is
>> always dirty." Being able to have the constraints enforced at least for
>> new data allows you to fence off the bad data and have a shot at
>> fixing it all.
>
> Interesting point of view.  I have to admit that I didn't realize I was
> allowing that, even though I have wished for it in the past myself.
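
For reference, a minimal sketch of the workflow being discussed,
assuming the CHECK case ends up mirroring the existing foreign-key
NOT VALID syntax (table, column, and constraint names here are made up):

    -- Existing rows are not checked when the constraint is added;
    -- it is merely recorded as not validated.
    ALTER TABLE orders
        ADD CONSTRAINT orders_price_positive CHECK (price > 0) NOT VALID;

    -- New inserts and updates are checked from here on, fencing the
    -- bad data while the old rows get cleaned up.

    -- Once the existing data has been fixed, validate in one pass:
    ALTER TABLE orders VALIDATE CONSTRAINT orders_price_positive;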

What happens when there's bad data that a new transaction touches in
some minor way? For example, updating some other column of the row, or
just locking the row? What about things like CLUSTER or other table
rewrites?
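
To make that concrete, a hypothetical sequence against the same
made-up table (the "note" and "id" columns and the index name are
invented for illustration); whether these statements should be
rejected for a row that violates the not-yet-validated constraint is
exactly the question:

    -- Row 42 predates the constraint and has price <= 0.  Updating an
    -- unrelated column still produces a new row version: is the
    -- NOT VALID constraint enforced against it?
    UPDATE orders SET note = 'touched' WHERE id = 42;

    -- Operations that only lock or physically rewrite the row:
    SELECT * FROM orders WHERE id = 42 FOR UPDATE;
    CLUSTER orders USING orders_pkey;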

Also, I think NOT NULL constraints might be used by the join
elimination patch. Make sure it understands the "valid" flag so it
doesn't drop joins based on constraints that haven't been validated.
It would be nice to have this for unique constraints as well, which
would *definitely* need the planner to understand whether they're
valid or not.
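
As a sketch of why the planner needs to see validity (hypothetical
customers/orders tables): the LEFT JOIN below references no column of
c, so it can only be removed if c.id is provably unique; a unique
constraint that hasn't been validated gives no such guarantee.

    -- Join elimination candidate: safe to drop the join only if the
    -- uniqueness of customers.id can actually be trusted.
    SELECT o.id, o.price
    FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id;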

--
greg
