From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Thomas Reiss <thomas(dot)reiss(at)dalibo(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Casting issues with domains
Date: 2014-12-10 23:23:46
Message-ID: 1702807885.4979365.1418253826765.JavaMail.yahoo@jws100136.mail.ne1.yahoo.com
Lists: pgsql-hackers
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Kevin Grittner <kgrittn(at)ymail(dot)com> writes:
>> It's kinda hard for me to visualize where it makes sense to define
>> the original table column as the bare type but use a domain when
>> referencing it in the view.
>
> As far as that goes, I think the OP was unhappy about the performance
> of the information_schema views, which in our implementation do exactly
> that so that the exposed types of the view columns conform to the SQL
> standard, even though the underlying catalogs use PG-centric types.
>
> I don't believe that that's the only reason why the performance of the
> information_schema views tends to be sucky, but it's certainly a reason.
Is that schema too much of an edge case to justify some expression
indexes on the cast values in the underlying catalogs?  (I'm
inclined to think so, but it seemed like a question worth putting
out there....)
Or, since these particular domains are known, is there any sane way
to "special-case" these to allow the underlying types to work?
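
For illustration only (not from the thread): the information_schema
views cast catalog columns to SQL-standard domains, and the index idea
would mean indexing that cast expression.  A sketch, using a
hypothetical ordinary table "my_catalog" with a "name_col" column,
since user-created indexes on the real system catalogs are not
ordinarily allowed:

```sql
-- The information_schema views expose catalog columns through
-- SQL-standard domains, roughly like this:
--   SELECT c.relname::information_schema.sql_identifier AS table_name
--   FROM pg_class c;

-- The expression-index idea applied to the cast value
-- (hypothetical table and column names; note the doubled
-- parentheses required around an indexed expression):
CREATE INDEX my_catalog_name_cast_idx
    ON my_catalog ((name_col::information_schema.sql_identifier));
```

Whether the planner could then match the views' cast expressions to
such an index is exactly the open question above.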
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company