Re: Bug in UTF8-Validation Code?

From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Mark Dilger <pgsql(at)markdilger(dot)com>, Albe Laurenz <all(at)adv(dot)magwien(dot)gv(dot)at>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bug in UTF8-Validation Code?
Date: 2007-04-04 15:21:41
Message-ID: 20070404152141.GD22542@svana.org
Lists: pgsql-hackers

On Wed, Apr 04, 2007 at 10:22:28AM -0400, Tom Lane wrote:
> Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> > Right -- IMHO what we should be doing is reject any input to chr() which
> > is beyond plain ASCII (or maybe > 255), and create a separate function
> > (unicode_char() sounds good) to get a Unicode character from a code
> > point, converted to the local client_encoding per conversion_procs.
>
> Hm, I hadn't thought of that approach, but another idea is that the
> argument of chr() is *always* a unicode code point, and it converts
> to the current encoding. Do we really need a separate function?

That's what I'd advocate, but then we're not Oracle compatible...
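
For concreteness, a rough sketch of how the two proposals might look at the
SQL level. unicode_char() is only the name Alvaro floated, and none of this
behaviour exists yet; the comments describe the intended semantics under each
proposal, not current output:

    -- Alvaro's proposal: chr() stays ASCII-only (or <= 255),
    -- Unicode conversion lives in a separate function
    SELECT chr(65);            -- 'A', plain ASCII always allowed
    SELECT chr(953);           -- would be rejected as beyond ASCII/255
    SELECT unicode_char(953);  -- proposed: U+03B9 converted per client_encoding

    -- Tom's proposal: chr() itself always takes a Unicode code point
    SELECT chr(953);           -- would yield U+03B9 in the current encoding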

Have a nice day,
--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
