Re: Wrong results using initcap() with non normalized string

From: Juan José Santamaría Flecha <juanjo(dot)santamaria(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Wrong results using initcap() with non normalized string
Date: 2019-10-03 18:39:18
Message-ID: CAC+AXB28ADgwdNRA=aAoWDYPqO1DZR+5NTO8iXGSsFrXyVpqYQ@mail.gmail.com
Lists: pgsql-hackers

On Sun, Sep 29, 2019 at 3:38 AM Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> wrote:
>
> The UTF8 bits looks reasonable to me. I guess the other part of that
> question is whether we support any other multibyte encoding that
> supports combining characters. Maybe for cases other than UTF8 we can
> test for 0-width chars (using pg_encoding_dsplen() perhaps?) and drive
> the upper/lower decision off that? (For the UTF8 case, I don't know if
> Juanjo's proposal is better than pg_encoding_dsplen. Both seem to boil
> down to a bsearch, though unicode_norm.c's table seems much larger than
> wchar.c's).
>

Using pg_encoding_dsplen() looks like the way to go. The normalization
logic included in ucs_wcwidth() already does what is needed to avoid the
issue, so there is no need to use unicode_norm_table.h. UTF8 is the
only multibyte encoding that can return a 0-width dsplen, so this
approach would also work for all the other encodings that do not use
combining characters.
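To illustrate the idea (this is a standalone sketch, not the attached patch): a combining character has a display width of 0, so an initcap-style pass should copy it through without updating the word-boundary state. The stub below only covers U+0300..U+036F and ASCII letters; the real code would call pg_encoding_dsplen() and the server's case-mapping routines instead.

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Stub for pg_encoding_dsplen(): returns 0 for UTF-8 combining marks
 * in U+0300..U+036F, 1 otherwise.  The real function in wchar.c
 * consults the full Unicode width tables. */
static int
dsplen_stub(const unsigned char *s)
{
	/* U+0300..U+036F encode as 0xCC 0x80 .. 0xCD 0xAF in UTF-8 */
	if (s[0] == 0xCC || (s[0] == 0xCD && s[1] <= 0xAF))
		return 0;
	return 1;
}

/* Byte length of a UTF-8 sequence from its first byte. */
static int
mblen_utf8(const unsigned char *s)
{
	if (s[0] < 0x80)
		return 1;
	if ((s[0] & 0xE0) == 0xC0)
		return 2;
	if ((s[0] & 0xF0) == 0xE0)
		return 3;
	return 4;
}

/* initcap-like pass over a UTF-8 string: zero-width (combining)
 * characters are copied through without touching wasalnum, so
 * "e" + U+0301 capitalizes like the precomposed character would. */
static void
initcap_sketch(const char *src, char *dst)
{
	const unsigned char *p = (const unsigned char *) src;
	int			wasalnum = 0;

	while (*p)
	{
		int			len = mblen_utf8(p);

		if (len == 1 && isalpha(*p))
		{
			*dst++ = wasalnum ? (char) tolower(*p) : (char) toupper(*p);
			wasalnum = 1;
		}
		else
		{
			memcpy(dst, p, len);
			dst += len;
			/* a combining mark (dsplen 0) must not clear wasalnum */
			if (dsplen_stub(p) != 0)
				wasalnum = (len == 1 && isalnum(*p));
		}
		p += len;
	}
	*dst = '\0';
}
```

With this state handling, "e" followed by U+0301 (0xCC 0x81) at the start of a word uppercases the base letter and leaves the following letters lowercase, instead of treating the combining mark as a new word boundary.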

Please find attached a patch with this approach.

Regards,

Juan José Santamaría Flecha

Attachment Content-Type Size
0001-initcap-non-normalized-string-v2.patch application/x-patch 1.6 KB
