| From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
|---|---|
| To: | Jeroen Vermeulen <jtvjtv(at)gmail(dot)com> |
| Cc: | VASUKI M <vasukianand0119(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-bugs(at)lists(dot)postgresql(dot)org |
| Subject: | Re: BUG #19354: JOHAB rejects valid byte sequences |
| Date: | 2025-12-16 14:26:12 |
| Message-ID: | CA+TgmoZaoc37ohnhF5inoPxWzfoznV483xQw8Fmw+ELFScv47g@mail.gmail.com |
| Lists: | pgsql-bugs |
On Tue, Dec 16, 2025 at 2:42 AM Jeroen Vermeulen <jtvjtv(at)gmail(dot)com> wrote:
> My one worry is perhaps Johab is on the list because one important user needed it.
>
> But even then that requirement may have gone away?
Well, that was over 20 years ago. There's a very good chance that even
if somebody was using JOHAB back then, they're not still using it now.
What's mystifying to me is that, presumably, somebody had a reason at
the time for thinking that this was correct. I know that our quality
standards were a whole lot looser back then, but I still don't quite
understand why someone would have spent time and effort writing code
based on a purely fictitious encoding scheme. So I went looking for
where we got the mapping tables from. UCS_to_JOHAB.pl expects to read
from a file JOHAB.TXT, of which the latest version seems to be found
here:
https://www.unicode.org/Public/MAPPINGS/OBSOLETE/EASTASIA/KSC/JOHAB.TXT
And indeed, if I run UCS_to_JOHAB.pl on that JOHAB.TXT file, it
regenerates the current mapping files. Playing with it a bit:
rhaas=# select convert_from(e'\\x8a5c'::bytea, 'johab');
ERROR: invalid byte sequence for encoding "JOHAB": 0x8a 0x5c
rhaas=# select convert_from(e'\\x8444'::bytea, 'johab');
ERROR: invalid byte sequence for encoding "JOHAB": 0x84 0x44
rhaas=# select convert_from(e'\\x89ef'::bytea, 'johab');
convert_from
--------------
괦
(1 row)
So, \x8a5c is the original example, which does appear in JOHAB.TXT,
and \x8444 is the first multi-byte character in that file, and both of
them fail. But 89ef, which also appears in that file, doesn't fail,
and from what I can tell the mapping is correct. So apparently we've
got the "right" mappings, but you can only actually use the ones that
match the code's rules for something to be a valid multi-byte
character, which aren't actually in sync with the mapping table. I'm
left with the conclusions that (1) nobody ever actually tried using
this encoding for anything real until 3 days ago and (2) we don't have
any testing infrastructure that verifies that the characters in the
mapping tables are actually accepted by pg_verifymbstr(). I wonder how
many other encodings we have that don't actually work?
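For anyone following along, here is a rough sketch of the mismatch as I understand it. The pattern in the three examples above is consistent with the verifier requiring the high bit set on both bytes (EUC-style), while JOHAB.TXT also assigns second bytes in the ASCII range. The `pg_style_valid` function below is my guess at the effective rule, not the actual code from wchar.c:

```python
def pg_style_valid(b1: int, b2: int) -> bool:
    """Assumed EUC-style rule: both bytes of a multi-byte
    character must have the high bit set."""
    return b1 >= 0x80 and b2 >= 0x80

# Codes that all appear in JOHAB.TXT:
for code in (0x8A5C, 0x8444, 0x89EF):
    b1, b2 = code >> 8, code & 0xFF
    verdict = "accepted" if pg_style_valid(b1, b2) else "rejected"
    print(f"0x{code:04X}: {verdict}")
# → 0x8A5C: rejected
# → 0x8444: rejected
# → 0x89EF: accepted
```

Under that assumed rule, \x8a5c and \x8444 are rejected because their second bytes (0x5c, 0x44) lack the high bit, while \x89ef passes — exactly the behavior shown in the psql session above.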
--
Robert Haas
EDB: http://www.enterprisedb.com