On Jan 20, 2012, at 8:58 AM, Robert Haas wrote:
> If, however,
> we're not using UTF-8, we have to first turn \uXXXX into a Unicode
> code point, then convert that to a character in the database encoding,
> and then test for equality with the other character after that. I'm
> not sure whether that's possible in general, how to do it, or how
> efficient it is. Can you or anyone shed any light on that topic?
If it’s like the XML example, it should always represent a Unicode code point, and *not* be converted to the other character set, no?
At any rate, since the JSON spec assumes Unicode (with UTF-8 as the default encoding), distinctions having to do with alternate server encodings are not likely to be covered, so I suspect we can do whatever we want here. It’s outside the spec.
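For concreteness, the conversion Robert describes — \uXXXX escape → Unicode code point → character in the database encoding → equality test — can be sketched like this (a minimal Python illustration, not PostgreSQL code; the function names and the LATIN1 example are just assumptions for the sketch):

```python
from typing import Optional

def unescape_to_codepoint(esc: str) -> int:
    # Turn a JSON escape like "\u00e9" into a Unicode code point (0xE9).
    assert esc.startswith("\\u") and len(esc) == 6
    return int(esc[2:], 16)

def codepoint_in_encoding(cp: int, encoding: str) -> Optional[bytes]:
    # Convert the code point to the target (database) encoding, if it
    # has a representation there; otherwise report failure with None.
    try:
        return chr(cp).encode(encoding)
    except UnicodeEncodeError:
        return None

# \u00e9 is "é"; in LATIN1 that is the single byte 0xE9.
escaped = codepoint_in_encoding(unescape_to_codepoint("\\u00e9"), "latin-1")
stored  = "é".encode("latin-1")   # a character already in the DB encoding
assert escaped == stored

# Some code points simply don't exist in the target encoding, which is
# where the "not sure whether that's possible in general" concern bites:
assert codepoint_in_encoding(unescape_to_codepoint("\\u4e2d"), "latin-1") is None
```

The last case is the interesting one: when the code point has no representation in the database encoding, equality with any stored character is trivially false, but an implementation still has to decide whether that situation is an error or merely a non-match.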