Re: jsonb, unicode escapes and escaped backslashes

From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Noah Misch <noah(at)leadboat(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: jsonb, unicode escapes and escaped backslashes
Date: 2015-01-27 20:11:18
Message-ID: 54C7F0E6.9050201@dunslane.net
Lists: pgsql-hackers


On 01/27/2015 02:28 PM, Tom Lane wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>> On 01/27/2015 01:40 PM, Tom Lane wrote:
>>> In particular, I would like to suggest that the current representation of
>>> \u0000 is fundamentally broken and that we have to change it, not try to
>>> band-aid around it. This will mean an on-disk incompatibility for jsonb
>>> data containing U+0000, but hopefully there is very little of that out
>>> there yet. If we can get a fix into 9.4.1, I think it's reasonable to
>>> consider such solutions.
>> Hmm, OK. I had thought we'd be ruling that out, but I agree that if it's
>> on the table, what I suggested is unnecessary.
> Well, we can either fix it now or suffer with a broken representation
> forever. I'm not wedded to the exact solution I described, but I think
> we'll regret it if we don't change the representation.
>
> The only other plausible answer seems to be to flat out reject \u0000.
> But I assume nobody likes that.

I don't think we can be in the business of rejecting valid JSON.

cheers

andrew
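
[Editorial sketch, not part of the original message: to make the ambiguity
under discussion concrete, the queries below illustrate the 9.4.0-era jsonb
behavior being criticized. They are an assumption-laden sketch, not output
quoted from this thread, and exact results vary by version.]

    -- Illustrative sketch only: assumes the behavior described above, where
    -- jsonb could not de-escape \u0000 to a NUL byte (PostgreSQL text values
    -- cannot contain NUL) and so kept the six characters \u0000 in the
    -- stored string form.

    -- A genuine Unicode escape for U+0000:
    SELECT '"\u0000"'::jsonb;

    -- A literal backslash followed by the characters u0000
    -- (the backslash itself escaped inside the JSON string):
    SELECT '"\\u0000"'::jsonb;

    -- Under the representation Tom calls "fundamentally broken", both inputs
    -- collapse to the same stored value, so they can no longer be told apart
    -- when the jsonb value is printed back out or extracted as text.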
