Re: Proposal: Add JSON support

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Proposal: Add JSON support
Date: 2010-03-29 03:52:42
Message-ID: 603c8f071003282052o73ceeeccn1484cba74d949db5@mail.gmail.com
Lists: pgsql-hackers

On Sun, Mar 28, 2010 at 11:24 PM, Joseph Adams
<joeyadams3(dot)14159(at)gmail(dot)com> wrote:
> I apologize; I was just starting the conversation with some of my
> ideas to receive feedback.  I didn't want people to have to wade
> through too many "I think"s.  I'll be sure to use <opinion> tags in
> the future :-)

FWIW, I don't care at all whether you say "I think" or "I know"; the
point is that you have to provide backup for any position you choose
to take.

> My reasoning for "It should be built-in" is:
>  * It would be nice to have a built-in serialization format that's
> available by default.
>  * It might be a little faster because it doesn't have to link to an
> external library.

I don't think either of these reasons is valid.

>  * The code to interface between JSON logic and PostgreSQL will
> probably be much larger than the actual JSON encoding/decoding itself.

If true, this is a good argument.

>  * The externally-maintained and packaged libjson implementations I
> saw brought in lots of dependencies (e.g. glib).

As is this.

>  * "Everyone else" (e.g. PHP) uses a statically-linked JSON implementation.

But this isn't.

> Is the code in question "*really*" small?  Well, not really, but it's
> not enormous either.  By the way, I found a bug in PHP's JSON_parser
> (json_decode("true "); /* with a space */ returns null instead of
> true).  I'll have to get around to reporting that.
>
> Now, assuming JSON support is built-in to PostgreSQL and is enabled by
> default, it is my opinion that encoding issues should not be dealt
> with in the JSON code itself, but that the JSON code itself should
> assume UTF-8.  I think conversions should be done to/from UTF-8 before
> passing it through the JSON code because this would likely be the
> smallest way to implement it (not necessarily the fastest, though).
>
> Mike Rylander pointed out something wonderful, and that is that JSON
> code can be stored in plain old ASCII using \u... .  If a target
> encoding supports all of Unicode, the JSON serializer could be told
> not to generate \u escapes.  Otherwise, the \u escapes would be
> necessary.
>
> Thus, here's an example of how (in my opinion) character sets and such
> should be handled in the JSON code:
>
> Suppose the client's encoding is UTF-16, and the server's encoding is
> Latin-1.  When JSON is stored to the database:
>  1. The client is responsible and sends a valid UTF-16 JSON string.
>  2. PostgreSQL checks to make sure it is valid UTF-16, then converts
> it to UTF-8.
>  3. The JSON code parses it (to ensure it's valid).
>  4. The JSON code unparses it (to get a representation without
> needless whitespace).  It is given a flag indicating it should only
> output ASCII text.
>  5. The ASCII is stored in the server, since it is valid Latin-1.
>
> When JSON is retrieved from the database:
>  1. ASCII is retrieved from the server
>  2. If user needs to extract one or more fields, the JSON is parsed,
> and the fields are extracted.
>  3. Otherwise, the JSON text is converted to UTF-16 and sent to the client.
>
> Note that I am being biased toward optimizing code size rather than speed.

Can you comment on my proposal elsewhere on this thread and compare
your proposal to mine? In what ways are they different, and which is
better, and why?
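
For what it's worth, if I'm reading you right, the input-side flow you
describe would amount to roughly the following in an input function.
This is only a sketch: json_validate() and json_minify_ascii() are
hypothetical names for the parser and serializer entry points, while
pg_do_encoding_conversion(), GetDatabaseEncoding(), and
cstring_to_text() are the existing backend routines.

    #include "postgres.h"
    #include "fmgr.h"
    #include "mb/pg_wchar.h"
    #include "utils/builtins.h"

    Datum
    json_in(PG_FUNCTION_ARGS)
    {
        char       *raw = PG_GETARG_CSTRING(0);    /* already in server encoding */
        char       *utf8;
        char       *ascii;

        /* Convert server encoding -> UTF-8 so the parser only ever sees UTF-8. */
        utf8 = (char *) pg_do_encoding_conversion((unsigned char *) raw,
                                                  strlen(raw),
                                                  GetDatabaseEncoding(),
                                                  PG_UTF8);

        /* Parse once, purely to check validity (hypothetical entry point). */
        if (!json_validate(utf8))
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                     errmsg("invalid input syntax for type json")));

        /*
         * Re-serialize without needless whitespace, \u-escaping everything
         * above U+007F so the stored value is pure ASCII and therefore valid
         * in any server encoding (hypothetical serializer).
         */
        ascii = json_minify_ascii(utf8);

        PG_RETURN_TEXT_P(cstring_to_text(ascii));
    }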

> Here's a question about semantics: should converting JSON to text
> guarantee that Unicode will be \u escaped, or should it render actual
> Unicode whenever possible (when the client uses a Unicode-complete
> charset) ?

I feel pretty strongly that the data should be stored in the database
in the format in which it will be returned to the user - any
conversion which is necessary should happen on the way in. I am not
100% sure to what extent we should attempt to canonicalize the input
and to what extent we should simply store it in whichever way the user
chooses to provide it.
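
Mechanically, the \u question is just a switch in the serializer, so the
decision really is about semantics rather than code size.  A sketch of
that knob, for concreteness: json_emit_codepoint() is a hypothetical
helper that assumes the mandatory JSON escapes (quote, backslash,
control characters) are handled separately by the caller; the stringinfo
and pg_wchar routines are from the backend.

    #include "postgres.h"
    #include "lib/stringinfo.h"
    #include "mb/pg_wchar.h"

    static void
    json_emit_codepoint(StringInfo buf, pg_wchar cp, bool ascii_only)
    {
        if (cp < 0x80 || !ascii_only)
        {
            /* Pass the character through as raw UTF-8 bytes. */
            unsigned char utf8[4];

            unicode_to_utf8(cp, utf8);
            appendBinaryStringInfo(buf, (char *) utf8, pg_utf_mblen(utf8));
        }
        else if (cp <= 0xFFFF)
            appendStringInfo(buf, "\\u%04X", cp);
        else
        {
            /* Outside the BMP: JSON requires a UTF-16 surrogate pair. */
            cp -= 0x10000;
            appendStringInfo(buf, "\\u%04X\\u%04X",
                             0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF));
        }
    }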

> As for reinventing the wheel, I'm in the process of writing yet
> another JSON implementation simply because I didn't find the other
> ones I looked at palatable.  I am aiming for simple code, not fast
> code.  I am using malloc for structures and realloc for strings/arrays
> rather than resorting to clever buffering tricks.  Of course, I'll
> switch it over to palloc/repalloc before migrating it to PostgreSQL.

I'm not sure that optimizing for simplicity over speed is a good idea.
I think we can reject implementations as unpalatable because they are
slow or feature-poor or have licensing issues or are not actively
maintained, but rejecting them because they use complex code in order
to be fast doesn't seem like the right trade-off to me.
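
For reference, the malloc/realloc pattern you describe carries over to
palloc/repalloc almost mechanically.  A sketch (JsonBuf and the function
names are hypothetical; palloc/repalloc are the real allocator calls,
and note that repalloc, unlike realloc, cannot take a NULL pointer):

    #include "postgres.h"

    typedef struct JsonBuf
    {
        char       *data;
        int         len;
        int         cap;
    } JsonBuf;

    static void
    jsonbuf_init(JsonBuf *buf)
    {
        buf->cap = 64;
        buf->len = 0;
        buf->data = palloc(buf->cap);   /* repalloc() needs a non-NULL start */
    }

    static void
    jsonbuf_append(JsonBuf *buf, char c)
    {
        if (buf->len >= buf->cap)
        {
            buf->cap *= 2;              /* doubling keeps appends amortized O(1) */
            buf->data = repalloc(buf->data, buf->cap);
        }
        buf->data[buf->len++] = c;
    }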

...Robert
