
Re: JSON for PG 9.2

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Hannu Krosing <hannu(at)krosing(dot)net>, Abhijit Menon-Sen <ams(at)toroid(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: JSON for PG 9.2
Date: 2012-04-16 14:40:41
Message-ID:
Lists: pgsql-hackers
On Mon, Apr 16, 2012 at 9:10 AM, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> On 04/16/2012 09:34 AM, Hannu Krosing wrote:
>>> based on Abhijit's feeling and some discussion offline, the consensus
>>> seems to be to remove query_to_json.
>> The only comment I have here is that query_to_json could have been
>> replaced with json_agg, so that you don't need to do double-buffering
>> for the results of the array(<yourquery>) call in
>> SELECT array_to_json(array(<yourquery>));
>> Or is there some other way to avoid it, except to wrap row_to_json()
>> calls in your own aggregate function which adds the enclosing brackets
>> and comma separators (like this: '[' <row1> [, <rowN>] ']')?
> The way I usually write this is:
>    select array_to_json(array_agg(q))
>    from (<yourquery>) q;
> It's a pity you didn't make this comment back in January when we were
> talking about this. I think it's too late now in this release cycle to be
> talking about adding the aggregate function.

I find array_agg to be pretty consistently slower than array(),
although not by much, say around 5-10%.  I use array_agg only
when grouping.  Try timing:

    select array_to_json(array_agg(v))
    from (select v from generate_series(1,1000000) v) q;

    select array_to_json(array(select v from generate_series(1,1000000) v));
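
Hannu's "own aggregate function" idea can be sketched in plain SQL (the
names here are illustrative, not an existing API): a text-state
aggregate that adds the commas and enclosing brackets itself, so the
rows never get buffered into an intermediate array:

    create or replace function json_concat_sfunc(state text, val json)
    returns text language sql immutable as $$
      select case when state is null then '[' || val::text
                  else state || ',' || val::text end
    $$;

    create or replace function json_concat_ffunc(state text)
    returns json language sql immutable as $$
      select (coalesce(state, '[') || ']')::json
    $$;

    create aggregate json_collect(json) (
      sfunc = json_concat_sfunc,
      stype = text,
      finalfunc = json_concat_ffunc
    );

    select json_collect(row_to_json(q))
    from (select v from generate_series(1,3) v) q;

A C-language aggregate would presumably be faster, but this shows the
shape of the thing.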

I agree with Hannu, but as things stand, if I'm trying to avoid the
extra buffer I've found myself doing the final aggregation on the
client -- it's easy enough.  BTW, I'm using the json stuff heavily and
it's just absolutely fantastic.  Finally I can write web applications
without wondering exactly where it was that computer science went off
the rails.

I've already demoed a prototype app that integrates pg directly with
the many high-quality js libraries out there; it makes writing data
services trivial, which makes things very easy and quick.  Data pushes
are still quite a pain, but I figure something can be worked out.


