Re: json api WIP patch

From: james <james(at)mansionfamily(dot)plus(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Merlin Moncure <mmoncure(at)gmail(dot)com>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: json api WIP patch
Date: 2013-01-08 06:45:50
Message-ID: 50EBC09E.8070303@mansionfamily.plus.com
Lists: pgsql-hackers

> The processing functions have been extended to provide
> populate_record() and populate_recordset() functions. The latter in
> particular could be useful in decomposing a piece of json representing
> an array of flat objects (a fairly common pattern) into a set of
> Postgres records in a single pass.

So this would allow an 'insert into ... select ... from
<unpack-the-JSON>(...)'?
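
For concreteness, the sort of thing I have in mind is below. It's only
a sketch: I'm assuming the recordset function ends up exposed as
json_populate_recordset(base, json), and the 'widgets' table is just a
made-up example.

    -- hypothetical target table
    CREATE TABLE widgets (id int, name text, qty int);

    -- decompose a JSON array of flat objects into rows and insert them
    -- in a single pass, with no per-row INSERT statements
    INSERT INTO widgets (id, name, qty)
    SELECT id, name, qty
    FROM json_populate_recordset(
             null::widgets,
             '[{"id":1,"name":"bolt","qty":100},
               {"id":2,"name":"nut","qty":250}]');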

I had been wondering how to do such an insertion efficiently in the
context of SPI, but it seems that there is no SPI_copy equivalent that
would let the query parse and plan steps be avoided.
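
The nearest thing I can see is preparing the statement once and reusing
the plan for each batch, e.g. at the SQL level (or SPI_prepare /
SPI_execute_plan from C). Again just a sketch, reusing the hypothetical
'widgets' table and function name from above:

    PREPARE load_widgets(json) AS
        INSERT INTO widgets
        SELECT * FROM json_populate_recordset(null::widgets, $1);

    -- each batch then costs an EXECUTE rather than a fresh parse and plan
    EXECUTE load_widgets('[{"id":3,"name":"washer","qty":500}]');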

Is this mechanism likely to be as fast as we can get at the moment in
contexts where COPY is not feasible?
