On Thu, Mar 29, 2012 at 10:04 AM, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> 1. I've been in discussion with some people about adding simple JSON extract
> functions. We already have some (i.e. xpath()) for XML.
I've built a couple of applications that push data in and out of XML
via manual composition going out and xpath() coming in. TBH, I found
this to be a pretty tedious way of developing a general application
structure and a couple of notches down from the more SQL-driven
approach. Not that jsonpath/xpath aren't wonderful functions -- but I
think for general information passing there's a better way.
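For what it's worth, here's the kind of extraction I mean -- even pulling
one scalar out of a stored document takes an array subscript and a cast
(a sketch; the table and document shape are made up for illustration):

```sql
-- xpath() returns xml[], so even a single scalar needs
-- a subscript plus a cast to get back to a SQL value
SELECT (xpath('/order/customer/name/text()', doc))[1]::text AS customer
FROM orders;
```

Multiply that by every field in the document and it gets old fast.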
Your json work is a great start in marrying document-level database
features with a relational backend. My take is that storing rich data
inside the database in json format, while tempting, is generally a
mistake. Unless the document is a black box, it should be decomposed and
stored relationally, then marked back up into a document as it goes out
the door. This is why brevity and flexibility of syntax are so
important when marshaling data in and out of transport formats: they
encourage people to take the right path and get the best of both
worlds -- a rich backend with strong constraints that can natively
speak JSON, so that writing data-driven web services is easy.
What I'm saying is that jsonpath probably isn't the whole story:
another way of bulk moving json into native backend structures without
parsing would also be very helpful. For example, being able to cast a
json document into a record or a record array would be just amazing.
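Roughly, I'm imagining something like this -- the function name and
syntax below are made up, just to show the shape of the feature:

```sql
-- hypothetical: expand a json array of objects directly into
-- a typed record set, with no application-side parsing step
SELECT *
FROM json_to_recordset('[{"a": 1, "b": "hello"},
                         {"a": 2, "b": "world"}]')
     AS t(a int, b text);
```

That would let a web service hand the backend a whole document and have
it land in relational form in one step.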