Re: Similar to csvlog but not really, json logs?

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Similar to csvlog but not really, json logs?
Date: 2014-08-27 03:23:38
Message-ID: 20140827032338.GX16422@tamriel.snowman.net
Lists: pgsql-hackers

* Alvaro Herrera (alvherre(at)2ndquadrant(dot)com) wrote:
> Stephen Frost wrote:
>
> > The flip side is that there are absolutely production cases where what
> > we output is either too little or too much- being able to control that
> > and then have the (filtered) result in JSON would be more-or-less
> > exactly what a client of ours is looking for.
>
> My impression is that the JSON fields are going to be more or less
> equivalent to the current csvlog columns (what else could it be?). So
> if you can control what you give your auditors by filtering by
> individual JSON attributes, surely you could count columns in the
> hardcoded CSV definition we use for csvlog just as well.

I don't want to invent a CSV parser and an SQL parser to address this
requirement. That'd be pretty horrible.
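As a sketch of why attribute-level filtering is simpler with JSON than with positional CSV columns, consider the following. This assumes a hypothetical JSON log line whose field names mirror the csvlog columns; the names and values here are purely illustrative, not an actual log format:

```python
import json

# Hypothetical JSON-format log entry (illustrative field names only).
entry = json.loads(
    '{"timestamp": "2014-08-27 03:23:38", "user": "app", '
    '"message": "UPDATE accounts SET ...", "detail": "sensitive"}'
)

# Filtering by attribute name: no positional bookkeeping, no CSV
# quoting rules, and the filter keeps working if fields are added
# or reordered.
ALLOWED = {"timestamp", "user", "message"}
filtered = {k: v for k, v in entry.items() if k in ALLOWED}

print(json.dumps(filtered, sort_keys=True))
```

With the hardcoded csvlog format, the same filter has to count columns and re-implement CSV quoting to survive embedded commas and newlines in the query text.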

> > To try to clarify that a bit, as it comes across as rather opaque even
> > on my re-reading, consider a case where you can't have the
> > "credit_card_number" field ever exported to an audit or log file, but
> > you're required to log all other changes to a table. Then consider that
> > such a situation extends to individual INSERT or UPDATE commands- you
> > need the command logged, but you can't have the contents of that column
> > in the log file.
>
> It seems a bit far-fetched to think that you will be able to rip out
> parts of queries by applying JSON operators to the query text. Perhaps
> your intention is to log queries using something similar to the JSON
> blobs I'm using in the DDL deparse patch?

Right- we need to pass the queries through a normalization step which
can then decide what's supposed to be sent on to the log file. Ideally
that would happen on a per-backend basis, allowing the filtering to be
parallelized. It's certainly quite a bit more than what we've currently
got going on, which is more-or-less "dump the string we were sent".
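To make the redaction idea concrete, here is a deliberately toy sketch: blank out the value bound to a protected column before the statement reaches the log. A real implementation would work from the normalized parse tree rather than a regex, and "credit_card_number", the table name, and the single-statement shape are all illustrative assumptions:

```python
import re

# Column whose values must never reach the log file (illustrative name).
PROTECTED = "credit_card_number"

def redact(sql: str) -> str:
    """Toy redaction for statements shaped like
    INSERT INTO t (a, b, c) VALUES (x, y, z).
    Anything that doesn't match is passed through unchanged. Note this
    breaks on values containing commas; a real filter would operate on
    the parse tree, not the query text.
    """
    m = re.match(r"INSERT INTO (\w+) \(([^)]*)\) VALUES \(([^)]*)\)", sql, re.I)
    if not m:
        return sql
    cols = [c.strip() for c in m.group(2).split(",")]
    vals = [v.strip() for v in m.group(3).split(",")]
    if PROTECTED not in cols or len(cols) != len(vals):
        return sql
    vals[cols.index(PROTECTED)] = "<redacted>"
    return "INSERT INTO %s (%s) VALUES (%s)" % (
        m.group(1), ", ".join(cols), ", ".join(vals))

print(redact("INSERT INTO payments (id, credit_card_number, amt) "
             "VALUES (1, '4111-1111', 10)"))
```

The point of doing this per-backend is that each backend already holds the parse tree for its own statements, so the filtering cost is spread across backends rather than serialized through the logging collector.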

> My own thought is: JSON is good, but sadly it doesn't cure cancer.

Unfortunately, straw-man arguments don't either. ;)

Thanks!

Stephen
