Re: [PATCH] Log CSV by default

From: Gilles Darold <gilles(at)darold(dot)net>
To: Michael Paquier <michael(at)paquier(dot)xyz>, Gilles Darold <gilles(at)darold(dot)net>
Cc: David Fetter <david(at)fetter(dot)org>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH] Log CSV by default
Date: 2018-12-04 07:58:50
Message-ID: b9cc8183-28b5-ce98-9dfb-218bba82f99f@darold.net
Lists: pgsql-hackers

On 04/12/2018 03:37, Michael Paquier wrote:
> On Mon, Dec 03, 2018 at 09:20:01PM +0100, Gilles Darold wrote:
>> pgBadger is able to parse jsonlog
>> (https://github.com/michaelpq/pg_plugins/tree/master/jsonlog) output in
>> multi-process mode because it does not use an external library to parse
>> the JSON. In this minimalist implementation I assume that each log line
>> is a full JSON object. Honestly I do not have enough feedback on this
>> format to be sure that my basic jsonlog parser is enough; maybe it is
>> not. What is really annoying for a log parser is the preservation of
>> newline characters in queries; if the format outputs each atomic log
>> message on a single line, that is ideal.
> jsonlog generates one JSON object for each elog call, and per line.
> Newlines are treated as they would be for a JSON object by being
> replaced with \n within the query strings, the same way the backend
> handles JSON strings as text.

That matches what I remembered, and it is why it is easy for pgBadger to
process these log files in multi-process mode.
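To illustrate the point, here is a minimal sketch in Python (pgBadger itself is written in Perl) of why the one-object-per-line property makes multi-process parsing trivial: the file can be split on line boundaries and each chunk parsed independently, since no record ever spans two physical lines. The file name and the "message" field are illustrative assumptions, not jsonlog's documented key names. A format that preserved literal newlines inside fields, as raised above, would not allow this kind of naive split.

    # Minimal sketch: parse a jsonlog-style file where each line is one
    # complete JSON object and newlines inside query strings are escaped
    # as \n. Because no record spans multiple physical lines, the file
    # can be split into line-aligned chunks for independent workers.
    import json
    from multiprocessing import Pool

    def parse_chunk(lines):
        """Parse one chunk of log lines; each line is a full JSON object."""
        records = []
        for line in lines:
            line = line.strip()
            if not line:
                continue
            try:
                records.append(json.loads(line))
            except json.JSONDecodeError:
                # A malformed line affects only itself, never its neighbors.
                pass
        return records

    def parse_log(path, workers=4):
        with open(path, encoding="utf-8") as f:
            lines = f.readlines()
        # Splitting on line boundaries is safe: one line == one record.
        size = max(1, len(lines) // workers)
        chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
        with Pool(workers) as pool:
            results = pool.map(parse_chunk, chunks)
        return [rec for chunk in results for rec in chunk]

    if __name__ == "__main__":
        # "postgresql.json" and the "message" field are assumed for
        # illustration; actual jsonlog key names may differ.
        for rec in parse_log("postgresql.json"):
            print(rec.get("message"))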

--
Gilles Darold
