Re: csvlog gets crazy when csv file is not writable

From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Alexander Kukushkin <cyberdemn(at)gmail(dot)com>
Cc: pgsql-bugs(at)postgresql(dot)org, 9erthalion6(at)gmail(dot)com
Subject: Re: csvlog gets crazy when csv file is not writable
Date: 2018-08-21 02:23:42
Message-ID: 20180821022342.GD2897@paquier.xyz
Lists: pgsql-bugs

On Mon, Aug 20, 2018 at 03:55:01PM +0200, Alexander Kukushkin wrote:
> If for some reason postgres can't open the 'postgresql-%Y-%m-%d.csv' file
> for writing, it gets mad and outputs a few thousand lines to
> stderr:
>
> 2018-08-20 15:40:46.920 CEST [22069] PANIC: could not open log file

Ah, this log message could be changed to simply "could not open
file", as the file name offers enough context...

> And so on. ERRORDATA_STACK_SIZE appears in the output 3963 times.
>
> Sure, it is entirely my fault that the csv file is not writable, but
> such an avalanche of PANIC lines is really scary.

Yeah, this is a recursion in logfile_open -> open_csvlogfile. The
stderr case is handled much better: the server simply quits with a
FATAL if the log file cannot be opened in SysLogger_Start. Could this
be an argument for allowing logfile_open() to use write_stderr()? I am
not sure whether that would fall under the don't-do-that rule. And we
only make sure that log_destination is writable at an early stage,
which would not cover scenarios like a kernel switching the log
partition to read-only afterwards.
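
Something along these lines could at least stop the flood (rough
sketch only; the opening_csvlog flag is hypothetical and the function
body is simplified from what syslogger.c actually does):

    /*
     * Sketch: a reentrancy guard for open_csvlogfile().  The
     * opening_csvlog flag does not exist in syslogger.c today.
     */
    static bool opening_csvlog = false;

    static void
    open_csvlogfile(void)
    {
        char       *filename;

        if (opening_csvlog)
        {
            /*
             * We are being re-entered from the error path of a previous
             * attempt: going through ereport() again would call back into
             * this function until the error stack overflows, which is the
             * ERRORDATA_STACK_SIZE flood from the report.  Complain once
             * on stderr and give up instead.
             */
            write_stderr("could not open CSV log file, logging to stderr\n");
            return;
        }

        opening_csvlog = true;
        filename = logfile_getname(time(NULL), ".csv");
        csvlogFile = logfile_open(filename, "a", false);
        opening_csvlog = false;
    }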
--
Michael
