Re: csvlog gets crazy when csv file is not writable

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, cyberdemn(at)gmail(dot)com, PostgreSQL mailing lists <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: csvlog gets crazy when csv file is not writable
Date: 2018-08-25 22:39:32
Message-ID: 6508.1535236772@sss.pgh.pa.us
Lists: pgsql-bugs

Dmitry Dolgov <9erthalion6(at)gmail(dot)com> writes:
>> On Tue, 21 Aug 2018 at 04:23, Michael Paquier <michael(at)paquier(dot)xyz> wrote:
> On Mon, Aug 20, 2018 at 03:55:01PM +0200, Alexander Kukushkin wrote:
>>> If for some reason postgres can't open the 'postgresql-%Y-%m-%d.csv' file
>>> for writing, it gets mad and outputs a few thousand lines to
>>> stderr:

>> Yeah, this is a recursion in logfile_open -> open_csvlogfile. With
>> stderr the handling is much better: the server just quits with a
>> FATAL if the log file cannot be opened in SysLogger_Start.

> I also wonder if there are any strong reasons why the behaviour on errors
> with the csv log file is different; maybe it's possible to unify them?

Yeah. This seems to boil down to somebody deciding that it'd be okay
to delay opening the csvlogFile until we actually need to write to it.
As this example shows, that's just crazy, because it neglects to account
for the possibility of failure. What we need to do is, first, make the
csvlogFile handling look as much as possible like the well-tested
syslogFile handling, and second, decide what we'll do about open failures
so that they *don't* lead to recursion.
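
To make the failure mode concrete, here's a stripped-down, self-contained
sketch of the hazard (hypothetical names, not the actual syslogger.c code):
a lazy open inside the write path, with the open failure reported back
through that same write path.

#include <stdio.h>

static FILE *csvlog_file = NULL;

static void log_message(const char *msg);

/*
 * Lazily open the CSV log the first time something needs to be written.
 */
static void
open_csv_if_needed(void)
{
	if (csvlog_file == NULL)
	{
		csvlog_file = fopen("/unwritable/postgresql.csv", "a");
		if (csvlog_file == NULL)
		{
			/*
			 * The failure is reported through the normal logging path,
			 * which tries to open the CSV file again, fails again,
			 * reports again ... until the stack runs out.
			 */
			log_message("could not open CSV log file");
		}
	}
}

static void
log_message(const char *msg)
{
	/* stderr always gets a copy ... */
	fprintf(stderr, "LOG: %s\n", msg);

	/* ... and CSV output is opened lazily, at write time */
	open_csv_if_needed();
	if (csvlog_file != NULL)
		fprintf(csvlog_file, "%s\n", msg);
}

int
main(void)
{
	/* Spews error lines on stderr until the recursion overflows the stack. */
	log_message("hello");
	return 0;
}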

At postmaster startup, we can open csvlogFile if requested and fail if
we can't, just as for syslogFile. And if we fail to make a rotation,
it's okay to keep writing to the old file, just as for syslogFile.
But we have to allow for Log_destination to change after startup,
and that means we might need to open csvlogFile when it wasn't open
before, and we can't just quit if we fail.
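
Roughly, the lifecycle I have in mind looks like this (a simplified sketch
with made-up helper names, not the patch itself; the case where
Log_destination changes after startup is sketched below, after the
description of the patch):

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

static FILE *csvlog_file = NULL;	/* open iff CSV output is active */

/*
 * Postmaster startup: if CSV logging is configured, open the file or
 * give up, exactly as we already do for syslogFile.
 */
static void
csvlog_startup(const char *path, bool csv_requested)
{
	if (!csv_requested)
		return;

	csvlog_file = fopen(path, "a");
	if (csvlog_file == NULL)
	{
		fprintf(stderr, "FATAL: could not open log file \"%s\"\n", path);
		exit(1);
	}
}

/*
 * Rotation: if the new file cannot be opened, keep writing to the old
 * one instead of failing, again as for syslogFile.
 */
static void
csvlog_rotate(const char *newpath)
{
	FILE	   *newfile = fopen(newpath, "a");

	if (newfile == NULL)
	{
		fprintf(stderr, "LOG: could not open log file \"%s\", "
				"continuing to use the old one\n", newpath);
		return;					/* csvlog_file is left as it was */
	}

	if (csvlog_file != NULL)
		fclose(csvlog_file);
	csvlog_file = newfile;
}

int
main(void)
{
	csvlog_startup("postgresql.csv", true);
	csvlog_rotate("postgresql-new.csv");
	return 0;
}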

What the attached patch does about that is to redirect CSV-format
output to syslogFile if we get CSV output when csvlogFile isn't open.
The only other plausible answer I can see is to drop such output on
the floor, and that seems pretty unfriendly.

Also we really ought to close csvlogFile if CSV logging is turned off;
the existing code doesn't, but this adds that.
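
In outline, the write/update paths would behave about like this (again a
sketch with made-up names; the real syslogger code is more involved):
if CSV output arrives while csvlogFile isn't open, fall back to the
stderr log file, and close csvlogFile once CSV logging is switched off.

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

static FILE *syslog_file = NULL;	/* the stderr log file, always open */
static FILE *csvlog_file = NULL;	/* open only while CSV output is active */

/*
 * Called when Log_destination changes at runtime: open the CSV file if
 * CSV logging was just enabled, close it if it was just disabled.  On an
 * open failure we simply leave csvlog_file NULL; quitting isn't an option
 * here.
 */
static void
csvlog_update_destination(const char *path, bool csv_requested)
{
	if (csv_requested && csvlog_file == NULL)
		csvlog_file = fopen(path, "a");
	else if (!csv_requested && csvlog_file != NULL)
	{
		fclose(csvlog_file);
		csvlog_file = NULL;
	}
}

/*
 * Write one CSV-format log line.  If the CSV file couldn't be opened,
 * redirect the output to the stderr log file rather than dropping it.
 */
static void
write_csv_line(const char *line)
{
	FILE	   *dest = (csvlog_file != NULL) ? csvlog_file : syslog_file;

	fwrite(line, 1, strlen(line), dest);
	fflush(dest);
}

int
main(void)
{
	syslog_file = stderr;		/* stand-in for the real stderr log file */
	csvlog_update_destination("/unwritable/postgresql.csv", true);
	write_csv_line("2018-08-25 22:39:32 UTC,...,\"log line\"\n");
	return 0;
}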

Also, while looking at this I noticed what seems to be pretty dodgy code
in write_stderr:

void
write_stderr(const char *fmt,...)
{
	va_list		ap;

	fmt = _(fmt);

	va_start(ap, fmt);

I'm not sure how, or if, va_start actually uses its second argument;
it seems likely that that's vestigial in all modern compilers. But
on a machine where it's *not* vestigial, assigning a new value to fmt
before we give it to va_start doesn't seem like a bright idea.
I think it'd be better to avoid that and just write _(fmt) twice,
as attached.
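
For illustration, the safer shape is roughly the following (a sketch, not
the exact patch hunk; the real write_stderr in elog.c also deals with the
Windows event log, and _() is defined here only as a stand-in for
gettext()): fmt is left untouched for va_start, and the translation is
applied at the point of use.

#include <stdarg.h>
#include <stdio.h>

#define _(x) (x)				/* stand-in for gettext() */

void
write_stderr(const char *fmt,...)
{
	va_list		ap;

	va_start(ap, fmt);			/* fmt is still the original parameter here */
	vfprintf(stderr, _(fmt), ap);	/* translate where the string is used */
	fflush(stderr);
	va_end(ap);
}

int
main(void)
{
	write_stderr("could not open log file \"%s\"\n", "postgresql.csv");
	return 0;
}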

regards, tom lane

Attachment Content-Type Size
fix-bogus-csvlog-error-handling-1.patch text/x-diff 11.5 KB
