Re: [HACKERS] log_destination=file

From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Dmitry Dolgov <9erthalion6(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Greg Stark <stark(at)mit(dot)edu>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] log_destination=file
Date: 2018-01-20 12:51:12
Message-ID: CABUevEygVLWbF5LmQWkGiPmeo7456rjyED0X3Uj1PtsY5XhEOQ@mail.gmail.com
Lists: pgsql-hackers

On Tue, Nov 14, 2017 at 5:33 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Sun, Sep 10, 2017 at 5:29 AM, Dmitry Dolgov <9erthalion6(at)gmail(dot)com>
> wrote:
> > average latency:
> >
> > clients   patch   master
> >      10   0.321   0.286
> >      20   0.669   0.602
> >      30   1.016   0.942
> >      40   1.358   1.280
> >      50   1.727   1.637
>
> That's still a noticeable slowdown, though. And we've had previous
> reports of the overhead of logging being significant as well:
>
> http://postgr.es/m/CACLsApsA7U0GCFpojVQem6SGTEkv8vnwdBfhVi+dqO+gu5gdCA(at)mail(dot)gmail(dot)com
>
> I seem to recall a discussion, perhaps in-person, around the time Theo
> submitted that patch where it was reported that the logging collector
> could not be used on some systems he was working with because it
> became a major performance bottleneck. With each backend writing its
> own messages to a file, it was tolerable, but when you tried to funnel
> everything through a single process, the back-pressure slowed down the
> entire system unacceptably.
>

Finally found myself back at this one, because I still think this is a
problem we definitely need to address (whether with this patch or not).

The funneling into a single process is definitely an issue.

But we don't really solve that problem today with logging to stderr, do we?
Somebody still has to pick up the log as it comes from stderr. Yes, you get
more overhead when sending the log to /dev/null, but that isn't really a
realistic scenario. The question is what to do when you actually want to
collect that much logging, that quickly.

If each backend could actually log to *its own file*, then things would speed
up. But we can't do that today, unless you use the hooks and build it
yourself, along the lines of the sketch below.
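To make that concrete, here is a minimal sketch (mine, not the patch under
discussion) of an extension that uses emit_log_hook to append each backend's
messages to its own file, keyed by PID. File naming, rotation and error
handling are all hand-waved:

/*
 * per_backend_log.c: toy example of per-backend log files via emit_log_hook.
 * Load through shared_preload_libraries.  Error handling, log rotation and
 * formatting of the message text are deliberately ignored.
 */
#include "postgres.h"

#include "fmgr.h"
#include "miscadmin.h"
#include "utils/elog.h"

PG_MODULE_MAGIC;

void _PG_init(void);

static emit_log_hook_type prev_emit_log_hook = NULL;
static FILE *my_logfile = NULL;

static void
per_backend_log_hook(ErrorData *edata)
{
    /* Open one file per backend, keyed by PID, on first use. */
    if (my_logfile == NULL)
    {
        char        path[MAXPGPATH];

        snprintf(path, sizeof(path), "log/backend_%d.log", MyProcPid);
        my_logfile = fopen(path, "a");
    }

    if (my_logfile != NULL && edata->message != NULL)
    {
        fprintf(my_logfile, "%s\n", edata->message);
        fflush(my_logfile);
    }

    /* Chain to any previously installed hook. */
    if (prev_emit_log_hook)
        prev_emit_log_hook(edata);
}

void
_PG_init(void)
{
    prev_emit_log_hook = emit_log_hook;
    emit_log_hook = per_backend_log_hook;
}

The point is just that per-backend files are already reachable through the
hook; everyone who needs it ends up reinventing roughly this, which is part
of why I'm asking whether a native solution is wanted.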

Per the thread referenced, using the hooks to handle the very-high-rate
logging case seems to be the conclusion. But is that still the conclusion, or
do we feel we also need a native solution?

And if the conclusion is that hooks are the way to go for that, then is the
slowdown of this patch actually a relevant problem for that case?

--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
