Re: syslog performance when logging big statements

From: "achill(at)matrix" <achill(at)matrix(dot)gatewaynet(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: syslog performance when logging big statements
Date: 2008-07-09 12:31:05
Message-ID: 200807091531.05372.achill@matrix.gatewaynet.com
Lists: pgsql-performance

On Tuesday 08 July 2008 21:34:01, Tom Lane wrote:
> Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com> writes:
> > On Tuesday 08 July 2008 17:35:16, Tom Lane wrote:
> >> Hmm. There's a function in elog.c that breaks log messages into chunks
> >> for syslog. I don't think anyone's ever looked hard at its performance
> >> --- maybe there's an O(N^2) behavior?
> >
> > Thanks,
> > I changed PG_SYSLOG_LIMIT in elog.c:1269 from 128 to 1048576
> > and I got super fast syslog performance. :)
>
> Doesn't seem like a very good solution given its impact on the stack
> depth right there.
>
> Looking at the code, the only bit that looks like repeated work is the
> repeated calls to strchr(), which would not be an issue in the "typical"
> case where the very long message contains reasonably frequent newlines.
> Am I right in guessing that your problematic statement contained
> megabytes worth of text with nary a newline?

Yes, it was the input source for a bytea field containing TIFF or PDF data,
roughly 1-3 megabytes in size, and it did not contain (many) newlines.

>
> If so, we can certainly fix it by arranging to remember the last
> strchr() result across loop iterations, but I'd like to confirm the
> theory before doing that work.
>
> regards, tom lane
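
For anyone following the code discussion: below is a minimal sketch of the
chunking pattern and of the fix Tom describes, i.e. remembering the last
strchr() result across iterations so that a multi-megabyte message with no
newlines is scanned once rather than once per chunk. This is not the actual
elog.c code; write_syslog_chunks(), CHUNK_LIMIT and the main() driver are
made-up names for illustration, with CHUNK_LIMIT standing in for
PG_SYSLOG_LIMIT.

#include <stdlib.h>
#include <string.h>
#include <syslog.h>

#define CHUNK_LIMIT 128                 /* stands in for PG_SYSLOG_LIMIT */

/*
 * Emit "line" to syslog in chunks of at most CHUNK_LIMIT bytes,
 * preferring to break at newlines.  The pattern under discussion calls
 * strchr(line, '\n') on every iteration; with megabytes of newline-free
 * text each call rescans nearly the whole remaining string, which is
 * where the O(N^2) behaviour comes from.  Here the newline position is
 * remembered across iterations and only refreshed after it has been
 * consumed.
 */
static void
write_syslog_chunks(int level, const char *line)
{
    const char *end = line + strlen(line);
    const char *next_nl = strchr(line, '\n');   /* remembered newline */
    int         seq = 0;

    while (line < end)
    {
        int         buflen;

        /* Re-search only once we have moved past the remembered hit. */
        if (next_nl != NULL && next_nl < line)
            next_nl = strchr(line, '\n');

        if (next_nl != NULL && (next_nl - line) < CHUNK_LIMIT)
            buflen = (int) (next_nl - line);    /* break at the newline */
        else
        {
            buflen = (int) (end - line);
            if (buflen > CHUNK_LIMIT)
                buflen = CHUNK_LIMIT;           /* hard chunk boundary */
        }

        syslog(level, "[%d] %.*s", ++seq, buflen, line);

        line += buflen;
        if (line < end && *line == '\n')
            line++;                             /* skip the line break */
    }
}

int
main(void)
{
    /* ~2 MB of newline-free text, roughly what a 1-3 MB bytea literal
     * of tiff/pdf data looks like to the logger. */
    size_t      len = 2 * 1024 * 1024;
    char       *msg = malloc(len + 1);

    if (msg == NULL)
        return 1;
    memset(msg, 'x', len);
    msg[len] = '\0';

    openlog("chunk-test", LOG_PID, LOG_LOCAL0);
    write_syslog_chunks(LOG_INFO, msg);
    closelog();

    free(msg);
    return 0;
}

Back-of-the-envelope: with a 128-byte chunk limit and a 2 MB newline-free
message there are about 16,000 chunks, and rescanning the remainder on every
iteration means strchr() alone touches on the order of 16 GB of bytes,
whereas the remembered-position version walks the message roughly once.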
