Re: Increasing the length of pg_stat_activity.current_query...

From: Andreas Pflug <pgadmin(at)pse-consulting(dot)de>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Increasing the length of pg_stat_activity.current_query...
Date: 2004-11-09 15:32:26
Message-ID: 4190E30A.4080609@pse-consulting.de
Lists: pgsql-hackers

Josh Berkus wrote:
> Tom,
>
>
>>Another relevant question is why you are expecting to get this
>>information through pgstats and not by looking in the postmaster log.
>>I don't know about you, but I don't have any tools that are designed to
>>cope nicely with looking at tables that have columns that might be many
>>K wide. Looking in the log seems a much nicer way of examining the full
>>text of extremely long queries. So I think it's actually a good thing
>>that pgstats truncates the queries at some reasonable width.
>
>
> Because pg_stat_activity can be queried dynamically, and the log can't.

I've been planning to post a lengthy mail after the 8.0 release, but it
seems a good idea to do it now.

When comparing pgsql to MSSQL in practice, I run into the same problem
Josh describes. I have a server hammered by countless queries, some of
them not too well constructed and thus soaking up CPU from all users. On
MSSQL, I'd use the Profiler, which lets me tap one or more connections
and log whatever I think is important to track down the problem. This
lets me filter out the uninteresting 99.9% of queries that would make my
log unreadable. Additionally, some performance measures are recorded for
each query, enabling me to spot the bad guys, analyze them, and improve them.

On pgsql, all logging goes unstructured into one file, and I can't even
start and stop a new log on demand for my observation period (somebody
refused to implement a manual log rotation function, "nobody needs
that"...) On a server accessed by 100 users, with several dozen queries
fired every second, it's hard work to locate the offending query.

It appears to me that simply increasing the max query length won't do
the trick (and 16k would not be enough). What I'd like to see is the
possibility to tap one or more backends (superuser only, of course) and
put them into a logging mode which records the complete query, including
performance counters, to some process in a lossless way. When I say
tapping, I mean that the backend's logging switch is *not* set by that
same backend, but from a different superuser backend.
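To make the idea concrete, here is a minimal sketch of the tapping model
described above, in Python. To be clear: no such facility exists in pgsql
today; every name here (QueryEvent, Tap, duration_ms, and so on) is
invented purely for illustration, and the event source is left abstract.

```python
# Illustrative sketch only: per-backend "tapping" as proposed above.
# All names and structures here are assumptions, not an existing pgsql API.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class QueryEvent:
    backend_pid: int     # backend that executed the query
    query: str           # complete query text, untruncated
    duration_ms: float   # hypothetical per-query performance counter


@dataclass
class Tap:
    """Records complete query events for explicitly tapped backends.

    In the proposal, tap()/untap() would be invoked from a *different*
    superuser backend, not from the backend being observed.
    """
    tapped_pids: Set[int] = field(default_factory=set)
    log: List[QueryEvent] = field(default_factory=list)

    def tap(self, pid: int) -> None:
        self.tapped_pids.add(pid)

    def untap(self, pid: int) -> None:
        self.tapped_pids.discard(pid)

    def record(self, event: QueryEvent) -> None:
        # Only tapped backends are logged, filtering out the
        # uninteresting 99.9% of traffic.
        if event.backend_pid in self.tapped_pids:
            self.log.append(event)

    def slowest(self, n: int = 5) -> List[QueryEvent]:
        # Spot the bad guys by their recorded performance counters.
        return sorted(self.log, key=lambda e: e.duration_ms, reverse=True)[:n]
```

The point of the sketch is the filtering step in record(): events from
untapped backends are dropped before they ever reach the log, which is
what keeps the log readable on a busy server.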

Regards,
Andreas
