Obtaining resource usage statistics from execution? (v 9.1)

From: Karl Denninger <karl(at)denninger(dot)net>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Obtaining resource usage statistics from execution? (v 9.1)
Date: 2012-03-16 14:31:57
Message-ID: 4F634EDD.2090802@denninger.net
Lists: pgsql-performance

Hi folks;

I am profiling the AKCS-WWW forum software to drive query and index
tuning changes, and appear to have no good way to do what I need to
do -- or I've missed something obvious.

The application uses the libpq interface from "C" to talk to Postgres,
which holds all the back-end data. Since this is a forum application
it is very read-heavy (other than accounting and, of course, user
posting functionality), and is template-driven. All of the table
lookups driven by the display templates are compartmentalized in one
function in the base code.

What I want to be able to do is to determine the resource usage by
Postgres for each of these calls.

I can do this by adding, just before the "real" call to PQexec(), a
preamble PQexec() of the same query with "explain analyze" prepended.
The last tuple returned is the total execution time (with some
surrounding text), which I can parse to recover the total time. But I
see no way to get stats on I/O (e.g. Postgres buffer hits and misses,
calls to operating-system-level I/O APIs, etc.)
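A minimal sketch of that parsing step (the function name is mine, and
the exact trailing-row text is assumed from 9.1's EXPLAIN ANALYZE
output, which ends with a "Total runtime: N ms" row):

```c
#include <stdio.h>

/* Parse the trailing "Total runtime: 123.456 ms" row that EXPLAIN
 * ANALYZE emits as its last result tuple (format assumed from
 * PostgreSQL 9.1). Returns the runtime in milliseconds, or -1.0 if
 * the row does not match that pattern. */
static double parse_total_runtime(const char *last_row)
{
    double ms;

    if (sscanf(last_row, "Total runtime: %lf ms", &ms) == 1)
        return ms;
    return -1.0;
}

/* Hypothetical use against a PGresult named res:
 *
 *   int last = PQntuples(res) - 1;
 *   double ms = parse_total_runtime(PQgetvalue(res, last, 0));
 */
```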

But while I can get the numbers this way, it comes at the expense of
doubling the Postgres processing. There does not appear, however, to
be any exposition of the processing time requirements for actual (as
opposed to "modeled" via explain analyze) execution of queries -- at
least not via the libpq interface.

Am I missing something here -- is there a way to get resource
consumption from actual queries as they're run? What I'm doing right
now is the above, gated by a configuration switch that sets a minimum
reportable execution time; queries whose runtimes exceed that
threshold are logged for analysis and attempted optimization. This
works, but it is obviously something one only does while profiling,
since it doubles the database load and is undesirable in ordinary
operation. What I'd like is for the code to track performance all the
time and raise alerts when it sees "outliers", giving me a
continually-improving set of targets for reducing resource consumption
(up until I reach the point where I can't seem to make it any faster,
of course :-))
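As a stopgap for that always-on tracking, the real call can at least be
timed client-side without issuing a second query -- this yields
wall-clock elapsed time only, not server-side buffer or I/O stats, and
the wrapper names below are hypothetical:

```c
#include <stdio.h>
#include <time.h>

/* Wall-clock elapsed milliseconds between two monotonic timespecs. */
static double elapsed_ms(const struct timespec *start,
                         const struct timespec *end)
{
    return (end->tv_sec - start->tv_sec) * 1000.0 +
           (end->tv_nsec - start->tv_nsec) / 1e6;
}

/* Hypothetical wrapper around the single real PQexec() call, logging
 * only the above-threshold outliers:
 *
 *   struct timespec t0, t1;
 *   clock_gettime(CLOCK_MONOTONIC, &t0);
 *   PGresult *res = PQexec(conn, query);
 *   clock_gettime(CLOCK_MONOTONIC, &t1);
 *   if (elapsed_ms(&t0, &t1) > threshold_ms)
 *       fprintf(stderr, "slow query (%.1f ms): %s\n",
 *               elapsed_ms(&t0, &t1), query);
 */
```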

Thanks in advance!

--
-- Karl Denninger
/The Market Ticker ®/ <http://market-ticker.org>
Cuda Systems LLC
