Re: pg_stat_statements

From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>
Cc: pgsql-hackers(at)postgresql(dot)org, Josh Berkus <josh(at)agliodbs(dot)com>
Subject: Re: pg_stat_statements
Date: 2008-06-16 02:31:59
Message-ID: 20080616110358.7517.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers


Robert Treat <xzilla(at)users(dot)sourceforge(dot)net> wrote:

> On Friday 13 June 2008 12:58:22 Josh Berkus wrote:
> > I can see how this would be useful, but I can also see that it could be a
> > huge performance burden when activated. So it couldn't be part of the
> > standard statistics collection.
>
> A lower overhead way to get at this type of information is to quantize dtrace
> results over a specific period of time. Much nicer than doing the whole
> logging/analyze piece.

DTrace is disabled by default in most installations, and cannot be used on
some platforms (in particular, I want this feature on Linux). I think
DTrace is known as a tool for developers, not for DBAs. However,
statement logging is needed by DBAs who are used to using STATSPACK in Oracle.

I will try to measure the overhead of logging in several implementations:
1. Log statements and dump them into the server log.
2. Log statements and filter them before they are written.
3. Store statements in shared memory.

I know 1 is slow, but I don't know which part of it is really slow.
If the reason is writing statements to disk, 2 would be a solution.
3 will be needed if sending statements to the logger is itself the cause
of the overhead.
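
To illustrate what I have in mind for 3, here is a minimal sketch. It
assumes the executor-end and shmem-startup hooks proposed for 8.4; the
names (stmt_counter_sketch, StmtCounter, sketch_ExecutorEnd) are only for
illustration, and a real module would keep per-statement entries in a
shared hash table rather than a single counter.

/* stmt_counter_sketch.c -- illustrative only, not a real module */
#include "postgres.h"
#include "fmgr.h"
#include "executor/executor.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

PG_MODULE_MAGIC;

typedef struct StmtCounter
{
    LWLockId    lock;       /* protects calls */
    int64       calls;      /* statements executed so far */
} StmtCounter;

static StmtCounter *counter = NULL;
static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;

void _PG_init(void);

static void
sketch_shmem_startup(void)
{
    bool    found;

    if (prev_shmem_startup_hook)
        prev_shmem_startup_hook();

    /* attach to (or create) our small shared memory area */
    LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
    counter = ShmemInitStruct("stmt_counter_sketch",
                              sizeof(StmtCounter), &found);
    if (!found)
    {
        counter->lock = LWLockAssign();
        counter->calls = 0;
    }
    LWLockRelease(AddinShmemInitLock);
}

static void
sketch_ExecutorEnd(QueryDesc *queryDesc)
{
    /*
     * A real module would look at queryDesc->sourceText and update a
     * shared hash entry; this sketch only bumps a counter so that the
     * cost of touching shared memory per statement can be measured.
     */
    if (counter)
    {
        LWLockAcquire(counter->lock, LW_EXCLUSIVE);
        counter->calls++;
        LWLockRelease(counter->lock);
    }

    if (prev_ExecutorEnd)
        prev_ExecutorEnd(queryDesc);
    else
        standard_ExecutorEnd(queryDesc);
}

void
_PG_init(void)
{
    RequestAddinShmemSpace(sizeof(StmtCounter));
    RequestAddinLWLocks(1);

    prev_shmem_startup_hook = shmem_startup_hook;
    shmem_startup_hook = sketch_shmem_startup;

    prev_ExecutorEnd = ExecutorEnd_hook;
    ExecutorEnd_hook = sketch_ExecutorEnd;
}

Loading the module via shared_preload_libraries and running pgbench with
and without it should show whether touching shared memory once per
statement is an acceptable cost compared with 1 and 2.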

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
