database profiling

From: Orion Henry <orion(at)trustcommerce(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: database profiling
Date: 2004-01-20 19:47:09
Message-ID: 1074628029.1563.521.camel@orthanc
Lists: pgsql-performance


I'm running several PostgreSQL databases containing tens of millions of
rows. While PostgreSQL has proven itself amazingly scalable, I am
finding it difficult to properly tune/profile these databases as they
grow. I am not aware of any tools to profile the database beyond
heavy use of the "EXPLAIN" command and application-level logging of
query times. I wanted to lay down some ideas I have for some powerful
tools for dealing with this. Please let me know if these are redundant,
not needed, or too complex to implement.

1) Statistics kept on all the tables and indexes showing the number of
times, and the number of milliseconds spent, sequentially scanning /
index scanning them. You could easily identify an index that's not
being used or a table that's being sequentially scanned too often.
Also supply a database command to reset the gathered statistics to
zero.
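(For what it's worth, the statistics collector already exposes part of this: per-table and per-index scan counts, though not time spent, plus a reset function. A sketch, assuming the stats collector is enabled:)

```sql
-- Per-table scan counts (counts only; time spent is not tracked):
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

-- Per-index usage, to spot indexes that are never scanned:
SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0;

-- Reset the collected statistics to zero:
SELECT pg_stat_reset();
```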

2) Statistics that show the count of, and total time spent on, inserts
into / updates of / deletes from a table, with the time spent in the
table and in its indexes separated, so you could see how much the
indexes are affecting the write performance of the table.
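(The collector also counts writes per table, though again as counts rather than milliseconds, and without the table/index split:)

```sql
-- Tuples inserted, updated and deleted per table:
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC;
```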

3) The ability to view the current running time, plan and SQL text of
all currently running queries.
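(pg_stat_activity covers part of this, a sketch below: it shows the running queries and their start times, but not the plan, and the query text only appears when stats_command_string is enabled. The query-text column is named "current_query" on the releases I've used; this may differ by version:)

```sql
-- Currently running queries with their elapsed time
-- (no plan is exposed; requires stats_command_string enabled):
SELECT usename, query_start,
       now() - query_start AS runtime,
       current_query
FROM pg_stat_activity
ORDER BY query_start;
```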

Thank you for your time,

___________________________________________________
O R I O N H E N R Y

Chief Technical Officer, TrustCommerce
959 East Colorado Blvd #1, Pasadena, CA 91106
(626) 744-7700 x815, fax (626) 628-3431 orion(at)trustcommerce(dot)com
www.trustcommerce.com
