Re: Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools

From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: mahamood hussain <hussain(dot)ieg(at)gmail(dot)com>, Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools
Date: 2026-04-11 15:35:23
Message-ID: a3eed63ae63a0a0ec55b963f88f12b5fda56b904.camel@cybertec.at
Lists: pgsql-admin

On Fri, 2026-04-10 at 23:18 +0530, mahamood hussain wrote:
> From a DBA perspective, I’m looking to proactively identify problem areas—such as:
>  * Long-running queries

log_min_duration_statement = 2000
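
This can be changed cluster-wide without a restart; a minimal sketch (the 2000 ms threshold is just an example, pick what counts as "long" for you):

-- log every statement that runs longer than 2 seconds
ALTER SYSTEM SET log_min_duration_statement = 2000;
SELECT pg_reload_conf();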

>  * Jobs/stored procedures consuming high temp space

SELECT temp_blks_written, query
FROM pg_stat_statements
ORDER BY temp_blks_written DESC
LIMIT 10;
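
Note that this requires the pg_stat_statements extension; a sketch of the usual setup (loading the library needs a server restart):

-- in postgresql.conf:
--   shared_preload_libraries = 'pg_stat_statements'
-- then, after a restart, in the database you want to query:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;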

>  * Queries resulting in sequential scans due to missing indexes

There is no direct way to find that.

First, look at tables that receive large sequential scans frequently:

SELECT relid::regclass, seq_scan, seq_tup_read
FROM pg_stat_all_tables
ORDER BY least(seq_scan, seq_tup_read) DESC
LIMIT 10;
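
A possible refinement is to also look at the table size, since sequential scans over small tables are cheap and usually nothing to worry about:

SELECT relid::regclass, seq_scan, seq_tup_read,
       pg_size_pretty(pg_relation_size(relid)) AS size
FROM pg_stat_all_tables
ORDER BY seq_tup_read DESC
LIMIT 10;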

Then examine the long-running queries that consume a lot of database time:

SELECT total_exec_time, query
FROM pg_stat_statements
WHERE mean_exec_time > 100
ORDER BY total_exec_time DESC
LIMIT 10;
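
If you want to look at a fresh time window rather than everything accumulated since startup, you can clear the counters first (this discards all collected statistics):

SELECT pg_stat_statements_reset();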

See if any of those statements use one of the tables found with the first query.

Then use EXPLAIN (ANALYZE, BUFFERS) on the statement to get the execution plan
and tune the query if you can.
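
For example (table and condition here are made up for illustration):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;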

>  * Lock waits, deadlocks, and memory-heavy operations

log_lock_waits = on

Deadlocks are logged automatically.

Memory use is not tracked.
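
To watch lock waits as they happen, pg_stat_activity combined with pg_blocking_pids() works without any extension; a sketch:

SELECT pid, pg_blocking_pids(pid) AS blocked_by,
       wait_event_type, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;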

Yours,
Laurenz Albe
