Re: Performance issue!

From: Pär Mattsson <par(dot)x(dot)mattsson(at)gmail(dot)com>
To: ceo(at)goodfellas(dot)agency
Cc: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Performance issue!
Date: 2025-12-22 16:38:13
Message-ID: BBDB8FD7-8820-402C-9A43-1D1728E0DEAC@gmail.com
Lists: pgsql-admin

Hi! It is Postgres version 16.

Thanks a lot for the answer!


Regards Pär

Mobile: +46 70 606 96 45




On 22 Dec 2025, at 17:26, CEO Goodfellas Holdings LLC <ceo(at)goodfellas(dot)agency> wrote:







Hi everyone, I'm going to take a crack at answering the gentleman's question. Maybe I can share some wealth of experience.


The disclaimer is that I have a strong AI background: I run a personal multi-bot setup with API connections to five different providers and then pass the responses through Vertex to cross-check reliability. But we all know it's still a gamble, so happy holidays and I hope this helps!



 Yep — here are a few Postgres performance script “bundles” I’ve used in real life that produce clean, executive-friendly output (and are safe to run in production if you keep the sampling windows sane). They’re grouped by what stakeholders usually ask: “Is the DB slow?” “What’s slow?” “Who’s causing it?” “Is it IO/CPU/locks?” “Is it indexes?”





1) “What’s happening right now?” (top waits, locks, long queries)




1A. Active sessions + wait reasons (quick triage)


SELECT
  now() AS ts,
  pid,
  usename,
  application_name,
  client_addr,
  state,
  wait_event_type,
  wait_event,
  date_trunc('second', now() - query_start) AS runtime,
  left(regexp_replace(query, '\s+', ' ', 'g'), 140) AS query_140
FROM pg_stat_activity
WHERE pid <> pg_backend_pid()
  AND state <> 'idle'
ORDER BY (now() - query_start) DESC
LIMIT 30;


1B. Blocking / blocked chain (this one wins political battles)


WITH blocked AS (
  SELECT
    a.pid AS blocked_pid,
    a.usename AS blocked_user,
    a.application_name AS blocked_app,
    a.client_addr AS blocked_client,
    now() - a.query_start AS blocked_runtime,
    left(regexp_replace(a.query, '\s+', ' ', 'g'), 120) AS blocked_query,
    unnest(pg_blocking_pids(a.pid)) AS blocking_pid
  FROM pg_stat_activity a
  WHERE a.state <> 'idle'
    AND cardinality(pg_blocking_pids(a.pid)) > 0
)
SELECT
  b.blocked_pid,
  b.blocked_user,
  b.blocked_app,
  b.blocked_client,
  date_trunc('second', b.blocked_runtime) AS blocked_runtime,
  b.blocked_query,
  b.blocking_pid,
  a.usename AS blocking_user,
  a.application_name AS blocking_app,
  date_trunc('second', now() - a.query_start) AS blocking_runtime,
  left(regexp_replace(a.query, '\s+', ' ', 'g'), 120) AS blocking_query
FROM blocked b
JOIN pg_stat_activity a ON a.pid = b.blocking_pid
ORDER BY b.blocked_runtime DESC;
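
If the chain points at a clearly stuck blocker, you can cancel or terminate it. This needs superuser or pg_signal_backend membership, and the pid below is a hypothetical blocking_pid taken from the query above, not a real value:

SELECT pg_cancel_backend(12345);     -- politely cancel its current query
SELECT pg_terminate_backend(12345);  -- if cancel doesn't release the lock

pg_cancel_backend is the safer first step; pg_terminate_backend drops the whole connection.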


1C. Heavy relations currently locked (what table is “on fire”)


SELECT
  c.relname AS relation,
  l.mode,
  l.granted,
  count(*) AS lock_count
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relkind IN ('r','p','i','m')  -- table / partitioned table / index / materialized view
GROUP BY 1,2,3
ORDER BY lock_count DESC, relation
LIMIT 50;





2) “What are the slowest queries?” (requires pg_stat_statements)



If you can enable one extension for performance work, make it this one.

You'll need shared_preload_libraries = 'pg_stat_statements' in postgresql.conf (restart required), then CREATE EXTENSION pg_stat_statements; in the database.
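
For reference, a minimal setup sketch (the values are illustrative defaults, not from this thread):

-- postgresql.conf (restart required):
--   shared_preload_libraries = 'pg_stat_statements'
--   pg_stat_statements.max = 5000     -- optional: number of statements tracked
-- then, in each database you care about:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;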



2A. Top total time (who costs you the most)


SELECT
  calls,
  round(total_exec_time::numeric, 1) AS total_ms,
  round(mean_exec_time::numeric, 3) AS mean_ms,
  rows,
  round(100.0 * shared_blks_hit / NULLIF(shared_blks_hit + shared_blks_read, 0), 2) AS cache_hit_pct,
  left(regexp_replace(query, '\s+', ' ', 'g'), 160) AS query_160
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;


2B. Top mean time (classic “one query ruins the day” list)


SELECT
  calls,
  round(mean_exec_time::numeric, 3) AS mean_ms,
  round(stddev_exec_time::numeric, 3) AS stddev_ms,
  round(max_exec_time::numeric, 1) AS max_ms,
  rows,
  left(regexp_replace(query, '\s+', ' ', 'g'), 160) AS query_160
FROM pg_stat_statements
WHERE calls >= 10
ORDER BY mean_exec_time DESC
LIMIT 20;


2C. IO-heavy queries (read storms)


SELECT
  calls,
  (shared_blks_read + local_blks_read) AS blks_read,
  (shared_blks_hit  + local_blks_hit)  AS blks_hit,
  -- blk_read_time/blk_write_time stay at 0 unless track_io_timing = on
  round((blk_read_time + blk_write_time)::numeric, 1) AS io_ms,
  round(total_exec_time::numeric, 1) AS total_ms,
  left(regexp_replace(query, '\s+', ' ', 'g'), 160) AS query_160
FROM pg_stat_statements
ORDER BY (shared_blks_read + local_blks_read) DESC
LIMIT 20;





3) “Are we index/scan healthy?” (high-signal, low-noise)




3A. Tables with heavy sequential scans


SELECT
  relname,
  seq_scan,
  seq_tup_read,
  idx_scan,
  n_live_tup,
  round(100.0 * idx_scan / NULLIF(seq_scan + idx_scan, 0), 2) AS idx_ratio_pct
FROM pg_stat_user_tables
WHERE (seq_scan + idx_scan) > 0
ORDER BY seq_tup_read DESC
LIMIT 25;


3B. Buffer cache hit ratio per database (quick “cache health” indicator)


SELECT
  datname,
  round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct,
  blks_read,
  blks_hit
FROM pg_stat_database
ORDER BY cache_hit_pct ASC NULLS LAST;





4) “Autovacuum / bloat / dead tuples” (the silent killer)




4A. Dead tuples & vacuum stats (which tables need love)


SELECT
  relname,
  n_live_tup,
  n_dead_tup,
  round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
  last_vacuum,
  last_autovacuum,
  last_analyze,
  last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 25;
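
If a hot table keeps topping this list, per-table autovacuum settings usually help more than global ones. A sketch, where the table name and thresholds are purely illustrative:

-- vacuum once dead tuples exceed ~1% of the table (global default is 20%)
ALTER TABLE some_hot_table SET (
  autovacuum_vacuum_scale_factor  = 0.01,
  autovacuum_analyze_scale_factor = 0.02
);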


4B. Autovacuum activity “right now”


SELECT
  pid,
  datname,
  usename,
  state,
  wait_event_type,
  wait_event,
  date_trunc('second', now() - query_start) AS runtime,
  left(regexp_replace(query, '\s+', ' ', 'g'), 140) AS query_140
FROM pg_stat_activity
WHERE query ILIKE 'autovacuum:%'
ORDER BY (now() - query_start) DESC;





5) “One-liner report” you can paste into Slack (executive snapshot)


SELECT
  now() AS ts,
  (SELECT count(*) FROM pg_stat_activity WHERE state <> 'idle') AS active_sessions,
  (SELECT count(*) FROM pg_stat_activity WHERE wait_event_type IS NOT NULL AND state <> 'idle') AS waiting_sessions,
  (SELECT count(*) FROM pg_stat_activity WHERE cardinality(pg_blocking_pids(pid)) > 0) AS blocked_sessions,
  (SELECT round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2)
   FROM pg_stat_database WHERE datname = current_database()) AS cache_hit_pct;





How to turn these into a “nice output” script (repeatable + pretty)



If you want one script that prints sections with headers, timestamps, and consistent columns, the cleanest route is:



psql + \echo headers + \x auto (fast, minimal dependencies)

Or a small Bash/Python wrapper that runs each query, renders tables, and optionally exports CSV/JSON.




Here’s a simple pattern for psql:

\timing on
\x auto
\pset pager off

\echo '=== ACTIVE SESSIONS (top 30) ==='
-- paste query 1A

\echo '=== BLOCKING CHAINS ==='
-- paste query 1B

\echo '=== TOP QUERIES (TOTAL TIME) ==='
-- paste query 2A

Run it:

psql "postgresql://user:pass@host:5432/dbname" -f perf_report.sql





What I’d enable day-1 on any new project (minimal, high ROI)




pg_stat_statements (query-level truth)

auto_explain (optional: catches slow queries at runtime, but use carefully)

log_min_duration_statement (or sampling) for forensic timeline correlation
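
A hedged starting point for those settings in postgresql.conf (the thresholds are illustrative; tune them to your workload and mind the overhead of log_analyze):

shared_preload_libraries = 'pg_stat_statements,auto_explain'
auto_explain.log_min_duration = '500ms'   # log plans for anything slower than this
auto_explain.log_analyze = off            # 'on' adds per-node timing overhead
log_min_duration_statement = '1s'         # plain statement logging threshold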




These are standard Postgres instrumentation options discussed in official docs for pg_stat_statements and statistics views (pg_stat_activity, pg_locks, pg_stat_database, etc.).




If you tell me (a) Postgres version, (b) managed service vs self-hosted, and (c) whether you can enable pg_stat_statements, I’ll bundle this into a single “Perf Pack” file with:



perf_report.sql (human-readable)

perf_report.csv export mode

optional “baseline vs now” delta mode (best for stakeholder questions like “did it get worse after deploy?”)


Lazaro Sanchez CEO at Cyberellum Technologies & Laboratory

ceo(at)cyberellum(dot)technology 










From: Pär Mattsson <par(dot)x(dot)mattsson(at)gmail(dot)com>

Sent: Monday, December 22, 2025 10:35 AM

To: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>

Subject: Performance issue!

 



Hi!


Does anyone have performance scripts with nice output to share?




I am on a new project where we are getting lots of questions about database performance!






Best regards, Pär


