| From: | Keith <keith(at)keithf4(dot)com> |
|---|---|
| To: | mahamood hussain <hussain(dot)ieg(at)gmail(dot)com> |
| Cc: | Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools |
| Date: | 2026-04-10 18:34:22 |
| Message-ID: | CAHw75vtLsBL-O5EGVTgzZR=MUXFG=k0r3vEYma7YVdaiaDZJ1Q@mail.gmail.com |
| Lists: | pgsql-admin |
For log analysis, pgbadger (https://pgbadger.darold.net/) is the best tool
I've come across. Just be sure to adjust your log_line_prefix to include
information such as client host/IP, role, database and error code. Turn on
other settings as well, such as checkpoint logging, temp file generation and
lock waits. The more you log, the more pgbadger can help analyze. Just
keep an eye on your log volume, since you don't want the logging to in turn
affect your database. If you can write the logs to a different mount point,
that can isolate that I/O from the database.
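As a rough sketch (the exact prefix fields are a matter of taste, and the %e SQLSTATE escape needs a reasonably recent Postgres), those logging settings might look something like this in postgresql.conf:

```ini
# Hypothetical postgresql.conf excerpt; tune the thresholds to your workload.
# Prefix escapes: %t time, %p pid, %h client host, %u role, %d database, %e SQLSTATE
log_line_prefix = '%t [%p] %h %u@%d %e '
log_checkpoints = on
log_temp_files = 0                  # 0 = log every temp file, with its size
log_lock_waits = on                 # log waits longer than deadlock_timeout
log_min_duration_statement = 1000   # log statements running over 1s (value in ms)
```

Since you already have csvlog output, you can then feed the files straight to pgbadger, e.g. `pgbadger --format csv --outfile report.html /path/to/postgresql-*.csv` (the paths here are made up; point it at your actual log directory).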
For trending other things, I'd recommend starting with something like
Prometheus (https://prometheus.io/) and Grafana (https://grafana.com/). Get
an exporter set up (sql_exporter is a good one -
https://github.com/burningalchemist/sql_exporter), then set up alerting and
trending on your metrics so you can go back and look at the history when
those alerts fire. If you need to start scaling bigger, you can then look
into something like Datadog, and since you'll already have a better idea of
what you actually need to monitor by then, it won't seem nearly as
overwhelming jumping to something like that.
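For illustration only (the metric name, labels, and query below are mine, not something from sql_exporter's docs), a minimal sql_exporter collector definition looks roughly like this:

```yaml
# Hypothetical collector file, e.g. pg_activity.collector.yml
collector_name: pg_activity
metrics:
  - metric_name: pg_active_backends
    type: gauge
    help: 'Active backends per database'
    key_labels: [datname]
    values: [count]
    query: |
      SELECT datname, COUNT(*) AS count
      FROM pg_stat_activity
      WHERE state = 'active'
      GROUP BY datname
```

You'd reference that collector from the exporter's main config, point it at your Postgres DSN, and graph/alert on the resulting metric in Grafana.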
Keith
On Fri, Apr 10, 2026 at 1:49 PM mahamood hussain <hussain(dot)ieg(at)gmail(dot)com>
wrote:
> Hi Team,
>
> We are currently working on a migration project from DB2 to PostgreSQL.
> Post-migration, we’re observing several performance issues such as
> long-running queries and occasional instance crashes. It also appears that
> some application-side workloads may not be optimized for PostgreSQL.
>
> From a DBA perspective, I’m looking to proactively identify problem
> areas—such as:
>
> - Long-running queries
> - Jobs/stored procedures consuming high temp space
> - Queries resulting in sequential scans due to missing indexes
> - Lock waits, deadlocks, and memory-heavy operations
>
> We already have key parameters enabled (pg_stat_statements, pg_buffercache,
> etc.), and PostgreSQL is generating logs in .csv format. However, the
> main challenge is efficiently analyzing these logs and identifying
> performance bottlenecks at scale (databases ranging from ~1TB to 15TB).
>
> We currently don’t have third-party monitoring tools like Datadog, so I’m
> looking for *recommendations on free or lightweight tools* and best
> practices to:
>
> - Parse and analyze PostgreSQL logs (especially CSV logs)
> - Identify top resource-consuming queries and patterns
> - Correlate temp usage, memory pressure, and query behavior
> - Generate actionable insights for the engineering team
>
> Any suggestions on tools, scripts, or approaches that have worked well in
> similar large-scale environments would be greatly appreciated.
>