Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools
Hi Team,
We are currently working on a migration project from DB2 to PostgreSQL. Post-migration, we’re observing several performance issues such as long-running queries and occasional instance crashes. It also appears that some application-side workloads may not be optimized for PostgreSQL.
From a DBA perspective, I’m looking to proactively identify problem areas—such as:
- Long-running queries
- Jobs/stored procedures consuming high temp space
- Queries resulting in sequential scans due to missing indexes
- Lock waits, deadlocks, and memory-heavy operations
We already have the key extensions and views enabled (pg_stat_statements, pg_buffercache, etc.), and PostgreSQL is generating logs in .csv format. However, the main challenge is efficiently analyzing these logs and identifying performance bottlenecks at scale (databases ranging from ~1TB to 15TB).
We currently don’t have third-party monitoring tools like Datadog, so I’m looking for recommendations on free or lightweight tools and best practices to:
- Parse and analyze PostgreSQL logs (especially CSV logs)
- Identify top resource-consuming queries and patterns
- Correlate temp usage, memory pressure, and query behavior
- Generate actionable insights for the engineering team
Any suggestions on tools, scripts, or approaches that have worked well in similar large-scale environments would be greatly appreciated.
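Since csvlog output is fixed-column CSV, even stdlib tooling can pull out the slow statements before reaching for a full analyzer. A minimal sketch of that idea; the sample lines and column positions below are illustrative and assume the PostgreSQL 13+ csvlog layout, where the message text is the 14th column (index 13), so verify against your server version:

```python
import csv
import io
import re

# Hypothetical csvlog excerpt (trailing columns abbreviated). In the
# PostgreSQL 13+ csvlog layout the message text is column index 13.
SAMPLE = (
    '2026-04-10 19:48:01.123 UTC,"app","proddb",4242,"10.0.0.5:51234",'
    '"67f8a1c2.1092",1,"SELECT","2026-04-10 19:40:00 UTC","3/12345",0,'
    '"LOG","00000","duration: 5321.007 ms  statement: SELECT * FROM orders",'
    ',,,,,,,,"psql","client backend",,0\n'
    '2026-04-10 19:48:02.456 UTC,"app","proddb",4243,"10.0.0.6:51301",'
    '"67f8a1d0.1093",1,"UPDATE","2026-04-10 19:41:00 UTC","3/12346",0,'
    '"LOG","00000","duration: 120.500 ms  statement: UPDATE stock SET qty = 5",'
    ',,,,,,,,"psql","client backend",,0\n'
    '2026-04-10 19:48:03.789 UTC,,,4100,,"67f8a0aa.1004",7,,'
    '"2026-04-10 19:00:00 UTC",,0,"LOG","00000","checkpoint starting: time",'
    ',,,,,,,,"","checkpointer",,0\n'
)

# Matches messages emitted when log_min_duration_statement fires.
DURATION_RE = re.compile(
    r"^duration: ([\d.]+) ms\s+(?:statement|execute [^:]*): (.*)", re.S
)

slow = []
for row in csv.reader(io.StringIO(SAMPLE)):
    m = DURATION_RE.match(row[13])
    if m:  # skip non-duration messages, e.g. checkpoint lines
        slow.append((float(m.group(1)), m.group(2)))

slow.sort(reverse=True)  # longest statements first
for ms, stmt in slow:
    print(f"{ms:>10.1f} ms  {stmt}")
```

For multi-TB instances the same loop can stream the rotated csvlog files one by one instead of an in-memory sample.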
Re: Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools
On Fri, Apr 10, 2026 at 1:49 PM mahamood hussain <hussain.ieg@gmail.com> wrote:
Have you set log_min_duration_statement to some number of milliseconds? When you do that, the query and its parameters show up in the log file. Grep for "duration:" to find statements taking longer than that threshold.

Does it require some manual effort? Sure. But it's free.

Barring that, try installing pgBadger.
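The grep-for-"duration:" approach can be taken one step further by stripping literals so repeated statements group into patterns, which is roughly what pgBadger automates. A naive sketch, assuming stderr-format log lines with log_min_duration_statement set; the sample lines and the normalize() heuristic are illustrative only:

```python
import re
from collections import defaultdict

# Hypothetical stderr-format log lines, as produced when
# log_min_duration_statement is set.
LOG_LINES = [
    "2026-04-10 19:48:01 UTC [4242] LOG:  duration: 5321.007 ms  statement: SELECT * FROM orders WHERE id = 42",
    "2026-04-10 19:48:03 UTC [4243] LOG:  duration: 2104.500 ms  statement: SELECT * FROM orders WHERE id = 97",
    "2026-04-10 19:48:05 UTC [4244] LOG:  duration: 900.250 ms  statement: UPDATE stock SET qty = 5 WHERE sku = 'A-1'",
]

LINE_RE = re.compile(r"duration: ([\d.]+) ms\s+statement: (.*)")

def normalize(sql: str) -> str:
    """Very naive literal stripping so similar statements group together."""
    sql = re.sub(r"'[^']*'", "?", sql)           # string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)   # numeric literals
    return sql

stats = defaultdict(lambda: [0, 0.0])  # pattern -> [calls, total_ms]
for line in LOG_LINES:
    m = LINE_RE.search(line)
    if m:
        pattern = normalize(m.group(2))
        stats[pattern][0] += 1
        stats[pattern][1] += float(m.group(1))

# Report patterns by total time consumed, heaviest first.
for pattern, (calls, total_ms) in sorted(stats.items(), key=lambda kv: -kv[1][1]):
    print(f"{calls:>4} calls  {total_ms:>10.1f} ms total  {pattern}")
```

Sorting by total time rather than per-call duration surfaces the moderately slow statements that run thousands of times, which grep alone tends to hide.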
Re: Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools
On Fri, 2026-04-10 at 23:18 +0530, mahamood hussain wrote:
> From a DBA perspective, I’m looking to proactively identify problem areas—such as:
> * Long-running queries

log_min_duration_statement = 2000

> * Jobs/stored procedures consuming high temp space

SELECT temp_blks_written, query
FROM pg_stat_statements
ORDER BY temp_blks_written DESC
LIMIT 10;

> * Queries resulting in sequential scans due to missing indexes

There is no direct way to find that. First, look at tables that receive large sequential scans frequently:

SELECT relid::regclass, seq_scan, seq_tup_read
FROM pg_stat_all_tables
ORDER BY least(seq_scan, seq_tup_read) DESC
LIMIT 10;

Then examine the long-running queries that consume a lot of database time:

SELECT total_exec_time, query
FROM pg_stat_statements
WHERE mean_exec_time > 100
ORDER BY total_exec_time DESC
LIMIT 10;

See if any of those use one of the tables found with the first query. Then use EXPLAIN (ANALYZE, BUFFERS) on the statement to get the execution plan and tune the query if you can.

> * Lock waits, deadlocks, and memory-heavy operations

log_lock_waits = on

Deadlocks are logged automatically. Memory use is not tracked.

Yours,
Laurenz Albe
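For reference, the logging settings mentioned in this thread can be collected into one postgresql.conf fragment. The thresholds here are illustrative starting points, not recommendations, and note that log_temp_files takes a size in kilobytes:

```
log_min_duration_statement = 2000   # log statements slower than 2 s
log_lock_waits = on                 # log waits longer than deadlock_timeout
log_temp_files = 102400             # log temp files over 100 MB (value in kB)
log_destination = 'csvlog'          # the original poster already uses CSV logs
logging_collector = on              # required for csvlog output
```

All of these take effect on configuration reload; no restart is needed except for logging_collector.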