Re: Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools
| From | Laurenz Albe |
|---|---|
| Subject | Re: Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools |
| Date | |
| Msg-id | a3eed63ae63a0a0ec55b963f88f12b5fda56b904.camel@cybertec.at |
| In reply to | Seeking Advice: PostgreSQL Performance Troubleshooting Without Third-Party Tools (mahamood hussain <hussain.ieg@gmail.com>) |
| List | pgsql-admin |
On Fri, 2026-04-10 at 23:18 +0530, mahamood hussain wrote:

> From a DBA perspective, I’m looking to proactively identify problem areas—such as:
> * Long-running queries

    log_min_duration_statement = 2000

> * Jobs/stored procedures consuming high temp space

    SELECT temp_blks_written, query
    FROM pg_stat_statements
    ORDER BY temp_blks_written DESC
    LIMIT 10;

> * Queries resulting in sequential scans due to missing indexes

There is no direct way to find that.

First, look at tables that receive large sequential scans frequently:

    SELECT relid::regclass, seq_scan, seq_tup_read
    FROM pg_stat_all_tables
    ORDER BY least(seq_scan, seq_tup_read) DESC
    LIMIT 10;

Then examine the long-running queries that consume a lot of database time:

    SELECT total_exec_time, query
    FROM pg_stat_statements
    WHERE mean_exec_time > 100
    ORDER BY total_exec_time DESC
    LIMIT 10;

See if any of those use one of the tables found with the first query.
Then use EXPLAIN (ANALYZE, BUFFERS) on the statement to get the
execution plan and tune the query if you can.

> * Lock waits, deadlocks, and memory-heavy operations

    log_lock_waits = on

Deadlocks are logged automatically. Memory use is not tracked.

Yours,
Laurenz Albe
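For reference, the settings mentioned in the reply can be collected in postgresql.conf. Note that pg_stat_statements must be preloaded and then created as an extension in the database before its view is usable; the numeric thresholds here are only the illustrative values from the reply, not recommendations:

```
# postgresql.conf sketch (restart required for shared_preload_libraries)
shared_preload_libraries = 'pg_stat_statements'

# log statements running longer than 2 seconds (milliseconds)
log_min_duration_statement = 2000

# log whenever a session waits longer than deadlock_timeout for a lock
log_lock_waits = on
```

After a restart, run `CREATE EXTENSION pg_stat_statements;` in the database you want to monitor, then the queries above against pg_stat_statements will work.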
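As a minimal sketch of the final step, here is what running EXPLAIN (ANALYZE, BUFFERS) on a suspect statement might look like; the table and column names are hypothetical, not from the original thread:

```
-- Get the actual execution plan, row counts, and buffer usage
-- for a statement identified via pg_stat_statements:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;

-- If the plan shows a "Seq Scan" node with a highly selective filter,
-- an index on the filter column is a candidate fix:
CREATE INDEX ON orders (customer_id);
```

Re-run the EXPLAIN afterwards to confirm the planner switches to an index scan and that buffer reads drop.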