
PostgreSQL Performance Deep Dive: Indexes, EXPLAIN, and Query Tuning in 2026

PostgreSQL is one of the most capable databases available, but most developers only scratch the surface. This guide covers the tools and techniques that actually move the needle on query performance in production.

EXPLAIN ANALYZE: Read the Query Plan

Before you optimize anything, read the plan:

EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.id, u.email, COUNT(o.id) AS order_count
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.created_at > NOW() - INTERVAL '30 days'
GROUP BY u.id, u.email;

Key things to look for:

  • Seq Scan on a large table: usually a sign that a useful index is missing
  • rows= estimate vs. actual: big gaps mean stale statistics, so run ANALYZE
  • Buffers: shared hit vs. read: reads are cache misses, and cache misses hurt latency
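
For the query above, a truncated plan might look like this (hand-written for illustration; your costs, timings, and row counts will differ):

```
GroupAggregate  (cost=1532.10..1613.25 rows=4500 width=48)
                (actual time=12.083..14.771 rows=312 loops=1)
  Buffers: shared hit=1220 read=455
  ->  Sort  (...)
        ->  Hash Join  (...)
              ->  Seq Scan on users u  (cost=0.00..431.00 rows=4500 width=40)
                                       (actual time=0.015..3.902 rows=312 loops=1)
                    Filter: (created_at > (now() - '30 days'::interval))
```

Reading it: the planner expected 4500 recent users but found 312 (stale statistics, so ANALYZE is overdue), and 455 of the touched buffers came from disk rather than shared buffers.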

Index Strategies That Actually Matter

A B-tree index on a high-cardinality column is the baseline. Go further with these:

-- Partial index: only index the rows you query
CREATE INDEX idx_orders_pending
  ON orders (created_at)
  WHERE status = 'pending';

-- Composite index: column order matters
-- Put equality conditions first, range last
CREATE INDEX idx_users_plan_created
  ON users (plan_id, created_at);

-- Covering index: avoid heap fetches entirely
CREATE INDEX idx_orders_covering
  ON orders (user_id)
  INCLUDE (status, total_amount);

-- Expression index: index a computed value
CREATE INDEX idx_users_email_lower
  ON users (LOWER(email));
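
To make the "equality first, range last" rule concrete, the composite index above matches queries shaped like this (the plan_id value is made up for illustration):

```sql
-- idx_users_plan_created can satisfy both conditions in one index scan:
SELECT id, email
FROM users
WHERE plan_id = 3                                -- equality on the leading column
  AND created_at > NOW() - INTERVAL '7 days';    -- range on the trailing column
```

Reversing the column order (created_at first) would force the scan to walk every recent row across all plans and filter on plan_id afterwards.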

VACUUM, AUTOVACUUM, and Table Bloat

Dead tuples from UPDATE and DELETE accumulate. AUTOVACUUM handles this, but not always fast enough under heavy write load:

-- Check table bloat
SELECT relname, n_dead_tup, n_live_tup,
  ROUND(n_dead_tup::numeric / NULLIF(n_live_tup, 0) * 100, 2) AS dead_ratio
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Manually vacuum a hot table without locking
VACUUM (ANALYZE, VERBOSE) orders;

-- Tune autovacuum per-table for high-churn tables
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_analyze_scale_factor = 0.005
);

Connection Pooling: PgBouncer is Not Optional

PostgreSQL spawns a process per connection. At 500+ connections, you pay in memory and context-switch overhead. Use PgBouncer in transaction mode:

# pgbouncer.ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
server_idle_timeout = 600

Transaction mode means a server connection is only held for the duration of a transaction — not the entire client session. Most Node.js and NestJS apps work fine with this.
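The trade-off: in transaction mode, consecutive transactions from the same client may land on different server connections, so session-scoped state is unreliable. A few patterns to avoid or adapt, sketched in SQL:

```sql
-- Session-level SET leaks to whichever client gets the connection next;
-- scope settings to the transaction instead:
BEGIN;
SET LOCAL statement_timeout = '5s';   -- reverted automatically at COMMIT/ROLLBACK
-- ... queries ...
COMMIT;

-- Named prepared statements and session advisory locks are also
-- session-scoped, so they break under transaction pooling:
-- PREPARE stmt AS SELECT ...;     -- may not exist on the next transaction's connection
-- SELECT pg_advisory_lock(42);    -- prefer pg_advisory_xact_lock(42) inside a transaction
```
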

Useful Diagnostic Queries

Keep these in your toolbox:

-- Find slow queries (requires pg_stat_statements)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Find missing indexes (sequential scans on large tables)
SELECT relname, seq_scan, seq_tup_read,
  idx_scan, seq_tup_read / seq_scan AS avg_seq_read
FROM pg_stat_user_tables
WHERE seq_scan > 0
ORDER BY seq_tup_read DESC
LIMIT 10;

-- Find unused indexes (wasted write overhead)
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
  AND indexrelname NOT LIKE 'pg_%'
ORDER BY pg_relation_size(indexrelid) DESC;

Best Practices Summary

  • Always run EXPLAIN (ANALYZE, BUFFERS) before adding an index
  • Use partial indexes to keep indexes small and fast
  • INCLUDE columns in indexes to avoid heap fetches
  • Tune autovacuum per-table on high-churn tables
  • Enable pg_stat_statements and review it weekly
  • Put PgBouncer in front of every production Postgres instance
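
On the pg_stat_statements point: enabling it is a two-step job, since the extension must be preloaded at server start before it can be created in a database:

```sql
-- 1. In postgresql.conf, then restart the server:
--    shared_preload_libraries = 'pg_stat_statements'

-- 2. In each database you want to track:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Optionally reset counters after a deploy so averages reflect the current release:
SELECT pg_stat_statements_reset();
```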