Design first: model entities with clear ownership and avoid over-normalizing early. Add created_at/updated_at to every table. Use UUIDs only when you actually need them; integer keys are smaller and index faster. For time-series data, partition the table or split it out so the hot working set stays lean.
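A minimal schema sketch of the above, assuming Postgres and psycopg2; the users/events tables, column names, and connection string are illustrative, not prescribed.

```python
# A minimal sketch, assuming Postgres and psycopg2; names are illustrative.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS users (
    id          bigserial PRIMARY KEY,          -- plain integer keys unless UUIDs are required
    email       text NOT NULL UNIQUE,
    created_at  timestamptz NOT NULL DEFAULT now(),
    updated_at  timestamptz NOT NULL DEFAULT now()
);

-- Time-series data lives in its own partitioned table so hot partitions stay small.
CREATE TABLE IF NOT EXISTS events (
    user_id     bigint NOT NULL REFERENCES users(id),
    occurred_at timestamptz NOT NULL,
    payload     jsonb,
    PRIMARY KEY (user_id, occurred_at)          -- must include the partition key
) PARTITION BY RANGE (occurred_at);

CREATE TABLE IF NOT EXISTS events_2024_06
    PARTITION OF events
    FOR VALUES FROM ('2024-06-01') TO ('2024-07-01');
"""

def migrate(dsn: str) -> None:
    """Apply the DDL in a single transaction."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(DDL)

if __name__ == "__main__":
    migrate("dbname=app user=app")  # hypothetical connection string
```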
Indexing: add composite indexes that match your most common WHERE + ORDER BY patterns. Use covering indexes when possible so queries are answered from the index alone. Avoid leading wildcards (LIKE '%term') that prevent index use; push filtering onto selective, indexed columns. Check plans with EXPLAIN, hunt down sequential scans, and drop unused indexes, which only bloat writes.
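A sketch of that workflow, assuming Postgres, psycopg2, and the hypothetical events table from the previous example; the index name is made up, and INCLUDE requires Postgres 11 or newer.

```python
# A minimal sketch, assuming Postgres, psycopg2, and the hypothetical events
# table defined earlier; INCLUDE needs Postgres 11+.
import psycopg2

INDEX_DDL = """
-- Matches the common pattern: WHERE user_id = ? ORDER BY occurred_at DESC,
-- with INCLUDE so the query can be answered from the index alone.
CREATE INDEX IF NOT EXISTS idx_events_user_time
    ON events (user_id, occurred_at DESC) INCLUDE (payload);
"""

def explain_recent_events(dsn: str, user_id: int) -> None:
    """Print the plan so sequential scans are easy to spot."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(INDEX_DDL)
        cur.execute(
            "EXPLAIN (ANALYZE, BUFFERS) "
            "SELECT occurred_at, payload FROM events "
            "WHERE user_id = %s ORDER BY occurred_at DESC LIMIT 50",
            (user_id,),
        )
        for (line,) in cur.fetchall():
            print(line)
```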
Query shaping: fetch only the columns you need. Break N+1 patterns with joins or batched loaders. Cap result sets with LIMIT and pagination. For writes, wrap multi-step changes in transactions and keep them short. Move heavy analytics to replicas or warehouses.
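A query-shaping sketch under the same assumptions (Postgres, psycopg2, the hypothetical events table); the accounts table in the transaction example is likewise made up for illustration.

```python
# A minimal sketch, assuming Postgres, psycopg2, and the hypothetical events
# table above; the accounts table is also hypothetical.
import psycopg2

def recent_events_for_users(conn, user_ids: list[int], page_size: int = 50):
    """Batched loader: one query for many users instead of one per user (N+1)."""
    with conn.cursor() as cur:
        cur.execute(
            # Fetch only the needed columns and cap the result set.
            "SELECT user_id, occurred_at FROM events "
            "WHERE user_id = ANY(%s) "
            "ORDER BY occurred_at DESC LIMIT %s",
            (user_ids, page_size),
        )
        return cur.fetchall()

def transfer_credit(conn, from_id: int, to_id: int, amount: int) -> None:
    """Multi-step write kept inside one short transaction."""
    with conn:  # psycopg2 commits on success, rolls back on exception
        with conn.cursor() as cur:
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                        (amount, from_id))
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                        (amount, to_id))
```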
Caching: layer results in Redis for expensive reads; set sensible TTLs and bust entries on writes. Use HTTP caching for API responses that are safe to cache. Apply connection pooling and circuit breakers to protect the database under load.
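A cache-aside sketch for the Redis layer, assuming redis-py; load_profile/save_profile are hypothetical stand-ins for your existing database access functions, and the key format and TTL are illustrative.

```python
# A minimal cache-aside sketch, assuming redis-py; load_profile/save_profile
# are hypothetical stand-ins for your database access layer.
import json
import redis

r = redis.Redis(host="localhost", port=6379)
PROFILE_TTL = 300  # seconds; tune per endpoint

def get_profile(user_id: int, load_profile) -> dict:
    """Read path: serve from Redis, fall back to the database on a miss."""
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = load_profile(user_id)               # expensive database read
    r.setex(key, PROFILE_TTL, json.dumps(profile))
    return profile

def update_profile(user_id: int, fields: dict, save_profile) -> None:
    """Write path: persist first, then bust the cache so readers refill it."""
    save_profile(user_id, fields)
    r.delete(f"profile:{user_id}")
```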
Operations: enable slow-query logging, monitor lock wait times, and alert on replication lag. Run regular VACUUM/ANALYZE (Postgres) and keep autovacuum tuned. Backups plus restore drills are mandatory. When scaling, vertical upgrades beat premature sharding, and read replicas absorb read spikes.
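A monitoring sketch for those checks, assuming Postgres and psycopg2, run against the primary with a role allowed to use ALTER SYSTEM; the 500 ms threshold and connection string are illustrative starting points.

```python
# A minimal sketch, assuming Postgres and psycopg2; run against the primary
# with a role permitted to use ALTER SYSTEM. Thresholds are illustrative.
import psycopg2

CHECKS = {
    # Sessions currently waiting on a lock, and for how long.
    "lock_waits": """
        SELECT pid, now() - query_start AS waiting_for, left(query, 80) AS query
        FROM pg_stat_activity
        WHERE wait_event_type = 'Lock'
    """,
    # Replication lag as reported by the primary (Postgres 10+).
    "replication_lag": """
        SELECT application_name, replay_lag
        FROM pg_stat_replication
    """,
}

def run_checks(dsn: str) -> None:
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
    try:
        with conn.cursor() as cur:
            # Log any statement slower than 500 ms; pg_reload_conf applies it.
            cur.execute("ALTER SYSTEM SET log_min_duration_statement = '500ms'")
            cur.execute("SELECT pg_reload_conf()")
            for name, sql in CHECKS.items():
                cur.execute(sql)
                print(name, cur.fetchall())
    finally:
        conn.close()

if __name__ == "__main__":
    run_checks("dbname=app user=postgres")  # hypothetical connection string
```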