What Are “Dirty Pages” in PostgreSQL?

PostgreSQL stores data in fixed‑size blocks (pages), normally 8 KB. When a client updates or inserts data, PostgreSQL does not immediately write those changes to disk. Instead, it loads the affected page into shared memory (shared buffers), makes the modification there, and marks the page as dirty. A “dirty page” means the version of that page in memory is newer than the on‑disk copy.
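As a minimal sketch of how to see this in practice: the pg_buffercache extension (shipped in PostgreSQL's contrib) exposes an isdirty flag for every shared buffer, so a query along these lines counts dirty buffers per relation in the current database (viewing the data typically requires superuser or pg_monitor privileges):

```sql
-- Minimal sketch: count dirty shared buffers per relation using pg_buffercache.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT c.relname,
       count(*) AS dirty_buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.isdirty
  AND b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY dirty_buffers DESC
LIMIT 10;
```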

Introduction to NUMA

PostgreSQL and NUMA, part 1 of 4. This series covers the specifics of running PostgreSQL on large systems with many processors. In my experience, people confronted with these problems often spend months learning the basics. This series tries to remove that hurdle by providing a clear background on the topics in question, in the hope that future generations of database engineers and administrators won't have to figure things out through months of trial and error. This entry focuses on the low-level hows and whys of Non-Uniform Memory Access, so that the solutions and recommendations presented later can be understood conceptually. Unfortunately, that sometimes means dwelling on technical details, because the concepts without them are confusing at best. Further entries build upon this post; we recommend reading it first and referring back as needed.

Configuring Linux Huge Pages for PostgreSQL

Huge pages are a Linux kernel feature that backs memory with larger pages, typically 2 MB or 1 GB instead of the standard 4 KB. PostgreSQL's shared buffer pool and dynamic shared memory segments are often tens of gigabytes, and mapping them with huge pages cuts the number of page-table entries the processor must manage. Fewer entries mean fewer translation-lookaside-buffer (TLB) misses and fewer page-table walks, which reduces CPU overhead and improves query throughput and parallel-query performance.
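As a rough sketch of the knobs involved (these are real PostgreSQL settings; huge_page_size needs PostgreSQL 14 or later, and shared_memory_size_in_huge_pages needs 15 or later):

```sql
-- Sketch: have PostgreSQL back its shared memory with huge pages.
-- Both settings are read only at server start, so a restart is required.
ALTER SYSTEM SET huge_pages = 'on';       -- 'try' (the default) falls back to 4 KB pages
ALTER SYSTEM SET huge_page_size = '2MB';  -- PostgreSQL 14+; 0 means the kernel default

-- After restarting, confirm the configuration and see how many huge pages
-- the kernel must have reserved (e.g. via the vm.nr_hugepages sysctl).
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('huge_pages',
               'huge_page_size',
               'shared_memory_size_in_huge_pages');  -- PostgreSQL 15+
```

Note that with huge_pages = 'on' the server refuses to start if the kernel cannot supply enough huge pages, so the operating system must reserve them first.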

Understanding Disaster Recovery in PostgreSQL

System outages, hardware failures, and accidental data loss can strike without warning. What determines whether operations resume smoothly or grind to a halt is the strength of the disaster recovery setup. PostgreSQL ships with the building blocks of reliable recovery, including write-ahead logging, base backups, point-in-time recovery, and streaming replication. This post takes a closer look at how these components work together behind the scenes to protect data integrity, enable consistent restores, and let your database recover from a wide range of failure scenarios.

Don’t Skip ANALYZE: A Real-World PostgreSQL Story

Recently, we worked on a production PostgreSQL database where a customer reported that a specific SELECT query was performing extremely slowly. The issue was critical since this query was part of a daily business process that directly impacted their operations.