Replication Types and Modes in PostgreSQL

Data is a key part of any mission-critical application. Losing it can lead to serious issues, such as financial loss or harm to a business’s reputation. A common way to protect against data loss is by taking regular backups, either manually or automatically. However, as data grows, backups can become large and take longer to complete.

Disaster Recovery Guide with pgBackRest

Recently, we worked with a client who was manually backing up their 800GB PostgreSQL database using pg_dump. The database was growing rapidly, and the backups were stored on the same server as the database itself. This setup had several critical issues:

- Single point of failure: If the server failed, both the database and its backups would be lost.
- No point-in-time recovery: Accidental data deletion couldn’t be undone.
- Performance bottlenecks: Backups consumed local storage, impacting database performance.

To address these risks, we replaced their setup with pgBackRest, shifting backups to a dedicated backup server with automated retention policies and support for point-in-time recovery (PITR). This guide will walk you through installing, configuring, and testing pgBackRest in a real-world scenario where backups are configured on a dedicated backup server, separate from the data node itself.
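To give a rough feel for the automated side of such a setup, here is a minimal Python sketch that triggers a pgBackRest backup from the backup server and reports failures. The stanza name "main" and the wrapper itself are assumptions for illustration, not part of the client's actual configuration.

```python
# Minimal sketch: run a pgBackRest differential backup from the backup server
# and report failures. The stanza name "main" is a placeholder for illustration.
import subprocess
import sys

STANZA = "main"  # hypothetical stanza name

def run(cmd):
    """Run a command, echo its output for logging, and return the exit code."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode

# Differential backup; pgBackRest runs a full backup instead if none exists yet.
if run(["pgbackrest", f"--stanza={STANZA}", "backup", "--type=diff"]) != 0:
    sys.exit("pgBackRest backup failed -- check the repository and WAL archiving setup")

# Show what is currently in the repository (backups and archived WAL).
run(["pgbackrest", f"--stanza={STANZA}", "info"])
```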

From 99.9% to 99.99%: Building PostgreSQL Resilience into Your Product Architecture

Most teams building production applications understand that “uptime” matters. I am writing this blog to demonstrate how much difference an extra 0.09% makes. At 99.9% availability, your system can be down for over 43 minutes every month. At 99.99%, that window drops to just over 4 minutes. If your product is critical to business operations, customer workflows, or revenue generation, those 39 extra minutes of downtime each month can be the difference between trust and churn.
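To make the arithmetic concrete, here is a quick downtime-budget calculation (a 30-day month is assumed, so the exact figures shift slightly with month length):

```python
# Monthly downtime budget for different availability targets (30-day month assumed).
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

for availability in (0.999, 0.9999):
    downtime = (1 - availability) * MINUTES_PER_MONTH
    print(f"{availability:.2%} availability -> {downtime:.1f} minutes of downtime per month")

# 99.90% availability -> 43.2 minutes of downtime per month
# 99.99% availability -> 4.3 minutes of downtime per month
```

The gap between the two budgets is roughly 39 minutes per month, which is the figure quoted above.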

Choosing the Right Barman Backup Type and Mode for Your PostgreSQL Highly Available Cluster

When running a PostgreSQL database in a High Availability (HA) cluster, it’s easy to assume that having multiple nodes means your data is safe. But HA is not a replacement for backups. If someone accidentally deletes important data or runs an incorrect UPDATE query, that change will quickly spread to all nodes in the cluster. Without proper safeguards, that data is gone everywhere. In these cases, only a backup can help you restore what was lost. Accidental data loss isn’t the only reason backups matter: many industries have strict compliance requirements that make regular backups mandatory. This makes backups essential not just for recovering lost data, but also for meeting regulatory standards. Barman is a popular tool in the PostgreSQL ecosystem for managing backups, especially in HA environments. It’s known for being easy to set up and for offering multiple backup types and modes. However, this flexibility can also be a bit overwhelming at first. That’s why I’m writing this blog to break down each backup option in a simple and clear way, so you can choose the one that best fits your business needs.

The Odoo Performance Fix You’ve Been Looking For

When your business depends on Odoo CRM and it starts to slow down, operations suffer. That is precisely what happened to a fast-growing delivery company that relied on speed, both on the streets and in the back office. As their customer base grew, load times for key CRM screens ballooned to five minutes, reports stalled, and internal workflows were disrupted. Their application looked fine. But the problem, as in many scaling Odoo deployments, ran deeper: unoptimized PostgreSQL.
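For problems like this, a useful first diagnostic is to ask PostgreSQL itself where the time is going. The sketch below is one hedged example of doing so with the pg_stat_statements extension via psycopg2; the connection string is a placeholder, the extension must already be installed, and the column names shown apply to PostgreSQL 13+ (earlier versions use total_time/mean_time instead).

```python
# Minimal sketch: list the top time-consuming queries via pg_stat_statements.
# Assumes the extension is installed; adjust the DSN to your environment.
import psycopg2

DSN = "host=localhost dbname=odoo user=postgres"  # hypothetical connection string

QUERY = """
SELECT query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 1)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for query, calls, total_ms, mean_ms in cur.fetchall():
            print(f"{total_ms} ms total | {calls} calls | {mean_ms} ms avg | {query[:60]}")
```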

Which PostgreSQL HA Solution Fits Your Needs: Pgpool or Patroni?

When designing a highly available PostgreSQL cluster, two popular tools often come into the conversation: Pgpool-II and Patroni. Both are widely used in production environments, offer solid performance, and aim to improve resilience and reduce downtime; however, they take different approaches to achieving this goal. We often get questions during webinars/talks and customer calls about which tool is better suited for production deployments. So, we decided to put together this blog to help you understand the differences and guide you in choosing the right solution based on your specific use case. Before we dive into comparing these two great tools for achieving high availability, let's first take a quick look at some of the key components involved in building a highly available and resilient setup.

Checklist: Is Your PostgreSQL Deployment Production-Grade?

One of the things I admire most about PostgreSQL is its ease of getting started. I have seen many developers and teams pick it up, launch something quickly, and build real value without needing a DBA or complex tooling. That simplicity is part of what makes PostgreSQL so widely adopted. However, over time, as the application grows and traffic increases, new challenges emerge. Queries slow down, disk usage balloons, or a minor issue leads to unexpected downtime. This is a journey I have witnessed unfold across many teams. I don’t think of it as a mistake or an oversight; it is simply the natural progression of a system evolving from development to production scale. The idea behind this blog is to help you assess your current situation and identify steps that can enhance the robustness, security, and scalability of your PostgreSQL deployment.

Understanding Split-Brain Scenarios in Highly Available PostgreSQL Clusters

High Availability (HA) refers to a system design approach that ensures a service remains accessible even in the event of hardware or software failures. In PostgreSQL, HA is typically implemented through replication, failover mechanisms, and clustering solutions to minimize downtime and ensure data consistency, which makes it essential for mission-critical applications. In this blog post, we explore a critical failure condition known as a split-brain scenario that can occur in PostgreSQL HA clusters. We will first define what split-brain means, then look at how it can impact PostgreSQL clusters, and finally discuss how to prevent it through architectural choices and tools available in the PostgreSQL ecosystem.
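As a small taste of what detecting this condition can look like, the sketch below asks each node whether it currently considers itself a primary using pg_is_in_recovery(); more than one self-declared primary is the classic split-brain symptom. The host list and credentials are placeholders, and a production cluster would rely on its cluster manager and fencing rather than an ad-hoc script like this.

```python
# Minimal split-brain smoke test: count how many nodes report themselves as primary.
# The node list and credentials below are placeholders for illustration only.
import psycopg2

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical cluster members

primaries = []
for host in NODES:
    try:
        with psycopg2.connect(host=host, dbname="postgres", user="postgres",
                              connect_timeout=3) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT pg_is_in_recovery();")
                in_recovery = cur.fetchone()[0]
                if not in_recovery:
                    primaries.append(host)
    except psycopg2.OperationalError as exc:
        print(f"{host}: unreachable ({exc})")

if len(primaries) > 1:
    print(f"WARNING: possible split-brain, multiple primaries: {primaries}")
else:
    print(f"OK: primaries found: {primaries}")
```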

How to Upgrade Major PostgreSQL Versions: A Practical Production Guide

PostgreSQL versions follow a well-defined five-year support lifecycle. Each major release receives bug fixes, security patches, and minor updates for five years from its initial release date. After that point, the version reaches end-of-life (EOL) and no longer receives official updates. Staying on an EOL version exposes your systems to security risks, potential compatibility issues, and missing performance improvements introduced in later releases. You can always check the current support status of PostgreSQL versions on the official PostgreSQL Versioning Policy page. Upgrading to the latest version ensures long-term stability, access to new features, and better support. Recently, I worked on upgrading a critical production PostgreSQL environment from version 11 to 15. Version 15 was chosen because the client’s application had only been tested up to that release. The system supported large batch workloads and live applications, so we had to be meticulous. While this article draws from that specific project, the steps are broadly applicable to anyone planning a major PostgreSQL upgrade, especially when crossing several versions. This guide outlines a generalized, production-ready approach for performing major version upgrades using the pg_dump/pg_restore method.
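As a rough sketch of the dump-and-restore mechanics (not the full production procedure the guide walks through), the Python wrapper below drives pg_dump on the old cluster and a parallel pg_restore into the new one using the custom format. Host names, ports, paths, and the database name are placeholders, and it assumes the target database already exists on the new cluster.

```python
# Minimal sketch of the pg_dump/pg_restore upgrade path.
# Hosts, ports, paths, and database name below are placeholders for illustration.
import subprocess

OLD = {"host": "old-db", "port": "5432"}   # source cluster (e.g., PostgreSQL 11)
NEW = {"host": "new-db", "port": "5432"}   # target cluster (e.g., PostgreSQL 15)
DBNAME = "appdb"
DUMP_FILE = f"/backups/{DBNAME}.dump"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Dump roles and other globals from the old cluster.
run(["pg_dumpall", "-h", OLD["host"], "-p", OLD["port"],
     "--globals-only", "-f", "/backups/globals.sql"])

# 2. Dump the database in custom format so it can be restored in parallel.
run(["pg_dump", "-h", OLD["host"], "-p", OLD["port"],
     "-Fc", "-f", DUMP_FILE, DBNAME])

# 3. Restore globals, then the database, into the new cluster (4 parallel jobs).
#    The target database must already have been created on the new cluster.
run(["psql", "-h", NEW["host"], "-p", NEW["port"], "-d", "postgres",
     "-f", "/backups/globals.sql"])
run(["pg_restore", "-h", NEW["host"], "-p", NEW["port"], "-d", DBNAME,
     "-j", "4", DUMP_FILE])
```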

Best Practices for TimescaleDB Massive Delete Operations

Welcome to the second part of our TimescaleDB best practices series! In the first part, we explored how to perform massive backfill operations efficiently, sharing techniques to optimize performance and avoid common pitfalls. If you haven’t had a chance to read the first part yet, you can check it out using this link. In today's blog, we will discuss another crucial aspect of time-series data management: massive delete operations. As your data grows over time, older records often lose their relevance but continue to occupy valuable disk space, increasing storage costs and potentially degrading performance if not managed well. Let's walk through some strategies to clean up or downsample aged data in TimescaleDB, helping you maintain a lean, efficient, and cost-effective database.
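As a preview of the kind of approach covered there, TimescaleDB's chunk-based layout lets you drop old data a chunk at a time instead of running massive DELETEs. The sketch below uses a hypothetical hypertable named "metrics", a placeholder connection string, and a 90-day window; it assumes TimescaleDB 2.x, where drop_chunks() and add_retention_policy() have the signatures shown.

```python
# Minimal sketch: drop chunks older than 90 days and add a retention policy.
# The hypertable name "metrics" and the DSN are placeholders; requires TimescaleDB 2.x.
import psycopg2

DSN = "host=localhost dbname=tsdb user=postgres"  # hypothetical connection string

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        # One-off cleanup: drop whole chunks instead of row-by-row DELETEs.
        cur.execute("SELECT drop_chunks('metrics', INTERVAL '90 days');")
        print("Dropped chunks:", [row[0] for row in cur.fetchall()])

        # Ongoing cleanup: let the background job enforce the same window automatically.
        cur.execute(
            "SELECT add_retention_policy('metrics', INTERVAL '90 days', if_not_exists => TRUE);"
        )
        print("Retention policy job id:", cur.fetchone()[0])
```

The retention policy keeps enforcing the same window on a schedule, so the one-off drop_chunks() call is only needed to clear the initial backlog.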