Scaling Up Wasn’t the Plan — Until It Was the Only Plan

If you have ever generated a complex report in Odoo only to watch the loading spinner for minutes, you are not alone. One of our customers ran into exactly this scenario: their system nearly ground to a halt whenever they tried to compile business reports. After a systematic investigation, we achieved a 93 % performance improvement, but only by turning to the last resort: upgrading the instance’s resources.

This blog walks through the diagnosis, the dead ends we hit, and why scaling up was ultimately the right solution.

1. The Architecture

The customer’s setup was straightforward: a single Odoo application server talking to a single PostgreSQL database. Yet the reporting workload felt painfully slow.

2. PostgreSQL Parameter Tuning

The first thing we examined was PostgreSQL’s configuration parameters, particularly those related to memory, to ensure they were tuned according to community best practices. In our case, the settings were already aligned with the recommendations, and increasing them further could have led to memory exhaustion.
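For readers who want to reproduce this step, a query along the lines of the one below (run in psql) lists the memory-related settings we reviewed. The parameter names are standard PostgreSQL settings; the values on your own instance will of course differ.

    -- Inspect the memory-related settings that were reviewed against
    -- community recommendations (values shown will vary per instance):
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem',
                   'maintenance_work_mem', 'effective_cache_size');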

3. Autovacuum Status

Our next step was to look into autovacuum and autoanalyze, especially for the tables involved in the customer’s report generation process. We expected to find dead tuples piling up or outdated stats, but to our surprise, the dead-to-live ratio was only about 1–2%, and the statistics were already fresh. That ruled out autovacuum and autoanalyze as the cause of the slowdown.
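A check against pg_stat_user_tables, roughly like the one below, is how we confirmed this; the ordering and limit are only there to surface the worst offenders first.

    -- Dead-tuple ratio and vacuum/analyze recency for the busiest tables:
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup
                 / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
           last_autovacuum,
           last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;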

4. Capturing and Analysing Queries

Odoo’s ORM generates SQL queries dynamically, so we captured the statements executed during report generation from the PostgreSQL log files. These queries involved enormous multi‑table joins with nested subqueries. One thing was clear: they were not written cleanly and could potentially be optimised.
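If you want to capture the generated SQL yourself, one straightforward approach, assuming superuser access, is to log every statement slower than a chosen threshold and then re-run the report:

    -- Log every statement that runs longer than one second, so the
    -- report-generation queries appear in the PostgreSQL log:
    ALTER SYSTEM SET log_min_duration_statement = '1000ms';
    SELECT pg_reload_conf();

The pg_stat_statements extension is another option if you prefer aggregated statistics over raw log entries.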

We ran EXPLAIN ANALYZE and confirmed that the queries were using the correct indexes, with few or no sequential scans. However, since these queries were auto-generated by Odoo, neither we nor the customer could rewrite them.
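As a rough illustration of the check, rather than one of Odoo’s actual generated statements, the plans we inspected were requested like this; account_move and account_move_line are standard Odoo accounting tables, but the query itself is heavily simplified:

    -- Stand-in for a generated reporting query; the plan output shows
    -- which indexes are used and how many buffers are read from disk:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT m.name, SUM(l.debit), SUM(l.credit)
    FROM account_move m
    JOIN account_move_line l ON l.move_id = m.id
    WHERE m.date >= '2024-01-01'
    GROUP BY m.name;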

5. Evaluating the Partitioning Approach

Since the queries were already correctly indexed, we explored partitioning as a potential optimization. However, due to business constraints and the complexity of retrofitting partitions into an existing schema, this approach was not feasible.
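For completeness, this is roughly what a retrofit would have involved: recreating the large reporting tables as range-partitioned tables and migrating the data into them. The table and date ranges below are hypothetical, purely to show the shape of the change:

    -- Hypothetical sketch of range-partitioning a reporting table by date;
    -- retrofitting this into the live Odoo schema was not feasible here:
    CREATE TABLE report_lines (
        id        bigint        NOT NULL,
        line_date date          NOT NULL,
        amount    numeric(16,2)
    ) PARTITION BY RANGE (line_date);

    CREATE TABLE report_lines_2024 PARTITION OF report_lines
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');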

6. The Twist: Resource Constraints Revealed

After exhausting these options, along with a few smaller checks along the way, we concluded that the database was performing as expected; PostgreSQL itself was not the problem.

But then came a twist: monitoring graphs revealed that during report runs, disk reads spiked while shared buffers were fully utilized. There simply wasn’t enough memory to cache the datasets being scanned.
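A quick way to see the same pressure from inside PostgreSQL is the cache hit ratio in pg_stat_database; during the report runs it told the same story as the monitoring graphs:

    -- Share of block requests served from cache; a low percentage during
    -- report runs matches the disk-read spikes seen in the monitoring graphs:
    SELECT datname,
           blks_hit,
           blks_read,
           round(100.0 * blks_hit
                 / NULLIF(blks_hit + blks_read, 0), 2) AS hit_pct
    FROM pg_stat_database
    WHERE datname = current_database();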

This made it clear that memory was the bottleneck. That is when scaling up, usually my last-resort option, came into play: increase the instance’s resources, particularly memory, so the workload could be handled efficiently.

With no further tuning options left, we tested a memory upgrade in a staging environment. Increasing the instance’s RAM allowed us to allocate larger shared buffers and gave the OS page cache more room. When we re‑ran the same reports, execution time dropped dramatically, coming in 93 % faster than the baseline.

Armed with this data, we performed the same upgrade in production and adjusted PostgreSQL’s shared_buffers accordingly. The result was immediate: reports that had previously taken hours now completed in seconds, and users could continue their business workflows without frustration.
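For anyone repeating this, the adjustment itself is small; the value below is illustrative (a common starting point is around 25 % of system RAM), and shared_buffers only takes effect after a server restart:

    -- Raise shared_buffers after the RAM upgrade (value is illustrative):
    ALTER SYSTEM SET shared_buffers = '8GB';
    -- shared_buffers requires a server restart to take effect, e.g.:
    --   sudo systemctl restart postgresql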

7. Conclusion

Sometimes the workload simply outgrows the available resources. When every software‑level optimisation has been explored, scaling up the instance can be the most cost‑effective solution.

[Image: PostgreSQL elephant racing past a struggling Odoo-labeled car, illustrating the database performance fix]
