In a previous blog post, we configured an EC2 instance and installed PostgreSQL on it. After the initial setup, we ran pgbench to measure the baseline TPS (transactions per second). Then we tuned PostgreSQL, adjusting various parameters based on the system's available resources, and reran pgbench to see how much we could improve the TPS. In this post, we will identify which of those parameters contributed the most to the performance improvements.
If you want to read the initial blog post, you can find it here: Key PostgreSQL Configuration Parameters for Enhanced Performance.
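As a refresher, the benchmark workflow looked roughly like the sketch below. The database name, scale factor, thread count, and run duration here are illustrative assumptions; only the 10-client count matches the test described later in this post.

```bash
# Create and populate the benchmark database (scale factor 50 is an
# illustrative assumption, not a value from the original setup).
createdb pgbench
pgbench -i -s 50 pgbench

# Run the benchmark: 10 client connections, 2 worker threads, 60 seconds,
# then report the resulting TPS.
pgbench -c 10 -j 2 -T 60 pgbench
```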
We will use the same AWS EC2 instance size as before, t2.large. Here is the list of parameters we initially tuned:
```
max_connections = 20
shared_buffers = 2GB
effective_cache_size = 6GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 26MB
huge_pages = off
min_wal_size = 4GB
max_wal_size = 16GB
```

First, we can exclude any parameters that are already set to their default values, since PostgreSQL will use those values automatically. This lets us remove checkpoint_completion_target, which already defaults to 0.9 in recent PostgreSQL versions (14 and later).

Next in our list, there are a few parameters that will have little to no impact on performance:

- huge_pages: This setting makes no difference unless huge pages are also configured at the OS level and PostgreSQL is restarted for it to take effect.
- default_statistics_target: This parameter sets the default statistics target the query planner uses when gathering statistics for each column. Higher values can lead to more accurate query plans, but they also increase the time and resources required to gather and maintain those statistics; lower values reduce that overhead but may result in less accurate plans.
- max_connections: Since we will only be running 10 client connections, the default value of 100 is sufficient for our needs.

We will remove these three parameters and reassess the performance. A quick way to confirm which parameters actually differ from the built-in defaults is to query pg_settings, as sketched below.
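This is a minimal sketch, assuming the same pgbench database as above; pg_settings reports each parameter's current value and where that value came from.

```bash
# List every parameter whose current value did not come from the built-in
# defaults, along with where it was set (configuration file, etc.).
psql -d pgbench -c "SELECT name, setting, unit, source
                    FROM pg_settings
                    WHERE source <> 'default'
                    ORDER BY name;"
```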
Additionally, we need to update the max_wal_size parameter: it was set to 16 GB, which is not recommended for a system with only 8 GB of RAM, so we reduced it to 2 GB. The new list of parameters is:

```
shared_buffers = 2GB
effective_cache_size = 6GB
maintenance_work_mem = 1GB
wal_buffers = 16MB
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 26214kB
min_wal_size = 4GB
max_wal_size = 2GB
```

After using this updated list of parameters, we were still able to get a 1.2x performance boost, similar to what we were getting earlier. These settings can be applied and the benchmark rerun as sketched below.
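One way to apply the trimmed set is via ALTER SYSTEM followed by a restart. This is a sketch, assuming PostgreSQL runs as a systemd service named postgresql (the service name varies by distribution) and the same pgbench database as above:

```bash
# Apply the trimmed parameter set; ALTER SYSTEM writes the values to
# postgresql.auto.conf, which overrides postgresql.conf.
psql -d pgbench <<'SQL'
ALTER SYSTEM SET shared_buffers = '2GB';
ALTER SYSTEM SET effective_cache_size = '6GB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM SET wal_buffers = '16MB';
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_io_concurrency = 200;
ALTER SYSTEM SET work_mem = '26214kB';
ALTER SYSTEM SET min_wal_size = '4GB';
ALTER SYSTEM SET max_wal_size = '2GB';
SQL

# shared_buffers (among others) only takes effect after a full restart,
# not just a configuration reload.
sudo systemctl restart postgresql

# Rerun the same benchmark to compare TPS against the baseline.
pgbench -c 10 -j 2 -T 60 pgbench
```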
The next question is whether there are any additional parameters we can reduce or omit while still achieving the same performance. Let's find out. This time, I have kept only the parameters related to memory settings:

```
shared_buffers = 2GB
effective_cache_size = 6GB
maintenance_work_mem = 1GB
work_mem = 26214kB
```

After running the test again, we achieved a similar performance boost of 1.2x with fewer parameters.

In conclusion, optimizing PostgreSQL performance doesn't always require a complex set of parameters. By focusing on key memory and WAL settings, we were able to achieve performance improvements with minimal adjustments. This demonstrates that even small, targeted changes can have a substantial impact.

I hope this blog post has provided valuable insights into PostgreSQL tuning. Stay tuned for more tips and tricks in future posts!