Post Webinar Blog: Boosting PostgreSQL Performance: Tools and Techniques for Optimization

On Wednesday, 21 August 2024, we hosted a comprehensive session led by our PostgreSQL Consultant, Semab Tariq, on “Boosting PostgreSQL Performance: Tools and Techniques for Optimization”, in which he covered the essential strategies and tools you need to optimize your PostgreSQL database for peak performance.

The Q&A session featured some intriguing questions from the audience, which are answered below:

Q1. I am seeing many deadlocks. What could be the problem, and how can I fix it?

Deadlocks in PostgreSQL occur when two or more transactions are waiting for each other to release locks, creating a cycle of dependencies that cannot be resolved. Here are a few things you can try:
 
  • Check the PostgreSQL logs for “ERROR: deadlock detected” messages to see where and why the deadlocks are occurring.
  • Examine the queries involved in the deadlock
  • Keep transactions as short as possible to minimize the time locks are held.
  • Acquire locks in a consistent order; ensuring that all transactions take locks in the same order prevents circular dependencies.
  • Implement retry logic in your application to handle deadlocks gracefully.
  • Tune the lock-related settings in the postgresql.conf file (see the example after this list), for example:
  • lock_timeout = '5s': Sets the maximum time a statement will wait to acquire a lock before it is aborted.
  • log_lock_waits = on: Enables logging of lock waits that exceed deadlock_timeout, which can help you identify and analyze potential deadlock situations.
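
Here is a minimal sketch of how the two settings above can be applied without editing postgresql.conf by hand, along with a query to see which sessions are currently blocked. The 5-second value is only an illustration, not a recommendation for every workload:

  -- Find sessions that are currently waiting on locks and who is blocking them
  SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
  FROM pg_stat_activity
  WHERE cardinality(pg_blocking_pids(pid)) > 0;

  -- Abort any statement that waits longer than 5 seconds for a lock
  ALTER SYSTEM SET lock_timeout = '5s';
  -- Log lock waits that exceed deadlock_timeout
  ALTER SYSTEM SET log_lock_waits = on;
  -- Reload the configuration so the new settings take effect
  SELECT pg_reload_conf();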

Q2. How do I optimize a high number of idle sessions?

You can take the following three steps to handle a high number of idle sessions:

  • Make sure your application is closing its sessions properly.
  • Set the idle_in_transaction_session_timeout = '5min' property in the postgresql.conf file. It automatically terminates any session that has been sitting idle inside an open transaction for longer than the specified period (5 minutes in this example); see the sketch after this list.
  • Make sure there are no misconfigurations if you are using a connection pooler such as Pgpool-II or PgBouncer.
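
As a rough sketch, the following SQL shows how to check how many sessions are idle and how to apply the timeout mentioned above; the 5-minute value is simply the example from the answer:

  -- Count sessions by state to see how many are idle or idle in transaction
  SELECT state, count(*) FROM pg_stat_activity GROUP BY state;

  -- Terminate sessions that stay idle inside an open transaction for more than 5 minutes
  ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
  SELECT pg_reload_conf();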

Q3. Is it important to avoid large amounts of “rows removed by filter” rows in query plans, and can composite indexes help?

Yes, it is important to avoid a large number of “rows removed by filter” entries in query plans, and using composite indexes can be an effective strategy to achieve this.

When a query plan shows a high “Rows Removed by Filter” count, it indicates that the database engine is scanning many rows and then discarding a large portion of them, which is inefficient. A composite index that covers the filtered columns lets the planner fetch only the matching rows instead, as the sketch below shows.
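
As a minimal sketch using a hypothetical orders table, EXPLAIN ANALYZE exposes the wasted work, and a composite index on the two filter columns removes it:

  -- The plan reports "Rows Removed by Filter" when many scanned rows are discarded
  EXPLAIN ANALYZE
  SELECT * FROM orders WHERE customer_id = 42 AND status = 'shipped';

  -- A composite index covering both filter columns lets the planner fetch only matching rows
  CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);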

Q4. Do you have any recommendations for filesystem type configuration for the data directory?

For most PostgreSQL deployments, I recommend using the ext4 filesystem. It is a widely used, stable, and well-supported filesystem that offers a good balance of performance and reliability. Other options that can be considered are:

  • XFS
  • ZFS

Q5. BgWriter doesn’t clean dirty buffers. How can I fix this problem?

In this particular scenario, we can try adjusting the BgWriter-related properties in postgresql.conf (a sketch follows this list):
  • bgwriter_delay = '200ms': It controls the delay between BgWriter rounds. Reducing this value makes the BgWriter run more frequently.
  • bgwriter_lru_maxpages = 100: It sets the maximum number of buffers the BgWriter will write in one round. Increasing this value allows the BgWriter to write more buffers per round.
  • bgwriter_lru_multiplier = 2.0: It controls how aggressively the BgWriter tries to keep the number of clean buffers high. Increasing this value makes the BgWriter more aggressive.
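
As a hedged sketch, the pg_stat_bgwriter view can confirm whether the background writer is falling behind, and the settings above can be changed with ALTER SYSTEM; the values below are purely illustrative:

  -- A high maxwritten_clean means the BgWriter keeps hitting its per-round write limit
  SELECT buffers_clean, maxwritten_clean FROM pg_stat_bgwriter;

  -- Run the BgWriter more often and let it write more buffers per round (illustrative values)
  ALTER SYSTEM SET bgwriter_delay = '100ms';
  ALTER SYSTEM SET bgwriter_lru_maxpages = 200;
  ALTER SYSTEM SET bgwriter_lru_multiplier = 3.0;
  SELECT pg_reload_conf();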

Q6. Can we add multiple types of indexes to multiple columns in a single table to improve performance?

Yes, you can create multiple indexes of different types on a single table (see the example after this list), but keep in mind that doing so has downsides:
  • Increased disk space usage
  • Slower write operations
  • Increased maintenance overhead
  • Complexity in query planning
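
As an illustrative sketch on a hypothetical articles table, you could combine a B-tree index for ordinary filters with a GIN index for full-text search on the same table:

  -- B-tree index for equality and range filters on a timestamp column
  CREATE INDEX idx_articles_published_at ON articles (published_at);

  -- GIN index to speed up full-text search on the body column
  CREATE INDEX idx_articles_body_fts ON articles USING gin (to_tsvector('english', body));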


Watch the whole webinar here.
