
PostgreSQL Memory Management

In this post, we’ll discuss the main parameters that govern how PostgreSQL allocates and uses memory.

shared_buffers

shared_buffers is the block of shared memory PostgreSQL uses to cache frequently accessed data. When a query needs to read data, PostgreSQL first checks whether the data is already in shared_buffers. If it is, the data can be read from memory instead of from disk, resulting in faster query execution times.
A good rule of thumb is to start by allocating 25% of the system’s total memory to the shared_buffers parameter (Requires Restart) and then increase it as needed (40% is widely used in production environments), though the right value varies with your specific workload and system configuration.
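As a minimal sketch, assuming a dedicated server with 16 GB of RAM, the 25% rule of thumb could be applied like this:

    -- Check the current value
    SHOW shared_buffers;

    -- 25% of a hypothetical 16 GB server; ALTER SYSTEM writes to postgresql.auto.conf
    ALTER SYSTEM SET shared_buffers = '4GB';

    -- shared_buffers is allocated at server startup, so restart PostgreSQL
    -- afterwards, e.g.: pg_ctl restart -D /path/to/data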

wal_buffers

wal_buffers controls the amount of shared memory used for WAL data that has not yet been written to disk. The contents of the WAL buffers are flushed to disk at every transaction commit, or whenever the buffers fill up, so extremely large values are unlikely to provide a significant benefit. However, setting this value to at least a few megabytes (16MB is a common choice) can improve write performance on a busy server where many clients are committing at once. The default setting of -1 automatically selects a size equal to 1/32nd (about 3%) of shared_buffers, capped at the size of one WAL segment (16MB), which is also suitable.
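For illustration, pinning wal_buffers to the commonly used 16MB looks like this; as with shared_buffers, the change only takes effect after a restart:

    SHOW wal_buffers;

    ALTER SYSTEM SET wal_buffers = '16MB';
    -- wal_buffers is fixed at server start, so restart PostgreSQL afterwards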

work_mem

The work_mem parameter is another important PostgreSQL configuration parameter that affects memory usage. work_mem determines the amount of memory available to each individual sorting and hashing operation within a query.

For example, if work_mem is set to 32MB and a query includes around 10 sorting/hashing operations (ORDER BY, GROUP BY, DISTINCT, hash joins, and so on), total memory usage could reach around 32 * 10 = 320MB for that one query. If a given operation needs more than the available work_mem (more than 32MB in this example), it will start creating temp files on disk.
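The spill-to-disk behavior is easy to observe with EXPLAIN ANALYZE. In this sketch the table t and its payload column are hypothetical; the Sort Method line in the output shows whether the sort fit in memory:

    -- Deliberately small work_mem (session level) to force a spill
    SET work_mem = '64kB';
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t ORDER BY payload;
    -- Expect something like: Sort Method: external merge  Disk: ...kB

    -- Large enough for the sort to stay in memory
    SET work_mem = '32MB';
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t ORDER BY payload;
    -- Expect something like: Sort Method: quicksort  Memory: ...kB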

A good rule of thumb is to start with the default work_mem value of 4MB (Requires Reload). If a query’s temp file usage is high (this can be identified using pg_stat_statements or pgBadger), if you see an external merge sort in EXPLAIN ANALYZE output, or if you see high I/O activity for a specific query that involves sorting/hashing operations, you may need to increase work_mem. The work_mem value can be set at the system level as well as the session level.
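Assuming the pg_stat_statements extension is installed and loaded, a query along these lines surfaces the statements writing the most temporary data:

    SELECT query, calls, temp_blks_written
    FROM pg_stat_statements
    WHERE temp_blks_written > 0
    ORDER BY temp_blks_written DESC
    LIMIT 10;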

The parameter log_temp_files in postgresql.conf can be used to log sorts, hashes, and temp files, which is useful for figuring out whether sorts are spilling to disk instead of fitting in memory.
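As an illustrative configuration, a threshold of 0 logs every temporary file together with its path and size, and only needs a reload:

    -- Value is a size threshold in kB; 0 logs all temp files, -1 (default) disables logging
    ALTER SYSTEM SET log_temp_files = 0;
    SELECT pg_reload_conf();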

maintenance_work_mem

This parameter controls the amount of memory used by maintenance operations such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. These operations require temporary working space, and maintenance_work_mem determines the maximum amount of memory they can use.

The default value for maintenance_work_mem is 64MB (Requires Reload). Setting maintenance_work_mem too low can cause maintenance operations to take longer to complete, while setting it higher can improve performance for vacuuming, restoring database dumps, and CREATE INDEX, but can also contribute to the system being starved for memory. The maintenance_work_mem value can be set at the system level as well as the session level.
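For example, raising maintenance_work_mem only in the session that runs a large index build avoids reserving that memory system-wide (the orders table and index name here are hypothetical):

    SET maintenance_work_mem = '1GB';
    CREATE INDEX idx_orders_created_at ON orders (created_at);
    RESET maintenance_work_mem;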

effective_cache_size

effective_cache_size provides the query planner with an estimate of how much memory is available for disk caching, both in the operating system and within PostgreSQL itself. This is a guideline for how much memory you expect to be available in the OS page cache and the PostgreSQL buffer cache, not an allocation! The planner uses this information to choose the best query plan based on the expected cost of the alternatives. For example, it is more likely to prefer an index scan over a sequential scan when effective_cache_size is high.

Reasonable values lie around 60-75% of total OS memory (Requires Reload). If it is set too high, the planner may overestimate how much data is cached and choose plans, such as index scans, that end up doing real disk I/O, leading to slow query performance. On the other hand, if the parameter is set too low, the planner may underestimate the available memory and pass over plans that would actually be optimal.
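As a sketch, on a hypothetical 64 GB host where roughly 75% of memory is expected to be available for caching:

    ALTER SYSTEM SET effective_cache_size = '48GB';
    SELECT pg_reload_conf();  -- a reload is sufficient; no restart needed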

Conclusion

Memory management is a critical aspect of ensuring optimal performance for a PostgreSQL database. By properly configuring the memory allocation parameters, PostgreSQL can read and write data more efficiently resulting in faster query execution. It is important to carefully consider the workload and system configuration when adjusting these parameters. Generally, starting with default values and gradually increasing them based on performance testing is a good approach. Additionally, monitoring query performance and identifying any issues related to memory usage can help in making adjustments to these parameters.

General FAQs:

Q1: How does PostgreSQL use memory for read operations?

When a read request comes from a client, PostgreSQL first checks whether the requested data is already in the buffer cache (shared_buffers). If it is not, the request falls through to the operating system, which may serve it from the OS page cache; failing that, the data is read from disk and then stored in the buffer cache.

PostgreSQL uses a clock-sweep algorithm, an approximation of “least recently used” (LRU), to decide which buffers to evict from the cache when it needs room for new data. Roughly speaking, buffers that have not been used for the longest time are evicted first, while buffers that are likely to be used again soon are kept.
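One way to see what is currently occupying shared_buffers is the pg_buffercache extension; this sketch counts cached buffers per relation in the current database:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;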

Q2: How does PostgreSQL use memory for write operations?

When a write operation is executed, PostgreSQL first writes the change records to the WAL buffers to guarantee durability. When the transaction commits, or when the WAL buffers fill up, PostgreSQL flushes (fsyncs) them to the WAL segment files in the pg_wal directory (pg_xlog in versions before 10). The WAL is a sequential log of all changes made to the database, and it is used to ensure data consistency and durability.

After the changes are written to the WAL, PostgreSQL updates the corresponding data pages in memory (shared_buffers). The changes are not immediately written to disk but are instead kept in memory for a period of time. This is known as “write-behind caching” or “lazy write”.

PostgreSQL uses “Checkpoints” to periodically flush the dirty data (i.e., the changes that have not yet been written to disk) from memory to disk. During a checkpoint, PostgreSQL writes all dirty data to disk and updates the WAL to reflect the changes. This ensures that the data is durable even in the event of a system failure.
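The pg_stat_bgwriter view shows how dirty buffers are being flushed (column layout as of PostgreSQL 16 and earlier; in 17 the checkpoint counters moved to pg_stat_checkpointer):

    SELECT checkpoints_timed,    -- checkpoints triggered by checkpoint_timeout
           checkpoints_req,      -- checkpoints triggered by WAL volume or CHECKPOINT
           buffers_checkpoint,   -- buffers written during checkpoints
           buffers_clean,        -- buffers written by the background writer
           buffers_backend       -- buffers written directly by backends
    FROM pg_stat_bgwriter;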

Q3: What is the relation between work_mem, maintenance_work_mem, and temp_buffers?

work_mem controls the amount of memory available for sorting and hashing operations within a single query. PostgreSQL allocates up to work_mem for each such operation, and the limit applies per operation rather than per query. If an operation needs more memory than work_mem allows, PostgreSQL spills to disk.

temp_buffers controls the amount of memory allocated for temporary tables. This parameter sets the maximum number of temporary buffers used by each session; when the session ends, the server clears the buffers. The temp_buffers value (default 8MB) can be altered before the first use of temporary tables within a session, as in the sketch below. By increasing the memory available for temp_buffers, PostgreSQL can reduce the need to write temporary objects to disk, which improves query performance and reduces disk I/O.
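Since temp_buffers can only be raised before a session touches any temporary table, a typical session-level pattern looks like this (the staging table is hypothetical):

    -- Must run before the first use of a temporary table in this session
    SET temp_buffers = '256MB';
    CREATE TEMPORARY TABLE staging (id bigint, payload text);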

temp_buffers is not the only memory parameter that affects temporary table creation and usage. The work_mem parameter can also be involved, particularly for queries that sort or aggregate on the temporary table. The maintenance_work_mem parameter comes into play for commands such as CREATE INDEX, REINDEX, or VACUUM FULL, which involve sort and merge operations.

Note: There is no relation between temp_buffers and pgsql_tmp directory. pgsql_tmp directory is used to store data for large hash or sort operations that would not fit in work_mem.

Q4: Let’s say temp_buffers is set to 1024MB and I create a temporary table that will hold around 5GB of data. Will my table be created in memory or on disk? How will temp_buffers be used here?

When you create a temporary table, the data for that table is initially stored in memory which is governed by temp_buffers setting. If the amount of data in the temporary table exceeds the temp_buffers value, the excess data will be written to disk.

In this scenario, if you set temp_buffers to 1024MB and create a temporary table that holds around 5GB of data, the first 1024MB of data will be stored in memory, while the remaining data will be written to disk. When you query the temporary table, PostgreSQL will read the data from both memory and disk, as necessary.

Q5: What is the relation between index creation and maintenance_work_mem?

The maintenance_work_mem parameter controls the amount of memory used for the entire maintenance operation, including index creation, sorting, and other maintenance operations.

PostgreSQL reads the table data that is being indexed and sorts it in memory using maintenance_work_mem. If the amount of memory required for sorting exceeds the maintenance_work_mem value, PostgreSQL will use disk-based sorting and create temporary files on disk.

Q6: How does the VACUUM operation use memory in PostgreSQL?

The VACUUM operation in PostgreSQL is used to reclaim storage space and improve performance by removing dead rows and freeing up unused space in the database.

First, PostgreSQL scans a target table to build a list of dead tuples and freeze old tuples if possible. The list is stored in maintenance_work_mem. After scanning, PostgreSQL removes index tuples by referring to the dead tuple list.

In the second step, VACUUM removes the dead tuples, defragments the remaining tuples within each page, and updates both the FSM (free space map) and VM (visibility map) on a page-by-page basis. After this, PostgreSQL updates the statistics and system catalogs related to vacuum processing for each target table.

Note that maintenance_work_mem is similar to work_mem but for operations like CREATE INDEX and VACUUM.

Q7: Is there any relation between column alignment/padding and memory management in PostgreSQL?

Yes, there is a relationship between column padding and memory management in PostgreSQL. Column padding refers to the unused space added between columns so that each value is aligned correctly in memory. This padding can affect PostgreSQL in two ways:

Storage requirements: Padding increases the amount of storage space required to store each row in a table. This can increase the overall storage requirements for the table and for the database as a whole.

Memory usage: Padding also affects the amount of memory that PostgreSQL uses to store data. When PostgreSQL reads data from a table into memory, it reads entire blocks of data at a time. If there is significant padding in the table, this can result in reading more data into memory than is actually needed.

Choosing appropriate data types and minimizing padding can help reduce the overall storage requirements and memory usage of the database. However, it is also important to balance these considerations against other factors such as query performance and data integrity.
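The effect is easy to demonstrate with pg_column_size. Both rows below carry the same values, but on a typical 64-bit build each bigint must start on an 8-byte boundary, so the interleaved layout carries alignment padding:

    -- An int4 followed by an int8 forces 4 bytes of padding before the int8
    SELECT pg_column_size(ROW(1::int4, 1::int8, 1::int4, 1::int8)) AS padded,
           pg_column_size(ROW(1::int8, 1::int8, 1::int4, 1::int4)) AS compact;
    -- Typically returns padded = 56, compact = 48 (24-byte header plus data plus padding)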

Reference with examples:
https://www.2ndquadrant.com/en/blog/on-rocks-and-sand/

Q8: Does wal_buffers help in PostgreSQL replication?

Yes, the wal_buffers setting can help improve replication performance in PostgreSQL.

When PostgreSQL is configured for replication, it uses a streaming replication mechanism to replicate changes made to the primary database to one or more standby databases. The primary database writes changes to the WAL, and the standby databases read those changes from the WAL and apply them to their own copies of the database.

By allocating an appropriate amount of memory for wal_buffers, you can help ensure that the WAL data is written to memory and can be read by the standby databases as quickly as possible. This can help reduce replication lag and ensure that the standby databases are as up-to-date as possible.
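To check whether the standbys are keeping up, the primary can be queried as in this sketch (function and column names as of PostgreSQL 10 and later):

    SELECT application_name,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;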

Q9: What are the other guidelines for memory management in PostgreSQL?

  • Ensure that your queries are well-optimized and use appropriate indexes. Poorly optimized queries can load large amounts of data into memory unnecessarily, causing excessive memory usage.
  • Make sure tables are properly analyzed and vacuumed, so the planner can choose better execution plans.
  • If queries run on a large dataset with filters applied, consider partitioning the data, as this reduces memory usage.
  • Try a connection pooler such as PgBouncer, which reduces the memory footprint by reusing existing connections instead of creating new ones.
  • Using SSDs can also improve performance when the dataset cannot fit inside the available shared_buffers.
  • Use efficient data types whenever possible to reduce memory usage. Also, order columns to minimize alignment padding and reduce space overhead.
  • Try pointing the temp_tablespaces parameter at a faster disk so that intermediate operations speed up. Follow this link for more information: https://dev.to/bolajiwahab/postgresql-temp-files-usage-1gb9/
  • Use the following link for tuning PostgreSQL server parameters:
    https://pgtune.leopard.in.ua/
