PostgreSQL Backup Best Practices

There are several backup strategies available for PostgreSQL, and the right one for your environment will depend on your backup and recovery requirements. Some popular strategies include:

1: Logical backup

This method involves creating a backup of the data as SQL statements that can be used to recreate the database objects and data. The most common tools for creating logical backups in PostgreSQL are the pg_dump and pg_dumpall utilities, which are included with the PostgreSQL server. The pg_dump utility can be run either from the command line or through a graphical user interface such as pgAdmin.

Benefits:

  1. Backups generated by pg_dump are compatible across PostgreSQL versions: you can restore a backup taken from an older version into a newer version.
  2. It allows for selective backup and restore of database objects (selective restore is not available with the plain format). This means that you can back up and restore specific tables or schemas within a database, rather than having to restore the entire database (see the example after this list).
  3. pg_dump produces SQL scripts that can be easily manipulated or automated using scripts or other tools. This can be useful for creating custom backup workflows or integrating backups with other systems or processes.
  4. pg_dump doesn’t include obsolete/bloated (dead) rows in the backup, which means the backup size can be smaller than the actual database size.
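For instance, a minimal sketch of a selective dump and restore (the database, schema, and table names here are placeholders):

# Dump only the "sales" schema of database "appdb" in custom format
pg_dump -U postgres -d appdb -n sales -Fc -f appdb_sales.dump
# Restore just one table from that dump into another database
pg_restore -U postgres -d appdb_copy -t orders appdb_sales.dump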

Limitations:

  1. It can be slower than physical backups for larger databases.
  2. pg_dump is single-threaded for the tar, custom, and plain SQL formats. Parallel mode is supported only for the directory format.
  3. The entire SQL script has to be executed to recreate the database objects and data. Depending on the size of the database and the complexity of the schema, the restoration process can take a long time and may require downtime for the database.
  4. No support for incremental or differential level backups.

2: Physical backup

This method involves copying the entire database cluster to a backup location, and is the most complete backup method. It’s ideal for disaster recovery scenarios where you need to quickly restore your entire database cluster.
Note: Physical backups are only taken at the cluster/file-system level; we cannot take single database or schema level backups.
There are multiple tools available for PostgreSQL physical backups:

pg_basebackup

pg_basebackup is a utility included with PostgreSQL that creates a copy of the entire PostgreSQL data directory, including all data files, configuration files, and transaction logs. The advantages of pg_basebackup are that it takes a consistent backup of the cluster at a point in time, supports PITR (point-in-time recovery), and can be used as the starting point for a streaming-replication standby server. To support PITR and standby server creation, make sure that archive mode is on and wal_level is set to replica.
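As a minimal illustration of those prerequisites (the archive destination path below is an assumption, and archive_command should be a command you have tested for your environment):

# postgresql.conf - illustrative settings to support PITR and standby creation
wal_level = replica
archive_mode = on                                     # requires a server restart
archive_command = 'cp %p /mnt/backup/wal_archive/%f'  # example only; use a reliable, tested command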
Limitations:
1: It doesn’t support parallelism, which can lead to slow backups for large clusters.
2: Compression is only available when using the tar format.
3: No support for incremental or differential level backups.
pgBackRest
pgBackRest is an open-source, widely used, and reliable backup and restore solution for PostgreSQL. It is designed to be efficient, flexible, and easy to use, with features such as parallel backup and restore, full, differential, and incremental backups, compression, encryption, and backup/archive rotation. It also provides options for backup verification using checksums, delta restore, and much more.
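As a rough sketch of how a pgBackRest setup might look (the stanza name "main", the repository path, and the data directory below are assumptions; adjust them for your environment):

# /etc/pgbackrest/pgbackrest.conf - illustrative configuration
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
process-max=4

[main]
pg1-path=/var/lib/postgresql/15/main

# create the stanza, then take full and incremental backups
pgbackrest --stanza=main stanza-create
pgbackrest --stanza=main --type=full backup
pgbackrest --stanza=main --type=incr backup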
Other physical backup tools include Barman, pg_probackup, and WAL-G. Their usage and differences can be found at the following link:
https://www.postgresql.eu/events/pgconfeu2018/sessions/session/2098/slides/123/

Create a backup schedule

To ensure that you always have a recent backup of your database, it’s important to create a backup schedule. The frequency of your backups will depend on your requirements for RPO (recovery point objective) and RTO (recovery time objective).
RTO: This is the maximum amount of time it should take to recover from a disaster. It is the maximum database downtime an organization can handle, or how quickly the organization must regain access to the database after an outage.
RPO: This is how much data you can afford to lose after recovering from a disruptive event. RPO helps determine how much data an organization can tolerate losing during an unforeseen event; it depends on how frequently backups are taken and how valid those backups are.
For example, if you can afford to lose at most one hour of data, you may need to perform hourly backups. Some common backup schedules include:
Daily backups: This is the most common backup schedule, and is suitable for many small to medium-sized databases.
Hourly backups: This schedule is useful for larger databases or for databases with more stringent RPO requirements.
Continuous backups: This schedule uses continuous archiving to create a backup of the transaction log, which can be used to recover to any point in time. This is the most complete backup strategy, but requires more resources.
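As an illustrative sketch of how such a schedule might be automated (the database name, backup path, and pgBackRest stanza are assumptions, and the exact timings should match your RPO), a /etc/cron.d entry could look like this:

# /etc/cron.d/postgres-backup - illustrative schedule, run as the postgres user
# Daily logical backup at 01:00
0 1 * * * postgres pg_dump -Fc -d appdb -f /backups/appdb_$(date +\%F).dump
# Hourly incremental physical backup via pgBackRest
30 * * * * postgres pgbackrest --stanza=main --type=incr backup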

Test your backups

One of the most important steps in backup and recovery is testing your backups to ensure that they can be restored successfully. Testing your backups should be a regular part of your backup and recovery process. You should test your backups in a separate environment to ensure that the recovery process works as expected, and that the backup is complete and accurate.
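A minimal restore-test sketch, assuming a custom-format dump file appdb.dump and a scratch database (all names here are placeholders):

# Restore the dump into a throwaway database and run a basic sanity check
createdb -U postgres appdb_restore_test
pg_restore -U postgres -d appdb_restore_test -j 4 appdb.dump
psql -U postgres -d appdb_restore_test -c "SELECT count(*) FROM sales.orders;"
dropdb -U postgres appdb_restore_test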

Store your backups securely

Storing your backups securely is critical to ensuring the availability of your data. You should store your backups in a separate location from your database server, and ensure that the location is secure and accessible only to authorized personnel. You should also encrypt your backups to prevent unauthorized access.
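One possible approach, shown only as a sketch (the recipient key and file names are assumptions), is to pipe the dump through GnuPG before shipping it off-site; pgBackRest also offers built-in repository encryption if you use it for physical backups.

# Encrypt a custom-format dump with GnuPG on the way out
pg_dump -Fc -d appdb | gpg --encrypt --recipient backups@example.com -o appdb.dump.gpg
# Decrypt before restoring
gpg --decrypt appdb.dump.gpg > appdb.dump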

Summary

Backing up and recovering your PostgreSQL database is critical to ensuring the safety and availability of your data. By following these best practices, you can ensure that your backups are complete, accurate, and secure, and that you’re able to recover your data quickly in the event of a disaster.

FAQs

1: What type of locking is used by pg_dump?

pg_dump takes ACCESS SHARE locks on database objects, which means it does not prevent reads or writes inside the table. It does prevent dropping the table, making schema alterations to it (such as column add/drop), or anything else that requires an ACCESS EXCLUSIVE lock.
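To see this in practice, you can query pg_locks from another session while a dump is running; a rough sketch (the database name is a placeholder):

# List ACCESS SHARE locks currently held while pg_dump is running
psql -U postgres -d appdb -c "SELECT pid, mode, relation::regclass FROM pg_locks WHERE mode = 'AccessShareLock';"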

2: What are the important options when running the pg_dump utility?

-U : Specifies the PostgreSQL username (make sure the user has SELECT access on the objects to be backed up).
-d : Specifies the name of the database to connect to and dump.
-h : Specifies the host name of the machine on which the server is running. The default is taken from the PGHOST environment variable.
-p : Specifies the TCP port on which the server is listening for connections. The default is taken from the PGPORT environment variable.
-F : Specifies the format of the output file, where the formats are:

  • p : Plain text SQL script. Compression is off by default for this format.
  • c : Specifies the custom format. It includes all of the data, schema, and other information needed to recreate the database in a single binary file. This format is compressed by default.
  • d : Specifies the directory format. It creates a directory containing one data file for each table and large object dumped, plus a table-of-contents file. This format is compressed by default.
  • t : Specifies the tar archive format. It creates a tar archive containing one data file for each table plus a table of contents. This format does not support compression.

-s : Dump only the object definitions (DDL), not data.
-a : Dump only the data, not the object definitions (schema). Table data, large objects, and sequence values are dumped.
-t : Dump only the specified tables. Multiple tables can be selected by writing multiple -t switches. Multiple tables can also be selected by writing wildcard characters in the pattern.
-n : Dump only the objects in the specified schemas. Multiple schemas can be selected by writing multiple -n switches. Multiple schemas can also be selected by writing wildcard characters in the pattern.
-j : Run the dump in parallel by dumping multiple tables simultaneously with the specified number of jobs. Only the directory format supports parallel dumps.
Other important options are:
-c / --clean
-N / --exclude-schema
-O / --no-owner
-T / --exclude-table
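As an illustrative example combining several of these options (the host, database, and schema names are placeholders):

# Parallel dump of database "appdb" in directory format, excluding the "audit" schema, using 4 jobs
pg_dump -U postgres -h db.example.com -p 5432 -d appdb -Fd -j 4 -N audit -f /backups/appdb_dir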

3: While upgrading a database to a new major version, which binary utilities should be used?

The recommended way is to use the newer version’s client binaries. For example, if you’re upgrading from PostgreSQL major version 11 to 13, use the PostgreSQL 13 pg_dump/pg_restore utilities.
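A hedged sketch of this approach, assuming PGDG-style binary paths (adjust paths, hosts, and database names for your setup):

# Use the PostgreSQL 13 client binaries against the old PostgreSQL 11 server
/usr/pgsql-13/bin/pg_dump -h old-server -p 5432 -d appdb -Fc -f appdb_v11.dump
# Create the target database on the new server first (e.g. with createdb), then restore in parallel
/usr/pgsql-13/bin/pg_restore -h new-server -p 5432 -d appdb -j 4 appdb_v11.dump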

4: How can a database be restored effectively from a logical backup?

When restoring a database from logical backups (i.e. pg_dump):
1: Make sure to use a custom or directory format backup. These formats support parallel restoration; check the -j option in pg_restore.
2: If you’re restoring data to a new server, you can optimize the following server parameters prior to running pg_restore:
shared_buffers = 4GB
work_mem = 32MB
max_wal_size = 32GB
checkpoint_timeout = 60min
maintenance_work_mem = 2GB
autovacuum = off
wal_compression = on
wal_buffers = -1
After restoration, update the parameters as per your workload requirements.
Note: The above parameters help decrease I/O while restoring data. Follow these recommendations only if there is enough memory and there are enough CPU cores available, and monitor whether your system is limited by CPU or disk I/O.
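A rough sketch of this workflow (the data directory, database name, and dump path are placeholders; shared_buffers and wal_buffers changes require a restart, while most of the other settings only need a reload):

# Apply restore-friendly settings, then run a parallel restore from a directory-format dump
psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '4GB';"
psql -U postgres -c "ALTER SYSTEM SET maintenance_work_mem = '2GB';"
psql -U postgres -c "ALTER SYSTEM SET max_wal_size = '32GB';"
psql -U postgres -c "ALTER SYSTEM SET autovacuum = off;"
pg_ctl restart -D /var/lib/postgresql/15/main
pg_restore -U postgres -d appdb -j 8 /backups/appdb_dir
# Afterwards, reset the settings (e.g. ALTER SYSTEM RESET ALL) and restart/reload again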

5: What privileges are required to run pg_basebackup?

The backup is made over a regular PostgreSQL connection that uses the replication protocol. The connection must be made with a user ID that has REPLICATION permissions or is a superuser, and pg_hba.conf must permit the replication connection. The server must also be configured with max_wal_senders set high enough to provide at least one walsender for the backup plus one for WAL streaming (if used).
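A minimal sketch of the required setup (the role name, password, and network range are assumptions):

# Create a dedicated backup role with REPLICATION privilege
psql -U postgres -c "CREATE ROLE backup_user WITH LOGIN REPLICATION PASSWORD 'change_me';"
# pg_hba.conf entry allowing replication connections (adjust address and auth method):
#   host  replication  backup_user  192.168.1.0/24  scram-sha-256
# Reload the configuration afterwards, e.g. with: psql -U postgres -c "SELECT pg_reload_conf();"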

6: How does pg_basebackup work in the background?

The low-level steps are as follows:
=> Issue the pg_start_backup command.
=> Switch the current WAL segment file.
=> Perform a checkpoint (depends on the --checkpoint value; it can be immediate or spread over the default checkpoint time).
=> Create a backup label file. This file contains the contents of the backup_label file, which includes the checkpoint location, timeline info, and backup source (primary/standby). During restoration from a base backup, PostgreSQL looks for the backup_label file and starts recovering from the given checkpoint location.
=> Issue the pg_stop_backup command.

7: Which options should be considered when using pg_basebackup?

The following options should be considered when running the pg_basebackup command:
-F : Selects the format for the output. Only plain (the default) and tar formats are available.
-Xs : Stream write-ahead log data while the backup is being taken. This will ensure that your backup is up-to-date.
-z : Enables gzip compression of tar file output.
-c : Sets the checkpoint mode to fast (immediate) or spread (pg_basebackup waits for a spread checkpoint to complete).
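Putting these together, an illustrative invocation (the host, user, and target directory are placeholders) could be:

# Tar-format, gzip-compressed base backup with streamed WAL, immediate checkpoint, and progress reporting
pg_basebackup -h db.example.com -U backup_user -D /backups/base_$(date +%F) -Ft -z -Xs -c fast -P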
Stay tuned for the next blog on how to recover a database from a physical backup and perform PITR in various scenarios.
