The {[project]} User Guide demonstrates how to quickly and easily set up {[project]} for your {[postgres]} database. Step-by-step instructions lead the user through all the important features of the fastest, most reliable {[postgres]} backup and restore solution.
{[project]} requires trusted (no password) SSH to enable communication between the hosts.
Exchange keys between {[host-backup]} and {[setup-ssh-host]}.
Copy {[setup-ssh-host]} public key to {[host-backup]}:
ssh root@{[setup-ssh-host]} cat {[pg-home-path]}/.ssh/id_rsa.pub | \
    sudo -u backrest tee -a {[br-home-path]}/.ssh/authorized_keys

Copy {[host-backup]} public key to {[setup-ssh-host]}:
ssh root@{[host-backup]} cat {[br-home-path]}/.ssh/id_rsa.pub | \
    sudo -u postgres tee -a {[pg-home-path]}/.ssh/authorized_keys
Test that connections can be made from {[host-backup]} to {[setup-ssh-host]} and vice versa.
Test connection from {[host-backup]} to {[setup-ssh-host]}:
ssh postgres@{[setup-ssh-host]} -o StrictHostKeyChecking=no ls

Test connection from {[setup-ssh-host]} to {[host-backup]}:
ssh backrest@{[host-backup]} -o StrictHostKeyChecking=no ls
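The copy commands above assume that the postgres user on {[setup-ssh-host]} and the backrest user on {[host-backup]} already have SSH key pairs. If they do not, a minimal sketch for generating them (standard OpenSSH ssh-keygen, run on the respective hosts; an empty passphrase is assumed for unattended use):

# On {[setup-ssh-host]}
sudo -u postgres ssh-keygen -f {[pg-home-path]}/.ssh/id_rsa -t rsa -b 4096 -N ""

# On {[host-backup]}
sudo -u backrest ssh-keygen -f {[br-home-path]}/.ssh/id_rsa -t rsa -b 4096 -N ""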
{[project]} is written in Perl, which is included with {[user-guide-os]} by default. Some additional modules must also be installed, but they are available as standard packages.
{[user-guide-os]} packages for {[project]} are available at apt.postgresql.org. If they are not provided for your distribution/version, it is easy to download the source and install manually.
{[user-guide-os]} packages for {[project]} are available from Crunchy Data or yum.postgresql.org, but it is also easy to download the source and install manually.
{[project]} includes an optional companion C library that enhances performance and enables the `checksum-page` option and encryption. Pre-built packages are generally a better option than building the C library manually, but the steps required are given below for completeness. Depending on the distribution, a number of packages may be required which will not be enumerated here.
Build and Install C Library:
sh -c 'cd /root/pgbackrest-release-{[version]}/libc &&
    perl Makefile.PL INSTALLMAN1DIR=none INSTALLMAN3DIR=none'
make -C /root/pgbackrest-release-{[version]}/libc test
make -C /root/pgbackrest-release-{[version]}/libc install

Create the repository:
mkdir {[backrest-repo-path]}
chmod 750 {[backrest-repo-path]}
chown {[br-install-user]}:{[br-install-group]} {[backrest-repo-path]}

Introduction
This user guide is intended to be followed sequentially from beginning to end — each section depends on the last. For example, the Backup section relies on setup that is performed in the Quick Start section. Once {[project]} is up and running then skipping around is possible but following the user guide in order is recommended the first time through.
Although the examples are targeted at {[user-guide-os]} and {[postgres]} {[pg-version]}, it should be fairly easy to apply this guide to any Unix distribution and {[postgres]} version. Note that only 64-bit distributions are currently supported due to 64-bit operations in the Perl code. The only OS-specific commands are those to create, start, stop, and drop {[postgres]} clusters. The {[project]} commands will be the same on any Unix system, though the locations to install Perl libraries and executables may vary.
Configuration information and documentation for PostgreSQL can be found in the Manual.
A somewhat novel approach is taken to documentation in this user guide. Each command is run on a virtual machine when the documentation is built from the XML source. This means you can have high confidence that the commands work correctly in the order presented. Output is captured and displayed below the command when appropriate. If the output is not included it is because it was deemed not relevant or was considered a distraction from the narrative.
All commands are intended to be run as an unprivileged user that has sudo privileges for both the root and postgres users. It's also possible to run the commands directly as their respective users without modification and in that case the sudo commands can be stripped off.
Concepts
The following concepts are defined as they are relevant to {[postgres]}, {[project]}, and this user guide.
Backup
A backup is a consistent copy of a database cluster that can be restored to recover from a hardware failure, to perform Point-In-Time Recovery, or to bring up a new standby.
Full Backup: {[project]} copies the entire contents of the database cluster to the backup server. The first backup of the database cluster is always a Full Backup. {[project]} is always able to restore a full backup directly. The full backup does not depend on any files outside of the full backup for consistency.
Differential Backup: {[project]} copies only those database cluster files that have changed since the last full backup. {[project]} restores a differential backup by copying all of the files in the chosen differential backup and the appropriate unchanged files from the previous full backup. The advantage of a differential backup is that it requires less disk space than a full backup, however, the differential backup and the full backup must both be valid to restore the differential backup.
Incremental Backup: {[project]} copies only those database cluster files that have changed since the last backup (which can be another incremental backup, a differential backup, or a full backup). As an incremental backup only includes those files changed since the prior backup, it is generally much smaller than a full or differential backup. As with the differential backup, the incremental backup depends on other backups to be valid for a restore. Since the incremental backup includes only those files changed since the last backup, all prior incremental backups back to the prior differential, the prior differential backup, and the prior full backup must all be valid to perform a restore of the incremental backup. If no differential backup exists then all prior incremental backups back to the prior full backup, which must exist, and the full backup itself must be valid to restore the incremental backup.
Restore
A restore is the act of copying a backup to a system where it will be started as a live database cluster. A restore requires the backup files and one or more WAL segments in order to work correctly.
Write Ahead Log (WAL)
WAL is the mechanism that {[postgres]} uses to ensure that no committed changes are lost. Transactions are written sequentially to the WAL and a transaction is considered to be committed when those writes are flushed to disk. Afterwards, a background process writes the changes into the main database cluster files (also known as the heap). In the event of a crash, the WAL is replayed to make the database consistent.
WAL is conceptually infinite but in practice is broken up into individual 16MB files called segments. WAL segments follow the naming convention 0000000100000A1E000000FE where the first 8 hexadecimal digits represent the timeline and the next 16 digits are the logical sequence number (LSN).
Encryption
Encryption is the process of converting data into a format that is unrecognizable unless the appropriate password (also referred to as passphrase) is provided.
{[project]} will encrypt the repository based on a user-provided password, thereby preventing unauthorized access to data stored within the repository.
Installation
A new host named db-primary is created to contain the demo cluster and run examples.
If {[project]} has been installed before, it's best to be sure that no prior copies of it are still installed. Depending on how old the version of {[project]} is, it may have been installed in a few different locations. The following commands will remove all prior versions of {[project]}.
{[project]} should now be properly installed, but it is best to check. If any dependencies were missed then you will get an error when running {[project]} from the command line.
Make sure the installation worked:
{[project-exe]}

Quick Start
The Quick Start section will cover basic configuration of {[project]} and {[postgres]} and introduce the backup, restore, and info commands.
Setup Demo Cluster
Creating the demo cluster is optional but is strongly recommended, especially for new users, since the example commands in the user guide reference the demo cluster; the examples assume the demo cluster is running on the default port (i.e. 5432). The cluster will not be started until a later section because there is still some configuration to do.
Create the demo cluster
/usr/lib/postgresql/{[pg-version]}/bin/initdb \
    -D {[db-path]} -k -A peer
{[db-cluster-create]}
By default {[postgres]} will only accept local connections. The examples in this guide will require connections from other servers so listen_addresses is configured to listen on all interfaces. This may not be appropriate for secure installations.
Set listen_addresses:
listen_addresses = '*'
For demonstration purposes the log_line_prefix setting will be minimally configured. This keeps the log output as brief as possible to better illustrate important information.
Set log_line_prefix:
log_line_prefix = ''
By default {[user-guide-os]} includes the day of the week in the log filename. This makes automating the user guide a bit more complicated so the log_filename is set to a constant.
Set log_filename:
log_filename = 'postgresql.log'

Configure Cluster Stanza
The name 'demo' describes the purpose of this cluster accurately so that will also make a good stanza name.
{[project]} needs to know where the base data directory for the {[postgres]} cluster is located. The path can be requested from {[postgres]} directly, but in a recovery scenario the {[postgres]} process will not be available. During backups the value supplied to {[project]} will be compared against the path that {[postgres]} is running on and they must be equal or the backup will return an error. Make sure that db-path is exactly equal to data_directory in postgresql.conf.
By default {[user-guide-os]} stores clusters in {[db-path-default]} so it is easy to determine the correct path for the data directory.
When creating the {[backrest-config-demo]} file, the database owner (usually postgres) must be granted read privileges.
Configure the cluster data directory:
db-path={[db-path]}
{[project]} configuration files follow the Windows INI convention. Sections are denoted by text in brackets and key/value pairs are contained in each section. Lines beginning with # are ignored and can be used as comments.
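For illustration only, a minimal {[backrest-config-demo]} using the paths and stanza name from this guide might look like the following (a sketch, not generated output):

# Global options apply to all stanzas
[global]
repo-path={[backrest-repo-path]}

# Options for the demo stanza
[demo]
db-path={[db-path]}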
Create the Repository
For this demonstration the repository will be stored on the same host as the {[postgres]} server. This is the simplest configuration and is useful in cases where traditional backup software is employed to backup the database host.
The repository path must be configured so {[project]} knows where to find it.
Configure the repository path:
repo-path={[backrest-repo-path]}

Configure Archiving
Backing up a running cluster requires WAL archiving to be enabled. Note that at least one WAL segment will be created during the backup process even if no explicit writes are made to the cluster.
The wal_level setting must be set to archive at a minimum but hot_standby and logical also work fine for backups. Setting wal_level to hot_standby and increasing max_wal_senders is a good idea even if you do not currently run a hot standby as this will allow them to be added later without restarting the primary cluster.
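A sketch of the postgresql.conf settings discussed in this section (the archive_command shown is the usual way to invoke {[project-exe]} for the demo stanza; adjust the values for your environment and {[postgres]} version):

wal_level = hot_standby
max_wal_senders = 3
archive_mode = on
archive_command = '{[project-exe]} --stanza={[postgres-cluster-demo]} archive-push %p'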
The cluster must be restarted after making these changes and before performing a backup.
Restart the {[postgres-cluster-demo]} cluster:
{[db-cluster-restart]}
{[db-cluster-wait]}
When archiving a WAL segment is expected to take more than 60 seconds (the default) to reach the repository, the archive-timeout option should be increased. Note that this option is not the same as the {[postgres]} archive_timeout option, which is used to force a WAL segment switch and is useful for databases where there are long periods of inactivity. For more information on the archive_timeout option, see Write Ahead Log.
Configure Retention
{[project]} expires backups based on retention options.
Configure retention to 2 full backups:
retention-full=2
More information about retention can be found in the Retention section.
Configure Repository Encryption
The repository will be configured with a cipher type and key to demonstrate encryption. The companion C library is required for encryption, see Installation.
It is important to use a long, random passphrase for the cipher key. A good way to generate one is to run: openssl rand -base64 48.
Once the repository has been configured and the stanza created and checked, the repository encryption settings cannot be changed.
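A sketch of the repository encryption settings (option names as used by recent {[project]} versions; the passphrase shown is the throwaway demo value used while building this guide and should be replaced with your own openssl rand output):

[global]
repo-cipher-type=aes-256-cbc
repo-cipher-pass=zWaf6XtpjIVZC5444yXB+cgFDFl7MxGlgkZSaoPvTGirhPygu4jOKOXf9LO4vjfO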
Create the Stanza
The stanza-create command must be run on the host where the repository is located to initialize the stanza. It is recommended that the check command be run after stanza-create to ensure archiving and backups are properly configured.
Create the stanza:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-log-level-console=info stanza-create

Check the Configuration

Check the configuration:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-log-level-console=info check

Example of an invalid configuration:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} --archive-timeout=.1 check

Perform a Backup
To perform a backup of the {[postgres]} cluster, run {[project]} with the backup command.
By default {[project]} will attempt to perform an incremental backup. However, an incremental backup must be based on a full backup and since no full backup existed {[project]} ran a full backup instead.
The type option can be used to specify a full or differential backup.
Differential backup of the {[postgres-cluster-demo]} cluster:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=diff \
    --log-level-console=info backup
This time there was no warning because a full backup already existed. While incremental backups can be based on a full or differential backup, differential backups must be based on a full backup. A full backup can be performed by running the backup command with {[dash]}-type=full.
More information about the backup command can be found in the Backup section.
Schedule a Backup
Backups can be scheduled with utilities such as cron.
In the following example, two cron jobs are configured to run; full backups are scheduled for 6:30 AM every Sunday with differential backups scheduled for 6:30 AM Monday through Saturday. If this crontab is installed for the first time mid-week, then pgBackRest will run a full backup the first time the differential job is executed, followed the next day by a differential backup.
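A sketch of such a crontab (installed for the user that runs {[project]} backups; times follow the schedule described above):

#m h   dom mon dow   command
30 06  *   *   0     {[project-exe]} --type=full --stanza={[postgres-cluster-demo]} backup
30 06  *   *   1-6   {[project-exe]} --type=diff --stanza={[postgres-cluster-demo]} backup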
Once backups are scheduled it's important to configure retention so backups are expired on a regular schedule, see Retention.
Backup Information
Use the info command to get information about backups.
Get info for the {[postgres-cluster-demo]} cluster:
{[project-exe]} info
Each stanza has a separate section and it is possible to limit output to a single stanza with the --stanza option. The stanza 'status' gives a brief indication of the stanza's health. If this is 'ok' then {[project]} is functioning normally. The 'wal archive min/max' shows the minimum and maximum WAL currently stored in the archive. Note that there may be gaps due to archive retention policies or other reasons.
The backups are displayed oldest to newest. The oldest backup will always be a full backup (indicated by an F at the end of the label) but the newest backup can be full, differential (ends with D), or incremental (ends with I).
The 'timestamp start/stop' defines the time period when the backup ran. The 'timestamp stop' can be used to determine the backup to use when performing Point-In-Time Recovery. More information about Point-In-Time Recovery can be found in the Point-In-Time Recovery section.
The 'wal start/stop' defines the WAL range that is required to make the database consistent when restoring. The backup command will ensure that this WAL range is in the archive before completing.
The 'database size' is the full uncompressed size of the database while 'backup size' is the amount of data actually backed up (these will be the same for full backups). The 'repository size' includes all the files from this backup and any referenced backups that are required to restore the database while 'repository backup size' includes only the files in this backup (these will also be the same for full backups). Repository sizes reflect compressed file sizes if compression is enabled in or the filesystem.
The 'backup reference list' contains the additional backups that are required to restore this backup.
Restore a Backup
Backups can protect you from a number of disaster scenarios, the most common of which are hardware failure and data corruption. The easiest way to simulate data corruption is to remove an important cluster file.
Stop the {[postgres-cluster-demo]} cluster and delete the pg_control file:
{[db-cluster-stop]}
rm {[db-path]}/global/pg_control
Starting the cluster without this important file will result in an error.
Attempt to start the corrupted {[postgres-cluster-demo]} cluster:
{[db-cluster-start]}
{[db-cluster-wait]}
rm -f {[postgres-log-pgstartup-demo]}
{[db-cluster-start]}
cat {[postgres-log-pgstartup-demo]}
To restore a backup of the {[postgres]} cluster, run {[project]} with the restore command. The cluster needs to be stopped (in this case it is already stopped) and all files must be removed from the data directory.
Remove old files from the {[postgres-cluster-demo]} cluster:
find {[db-path]} -mindepth 1 -delete

Restore the {[postgres-cluster-demo]} cluster and start {[postgres]}:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} restore
{[db-cluster-start]}
{[db-cluster-wait]}
This time the cluster started successfully since the restore replaced the missing pg_control file.
More information about the restore command can be found in the Restore section.
Backup
The Backup section introduces additional backup command features.
Fast Start Option
By default {[project]} will wait for the next regularly scheduled checkpoint before starting a backup. Depending on the checkpoint_timeout and checkpoint_segments settings in {[postgres]} it may be quite some time before a checkpoint completes and the backup can begin.
Incremental backup of the {[postgres-cluster-demo]} cluster with the regularly scheduled checkpoint:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=incr \
    --log-level-console=info backup
When {[dash]}-start-fast is passed on the command-line or start-fast=y is set in {[backrest-config-demo]} an immediate checkpoint is requested and the backup will start more quickly. This is convenient for testing and for ad-hoc backups. For instance, if a backup is being taken at the beginning of a release window it makes no sense to wait for a checkpoint. Since regularly scheduled backups generally only happen once per day it is unlikely that enabling the start-fast option in {[backrest-config-demo]} will negatively affect performance, however for high-volume transactional systems you may want to pass {[dash]}-start-fast on the command-line instead. Alternately, it is possible to override the setting in the configuration file by passing {[dash]}-no-start-fast on the command-line.
Enable the start-fast option:
start-fast=y

Incremental backup of the {[postgres-cluster-demo]} cluster with an immediate checkpoint:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=incr \
    --log-level-console=info backup

Automatic Stop Option
Sometimes {[project]} will exit unexpectedly and the backup in progress on the {[postgres]} cluster will not be properly stopped. {[project]} exits as quickly as possible when an error occurs so that the cause can be reported accurately and is not masked by another problem that might happen during a more extensive cleanup.
Here an error is intentionally caused by removing repository permissions.
Revoke write privileges in the repository and attempt a backup:
chmod 550 {[backrest-repo-path]}/backup/{[postgres-cluster-demo]}/
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=incr \
    --log-level-console=info backup
Even when the permissions are fixed {[project]} will still be unable to perform a backup because the {[postgres]} cluster is stuck in backup mode.
Restore write privileges in the repository and attempt a backup:
chmod 750 {[backrest-repo-path]}/backup/{[postgres-cluster-demo]}/
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=incr \
    --log-level-console=info backup
Enabling the stop-auto option allows {[project]} to stop the current backup if it detects that no other backup process is running.
Enable the stop-auto option:
stop-auto=y
Now {[project]} will stop the old backup and start a new one so the process completes successfully.
Perform an incremental backup:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=incr \
    --log-level-console=info backup
Although useful, this feature may not be appropriate when another third-party backup solution is being used to take online backups as {[project]} will not recognize that the other software is running and may terminate a backup started by that software. However, it would be unusual to run more than one third-party backup solution at the same time so this is not likely to be a problem.
Note that pg_dump and pg_basebackup do not take online backups so are not affected. It is safe to run them in conjunction with {[project]}.
Archive Timeout
During an online backup {[project]} waits for WAL segments that are required for backup consistency to be archived. This wait time is governed by the archive-timeout option which defaults to 60 seconds. If archiving an individual segment is known to take longer then this option should be increased.
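For example, to allow up to two minutes per segment before an error is reported (a sketch; choose a value appropriate to your archive storage):

[global]
archive-timeout=120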
Retention
Generally it is best to retain as many backups as possible to provide a greater window for Point-in-Time Recovery, but practical concerns such as disk space must also be considered. Retention options remove older backups once they are no longer needed.
Full Backup Retention
Set retention-full to the number of full backups required. New backups must be completed before expiration will occur — that means if retention-full=2 then there will be three full backups stored before the oldest one is expired.
Configure retention-full:
retention-full=2
Backup retention is set to retention-full=2 but currently there is only one full backup, so the next full backup to run will not expire any full backups.
Perform a full backup:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} --type=full \
    --log-level-console=detail backup
Archive is expired because WAL segments were generated before the oldest backup. These are not useful for recovery — only WAL segments generated after a backup can be used to recover that backup.
Perform a full backup:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} --type=full \
    --log-level-console=info backup
The {[backup-full-first]} full backup is expired and archive retention is based on the {[backup-full-second]} which is now the oldest full backup.
Differential Backup Retention
Set retention-diff to the number of differential backups required. Differentials only rely on the prior full backup so it is possible to create a rolling set of differentials for the last day or more. This allows quick restores to recent points-in-time but reduces overall space consumption.
Configure retention-diff:
retention-diff=1
Backup retention is set to retention-diff=1 so two differentials will need to be performed before one is expired. An incremental backup is added to demonstrate incremental expiration. Incremental backups cannot be expired independently — they are always expired with their related full or differential backup.
Although {[project]} automatically removes archived WAL segments when expiring backups (the default expires WAL for full backups based on the retention-full option), it may be useful to expire archive more aggressively to save disk space. Note that full backups are treated as differential backups for the purpose of differential archive retention.
Expiring archive will never remove WAL segments that are required to make a backup consistent. However, since Point-in-Time-Recovery (PITR) only works on a continuous WAL stream, care should be taken when aggressively expiring archive outside of the normal backup expiration process.
The {[backup-diff-first]} differential backup has archived WAL segments that must be retained to make the older backups consistent even though they cannot be played any further forward with PITR. WAL segments generated after {[backup-diff-first]} but before {[backup-diff-second]} are removed. WAL segments generated after the new backup {[backup-diff-second]} remain and can be used for PITR.
Since full backups are considered differential backups for the purpose of differential archive retention, if a full backup is now performed with the same settings, only the archive for that full backup is retained for PITR.
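A sketch of the more aggressive archive retention described above, assuming the retention-archive-type and retention-archive options (retain archive only for the most recent differential-equivalent backup):

[global]
retention-archive-type=diff
retention-archive=1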
Restore
The Restore section introduces additional restore command features.
Delta Option
Restore a Backup in Quick Start required the database cluster directory to be cleaned before the restore could be performed. The delta option allows {[project]} to automatically determine which files in the database cluster directory can be preserved and which ones need to be restored from the backup — it also removes files not present in the backup manifest so it will dispose of divergent changes. This is accomplished by calculating a SHA-1 cryptographic hash for each file in the database cluster directory. If the SHA-1 hash does not match the hash stored in the backup then that file will be restored. This operation is very efficient when combined with the process-max option. Since the {[postgres]} server is shut down during the restore, a larger number of processes can be used than might be desirable during a backup when the server is running.
Stop the {[postgres-cluster-demo]} cluster and perform a delta restore:
{[db-cluster-stop]}
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta \
    --log-level-console=detail restore

Restart {[postgres]}:
{[db-cluster-start]}
{[db-cluster-wait]}

Restore Selected Databases
There may be cases where it is desirable to selectively restore specific databases from a cluster backup. This could be done for performance reasons or to move selected databases to a machine that does not have enough space to restore the entire cluster backup.
To demonstrate this feature two databases are created: test1 and test2. A fresh backup is run so {[project]} is aware of the new databases.
Create two test databases and perform a backup
psql -c "create database test1;"
psql -c "create database test2;"
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} --type=incr backup
Each test database will be seeded with tables and data to demonstrate that recovery works with selective restore.
Create a test table in each database
psql -c "create table test1_table (id int);
insert into test1_table (id) values (1);" test1
psql -c "create table test2_table (id int);
insert into test2_table (id) values (2);" test2
One of the main reasons to use selective restore is to save space. The size of the test1 database is shown here so it can be compared with the disk utilization after a selective restore.
Show space used by test1 database
du -sh {[db-path]}/base/16384
Stop the cluster and restore only the test2 database. Built-in databases (template0, template1, and postgres) are always restored.
Restore from last backup including only the test2 database:
{[db-cluster-stop]}
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta \
    {[dash]}-db-include=test2 restore
{[db-cluster-start]}
{[db-cluster-wait]}
Once recovery is complete the test2 database will contain all previously created tables and data.
Demonstrate that the test2 database was recovered
psql -c "select * from test2_table;" test2
The test1 database, despite successful recovery, is not accessible. This is because the entire database was restored as sparse, zeroed files. {[postgres]} can successfully apply WAL on the zeroed files but the database as a whole will not be valid because key files contain no data. This is purposeful to prevent the database from being accidentally used when it might contain partial data that was applied during WAL replay.
Attempting to connect to the test1 database will produce an error
psql -c "select * from test1_table;" test1
The error will indicate that the relation mapping file contains invalid data.
Since the test1 database is restored with sparse, zeroed files it will only require as much space as the amount of WAL that is written during recovery. While the amount of WAL generated during a backup and applied during recovery can be significant it will generally be a small fraction of the total database size, especially for large databases where this feature is most likely to be useful.
It is clear that the test1 database uses far less disk space during the selective restore than it would have if the entire database had been restored.
Show space used by test1 database after recovery
du -sh {[db-path]}/base/16384
At this point the only action that can be taken on the invalid test1 database is drop database. {[project]} does not automatically drop the database since this cannot be done until recovery is complete and the cluster is accessible.
Drop the test1 database
psql -c "drop database test1;"
Now that the invalid test1 database has been dropped only the test2 and built-in databases remain.
List remaining databases
psql -c "select oid, datname from pg_database order by oid;"
Point-in-Time Recovery
Restore a Backup in Quick Start performed default recovery, which is to play all the way to the end of the WAL stream. In the case of a hardware failure this is usually the best choice but for data corruption scenarios (whether machine or human in origin) Point-in-Time Recovery (PITR) is often more appropriate.
Point-in-Time Recovery (PITR) allows the WAL to be played from the last backup to a specified time, transaction id, or recovery point. For common recovery scenarios time-based recovery is arguably the most useful. A typical recovery scenario is to restore a table that was accidentally dropped or data that was accidentally deleted. Recovering a dropped table is more dramatic so that's the example given here but deleted data would be recovered in exactly the same way.
Backup the {[postgres-cluster-demo]} cluster and create a table with very important data:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} --type=diff backup
psql -c "begin;
create table important_table (message text);
insert into important_table values ('{[test-table-data]}');
commit;
select * from important_table;"
{[test-table-data]}
It is important to represent the time as reckoned by {[postgres]} and to include timezone offsets. This reduces the possibility of unintended timezone conversions and an unexpected recovery result.
Get the time from {[postgres]}:
psql -Atc "select current_timestamp"
Now that the time has been recorded the table is dropped. In practice finding the exact time that the table was dropped is a lot harder than in this example. It may not be possible to find the exact time, but some forensic work should be able to get you close.
Drop the important table:
psql -c "begin;
drop table important_table;
commit;
select * from important_table;"
Now the restore can be performed with time-based recovery to bring back the missing table.
Stop {[postgres]}, restore the {[postgres-cluster-demo]} cluster to {[time-recovery-timestamp]}, and display recovery.conf:
{[db-cluster-stop]}
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta \
    {[dash]}-type=time "{[dash]}-target={[time-recovery-timestamp]}" restore
rm {[postgres-log-demo]}
cat {[postgres-recovery-demo]}
The recovery.conf file has been automatically generated by {[project]} so {[postgres]} can be started immediately. Once {[postgres]} has finished recovery the table will exist again and can be queried.
Start {[postgres]} and check that the important table exists:
{[db-cluster-start]}
{[db-cluster-wait]}
psql -c "select * from important_table"
The {[postgres]} log also contains valuable information. It will indicate the time and transaction where the recovery stopped and also give the time of the last transaction to be applied.
This example was rigged to give the correct result. If a backup after the required time is chosen then {[project]} will not be able to recover the lost table. {[project]} can only play forward, not backward. To demonstrate this the important table must be dropped (again).
Drop the important table (again):
psql -c "begin;
drop table important_table;
commit;
select * from important_table;"
Now take a new backup and attempt recovery from the new backup.
Perform a backup then attempt recovery from that backup:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=incr backup
{[db-cluster-stop]}
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta \
    {[dash]}-type=time "{[dash]}-target={[time-recovery-timestamp]}" restore
rm {[postgres-log-demo]}
{[db-cluster-start]}
{[db-cluster-wait]}
psql -c "select * from important_table"
Looking at the log output it's not obvious that recovery failed to restore the table. The key is to look for the presence of the recovery stopping before... and last completed transaction... log messages. If they are not present then the recovery to the specified point-in-time was not successful.
Examine the log output to discover the recovery was not successful:
cat {[postgres-log-demo]}
Using an earlier backup will allow {[project]} to play forward to the correct time. The info command can be used to find the next to last backup.
Get backup info for the {[postgres-cluster-demo]} cluster:
{[project-exe]} info
The default behavior for restore is to use the last backup but an earlier backup can be specified with the {[dash]}-set option.
Stop {[postgres]}, restore from the selected backup, and start {[postgres]}:
{[db-cluster-stop]}
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta
{[dash]}-type=time "{[dash]}-target={[time-recovery-timestamp]}"
{[dash]}-set={[backup-last]} restore
rm {[postgres-log-demo]}
{[db-cluster-start]}
{[db-cluster-wait]}
psql -c "select * from important_table"
Now the log output will contain the expected recovery stopping before... and last completed transaction... messages showing that the recovery was successful.
Examine the log output for log messages indicating success:
cat {[postgres-log-demo]}

S3 Support
{[project]} supports storing repositories in Amazon S3.
Clear the cipher settings:
repo-cipher-type=none

Configure S3:
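A hedged sketch of the S3 repository settings implied here (option names follow the {[project]} 1.x convention; the bucket, endpoint, region, and credentials are the demo placeholders used while building this guide, and repo-s3-verify-ssl=n is only appropriate for the local test S3 server):

[global]
repo-type=s3
repo-path=/
repo-s3-bucket=demo-bucket
repo-s3-endpoint=s3.amazonaws.com
repo-s3-region=us-east-1
repo-s3-key=accessKey1
repo-s3-key-secret=verySecretKey1
repo-s3-verify-ssl=n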
Commands are run exactly as if the repository were stored on a local disk.
Create the stanza:
echo "{[host-s3-server-ip]} demo-bucket.s3.amazonaws.com s3.amazonaws.com" | \
    sudo tee -a /etc/hosts
aws s3 --no-verify-ssl mb s3://demo-bucket 2>&1
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-log-level-console=info stanza-create
File creation time in S3 is relatively slow so commands benefit by increasing process-max to parallelize file creation.
Dedicated Backup Host

The configuration described in Quick Start is suitable for simple installations but for enterprise configurations it is more typical to have a dedicated backup host. This separates the backups and WAL archive from the database server so database host failures have less impact. It is still a good idea to employ traditional backup software to backup the backup host.
Installation
A new host named backup is created to store the cluster backups.
The {[br-user]} user is created to own the repository. Any user can own the repository but it is best not to use postgres (if it exists) to avoid confusion.
The backup host must be configured with the db-primary host/user and database path. The primary will be configured as db1 to allow a standby to be added later.
Configure db1-host/db1-user and db1-path:
db1-path={[db-path]}
db1-host={[host-db-primary]}
db1-user=postgres
The database host must be configured with the backup host/user. The default for the backup-user option is backrest. If the postgres user does restores on the backup host it is best not to also allow the postgres user to perform backups. However, the postgres user can read the repository directly if it is in the same group as the backrest user.
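Putting the two sides together, a sketch of the relevant settings (option names per {[project]} 1.x; hosts, users, and paths as used in this guide):

# On the backup host ({[host-backup]})
[global]
repo-path={[backrest-repo-path]}

[demo]
db1-host={[host-db-primary]}
db1-user=postgres
db1-path={[db-path]}

# On the database host ({[host-db-primary]})
[global]
backup-host={[host-backup]}
backup-user=backrest

[demo]
db-path={[db-path]}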
The repository directory will also be removed from the database host. It will not be used anymore so leaving it around may be confusing later on.
Remove the repository now that it will be located on the backup host:
find {[backrest-repo-path]} -delete
Commands are run the same as on a single host configuration except that some commands such as backup and expire are run from the backup host instead of the database host.
Create the stanza in the new repository.
Create the stanza:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} stanza-create
Check that the configuration is correct on both the database and backup hosts. More information about the check command can be found in Check the Configuration.
Check the configuration on the database host:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} check

Check the configuration on the backup host:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} check

Perform a Backup
To perform a backup of the {[postgres]} cluster, run {[project]} with the backup command on the backup host.
Backup the {[postgres-cluster-demo]} cluster:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} backup
Since a new repository was created on the backup host the warning about the incremental backup changing to a full backup was emitted.
Restore a Backup
To perform a restore of the {[postgres]} cluster, run {[project]} with the restore command on the database host.
Stop the {[postgres-cluster-demo]} cluster, restore, and restart {[postgres]}:
{[db-cluster-stop]}
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta restore
{[db-cluster-start]}
{[db-cluster-wait]}
A new backup must be performed due to the timeline switch.
Backup the {[postgres-cluster-demo]} cluster:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} backup

Asynchronous Archiving
The archive-async option offloads WAL archiving to a separate process (or processes) to improve throughput. It works by looking ahead to see which WAL segments are ready to be archived beyond the request that {[postgres]} is currently making via the archive_command. WAL segments are transferred to the archive directly from the pg_xlog/pg_wal directory and success is only returned by the archive_command when the WAL segment has been safely stored in the archive.
The spool directory is created to hold the current status of WAL archiving. Status files written into the spool directory are typically zero length and should consume a minimal amount of space (a few MB at most) and very little IO. All the information in this directory can be recreated so it is not necessary to preserve the spool directory if the cluster is moved to new hardware.
NOTE: In the original implementation of asynchronous archiving, WAL segments were copied to the spool directory before compression and transfer. The new implementation copies WAL directly from the pg_xlog directory. If asynchronous archiving was utilized in v1.12 or prior, read the v1.13 release notes carefully before upgrading.
Create the spool directory:
mkdir -m 750 {[spool-path]}
chown postgres:postgres {[spool-path]}
The spool path must be configured and asynchronous archiving enabled. Asynchronous archiving automatically confers some benefit by reducing the number of ssh connections made to the backup server, but setting process-max can drastically improve performance. Be sure not to set process-max so high that it affects normal database operations.
Configure the spool path and asynchronous archiving:
spool-path={[spool-path]}
archive-async=y
process-max=2
The archive-async.log file can be used to monitor the activity of the asynchronous process. A good way to test this is to quickly push a number of WAL segments.
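One way to generate a quick burst of WAL for this test (a sketch, run as a superuser such as postgres; pg_switch_xlog() was renamed pg_switch_wal() in {[postgres]} 10, and the async_test table is introduced here purely to force WAL to be written):

for i in 1 2 3 4 5; do
    psql -c "create table if not exists async_test (id int);
             insert into async_test values (${i});
             select pg_switch_xlog();"
done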
Now the log file will contain parallel, asynchronous activity.
Check results in the log:
cat /var/log/pgbackrest/demo-archive-async.log

Parallel Backup / Restore
{[project]} offers parallel processing to improve performance of compression and transfer. The number of processes to be used for this feature is set using the --process-max option.
Check the number of CPUs:
lscpu
It is usually best not to use more than 25% of the available CPUs for the backup command. Backups don't have to run that fast as long as they are performed regularly and the backup process should not impact database performance, if at all possible.
The restore command can and should use all available CPUs because during a restore the cluster is shut down and there is generally no other important work being done on the host. If the host contains multiple clusters then that should be considered when setting restore parallelism.
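A sketch of how the suggested values might be derived (roughly 25% of CPUs for backups, all CPUs for restores; assumes the nproc utility is available):

cpus=$(nproc)
backup_procs=$(( cpus / 4 ))
[ "${backup_procs}" -lt 1 ] && backup_procs=1
echo "suggested process-max for backup:  ${backup_procs}"
echo "suggested process-max for restore: ${cpus}"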
Perform a backup with a single process:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=full backup

Configure {[project]} to use multiple backup processes:
process-max=3

Perform a backup with multiple processes:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=full backup

Get backup info for the {[postgres-cluster-demo]} cluster:
{[project-exe]} info
The performance of the last backup should be improved by using multiple processes. For very small backups the difference may not be very apparent, but as the size of the database increases so will time savings.
Starting and Stopping
Sometimes it is useful to prevent {[project]} from running on a system. For example, when failing over from a primary to a standby it's best to prevent {[project]} from running on the old primary in case {[postgres]} gets restarted or can't be completely killed. This will also prevent {[project]} from being run by cron.
Stop the {[project]} services:
{[project-exe]} stop
New {[project]} processes will no longer run.
Attempt a backup:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} backup
Specify the --force option to terminate any {[project]} processes that are currently running. If {[project]} is already stopped then stopping again will generate a warning.
Stop the {[project]} services again:
{[project-exe]} stop
Start {[project]} processes again with the start command.
Start the {[project]} services:
{[project-exe]} start
It is also possible to stop {[project]} for a single stanza.
Stop {[project]} services for the demo stanza:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} stop
New {[project]} processes for the specified stanza will no longer run.
Attempt a backup:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} backup
The stanza must also be specified when starting the {[project]} processes for a single stanza.
Start the {[project]} services for the demo stanza:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} start

Replication
Replication allows multiple copies of a cluster (called standbys) to be created from a single primary. The standbys are useful for balancing reads and to provide redundancy in case the primary host fails.
Installation
A new host named db-standby is created to run the standby.
The demo cluster must be created even though it will be overwritten later.
A hot standby performs replication using the WAL archive and allows read-only queries.
Set options:
log_line_prefix = ''
log_filename = 'postgresql.log'
{[project]} configuration is very similar to db-primary except that the standby_mode setting will be enabled to keep the cluster in recovery mode when the end of the WAL stream has been reached.
Configure {[project]} on the standby:
db-path={[db-path]}
backup-host={[host-backup]}
recovery-option=standby_mode=on
Now the standby can be created with the restore command.
Restore the {[postgres-cluster-demo]} standby cluster:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta restore
cat {[postgres-recovery-demo]}
Note that the standby_mode setting has been written into the recovery.conf file. Configuring recovery settings in {[project]} means that the recovery.conf file does not need to be stored elsewhere since it will be properly recreated with each restore. The --type=preserve option can be used with the restore to leave the existing recovery.conf file in place if that behavior is preferred.
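A sketch of how such a recovery setting is carried in the {[project]} configuration (the recovery-option setting writes key=value pairs into the generated recovery.conf; shown for the demo stanza on the standby):

[demo]
recovery-option=standby_mode=on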
The hot_standby setting must be enabled before starting {[postgres]} to allow read-only connections on db-standby. Otherwise, connection attempts will be refused.
The {[postgres]} log gives valuable information about the recovery. Note especially that the cluster has entered standby mode and is ready to accept read-only connections.
Examine the log output for log messages indicating success:
cat {[postgres-log-demo]}
An easy way to test that replication is properly configured is to create a table on db-primary.
Create a new table on the primary
psql -c "
begin;
create table replicated_table (message text);
insert into replicated_table values ('{[test-table-data]}');
commit;
select * from replicated_table";
{[test-table-data]}
And then query the same table on db-standby.
Query new table on the standby:
psql -c "select * from replicated_table;"
So, what went wrong? Since the standby is pulling WAL segments from the archive to perform replication, changes won't be seen on the standby until the WAL segment that contains those changes is pushed from db-primary.
This can be done manually by calling pg_switch_xlog() which pushes the current WAL segment to the archive (a new WAL segment is created to contain further changes).
Call pg_switch_xlog()
psql -c "select *, current_timestamp from pg_switch_xlog()";
Now after a short delay the table will appear on db-standby.
Now the new table exists on the standby (may require a few retries):
psql -c "
    select *, current_timestamp from replicated_table"
Check the standby configuration for access to the repository.
Check the configuration:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-log-level-console=info check

Streaming Replication
Instead of relying solely on the WAL archive, streaming replication makes a direct connection to the primary and applies changes as soon as they are made on the primary. This results in much less lag between the primary and standby.
Streaming replication requires a user with the replication privilege.
Create replication user
psql -c "
create user replicator password 'jw8s0F4' replication";
The pg_hba.conf file must be updated to allow the standby to connect as the replication user. Be sure to replace the IP address below with the actual IP address of your db-primary. A reload will be required after modifying the pg_hba.conf file.
Create pg_hba.conf entry for replication user
sh -c 'echo
"host replication replicator {[host-db-standby-ip]}/32 md5"
>> {[postgres-hba-demo]}'
{[db-cluster-reload]}
The standby needs to know how to contact the primary so the primary_conninfo setting will be configured in {[project]}.
Set primary_conninfo:
primary_conninfo=host={[host-db-primary-ip]} port=5432 user=replicator
It is possible to configure a password in the primary_conninfo setting but using a .pgpass file is more flexible and secure.
Configure the replication password in the .pgpass file.
sh -c 'echo
"{[host-db-primary-ip]}:*:replication:replicator:jw8s0F4"
>> {[postgres-pgpass]}'
chmod 600 {[postgres-pgpass]}
Now the standby can be created with the restore command.
Stop {[postgres]} and restore the {[postgres-cluster-demo]} standby cluster:
{[db-cluster-stop]}
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta restore
cat {[postgres-recovery-demo]}
By default {[user-guide-os]} stores the postgresql.conf file in the data directory. That means the change made to postgresql.conf was overwritten by the last restore and the hot_standby setting must be enabled again. Other solutions to this problem are to store the postgresql.conf file elsewhere or to enable the hot_standby setting on the db-primary host where it will be ignored.
The {[postgres]} log will confirm that streaming replication has started.
Examine the log output for log messages indicating success:
cat {[postgres-log-demo]}
Now when a table is created on db-primary it will appear on db-standby quickly and without the need to call pg_switch_xlog().
Create a new table on the primary
psql -c "
begin;
create table stream_table (message text);
insert into stream_table values ('{[test-table-data]}');
commit;
select *, current_timestamp from stream_table";
{[test-table-data]}

Query table on the standby:
psql -c "
    select *, current_timestamp from stream_table"

Backup from a Standby
{[project]} can perform backups on a standby instead of the primary. Standby backups require the db-standby host to be configured and the backup-standby option enabled. If more than one standby is configured then the first running standby found will be used for the backup.
Configure db2-host/db2-user and db2-path:
db2-path={[db-path]}
db2-host={[host-db-standby]}
db2-user=postgres
Both the primary and standby databases are required to perform the backup, though the vast majority of the files will be copied from the standby to reduce load on the primary. The database hosts can be configured in any order. {[project]} will automatically determine which is the primary and which is the standby.
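Combined with the db1 settings configured earlier, a sketch of the backup host stanza once backup from standby is enabled (option names per {[project]} 1.x):

[global]
backup-standby=y

[demo]
db1-host={[host-db-primary]}
db1-user=postgres
db1-path={[db-path]}
db2-host={[host-db-standby]}
db2-user=postgres
db2-path={[db-path]}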
Backup the {[postgres-cluster-demo]} cluster from db-standby:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} --log-level-console=detail backup
This incremental backup shows that most of the files are copied from the db-standby host and only a few are copied from the db-primary host.
{[project]} creates a standby backup that is identical to a backup performed on the primary. It does this by starting/stopping the backup on the db-primary host, copying only files that are replicated from the db-standby host, then copying the remaining few files from the db-primary host. This means that logs and statistics from the primary database will be included in the backup.
Upgrading {[postgres]}
The following instructions are not meant to be a comprehensive guide for upgrading {[postgres]}; rather, they outline the general process for upgrading a primary and standby with the intent of demonstrating the steps required to reconfigure {[project]}. It is recommended that a backup be taken prior to upgrading.
Install new version:
{[postgres-install-upgrade]} -y
Create the new cluster. If the install creates a default cluster, then remove it to avoid confusion.
Drop default cluster and create the new demo cluster:
pg_dropcluster {[pg-version-upgrade]} main
/usr/lib/postgresql/{[pg-version-upgrade]}/bin/initdb \
    -D {[db-path-upgrade]} -k -A peer
{[db-cluster-create-upgrade]}
Stop the old cluster on the standby since it will be restored from the newly upgraded cluster to ensure the database system id is identical on both the primary and standby.
Stop old cluster and drop default cluster if created:
{[db-cluster-stop]}
{[postgres-install-upgrade]} -y
pg_dropcluster {[pg-version-upgrade]} main
Stop the old cluster on the primary and perform the upgrade.
Update the configuration on all systems to point to the new cluster.
Upgrade the db-path on db-primary:
db-path={[db-path-upgrade]}

Upgrade the db-path on db-standby:
db-path={[db-path-upgrade]}

Upgrade db1-path and db2-path and disable backup from standby on the backup host:
db1-path={[db-path-upgrade]}
db2-path={[db-path-upgrade]}
backup-standby=n

Copy hba configuration:
cp {[postgres-hba-demo]} {[postgres-hba-demo-upgrade]}
Before starting the new cluster, the stanza-upgrade command must be run on the server where the repository is located.
Upgrade the stanza:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-no-online \
    {[dash]}-log-level-console=info stanza-upgrade
Start the new cluster and confirm it is successfully installed.
Start new cluster:
{[db-cluster-start-upgrade]}
Test configuration using the check command. The warning on the backup host regarding the standby being down is expected and can be ignored.
Remove old cluster:
pg_dropcluster {[pg-version]} {[postgres-cluster-demo]}
rm -rf {[db-path]}
Run a full backup on the new cluster and then restore the standby from the backup. The backup type will automatically be changed to full if incr or diff is requested.
Run a full backup:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-type=full backup
Install the new binaries on the standby and create the cluster.
Remove old cluster and initialize new one:
pg_dropcluster {[pg-version]} {[postgres-cluster-demo]}
rm -rf {[db-path]}
/usr/lib/postgresql/{[pg-version-upgrade]}/bin/initdb \
    -D {[db-path-upgrade]} -k -A peer
{[db-cluster-create-upgrade]}

Restore the {[postgres-cluster-demo]} standby cluster:
{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} {[dash]}-delta restore

Configure hot_standby = on

Start {[postgres]}:
{[db-cluster-start-upgrade]}
{[db-cluster-wait]}
Backup from standby can be enabled now that the standby is restored.