From d4418e7764bf6b3d3e8b6d2bc204cfd0e00bca13 Mon Sep 17 00:00:00 2001
From: David Steele
Date: Wed, 21 Feb 2018 18:15:40 -0500
Subject: [PATCH] Rename pg-primary and pg-standby variables to pg1 and pg2.

It would be better if the hostnames were also pg1 and pg2 to illustrate that
primaries and standbys can change hosts, but at this time the configuration
ends up being confusing since pg1, pg2, etc. are also used in the option
naming. So, for now leave the names as pg-primary and pg-standby to avoid
confusion.
---
 doc/xml/user-guide.xml | 320 +++++++++++++++++++++--------------------
 1 file changed, 162 insertions(+), 158 deletions(-)

diff --git a/doc/xml/user-guide.xml b/doc/xml/user-guide.xml
index 78325e7cb..04d39184a 100644
--- a/doc/xml/user-guide.xml
+++ b/doc/xml/user-guide.xml
@@ -88,19 +88,23 @@ {[pgbackrest-base-dir]}:/backrest pgbackrest/test
- s3-server
+ s3
+ s3-server

- pg-primary
- {[host-user]}
- {[image-repo]}:{[host-os]}-base
- {[host-mount]}
+ pg1
+ pg-primary
+ {[host-user]}
+ {[image-repo]}:{[host-os]}-base
+ {[host-mount]}

- pg-standby
- {[host-pg-primary-user]}
- {[host-pg-primary-image]}
- {[host-mount]}
+ pg2
+ pg-standby
+ {[host-pg1-user]}
+ {[host-pg1-image]}
+ {[host-mount]}

- repo1
+ repo1
+ repository
 {[host-user]}
 {[image-repo]}:{[host-os]}-base
 {[host-mount]}
@@ -424,15 +428,15 @@ Installation
-
+
-

A new host named pg-primary is created to contain the demo cluster and run examples.

+

A new host named pg1 is created to contain the demo cluster and run examples.

- +

If <backrest/> has been installed before it is best to be sure that no prior copies of it are still installed. Depending on how old the version of pgBackRest is it may have been installed in a few different locations. The following commands will remove all prior versions of pgBackRest.

- + Remove prior <backrest/> installations @@ -456,14 +460,14 @@ - {[host-pg-primary]} + {[host-pg1]} postgres postgres

<backrest/> should now be properly installed, but it is best to check. If any dependencies were missed then you will get an error when running <backrest/> from the command line.

- + Make sure the installation worked @@ -484,7 +488,7 @@

Creating the demo cluster is optional but strongly recommended, especially for new users, since the example commands in the user guide reference the demo cluster; the examples assume the demo cluster is running on the default port (i.e. 5432). The cluster will not be started until a later section because there is still some configuration to do.

- + Create the demo cluster @@ -500,7 +504,7 @@

By default <postgres/> will only accept local connections. The examples in this guide will require connections from other servers so listen_addresses is configured to listen on all interfaces. This may not be appropriate for secure installations.

- + Set <pg-option>listen_addresses</pg-option> '*' @@ -508,7 +512,7 @@

For demonstration purposes the log_line_prefix setting will be minimally configured. This keeps the log output as brief as possible to better illustrate important information.

- + Set <pg-option>log_line_prefix</pg-option> '' @@ -516,7 +520,7 @@

By default {[user-guide-os]} includes the day of the week in the log filename. This makes automating the user guide a bit more complicated, so log_filename is set to a constant.

- + Set <pg-option>log_filename</pg-option> 'postgresql.log' @@ -537,7 +541,7 @@

When creating the {[backrest-config-demo]} file, the database owner (usually postgres) must be granted read privileges.
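As a rough sketch (the stanza name demo and the data directory below are placeholders; the guide's {[backrest-config-demo]} and {[pg-path]} variables supply the real values), the configuration might look like:

    # pgbackrest.conf -- stanza section naming the cluster data directory
    [demo]
    pg1-path=/var/lib/postgresql/10/demo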

- + Configure the <postgres/> cluster data directory {[pg-path]} @@ -558,14 +562,14 @@

For this demonstration the repository will be stored on the same host as the <postgres/> server. This is the simplest configuration and is useful in cases where traditional backup software is employed to back up the database host.

- {[host-pg-primary]} + {[host-pg1]} postgres postgres

The repository path must be configured so <backrest/> knows where to find it.

- + Configure the <backrest/> repository path {[backrest-repo-path]} @@ -578,7 +582,7 @@

Backing up a running <postgres/> cluster requires WAL archiving to be enabled. Note that at least one WAL segment will be created during the backup process even if no explicit writes are made to the cluster.
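The settings involved look roughly like the following sketch (values are illustrative and the wal_level value varies by <postgres/> version):

    # postgresql.conf
    archive_mode = on
    archive_command = 'pgbackrest --stanza=demo archive-push %p'
    max_wal_senders = 3
    wal_level = hot_standby    # 'replica' on 9.6 and later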

- + Configure archive settings '{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} archive-push %p' @@ -591,7 +595,7 @@

The cluster must be restarted after making these changes and before performing a backup.

- + Restart the {[postgres-cluster-demo]} cluster @@ -612,7 +616,7 @@

<backrest/> expires backups based on retention options.

- + Configure retention to 2 full backups 2 @@ -630,7 +634,7 @@

It is important to use a long, random passphrase for the cipher key. A good way to generate one is to run: openssl rand -base64 48.
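A minimal sketch of the encryption settings, assuming the passphrase is generated as described above (the stanza layout and passphrase shown are placeholders):

    openssl rand -base64 48

    # pgbackrest.conf
    [global]
    repo1-cipher-type=aes-256-cbc
    repo1-cipher-pass=zWaf6XtpjIVZC5444yXB+cgFDFl7MxGlgkZSaoPvTGirhPygu4jOKOXf9LO4vjfO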

- + Configure <backrest/> repository encryption {[backrest-repo-cipher-type]} @@ -646,7 +650,7 @@

The stanza-create command must be run on the host where the repository is located to initialize the stanza. It is recommended that the check command be run after stanza-create to ensure archiving and backups are properly configured.
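For example (the stanza name demo stands in for {[postgres-cluster-demo]}):

    sudo -u postgres pgbackrest --stanza=demo --log-level-console=info stanza-create
    sudo -u postgres pgbackrest --stanza=demo --log-level-console=info check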

- + Create the stanza and check the configuration @@ -661,7 +665,7 @@ Check the Configuration - + Check the configuration @@ -671,7 +675,7 @@ - + Example of an invalid configuration @@ -687,7 +691,7 @@

To perform a backup of the <postgres/> cluster run <backrest/> with the backup command.

- + Backup the {[postgres-cluster-demo]} cluster @@ -705,7 +709,7 @@

The type option can be used to specify a full or differential backup.
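A sketch of the backup types, assuming the demo stanza (an incremental backup falls back to a full backup if no prior backup exists):

    sudo -u postgres pgbackrest --stanza=demo --type=full backup
    sudo -u postgres pgbackrest --stanza=demo --type=diff backup
    sudo -u postgres pgbackrest --stanza=demo --type=incr backup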

- + Differential backup of the {[postgres-cluster-demo]} cluster @@ -743,7 +747,7 @@

Use the info command to get information about backups.
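For example (the stanza name is a placeholder; omitting --stanza reports on all stanzas):

    sudo -u postgres pgbackrest info
    sudo -u postgres pgbackrest --stanza=demo info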

- + Get info for the {[postgres-cluster-demo]} cluster @@ -771,7 +775,7 @@

Backups can protect you from a number of disaster scenarios, the most common of which are hardware failure and data corruption. The easiest way to simulate data corruption is to remove an important cluster file.

- + Stop the {[postgres-cluster-demo]} cluster and delete the <file>pg_control</file> file @@ -785,7 +789,7 @@

Starting the cluster without this important file will result in an error.

- + Attempt to start the corrupted {[postgres-cluster-demo]} cluster @@ -814,7 +818,7 @@

To restore a backup of the <postgres/> cluster run <backrest/> with the restore command. The cluster needs to be stopped (in this case it is already stopped) and all files must be removed from the data directory.
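A minimal sketch, assuming a placeholder data directory and the demo stanza (the cluster must be started again afterward with the OS-appropriate command):

    sudo -u postgres find /var/lib/postgresql/10/demo -mindepth 1 -delete
    sudo -u postgres pgbackrest --stanza=demo restore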

- + Remove old files from {[postgres-cluster-demo]} cluster @@ -822,7 +826,7 @@ - + Restore the {[postgres-cluster-demo]} cluster and start <postgres/> @@ -856,7 +860,7 @@

By default <backrest/> will wait for the next regularly scheduled checkpoint before starting a backup. Depending on the checkpoint_timeout and checkpoint_segments settings in <postgres/> it may be quite some time before a checkpoint completes and the backup can begin.

- + Incremental backup of the {[postgres-cluster-demo]} cluster with the regularly scheduled checkpoint @@ -868,13 +872,13 @@

When {[dash]}-start-fast is passed on the command-line or start-fast=y is set in {[backrest-config-demo]} an immediate checkpoint is requested and the backup will start more quickly. This is convenient for testing and for ad-hoc backups. For instance, if a backup is being taken at the beginning of a release window it makes no sense to wait for a checkpoint. Since regularly scheduled backups generally only happen once per day it is unlikely that enabling start-fast in {[backrest-config-demo]} will negatively affect performance. However, for high-volume transactional systems you may want to pass {[dash]}-start-fast on the command-line instead. Alternately, it is possible to override the setting in the configuration file by passing {[dash]}-no-start-fast on the command-line.

- + Enable the <br-option>start-fast</br-option> option y - + Incremental backup of the {[postgres-cluster-demo]} cluster with an immediate checkpoint @@ -893,7 +897,7 @@

Here an error is intentionally caused by removing repository permissions.

- + Revoke write privileges in the <backrest/> repository and attempt a backup @@ -909,7 +913,7 @@

Even when the permissions are fixed <backrest/> will still be unable to perform a backup because the cluster is stuck in backup mode.

- + Restore write privileges in the <backrest/> repository and attempt a backup @@ -925,7 +929,7 @@

Enabling the stop-auto option allows <backrest/> to stop the current backup if it detects that no other backup process is running.

- + Enable the <br-option>stop-auto</br-option> option y @@ -933,7 +937,7 @@

Now <backrest/> will stop the old backup and start a new one so the process completes successfully.

- + Perform an incremental backup @@ -968,7 +972,7 @@

Set repo1-retention-full to the number of full backups required. New backups must be completed before expiration will occur &mdash; that means if repo1-retention-full=2 then there will be three full backups stored before the oldest one is expired.
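A sketch of the retention settings discussed in this section (repo1-retention-diff is covered below):

    # pgbackrest.conf
    [global]
    repo1-retention-full=2
    repo1-retention-diff=1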

- + Configure <br-option>repo1-retention-full</br-option> 2 @@ -976,7 +980,7 @@

repo1-retention-full is set to 2 but currently there is only one full backup, so the next full backup to run will not expire any full backups.

- + Perform a full backup @@ -992,7 +996,7 @@

Archive is expired because WAL segments were generated before the oldest backup. These are not useful for recovery &mdash; only WAL segments generated after a backup can be used to recover that backup.

- + Perform a full backup @@ -1011,7 +1015,7 @@

Set repo1-retention-diff to the number of differential backups required. Differentials only rely on the prior full backup so it is possible to create a rolling set of differentials for the last day or more. This allows quick restores to recent points-in-time but reduces overall space consumption.

- + Configure <br-option>repo1-retention-diff</br-option> 1 @@ -1019,7 +1023,7 @@

repo1-retention-diff is set to 1 so two differentials will need to be performed before one is expired. An incremental backup is added to demonstrate incremental expiration. Incremental backups cannot be expired independently &mdash; they are always expired with their related full or differential backup.

- + Perform differential and incremental backups @@ -1037,7 +1041,7 @@

Now performing a differential backup will expire the previous differential and incremental backups leaving only one differential backup.

- + Perform a differential backup @@ -1056,13 +1060,13 @@

Expiring archive will never remove WAL segments that are required to make a backup consistent. However, since Point-in-Time-Recovery (PITR) only works on a continuous WAL stream, care should be taken when aggressively expiring archive outside of the normal backup expiration process.

- + Configure <br-option>repo1-retention-diff</br-option> 2 - + Perform differential backup @@ -1087,7 +1091,7 @@ - + Expire archive @@ -1115,7 +1119,7 @@

Restore a Backup in Quick Start required the database cluster directory to be cleaned before the restore could be performed. The delta option allows <backrest/> to automatically determine which files in the database cluster directory can be preserved and which ones need to be restored from the backup &mdash; it also removes files not present in the backup manifest so it will dispose of divergent changes. This is accomplished by calculating a SHA-1 cryptographic hash for each file in the database cluster directory. If the SHA-1 hash does not match the hash stored in the backup then that file will be restored. This operation is very efficient when combined with the process-max option. Since the <postgres/> server is shut down during the restore, a larger number of processes can be used than might be desirable during a backup when the server is running.
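A minimal sketch of a delta restore, assuming the demo stanza and an illustrative process count:

    sudo -u postgres pgbackrest --stanza=demo --delta --process-max=4 restore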

- + Stop the {[postgres-cluster-demo]} cluster, perform delta restore @@ -1129,7 +1133,7 @@ - + Restart <postgres/> @@ -1150,7 +1154,7 @@

To demonstrate this feature two databases are created: test1 and test2. A fresh backup is run so <backrest/> is aware of the new databases.

- + Create two test databases and perform a backup @@ -1172,7 +1176,7 @@

Each test database will be seeded with tables and data to demonstrate that recovery works with selective restore.

- + Create a test table in each database @@ -1192,7 +1196,7 @@

One of the main reasons to use selective restore is to save space. The size of the test1 database is shown here so it can be compared with the disk utilization after a selective restore.

- + Show space used by test1 database @@ -1204,7 +1208,7 @@

Stop the cluster and restore only the test2 database. Built-in databases (template0, template1, and postgres) are always restored.
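A sketch of a selective restore, assuming the demo stanza:

    sudo -u postgres pgbackrest --stanza=demo --delta --db-include=test2 restore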

- + Restore from last backup including only the test2 database @@ -1227,7 +1231,7 @@

Once recovery is complete the test2 database will contain all previously created tables and data.

- + Demonstrate that the test2 database was recovered @@ -1239,7 +1243,7 @@

The test1 database, despite successful recovery, is not accessible. This is because the entire database was restored as sparse, zeroed files. <postgres/> can successfully apply WAL on the zeroed files but the database as a whole will not be valid because key files contain no data. This is purposeful, to prevent the database from being accidentally used when it might contain partial data that was applied during WAL replay.

- + Attempting to connect to the test1 database will produce an error @@ -1254,7 +1258,7 @@

It is clear that the test1 database uses far less disk space during the selective restore than it would have if the entire database had been restored.

- + Show space used by test1 database after recovery @@ -1266,7 +1270,7 @@

At this point the only action that can be taken on the invalid test1 database is drop database. <backrest/> does not automatically drop the database since this cannot be done until recovery is complete and the cluster is accessible.

- + Drop the test1 database @@ -1278,7 +1282,7 @@

Now that the invalid test1 database has been dropped only the test2 and built-in databases remain.

- + List remaining databases @@ -1299,7 +1303,7 @@

Point-in-Time Recovery (PITR) allows the WAL to be played from the last backup to a specified time, transaction id, or recovery point. For common recovery scenarios time-based recovery is arguably the most useful. A typical recovery scenario is to restore a table that was accidentally dropped or data that was accidentally deleted. Recovering a dropped table is more dramatic so that's the example given here but deleted data would be recovered in exactly the same way.

- + Backup the {[postgres-cluster-demo]} cluster and create a table with very important data @@ -1320,7 +1324,7 @@

It is important to represent the time as reckoned by <postgres/> and to include timezone offsets. This reduces the possibility of unintended timezone conversions and an unexpected recovery result.

- + Get the time from <postgres/> @@ -1332,7 +1336,7 @@

Now that the time has been recorded the table is dropped. In practice finding the exact time that the table was dropped is a lot harder than in this example. It may not be possible to find the exact time, but some forensic work should be able to get you close.

- + Drop the important table @@ -1346,7 +1350,7 @@

Now the restore can be performed with time-based recovery to bring back the missing table.
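A sketch of a time-based restore, assuming the demo stanza; the timestamp is a placeholder for the time recorded above:

    sudo -u postgres pgbackrest --stanza=demo --delta \
         --type=time "--target=2018-02-21 18:15:00-05" restore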

- + Stop <postgres/>, restore the {[postgres-cluster-demo]} cluster to <id>{[time-recovery-timestamp]}</id>, and display <file>recovery.conf</file> @@ -1370,7 +1374,7 @@

The recovery.conf file has been automatically generated by <backrest/> so <postgres/> can be started immediately. Once <postgres/> has finished recovery the table will exist again and can be queried.

- + Start <postgres/> and check that the important table exists @@ -1389,7 +1393,7 @@

The <postgres/> log also contains valuable information. It will indicate the time and transaction where the recovery stopped and also give the time of the last transaction to be applied.

- + Examine the <postgres/> log output @@ -1400,7 +1404,7 @@

This example was rigged to give the correct result. If a backup after the required time is chosen then <backrest/> will not be able to recover the lost table. <postgres/> can only play forward, not backward. To demonstrate this the important table must be dropped (again).

- + Drop the important table (again) @@ -1414,7 +1418,7 @@

Now take a new backup and attempt recovery from the new backup.

- + Perform a backup then attempt recovery from that backup @@ -1454,7 +1458,7 @@

Looking at the log output it's not obvious that recovery failed to restore the table. The key is to look for the presence of the recovery stopping before... and last completed transaction... log messages. If they are not present then the recovery to the specified point-in-time was not successful.

- + Examine the <postgres/> log output to discover the recovery was not successful @@ -1465,7 +1469,7 @@

Using an earlier backup will allow <postgres/> to play forward to the correct time. The info command can be used to find the next-to-last backup.

- + Get backup info for the {[postgres-cluster-demo]} cluster @@ -1476,7 +1480,7 @@

The default behavior for restore is to use the last backup but an earlier backup can be specified with the {[dash]}-set option.
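A sketch, using a hypothetical backup label taken from the info output and a placeholder target time:

    sudo -u postgres pgbackrest --stanza=demo --delta --set=20180221-181500F \
         --type=time "--target=2018-02-21 18:15:00-05" restore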

- + Stop <postgres/>, restore from the selected backup, and start <postgres/> @@ -1511,7 +1515,7 @@

Now the log output will contain the expected recovery stopping before... and last completed transaction... messages showing that the recovery was successful.

- + Examine the <postgres/> log output for log messages indicating success @@ -1527,14 +1531,14 @@

<backrest/> supports storing repositories in Amazon S3. The bucket used to store the repository must be created in advance &mdash; <backrest/> will not do it automatically.
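A rough sketch of an S3 repository configuration (the region and keys are placeholders; the bucket name matches the one used later in this section):

    # pgbackrest.conf
    [global]
    repo1-type=s3
    repo1-path=/demo-repo
    repo1-s3-bucket=demo-bucket
    repo1-s3-endpoint=s3.amazonaws.com
    repo1-s3-region=us-east-1
    repo1-s3-key=accessKey1
    repo1-s3-key-secret=verySecretKey1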

- + Clear the cipher settings none - + Configure <proper>S3</proper> s3 @@ -1551,12 +1555,12 @@

Commands are run exactly as if the repository were stored on a local disk.

- + Create the stanza - echo "{[host-s3-server-ip]} demo-bucket.s3.amazonaws.com s3.amazonaws.com" | + echo "{[host-s3-ip]} demo-bucket.s3.amazonaws.com s3.amazonaws.com" | sudo tee -a /etc/hosts @@ -1572,7 +1576,7 @@

File creation time in S3 is relatively slow so <backrest/> commands benefit from increasing process-max to parallelize file creation.

- + Backup the {[postgres-cluster-demo]} cluster @@ -1589,7 +1593,7 @@ - + Stop <postgres/> cluster to be removed @@ -1597,7 +1601,7 @@ - + Stop <backrest/> for the stanza @@ -1606,7 +1610,7 @@ - + Delete the stanza @@ -1631,7 +1635,7 @@

A new host named repository is created to store the cluster backups.

- +

The {[br-user]} user is created to own the repository. Any user can own the repository but it is best not to use postgres (if it exists) to avoid confusion.

@@ -1682,7 +1686,7 @@
- {[host-pg-primary]} + {[host-pg1]} @@ -1696,13 +1700,13 @@ {[backrest-repo-path]}
-

The repository host must be configured with the pg-primary host/user and database path. The primary will be configured as db1 to allow a standby to be added later.

+

The repository host must be configured with the {[host-pg1]} host/user and database path. The primary will be configured as db1 to allow a standby to be added later.

Configure <br-option>pg1-host</br-option>/<br-option>pg1-host-user</br-option> and <br-option>pg1-path</br-option> {[pg-path]} - {[host-pg-primary]} + {[host-pg1]} postgres y @@ -1714,7 +1718,7 @@

The database host must be configured with the repository host/user. The default for the repo1-host-user option is pgbackrest. If the postgres user does restores on the repository host it is best not to also allow the postgres user to perform backups. However, the postgres user can read the repository directly if it is in the same group as the pgbackrest user.
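Putting the two sides together, a minimal sketch (the host names match this guide; the paths are placeholders):

    # on the repository host
    [demo]
    pg1-host=pg-primary
    pg1-host-user=postgres
    pg1-path=/var/lib/postgresql/10/demo

    # on the database host
    [global]
    repo1-host=repository
    repo1-host-user=pgbackrest

    [demo]
    pg1-path=/var/lib/postgresql/10/demo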

- + Configure <br-option>repo1-host</br-option>/<br-option>repo1-host-user</br-option> {[pg-path]} @@ -1741,7 +1745,7 @@

Check that the configuration is correct on both the database and repository hosts. More information about the check command can be found in Check the Configuration.

- + Check the configuration @@ -1781,7 +1785,7 @@

To perform a restore of the <postgres/> cluster run <backrest/> with the restore command on the database host.

- + Stop the {[postgres-cluster-demo]} cluster, restore, and restart <postgres/> @@ -1822,7 +1826,7 @@

NOTE: In the original implementation of asynchronous archiving, WAL segments were copied to the spool directory before compression and transfer. The new implementation copies WAL directly from the pg_xlog directory. If asynchronous archiving was utilized in v1.12 or prior, read the v1.13 release notes carefully before upgrading.

- + Create the spool directory @@ -1835,7 +1839,7 @@

The spool path must be configured and asynchronous archiving enabled. Asynchronous archiving automatically confers some benefit by reducing the number of ssh connections made to the backup server, but setting process-max can drastically improve performance. Be sure not to set process-max so high that it affects normal database operations.
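A sketch of the asynchronous archiving settings; the per-command section is optional and process-max may instead be set globally:

    # pgbackrest.conf
    [global]
    archive-async=y
    spool-path=/var/spool/pgbackrest

    [global:archive-push]
    process-max=2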

- + Configure the spool path and asynchronous archiving {[spool-path]} @@ -1845,7 +1849,7 @@

The archive-async.log file can be used to monitor the activity of the asynchronous process. A good way to test this is to quickly push a number of WAL segments.

- + Test parallel asynchronous archiving @@ -1871,7 +1875,7 @@

Now the log file will contain parallel, asynchronous activity.

- + Check results in the log @@ -1941,7 +1945,7 @@

Sometimes it is useful to prevent <backrest/> from running on a system. For example, when failing over from a primary to a standby it's best to prevent <backrest/> from running on the old primary in case <postgres/> gets restarted or can't be completely killed. This will also prevent <backrest/> from running on cron.
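The sections below walk through each case; as a compact sketch of the commands involved (run here as the postgres user for illustration; the appropriate user depends on who owns the repository):

    sudo -u postgres pgbackrest stop                   # stop pgBackRest for all stanzas
    sudo -u postgres pgbackrest --force stop           # also terminate running processes
    sudo -u postgres pgbackrest start                  # allow pgBackRest to run again
    sudo -u postgres pgbackrest --stanza=demo stop     # stop a single stanza
    sudo -u postgres pgbackrest --stanza=demo start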

- + Stop the <backrest/> services @@ -1962,7 +1966,7 @@

Specify the --force option to terminate any <backrest/> processes that are currently running. If <backrest/> is already stopped then stopping again will generate a warning.

- + Stop the <backrest/> services again @@ -1972,7 +1976,7 @@

Start <backrest/> processes again with the start command.

- + Start the <backrest/> services @@ -1982,7 +1986,7 @@

It is also possible to stop <backrest/> for a single stanza.

- + Stop <backrest/> services for the <id>demo</id> stanza @@ -2003,7 +2007,7 @@

The stanza must also be specified when starting the <backrest/> processes for a single stanza.

- + Start the <backrest/> services for the <id>demo</id> stanza @@ -2022,12 +2026,12 @@
Installation -

A new host named pg-standby is created to run the standby.

+

A new host named {[host-pg2]} is created to run the standby.

- + - {[host-pg-standby]} + {[host-pg2]} postgres postgres @@ -2042,7 +2046,7 @@ - {[host-pg-standby]} + {[host-pg2]}
@@ -2052,9 +2056,9 @@

A hot standby performs replication using the WAL archive and allows read-only queries.

-

<postgres/> configuration is very similar to pg-primary except that the standby_mode setting will be enabled to keep the cluster in recovery mode when the end of the WAL stream has been reached.

+

<postgres/> configuration is very similar to {[host-pg1]} except that the standby_mode setting will be enabled to keep the cluster in recovery mode when the end of the WAL stream has been reached.

- + Configure <backrest/> on the standby {[pg-path]} @@ -2069,7 +2073,7 @@

The demo cluster must be created (even though it will be overwritten on restore) in order to create the configuration files.

- + Create demo cluster @@ -2079,7 +2083,7 @@

Create the path where <postgres/> will be restored.

- + Create <postgres/> path @@ -2091,7 +2095,7 @@

Now the standby can be created with the restore command.

- + Restore the {[postgres-cluster-demo]} standby cluster @@ -2109,9 +2113,9 @@

Note that the standby_mode setting has been written into the recovery.conf file. Configuring recovery settings in <backrest/> means that the recovery.conf file does not need to be stored elsewhere since it will be properly recreated with each restore. The --type=preserve option can be used with the restore to leave the existing recovery.conf file in place if that behavior is preferred.

-

The hot_standby setting must be enabled before starting <postgres/> to allow read-only connections on pg-standby. Otherwise, connection attempts will be refused.

+

The hot_standby setting must be enabled before starting <postgres/> to allow read-only connections on {[host-pg2]}. Otherwise, connection attempts will be refused.

- + Enable <pg-option>hot_standby</pg-option> and configure logging on @@ -2119,7 +2123,7 @@ '' - + Start <postgres/> @@ -2137,7 +2141,7 @@

The <postgres/> log gives valuable information about the recovery. Note especially that the cluster has entered standby mode and is ready to accept read-only connections.

- + Examine the <postgres/> log output for log messages indicating success @@ -2146,9 +2150,9 @@ -

An easy way to test that replication is properly configured is to create a table on pg-primary.

+

An easy way to test that replication is properly configured is to create a table on {[host-pg1]}.

- + Create a new table on the primary @@ -2164,9 +2168,9 @@ -

And then query the same table on pg-standby.

+

And then query the same table on {[host-pg2]}.

- + Query new table on the standby @@ -2175,11 +2179,11 @@ -

So, what went wrong? Since <postgres/> is pulling WAL segments from the archive to perform replication, changes won't be seen on the standby until the WAL segment that contains those changes is pushed from pg-primary.

+

So, what went wrong? Since <postgres/> is pulling WAL segments from the archive to perform replication, changes won't be seen on the standby until the WAL segment that contains those changes is pushed from {[host-pg1]}.

This can be done manually by calling pg_switch_xlog() which pushes the current WAL segment to the archive (a new WAL segment is created to contain further changes).

- + Call <code>pg_switch_xlog()</code> @@ -2189,9 +2193,9 @@ -

Now after a short delay the table will appear on pg-standby.

+

Now after a short delay the table will appear on {[host-pg2]}.

- + Now the new table exists on the standby (may require a few retries) @@ -2203,7 +2207,7 @@

Check the standby configuration for access to the repository.

- + Check the configuration @@ -2221,7 +2225,7 @@

Streaming replication requires a user with the replication privilege.
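For example, creating such a user with the role name and password used later in this section:

    sudo -u postgres psql -c \
         "CREATE USER replicator WITH REPLICATION PASSWORD 'jw8s0F4'"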

- + Create replication user @@ -2232,15 +2236,15 @@ -

The pg_hba.conf file must be updated to allow the standby to connect as the replication user. Be sure to replace the IP address below with the actual IP address of your pg-primary. A reload will be required after modifying the pg_hba.conf file.

+

The pg_hba.conf file must be updated to allow the standby to connect as the replication user. Be sure to replace the IP address below with the actual IP address of your {[host-pg1]}. A reload will be required after modifying the pg_hba.conf file.

- + Create <file>pg_hba.conf</file> entry for replication user sh -c 'echo - "host replication replicator {[host-pg-standby-ip]}/32 md5" + "host replication replicator {[host-pg2-ip]}/32 md5" >> {[postgres-hba-demo]}' @@ -2252,21 +2256,21 @@

The standby needs to know how to contact the primary so the primary_conninfo setting will be configured in <backrest/>.
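A sketch of how this might look in the standby's configuration, using the recovery-option setting (the IP address is a placeholder for {[host-pg1-ip]}):

    # pgbackrest.conf on the standby
    [demo]
    recovery-option=primary_conninfo=host=172.17.0.2 port=5432 user=replicator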

- + Set <pg-option>primary_conninfo</pg-option> - primary_conninfo=host={[host-pg-primary-ip]} port=5432 user=replicator + primary_conninfo=host={[host-pg1-ip]} port=5432 user=replicator

It is possible to configure a password in the primary_conninfo setting but using a .pgpass file is more flexible and secure.

- + Configure the replication password in the <file>.pgpass</file> file. sh -c 'echo - "{[host-pg-primary-ip]}:*:replication:replicator:jw8s0F4" + "{[host-pg1-ip]}:*:replication:replicator:jw8s0F4" >> {[postgres-pgpass]}' @@ -2278,7 +2282,7 @@

Now the standby can be created with the restore command.

- + Stop <postgres/> and restore the {[postgres-cluster-demo]} standby cluster @@ -2294,15 +2298,15 @@ -

By default {[user-guide-os]} stores the postgresql.conf file in the data directory. That means the change made to postgresql.conf was overwritten by the last restore and the hot_standby setting must be enabled again. Other solutions to this problem are to store the postgresql.conf file elsewhere or to enable the hot_standby setting on the pg-primary host where it will be ignored.

+

By default {[user-guide-os]} stores the postgresql.conf file in the data directory. That means the change made to postgresql.conf was overwritten by the last restore and the hot_standby setting must be enabled again. Other solutions to this problem are to store the postgresql.conf file elsewhere or to enable the hot_standby setting on the {[host-pg1]} host where it will be ignored.

- + Enable <pg-option>hot_standby</pg-option> on - + Start <postgres/> @@ -2320,7 +2324,7 @@

The <postgres/> log will confirm that streaming replication has started.

- + Examine the <postgres/> log output for log messages indicating success @@ -2329,9 +2333,9 @@ -

Now when a table is created on pg-primary it will appear on pg-standby quickly and without the need to call pg_switch_xlog().

+

Now when a table is created on {[host-pg1]} it will appear on {[host-pg2]} quickly and without the need to call pg_switch_xlog().

- + Create a new table on the primary @@ -2347,7 +2351,7 @@ - + Query table on the standby @@ -2363,13 +2367,13 @@
Backup from a Standby -

<backrest/> can perform backups on a standby instead of the primary. Standby backups require the pg-standby host to be configured and the backup-standby option enabled. If more than one standby is configured then the first running standby found will be used for the backup.

+

<backrest/> can perform backups on a standby instead of the primary. Standby backups require the {[host-pg2]} host to be configured and the backup-standby option enabled. If more than one standby is configured then the first running standby found will be used for the backup.

Configure <br-option>pg2-host</br-option>/<br-option>pg2-host-user</br-option> and <br-option>pg2-path</br-option> {[pg-path]} - {[host-pg-standby]} + {[host-pg2]} postgres y @@ -2378,17 +2382,17 @@

Both the primary and standby databases are required to perform the backup, though the vast majority of the files will be copied from the standby to reduce load on the primary. The database hosts can be configured in any order. <backrest/> will automatically determine which is the primary and which is the standby.
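A minimal sketch of the repository host configuration with both database hosts defined and standby backup enabled (the paths are placeholders):

    # pgbackrest.conf on the repository host
    [global]
    backup-standby=y

    [demo]
    pg1-host=pg-primary
    pg1-path=/var/lib/postgresql/10/demo
    pg2-host=pg-standby
    pg2-path=/var/lib/postgresql/10/demo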

- Backup the {[postgres-cluster-demo]} cluster from <host>pg-standby</host> + Backup the {[postgres-cluster-demo]} cluster from <host>pg2</host> {[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} --log-level-console=detail backup - backup file pg-primary|replay on the standby + backup file {[host-pg1]}|replay on the standby -

This incremental backup shows that most of the files are copied from the pg-standby host and only a few are copied from the pg-primary host.

+

This incremental backup shows that most of the files are copied from the {[host-pg2]} host and only a few are copied from the {[host-pg1]} host.

-

<backrest/> creates a standby backup that is identical to a backup performed on the primary. It does this by starting/stopping the backup on the pg-primary host, copying only files that are replicated from the pg-standby host, then copying the remaining few files from the pg-primary host. This means that logs and statistics from the primary database will be included in the backup.

+

<backrest/> creates a standby backup that is identical to a backup performed on the primary. It does this by starting/stopping the backup on the {[host-pg1]} host, copying only files that are replicated from the {[host-pg2]} host, then copying the remaining few files from the {[host-pg1]} host. This means that logs and statistics from the primary database will be included in the backup.

@@ -2398,7 +2402,7 @@

The following instructions are not meant to be a comprehensive guide for upgrading <postgres/>, rather they outline the general process for upgrading a primary and standby with the intent of demonstrating the steps required to reconfigure <backrest/>. It is recommended that a backup be taken prior to upgrading.

- + Stop old cluster and install new <postgres/> version @@ -2417,7 +2421,7 @@

Stop the old cluster on the standby since it will be restored from the newly upgraded cluster.

- + Stop old cluster and install new <postgres/> version @@ -2436,7 +2440,7 @@

Create the new cluster and perform upgrade.

- + Create new cluster and perform the upgrade @@ -2479,7 +2483,7 @@

Configure the new cluster settings and port.

- + Configure <postgres/> '{[project-exe]} {[dash]}-stanza={[postgres-cluster-demo]} archive-push %p' @@ -2493,13 +2497,13 @@

Update the <backrest/> configuration on all systems to point to the new cluster.

- + Upgrade the <br-option>pg1-path</br-option> {[pg-path-upgrade]} - + Upgrade the <br-option>pg-path</br-option> {[pg-path-upgrade]} @@ -2514,7 +2518,7 @@ n - + Copy hba configuration @@ -2537,7 +2541,7 @@

Start the new cluster and confirm it is successfully installed.

- + Start new cluster @@ -2547,7 +2551,7 @@

Test configuration using the check command.

- + Check configuration @@ -2561,7 +2565,7 @@

Remove the old cluster.

- + Remove old cluster @@ -2575,7 +2579,7 @@

Install the new <postgres/> binaries on the standby and create the cluster.

- + Remove old cluster and create the new cluster @@ -2617,7 +2621,7 @@ - + Restore the {[postgres-cluster-demo]} standby cluster @@ -2629,13 +2633,13 @@ - + Configure <postgres/> on - + Start <postgres/> and check the <backrest/> configuration