mirror of https://github.com/pgbackrest/pgbackrest.git synced 2025-03-17 20:58:34 +02:00

v0.50: restore and much more

* Added restore functionality.

* All options can now be set on the command-line making pg_backrest.conf optional.

* De/compression is now performed without threads and checksum/size is calculated in stream.  That means file checksums are no longer optional.

* Added option `--no-start-stop` to allow backups when Postgres is shut down.  If `postmaster.pid` is present then `--force` is required to make the backup run (though if Postgres is running an inconsistent backup will likely be created).  This option was added primarily for the purpose of unit testing, but there may be applications in the real world as well.

* Fixed broken checksums so that they now work with both normal and resumed backups.  Finally realized that checksums and checksum deltas should be functionally separated, which simplified a number of things.  Issue #28 has been created for checksum deltas.

* Fixed an issue where a backup could be resumed from an aborted backup that didn't have the same type and prior backup.

* Removed dependency on Moose.  It wasn't being used extensively and it made for longer startup times.

* Added a checksum for backup.manifest to detect a corrupted/modified manifest.

* Link `latest` always points to the last backup.  This has been added for convenience and to make restores simpler.

* More comprehensive unit tests in all areas.
This commit is contained in:
David Steele 2015-03-25 15:15:55 -04:00
parent 4bc4d97f2b
commit b37d59832f
29 changed files with 11752 additions and 2353 deletions


@ -1,418 +0,0 @@
# PgBackRest Installation
## sample ubuntu 12.04 install
1. Starting from a clean install, update the OS:
```
apt-get update
apt-get upgrade    # reboot if required
```
2. Install ssh, git and cpanminus:
```
apt-get install ssh
apt-get install git
apt-get install cpanminus
```
3. Install Postgres (instructions from http://www.postgresql.org/download/linux/ubuntu/)
Create the file /etc/apt/sources.list.d/pgdg.list, and add a line for the repository:
```
deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main
```
Then run the following:
```
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
apt-get install postgresql-9.3
apt-get install postgresql-server-dev-9.3
```
4. Install required Perl modules:
```
cpanm JSON
cpanm Moose
cpanm Net::OpenSSH
cpanm DBI
cpanm DBD::Pg
cpanm IPC::System::Simple
cpanm Digest::SHA
cpanm IO::Compress::Gzip
cpanm IO::Uncompress::Gunzip
```
5. Install PgBackRest
Backrest can be installed by downloading the most recent release:
https://github.com/dwsteele/pg_backrest/releases
6. To run unit tests:
* Create backrest_dev user
* Setup trusted ssh between test user account and backrest_dev
* Backrest user and test user must be in the same group
## configuration examples
PgBackRest takes some command-line parameters, but depends on a configuration file for most of the settings. The default location for the configuration file is /etc/pg_backrest.conf.
#### configuring postgres for archiving with backrest
Modify the following settings in postgresql.conf:
```
wal_level = archive
archive_mode = on
archive_command = '/path/to/backrest/bin/pg_backrest.pl --stanza=db archive-push %p'
```
Replace the path with the actual location where PgBackRest was installed. The stanza parameter should be changed to the actual stanza name you used for your database in pg_backrest.conf.
#### simple single host install
This configuration is appropriate for a small installation where backups are being made locally or to a remote file system that is mounted locally.
`/etc/pg_backrest.conf`:
```
[global:command]
psql=/usr/bin/psql
[global:backup]
path=/var/lib/postgresql/backup
[global:retention]
full-retention=2
differential-retention=2
archive-retention-type=diff
archive-retention=2
[db]
path=/var/lib/postgresql/9.3/main
```
#### simple multiple host install
This configuration is appropriate for a small installation where backups are being made remotely. Make sure that postgres@db-host has trusted ssh to backrest@backup-host and vice versa.
`/etc/pg_backrest.conf on the db host`:
```
[global:command]
psql=/usr/bin/psql
[global:backup]
host=backup-host.mydomain.com
user=postgres
path=/var/lib/postgresql/backup
[db]
path=/var/lib/postgresql/9.3/main
```
`/etc/pg_backrest.conf on the backup host`:
```
[global:command]
psql=/usr/bin/psql
[global:backup]
path=/var/lib/postgresql/backup
[global:retention]
full-retention=2
archive-retention-type=full
[db]
host=db-host.mydomain.com
user=postgres
path=/var/lib/postgresql/9.3/main
```
## running
PgBackRest is intended to be run from a scheduler like cron as there is no built-in scheduler. PgBackRest does backup rotation, but it is not concerned with when the backups were created. So if two full backups are configured in retention, PgBackRest will keep two full backups no matter whether they occur two hours apart or two weeks apart.
There are four basic operations:
1. Backup
```
/path/to/pg_backrest.pl --stanza=db --type=full backup
```
Run a `full` backup on the `db` stanza. `--type` can also be set to `incr` or `diff` for incremental or differential backups. However, if no `full` backup exists then a `full` backup will be forced even if `incr` or `diff` is requested.
2. Archive Push
```
/path/to/pg_backrest.pl --stanza=db archive-push %p
```
Accepts an archive file from Postgres and pushes it to the backup. `%p` is how Postgres specifies the location of the file to be archived. This command has no other purpose.
3. Archive Get
```
/path/to/pg_backrest.pl --stanza=db archive-get %f %p
```
Retrieves an archive log from the backup. This is used in `recovery.conf` to restore a backup to the last archive log, do PITR, or as an alternative to streaming for keeping a replica up to date. `%f` is how Postgres specifies the archive log it needs, and `%p` is the location where it should be copied.
4. Backup Expire
```
/path/to/pg_backrest.pl --stanza=db expire
```
Expire (rotate) any backups that exceed the defined retention. Expiration is run after every backup, so there's no need to run this command on its own unless you have reduced retention, usually to free up some space.
## structure
PgBackRest stores files in a way that is easy for users to work with directly. Each backup directory has two files and two subdirectories:
1. `backup.manifest` file
Stores information about all the directories, links, and files in the backup. The file is plaintext and should be very clear, but documentation of the format is planned in a future release.
2. `version` file
Contains the PgBackRest version that was used to create the backup.
3. `base` directory
Contains the Postgres data directory as defined by the data_directory setting in postgresql.conf
4. `tablespace` directory
Contains each tablespace in a separate subdirectory. The links in `base/pg_tblspc` are rewritten to this directory.
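As an illustration, the contents of a single backup directory might be laid out roughly as follows (the backup label and tablespace name are hypothetical, not output from a real backup):
```
20150325-153358F/         # one directory per backup (label format illustrative)
├── backup.manifest       # manifest of all directories, links, and files
├── version               # PgBackRest version that created the backup
├── base/                 # copy of the Postgres data directory
└── tablespace/
    └── ts_01/            # one subdirectory per tablespace
```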
## restoring
PgBackRest does not currently have a restore command - this is planned for the near future. However, PgBackRest stores backups in a way that makes restoring very easy. If `compress=n` it is even possible to start Postgres directly on the backup directory.
In order to restore a backup, simply rsync the files from the base backup directory to your data directory. If you used compression, recursively gunzip the files. If you have tablespaces, repeat the process for each tablespace in the backup tablespace directory.
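A manual restore along these lines might be sketched as follows. The backup and data directory paths are purely illustrative, and the cluster must be stopped first; review carefully before running anything like this against real data:
```
# copy the base backup into the empty data directory
rsync -av /var/lib/postgresql/backup/db/latest/base/ /var/lib/postgresql/9.3/main/

# if compress=y was used, recursively decompress the copied files
find /var/lib/postgresql/9.3/main -name '*.gz' -exec gunzip {} +
```
Repeat the same copy for each tablespace directory if the backup contains tablespaces.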
It's good to practice restoring backups in advance of needing to do so.
## configuration options
Each section defines important aspects of the backup. All configuration sections below should be prefixed with `global:` as demonstrated in the configuration samples.
#### command section
The command section defines external commands that are used by PgBackRest.
##### psql key
Defines the full path to psql. psql is used to call pg_start_backup() and pg_stop_backup().
```
required: y
example: psql=/usr/bin/psql
```
##### remote key
Defines the file path to pg_backrest_remote.pl.
Required only if the path to pg_backrest_remote.pl is different on the local and remote systems. If not defined, the remote path will be assumed to be the same as the local path.
```
required: n
example: remote=/home/postgres/backrest/bin/pg_backrest_remote.pl
```
#### command-option section
The command-option section allows arbitrary options to be passed to any command in the command section.
##### psql key
Allows command line parameters to be passed to psql.
```
required: n
example: psql=--port=5433
```
#### log section
The log section defines logging-related settings. The following log levels are supported:
- `off` - No logging at all (not recommended)
- `error` - Log only errors
- `warn` - Log warnings and errors
- `info` - Log info, warnings, and errors
- `debug` - Log debug, info, warnings, and errors
- `trace` - Log trace (very verbose debugging), debug, info, warnings, and errors
##### level-file
Sets file log level.
```
default: info
example: level-file=warn
```
##### level-console
Sets console log level.
```
default: error
example: level-console=info
```
#### backup section
The backup section defines settings related to backup and archiving.
##### host
Sets the backup host.
```
required: n (but must be set if user is defined)
example: host=backup.mydomain.com
```
##### user
Sets user account on the backup host.
```
required: n (but must be set if host is defined)
example: user=backrest
```
##### path
Path where backups are stored on the local or remote host.
```
required: y
example: path=/backup/backrest
```
##### compress
Enable gzip compression. Files stored in the backup are compatible with command-line gzip tools.
```
default: y
example: compress=n
```
##### checksum
Enable SHA-1 checksums. Backup checksums are stored in backup.manifest while archive checksums are stored in the filename.
```
default: y
example: checksum=n
```
##### start_fast
Forces an immediate checkpoint (by passing true to the fast parameter of pg_start_backup()) so the backup begins immediately.
```
default: n
example: start_fast=y
```
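In effect this is the difference between calling `pg_start_backup()` with the fast parameter set to true rather than waiting for the next regular checkpoint, e.g. for the PostgreSQL function signature of this era (the label shown is arbitrary):
```
psql -c "SELECT pg_start_backup('backrest', true);"
```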
##### hardlink
Enable hard-linking of files in differential and incremental backups to their full backups. This gives the appearance that each backup is a full backup. Be careful though, because modifying files that are hard-linked can affect all the backups in the set.
```
default: y
example: hardlink=n
```
##### thread-max
Defines the number of threads to use for backup. Each thread will perform compression and transfer to make the backup run faster, but don't set `thread-max` so high that it impacts database performance.
```
default: 1
example: thread-max=4
```
##### thread-timeout
Maximum amount of time that a backup thread should run. This limits the amount of time that a thread might be stuck due to unforeseen issues during the backup.
```
default: <none>
example: thread-timeout=3600
```
##### archive-required
Are archive logs required to complete the backup? It's a good idea to leave this set to the default unless you are using another method for archiving.
```
default: y
example: archive-required=n
```
#### archive section
The archive section defines parameters for async archiving. This means that the archive files will be stored locally, then a background process will pick them up and move them to the backup.
##### path
Path where archive logs are stored before being asynchronously transferred to the backup. Make sure this is not the same path as the backup is using if the backup is local.
```
required: y
example: path=/backup/archive
```
##### compress-async
When enabled, archive logs are not compressed immediately, but are instead compressed when copied to the backup host. This means that more space will be used on local storage, but the initial archive process will complete more quickly, allowing greater throughput from Postgres.
```
default: n
example: compress-async=y
```
##### archive-max-mb
Limits the amount of archive log that will be written locally. After the limit is reached, the following will happen:
1. PgBackRest will notify Postgres that the archive was successfully backed up, then DROP IT.
2. An error will be logged to the console and also to the Postgres log.
3. A stop file will be written in the lock directory and no more archive files will be backed up until it is removed.
If this occurs then the archive log stream will be interrupted and PITR will not be possible past that point. A new backup will be required to regain full restore capability.
The purpose of this feature is to prevent the log volume from filling up, at which point Postgres will stop all operations. Better to lose the backup than to have the database go down completely.
To start normal archiving again you'll need to remove the stop file, which will be located at `${archive-path}/lock/${stanza}-archive.stop`, where `${archive-path}` is the path set in the archive section and `${stanza}` is the backup stanza.
```
required: n
example: archive-max-mb=1024
```
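For example, with `path=/backup/archive` and a stanza named `db` (both illustrative), archiving could be resumed with:
```
rm /backup/archive/lock/db-archive.stop
```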
#### retention section
The retention section defines how long backups will be retained. Expiration only occurs when the number of complete backups exceeds the allowed retention. In other words, if full-retention is set to 2, then there must be 3 complete backups before the oldest will be expired. Make sure you always have enough space for retention + 1 backups.
##### full-retention
Number of full backups to keep. When a full backup expires, all differential and incremental backups associated with the full backup will also expire. When not defined then all full backups will be kept.
```
required: n
example: full-retention=2
```
##### differential-retention
Number of differential backups to keep. When a differential backup expires, all incremental backups associated with the differential backup will also expire. When not defined all differential backups will be kept.
```
required: n
example: differential-retention=3
```
##### archive-retention-type
Type of backup to use for archive retention (full or differential). If set to full, then PgBackRest will keep archive logs for the number of full backups defined by `archive-retention`. If set to differential, then PgBackRest will keep archive logs for the number of differential backups defined by `archive-retention`.
If not defined then archive logs will be kept indefinitely. In general it is not useful to keep archive logs that are older than the oldest backup, but there may be reasons for doing so.
```
required: n
example: archive-retention-type=full
```
##### archive-retention
Number of backups worth of archive log to keep. If not defined, then `full-retention` will be used when `archive-retention-type=full` and `differential-retention` will be used when `archive-retention-type=differential`.
```
required: n
example: archive-retention=2
```
### stanza sections
A stanza defines a backup for a specific database. The stanza section must define the base database path and host/user if the database is remote. Also, any global configuration sections can be overridden to define stanza-specific settings.
##### host
Sets the database host.
```
required: n (but must be set if user is defined)
example: host=db.mydomain.com
```
##### user
Sets user account on the db host.
```
required: n (but must be set if host is defined)
example: user=postgres
```
##### path
Path to the db data directory (data_directory setting in postgresql.conf).
```
required: y
example: path=/var/postgresql/data
```


@ -1,6 +1,6 @@
The MIT License (MIT)
Copyright (c) 2013-2014 David Steele
Copyright (c) 2013-2015 David Steele
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in

README.md

@ -2,72 +2,780 @@
PgBackRest aims to be a simple backup and restore system that can seamlessly scale up to the largest databases and workloads.
## release notes
Primary PgBackRest features:
### v0.30: core restructuring and unit tests
- Local or remote backup
- Multi-threaded backup/restore for performance
- Checksums
- Safe backups (checks that logs required for consistency are present before backup completes)
- Full, differential, and incremental backups
- Backup rotation (and minimum retention rules with optional separate retention for archive)
- In-stream compression/decompression
- Archiving and retrieval of logs for replicas/restores built in
- Async archiving for very busy systems (including space limits)
- Backup directories are consistent Postgres clusters (when hardlinks are on and compression is off)
- Tablespace support
- Restore delta option
- Restore using timestamp/size or checksum
- Restore remapping base/tablespaces
* Complete rewrite of BackRest::File module to use a custom protocol for remote operations and Perl native GZIP and SHA operations. Compression is performed in threads rather than forked processes.
Instead of relying on traditional backup tools like tar and rsync, PgBackRest implements all backup features internally and uses a custom protocol for communicating with remote systems. Removing reliance on tar and rsync allows for better solutions to database-specific backup issues. The custom remote protocol limits the types of connections that are required to perform a backup which increases security.
* Fairly comprehensive unit tests for all the basic operations. More work to be done here for sure, but then there is always more work to be done on unit tests.
## Install
* Removed dependency on Storable and replaced with a custom ini file implementation.
PgBackRest is written entirely in Perl and uses some non-standard modules that must be installed from CPAN.
* Added much needed documentation (see INSTALL.md).
### Ubuntu 12.04
* Numerous other changes that can only be identified with a diff.
* Starting from a clean install, update the OS:
```
apt-get update
apt-get upgrade    # reboot if required
```
* Install ssh, git and cpanminus:
```
apt-get install ssh
apt-get install git
apt-get install cpanminus
```
* Install Postgres (instructions from http://www.postgresql.org/download/linux/ubuntu/)
### v0.19: improved error reporting/handling
Create the file /etc/apt/sources.list.d/pgdg.list, and add a line for the repository:
```
deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main
```
* Then run the following:
```
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
* Working on improving error handling in the file object. This is not complete, but works well enough to find a few errors that have been causing us problems (notably, find occasionally failing to build the archive async manifest when the system is under load).
apt-get install postgresql-9.3
apt-get install postgresql-server-dev-9.3
```
* Install required Perl modules:
```
cpanm JSON
cpanm Net::OpenSSH
cpanm IPC::System::Simple
cpanm Digest::SHA
cpanm Compress::Zlib
```
* Install PgBackRest
* Found and squashed a nasty bug where file_copy was defaulted to ignore errors. There was also an issue in file_exists that was causing the test to fail when the file actually did exist. Together they could have resulted in a corrupt backup with no errors, though it is very unlikely.
PgBackRest can be installed by downloading the most recent release:
### v0.18: return soft error from archive-get when file is missing
https://github.com/pgmasters/backrest/releases
* The archive-get function returns a 1 when the archive file is missing to differentiate from hard errors (ssh connection failure, file copy error, etc.). This lets Postgres know that the archive stream has terminated normally. However, this does not take into account possible holes in the archive stream.
PgBackRest can be installed anywhere but it's best (though not required) to install it in the same location on all systems.
### v0.17: warn when archive directories cannot be deleted
## Operation
* If an archive directory which should be empty could not be deleted, backrest was throwing an error. There's a good fix for that coming, but for the time being it has been changed to a warning so processing can continue. This was impacting backups, as sometimes the final archive file would not get pushed if the first archive file had been in a different directory (plus some bad luck).
### General Options
### v0.16: RequestTTY=yes for SSH sessions
These options are either global or used by all commands.
* Added RequestTTY=yes to ssh sessions. Hoping this will prevent random lockups.
#### `config` option
### v0.15: added archive-get
By default PgBackRest expects its configuration file to be located at `/etc/pg_backrest.conf`. Use this option to specify another location.
```
required: n
default: /etc/pg_backrest.conf
example: config=/var/lib/backrest/pg_backrest.conf
```
* Added archive-get functionality to aid in restores.
#### `stanza` option
* Added option to force a checkpoint when starting the backup (start_fast=y).
Defines the stanza for the command. A stanza is the configuration for a database that defines where it is located, how it will be backed up, archiving options, etc. Most db servers will only have one Postgres cluster and therefore one stanza, whereas backup servers will have a stanza for every database that needs to be backed up.
### v0.11: minor fixes
Examples of how to configure a stanza can be found in the `configuration examples` section.
```
required: y
example: stanza=main
```
Tweaking a few settings after running backups for about a month.
#### `help` option
* Removed master_stderr_discard option on database SSH connections. There have been occasional lockups and they could be related to issues originally seen in the file code.
Displays the PgBackRest help.
```
required: n
```
* Changed lock file conflicts on backup and expire commands to ERROR. They were set to DEBUG due to a copy-and-paste from the archive locks.
#### `version` option
### v0.10: backup and archiving are functional
Displays the PgBackRest version.
```
required: n
```
This version has been put into production at Resonate, so it does work, but there are a number of major caveats.
### Commands
* No restore functionality, but the backup directories are consistent Postgres data directories. You'll need to either uncompress the files or turn off compression in the backup. Uncompressed backups on a ZFS (or similar) filesystem are a good option because backups can be restored locally via a snapshot to create logical backups or do spot data recovery.
#### `backup` command
* Archiving is single-threaded. This has not posed an issue on our multi-terabyte databases with heavy write volume. Recommend a large WAL volume or to use the async option with a large volume nearby.
Perform a database backup. PgBackRest does not have a built-in scheduler so it's best to run it from cron or some other scheduling mechanism.
* Backups are multi-threaded, but the Net::OpenSSH library does not appear to be 100% threadsafe so it will very occasionally lock up on a thread. There is an overall process timeout that resolves this issue by killing the process. Yes, very ugly.
##### `type` option
* Checksums are lost on any resumed backup. Only the final backup will record checksums when there are multiple resumes. Checksums from previous backups are correctly recorded and a full backup will reset everything.
The following backup types are supported:
* The backup.manifest is being written as Storable because Config::IniFile does not seem to handle large files well. Would definitely like to save these as human-readable text.
- `full` - all database files will be copied and there will be no dependencies on previous backups.
- `incr` - incremental from the last successful backup.
- `diff` - like an incremental backup but always based on the last full backup.
* Absolutely no documentation (outside the code). Well, excepting these release notes.
```
required: n
default: incr
example: --type=full
```
* Lots of other little things and not so little things. Much refactoring to follow.
##### `no-start-stop` option
## recognition
This option prevents PgBackRest from running `pg_start_backup()` and `pg_stop_backup()` on the database. In order for this to work PostgreSQL should be shut down and PgBackRest will generate an error if it is not.
Primary recognition goes to Stephen Frost for all his valuable advice and criticism during the development of PgBackRest. It's a far better piece of software than it would have been without him. Any mistakes should be blamed on me alone.
The purpose of this option is to allow cold backups. The `pg_xlog` directory is copied as-is and `archive-check` is automatically disabled for the backup.
```
required: n
default: n
```
Resonate (http://www.resonateinsights.com) also contributed to the development of PgBackRest and allowed me to install early (but well tested) versions as their primary Postgres backup solution. Works so far!
##### `force` option
When used with `--no-start-stop` a backup will be run even if PgBackRest thinks that PostgreSQL is running. **This option should be used with extreme care as it will likely result in a bad backup.**
There are some scenarios where a backup might still be desirable under these conditions. For example, if a server crashes and the database volume can only be mounted read-only, it would be a good idea to take a backup even if `postmaster.pid` is present. In this case it would be better to revert to the prior backup and replay WAL, but possibly there is a very important transaction in a WAL segment that did not get archived.
```
required: n
default: n
```
##### Example: Full Backup
```
/path/to/pg_backrest.pl --stanza=db --type=full backup
```
Run a `full` backup on the `db` stanza. `--type` can also be set to `incr` or `diff` for incremental or differential backups. However, if no `full` backup exists then a `full` backup will be forced even if `incr` or `diff` is requested.
#### `archive-push` command
Archive a WAL segment to the repository.
##### Example
```
/path/to/pg_backrest.pl --stanza=db archive-push %p
```
Accepts a WAL segment from PostgreSQL and archives it in the repository. `%p` is how PostgreSQL specifies the location of the WAL segment to be archived.
#### `archive-get` command
Get a WAL segment from the repository.
##### Example
```
/path/to/pg_backrest.pl --stanza=db archive-get %f %p
```
Retrieves a WAL segment from the repository. This command is used in `recovery.conf` to restore a backup, perform PITR, or as an alternative to streaming for keeping a replica up to date. `%f` is how PostgreSQL specifies the WAL segment it needs and `%p` is the location where it should be copied.
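For example, a hand-written restore command in PostgreSQL's recovery configuration would look something like this (install path illustrative); note that the `restore` command generates this setting automatically:
```
restore_command = '/path/to/pg_backrest.pl --stanza=db archive-get %f %p'
```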
#### `expire` command
PgBackRest does backup rotation, but is not concerned with when the backups were created. So if two full backups are configured for retention, PgBackRest will keep two full backups no matter whether they occur two hours apart or two weeks apart.
##### Example
```
/path/to/pg_backrest.pl --stanza=db expire
```
Expire (rotate) any backups that exceed the defined retention. Expiration is run automatically after every successful backup, so there is no need to run this command separately unless you have reduced retention, usually to free up some space.
#### `restore` command
Perform a database restore. This command is generally run manually, but there are instances where it might be automated.
##### `set` option
The backup set to be restored. `latest` will restore the latest backup, otherwise provide the name of the backup to restore.
```
required: n
default: latest
example: --set=20150131-153358F_20150131-153401I
```
##### `delta` option
By default the PostgreSQL data and tablespace directories are expected to be present but empty. This option performs a delta restore using checksums.
```
required: n
default: n
```
##### `force` option
By itself this option forces the PostgreSQL data and tablespace paths to be completely overwritten. In combination with `--delta` a timestamp/size delta will be performed instead of using checksums.
```
required: n
default: n
```
##### `type` option
The following recovery types are supported:
- `default` - recover to the end of the archive stream.
- `name` - recover the restore point specified in `--target`.
- `xid` - recover to the transaction id specified in `--target`.
- `time` - recover to the time specified in `--target`.
- `preserve` - preserve the existing `recovery.conf` file.
```
required: n
default: default
example: --type=xid
```
##### `target` option
Defines the recovery target when `--type` is `name`, `xid`, or `time`.
```
required: y
example: "--target=2015-01-30 14:15:11 EST"
```
##### `target-exclusive` option
Defines whether recovery to the target would be exclusive (the default is inclusive) and is only valid when `--type` is `time` or `xid`. For example, using `--target-exclusive` would exclude the contents of transaction `1007` when `--type=xid` and `--target=1007`. See `recovery_target_inclusive` option in the PostgreSQL docs for more information.
```
required: n
default: n
```
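Putting the options together, a hypothetical exclusive recovery to a transaction id might look like this (option placement illustrative):
```
/path/to/pg_backrest.pl --stanza=db --type=xid --target=1007 --target-exclusive restore
```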
##### `target-resume` option
Specifies whether recovery should resume when the recovery target is reached. See `pause_at_recovery_target` in the PostgreSQL docs for more information.
```
required: n
default: n
```
##### `target-timeline` option
Recovers along the specified timeline. See `recovery_target_timeline` in the PostgreSQL docs for more information.
```
required: n
example: --target-timeline=3
```
##### `recovery-setting` option
Recovery settings for `recovery.conf` can be specified with this option. See http://www.postgresql.org/docs/X.X/static/recovery-config.html for details on `recovery.conf` options (replace X.X with your database version). This option can be used multiple times.
Note: `restore_command` will be automatically generated but can be overridden with this option. Be careful about specifying your own `restore_command`, as PgBackRest is designed to handle this for you. Target recovery options (`recovery_target_name`, `recovery_target_time`, etc.) are generated automatically by PgBackRest and should not be set with this option.
Recovery settings can also be set in the `restore:recovery-setting` section of pg_backrest.conf. For example:
```
[restore:recovery-setting]
primary_conninfo=db.mydomain.com
standby_mode=on
```
Since PgBackRest does not start PostgreSQL after writing the `recovery.conf` file, it is always possible to edit/check `recovery.conf` before manually restarting.
```
required: n
example: --recovery-setting primary_conninfo=db.mydomain.com
```
##### `tablespace-map` option
Moves a tablespace to a new location during the restore. This is useful when tablespace locations are not the same on a replica, or an upgraded system has different mount points.
Since PostgreSQL 9.2, tablespace locations are not stored in pg_tablespace, so moving tablespaces can be done with impunity. However, moving a tablespace to the `data_directory` is not recommended and may cause problems. For more information on moving tablespaces, http://www.databasesoup.com/2013/11/moving-tablespaces.html is a good resource.
```
required: n
example: --tablespace-map ts_01=/db/ts_01
```
##### Example: Restore Latest
```
/path/to/pg_backrest.pl --stanza=db --type=name --target=release restore
```
Restores the latest database backup and then recovers to the `release` restore point.
## Configuration
PgBackRest can be used entirely with command-line parameters but a configuration file is more practical for installations that are complex or set a lot of options. The default location for the configuration file is `/etc/pg_backrest.conf`.
### Examples
#### Configuring Postgres for Archiving
Modify the following settings in `postgresql.conf`:
```
wal_level = archive
archive_mode = on
archive_command = '/path/to/backrest/bin/pg_backrest.pl --stanza=db archive-push %p'
```
Replace the path with the actual location where PgBackRest was installed. The stanza parameter should be changed to the actual stanza name for your database.
#### Minimal Configuration
The absolute minimum required to run PgBackRest (if all defaults are accepted) is the database path.
`/etc/pg_backrest.conf`:
```
[main]
db-path=/data/db
```
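With only `db-path` configured, a backup can be run with nothing more than the stanza name on the command line (the install path shown is an assumption):
```
/path/to/pg_backrest.pl --stanza=main backup
```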
The `db-path` option could also be provided on the command line, but it's best to use a configuration file as options tend to pile up quickly.
#### Simple Single Host Configuration
This configuration is appropriate for a small installation where backups are being made locally or to a remote file system that is mounted locally. A number of additional options are set:
- `cmd-psql` - Custom location and parameters for psql.
- `cmd-psql-option` - Options for psql can be set per stanza.
- `compress` - Disable compression (handy if the file system is already compressed).
- `repo-path` - Path to the PgBackRest repository where backups and WAL archive are stored.
- `log-level-file` - Set the file log level to debug (Lots of extra info if something is not working as expected).
- `hardlink` - Create hardlinks between backups (but never between full backups).
- `thread-max` - Use 2 threads for backup/restore operations.
`/etc/pg_backrest.conf`:
```
[global:command]
cmd-psql=/usr/local/bin/psql -X %option%
[global:general]
compress=n
repo-path=/path/to/backrest/repo
[global:log]
log-level-file=debug
[global:backup]
hardlink=y
thread-max=2
[main]
db-path=/data/db
[main:command]
cmd-psql-option=--port=5433
```
#### Simple Multiple Host Configuration
This configuration is appropriate for a small installation where backups are being made remotely. Make sure that postgres@db-host has trusted ssh to backrest@backup-host and vice versa. This configuration assumes that you have pg_backrest_remote.pl and pg_backrest.pl in the same path on both servers.
`/etc/pg_backrest.conf` on the db host:
```
[global:general]
repo-path=/path/to/db/repo
repo-remote-path=/path/to/backup/repo
[global:backup]
backup-host=backup.mydomain.com
backup-user=backrest
[global:archive]
archive-async=y
[main]
db-path=/data/db
```
`/etc/pg_backrest.conf` on the backup host:
```
[global:general]
repo-path=/path/to/backup/repo
[main]
db-host=db.mydomain.com
db-path=/data/db
db-user=postgres
```
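Trusted SSH between the two hosts can be set up along these lines (the user names and host names match the example above; adjust them for your installation):
```
# On the db host, as the postgres user
ssh-keygen -t rsa
ssh-copy-id backrest@backup.mydomain.com

# On the backup host, as the backrest user
ssh-keygen -t rsa
ssh-copy-id postgres@db.mydomain.com
```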
### Options
#### `command` section
The `command` section defines the location of external commands that are used by PgBackRest.
##### `cmd-psql` key
Defines the full path to `psql`. `psql` is used to call `pg_start_backup()` and `pg_stop_backup()`.
If additional per-stanza parameters need to be passed to `psql` (such as `--port` or `--cluster`) then add `%option%` to the command line and use `cmd-psql-option` to set options.
```
required: n
default: /usr/bin/psql -X
example: cmd-psql=/usr/bin/psql -X %option%
```
##### `cmd-psql-option` key
Allows per stanza command line parameters to be passed to `psql`.
```
required: n
example: cmd-psql-option=--port=5433
```
##### `cmd-remote` key
Defines the location of `pg_backrest_remote.pl`.
Required only if the path to `pg_backrest_remote.pl` is different on the local and remote systems. If not defined, the remote path will be assumed to be the same as the local path.
```
required: n
default: same as local
example: cmd-remote=/usr/lib/backrest/bin/pg_backrest_remote.pl
```
#### `log` section
The `log` section defines logging-related settings. The following log levels are supported:
- `off` - No logging at all (not recommended)
- `error` - Log only errors
- `warn` - Log warnings and errors
- `info` - Log info, warnings, and errors
- `debug` - Log debug, info, warnings, and errors
- `trace` - Log trace (very verbose debugging), debug, info, warnings, and errors
##### `log-level-file` key
Sets file log level.
```
required: n
default: info
example: log-level-file=debug
```
##### `log-level-console` key
Sets console log level.
```
required: n
default: warn
example: log-level-console=error
```
#### `general` section
The `general` section defines settings that are shared between multiple operations.
##### `buffer-size` key
Sets the buffer size used for copy, compress, and uncompress functions. A maximum of 3 buffers will be in use at a time per thread. An additional maximum of 256K per thread may be used for zlib buffers.
```
required: n
default: 1048576
allow: 4096 - 8388608
example: buffer-size=16384
```
##### `compress` key
Enable gzip compression. Backup files are compatible with command-line gzip tools.
```
required: n
default: y
example: compress=n
```
##### `compress-level` key
Sets the zlib level to be used for file compression when `compress=y`.
```
required: n
default: 6
allow: 0-9
example: compress-level=9
```
##### `compress-level-network` key
Sets the zlib level to be used for protocol compression when `compress=n` and the database is not on the same host as the backup. Protocol compression is used to reduce network traffic but can be disabled by setting `compress-level-network=0`. When `compress=y` the `compress-level-network` setting is ignored and `compress-level` is used instead so that the file is only compressed once. SSH compression is always disabled.
```
required: n
default: 3
allow: 0-9
example: compress-level-network=1
```
##### `repo-path` key
Path to the backrest repository where WAL segments, backups, logs, etc. are stored.
```
required: n
default: /var/lib/backup
example: repo-path=/data/db/backrest
```
##### `repo-remote-path` key
Path to the remote backrest repository where WAL segments, backups, logs, etc. are stored.
```
required: n
example: repo-remote-path=/backup/backrest
```
#### `backup` section
The `backup` section defines settings related to backup.
##### `backup-host` key
Sets the backup host when backing up remotely via SSH. Make sure that trusted SSH authentication is configured between the db host and the backup host.
When backing up to a locally mounted network filesystem this setting is not required.
```
required: n
example: backup-host=backup.domain.com
```
##### `backup-user` key
Sets the user account on the backup host.
```
required: n
example: backup-user=backrest
```
##### `start-fast` key
Forces a checkpoint (by passing `true` to the `fast` parameter of `pg_start_backup()`) so the backup begins immediately.
```
required: n
default: n
example: start-fast=y
```
##### `hardlink` key
Enable hard-linking of files in differential and incremental backups to their full backups. This gives the appearance that each backup is a full backup. Be careful, though, because modifying files that are hard-linked can affect all the backups in the set.
```
required: n
default: n
example: hardlink=y
```
##### `thread-max` key
Defines the number of threads to use for backup or restore. Each thread will perform compression and transfer to make the backup run faster, but don't set `thread-max` so high that it impacts database performance during backup.
```
required: n
default: 1
example: thread-max=4
```
##### `thread-timeout` key
Maximum amount of time (in seconds) that a backup thread should run. This limits the amount of time that a thread might be stuck due to unforeseen issues during the backup. Has no effect when `thread-max=1`.
```
required: n
example: thread-timeout=3600
```
##### `archive-check` key
Checks that all WAL segments required to make the backup consistent are present in the WAL archive. It's a good idea to leave this as the default unless you are using another method for archiving.
```
required: n
default: y
example: archive-check=n
```
##### `archive-copy` key
Store WAL segments required to make the backup consistent in the backup's pg_xlog path. This slightly paranoid option protects against corruption or premature expiration in the WAL segment archive, without which PITR would not be possible. Note that this option consumes more space since the WAL segments are stored twice.
```
required: n
default: n
example: archive-copy=y
```
#### `archive` section
The `archive` section defines parameters for async archiving. This means that archive files will be stored locally, then a background process will pick them up and move them to the backup.
##### `archive-async` key
Archive WAL segments asynchronously. WAL segments will be copied to the local repo, then a process will be forked to compress the segment and transfer it to the remote repo if configured. Control will be returned to PostgreSQL as soon as the WAL segment is copied locally.
```
required: n
default: n
example: archive-async=y
```
##### `archive-max-mb` key
Limits the amount of archive log that will be written locally when `archive-async=y`. After the limit is reached, the following will happen:
- PgBackRest will notify Postgres that the archive was successfully backed up, then DROP IT.
- An error will be logged to the console and also to the Postgres log.
- A stop file will be written in the lock directory and no more archive files will be backed up until it is removed.
If this occurs then the archive log stream will be interrupted and PITR will not be possible past that point. A new backup will be required to regain full restore capability.
The purpose of this feature is to prevent the log volume from filling up at which point Postgres will stop completely. Better to lose the backup than have the database go down.
To start normal archiving again you'll need to remove the stop file, which will be located at `${repo-path}/lock/${stanza}-archive.stop` where `${repo-path}` is the path set by the `repo-path` option and `${stanza}` is the backup stanza.
```
required: n
example: archive-max-mb=1024
```
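For example, with the default `repo-path` and a stanza named `main`, archiving could be resumed by removing the stop file (this path is derived from the defaults above, so verify it against your actual configuration):
```
rm /var/lib/backup/lock/main-archive.stop
```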
#### `expire` section
The `expire` section defines how long backups will be retained. Expiration only occurs when the number of complete backups exceeds the allowed retention. In other words, if `retention-full` is set to 2, then there must be 3 complete backups before the oldest will be expired. Make sure you always have enough space for retention + 1 backups.
##### `retention-full` key
Number of full backups to keep. When a full backup expires, all differential and incremental backups associated with the full backup will also expire. When not defined then all full backups will be kept.
```
required: n
example: retention-full=2
```
##### `retention-diff` key
Number of differential backups to keep. When a differential backup expires, all incremental backups associated with the differential backup will also expire. When not defined all differential backups will be kept.
```
required: n
example: retention-diff=3
```
##### `retention-archive-type` key
Type of backup to use for archive retention (full or differential). If set to full, then PgBackRest will keep archive logs for the number of full backups defined by `retention-archive`. If set to differential, then PgBackRest will keep archive logs for the number of differential backups defined by `retention-archive`.
If not defined then archive logs will be kept indefinitely. In general it is not useful to keep archive logs that are older than the oldest backup, but there may be reasons for doing so.
```
required: n
default: full
example: retention-archive-type=diff
```
##### `retention-archive` key
Number of backups worth of archive log to keep.
```
required: n
example: retention-archive=2
```
#### `stanza` section
A stanza defines a backup for a specific database. The stanza section must define the base database path and host/user if the database is remote. Also, any global configuration sections can be overridden to define stanza-specific settings.
##### `db-host` key
Define the database host. Used for backups where the database host is different from the backup host.
```
required: n
example: db-host=db.domain.com
```
##### `db-user` key
Defines the user account on the db host when `db-host` is defined.
```
required: n
example: db-user=postgres
```
##### `db-path` key
Path to the db data directory (data_directory setting in postgresql.conf).
```
required: y
example: db-path=/data/db
```
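As noted above, global sections can be overridden per stanza. A hypothetical sketch where the `main` stanza disables compression and passes a custom port to psql while other stanzas keep the global defaults:
```
[global:general]
compress=y

[main]
db-path=/data/db

[main:general]
compress=n

[main:command]
cmd-psql-option=--port=5433
```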
## Release Notes
### v0.50: restore and much more
- Added restore functionality.
- All options can now be set on the command-line making pg_backrest.conf optional.
- De/compression is now performed without threads and checksum/size is calculated in stream. That means file checksums are no longer optional.
- Added option `--no-start-stop` to allow backups when Postgres is shut down. If `postmaster.pid` is present then `--force` is required to make the backup run (though if Postgres is running an inconsistent backup will likely be created). This option was added primarily for the purpose of unit testing, but there may be applications in the real world as well.
- Fixed broken checksums and now they work with normal and resumed backups. Finally realized that checksums and checksum deltas should be functionally separated and this simplified a number of things. Issue #28 has been created for checksum deltas.
- Fixed an issue where a backup could be resumed from an aborted backup that didn't have the same type and prior backup.
- Removed dependency on Moose. It wasn't being used extensively and makes for longer startup times.
- Checksum for backup.manifest to detect corrupted/modified manifest.
- Link `latest` always points to the last backup. This has been added for convenience and to make restores simpler.
- More comprehensive unit tests in all areas.
### v0.30: Core Restructuring and Unit Tests
- Complete rewrite of BackRest::File module to use a custom protocol for remote operations and Perl native GZIP and SHA operations. Compression is performed in threads rather than forked processes.
- Fairly comprehensive unit tests for all the basic operations. More work to be done here for sure, but then there is always more work to be done on unit tests.
- Removed dependency on Storable and replaced with a custom ini file implementation.
- Added much-needed documentation.
- Numerous other changes that can only be identified with a diff.
### v0.19: Improved Error Reporting/Handling
- Working on improving error handling in the file object. This is not complete, but works well enough to find a few errors that have been causing us problems (notably, find is occasionally failing building the archive async manifest when system is under load).
- Found and squashed a nasty bug where `file_copy()` was defaulted to ignore errors. There was also an issue in file_exists that was causing the test to fail when the file actually did exist. Together they could have resulted in a corrupt backup with no errors, though it is very unlikely.
### v0.18: Return Soft Error When Archive Missing
- The `archive-get` operation returns 1 when the archive file is missing, to differentiate from hard errors (ssh connection failure, file copy error, etc.). This lets Postgres know that the archive stream has terminated normally. However, this does not take into account possible holes in the archive stream.
### v0.17: Warn When Archive Directories Cannot Be Deleted
- If an archive directory which should be empty could not be deleted backrest was throwing an error. There's a good fix for that coming, but for the time being it has been changed to a warning so processing can continue. This was impacting backups as sometimes the final archive file would not get pushed if the first archive file had been in a different directory (plus some bad luck).
### v0.16: RequestTTY=yes for SSH Sessions
- Added `RequestTTY=yes` to ssh sessions. Hoping this will prevent random lockups.
### v0.15: Added archive-get
- Added archive-get functionality to aid in restores.
- Added option to force a checkpoint when starting the backup `start-fast=y`.
### v0.11: Minor Fixes
- Removed `master_stderr_discard` option on database SSH connections. There have been occasional lockups and they could be related to issues originally seen in the file code.
- Changed lock file conflicts on backup and expire commands to ERROR. They were set to DEBUG due to a copy-and-paste from the archive locks.
### v0.10: Backup and Archiving are Functional
- No restore functionality, but the backup directories are consistent Postgres data directories. You'll need to either uncompress the files or turn off compression in the backup. Uncompressed backups on a ZFS (or similar) filesystem are a good option because backups can be restored locally via a snapshot to create logical backups or do spot data recovery.
- Archiving is single-threaded. This has not posed an issue on our multi-terabyte databases with heavy write volume. Recommend a large WAL volume or to use the async option with a large volume nearby.
- Backups are multi-threaded, but the Net::OpenSSH library does not appear to be 100% threadsafe so it will very occasionally lock up on a thread. There is an overall process timeout that resolves this issue by killing the process. Yes, very ugly.
- Checksums are lost on any resumed backup. Only the final backup will record checksum on multiple resumes. Checksums from previous backups are correctly recorded and a full backup will reset everything.
- The backup.manifest is being written as Storable because Config::IniFile does not seem to handle large files well. Would definitely like to save these as human-readable text.
- Absolutely no documentation (outside the code). Well, excepting these release notes.
## Recognition
Primary recognition goes to Stephen Frost for all his valuable advice and criticism during the development of PgBackRest.
Resonate (http://www.resonate.com/) also contributed to the development of PgBackRest and allowed me to install early (but well tested) versions as their primary Postgres backup solution.


@ -1 +1 @@
0.30
0.50


@ -6,20 +6,17 @@
####################################################################################################################################
# Perl includes
####################################################################################################################################
use threads;
use strict;
use warnings;
use Carp;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename;
use Getopt::Long;
use Pod::Usage;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::Config;
use BackRest::Remote;
use BackRest::File;
use BackRest::Backup;
use BackRest::Db;
####################################################################################################################################
# Usage
@ -33,10 +30,11 @@ pg_backrest.pl - Simple Postgres Backup and Restore
pg_backrest.pl [options] [operation]
Operation:
Operations:
archive-get retrieve an archive file from backup
archive-push push an archive file to backup
backup backup a cluster
restore restore a cluster
expire expire old backups (automatically run after backup)
General Options:
@ -47,171 +45,98 @@ pg_backrest.pl [options] [operation]
Backup Options:
--type type of backup to perform (full, diff, incr)
--no-start-stop do not call pg_start/stop_backup(). Postmaster should not be running.
--force force backup when --no-start-stop passed and postmaster.pid exists.
Use with extreme caution as this will probably produce an inconsistent backup!
Restore Options:
--set backup set to restore (defaults to latest set).
--delta perform a delta restore.
--force force a restore and overwrite all existing files.
with --delta forces size/timestamp deltas.
Recovery Options:
--type type of recovery:
default - recover to end of archive log stream
name - restore point target
time - timestamp target
xid - transaction id target
preserve - preserve the existing recovery.conf
none - no recovery past database becoming consistent
--target recovery target if type is name, time, or xid.
--target-exclusive stop just before the recovery target (default is inclusive).
--target-resume do not pause after recovery (default is to pause).
--target-timeline recover into specified timeline (default is current timeline).
=cut
####################################################################################################################################
# Operation constants - basic operations that are allowed in backrest
####################################################################################################################################
use constant
{
OP_ARCHIVE_GET => 'archive-get',
OP_ARCHIVE_PUSH => 'archive-push',
OP_BACKUP => 'backup',
OP_EXPIRE => 'expire'
};
####################################################################################################################################
# Configuration constants - configuration sections and keys
####################################################################################################################################
use constant
{
CONFIG_SECTION_COMMAND => 'command',
CONFIG_SECTION_COMMAND_OPTION => 'command:option',
CONFIG_SECTION_LOG => 'log',
CONFIG_SECTION_BACKUP => 'backup',
CONFIG_SECTION_ARCHIVE => 'archive',
CONFIG_SECTION_RETENTION => 'retention',
CONFIG_SECTION_STANZA => 'stanza',
CONFIG_KEY_USER => 'user',
CONFIG_KEY_HOST => 'host',
CONFIG_KEY_PATH => 'path',
CONFIG_KEY_THREAD_MAX => 'thread-max',
CONFIG_KEY_THREAD_TIMEOUT => 'thread-timeout',
CONFIG_KEY_HARDLINK => 'hardlink',
CONFIG_KEY_ARCHIVE_REQUIRED => 'archive-required',
CONFIG_KEY_ARCHIVE_MAX_MB => 'archive-max-mb',
CONFIG_KEY_START_FAST => 'start-fast',
CONFIG_KEY_COMPRESS_ASYNC => 'compress-async',
CONFIG_KEY_LEVEL_FILE => 'level-file',
CONFIG_KEY_LEVEL_CONSOLE => 'level-console',
CONFIG_KEY_COMPRESS => 'compress',
CONFIG_KEY_CHECKSUM => 'checksum',
CONFIG_KEY_PSQL => 'psql',
CONFIG_KEY_REMOTE => 'remote',
CONFIG_KEY_FULL_RETENTION => 'full-retention',
CONFIG_KEY_DIFFERENTIAL_RETENTION => 'differential-retention',
CONFIG_KEY_ARCHIVE_RETENTION_TYPE => 'archive-retention-type',
CONFIG_KEY_ARCHIVE_RETENTION => 'archive-retention'
};
####################################################################################################################################
# Command line parameters
####################################################################################################################################
my $strConfigFile; # Configuration file
my $strStanza; # Stanza in the configuration file to load
my $strType; # Type of backup: full, differential (diff), incremental (incr)
my $bVersion = false; # Display version and exit
my $bHelp = false; # Display help and exit
# Test parameters - not for general use
my $bNoFork = false; # Prevents the archive process from forking when local archiving is enabled
my $bTest = false; # Enters test mode - not harmful in anyway, but adds special logging and pauses for unit testing
my $iTestDelay = 5; # Amount of time to delay after hitting a test point (the default would not be enough for manual tests)
GetOptions ('config=s' => \$strConfigFile,
'stanza=s' => \$strStanza,
'type=s' => \$strType,
'version' => \$bVersion,
'help' => \$bHelp,
# Test parameters - not for general use (and subject to change without notice)
'no-fork' => \$bNoFork,
'test' => \$bTest,
'test-delay=s' => \$iTestDelay)
or pod2usage(2);
# Display version and exit if requested
if ($bVersion || $bHelp)
{
print 'pg_backrest ' . version_get() . "\n";
if (!$bHelp)
{
exit 0;
}
}
# Display help and exit if requested
if ($bHelp)
{
print "\n";
pod2usage();
}
# Set test parameters
test_set($bTest, $iTestDelay);
####################################################################################################################################
# Global variables
####################################################################################################################################
my %oConfig; # Configuration hash
my $oRemote; # Remote object
my $oRemote; # Remote protocol object
my $oLocal; # Local protocol object
my $strRemote; # Defines which side is remote, DB or BACKUP
####################################################################################################################################
# CONFIG_LOAD - Get a value from the config and be sure that it is defined (unless bRequired is false)
# REMOTE_GET - Get the remote object or create it if not exists
####################################################################################################################################
sub config_key_load
sub remote_get
{
my $strSection = shift;
my $strKey = shift;
my $bRequired = shift;
my $strDefault = shift;
my $bForceLocal = shift;
my $iCompressLevel = shift;
my $iCompressLevelNetwork = shift;
# Default is that the key is not required
if (!defined($bRequired))
# Return the remote if is already defined
if (defined($oRemote))
{
$bRequired = false;
return $oRemote;
}
my $strValue;
# Look in the default stanza section
if ($strSection eq CONFIG_SECTION_STANZA)
# Return the remote when required
if ($strRemote ne NONE && !$bForceLocal)
{
$strValue = $oConfig{"${strStanza}"}{"${strKey}"};
}
# Else look in the supplied section
else
{
# First check the stanza section
$strValue = $oConfig{"${strStanza}:${strSection}"}{"${strKey}"};
$oRemote = new BackRest::Remote
(
$strRemote eq DB ? optionGet(OPTION_DB_HOST) : optionGet(OPTION_BACKUP_HOST),
$strRemote eq DB ? optionGet(OPTION_DB_USER) : optionGet(OPTION_BACKUP_USER),
optionGet(OPTION_COMMAND_REMOTE),
optionGet(OPTION_BUFFER_SIZE),
$iCompressLevel, $iCompressLevelNetwork
);
# If the stanza section value is undefined then check global
if (!defined($strValue))
{
$strValue = $oConfig{"global:${strSection}"}{"${strKey}"};
}
return $oRemote;
}
if (!defined($strValue) && $bRequired)
# Otherwise return local
if (!defined($oLocal))
{
if (defined($strDefault))
{
return $strDefault;
}
confess &log(ERROR, 'config value ' . (defined($strSection) ? $strSection : '[stanza]') . "->${strKey} is undefined");
$oLocal = new BackRest::Remote
(
undef, undef, undef,
optionGet(OPTION_BUFFER_SIZE),
$iCompressLevel, $iCompressLevelNetwork
);
}
if ($strSection eq CONFIG_SECTION_COMMAND)
{
my $strOption = config_key_load(CONFIG_SECTION_COMMAND_OPTION, $strKey);
if (defined($strOption))
{
$strValue =~ s/\%option\%/${strOption}/g;
}
}
return $strValue;
return $oLocal;
}
####################################################################################################################################
# SAFE_EXIT - terminate all SSH sessions when the script is terminated
####################################################################################################################################
sub safe_exit
{
remote_exit();
my $iTotal = backup_thread_kill();
confess &log(ERROR, "process was terminated on signal, ${iTotal} threads stopped");
}
$SIG{TERM} = \&safe_exit;
$SIG{HUP} = \&safe_exit;
$SIG{INT} = \&safe_exit;
####################################################################################################################################
# REMOTE_EXIT - Close the remote object if it exists
####################################################################################################################################
@ -230,101 +155,32 @@ sub remote_exit
}
}
####################################################################################################################################
# REMOTE_GET - Get the remote object or create it if not exists
####################################################################################################################################
sub remote_get()
{
if (!defined($oRemote) && $strRemote ne REMOTE_NONE)
{
$oRemote = BackRest::Remote->new
(
strHost => config_key_load($strRemote eq REMOTE_DB ? CONFIG_SECTION_STANZA : CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST, true),
strUser => config_key_load($strRemote eq REMOTE_DB ? CONFIG_SECTION_STANZA : CONFIG_SECTION_BACKUP, CONFIG_KEY_USER, true),
strCommand => config_key_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_REMOTE, true)
);
}
return $oRemote;
}
####################################################################################################################################
# SAFE_EXIT - terminate all SSH sessions when the script is terminated
####################################################################################################################################
sub safe_exit
{
remote_exit();
my $iTotal = backup_thread_kill();
confess &log(ERROR, "process was terminated on signal, ${iTotal} threads stopped");
}
$SIG{TERM} = \&safe_exit;
$SIG{HUP} = \&safe_exit;
$SIG{INT} = \&safe_exit;
####################################################################################################################################
# START EVAL BLOCK TO CATCH ERRORS AND STOP THREADS
####################################################################################################################################
eval {
####################################################################################################################################
# START MAIN
# Load command line parameters and config
####################################################################################################################################
# Get the operation
my $strOperation = $ARGV[0];
# Validate the operation
if (!defined($strOperation))
{
confess &log(ERROR, 'operation is not defined');
}
if ($strOperation ne OP_ARCHIVE_GET &&
$strOperation ne OP_ARCHIVE_PUSH &&
$strOperation ne OP_BACKUP &&
$strOperation ne OP_EXPIRE)
{
confess &log(ERROR, "invalid operation ${strOperation}");
}
# Type should only be specified for backups
if (defined($strType) && $strOperation ne OP_BACKUP)
{
confess &log(ERROR, 'type can only be specified for the backup operation')
}
####################################################################################################################################
# LOAD CONFIG FILE
####################################################################################################################################
if (!defined($strConfigFile))
{
$strConfigFile = '/etc/pg_backrest.conf';
}
config_load($strConfigFile, \%oConfig);
# Load and check the cluster
if (!defined($strStanza))
{
confess 'a backup stanza must be specified';
}
configLoad();
# Set the log levels
log_level_set(uc(config_key_load(CONFIG_SECTION_LOG, CONFIG_KEY_LEVEL_FILE, true, INFO)),
uc(config_key_load(CONFIG_SECTION_LOG, CONFIG_KEY_LEVEL_CONSOLE, true, ERROR)));
log_level_set(optionGet(OPTION_LOG_LEVEL_FILE), optionGet(OPTION_LOG_LEVEL_CONSOLE));
# Set test options
!optionGet(OPTION_TEST) or test_set(optionGet(OPTION_TEST), optionGet(OPTION_TEST_DELAY));
####################################################################################################################################
# DETERMINE IF THERE IS A REMOTE
####################################################################################################################################
# First check if backup is remote
if (defined(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST)))
if (optionTest(OPTION_BACKUP_HOST))
{
$strRemote = REMOTE_BACKUP;
$strRemote = BACKUP;
}
# Else check if db is remote
elsif (defined(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST)))
elsif (optionTest(OPTION_DB_HOST))
{
# Don't allow both sides to be remote
if (defined($strRemote))
@ -332,46 +188,34 @@ elsif (defined(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST)))
confess &log(ERROR, 'db and backup cannot both be configured as remote');
}
$strRemote = REMOTE_DB;
$strRemote = DB;
}
else
{
$strRemote = REMOTE_NONE;
$strRemote = NONE;
}
####################################################################################################################################
# ARCHIVE-PUSH Command
####################################################################################################################################
if ($strOperation eq OP_ARCHIVE_PUSH)
if (operationTest(OP_ARCHIVE_PUSH))
{
# Make sure the archive push operation happens on the db side
if ($strRemote eq REMOTE_DB)
if ($strRemote eq DB)
{
confess &log(ERROR, 'archive-push operation must run on the db host');
}
# If an archive section has been defined, use that instead of the backup section when operation is OP_ARCHIVE_PUSH
my $bArchiveLocal = defined(config_key_load(CONFIG_SECTION_ARCHIVE, CONFIG_KEY_PATH));
my $strSection = $bArchiveLocal ? CONFIG_SECTION_ARCHIVE : CONFIG_SECTION_BACKUP;
my $strArchivePath = config_key_load($strSection, CONFIG_KEY_PATH);
# Get checksum flag
my $bChecksum = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_CHECKSUM, true, 'y') eq 'y' ? true : false;
# Get the async compress flag. If compress_async=y then compression is off for the initial push when archiving locally
my $bCompressAsync = false;
if ($bArchiveLocal)
{
config_key_load($strSection, CONFIG_KEY_COMPRESS_ASYNC, true, 'n') eq 'n' ? false : true;
}
my $bArchiveAsync = optionTest(OPTION_ARCHIVE_ASYNC);
my $strArchivePath = optionGet(OPTION_REPO_PATH);
# If logging locally then create the stop archiving file name
my $strStopFile;
if ($bArchiveLocal)
if ($bArchiveAsync)
{
$strStopFile = "${strArchivePath}/lock/${strStanza}-archive.stop";
$strStopFile = "${strArchivePath}/lock/" . optionGet(OPTION_STANZA) . "-archive.stop";
}
# If an archive file is defined, then push it
@@ -388,15 +232,16 @@ if ($strOperation eq OP_ARCHIVE_PUSH)
}
# Get the compress flag
my $bCompress = $bCompressAsync ? false : config_key_load($strSection, CONFIG_KEY_COMPRESS, true, 'y') eq 'y' ? true : false;
my $bCompress = $bArchiveAsync ? false : optionGet(OPTION_COMPRESS);
# Create the file object
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strRemote => $bArchiveLocal ? REMOTE_NONE : $strRemote,
oRemote => $bArchiveLocal ? undef : remote_get(),
strBackupPath => config_key_load($strSection, CONFIG_KEY_PATH, true)
optionGet(OPTION_STANZA),
$bArchiveAsync || $strRemote eq NONE ? optionGet(OPTION_REPO_PATH) : optionGet(OPTION_REPO_REMOTE_PATH),
$bArchiveAsync ? NONE : $strRemote,
remote_get($bArchiveAsync, optionGet(OPTION_COMPRESS_LEVEL),
optionGet(OPTION_COMPRESS_LEVEL_NETWORK))
);
# Init backup
@@ -406,45 +251,36 @@ if ($strOperation eq OP_ARCHIVE_PUSH)
$oFile,
undef,
$bCompress,
undef,
!$bChecksum
undef
);
&log(INFO, 'pushing archive log ' . $ARGV[1] . ($bArchiveLocal ? ' asynchronously' : ''));
&log(INFO, 'pushing archive log ' . $ARGV[1] . ($bArchiveAsync ? ' asynchronously' : ''));
archive_push(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_PATH), $ARGV[1]);
archive_push(optionGet(OPTION_DB_PATH, false), $ARGV[1], $bArchiveAsync);
# Exit if we are archiving local but no backup host has been defined
if (!($bArchiveLocal && defined(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST))))
# Exit if we are archiving async
if (!$bArchiveAsync)
{
remote_exit(0);
}
# Fork and exit the parent process so the async process can continue
if (!$bNoFork)
if (!optionTest(OPTION_TEST_NO_FORK) && fork())
{
if (fork())
{
remote_exit(0);
}
remote_exit(0);
}
# Else the no-fork flag has been specified for testing
else
{
&log(INFO, 'No fork on archive local for TESTING');
}
}
# If no backup host is defined it makes no sense to run archive-push without a specified archive file so throw an error
if (!defined(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST)))
{
&log(ERROR, 'archive-push called without an archive file or backup host');
# Start the async archive push
&log(INFO, 'starting async archive-push');
}
&log(INFO, 'starting async archive-push');
# Create a lock file to make sure async archive-push does not run more than once
my $strLockPath = "${strArchivePath}/lock/${strStanza}-archive.lock";
my $strLockPath = "${strArchivePath}/lock/" . optionGet(OPTION_STANZA) . "-archive.lock";
if (!lock_file_create($strLockPath))
{
@@ -453,95 +289,52 @@ if ($strOperation eq OP_ARCHIVE_PUSH)
}
# Build the basic command string that will be used to modify the command during processing
my $strCommand = $^X . ' ' . $0 . " --stanza=${strStanza}";
my $strCommand = $^X . ' ' . $0 . " --stanza=" . optionGet(OPTION_STANZA);
# Get the new operational flags
my $bCompress = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_COMPRESS, true, 'y') eq 'y' ? true : false;
my $iArchiveMaxMB = config_key_load(CONFIG_SECTION_ARCHIVE, CONFIG_KEY_ARCHIVE_MAX_MB);
my $bCompress = optionGet(OPTION_COMPRESS);
my $iArchiveMaxMB = optionGet(OPTION_ARCHIVE_MAX_MB, false);
# eval
# {
# Create the file object
my $oFile = BackRest::File->new
(
strStanza => $strStanza,
strRemote => $strRemote,
oRemote => remote_get(),
strBackupPath => config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true)
);
# Create the file object
my $oFile = new BackRest::File
(
optionGet(OPTION_STANZA),
$strRemote eq NONE ? optionGet(OPTION_REPO_PATH) : optionGet(OPTION_REPO_REMOTE_PATH),
$strRemote,
remote_get(false, optionGet(OPTION_COMPRESS_LEVEL),
optionGet(OPTION_COMPRESS_LEVEL_NETWORK))
);
# Init backup
backup_init
(
undef,
$oFile,
undef,
$bCompress,
undef,
!$bChecksum,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
undef,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT)
);
# Init backup
backup_init
(
undef,
$oFile,
undef,
$bCompress,
undef,
1, #optionGet(OPTION_THREAD_MAX),
undef,
optionGet(OPTION_THREAD_TIMEOUT, false)
);
# Call the archive_xfer function and continue to loop as long as there are files to process
my $iLogTotal;
# Call the archive_xfer function and continue to loop as long as there are files to process
my $iLogTotal;
while (!defined($iLogTotal) || $iLogTotal > 0)
while (!defined($iLogTotal) || $iLogTotal > 0)
{
$iLogTotal = archive_xfer($strArchivePath . "/archive/" . optionGet(OPTION_STANZA) . "/out", $strStopFile,
$strCommand, $iArchiveMaxMB);
if ($iLogTotal > 0)
{
$iLogTotal = archive_xfer($strArchivePath . "/archive/${strStanza}", $strStopFile, $strCommand, $iArchiveMaxMB);
if ($iLogTotal > 0)
{
&log(DEBUG, "${iLogTotal} archive logs were transferred, calling archive_xfer() again");
}
else
{
&log(DEBUG, 'no more logs to transfer - exiting');
}
&log(DEBUG, "${iLogTotal} archive logs were transferred, calling archive_xfer() again");
}
#
# };
# # If there were errors above then start compressing
# if ($@)
# {
# if ($bCompressAsync)
# {
# &log(ERROR, "error during transfer: $@");
# &log(WARN, "errors during transfer, starting compression");
#
# # Run file_init_archive - this is the minimal config needed to run archive pulling !!! need to close the old file
# my $oFile = BackRest::File->new
# (
# # strStanza => $strStanza,
# # bNoCompression => false,
# # strBackupPath => config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true),
# # strCommand => $0,
# # strCommandCompress => config_key_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_COMPRESS, $bCompress),
# # strCommandDecompress => config_key_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_DECOMPRESS, $bCompress)
# );
#
# backup_init
# (
# undef,
# $oFile,
# undef,
# $bCompress,
# undef,
# !$bChecksum,
# config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
# undef,
# config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT)
# );
#
# archive_compress($strArchivePath . "/archive/${strStanza}", $strCommand, 256);
# }
# else
# {
# confess $@;
# }
# }
else
{
&log(DEBUG, 'no more logs to transfer - exiting');
}
}
lock_file_remove();
remote_exit(0);
@@ -550,7 +343,7 @@ if ($strOperation eq OP_ARCHIVE_PUSH)
####################################################################################################################################
# ARCHIVE-GET Command
####################################################################################################################################
if ($strOperation eq OP_ARCHIVE_GET)
if (operationTest(OP_ARCHIVE_GET))
{
# Make sure the archive file is defined
if (!defined($ARGV[1]))
@@ -565,12 +358,14 @@ if ($strOperation eq OP_ARCHIVE_GET)
}
# Init the file object
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strRemote => $strRemote,
oRemote => remote_get(),
strBackupPath => config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true)
optionGet(OPTION_STANZA),
$strRemote eq BACKUP ? optionGet(OPTION_REPO_REMOTE_PATH) : optionGet(OPTION_REPO_PATH),
$strRemote,
remote_get(false,
optionGet(OPTION_COMPRESS_LEVEL),
optionGet(OPTION_COMPRESS_LEVEL_NETWORK))
);
# Init the backup object
@@ -584,120 +379,154 @@ if ($strOperation eq OP_ARCHIVE_GET)
&log(INFO, 'getting archive log ' . $ARGV[1]);
# Get the archive file
remote_exit(archive_get(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_PATH), $ARGV[1], $ARGV[2]));
remote_exit(archive_get(optionGet(OPTION_DB_PATH, false), $ARGV[1], $ARGV[2]));
}
####################################################################################################################################
# OPEN THE LOG FILE
# Initialize the default file object
####################################################################################################################################
if (defined(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST)))
my $oFile = new BackRest::File
(
optionGet(OPTION_STANZA),
$strRemote eq BACKUP ? optionGet(OPTION_REPO_REMOTE_PATH) : optionGet(OPTION_REPO_PATH),
$strRemote,
remote_get(false,
operationTest(OP_EXPIRE) ? OPTION_DEFAULT_COMPRESS_LEVEL : optionGet(OPTION_COMPRESS_LEVEL),
operationTest(OP_EXPIRE) ? OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK : optionGet(OPTION_COMPRESS_LEVEL_NETWORK))
);
####################################################################################################################################
# RESTORE
####################################################################################################################################
if (operationTest(OP_RESTORE))
{
confess &log(ASSERT, 'backup/expire operations must be performed locally on the backup server');
}
if ($strRemote eq DB)
{
confess &log(ASSERT, 'restore operation must be performed locally on the db server');
}
log_file_set(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true) . "/log/${strStanza}");
# Open the log file
log_file_set(optionGet(OPTION_REPO_PATH) . '/log/' . optionGet(OPTION_STANZA) . '-restore');
# Set the lock path
my $strLockPath = optionGet(OPTION_REPO_PATH) . '/lock/' .
optionGet(OPTION_STANZA) . '-' . operationGet() . '.lock';
# Do the restore
use BackRest::Restore;
new BackRest::Restore
(
optionGet(OPTION_DB_PATH),
optionGet(OPTION_SET),
optionGet(OPTION_RESTORE_TABLESPACE_MAP, false),
$oFile,
optionGet(OPTION_THREAD_MAX),
optionGet(OPTION_DELTA),
optionGet(OPTION_FORCE),
optionGet(OPTION_TYPE),
optionGet(OPTION_TARGET, false),
optionGet(OPTION_TARGET_EXCLUSIVE, false),
optionGet(OPTION_TARGET_RESUME, false),
optionGet(OPTION_TARGET_TIMELINE, false),
optionGet(OPTION_RESTORE_RECOVERY_SETTING, false),
optionGet(OPTION_STANZA),
$0,
optionGet(OPTION_CONFIG)
)->restore;
remote_exit(0);
}
####################################################################################################################################
# GET MORE CONFIG INFO
####################################################################################################################################
# Open the log file
log_file_set(optionGet(OPTION_REPO_PATH) . '/log/' . optionGet(OPTION_STANZA));
# Make sure backup and expire operations happen on the backup side
if ($strRemote eq REMOTE_BACKUP)
if ($strRemote eq BACKUP)
{
confess &log(ERROR, 'backup and expire operations must run on the backup host');
}
# Set the backup type
if (!defined($strType))
{
$strType = 'incremental';
}
elsif ($strType eq 'diff')
{
$strType = 'differential';
}
elsif ($strType eq 'incr')
{
$strType = 'incremental';
}
elsif ($strType ne 'full' && $strType ne 'differential' && $strType ne 'incremental')
{
confess &log(ERROR, 'backup type must be full, differential (diff), incremental (incr)');
}
# Get the operational flags
my $bCompress = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_COMPRESS, true, 'y') eq 'y' ? true : false;
my $bChecksum = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_CHECKSUM, true, 'y') eq 'y' ? true : false;
# Set the lock path
my $strLockPath = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true) . "/lock/${strStanza}-${strOperation}.lock";
my $strLockPath = optionGet(OPTION_REPO_PATH) . '/lock/' . optionGet(OPTION_STANZA) . '-' . operationGet() . '.lock';
if (!lock_file_create($strLockPath))
{
&log(ERROR, "backup process is already running for stanza ${strStanza} - exiting");
&log(ERROR, 'backup process is already running for stanza ' . optionGet(OPTION_STANZA) . ' - exiting');
remote_exit(0);
}
# Run file_init_archive - the rest of the file config required for backup and restore
my $oFile = BackRest::File->new
(
strStanza => $strStanza,
strRemote => $strRemote,
oRemote => remote_get(),
strBackupPath => config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true)
);
# Initialize the db object
use BackRest::Db;
my $oDb;
my $oDb = BackRest::Db->new
(
strDbUser => config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_USER),
strDbHost => config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST),
strCommandPsql => config_key_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_PSQL),
oDbSSH => $oFile->{oDbSSH}
);
if (operationTest(OP_BACKUP))
{
if (!optionGet(OPTION_NO_START_STOP))
{
$oDb = new BackRest::Db
(
optionGet(OPTION_COMMAND_PSQL),
optionGet(OPTION_DB_HOST, false),
optionGet(OPTION_DB_USER, optionTest(OPTION_DB_HOST))
);
}
# Run backup_init - parameters required for backup and restore operations
backup_init
(
$oDb,
$oFile,
$strType,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_COMPRESS, true, 'y') eq 'y' ? true : false,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HARDLINK, true, 'y') eq 'y' ? true : false,
!$bChecksum,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_ARCHIVE_REQUIRED, true, 'y') eq 'y' ? true : false,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT),
$bTest,
$iTestDelay
);
# Run backup_init - parameters required for backup and restore operations
backup_init
(
$oDb,
$oFile,
optionGet(OPTION_TYPE),
optionGet(OPTION_COMPRESS),
optionGet(OPTION_HARDLINK),
optionGet(OPTION_THREAD_MAX),
optionGet(OPTION_THREAD_TIMEOUT, false),
optionGet(OPTION_NO_START_STOP),
optionTest(OPTION_FORCE)
);
}
####################################################################################################################################
# BACKUP
####################################################################################################################################
if ($strOperation eq OP_BACKUP)
if (operationTest(OP_BACKUP))
{
backup(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_PATH),
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_START_FAST, true, 'n') eq 'y' ? true : false);
use BackRest::Backup;
backup(optionGet(OPTION_DB_PATH), optionGet(OPTION_START_FAST));
$strOperation = OP_EXPIRE;
operationSet(OP_EXPIRE);
}
####################################################################################################################################
# EXPIRE
####################################################################################################################################
if ($strOperation eq OP_EXPIRE)
if (operationTest(OP_EXPIRE))
{
if (!defined($oDb))
{
backup_init
(
undef,
$oFile
);
}
backup_expire
(
$oFile->path_get(PATH_BACKUP_CLUSTER),
config_key_load(CONFIG_SECTION_RETENTION, CONFIG_KEY_FULL_RETENTION),
config_key_load(CONFIG_SECTION_RETENTION, CONFIG_KEY_DIFFERENTIAL_RETENTION),
config_key_load(CONFIG_SECTION_RETENTION, CONFIG_KEY_ARCHIVE_RETENTION_TYPE),
config_key_load(CONFIG_SECTION_RETENTION, CONFIG_KEY_ARCHIVE_RETENTION)
optionGet(OPTION_RETENTION_FULL, false),
optionGet(OPTION_RETENTION_DIFF, false),
optionGet(OPTION_RETENTION_ARCHIVE_TYPE, false),
optionGet(OPTION_RETENTION_ARCHIVE, false)
);
lock_file_remove();
}
backup_cleanup();
remote_exit(0);
};
@@ -706,6 +535,14 @@ remote_exit(0);
####################################################################################################################################
if ($@)
{
my $oMessage = $@;
# If a backrest exception then return the code - don't confess
if ($oMessage->isa('BackRest::Exception'))
{
remote_exit($oMessage->code());
}
remote_exit();
confess $@;
}


@@ -7,11 +7,11 @@
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename;
use Getopt::Long;
use Carp;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
@@ -54,16 +54,21 @@ sub param_get
log_level_set(OFF, OFF);
# Create the remote object
my $oRemote = BackRest::Remote->new();
# Create the file object
my $oFile = BackRest::File->new
my $oRemote = new BackRest::Remote
(
oRemote => $oRemote
undef, # Host
undef, # User
'remote' # Command
);
# Write the greeting so remote process knows who we are
$oRemote->greeting_write();
# Create the file object
my $oFile = new BackRest::File
(
undef,
undef,
undef,
$oRemote,
);
# Command string
my $strCommand = OP_NOOP;
@@ -77,26 +82,58 @@ while ($strCommand ne OP_EXIT)
eval
{
# Copy a file to STDOUT
if ($strCommand eq OP_FILE_COPY_OUT)
# Copy file
if ($strCommand eq OP_FILE_COPY ||
$strCommand eq OP_FILE_COPY_IN ||
$strCommand eq OP_FILE_COPY_OUT)
{
$oFile->copy(PATH_ABSOLUTE, param_get(\%oParamHash, 'source_file'),
PIPE_STDOUT, undef,
param_get(\%oParamHash, 'source_compressed'), undef);
my $bResult;
my $strChecksum;
my $iFileSize;
$oRemote->output_write();
}
# Copy a file from STDIN
elsif ($strCommand eq OP_FILE_COPY_IN)
{
$oFile->copy(PIPE_STDIN, undef,
PATH_ABSOLUTE, param_get(\%oParamHash, 'destination_file'),
undef, param_get(\%oParamHash, 'destination_compress'),
undef, undef,
param_get(\%oParamHash, 'permission', false),
param_get(\%oParamHash, 'destination_path_create'));
# Copy a file locally
if ($strCommand eq OP_FILE_COPY)
{
($bResult, $strChecksum, $iFileSize) =
$oFile->copy(PATH_ABSOLUTE, param_get(\%oParamHash, 'source_file'),
PATH_ABSOLUTE, param_get(\%oParamHash, 'destination_file'),
param_get(\%oParamHash, 'source_compressed'),
param_get(\%oParamHash, 'destination_compress'),
param_get(\%oParamHash, 'ignore_missing_source', false),
undef,
param_get(\%oParamHash, 'mode', false),
param_get(\%oParamHash, 'destination_path_create') ? 'Y' : 'N',
param_get(\%oParamHash, 'user', false),
param_get(\%oParamHash, 'group', false),
param_get(\%oParamHash, 'append_checksum', false));
}
# Copy a file from STDIN
elsif ($strCommand eq OP_FILE_COPY_IN)
{
($bResult, $strChecksum, $iFileSize) =
$oFile->copy(PIPE_STDIN, undef,
PATH_ABSOLUTE, param_get(\%oParamHash, 'destination_file'),
param_get(\%oParamHash, 'source_compressed'),
param_get(\%oParamHash, 'destination_compress'),
undef, undef,
param_get(\%oParamHash, 'mode', false),
param_get(\%oParamHash, 'destination_path_create'),
param_get(\%oParamHash, 'user', false),
param_get(\%oParamHash, 'group', false),
param_get(\%oParamHash, 'append_checksum', false));
}
# Copy a file to STDOUT
elsif ($strCommand eq OP_FILE_COPY_OUT)
{
($bResult, $strChecksum, $iFileSize) =
$oFile->copy(PATH_ABSOLUTE, param_get(\%oParamHash, 'source_file'),
PIPE_STDOUT, undef,
param_get(\%oParamHash, 'source_compressed'),
param_get(\%oParamHash, 'destination_compress'));
}
$oRemote->output_write();
$oRemote->output_write(($bResult ? 'Y' : 'N') . " " . (defined($strChecksum) ? $strChecksum : '?') . " " .
(defined($iFileSize) ? $iFileSize : '?'));
}
# List files in a path
elsif ($strCommand eq OP_FILE_LIST)
@@ -121,7 +158,7 @@ while ($strCommand ne OP_EXIT)
# Create a path
elsif ($strCommand eq OP_FILE_PATH_CREATE)
{
$oFile->path_create(PATH_ABSOLUTE, param_get(\%oParamHash, 'path'), param_get(\%oParamHash, 'permission', false));
$oFile->path_create(PATH_ABSOLUTE, param_get(\%oParamHash, 'path'), param_get(\%oParamHash, 'mode', false));
$oRemote->output_write();
}
# Check if a file/path exists
@@ -129,18 +166,10 @@ while ($strCommand ne OP_EXIT)
{
$oRemote->output_write($oFile->exists(PATH_ABSOLUTE, param_get(\%oParamHash, 'path')) ? 'Y' : 'N');
}
# Copy a file locally
elsif ($strCommand eq OP_FILE_COPY)
# Wait
elsif ($strCommand eq OP_FILE_WAIT)
{
$oRemote->output_write(
$oFile->copy(PATH_ABSOLUTE, param_get(\%oParamHash, 'source_file'),
PATH_ABSOLUTE, param_get(\%oParamHash, 'destination_file'),
param_get(\%oParamHash, 'source_compressed'),
param_get(\%oParamHash, 'destination_compress'),
param_get(\%oParamHash, 'ignore_missing_source', false),
undef,
param_get(\%oParamHash, 'permission', false),
param_get(\%oParamHash, 'destination_path_create')) ? 'Y' : 'N');
$oRemote->output_write($oFile->wait(PATH_ABSOLUTE));
}
# Generate a manifest
elsif ($strCommand eq OP_FILE_MANIFEST)
@@ -149,7 +178,7 @@ while ($strCommand ne OP_EXIT)
$oFile->manifest(PATH_ABSOLUTE, param_get(\%oParamHash, 'path'), \%oManifestHash);
my $strOutput = "name\ttype\tuser\tgroup\tpermission\tmodification_time\tinode\tsize\tlink_destination";
my $strOutput = "name\ttype\tuser\tgroup\tmode\tmodification_time\tinode\tsize\tlink_destination";
foreach my $strName (sort(keys $oManifestHash{name}))
{
@@ -157,7 +186,7 @@ while ($strCommand ne OP_EXIT)
$oManifestHash{name}{"${strName}"}{type} . "\t" .
(defined($oManifestHash{name}{"${strName}"}{user}) ? $oManifestHash{name}{"${strName}"}{user} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{group}) ? $oManifestHash{name}{"${strName}"}{group} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{permission}) ? $oManifestHash{name}{"${strName}"}{permission} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{mode}) ? $oManifestHash{name}{"${strName}"}{mode} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{modification_time}) ?
$oManifestHash{name}{"${strName}"}{modification_time} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{inode}) ? $oManifestHash{name}{"${strName}"}{inode} : "") . "\t" .

doc/doc.dtd Normal file

@@ -0,0 +1,94 @@
<!ELEMENT doc (intro, install, operation, config, release, recognition)>
<!ATTLIST doc title CDATA #REQUIRED>
<!ATTLIST doc subtitle CDATA #REQUIRED>
<!ELEMENT intro (text)>
<!ELEMENT install (text, install-system-list)>
<!ATTLIST install title CDATA #REQUIRED>
<!ELEMENT install-system-list (text?, install-system+)>
<!ELEMENT install-system (text)>
<!ATTLIST install-system title CDATA #REQUIRED>
<!ELEMENT operation (text?, operation-general, command-list)>
<!ATTLIST operation title CDATA #REQUIRED>
<!ELEMENT operation-general (text, option-list)>
<!ATTLIST operation-general title CDATA #REQUIRED>
<!ELEMENT command-list (text?, command+)>
<!ATTLIST command-list title CDATA #REQUIRED>
<!ELEMENT command (text, option-list?, command-example-list)>
<!ATTLIST command id CDATA #REQUIRED>
<!ELEMENT command-example-list (text?, command-example+)>
<!ATTLIST command-example-list title CDATA "Examples">
<!ELEMENT command-example (text)>
<!ATTLIST command-example title CDATA "Example">
<!ELEMENT option-list (option+)>
<!ELEMENT option (text, example?)>
<!ATTLIST option id CDATA #REQUIRED>
<!ELEMENT config (text, config-example-list, config-section-list)>
<!ATTLIST config title CDATA #REQUIRED>
<!ELEMENT config-example-list (text?, config-example+)>
<!ATTLIST config-example-list title CDATA #REQUIRED>
<!ELEMENT config-example (text)>
<!ATTLIST config-example title CDATA #REQUIRED>
<!ELEMENT config-section-list (text?, config-section+)>
<!ATTLIST config-section-list title CDATA #REQUIRED>
<!ELEMENT config-section (text, config-key-list?)>
<!ATTLIST config-section id CDATA #REQUIRED>
<!ELEMENT config-key-list (config-key+)>
<!ELEMENT config-key (text, default?, allow?, example)>
<!ATTLIST config-key id CDATA #REQUIRED>
<!ELEMENT default (#PCDATA)>
<!ELEMENT allow (#PCDATA)>
<!ELEMENT example (#PCDATA)>
<!ELEMENT release (text?, release-version-list)>
<!ATTLIST release title CDATA #REQUIRED>
<!ELEMENT release-version-list (release-version+)>
<!ELEMENT release-version (text?, release-feature-bullet-list)>
<!ATTLIST release-version version CDATA #REQUIRED>
<!ATTLIST release-version title CDATA #REQUIRED>
<!ELEMENT release-feature-bullet-list (release-feature+)>
<!ELEMENT release-feature (text)>
<!ELEMENT recognition (text)>
<!ATTLIST recognition title CDATA #REQUIRED>
<!ELEMENT text (#PCDATA|b|i|bi|ul|ol|id|code|code-block|file|path|cmd|param|setting|backrest|postgres)*>
<!ELEMENT i (#PCDATA)>
<!ELEMENT b (#PCDATA)>
<!ELEMENT bi (#PCDATA)>
<!ELEMENT ul (li+)>
<!ELEMENT ol (li+)>
<!ELEMENT li (#PCDATA|b|i|bi|ul|ol|id|code|code-block|file|path|cmd|param|setting|backrest|postgres)*>
<!ELEMENT id (#PCDATA)>
<!ELEMENT code (#PCDATA)>
<!ELEMENT code-block (#PCDATA)>
<!ELEMENT file (#PCDATA)>
<!ELEMENT path (#PCDATA)>
<!ELEMENT cmd (#PCDATA)>
<!ELEMENT param (#PCDATA)>
<!ELEMENT setting (#PCDATA)>
<!ELEMENT backrest EMPTY>
<!ELEMENT postgres EMPTY>

doc/doc.pl Executable file

@@ -0,0 +1,761 @@
#!/usr/bin/perl
####################################################################################################################################
# doc.pl - Generate PgBackRest documentation
####################################################################################################################################
####################################################################################################################################
# Perl includes
####################################################################################################################################
use strict;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename qw(dirname);
use Pod::Usage qw(pod2usage);
use Getopt::Long qw(GetOptions);
use XML::Checker::Parser;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::Config;
####################################################################################################################################
# Usage
####################################################################################################################################
=head1 NAME
doc.pl - Generate PgBackRest documentation
=head1 SYNOPSIS
doc.pl [options] [operation]
General Options:
--help display usage and exit
=cut
####################################################################################################################################
# DOC_RENDER_TAG - render a tag to another markup language
####################################################################################################################################
my $oRenderTag =
{
'markdown' =>
{
'b' => ['**', '**'],
'i' => ['_', '_'],
'bi' => ['_**', '**_'],
'ul' => ["\n", ''],
'ol' => ["\n", ''],
'li' => ['- ', "\n"],
'id' => ['`', '`'],
'file' => ['`', '`'],
'path' => ['`', '`'],
'cmd' => ['`', '`'],
'param' => ['`', '`'],
'setting' => ['`', '`'],
'code' => ['`', '`'],
'code-block' => ['```', '```'],
'backrest' => ['PgBackRest', ''],
'postgres' => ['PostgreSQL', '']
},
'html' =>
{
'b' => ['<b>', '</b>']
}
};
sub doc_render_tag
{
my $oTag = shift;
my $strType = shift;
my $strBuffer = "";
my $strTag = $$oTag{name};
my $strStart = $$oRenderTag{$strType}{$strTag}[0];
my $strStop = $$oRenderTag{$strType}{$strTag}[1];
if (!defined($strStart) || !defined($strStop))
{
confess "invalid type ${strType} or tag ${strTag}";
}
$strBuffer .= $strStart;
if ($strTag eq 'li')
{
$strBuffer .= doc_render_text($oTag, $strType);
}
elsif (defined($$oTag{value}))
{
$strBuffer .= $$oTag{value};
}
elsif (defined($$oTag{children}[0]))
{
foreach my $oSubTag (@{doc_list($oTag)})
{
$strBuffer .= doc_render_tag($oSubTag, $strType);
}
}
$strBuffer .= $strStop;
return $strBuffer;
}
####################################################################################################################################
# DOC_RENDER_TEXT - Render a text node
####################################################################################################################################
sub doc_render_text
{
my $oText = shift;
my $strType = shift;
my $strBuffer = "";
if (defined($$oText{children}))
{
for (my $iIndex = 0; $iIndex < @{$$oText{children}}; $iIndex++)
{
if (ref(\$$oText{children}[$iIndex]) eq "SCALAR")
{
$strBuffer .= $$oText{children}[$iIndex];
}
else
{
$strBuffer .= doc_render_tag($$oText{children}[$iIndex], $strType);
}
}
}
return $strBuffer;
}
####################################################################################################################################
# DOC_GET - Get a node
####################################################################################################################################
sub doc_get
{
my $oDoc = shift;
my $strName = shift;
my $bRequired = shift;
my $oNode;
for (my $iIndex = 0; $iIndex < @{$$oDoc{children}}; $iIndex++)
{
if ($$oDoc{children}[$iIndex]{name} eq $strName)
{
if (!defined($oNode))
{
$oNode = $$oDoc{children}[$iIndex];
}
else
{
confess "found more than one child ${strName} in node $$oDoc{name}";
}
}
}
if (!defined($oNode) && (!defined($bRequired) || $bRequired))
{
confess "unable to find child ${strName} in node $$oDoc{name}";
}
return $oNode;
}
####################################################################################################################################
# DOC_EXISTS - Test if a node exists
####################################################################################################################################
sub doc_exists
{
my $oDoc = shift;
my $strName = shift;
for (my $iIndex = 0; $iIndex < @{$$oDoc{children}}; $iIndex++)
{
if ($$oDoc{children}[$iIndex]{name} eq $strName)
{
return true;
}
}
return false;
}
####################################################################################################################################
# DOC_LIST - Get a list of nodes
####################################################################################################################################
sub doc_list
{
my $oDoc = shift;
my $strName = shift;
my $bRequired = shift;
my @oyNode;
for (my $iIndex = 0; $iIndex < @{$$oDoc{children}}; $iIndex++)
{
if (!defined($strName) || $$oDoc{children}[$iIndex]{name} eq $strName)
{
push(@oyNode, $$oDoc{children}[$iIndex]);
}
}
if (@oyNode == 0 && (!defined($bRequired) || $bRequired))
{
confess "unable to find child ${strName} in node $$oDoc{name}";
}
return \@oyNode;
}
####################################################################################################################################
# DOC_VALUE - Get value from a node
####################################################################################################################################
sub doc_value
{
my $oNode = shift;
my $strDefault = shift;
if (defined($oNode) && defined($$oNode{value}))
{
return $$oNode{value};
}
return $strDefault;
}
####################################################################################################################################
# DOC_PARSE - Parse the XML tree into something more usable
####################################################################################################################################
sub doc_parse
{
my $strName = shift;
my $oyNode = shift;
my %oOut;
my $iIndex = 0;
my $bText = $strName eq 'text' || $strName eq 'li';
# Store the node name
$oOut{name} = $strName;
if (keys($$oyNode[$iIndex]))
{
$oOut{param} = $$oyNode[$iIndex];
}
$iIndex++;
# Look for strings and children
while (defined($$oyNode[$iIndex]))
{
# Process string data
if (ref(\$$oyNode[$iIndex]) eq 'SCALAR' && $$oyNode[$iIndex] eq '0')
{
$iIndex++;
my $strBuffer = $$oyNode[$iIndex++];
# Strip tabs, CRs, and LFs
$strBuffer =~ s/\t|\r//g;
# If anything is left
if (length($strBuffer) > 0)
{
# If text node then create array entries for strings
if ($bText)
{
if (!defined($oOut{children}))
{
$oOut{children} = [];
}
push($oOut{children}, $strBuffer);
}
# Don't allow strings mixed with children
elsif (length(trim($strBuffer)) > 0)
{
if (defined($oOut{children}))
{
confess "text mixed with children in node ${strName} (spaces count)";
}
if (defined($oOut{value}))
{
confess "value is already defined in node ${strName} - this shouldn't happen";
}
# Don't allow text mixed with children
$oOut{value} = $strBuffer;
}
}
}
# Process a child
else
{
if (defined($oOut{value}) && $bText)
{
confess "text mixed with children in node ${strName} before child " . $$oyNode[$iIndex++] . " (spaces count)";
}
if (!defined($oOut{children}))
{
$oOut{children} = [];
}
push($oOut{children}, doc_parse($$oyNode[$iIndex++], $$oyNode[$iIndex++]));
}
}
return \%oOut;
}
####################################################################################################################################
# DOC_SAVE - save a doc
####################################################################################################################################
sub doc_write
{
my $strFileName = shift;
my $strBuffer = shift;
# Open the file
my $hFile;
open($hFile, '>', $strFileName)
or confess &log(ERROR, "unable to open ${strFileName}");
# Write the buffer
my $iBufferOut = syswrite($hFile, $strBuffer);
# Report any errors
if (!defined($iBufferOut) || $iBufferOut != length($strBuffer))
{
confess "unable to write '${strBuffer}'" . (defined($!) ? ': ' . $! : '');
}
# Close the file
close($hFile);
}
####################################################################################################################################
# Load command line parameters and config
####################################################################################################################################
my $bHelp = false; # Display usage
my $bVersion = false; # Display version
my $bQuiet = false; # Sets log level to ERROR
my $strLogLevel = 'info'; # Log level for tests
GetOptions ('help' => \$bHelp,
'version' => \$bVersion,
'quiet' => \$bQuiet,
'log-level=s' => \$strLogLevel)
or pod2usage(2);
# Display version and exit if requested
if ($bHelp || $bVersion)
{
print 'pg_backrest ' . version_get() . " doc builder\n";
if ($bHelp)
{
print "\n";
pod2usage();
}
exit 0;
}
# Set console log level
if ($bQuiet)
{
$strLogLevel = 'off';
}
log_level_set(undef, uc($strLogLevel));
####################################################################################################################################
# Load the doc file
####################################################################################################################################
# Initialize parser object and parse the file
my $oParser = XML::Checker::Parser->new(ErrorContext => 2, Style => 'Tree');
my $strFile = dirname($0) . '/doc.xml';
my $oTree;
eval
{
local $XML::Checker::FAIL = sub
{
my $iCode = shift;
die XML::Checker::error_string($iCode, @_);
};
$oTree = $oParser->parsefile($strFile);
};
# Report any error that stopped parsing
if ($@)
{
$@ =~ s/at \/.*?$//s; # remove module line number
die "malformed xml in '${strFile}':\n" . trim($@);
}
####################################################################################################################################
# Build the document from xml
####################################################################################################################################
my $oDocIn = doc_parse(${$oTree}[0], ${$oTree}[1]);
sub doc_build
{
my $oDoc = shift;
# Initialize the node object
my $oOut = {name => $$oDoc{name}, children => []};
my $strError = "in node $$oDoc{name}";
# Get all params
if (defined($$oDoc{param}))
{
for my $strParam (keys $$oDoc{param})
{
$$oOut{param}{$strParam} = $$oDoc{param}{$strParam};
}
}
if (defined($$oDoc{children}))
{
for (my $iIndex = 0; $iIndex < @{$$oDoc{children}}; $iIndex++)
{
my $oSub = $$oDoc{children}[$iIndex];
my $strName = $$oSub{name};
if ($strName eq 'text')
{
$$oOut{field}{text} = $oSub;
}
elsif (defined($$oSub{value}))
{
$$oOut{field}{$strName} = $$oSub{value};
}
elsif (!defined($$oSub{children}))
{
$$oOut{field}{$strName} = true;
}
else
{
push($$oOut{children}, doc_build($oSub));
}
}
}
return $oOut;
}
my $oDocOut = doc_build($oDocIn);
####################################################################################################################################
# Build commands pulled from the code
####################################################################################################################################
# Get the option rules
my $oOptionRule = optionRuleGet();
my %oOptionFound;
sub doc_out_get
{
my $oNode = shift;
my $strName = shift;
my $bRequired = shift;
foreach my $oChild (@{$$oNode{children}})
{
if ($$oChild{name} eq $strName)
{
return $oChild;
}
}
if (!defined($bRequired) || $bRequired)
{
confess "unable to find child node '${strName}' in node '$$oNode{name}'";
}
return undef;
}
sub doc_option_list_process
{
my $oOptionListOut = shift;
my $strOperation = shift;
foreach my $oOptionOut (@{$$oOptionListOut{children}})
{
my $strOption = $$oOptionOut{param}{id};
# if (defined($oOptionFound{$strOption}))
# {
# confess "option ${strOption} has already been found";
# }
if ($strOption eq 'help' || $strOption eq 'version')
{
next;
}
$oOptionFound{$strOption} = true;
if (!defined($$oOptionRule{$strOption}{&OPTION_RULE_TYPE}))
{
confess "unable to find option $strOption";
}
$$oOptionOut{field}{default} = optionDefault($strOption, $strOperation);
if (defined($$oOptionOut{field}{default}))
{
$$oOptionOut{field}{required} = false;
if ($$oOptionRule{$strOption}{&OPTION_RULE_TYPE} eq &OPTION_TYPE_BOOLEAN)
{
$$oOptionOut{field}{default} = $$oOptionOut{field}{default} ? 'y' : 'n';
}
}
else
{
$$oOptionOut{field}{required} = optionRequired($strOption, $strOperation);
}
if (defined($strOperation))
{
$$oOptionOut{field}{cmd} = true;
}
if ($strOption eq 'cmd-remote')
{
$$oOptionOut{field}{default} = 'same as local';
}
# &log(INFO, "operation " . (defined($strOperation) ? $strOperation : '[undef]') .
# ", option ${strOption}, required $$oOptionOut{field}{required}" .
# ", default " . (defined($$oOptionOut{field}{default}) ? $$oOptionOut{field}{default} : 'undef'));
}
}
# Output general options
my $oOperationGeneralOptionListOut = doc_out_get(doc_out_get(doc_out_get($oDocOut, 'operation'), 'operation-general'), 'option-list');
doc_option_list_process($oOperationGeneralOptionListOut);
# Output commands
my $oCommandListOut = doc_out_get(doc_out_get($oDocOut, 'operation'), 'command-list');
foreach my $oCommandOut (@{$$oCommandListOut{children}})
{
my $strOperation = $$oCommandOut{param}{id};
my $oOptionListOut = doc_out_get($oCommandOut, 'option-list', false);
if (defined($oOptionListOut))
{
doc_option_list_process($oOptionListOut, $strOperation);
}
my $oExampleListOut = doc_out_get($oCommandOut, 'command-example-list');
foreach my $oExampleOut (@{$$oExampleListOut{children}})
{
if (defined($$oExampleOut{param}{title}))
{
$$oExampleOut{param}{title} = 'Example: ' . $$oExampleOut{param}{title};
}
else
{
$$oExampleOut{param}{title} = 'Example';
}
}
# $$oExampleListOut{param}{title} = 'Examples';
}
# Output config section
my $oConfigSectionListOut = doc_out_get(doc_out_get($oDocOut, 'config'), 'config-section-list');
foreach my $oConfigSectionOut (@{$$oConfigSectionListOut{children}})
{
my $oOptionListOut = doc_out_get($oConfigSectionOut, 'config-key-list', false);
if (defined($oOptionListOut))
{
doc_option_list_process($oOptionListOut);
}
}
# Mark undocumented features as processed
$oOptionFound{'no-fork'} = true;
$oOptionFound{'test'} = true;
$oOptionFound{'test-delay'} = true;
# Make sure all options were processed
foreach my $strOption (sort(keys($oOptionRule)))
{
if (!defined($oOptionFound{$strOption}))
{
confess "option ${strOption} was not found";
}
}
####################################################################################################################################
# Render the document
####################################################################################################################################
sub doc_render
{
my $oDoc = shift;
my $strType = shift;
my $iDepth = shift;
my $bChildList = shift;
my $strBuffer = "";
my $bList = $$oDoc{name} =~ /.*-bullet-list$/;
$bChildList = defined($bChildList) ? $bChildList : false;
my $iChildDepth = $iDepth;
if ($strType eq 'markdown')
{
if (defined($$oDoc{param}{id}))
{
my @stryToken = split('-', $$oDoc{name});
my $strTitle = @stryToken == 0 ? '[unknown]' : $stryToken[@stryToken - 1];
$strBuffer = ('#' x $iDepth) . " `$$oDoc{param}{id}` " . $strTitle;
}
if (defined($$oDoc{param}{title}))
{
$strBuffer = ('#' x $iDepth) . ' ';
if (defined($$oDoc{param}{version}))
{
$strBuffer .= "v$$oDoc{param}{version}: ";
}
$strBuffer .= $$oDoc{param}{title};
}
if (defined($$oDoc{param}{subtitle}))
{
if (!defined($$oDoc{param}{title}))
{
confess "subtitle not valid without title";
}
$strBuffer .= " - " . $$oDoc{param}{subtitle};
}
if ($strBuffer ne "")
{
$iChildDepth++;
}
if (defined($$oDoc{field}{text}))
{
if ($strBuffer ne "")
{
$strBuffer .= "\n\n";
}
if ($bChildList)
{
$strBuffer .= '- ';
}
$strBuffer .= doc_render_text($$oDoc{field}{text}, $strType);
}
if ($$oDoc{name} eq 'config-key' || $$oDoc{name} eq 'option')
{
my $strError = "config section ?, key $$oDoc{param}{id} requires";
my $bRequired = defined($$oDoc{field}{required}) && $$oDoc{field}{required};
my $strDefault = $$oDoc{field}{default};
my $strAllow = $$oDoc{field}{allow};
my $strOverride = $$oDoc{field}{override};
my $strExample = $$oDoc{field}{example};
if (defined($strExample))
{
if (index($strExample, '=') == -1)
{
$strExample = "=${strExample}";
}
else
{
$strExample = " ${strExample}";
}
$strExample = "$$oDoc{param}{id}${strExample}";
if (defined($$oDoc{field}{cmd}) && $$oDoc{field}{cmd})
{
$strExample = '--' . $strExample;
if (index($$oDoc{field}{example}, ' ') != -1)
{
$strExample = "\"${strExample}\"";
}
}
}
$strBuffer .= "\n```\n" .
"required: " . ($bRequired ? 'y' : 'n') . "\n" .
(defined($strDefault) ? "default: ${strDefault}\n" : '') .
(defined($strAllow) ? "allow: ${strAllow}\n" : '') .
(defined($strOverride) ? "override: ${strOverride}\n" : '') .
(defined($strExample) ? "example: ${strExample}\n" : '') .
"```";
}
if ($strBuffer ne "" && $iDepth != 1 && !$bList)
{
$strBuffer = "\n\n" . $strBuffer;
}
}
else
{
confess "unknown type ${strType}";
}
my $bFirst = true;
foreach my $oChild (@{$$oDoc{children}})
{
if ($strType eq 'markdown')
{
}
else
{
confess "unknown type ${strType}";
}
$strBuffer .= doc_render($oChild, $strType, $iChildDepth, $bList);
}
if ($iDepth == 1)
{
if ($strType eq 'markdown')
{
$strBuffer .= "\n";
}
else
{
confess "unknown type ${strType}";
}
}
return $strBuffer;
}
# Write markdown
doc_write(dirname($0) . '/../README.md', doc_render($oDocOut, 'markdown', 1));

doc/doc.xml Normal file
@@ -0,0 +1,799 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE doc SYSTEM "doc.dtd">
<doc title="PgBackRest" subtitle="Simple Postgres Backup &amp; Restore">
<intro>
<text><backrest/> aims to be a simple backup and restore system that can seamlessly scale up to the largest databases and workloads.
Primary <backrest/> features:
<ul>
<li>Local or remote backup</li>
<li>Multi-threaded backup/restore for performance</li>
<li>Checksums</li>
<li>Safe backups (checks that logs required for consistency are present before backup completes)</li>
<li>Full, differential, and incremental backups</li>
<li>Backup rotation (and minimum retention rules with optional separate retention for archive)</li>
<li>In-stream compression/decompression</li>
<li>Archiving and retrieval of logs for replicas/restores built in</li>
<li>Async archiving for very busy systems (including space limits)</li>
<li>Backup directories are consistent Postgres clusters (when hardlinks are on and compression is off)</li>
<li>Tablespace support</li>
<li>Restore delta option</li>
<li>Restore using timestamp/size or checksum</li>
<li>Restore remapping base/tablespaces</li>
</ul>
Instead of relying on traditional backup tools like tar and rsync, <backrest/> implements all backup features internally and uses a custom protocol for communicating with remote systems. Removing reliance on tar and rsync allows for better solutions to database-specific backup issues. The custom remote protocol limits the types of connections that are required to perform a backup which increases security.</text>
</intro>
<install title="Install">
<text><backrest/> is written entirely in Perl and uses some non-standard modules that must be installed from CPAN.</text>
<install-system-list>
<install-system title="Ubuntu 12.04">
<text>* Starting from a clean install, update the OS:
<code-block>
apt-get update
apt-get upgrade (reboot if required)
</code-block>
* Install ssh, git and cpanminus:
<code-block>
apt-get install ssh
apt-get install git
apt-get install cpanminus
</code-block>
* Install Postgres (instructions from http://www.postgresql.org/download/linux/ubuntu/)
Create the file /etc/apt/sources.list.d/pgdg.list, and add a line for the repository:
<code-block>
deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main
</code-block>
* Then run the following:
<code-block>
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
apt-get install postgresql-9.3
apt-get install postgresql-server-dev-9.3
</code-block>
* Install required Perl modules:
<code-block>
cpanm JSON
cpanm Net::OpenSSH
cpanm IPC::System::Simple
cpanm Digest::SHA
cpanm Compress::Zlib
</code-block>
* Install PgBackRest
<backrest/> can be installed by downloading the most recent release:
https://github.com/pgmasters/backrest/releases
<backrest/> can be installed anywhere but it's best (though not required) to install it in the same location on all systems.</text>
</install-system>
</install-system-list>
</install>
<operation title="Operation">
<operation-general title="General Options">
<text>These options are either global or used by all commands.</text>
<option-list>
<!-- OPERATION - GENERAL - CONFIG OPTION -->
<option id="config">
<text>By default <backrest/> expects its configuration file to be located at `/etc/pg_backrest.conf`. Use this option to specify another location.</text>
<example>/var/lib/backrest/pg_backrest.conf</example>
</option>
<!-- OPERATION - GENERAL - STANZA OPTION -->
<option id="stanza">
<text>Defines the stanza for the command. A stanza is the configuration for a database that defines where it is located, how it will be backed up, archiving options, etc. Most db servers will only have one Postgres cluster and therefore one stanza, whereas backup servers will have a stanza for every database that needs to be backed up.
Examples of how to configure a stanza can be found in the `configuration examples` section.</text>
<example>main</example>
</option>
<!-- OPERATION - GENERAL - HELP OPTION -->
<option id="help">
<text>Displays the <backrest/> help.</text>
</option>
<!-- OPERATION - GENERAL - VERSION OPTION -->
<option id="version">
<text>Displays the <backrest/> version.</text>
</option>
</option-list>
</operation-general>
<command-list title="Commands">
<!-- OPERATION - BACKUP COMMAND -->
<command id="backup">
<text>Perform a database backup. <backrest/> does not have a built-in scheduler so it's best to run it from cron or some other scheduling mechanism.</text>
<option-list>
<!-- OPERATION - BACKUP COMMAND - TYPE OPTION -->
<option id="type">
<text>The following backup types are supported:
<ul>
<li><id>full</id> - all database files will be copied and there will be no dependencies on previous backups.</li>
<li><id>incr</id> - incremental from the last successful backup.</li>
<li><id>diff</id> - like an incremental backup but always based on the last full backup.</li>
</ul></text>
<example>full</example>
</option>
<!-- OPERATION - BACKUP COMMAND - NO-START-STOP OPTION -->
<option id="no-start-stop">
<text>This option prevents <backrest/> from running <code>pg_start_backup()</code> and <code>pg_stop_backup()</code> on the database. In order for this to work <postgres/> should be shut down and <backrest/> will generate an error if it is not.
The purpose of this option is to allow cold backups. The <path>pg_xlog</path> directory is copied as-is and <setting>archive-check</setting> is automatically disabled for the backup.</text>
</option>
<!-- OPERATION - BACKUP COMMAND - FORCE OPTION -->
<option id="force">
<text>When used with <param>--no-start-stop</param> a backup will be run even if <backrest/> thinks that <postgres/> is running. <b>This option should be used with extreme care as it will likely result in a bad backup.</b>
There are some scenarios where a backup might still be desirable under these conditions. For example, if a server crashes and the database volume can only be mounted read-only, it would be a good idea to take a backup even if <file>postmaster.pid</file> is present. In this case it would be better to revert to the prior backup and replay WAL, but possibly there is a very important transaction in a WAL segment that did not get archived.</text>
</option>
</option-list>
<command-example-list>
<command-example title="Full Backup">
<text><code-block>
/path/to/pg_backrest.pl --stanza=db --type=full backup
</code-block>
Run a <id>full</id> backup on the <id>db</id> stanza. <param>--type</param> can also be set to <id>incr</id> or <id>diff</id> for incremental or differential backups. However, if no <id>full</id> backup exists then a <id>full</id> backup will be forced even if <id>incr</id> or <id>diff</id> is requested.</text>
</command-example>
</command-example-list>
</command>
<!-- OPERATION - ARCHIVE-PUSH COMMAND -->
<command id="archive-push">
<text>Archive a WAL segment to the repository.</text>
<command-example-list>
<command-example>
<text><code-block>
/path/to/pg_backrest.pl --stanza=db archive-push %p
</code-block>
Accepts a WAL segment from <postgres/> and archives it in the repository. <param>%p</param> is how <postgres/> specifies the location of the WAL segment to be archived.</text>
</command-example>
</command-example-list>
</command>
<!-- OPERATION - ARCHIVE-GET COMMAND -->
<command id="archive-get">
<text>Get a WAL segment from the repository.</text>
<command-example-list>
<command-example>
<text><code-block>
/path/to/pg_backrest.pl --stanza=db archive-get %f %p
</code-block>
Retrieves a WAL segment from the repository. This command is used in <file>recovery.conf</file> to restore a backup, perform PITR, or as an alternative to streaming for keeping a replica up to date. <param>%f</param> is how <postgres/> specifies the WAL segment it needs and <param>%p</param> is the location where it should be copied.</text>
</command-example>
</command-example-list>
</command>
<!-- OPERATION - EXPIRE COMMAND -->
<command id="expire">
<text><backrest/> does backup rotation, but is not concerned with when the backups were created. So if two full backups are configured for retention, <backrest/> will keep two full backups no matter whether they occur two hours apart or two weeks apart.</text>
<command-example-list>
<command-example>
<text><code-block>
/path/to/pg_backrest.pl --stanza=db expire
</code-block>
Expire (rotate) any backups that exceed the defined retention. Expiration is run automatically after every successful backup, so there is no need to run this command separately unless you have reduced retention, usually to free up some space.</text>
</command-example>
</command-example-list>
</command>
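The count-based retention described above can be sketched with plain shell commands. The repository layout and backup directory names below are illustrative only, not <backrest/>'s actual on-disk format:

```shell
# Count-based retention sketch: keep the 2 newest full backups regardless of age.
# Directory names are illustrative (timestamped in the style of backup labels).
REPO=$(mktemp -d)
touch "$REPO/20150101-000000F" "$REPO/20150201-000000F" "$REPO/20150301-000000F"
# Names sort chronologically, so everything past the 2 newest is expired
ls "$REPO" | sort -r | tail -n +3 | while read -r f; do rm "$REPO/$f"; done
ls "$REPO" | sort
```

Whether the three backups span six hours or six months, the result is the same: the two newest remain.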
<!-- OPERATION - RESTORE COMMAND -->
<command id="restore">
<text>Perform a database restore. This command is generally run manually, but there are instances where it might be automated.</text>
<option-list>
<!-- OPERATION - RESTORE COMMAND - SET OPTION -->
<option id="set">
<text>The backup set to be restored. <id>latest</id> will restore the latest backup, otherwise provide the name of the backup to restore.</text>
<example>20150131-153358F_20150131-153401I</example>
</option>
<!-- OPERATION - RESTORE COMMAND - DELTA OPTION -->
<option id="delta">
<text>By default the <postgres/> data and tablespace directories are expected to be present but empty. This option performs a delta restore using checksums.</text>
</option>
<!-- OPERATION - RESTORE COMMAND - FORCE OPTION -->
<option id="force">
<text>By itself this option forces the <postgres/> data and tablespace paths to be completely overwritten. In combination with <param>--delta</param> a timestamp/size delta will be performed instead of using checksums.</text>
</option>
<!-- OPERATION - RESTORE COMMAND - TYPE OPTION -->
<option id="type">
<text>The following recovery types are supported:
<ul>
<li><id>default</id> - recover to the end of the archive stream.</li>
<li><id>name</id> - recover the restore point specified in <param>--target</param>.</li>
<li><id>xid</id> - recover to the transaction id specified in <param>--target</param>.</li>
<li><id>time</id> - recover to the time specified in <param>--target</param>.</li>
<li><id>preserve</id> - preserve the existing <file>recovery.conf</file> file.</li>
</ul></text>
<example>xid</example>
</option>
<!-- OPERATION - RESTORE COMMAND - TARGET OPTION -->
<option id="target">
<text>Defines the recovery target when <param>--type</param> is <id>name</id>, <id>xid</id>, or <id>time</id>.</text>
<example>2015-01-30 14:15:11 EST</example>
</option>
<!-- OPERATION - RESTORE COMMAND - TARGET-EXCLUSIVE OPTION -->
<option id="target-exclusive">
<text>Defines whether recovery to the target would be exclusive (the default is inclusive) and is only valid when <param>--type</param> is <id>time</id> or <id>xid</id>. For example, using <param>--target-exclusive</param> would exclude the contents of transaction <id>1007</id> when <param>--type=xid</param> and <param>--target=1007</param>. See <param>recovery_target_inclusive</param> option in the <postgres/> docs for more information.</text>
</option>
<!-- OPERATION - RESTORE COMMAND - TARGET-RESUME OPTION -->
<option id="target-resume">
<text>Specifies whether recovery should resume when the recovery target is reached. See <setting>pause_at_recovery_target</setting> in the <postgres/> docs for more information.</text>
</option>
<!-- OPERATION - RESTORE COMMAND - TARGET-TIMELINE OPTION -->
<option id="target-timeline">
<text>Recovers along the specified timeline. See <setting>recovery_target_timeline</setting> in the <postgres/> docs for more information.</text>
<example>3</example>
</option>
<!-- OPERATION - RESTORE COMMAND - RECOVERY-SETTING OPTION -->
<option id="recovery-setting">
<text>Recovery settings normally placed in <file>recovery.conf</file> can be specified with this option. See http://www.postgresql.org/docs/X.X/static/recovery-config.html for details on the available options (replace X.X with your database version). This option can be used multiple times.
Note: <setting>restore_command</setting> will be automatically generated but can be overridden with this option. Be careful about specifying your own <setting>restore_command</setting> as <backrest/> is designed to handle this for you. Recovery target options (<setting>recovery_target_name</setting>, <setting>recovery_target_time</setting>, etc.) are generated automatically by <backrest/> and should not be set with this option.
Recovery settings can also be set in the <setting>restore:recovery-setting</setting> section of pg_backrest.conf. For example:
<code-block>
[restore:recovery-setting]
primary_conninfo=db.mydomain.com
standby_mode=on
</code-block>
Since <backrest/> does not start <postgres/> after writing the <file>recovery.conf</file> file, it is always possible to edit/check <file>recovery.conf</file> before manually restarting.</text>
<example>primary_conninfo=db.mydomain.com</example>
</option>
<!-- OPERATION - RESTORE COMMAND - TABLESPACE-MAP OPTION -->
<option id="tablespace-map">
<text>Moves a tablespace to a new location during the restore. This is useful when tablespace locations are not the same on a replica, or an upgraded system has different mount points.
Since <postgres/> 9.2 tablespace locations are no longer stored in pg_tablespace, so moving tablespaces can be done with impunity. However, moving a tablespace to the <setting>data_directory</setting> is not recommended and may cause problems. For more information on moving tablespaces http://www.databasesoup.com/2013/11/moving-tablespaces.html is a good resource.</text>
<example>ts_01=/db/ts_01</example>
</option>
</option-list>
<command-example-list>
<command-example title="Restore Latest">
<text><code-block>
/path/to/pg_backrest.pl --stanza=db --type=name --target=release restore
</code-block>
Restores the latest database backup and then recovers to the <id>release</id> restore point.</text>
</command-example>
</command-example-list>
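For point-in-time recovery, the restore-point example above would instead pair <param>--type=time</param> with a timestamp target. The sketch below only constructs and prints a hypothetical invocation; the stanza name and install path are illustrative:

```shell
# Hypothetical PITR invocation; nothing is executed against a real cluster here.
RESTORE_CMD='/path/to/pg_backrest.pl --stanza=db --type=time --target="2015-01-30 14:15:11 EST" restore'
echo "$RESTORE_CMD"
```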
</command>
</command-list>
</operation>
<config title="Configuration">
<text><backrest/> can be used entirely with command-line parameters but a configuration file is more practical for installations that are complex or set a lot of options. The default location for the configuration file is <file>/etc/pg_backrest.conf</file>.</text>
<config-example-list title="Examples">
<config-example title="Configuring Postgres for Archiving">
<text>Modify the following settings in <file>postgresql.conf</file>:
<code-block>
wal_level = archive
archive_mode = on
archive_command = '/path/to/backrest/bin/pg_backrest.pl --stanza=db archive-push %p'
</code-block>
Replace the path with the actual location where <backrest/> was installed. The stanza parameter should be changed to the actual stanza name for your database.
</text>
</config-example>
<config-example title="Minimal Configuration">
<text>The absolute minimum required to run <backrest/> (if all defaults are accepted) is the database path.
<file>/etc/pg_backrest.conf</file>:
<code-block>
[main]
db-path=/data/db
</code-block>
The <setting>db-path</setting> option could also be provided on the command line, but it's best to use a configuration file as options tend to pile up quickly.</text>
</config-example>
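A quick way to sanity-check such a file before running <backrest/> is to confirm the stanza section and the required key are present. This sketch writes the minimal file to a temporary path standing in for <file>/etc/pg_backrest.conf</file>:

```shell
# Write the minimal configuration to a temp file and verify its contents
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[main]
db-path=/data/db
EOF
# Check for the stanza section header and the required db-path key
grep -q '^\[main\]' "$CONF" && grep -q '^db-path=' "$CONF" && echo "minimal config ok"
```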
<config-example title="Simple Single Host Configuration">
<text>This configuration is appropriate for a small installation where backups are being made locally or to a remote file system that is mounted locally. A number of additional options are set:
<ul>
<li><setting>cmd-psql</setting> - Custom location and parameters for psql.</li>
<li><setting>cmd-psql-option</setting> - Options for psql can be set per stanza.</li>
<li><setting>compress</setting> - Disable compression (handy if the file system is already compressed).</li>
<li><setting>repo-path</setting> - Path to the <backrest/> repository where backups and WAL archive are stored.</li>
<li><setting>log-level-file</setting> - Set the file log level to debug (lots of extra info if something is not working as expected).</li>
<li><setting>hardlink</setting> - Create hardlinks between backups (but never between full backups).</li>
<li><setting>thread-max</setting> - Use 2 threads for backup/restore operations.</li>
</ul>
<file>/etc/pg_backrest.conf</file>:
<code-block>
[global:command]
cmd-psql=/usr/local/bin/psql -X %option%
[global:general]
compress=n
repo-path=/var/lib/backrest
[global:log]
log-level-file=debug
[global:backup]
hardlink=y
thread-max=2
[main]
db-path=/data/db
[main:command]
cmd-psql-option=--port=5433
</code-block>
</text>
</config-example>
<config-example title="Simple Multiple Host Configuration">
<text>This configuration is appropriate for a small installation where backups are being made remotely. Make sure that postgres@db-host has trusted ssh to backrest@backup-host and vice versa. This configuration assumes that you have pg_backrest_remote.pl and pg_backrest.pl in the same path on both servers.
<file>/etc/pg_backrest.conf</file> on the db host:
<code-block>
[global:general]
repo-path=/path/to/db/repo
repo-remote-path=/path/to/backup/repo
[global:backup]
backup-host=backup.mydomain.com
backup-user=backrest
[global:archive]
archive-async=y
[main]
db-path=/data/db
</code-block>
<file>/etc/pg_backrest.conf</file> on the backup host:
<code-block>
[global:general]
repo-path=/path/to/backup/repo
[main]
db-host=db.mydomain.com
db-path=/data/db
db-user=postgres
</code-block>
</text>
</config-example>
</config-example-list>
<config-section-list title="Options">
<!-- CONFIG - COMMAND SECTION -->
<config-section id="command">
<text>The <setting>command</setting> section defines the location of external commands that are used by <backrest/>.</text>
<config-key-list>
<!-- CONFIG - COMMAND SECTION - CMD-PSQL KEY -->
<config-key id="cmd-psql">
<text>Defines the full path to <cmd>psql</cmd>. <cmd>psql</cmd> is used to call <code>pg_start_backup()</code> and <code>pg_stop_backup()</code>.
If additional per stanza parameters need to be passed to <cmd>psql</cmd> (such as <param>--port</param> or <param>--cluster</param>) then add <param>%option%</param> to the command line and use <setting>cmd-psql-option</setting> to set options.</text>
<example>/usr/bin/psql -X %option%</example>
</config-key>
<!-- CONFIG - COMMAND SECTION - CMD-PSQL-OPTION KEY -->
<config-key id="cmd-psql-option">
<text>Allows per stanza command line parameters to be passed to <cmd>psql</cmd>.</text>
<example>--port=5433</example>
</config-key>
<!-- CONFIG - COMMAND SECTION - CMD-REMOTE KEY -->
<config-key id="cmd-remote">
<text>Defines the location of <cmd>pg_backrest_remote.pl</cmd>.
Required only if the path to <cmd>pg_backrest_remote.pl</cmd> is different on the local and remote systems. If not defined, the remote path will be assumed to be the same as the local path.</text>
<default>same as local</default>
<example>/usr/lib/backrest/bin/pg_backrest_remote.pl</example>
</config-key>
</config-key-list>
</config-section>
<!-- CONFIG - LOG -->
<config-section id="log">
<text>The <setting>log</setting> section defines logging-related settings. The following log levels are supported:
<ul>
<li><id>off</id> - No logging at all (not recommended)</li>
<li><id>error</id> - Log only errors</li>
<li><id>warn</id> - Log warnings and errors</li>
<li><id>info</id> - Log info, warnings, and errors</li>
<li><id>debug</id> - Log debug, info, warnings, and errors</li>
<li><id>trace</id> - Log trace (very verbose debugging), debug, info, warnings, and errors</li>
</ul></text>
<!-- CONFIG - LOG SECTION - LEVEL-FILE KEY -->
<config-key-list>
<config-key id="log-level-file">
<text>Sets file log level.</text>
<example>debug</example>
</config-key>
<!-- CONFIG - LOG SECTION - LEVEL-CONSOLE KEY -->
<config-key id="log-level-console">
<text>Sets console log level.</text>
<example>error</example>
</config-key>
</config-key-list>
</config-section>
<!-- CONFIG - GENERAL -->
<config-section id="general">
<text>The <setting>general</setting> section defines settings that are shared between multiple operations.</text>
<!-- CONFIG - GENERAL SECTION - BUFFER-SIZE KEY -->
<config-key-list>
<config-key id="buffer-size">
<text>Set the buffer size used for copy, compress, and uncompress functions. A maximum of 3 buffers will be in use at a time per thread. An additional maximum of 256K per thread may be used for zlib buffers.</text>
<allow>4096 - 8388608</allow>
<example>16384</example>
</config-key>
<!-- CONFIG - BACKUP SECTION - COMPRESS -->
<config-key id="compress">
<text>Enable gzip compression. Backup files are compatible with command-line gzip tools.</text>
<example>n</example>
</config-key>
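Because the plain gzip format is used, repository files can be checked and read with stock command-line tools. A small sketch, using gzip directly on an illustrative file rather than a real backup file:

```shell
# Files written with compress=y are plain gzip; standard tools can verify and read them.
TMP=$(mktemp -d)
echo 'page data' > "$TMP/relfile"
gzip -9 "$TMP/relfile"            # compress-level=9 corresponds to gzip -9
gunzip -t "$TMP/relfile.gz"       # integrity check, as you might run on a repo file
gunzip -c "$TMP/relfile.gz"       # stream the original contents back out
```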
<!-- CONFIG - GENERAL SECTION - COMPRESS-LEVEL KEY -->
<config-key id="compress-level">
<text>Sets the zlib level to be used for file compression when <setting>compress=y</setting>.</text>
<allow>0-9</allow>
<example>9</example>
</config-key>
<!-- CONFIG - GENERAL SECTION - COMPRESS-LEVEL-NETWORK KEY -->
<config-key id="compress-level-network">
<text>Sets the zlib level to be used for protocol compression when <setting>compress=n</setting> and the database is not on the same host as the backup. Protocol compression is used to reduce network traffic but can be disabled by setting <setting>compress-level-network=0</setting>. When <setting>compress=y</setting> the <setting>compress-level-network</setting> setting is ignored and <setting>compress-level</setting> is used instead so that the file is only compressed once. SSH compression is always disabled.</text>
<allow>0-9</allow>
<example>1</example>
</config-key>
<!-- CONFIG - GENERAL SECTION - REPO-PATH KEY -->
<config-key id="repo-path">
<text>Path to the backrest repository where WAL segments, backups, logs, etc. are stored.</text>
<example>/data/db/backrest</example>
</config-key>
<!-- CONFIG - GENERAL SECTION - REPO-REMOTE-PATH KEY -->
<config-key id="repo-remote-path">
<text>Path to the remote backrest repository where WAL segments, backups, logs, etc. are stored.</text>
<example>/backup/backrest</example>
</config-key>
</config-key-list>
</config-section>
<!-- CONFIG - BACKUP -->
<config-section id="backup">
<text>The <setting>backup</setting> section defines settings related to backup.</text>
<!-- CONFIG - BACKUP SECTION - HOST KEY -->
<config-key-list>
<config-key id="backup-host">
<text>Sets the backup host when backing up remotely via SSH. Make sure that trusted SSH authentication is configured between the db host and the backup host.
When backing up to a locally mounted network filesystem this setting is not required.</text>
<example>backup.domain.com</example>
</config-key>
<!-- CONFIG - BACKUP SECTION - USER KEY -->
<config-key id="backup-user">
<text>Sets the user account on the backup host.</text>
<example>backrest</example>
</config-key>
<!-- CONFIG - BACKUP SECTION - START-FAST -->
<config-key id="start-fast">
<text>Forces a checkpoint (by passing <id>true</id> to the <id>fast</id> parameter of <code>pg_start_backup()</code>) so the backup begins immediately.</text>
<example>y</example>
</config-key>
<!-- CONFIG - BACKUP SECTION - HARDLINK -->
<config-key id="hardlink">
<text>Enable hard-linking of files in differential and incremental backups to their full backups. This gives the appearance that each backup is a full backup. Be careful, though, because modifying files that are hard-linked can affect all the backups in the set.</text>
<example>y</example>
</config-key>
<!-- CONFIG - BACKUP SECTION - THREAD-MAX -->
<config-key id="thread-max">
<text>Defines the number of threads to use for backup or restore. Each thread will perform compression and transfer to make the backup run faster, but don't set <setting>thread-max</setting> so high that it impacts database performance during backup.</text>
<example>4</example>
</config-key>
<!-- CONFIG - BACKUP SECTION - THREAD-TIMEOUT -->
<config-key id="thread-timeout">
<text>Maximum amount of time (in seconds) that a backup thread should run. This limits the amount of time that a thread might be stuck due to unforeseen issues during the backup. Has no effect when <setting>thread-max=1</setting>.</text>
<example>3600</example>
</config-key>
<!-- CONFIG - BACKUP SECTION - ARCHIVE-CHECK -->
<config-key id="archive-check">
<text>Checks that all WAL segments required to make the backup consistent are present in the WAL archive. It's a good idea to leave this as the default unless you are using another method for archiving.</text>
<example>n</example>
</config-key>
<!-- CONFIG - BACKUP SECTION - ARCHIVE-COPY -->
<config-key id="archive-copy">
<text>Store WAL segments required to make the backup consistent in the backup's pg_xlog path. This slightly paranoid option protects against corruption or premature expiration in the WAL segment archive. PITR won't be possible without the WAL segment archive and this option also consumes more space.</text>
<example>y</example>
</config-key>
</config-key-list>
</config-section>
<!-- CONFIG - ARCHIVE -->
<config-section id="archive">
<text>The <setting>archive</setting> section defines parameters for async archiving. Archive files are stored locally first, then a background process picks them up and moves them to the backup.</text>
<!-- CONFIG - ARCHIVE SECTION - PATH KEY -->
<config-key-list>
<!-- CONFIG - ARCHIVE SECTION - ARCHIVE-ASYNC KEY -->
<config-key id="archive-async">
<text>Archive WAL segments asynchronously. WAL segments will be copied to the local repo, then a process will be forked to compress the segment and transfer it to the remote repo if configured. Control will be returned to <postgres/> as soon as the WAL segment is copied locally.</text>
<example>y</example>
</config-key>
<!-- CONFIG - ARCHIVE SECTION - ARCHIVE-MAX-MB KEY -->
<config-key id="archive-max-mb">
<text>Limits the amount of archive log that will be written locally when <setting>archive-async=y</setting>. After the limit is reached, the following will happen:
<ol>
<li>PgBackRest will notify Postgres that the archive was successfully backed up, then DROP IT.</li>
<li>An error will be logged to the console and also to the Postgres log.</li>
<li>A stop file will be written in the lock directory and no more archive files will be backed up until it is removed.</li>
</ol>
If this occurs then the archive log stream will be interrupted and PITR will not be possible past that point. A new backup will be required to regain full restore capability.
The purpose of this feature is to prevent the log volume from filling up at which point Postgres will stop completely. Better to lose the backup than have the database go down.
To start normal archiving again you'll need to remove the stop file which will be located at <file>${archive-path}/lock/${stanza}-archive.stop</file> where <code>${archive-path}</code> is the path set in the <setting>archive</setting> section, and <code>${stanza}</code> is the backup stanza.</text>
<example>1024</example>
</config-key>
</config-key-list>
</config-section>
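To resume normal archiving after a stop file has been written, the file described above must be removed by hand. A minimal sketch of that cleanup, using hypothetical paths (substitute your real <setting>archive-path</setting> and stanza name):

```shell
# Hypothetical layout for illustration only; substitute your real values.
archive_path=/tmp/backrest-demo/archive    # ${archive-path} from the archive section
stanza=main                                # ${stanza} for this backup

stop_file="${archive_path}/lock/${stanza}-archive.stop"

# Simulate the stop file so the cleanup step below has something to remove.
mkdir -p "$(dirname "$stop_file")"
touch "$stop_file"

# Removing the stop file allows archiving to resume.
rm "$stop_file"
```

Remember that any WAL segments dropped while the stop file was in place are gone, so take a new backup to regain full PITR capability.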
<!-- CONFIG - EXPIRE -->
<config-section id="expire">
<text>The <setting>expire</setting> section defines how long backups will be retained. Expiration only occurs when the number of complete backups exceeds the allowed retention. In other words, if <setting>retention-full</setting> is set to 2, then there must be 3 complete backups before the oldest will be expired. Make sure you always have enough space for retention + 1 backups.</text>
<!-- CONFIG - RETENTION SECTION - FULL-RETENTION KEY -->
<config-key-list>
<config-key id="retention-full">
<text>Number of full backups to keep. When a full backup expires, all differential and incremental backups associated with the full backup will also expire. When not defined then all full backups will be kept.</text>
<example>2</example>
</config-key>
<!-- CONFIG - RETENTION SECTION - DIFFERENTIAL-RETENTION KEY -->
<config-key id="retention-diff">
<text>Number of differential backups to keep. When a differential backup expires, all incremental backups associated with the differential backup will also expire. When not defined all differential backups will be kept.</text>
<example>3</example>
</config-key>
<!-- CONFIG - RETENTION SECTION - ARCHIVE-RETENTION-TYPE KEY -->
<config-key id="retention-archive-type">
<text>Type of backup to use for archive retention (full or differential). If set to full, then PgBackRest will keep archive logs for the number of full backups defined by <setting>retention-archive</setting>. If set to differential, then PgBackRest will keep archive logs for the number of differential backups defined by <setting>retention-archive</setting>.
If not defined then archive logs will be kept indefinitely. In general it is not useful to keep archive logs that are older than the oldest backup, but there may be reasons for doing so.</text>
<example>diff</example>
</config-key>
<!-- CONFIG - RETENTION SECTION - ARCHIVE-RETENTION KEY -->
<config-key id="retention-archive">
<text>Number of backups worth of archive log to keep.</text>
<example>2</example>
</config-key>
</config-key-list>
</config-section>
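The retention rule described above can be sketched as a small calculation. This is an illustration of the documented behavior under assumed names, not pgbackrest's actual code:

```python
# Sketch of the expiration rule: with retention-full=2, a third completed
# full backup must exist before the oldest full backup is expired.

def expired_fulls(full_backups, retention_full):
    """Return the full backups that would be expired, oldest first.

    full_backups is ordered oldest -> newest; only completed backups count.
    """
    excess = len(full_backups) - retention_full
    return full_backups[:excess] if excess > 0 else []

fulls = ["20150301F", "20150308F", "20150315F"]  # hypothetical backup labels
print(expired_fulls(fulls, 2))   # → ['20150301F']
```

With only two completed fulls nothing would be expired, which is why the text recommends budgeting space for retention + 1 backups.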
<!-- CONFIG - STANZA -->
<config-section id="stanza">
<text>A stanza defines a backup for a specific database. The stanza section must define the base database path and host/user if the database is remote. Also, any global configuration sections can be overridden to define stanza-specific settings.</text>
<!-- CONFIG - RETENTION SECTION - HOST KEY -->
<config-key-list>
<config-key id="db-host">
<text>Define the database host. Used for backups where the database host is different from the backup host.</text>
<example>db.domain.com</example>
</config-key>
<!-- CONFIG - RETENTION SECTION - USER KEY -->
<config-key id="db-user">
<text>Defines the user account on the db host when <setting>db-host</setting> is defined.</text>
<example>postgres</example>
</config-key>
<!-- CONFIG - RETENTION SECTION - PATH KEY -->
<config-key id="db-path">
<text>Path to the db data directory (data_directory setting in postgresql.conf).</text>
<example>/data/db</example>
</config-key>
</config-key-list>
</config-section>
</config-section-list>
</config>
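Putting the sections above together, a minimal pg_backrest.conf might look like the following. The section header names and stanza are illustrative assumptions based on the section ids in this reference; check the installed sample configuration for the exact syntax:

```ini
; Illustrative example only -- section header syntax is assumed.
[global:general]
buffer-size=16384
compress=y
repo-path=/data/db/backrest

[global:backup]
start-fast=y
thread-max=4

[global:expire]
retention-full=2
retention-diff=3

; Stanza section: one per database cluster
[main]
db-path=/data/db
```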
<release title="Release Notes">
<release-version-list>
<release-version version="0.50" title="restore and much more">
<release-feature-bullet-list>
<release-feature>
<text>Added restore functionality.</text>
</release-feature>
<release-feature>
<text>All options can now be set on the command-line making pg_backrest.conf optional.</text>
</release-feature>
<release-feature>
<text>De/compression is now performed without threads and checksum/size is calculated in stream. That means file checksums are no longer optional.</text>
</release-feature>
<release-feature>
<text>Added option <param>--no-start-stop</param> to allow backups when Postgres is shut down. If <file>postmaster.pid</file> is present then <param>--force</param> is required to make the backup run (though if Postgres is running an inconsistent backup will likely be created). This option was added primarily for the purpose of unit testing, but there may be applications in the real world as well.</text>
</release-feature>
<release-feature>
<text>Fixed broken checksums and now they work with normal and resumed backups. Finally realized that checksums and checksum deltas should be functionally separated and this simplified a number of things. Issue #28 has been created for checksum deltas.</text>
</release-feature>
<release-feature>
<text>Fixed an issue where a backup could be resumed from an aborted backup that didn't have the same type and prior backup.</text>
</release-feature>
<release-feature>
<text>Removed dependency on Moose. It wasn't being used extensively and it made for longer startup times.</text>
</release-feature>
<release-feature>
<text>Added a checksum for backup.manifest to detect a corrupted/modified manifest.</text>
</release-feature>
<release-feature>
<text>The <path>latest</path> link always points to the most recent backup. This has been added for convenience and to make restores simpler.</text>
</release-feature>
<release-feature>
<text>More comprehensive unit tests in all areas.</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
<release-version version="0.30" title="Core Restructuring and Unit Tests">
<release-feature-bullet-list>
<release-feature>
<text>Complete rewrite of BackRest::File module to use a custom protocol for remote operations and Perl native GZIP and SHA operations. Compression is performed in threads rather than forked processes.</text>
</release-feature>
<release-feature>
<text>Fairly comprehensive unit tests for all the basic operations. More work to be done here for sure, but then there is always more work to be done on unit tests.</text>
</release-feature>
<release-feature>
<text>Removed dependency on Storable and replaced with a custom ini file implementation.</text>
</release-feature>
<release-feature>
<text>Added much needed documentation.</text>
</release-feature>
<release-feature>
<text>Numerous other changes that can only be identified with a diff.</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
<release-version version="0.19" title="Improved Error Reporting/Handling">
<release-feature-bullet-list>
<release-feature>
<text>Working on improving error handling in the file object. This is not complete, but works well enough to find a few errors that have been causing us problems (notably, find is occasionally failing building the archive async manifest when system is under load).</text>
</release-feature>
<release-feature>
<text>Found and squashed a nasty bug where <code>file_copy()</code> was defaulted to ignore errors. There was also an issue in file_exists that was causing the test to fail when the file actually did exist. Together they could have resulted in a corrupt backup with no errors, though it is very unlikely.</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
<release-version version="0.18" title="Return Soft Error When Archive Missing">
<release-feature-bullet-list>
<release-feature>
<text>The <param>archive-get</param> operation returns a 1 when the archive file is missing to differentiate from hard errors (ssh connection failure, file copy error, etc.). This lets Postgres know that the archive stream has terminated normally. However, this does not take into account possible holes in the archive stream.</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
<release-version version="0.17" title="Warn When Archive Directories Cannot Be Deleted">
<release-feature-bullet-list>
<release-feature>
<text>If an archive directory which should be empty could not be deleted backrest was throwing an error. There's a good fix for that coming, but for the time being it has been changed to a warning so processing can continue. This was impacting backups as sometimes the final archive file would not get pushed if the first archive file had been in a different directory (plus some bad luck).</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
<release-version version="0.16" title="RequestTTY=yes for SSH Sessions">
<release-feature-bullet-list>
<release-feature>
<text>Added <setting>RequestTTY=yes</setting> to ssh sessions. Hoping this will prevent random lockups.</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
<release-version version="0.15" title="Added Archive-Get">
<release-feature-bullet-list>
<release-feature>
<text>Added archive-get functionality to aid in restores.</text>
</release-feature>
<release-feature>
<text>Added option to force a checkpoint when starting the backup <setting>start-fast=y</setting>.</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
<release-version version="0.11" title="Minor Fixes">
<release-feature-bullet-list>
<release-feature>
<text>Removed <setting>master_stderr_discard</setting> option on database SSH connections. There have been occasional lockups and they could be related to issues originally seen in the file code.</text>
</release-feature>
<release-feature>
<text>Changed lock file conflicts on backup and expire commands to ERROR. They were set to DEBUG due to a copy-and-paste from the archive locks.</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
<release-version version="0.10" title="Backup and Archiving are Functional">
<release-feature-bullet-list>
<release-feature>
<text>No restore functionality, but the backup directories are consistent Postgres data directories. You'll need to either uncompress the files or turn off compression in the backup. Uncompressed backups on a ZFS (or similar) filesystem are a good option because backups can be restored locally via a snapshot to create logical backups or do spot data recovery.</text>
</release-feature>
<release-feature>
<text>Archiving is single-threaded. This has not posed an issue on our multi-terabyte databases with heavy write volume. Recommend a large WAL volume or to use the async option with a large volume nearby.</text>
</release-feature>
<release-feature>
<text>Backups are multi-threaded, but the Net::OpenSSH library does not appear to be 100% thread-safe so it will very occasionally lock up on a thread. There is an overall process timeout that resolves this issue by killing the process. Yes, very ugly.</text>
</release-feature>
<release-feature>
<text>Checksums are lost on any resumed backup. Only the final backup will record checksums when there are multiple resumes. Checksums from previous backups are correctly recorded and a full backup will reset everything.</text>
</release-feature>
<release-feature>
<text>The backup.manifest is being written as Storable because Config::IniFile does not seem to handle large files well. Would definitely like to save these as human-readable text.</text>
</release-feature>
<release-feature>
<text>Absolutely no documentation (outside the code). Well, excepting these release notes.</text>
</release-feature>
</release-feature-bullet-list>
</release-version>
</release-version-list>
</release>
<recognition title="Recognition">
<text>Primary recognition goes to Stephen Frost for all his valuable advice and criticism during the development of <backrest/>.
Resonate (http://www.resonate.com/) also contributed to the development of PgBackRest and allowed me to install early (but well tested) versions as their primary Postgres backup solution.</text>
</recognition>
</doc>

Binary file not shown.

204
doc/html/default.css Normal file

@ -0,0 +1,204 @@
/*******************************************************************************
Html and body
*******************************************************************************/
html
{
background-color: #F8F8F8;
font-family: Avenir, Corbel, sans-serif;
font-size: medium;
margin-top: 8px;
margin-left: 1%;
margin-right: 1%;
width: 98%;
}
body
{
margin: 0px auto;
padding: 0px;
width: 100%;
}
@media (min-width: 1000px)
{
body
{
width: 1000px;
}
}
/*******************************************************************************
Link default styling
*******************************************************************************/
a:link
{
text-decoration: none;
color: black;
}
a:visited
{
text-decoration: none;
color: black;
}
a:hover
{
text-decoration: underline;
color: black;
}
a:active
{
text-decoration: none;
color: black;
}
/*******************************************************************************
Header
*******************************************************************************/
.header
{
width:100%;
text-align:center;
float:left;
}
.header-title
{
font-size: 28pt;
font-weight: bolder;
}
.header-subtitle
{
position: relative;
top: -.25em;
font-size: larger;
font-weight: bold;
}
/*******************************************************************************
Menu
*******************************************************************************/
.menu-set
{
text-align: center;
font-weight: 600;
border-bottom: 2px #dddddd solid;
}
.menu-first, .menu
{
white-space: nowrap;
display: inline;
}
.menu
{
margin-left: 6px;
}
.menu-link
{
margin-left: 2px;
margin-right: 2px;
}
/*******************************************************************************
Section
*******************************************************************************/
doc-install, doc-configure, doc-intro
{
display:block;
margin-top: 8px;
}
doc-install-header, doc-configure-header
{
display:block;
background-color: #dddddd;
font-size: 14pt;
padding-left: 4px;
margin-bottom: 4px;
}
/*******************************************************************************
SubSection
*******************************************************************************/
doc-configure-section
{
display:block;
margin-top: 8px;
}
doc-configure-section-header
{
display:block;
border-bottom: 2px #cccccc solid;
font-size: large;
font-weight: 500;
margin-bottom: 4px;
}
/*******************************************************************************
SubSection2
*******************************************************************************/
doc-configure-key
{
display:block;
margin-top: 8px;
margin-left: 2em;
margin-right: 2em;
}
doc-configure-key-header
{
display:block;
font-size: medium;
font-weight: 500;
border-bottom: 1px #dddddd solid;
margin-bottom: 4px;
}
/*******************************************************************************
Code & Detail
*******************************************************************************/
doc-code, doc-code-block, doc-id, doc-file, doc-function, doc-detail,
doc-setting
{
font-family: "Lucida Console", Monaco, monospace;
font-size: smaller;
}
doc-id, doc-file, doc-function
{
white-space: pre;
}
doc-code, doc-code-block, doc-detail-block
{
background-color: #eeeeee;
}
doc-setting, doc-file
{
background-color: #e0e0e0;
}
doc-code-block, doc-detail-block
{
margin: 8px;
padding: 8px;
display:block;
}
doc-detail
{
display: block;
}
doc-detail-value
{
margin-left: .5em;
}

File diff suppressed because it is too large

1534
lib/BackRest/Config.pm Normal file

File diff suppressed because it is too large


@ -3,34 +3,43 @@
####################################################################################################################################
package BackRest::Db;
use threads;
use strict;
use warnings;
use Carp;
use warnings FATAL => qw(all);
use Carp qw(confess);
use Moose;
use Net::OpenSSH;
use File::Basename;
use IPC::System::Simple qw(capture);
use Exporter qw(import);
use lib dirname($0);
use BackRest::Utility;
# Command strings
has strCommandPsql => (is => 'bare'); # PSQL command
####################################################################################################################################
# Postmaster process Id file
####################################################################################################################################
use constant FILE_POSTMASTER_PID => 'postmaster.pid';
# Module variables
has strDbUser => (is => 'ro'); # Database user
has strDbHost => (is => 'ro'); # Database host
has oDbSSH => (is => 'bare'); # Database SSH object
has fVersion => (is => 'ro'); # Database version
our @EXPORT = qw(FILE_POSTMASTER_PID);
####################################################################################################################################
# CONSTRUCTOR
####################################################################################################################################
sub BUILD
sub new
{
my $self = shift;
my $class = shift; # Class name
my $strCommandPsql = shift; # PSQL command
my $strDbHost = shift; # Database host name
my $strDbUser = shift; # Database user name (generally postgres)
# Create the class hash
my $self = {};
bless $self, $class;
# Initialize variables
$self->{strCommandPsql} = $strCommandPsql;
$self->{strDbHost} = $strDbHost;
$self->{strDbUser} = $strDbUser;
# Connect SSH object if db host is defined
if (defined($self->{strDbHost}) && !defined($self->{oDbSSH}))
@ -44,6 +53,8 @@ sub BUILD
master_opts => [-o => $strOptionSSHRequestTTY]);
$self->{oDbSSH}->error and confess &log(ERROR, "unable to connect to $self->{strDbHost}: " . $self->{oDbSSH}->error);
}
return $self;
}
####################################################################################################################################
@ -132,9 +143,11 @@ sub backup_start
my $strLabel = shift;
my $bStartFast = shift;
return trim($self->psql_execute("set client_min_messages = 'warning';" .
"copy (select pg_xlogfile_name(xlog) from pg_start_backup('${strLabel}'" .
($bStartFast ? ', true' : '') . ') as xlog) to stdout'));
my @stryField = split("\t", trim($self->psql_execute("set client_min_messages = 'warning';" .
"copy (select to_char(current_timestamp, 'YYYY-MM-DD HH24:MI:SS.US TZ'), pg_xlogfile_name(xlog) from pg_start_backup('${strLabel}'" .
($bStartFast ? ', true' : '') . ') as xlog) to stdout')));
return $stryField[1], $stryField[0];
}
####################################################################################################################################
@ -144,9 +157,10 @@ sub backup_stop
{
my $self = shift;
return trim($self->psql_execute("set client_min_messages = 'warning';" .
"copy (select pg_xlogfile_name(xlog) from pg_stop_backup() as xlog) to stdout"))
my @stryField = split("\t", trim($self->psql_execute("set client_min_messages = 'warning';" .
"copy (select to_char(clock_timestamp(), 'YYYY-MM-DD HH24:MI:SS.US TZ'), pg_xlogfile_name(xlog) from pg_stop_backup() as xlog) to stdout")));
return $stryField[1], $stryField[0];
}
no Moose;
__PACKAGE__->meta->make_immutable;
1;


@ -3,16 +3,58 @@
####################################################################################################################################
package BackRest::Exception;
use threads;
use strict;
use warnings;
use Carp;
use warnings FATAL => qw(all);
use Carp qw(confess);
use Exporter qw(import);
use Moose;
####################################################################################################################################
# Exception Codes
####################################################################################################################################
use constant
{
ERROR_ASSERT => 100,
ERROR_CHECKSUM => 101,
ERROR_CONFIG => 102,
ERROR_FILE_INVALID => 103,
ERROR_FORMAT => 104,
ERROR_OPERATION_REQUIRED => 105,
ERROR_OPTION_INVALID => 106,
ERROR_OPTION_INVALID_VALUE => 107,
ERROR_OPTION_INVALID_RANGE => 108,
ERROR_OPTION_INVALID_PAIR => 109,
ERROR_OPTION_DUPLICATE_KEY => 110,
ERROR_OPTION_NEGATE => 111,
ERROR_OPTION_REQUIRED => 112,
ERROR_POSTMASTER_RUNNING => 113,
ERROR_PROTOCOL => 114,
ERROR_RESTORE_PATH_NOT_EMPTY => 115
};
# Module variables
has iCode => (is => 'bare'); # Exception code
has strMessage => (is => 'bare'); # Exception message
our @EXPORT = qw(ERROR_ASSERT ERROR_CHECKSUM ERROR_CONFIG ERROR_FILE_INVALID ERROR_FORMAT ERROR_OPERATION_REQUIRED
ERROR_OPTION_INVALID ERROR_OPTION_INVALID_VALUE ERROR_OPTION_INVALID_RANGE ERROR_OPTION_INVALID_PAIR
ERROR_OPTION_DUPLICATE_KEY ERROR_OPTION_NEGATE ERROR_OPTION_REQUIRED ERROR_POSTMASTER_RUNNING ERROR_PROTOCOL
ERROR_RESTORE_PATH_NOT_EMPTY);
####################################################################################################################################
# CONSTRUCTOR
####################################################################################################################################
sub new
{
my $class = shift; # Class name
my $iCode = shift; # Error code
my $strMessage = shift; # ErrorMessage
# Create the class hash
my $self = {};
bless $self, $class;
# Initialize exception
$self->{iCode} = $iCode;
$self->{strMessage} = $strMessage;
return $self;
}
####################################################################################################################################
# CODE
@ -34,5 +76,4 @@ sub message
return $self->{strMessage};
}
no Moose;
__PACKAGE__->meta->make_immutable;
1;


@ -3,61 +3,24 @@
####################################################################################################################################
package BackRest::File;
use threads;
use strict;
use warnings;
use Carp;
use warnings FATAL => qw(all);
use Carp qw(confess);
use Moose;
use Net::OpenSSH;
use File::Basename;
use File::Basename qw(dirname basename);
use File::Copy qw(cp);
use File::Path qw(make_path remove_tree);
use Digest::SHA;
use File::stat;
use Fcntl ':mode';
use IO::Compress::Gzip qw(gzip $GzipError);
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
use IO::String;
use Exporter qw(import);
use lib dirname($0) . '/../lib';
use BackRest::Exception;
use BackRest::Utility;
use BackRest::Remote;
# Exports
use Exporter qw(import);
our @EXPORT = qw(PATH_ABSOLUTE PATH_DB PATH_DB_ABSOLUTE PATH_BACKUP PATH_BACKUP_ABSOLUTE
PATH_BACKUP_CLUSTER PATH_BACKUP_TMP PATH_BACKUP_ARCHIVE
COMMAND_ERR_FILE_MISSING COMMAND_ERR_FILE_READ COMMAND_ERR_FILE_MOVE COMMAND_ERR_FILE_TYPE
COMMAND_ERR_LINK_READ COMMAND_ERR_PATH_MISSING COMMAND_ERR_PATH_CREATE COMMAND_ERR_PARAM
PIPE_STDIN PIPE_STDOUT PIPE_STDERR
REMOTE_DB REMOTE_BACKUP REMOTE_NONE
OP_FILE_LIST OP_FILE_EXISTS OP_FILE_HASH OP_FILE_REMOVE OP_FILE_MANIFEST OP_FILE_COMPRESS
OP_FILE_MOVE OP_FILE_COPY OP_FILE_COPY_OUT OP_FILE_COPY_IN OP_FILE_PATH_CREATE);
# Extension and permissions
has strCompressExtension => (is => 'ro', default => 'gz');
has strDefaultPathPermission => (is => 'bare', default => '0750');
has strDefaultFilePermission => (is => 'ro', default => '0640');
# Command strings
has strCommand => (is => 'bare');
# Module variables
has strRemote => (is => 'bare'); # Remote type (db or backup)
has oRemote => (is => 'bare'); # Remote object
has strBackupPath => (is => 'bare'); # Backup base path
# Process flags
has strStanza => (is => 'bare');
has iThreadIdx => (is => 'bare');
####################################################################################################################################
# COMMAND Error Constants
####################################################################################################################################
@ -73,21 +36,28 @@ use constant
COMMAND_ERR_PATH_READ => 8
};
our @EXPORT = qw(COMMAND_ERR_FILE_MISSING COMMAND_ERR_FILE_READ COMMAND_ERR_FILE_MOVE COMMAND_ERR_FILE_TYPE COMMAND_ERR_LINK_READ
COMMAND_ERR_PATH_MISSING COMMAND_ERR_PATH_CREATE COMMAND_ERR_PARAM);
####################################################################################################################################
# PATH_GET Constants
####################################################################################################################################
use constant
{
PATH_ABSOLUTE => 'absolute',
PATH_DB => 'db',
PATH_DB_ABSOLUTE => 'db:absolute',
PATH_BACKUP => 'backup',
PATH_BACKUP_ABSOLUTE => 'backup:absolute',
PATH_BACKUP_CLUSTER => 'backup:cluster',
PATH_BACKUP_TMP => 'backup:tmp',
PATH_BACKUP_ARCHIVE => 'backup:archive'
PATH_ABSOLUTE => 'absolute',
PATH_DB => 'db',
PATH_DB_ABSOLUTE => 'db:absolute',
PATH_BACKUP => 'backup',
PATH_BACKUP_ABSOLUTE => 'backup:absolute',
PATH_BACKUP_CLUSTER => 'backup:cluster',
PATH_BACKUP_TMP => 'backup:tmp',
PATH_BACKUP_ARCHIVE => 'backup:archive',
PATH_BACKUP_ARCHIVE_OUT => 'backup:archive:out'
};
push @EXPORT, qw(PATH_ABSOLUTE PATH_DB PATH_DB_ABSOLUTE PATH_BACKUP PATH_BACKUP_ABSOLUTE PATH_BACKUP_CLUSTER PATH_BACKUP_TMP
PATH_BACKUP_ARCHIVE PATH_BACKUP_ARCHIVE_OUT);
####################################################################################################################################
# STD Pipe Constants
####################################################################################################################################
@ -98,21 +68,15 @@ use constant
PIPE_STDERR => '<STDERR>'
};
####################################################################################################################################
# Remote Types
####################################################################################################################################
use constant
{
REMOTE_DB => PATH_DB,
REMOTE_BACKUP => PATH_BACKUP,
REMOTE_NONE => 'none'
};
push @EXPORT, qw(PIPE_STDIN PIPE_STDOUT PIPE_STDERR);
####################################################################################################################################
# Operation constants
####################################################################################################################################
use constant
{
OP_FILE_OWNER => 'File->owner',
OP_FILE_WAIT => 'File->wait',
OP_FILE_LIST => 'File->list',
OP_FILE_EXISTS => 'File->exists',
OP_FILE_HASH => 'File->hash',
@ -123,31 +87,63 @@ use constant
OP_FILE_COPY => 'File->copy',
OP_FILE_COPY_OUT => 'File->copy_out',
OP_FILE_COPY_IN => 'File->copy_in',
OP_FILE_PATH_CREATE => 'File->path_create'
OP_FILE_PATH_CREATE => 'File->path_create',
OP_FILE_LINK_CREATE => 'File->link_create'
};
push @EXPORT, qw(OP_FILE_OWNER OP_FILE_WAIT OP_FILE_LIST OP_FILE_EXISTS OP_FILE_HASH OP_FILE_REMOVE OP_FILE_MANIFEST
OP_FILE_COMPRESS OP_FILE_MOVE OP_FILE_COPY OP_FILE_COPY_OUT OP_FILE_COPY_IN OP_FILE_PATH_CREATE);
####################################################################################################################################
# CONSTRUCTOR
####################################################################################################################################
sub BUILD
sub new
{
my $self = shift;
my $class = shift;
my $strStanza = shift;
my $strBackupPath = shift;
my $strRemote = shift;
my $oRemote = shift;
my $strDefaultPathMode = shift;
my $strDefaultFileMode = shift;
my $iThreadIdx = shift;
# Create the class hash
my $self = {};
bless $self, $class;
# Default compression extension to gz
$self->{strCompressExtension} = 'gz';
# Default file and path mode
$self->{strDefaultPathMode} = defined($strDefaultPathMode) ? $strDefaultPathMode : '0750';
$self->{strDefaultFileMode} = defined($strDefaultFileMode) ? $strDefaultFileMode : '0640';
# Initialize other variables
$self->{strStanza} = $strStanza;
$self->{strBackupPath} = $strBackupPath;
$self->{strRemote} = $strRemote;
$self->{oRemote} = $oRemote;
$self->{iThreadIdx} = $iThreadIdx;
# Remote object must be set
if (!defined($self->{oRemote}))
{
confess &log(ASSERT, 'oRemote must be defined');
}
# If remote is defined check parameters and open session
if (defined($self->{strRemote}) && $self->{strRemote} ne REMOTE_NONE)
if (defined($self->{strRemote}) && $self->{strRemote} ne NONE)
{
# Make sure remote is valid
if ($self->{strRemote} ne REMOTE_DB && $self->{strRemote} ne REMOTE_BACKUP)
if ($self->{strRemote} ne DB && $self->{strRemote} ne BACKUP)
{
confess &log(ASSERT, 'strRemote must be "' . REMOTE_DB . '" or "' . REMOTE_BACKUP . '"');
}
# Remote object must be set
if (!defined($self->{oRemote}))
{
confess &log(ASSERT, 'oRemote must be defined');
confess &log(ASSERT, 'strRemote must be "' . DB . '" or "' . BACKUP .
"\", $self->{strRemote} was passed");
}
}
return $self;
}
####################################################################################################################################
@@ -173,12 +169,13 @@ sub clone
return BackRest::File->new
(
strCommand => $self->{strCommand},
strRemote => $self->{strRemote},
oRemote => defined($self->{oRemote}) ? $self->{oRemote}->clone($iThreadIdx) : undef,
strBackupPath => $self->{strBackupPath},
strStanza => $self->{strStanza},
iThreadIdx => $iThreadIdx
$self->{strStanza},
$self->{strBackupPath},
$self->{strRemote},
defined($self->{oRemote}) ? $self->{oRemote}->clone() : undef,
$self->{strDefaultPathMode},
$self->{strDefaultFileMode},
$iThreadIdx
);
}
@@ -228,10 +225,11 @@ sub path_get
confess &log(ASSERT, "absolute path ${strType}:${strFile} must start with /");
}
# Only allow temp files for PATH_BACKUP_ARCHIVE and PATH_BACKUP_TMP and any absolute path
# Only allow temp files for PATH_BACKUP_ARCHIVE, PATH_BACKUP_ARCHIVE_OUT, PATH_BACKUP_TMP and any absolute path
$bTemp = defined($bTemp) ? $bTemp : false;
if ($bTemp && !($strType eq PATH_BACKUP_ARCHIVE || $strType eq PATH_BACKUP_TMP || $bAbsolute))
if ($bTemp && !($strType eq PATH_BACKUP_ARCHIVE || $strType eq PATH_BACKUP_ARCHIVE_OUT || $strType eq PATH_BACKUP_TMP ||
$bAbsolute))
{
confess &log(ASSERT, 'temp file not supported on path ' . $strType);
}
@@ -279,28 +277,39 @@ sub path_get
}
# Get the backup archive path
if ($strType eq PATH_BACKUP_ARCHIVE)
if ($strType eq PATH_BACKUP_ARCHIVE_OUT || $strType eq PATH_BACKUP_ARCHIVE)
{
my $strArchivePath = "$self->{strBackupPath}/archive/$self->{strStanza}";
my $strArchive;
my $strArchivePath = "$self->{strBackupPath}/archive";
if ($bTemp)
{
return "${strArchivePath}/file.tmp" . (defined($self->{iThreadIdx}) ? ".$self->{iThreadIdx}" : '');
return "${strArchivePath}/temp/$self->{strStanza}-archive" .
(defined($self->{iThreadIdx}) ? "-$self->{iThreadIdx}" : '') . ".tmp";
}
if (defined($strFile))
$strArchivePath .= "/$self->{strStanza}";
if ($strType eq PATH_BACKUP_ARCHIVE)
{
$strArchive = substr(basename($strFile), 0, 24);
my $strArchive;
if ($strArchive !~ /^([0-F]){24}$/)
if (defined($strFile))
{
return "${strArchivePath}/${strFile}";
}
}
$strArchive = substr(basename($strFile), 0, 24);
return $strArchivePath . (defined($strArchive) ? '/' . substr($strArchive, 0, 16) : '') .
(defined($strFile) ? '/' . $strFile : '');
if ($strArchive !~ /^([0-F]){24}$/)
{
return "${strArchivePath}/${strFile}";
}
}
return $strArchivePath . (defined($strArchive) ? '/' . substr($strArchive, 0, 16) : '') .
(defined($strFile) ? '/' . $strFile : '');
}
else
{
return "${strArchivePath}/out" . (defined($strFile) ? '/' . $strFile : '');
}
}
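To make the branch above concrete, here is a rough Python sketch of the archive path layout it computes: a 24-character WAL segment name is filed under a subdirectory named for its first 16 characters, while other names (history files, etc.) sit directly in the stanza path. The function name is mine, and the hex check is stricter than the Perl `[0-F]` character class, which also matches the punctuation between `9` and `A`.

```python
def archive_file_path(archive_path, stanza, wal_file):
    # A 24-hex-character WAL segment name (possibly carrying a -checksum
    # suffix) is stored under a directory named for its first 16 characters.
    name = wal_file[:24]
    if len(name) == 24 and all(c in "0123456789ABCDEF" for c in name):
        return f"{archive_path}/{stanza}/{name[:16]}/{wal_file}"

    # History files and other non-segment names go straight in the stanza path.
    return f"{archive_path}/{stanza}/{wal_file}"
```

Splitting on the first 16 characters keeps any one directory from accumulating an unbounded number of WAL segments.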
if ($strType eq PATH_BACKUP_CLUSTER)
@@ -348,7 +357,7 @@ sub link_create
# if bPathCreate is not defined, default to true
$bPathCreate = defined($bPathCreate) ? $bPathCreate : true;
# Source and destination path types must be the same (both PATH_DB or both PATH_BACKUP)
# Source and destination path types must be the same (e.g. both PATH_DB or both PATH_BACKUP, etc.)
if ($self->path_type_get($strSourcePathType) ne $self->path_type_get($strDestinationPathType))
{
confess &log(ASSERT, 'path types must be equal in link create');
@@ -358,6 +367,15 @@ sub link_create
my $strSource = $self->path_get($strSourcePathType, $strSourceFile);
my $strDestination = $self->path_get($strDestinationPathType, $strDestinationFile);
# Set operation and debug strings
my $strOperation = OP_FILE_LINK_CREATE;
my $strDebug = "${strSourcePathType}" . (defined($strSource) ? ":${strSource}" : '') .
" to ${strDestinationPathType}" . (defined($strDestination) ? ":${strDestination}" : '') .
', hard = ' . ($bHard ? 'true' : 'false') . ", relative = " . ($bRelative ? 'true' : 'false') .
', destination_path_create = ' . ($bPathCreate ? 'true' : 'false');
&log(DEBUG, "${strOperation}: ${strDebug}");
# If the destination path is backup and does not exist, create it
if ($bPathCreate && $self->path_type_get($strDestinationPathType) eq PATH_BACKUP)
{
@@ -392,22 +410,24 @@ sub link_create
}
}
# Create the command
my $strCommand = 'ln' . (!$bHard ? ' -s' : '') . " ${strSource} ${strDestination}";
# Run remotely
if ($self->is_remote($strSourcePathType))
{
&log(TRACE, "link_create: remote ${strSourcePathType} '${strCommand}'");
my $oSSH = $self->remote_get($strSourcePathType);
$oSSH->system($strCommand) or confess &log("unable to create link from ${strSource} to ${strDestination}");
confess &log(ASSERT, "${strDebug}: remote operation not supported");
}
# Run locally
else
{
&log(TRACE, "link_create: local '${strCommand}'");
system($strCommand) == 0 or confess &log("unable to create link from ${strSource} to ${strDestination}");
if ($bHard)
{
link($strSource, $strDestination)
or confess &log(ERROR, "unable to create hardlink from ${strSource} to ${strDestination}");
}
else
{
symlink($strSource, $strDestination)
or confess &log(ERROR, "unable to create symlink from ${strSource} to ${strDestination}");
}
}
}
@@ -511,25 +531,8 @@ sub compress
# Run locally
else
{
# Compress the file
if (!gzip($strPathOp => "${strPathOp}.gz"))
{
my $strError = "${strPathOp} could not be compressed:" . $!;
my $iErrorCode = COMMAND_ERR_FILE_READ;
if (!$self->exists($strPathType, $strFile))
{
$strError = "${strPathOp} does not exist";
$iErrorCode = COMMAND_ERR_FILE_MISSING;
}
if ($strPathType eq PATH_ABSOLUTE)
{
confess &log(ERROR, $strError, $iErrorCode);
}
confess &log(ERROR, "${strDebug}: " . $strError);
}
# Use copy to compress the file
$self->copy($strPathType, $strFile, $strPathType, "${strFile}.gz", false, true);
# Remove the old file
unlink($strPathOp)
@@ -547,7 +550,7 @@ sub path_create
my $self = shift;
my $strPathType = shift;
my $strPath = shift;
my $strPermission = shift;
my $strMode = shift;
my $bIgnoreExists = shift;
# Set operation variables
@@ -555,7 +558,7 @@ sub path_create
# Set operation and debug strings
my $strOperation = OP_FILE_PATH_CREATE;
my $strDebug = " ${strPathType}:${strPathOp}, permission " . (defined($strPermission) ? $strPermission : '[undef]');
my $strDebug = " ${strPathType}:${strPathOp}, mode " . (defined($strMode) ? $strMode : '[undef]');
&log(DEBUG, "${strOperation}: ${strDebug}");
if ($self->is_remote($strPathType))
@@ -565,9 +568,9 @@ sub path_create
$oParamHash{path} = ${strPathOp};
if (defined($strPermission))
if (defined($strMode))
{
$oParamHash{permission} = ${strPermission};
$oParamHash{mode} = ${strMode};
}
# Add remote info to debug string
@@ -585,9 +588,9 @@ sub path_create
# Attempt to create the directory
my $stryError;
if (defined($strPermission))
if (defined($strMode))
{
make_path($strPathOp, {mode => oct($strPermission), error => \$stryError});
make_path($strPathOp, {mode => oct($strMode), error => \$stryError});
}
else
{
@@ -695,7 +698,7 @@ sub remove
my $bRemoved = true;
# Set operation and debug strings
my $strOperation = OP_FILE_EXISTS;
my $strOperation = OP_FILE_REMOVE;
my $strDebug = "${strPathType}:${strPathOp}";
&log(DEBUG, "${strOperation}: ${strDebug}");
@@ -743,15 +746,39 @@ sub hash
my $self = shift;
my $strPathType = shift;
my $strFile = shift;
my $bCompressed = shift;
my $strHashType = shift;
my ($strHash, $iSize) = $self->hash_size($strPathType, $strFile, $bCompressed, $strHashType);
return $strHash;
}
####################################################################################################################################
# HASH_SIZE
####################################################################################################################################
sub hash_size
{
my $self = shift;
my $strPathType = shift;
my $strFile = shift;
my $bCompressed = shift;
my $strHashType = shift;
# Set defaults
$bCompressed = defined($bCompressed) ? $bCompressed : false;
$strHashType = defined($strHashType) ? $strHashType : 'sha1';
# Set operation variables
my $strFileOp = $self->path_get($strPathType, $strFile);
my $strHash;
my $iSize = 0;
# Set operation and debug strings
my $strOperation = OP_FILE_HASH;
my $strDebug = "${strPathType}:${strFileOp}";
my $strDebug = "${strPathType}:${strFileOp}, " .
'compressed = ' . ($bCompressed ? 'true' : 'false') . ', ' .
"hash_type = ${strHashType}";
&log(DEBUG, "${strOperation}: ${strDebug}");
if ($self->is_remote($strPathType))
@@ -781,16 +808,104 @@ sub hash
confess &log(ERROR, "${strDebug}: " . $strError);
}
my $oSHA = Digest::SHA->new(defined($strHashType) ? $strHashType : 'sha1');
my $oSHA = Digest::SHA->new($strHashType);
$oSHA->addfile($hFile);
if ($bCompressed)
{
($strHash, $iSize) =
$self->{oRemote}->binary_xfer($hFile, undef, 'in', true, false, false);
}
else
{
my $iBlockSize;
my $tBuffer;
do
{
# Read a block from the file
$iBlockSize = sysread($hFile, $tBuffer, 4194304);
if (!defined($iBlockSize))
{
confess &log(ERROR, "${strFileOp} could not be read: " . $!);
}
$iSize += $iBlockSize;
$oSHA->add($tBuffer);
}
while ($iBlockSize > 0);
$strHash = $oSHA->hexdigest();
}
close($hFile);
$strHash = $oSHA->hexdigest();
}
return $strHash;
return $strHash, $iSize;
}
####################################################################################################################################
# OWNER
####################################################################################################################################
sub owner
{
my $self = shift;
my $strPathType = shift;
my $strFile = shift;
my $strUser = shift;
my $strGroup = shift;
# Set operation variables
my $strFileOp = $self->path_get($strPathType, $strFile);
# Set operation and debug strings
my $strOperation = OP_FILE_OWNER;
my $strDebug = "${strPathType}:${strFileOp}, " .
'user = ' . (defined($strUser) ? $strUser : '[undef]') .
', group = ' . (defined($strGroup) ? $strGroup : '[undef]');
&log(DEBUG, "${strOperation}: ${strDebug}");
if ($self->is_remote($strPathType))
{
confess &log(ASSERT, "${strDebug}: remote operation not supported");
}
else
{
my $iUserId;
my $iGroupId;
my $oStat;
if (!defined($strUser) || !defined($strGroup))
{
$oStat = stat($strFileOp);
if (!defined($oStat))
{
confess &log(ERROR, "unable to stat ${strFileOp}");
}
}
if (defined($strUser))
{
$iUserId = getpwnam($strUser);
}
else
{
$iUserId = $oStat->uid;
}
if (defined($strGroup))
{
$iGroupId = getgrnam($strGroup);
}
else
{
$iGroupId = $oStat->gid;
}
chown($iUserId, $iGroupId, $strFileOp)
or confess &log(ERROR, "unable to set ownership for ${strFileOp}");
}
}
####################################################################################################################################
@@ -903,6 +1018,47 @@ sub list
return @stryFileList;
}
####################################################################################################################################
# WAIT
#
# Wait until the next second. This is done in the file object because it must be performed on whichever side the db is on, local or
# remote. This function is used to make sure that no files are copied in the same second as the manifest is created. The reason is
# that the db might modify the file again in the same second as the copy and that change will not be visible to a subsequent
# incremental backup using timestamp/size to determine deltas.
####################################################################################################################################
sub wait
{
my $self = shift;
my $strPathType = shift;
# Set operation and debug strings
my $strOperation = OP_FILE_WAIT;
my $strDebug = "${strPathType}";
&log(DEBUG, "${strOperation}: ${strDebug}");
# Second when the function was called
my $lTimeBegin;
# Run remotely
if ($self->is_remote($strPathType))
{
# Add remote info to debug string
$strDebug = "${strOperation}: remote: ${strDebug}";
&log(TRACE, "${strOperation}: remote");
# Execute the command
$lTimeBegin = $self->{oRemote}->command_execute($strOperation, undef, true, $strDebug);
}
# Run locally
else
{
# Wait the remainder of the current second
$lTimeBegin = wait_remainder();
}
return $lTimeBegin;
}
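The local branch relies on `wait_remainder()` from BackRest::Utility. A minimal Python sketch of that helper (my own, not the library's code) shows the timing logic the comment describes:

```python
import time

def wait_remainder():
    # Capture the second in which we were called, then sleep out whatever
    # fraction of that second remains so the caller resumes in the next one.
    time_begin = time.time()
    whole_second = int(time_begin)
    time.sleep((whole_second + 1) - time_begin)
    return whole_second
```

Returning the second in which the call began lets the backup record exactly which timestamps are safe for timestamp/size delta comparisons.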
####################################################################################################################################
# MANIFEST
#
@@ -1101,10 +1257,10 @@ sub manifest_recurse
# Get group name
${$oManifestHashRef}{name}{"${strFile}"}{group} = getgrgid($oStat->gid);
# Get permissions
# Get mode
if (${$oManifestHashRef}{name}{"${strFile}"}{type} ne 'l')
{
${$oManifestHashRef}{name}{"${strFile}"}{permission} = sprintf('%04o', S_IMODE($oStat->mode));
${$oManifestHashRef}{name}{"${strFile}"}{mode} = sprintf('%04o', S_IMODE($oStat->mode));
}
# Recurse into directories
@@ -1123,7 +1279,7 @@ sub manifest_recurse
# * source and destination can be local or remote
# * wire and output compression/decompression are supported
# * intermediate temp files are used to prevent partial copies
# * modification time and permissions can be set on destination file
# * modification time, mode, and ownership can be set on destination file
# * destination path can optionally be created
####################################################################################################################################
sub copy
@@ -1137,14 +1293,18 @@ sub copy
my $bDestinationCompress = shift;
my $bIgnoreMissingSource = shift;
my $lModificationTime = shift;
my $strPermission = shift;
my $strMode = shift;
my $bDestinationPathCreate = shift;
my $strUser = shift;
my $strGroup = shift;
my $bAppendChecksum = shift;
# Set defaults
$bSourceCompressed = defined($bSourceCompressed) ? $bSourceCompressed : false;
$bDestinationCompress = defined($bDestinationCompress) ? $bDestinationCompress : false;
$bIgnoreMissingSource = defined($bIgnoreMissingSource) ? $bIgnoreMissingSource : false;
$bDestinationPathCreate = defined($bDestinationPathCreate) ? $bDestinationPathCreate : false;
$bAppendChecksum = defined($bAppendChecksum) ? $bAppendChecksum : false;
# Set working variables
my $bSourceRemote = $self->is_remote($strSourcePathType) || $strSourcePathType eq PIPE_STDIN;
@@ -1156,6 +1316,11 @@ sub copy
my $strDestinationTmpOp = $strDestinationPathType eq PIPE_STDOUT ?
undef : $self->path_get($strDestinationPathType, $strDestinationFile, true);
# Checksum and size variables
my $strChecksum = undef;
my $iFileSize = undef;
my $bResult = true;
# Set debug string and log
my $strDebug = ($bSourceRemote ? ' remote' : ' local') . " ${strSourcePathType}" .
(defined($strSourceFile) ? ":${strSourceOp}" : '') .
@@ -1164,7 +1329,11 @@ sub copy
', source_compressed = ' . ($bSourceCompressed ? 'true' : 'false') .
', destination_compress = ' . ($bDestinationCompress ? 'true' : 'false') .
', ignore_missing_source = ' . ($bIgnoreMissingSource ? 'true' : 'false') .
', destination_path_create = ' . ($bDestinationPathCreate ? 'true' : 'false');
', destination_path_create = ' . ($bDestinationPathCreate ? 'true' : 'false') .
', modification_time = ' . (defined($lModificationTime) ? $lModificationTime : '[undef]') .
', mode = ' . (defined($strMode) ? $strMode : '[undef]') .
', user = ' . (defined($strUser) ? $strUser : '[undef]') .
', group = ' . (defined($strGroup) ? $strGroup : '[undef]');
&log(DEBUG, OP_FILE_COPY . ": ${strDebug}");
# Open the source and destination files (if needed)
@@ -1185,7 +1354,7 @@ sub copy
if ($bIgnoreMissingSource && $strDestinationPathType ne PIPE_STDOUT)
{
return false;
return false, undef, undef;
}
}
@@ -1195,7 +1364,7 @@ sub copy
{
if ($strDestinationPathType eq PIPE_STDOUT)
{
$self->{oRemote}->write_line(*STDOUT, 'block 0');
$self->{oRemote}->write_line(*STDOUT, 'block -1');
}
confess &log(ERROR, $strError, $iErrorCode);
@@ -1264,6 +1433,7 @@ sub copy
{
$oParamHash{source_file} = $strSourceOp;
$oParamHash{source_compressed} = $bSourceCompressed;
$oParamHash{destination_compress} = $bDestinationCompress;
$hIn = $self->{oRemote}->{hOut};
}
@@ -1282,12 +1452,28 @@ sub copy
else
{
$oParamHash{destination_file} = $strDestinationOp;
$oParamHash{source_compressed} = $bSourceCompressed;
$oParamHash{destination_compress} = $bDestinationCompress;
$oParamHash{destination_path_create} = $bDestinationPathCreate;
if (defined($strPermission))
if (defined($strMode))
{
$oParamHash{permission} = $strPermission;
$oParamHash{mode} = $strMode;
}
if (defined($strUser))
{
$oParamHash{user} = $strUser;
}
if (defined($strGroup))
{
$oParamHash{group} = $strGroup;
}
if ($bAppendChecksum)
{
$oParamHash{append_checksum} = true;
}
$hOut = $self->{oRemote}->{hIn};
@@ -1304,15 +1490,30 @@ sub copy
$oParamHash{destination_compress} = $bDestinationCompress;
$oParamHash{destination_path_create} = $bDestinationPathCreate;
if (defined($strPermission))
if (defined($strMode))
{
$oParamHash{permission} = $strPermission;
$oParamHash{mode} = $strMode;
}
if (defined($strUser))
{
$oParamHash{user} = $strUser;
}
if (defined($strGroup))
{
$oParamHash{group} = $strGroup;
}
if ($bIgnoreMissingSource)
{
$oParamHash{ignore_missing_source} = $bIgnoreMissingSource;
}
if ($bAppendChecksum)
{
$oParamHash{append_checksum} = true;
}
}
# Build debug string
@@ -1333,7 +1534,8 @@ sub copy
# Transfer the file (skip this for copies where both sides are remote)
if ($strOperation ne OP_FILE_COPY)
{
$self->{oRemote}->binary_xfer($hIn, $hOut, $strRemote, $bSourceCompressed, $bDestinationCompress);
($strChecksum, $iFileSize) =
$self->{oRemote}->binary_xfer($hIn, $hOut, $strRemote, $bSourceCompressed, $bDestinationCompress);
}
# If this is the controlling process then wait for OK from remote
@@ -1344,7 +1546,47 @@ sub copy
eval
{
$strOutput = $self->{oRemote}->output_read($strOperation eq OP_FILE_COPY, $strDebug, true);
$strOutput = $self->{oRemote}->output_read(true, $strDebug, true);
# Check the result of the remote call
if (substr($strOutput, 0, 1) eq 'Y')
{
# If the operation was purely remote, get checksum/size
if ($strOperation eq OP_FILE_COPY ||
$strOperation eq OP_FILE_COPY_IN && $bSourceCompressed && !$bDestinationCompress)
{
# Checksum shouldn't already be set
if (defined($strChecksum) || defined($iFileSize))
{
confess &log(ASSERT, "checksum and size are already defined, but shouldn't be");
}
# Parse output and check to make sure tokens are defined
my @stryToken = split(/ /, $strOutput);
if (!defined($stryToken[1]) || !defined($stryToken[2]) ||
$stryToken[1] eq '?' && $stryToken[2] eq '?')
{
confess &log(ERROR, "invalid return from copy" . (defined($strOutput) ? ": ${strOutput}" : ''));
}
# Read the checksum and size
if ($stryToken[1] ne '?')
{
$strChecksum = $stryToken[1];
}
if ($stryToken[2] ne '?')
{
$iFileSize = $stryToken[2];
}
}
}
# Remote call returned false
else
{
$bResult = false;
}
};
# If there is an error then evaluate
@@ -1360,38 +1602,38 @@ sub copy
close($hDestinationFile) or confess &log(ERROR, "cannot close file ${strDestinationTmpOp}");
unlink($strDestinationTmpOp) or confess &log(ERROR, "cannot remove file ${strDestinationTmpOp}");
return false;
return false, undef, undef;
}
# Otherwise report the error
confess $oMessage;
}
# If this was a remote copy, then return the result
if ($strOperation eq OP_FILE_COPY)
{
return false; #$strOutput eq 'N' ? true : false;
}
}
}
# Else this is a local operation
else
{
# If the source is compressed and the destination is not then decompress
if ($bSourceCompressed && !$bDestinationCompress)
# If the source is not compressed and the destination is then compress
if (!$bSourceCompressed && $bDestinationCompress)
{
gunzip($hSourceFile => $hDestinationFile)
or die confess &log(ERROR, "${strDebug}: unable to uncompress: " . $GunzipError);
($strChecksum, $iFileSize) =
$self->{oRemote}->binary_xfer($hSourceFile, $hDestinationFile, 'out', false, true, false);
}
elsif (!$bSourceCompressed && $bDestinationCompress)
# If the source is compressed and the destination is not then decompress
elsif ($bSourceCompressed && !$bDestinationCompress)
{
gzip($hSourceFile => $hDestinationFile)
or die confess &log(ERROR, "${strDebug}: unable to compress: " . $GzipError);
($strChecksum, $iFileSize) =
$self->{oRemote}->binary_xfer($hSourceFile, $hDestinationFile, 'in', true, false, false);
}
# Else both sides are compressed, so copy capturing checksum
elsif ($bSourceCompressed)
{
($strChecksum, $iFileSize) =
$self->{oRemote}->binary_xfer($hSourceFile, $hDestinationFile, 'out', true, true, false);
}
else
{
cp($hSourceFile, $hDestinationFile)
or die confess &log(ERROR, "${strDebug}: unable to copy: " . $!);
($strChecksum, $iFileSize) =
$self->{oRemote}->binary_xfer($hSourceFile, $hDestinationFile, 'in', false, true, false);
}
}
@@ -1407,14 +1649,22 @@ sub copy
close($hDestinationFile) or confess &log(ERROR, "cannot close file ${strDestinationTmpOp}");
}
# Where the destination is local, set permissions, modification time, and perform move to final location
if (!$bDestinationRemote)
# Checksum and file size should be set if the destination is not remote
if ($bResult &&
!(!$bSourceRemote && $bDestinationRemote && $bSourceCompressed) &&
(!defined($strChecksum) || !defined($iFileSize)))
{
# Set the file permission if required
if (defined($strPermission))
confess &log(ASSERT, "${strDebug}: checksum or file size not set");
}
# Where the destination is local, set mode, modification time, and perform move to final location
if ($bResult && !$bDestinationRemote)
{
# Set the file mode if required
if (defined($strMode))
{
chmod(oct($strPermission), $strDestinationTmpOp)
or confess &log(ERROR, "unable to set permissions for local ${strDestinationTmpOp}");
chmod(oct($strMode), $strDestinationTmpOp)
or confess &log(ERROR, "unable to set mode for local ${strDestinationTmpOp}");
}
# Set the file modification time if required
@@ -1424,12 +1674,33 @@ sub copy
or confess &log(ERROR, "unable to set time for local ${strDestinationTmpOp}");
}
# set user and/or group if required
if (defined($strUser) || defined($strGroup))
{
$self->owner(PATH_ABSOLUTE, $strDestinationTmpOp, $strUser, $strGroup);
}
# Replace checksum in destination filename (if exists)
if ($bAppendChecksum)
{
# Replace destination filename
if ($bDestinationCompress)
{
$strDestinationOp =
substr($strDestinationOp, 0, length($strDestinationOp) - length($self->{strCompressExtension}) - 1) .
'-' . $strChecksum . '.' . $self->{strCompressExtension};
}
else
{
$strDestinationOp .= '-' . $strChecksum;
}
}
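The filename rewrite above can be sketched as a small helper (name and signature are mine): the checksum is spliced in before the compression extension when present, otherwise simply appended.

```python
def append_checksum(dest, checksum, compress_ext="gz"):
    # Turn file.gz into file-<checksum>.gz, or file into file-<checksum>.
    suffix = "." + compress_ext
    if dest.endswith(suffix):
        return dest[:-len(suffix)] + "-" + checksum + suffix
    return dest + "-" + checksum
```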
# Move the file from tmp to final destination
$self->move(PATH_ABSOLUTE, $strDestinationTmpOp, PATH_ABSOLUTE, $strDestinationOp, true);
}
return true;
return $bResult, $strChecksum, $iFileSize;
}
no Moose;
__PACKAGE__->meta->make_immutable;
1;
lib/BackRest/Manifest.pm (new file, 751 lines)
@@ -0,0 +1,751 @@
####################################################################################################################################
# MANIFEST MODULE
####################################################################################################################################
package BackRest::Manifest;
use strict;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename qw(dirname basename);
use Time::Local qw(timelocal);
use Digest::SHA;
use lib dirname($0);
use BackRest::Exception qw(ERROR_CHECKSUM ERROR_FORMAT);
use BackRest::Utility;
use BackRest::File;
# Exports
use Exporter qw(import);
our @EXPORT = qw(MANIFEST_PATH MANIFEST_FILE MANIFEST_LINK
MANIFEST_SECTION_BACKUP MANIFEST_SECTION_BACKUP_OPTION MANIFEST_SECTION_BACKUP_PATH
MANIFEST_SECTION_BACKUP_TABLESPACE
MANIFEST_KEY_ARCHIVE_START MANIFEST_KEY_ARCHIVE_STOP MANIFEST_KEY_BASE MANIFEST_KEY_CHECKSUM MANIFEST_KEY_COMPRESS
MANIFEST_KEY_HARDLINK MANIFEST_KEY_LABEL MANIFEST_KEY_PRIOR MANIFEST_KEY_REFERENCE MANIFEST_KEY_TIMESTAMP_COPY_START
MANIFEST_KEY_TIMESTAMP_START MANIFEST_KEY_TIMESTAMP_STOP MANIFEST_KEY_TYPE MANIFEST_KEY_VERSION
MANIFEST_SUBKEY_CHECKSUM MANIFEST_SUBKEY_DESTINATION MANIFEST_SUBKEY_EXISTS MANIFEST_SUBKEY_FUTURE
MANIFEST_SUBKEY_GROUP MANIFEST_SUBKEY_LINK MANIFEST_SUBKEY_MODE MANIFEST_SUBKEY_MODIFICATION_TIME
MANIFEST_SUBKEY_PATH MANIFEST_SUBKEY_REFERENCE MANIFEST_SUBKEY_SIZE MANIFEST_SUBKEY_USER);
####################################################################################################################################
# File/path constants
####################################################################################################################################
use constant FILE_MANIFEST => 'backup.manifest';
push @EXPORT, qw(FILE_MANIFEST);
####################################################################################################################################
# MANIFEST Constants
####################################################################################################################################
use constant
{
MANIFEST_PATH => 'path',
MANIFEST_FILE => 'file',
MANIFEST_LINK => 'link',
MANIFEST_SECTION_BACKUP => 'backup',
MANIFEST_SECTION_BACKUP_OPTION => 'backup:option',
MANIFEST_SECTION_BACKUP_PATH => 'backup:path',
MANIFEST_SECTION_BACKUP_TABLESPACE => 'backup:tablespace',
MANIFEST_KEY_ARCHIVE_START => 'archive-start',
MANIFEST_KEY_ARCHIVE_STOP => 'archive-stop',
MANIFEST_KEY_BASE => 'base',
MANIFEST_KEY_CHECKSUM => 'checksum',
MANIFEST_KEY_COMPRESS => 'compress',
MANIFEST_KEY_FORMAT => 'format',
MANIFEST_KEY_HARDLINK => 'hardlink',
MANIFEST_KEY_LABEL => 'label',
MANIFEST_KEY_PRIOR => 'prior',
MANIFEST_KEY_REFERENCE => 'reference',
MANIFEST_KEY_TIMESTAMP_COPY_START => 'timestamp-copy-start',
MANIFEST_KEY_TIMESTAMP_START => 'timestamp-start',
MANIFEST_KEY_TIMESTAMP_STOP => 'timestamp-stop',
MANIFEST_KEY_TYPE => 'type',
MANIFEST_KEY_VERSION => 'version',
MANIFEST_SUBKEY_CHECKSUM => 'checksum',
MANIFEST_SUBKEY_DESTINATION => 'link_destination',
MANIFEST_SUBKEY_EXISTS => 'exists',
MANIFEST_SUBKEY_FUTURE => 'future',
MANIFEST_SUBKEY_GROUP => 'group',
MANIFEST_SUBKEY_LINK => 'link',
MANIFEST_SUBKEY_MODE => 'mode',
MANIFEST_SUBKEY_MODIFICATION_TIME => 'modification_time',
MANIFEST_SUBKEY_PATH => 'path',
MANIFEST_SUBKEY_REFERENCE => 'reference',
MANIFEST_SUBKEY_SIZE => 'size',
MANIFEST_SUBKEY_USER => 'user'
};
####################################################################################################################################
# CONSTRUCTOR
####################################################################################################################################
sub new
{
my $class = shift; # Class name
my $strFileName = shift; # Manifest filename
my $bLoad = shift; # Load the manifest?
# Create the class hash
my $self = {};
bless $self, $class;
# Filename must be specified
if (!defined($strFileName))
{
confess &log(ASSERT, 'filename must be provided');
}
# Set variables
my $oManifest = {};
$self->{oManifest} = $oManifest;
$self->{strFileName} = $strFileName;
# Load the manifest if specified
if (!(defined($bLoad) && $bLoad == false))
{
ini_load($strFileName, $oManifest);
# Make sure the manifest is valid by testing checksum
my $strChecksum = $self->get(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_CHECKSUM);
my $strTestChecksum = $self->hash();
if ($strChecksum ne $strTestChecksum)
{
confess &log(ERROR, "backup.manifest checksum is invalid, should be ${strTestChecksum}", ERROR_CHECKSUM);
}
# Make sure that the format is current, otherwise error
my $iFormat = $self->get(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_FORMAT, undef, false, 0);
if ($iFormat != FORMAT)
{
confess &log(ERROR, "backup format of ${strFileName} is ${iFormat} but " . FORMAT . ' is required by this version of ' .
'PgBackRest. If you are attempting an incr/diff backup you will need to take a new full backup. ' .
"If you are trying to restore, you'll need to use a version that supports format ${iFormat}.",
ERROR_FORMAT);
}
}
else
{
$self->set(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_FORMAT, undef, FORMAT);
}
return $self;
}
####################################################################################################################################
# SAVE
#
# Save the manifest.
####################################################################################################################################
sub save
{
my $self = shift;
# Create the checksum
$self->hash();
# Save the config file
ini_save($self->{strFileName}, $self->{oManifest});
}
####################################################################################################################################
# HASH
#
# Generate hash for the manifest.
####################################################################################################################################
sub hash
{
my $self = shift;
my $oManifest = $self->{oManifest};
# Remove the old checksum
$self->remove(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_CHECKSUM);
my $oSHA = Digest::SHA->new('sha1');
# Calculate the checksum from section values
foreach my $strSection ($self->keys())
{
$oSHA->add($strSection);
# Calculate the checksum from key values
foreach my $strKey ($self->keys($strSection))
{
$oSHA->add($strKey);
my $strValue = $self->get($strSection, $strKey);
if (!defined($strValue))
{
confess &log(ASSERT, "section ${strSection}, key ${strKey} has undef value");
}
# Calculate the checksum from subkey values
if (ref($strValue) eq "HASH")
{
foreach my $strSubKey ($self->keys($strSection, $strKey))
{
my $strSubValue = $self->get($strSection, $strKey, $strSubKey);
if (!defined($strSubValue))
{
confess &log(ASSERT, "section ${strSection}, key ${strKey}, subkey ${strSubKey} has undef value");
}
$oSHA->add($strSubValue);
}
}
else
{
$oSHA->add($strValue);
}
}
}
# Set the new checksum
my $strHash = $oSHA->hexdigest();
$self->set(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_CHECKSUM, undef, $strHash);
return $strHash;
}
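The checksum walk above feeds section names, key names, and leaf values to SHA-1, hashing subkey values but not subkey names. A Python sketch of the same scheme (function name is mine, and sorted iteration is an assumption here to keep the digest deterministic):

```python
import hashlib

def manifest_hash(manifest):
    # manifest is a section -> key -> value (or key -> subkey -> value) dict.
    sha = hashlib.sha1()
    for section in sorted(manifest):
        sha.update(section.encode())
        for key in sorted(manifest[section]):
            sha.update(key.encode())
            value = manifest[section][key]
            if isinstance(value, dict):
                # Subkey names are not hashed, only their values.
                for subkey in sorted(value):
                    sha.update(str(value[subkey]).encode())
            else:
                sha.update(str(value).encode())
    return sha.hexdigest()
```

Because the old checksum is removed before hashing, the stored digest can later be recomputed and compared to detect a corrupted or modified manifest.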
####################################################################################################################################
# GET
#
# Get a value.
####################################################################################################################################
sub get
{
my $self = shift;
my $strSection = shift;
my $strValue = shift;
my $strSubValue = shift;
my $bRequired = shift;
my $oDefault = shift;
my $oManifest = $self->{oManifest};
# Section must always be defined
if (!defined($strSection))
{
confess &log(ASSERT, 'section is not defined');
}
# Set default for required
$bRequired = defined($bRequired) ? $bRequired : true;
# Store the result
my $oResult = undef;
if (defined($strSubValue))
{
if (!defined($strValue))
{
confess &log(ASSERT, 'subvalue requested but value is not defined');
}
if (defined(${$oManifest}{$strSection}{$strValue}))
{
$oResult = ${$oManifest}{$strSection}{$strValue}{$strSubValue};
}
}
elsif (defined($strValue))
{
if (defined(${$oManifest}{$strSection}))
{
$oResult = ${$oManifest}{$strSection}{$strValue};
}
}
else
{
$oResult = ${$oManifest}{$strSection};
}
if (!defined($oResult) && $bRequired)
{
confess &log(ASSERT, "manifest section '$strSection'" . (defined($strValue) ? ", value '$strValue'" : '') .
(defined($strSubValue) ? ", subvalue '$strSubValue'" : '') . ' is required but not defined');
}
if (!defined($oResult) && defined($oDefault))
{
$oResult = $oDefault;
}
return $oResult;
}
####################################################################################################################################
# SET
#
# Set a value.
####################################################################################################################################
sub set
{
my $self = shift;
my $strSection = shift;
my $strKey = shift;
my $strSubKey = shift;
my $strValue = shift;
my $oManifest = $self->{oManifest};
# Make sure the keys are valid
$self->valid($strSection, $strKey, $strSubKey);
if (defined($strSubKey))
{
${$oManifest}{$strSection}{$strKey}{$strSubKey} = $strValue;
}
else
{
${$oManifest}{$strSection}{$strKey} = $strValue;
}
}
####################################################################################################################################
# REMOVE
#
# Remove a value.
####################################################################################################################################
sub remove
{
my $self = shift;
my $strSection = shift;
my $strKey = shift;
my $strSubKey = shift;
my $strValue = shift;
my $oManifest = $self->{oManifest};
# Make sure the keys are valid
$self->valid($strSection, $strKey, $strSubKey, undef, true);
if (defined($strSubKey))
{
delete(${$oManifest}{$strSection}{$strKey}{$strSubKey});
}
else
{
delete(${$oManifest}{$strSection}{$strKey});
}
}
####################################################################################################################################
# VALID
#
# Determine if section, key, subkey combination is valid.
####################################################################################################################################
sub valid
{
my $self = shift;
my $strSection = shift;
my $strKey = shift;
my $strSubKey = shift;
my $strValue = shift;
my $bDelete = shift;
# Section and key must always be defined
if (!defined($strSection) || !defined($strKey))
{
confess &log(ASSERT, 'section or key is not defined');
}
# Default bDelete
$bDelete = defined($bDelete) ? $bDelete : false;
if ($strSection =~ /^.*\:(file|path|link)$/ && $strSection !~ /^backup\:path$/)
{
if (!defined($strSubKey) && $bDelete)
{
return true;
}
my $strPath = (split(':', $strSection))[0];
my $strType = (split(':', $strSection))[1];
if ($strPath eq 'tablespace')
{
$strPath = (split(':', $strSection))[1];
$strType = (split(':', $strSection))[2];
}
if (($strType eq 'path' || $strType eq 'file' || $strType eq 'link') &&
($strSubKey eq MANIFEST_SUBKEY_USER ||
$strSubKey eq MANIFEST_SUBKEY_GROUP))
{
return true;
}
elsif (($strType eq 'path' || $strType eq 'file') &&
($strSubKey eq MANIFEST_SUBKEY_MODE))
{
return true;
}
elsif ($strType eq 'file' &&
($strSubKey eq MANIFEST_SUBKEY_CHECKSUM ||
$strSubKey eq MANIFEST_SUBKEY_EXISTS ||
$strSubKey eq MANIFEST_SUBKEY_FUTURE ||
$strSubKey eq MANIFEST_SUBKEY_MODIFICATION_TIME ||
$strSubKey eq MANIFEST_SUBKEY_REFERENCE ||
$strSubKey eq MANIFEST_SUBKEY_SIZE))
{
return true;
}
elsif ($strType eq 'link' &&
$strSubKey eq MANIFEST_SUBKEY_DESTINATION)
{
return true;
}
}
if ($strSection eq MANIFEST_SECTION_BACKUP)
{
if ($strKey eq MANIFEST_KEY_ARCHIVE_START ||
$strKey eq MANIFEST_KEY_ARCHIVE_STOP ||
$strKey eq MANIFEST_KEY_CHECKSUM ||
$strKey eq MANIFEST_KEY_FORMAT ||
$strKey eq MANIFEST_KEY_LABEL ||
$strKey eq MANIFEST_KEY_PRIOR ||
$strKey eq MANIFEST_KEY_REFERENCE ||
$strKey eq MANIFEST_KEY_TIMESTAMP_COPY_START ||
$strKey eq MANIFEST_KEY_TIMESTAMP_START ||
$strKey eq MANIFEST_KEY_TIMESTAMP_STOP ||
$strKey eq MANIFEST_KEY_TYPE ||
$strKey eq MANIFEST_KEY_VERSION)
{
return true;
}
}
elsif ($strSection eq MANIFEST_SECTION_BACKUP_OPTION)
{
if ($strKey eq MANIFEST_KEY_CHECKSUM ||
$strKey eq MANIFEST_KEY_COMPRESS ||
$strKey eq MANIFEST_KEY_HARDLINK)
{
return true;
}
}
elsif ($strSection eq MANIFEST_SECTION_BACKUP_TABLESPACE)
{
if ($strSubKey eq 'link' ||
$strSubKey eq 'path')
{
return true;
}
}
elsif ($strSection eq MANIFEST_SECTION_BACKUP_PATH)
{
if ($strKey eq 'base' || $strKey =~ /^tablespace\:.*$/)
{
return true;
}
}
confess &log(ASSERT, "manifest section '${strSection}', key '${strKey}'" .
(defined($strSubKey) ? ", subkey '$strSubKey'" : '') . ' is not valid');
}
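The `valid` sub above decodes section names by splitting on `:` and treating `tablespace:` sections specially, since they carry an extra name component. A small Python sketch of just that parsing step (function name is illustrative, not part of the module):

```python
def parse_section(section):
    """Split a manifest section name like 'base:file' or
    'tablespace:ts1:file' into a (path, type) pair."""
    parts = section.split(':')
    if parts[0] == 'tablespace':
        # Tablespace sections carry an extra name component
        return parts[1], parts[2]
    return parts[0], parts[1]
```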
####################################################################################################################################
# epoch
#
# Retrieves a value in the format YYYY-MM-DD HH24:MI:SS and converts to epoch time.
####################################################################################################################################
sub epoch
{
my $self = shift;
my $strSection = shift;
my $strKey = shift;
my $strSubKey = shift;
my $strValue = $self->get($strSection, $strKey, $strSubKey);
my ($iYear, $iMonth, $iDay, $iHour, $iMinute, $iSecond) = split(/[\s\-\:]+/, $strValue);
return timelocal($iSecond, $iMinute, $iHour, $iDay, $iMonth - 1, $iYear);
}
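The `epoch` sub splits a `YYYY-MM-DD HH24:MI:SS` string on whitespace, `-`, and `:` and converts the fields to local epoch time via `timelocal`. The same conversion sketched in Python (`time.mktime` plays the role of `timelocal`; the trailing `-1` lets the library determine DST):

```python
import time

def manifest_epoch(timestamp):
    """Convert 'YYYY-MM-DD HH24:MI:SS' to local epoch seconds."""
    year, month, day, hour, minute, second = (
        int(part) for part in
        timestamp.replace('-', ' ').replace(':', ' ').split())
    # wday/yday are ignored by mktime; isdst=-1 means "figure it out"
    return time.mktime((year, month, day, hour, minute, second, 0, 0, -1))
```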
####################################################################################################################################
# KEYS
#
# Get a list of keys.
####################################################################################################################################
sub keys
{
my $self = shift;
my $strSection = shift;
my $strKey = shift;
if (defined($strSection))
{
if ($self->test($strSection, $strKey))
{
return sort(keys $self->get($strSection, $strKey));
}
return [];
}
return sort(keys $self->{oManifest});
}
####################################################################################################################################
# TEST
#
# Test a value to see if it equals the supplied test value. If no test value is given, tests that it is defined.
####################################################################################################################################
sub test
{
my $self = shift;
my $strSection = shift;
my $strValue = shift;
my $strSubValue = shift;
my $strTest = shift;
my $strResult = $self->get($strSection, $strValue, $strSubValue, false);
if (defined($strResult))
{
if (defined($strTest))
{
return $strResult eq $strTest ? true : false;
}
return true;
}
return false;
}
####################################################################################################################################
# BUILD
#
# Build the manifest object.
####################################################################################################################################
sub build
{
my $self = shift;
my $oFile = shift;
my $strDbClusterPath = shift;
my $oLastManifest = shift;
my $bNoStartStop = shift;
my $oTablespaceMapRef = shift;
my $strLevel = shift;
&log(DEBUG, 'Manifest->build');
# If no level is defined then it must be base
if (!defined($strLevel))
{
$strLevel = 'base';
if (defined($oLastManifest))
{
$self->set(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_PRIOR, undef,
$oLastManifest->get(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_LABEL));
}
# If bNoStartStop then build the tablespace map from pg_tblspc path
if ($bNoStartStop)
{
$oTablespaceMapRef = {};
my %oTablespaceManifestHash;
$oFile->manifest(PATH_DB_ABSOLUTE, $strDbClusterPath . '/pg_tblspc', \%oTablespaceManifestHash);
foreach my $strName (sort(CORE::keys $oTablespaceManifestHash{name}))
{
if ($strName eq '.' or $strName eq '..')
{
next;
}
if ($oTablespaceManifestHash{name}{$strName}{type} ne 'l')
{
confess &log(ERROR, "pg_tblspc/${strName} is not a link");
}
&log(DEBUG, "Found tablespace ${strName}");
${$oTablespaceMapRef}{oid}{$strName}{name} = $strName;
}
}
}
# Get the manifest for this level
my %oManifestHash;
$oFile->manifest(PATH_DB_ABSOLUTE, $strDbClusterPath, \%oManifestHash);
$self->set(MANIFEST_SECTION_BACKUP_PATH, $strLevel, undef, $strDbClusterPath);
# Loop though all paths/files/links in the manifest
foreach my $strName (sort(CORE::keys $oManifestHash{name}))
{
# Skip certain files during backup
if (($strName =~ /^pg\_xlog\/.*/ && !$bNoStartStop) || # pg_xlog/ - this will be reconstructed
$strName =~ /^postmaster\.pid$/ || # postmaster.pid - to avoid confusing postgres when restoring
$strName =~ /^recovery\.conf$/) # recovery.conf - doesn't make sense to backup this file
{
next;
}
my $cType = $oManifestHash{name}{"${strName}"}{type};
my $strLinkDestination = $oManifestHash{name}{"${strName}"}{link_destination};
my $strSection = "${strLevel}:path";
if ($cType eq 'f')
{
$strSection = "${strLevel}:file";
}
elsif ($cType eq 'l')
{
$strSection = "${strLevel}:link";
}
elsif ($cType ne 'd')
{
confess &log(ASSERT, "unrecognized file type $cType for file $strName");
}
# User and group required for all types
$self->set($strSection, $strName, MANIFEST_SUBKEY_USER, $oManifestHash{name}{"${strName}"}{user});
$self->set($strSection, $strName, MANIFEST_SUBKEY_GROUP, $oManifestHash{name}{"${strName}"}{group});
# Mode for required file and path type only
if ($cType eq 'f' || $cType eq 'd')
{
$self->set($strSection, $strName, MANIFEST_SUBKEY_MODE, $oManifestHash{name}{"${strName}"}{mode});
}
# Modification time and size required for file type only
if ($cType eq 'f')
{
$self->set($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME,
$oManifestHash{name}{"${strName}"}{modification_time} + 0);
$self->set($strSection, $strName, MANIFEST_SUBKEY_SIZE, $oManifestHash{name}{"${strName}"}{size} + 0);
}
# Link destination required for link type only
if ($cType eq 'l')
{
$self->set($strSection, $strName, MANIFEST_SUBKEY_DESTINATION,
$oManifestHash{name}{"${strName}"}{link_destination});
# If this is a tablespace then follow the link
if (index($strName, 'pg_tblspc/') == 0 && $strLevel eq 'base')
{
my $strTablespaceOid = basename($strName);
my $strTablespaceName = ${$oTablespaceMapRef}{oid}{$strTablespaceOid}{name};
$self->set(MANIFEST_SECTION_BACKUP_TABLESPACE, $strTablespaceName,
MANIFEST_SUBKEY_LINK, $strTablespaceOid);
$self->set(MANIFEST_SECTION_BACKUP_TABLESPACE, $strTablespaceName,
MANIFEST_SUBKEY_PATH, $strLinkDestination);
$self->build($oFile, $strLinkDestination, $oLastManifest, $bNoStartStop, $oTablespaceMapRef,
"tablespace:${strTablespaceName}");
}
}
}
# If this is the base level then do post-processing
if ($strLevel eq 'base')
{
my $bTimeInFuture = false;
my $lTimeBegin = $oFile->wait(PATH_DB_ABSOLUTE);
# Loop through all backup paths (base and tablespaces)
foreach my $strPathKey ($self->keys(MANIFEST_SECTION_BACKUP_PATH))
{
my $strSection = "${strPathKey}:file";
# Make sure file section exists
if ($self->test($strSection))
{
# Loop though all files
foreach my $strName ($self->keys($strSection))
{
# If modification time is in the future (in this backup OR the last backup) set warning flag and do not
# allow a reference
if ($self->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME) > $lTimeBegin ||
(defined($oLastManifest) && $oLastManifest->test($strSection, $strName, MANIFEST_SUBKEY_FUTURE, 'y')))
{
$bTimeInFuture = true;
# Only mark as future if still in the future in the current backup
if ($self->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME) > $lTimeBegin)
{
$self->set($strSection, $strName, MANIFEST_SUBKEY_FUTURE, 'y');
}
}
# Else check if modification time and size are unchanged since last backup
elsif (defined($oLastManifest) && $oLastManifest->test($strSection, $strName) &&
$self->get($strSection, $strName, MANIFEST_SUBKEY_SIZE) ==
$oLastManifest->get($strSection, $strName, MANIFEST_SUBKEY_SIZE) &&
$self->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME) ==
$oLastManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME))
{
# Copy reference from previous backup if possible
if ($oLastManifest->test($strSection, $strName, MANIFEST_SUBKEY_REFERENCE))
{
$self->set($strSection, $strName, MANIFEST_SUBKEY_REFERENCE,
$oLastManifest->get($strSection, $strName, MANIFEST_SUBKEY_REFERENCE));
}
# Otherwise the reference is to the previous backup
else
{
$self->set($strSection, $strName, MANIFEST_SUBKEY_REFERENCE,
$oLastManifest->get(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_LABEL));
}
# Copy the checksum from previous manifest
if ($oLastManifest->test($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM))
{
$self->set($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM,
$oLastManifest->get($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM));
}
# Build the manifest reference list - not used for processing but is useful for debugging
my $strFileReference = $self->get($strSection, $strName, MANIFEST_SUBKEY_REFERENCE);
my $strManifestReference = $self->get(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_REFERENCE,
undef, false);
if (!defined($strManifestReference))
{
$self->set(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_REFERENCE, undef, $strFileReference);
}
else
{
if ($strManifestReference !~ /^$strFileReference|,$strFileReference/)
{
$self->set(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_REFERENCE, undef,
$strManifestReference . ",${strFileReference}");
}
}
}
}
}
}
# Warn if any files in the current backup are in the future
if ($bTimeInFuture)
{
&log(WARN, "some files have timestamps in the future - they will be copied to prevent possible race conditions");
}
# Record the time when copying will start
$self->set(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_TIMESTAMP_COPY_START, undef,
timestamp_string_get(undef, $lTimeBegin + 1));
}
}
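The heart of the incremental logic in `build` is the reference test: a file may reference the prior backup only when both its size and modification time are unchanged (and its timestamp is not in the future). That predicate, sketched in Python with illustrative field names:

```python
def can_reference(prev, cur):
    """True when a file is unchanged since the last backup and can be
    stored as a reference instead of being copied again."""
    return (prev is not None
            and prev['size'] == cur['size']
            and prev['modification_time'] == cur['modification_time'])
```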
1;

(one file's diff suppressed because it is too large)

lib/BackRest/Restore.pm (new file, +765 lines)
####################################################################################################################################
# RESTORE MODULE
####################################################################################################################################
package BackRest::Restore;
use threads;
use threads::shared;
use Thread::Queue;
use strict;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename qw(dirname);
use File::stat qw(lstat);
use lib dirname($0);
use BackRest::Exception;
use BackRest::Utility;
use BackRest::ThreadGroup;
use BackRest::Config;
use BackRest::Manifest;
use BackRest::File;
use BackRest::Db;
####################################################################################################################################
# Recovery.conf file
####################################################################################################################################
use constant FILE_RECOVERY_CONF => 'recovery.conf';
####################################################################################################################################
# CONSTRUCTOR
####################################################################################################################################
sub new
{
my $class = shift; # Class name
my $strDbClusterPath = shift; # Database cluster path
my $strBackupPath = shift; # Backup to restore
my $oRemapRef = shift; # Tablespace remaps
my $oFile = shift; # Default file object
my $iThreadTotal = shift; # Total threads to run for restore
my $bDelta = shift; # perform delta restore
my $bForce = shift; # force a restore
my $strType = shift; # Recovery type
my $strTarget = shift; # Recovery target
my $bTargetExclusive = shift; # Target exclusive option
my $bTargetResume = shift; # Target resume option
my $strTargetTimeline = shift; # Target timeline option
my $oRecoveryRef = shift; # Other recovery options
my $strStanza = shift; # Restore stanza
my $strBackRestBin = shift; # Absolute backrest filename
my $strConfigFile = shift; # Absolute config filename (optional)
# Create the class hash
my $self = {};
bless $self, $class;
# Initialize variables
$self->{strDbClusterPath} = $strDbClusterPath;
$self->{strBackupPath} = $strBackupPath;
$self->{oRemapRef} = $oRemapRef;
$self->{oFile} = $oFile;
$self->{iThreadTotal} = defined($iThreadTotal) ? $iThreadTotal : 1;
$self->{bDelta} = $bDelta;
$self->{bForce} = $bForce;
$self->{strType} = $strType;
$self->{strTarget} = $strTarget;
$self->{bTargetExclusive} = $bTargetExclusive;
$self->{bTargetResume} = $bTargetResume;
$self->{strTargetTimeline} = $strTargetTimeline;
$self->{oRecoveryRef} = $oRecoveryRef;
$self->{strStanza} = $strStanza;
$self->{strBackRestBin} = $strBackRestBin;
$self->{strConfigFile} = $strConfigFile;
return $self;
}
####################################################################################################################################
# MANIFEST_OWNERSHIP_CHECK
#
# Checks the users and groups that exist in the manifest and emits warnings for ownership that cannot be set properly, either
# because the current user does not have permissions or because the user/group does not exist.
####################################################################################################################################
sub manifest_ownership_check
{
my $self = shift; # Class hash
my $oManifest = shift; # Backup manifest
# Create hashes to track valid/invalid users/groups
my %oOwnerHash = ();
# Create hash for each type and owner to be checked
my $strDefaultUser = getpwuid($<);
my $strDefaultGroup = getgrgid($();
my %oFileTypeHash = (&MANIFEST_PATH => true, &MANIFEST_LINK => true, &MANIFEST_FILE => true);
my %oOwnerTypeHash = (&MANIFEST_SUBKEY_USER => $strDefaultUser, &MANIFEST_SUBKEY_GROUP => $strDefaultGroup);
# Loop through owner types (user, group)
foreach my $strOwnerType (sort (keys %oOwnerTypeHash))
{
# Loop through all backup paths (base and tablespaces)
foreach my $strPathKey ($oManifest->keys(MANIFEST_SECTION_BACKUP_PATH))
{
# Loop through types (path, link, file)
foreach my $strFileType (sort (keys %oFileTypeHash))
{
my $strSection = "${strPathKey}:${strFileType}";
# Get users and groups for paths
if ($oManifest->test($strSection))
{
foreach my $strName ($oManifest->keys($strSection))
{
my $strOwner = $oManifest->get($strSection, $strName, $strOwnerType);
# If root then test to see if the user/group is valid
if ($< == 0)
{
# If the owner has not been tested yet then test it
if (!defined($oOwnerHash{$strOwnerType}{$strOwner}))
{
my $strOwnerId;
if ($strOwnerType eq 'user')
{
$strOwnerId = getpwnam($strOwner);
}
else
{
$strOwnerId = getgrnam($strOwner);
}
$oOwnerHash{$strOwnerType}{$strOwner} = defined($strOwnerId) ? true : false;
}
if (!$oOwnerHash{$strOwnerType}{$strOwner})
{
$oManifest->set($strSection, $strName, $strOwnerType, $oOwnerTypeHash{$strOwnerType});
}
}
# Else set user/group to current user/group
else
{
if ($strOwner ne $oOwnerTypeHash{$strOwnerType})
{
$oOwnerHash{$strOwnerType}{$strOwner} = false;
$oManifest->set($strSection, $strName, $strOwnerType, $oOwnerTypeHash{$strOwnerType});
}
}
}
}
}
}
# Output warning for any invalid owners
if (defined($oOwnerHash{$strOwnerType}))
{
foreach my $strOwner (sort (keys $oOwnerHash{$strOwnerType}))
{
if (!$oOwnerHash{$strOwnerType}{$strOwner})
{
&log(WARN, "${strOwnerType} ${strOwner} " . ($< == 0 ? "does not exist" : "cannot be set") .
", changed to $oOwnerTypeHash{$strOwnerType}");
}
}
}
}
}
####################################################################################################################################
# MANIFEST_LOAD
#
# Loads the backup manifest and performs requested tablespace remaps.
####################################################################################################################################
sub manifest_load
{
my $self = shift; # Class hash
if ($self->{oFile}->exists(PATH_BACKUP_CLUSTER, $self->{strBackupPath}))
{
# Copy the backup manifest to the db cluster path
$self->{oFile}->copy(PATH_BACKUP_CLUSTER, $self->{strBackupPath} . '/' . FILE_MANIFEST,
PATH_DB_ABSOLUTE, $self->{strDbClusterPath} . '/' . FILE_MANIFEST);
# Load the manifest into a hash
my $oManifest = new BackRest::Manifest($self->{oFile}->path_get(PATH_DB_ABSOLUTE,
$self->{strDbClusterPath} . '/' . FILE_MANIFEST));
# Remove the manifest now that it is in memory
$self->{oFile}->remove(PATH_DB_ABSOLUTE, $self->{strDbClusterPath} . '/' . FILE_MANIFEST);
# If backup is latest then set it equal to backup label, else verify that requested backup and label match
my $strBackupLabel = $oManifest->get(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_LABEL);
if ($self->{strBackupPath} eq OPTION_DEFAULT_RESTORE_SET)
{
$self->{strBackupPath} = $strBackupLabel;
}
elsif ($self->{strBackupPath} ne $strBackupLabel)
{
confess &log(ASSERT, "request backup $self->{strBackupPath} and label ${strBackupLabel} do not match " .
' - this indicates some sort of corruption (at the very least paths have been renamed.');
}
if ($self->{strDbClusterPath} ne $oManifest->get(MANIFEST_SECTION_BACKUP_PATH, MANIFEST_KEY_BASE))
{
&log(INFO, 'base path remapped to ' . $self->{strDbClusterPath});
$oManifest->set(MANIFEST_SECTION_BACKUP_PATH, MANIFEST_KEY_BASE, undef, $self->{strDbClusterPath});
}
# If tablespaces have been remapped, update the manifest
if (defined($self->{oRemapRef}))
{
foreach my $strPathKey (sort(keys $self->{oRemapRef}))
{
my $strRemapPath = ${$self->{oRemapRef}}{$strPathKey};
# Make sure that the tablespace exists in the manifest
if (!$oManifest->test(MANIFEST_SECTION_BACKUP_TABLESPACE, $strPathKey))
{
confess &log(ERROR, "cannot remap invalid tablespace ${strPathKey} to ${strRemapPath}");
}
# Remap the tablespace in the manifest
&log(INFO, "remapping tablespace ${strPathKey} to ${strRemapPath}");
my $strTablespaceLink = $oManifest->get(MANIFEST_SECTION_BACKUP_TABLESPACE, $strPathKey, MANIFEST_SUBKEY_LINK);
$oManifest->set(MANIFEST_SECTION_BACKUP_PATH, "tablespace:${strPathKey}", undef, $strRemapPath);
$oManifest->set(MANIFEST_SECTION_BACKUP_TABLESPACE, $strPathKey, MANIFEST_SUBKEY_PATH, $strRemapPath);
$oManifest->set('base:link', "pg_tblspc/${strTablespaceLink}", MANIFEST_SUBKEY_DESTINATION, $strRemapPath);
}
}
$self->manifest_ownership_check($oManifest);
return $oManifest;
}
confess &log(ERROR, 'backup ' . $self->{strBackupPath} . ' does not exist');
}
####################################################################################################################################
# CLEAN
#
# Checks that the restore paths are empty, or if --force was used then it cleans files/paths/links from the restore directories that
# are not present in the manifest.
####################################################################################################################################
sub clean
{
my $self = shift; # Class hash
my $oManifest = shift; # Backup manifest
# Track if files/links/paths where removed
my %oRemoveHash = (&MANIFEST_FILE => 0, &MANIFEST_PATH => 0, &MANIFEST_LINK => 0);
# Check each restore directory in the manifest and make sure that it exists and is empty.
# The --force option can be used to override the empty requirement.
foreach my $strPathKey ($oManifest->keys(MANIFEST_SECTION_BACKUP_PATH))
{
my $strPath = $oManifest->get(MANIFEST_SECTION_BACKUP_PATH, $strPathKey);
&log(INFO, "checking/cleaning db path ${strPath}");
if (!$self->{oFile}->exists(PATH_DB_ABSOLUTE, $strPath))
{
confess &log(ERROR, "required db path '${strPath}' does not exist");
}
# Load path manifest so it can be compared to deleted files/paths/links that are not in the backup
my %oPathManifest;
$self->{oFile}->manifest(PATH_DB_ABSOLUTE, $strPath, \%oPathManifest);
foreach my $strName (sort {$b cmp $a} (keys $oPathManifest{name}))
{
# Skip the root path
if ($strName eq '.')
{
next;
}
# If force was not specified then error if any file is found
if (!$self->{bForce} && !$self->{bDelta})
{
confess &log(ERROR, "cannot restore to path '${strPath}' that contains files - " .
'try using --delta if this is what you intended', ERROR_RESTORE_PATH_NOT_EMPTY);
}
my $strFile = "${strPath}/${strName}";
# Determine the file/path/link type
my $strType = MANIFEST_FILE;
if ($oPathManifest{name}{$strName}{type} eq 'd')
{
$strType = MANIFEST_PATH;
}
elsif ($oPathManifest{name}{$strName}{type} eq 'l')
{
$strType = MANIFEST_LINK;
}
# Build the section name
my $strSection = "${strPathKey}:${strType}";
# Check to see if the file/path/link exists in the manifest
if ($oManifest->test($strSection, $strName))
{
my $strUser = $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_USER);
my $strGroup = $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_GROUP);
# If ownership does not match, fix it
if ($strUser ne $oPathManifest{name}{$strName}{user} ||
$strGroup ne $oPathManifest{name}{$strName}{group})
{
&log(DEBUG, "setting ${strFile} ownership to ${strUser}:${strGroup}");
$self->{oFile}->owner(PATH_DB_ABSOLUTE, $strFile, $strUser, $strGroup);
}
# If a link does not have the same destination, then delete it (it will be recreated later)
if ($strType eq MANIFEST_LINK)
{
if ($oManifest->get($strSection, $strName, MANIFEST_SUBKEY_DESTINATION) ne
$oPathManifest{name}{$strName}{link_destination})
{
&log(DEBUG, "removing link ${strFile} - destination changed");
unlink($strFile) or confess &log(ERROR, "unable to delete file ${strFile}");
}
}
# Else if file/path mode does not match, fix it
else
{
my $strMode = $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODE);
if ($strMode ne $oPathManifest{name}{$strName}{mode})
{
&log(DEBUG, "setting ${strFile} mode to ${strMode}");
chmod(oct($strMode), $strFile)
or confess "unable to set mode ${strMode} for ${strFile}";
}
}
}
# If it does not then remove it
else
{
# If a path then remove it, all the files should have already been deleted since we are going in reverse order
if ($strType eq MANIFEST_PATH)
{
&log(DEBUG, "removing path ${strFile}");
rmdir($strFile) or confess &log(ERROR, "unable to delete path ${strFile}, is it empty?");
}
# Else delete a file/link
else
{
# Delete only if this is not the recovery.conf file. This is in case the user wants the recovery.conf file
# preserved. It will be written/deleted/preserved as needed in recovery().
if (!($strName eq FILE_RECOVERY_CONF && $strType eq MANIFEST_FILE))
{
&log(DEBUG, "removing file/link ${strFile}");
unlink($strFile) or confess &log(ERROR, "unable to delete file/link ${strFile}");
}
}
$oRemoveHash{$strType} += 1;
}
}
}
# Loop through types (path, link, file) and emit info if any were removed
foreach my $strFileType (sort (keys %oRemoveHash))
{
if ($oRemoveHash{$strFileType} > 0)
{
&log(INFO, "$oRemoveHash{$strFileType} ${strFileType}(s) removed during cleanup");
}
}
}
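`clean` iterates the path manifest with `sort {$b cmp $a}`, i.e. in reverse lexical order, so `dir/file` is visited before `dir` and every directory is empty by the time `rmdir` reaches it. The trick in a one-line Python sketch:

```python
def deletion_order(names):
    """Reverse lexical sort: children sort after (and are deleted
    before) their parent directories."""
    return sorted(names, reverse=True)
```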
####################################################################################################################################
# BUILD
#
# Creates missing paths and links and corrects ownership/mode on existing paths and links.
####################################################################################################################################
sub build
{
my $self = shift; # Class hash
my $oManifest = shift; # Backup manifest
# Build paths/links in each restore path
foreach my $strSectionPathKey ($oManifest->keys(MANIFEST_SECTION_BACKUP_PATH))
{
my $strSectionPath = $oManifest->get(MANIFEST_SECTION_BACKUP_PATH, $strSectionPathKey);
# Create all paths in the manifest that do not already exist
my $strSection = "${strSectionPathKey}:path";
foreach my $strName ($oManifest->keys($strSection))
{
# Skip the root path
if ($strName eq '.')
{
next;
}
# Create the Path
my $strPath = "${strSectionPath}/${strName}";
if (!$self->{oFile}->exists(PATH_DB_ABSOLUTE, $strPath))
{
$self->{oFile}->path_create(PATH_DB_ABSOLUTE, $strPath,
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODE));
}
}
# Create all links in the manifest that do not already exist
$strSection = "${strSectionPathKey}:link";
if ($oManifest->test($strSection))
{
foreach my $strName ($oManifest->keys($strSection))
{
my $strLink = "${strSectionPath}/${strName}";
if (!$self->{oFile}->exists(PATH_DB_ABSOLUTE, $strLink))
{
$self->{oFile}->link_create(PATH_DB_ABSOLUTE,
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_DESTINATION),
PATH_DB_ABSOLUTE, $strLink);
}
}
}
}
}
####################################################################################################################################
# RECOVERY
#
# Creates the recovery.conf file.
####################################################################################################################################
sub recovery
{
my $self = shift; # Class hash
# Create recovery.conf path/file
my $strRecoveryConf = $self->{strDbClusterPath} . '/' . FILE_RECOVERY_CONF;
# See if recovery.conf already exists
my $bRecoveryConfExists = $self->{oFile}->exists(PATH_DB_ABSOLUTE, $strRecoveryConf);
# If RECOVERY_TYPE_PRESERVE then make sure recovery.conf exists and return
if ($self->{strType} eq RECOVERY_TYPE_PRESERVE)
{
if (!$bRecoveryConfExists)
{
confess &log(ERROR, "recovery type is $self->{strType} but recovery file does not exist at ${strRecoveryConf}");
}
return;
}
# In all other cases the old recovery.conf should be removed if it exists
if ($bRecoveryConfExists)
{
$self->{oFile}->remove(PATH_DB_ABSOLUTE, $strRecoveryConf);
}
# If RECOVERY_TYPE_NONE then return
if ($self->{strType} eq RECOVERY_TYPE_NONE)
{
return;
}
# Write the recovery options from pg_backrest.conf
my $strRecovery = '';
my $bRestoreCommandOverride = false;
if (defined($self->{oRecoveryRef}))
{
foreach my $strKey (sort(keys $self->{oRecoveryRef}))
{
my $strPgKey = $strKey;
$strPgKey =~ s/\-/\_/g;
if ($strPgKey eq 'restore_command')
{
$bRestoreCommandOverride = true;
}
$strRecovery .= "$strPgKey = '${$self->{oRecoveryRef}}{$strKey}'\n";
}
}
# Write the restore command
if (!$bRestoreCommandOverride)
{
$strRecovery .= "restore_command = '$self->{strBackRestBin} --stanza=$self->{strStanza}" .
(defined($self->{strConfigFile}) ? " --config=$self->{strConfigFile}" : '') .
" archive-get %f \"%p\"'\n";
}
# If RECOVERY_TYPE_DEFAULT do not write target options
if ($self->{strType} ne RECOVERY_TYPE_DEFAULT)
{
# Write the recovery target
$strRecovery .= "recovery_target_$self->{strType} = '$self->{strTarget}'\n";
# Write recovery_target_inclusive
if ($self->{bTargetExclusive})
{
$strRecovery .= "recovery_target_inclusive = false\n";
}
}
# Write pause_at_recovery_target
if ($self->{bTargetResume})
{
$strRecovery .= "pause_at_recovery_target = false\n";
}
# Write recovery_target_timeline
if (defined($self->{strTargetTimeline}))
{
$strRecovery .= "recovery_target_timeline = $self->{strTargetTimeline}\n";
}
# Write recovery.conf
my $hFile;
open($hFile, '>', $strRecoveryConf)
or confess &log(ERROR, "unable to open ${strRecoveryConf}: $!");
syswrite($hFile, $strRecovery)
or confess "unable to write section ${strRecoveryConf}: $!";
close($hFile)
or confess "unable to close ${strRecoveryConf}: $!";
}
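`recovery` assembles the file as plain text: user-supplied options first (with `-` mapped to `_`), then a default `restore_command` pointing back at pg_backrest unless the user overrode it. A hedged Python sketch of that core assembly (function and parameter names are illustrative; target/timeline options are omitted for brevity):

```python
def build_recovery_conf(options, stanza, backrest_bin, config_file=None):
    """Render recovery.conf lines: user options, then a default
    restore_command when none was supplied."""
    lines = []
    override = False
    for key in sorted(options):
        pg_key = key.replace('-', '_')   # config keys use '-', postgres uses '_'
        if pg_key == 'restore_command':
            override = True
        lines.append("%s = '%s'" % (pg_key, options[key]))
    if not override:
        cmd = '%s --stanza=%s' % (backrest_bin, stanza)
        if config_file:
            cmd += ' --config=%s' % config_file
        lines.append("restore_command = '%s archive-get %%f \"%%p\"'" % cmd)
    return '\n'.join(lines) + '\n'
```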
####################################################################################################################################
# RESTORE
#
# Takes a backup and restores it back to the original or a remapped location.
####################################################################################################################################
sub restore
{
my $self = shift; # Class hash
# Make sure that Postgres is not running
if ($self->{oFile}->exists(PATH_DB_ABSOLUTE, $self->{strDbClusterPath} . '/' . FILE_POSTMASTER_PID))
{
confess &log(ERROR, 'unable to restore while Postgres is running', ERROR_POSTMASTER_RUNNING);
}
# Log the backup set to restore
&log(INFO, "Restoring backup set " . $self->{strBackupPath});
# Make sure the backup path is valid and load the manifest
my $oManifest = $self->manifest_load();
# Clean the restore paths
$self->clean($oManifest);
# Build paths/links in the restore paths
$self->build($oManifest);
# Create thread queues
my @oyRestoreQueue;
foreach my $strPathKey ($oManifest->keys(MANIFEST_SECTION_BACKUP_PATH))
{
my $strSection = "${strPathKey}:file";
if ($oManifest->test($strSection))
{
$oyRestoreQueue[@oyRestoreQueue] = Thread::Queue->new();
foreach my $strName ($oManifest->keys($strSection))
{
$oyRestoreQueue[@oyRestoreQueue - 1]->enqueue("${strPathKey}|${strName}");
}
}
}
# If multi-threaded then create threads to copy files
if ($self->{iThreadTotal} > 1)
{
# Create threads to process the thread queues
my $oThreadGroup = thread_group_create();
for (my $iThreadIdx = 0; $iThreadIdx < $self->{iThreadTotal}; $iThreadIdx++)
{
&log(DEBUG, "starting restore thread ${iThreadIdx}");
thread_group_add($oThreadGroup, threads->create(\&restore_thread, $self, true,
$iThreadIdx, \@oyRestoreQueue, $oManifest));
}
# Complete thread queues
thread_group_complete($oThreadGroup);
}
# Else copy in the main process
else
{
&log(DEBUG, "starting restore in main process");
$self->restore_thread(false, 0, \@oyRestoreQueue, $oManifest);
}
# Create recovery.conf file
$self->recovery();
}
####################################################################################################################################
# RESTORE_THREAD
#
# Worker threads for the restore process.
####################################################################################################################################
sub restore_thread
{
my $self = shift; # Class hash
my $bMulti = shift; # Is this thread one of many?
my $iThreadIdx = shift; # Defines the index of this thread
my $oyRestoreQueueRef = shift; # Restore queues
my $oManifest = shift; # Backup manifest
my $iDirection = $iThreadIdx % 2 == 0 ? 1 : -1; # Even threads walk the queues forward, odd threads walk backward
my $oFileThread; # Thread local file object
# If multi-threaded, then clone the file object
if ($bMulti)
{
$oFileThread = $self->{oFile}->clone($iThreadIdx);
}
# Else use the master file object
else
{
$oFileThread = $self->{oFile};
}
# Initialize the starting and current queue index based on this thread's position among the total number of threads
my $iQueueStartIdx = int((@{$oyRestoreQueueRef} / $self->{iThreadTotal}) * $iThreadIdx);
my $iQueueIdx = $iQueueStartIdx;
# Time when the backup copying began - used for size/timestamp deltas
my $lCopyTimeBegin = $oManifest->epoch(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_TIMESTAMP_COPY_START);
# Set source compression
my $bSourceCompression = $oManifest->get(MANIFEST_SECTION_BACKUP_OPTION, MANIFEST_KEY_COMPRESS) eq 'y' ? true : false;
# When a KILL signal is received, immediately abort
$SIG{'KILL'} = sub {threads->exit();};
# Get the current user and group to compare with the user/group stored in the manifest
my $strCurrentUser = getpwuid($<);
my $strCurrentGroup = getgrgid($();
# Loop through all the queues to restore files (exit when the starting queue is reached)
do
{
while (my $strMessage = ${$oyRestoreQueueRef}[$iQueueIdx]->dequeue_nb())
{
my $strSourcePath = (split(/\|/, $strMessage))[0]; # Source path from backup
my $strSection = "${strSourcePath}:file"; # Backup section with file info
my $strDestinationPath = $oManifest->get(MANIFEST_SECTION_BACKUP_PATH, # Destination path stored in manifest
$strSourcePath);
$strSourcePath =~ s/\:/\//g; # Replace : with / in source path
my $strName = (split(/\|/, $strMessage))[1]; # Name of file to be restored
# If the file is a reference to a previous backup and hardlinks are off, then fetch it from that backup
my $strReference = $oManifest->test(MANIFEST_SECTION_BACKUP_OPTION, MANIFEST_KEY_HARDLINK, undef, 'y') ? undef :
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_REFERENCE, false);
# Generate destination file name
my $strDestinationFile = $oFileThread->path_get(PATH_DB_ABSOLUTE, "${strDestinationPath}/${strName}");
if ($oFileThread->exists(PATH_DB_ABSOLUTE, $strDestinationFile))
{
# Perform delta if requested
if ($self->{bDelta})
{
# If force then use size/timestamp delta
if ($self->{bForce})
{
my $oStat = lstat($strDestinationFile);
# Make sure that timestamp/size are equal and that timestamp is before the copy start time of the backup
if (defined($oStat) &&
$oStat->size == $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_SIZE) &&
$oStat->mtime == $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME) &&
$oStat->mtime < $lCopyTimeBegin)
{
&log(DEBUG, "${strDestinationFile} exists and matches size " . $oStat->size .
" and modification time " . $oStat->mtime);
next;
}
}
else
{
my ($strChecksum, $lSize) = $oFileThread->hash_size(PATH_DB_ABSOLUTE, $strDestinationFile);
if (($lSize == $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_SIZE) && $lSize == 0) ||
($strChecksum eq $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM)))
{
&log(DEBUG, "${strDestinationFile} exists and is zero size or matches backup checksum");
# Even if the hash is the same, set the time back to the backup time. This helps with unit testing, but also
# presents a pristine version of the database.
utime($oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME),
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME),
$strDestinationFile)
or confess &log(ERROR, "unable to set time for ${strDestinationFile}");
next;
}
}
}
$oFileThread->remove(PATH_DB_ABSOLUTE, $strDestinationFile);
}
# Set user and group if running as root (otherwise current user and group will be used for restore)
# Copy the file from the backup to the database
my ($bCopyResult, $strCopyChecksum, $lCopySize) =
$oFileThread->copy(PATH_BACKUP_CLUSTER, (defined($strReference) ? $strReference : $self->{strBackupPath}) .
"/${strSourcePath}/${strName}" .
($bSourceCompression ? '.' . $oFileThread->{strCompressExtension} : ''),
PATH_DB_ABSOLUTE, $strDestinationFile,
$bSourceCompression, # Source is compressed based on backup settings
undef, undef,
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME),
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODE),
undef,
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_USER),
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_GROUP));
if ($lCopySize != 0 && $strCopyChecksum ne $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM))
{
confess &log(ERROR, "error restoring ${strDestinationFile}: actual checksum ${strCopyChecksum} " .
"does not match expected checksum " .
$oManifest->get($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM), ERROR_CHECKSUM);
}
}
# Even number threads move up when they have finished a queue, odd numbered threads move down
$iQueueIdx += $iDirection;
# Reset the queue index when it goes over or under the number of queues
if ($iQueueIdx < 0)
{
$iQueueIdx = @{$oyRestoreQueueRef} - 1;
}
elsif ($iQueueIdx >= @{$oyRestoreQueueRef})
{
$iQueueIdx = 0;
}
&log(TRACE, "thread waiting for new file from queue: queue ${iQueueIdx}, start queue ${iQueueStartIdx}");
}
while ($iQueueIdx != $iQueueStartIdx);
&log(DEBUG, "thread ${iThreadIdx} exiting");
}
1;
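The even/odd queue traversal that restore_thread implements above (start at a thread-specific queue, even threads step forward, odd threads step backward, wrap until back at the start) can be sketched outside Perl. A minimal Python model; the function name and the thread/queue counts below are illustrative, not taken from pgBackRest:

```python
# Model of the restore-thread queue traversal: each thread starts at its
# own slice of the queues; even-indexed threads walk forward and
# odd-indexed threads walk backward, wrapping around until the starting
# queue is reached again.

def traversal_order(thread_idx, thread_total, queue_total):
    direction = 1 if thread_idx % 2 == 0 else -1
    start = int((queue_total / thread_total) * thread_idx)
    order = [start]
    idx = start
    while True:
        idx += direction
        if idx < 0:
            idx = queue_total - 1
        elif idx >= queue_total:
            idx = 0
        if idx == start:
            break
        order.append(idx)
    return order

# Each thread visits every queue exactly once, so threads that drain their
# own queue early steal work from the others instead of going idle.
print(traversal_order(0, 2, 4))  # [0, 1, 2, 3]
print(traversal_order(1, 2, 4))  # [2, 1, 0, 3]
```

Sending adjacent threads in opposite directions reduces contention: two threads that start near each other immediately move apart rather than dequeuing from the same queue.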

lib/BackRest/ThreadGroup.pm (new file, 165 lines)

@ -0,0 +1,165 @@
####################################################################################################################################
# THREADGROUP MODULE
####################################################################################################################################
package BackRest::ThreadGroup;
use threads;
use strict;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
####################################################################################################################################
# MODULE EXPORTS
####################################################################################################################################
use Exporter qw(import);
our @EXPORT = qw(thread_group_create thread_group_add thread_group_complete);
####################################################################################################################################
# CONSTRUCTOR
####################################################################################################################################
sub thread_group_create
{
# Create the class hash
my $self = {};
# Initialize variables
$self->{iThreadTotal} = 0;
return $self;
}
####################################################################################################################################
# ADD
#
# Add a thread to the group. Once a thread is added, it can be tracked as part of the group.
####################################################################################################################################
sub thread_group_add
{
my $self = shift;
my $oThread = shift;
$self->{oyThread}[$self->{iThreadTotal}] = $oThread;
$self->{iThreadTotal}++;
return $self->{iThreadTotal} - 1;
}
####################################################################################################################################
# COMPLETE
#
# Wait for threads to complete.
####################################################################################################################################
sub thread_group_complete
{
my $self = shift;
my $iTimeout = shift;
my $bConfessOnError = shift;
# Set defaults
$bConfessOnError = defined($bConfessOnError) ? $bConfessOnError : true;
# Wait for all threads to complete and handle errors
my $iThreadComplete = 0;
my $lTimeBegin = time();
# Rejoin the threads
while ($iThreadComplete < $self->{iThreadTotal})
{
hsleep(.1);
# If a timeout has been defined, make sure we have not been running longer than that
if (defined($iTimeout))
{
if (time() - $lTimeBegin >= $iTimeout)
{
confess &log(ERROR, "threads have been running more than ${iTimeout} seconds, exiting...");
#backup_thread_kill();
#confess &log(WARN, "all threads have exited, aborting...");
}
}
for (my $iThreadIdx = 0; $iThreadIdx < $self->{iThreadTotal}; $iThreadIdx++)
{
if (defined($self->{oyThread}[$iThreadIdx]))
{
if (defined($self->{oyThread}[$iThreadIdx]->error()))
{
thread_group_destroy($self);
if ($bConfessOnError)
{
confess &log(ERROR, 'error in thread ' . (${iThreadIdx} + 1) . ': check log for details');
}
else
{
return false;
}
}
if ($self->{oyThread}[$iThreadIdx]->is_joinable())
{
&log(DEBUG, "thread ${iThreadIdx} exited");
$self->{oyThread}[$iThreadIdx]->join();
&log(TRACE, "thread ${iThreadIdx} object undef");
undef($self->{oyThread}[$iThreadIdx]);
$iThreadComplete++;
}
}
}
}
&log(DEBUG, 'all threads exited');
return true;
}
####################################################################################################################################
# DESTROY
#
# Kill any threads that are still running and join the rest.
####################################################################################################################################
sub thread_group_destroy
{
my $self = shift;
# Total number of threads killed
my $iTotal = 0;
for (my $iThreadIdx = 0; $iThreadIdx < $self->{iThreadTotal}; $iThreadIdx++)
{
if (defined($self->{oyThread}[$iThreadIdx]))
{
if ($self->{oyThread}[$iThreadIdx]->is_running())
{
$self->{oyThread}[$iThreadIdx]->kill('KILL')->join();
}
elsif ($self->{oyThread}[$iThreadIdx]->is_joinable())
{
$self->{oyThread}[$iThreadIdx]->join();
}
undef($self->{oyThread}[$iThreadIdx]);
$iTotal++;
}
}
return($iTotal);
}
####################################################################################################################################
# DESTRUCTOR
####################################################################################################################################
# sub thread_group_destroy
# {
# my $self = shift;
#
# $self->kill();
# }
1;
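thread_group_complete above polls the threads with a short sleep, enforces an optional timeout, and joins each thread as it becomes joinable. A minimal Python sketch of that polling pattern, with a generic "is this worker done?" callback standing in for Perl ithreads (names and defaults are illustrative):

```python
import time

def wait_all(workers, timeout=None, poll=0.1):
    # Poll a set of completion callables until all report done, failing
    # if an optional overall timeout elapses first.
    begin = time.time()
    pending = set(workers)
    while pending:
        if timeout is not None and time.time() - begin >= timeout:
            raise TimeoutError(f"workers still running after {timeout} seconds")
        for worker in list(pending):
            if worker():  # returns True once the worker is joinable
                pending.discard(worker)
        time.sleep(poll)
    return True
```

As in the Perl, the loop sleeps between scans rather than blocking in a join, so a single pass can notice a failed thread, a finished thread, or an expired timeout.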


@ -5,11 +5,13 @@ package BackRest::Utility;
use threads;
use strict;
use warnings;
use Carp;
use warnings FATAL => qw(all);
use Carp qw(confess longmess);
use Fcntl qw(:DEFAULT :flock);
use File::Path qw(remove_tree);
use Time::HiRes qw(gettimeofday usleep);
use POSIX qw(ceil);
use File::Basename;
use JSON;
@ -20,11 +22,11 @@ use Exporter qw(import);
our @EXPORT = qw(version_get
data_hash_build trim common_prefix wait_for_file file_size_format execute
log log_file_set log_level_set test_set test_check
lock_file_create lock_file_remove
config_save config_load timestamp_string_get timestamp_file_string_get
log log_file_set log_level_set test_set test_get test_check
lock_file_create lock_file_remove hsleep wait_remainder
ini_save ini_load timestamp_string_get timestamp_file_string_get
TRACE DEBUG ERROR ASSERT WARN INFO OFF true false
TEST TEST_ENCLOSE TEST_MANIFEST_BUILD);
TEST TEST_ENCLOSE TEST_MANIFEST_BUILD TEST_BACKUP_RESUME TEST_BACKUP_NORESUME FORMAT);
# Global constants
use constant
@ -60,19 +62,29 @@ $oLogLevelRank{ERROR}{rank} = 2;
$oLogLevelRank{ASSERT}{rank} = 1;
$oLogLevelRank{OFF}{rank} = 0;
####################################################################################################################################
# FORMAT Constant
#
# Identifies the format of the manifest and file structure. The format is used to determine compatibility between versions.
####################################################################################################################################
use constant FORMAT => 3;
####################################################################################################################################
# TEST Constants and Variables
####################################################################################################################################
use constant
{
TEST => 'TEST',
TEST_ENCLOSE => 'PgBaCkReStTeSt',
TEST_MANIFEST_BUILD => 'MANIFEST_BUILD'
TEST => 'TEST',
TEST_ENCLOSE => 'PgBaCkReStTeSt',
TEST_MANIFEST_BUILD => 'MANIFEST_BUILD',
TEST_BACKUP_RESUME => 'BACKUP_RESUME',
TEST_BACKUP_NORESUME => 'BACKUP_NORESUME',
};
# Test global variables
my $bTest = false;
my $iTestDelay;
my $fTestDelay;
####################################################################################################################################
# VERSION_GET
@ -155,6 +167,21 @@ sub lock_file_remove
}
}
####################################################################################################################################
# WAIT_REMAINDER - Wait the remainder of the current second
####################################################################################################################################
sub wait_remainder
{
my $lTimeBegin = gettimeofday();
my $lSleepMs = ceil(((int($lTimeBegin) + 1) - $lTimeBegin) * 1000);
usleep($lSleepMs * 1000);
&log(TRACE, "WAIT_REMAINDER: slept ${lSleepMs}ms: begin ${lTimeBegin}, end " . gettimeofday());
return int($lTimeBegin);
}
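The arithmetic in wait_remainder above (sleep until the next whole second, then report the second the call began in) translates directly. A Python sketch; the injectable `now` and `sleep` parameters are for illustration and testing only, not part of the original interface:

```python
import math
import time

def wait_remainder(now=None, sleep=time.sleep):
    # Sleep the remainder of the current second and return the (integer)
    # second in which the wait began.
    begin = time.time() if now is None else now
    sleep_ms = math.ceil(((int(begin) + 1) - begin) * 1000)  # ms until next whole second
    sleep(sleep_ms / 1000.0)
    return int(begin)

# Starting at t=41.25 sleeps 750ms and reports second 41; starting exactly
# on a second boundary sleeps a full second, matching the Perl behavior.
```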
####################################################################################################################################
# DATA_HASH_BUILD - Hash a delimited file with header
####################################################################################################################################
@ -209,6 +236,16 @@ sub trim
return $strBuffer;
}
####################################################################################################################################
# HSLEEP - wrapper for usleep that takes seconds in fractions and returns the time slept in microseconds
####################################################################################################################################
sub hsleep
{
my $fSecond = shift;
return usleep($fSecond * 1000000);
}
####################################################################################################################################
# WAIT_FOR_FILE
####################################################################################################################################
@ -223,18 +260,18 @@ sub wait_for_file
while ($lTime > time() - $iSeconds)
{
opendir $hDir, $strDir
or confess &log(ERROR, "Could not open path ${strDir}: $!\n");
my @stryFile = grep(/$strRegEx/i, readdir $hDir);
closedir $hDir;
if (scalar @stryFile == 1)
if (opendir($hDir, $strDir))
{
return;
my @stryFile = grep(/$strRegEx/i, readdir $hDir);
closedir $hDir;
if (scalar @stryFile == 1)
{
return;
}
}
sleep(1);
hsleep(.1);
}
confess &log(ERROR, "could not find $strDir/$strRegEx after ${iSeconds} second(s)");
@ -295,13 +332,19 @@ sub file_size_format
sub timestamp_string_get
{
my $strFormat = shift;
my $lTime = shift;
if (!defined($strFormat))
{
$strFormat = '%4d-%02d-%02d %02d:%02d:%02d';
}
my ($iSecond, $iMinute, $iHour, $iMonthDay, $iMonth, $iYear, $iWeekDay, $iYearDay, $bIsDst) = localtime(time);
if (!defined($lTime))
{
$lTime = time();
}
my ($iSecond, $iMinute, $iHour, $iMonthDay, $iMonth, $iYear, $iWeekDay, $iYearDay, $bIsDst) = localtime($lTime);
return sprintf($strFormat, $iYear + 1900, $iMonth + 1, $iMonthDay, $iHour, $iMinute, $iSecond);
}
@ -350,25 +393,33 @@ sub log_file_set
sub test_set
{
my $bTestParam = shift;
my $iTestDelayParam = shift;
my $fTestDelayParam = shift;
# Set defaults
$bTest = defined($bTestParam) ? $bTestParam : false;
$iTestDelay = defined($bTestParam) ? $iTestDelayParam : $iTestDelay;
$fTestDelay = defined($bTestParam) ? $fTestDelayParam : $fTestDelay;
# Make sure that a delay is specified in test mode
if ($bTest && !defined($iTestDelay))
if ($bTest && !defined($fTestDelay))
{
confess &log(ASSERT, 'fTestDelay must be provided when bTest is true');
}
# Test delay should be between 1 and 600 seconds
if (!($iTestDelay >= 1 && $iTestDelay <= 600))
if (!($fTestDelay >= 0 && $fTestDelay <= 600))
{
confess &log(ERROR, 'test-delay must be between 0 and 600 seconds');
}
}
####################################################################################################################################
# TEST_GET - are we in test mode?
####################################################################################################################################
sub test_get
{
return $bTest;
}
####################################################################################################################################
# LOG_LEVEL_SET - set the log level for file and console
####################################################################################################################################
@ -379,22 +430,22 @@ sub log_level_set
if (defined($strLevelFileParam))
{
if (!defined($oLogLevelRank{"${strLevelFileParam}"}{rank}))
if (!defined($oLogLevelRank{uc($strLevelFileParam)}{rank}))
{
confess &log(ERROR, "file log level ${strLevelFileParam} does not exist");
}
$strLogLevelFile = $strLevelFileParam;
$strLogLevelFile = uc($strLevelFileParam);
}
if (defined($strLevelConsoleParam))
{
if (!defined($oLogLevelRank{"${strLevelConsoleParam}"}{rank}))
if (!defined($oLogLevelRank{uc($strLevelConsoleParam)}{rank}))
{
confess &log(ERROR, "console log level ${strLevelConsoleParam} does not exist");
}
$strLogLevelConsole = $strLevelConsoleParam;
$strLogLevelConsole = uc($strLevelConsoleParam);
}
}
@ -444,6 +495,8 @@ sub log
$strMessageFormat = '(undefined)';
}
$strMessageFormat = (defined($iCode) ? "[${iCode}] " : '') . $strMessageFormat;
# Indent subsequent lines of the message if it has more than one line - makes the log more readable
if ($strLevel eq TRACE || $strLevel eq TEST)
{
@ -464,8 +517,7 @@ sub log
my ($sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst) = localtime(time);
$strMessageFormat = timestamp_string_get() . sprintf(' T%02d', threads->tid()) .
(' ' x (7 - length($strLevel))) . "${strLevel}: ${strMessageFormat}" .
(defined($iCode) ? " (code ${iCode})" : '') . "\n";
(' ' x (7 - length($strLevel))) . "${strLevel}: ${strMessageFormat}\n";
# Output to console depending on log level and test flag
if ($iLogLevelRank <= $oLogLevelRank{"${strLogLevelConsole}"}{rank} ||
@ -479,7 +531,11 @@ sub log
if ($bTest && $strLevel eq TEST)
{
*STDOUT->flush();
sleep($iTestDelay);
if ($fTestDelay > 0)
{
hsleep($fTestDelay);
}
}
}
@ -491,6 +547,14 @@ sub log
if (!$bSuppressLog)
{
print $hLogFile $strMessageFormat;
if ($strLevel eq ERROR || $strLevel eq ASSERT)
{
my $strStackTrace = longmess() . "\n";
$strStackTrace =~ s/\n/\n /g;
print $hLogFile $strStackTrace;
}
}
}
}
@ -498,7 +562,7 @@ sub log
# Throw a typed exception if code is defined
if (defined($iCode))
{
return BackRest::Exception->new(iCode => $iCode, strMessage => $strMessage);
return new BackRest::Exception($iCode, $strMessage);
}
# Return the message test so it can be used in a confess
@ -506,16 +570,16 @@ sub log
}
####################################################################################################################################
# CONFIG_LOAD
# INI_LOAD
#
# Load configuration file from standard INI format to a hash.
# Load file from standard INI format to a hash.
####################################################################################################################################
sub config_load
sub ini_load
{
my $strFile = shift; # Full path to config file to load from
my $oConfig = shift; # Reference to the hash where config data will be stored
my $strFile = shift; # Full path to ini file to load from
my $oConfig = shift; # Reference to the hash where ini data will be stored
# Open the config file for reading
# Open the ini file for reading
my $hFile;
my $strSection;
@ -562,19 +626,21 @@ sub config_load
}
close($hFile);
return($oConfig);
}
####################################################################################################################################
# CONFIG_SAVE
# INI_SAVE
#
# Save configuration file from a hash to standard INI format.
# Save from a hash to standard INI format.
####################################################################################################################################
sub config_save
sub ini_save
{
my $strFile = shift; # Full path to config file to save to
my $oConfig = shift; # Reference to the hash where config data is stored
my $strFile = shift; # Full path to ini file to save to
my $oConfig = shift; # Reference to the hash where ini data is stored
# Open the config file for writing
# Open the ini file for writing
my $hFile;
my $bFirst = true;
@ -600,7 +666,7 @@ sub config_save
{
if (ref($strValue) eq "HASH")
{
syswrite($hFile, "${strKey}=" . encode_json($strValue) . "\n")
syswrite($hFile, "${strKey}=" . to_json($strValue, {canonical => true}) . "\n")
or confess "unable to write key ${strKey}: $!";
}
else

test/data/test.archive1.bin (binary, new file; contents not shown)

File diff suppressed because it is too large


@ -8,28 +8,36 @@ package BackRestTest::CommonTest;
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use Carp;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename;
use File::Path qw(remove_tree);
use Cwd 'abs_path';
use IPC::Open3;
use POSIX ':sys_wait_h';
use IO::Select;
use File::Copy qw(move);
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::Remote;
use BackRest::File;
use BackRest::Manifest;
use Exporter qw(import);
our @EXPORT = qw(BackRestTestCommon_Setup BackRestTestCommon_ExecuteBegin BackRestTestCommon_ExecuteEnd
BackRestTestCommon_Execute BackRestTestCommon_ExecuteBackRest
BackRestTestCommon_ConfigCreate BackRestTestCommon_Run BackRestTestCommon_Cleanup
BackRestTestCommon_PgSqlBinPathGet BackRestTestCommon_StanzaGet BackRestTestCommon_CommandMainGet
BackRestTestCommon_CommandRemoteGet BackRestTestCommon_HostGet BackRestTestCommon_UserGet
BackRestTestCommon_GroupGet BackRestTestCommon_UserBackRestGet BackRestTestCommon_TestPathGet
BackRestTestCommon_DataPathGet BackRestTestCommon_BackupPathGet BackRestTestCommon_ArchivePathGet
BackRestTestCommon_DbPathGet BackRestTestCommon_DbCommonPathGet BackRestTestCommon_DbPortGet);
our @EXPORT = qw(BackRestTestCommon_Create BackRestTestCommon_Drop BackRestTestCommon_Setup BackRestTestCommon_ExecuteBegin
BackRestTestCommon_ExecuteEnd BackRestTestCommon_Execute BackRestTestCommon_ExecuteBackRest
BackRestTestCommon_PathCreate BackRestTestCommon_PathMode BackRestTestCommon_PathRemove
BackRestTestCommon_FileCreate BackRestTestCommon_FileRemove BackRestTestCommon_PathCopy BackRestTestCommon_PathMove
BackRestTestCommon_ConfigCreate BackRestTestCommon_ConfigRemap BackRestTestCommon_ConfigRecovery
BackRestTestCommon_Run BackRestTestCommon_Cleanup BackRestTestCommon_PgSqlBinPathGet
BackRestTestCommon_StanzaGet BackRestTestCommon_CommandMainGet BackRestTestCommon_CommandRemoteGet
BackRestTestCommon_HostGet BackRestTestCommon_UserGet BackRestTestCommon_GroupGet
BackRestTestCommon_UserBackRestGet BackRestTestCommon_TestPathGet BackRestTestCommon_DataPathGet
BackRestTestCommon_RepoPathGet BackRestTestCommon_LocalPathGet BackRestTestCommon_DbPathGet
BackRestTestCommon_DbCommonPathGet BackRestTestCommon_ClusterStop BackRestTestCommon_DbTablespacePathGet
BackRestTestCommon_DbPortGet);
my $strPgSqlBin;
my $strCommonStanza;
@ -42,10 +50,11 @@ my $strCommonGroup;
my $strCommonUserBackRest;
my $strCommonTestPath;
my $strCommonDataPath;
my $strCommonBackupPath;
my $strCommonArchivePath;
my $strCommonRepoPath;
my $strCommonLocalPath;
my $strCommonDbPath;
my $strCommonDbCommonPath;
my $strCommonDbTablespacePath;
my $iCommonDbPort;
my $iModuleTestRun;
my $bDryRun;
@ -59,8 +68,58 @@ my $hOut;
my $pId;
my $strCommand;
####################################################################################################################################
# BackRestTestBackup_Run
# BackRestTestCommon_ClusterStop
####################################################################################################################################
sub BackRestTestCommon_ClusterStop
{
my $strPath = shift;
my $bImmediate = shift;
# Set default
$strPath = defined($strPath) ? $strPath : BackRestTestCommon_DbCommonPathGet();
$bImmediate = defined($bImmediate) ? $bImmediate : false;
# If postmaster process is running then stop the cluster
if (-e $strPath . '/postmaster.pid')
{
BackRestTestCommon_Execute(BackRestTestCommon_PgSqlBinPathGet() . "/pg_ctl stop -D ${strPath} -w -s -m " .
($bImmediate ? 'immediate' : 'fast'));
}
}
####################################################################################################################################
# BackRestTestCommon_Drop
####################################################################################################################################
sub BackRestTestCommon_Drop
{
# Drop the cluster if it exists
BackRestTestCommon_ClusterStop(BackRestTestCommon_DbCommonPathGet(), true);
# Remove the backrest private directory
while (-e BackRestTestCommon_RepoPathGet())
{
BackRestTestCommon_PathRemove(BackRestTestCommon_RepoPathGet(), true, true);
BackRestTestCommon_PathRemove(BackRestTestCommon_RepoPathGet(), false, true);
hsleep(.1);
}
# Remove the test directory
BackRestTestCommon_PathRemove(BackRestTestCommon_TestPathGet());
}
####################################################################################################################################
# BackRestTestCommon_Create
####################################################################################################################################
sub BackRestTestCommon_Create
{
# Create the test directory
BackRestTestCommon_PathCreate(BackRestTestCommon_TestPathGet(), '0770');
}
####################################################################################################################################
# BackRestTestCommon_Run
####################################################################################################################################
sub BackRestTestCommon_Run
{
@ -83,7 +142,7 @@ sub BackRestTestCommon_Run
}
####################################################################################################################################
# BackRestTestBackup_Cleanup
# BackRestTestCommon_Cleanup
####################################################################################################################################
sub BackRestTestCommon_Cleanup
{
@ -91,7 +150,7 @@ sub BackRestTestCommon_Cleanup
}
####################################################################################################################################
# BackRestTestBackup_ExecuteBegin
# BackRestTestCommon_ExecuteBegin
####################################################################################################################################
sub BackRestTestCommon_ExecuteBegin
{
@ -122,15 +181,18 @@ sub BackRestTestCommon_ExecuteBegin
}
####################################################################################################################################
# BackRestTestBackup_ExecuteEnd
# BackRestTestCommon_ExecuteEnd
####################################################################################################################################
sub BackRestTestCommon_ExecuteEnd
{
my $strTest = shift;
my $bSuppressError = shift;
my $bShowOutput = shift;
my $iExpectedExitStatus = shift;
# Set defaults
$bSuppressError = defined($bSuppressError) ? $bSuppressError : false;
$bShowOutput = defined($bShowOutput) ? $bShowOutput : false;
# Create select objects
my $oErrorSelect = IO::Select->new();
@ -169,15 +231,34 @@ sub BackRestTestCommon_ExecuteEnd
# Check the exit status and output an error if needed
my $iExitStatus = ${^CHILD_ERROR_NATIVE} >> 8;
if ($iExitStatus != 0 && !$bSuppressError)
if (defined($iExpectedExitStatus) && $iExitStatus == $iExpectedExitStatus)
{
confess &log(ERROR, "command '${strCommand}' returned " . $iExitStatus . "\n" .
($strOutLog ne '' ? "STDOUT:\n${strOutLog}" : '') .
($strErrorLog ne '' ? "STDERR:\n${strErrorLog}" : ''));
return $iExitStatus;
}
else
if ($iExitStatus != 0 || (defined($iExpectedExitStatus) && $iExitStatus != $iExpectedExitStatus))
{
&log(DEBUG, "suppressed error was ${iExitStatus}");
if ($bSuppressError)
{
&log(DEBUG, "suppressed error was ${iExitStatus}");
}
else
{
confess &log(ERROR, "command '${strCommand}' returned " . $iExitStatus .
(defined($iExpectedExitStatus) ? ", but ${iExpectedExitStatus} was expected" : '') . "\n" .
($strOutLog ne '' ? "STDOUT:\n${strOutLog}" : '') .
($strErrorLog ne '' ? "STDERR:\n${strErrorLog}" : ''));
}
}
if ($bShowOutput)
{
print "output:\n${strOutLog}\n";
}
if (defined($strTest))
{
confess &log(ASSERT, "test point ${strTest} was not found");
}
$hError = undef;
@ -187,16 +268,159 @@ sub BackRestTestCommon_ExecuteEnd
}
####################################################################################################################################
# BackRestTestBackup_Execute
# BackRestTestCommon_Execute
####################################################################################################################################
sub BackRestTestCommon_Execute
{
my $strCommand = shift;
my $bRemote = shift;
my $bSuppressError = shift;
my $bShowOutput = shift;
my $iExpectedExitStatus = shift;
BackRestTestCommon_ExecuteBegin($strCommand, $bRemote);
return BackRestTestCommon_ExecuteEnd(undef, $bSuppressError);
return BackRestTestCommon_ExecuteEnd(undef, $bSuppressError, $bShowOutput, $iExpectedExitStatus);
}
####################################################################################################################################
# BackRestTestCommon_PathCreate
#
# Create a path and set mode.
####################################################################################################################################
sub BackRestTestCommon_PathCreate
{
my $strPath = shift;
my $strMode = shift;
# Create the path
mkdir($strPath)
or confess "unable to create ${strPath} path";
# Set the mode
chmod(oct(defined($strMode) ? $strMode : '0700'), $strPath)
or confess "unable to set mode for ${strPath}";
}
####################################################################################################################################
# BackRestTestCommon_PathMode
#
# Set mode of an existing path.
####################################################################################################################################
sub BackRestTestCommon_PathMode
{
my $strPath = shift;
my $strMode = shift;
# Set the mode
chmod(oct($strMode), $strPath)
or confess "unable to set mode ${strMode} for ${strPath}";
}
####################################################################################################################################
# BackRestTestCommon_PathRemove
#
# Remove a path and all subpaths.
####################################################################################################################################
sub BackRestTestCommon_PathRemove
{
my $strPath = shift;
my $bRemote = shift;
my $bSuppressError = shift;
BackRestTestCommon_Execute('rm -rf ' . $strPath, $bRemote, $bSuppressError);
# remove_tree($strPath, {result => \my $oError});
#
# if (@$oError)
# {
# my $strMessage = "error(s) occurred while removing ${strPath}:";
#
# for my $strFile (@$oError)
# {
# $strMessage .= "\nunable to remove: " . $strFile;
# }
#
# confess $strMessage;
# }
}
####################################################################################################################################
# BackRestTestCommon_PathCopy
#
# Copy a path.
####################################################################################################################################
sub BackRestTestCommon_PathCopy
{
my $strSourcePath = shift;
my $strDestinationPath = shift;
my $bRemote = shift;
my $bSuppressError = shift;
BackRestTestCommon_Execute("cp -rp ${strSourcePath} ${strDestinationPath}", $bRemote, $bSuppressError);
}
####################################################################################################################################
# BackRestTestCommon_PathMove
#
# Move a path (copy to the destination, then remove the source).
####################################################################################################################################
sub BackRestTestCommon_PathMove
{
my $strSourcePath = shift;
my $strDestinationPath = shift;
my $bRemote = shift;
my $bSuppressError = shift;
BackRestTestCommon_PathCopy($strSourcePath, $strDestinationPath, $bRemote, $bSuppressError);
BackRestTestCommon_PathRemove($strSourcePath, $bRemote, $bSuppressError);
}
####################################################################################################################################
# BackRestTestCommon_FileCreate
#
# Create a file specifying content, mode, and time.
####################################################################################################################################
sub BackRestTestCommon_FileCreate
{
my $strFile = shift;
my $strContent = shift;
my $lTime = shift;
my $strMode = shift;
# Open the file and write strContent to it
my $hFile;
open($hFile, '>', $strFile)
or confess "unable to open ${strFile} for writing";
syswrite($hFile, $strContent)
or confess "unable to write to ${strFile}: $!";
close($hFile);
# Set the time
if (defined($lTime))
{
utime($lTime, $lTime, $strFile)
or confess "unable to set time ${lTime} for ${strFile}";
}
# Set the mode
chmod(oct(defined($strMode) ? $strMode : '0600'), $strFile)
or confess "unable to set mode ${strMode} for ${strFile}";
}
####################################################################################################################################
# BackRestTestCommon_FileRemove
#
# Remove a file.
####################################################################################################################################
sub BackRestTestCommon_FileRemove
{
my $strFile = shift;
unlink($strFile)
or confess "unable to remove ${strFile}: $!";
}
####################################################################################################################################
@@ -230,10 +454,11 @@ sub BackRestTestCommon_Setup
}
$strCommonDataPath = "${strBasePath}/test/data";
$strCommonBackupPath = "${strCommonTestPath}/backrest";
$strCommonArchivePath = "${strCommonTestPath}/archive";
$strCommonRepoPath = "${strCommonTestPath}/backrest";
$strCommonLocalPath = "${strCommonTestPath}/local";
$strCommonDbPath = "${strCommonTestPath}/db";
$strCommonDbCommonPath = "${strCommonTestPath}/db/common";
$strCommonDbTablespacePath = "${strCommonTestPath}/db/tablespace";
$strCommonCommandMain = "${strBasePath}/bin/pg_backrest.pl";
$strCommonCommandRemote = "${strBasePath}/bin/pg_backrest_remote.pl";
@@ -245,6 +470,116 @@ sub BackRestTestCommon_Setup
$bNoCleanup = $bNoCleanupParam;
}
####################################################################################################################################
# BackRestTestCommon_ConfigRemap
####################################################################################################################################
sub BackRestTestCommon_ConfigRemap
{
my $oRemapHashRef = shift;
my $oManifestRef = shift;
my $bRemote = shift;
# Create config filename
my $strConfigFile = BackRestTestCommon_DbPathGet() . '/pg_backrest.conf';
my $strStanza = BackRestTestCommon_StanzaGet();
# Load Config file
my %oConfig;
ini_load($strConfigFile, \%oConfig);
# Load remote config file
my %oRemoteConfig;
my $strRemoteConfigFile = BackRestTestCommon_TestPathGet() . '/pg_backrest.conf.remote';
if ($bRemote)
{
BackRestTestCommon_Execute("mv " . BackRestTestCommon_RepoPathGet() . "/pg_backrest.conf ${strRemoteConfigFile}", true);
ini_load($strRemoteConfigFile, \%oRemoteConfig);
}
# Rewrite remap section
delete($oConfig{"${strStanza}:restore:tablespace-map"});
foreach my $strRemap (sort(keys $oRemapHashRef))
{
my $strRemapPath = ${$oRemapHashRef}{$strRemap};
if ($strRemap eq 'base')
{
$oConfig{$strStanza}{'db-path'} = $strRemapPath;
${$oManifestRef}{'backup:path'}{base} = $strRemapPath;
if ($bRemote)
{
$oRemoteConfig{$strStanza}{'db-path'} = $strRemapPath;
}
}
else
{
$oConfig{"${strStanza}:restore:tablespace-map"}{$strRemap} = $strRemapPath;
${$oManifestRef}{'backup:path'}{"tablespace:${strRemap}"} = $strRemapPath;
${$oManifestRef}{'backup:tablespace'}{$strRemap}{'path'} = $strRemapPath;
${$oManifestRef}{'base:link'}{"pg_tblspc/${strRemap}"}{'link_destination'} = $strRemapPath;
}
}
# Resave the config file
ini_save($strConfigFile, \%oConfig);
# Load remote config file
if ($bRemote)
{
ini_save($strRemoteConfigFile, \%oRemoteConfig);
BackRestTestCommon_Execute("mv ${strRemoteConfigFile} " . BackRestTestCommon_RepoPathGet() . '/pg_backrest.conf', true);
}
}
####################################################################################################################################
# BackRestTestCommon_ConfigRecovery
####################################################################################################################################
sub BackRestTestCommon_ConfigRecovery
{
my $oRecoveryHashRef = shift;
my $bRemote = shift;
# Create config filename
my $strConfigFile = BackRestTestCommon_DbPathGet() . '/pg_backrest.conf';
my $strStanza = BackRestTestCommon_StanzaGet();
# Load Config file
my %oConfig;
ini_load($strConfigFile, \%oConfig);
# Load remote config file
my %oRemoteConfig;
my $strRemoteConfigFile = BackRestTestCommon_TestPathGet() . '/pg_backrest.conf.remote';
if ($bRemote)
{
BackRestTestCommon_Execute("mv " . BackRestTestCommon_RepoPathGet() . "/pg_backrest.conf ${strRemoteConfigFile}", true);
ini_load($strRemoteConfigFile, \%oRemoteConfig);
}
# Rewrite remap section
delete($oConfig{"${strStanza}:recovery:option"});
foreach my $strOption (sort(keys $oRecoveryHashRef))
{
$oConfig{"${strStanza}:recovery:option"}{$strOption} = ${$oRecoveryHashRef}{$strOption};
}
# Resave the config file
ini_save($strConfigFile, \%oConfig);
# Load remote config file
if ($bRemote)
{
ini_save($strRemoteConfigFile, \%oRemoteConfig);
BackRestTestCommon_Execute("mv ${strRemoteConfigFile} " . BackRestTestCommon_RepoPathGet() . '/pg_backrest.conf', true);
}
}
####################################################################################################################################
# BackRestTestCommon_ConfigCreate
####################################################################################################################################
@@ -256,54 +591,64 @@ sub BackRestTestCommon_ConfigCreate
my $bChecksum = shift;
my $bHardlink = shift;
my $iThreadMax = shift;
my $bArchiveLocal = shift;
my $bArchiveAsync = shift;
my $bCompressAsync = shift;
my %oParamHash;
if (defined($strRemote))
{
$oParamHash{'global:command'}{'remote'} = $strCommonCommandRemote;
$oParamHash{'global:command'}{'cmd-remote'} = $strCommonCommandRemote;
}
$oParamHash{'global:command'}{'psql'} = $strCommonCommandPsql;
$oParamHash{'global:command'}{'cmd-psql'} = $strCommonCommandPsql;
if (defined($strRemote) && $strRemote eq REMOTE_BACKUP)
if (defined($strRemote) && $strRemote eq BACKUP)
{
$oParamHash{'global:backup'}{'host'} = $strCommonHost;
$oParamHash{'global:backup'}{'user'} = $strCommonUserBackRest;
$oParamHash{'global:backup'}{'backup-host'} = $strCommonHost;
$oParamHash{'global:backup'}{'backup-user'} = $strCommonUserBackRest;
}
elsif (defined($strRemote) && $strRemote eq REMOTE_DB)
elsif (defined($strRemote) && $strRemote eq DB)
{
$oParamHash{$strCommonStanza}{'host'} = $strCommonHost;
$oParamHash{$strCommonStanza}{'user'} = $strCommonUser;
$oParamHash{$strCommonStanza}{'db-host'} = $strCommonHost;
$oParamHash{$strCommonStanza}{'db-user'} = $strCommonUser;
}
$oParamHash{'global:log'}{'level-console'} = 'error';
$oParamHash{'global:log'}{'level-file'} = 'trace';
$oParamHash{'global:log'}{'log-level-console'} = 'error';
$oParamHash{'global:log'}{'log-level-file'} = 'trace';
if ($strLocal eq REMOTE_BACKUP)
if ($strLocal eq BACKUP)
{
if (defined($bHardlink) && $bHardlink)
{
$oParamHash{'global:backup'}{'hardlink'} = 'y';
}
$oParamHash{'global:general'}{'repo-path'} = $strCommonRepoPath;
}
elsif ($strLocal eq REMOTE_DB)
elsif ($strLocal eq DB)
{
$oParamHash{'global:general'}{'repo-path'} = $strCommonLocalPath;
if (defined($strRemote))
{
$oParamHash{'global:log'}{'level-console'} = 'trace';
$oParamHash{'global:log'}{'log-level-console'} = 'trace';
# if ($bArchiveAsync)
# {
# $oParamHash{'global:archive'}{path} = BackRestTestCommon_LocalPathGet();
# }
$oParamHash{'global:general'}{'repo-remote-path'} = $strCommonRepoPath;
}
else
{
$oParamHash{'global:general'}{'repo-path'} = $strCommonRepoPath;
}
if ($bArchiveLocal)
if ($bArchiveAsync)
{
$oParamHash{'global:archive'}{path} = BackRestTestCommon_ArchivePathGet();
if (!$bCompressAsync)
{
$oParamHash{'global:archive'}{'compress_async'} = 'n';
}
$oParamHash{'global:archive'}{'archive-async'} = 'y';
#
# if (!$bCompressAsync)
# {
# $oParamHash{'global:archive'}{'compress_async'} = 'n';
# }
}
}
else
@@ -311,32 +656,37 @@ sub BackRestTestCommon_ConfigCreate
confess "invalid local type ${strLocal}";
}
if (($strLocal eq REMOTE_BACKUP) || ($strLocal eq REMOTE_DB && !defined($strRemote)))
if (defined($iThreadMax) && $iThreadMax > 1)
{
$oParamHash{'db:command:option'}{'psql'} = "--port=${iCommonDbPort}";
$oParamHash{'global:general'}{'thread-max'} = $iThreadMax;
}
if (($strLocal eq BACKUP) || ($strLocal eq DB && !defined($strRemote)))
{
$oParamHash{'db:command'}{'cmd-psql-option'} = "--port=${iCommonDbPort}";
$oParamHash{'global:backup'}{'thread-max'} = $iThreadMax;
if (defined($bHardlink) && $bHardlink)
{
$oParamHash{'global:backup'}{'hardlink'} = 'y';
}
}
if (defined($bCompress) && !$bCompress)
{
$oParamHash{'global:backup'}{'compress'} = 'n';
$oParamHash{'global:general'}{'compress'} = 'n';
}
if (defined($bChecksum) && !$bChecksum)
{
$oParamHash{'global:backup'}{'checksum'} = 'n';
}
# if (defined($bChecksum) && $bChecksum)
# {
# $oParamHash{'global:backup'}{'checksum'} = 'y';
# }
$oParamHash{$strCommonStanza}{'path'} = $strCommonDbCommonPath;
$oParamHash{'global:backup'}{'path'} = $strCommonBackupPath;
if (defined($iThreadMax))
{
$oParamHash{'global:backup'}{'thread-max'} = $iThreadMax;
}
$oParamHash{$strCommonStanza}{'db-path'} = $strCommonDbCommonPath;
# Write out the configuration file
my $strFile = BackRestTestCommon_TestPathGet() . '/pg_backrest.conf';
config_save($strFile, \%oParamHash);
ini_save($strFile, \%oParamHash);
# Move the configuration file based on local
if ($strLocal eq 'db')
@@ -346,12 +696,12 @@ sub BackRestTestCommon_ConfigCreate
}
elsif ($strLocal eq 'backup' && !defined($strRemote))
{
rename($strFile, BackRestTestCommon_BackupPathGet() . '/pg_backrest.conf')
or die "unable to move ${strFile} to " . BackRestTestCommon_BackupPathGet() . '/pg_backrest.conf path';
rename($strFile, BackRestTestCommon_RepoPathGet() . '/pg_backrest.conf')
or die "unable to move ${strFile} to " . BackRestTestCommon_RepoPathGet() . '/pg_backrest.conf path';
}
else
{
BackRestTestCommon_Execute("mv ${strFile} " . BackRestTestCommon_BackupPathGet() . '/pg_backrest.conf', true);
BackRestTestCommon_Execute("mv ${strFile} " . BackRestTestCommon_RepoPathGet() . '/pg_backrest.conf', true);
}
}
@@ -408,14 +758,14 @@ sub BackRestTestCommon_DataPathGet
return $strCommonDataPath;
}
sub BackRestTestCommon_BackupPathGet
sub BackRestTestCommon_RepoPathGet
{
return $strCommonBackupPath;
return $strCommonRepoPath;
}
sub BackRestTestCommon_ArchivePathGet
sub BackRestTestCommon_LocalPathGet
{
return $strCommonArchivePath;
return $strCommonLocalPath;
}
sub BackRestTestCommon_DbPathGet
@@ -425,7 +775,17 @@ sub BackRestTestCommon_DbPathGet
sub BackRestTestCommon_DbCommonPathGet
{
return $strCommonDbCommonPath;
my $iIndex = shift;
return $strCommonDbCommonPath . (defined($iIndex) ? "-${iIndex}" : '');
}
sub BackRestTestCommon_DbTablespacePathGet
{
my $iTablespace = shift;
my $iIndex = shift;
return $strCommonDbTablespacePath . (defined($iTablespace) ? "/ts${iTablespace}" . (defined($iIndex) ? "-${iIndex}" : '') : '');
}
sub BackRestTestCommon_DbPortGet
View File
@@ -0,0 +1,816 @@
#!/usr/bin/perl
####################################################################################################################################
# ConfigTest.pl - Unit Tests for BackRest::Param and BackRest::Config
####################################################################################################################################
package BackRestTest::ConfigTest;
####################################################################################################################################
# Perl includes
####################################################################################################################################
use strict;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename qw(dirname);
use Cwd qw(abs_path);
use Scalar::Util 'blessed';
#use Data::Dumper qw(Dumper);
#use Scalar::Util qw(blessed);
# use Test::More qw(no_plan);
# use Test::Deep;
use lib dirname($0) . '/../lib';
use BackRest::Exception;
use BackRest::Utility;
use BackRest::Config;
use BackRestTest::CommonTest;
use Exporter qw(import);
our @EXPORT = qw(BackRestTestConfig_Test);
sub optionSetTest
{
my $oOption = shift;
my $strKey = shift;
my $strValue = shift;
$$oOption{option}{$strKey} = $strValue;
}
sub optionSetBoolTest
{
my $oOption = shift;
my $strKey = shift;
my $bValue = shift;
$$oOption{boolean}{$strKey} = defined($bValue) ? $bValue : true;
}
sub operationSetTest
{
my $oOption = shift;
my $strOperation = shift;
$$oOption{operation} = $strOperation;
}
sub optionRemoveTest
{
my $oOption = shift;
my $strKey = shift;
delete($$oOption{option}{$strKey});
delete($$oOption{boolean}{$strKey});
}
sub argvWriteTest
{
my $oOption = shift;
@ARGV = ();
if (defined($$oOption{boolean}))
{
foreach my $strKey (keys $$oOption{boolean})
{
if ($$oOption{boolean}{$strKey})
{
$ARGV[@ARGV] = "--${strKey}";
}
else
{
$ARGV[@ARGV] = "--no-${strKey}";
}
}
}
if (defined($$oOption{option}))
{
foreach my $strKey (keys $$oOption{option})
{
$ARGV[@ARGV] = "--${strKey}=$$oOption{option}{$strKey}";
}
}
$ARGV[@ARGV] = $$oOption{operation};
&log(INFO, " command line: " . join(" ", @ARGV));
%$oOption = ();
}
sub configLoadExpect
{
my $oOption = shift;
my $strOperation = shift;
my $iExpectedError = shift;
my $strErrorParam1 = shift;
my $strErrorParam2 = shift;
my $strErrorParam3 = shift;
my $oOptionRuleExpected = optionRuleGet();
operationSetTest($oOption, $strOperation);
argvWriteTest($oOption);
eval
{
configLoad();
};
if ($@)
{
if (!defined($iExpectedError))
{
confess $@;
}
my $oMessage = $@;
if (blessed($oMessage) && $oMessage->isa('BackRest::Exception'))
{
if ($oMessage->code() != $iExpectedError)
{
confess "expected error ${iExpectedError} from configLoad but got " . $oMessage->code() .
" '" . $oMessage->message() . "'";
}
my $strError;
if ($iExpectedError == ERROR_OPTION_REQUIRED)
{
$strError = "backup operation requires option: ${strErrorParam1}";
}
elsif ($iExpectedError == ERROR_OPERATION_REQUIRED)
{
$strError = "operation must be specified";
}
elsif ($iExpectedError == ERROR_OPTION_INVALID)
{
$strError = "option '${strErrorParam1}' not valid without option '${strErrorParam2}'";
if (defined($strErrorParam3))
{
$strError .= @{$strErrorParam3} == 1 ? " = '$$strErrorParam3[0]'" :
" in ('" . join("', '",@{ $strErrorParam3}) . "')";
}
}
elsif ($iExpectedError == ERROR_OPTION_INVALID_VALUE)
{
$strError = "'${strErrorParam1}' is not valid for '${strErrorParam2}' option";
}
elsif ($iExpectedError == ERROR_OPTION_INVALID_RANGE)
{
$strError = "'${strErrorParam1}' is not valid for '${strErrorParam2}' option";
}
elsif ($iExpectedError == ERROR_OPTION_INVALID_PAIR)
{
$strError = "'${strErrorParam1}' not valid key/value for '${strErrorParam2}' option";
}
elsif ($iExpectedError == ERROR_OPTION_NEGATE)
{
$strError = "option '${strErrorParam1}' cannot be both set and negated";
}
elsif ($iExpectedError == ERROR_FILE_INVALID)
{
$strError = "'${strErrorParam1}' is not a file";
}
else
{
confess "must construct message for error ${iExpectedError}, use this as an example: '" . $oMessage->message() . "'";
}
if ($oMessage->message() ne $strError)
{
confess "expected error message \"${strError}\" from configLoad but got \"" . $oMessage->message() . "\"";
}
}
else
{
confess "configLoad should throw BackRest::Exception:\n$oMessage";
}
}
else
{
if (defined($iExpectedError))
{
confess "expected error ${iExpectedError} from configLoad but got success";
}
}
# cmp_deeply(OPTION_rule_get(), $oOptionRuleExpected, 'compare original and new rule hashes')
# or die 'comparison failed';
}
sub optionTestExpect
{
my $strOption = shift;
my $strExpectedValue = shift;
my $strExpectedKey = shift;
if (defined($strExpectedValue))
{
my $strActualValue = optionGet($strOption);
if (defined($strExpectedKey))
{
# use Data::Dumper;
# &log(INFO, Dumper($strActualValue));
# exit 0;
$strActualValue = $$strActualValue{$strExpectedKey};
}
if (!defined($strActualValue))
{
confess "expected option ${strOption} to have value ${strExpectedValue} but [undef] found instead";
}
$strActualValue eq $strExpectedValue
or confess "expected option ${strOption} to have value ${strExpectedValue} but ${strActualValue} found instead";
}
elsif (optionTest($strOption))
{
confess "expected option ${strOption} to be [undef], but " . optionGet($strOption) . ' found instead';
}
}
####################################################################################################################################
# BackRestTestConfig_Test
####################################################################################################################################
sub BackRestTestConfig_Test
{
my $strTest = shift;
# Setup test variables
my $iRun;
my $bCreate;
my $strStanza = 'main';
my $oOption = {};
my $oConfig = {};
my @oyArray;
my $strConfigFile = BackRestTestCommon_TestPathGet() . '/pg_backrest.conf';
use constant BOGUS => 'bogus';
# Print test banner
&log(INFO, 'CONFIG MODULE ******************************************************************');
BackRestTestCommon_Drop();
#-------------------------------------------------------------------------------------------------------------------------------
# Test command-line options
#-------------------------------------------------------------------------------------------------------------------------------
if ($strTest eq 'all' || $strTest eq 'option')
{
$iRun = 0;
&log(INFO, "Option module\n");
if (BackRestTestCommon_Run(++$iRun, 'backup with no stanza'))
{
optionSetTest($oOption, OPTION_DB_PATH, '/db');
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_REQUIRED, OPTION_STANZA);
}
if (BackRestTestCommon_Run(++$iRun, 'backup with boolean stanza'))
{
optionSetBoolTest($oOption, OPTION_STANZA);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPERATION_REQUIRED);
}
if (BackRestTestCommon_Run(++$iRun, 'backup type defaults to ' . BACKUP_TYPE_INCR))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_TYPE, BACKUP_TYPE_INCR);
}
if (BackRestTestCommon_Run(++$iRun, 'backup type set to ' . BACKUP_TYPE_FULL))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_TYPE, BACKUP_TYPE_FULL);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_TYPE, BACKUP_TYPE_FULL);
}
if (BackRestTestCommon_Run(++$iRun, 'backup type invalid'))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_TYPE, BOGUS);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID_VALUE, BOGUS, OPTION_TYPE);
}
if (BackRestTestCommon_Run(++$iRun, 'backup invalid force'))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetBoolTest($oOption, OPTION_FORCE);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID, OPTION_FORCE, OPTION_NO_START_STOP);
}
if (BackRestTestCommon_Run(++$iRun, 'backup valid force'))
{
# $oOption = {};
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetBoolTest($oOption, OPTION_NO_START_STOP);
optionSetBoolTest($oOption, OPTION_FORCE);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_NO_START_STOP, true);
optionTestExpect(OPTION_FORCE, true);
}
if (BackRestTestCommon_Run(++$iRun, 'backup invalid value for ' . OPTION_TEST_DELAY))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetBoolTest($oOption, OPTION_TEST);
optionSetTest($oOption, OPTION_TEST_DELAY, BOGUS);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID_VALUE, BOGUS, OPTION_TEST_DELAY);
}
if (BackRestTestCommon_Run(++$iRun, 'backup invalid ' . OPTION_TEST_DELAY))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_TEST_DELAY, 5);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID, OPTION_TEST_DELAY, OPTION_TEST);
}
if (BackRestTestCommon_Run(++$iRun, 'backup check ' . OPTION_TEST_DELAY . ' undef'))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_TEST_DELAY);
}
if (BackRestTestCommon_Run(++$iRun, 'restore invalid ' . OPTION_TARGET))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_TYPE, RECOVERY_TYPE_DEFAULT);
optionSetTest($oOption, OPTION_TARGET, BOGUS);
@oyArray = (RECOVERY_TYPE_NAME, RECOVERY_TYPE_TIME, RECOVERY_TYPE_XID);
configLoadExpect($oOption, OP_RESTORE, ERROR_OPTION_INVALID, OPTION_TARGET, OPTION_TYPE, \@oyArray);
}
if (BackRestTestCommon_Run(++$iRun, 'restore ' . OPTION_TARGET))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_TYPE, RECOVERY_TYPE_NAME);
optionSetTest($oOption, OPTION_TARGET, BOGUS);
configLoadExpect($oOption, OP_RESTORE);
optionTestExpect(OPTION_TYPE, RECOVERY_TYPE_NAME);
optionTestExpect(OPTION_TARGET, BOGUS);
optionTestExpect(OPTION_TARGET_TIMELINE);
}
if (BackRestTestCommon_Run(++$iRun, 'invalid string ' . OPTION_THREAD_MAX))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_THREAD_MAX, BOGUS);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID_VALUE, BOGUS, OPTION_THREAD_MAX);
}
if (BackRestTestCommon_Run(++$iRun, 'invalid float ' . OPTION_THREAD_MAX))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_THREAD_MAX, '0.0');
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID_VALUE, '0.0', OPTION_THREAD_MAX);
}
if (BackRestTestCommon_Run(++$iRun, 'valid ' . OPTION_THREAD_MAX))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_THREAD_MAX, '2');
configLoadExpect($oOption, OP_BACKUP);
}
if (BackRestTestCommon_Run(++$iRun, 'valid float ' . OPTION_TEST_DELAY))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetBoolTest($oOption, OPTION_TEST);
optionSetTest($oOption, OPTION_TEST_DELAY, '0.25');
configLoadExpect($oOption, OP_BACKUP);
}
if (BackRestTestCommon_Run(++$iRun, 'valid int ' . OPTION_TEST_DELAY))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetBoolTest($oOption, OPTION_TEST);
optionSetTest($oOption, OPTION_TEST_DELAY, 3);
configLoadExpect($oOption, OP_BACKUP);
}
if (BackRestTestCommon_Run(++$iRun, 'restore valid ' . OPTION_TARGET_TIMELINE))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_TARGET_TIMELINE, 2);
configLoadExpect($oOption, OP_RESTORE);
}
if (BackRestTestCommon_Run(++$iRun, 'invalid ' . OPTION_BUFFER_SIZE))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_BUFFER_SIZE, '512');
configLoadExpect($oOption, OP_RESTORE, ERROR_OPTION_INVALID_RANGE, '512', OPTION_BUFFER_SIZE);
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' invalid option ' . OPTION_RETENTION_ARCHIVE_TYPE))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_RETENTION_ARCHIVE_TYPE, BOGUS);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID, OPTION_RETENTION_ARCHIVE_TYPE, OPTION_RETENTION_ARCHIVE);
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' invalid value ' . OPTION_RETENTION_ARCHIVE_TYPE))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_RETENTION_ARCHIVE, 3);
optionSetTest($oOption, OPTION_RETENTION_ARCHIVE_TYPE, BOGUS);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID_VALUE, BOGUS, OPTION_RETENTION_ARCHIVE_TYPE);
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' valid value ' . OPTION_RETENTION_ARCHIVE_TYPE))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_RETENTION_ARCHIVE, 1);
optionSetTest($oOption, OPTION_RETENTION_ARCHIVE_TYPE, BACKUP_TYPE_FULL);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_RETENTION_ARCHIVE, 1);
optionTestExpect(OPTION_RETENTION_ARCHIVE_TYPE, BACKUP_TYPE_FULL);
}
if (BackRestTestCommon_Run(++$iRun, OP_RESTORE . ' invalid value ' . OPTION_RESTORE_RECOVERY_SETTING))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_RESTORE_RECOVERY_SETTING, '=');
configLoadExpect($oOption, OP_RESTORE, ERROR_OPTION_INVALID_PAIR, '=', OPTION_RESTORE_RECOVERY_SETTING);
}
if (BackRestTestCommon_Run(++$iRun, OP_RESTORE . ' invalid value ' . OPTION_RESTORE_RECOVERY_SETTING))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_RESTORE_RECOVERY_SETTING, '=' . BOGUS);
configLoadExpect($oOption, OP_RESTORE, ERROR_OPTION_INVALID_PAIR, '=' . BOGUS, OPTION_RESTORE_RECOVERY_SETTING);
}
if (BackRestTestCommon_Run(++$iRun, OP_RESTORE . ' invalid value ' . OPTION_RESTORE_RECOVERY_SETTING))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_RESTORE_RECOVERY_SETTING, BOGUS . '=');
configLoadExpect($oOption, OP_RESTORE, ERROR_OPTION_INVALID_PAIR, BOGUS . '=', OPTION_RESTORE_RECOVERY_SETTING);
}
if (BackRestTestCommon_Run(++$iRun, OP_RESTORE . ' valid value ' . OPTION_RESTORE_RECOVERY_SETTING))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_RESTORE_RECOVERY_SETTING, 'primary-conn-info=db.domain.net');
configLoadExpect($oOption, OP_RESTORE);
optionTestExpect(OPTION_RESTORE_RECOVERY_SETTING, 'db.domain.net', 'primary-conn-info');
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' valid value ' . OPTION_COMMAND_PSQL))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_COMMAND_PSQL, '/psql -X %option%');
optionSetTest($oOption, OPTION_COMMAND_PSQL_OPTION, '--port 5432');
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_COMMAND_PSQL, '/psql -X --port 5432');
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' default value ' . OPTION_COMMAND_REMOTE))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_COMMAND_PSQL, '/psql -X %option%');
optionSetTest($oOption, OPTION_COMMAND_PSQL_OPTION, '--port 5432');
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_COMMAND_REMOTE, dirname(abs_path($0)) . '/pg_backrest_remote.pl');
}
}
#-------------------------------------------------------------------------------------------------------------------------------
# Test mixed command-line/config
#-------------------------------------------------------------------------------------------------------------------------------
if ($strTest eq 'all' || $strTest eq 'config')
{
$iRun = 0;
&log(INFO, "Config module\n");
BackRestTestCommon_Create();
if (BackRestTestCommon_Run(++$iRun, 'set and negate option ' . OPTION_CONFIG))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, '/dude/dude.conf');
optionSetBoolTest($oOption, OPTION_CONFIG, false);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_NEGATE, OPTION_CONFIG);
}
if (BackRestTestCommon_Run(++$iRun, 'option ' . OPTION_CONFIG))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetBoolTest($oOption, OPTION_CONFIG, false);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_CONFIG);
}
if (BackRestTestCommon_Run(++$iRun, 'default option ' . OPTION_CONFIG))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_CONFIG, OPTION_DEFAULT_CONFIG);
}
if (BackRestTestCommon_Run(++$iRun, 'config file is a path'))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, BackRestTestCommon_TestPathGet());
configLoadExpect($oOption, OP_BACKUP, ERROR_FILE_INVALID, BackRestTestCommon_TestPathGet());
}
if (BackRestTestCommon_Run(++$iRun, 'load from config stanza section - option ' . OPTION_THREAD_MAX))
{
$oConfig = {};
$$oConfig{"$strStanza:" . &OP_BACKUP}{&OPTION_THREAD_MAX} = 2;
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_THREAD_MAX, 2);
}
if (BackRestTestCommon_Run(++$iRun, 'load from config stanza inherited section - option ' . OPTION_THREAD_MAX))
{
$oConfig = {};
$$oConfig{"$strStanza:" . &CONFIG_SECTION_GENERAL}{&OPTION_THREAD_MAX} = 3;
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_THREAD_MAX, 3);
}
if (BackRestTestCommon_Run(++$iRun, 'load from config global section - option ' . OPTION_THREAD_MAX))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &OP_BACKUP}{&OPTION_THREAD_MAX} = 2;
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_THREAD_MAX, 2);
}
if (BackRestTestCommon_Run(++$iRun, 'load from config global inherited section - option ' . OPTION_THREAD_MAX))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_GENERAL}{&OPTION_THREAD_MAX} = 5;
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_THREAD_MAX, 5);
}
if (BackRestTestCommon_Run(++$iRun, 'default - option ' . OPTION_THREAD_MAX))
{
$oConfig = {};
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_THREAD_MAX, 1);
}
if (BackRestTestCommon_Run(++$iRun, 'command-line override - option ' . OPTION_THREAD_MAX))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_GENERAL}{&OPTION_THREAD_MAX} = 9;
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_THREAD_MAX, 7);
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_THREAD_MAX, 7);
}
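The config tests above exercise the loader's precedence rules: a command-line value beats any config file value, and config sections are consulted from most specific to least specific (stanza command, stanza general, global command, global general) before falling back to the built-in default. A minimal sketch of that layered lookup, in Python for brevity — section names are illustrative, not pg_backrest's actual internals:

```python
def resolve_option(name, command, stanza, cmdline, config, default=None):
    """Resolve an option: command line first, then config sections from
    most specific to least specific, then the built-in default."""
    if name in cmdline:
        return cmdline[name]
    for section in (stanza + ':' + command,   # stanza command section
                    stanza + ':general',      # stanza inherited section
                    'global:' + command,      # global command section
                    'global:general'):        # global inherited section
        value = config.get(section, {}).get(name)
        if value is not None:
            return value
    return default

# Mirrors the tests above: global:general sets 9, the command line sets 7.
config = {'global:general': {'thread-max': 9}}
print(resolve_option('thread-max', 'backup', 'main', {'thread-max': 7}, config))  # 7
print(resolve_option('thread-max', 'backup', 'main', {}, config))                 # 9
print(resolve_option('thread-max', 'backup', 'main', {}, {}, default=1))          # 1
```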
if (BackRestTestCommon_Run(++$iRun, 'invalid boolean - option ' . OPTION_HARDLINK))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &OP_BACKUP}{&OPTION_HARDLINK} = 'Y';
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID_VALUE, 'Y', OPTION_HARDLINK);
}
if (BackRestTestCommon_Run(++$iRun, 'invalid value - option ' . OPTION_LOG_LEVEL_CONSOLE))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_LOG}{&OPTION_LOG_LEVEL_CONSOLE} = BOGUS;
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP, ERROR_OPTION_INVALID_VALUE, BOGUS, OPTION_LOG_LEVEL_CONSOLE);
}
if (BackRestTestCommon_Run(++$iRun, 'valid value - option ' . OPTION_LOG_LEVEL_CONSOLE))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_LOG}{&OPTION_LOG_LEVEL_CONSOLE} = lc(INFO);
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_RESTORE);
}
if (BackRestTestCommon_Run(++$iRun, 'archive-push - option ' . OPTION_LOG_LEVEL_CONSOLE))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_ARCHIVE_PUSH);
}
if (BackRestTestCommon_Run(++$iRun, OP_EXPIRE . ' ' . OPTION_RETENTION_FULL))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_EXPIRE}{&OPTION_RETENTION_FULL} = 2;
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_EXPIRE);
optionTestExpect(OPTION_RETENTION_FULL, 2);
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' option ' . OPTION_COMPRESS))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_BACKUP}{&OPTION_COMPRESS} = 'n';
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_COMPRESS, false);
}
if (BackRestTestCommon_Run(++$iRun, OP_RESTORE . ' option ' . OPTION_RESTORE_RECOVERY_SETTING))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_RESTORE_RECOVERY_SETTING}{'archive-command'} = '/path/to/pg_backrest.pl';
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_RESTORE);
optionTestExpect(OPTION_RESTORE_RECOVERY_SETTING, '/path/to/pg_backrest.pl', 'archive-command');
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' option ' . OPTION_DB_PATH))
{
$oConfig = {};
$$oConfig{$strStanza}{&OPTION_DB_PATH} = '/path/to/db';
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_DB_PATH, '/path/to/db');
}
if (BackRestTestCommon_Run(++$iRun, OP_ARCHIVE_PUSH . ' option ' . OPTION_DB_PATH))
{
$oConfig = {};
$$oConfig{$strStanza}{&OPTION_DB_PATH} = '/path/to/db';
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_ARCHIVE_PUSH);
optionTestExpect(OPTION_DB_PATH, '/path/to/db');
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' option ' . OPTION_REPO_PATH))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_GENERAL}{&OPTION_REPO_PATH} = '/repo';
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_REPO_PATH, '/repo');
}
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' valid value ' . OPTION_COMMAND_PSQL))
{
$oConfig = {};
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_COMMAND}{&OPTION_COMMAND_PSQL} = '/psql -X %option%';
$$oConfig{&CONFIG_GLOBAL . ':' . &CONFIG_SECTION_COMMAND}{&OPTION_COMMAND_PSQL_OPTION} = '--port=5432';
ini_save($strConfigFile, $oConfig);
optionSetTest($oOption, OPTION_STANZA, $strStanza);
optionSetTest($oOption, OPTION_DB_PATH, '/db');
optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
configLoadExpect($oOption, OP_BACKUP);
optionTestExpect(OPTION_COMMAND_PSQL, '/psql -X --port=5432');
}
# Cleanup
if (BackRestTestCommon_Cleanup())
{
&log(INFO, 'cleanup');
BackRestTestCommon_Drop(true);
}
}
}
1;

@ -8,17 +8,20 @@ package BackRestTest::FileTest;
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use Carp;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename;
use Cwd 'abs_path';
use File::stat;
use Fcntl ':mode';
use Scalar::Util 'blessed';
use Time::HiRes qw(gettimeofday usleep);
use POSIX qw(ceil);
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::Config;
use BackRest::File;
use BackRest::Remote;
@ -87,13 +90,26 @@ sub BackRestTestFile_Test
&log(INFO, 'FILE MODULE ********************************************************************');
#-------------------------------------------------------------------------------------------------------------------------------
# Create remote
# Create remotes
#-------------------------------------------------------------------------------------------------------------------------------
my $oRemote = BackRest::Remote->new
(
strHost => $strHost,
strUser => $strUser,
strCommand => BackRestTestCommon_CommandRemoteGet()
$strHost, # Host
$strUser, # User
BackRestTestCommon_CommandRemoteGet(), # Command
OPTION_DEFAULT_BUFFER_SIZE, # Buffer size
OPTION_DEFAULT_COMPRESS_LEVEL, # Compress level
OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK, # Compress network level
);
my $oLocal = new BackRest::Remote
(
undef, # Host
undef, # User
undef, # Command
OPTION_DEFAULT_BUFFER_SIZE, # Buffer size
OPTION_DEFAULT_COMPRESS_LEVEL, # Compress level
OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK, # Compress network level
);
#-------------------------------------------------------------------------------------------------------------------------------
@ -109,25 +125,25 @@ sub BackRestTestFile_Test
for (my $bRemote = 0; $bRemote <= 1; $bRemote++)
{
# Create the file object
my $oFile = (BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
))->clone();
$strStanza,
$strTestPath,
$bRemote ? 'backup' : undef,
$bRemote ? $oRemote : $oLocal
);
# Loop through error
for (my $bError = 0; $bError <= 1; $bError++)
{
# Loop through permission (permission will be set on true)
for (my $bPermission = 0; $bPermission <= 1; $bPermission++)
# Loop through mode (mode will be set on true)
for (my $bMode = 0; $bMode <= 1; $bMode++)
{
my $strPathType = PATH_BACKUP_CLUSTER;
# Increment the run, log, and decide whether this unit test should be run
if (!BackRestTestCommon_Run(++$iRun,
"rmt ${bRemote}, err ${bError}, prm ${bPermission}")) {next}
"rmt ${bRemote}, err ${bError}, mode ${bMode}")) {next}
# Setup test directory
BackRestTestFile_Setup($bError);
@ -136,12 +152,12 @@ sub BackRestTestFile_Test
mkdir("${strTestPath}/backup/db") or confess 'Unable to create test/backup/db directory';
my $strPath = 'path';
my $strPermission;
my $strMode;
# If permission then set one (other than the default)
if ($bPermission)
# If mode then set one (other than the default)
if ($bMode)
{
$strPermission = '0700';
$strMode = '0700';
}
# If not exists then set the path to something bogus
@ -156,7 +172,7 @@ sub BackRestTestFile_Test
eval
{
$oFile->path_create($strPathType, $strPath, $strPermission);
$oFile->path_create($strPathType, $strPath, $strMode);
};
# Check for errors
@ -184,7 +200,7 @@ sub BackRestTestFile_Test
confess 'path was not created';
}
# Check that the permissions were set correctly
# Check that the mode was set correctly
my $oStat = lstat($strPathCheck);
if (!defined($oStat))
@ -192,11 +208,11 @@ sub BackRestTestFile_Test
confess "unable to stat ${strPathCheck}";
}
if ($bPermission)
if ($bMode)
{
if ($strPermission ne sprintf('%04o', S_IMODE($oStat->mode)))
if ($strMode ne sprintf('%04o', S_IMODE($oStat->mode)))
{
confess "permissions were not set to {$strPermission}";
confess "mode was not set to ${strMode}";
}
}
}
@ -217,13 +233,13 @@ sub BackRestTestFile_Test
for (my $bRemote = 0; $bRemote <= 0; $bRemote++)
{
# Create the file object
my $oFile = BackRest::File->new
my $oFile = (new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
);
$strStanza,
$strTestPath,
$bRemote ? 'backup' : undef,
$bRemote ? $oRemote : $oLocal
))->clone(1);
# Loop through source exists
for (my $bSourceExists = 0; $bSourceExists <= 1; $bSourceExists++)
@ -316,12 +332,12 @@ sub BackRestTestFile_Test
for (my $bRemote = 0; $bRemote <= 0; $bRemote++)
{
# Create the file object
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
$strStanza,
$strTestPath,
$bRemote ? 'backup' : undef,
$bRemote ? $oRemote : $oLocal
);
# Loop through exists
@ -337,6 +353,7 @@ sub BackRestTestFile_Test
my $strFile = "${strTestPath}/test.txt";
my $strSourceHash;
my $iSourceSize;
if ($bError)
{
@ -345,7 +362,7 @@ sub BackRestTestFile_Test
elsif ($bExists)
{
system("echo 'TESTDATA' > ${strFile}");
$strSourceHash = $oFile->hash(PATH_BACKUP_ABSOLUTE, $strFile);
($strSourceHash, $iSourceSize) = $oFile->hash_size(PATH_BACKUP_ABSOLUTE, $strFile);
}
# Execute in eval in case of error
@ -383,7 +400,7 @@ sub BackRestTestFile_Test
system("gzip -d ${strDestinationFile}") == 0 or die "could not decompress ${strDestinationFile}";
my $strDestinationHash = $oFile->hash(PATH_BACKUP_ABSOLUTE, $strFile);
my ($strDestinationHash, $iDestinationSize) = $oFile->hash_size(PATH_BACKUP_ABSOLUTE, $strFile);
if ($strSourceHash ne $strDestinationHash)
{
@ -394,6 +411,63 @@ sub BackRestTestFile_Test
}
}
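The switch from hash() to hash_size() above reflects the v0.50 change that checksum and size are now calculated in stream rather than in separate passes. A sketch of such a single-pass helper — a hypothetical Python analogue, not pg_backrest's Perl implementation, which also handles remote files:

```python
import gzip
import hashlib

def hash_size(path, compressed=False):
    """Return (sha1_hex, size) computed in a single streaming pass.

    When compressed is true, the checksum and size describe the
    uncompressed content, mirroring hash_size(..., $bCompressed) above.
    """
    opener = gzip.open if compressed else open
    sha1 = hashlib.sha1()
    size = 0
    with opener(path, 'rb') as handle:
        for chunk in iter(lambda: handle.read(65536), b''):
            sha1.update(chunk)   # checksum accumulated as data streams by
            size += len(chunk)   # size counted in the same pass
    return sha1.hexdigest(), size
```

With a file created by `echo 'TESTDATA' >` (9 bytes including the newline) this yields size 9 and the same hash constant the copy test below compares against.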
#-------------------------------------------------------------------------------------------------------------------------------
# Test wait()
#-------------------------------------------------------------------------------------------------------------------------------
if ($strTest eq 'all' || $strTest eq 'wait')
{
$iRun = 0;
&log(INFO, '--------------------------------------------------------------------------------');
&log(INFO, "Test File->wait()\n");
for (my $bRemote = 0; $bRemote <= 1; $bRemote++)
{
# Create the file object
my $oFile = new BackRest::File
(
$strStanza,
$strTestPath,
$bRemote ? 'db' : undef,
$bRemote ? $oRemote : $oLocal
);
my $lTimeBegin = gettimeofday();
if (!BackRestTestCommon_Run(++$iRun,
"rmt ${bRemote}, begin ${lTimeBegin}")) {next}
# If there is not enough time to complete the test then sleep
if (ceil($lTimeBegin) - $lTimeBegin < .250)
{
my $lSleepMs = ceil(((int($lTimeBegin) + 1) - $lTimeBegin) * 1000);
usleep($lSleepMs * 1000);
&log(DEBUG, "slept ${lSleepMs}ms: begin ${lTimeBegin}, end " . gettimeofday());
$lTimeBegin = gettimeofday();
}
# Run the test
my $lTimeBeginCheck = $oFile->wait(PATH_DB_ABSOLUTE);
&log(DEBUG, "begin ${lTimeBegin}, check ${lTimeBeginCheck}, end " . time());
# Current time should have advanced by 1 second
if (time() == int($lTimeBegin))
{
confess "time was not advanced by 1 second";
}
# lTimeBegin and lTimeBeginCheck should be equal
if (int($lTimeBegin) != $lTimeBeginCheck)
{
confess 'time begin ' . int($lTimeBegin) . " and check ${lTimeBeginCheck} should be equal";
}
}
}
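The wait() test above sleeps to just before a second boundary, calls File->wait(), and then checks that the clock crossed into the next second while the returned value still identifies the second in which the call began. The underlying technique can be sketched as follows — an illustration in Python, not the pg_backrest implementation:

```python
import math
import time

def wait_for_next_second():
    """Sleep until the clock has crossed into the next whole second,
    then return the integer second in which the call began."""
    begin = time.time()
    # Fraction of the current second still remaining, plus a small margin
    # so float rounding cannot leave us just short of the boundary.
    time.sleep(math.floor(begin) + 1 - begin + 0.01)
    return int(begin)

begin_check = wait_for_next_second()
assert int(time.time()) > begin_check  # the current second has advanced
```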
#-------------------------------------------------------------------------------------------------------------------------------
# Test manifest()
#-------------------------------------------------------------------------------------------------------------------------------
@ -419,12 +493,12 @@ sub BackRestTestFile_Test
for (my $bRemote = 0; $bRemote <= 1; $bRemote++)
{
# Create the file object
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
$strStanza,
$strTestPath,
$bRemote ? 'backup' : undef,
$bRemote ? $oRemote : $oLocal
);
for (my $bError = 0; $bError <= 1; $bError++)
@ -527,8 +601,8 @@ sub BackRestTestFile_Test
$oManifestHash{name}{"${strName}"}{user} : '') . ',' .
(defined($oManifestHash{name}{"${strName}"}{group}) ?
$oManifestHash{name}{"${strName}"}{group} : '') . ',' .
(defined($oManifestHash{name}{"${strName}"}{permission}) ?
$oManifestHash{name}{"${strName}"}{permission} : '') . ',' .
(defined($oManifestHash{name}{"${strName}"}{mode}) ?
$oManifestHash{name}{"${strName}"}{mode} : '') . ',' .
(defined($oManifestHash{name}{"${strName}"}{modification_time}) ?
$oManifestHash{name}{"${strName}"}{modification_time} : '') . ',' .
(defined($oManifestHash{name}{"${strName}"}{inode}) ?
@ -561,12 +635,12 @@ sub BackRestTestFile_Test
for (my $bRemote = false; $bRemote <= true; $bRemote++)
{
# Create the file object
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
$strStanza,
$strTestPath,
$bRemote ? 'backup' : undef,
$bRemote ? $oRemote : $oLocal
);
for (my $bSort = false; $bSort <= true; $bSort++)
@ -687,12 +761,12 @@ sub BackRestTestFile_Test
for (my $bRemote = 0; $bRemote <= 1; $bRemote++)
{
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
$strStanza,
$strTestPath,
$bRemote ? 'backup' : undef,
$bRemote ? $oRemote : $oLocal
);
# Loop through exists
@ -788,24 +862,27 @@ sub BackRestTestFile_Test
&log(INFO, '--------------------------------------------------------------------------------');
&log(INFO, "Test File->hash()\n");
for (my $bRemote = 0; $bRemote <= 1; $bRemote++)
for (my $bRemote = false; $bRemote <= true; $bRemote++)
{
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
$strStanza,
$strTestPath,
$bRemote ? 'backup' : undef,
$bRemote ? $oRemote : $oLocal
);
# Loop through error
for (my $bError = 0; $bError <= 1; $bError++)
for (my $bError = false; $bError <= true; $bError++)
{
# Loop through exists
for (my $bExists = 0; $bExists <= 1; $bExists++)
for (my $bExists = false; $bExists <= true; $bExists++)
{
# Loop through compressed
for (my $bCompressed = false; $bCompressed <= true; $bCompressed++)
{
if (!BackRestTestCommon_Run(++$iRun,
"rmt ${bRemote}, err ${bError}, exists ${bExists}")) {next}
"rmt ${bRemote}, err ${bError}, exists ${bExists}, cmp ${bCompressed}")) {next}
# Setup test directory
BackRestTestFile_Setup($bError);
@ -823,15 +900,22 @@ sub BackRestTestFile_Test
else
{
system("echo 'TESTDATA' > ${strFile}");
if ($bCompressed && !$bRemote)
{
$oFile->compress(PATH_BACKUP_ABSOLUTE, $strFile);
$strFile = $strFile . '.gz';
}
}
# Execute in eval in case of error
my $strHash;
my $iSize;
my $bErrorExpected = !$bExists || $bError || $bRemote;
eval
{
$strHash = $oFile->hash(PATH_BACKUP_ABSOLUTE, $strFile)
($strHash, $iSize) = $oFile->hash_size(PATH_BACKUP_ABSOLUTE, $strFile, $bCompressed)
};
if ($@)
@ -855,6 +939,7 @@ sub BackRestTestFile_Test
}
}
}
}
}
}
@ -870,12 +955,12 @@ sub BackRestTestFile_Test
for (my $bRemote = 0; $bRemote <= 1; $bRemote++)
{
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
$strStanza,
$strTestPath,
$bRemote ? 'backup' : undef,
$bRemote ? $oRemote : $oLocal
);
# Loop through exists
@ -952,6 +1037,9 @@ sub BackRestTestFile_Test
{
$iRun = 0;
# Loop through file sizes (0 = small, 1 and 2 = large)
for (my $bLarge = false; $bLarge <= 2; $bLarge++)
{
# Loop through backup local vs remote
for (my $bBackupRemote = 0; $bBackupRemote <= 1; $bBackupRemote++)
{
@ -968,34 +1056,34 @@ sub BackRestTestFile_Test
my $strRemote = $bBackupRemote ? 'backup' : $bDbRemote ? 'db' : undef;
# Create the file object
my $oFile = BackRest::File->new
my $oFile = new BackRest::File
(
strStanza => $strStanza,
strBackupPath => $strTestPath,
strRemote => $strRemote,
oRemote => defined($strRemote) ? $oRemote : undef
$strStanza,
$strTestPath,
$strRemote,
defined($strRemote) ? $oRemote : $oLocal
);
# Loop through source compression
for (my $bSourceCompressed = 0; $bSourceCompressed <= 1; $bSourceCompressed++)
{
# Loop through destination compression
for (my $bDestinationCompress = 0; $bDestinationCompress <= 1; $bDestinationCompress++)
{
# Loop through source path types
for (my $bSourcePathType = 0; $bSourcePathType <= 1; $bSourcePathType++)
{
# Loop through destination path types
for (my $bDestinationPathType = 0; $bDestinationPathType <= 1; $bDestinationPathType++)
{
# Loop through source ignore/require
for (my $bSourceIgnoreMissing = 0; $bSourceIgnoreMissing <= 1; $bSourceIgnoreMissing++)
{
# Loop through source missing/present
for (my $bSourceMissing = 0; $bSourceMissing <= 1; $bSourceMissing++)
for (my $bSourceMissing = 0; $bSourceMissing <= !$bLarge; $bSourceMissing++)
{
# Loop through small/large
for (my $bLarge = false; $bLarge <= defined($strRemote) && !$bSourceMissing; $bLarge++)
# Loop through source ignore/require
for (my $bSourceIgnoreMissing = 0; $bSourceIgnoreMissing <= !$bLarge; $bSourceIgnoreMissing++)
{
# Loop through checksum append
for (my $bChecksumAppend = 0; $bChecksumAppend <= !$bLarge; $bChecksumAppend++)
{
# Loop through source compression
for (my $bSourceCompressed = 0; $bSourceCompressed <= !$bSourceMissing; $bSourceCompressed++)
{
# Loop through destination compression
for (my $bDestinationCompress = 0; $bDestinationCompress <= !$bSourceMissing; $bDestinationCompress++)
{
my $strSourcePathType = $bSourcePathType ? PATH_DB_ABSOLUTE : PATH_BACKUP_ABSOLUTE;
my $strSourcePath = $bSourcePathType ? 'db' : 'backup';
@ -1004,16 +1092,16 @@ sub BackRestTestFile_Test
my $strDestinationPath = $bDestinationPathType ? 'db' : 'backup';
if (!BackRestTestCommon_Run(++$iRun,
'rmt ' .
"lrg ${bLarge}, rmt " .
(defined($strRemote) && ($strRemote eq $strSourcePath ||
$strRemote eq $strDestinationPath) ? 1 : 0) .
", lrg ${bLarge}, " .
'srcpth ' . (defined($strRemote) && $strRemote eq $strSourcePath ? 'rmt' : 'lcl') .
":${strSourcePath}, srccmp $bSourceCompressed, srcmiss ${bSourceMissing}, " .
"srcignmiss ${bSourceIgnoreMissing}, " .
', srcpth ' . (defined($strRemote) && $strRemote eq $strSourcePath ? 'rmt' : 'lcl') .
":${strSourcePath}, srcmiss ${bSourceMissing}, " .
"srcignmiss ${bSourceIgnoreMissing}, srccmp $bSourceCompressed, " .
'dstpth ' .
(defined($strRemote) && $strRemote eq $strDestinationPath ? 'rmt' : 'lcl') .
":${strDestinationPath}, dstcmp $bDestinationCompress")) {next}
":${strDestinationPath}, chkapp ${bChecksumAppend}, " .
"dstcmp $bDestinationCompress")) {next}
# Setup test directory
BackRestTestFile_Setup(false);
@ -1023,8 +1111,12 @@ sub BackRestTestFile_Test
my $strSourceFile = "${strTestPath}/${strSourcePath}/test-source";
my $strDestinationFile = "${strTestPath}/${strDestinationPath}/test-destination";
my $strCopyHash;
my $iCopySize;
# Create the compressed or uncompressed test file
my $strSourceHash;
my $iSourceSize;
if (!$bSourceMissing)
{
@ -1033,7 +1125,7 @@ sub BackRestTestFile_Test
$strSourceFile .= '.bin';
$strDestinationFile .= '.bin';
BackRestTestCommon_Execute('cp ' . BackRestTestCommon_DataPathGet() . "/test.archive.bin ${strSourceFile}");
BackRestTestCommon_Execute('cp ' . BackRestTestCommon_DataPathGet() . "/test.archive${bLarge}.bin ${strSourceFile}");
}
else
{
@ -1043,7 +1135,21 @@ sub BackRestTestFile_Test
system("echo 'TESTDATA' > ${strSourceFile}");
}
$strSourceHash = $oFile->hash(PATH_ABSOLUTE, $strSourceFile);
if ($bLarge == 1)
{
$strSourceHash = 'c2e63b6a49d53a53d6df1aa6b70c7c16747ca099';
$iSourceSize = 16777216;
}
elsif ($bLarge == 2)
{
$strSourceHash = '1c7e00fd09b9dd11fc2966590b3e3274645dd031';
$iSourceSize = 16777216;
}
else
{
$strSourceHash = '06364afe79d801433188262478a76d19777ef351';
$iSourceSize = 9;
}
if ($bSourceCompressed)
{
@ -1062,11 +1168,12 @@ sub BackRestTestFile_Test
eval
{
$bReturn = $oFile->copy($strSourcePathType, $strSourceFile,
$strDestinationPathType, $strDestinationFile,
$bSourceCompressed, $bDestinationCompress,
$bSourceIgnoreMissing, undef,
'0700');
($bReturn, $strCopyHash, $iCopySize) =
$oFile->copy($strSourcePathType, $strSourceFile,
$strDestinationPathType, $strDestinationFile,
$bSourceCompressed, $bDestinationCompress,
$bSourceIgnoreMissing, undef, '0700', false, undef, undef,
$bChecksumAppend);
};
# Check for errors after copy
@ -1109,6 +1216,24 @@ sub BackRestTestFile_Test
confess 'expected source file missing error';
}
if (!defined($strCopyHash))
{
confess 'copy hash must be defined';
}
if ($bChecksumAppend)
{
if ($bDestinationCompress)
{
$strDestinationFile =
substr($strDestinationFile, 0, length($strDestinationFile) -3) . "-${strSourceHash}.gz";
}
else
{
$strDestinationFile .= '-' . $strSourceHash;
}
}
unless (-e $strDestinationFile)
{
confess "could not find destination file ${strDestinationFile}";
@ -1124,12 +1249,18 @@ sub BackRestTestFile_Test
or die "could not decompress ${strDestinationFile}";
}
my $strDestinationHash = $oFile->hash(PATH_ABSOLUTE, $strDestinationTest);
my ($strDestinationHash, $iDestinationSize) = $oFile->hash_size(PATH_ABSOLUTE, $strDestinationTest);
if ($strSourceHash ne $strDestinationHash)
if ($strSourceHash ne $strDestinationHash || $strSourceHash ne $strCopyHash)
{
confess "source ${strSourceHash} and destination ${strDestinationHash} file hashes do not match";
confess "source ${strSourceHash}, copy ${strCopyHash} and destination ${strDestinationHash} file hashes do not match";
}
if ($iSourceSize != $iDestinationSize || $iSourceSize != $iCopySize)
{
confess "source ${iSourceSize}, copy ${iCopySize} and destination ${iDestinationSize} sizes do not match";
}
}
}
}
}
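The checksum-append branch in the copy test above encodes the naming rule: the source hash is appended to the destination file name, and when the destination is compressed it is inserted before the `.gz` suffix. A small sketch of that rule (hypothetical helper, mirroring the substr logic in the test):

```python
def checksum_append(path, checksum, compressed):
    """Append '-<checksum>' to a file name, keeping any .gz suffix last."""
    if compressed:
        # Strip the trailing '.gz', append the hash, then restore the suffix.
        assert path.endswith('.gz')
        return path[:-3] + '-' + checksum + '.gz'
    return path + '-' + checksum

print(checksum_append('test-destination.gz', '06364afe', True))   # test-destination-06364afe.gz
print(checksum_append('test-destination', '06364afe', False))     # test-destination-06364afe
```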

@ -1,6 +1,6 @@
#!/usr/bin/perl
####################################################################################################################################
# BackupTest.pl - Unit Tests for BackRest::File
# UtilityTest.pl - Unit Tests for BackRest::Utility
####################################################################################################################################
package BackRestTest::UtilityTest;
@ -8,13 +8,14 @@ package BackRestTest::UtilityTest;
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use Carp;
use warnings FATAL => qw(all);
use Carp qw(confess);
use File::Basename;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::Config;
use BackRest::File;
use BackRestTest::CommonTest;
@ -22,29 +23,6 @@ use BackRestTest::CommonTest;
use Exporter qw(import);
our @EXPORT = qw(BackRestTestUtility_Test);
####################################################################################################################################
# BackRestTestUtility_Drop
####################################################################################################################################
sub BackRestTestUtility_Drop
{
# Remove the test directory
system('rm -rf ' . BackRestTestCommon_TestPathGet()) == 0
or die 'unable to remove ' . BackRestTestCommon_TestPathGet() . 'path';
}
####################################################################################################################################
# BackRestTestUtility_Create
####################################################################################################################################
sub BackRestTestUtility_Create
{
# Drop the old test directory
BackRestTestUtility_Drop();
# Create the test directory
mkdir(BackRestTestCommon_TestPathGet(), oct('0770'))
or confess 'Unable to create ' . BackRestTestCommon_TestPathGet() . ' path';
}
####################################################################################################################################
# BackRestTestUtility_Test
####################################################################################################################################
@ -60,6 +38,19 @@ sub BackRestTestUtility_Test
# Print test banner
&log(INFO, 'UTILITY MODULE ******************************************************************');
#-------------------------------------------------------------------------------------------------------------------------------
# Create remote
#-------------------------------------------------------------------------------------------------------------------------------
my $oLocal = new BackRest::Remote
(
undef, # Host
undef, # User
undef, # Command
OPTION_DEFAULT_BUFFER_SIZE, # Buffer size
OPTION_DEFAULT_COMPRESS_LEVEL, # Compress level
OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK, # Compress network level
);
#-------------------------------------------------------------------------------------------------------------------------------
# Test config
#-------------------------------------------------------------------------------------------------------------------------------
@ -67,7 +58,14 @@ sub BackRestTestUtility_Test
{
$iRun = 0;
$bCreate = true;
my $oFile = BackRest::File->new();
my $oFile = new BackRest::File
(
undef,
undef,
undef,
$oLocal
);
&log(INFO, "Test config\n");
@ -77,7 +75,8 @@ sub BackRestTestUtility_Test
# Create the test directory
if ($bCreate)
{
BackRestTestUtility_Create();
BackRestTestCommon_Drop();
BackRestTestCommon_Create();
$bCreate = false;
}
@ -96,18 +95,18 @@ sub BackRestTestUtility_Test
# Save the test config
my $strFile = "${strTestPath}/config.cfg";
config_save($strFile, \%oConfig);
ini_save($strFile, \%oConfig);
my $strConfigHash = $oFile->hash(PATH_ABSOLUTE, $strFile);
# Reload the test config
my %oConfigTest;
config_load($strFile, \%oConfigTest);
ini_load($strFile, \%oConfigTest);
# Resave the test config and compare hashes
my $strFileTest = "${strTestPath}/config-test.cfg";
config_save($strFileTest, \%oConfigTest);
ini_save($strFileTest, \%oConfigTest);
my $strConfigTestHash = $oFile->hash(PATH_ABSOLUTE, $strFileTest);
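The test above verifies that ini_save/ini_load round-trips are lossless by hashing the saved file, reloading it, resaving, and comparing hashes. The same idempotence check can be sketched with Python's configparser — an analogy only, not the pg_backrest ini format:

```python
import configparser
import hashlib
import io

def ini_text(config):
    """Serialize a ConfigParser to text so it can be hashed."""
    buf = io.StringIO()
    config.write(buf)
    return buf.getvalue()

original = configparser.ConfigParser()
original['global:general'] = {'thread-max': '3', 'compress': 'y'}
text = ini_text(original)

reloaded = configparser.ConfigParser()
reloaded.read_string(text)

# A lossless round-trip means the resaved text hashes identically.
assert hashlib.sha1(text.encode()).hexdigest() == \
    hashlib.sha1(ini_text(reloaded).encode()).hexdigest()
```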
@ -119,7 +118,7 @@ sub BackRestTestUtility_Test
if (BackRestTestCommon_Cleanup())
{
&log(INFO, 'cleanup');
BackRestTestUtility_Drop();
BackRestTestCommon_Drop();
}
}
}

@ -13,7 +13,8 @@ use Carp;
use File::Basename;
use Getopt::Long;
use Cwd 'abs_path';
use Cwd;
use Pod::Usage;
#use Test::More;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
@ -21,30 +22,89 @@ use BackRest::Utility;
use lib dirname($0) . '/lib';
use BackRestTest::CommonTest;
use BackRestTest::UtilityTest;
use BackRestTest::ConfigTest;
use BackRestTest::FileTest;
use BackRestTest::BackupTest;
####################################################################################################################################
# Usage
####################################################################################################################################
=head1 NAME
test.pl - Simple Postgres Backup and Restore Unit Tests
=head1 SYNOPSIS
test.pl [options]
Test Options:
--module test module to execute:
--module-test execute the specified test in a module
--module-test-run execute only the specified test run
--thread-max max threads to run for backup/restore (default 1)
--dry-run show only the tests that would be executed but don't execute them
--no-cleanup don't clean up after the last test is complete - useful for debugging
--infinite repeat selected tests forever
Configuration Options:
--pgsql-bin path to the psql executables (e.g. /usr/lib/postgresql/9.3/bin/)
--test-path path where tests are executed (defaults to ./test)
--log-level log level to use for tests (defaults to INFO)
--quiet, -q equivalent to --log-level=off
General Options:
--version display version and exit
--help display usage and exit
=cut
####################################################################################################################################
# Command line parameters
####################################################################################################################################
my $strLogLevel = 'off'; # Log level for tests
my $strLogLevel = 'info'; # Log level for tests
my $strModule = 'all';
my $strModuleTest = 'all';
my $iModuleTestRun = undef;
my $iThreadMax = 1;
my $bDryRun = false;
my $bNoCleanup = false;
my $strPgSqlBin;
my $strTestPath;
my $bVersion = false;
my $bHelp = false;
my $bQuiet = false;
my $bInfinite = false;
GetOptions ('pgsql-bin=s' => \$strPgSqlBin,
GetOptions ('q|quiet' => \$bQuiet,
'version' => \$bVersion,
'help' => \$bHelp,
'pgsql-bin=s' => \$strPgSqlBin,
'test-path=s' => \$strTestPath,
'log-level=s' => \$strLogLevel,
'module=s' => \$strModule,
'module-test=s' => \$strModuleTest,
'module-test-run=s' => \$iModuleTestRun,
'thread-max=s' => \$iThreadMax,
'dry-run' => \$bDryRun,
'no-cleanup' => \$bNoCleanup)
or die 'error in command line arguments';
'no-cleanup' => \$bNoCleanup,
'infinite' => \$bInfinite)
or pod2usage(2);
# Display version and exit if requested
if ($bVersion || $bHelp)
{
print 'pg_backrest ' . version_get() . " unit test\n";
if ($bHelp)
{
print "\n";
pod2usage();
}
exit 0;
}
# Test::More->builder->output('/dev/null');
####################################################################################################################################
# Setup
@ -52,7 +112,12 @@ GetOptions ('pgsql-bin=s' => \$strPgSqlBin,
# Set a neutral umask so tests work as expected
umask(0);
# Set console log level to trace for testing
# Set console log level
if ($bQuiet)
{
$strLogLevel = 'off';
}
log_level_set(undef, uc($strLogLevel));
if ($strModuleTest ne 'all' && $strModule eq 'all')
confess "--module-test must be provided for run \"${iModuleTestRun}\"";
}
# Search for psql bin
if (!defined($strPgSqlBin))
{
my @strySearchPath = ('/usr/lib/postgresql/VERSION/bin', '/Library/PostgreSQL/VERSION/bin');
foreach my $strSearchPath (@strySearchPath)
{
for (my $fVersion = 9; $fVersion >= 0; $fVersion -= 1)
{
my $strVersionPath = $strSearchPath;
$strVersionPath =~ s/VERSION/9\.$fVersion/g;
if (-e "${strVersionPath}/initdb")
{
&log(INFO, "found pgsql-bin at ${strVersionPath}\n");
$strPgSqlBin = ${strVersionPath};
}
}
}
if (!defined($strPgSqlBin))
{
confess 'pgsql-bin was not defined and could not be located';
}
}
# Check thread total
if ($iThreadMax < 1 || $iThreadMax > 32)
{
confess 'thread-max must be between 1 and 32';
}
####################################################################################################################################
# &log(INFO, "Testing with test_path = " . BackRestTestCommon_TestPathGet() . ", host = {strHost}, user = {strUser}, " .
# "group = {strGroup}");
my $iRun = 0;
do
{
if ($bInfinite)
{
$iRun++;
&log(INFO, "INFINITE - RUN ${iRun}\n");
}
if ($strModule eq 'all' || $strModule eq 'utility')
{
BackRestTestUtility_Test($strModuleTest);
}
if ($strModule eq 'all' || $strModule eq 'config')
{
BackRestTestConfig_Test($strModuleTest);
}
if ($strModule eq 'all' || $strModule eq 'file')
{
BackRestTestFile_Test($strModuleTest);
}
if ($strModule eq 'all' || $strModule eq 'backup')
{
BackRestTestBackup_Test($strModuleTest, $iThreadMax);
}
}
while ($bInfinite);
if (!$bDryRun)
{
&log(INFO, 'TESTS COMPLETED SUCCESSFULLY (DESPITE ANY ERROR MESSAGES YOU SAW)');
}
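For illustration, the descending-version scan used to locate `psql` above can be isolated into a small standalone routine. This is only a sketch: the `find_pgsql_bin` name and the injected existence-check callback are hypothetical, and unlike the inline loop above (which keeps scanning and ends up with the last match) it returns on the first hit.

```perl
use strict;
use warnings;

# Substitute VERSION in each search-path template with 9.9 down to 9.0
# and return the first bin directory whose initdb exists.
# $rySearchPath - array ref of path templates containing 'VERSION'
# $fnExists     - code ref: returns true if the given file path exists
sub find_pgsql_bin
{
    my ($rySearchPath, $fnExists) = @_;

    foreach my $strSearchPath (@{$rySearchPath})
    {
        for (my $iVersion = 9; $iVersion >= 0; $iVersion--)
        {
            (my $strVersionPath = $strSearchPath) =~ s/VERSION/9.$iVersion/g;

            return $strVersionPath
                if $fnExists->("${strVersionPath}/initdb");
        }
    }

    return undef;
}
```

Passing the existence check in as a code ref keeps the sketch testable without a real PostgreSQL install; in the script itself the check is simply `-e`.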