mirror of https://github.com/pgbackrest/pgbackrest.git synced 2025-01-26 05:27:26 +02:00

v0.30: core restructuring and unit tests

* Complete rewrite of BackRest::File module to use a custom protocol for remote operations and Perl native GZIP and SHA operations.  Compression is performed in threads rather than forked processes.

* Fairly comprehensive unit tests for all the basic operations.  More work to be done here for sure, but then there is always more work to be done on unit tests.

* Removed dependency on Storable and replaced with a custom ini file implementation.

* Added much needed documentation (see INSTALL.md).

* Numerous other changes that can only be identified with a diff.
David Steele 2014-10-05 19:49:30 -04:00
parent 1fa8dbb778
commit 4bc4d97f2b
21 changed files with 7162 additions and 2599 deletions

418
INSTALL.md Normal file

@ -0,0 +1,418 @@
# PgBackRest Installation
## sample ubuntu 12.04 install
1. Starting from a clean install, update the OS:
```
apt-get update
apt-get upgrade (reboot if required)
```
2. Install ssh, git and cpanminus
```
apt-get install ssh
apt-get install git
apt-get install cpanminus
```
3. Install Postgres (instructions from http://www.postgresql.org/download/linux/ubuntu/)
Create the file /etc/apt/sources.list.d/pgdg.list, and add a line for the repository:
```
deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main
```
Then run the following:
```
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
apt-get install postgresql-9.3
apt-get install postgresql-server-dev-9.3
```
4. Install required Perl modules:
```
cpanm JSON
cpanm Moose
cpanm Net::OpenSSH
cpanm DBI
cpanm DBD::Pg
cpanm IPC::System::Simple
cpanm Digest::SHA
cpanm IO::Compress::Gzip
cpanm IO::Uncompress::Gunzip
```
5. Install PgBackRest
PgBackRest can be installed by downloading the most recent release:
https://github.com/dwsteele/pg_backrest/releases
6. To run unit tests:
* Create backrest_dev user
* Set up trusted ssh between the test user account and backrest_dev
* Backrest user and test user must be in the same group
## configuration examples
PgBackRest takes some command-line parameters, but depends on a configuration file for most of the settings. The default location for the configuration file is /etc/pg_backrest.conf.
#### configuring postgres for archiving with backrest
Modify the following settings in postgresql.conf:
```
wal_level = archive
archive_mode = on
archive_command = '/path/to/backrest/bin/pg_backrest.pl --stanza=db archive-push %p'
```
Replace the path with the actual location where PgBackRest was installed. The stanza parameter should be changed to the actual stanza name you used for your database in pg_backrest.conf.
#### simple single host install
This configuration is appropriate for a small installation where backups are being made locally or to a remote file system that is mounted locally.
`/etc/pg_backrest.conf`:
```
[global:command]
psql=/usr/bin/psql
[global:backup]
path=/var/lib/postgresql/backup
[global:retention]
full-retention=2
differential-retention=2
archive-retention-type=diff
archive-retention=2
[db]
path=/var/lib/postgresql/9.3/main
```
#### simple multiple host install
This configuration is appropriate for a small installation where backups are being made remotely. Make sure that postgres@db-host has trusted ssh to backrest@backup-host and vice versa.
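One way to establish that trust is to exchange keys between the two accounts. This is only a sketch - the key type, host names, and any site-specific hardening are assumptions:
```
# on db-host, as the postgres user
ssh-keygen -t rsa
ssh-copy-id backrest@backup-host.mydomain.com

# on backup-host, as the backrest user
ssh-keygen -t rsa
ssh-copy-id postgres@db-host.mydomain.com
```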
`/etc/pg_backrest.conf on the db host`:
```
[global:command]
psql=/usr/bin/psql
[global:backup]
host=backup-host@mydomain.com
user=postgres
path=/var/lib/postgresql/backup
[db]
path=/var/lib/postgresql/9.3/main
```
`/etc/pg_backrest.conf on the backup host`:
```
[global:command]
psql=/usr/bin/psql
[global:backup]
path=/var/lib/postgresql/backup
[global:retention]
full-retention=2
archive-retention-type=full
[db]
host=db-host@mydomain.com
user=postgres
path=/var/lib/postgresql/9.3/main
```
## running
PgBackRest is intended to be run from a scheduler like cron as there is no built-in scheduler. PgBackRest does backup rotation, but it is not concerned with when the backups were created. So if two full backups are configured in retention, PgBackRest will keep two full backups no matter whether they occur two hours apart or two weeks apart. A sample crontab is sketched after the list of operations below.
There are four basic operations:
1. Backup
```
/path/to/pg_backrest.pl --stanza=db --type=full backup
```
Run a `full` backup on the `db` stanza. `--type` can also be set to `incr` or `diff` for incremental or differential backups. However, if no `full` backup exists then a `full` backup will be forced even if `incr` or `diff` was requested.
2. Archive Push
```
/path/to/pg_backrest.pl --stanza=db archive-push %p
```
Accepts an archive file from Postgres and pushes it to the backup. `%p` is how Postgres specifies the location of the file to be archived. This command has no other purpose.
3. Archive Get
```
/path/to/pg_backrest.pl --stanza=db archive-get %f %p
```
Retrieves an archive log from the backup. This is used in the `restore_command` setting of `recovery.conf` to restore a backup to the last archive log, do PITR, or as an alternative to streaming to keep a replica up to date. `%f` is how Postgres specifies the archive log it needs, and `%p` is the location where it should be copied.
4. Backup Expire
```
/path/to/pg_backrest.pl --stanza=db expire
```
Expire (rotate) any backups that exceed the defined retention. Expiration is run after every backup, so there's no need to run this command on its own unless you have reduced retention, usually to free up some space.
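Since scheduling is left to cron, a minimal crontab for the backup user might look like the following. This is only a sketch - the schedule, install path, and stanza name are all examples:
```
# weekly full backup, daily differential backups
30 3 * * 0    /path/to/pg_backrest.pl --stanza=db --type=full backup
30 3 * * 1-6  /path/to/pg_backrest.pl --stanza=db --type=diff backup
```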
## structure
PgBackRest stores files in a way that is easy for users to work with directly. Each backup directory has two files and two subdirectories (a layout sketch follows the list):
1. `backup.manifest` file
Stores information about all the directories, links, and files in the backup. The file is plaintext and should be very clear, but documentation of the format is planned in a future release.
2. `version` file
Contains the PgBackRest version that was used to create the backup.
3. `base` directory
Contains the Postgres data directory as defined by the data_directory setting in postgresql.conf
4. `tablespace` directory
Contains each tablespace in a separate subdirectory. The links in `base/pg_tblspc` are rewritten to this directory.
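Putting the four items above together, a single backup directory might look roughly like this (the backup label format shown is only illustrative):
```
20141005-194930F/       <- one directory per backup
    backup.manifest
    version
    base/               <- copy of the Postgres data directory
    tablespace/         <- one subdirectory per tablespace
```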
## restoring
PgBackRest does not currently have a restore command - this is planned for the near future. However, PgBackRest stores backups in a way that makes restoring very easy. If `compress=n` it is even possible to start Postgres directly on the backup directory.
In order to restore a backup, simply rsync the files from the base backup directory to your data directory (see the sketch below). If you have used compression, then recursively gunzip the files. If you have tablespaces, repeat the process for each tablespace in the backup tablespace directory.
It's good to practice restoring backups in advance of needing to do so.
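A sketch of a manual restore based on the single host sample configuration above. The repository layout, backup label, and paths are assumptions and must be adjusted for your system:
```
# stop Postgres first, then copy the base backup into the (empty) data directory
BACKUP=/var/lib/postgresql/backup/backup/db/20141005-194930F
rsync -av $BACKUP/base/ /var/lib/postgresql/9.3/main/

# if compress=y was used, recursively decompress the copied files
find /var/lib/postgresql/9.3/main -name '*.gz' -exec gunzip {} +
```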
## configuration options
Each section defines important aspects of the backup. All configuration sections below should be prefixed with `global:` as demonstrated in the configuration samples.
#### command section
The command section defines external commands that are used by PgBackRest.
##### psql key
Defines the full path to psql. psql is used to call pg_start_backup() and pg_stop_backup().
```
required: y
example: psql=/usr/bin/psql
```
##### remote key
Defines the file path to pg_backrest_remote.pl.
Required only if the path to pg_backrest_remote.pl is different on the local and remote systems. If not defined, the remote path will be assumed to be the same as the local path.
```
required: n
example: remote=/home/postgres/backrest/bin/pg_backrest_remote.pl
```
#### command-option section
The command-option section allows arbitrary options to be passed to any command in the command section.
##### psql key
Allows command line parameters to be passed to psql.
```
required: n
example: psql=--port=5433
```
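Based on how `config_key_load` substitutes `%option%` in pg_backrest.pl, the command and command-option sections might be combined as follows (a sketch; the `%option%` placeholder must appear in the command for the option to be applied):
```
[global:command]
psql=/usr/bin/psql %option%

[global:command:option]
psql=--port=5433
```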
#### log section
The log section defines logging-related settings. The following log levels are supported:
- `off` - No logging at all (not recommended)
- `error` - Log only errors
- `warn` - Log warnings and errors
- `info` - Log info, warnings, and errors
- `debug` - Log debug, info, warnings, and errors
- `trace` - Log trace (very verbose debugging), debug, info, warnings, and errors
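For example, both levels can be set together in the log section (the values shown are arbitrary):
```
[global:log]
level-file=debug
level-console=info
```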
##### level-file
Sets file log level.
```
default: info
example: level-file=warn
```
##### level-console
Sets console log level.
```
default: error
example: level-console=info
```
#### backup section
The backup section defines settings related to backup and archiving.
##### host
Sets the backup host.
```
required: n (but must be set if user is defined)
example: host=backup.mydomain.com
```
##### user
Sets user account on the backup host.
```
required: n (but must be set if host is defined)
example: user=backrest
```
##### path
Path where backups are stored on the local or remote host.
```
required: y
example: path=/backup/backrest
```
##### compress
Enable gzip compression. Files stored in the backup are compatible with command-line gzip tools.
```
default: y
example: compress=n
```
##### checksum
Enable SHA-1 checksums. Backup checksums are stored in backup.manifest while archive checksums are stored in the filename.
```
default: y
example: checksum=n
```
##### start_fast
Forces an immediate checkpoint (by passing true to the fast parameter of pg_start_backup()) so the backup begins immediately.
```
default: n
example: start_fast=y
```
##### hardlink
Enable hard-linking of files in differential and incremental backups to their full backups. This gives the appearance that each
backup is a full backup. Be careful though, because modifying files that are hard-linked can affect all the backups in the set.
```
default: y
example: hardlink=n
```
##### thread-max
Defines the number of threads to use for backup. Each thread will perform compression and transfer to make the backup run faster, but don't set `thread-max` so high that it impacts database performance.
```
default: 1
example: thread-max=4
```
##### thread-timeout
Maximum amount of time that a backup thread should run. This limits the amount of time that a thread might be stuck due to unforeseen issues during the backup.
```
default: <none>
example: thread-timeout=3600
```
##### archive-required
Are archive logs required to complete the backup? It's a good idea to leave this as the default unless you are using another
method for archiving.
```
default: y
example: archive-required=n
```
#### archive section
The archive section defines parameters for asynchronous archiving. This means that archive files will be stored locally, then a background process will pick them up and move them to the backup.
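A sketch of what async archiving might look like in the configuration file, using the keys documented below (the path and values are examples only):
```
[global:archive]
path=/backup/archive
compress-async=y
archive-max-mb=1024
```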
##### path
Path where archive logs are stored before being asynchronously transferred to the backup. Make sure this is not the same path as the backup is using if the backup is local.
```
required: y
example: path=/backup/archive
```
##### compress-async
When set, archive logs are not compressed immediately, but are instead compressed when they are copied to the backup host. This means that more space will be used on local storage, but the initial archive process will complete more quickly allowing greater throughput from Postgres.
```
default: n
example: compress-async=y
```
##### archive-max-mb
Limits the amount of archive log that will be written locally. After the limit is reached, the following will happen:
1. PgBackRest will notify Postgres that the archive was successfully backed up, then DROP IT.
2. An error will be logged to the console and also to the Postgres log.
3. A stop file will be written in the lock directory and no more archive files will be backed up until it is removed.
If this occurs then the archive log stream will be interrupted and PITR will not be possible past that point. A new backup will be required to regain full restore capability.
The purpose of this feature is to prevent the log volume from filling up at which point Postgres will stop all operation. Better to lose the backup than have the database go down completely.
To start normal archiving again you'll need to remove the stop file which will be located at `${archive-path}/lock/${stanza}-archive.stop` where `${archive-path}` is the path set in the archive section, and `${stanza}` is the backup stanza.
```
required: n
example: archive-max-mb=1024
```
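For example, with `path=/backup/archive` and a stanza named `db`, clearing the stop condition would look something like:
```
rm /backup/archive/lock/db-archive.stop
```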
#### retention section
The retention section defines how long backups will be retained. Expiration only occurs when the number of complete backups exceeds the allowed retention. In other words, if full-retention is set to 2, then there must be 3 complete backups before the oldest will be expired. Make sure you always have enough space for retention + 1 backups.
##### full-retention
Number of full backups to keep. When a full backup expires, all differential and incremental backups associated with the full backup will also expire. When not defined then all full backups will be kept.
```
required: n
example: full-retention=2
```
##### differential-retention
Number of differential backups to keep. When a differential backup expires, all incremental backups associated with the differential backup will also expire. When not defined all differential backups will be kept.
```
required: n
example: differential-retention=3
```
##### archive-retention-type
Type of backup to use for archive retention (full or differential). If set to full, then PgBackRest will keep archive logs for the number of full backups defined by `archive-retention`. If set to differential, then PgBackRest will keep archive logs for the number of differential backups defined by `archive-retention`.
If not defined then archive logs will be kept indefinitely. In general it is not useful to keep archive logs that are older than the oldest backup, but there may be reasons for doing so.
```
required: n
example: archive-retention-type=full
```
##### archive-retention
Number of backups worth of archive log to keep. If not defined, then `full-retention` will be used when `archive-retention-type=full` and `differential-retention` will be used when `archive-retention-type=differential`.
```
required: n
example: archive-retention=2
```
### stanza sections
A stanza defines a backup for a specific database. The stanza section must define the base database path and host/user if the database is remote. Also, any global configuration sections can be overridden to define stanza-specific settings.
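Based on how `config_key_load` in pg_backrest.pl checks a `[stanza:section]` section before falling back to `[global:section]`, a hypothetical per-stanza override might look like:
```
[db]
path=/var/lib/postgresql/9.3/main

[db:backup]
compress=n
thread-max=2
```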
##### host
Sets the database host.
```
required: n (but must be set if user is defined)
example: host=db.mydomain.com
```
##### user
Sets user account on the db host.
```
required: n (but must be set if host is defined)
example: user=postgres
```
##### path
Path to the db data directory (data_directory setting in postgresql.conf).
```
required: y
example: path=/var/postgresql/data
```

README.md

@ -1,53 +1,32 @@
# pg_backrest
# PgBackRest - Simple Postgres Backup & Restore
Simple Postgres Backup and Restore
## planned for next release
* Capture STDERR in file functions - start with file_list_get() - IN PROGRESS.
## feature backlog
* Move backups to be removed to temp before deleting.
* Async archive-get.
* Database restore.
* --version param (with VERSION file written to directory).
* Threading for archive-get and archive-put.
* Add configurable sleep to archiver process to reduce ssh connections.
* Fix bug where .backup files written into old directories can cause the archive process to error.
* Default restore.conf is written to each backup.
* Able to set timeout on ssh connection in config file.
## required perl modules
Config::IniFiles
Moose
IPC::System::Simple
Net::OpenSSH
JSON
IPC::Open3
PgBackRest aims to be a simple backup and restore system that can seamlessly scale up to the largest databases and workloads.
## release notes
### v0.19: Improved error reporting/handling
### v0.30: core restructuring and unit tests
* Complete rewrite of BackRest::File module to use a custom protocol for remote operations and Perl native GZIP and SHA operations. Compression is performed in threads rather than forked processes.
* Fairly comprehensive unit tests for all the basic operations. More work to be done here for sure, but then there is always more work to be done on unit tests.
* Removed dependency on Storable and replaced with a custom ini file implementation.
* Added much needed documentation (see INSTALL.md).
* Numerous other changes that can only be identified with a diff.
### v0.19: improved error reporting/handling
* Working on improving error handling in the file object. This is not complete, but works well enough to find a few errors that have been causing us problems (notably, find is occasionally failing building the archive async manifest when system is under load).
* Found and squashed a nasty bug where file_copy was defaulted to ignore errors. There was also an issue in file_exists that was causing the test to fail when the file actually did exist. Together they could have resulted in a corrupt backup with no errors, though it is very unlikely.
### v0.18: Return soft error from archive-get when file is missing
### v0.18: return soft error from archive-get when file is missing
* The archive-get function returns a 1 when the archive file is missing to differentiate from hard errors (ssh connection failure, file copy error, etc.) This lets Postgres know that the archive stream has terminated normally. However, this does not take into account possible holes in the archive stream.
### v0.17: Warn when archive directories cannot be deleted
### v0.17: warn when archive directories cannot be deleted
* If an archive directory which should be empty could not be deleted backrest was throwing an error. There's a good fix for that coming, but for the time being it has been changed to a warning so processing can continue. This was impacting backups as sometimes the final archive file would not get pushed if the first archive file had been in a different directory (plus some bad luck).
@ -55,13 +34,13 @@ IPC::Open3
* Added RequestTTY=yes to ssh sessions. Hoping this will prevent random lockups.
### v0.15: Added archive-get
### v0.15: added archive-get
* Added archive-get functionality to aid in restores.
* Added option to force a checkpoint when starting the backup (start_fast=y).
### v0.11: Minor fixes
### v0.11: minor fixes
Tweaking a few settings after running backups for about a month.
@ -69,7 +48,7 @@ Tweaking a few settings after running backups for about a month.
* Changed lock file conflicts on backup and expire commands to ERROR. They were set to DEBUG due to a copy-and-paste from the archive locks.
### v0.10: Backup and archiving are functional
### v0.10: backup and archiving are functional
This version has been put into production at Resonate, so it does work, but there are a number of major caveats.
@ -85,4 +64,10 @@ This version has been put into production at Resonate, so it does work, but ther
* Absolutely no documentation (outside the code). Well, excepting these release notes.
* Lots of other little things and not so little things. Much refactoring to follow.
* Lots of other little things and not so little things. Much refactoring to follow.
## recognition
Primary recognition goes to Stephen Frost for all his valuable advice and criticism during the development of PgBackRest. It's a far better piece of software than it would have been without him. Any mistakes should be blamed on me alone.
Resonate (http://www.resonateinsights.com) also contributed to the development of PgBackRest and allowed me to install early (but well tested) versions as their primary Postgres backup solution. Works so far!

1
VERSION Normal file

@ -0,0 +1 @@
0.30

711
bin/pg_backrest.pl Executable file

@ -0,0 +1,711 @@
#!/usr/bin/perl
####################################################################################################################################
# pg_backrest.pl - Simple Postgres Backup and Restore
####################################################################################################################################
####################################################################################################################################
# Perl includes
####################################################################################################################################
use threads;
use strict;
use warnings;
use Carp;
use File::Basename;
use Getopt::Long;
use Pod::Usage;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::File;
use BackRest::Backup;
use BackRest::Db;
####################################################################################################################################
# Usage
####################################################################################################################################
=head1 NAME
pg_backrest.pl - Simple Postgres Backup and Restore
=head1 SYNOPSIS
pg_backrest.pl [options] [operation]
Operation:
archive-get retrieve an archive file from backup
archive-push push an archive file to backup
backup backup a cluster
expire expire old backups (automatically run after backup)
General Options:
--stanza stanza (cluster) to operate on (currently required for all operations)
--config alternate path for pg_backrest.conf (defaults to /etc/pg_backrest.conf)
--version display version and exit
--help display usage and exit
Backup Options:
--type type of backup to perform (full, diff, incr)
=cut
####################################################################################################################################
# Operation constants - basic operations that are allowed in backrest
####################################################################################################################################
use constant
{
OP_ARCHIVE_GET => 'archive-get',
OP_ARCHIVE_PUSH => 'archive-push',
OP_BACKUP => 'backup',
OP_EXPIRE => 'expire'
};
####################################################################################################################################
# Configuration constants - configuration sections and keys
####################################################################################################################################
use constant
{
CONFIG_SECTION_COMMAND => 'command',
CONFIG_SECTION_COMMAND_OPTION => 'command:option',
CONFIG_SECTION_LOG => 'log',
CONFIG_SECTION_BACKUP => 'backup',
CONFIG_SECTION_ARCHIVE => 'archive',
CONFIG_SECTION_RETENTION => 'retention',
CONFIG_SECTION_STANZA => 'stanza',
CONFIG_KEY_USER => 'user',
CONFIG_KEY_HOST => 'host',
CONFIG_KEY_PATH => 'path',
CONFIG_KEY_THREAD_MAX => 'thread-max',
CONFIG_KEY_THREAD_TIMEOUT => 'thread-timeout',
CONFIG_KEY_HARDLINK => 'hardlink',
CONFIG_KEY_ARCHIVE_REQUIRED => 'archive-required',
CONFIG_KEY_ARCHIVE_MAX_MB => 'archive-max-mb',
CONFIG_KEY_START_FAST => 'start-fast',
CONFIG_KEY_COMPRESS_ASYNC => 'compress-async',
CONFIG_KEY_LEVEL_FILE => 'level-file',
CONFIG_KEY_LEVEL_CONSOLE => 'level-console',
CONFIG_KEY_COMPRESS => 'compress',
CONFIG_KEY_CHECKSUM => 'checksum',
CONFIG_KEY_PSQL => 'psql',
CONFIG_KEY_REMOTE => 'remote',
CONFIG_KEY_FULL_RETENTION => 'full-retention',
CONFIG_KEY_DIFFERENTIAL_RETENTION => 'differential-retention',
CONFIG_KEY_ARCHIVE_RETENTION_TYPE => 'archive-retention-type',
CONFIG_KEY_ARCHIVE_RETENTION => 'archive-retention'
};
####################################################################################################################################
# Command line parameters
####################################################################################################################################
my $strConfigFile; # Configuration file
my $strStanza; # Stanza in the configuration file to load
my $strType; # Type of backup: full, differential (diff), incremental (incr)
my $bVersion = false; # Display version and exit
my $bHelp = false; # Display help and exit
# Test parameters - not for general use
my $bNoFork = false; # Prevents the archive process from forking when local archiving is enabled
my $bTest = false; # Enters test mode - not harmful in anyway, but adds special logging and pauses for unit testing
my $iTestDelay = 5; # Amount of time to delay after hitting a test point (the default would not be enough for manual tests)
GetOptions ('config=s' => \$strConfigFile,
'stanza=s' => \$strStanza,
'type=s' => \$strType,
'version' => \$bVersion,
'help' => \$bHelp,
# Test parameters - not for general use (and subject to change without notice)
'no-fork' => \$bNoFork,
'test' => \$bTest,
'test-delay=s' => \$iTestDelay)
or pod2usage(2);
# Display version and exit if requested
if ($bVersion || $bHelp)
{
print 'pg_backrest ' . version_get() . "\n";
if (!$bHelp)
{
exit 0;
}
}
# Display help and exit if requested
if ($bHelp)
{
print "\n";
pod2usage();
}
# Set test parameters
test_set($bTest, $iTestDelay);
####################################################################################################################################
# Global variables
####################################################################################################################################
my %oConfig; # Configuration hash
my $oRemote; # Remote object
my $strRemote; # Defines which side is remote, DB or BACKUP
####################################################################################################################################
# CONFIG_LOAD - Get a value from the config and be sure that it is defined (unless bRequired is false)
####################################################################################################################################
sub config_key_load
{
my $strSection = shift;
my $strKey = shift;
my $bRequired = shift;
my $strDefault = shift;
# Default is that the key is not required
if (!defined($bRequired))
{
$bRequired = false;
}
my $strValue;
# Look in the default stanza section
if ($strSection eq CONFIG_SECTION_STANZA)
{
$strValue = $oConfig{"${strStanza}"}{"${strKey}"};
}
# Else look in the supplied section
else
{
# First check the stanza section
$strValue = $oConfig{"${strStanza}:${strSection}"}{"${strKey}"};
# If the stanza section value is undefined then check global
if (!defined($strValue))
{
$strValue = $oConfig{"global:${strSection}"}{"${strKey}"};
}
}
if (!defined($strValue) && $bRequired)
{
if (defined($strDefault))
{
return $strDefault;
}
confess &log(ERROR, 'config value ' . (defined($strSection) ? $strSection : '[stanza]') . "->${strKey} is undefined");
}
if ($strSection eq CONFIG_SECTION_COMMAND)
{
my $strOption = config_key_load(CONFIG_SECTION_COMMAND_OPTION, $strKey);
if (defined($strOption))
{
$strValue =~ s/\%option\%/${strOption}/g;
}
}
return $strValue;
}
####################################################################################################################################
# REMOTE_EXIT - Close the remote object if it exists
####################################################################################################################################
sub remote_exit
{
my $iExitCode = shift;
if (defined($oRemote))
{
$oRemote->thread_kill()
}
if (defined($iExitCode))
{
exit $iExitCode;
}
}
####################################################################################################################################
# REMOTE_GET - Get the remote object or create it if not exists
####################################################################################################################################
sub remote_get()
{
if (!defined($oRemote) && $strRemote ne REMOTE_NONE)
{
$oRemote = BackRest::Remote->new
(
strHost => config_key_load($strRemote eq REMOTE_DB ? CONFIG_SECTION_STANZA : CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST, true),
strUser => config_key_load($strRemote eq REMOTE_DB ? CONFIG_SECTION_STANZA : CONFIG_SECTION_BACKUP, CONFIG_KEY_USER, true),
strCommand => config_key_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_REMOTE, true)
);
}
return $oRemote;
}
####################################################################################################################################
# SAFE_EXIT - terminate all SSH sessions when the script is terminated
####################################################################################################################################
sub safe_exit
{
remote_exit();
my $iTotal = backup_thread_kill();
confess &log(ERROR, "process was terminated on signal, ${iTotal} threads stopped");
}
$SIG{TERM} = \&safe_exit;
$SIG{HUP} = \&safe_exit;
$SIG{INT} = \&safe_exit;
####################################################################################################################################
# START EVAL BLOCK TO CATCH ERRORS AND STOP THREADS
####################################################################################################################################
eval {
####################################################################################################################################
# START MAIN
####################################################################################################################################
# Get the operation
my $strOperation = $ARGV[0];
# Validate the operation
if (!defined($strOperation))
{
confess &log(ERROR, 'operation is not defined');
}
if ($strOperation ne OP_ARCHIVE_GET &&
$strOperation ne OP_ARCHIVE_PUSH &&
$strOperation ne OP_BACKUP &&
$strOperation ne OP_EXPIRE)
{
confess &log(ERROR, "invalid operation ${strOperation}");
}
# Type should only be specified for backups
if (defined($strType) && $strOperation ne OP_BACKUP)
{
confess &log(ERROR, 'type can only be specified for the backup operation')
}
####################################################################################################################################
# LOAD CONFIG FILE
####################################################################################################################################
if (!defined($strConfigFile))
{
$strConfigFile = '/etc/pg_backrest.conf';
}
config_load($strConfigFile, \%oConfig);
# Load and check the cluster
if (!defined($strStanza))
{
confess 'a backup stanza must be specified';
}
# Set the log levels
log_level_set(uc(config_key_load(CONFIG_SECTION_LOG, CONFIG_KEY_LEVEL_FILE, true, INFO)),
uc(config_key_load(CONFIG_SECTION_LOG, CONFIG_KEY_LEVEL_CONSOLE, true, ERROR)));
####################################################################################################################################
# DETERMINE IF THERE IS A REMOTE
####################################################################################################################################
# First check if backup is remote
if (defined(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST)))
{
$strRemote = REMOTE_BACKUP;
}
# Else check if db is remote
elsif (defined(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST)))
{
# Don't allow both sides to be remote
if (defined($strRemote))
{
confess &log(ERROR, 'db and backup cannot both be configured as remote');
}
$strRemote = REMOTE_DB;
}
else
{
$strRemote = REMOTE_NONE;
}
####################################################################################################################################
# ARCHIVE-PUSH Command
####################################################################################################################################
if ($strOperation eq OP_ARCHIVE_PUSH)
{
# Make sure the archive push operation happens on the db side
if ($strRemote eq REMOTE_DB)
{
confess &log(ERROR, 'archive-push operation must run on the db host');
}
# If an archive section has been defined, use that instead of the backup section when operation is OP_ARCHIVE_PUSH
my $bArchiveLocal = defined(config_key_load(CONFIG_SECTION_ARCHIVE, CONFIG_KEY_PATH));
my $strSection = $bArchiveLocal ? CONFIG_SECTION_ARCHIVE : CONFIG_SECTION_BACKUP;
my $strArchivePath = config_key_load($strSection, CONFIG_KEY_PATH);
# Get checksum flag
my $bChecksum = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_CHECKSUM, true, 'y') eq 'y' ? true : false;
# Get the async compress flag. If compress_async=y then compression is off for the initial push when archiving locally
my $bCompressAsync = false;
if ($bArchiveLocal)
{
$bCompressAsync = config_key_load($strSection, CONFIG_KEY_COMPRESS_ASYNC, true, 'n') eq 'n' ? false : true;
}
# If logging locally then create the stop archiving file name
my $strStopFile;
if ($bArchiveLocal)
{
$strStopFile = "${strArchivePath}/lock/${strStanza}-archive.stop";
}
# If an archive file is defined, then push it
if (defined($ARGV[1]))
{
# If the stop file exists then discard the archive log
if (defined($strStopFile))
{
if (-e $strStopFile)
{
&log(ERROR, "archive stop file (${strStopFile}) exists , discarding " . basename($ARGV[1]));
remote_exit(0);
}
}
# Get the compress flag
my $bCompress = $bCompressAsync ? false : config_key_load($strSection, CONFIG_KEY_COMPRESS, true, 'y') eq 'y' ? true : false;
# Create the file object
my $oFile = BackRest::File->new
(
strStanza => $strStanza,
strRemote => $bArchiveLocal ? REMOTE_NONE : $strRemote,
oRemote => $bArchiveLocal ? undef : remote_get(),
strBackupPath => config_key_load($strSection, CONFIG_KEY_PATH, true)
);
# Init backup
backup_init
(
undef,
$oFile,
undef,
$bCompress,
undef,
!$bChecksum
);
&log(INFO, 'pushing archive log ' . $ARGV[1] . ($bArchiveLocal ? ' asynchronously' : ''));
archive_push(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_PATH), $ARGV[1]);
# Exit if we are archiving local but no backup host has been defined
if (!($bArchiveLocal && defined(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST))))
{
remote_exit(0);
}
# Fork and exit the parent process so the async process can continue
if (!$bNoFork)
{
if (fork())
{
remote_exit(0);
}
}
# Else the no-fork flag has been specified for testing
else
{
&log(INFO, 'No fork on archive local for TESTING');
}
}
# If no backup host is defined it makes no sense to run archive-push without a specified archive file so throw an error
if (!defined(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST)))
{
&log(ERROR, 'archive-push called without an archive file or backup host');
}
&log(INFO, 'starting async archive-push');
# Create a lock file to make sure async archive-push does not run more than once
my $strLockPath = "${strArchivePath}/lock/${strStanza}-archive.lock";
if (!lock_file_create($strLockPath))
{
&log(DEBUG, 'archive-push process is already running - exiting');
remote_exit(0);
}
# Build the basic command string that will be used to modify the command during processing
my $strCommand = $^X . ' ' . $0 . " --stanza=${strStanza}";
# Get the new operational flags
my $bCompress = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_COMPRESS, true, 'y') eq 'y' ? true : false;
my $iArchiveMaxMB = config_key_load(CONFIG_SECTION_ARCHIVE, CONFIG_KEY_ARCHIVE_MAX_MB);
# eval
# {
# Create the file object
my $oFile = BackRest::File->new
(
strStanza => $strStanza,
strRemote => $strRemote,
oRemote => remote_get(),
strBackupPath => config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true)
);
# Init backup
backup_init
(
undef,
$oFile,
undef,
$bCompress,
undef,
!$bChecksum,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
undef,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT)
);
# Call the archive_xfer function and continue to loop as long as there are files to process
my $iLogTotal;
while (!defined($iLogTotal) || $iLogTotal > 0)
{
$iLogTotal = archive_xfer($strArchivePath . "/archive/${strStanza}", $strStopFile, $strCommand, $iArchiveMaxMB);
if ($iLogTotal > 0)
{
&log(DEBUG, "${iLogTotal} archive logs were transferred, calling archive_xfer() again");
}
else
{
&log(DEBUG, 'no more logs to transfer - exiting');
}
}
#
# };
# # If there were errors above then start compressing
# if ($@)
# {
# if ($bCompressAsync)
# {
# &log(ERROR, "error during transfer: $@");
# &log(WARN, "errors during transfer, starting compression");
#
# # Run file_init_archive - this is the minimal config needed to run archive pulling !!! need to close the old file
# my $oFile = BackRest::File->new
# (
# # strStanza => $strStanza,
# # bNoCompression => false,
# # strBackupPath => config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true),
# # strCommand => $0,
# # strCommandCompress => config_key_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_COMPRESS, $bCompress),
# # strCommandDecompress => config_key_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_DECOMPRESS, $bCompress)
# );
#
# backup_init
# (
# undef,
# $oFile,
# undef,
# $bCompress,
# undef,
# !$bChecksum,
# config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
# undef,
# config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT)
# );
#
# archive_compress($strArchivePath . "/archive/${strStanza}", $strCommand, 256);
# }
# else
# {
# confess $@;
# }
# }
lock_file_remove();
remote_exit(0);
}
####################################################################################################################################
# ARCHIVE-GET Command
####################################################################################################################################
if ($strOperation eq OP_ARCHIVE_GET)
{
# Make sure the archive file is defined
if (!defined($ARGV[1]))
{
confess &log(ERROR, 'archive file not provided');
}
# Make sure the destination file is defined
if (!defined($ARGV[2]))
{
confess &log(ERROR, 'destination file not provided');
}
# Init the file object
my $oFile = BackRest::File->new
(
strStanza => $strStanza,
strRemote => $strRemote,
oRemote => remote_get(),
strBackupPath => config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true)
);
# Init the backup object
backup_init
(
undef,
$oFile
);
# Info for the Postgres log
&log(INFO, 'getting archive log ' . $ARGV[1]);
# Get the archive file
remote_exit(archive_get(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_PATH), $ARGV[1], $ARGV[2]));
}
####################################################################################################################################
# OPEN THE LOG FILE
####################################################################################################################################
if (defined(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST)))
{
confess &log(ASSERT, 'backup/expire operations must be performed locally on the backup server');
}
log_file_set(config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true) . "/log/${strStanza}");
####################################################################################################################################
# GET MORE CONFIG INFO
####################################################################################################################################
# Make sure backup and expire operations happen on the backup side
if ($strRemote eq REMOTE_BACKUP)
{
confess &log(ERROR, 'backup and expire operations must run on the backup host');
}
# Set the backup type
if (!defined($strType))
{
$strType = 'incremental';
}
elsif ($strType eq 'diff')
{
$strType = 'differential';
}
elsif ($strType eq 'incr')
{
$strType = 'incremental';
}
elsif ($strType ne 'full' && $strType ne 'differential' && $strType ne 'incremental')
{
confess &log(ERROR, 'backup type must be full, differential (diff), incremental (incr)');
}
# Get the operational flags
my $bCompress = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_COMPRESS, true, 'y') eq 'y' ? true : false;
my $bChecksum = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_CHECKSUM, true, 'y') eq 'y' ? true : false;
# Set the lock path
my $strLockPath = config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true) . "/lock/${strStanza}-${strOperation}.lock";
if (!lock_file_create($strLockPath))
{
&log(ERROR, "backup process is already running for stanza ${strStanza} - exiting");
remote_exit(0);
}
# Run file_init_archive - the rest of the file config required for backup and restore
my $oFile = BackRest::File->new
(
strStanza => $strStanza,
strRemote => $strRemote,
oRemote => remote_get(),
strBackupPath => config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true)
);
my $oDb = BackRest::Db->new
(
strDbUser => config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_USER),
strDbHost => config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST),
strCommandPsql => config_key_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_PSQL),
oDbSSH => $oFile->{oDbSSH}
);
# Run backup_init - parameters required for backup and restore operations
backup_init
(
$oDb,
$oFile,
$strType,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_COMPRESS, true, 'y') eq 'y' ? true : false,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HARDLINK, true, 'y') eq 'y' ? true : false,
!$bChecksum,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_ARCHIVE_REQUIRED, true, 'y') eq 'y' ? true : false,
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT),
$bTest,
$iTestDelay
);
####################################################################################################################################
# BACKUP
####################################################################################################################################
if ($strOperation eq OP_BACKUP)
{
backup(config_key_load(CONFIG_SECTION_STANZA, CONFIG_KEY_PATH),
config_key_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_START_FAST, true, 'n') eq 'y' ? true : false);
$strOperation = OP_EXPIRE;
}
####################################################################################################################################
# EXPIRE
####################################################################################################################################
if ($strOperation eq OP_EXPIRE)
{
backup_expire
(
$oFile->path_get(PATH_BACKUP_CLUSTER),
config_key_load(CONFIG_SECTION_RETENTION, CONFIG_KEY_FULL_RETENTION),
config_key_load(CONFIG_SECTION_RETENTION, CONFIG_KEY_DIFFERENTIAL_RETENTION),
config_key_load(CONFIG_SECTION_RETENTION, CONFIG_KEY_ARCHIVE_RETENTION_TYPE),
config_key_load(CONFIG_SECTION_RETENTION, CONFIG_KEY_ARCHIVE_RETENTION)
);
lock_file_remove();
}
remote_exit(0);
};
####################################################################################################################################
# CHECK FOR ERRORS AND STOP THREADS
####################################################################################################################################
if ($@)
{
remote_exit();
confess $@;
}

183
bin/pg_backrest_remote.pl Executable file

@ -0,0 +1,183 @@
#!/usr/bin/perl
####################################################################################################################################
# pg_backrest_remote.pl - Simple Postgres Backup and Restore Remote
####################################################################################################################################
####################################################################################################################################
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use File::Basename;
use Getopt::Long;
use Carp;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::File;
use BackRest::Remote;
use BackRest::Exception;
####################################################################################################################################
# Operation constants
####################################################################################################################################
use constant
{
OP_NOOP => 'noop',
OP_EXIT => 'exit'
};
####################################################################################################################################
# PARAM_GET - helper function that returns the param or an error if required and it does not exist
####################################################################################################################################
sub param_get
{
my $oParamHashRef = shift;
my $strParam = shift;
my $bRequired = shift;
my $strValue = ${$oParamHashRef}{$strParam};
if (!defined($strValue) && (!defined($bRequired) || $bRequired))
{
confess "${strParam} must be defined";
}
return $strValue;
}
####################################################################################################################################
# START MAIN
####################################################################################################################################
# Turn off logging
log_level_set(OFF, OFF);
# Create the remote object
my $oRemote = BackRest::Remote->new();
# Create the file object
my $oFile = BackRest::File->new
(
oRemote => $oRemote
);
# Write the greeting so remote process knows who we are
$oRemote->greeting_write();
# Command string
my $strCommand = OP_NOOP;
# Loop until the exit command is received
while ($strCommand ne OP_EXIT)
{
my %oParamHash;
$strCommand = $oRemote->command_read(\%oParamHash);
eval
{
# Copy a file to STDOUT
if ($strCommand eq OP_FILE_COPY_OUT)
{
$oFile->copy(PATH_ABSOLUTE, param_get(\%oParamHash, 'source_file'),
PIPE_STDOUT, undef,
param_get(\%oParamHash, 'source_compressed'), undef);
$oRemote->output_write();
}
# Copy a file from STDIN
elsif ($strCommand eq OP_FILE_COPY_IN)
{
$oFile->copy(PIPE_STDIN, undef,
PATH_ABSOLUTE, param_get(\%oParamHash, 'destination_file'),
undef, param_get(\%oParamHash, 'destination_compress'),
undef, undef,
param_get(\%oParamHash, 'permission', false),
param_get(\%oParamHash, 'destination_path_create'));
$oRemote->output_write();
}
# List files in a path
elsif ($strCommand eq OP_FILE_LIST)
{
my $strOutput;
foreach my $strFile ($oFile->list(PATH_ABSOLUTE, param_get(\%oParamHash, 'path'),
param_get(\%oParamHash, 'expression', false),
param_get(\%oParamHash, 'sort_order'),
param_get(\%oParamHash, 'ignore_missing')))
{
if (defined($strOutput))
{
$strOutput .= "\n";
}
$strOutput .= $strFile;
}
$oRemote->output_write($strOutput);
}
# Create a path
elsif ($strCommand eq OP_FILE_PATH_CREATE)
{
$oFile->path_create(PATH_ABSOLUTE, param_get(\%oParamHash, 'path'), param_get(\%oParamHash, 'permission', false));
$oRemote->output_write();
}
# Check if a file/path exists
elsif ($strCommand eq OP_FILE_EXISTS)
{
$oRemote->output_write($oFile->exists(PATH_ABSOLUTE, param_get(\%oParamHash, 'path')) ? 'Y' : 'N');
}
# Copy a file locally
elsif ($strCommand eq OP_FILE_COPY)
{
$oRemote->output_write(
$oFile->copy(PATH_ABSOLUTE, param_get(\%oParamHash, 'source_file'),
PATH_ABSOLUTE, param_get(\%oParamHash, 'destination_file'),
param_get(\%oParamHash, 'source_compressed'),
param_get(\%oParamHash, 'destination_compress'),
param_get(\%oParamHash, 'ignore_missing_source', false),
undef,
param_get(\%oParamHash, 'permission', false),
param_get(\%oParamHash, 'destination_path_create')) ? 'Y' : 'N');
}
# Generate a manifest
elsif ($strCommand eq OP_FILE_MANIFEST)
{
my %oManifestHash;
$oFile->manifest(PATH_ABSOLUTE, param_get(\%oParamHash, 'path'), \%oManifestHash);
my $strOutput = "name\ttype\tuser\tgroup\tpermission\tmodification_time\tinode\tsize\tlink_destination";
foreach my $strName (sort(keys $oManifestHash{name}))
{
$strOutput .= "\n${strName}\t" .
$oManifestHash{name}{"${strName}"}{type} . "\t" .
(defined($oManifestHash{name}{"${strName}"}{user}) ? $oManifestHash{name}{"${strName}"}{user} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{group}) ? $oManifestHash{name}{"${strName}"}{group} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{permission}) ? $oManifestHash{name}{"${strName}"}{permission} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{modification_time}) ?
$oManifestHash{name}{"${strName}"}{modification_time} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{inode}) ? $oManifestHash{name}{"${strName}"}{inode} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{size}) ? $oManifestHash{name}{"${strName}"}{size} : "") . "\t" .
(defined($oManifestHash{name}{"${strName}"}{link_destination}) ?
$oManifestHash{name}{"${strName}"}{link_destination} : "");
}
$oRemote->output_write($strOutput);
}
# Continue if noop or exit
elsif ($strCommand ne OP_NOOP && $strCommand ne OP_EXIT)
{
confess "invalid command: ${strCommand}";
}
};
# Process errors
if ($@)
{
$oRemote->error_write($@);
}
}

File diff suppressed because it is too large

lib/BackRest/Db.pm

@ -1,20 +1,20 @@
####################################################################################################################################
# DB MODULE
####################################################################################################################################
package pg_backrest_db;
package BackRest::Db;
use threads;
use Moose;
use strict;
use warnings;
use Carp;
use Moose;
use Net::OpenSSH;
use File::Basename;
use IPC::System::Simple qw(capture);
use lib dirname($0);
use pg_backrest_utility;
use BackRest::Utility;
# Command strings
has strCommandPsql => (is => 'bare'); # PSQL command
@ -35,8 +35,8 @@ sub BUILD
# Connect SSH object if db host is defined
if (defined($self->{strDbHost}) && !defined($self->{oDbSSH}))
{
my $strOptionSSHRequestTTY = "RequestTTY=yes";
my $strOptionSSHRequestTTY = 'RequestTTY=yes';
&log(TRACE, "connecting to database ssh host $self->{strDbHost}");
# !!! This could be improved by redirecting stderr to a file to get a better error message
@ -97,24 +97,25 @@ sub psql_execute
sub tablespace_map_get
{
my $self = shift;
my $oHashRef = shift;
return data_hash_build("oid\tname\n" . $self->psql_execute(
"copy (select oid, spcname from pg_tablespace) to stdout"), "\t");
data_hash_build($oHashRef, "oid\tname\n" . $self->psql_execute(
'copy (select oid, spcname from pg_tablespace) to stdout'), "\t");
}
####################################################################################################################################
# VERSION_GET
# DB_VERSION_GET
####################################################################################################################################
sub version_get
sub db_version_get
{
my $self = shift;
if (defined($self->{fVersion}))
{
return $self->{fVersion};
}
$self->{fVersion} =
$self->{fVersion} =
trim($self->psql_execute("copy (select (regexp_matches(split_part(version(), ' ', 2), '^[0-9]+\.[0-9]+'))[1]) to stdout"));
&log(DEBUG, "database version is $self->{fVersion}");
@ -131,9 +132,9 @@ sub backup_start
my $strLabel = shift;
my $bStartFast = shift;
return trim($self->psql_execute("set client_min_messages = 'warning';" .
return trim($self->psql_execute("set client_min_messages = 'warning';" .
"copy (select pg_xlogfile_name(xlog) from pg_start_backup('${strLabel}'" .
($bStartFast ? ", true" : "") . ") as xlog) to stdout"));
($bStartFast ? ', true' : '') . ') as xlog) to stdout'));
}
####################################################################################################################################
@ -148,4 +149,4 @@ sub backup_stop
}
no Moose;
__PACKAGE__->meta->make_immutable;
__PACKAGE__->meta->make_immutable;

38
lib/BackRest/Exception.pm Normal file

@ -0,0 +1,38 @@
####################################################################################################################################
# EXCEPTION MODULE
####################################################################################################################################
package BackRest::Exception;
use threads;
use strict;
use warnings;
use Carp;
use Moose;
# Module variables
has iCode => (is => 'bare'); # Exception code
has strMessage => (is => 'bare'); # Exception message
####################################################################################################################################
# CODE
####################################################################################################################################
sub code
{
my $self = shift;
return $self->{iCode};
}
####################################################################################################################################
# MESSAGE
####################################################################################################################################
sub message
{
my $self = shift;
return $self->{strMessage};
}
no Moose;
__PACKAGE__->meta->make_immutable;

1435
lib/BackRest/File.pm Normal file

File diff suppressed because it is too large

821
lib/BackRest/Remote.pm Normal file

@ -0,0 +1,821 @@
####################################################################################################################################
# REMOTE MODULE
####################################################################################################################################
package BackRest::Remote;
use threads;
use strict;
use warnings;
use Carp;
use Moose;
use Thread::Queue;
use Net::OpenSSH;
use File::Basename;
use IO::Handle;
use IO::String;
use POSIX ':sys_wait_h';
use IO::Compress::Gzip qw(gzip $GzipError);
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);
use lib dirname($0) . '/../lib';
use BackRest::Exception;
use BackRest::Utility;
####################################################################################################################################
# Remote xfer default block size constant
####################################################################################################################################
use constant
{
DEFAULT_BLOCK_SIZE => 1048576
};
####################################################################################################################################
# Module variables
####################################################################################################################################
# Protocol strings
has strGreeting => (is => 'ro', default => 'PG_BACKREST_REMOTE');
# Command strings
has strCommand => (is => 'bare');
# Module variables
has strHost => (is => 'bare'); # Remote host
has strUser => (is => 'bare'); # Remote user
has oSSH => (is => 'bare'); # SSH object
# Process variables
has pId => (is => 'bare'); # Process Id
has hIn => (is => 'bare'); # Input stream
has hOut => (is => 'bare'); # Output stream
has hErr => (is => 'bare'); # Error stream
# Thread variables
has iThreadIdx => (is => 'bare'); # Thread index
has oThread => (is => 'bare'); # Thread object
has oThreadQueue => (is => 'bare'); # Thread queue object
has oThreadResult => (is => 'bare'); # Thread result object
# Block size
has iBlockSize => (is => 'bare', default => DEFAULT_BLOCK_SIZE); # Set block size to default
####################################################################################################################################
# CONSTRUCTOR
####################################################################################################################################
sub BUILD
{
my $self = shift;
$self->{strGreeting} .= ' ' . version_get();
if (defined($self->{strHost}))
{
# User must be defined
if (!defined($self->{strUser}))
{
confess &log(ASSERT, 'strUser must be defined');
}
# Command must be defined
if (!defined($self->{strCommand}))
{
confess &log(ASSERT, 'strCommand must be defined');
}
# Set SSH Options
my $strOptionSSHRequestTTY = 'RequestTTY=yes';
my $strOptionSSHCompression = 'Compression=no';
&log(TRACE, 'connecting to remote ssh host ' . $self->{strHost});
# Make SSH connection
$self->{oSSH} = Net::OpenSSH->new($self->{strHost}, timeout => 300, user => $self->{strUser},
master_opts => [-o => $strOptionSSHCompression, -o => $strOptionSSHRequestTTY]);
$self->{oSSH}->error and confess &log(ERROR, "unable to connect to $self->{strHost}: " . $self->{oSSH}->error);
# Execute remote command
($self->{hIn}, $self->{hOut}, $self->{hErr}, $self->{pId}) = $self->{oSSH}->open3($self->{strCommand});
$self->greeting_read();
}
$self->{oThreadQueue} = Thread::Queue->new();
$self->{oThreadResult} = Thread::Queue->new();
$self->{oThread} = threads->create(\&binary_xfer_thread, $self);
}
####################################################################################################################################
# thread_kill
####################################################################################################################################
sub thread_kill
{
my $self = shift;
if (defined($self->{oThread}))
{
$self->{oThreadQueue}->enqueue(undef);
$self->{oThread}->join();
$self->{oThread} = undef;
}
}
####################################################################################################################################
# DESTRUCTOR
####################################################################################################################################
sub DEMOLISH
{
my $self = shift;
$self->thread_kill();
}
####################################################################################################################################
# CLONE
####################################################################################################################################
sub clone
{
my $self = shift;
my $iThreadIdx = shift;
return BackRest::Remote->new
(
strCommand => $self->{strCommand},
strHost => $self->{strHost},
strUser => $self->{strUser},
iBlockSize => $self->{iBlockSize},
iThreadIdx => $iThreadIdx
);
}
####################################################################################################################################
# GREETING_READ
#
# Read the greeting and make sure it is as expected.
####################################################################################################################################
sub greeting_read
{
my $self = shift;
# Make sure that the remote is running the right version
if ($self->read_line($self->{hOut}) ne $self->{strGreeting})
{
confess &log(ERROR, 'remote version mismatch');
}
}
####################################################################################################################################
# GREETING_WRITE
#
# Send a greeting to the master process.
####################################################################################################################################
sub greeting_write
{
my $self = shift;
if (!syswrite(*STDOUT, "$self->{strGreeting}\n"))
{
confess 'unable to write greeting';
}
}
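# Illustrative sketch (not part of the original source): the greeting is a single line combining the protocol name
# with the output of version_get(), so assuming a VERSION file containing 0.30 the exchange on the wire is simply:
#
#   PG_BACKREST_REMOTE 0.30\n
#
# greeting_read() compares this line against the locally built greeting, so any version mismatch between the two
# hosts is rejected before any commands are exchanged.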
####################################################################################################################################
# STRING_WRITE
#
# Write a string.
####################################################################################################################################
sub string_write
{
my $self = shift;
my $hOut = shift;
my $strBuffer = shift;
$strBuffer =~ s/\n/\n\./g;
if (!syswrite($hOut, '.' . $strBuffer))
{
confess 'unable to write string';
}
}
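# Illustrative sketch of the string framing (derived from string_write() above; the example text is hypothetical):
# the string is prefixed with a period and every embedded LF is followed by a period, so the two-line string
# "foo\nbar" is written to the handle as:
#
#   .foo\n
#   .bar
#
# The reader strips one leading character per line (see output_read() below), which keeps protocol keywords such
# as OK and ERROR unambiguous.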
####################################################################################################################################
# PIPE_TO_STRING Function
#
# Copies data from a file handle into a string.
####################################################################################################################################
sub pipe_to_string
{
my $self = shift;
my $hOut = shift;
my $strBuffer;
my $hString = IO::String->new($strBuffer);
$self->binary_xfer($hOut, $hString);
return $strBuffer;
}
####################################################################################################################################
# ERROR_WRITE
#
# Write BackRest::Exception errors with their error codes in protocol format; otherwise write the message to stderr and exit
# with an error.
####################################################################################################################################
sub error_write
{
my $self = shift;
my $oMessage = shift;
my $iCode;
my $strMessage;
if (blessed($oMessage))
{
if ($oMessage->isa('BackRest::Exception'))
{
$iCode = $oMessage->code();
$strMessage = $oMessage->message();
}
else
{
syswrite(*STDERR, 'unknown error object: ' . $oMessage);
exit 1;
}
}
else
{
syswrite(*STDERR, $oMessage);
exit 1;
}
if (defined($strMessage))
{
$self->string_write(*STDOUT, trim($strMessage));
}
if (!syswrite(*STDOUT, "\nERROR" . (defined($iCode) ? " $iCode" : '') . "\n"))
{
confess 'unable to write error';
}
}
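# Illustrative sketch (the error code and message below are hypothetical): a BackRest::Exception with code 122 and
# message 'file not found' is written to the master as a dot-framed message line followed by an ERROR line:
#
#   .file not found\n
#   ERROR 122\n
#
# Any other error object or plain string goes to stderr and terminates the remote with exit code 1.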
####################################################################################################################################
# READ_LINE
#
# Read a line.
####################################################################################################################################
sub read_line
{
my $self = shift;
my $hIn = shift;
my $bError = shift;
my $strLine;
my $strChar;
my $iByteIn;
while (1)
{
$iByteIn = sysread($hIn, $strChar, 1);
if (!defined($iByteIn) || $iByteIn != 1)
{
$self->wait_pid();
if (defined($bError) and !$bError)
{
return undef;
}
confess &log(ERROR, 'unable to read 1 byte' . (defined($!) ? ': ' . $! : ''));
}
if ($strChar eq "\n")
{
last;
}
$strLine .= $strChar;
}
return $strLine;
}
####################################################################################################################################
# WRITE_LINE
#
# Write a line of data.
####################################################################################################################################
sub write_line
{
my $self = shift;
my $hOut = shift;
my $strBuffer = shift;
$strBuffer = $strBuffer . "\n";
my $iLineOut = syswrite($hOut, $strBuffer, length($strBuffer));
if (!defined($iLineOut) || $iLineOut != length($strBuffer))
{
confess 'unable to write ' . length($strBuffer) . ' byte(s)';
}
}
####################################################################################################################################
# WAIT_PID
#
# See if the remote process has terminated unexpectedly.
####################################################################################################################################
sub wait_pid
{
my $self = shift;
if (defined($self->{pId}) && waitpid($self->{pId}, WNOHANG) != 0)
{
my $strError = 'no error on stderr';
if (!defined($self->{hErr}))
{
$strError = 'no error captured because stderr is already closed';
}
else
{
$strError = $self->pipe_to_string($self->{hErr});
}
$self->{pId} = undef;
$self->{hIn} = undef;
$self->{hOut} = undef;
$self->{hErr} = undef;
confess &log(ERROR, "remote process terminated: ${strError}");
}
}
####################################################################################################################################
# BINARY_XFER_THREAD
#
# Compresses or decompresses data on a worker thread.
####################################################################################################################################
sub binary_xfer_thread
{
my $self = shift;
while (my $strMessage = $self->{oThreadQueue}->dequeue())
{
my @stryMessage = split(':', $strMessage);
my @strHandle = split(',', $stryMessage[1]);
my $hIn = IO::Handle->new_from_fd($strHandle[0], '<');
my $hOut = IO::Handle->new_from_fd($strHandle[1], '>');
$self->{oThreadResult}->enqueue('running');
if ($stryMessage[0] eq 'compress')
{
gzip($hIn => $hOut)
or confess &log(ERROR, 'unable to compress: ' . $GzipError);
}
else
{
gunzip($hIn => $hOut)
or confess &log(ERROR, 'unable to uncompress: ' . $GunzipError);
}
close($hOut);
$self->{oThreadResult}->enqueue('complete');
}
}
####################################################################################################################################
# BINARY_XFER
#
# Copies data from one file handle to another, optionally compressing or decompressing the data in the stream.
####################################################################################################################################
sub binary_xfer
{
my $self = shift;
my $hIn = shift;
my $hOut = shift;
my $strRemote = shift;
my $bSourceCompressed = shift;
my $bDestinationCompress = shift;
# If no remote is defined then set to none
if (!defined($strRemote))
{
$strRemote = 'none';
}
# Only set compression defaults when remote is defined
else
{
$bSourceCompressed = defined($bSourceCompressed) ? $bSourceCompressed : false;
$bDestinationCompress = defined($bDestinationCompress) ? $bDestinationCompress : false;
}
# Working variables
my $iBlockSize = $self->{iBlockSize};
my $iBlockIn;
my $iBlockInTotal = $iBlockSize;
my $iBlockOut;
my $iBlockTotal = 0;
my $strBlockHeader;
my $strBlock;
my $oGzip;
my $hPipeIn;
my $hPipeOut;
my $pId;
my $bThreadRunning = false;
# Both the in and out streams must be defined
if (!defined($hIn) || !defined($hOut))
{
confess &log(ASSERT, 'hIn or hOut is not defined');
}
# If this is output and the source is not already compressed
if ($strRemote eq 'out' && !$bSourceCompressed)
{
# Increase the blocksize since we are compressing
$iBlockSize *= 4;
# Open the in/out pipes
pipe $hPipeOut, $hPipeIn;
# Queue the compression job with the thread
$self->{oThreadQueue}->enqueue('compress:' . fileno($hIn) . ',' . fileno($hPipeIn));
# Wait for the thread to acknowledge that it has duplicated the file handles
my $strMessage = $self->{oThreadResult}->dequeue();
# Close input pipe so that thread has the only copy, reset hIn to hPipeOut
if ($strMessage eq 'running')
{
close($hPipeIn);
$hIn = $hPipeOut;
}
# If any other message is returned then error
else
{
confess "unknown thread message while waiting for running: ${strMessage}";
}
$bThreadRunning = true;
}
# Spawn a child process to do decompression
elsif ($strRemote eq 'in' && !$bDestinationCompress)
{
# Open the in/out pipes
pipe $hPipeOut, $hPipeIn;
# Queue the decompression job with the thread
$self->{oThreadQueue}->enqueue('decompress:' . fileno($hPipeOut) . ',' . fileno($hOut));
# Wait for the thread to acknowledge that it has duplicated the file handles
my $strMessage = $self->{oThreadResult}->dequeue();
# Close output pipe so that thread has the only copy, reset hOut to hPipeIn
if ($strMessage eq 'running')
{
close($hPipeOut);
$hOut = $hPipeIn;
}
# If any other message is returned then error
else
{
confess "unknown thread message while waiting for running: ${strMessage}";
}
$bThreadRunning = true;
}
while (1)
{
if ($strRemote eq 'in')
{
if ($iBlockInTotal == $iBlockSize)
{
$strBlockHeader = $self->read_line($hIn);
if ($strBlockHeader !~ /^block [0-9]+$/)
{
$self->wait_pid();
confess "unable to read block header ${strBlockHeader}";
}
$iBlockInTotal = 0;
$iBlockTotal += 1;
}
$iBlockSize = trim(substr($strBlockHeader, index($strBlockHeader, ' ') + 1));
if ($iBlockSize != 0)
{
$iBlockIn = sysread($hIn, $strBlock, $iBlockSize - $iBlockInTotal);
if (!defined($iBlockIn))
{
my $strError = $!;
$self->wait_pid();
confess "unable to read block #${iBlockTotal}/${iBlockSize} bytes from remote" .
(defined($strError) ? ": ${strError}" : '');
}
$iBlockInTotal += $iBlockIn;
}
else
{
$iBlockIn = 0;
}
}
else
{
$iBlockIn = sysread($hIn, $strBlock, $iBlockSize);
if (!defined($iBlockIn))
{
$self->wait_pid();
confess &log(ERROR, 'unable to read');
}
}
if ($strRemote eq 'out')
{
$strBlockHeader = "block ${iBlockIn}\n";
$iBlockOut = syswrite($hOut, $strBlockHeader);
if (!defined($iBlockOut) || $iBlockOut != length($strBlockHeader))
{
$self->wait_pid();
confess 'unable to write block header';
}
}
if ($iBlockIn > 0)
{
$iBlockOut = syswrite($hOut, $strBlock, $iBlockIn);
if (!defined($iBlockOut) || $iBlockOut != $iBlockIn)
{
$self->wait_pid();
confess "unable to write ${iBlockIn} bytes" . (defined($!) ? ': ' . $! : '');
}
}
else
{
last;
}
}
if ($bThreadRunning)
{
# Make sure the de/compress pipes are closed
if ($strRemote eq 'out' && !$bSourceCompressed)
{
close($hPipeOut);
}
elsif ($strRemote eq 'in' && !$bDestinationCompress)
{
close($hPipeIn);
}
# Wait for the thread to acknowledge that it has completed
my $strMessage = $self->{oThreadResult}->dequeue();
# If any message other than complete is returned then error
if ($strMessage ne 'complete')
{
confess "unknown thread message while waiting for complete: ${strMessage}";
}
}
}
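# Illustrative sketch of the block framing used for remote transfers (the sizes shown are hypothetical): each block
# is preceded by a one-line header giving its length in bytes, and a zero-length block terminates the stream:
#
#   block 262144\n
#   <262144 bytes of (possibly compressed) data>
#   block 0\n
#
# On the receiving side ('in') the header is parsed by read_line() and the payload by sysread(); on the sending
# side ('out') the header is written before every payload block.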
####################################################################################################################################
# OUTPUT_READ
#
# Read output from the remote process.
####################################################################################################################################
sub output_read
{
my $self = shift;
my $bOutputRequired = shift;
my $strErrorPrefix = shift;
my $bSuppressLog = shift;
my $strLine;
my $strOutput;
my $bError = false;
my $iErrorCode;
my $strError;
# Read output lines
while ($strLine = $self->read_line($self->{hOut}, false))
{
if ($strLine =~ /^ERROR.*/)
{
$bError = true;
$iErrorCode = (split(' ', $strLine))[1];
last;
}
if ($strLine =~ /^OK$/)
{
last;
}
$strOutput .= (defined($strOutput) ? "\n" : '') . substr($strLine, 1);
}
# Check if the process has exited abnormally
$self->wait_pid();
# Raise any errors
if ($bError)
{
confess &log(ERROR, (defined($strErrorPrefix) ? "${strErrorPrefix}" : '') .
(defined($strOutput) ? ": ${strOutput}" : ''), $iErrorCode, $bSuppressLog);
}
# If output is required and there is no output, raise exception
if ($bOutputRequired && !defined($strOutput))
{
confess &log(ERROR, (defined($strErrorPrefix) ? "${strErrorPrefix}: " : '') . 'output is not defined');
}
# Return output
return $strOutput;
}
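# Illustrative sketch of a command response (the output lines are hypothetical): a successful command returns zero
# or more dot-prefixed output lines terminated by OK, while a failure ends with an ERROR line carrying the code:
#
#   .base/1/12345\n
#   .base/1/12346\n
#   OK\n
#
# output_read() strips the leading period from each output line and re-raises ERROR responses via confess.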
####################################################################################################################################
# OUTPUT_WRITE
#
# Write output to the master process.
####################################################################################################################################
sub output_write
{
my $self = shift;
my $strOutput = shift;
if (defined($strOutput))
{
$self->string_write(*STDOUT, "${strOutput}");
if (!syswrite(*STDOUT, "\n"))
{
confess 'unable to write output';
}
}
if (!syswrite(*STDOUT, "OK\n"))
{
confess 'unable to write output';
}
}
####################################################################################################################################
# COMMAND_PARAM_STRING
#
# Output command parameters in the hash as a string (used for debugging).
####################################################################################################################################
sub command_param_string
{
my $self = shift;
my $oParamHashRef = shift;
my $strParamList;
foreach my $strParam (sort(keys $oParamHashRef))
{
$strParamList .= (defined($strParamList) ? ',' : '') . "${strParam}=" .
(defined(${$oParamHashRef}{"${strParam}"}) ? ${$oParamHashRef}{"${strParam}"} : '[undef]');
}
return $strParamList;
}
####################################################################################################################################
# COMMAND_READ
#
# Read command sent by the master process.
####################################################################################################################################
sub command_read
{
my $self = shift;
my $oParamHashRef = shift;
my $strLine;
my $strCommand;
while ($strLine = $self->read_line(*STDIN))
{
if (!defined($strCommand))
{
if ($strLine =~ /:$/)
{
$strCommand = substr($strLine, 0, length($strLine) - 1);
}
else
{
$strCommand = $strLine;
last;
}
}
else
{
if ($strLine eq 'end')
{
last;
}
my $iPos = index($strLine, '=');
if ($iPos == -1)
{
confess "param \"${strLine}\" is missing = character";
}
my $strParam = substr($strLine, 0, $iPos);
my $strValue = substr($strLine, $iPos + 1);
${$oParamHashRef}{"${strParam}"} = ${strValue};
}
}
return $strCommand;
}
####################################################################################################################################
# COMMAND_WRITE
#
# Send command to remote process.
####################################################################################################################################
sub command_write
{
my $self = shift;
my $strCommand = shift;
my $oParamRef = shift;
my $strOutput = $strCommand;
if (defined($oParamRef))
{
$strOutput = "${strCommand}:\n";
foreach my $strParam (sort(keys $oParamRef))
{
if ($strParam =~ /=/)
{
confess &log(ASSERT, "param \"${strParam}\" cannot contain = character");
}
my $strValue = ${$oParamRef}{"${strParam}"};
if (defined($strValue) && $strValue =~ /\n$/)
{
confess &log(ASSERT, "param \"${strParam}\" value cannot end with LF");
}
if (defined(${strValue}))
{
$strOutput .= "${strParam}=${strValue}\n";
}
}
$strOutput .= 'end';
}
&log(TRACE, "Remote->command_write:\n" . $strOutput);
if (!syswrite($self->{hIn}, "${strOutput}\n"))
{
confess 'unable to write command';
}
}
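# Illustrative sketch of the command format written by command_write() and parsed by command_read(). A command
# without parameters is a single line; with parameters it becomes a 'command:' line, one key=value line per
# parameter, and an 'end' terminator. The command name and parameter below are hypothetical examples only:
#
#   manifest_get:\n
#   path=/var/lib/postgresql/9.3/main\n
#   end\n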
####################################################################################################################################
# COMMAND_EXECUTE
#
# Send command to remote process and wait for output.
####################################################################################################################################
sub command_execute
{
my $self = shift;
my $strCommand = shift;
my $oParamRef = shift;
my $bOutputRequired = shift;
my $strErrorPrefix = shift;
$self->command_write($strCommand, $oParamRef);
return $self->output_read($bOutputRequired, $strErrorPrefix);
}
no Moose;
__PACKAGE__->meta->make_immutable;

620
lib/BackRest/Utility.pm Normal file
View File

@ -0,0 +1,620 @@
####################################################################################################################################
# UTILITY MODULE
####################################################################################################################################
package BackRest::Utility;
use threads;
use strict;
use warnings;
use Carp;
use Fcntl qw(:DEFAULT :flock);
use File::Path qw(remove_tree);
use File::Basename;
use JSON;
use lib dirname($0) . '/../lib';
use BackRest::Exception;
use Exporter qw(import);
our @EXPORT = qw(version_get
data_hash_build trim common_prefix wait_for_file file_size_format execute
log log_file_set log_level_set test_set test_check
lock_file_create lock_file_remove
config_save config_load timestamp_string_get timestamp_file_string_get
TRACE DEBUG ERROR ASSERT WARN INFO OFF true false
TEST TEST_ENCLOSE TEST_MANIFEST_BUILD);
# Global constants
use constant
{
true => 1,
false => 0
};
use constant
{
TRACE => 'TRACE',
DEBUG => 'DEBUG',
INFO => 'INFO',
WARN => 'WARN',
ERROR => 'ERROR',
ASSERT => 'ASSERT',
OFF => 'OFF'
};
my $hLogFile;
my $strLogLevelFile = ERROR;
my $strLogLevelConsole = ERROR;
my %oLogLevelRank;
my $strLockPath;
my $hLockFile;
$oLogLevelRank{TRACE}{rank} = 6;
$oLogLevelRank{DEBUG}{rank} = 5;
$oLogLevelRank{INFO}{rank} = 4;
$oLogLevelRank{WARN}{rank} = 3;
$oLogLevelRank{ERROR}{rank} = 2;
$oLogLevelRank{ASSERT}{rank} = 1;
$oLogLevelRank{OFF}{rank} = 0;
####################################################################################################################################
# TEST Constants and Variables
####################################################################################################################################
use constant
{
TEST => 'TEST',
TEST_ENCLOSE => 'PgBaCkReStTeSt',
TEST_MANIFEST_BUILD => 'MANIFEST_BUILD'
};
# Test global variables
my $bTest = false;
my $iTestDelay;
####################################################################################################################################
# VERSION_GET
####################################################################################################################################
my $strVersion;
sub version_get
{
my $hVersion;
my $strVersion;
if (!open($hVersion, '<', dirname($0) . '/../VERSION'))
{
confess &log(ASSERT, 'unable to open VERSION file');
}
if (!($strVersion = readline($hVersion)))
{
confess &log(ASSERT, 'unable to read VERSION file');
}
close($hVersion);
return trim($strVersion);
}
####################################################################################################################################
# LOCK_FILE_CREATE
####################################################################################################################################
sub lock_file_create
{
my $strLockPathParam = shift;
my $strLockFile = $strLockPathParam . '/process.lock';
if (defined($hLockFile))
{
confess &log(ASSERT, "${strLockFile} lock is already held");
}
$strLockPath = $strLockPathParam;
unless (-e $strLockPath)
{
if (system("mkdir -p ${strLockPath}") != 0)
{
confess &log(ERROR, "Unable to create lock path ${strLockPath}");
}
}
sysopen($hLockFile, $strLockFile, O_WRONLY | O_CREAT)
or confess &log(ERROR, "unable to open lock file ${strLockFile}");
if (!flock($hLockFile, LOCK_EX | LOCK_NB))
{
close($hLockFile);
return 0;
}
return $hLockFile;
}
####################################################################################################################################
# LOCK_FILE_REMOVE
####################################################################################################################################
sub lock_file_remove
{
if (defined($hLockFile))
{
close($hLockFile);
remove_tree($strLockPath) or confess &log(ERROR, "unable to delete lock path ${strLockPath}");
$hLockFile = undef;
$strLockPath = undef;
}
else
{
confess &log(ASSERT, 'there is no lock to free');
}
}
####################################################################################################################################
# DATA_HASH_BUILD - Build a hash from delimited data with a header row
####################################################################################################################################
sub data_hash_build
{
my $oHashRef = shift;
my $strData = shift;
my $strDelimiter = shift;
my $strUndefinedKey = shift;
my @stryFile = split("\n", $strData);
my @stryHeader = split($strDelimiter, $stryFile[0]);
for (my $iLineIdx = 1; $iLineIdx < scalar @stryFile; $iLineIdx++)
{
my @stryLine = split($strDelimiter, $stryFile[$iLineIdx]);
if (!defined($stryLine[0]) || $stryLine[0] eq '')
{
$stryLine[0] = $strUndefinedKey;
}
for (my $iColumnIdx = 1; $iColumnIdx < scalar @stryHeader; $iColumnIdx++)
{
if (defined(${$oHashRef}{"$stryHeader[0]"}{"$stryLine[0]"}{"$stryHeader[$iColumnIdx]"}))
{
confess 'the first column must be unique to build the hash';
}
if (defined($stryLine[$iColumnIdx]) && $stryLine[$iColumnIdx] ne '')
{
${$oHashRef}{"$stryHeader[0]"}{"$stryLine[0]"}{"$stryHeader[$iColumnIdx]"} = $stryLine[$iColumnIdx];
}
}
}
}
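# Illustrative sketch (hypothetical tab-delimited data): given a header row followed by data rows,
#
#   name<tab>user<tab>group
#   base<tab>postgres<tab>postgres
#
# data_hash_build() stores ${$oHashRef}{name}{base}{user} = 'postgres' and ${$oHashRef}{name}{base}{group} =
# 'postgres', i.e. keys are the first header value, then each row's first column, then the remaining headers.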
####################################################################################################################################
# TRIM - trim whitespace off strings
####################################################################################################################################
sub trim
{
my $strBuffer = shift;
if (!defined($strBuffer))
{
return undef;
}
$strBuffer =~ s/^\s+|\s+$//g;
return $strBuffer;
}
####################################################################################################################################
# WAIT_FOR_FILE
####################################################################################################################################
sub wait_for_file
{
my $strDir = shift;
my $strRegEx = shift;
my $iSeconds = shift;
my $lTime = time();
my $hDir;
while ($lTime > time() - $iSeconds)
{
opendir $hDir, $strDir
or confess &log(ERROR, "Could not open path ${strDir}: $!\n");
my @stryFile = grep(/$strRegEx/i, readdir $hDir);
closedir $hDir;
if (scalar @stryFile == 1)
{
return;
}
sleep(1);
}
confess &log(ERROR, "could not find $strDir/$strRegEx after ${iSeconds} second(s)");
}
####################################################################################################################################
# COMMON_PREFIX
####################################################################################################################################
sub common_prefix
{
my $strString1 = shift;
my $strString2 = shift;
my $iCommonLen = 0;
my $iCompareLen = length($strString1) < length($strString2) ? length($strString1) : length($strString2);
for (my $iIndex = 0; $iIndex < $iCompareLen; $iIndex++)
{
if (substr($strString1, $iIndex, 1) ne substr($strString2, $iIndex, 1))
{
last;
}
$iCommonLen++;
}
return $iCommonLen;
}
####################################################################################################################################
# FILE_SIZE_FORMAT - Format file sizes in human-readable form
####################################################################################################################################
sub file_size_format
{
my $lFileSize = shift;
if ($lFileSize < 1024)
{
return $lFileSize . 'B';
}
if ($lFileSize < (1024 * 1024))
{
return int($lFileSize / 1024) . 'KB';
}
if ($lFileSize < (1024 * 1024 * 1024))
{
return int($lFileSize / 1024 / 1024) . 'MB';
}
return int($lFileSize / 1024 / 1024 / 1024) . 'GB';
}
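# Illustrative examples (assumed inputs): file_size_format(512) returns '512B', file_size_format(2048) returns
# '2KB' and file_size_format(3 * 1024 * 1024) returns '3MB'. Values are truncated with int() rather than rounded.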
####################################################################################################################################
# TIMESTAMP_STRING_GET - Get backrest standard timestamp (or formatted as specified)
####################################################################################################################################
sub timestamp_string_get
{
my $strFormat = shift;
if (!defined($strFormat))
{
$strFormat = '%4d-%02d-%02d %02d:%02d:%02d';
}
my ($iSecond, $iMinute, $iHour, $iMonthDay, $iMonth, $iYear, $iWeekDay, $iYearDay, $bIsDst) = localtime(time);
return sprintf($strFormat, $iYear + 1900, $iMonth + 1, $iMonthDay, $iHour, $iMinute, $iSecond);
}
####################################################################################################################################
# TIMESTAMP_FILE_STRING_GET - Get the date and time string formatted for filenames
####################################################################################################################################
sub timestamp_file_string_get
{
return timestamp_string_get('%4d%02d%02d-%02d%02d%02d');
}
####################################################################################################################################
# LOG_FILE_SET - set the file that messages will be logged to
####################################################################################################################################
sub log_file_set
{
my $strFile = shift;
unless (-e dirname($strFile))
{
mkdir(dirname($strFile)) or die "unable to create directory for log file ${strFile}";
}
$strFile .= '-' . timestamp_string_get('%4d%02d%02d') . '.log';
my $bExists = false;
if (-e $strFile)
{
$bExists = true;
}
open($hLogFile, '>>', $strFile) or confess "unable to open log file ${strFile}";
if ($bExists)
{
print $hLogFile "\n";
}
print $hLogFile "-------------------PROCESS START-------------------\n";
}
####################################################################################################################################
# TEST_SET - set test parameters
####################################################################################################################################
sub test_set
{
my $bTestParam = shift;
my $iTestDelayParam = shift;
# Set defaults
$bTest = defined($bTestParam) ? $bTestParam : false;
$iTestDelay = defined($iTestDelayParam) ? $iTestDelayParam : $iTestDelay;
# Make sure that a delay is specified in test mode
if ($bTest && !defined($iTestDelay))
{
confess &log(ASSERT, 'iTestDelay must be provided when bTest is true');
}
# Test delay should be between 1 and 600 seconds
if (!($iTestDelay >= 1 && $iTestDelay <= 600))
{
confess &log(ERROR, 'test-delay must be between 1 and 600 seconds');
}
}
####################################################################################################################################
# LOG_LEVEL_SET - set the log level for file and console
####################################################################################################################################
sub log_level_set
{
my $strLevelFileParam = shift;
my $strLevelConsoleParam = shift;
if (defined($strLevelFileParam))
{
if (!defined($oLogLevelRank{"${strLevelFileParam}"}{rank}))
{
confess &log(ERROR, "file log level ${strLevelFileParam} does not exist");
}
$strLogLevelFile = $strLevelFileParam;
}
if (defined($strLevelConsoleParam))
{
if (!defined($oLogLevelRank{"${strLevelConsoleParam}"}{rank}))
{
confess &log(ERROR, "console log level ${strLevelConsoleParam} does not exist");
}
$strLogLevelConsole = $strLevelConsoleParam;
}
}
####################################################################################################################################
# TEST_CHECK - Check for a test message
####################################################################################################################################
sub test_check
{
my $strLog = shift;
my $strTest = shift;
return index($strLog, TEST_ENCLOSE . '-' . $strTest . '-' . TEST_ENCLOSE) != -1;
}
####################################################################################################################################
# LOG - log messages
####################################################################################################################################
sub log
{
my $strLevel = shift;
my $strMessage = shift;
my $iCode = shift;
my $bSuppressLog = shift;
# Set defaults
$bSuppressLog = defined($bSuppressLog) ? $bSuppressLog : false;
# Set operational variables
my $strMessageFormat = $strMessage;
my $iLogLevelRank = $oLogLevelRank{"${strLevel}"}{rank};
# If test message
if ($strLevel eq TEST)
{
$iLogLevelRank = $oLogLevelRank{TRACE}{rank} + 1;
$strMessageFormat = TEST_ENCLOSE . '-' . $strMessageFormat . '-' . TEST_ENCLOSE;
}
# Else level rank must be valid
elsif (!defined($iLogLevelRank))
{
confess &log(ASSERT, "log level ${strLevel} does not exist");
}
# If message was undefined then set default message
if (!defined($strMessageFormat))
{
$strMessageFormat = '(undefined)';
}
# Indent subsequent lines of the message if it has more than one line - makes the log more readable
if ($strLevel eq TRACE || $strLevel eq TEST)
{
$strMessageFormat =~ s/\n/\n /g;
$strMessageFormat = ' ' . $strMessageFormat;
}
elsif ($strLevel eq DEBUG)
{
$strMessageFormat =~ s/\n/\n /g;
$strMessageFormat = ' ' . $strMessageFormat;
}
else
{
$strMessageFormat =~ s/\n/\n /g;
}
# Format the message text
my ($sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst) = localtime(time);
$strMessageFormat = timestamp_string_get() . sprintf(' T%02d', threads->tid()) .
(' ' x (7 - length($strLevel))) . "${strLevel}: ${strMessageFormat}" .
(defined($iCode) ? " (code ${iCode})" : '') . "\n";
# Output to console depending on log level and test flag
if ($iLogLevelRank <= $oLogLevelRank{"${strLogLevelConsole}"}{rank} ||
$bTest && $strLevel eq TEST)
{
if (!$bSuppressLog)
{
print $strMessageFormat;
}
if ($bTest && $strLevel eq TEST)
{
*STDOUT->flush();
sleep($iTestDelay);
}
}
# Output to file depending on log level and test flag
if ($iLogLevelRank <= $oLogLevelRank{"${strLogLevelFile}"}{rank})
{
if (defined($hLogFile))
{
if (!$bSuppressLog)
{
print $hLogFile $strMessageFormat;
}
}
}
# Throw a typed exception if code is defined
if (defined($iCode))
{
return BackRest::Exception->new(iCode => $iCode, strMessage => $strMessage);
}
# Return the message text so it can be used in a confess
return $strMessage;
}
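# Illustrative sketch of a formatted log line (the message and code are hypothetical): timestamp, thread id, the
# level right-aligned in a fixed-width field, the message and an optional code suffix:
#
#   2014-10-05 19:49:30 T01  ERROR: unable to read block (code 122)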
####################################################################################################################################
# CONFIG_LOAD
#
# Load configuration file from standard INI format to a hash.
####################################################################################################################################
sub config_load
{
my $strFile = shift; # Full path to config file to load from
my $oConfig = shift; # Reference to the hash where config data will be stored
# Open the config file for reading
my $hFile;
my $strSection;
open($hFile, '<', $strFile)
or confess &log(ERROR, "unable to open ${strFile}");
while (my $strLine = readline($hFile))
{
$strLine = trim($strLine);
if ($strLine ne '')
{
# Get the section
if (index($strLine, '[') == 0)
{
$strSection = substr($strLine, 1, length($strLine) - 2);
}
else
{
# Get key and value
my $iIndex = index($strLine, '=');
if ($iIndex == -1)
{
confess &log(ERROR, "unable to read from ${strFile}: ${strLine}");
}
my $strKey = substr($strLine, 0, $iIndex);
my $strValue = substr($strLine, $iIndex + 1);
# Try to store value as JSON
eval
{
${$oConfig}{"${strSection}"}{"${strKey}"} = decode_json($strValue);
};
# On error store value as a scalar
if ($@)
{
${$oConfig}{"${strSection}"}{"${strKey}"} = $strValue;
}
}
}
}
close($hFile);
}
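# Illustrative sketch (the file content below is hypothetical): config_load() reads standard INI sections and keys,
# and a value that parses as JSON is stored as a nested structure rather than a scalar:
#
#   [db]
#   path=/var/lib/postgresql/9.3/main
#   option={"port":6001}
#
# yields ${$oConfig}{db}{path} eq '/var/lib/postgresql/9.3/main' and ${$oConfig}{db}{option}{port} == 6001.
# config_save() performs the reverse, emitting hash and array values with encode_json().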
####################################################################################################################################
# CONFIG_SAVE
#
# Save configuration file from a hash to standard INI format.
####################################################################################################################################
sub config_save
{
my $strFile = shift; # Full path to config file to save to
my $oConfig = shift; # Reference to the hash where config data is stored
# Open the config file for writing
my $hFile;
my $bFirst = true;
open($hFile, '>', $strFile)
or confess &log(ERROR, "unable to open ${strFile}");
foreach my $strSection (sort(keys $oConfig))
{
if (!$bFirst)
{
syswrite($hFile, "\n")
or confess "unable to write lf: $!";
}
syswrite($hFile, "[${strSection}]\n")
or confess "unable to write section ${strSection}: $!";
foreach my $strKey (sort(keys ${$oConfig}{"${strSection}"}))
{
my $strValue = ${$oConfig}{"${strSection}"}{"${strKey}"};
if (defined($strValue))
{
if (ref($strValue) eq "HASH")
{
syswrite($hFile, "${strKey}=" . encode_json($strValue) . "\n")
or confess "unable to write key ${strKey}: $!";
}
else
{
syswrite($hFile, "${strKey}=${strValue}\n")
or confess "unable to write key ${strKey}: $!";
}
}
}
$bFirst = false;
}
close($hFile);
}
1;

View File

@ -1,39 +0,0 @@
[global:command]
#compress=pigz --rsyncable --best --stdout %file% # Ubuntu Linux
compress=/usr/bin/gzip --stdout %file%
decompress=/usr/bin/gzip -dc %file%
#checksum=sha1sum %file% | awk '{print $1}' # Ubuntu Linux
checksum=/usr/bin/shasum %file% | awk '{print $1}'
manifest=/opt/local/bin/gfind %path% -printf '%P\t%y\t%u\t%g\t%m\t%T@\t%i\t%s\t%l\n'
psql=/Library/PostgreSQL/9.3/bin/psql -X %option%
[global:log]
level-file=debug
level-console=info
[global:backup]
user=backrest
host=localhost
path=/Users/backrest/test
archive-required=y
thread-max=2
thread-timeout=900
start_fast=y
[global:archive]
path=/Users/dsteele/test
compress-async=y
archive-max-mb=500
[global:retention]
full_retention=2
differential_retention=2
archive_retention_type=full
archive_retention=2
[db]
psql_options=--cluster=9.3/main
path=/Users/dsteele/test/db/common
[db:command:option]
psql=--port=6001

View File

@ -1,566 +0,0 @@
#!/usr/bin/perl
####################################################################################################################################
# pg_backrest.pl - Simple Postgres Backup and Restore
####################################################################################################################################
####################################################################################################################################
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use threads;
use File::Basename;
use Getopt::Long;
use Config::IniFiles;
use Carp;
use lib dirname($0);
use pg_backrest_utility;
use pg_backrest_file;
use pg_backrest_backup;
use pg_backrest_db;
####################################################################################################################################
# Operation constants - basic operations that are allowed in backrest
####################################################################################################################################
use constant
{
OP_ARCHIVE_GET => "archive-get",
OP_ARCHIVE_PUSH => "archive-push",
OP_ARCHIVE_PULL => "archive-pull",
OP_BACKUP => "backup",
OP_EXPIRE => "expire"
};
####################################################################################################################################
# Configuration constants - configuration sections and keys
####################################################################################################################################
use constant
{
CONFIG_SECTION_COMMAND => "command",
CONFIG_SECTION_COMMAND_OPTION => "command:option",
CONFIG_SECTION_LOG => "log",
CONFIG_SECTION_BACKUP => "backup",
CONFIG_SECTION_ARCHIVE => "archive",
CONFIG_SECTION_RETENTION => "retention",
CONFIG_SECTION_STANZA => "stanza",
CONFIG_KEY_USER => "user",
CONFIG_KEY_HOST => "host",
CONFIG_KEY_PATH => "path",
CONFIG_KEY_THREAD_MAX => "thread-max",
CONFIG_KEY_THREAD_TIMEOUT => "thread-timeout",
CONFIG_KEY_HARDLINK => "hardlink",
CONFIG_KEY_ARCHIVE_REQUIRED => "archive-required",
CONFIG_KEY_ARCHIVE_MAX_MB => "archive-max-mb",
CONFIG_KEY_START_FAST => "start_fast",
CONFIG_KEY_LEVEL_FILE => "level-file",
CONFIG_KEY_LEVEL_CONSOLE => "level-console",
CONFIG_KEY_COMPRESS => "compress",
CONFIG_KEY_COMPRESS_ASYNC => "compress-async",
CONFIG_KEY_DECOMPRESS => "decompress",
CONFIG_KEY_CHECKSUM => "checksum",
CONFIG_KEY_MANIFEST => "manifest",
CONFIG_KEY_PSQL => "psql"
};
####################################################################################################################################
# Command line parameters
####################################################################################################################################
my $strConfigFile; # Configuration file
my $strStanza; # Stanza in the configuration file to load
my $strType; # Type of backup: full, differential (diff), incremental (incr)
GetOptions ("config=s" => \$strConfigFile,
"stanza=s" => \$strStanza,
"type=s" => \$strType)
or die("Error in command line arguments\n");
####################################################################################################################################
# Global variables
####################################################################################################################################
my %oConfig; # Configuration hash
####################################################################################################################################
# CONFIG_LOAD - Get a value from the config and be sure that it is defined (unless bRequired is false)
####################################################################################################################################
sub config_load
{
my $strSection = shift;
my $strKey = shift;
my $bRequired = shift;
my $strDefault = shift;
# Default is that the key is not required
if (!defined($bRequired))
{
$bRequired = false;
}
my $strValue;
# Look in the default stanza section
if ($strSection eq CONFIG_SECTION_STANZA)
{
$strValue = $oConfig{"${strStanza}"}{"${strKey}"};
}
# Else look in the supplied section
else
{
# First check the stanza section
$strValue = $oConfig{"${strStanza}:${strSection}"}{"${strKey}"};
# If the stanza section value is undefined then check global
if (!defined($strValue))
{
$strValue = $oConfig{"global:${strSection}"}{"${strKey}"};
}
}
if (!defined($strValue) && $bRequired)
{
if (defined($strDefault))
{
return $strDefault;
}
confess &log(ERROR, "config value " . (defined($strSection) ? $strSection : "[stanza]") . "->${strKey} is undefined");
}
if ($strSection eq CONFIG_SECTION_COMMAND)
{
my $strOption = config_load(CONFIG_SECTION_COMMAND_OPTION, $strKey);
if (defined($strOption))
{
$strValue =~ s/\%option\%/${strOption}/g;
}
}
return $strValue;
}
####################################################################################################################################
# SAFE_EXIT - terminate all SSH sessions when the script is terminated
####################################################################################################################################
sub safe_exit
{
my $iTotal = backup_thread_kill();
confess &log(ERROR, "process was terminated on signal, ${iTotal} threads stopped");
}
$SIG{TERM} = \&safe_exit;
$SIG{HUP} = \&safe_exit;
$SIG{INT} = \&safe_exit;
####################################################################################################################################
# START MAIN
####################################################################################################################################
# Get the operation
my $strOperation = $ARGV[0];
# Validate the operation
if (!defined($strOperation))
{
confess &log(ERROR, "operation is not defined");
}
if ($strOperation ne OP_ARCHIVE_GET &&
$strOperation ne OP_ARCHIVE_PUSH &&
$strOperation ne OP_ARCHIVE_PULL &&
$strOperation ne OP_BACKUP &&
$strOperation ne OP_EXPIRE)
{
confess &log(ERROR, "invalid operation ${strOperation}");
}
# Type should only be specified for backups
if (defined($strType) && $strOperation ne OP_BACKUP)
{
confess &log(ERROR, "type can only be specified for the backup operation")
}
####################################################################################################################################
# LOAD CONFIG FILE
####################################################################################################################################
if (!defined($strConfigFile))
{
$strConfigFile = "/etc/pg_backrest.conf";
}
tie %oConfig, 'Config::IniFiles', (-file => $strConfigFile) or confess &log(ERROR, "unable to find config file ${strConfigFile}");
# Load and check the cluster
if (!defined($strStanza))
{
confess "a backup stanza must be specified - show usage";
}
# Set the log levels
log_level_set(uc(config_load(CONFIG_SECTION_LOG, CONFIG_KEY_LEVEL_FILE, true, "INFO")),
uc(config_load(CONFIG_SECTION_LOG, CONFIG_KEY_LEVEL_CONSOLE, true, "ERROR")));
####################################################################################################################################
# ARCHIVE-GET Command
####################################################################################################################################
if ($strOperation eq OP_ARCHIVE_GET)
{
# Make sure the archive file is defined
if (!defined($ARGV[1]))
{
confess &log(ERROR, "archive file not provided - show usage");
}
# Make sure the destination file is defined
if (!defined($ARGV[2]))
{
confess &log(ERROR, "destination file not provided - show usage");
}
# Init the file object
my $oFile = pg_backrest_file->new
(
strStanza => $strStanza,
bNoCompression => true,
strBackupUser => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_USER),
strBackupHost => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST),
strBackupPath => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true),
strCommandDecompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_DECOMPRESS, true)
);
# Init the backup object
backup_init
(
undef,
$oFile
);
# Info for the Postgres log
&log(INFO, "getting archive log " . $ARGV[1]);
# Get the archive file
exit archive_get($ARGV[1], $ARGV[2]);
}
####################################################################################################################################
# ARCHIVE-PUSH and ARCHIVE-PULL Commands
####################################################################################################################################
if ($strOperation eq OP_ARCHIVE_PUSH || $strOperation eq OP_ARCHIVE_PULL)
{
# If an archive section has been defined, use that instead of the backup section when operation is OP_ARCHIVE_PUSH
my $strSection = defined(config_load(CONFIG_SECTION_ARCHIVE, CONFIG_KEY_PATH)) ? CONFIG_SECTION_ARCHIVE : CONFIG_SECTION_BACKUP;
# Get the async compress flag. If compress_async=y then compression is off for the initial push
my $bCompressAsync = config_load($strSection, CONFIG_KEY_COMPRESS_ASYNC, true, "n") eq "n" ? false : true;
my $strStopFile;
my $strArchivePath;
# If logging locally then create the stop archiving file name
if ($strSection eq CONFIG_SECTION_ARCHIVE)
{
$strArchivePath = config_load(CONFIG_SECTION_ARCHIVE, CONFIG_KEY_PATH);
$strStopFile = "${strArchivePath}/lock/${strStanza}-archive.stop";
}
# Perform the archive-push
if ($strOperation eq OP_ARCHIVE_PUSH)
{
# Call the archive_push function
if (!defined($ARGV[1]))
{
confess &log(ERROR, "source archive file not provided - show usage");
}
# If the stop file exists then discard the archive log
if (defined($strStopFile))
{
if (-e $strStopFile)
{
&log(ERROR, "archive stop file exists ($strStopFile), discarding " . basename($ARGV[1]));
exit 0;
}
}
# Make sure that archive-push is running locally
if (defined(config_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST)))
{
confess &log(ERROR, "stanza host cannot be set on archive-push - must be run locally on db server");
}
# Get the compress flag
my $bCompress = $bCompressAsync ? false : config_load($strSection, CONFIG_KEY_COMPRESS, true, "y") eq "y" ? true : false;
# Get the checksum flag
my $bChecksum = config_load($strSection, CONFIG_KEY_CHECKSUM, true, "y") eq "y" ? true : false;
# Run file_init_archive - this is the minimal config needed to run archiving
my $oFile = pg_backrest_file->new
(
strStanza => $strStanza,
bNoCompression => !$bCompress,
strBackupUser => config_load($strSection, CONFIG_KEY_USER),
strBackupHost => config_load($strSection, CONFIG_KEY_HOST),
strBackupPath => config_load($strSection, CONFIG_KEY_PATH, true),
strCommandChecksum => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_CHECKSUM, $bChecksum),
strCommandCompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_COMPRESS, $bCompress),
strCommandDecompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_DECOMPRESS, $bCompress)
);
backup_init
(
undef,
$oFile,
undef,
undef,
!$bChecksum
);
&log(INFO, "pushing archive log " . $ARGV[1] . ($bCompressAsync ? " asynchronously" : ""));
archive_push($ARGV[1]);
# Only continue if we are archiving local and a backup server is defined
if (!($strSection eq CONFIG_SECTION_ARCHIVE && defined(config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST))))
{
exit 0;
}
# Set the operation so that archive-pull will be called next
$strOperation = OP_ARCHIVE_PULL;
# fork and exit the parent process
if (fork())
{
exit 0;
}
}
# Perform the archive-pull
if ($strOperation eq OP_ARCHIVE_PULL)
{
# Make sure that archive-pull is running on the db server
if (defined(config_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST)))
{
confess &log(ERROR, "stanza host cannot be set on archive-pull - must be run locally on db server");
}
# Create a lock file to make sure archive-pull does not run more than once
my $strLockPath = "${strArchivePath}/lock/${strStanza}-archive.lock";
if (!lock_file_create($strLockPath))
{
&log(DEBUG, "archive-pull process is already running - exiting");
exit 0
}
# Build the basic command string that will be used to modify the command during processing
my $strCommand = $^X . " " . $0 . " --stanza=${strStanza}";
# Get the new operational flags
my $bCompress = config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_COMPRESS, true, "y") eq "y" ? true : false;
my $bChecksum = config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_CHECKSUM, true, "y") eq "y" ? true : false;
my $iArchiveMaxMB = config_load(CONFIG_SECTION_ARCHIVE, CONFIG_KEY_ARCHIVE_MAX_MB);
eval
{
# Run file_init_archive - this is the minimal config needed to run archive pulling
my $oFile = pg_backrest_file->new
(
strStanza => $strStanza,
bNoCompression => !$bCompress,
strBackupUser => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_USER),
strBackupHost => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST),
strBackupPath => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true),
strCommandChecksum => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_CHECKSUM, $bChecksum),
strCommandCompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_COMPRESS, $bCompress),
strCommandDecompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_DECOMPRESS, $bCompress),
strCommandManifest => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_MANIFEST),
strLockPath => $strLockPath
);
backup_init
(
undef,
$oFile,
undef,
undef,
!$bChecksum,
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
undef,
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT)
);
# Call the archive_pull function. Continue to loop as long as there are files to process.
while (archive_pull($strArchivePath . "/archive/${strStanza}", $strStopFile, $strCommand, $iArchiveMaxMB))
{
&log(DEBUG, "archive logs were transferred, calling archive_pull() again");
}
};
# If there were errors above then start compressing
if ($@)
{
if ($bCompressAsync)
{
&log(ERROR, "error during transfer: $@");
&log(WARN, "errors during transfer, starting compression");
# Run file_init_archive - this is the minimal config needed to run archive pulling !!! need to close the old file
my $oFile = pg_backrest_file->new
(
strStanza => $strStanza,
bNoCompression => false,
strBackupPath => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true),
strCommandChecksum => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_CHECKSUM, $bChecksum),
strCommandCompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_COMPRESS, $bCompress),
strCommandDecompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_DECOMPRESS, $bCompress),
strCommandManifest => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_MANIFEST)
);
backup_init
(
undef,
$oFile,
undef,
undef,
!$bChecksum,
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
undef,
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT)
);
archive_compress($strArchivePath . "/archive/${strStanza}", $strCommand, 256);
}
else
{
confess $@;
}
}
lock_file_remove();
}
exit 0;
}
####################################################################################################################################
# OPEN THE LOG FILE
####################################################################################################################################
if (defined(config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST)))
{
confess &log(ASSERT, "backup/expire operations must be performed locally on the backup server");
}
log_file_set(config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true) . "/log/${strStanza}");
####################################################################################################################################
# GET MORE CONFIG INFO
####################################################################################################################################
# Set the backup type
if (!defined($strType))
{
$strType = "incremental";
}
elsif ($strType eq "diff")
{
$strType = "differential";
}
elsif ($strType eq "incr")
{
$strType = "incremental";
}
elsif ($strType ne "full" && $strType ne "differential" && $strType ne "incremental")
{
confess &log(ERROR, "backup type must be full, differential (diff), incremental (incr)");
}
# Get the operational flags
my $bCompress = config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_COMPRESS, true, "y") eq "y" ? true : false;
my $bChecksum = config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_CHECKSUM, true, "y") eq "y" ? true : false;
# Set the lock path
my $strLockPath = config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true) . "/lock/${strStanza}-${strOperation}.lock";
if (!lock_file_create($strLockPath))
{
&log(ERROR, "backup process is already running for stanza ${strStanza} - exiting");
exit 0
}
# Run file_init_archive - the rest of the file config required for backup and restore
my $oFile = pg_backrest_file->new
(
strStanza => $strStanza,
bNoCompression => !$bCompress,
strBackupUser => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_USER),
strBackupHost => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HOST),
strBackupPath => config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_PATH, true),
strDbUser => config_load(CONFIG_SECTION_STANZA, CONFIG_KEY_USER),
strDbHost => config_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST),
strCommandChecksum => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_CHECKSUM, $bChecksum),
strCommandCompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_COMPRESS, $bCompress),
strCommandDecompress => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_DECOMPRESS, $bCompress),
strCommandManifest => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_MANIFEST),
strCommandPsql => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_PSQL),
strLockPath => $strLockPath
);
my $oDb = pg_backrest_db->new
(
strDbUser => config_load(CONFIG_SECTION_STANZA, CONFIG_KEY_USER),
strDbHost => config_load(CONFIG_SECTION_STANZA, CONFIG_KEY_HOST),
strCommandPsql => config_load(CONFIG_SECTION_COMMAND, CONFIG_KEY_PSQL),
oDbSSH => $oFile->{oDbSSH}
);
# Run backup_init - parameters required for backup and restore operations
backup_init
(
$oDb,
$oFile,
$strType,
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_HARDLINK, true, "n") eq "y" ? true : false,
!$bChecksum,
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_MAX),
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_ARCHIVE_REQUIRED, true, "y") eq "y" ? true : false,
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_THREAD_TIMEOUT)
);
####################################################################################################################################
# BACKUP
####################################################################################################################################
if ($strOperation eq OP_BACKUP)
{
backup(config_load(CONFIG_SECTION_STANZA, CONFIG_KEY_PATH),
config_load(CONFIG_SECTION_BACKUP, CONFIG_KEY_START_FAST, true, "n") eq "y" ? true : false);
$strOperation = OP_EXPIRE;
}
####################################################################################################################################
# EXPIRE
####################################################################################################################################
if ($strOperation eq OP_EXPIRE)
{
backup_expire
(
$oFile->path_get(PATH_BACKUP_CLUSTER),
config_load(CONFIG_SECTION_RETENTION, "full_retention"),
config_load(CONFIG_SECTION_RETENTION, "differential_retention"),
config_load(CONFIG_SECTION_RETENTION, "archive_retention_type"),
config_load(CONFIG_SECTION_RETENTION, "archive_retention")
);
lock_file_remove();
exit 0;
}
confess &log(ASSERT, "invalid operation ${strOperation} - missing handler block");

View File

@ -1,996 +0,0 @@
####################################################################################################################################
# FILE MODULE
####################################################################################################################################
package pg_backrest_file;
use threads;
use Moose;
use strict;
use warnings;
use Carp;
use Net::OpenSSH;
use IPC::Open3;
use File::Basename;
use IPC::System::Simple qw(capture);
use lib dirname($0);
use pg_backrest_utility;
use Exporter qw(import);
our @EXPORT = qw(PATH_DB PATH_DB_ABSOLUTE PATH_BACKUP PATH_BACKUP_ABSOLUTE PATH_BACKUP_CLUSTER PATH_BACKUP_TMP PATH_BACKUP_ARCHIVE);
# Extension and permissions
has strCompressExtension => (is => 'ro', default => 'gz');
has strDefaultPathPermission => (is => 'bare', default => '0750');
has strDefaultFilePermission => (is => 'ro', default => '0640');
# Command strings
has strCommandChecksum => (is => 'bare');
has strCommandCompress => (is => 'bare');
has strCommandDecompress => (is => 'bare');
has strCommandCat => (is => 'bare', default => 'cat %file%');
has strCommandManifest => (is => 'bare');
# Lock path
has strLockPath => (is => 'bare');
# Files to hold stderr
#has strBackupStdErrFile => (is => 'bare');
#has strDbStdErrFile => (is => 'bare');
# Module variables
has strDbUser => (is => 'bare'); # Database user
has strDbHost => (is => 'bare'); # Database host
has oDbSSH => (is => 'bare'); # Database SSH object
has strBackupUser => (is => 'bare'); # Backup user
has strBackupHost => (is => 'bare'); # Backup host
has oBackupSSH => (is => 'bare'); # Backup SSH object
has strBackupPath => (is => 'bare'); # Backup base path
has strBackupClusterPath => (is => 'bare'); # Backup cluster path
# Process flags
has bNoCompression => (is => 'bare');
has strStanza => (is => 'bare');
has iThreadIdx => (is => 'bare');
####################################################################################################################################
# PATH_GET Constants
####################################################################################################################################
use constant
{
PATH_DB => 'db',
PATH_DB_ABSOLUTE => 'db:absolute',
PATH_BACKUP => 'backup',
PATH_BACKUP_ABSOLUTE => 'backup:absolute',
PATH_BACKUP_CLUSTER => 'backup:cluster',
PATH_BACKUP_TMP => 'backup:tmp',
PATH_BACKUP_ARCHIVE => 'backup:archive',
PATH_LOCK_ERR => 'lock:err'
};
####################################################################################################################################
# CONSTRUCTOR
####################################################################################################################################
sub BUILD
{
my $self = shift;
# Make sure the backup path is defined
if (!defined($self->{strBackupPath}))
{
confess &log(ERROR, "common:backup_path undefined");
}
# Create the backup cluster path
$self->{strBackupClusterPath} = $self->{strBackupPath} . "/" . $self->{strStanza};
# Create the ssh options string
if (defined($self->{strBackupHost}) || defined($self->{strDbHost}))
{
my $strOptionSSHRequestTTY = "RequestTTY=yes";
my $strOptionSSHCompression = "Compression=no";
if ($self->{bNoCompression})
{
$strOptionSSHCompression = "Compression=yes";
}
# Connect SSH object if backup host is defined
if (!defined($self->{oBackupSSH}) && defined($self->{strBackupHost}))
{
&log(TRACE, "connecting to backup ssh host " . $self->{strBackupHost});
$self->{oBackupSSH} = Net::OpenSSH->new($self->{strBackupHost}, timeout => 300, user => $self->{strBackupUser},
default_stderr_file => $self->path_get(PATH_LOCK_ERR, "file"),
master_opts => [-o => $strOptionSSHCompression, -o => $strOptionSSHRequestTTY]);
$self->{oBackupSSH}->error and confess &log(ERROR, "unable to connect to $self->{strBackupHost}: " . $self->{oBackupSSH}->error);
}
# Connect SSH object if db host is defined
if (!defined($self->{oDbSSH}) && defined($self->{strDbHost}))
{
&log(TRACE, "connecting to database ssh host $self->{strDbHost}");
$self->{oDbSSH} = Net::OpenSSH->new($self->{strDbHost}, timeout => 300, user => $self->{strDbUser},
default_stderr_file => $self->path_get(PATH_LOCK_ERR, "file"),
master_opts => [-o => $strOptionSSHCompression, -o => $strOptionSSHRequestTTY]);
$self->{oDbSSH}->error and confess &log(ERROR, "unable to connect to $self->{strDbHost}: " . $self->{oDbSSH}->error);
}
}
}
####################################################################################################################################
# CLONE
####################################################################################################################################
sub clone
{
my $self = shift;
my $iThreadIdx = shift;
return pg_backrest_file->new
(
strCompressExtension => $self->{strCompressExtension},
strDefaultPathPermission => $self->{strDefaultPathPermission},
strDefaultFilePermission => $self->{strDefaultFilePermission},
strCommandChecksum => $self->{strCommandChecksum},
strCommandCompress => $self->{strCommandCompress},
strCommandDecompress => $self->{strCommandDecompress},
strCommandCat => $self->{strCommandCat},
strCommandManifest => $self->{strCommandManifest},
# oDbSSH => $self->{strDbSSH},
strDbUser => $self->{strDbUser},
strDbHost => $self->{strDbHost},
# oBackupSSH => $self->{strBackupSSH},
strBackupUser => $self->{strBackupUser},
strBackupHost => $self->{strBackupHost},
strBackupPath => $self->{strBackupPath},
strBackupClusterPath => $self->{strBackupClusterPath},
bNoCompression => $self->{bNoCompression},
strStanza => $self->{strStanza},
iThreadIdx => $iThreadIdx,
strLockPath => $self->{strLockPath}
);
}
####################################################################################################################################
# ERROR_GET
####################################################################################################################################
sub error_get
{
my $self = shift;
my $strErrorFile = $self->path_get(PATH_LOCK_ERR, "file");
open my $hFile, '<', $strErrorFile or return "error opening ${strErrorFile} to read STDERR output";
my $strError = do {local $/; <$hFile>};
close $hFile;
return trim($strError);
}
####################################################################################################################################
# PATH_GET
####################################################################################################################################
sub path_type_get
{
my $self = shift;
my $strType = shift;
# If db type
if ($strType =~ /^db(\:.*){0,1}/)
{
return PATH_DB;
}
# Else if backup type
elsif ($strType =~ /^backup(\:.*){0,1}/)
{
return PATH_BACKUP;
}
# Error when path type not recognized
confess &log(ASSERT, "no known path types in '${strType}'");
}
sub path_get
{
my $self = shift;
my $strType = shift; # Base type of the path to get (PATH_DB_ABSOLUTE, PATH_BACKUP_TMP, etc)
my $strFile = shift; # File to append to the base path (can include a path as well)
my $bTemp = shift; # Return the temp file for this path type - only some types have temp files
# Only allow temp files for PATH_BACKUP_ARCHIVE and PATH_BACKUP_TMP
if (defined($bTemp) && $bTemp && !($strType eq PATH_BACKUP_ARCHIVE || $strType eq PATH_BACKUP_TMP || $strType eq PATH_DB_ABSOLUTE))
{
confess &log(ASSERT, "temp file not supported on path " . $strType);
}
# Get absolute db path
if ($strType eq PATH_DB_ABSOLUTE)
{
if (defined($bTemp) && $bTemp)
{
return $strFile . ".backrest.tmp";
}
return $strFile;
}
# Make sure the base backup path is defined
if (!defined($self->{strBackupPath}))
{
confess &log(ASSERT, "\$strBackupPath not yet defined");
}
# Get absolute backup path
if ($strType eq PATH_BACKUP_ABSOLUTE)
{
# Need a check in here to make sure this is relative to the backup path
return $strFile;
}
# Get base backup path
if ($strType eq PATH_BACKUP)
{
return $self->{strBackupPath} . (defined($strFile) ? "/${strFile}" : "");
}
# Make sure the cluster is defined
if (!defined($self->{strStanza}))
{
confess &log(ASSERT, "\$strStanza not yet defined");
}
# Get the lock error path
if ($strType eq PATH_LOCK_ERR)
{
my $strTempPath = "$self->{strLockPath}";
return ${strTempPath} . (defined($strFile) ? "/${strFile}" .
(defined($self->{iThreadIdx}) ? ".$self->{iThreadIdx}" : "") . ".err" : "");
}
# Get the backup tmp path
if ($strType eq PATH_BACKUP_TMP)
{
my $strTempPath = "$self->{strBackupPath}/temp/$self->{strStanza}.tmp";
if (defined($bTemp) && $bTemp)
{
return "${strTempPath}/file.tmp" . (defined($self->{iThreadIdx}) ? ".$self->{iThreadIdx}" : "");
}
return "${strTempPath}" . (defined($strFile) ? "/${strFile}" : "");
}
# Get the backup archive path
if ($strType eq PATH_BACKUP_ARCHIVE)
{
my $strArchivePath = "$self->{strBackupPath}/archive/$self->{strStanza}";
my $strArchive;
if (defined($bTemp) && $bTemp)
{
return "${strArchivePath}/file.tmp" . (defined($self->{iThreadIdx}) ? ".$self->{iThreadIdx}" : "");
}
if (defined($strFile))
{
$strArchive = substr(basename($strFile), 0, 24);
if ($strArchive !~ /^([0-9A-F]){24}$/)
{
return "${strArchivePath}/${strFile}";
}
}
return $strArchivePath . (defined($strArchive) ? "/" . substr($strArchive, 0, 16) : "") .
(defined($strFile) ? "/" . $strFile : "");
}
if ($strType eq PATH_BACKUP_CLUSTER)
{
return $self->{strBackupPath} . "/backup/$self->{strStanza}" . (defined($strFile) ? "/${strFile}" : "");
}
# Error when path type not recognized
confess &log(ASSERT, "no known path types in '${strType}'");
}
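####################################################################################################################################
# Example: how the path types above resolve (a sketch only; the stanza 'db' and backup path '/var/lib/postgresql/backup' are
# hypothetical values chosen to illustrate the mapping, not taken from this code):
#
# $oFile->path_get(PATH_BACKUP_CLUSTER, 'backup.manifest');
#     # /var/lib/postgresql/backup/backup/db/backup.manifest
# $oFile->path_get(PATH_BACKUP_ARCHIVE, '000000010000000100000002');
#     # /var/lib/postgresql/backup/archive/db/0000000100000001/000000010000000100000002
# $oFile->path_get(PATH_BACKUP_TMP, undef, true);
#     # /var/lib/postgresql/backup/temp/db.tmp/file.tmp (".N" is appended when iThreadIdx is set)
####################################################################################################################################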
####################################################################################################################################
# LINK_CREATE
####################################################################################################################################
sub link_create
{
my $self = shift;
my $strSourcePathType = shift;
my $strSourceFile = shift;
my $strDestinationPathType = shift;
my $strDestinationFile = shift;
my $bHard = shift;
my $bRelative = shift;
my $bPathCreate = shift;
# if bHard is not defined default to false
$bHard = defined($bHard) ? $bHard : false;
# if bRelative is not defined or bHard is true, default to false
$bRelative = !defined($bRelative) || $bHard ? false : $bRelative;
# if bPathCreate is not defined, default to true
$bPathCreate = defined($bPathCreate) ? $bPathCreate : true;
# Source and destination path types must be the same (both PATH_DB or both PATH_BACKUP)
if ($self->path_type_get($strSourcePathType) ne $self->path_type_get($strDestinationPathType))
{
confess &log(ASSERT, "path types must be equal in link create");
}
# Generate source and destination files
my $strSource = $self->path_get($strSourcePathType, $strSourceFile);
my $strDestination = $self->path_get($strDestinationPathType, $strDestinationFile);
# If the destination path is backup and does not exist, create it
if ($bPathCreate && $self->path_type_get($strDestinationPathType) eq PATH_BACKUP)
{
$self->path_create(PATH_BACKUP_ABSOLUTE, dirname($strDestination));
}
unless (-e $strSource)
{
if (-e $strSource . ".$self->{strCompressExtension}")
{
$strSource .= ".$self->{strCompressExtension}";
$strDestination .= ".$self->{strCompressExtension}";
}
else
{
# Error when a hardlink will be created on a missing file
if ($bHard)
{
confess &log(ASSERT, "unable to find ${strSource}(.$self->{strCompressExtension}) for link");
}
}
}
# Generate relative path if requested
if ($bRelative)
{
my $iCommonLen = common_prefix($strSource, $strDestination);
if ($iCommonLen != 0)
{
$strSource = ("../" x substr($strDestination, $iCommonLen) =~ tr/\///) . substr($strSource, $iCommonLen);
}
}
# Create the command
my $strCommand = "ln" . (!$bHard ? " -s" : "") . " ${strSource} ${strDestination}";
# Run remotely
if ($self->is_remote($strSourcePathType))
{
&log(TRACE, "link_create: remote ${strSourcePathType} '${strCommand}'");
my $oSSH = $self->remote_get($strSourcePathType);
$oSSH->system($strCommand) or confess &log("unable to create link from ${strSource} to ${strDestination}");
}
# Run locally
else
{
&log(TRACE, "link_create: local '${strCommand}'");
system($strCommand) == 0 or confess &log("unable to create link from ${strSource} to ${strDestination}");
}
}
####################################################################################################################################
# PATH_CREATE
#
# Creates a path locally or remotely. Currently does not error if the path already exists. Also does not set permissions if the
# path already exists.
####################################################################################################################################
sub path_create
{
my $self = shift;
my $strPathType = shift;
my $strPath = shift;
my $strPermission = shift;
# If no permissions are given then use the default
if (!defined($strPermission))
{
$strPermission = $self->{strDefaultPathPermission};
}
# Get the path to create
my $strPathCreate = $strPath;
if (defined($strPathType))
{
$strPathCreate = $self->path_get($strPathType, $strPath);
}
my $strCommand = "mkdir -p -m ${strPermission} ${strPathCreate}";
# Run remotely
if ($self->is_remote($strPathType))
{
&log(TRACE, "path_create: remote ${strPathType} '${strCommand}'");
my $oSSH = $self->remote_get($strPathType);
$oSSH->system($strCommand) or confess &log("unable to create remote path ${strPathType}:${strPath}");
}
# Run locally
else
{
&log(TRACE, "path_create: local '${strCommand}'");
system($strCommand) == 0 or confess &log(ERROR, "unable to create path ${strPath}");
}
}
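####################################################################################################################################
# Example path_create calls (a sketch; the path names are hypothetical):
#
# $oFile->path_create(PATH_BACKUP_CLUSTER, '20141005-000000F');              # uses the default 0750 permission
# $oFile->path_create(PATH_BACKUP_ABSOLUTE, '/backup/temp/db.tmp', '0700');  # explicit permission
####################################################################################################################################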
####################################################################################################################################
# IS_REMOTE
#
# Determine whether any operations are being performed remotely. If $strPathType is defined, the function will return true if that
# path is remote. If $strPathType is not defined, then function will return true if any path is remote.
####################################################################################################################################
sub is_remote
{
my $self = shift;
my $strPathType = shift;
# If the SSH object is defined then some paths are remote
if (defined($self->{oDbSSH}) || defined($self->{oBackupSSH}))
{
# If path type is not defined but the SSH object is, then some paths are remote
if (!defined($strPathType))
{
return true;
}
# If a host is defined for the path then it is remote
if (defined($self->{strBackupHost}) && $self->path_type_get($strPathType) eq PATH_BACKUP ||
defined($self->{strDbHost}) && $self->path_type_get($strPathType) eq PATH_DB)
{
return true;
}
}
return false;
}
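####################################################################################################################################
# Example is_remote checks (a sketch):
#
# $oFile->is_remote(PATH_BACKUP);   # true only when strBackupHost was passed to the constructor
# $oFile->is_remote(PATH_DB);       # true only when strDbHost was passed to the constructor
# $oFile->is_remote();              # true when either SSH connection is configured
####################################################################################################################################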
####################################################################################################################################
# REMOTE_GET
#
# Get remote SSH object depending on the path type.
####################################################################################################################################
sub remote_get
{
my $self = shift;
my $strPathType = shift;
# Get the db SSH object
if ($self->path_type_get($strPathType) eq PATH_DB && defined($self->{oDbSSH}))
{
return $self->{oDbSSH};
}
# Get the backup SSH object
if ($self->path_type_get($strPathType) eq PATH_BACKUP && defined($self->{oBackupSSH}))
{
return $self->{oBackupSSH};
}
# Error when no ssh object is found
confess &log(ASSERT, "path type ${strPathType} does not have a defined ssh object");
}
####################################################################################################################################
# FILE_MOVE
#
# Moves a file locally or remotely.
####################################################################################################################################
sub file_move
{
my $self = shift;
my $strSourcePathType = shift;
my $strSourceFile = shift;
my $strDestinationPathType = shift;
my $strDestinationFile = shift;
my $bPathCreate = shift;
# if bPathCreate is not defined, default to true
$bPathCreate = defined($bPathCreate) ? $bPathCreate : true;
&log(TRACE, "file_move: ${strSourcePathType}: " . (defined($strSourceFile) ? ":${strSourceFile}" : "") .
" to ${strDestinationPathType}" . (defined($strDestinationFile) ? ":${strDestinationFile}" : ""));
# Get source and destination files
if ($self->path_type_get($strSourcePathType) ne $self->path_type_get($strDestinationPathType))
{
confess &log(ASSERT, "source and destination path types must be equal");
}
my $strSource = $self->path_get($strSourcePathType, $strSourceFile);
my $strDestination = $self->path_get($strDestinationPathType, $strDestinationFile);
# If the destination path is backup and does not exist, create it
if ($bPathCreate && $self->path_type_get($strDestinationPathType) eq PATH_BACKUP)
{
$self->path_create(PATH_BACKUP_ABSOLUTE, dirname($strDestination));
}
my $strCommand = "mv ${strSource} ${strDestination}";
# Run remotely
if ($self->is_remote($strDestinationPathType))
{
&log(TRACE, "file_move: remote ${strDestinationPathType} '${strCommand}'");
my $oSSH = $self->remote_get($strDestinationPathType);
$oSSH->system($strCommand)
or confess &log("unable to move remote ${strDestinationPathType}:${strSourceFile} to ${strDestinationFile}");
}
# Run locally
else
{
&log(TRACE, "file_move: '${strCommand}'");
system($strCommand) == 0 or confess &log("unable to move local ${strSourceFile} to ${strDestinationFile}");
}
}
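####################################################################################################################################
# Example file_move call (a sketch; the file names are hypothetical).  A typical use is renaming a completed temp file to its
# final name on the same path type:
#
# $oFile->file_move(PATH_BACKUP_ARCHIVE, 'file.tmp', PATH_BACKUP_ARCHIVE, '000000010000000100000002');
####################################################################################################################################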
####################################################################################################################################
# FILE_COPY
####################################################################################################################################
sub file_copy
{
my $self = shift;
my $strSourcePathType = shift;
my $strSourceFile = shift;
my $strDestinationPathType = shift;
my $strDestinationFile = shift;
my $bNoCompressionOverride = shift;
my $lModificationTime = shift;
my $strPermission = shift;
my $bPathCreate = shift;
my $bConfessCopyError = shift;
# if bPathCreate is not defined, default to true
$bPathCreate = defined($bPathCreate) ? $bPathCreate : true;
$bConfessCopyError = defined($bConfessCopyError) ? $bConfessCopyError : true;
&log(TRACE, "file_copy: ${strSourcePathType}: " . (defined($strSourceFile) ? ":${strSourceFile}" : "") .
" to ${strDestinationPathType}" . (defined($strDestinationFile) ? ":${strDestinationFile}" : ""));
# Modification time and permissions cannot be set remotely
if ((defined($lModificationTime) || defined($strPermission)) && $self->is_remote($strDestinationPathType))
{
confess &log(ASSERT, "modification time and permissions cannot be set on remote destination file");
}
# Generate source, destination and tmp filenames
my $strSource = $self->path_get($strSourcePathType, $strSourceFile);
my $strDestination = $self->path_get($strDestinationPathType, $strDestinationFile);
my $strDestinationTmp = $self->path_get($strDestinationPathType, $strDestinationFile, true);
# Is this already a compressed file?
my $bAlreadyCompressed = $strSource =~ "^.*\.$self->{strCompressExtension}\$";
if ($bAlreadyCompressed && $strDestination !~ "^.*\.$self->{strCompressExtension}\$")
{
$strDestination .= ".$self->{strCompressExtension}";
}
# Does the file need compression?
my $bCompress = !((defined($bNoCompressionOverride) && $bNoCompressionOverride) ||
(!defined($bNoCompressionOverride) && $self->{bNoCompression}));
# If the destination path is backup and does not exist, create it
if ($bPathCreate && $self->path_type_get($strDestinationPathType) eq PATH_BACKUP)
{
$self->path_create(PATH_BACKUP_ABSOLUTE, dirname($strDestination));
}
# Generate the command string depending on compression/decompression/cat
my $strCommand = $self->{strCommandCat};
if (!$bAlreadyCompressed && $bCompress)
{
$strCommand = $self->{strCommandCompress};
$strDestination .= ".gz";
}
elsif ($bAlreadyCompressed && !$bCompress)
{
$strCommand = $self->{strCommandDecompress};
$strDestination = substr($strDestination, 0, length($strDestination) - length($self->{strCompressExtension}) - 1);
}
$strCommand =~ s/\%file\%/${strSource}/g;
$strCommand .= " 2> /dev/null";
# If this command is remote on only one side
if ($self->is_remote($strSourcePathType) && !$self->is_remote($strDestinationPathType) ||
!$self->is_remote($strSourcePathType) && $self->is_remote($strDestinationPathType))
{
# Else if the source is remote
if ($self->is_remote($strSourcePathType))
{
&log(TRACE, "file_copy: remote ${strSource} to local ${strDestination}");
# Open the destination file for writing (will be streamed from the ssh session)
my $hFile;
open($hFile, ">", $strDestinationTmp) or confess &log(ERROR, "cannot open ${strDestination}");
# Execute the command through ssh
my $oSSH = $self->remote_get($strSourcePathType);
unless ($oSSH->system({stdout_fh => $hFile}, $strCommand))
{
close($hFile) or confess &log(ERROR, "cannot close file ${strDestinationTmp}");
my $strResult = "unable to execute ssh '${strCommand}'";
$bConfessCopyError ? confess &log(ERROR, $strResult) : return false;
}
# Close the destination file handle
close($hFile) or confess &log(ERROR, "cannot close file ${strDestinationTmp}");
}
# Else if the destination is remote
elsif ($self->is_remote($strDestinationPathType))
{
&log(TRACE, "file_copy: local ${strSource} ($strCommand) to remote ${strDestination}");
# Open the input command as a stream
my $hOut;
my $pId = open3(undef, $hOut, undef, $strCommand) or confess &log(ERROR, "unable to execute '${strCommand}'");
# Execute the command though ssh
my $oSSH = $self->remote_get($strDestinationPathType);
$oSSH->system({stdin_fh => $hOut}, "cat > ${strDestinationTmp}") or confess &log(ERROR, "unable to execute ssh 'cat'");
# Wait for the stream process to finish
waitpid($pId, 0);
my $iExitStatus = ${^CHILD_ERROR_NATIVE} >> 8;
if ($iExitStatus != 0)
{
my $strResult = "command '${strCommand}' returned " . $iExitStatus;
$bConfessCopyError ? confess &log(ERROR, $strResult) : return false;
}
}
}
# If the source and destination are both remote but not the same remote
elsif ($self->is_remote($strSourcePathType) && $self->is_remote($strDestinationPathType) &&
$self->path_type_get($strSourcePathType) ne $self->path_type_get($strDestinationPathType))
{
&log(TRACE, "file_copy: remote ${strSource} to remote ${strDestination}");
confess &log(ASSERT, "remote source and destination not supported");
}
# Else this is a local command or remote where both sides are the same remote
else
{
# Complete the command by redirecting to the destination tmp file
$strCommand .= " > ${strDestinationTmp}";
if ($self->is_remote($strSourcePathType))
{
&log(TRACE, "file_copy: remote ${strSourcePathType} '${strCommand}'");
my $oSSH = $self->remote_get($strSourcePathType);
unless($oSSH->system($strCommand))
{
my $strResult = "unable to execute remote command ${strCommand}:" . oSSH->error;
$bConfessCopyError ? confess &log(ERROR, $strResult) : return false;
}
}
else
{
&log(TRACE, "file_copy: local '${strCommand}'");
unless(system($strCommand) == 0)
{
my $strResult = "unable to copy local ${strSource} to local ${strDestinationTmp}";
$bConfessCopyError ? confess &log(ERROR, $strResult) : return false;
}
}
}
# Set the file permission if required (this only works locally for now)
if (defined($strPermission))
{
&log(TRACE, "file_copy: chmod ${strPermission}");
system("chmod ${strPermission} ${strDestinationTmp}") == 0
or confess &log(ERROR, "unable to set permissions for local ${strDestinationTmp}");
}
# Set the file modification time if required (this only works locally for now)
if (defined($lModificationTime))
{
&log(TRACE, "file_copy: time ${lModificationTime}");
utime($lModificationTime, $lModificationTime, $strDestinationTmp)
or confess &log(ERROR, "unable to set time for local ${strDestinationTmp}");
}
# Move the file from tmp to final destination
$self->file_move($self->path_type_get($strSourcePathType) . ":absolute", $strDestinationTmp,
$self->path_type_get($strDestinationPathType) . ":absolute", $strDestination, $bPathCreate);
return true;
}
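####################################################################################################################################
# Example file_copy calls (a sketch; the file names are hypothetical).  The argument after the destination file overrides the
# global compression setting for this copy only:
#
# $oFile->file_copy(PATH_DB_ABSOLUTE, '/var/lib/postgresql/9.3/main/pg_xlog/000000010000000100000002',
#                   PATH_BACKUP_ARCHIVE, '000000010000000100000002');
# $oFile->file_copy(PATH_BACKUP_ARCHIVE, '000000010000000100000002.gz', PATH_DB_ABSOLUTE,
#                   '/tmp/000000010000000100000002', true);     # decompresses on the fly since compression is overridden
####################################################################################################################################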
####################################################################################################################################
# FILE_HASH_GET
####################################################################################################################################
sub file_hash_get
{
my $self = shift;
my $strPathType = shift;
my $strFile = shift;
# For now this operation is not supported remotely. Not currently needed.
if ($self->is_remote($strPathType))
{
confess &log(ASSERT, "remote operation not supported");
}
if (!defined($self->{strCommandChecksum}))
{
confess &log(ASSERT, "\$strCommandChecksum not defined");
}
my $strPath = $self->path_get($strPathType, $strFile);
my $strCommand;
if (-e $strPath)
{
$strCommand = $self->{strCommandChecksum};
$strCommand =~ s/\%file\%/${strPath}/g;
}
elsif (-e $strPath . ".$self->{strCompressExtension}")
{
$strCommand = $self->{strCommandDecompress};
$strCommand =~ s/\%file\%/${strPath}/g;
$strCommand .= " | " . $self->{strCommandChecksum};
$strCommand =~ s/\%file\%//g;
}
else
{
confess &log(ASSERT, "unable to find $strPath(.$self->{strCompressExtension}) for checksum");
}
my $strChecksum = trim(capture($strCommand)) or confess &log(ERROR, "unable to checksum ${strPath}");
return $strChecksum;
}
####################################################################################################################################
# FILE_COMPRESS
####################################################################################################################################
sub file_compress
{
my $self = shift;
my $strPathType = shift;
my $strFile = shift;
# For now this operation is not supported remotely. Not currently needed.
if ($self->is_remote($strPathType))
{
confess &log(ASSERT, "remote operation not supported");
}
if (!defined($self->{strCommandCompress}))
{
confess &log(ASSERT, "\$strCommandCompress not defined");
}
my $strPath = $self->path_get($strPathType, $strFile);
# Build the command
my $strCommand = $self->{strCommandCompress};
$strCommand =~ s/\%file\%/${strPath}/g;
$strCommand =~ s/\ \-\-stdout//g;
system($strCommand) == 0 or confess &log(ERROR, "unable to compress ${strPath}: ${strCommand}");
}
####################################################################################################################################
# FILE_LIST_GET
####################################################################################################################################
sub file_list_get
{
my $self = shift;
my $strPathType = shift;
my $strPath = shift;
my $strExpression = shift;
my $strSortOrder = shift;
# Get the root path for the file list
my $strPathList = $self->path_get($strPathType, $strPath);
# Builds the file list command
# my $strCommand = "ls ${strPathList} | egrep \"$strExpression\"";
my $strCommand = "ls -1 ${strPathList}";
# Run the file list command
my $strFileList = "";
# Run remotely
if ($self->is_remote($strPathType))
{
&log(TRACE, "file_list_get: remote ${strPathType}:${strPathList} ${strCommand}");
my $oSSH = $self->remote_get($strPathType);
$strFileList = $oSSH->capture($strCommand);
if ($oSSH->error)
{
confess &log(ERROR, "unable to execute file list (${strCommand}): " . $self->error_get());
}
}
# Run locally
else
{
&log(TRACE, "file_list_get: local ${strPathType}:${strPathList} ${strCommand}");
$strFileList = capture($strCommand);
}
# Split the files into an array
my @stryFileList;
if (defined($strExpression))
{
@stryFileList = grep(/$strExpression/i, split(/\n/, $strFileList));
}
else
{
@stryFileList = split(/\n/, $strFileList);
}
# Return the array in reverse order if specified
if (defined($strSortOrder) && $strSortOrder eq "reverse")
{
return sort {$b cmp $a} @stryFileList;
}
# Return in normal sorted order
return sort @stryFileList;
}
####################################################################################################################################
# FILE_EXISTS
####################################################################################################################################
sub file_exists
{
my $self = shift;
my $strPathType = shift;
my $strPath = shift;
# Get the root path for the manifest
my $strPathExists = $self->path_get($strPathType, $strPath);
# Builds the exists command
my $strCommand = "ls ${strPathExists}";
# Run the file exists command
my $strExists = "";
# Run remotely
if ($self->is_remote($strPathType))
{
&log(TRACE, "file_exists: remote ${strPathType}:${strPathExists}");
my $oSSH = $self->remote_get($strPathType);
$strExists = trim($oSSH->capture($strCommand));
if ($oSSH->error)
{
confess &log(ERROR, "unable to execute file exists (${strCommand}): " . $self->error_get());
}
}
# Run locally
else
{
&log(TRACE, "file_exists: local ${strPathType}:${strPathExists}");
$strExists = trim(capture($strCommand));
}
&log(TRACE, "file_exists: search = ${strPathExists}, result = ${strExists}");
# If the return from ls eq strPathExists then true
return ($strExists eq $strPathExists);
}
####################################################################################################################################
# FILE_REMOVE
####################################################################################################################################
sub file_remove
{
my $self = shift;
my $strPathType = shift;
my $strPath = shift;
my $bTemp = shift;
my $bErrorIfNotExists = shift;
if (!defined($bErrorIfNotExists))
{
$bErrorIfNotExists = false;
}
# Get the root path for the manifest
my $strPathRemove = $self->path_get($strPathType, $strPath, $bTemp);
# Builds the exists command
my $strCommand = "rm -f ${strPathRemove}";
# Run remotely
if ($self->is_remote($strPathType))
{
&log(TRACE, "file_remove: remote ${strPathType}:${strPathRemove}");
my $oSSH = $self->remote_get($strPathType);
$oSSH->system($strCommand) or $bErrorIfNotExists ? confess &log(ERROR, "unable to remove remote ${strPathType}:${strPathRemove}") : true;
if ($oSSH->error)
{
confess &log(ERROR, "unable to execute file_remove (${strCommand}): " . $self->error_get());
}
}
# Run locally
else
{
&log(TRACE, "file_remove: local ${strPathType}:${strPathRemove}");
system($strCommand) == 0 or $bErrorIfNotExists ? confess &log(ERROR, "unable to remove local ${strPathType}:${strPathRemove}") : true;
}
}
####################################################################################################################################
# MANIFEST_GET
#
# Builds a path/file manifest starting with the base path and including all subpaths. The manifest contains all the information
# needed to perform a backup or a delta with a previous backup.
####################################################################################################################################
sub manifest_get
{
my $self = shift;
my $strPathType = shift;
my $strPath = shift;
&log(TRACE, "manifest: " . $self->{strCommandManifest});
# Get the root path for the manifest
my $strPathManifest = $self->path_get($strPathType, $strPath);
# Builds the manifest command
my $strCommand = $self->{strCommandManifest};
$strCommand =~ s/\%path\%/${strPathManifest}/g;
# Run the manifest command
my $strManifest;
# Run remotely
if ($self->is_remote($strPathType))
{
&log(TRACE, "manifest_get: remote ${strPathType}:${strPathManifest}");
my $oSSH = $self->remote_get($strPathType);
$strManifest = $oSSH->capture($strCommand) or
confess &log(ERROR, "unable to execute remote manifest (${strCommand}): " . $self->error_get());
}
# Run locally
else
{
&log(TRACE, "manifest_get: local ${strPathType}:${strPathManifest}");
$strManifest = capture($strCommand) or confess &log(ERROR, "unable to execute local command '${strCommand}'");
}
# Load the manifest into a hash
return data_hash_build("name\ttype\tuser\tgroup\tpermission\tmodification_time\tinode\tsize\tlink_destination\n" .
$strManifest, "\t", ".");
}
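####################################################################################################################################
# Example manifest_get usage (a sketch; the paths are hypothetical).  The returned hash is keyed by the 'name' column:
#
# my %oManifest = $oFile->manifest_get(PATH_DB_ABSOLUTE, '/var/lib/postgresql/9.3/main');
# my $lSize = $oManifest{name}{'base/12345/16384'}{size};
# my $strType = $oManifest{name}{'base/12345/16384'}{type};
####################################################################################################################################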
no Moose;
__PACKAGE__->meta->make_immutable;


@ -1,366 +0,0 @@
####################################################################################################################################
# UTILITY MODULE
####################################################################################################################################
package pg_backrest_utility;
use threads;
use strict;
use warnings;
use Carp;
use IPC::System::Simple qw(capture);
use Fcntl qw(:DEFAULT :flock);
use File::Path qw(remove_tree);
use Exporter qw(import);
our @EXPORT = qw(data_hash_build trim common_prefix wait_for_file date_string_get file_size_format execute
log log_file_set log_level_set
lock_file_create lock_file_remove
TRACE DEBUG ERROR ASSERT WARN INFO true false);
# Global constants
use constant
{
true => 1,
false => 0
};
use constant
{
TRACE => 'TRACE',
DEBUG => 'DEBUG',
INFO => 'INFO',
WARN => 'WARN',
ERROR => 'ERROR',
ASSERT => 'ASSERT',
OFF => 'OFF'
};
my $hLogFile;
my $strLogLevelFile = ERROR;
my $strLogLevelConsole = ERROR;
my %oLogLevelRank;
my $strLockPath;
my $hLockFile;
$oLogLevelRank{TRACE}{rank} = 6;
$oLogLevelRank{DEBUG}{rank} = 5;
$oLogLevelRank{INFO}{rank} = 4;
$oLogLevelRank{WARN}{rank} = 3;
$oLogLevelRank{ERROR}{rank} = 2;
$oLogLevelRank{ASSERT}{rank} = 1;
$oLogLevelRank{OFF}{rank} = 0;
####################################################################################################################################
# LOCK_FILE_CREATE
####################################################################################################################################
sub lock_file_create
{
my $strLockPathParam = shift;
my $strLockFile = $strLockPathParam . "/process.lock";
if (defined($hLockFile))
{
confess &log(ASSERT, "${strLockFile} lock is already held");
}
$strLockPath = $strLockPathParam;
unless (-e $strLockPath)
{
if (system("mkdir -p ${strLockPath}") != 0)
{
confess &log(ERROR, "Unable to create lock path ${strLockPath}");
}
}
sysopen($hLockFile, $strLockFile, O_WRONLY | O_CREAT)
or confess &log(ERROR, "unable to open lock file ${strLockFile}");
if (!flock($hLockFile, LOCK_EX | LOCK_NB))
{
close($hLockFile);
return 0;
}
return $hLockFile;
}
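####################################################################################################################################
# Example lock_file_create usage (a sketch; the lock path is hypothetical).  A false return means another process holds the lock:
#
# if (!lock_file_create('/var/lib/postgresql/backup/lock/db'))
# {
#     &log(ERROR, 'another process is already running');
#     exit 1;
# }
####################################################################################################################################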
####################################################################################################################################
# LOCK_FILE_REMOVE
####################################################################################################################################
sub lock_file_remove
{
if (defined($hLockFile))
{
close($hLockFile);
remove_tree($strLockPath) or confess &log(ERROR, "unable to delete lock path ${strLockPath}");
$hLockFile = undef;
$strLockPath = undef;
}
else
{
confess &log(ASSERT, "there is no lock to free");
}
}
####################################################################################################################################
# DATA_HASH_BUILD - Hash a delimited file with header
####################################################################################################################################
sub data_hash_build
{
my $strData = shift;
my $strDelimiter = shift;
my $strUndefinedKey = shift;
my @stryFile = split("\n", $strData);
my @stryHeader = split($strDelimiter, $stryFile[0]);
my %oHash;
for (my $iLineIdx = 1; $iLineIdx < scalar @stryFile; $iLineIdx++)
{
my @stryLine = split($strDelimiter, $stryFile[$iLineIdx]);
if (!defined($stryLine[0]) || $stryLine[0] eq "")
{
$stryLine[0] = $strUndefinedKey;
}
for (my $iColumnIdx = 1; $iColumnIdx < scalar @stryHeader; $iColumnIdx++)
{
if (defined($oHash{"$stryHeader[0]"}{"$stryLine[0]"}{"$stryHeader[$iColumnIdx]"}))
{
confess "the first column must be unique to build the hash";
}
$oHash{"$stryHeader[0]"}{"$stryLine[0]"}{"$stryHeader[$iColumnIdx]"} = $stryLine[$iColumnIdx];
}
}
return %oHash;
}
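####################################################################################################################################
# Example data_hash_build usage (a sketch with made-up data).  The first column becomes the key, the rest become named fields:
#
# my %oHash = data_hash_build("name\tsize\nbase/1\t8192\nbase/2\t16384\n", "\t", ".");
# # $oHash{name}{'base/1'}{size} == 8192, $oHash{name}{'base/2'}{size} == 16384
####################################################################################################################################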
####################################################################################################################################
# TRIM - trim whitespace off strings
####################################################################################################################################
sub trim
{
my $strBuffer = shift;
$strBuffer =~ s/^\s+|\s+$//g;
return $strBuffer;
}
####################################################################################################################################
# WAIT_FOR_FILE
####################################################################################################################################
sub wait_for_file
{
my $strDir = shift;
my $strRegEx = shift;
my $iSeconds = shift;
my $lTime = time();
my $hDir;
while ($lTime > time() - $iSeconds)
{
opendir $hDir, $strDir or die "Could not open dir: $!\n";
my @stryFile = grep(/$strRegEx/i, readdir $hDir);
closedir $hDir;
if (scalar @stryFile == 1)
{
return;
}
sleep(1);
}
confess &log(ERROR, "could not find $strDir/$strRegEx after $iSeconds second(s)");
}
####################################################################################################################################
# COMMON_PREFIX
####################################################################################################################################
sub common_prefix
{
my $strString1 = shift;
my $strString2 = shift;
my $iCommonLen = 0;
my $iCompareLen = length($strString1) < length($strString2) ? length($strString1) : length($strString2);
for (my $iIndex = 0; $iIndex < $iCompareLen; $iIndex++)
{
if (substr($strString1, $iIndex, 1) ne substr($strString2, $iIndex, 1))
{
last;
}
$iCommonLen ++;
}
return $iCommonLen;
}
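####################################################################################################################################
# Example common_prefix results (a sketch):
#
# common_prefix('/backup/db/base/file1', '/backup/db/base/file2');   # 20 - length of '/backup/db/base/file'
# common_prefix('/a/b', '/x/y');                                     # 1 - only the leading '/' matches
####################################################################################################################################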
####################################################################################################################################
# FILE_SIZE_FORMAT - Format file sizes in human-readable form
####################################################################################################################################
sub file_size_format
{
my $lFileSize = shift;
if ($lFileSize < 1024)
{
return $lFileSize . "B";
}
if ($lFileSize < (1024 * 1024))
{
return int($lFileSize / 1024) . "KB";
}
if ($lFileSize < (1024 * 1024 * 1024))
{
return int($lFileSize / 1024 / 1024) . "MB";
}
return int($lFileSize / 1024 / 1024 / 1024) . "GB";
}
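####################################################################################################################################
# Example file_size_format results (a sketch):
#
# file_size_format(512);          # '512B'
# file_size_format(8192);         # '8KB'
# file_size_format(3 * 1048576);  # '3MB'
####################################################################################################################################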
####################################################################################################################################
# DATE_STRING_GET - Get the date and time string
####################################################################################################################################
sub date_string_get
{
my $strFormat = shift;
if (!defined($strFormat))
{
$strFormat = "%4d%02d%02d-%02d%02d%02d";
}
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
return(sprintf($strFormat, $year+1900, $mon+1, $mday, $hour, $min, $sec));
}
####################################################################################################################################
# LOG_FILE_SET - set the file messages will be logged to
####################################################################################################################################
sub log_file_set
{
my $strFile = shift;
$strFile .= "-" . date_string_get("%4d%02d%02d") . ".log";
my $bExists = false;
if (-e $strFile)
{
$bExists = true;
}
open($hLogFile, '>>', $strFile) or confess "unable to open log file ${strFile}";
if ($bExists)
{
print $hLogFile "\n";
}
print $hLogFile "-------------------PROCESS START-------------------\n";
}
####################################################################################################################################
# LOG_LEVEL_SET - set the log level for file and console
####################################################################################################################################
sub log_level_set
{
my $strLevelFileParam = shift;
my $strLevelConsoleParam = shift;
if (!defined($oLogLevelRank{"${strLevelFileParam}"}{rank}))
{
confess &log(ERROR, "file log level ${strLevelFileParam} does not exist");
}
if (!defined($oLogLevelRank{"${strLevelConsoleParam}"}{rank}))
{
confess &log(ERROR, "console log level ${strLevelConsoleParam} does not exist");
}
$strLogLevelFile = $strLevelFileParam;
$strLogLevelConsole = $strLevelConsoleParam;
}
####################################################################################################################################
# LOG - log messages
####################################################################################################################################
sub log
{
my $strLevel = shift;
my $strMessage = shift;
if (!defined($oLogLevelRank{"${strLevel}"}{rank}))
{
confess &log(ASSERT, "log level ${strLevel} does not exist");
}
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
if (!defined($strMessage))
{
$strMessage = "(undefined)";
}
if ($strLevel eq "TRACE")
{
$strMessage = " " . $strMessage;
}
elsif ($strLevel eq "DEBUG")
{
$strMessage = " " . $strMessage;
}
$strMessage = sprintf("%4d-%02d-%02d %02d:%02d:%02d", $year+1900, $mon+1, $mday, $hour, $min, $sec) .
(" " x (7 - length($strLevel))) . "${strLevel} " . (" " x (2 - length(threads->tid()))) .
threads->tid() . ": ${strMessage}\n";
if ($oLogLevelRank{"${strLevel}"}{rank} <= $oLogLevelRank{"${strLogLevelConsole}"}{rank})
{
print $strMessage;
}
if ($oLogLevelRank{"${strLevel}"}{rank} <= $oLogLevelRank{"${strLogLevelFile}"}{rank})
{
if (defined($hLogFile))
{
print $hLogFile $strMessage;
}
}
return $strMessage;
}
####################################################################################################################################
# EXECUTE - execute a command
####################################################################################################################################
sub execute
{
my $strCommand = shift;
my $strOutput;
# print("$strCommand");
$strOutput = capture($strCommand) or confess &log(ERROR, "unable to execute command ${strCommand}: " . $_);
return $strOutput;
}
1;

BIN
test/data/test.archive.bin Normal file

Binary file not shown.


@ -0,0 +1,630 @@
#!/usr/bin/perl
####################################################################################################################################
# BackupTest.pl - Unit Tests for backup and archive operations
####################################################################################################################################
package BackRestTest::BackupTest;
####################################################################################################################################
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use Carp;
use File::Basename;
use File::Copy 'cp';
use DBI;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::File;
use BackRest::Remote;
use BackRestTest::CommonTest;
use Exporter qw(import);
our @EXPORT = qw(BackRestTestBackup_Test);
my $strTestPath;
my $strHost;
my $strUserBackRest;
my $hDb;
####################################################################################################################################
# BackRestTestBackup_PgConnect
####################################################################################################################################
sub BackRestTestBackup_PgConnect
{
# Disconnect user session
BackRestTestBackup_PgDisconnect();
# Connect to the db (whether it is local or remote)
$hDb = DBI->connect('dbi:Pg:dbname=postgres;port=' . BackRestTestCommon_DbPortGet .
';host=' . BackRestTestCommon_DbPathGet(),
BackRestTestCommon_UserGet(),
undef,
{AutoCommit => 1, RaiseError => 1});
}
####################################################################################################################################
# BackRestTestBackup_PgDisconnect
####################################################################################################################################
sub BackRestTestBackup_PgDisconnect
{
# Disconnect from the db if a session is open
if (defined($hDb))
{
$hDb->disconnect;
undef($hDb);
}
}
####################################################################################################################################
# BackRestTestBackup_PgExecute
####################################################################################################################################
sub BackRestTestBackup_PgExecute
{
my $strSql = shift;
my $bCheckpoint = shift;
# Log and execute the statement
&log(DEBUG, "SQL: ${strSql}");
my $hStatement = $hDb->prepare($strSql);
$hStatement->execute() or
confess &log(ERROR, "Unable to execute: ${strSql}");
$hStatement->finish();
# Perform a checkpoint if requested
if (defined($bCheckpoint) && $bCheckpoint)
{
BackRestTestBackup_PgExecute('checkpoint');
}
}
####################################################################################################################################
# BackRestTestBackup_ClusterStop
####################################################################################################################################
sub BackRestTestBackup_ClusterStop
{
my $strPath = shift;
# Disconnect user session
BackRestTestBackup_PgDisconnect();
# If postmaster process is running then stop the cluster
if (-e $strPath . '/postmaster.pid')
{
BackRestTestCommon_Execute(BackRestTestCommon_PgSqlBinPathGet() . "/pg_ctl stop -D ${strPath} -w -s -m fast");
}
}
####################################################################################################################################
# BackRestTestBackup_ClusterRestart
####################################################################################################################################
sub BackRestTestBackup_ClusterRestart
{
my $strPath = BackRestTestCommon_DbCommonPathGet();
# Disconnect user session
BackRestTestBackup_PgDisconnect();
# If postmaster process is running then restart the cluster
if (-e $strPath . '/postmaster.pid')
{
BackRestTestCommon_Execute(BackRestTestCommon_PgSqlBinPathGet() . "/pg_ctl restart -D ${strPath} -w -s");
}
# Connect user session
BackRestTestBackup_PgConnect();
}
####################################################################################################################################
# BackRestTestBackup_ClusterCreate
####################################################################################################################################
sub BackRestTestBackup_ClusterCreate
{
my $strPath = shift;
my $iPort = shift;
my $strArchive = BackRestTestCommon_CommandMainGet() . ' --stanza=' . BackRestTestCommon_StanzaGet() .
' --config=' . BackRestTestCommon_DbPathGet() . '/pg_backrest.conf archive-push %p';
BackRestTestCommon_Execute(BackRestTestCommon_PgSqlBinPathGet() . "/initdb -D ${strPath} -A trust");
BackRestTestCommon_Execute(BackRestTestCommon_PgSqlBinPathGet() . "/pg_ctl start -o \"-c port=${iPort} -c " .
"checkpoint_segments=1 -c wal_level=archive -c archive_mode=on -c archive_command='${strArchive}' " .
"-c unix_socket_directories='" . BackRestTestCommon_DbPathGet() . "'\" " .
"-D ${strPath} -l ${strPath}/postgresql.log -w -s");
# Connect user session
BackRestTestBackup_PgConnect();
}
####################################################################################################################################
# BackRestTestBackup_Drop
####################################################################################################################################
sub BackRestTestBackup_Drop
{
# Stop the cluster if one is running
BackRestTestBackup_ClusterStop(BackRestTestCommon_DbCommonPathGet());
# Remove the backrest private directory
if (-e BackRestTestCommon_BackupPathGet())
{
BackRestTestCommon_Execute('rm -rf ' . BackRestTestCommon_BackupPathGet(), true, true);
}
# Remove the test directory
system('rm -rf ' . BackRestTestCommon_TestPathGet()) == 0
or die 'unable to remove ' . BackRestTestCommon_TestPathGet() . ' path';
}
####################################################################################################################################
# BackRestTestBackup_Create
####################################################################################################################################
sub BackRestTestBackup_Create
{
my $bRemote = shift;
my $bCluster = shift;
# Set defaults
$bRemote = defined($bRemote) ? $bRemote : false;
$bCluster = defined($bCluster) ? $bCluster : true;
# Drop the old test directory
BackRestTestBackup_Drop();
# Create the test directory
mkdir(BackRestTestCommon_TestPathGet(), oct('0770'))
or confess 'Unable to create ' . BackRestTestCommon_TestPathGet() . ' path';
# Create the db directory
mkdir(BackRestTestCommon_DbPathGet(), oct('0700'))
or confess 'Unable to create ' . BackRestTestCommon_DbPathGet() . ' path';
# Create the db/common directory
mkdir(BackRestTestCommon_DbCommonPathGet())
or confess 'Unable to create ' . BackRestTestCommon_DbCommonPathGet() . ' path';
# Create the archive directory
mkdir(BackRestTestCommon_ArchivePathGet(), oct('0700'))
or confess 'Unable to create ' . BackRestTestCommon_ArchivePathGet() . ' path';
# Create the backup directory
if ($bRemote)
{
BackRestTestCommon_Execute('mkdir -m 700 ' . BackRestTestCommon_BackupPathGet(), true);
}
else
{
mkdir(BackRestTestCommon_BackupPathGet(), oct('0700'))
or confess 'Unable to create ' . BackRestTestCommon_BackupPathGet() . ' path';
}
# Create the cluster
if ($bCluster)
{
BackRestTestBackup_ClusterCreate(BackRestTestCommon_DbCommonPathGet(), BackRestTestCommon_DbPortGet());
}
}
####################################################################################################################################
# BackRestTestBackup_Test
####################################################################################################################################
sub BackRestTestBackup_Test
{
my $strTest = shift;
# If no test was specified, then run them all
if (!defined($strTest))
{
$strTest = 'all';
}
# Setup global variables
$strTestPath = BackRestTestCommon_TestPathGet();
$strHost = BackRestTestCommon_HostGet();
$strUserBackRest = BackRestTestCommon_UserBackRestGet();
# Setup test variables
my $iRun;
my $bCreate;
my $strStanza = BackRestTestCommon_StanzaGet();
my $strGroup = BackRestTestCommon_GroupGet();
my $strArchiveChecksum = '1c7e00fd09b9dd11fc2966590b3e3274645dd031';
my $iArchiveMax = 3;
my $strXlogPath = BackRestTestCommon_DbCommonPathGet() . '/pg_xlog';
my $strArchiveTestFile = BackRestTestCommon_DataPathGet() . '/test.archive.bin';
my $iThreadMax = 4;
# Print test banner
&log(INFO, 'BACKUP MODULE ******************************************************************');
#-------------------------------------------------------------------------------------------------------------------------------
# Create remote
#-------------------------------------------------------------------------------------------------------------------------------
my $oRemote = BackRest::Remote->new
(
strHost => $strHost,
strUser => $strUserBackRest,
strCommand => BackRestTestCommon_CommandRemoteGet()
);
#-------------------------------------------------------------------------------------------------------------------------------
# Test archive-push
#-------------------------------------------------------------------------------------------------------------------------------
if ($strTest eq 'all' || $strTest eq 'archive-push')
{
$iRun = 0;
$bCreate = true;
my $oFile;
&log(INFO, "Test archive-push\n");
for (my $bRemote = false; $bRemote <= true; $bRemote++)
{
for (my $bCompress = false; $bCompress <= true; $bCompress++)
{
for (my $bChecksum = false; $bChecksum <= true; $bChecksum++)
{
for (my $bArchiveAsync = false; $bArchiveAsync <= $bRemote; $bArchiveAsync++)
{
for (my $bCompressAsync = false; $bCompressAsync <= true; $bCompressAsync++)
{
# Increment the run, log, and decide whether this unit test should be run
if (!BackRestTestCommon_Run(++$iRun,
"rmt ${bRemote}, cmp ${bCompress}, chk ${bChecksum}, " .
"arc_async ${bArchiveAsync}, cmp_async ${bCompressAsync}")) {next}
# Create the test directory
if ($bCreate)
{
# Create the file object
$oFile = (BackRest::File->new
(
strStanza => $strStanza,
strBackupPath => BackRestTestCommon_BackupPathGet(),
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
))->clone();
BackRestTestBackup_Create($bRemote, false);
#
# # Create the db/common/pg_xlog directory
# mkdir($strXlogPath)
# or confess 'Unable to create ${strXlogPath} path';
$bCreate = false;
}
BackRestTestCommon_ConfigCreate('db',
($bRemote ? REMOTE_BACKUP : undef),
$bCompress,
$bChecksum, # checksum
undef, # hardlink
undef, # thread-max
$bArchiveAsync,
$bCompressAsync);
my $strCommand = BackRestTestCommon_CommandMainGet() . ' --config=' . BackRestTestCommon_DbPathGet() .
'/pg_backrest.conf --stanza=db archive-push';
# Loop through backups
for (my $iBackup = 1; $iBackup <= 3; $iBackup++)
{
my $strArchiveFile;
# Loop through archive files
for (my $iArchive = 1; $iArchive <= $iArchiveMax; $iArchive++)
{
# Construct the archive filename
my $iArchiveNo = (($iBackup - 1) * $iArchiveMax + ($iArchive - 1)) + 1;
if ($iArchiveNo > 255)
{
confess 'backup total * archive total cannot be greater than 255';
}
$strArchiveFile = uc(sprintf('0000000100000001%08x', $iArchiveNo));
&log(INFO, ' backup ' . sprintf('%02d', $iBackup) .
', archive ' .sprintf('%02x', $iArchive) .
" - ${strArchiveFile}");
my $strSourceFile = "${strXlogPath}/${strArchiveFile}";
$oFile->copy(PATH_DB_ABSOLUTE, $strArchiveTestFile, # Source file
PATH_DB_ABSOLUTE, $strSourceFile, # Destination file
false, # Source is not compressed
false, # Destination is not compressed
undef, undef, undef, # Unused params
true); # Create path if it does not exist
BackRestTestCommon_Execute($strCommand . " ${strSourceFile}");
# Build the archive name to check for at the destination
my $strArchiveCheck = $strArchiveFile;
if ($bChecksum)
{
$strArchiveCheck .= "-${strArchiveChecksum}";
}
if ($bCompress)
{
$strArchiveCheck .= '.gz';
}
if (!$oFile->exists(PATH_BACKUP_ARCHIVE, $strArchiveCheck))
{
sleep(1);
if (!$oFile->exists(PATH_BACKUP_ARCHIVE, $strArchiveCheck))
{
confess 'unable to find ' . $oFile->path_get(PATH_BACKUP_ARCHIVE, $strArchiveCheck);
}
}
}
# !!! Need to put in tests for .backup files here
}
}
}
}
}
$bCreate = true;
}
if (BackRestTestCommon_Cleanup())
{
&log(INFO, 'cleanup');
BackRestTestBackup_Drop();
}
}
#-------------------------------------------------------------------------------------------------------------------------------
# Test archive-get
#-------------------------------------------------------------------------------------------------------------------------------
if ($strTest eq 'all' || $strTest eq 'archive-get')
{
$iRun = 0;
$bCreate = true;
my $oFile;
&log(INFO, "Test archive-get\n");
for (my $bRemote = false; $bRemote <= true; $bRemote++)
{
for (my $bCompress = false; $bCompress <= true; $bCompress++)
{
for (my $bChecksum = false; $bChecksum <= true; $bChecksum++)
{
for (my $bExists = false; $bExists <= true; $bExists++)
{
# Increment the run, log, and decide whether this unit test should be run
if (!BackRestTestCommon_Run(++$iRun,
"rmt ${bRemote}, cmp ${bCompress}, chk ${bChecksum}, exists ${bExists}")) {next}
# Create the test directory
if ($bCreate)
{
# Create the file object
$oFile = (BackRest::File->new
(
strStanza => $strStanza,
strBackupPath => BackRestTestCommon_BackupPathGet(),
strRemote => $bRemote ? 'backup' : undef,
oRemote => $bRemote ? $oRemote : undef
))->clone();
BackRestTestBackup_Create($bRemote, false);
# Create the db/common/pg_xlog directory
mkdir($strXlogPath)
or confess "Unable to create ${strXlogPath} path";
$bCreate = false;
}
BackRestTestCommon_ConfigCreate('db', # local
($bRemote ? REMOTE_BACKUP : undef), # remote
$bCompress, # compress
$bChecksum, # checksum
undef, # hardlink
undef, # thread-max
undef, # archive-async
undef); # compress-async
my $strCommand = BackRestTestCommon_CommandMainGet() . ' --config=' . BackRestTestCommon_DbPathGet() .
'/pg_backrest.conf --stanza=db archive-get';
if ($bExists)
{
# Loop through archive files
my $strArchiveFile;
for (my $iArchiveNo = 1; $iArchiveNo <= $iArchiveMax; $iArchiveNo++)
{
# Construct the archive filename
if ($iArchiveNo > 255)
{
confess 'backup total * archive total cannot be greater than 255';
}
$strArchiveFile = uc(sprintf('0000000100000001%08x', $iArchiveNo));
&log(INFO, ' archive ' .sprintf('%02x', $iArchiveNo) .
" - ${strArchiveFile}");
my $strSourceFile = $strArchiveFile;
if ($bChecksum)
{
$strSourceFile .= "-${strArchiveChecksum}";
}
if ($bCompress)
{
$strSourceFile .= '.gz';
}
$oFile->copy(PATH_DB_ABSOLUTE, $strArchiveTestFile, # Source file
PATH_BACKUP_ARCHIVE, $strSourceFile, # Destination file
false, # Source is not compressed
$bCompress, # Destination compress based on test
undef, undef, undef, # Unused params
true); # Create path if it does not exist
my $strDestinationFile = "${strXlogPath}/${strArchiveFile}";
BackRestTestCommon_Execute($strCommand . " ${strArchiveFile} ${strDestinationFile}");
# Check that the destination file exists
if ($oFile->exists(PATH_DB_ABSOLUTE, $strDestinationFile))
{
if ($oFile->hash(PATH_DB_ABSOLUTE, $strDestinationFile) ne $strArchiveChecksum)
{
confess "archive file hash does not match ${strArchiveChecksum}";
}
}
else
{
confess 'archive file is not in destination';
}
}
}
else
{
if (BackRestTestCommon_Execute($strCommand . " 000000090000000900000009 ${strXlogPath}/RECOVERYXLOG",
false, true) != 1)
{
confess 'archive-get should return 1 when archive log is not present';
}
}
$bCreate = true;
}
}
}
}
if (BackRestTestCommon_Cleanup())
{
&log(INFO, 'cleanup');
BackRestTestBackup_Drop();
}
}
#-------------------------------------------------------------------------------------------------------------------------------
# Test full
#-------------------------------------------------------------------------------------------------------------------------------
if ($strTest eq 'all' || $strTest eq 'full')
{
$iRun = 0;
$bCreate = true;
&log(INFO, "Test Full Backup\n");
for (my $bRemote = false; $bRemote <= true; $bRemote++)
{
for (my $bLarge = false; $bLarge <= false; $bLarge++)
{
for (my $bCompress = false; $bCompress <= false; $bCompress++)
{
for (my $bChecksum = false; $bChecksum <= false; $bChecksum++)
{
for (my $bHardlink = false; $bHardlink <= true; $bHardlink++)
{
for (my $bArchiveAsync = false; $bArchiveAsync <= $bRemote; $bArchiveAsync++)
{
# Increment the run, log, and decide whether this unit test should be run
if (!BackRestTestCommon_Run(++$iRun,
"rmt ${bRemote}, lrg ${bLarge}, cmp ${bCompress}, chk ${bChecksum}, " .
"hardlink ${bHardlink}, arc_async ${bArchiveAsync}")) {next}
# Create the test directory
if ($bCreate)
{
BackRestTestBackup_Create($bRemote);
$bCreate = false;
}
# Create db config
BackRestTestCommon_ConfigCreate('db', # local
$bRemote ? REMOTE_BACKUP : undef, # remote
$bCompress, # compress
$bChecksum, # checksum
$bRemote ? undef : $bHardlink, # hardlink
$bRemote ? undef : $iThreadMax, # thread-max
$bArchiveAsync, # archive-async
undef); # compress-async
# Create backup config
if ($bRemote)
{
BackRestTestCommon_ConfigCreate('backup', # local
$bRemote ? REMOTE_DB : undef, # remote
$bCompress, # compress
$bChecksum, # checksum
$bHardlink, # hardlink
$iThreadMax, # thread-max
undef, # archive-async
undef); # compress-async
}
# Create the backup command
my $strCommand = BackRestTestCommon_CommandMainGet() . ' --config=' .
($bRemote ? BackRestTestCommon_BackupPathGet() : BackRestTestCommon_DbPathGet()) .
"/pg_backrest.conf --test --type=incr --stanza=${strStanza} backup";
# Run the full/incremental tests
for (my $iFull = 1; $iFull <= 1; $iFull++)
{
for (my $iIncr = 0; $iIncr <= 2; $iIncr++)
{
&log(INFO, ' ' . ($iIncr == 0 ? ('full ' . sprintf('%02d', $iFull)) :
(' incr ' . sprintf('%02d', $iIncr))));
# Create a table in each backup to check references
BackRestTestBackup_PgExecute("create table test_backup_${iIncr} (id int)", true);
# Create a table to be dropped to test missing file code
BackRestTestBackup_PgExecute('create table test_drop (id int)');
BackRestTestCommon_ExecuteBegin($strCommand, $bRemote);
if (BackRestTestCommon_ExecuteEnd(TEST_MANIFEST_BUILD))
{
BackRestTestBackup_PgExecute('drop table test_drop', true);
BackRestTestCommon_ExecuteEnd();
}
else
{
confess &log(ERROR, 'test point ' . TEST_MANIFEST_BUILD . ' was not found');
}
}
}
$bCreate = true;
}
}
}
}
}
}
if (BackRestTestCommon_Cleanup())
{
&log(INFO, 'cleanup');
BackRestTestBackup_Drop();
}
}
}
1;
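
For orientation, the backup command assembled in the full-backup loop above expands to something like the following (paths here are illustrative; the stanza name and config file locations come from the CommonTest.pm module below):

```
/path/to/backrest/bin/pg_backrest.pl --config=/path/to/test/db/pg_backrest.conf --test --type=incr --stanza=db backup
```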

test/lib/BackRestTest/CommonTest.pm

@@ -0,0 +1,436 @@
#!/usr/bin/perl
####################################################################################################################################
# CommonTest.pm - Common globals used for testing
####################################################################################################################################
package BackRestTest::CommonTest;
####################################################################################################################################
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use Carp;
use File::Basename;
use Cwd 'abs_path';
use IPC::Open3;
use POSIX ':sys_wait_h';
use IO::Select;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::File;
use Exporter qw(import);
our @EXPORT = qw(BackRestTestCommon_Setup BackRestTestCommon_ExecuteBegin BackRestTestCommon_ExecuteEnd
BackRestTestCommon_Execute BackRestTestCommon_ExecuteBackRest
BackRestTestCommon_ConfigCreate BackRestTestCommon_Run BackRestTestCommon_Cleanup
BackRestTestCommon_PgSqlBinPathGet BackRestTestCommon_StanzaGet BackRestTestCommon_CommandMainGet
BackRestTestCommon_CommandRemoteGet BackRestTestCommon_HostGet BackRestTestCommon_UserGet
BackRestTestCommon_GroupGet BackRestTestCommon_UserBackRestGet BackRestTestCommon_TestPathGet
BackRestTestCommon_DataPathGet BackRestTestCommon_BackupPathGet BackRestTestCommon_ArchivePathGet
BackRestTestCommon_DbPathGet BackRestTestCommon_DbCommonPathGet BackRestTestCommon_DbPortGet);
my $strPgSqlBin;
my $strCommonStanza;
my $strCommonCommandMain;
my $strCommonCommandRemote;
my $strCommonCommandPsql;
my $strCommonHost;
my $strCommonUser;
my $strCommonGroup;
my $strCommonUserBackRest;
my $strCommonTestPath;
my $strCommonDataPath;
my $strCommonBackupPath;
my $strCommonArchivePath;
my $strCommonDbPath;
my $strCommonDbCommonPath;
my $iCommonDbPort;
my $iModuleTestRun;
my $bDryRun;
my $bNoCleanup;
# Execution globals
my $strErrorLog;
my $hError;
my $strOutLog;
my $hOut;
my $pId;
my $strCommand;
####################################################################################################################################
# BackRestTestCommon_Run
####################################################################################################################################
sub BackRestTestCommon_Run
{
my $iRun = shift;
my $strLog = shift;
if (defined($iModuleTestRun) && $iModuleTestRun != $iRun)
{
return false;
}
&log(INFO, 'run ' . sprintf('%03d', $iRun) . ' - ' . $strLog);
if ($bDryRun)
{
return false;
}
return true;
}
####################################################################################################################################
# BackRestTestCommon_Cleanup
####################################################################################################################################
sub BackRestTestCommon_Cleanup
{
return !$bNoCleanup && !$bDryRun;
}
####################################################################################################################################
# BackRestTestCommon_ExecuteBegin
####################################################################################################################################
sub BackRestTestCommon_ExecuteBegin
{
my $strCommandParam = shift;
my $bRemote = shift;
# Set defaults
$bRemote = defined($bRemote) ? $bRemote : false;
if ($bRemote)
{
$strCommand = "ssh ${strCommonUserBackRest}\@${strCommonHost} '${strCommandParam}'";
}
else
{
$strCommand = $strCommandParam;
}
$strErrorLog = '';
$hError = undef;
$strOutLog = '';
$hOut = undef;
&log(DEBUG, "executing command: ${strCommand}");
# Execute the command
$pId = open3(undef, $hOut, $hError, $strCommand);
}
####################################################################################################################################
# BackRestTestCommon_ExecuteEnd
####################################################################################################################################
sub BackRestTestCommon_ExecuteEnd
{
my $strTest = shift;
my $bSuppressError = shift;
# Set defaults
$bSuppressError = defined($bSuppressError) ? $bSuppressError : false;
# Create select objects
my $oErrorSelect = IO::Select->new();
$oErrorSelect->add($hError);
my $oOutSelect = IO::Select->new();
$oOutSelect->add($hOut);
# While the process is running drain the stdout and stderr streams
while(waitpid($pId, WNOHANG) == 0)
{
# Drain the stderr stream
if ($oErrorSelect->can_read(.1))
{
while (my $strLine = readline($hError))
{
$strErrorLog .= $strLine;
}
}
# Drain the stdout stream
if ($oOutSelect->can_read(.1))
{
while (my $strLine = readline($hOut))
{
$strOutLog .= $strLine;
if (defined($strTest) && test_check($strLine, $strTest))
{
&log(DEBUG, "Found test ${strTest}");
return true;
}
}
}
}
# Check the exit status and output an error if needed
my $iExitStatus = ${^CHILD_ERROR_NATIVE} >> 8;
if ($iExitStatus != 0 && !$bSuppressError)
{
confess &log(ERROR, "command '${strCommand}' returned " . $iExitStatus . "\n" .
($strOutLog ne '' ? "STDOUT:\n${strOutLog}" : '') .
($strErrorLog ne '' ? "STDERR:\n${strErrorLog}" : ''));
}
else
{
&log(DEBUG, "suppressed error was ${iExitStatus}");
}
$hError = undef;
$hOut = undef;
return $iExitStatus;
}
####################################################################################################################################
# BackRestTestCommon_Execute
####################################################################################################################################
sub BackRestTestCommon_Execute
{
my $strCommand = shift;
my $bRemote = shift;
my $bSuppressError = shift;
BackRestTestCommon_ExecuteBegin($strCommand, $bRemote);
return BackRestTestCommon_ExecuteEnd(undef, $bSuppressError);
}
####################################################################################################################################
# BackRestTestCommon_Setup
####################################################################################################################################
sub BackRestTestCommon_Setup
{
my $strTestPathParam = shift;
my $strPgSqlBinParam = shift;
my $iModuleTestRunParam = shift;
my $bDryRunParam = shift;
my $bNoCleanupParam = shift;
my $strBasePath = dirname(dirname(abs_path($0)));
$strPgSqlBin = $strPgSqlBinParam;
$strCommonStanza = 'db';
$strCommonHost = '127.0.0.1';
$strCommonUser = getpwuid($<);
$strCommonGroup = getgrgid($();
$strCommonUserBackRest = 'backrest';
if (defined($strTestPathParam))
{
$strCommonTestPath = $strTestPathParam;
}
else
{
$strCommonTestPath = "${strBasePath}/test/test";
}
$strCommonDataPath = "${strBasePath}/test/data";
$strCommonBackupPath = "${strCommonTestPath}/backrest";
$strCommonArchivePath = "${strCommonTestPath}/archive";
$strCommonDbPath = "${strCommonTestPath}/db";
$strCommonDbCommonPath = "${strCommonTestPath}/db/common";
$strCommonCommandMain = "${strBasePath}/bin/pg_backrest.pl";
$strCommonCommandRemote = "${strBasePath}/bin/pg_backrest_remote.pl";
$strCommonCommandPsql = "${strPgSqlBin}/psql -X %option% -h ${strCommonDbPath}";
$iCommonDbPort = 6543;
$iModuleTestRun = $iModuleTestRunParam;
$bDryRun = $bDryRunParam;
$bNoCleanup = $bNoCleanupParam;
}
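# Setup is invoked once from test.pl before any test module runs, e.g.:
#
#   BackRestTestCommon_Setup($strTestPath, $strPgSqlBin, $iModuleTestRun, $bDryRun, $bNoCleanup);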
####################################################################################################################################
# BackRestTestCommon_ConfigCreate
####################################################################################################################################
sub BackRestTestCommon_ConfigCreate
{
my $strLocal = shift;
my $strRemote = shift;
my $bCompress = shift;
my $bChecksum = shift;
my $bHardlink = shift;
my $iThreadMax = shift;
my $bArchiveLocal = shift;
my $bCompressAsync = shift;
my %oParamHash;
if (defined($strRemote))
{
$oParamHash{'global:command'}{'remote'} = $strCommonCommandRemote;
}
$oParamHash{'global:command'}{'psql'} = $strCommonCommandPsql;
if (defined($strRemote) && $strRemote eq REMOTE_BACKUP)
{
$oParamHash{'global:backup'}{'host'} = $strCommonHost;
$oParamHash{'global:backup'}{'user'} = $strCommonUserBackRest;
}
elsif (defined($strRemote) && $strRemote eq REMOTE_DB)
{
$oParamHash{$strCommonStanza}{'host'} = $strCommonHost;
$oParamHash{$strCommonStanza}{'user'} = $strCommonUser;
}
$oParamHash{'global:log'}{'level-console'} = 'error';
$oParamHash{'global:log'}{'level-file'} = 'trace';
if ($strLocal eq REMOTE_BACKUP)
{
if (defined($bHardlink) && $bHardlink)
{
$oParamHash{'global:backup'}{'hardlink'} = 'y';
}
}
elsif ($strLocal eq REMOTE_DB)
{
if (defined($strRemote))
{
$oParamHash{'global:log'}{'level-console'} = 'trace';
}
if ($bArchiveLocal)
{
$oParamHash{'global:archive'}{path} = BackRestTestCommon_ArchivePathGet();
if (!$bCompressAsync)
{
$oParamHash{'global:archive'}{'compress_async'} = 'n';
}
}
}
else
{
confess "invalid local type ${strLocal}";
}
if (($strLocal eq REMOTE_BACKUP) || ($strLocal eq REMOTE_DB && !defined($strRemote)))
{
$oParamHash{'db:command:option'}{'psql'} = "--port=${iCommonDbPort}";
}
if (defined($bCompress) && !$bCompress)
{
$oParamHash{'global:backup'}{'compress'} = 'n';
}
if (defined($bChecksum) && !$bChecksum)
{
$oParamHash{'global:backup'}{'checksum'} = 'n';
}
$oParamHash{$strCommonStanza}{'path'} = $strCommonDbCommonPath;
$oParamHash{'global:backup'}{'path'} = $strCommonBackupPath;
if (defined($iThreadMax))
{
$oParamHash{'global:backup'}{'thread-max'} = $iThreadMax;
}
# Write out the configuration file
my $strFile = BackRestTestCommon_TestPathGet() . '/pg_backrest.conf';
config_save($strFile, \%oParamHash);
# Move the configuration file based on local
if ($strLocal eq 'db')
{
rename($strFile, BackRestTestCommon_DbPathGet() . '/pg_backrest.conf')
or die "unable to move ${strFile} to " . BackRestTestCommon_DbPathGet() . '/pg_backrest.conf path';
}
elsif ($strLocal eq 'backup' && !defined($strRemote))
{
rename($strFile, BackRestTestCommon_BackupPathGet() . '/pg_backrest.conf')
or die "unable to move ${strFile} to " . BackRestTestCommon_BackupPathGet() . '/pg_backrest.conf path';
}
else
{
BackRestTestCommon_Execute("mv ${strFile} " . BackRestTestCommon_BackupPathGet() . '/pg_backrest.conf', true);
}
}
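# For reference, a local 'db' configuration written by this function comes out roughly as
# follows (paths are illustrative and depend on the --test-path and --pgsql-bin settings):
#
#   [global:command]
#   psql=<pgsql-bin>/psql -X %option% -h <test-path>/db
#
#   [global:log]
#   level-console=error
#   level-file=trace
#
#   [global:backup]
#   path=<test-path>/backrest
#
#   [db]
#   path=<test-path>/db/common
#
#   [db:command:option]
#   psql=--port=6543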
####################################################################################################################################
# Get Methods
####################################################################################################################################
sub BackRestTestCommon_PgSqlBinPathGet
{
return $strPgSqlBin;
}
sub BackRestTestCommon_StanzaGet
{
return $strCommonStanza;
}
sub BackRestTestCommon_CommandMainGet
{
return $strCommonCommandMain;
}
sub BackRestTestCommon_CommandRemoteGet
{
return $strCommonCommandRemote;
}
sub BackRestTestCommon_HostGet
{
return $strCommonHost;
}
sub BackRestTestCommon_UserGet
{
return $strCommonUser;
}
sub BackRestTestCommon_GroupGet
{
return $strCommonGroup;
}
sub BackRestTestCommon_UserBackRestGet
{
return $strCommonUserBackRest;
}
sub BackRestTestCommon_TestPathGet
{
return $strCommonTestPath;
}
sub BackRestTestCommon_DataPathGet
{
return $strCommonDataPath;
}
sub BackRestTestCommon_BackupPathGet
{
return $strCommonBackupPath;
}
sub BackRestTestCommon_ArchivePathGet
{
return $strCommonArchivePath;
}
sub BackRestTestCommon_DbPathGet
{
return $strCommonDbPath;
}
sub BackRestTestCommon_DbCommonPathGet
{
return $strCommonDbCommonPath;
}
sub BackRestTestCommon_DbPortGet
{
return $iCommonDbPort;
}
1;
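
The execute helpers above are typically used as a begin/end pair when a test needs to interact with a running command at a test point, as in the full-backup test earlier in this commit. A minimal sketch (the command string, remote flag, and test point constant are the ones used by that test):

```
BackRestTestCommon_ExecuteBegin($strCommand, $bRemote);

if (BackRestTestCommon_ExecuteEnd(TEST_MANIFEST_BUILD))
{
    # The manifest-build test point was seen in the output - drop a table to exercise
    # the missing-file code, then wait for the backup to finish
    BackRestTestBackup_PgExecute('drop table test_drop', true);
    BackRestTestCommon_ExecuteEnd();
}
```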

1150
test/lib/BackRestTest/FileTest.pm Executable file

File diff suppressed because it is too large

test/lib/BackRestTest/UtilityTest.pm

@@ -0,0 +1,128 @@
#!/usr/bin/perl
####################################################################################################################################
# UtilityTest.pm - Unit Tests for BackRest::Utility
####################################################################################################################################
package BackRestTest::UtilityTest;
####################################################################################################################################
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use Carp;
use File::Basename;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use BackRest::File;
use BackRestTest::CommonTest;
use Exporter qw(import);
our @EXPORT = qw(BackRestTestUtility_Test);
####################################################################################################################################
# BackRestTestUtility_Drop
####################################################################################################################################
sub BackRestTestUtility_Drop
{
# Remove the test directory
system('rm -rf ' . BackRestTestCommon_TestPathGet()) == 0
or die 'unable to remove ' . BackRestTestCommon_TestPathGet() . ' path';
}
####################################################################################################################################
# BackRestTestUtility_Create
####################################################################################################################################
sub BackRestTestUtility_Create
{
# Drop the old test directory
BackRestTestUtility_Drop();
# Create the test directory
mkdir(BackRestTestCommon_TestPathGet(), oct('0770'))
or confess 'Unable to create ' . BackRestTestCommon_TestPathGet() . ' path';
}
####################################################################################################################################
# BackRestTestUtility_Test
####################################################################################################################################
sub BackRestTestUtility_Test
{
my $strTest = shift;
# Setup test variables
my $iRun;
my $bCreate;
my $strTestPath = BackRestTestCommon_TestPathGet();
# Print test banner
&log(INFO, 'UTILITY MODULE ******************************************************************');
#-------------------------------------------------------------------------------------------------------------------------------
# Test config
#-------------------------------------------------------------------------------------------------------------------------------
if ($strTest eq 'all' || $strTest eq 'config')
{
$iRun = 0;
$bCreate = true;
my $oFile = BackRest::File->new();
&log(INFO, "Test config\n");
# Increment the run, log, and decide whether this unit test should be run
if (BackRestTestCommon_Run(++$iRun, 'base'))
{
# Create the test directory
if ($bCreate)
{
BackRestTestUtility_Create();
$bCreate = false;
}
# Generate a test config
my %oConfig;
$oConfig{test1}{key1} = 'value';
$oConfig{test1}{key2} = 'value';
$oConfig{test2}{key1} = 'value';
$oConfig{test3}{key1} = 'value';
$oConfig{test3}{key2}{sub1} = 'value';
$oConfig{test3}{key2}{sub2} = 'value';
# Save the test config
my $strFile = "${strTestPath}/config.cfg";
config_save($strFile, \%oConfig);
my $strConfigHash = $oFile->hash(PATH_ABSOLUTE, $strFile);
# Reload the test config
my %oConfigTest;
config_load($strFile, \%oConfigTest);
# Resave the test config and compare hashes
my $strFileTest = "${strTestPath}/config-test.cfg";
config_save($strFileTest, \%oConfigTest);
my $strConfigTestHash = $oFile->hash(PATH_ABSOLUTE, $strFileTest);
if ($strConfigHash ne $strConfigTestHash)
{
confess "config hash ${strConfigHash} != ${strConfigTestHash}";
}
if (BackRestTestCommon_Cleanup())
{
&log(INFO, 'cleanup');
BackRestTestUtility_Drop();
}
}
}
}
1;
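
Outside the test harness, the config round trip exercised above reduces to the following sketch (assuming only config_save and config_load from BackRest::Utility; the path and keys are illustrative):

```
use BackRest::Utility;

# Build a config hash keyed by {section}{key}, save it, then load it back
my %oConfig;
$oConfig{'global:backup'}{'path'} = '/path/to/backup';
$oConfig{'db'}{'path'} = '/path/to/db';

config_save('/tmp/pg_backrest_example.conf', \%oConfig);

my %oConfigReload;
config_load('/tmp/pg_backrest_example.conf', \%oConfigReload);
```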

test/test.pl

@@ -1,244 +1,148 @@
#!/usr/bin/perl
####################################################################################################################################
# test.pl - BackRest Unit Tests
####################################################################################################################################
# /Library/PostgreSQL/9.3/bin/pg_ctl start -o "-c port=7000" -D /Users/backrest/test/backup/db/20140205-103801F/base -l /Users/backrest/test/backup/db/20140205-103801F/base/postgresql.log -w -s
####################################################################################################################################
# Perl includes
####################################################################################################################################
use strict;
use warnings;
use Carp;
#use strict;
use DBI;
use IPC::System::Simple qw(capture);
use Config::IniFiles;
use File::Find;
use File::Basename;
use Getopt::Long;
use Cwd 'abs_path';
use Cwd;
sub trim
use lib dirname($0) . '/../lib';
use BackRest::Utility;
use lib dirname($0) . '/lib';
use BackRestTest::CommonTest;
use BackRestTest::UtilityTest;
use BackRestTest::FileTest;
use BackRestTest::BackupTest;
####################################################################################################################################
# Command line parameters
####################################################################################################################################
my $strLogLevel = 'off'; # Log level for tests
my $strModule = 'all';
my $strModuleTest = 'all';
my $iModuleTestRun = undef;
my $bDryRun = false;
my $bNoCleanup = false;
my $strPgSqlBin;
my $strTestPath;
GetOptions ('pgsql-bin=s' => \$strPgSqlBin,
'test-path=s' => \$strTestPath,
'log-level=s' => \$strLogLevel,
'module=s' => \$strModule,
'module-test=s' => \$strModuleTest,
'module-test-run=s' => \$iModuleTestRun,
'dry-run' => \$bDryRun,
'no-cleanup' => \$bNoCleanup)
or die 'error in command line arguments';
####################################################################################################################################
# Setup
####################################################################################################################################
# Set a neutral umask so tests work as expected
umask(0);
# Set console log level from the command-line option
log_level_set(undef, uc($strLogLevel));
if ($strModuleTest ne 'all' && $strModule eq 'all')
{
local($strBuffer) = @_;
$strBuffer =~ s/^\s+|\s+$//g;
return $strBuffer;
confess "--module must be provided for test \"${strModuleTest}\"";
}
sub execute
if (defined($iModuleTestRun) && $strModuleTest eq 'all')
{
local($strCommand) = @_;
my $strOutput;
confess "--module-test must be provided for run \"${iModuleTestRun}\"";
}
print("$strCommand");
$strOutput = trim(capture($strCommand));
# Make sure PG bin has been defined
if (!defined($strPgSqlBin))
{
confess 'pgsql-bin was not defined';
}
if ($strOutput eq "")
{
print(" ... complete\n\n");
}
else
####################################################################################################################################
# Make sure version number matches in README.md and VERSION
####################################################################################################################################
my $hReadMe;
my $strLine;
my $bMatch = false;
my $strVersion = version_get();
if (!open($hReadMe, '<', dirname($0) . '/../README.md'))
{
confess 'unable to open README.md';
}
while ($strLine = readline($hReadMe))
{
if ($strLine =~ /^\#\#\# v/)
{
print(" ... complete\n$strOutput\n\n");
$bMatch = substr($strLine, 5, length($strVersion)) eq $strVersion;
last;
}
return $strOutput;
}
sub pg_create
if (!$bMatch)
{
local($strPgBinPath, $strTestPath, $strTestDir, $strArchiveDir, $strBackupDir) = @_;
execute("mkdir $strTestPath");
execute("mkdir $strTestPath/$strTestDir");
execute("mkdir $strTestPath/$strTestDir/ts1");
execute("mkdir $strTestPath/$strTestDir/ts2");
execute($strPgBinPath . "/initdb -D $strTestPath/$strTestDir/common -A trust -k");
execute("mkdir $strTestPath/$strBackupDir");
# execute("mkdir -p $strTestPath/$strArchiveDir");
confess "unable to find version ${strVersion} as last revision in README.md";
}
sub pg_start
####################################################################################################################################
# Clean whitespace only if test.pl is being run from the test directory in the backrest repo
####################################################################################################################################
my $hVersion;
if (-e './test.pl' && -e '../bin/pg_backrest.pl' && open($hVersion, '<', '../VERSION'))
{
local($strPgBinPath, $strDbPath, $strPort, $strAchiveCommand) = @_;
my $strCommand = "$strPgBinPath/pg_ctl start -o \"-c port=$strPort -c checkpoint_segments=1 -c wal_level=archive -c archive_mode=on -c archive_command=\'$strAchiveCommand\'\" -D $strDbPath -l $strDbPath/postgresql.log -w -s";
execute($strCommand);
}
my $strTestVersion = readline($hVersion);
sub pg_password_set
{
local($strPgBinPath, $strPath, $strUser, $strPort) = @_;
my $strCommand = "$strPgBinPath/psql --port=$strPort -c \"alter user $strUser with password 'password'\" postgres";
execute($strCommand);
}
sub pg_stop
{
local($strPgBinPath, $strPath) = @_;
my $strCommand = "$strPgBinPath/pg_ctl stop -D $strPath -w -s -m fast";
execute($strCommand);
}
sub pg_drop
{
local($strTestPath) = @_;
my $strCommand = "rm -rf $strTestPath";
execute($strCommand);
}
sub pg_execute
{
local($dbh, $strSql) = @_;
print($strSql);
$sth = $dbh->prepare($strSql);
$sth->execute() or die;
$sth->finish();
print(" ... complete\n\n");
}
sub archive_command_build
{
my $strBackRestBinPath = shift;
my $strDestinationPath = shift;
my $bCompression = shift;
my $bChecksum = shift;
my $strCommand = "$strBackRestBinPath/pg_backrest.pl --stanza=db --config=$strBackRestBinPath/pg_backrest.conf";
# if (!$bCompression)
# {
# $strCommand .= " --no-compression"
# }
#
# if (!$bChecksum)
# {
# $strCommand .= " --no-checksum"
# }
return $strCommand . " archive-push %p";
}
sub wait_for_file
{
my $strDir = shift;
my $strRegEx = shift;
my $iSeconds = shift;
my $lTime = time();
my $hDir;
while ($lTime > time() - $iSeconds)
if (defined($strTestVersion) && $strVersion eq trim($strTestVersion))
{
opendir $hDir, $strDir or die "Could not open dir: $!\n";
my @stryFile = grep(/$strRegEx/i, readdir $hDir);
close $hDir;
if (scalar @stryFile == 1)
{
return;
}
sleep(1);
BackRestTestCommon_Execute(
"find .. -type f -not -path \"../.git/*\" -not -path \"*.DS_Store\" -not -path \"../test/test/*\" " .
"-not -path \"../test/data/*\" " .
"-exec sh -c 'for i;do echo \"\$i\" && sed 's/[[:space:]]*\$//' \"\$i\">/tmp/.\$\$ && cat /tmp/.\$\$ " .
"> \"\$i\";done' arg0 {} + > /dev/null", false, true);
}
die "could not find $strDir/$strRegEx after $iSeconds second(s)";
close($hVersion);
}
sub pgbr_backup
####################################################################################################################################
# Runs tests
####################################################################################################################################
BackRestTestCommon_Setup($strTestPath, $strPgSqlBin, $iModuleTestRun, $bDryRun, $bNoCleanup);
# &log(INFO, "Testing with test_path = " . BackRestTestCommon_TestPathGet() . ", host = {strHost}, user = {strUser}, " .
# "group = {strGroup}");
if ($strModule eq 'all' || $strModule eq 'utility')
{
my $strBackRestBinPath = shift;
my $strCluster = shift;
my $strCommand = "$strBackRestBinPath/pg_backrest.pl --config=$strBackRestBinPath/pg_backrest.conf backup $strCluster";
execute($strCommand);
BackRestTestUtility_Test($strModuleTest);
}
my $strUser = execute('whoami');
my $strTestPath = "/Users/dsteele/test";
my $strDbDir = "db";
my $strArchiveDir = "backup/db/archive";
my $strBackupDir = "backup";
my $strPgBinPath = "/Library/PostgreSQL/9.3/bin";
my $strPort = "6001";
my $strBackRestBinPath = "/Users/dsteele/pg_backrest";
my $strArchiveCommand = archive_command_build($strBackRestBinPath, "$strTestPath/$strArchiveDir", 0, 0);
################################################################################
# Stop the current test cluster if it is running and create a new one
################################################################################
eval {pg_stop($strPgBinPath, "$strTestPath/$strDbDir")};
if ($@)
if ($strModule eq 'all' || $strModule eq 'file')
{
print(" ... unable to stop pg server (ignoring): " . trim($@) . "\n\n")
BackRestTestFile_Test($strModuleTest);
}
pg_drop($strTestPath);
pg_create($strPgBinPath, $strTestPath, $strDbDir, $strArchiveDir, $strBackupDir);
pg_start($strPgBinPath, "$strTestPath/$strDbDir/common", $strPort, $strArchiveCommand);
pg_password_set($strPgBinPath, "$strTestPath/$strDbDir/common", $strUser, $strPort);
if ($strModule eq 'all' || $strModule eq 'backup')
{
BackRestTestBackup_Test($strModuleTest);
}
################################################################################
# Connect and start tests
################################################################################
$dbh = DBI->connect("dbi:Pg:dbname=postgres;port=$strPort;host=127.0.0.1", $strUser,
'password', {AutoCommit => 1});
pg_execute($dbh, "create tablespace ts1 location '$strTestPath/$strDbDir/ts1'");
pg_execute($dbh, "create tablespace ts2 location '$strTestPath/$strDbDir/ts2'");
pg_execute($dbh, "create table test (id int)");
pg_execute($dbh, "create table test_ts1 (id int) tablespace ts1");
pg_execute($dbh, "create table test_ts2 (id int) tablespace ts1");
pg_execute($dbh, "insert into test values (1)");
pg_execute($dbh, "select pg_switch_xlog()");
execute("mkdir -p $strTestPath/$strArchiveDir/0000000100000000");
# Test for archive log file 000000010000000000000001
wait_for_file("$strTestPath/$strArchiveDir/0000000100000000", "^000000010000000000000001\$", 5);
# Turn on log checksum for the next test
$dbh->disconnect();
pg_stop($strPgBinPath, "$strTestPath/$strDbDir/common");
$strArchiveCommand = archive_command_build($strBackRestBinPath, "$strTestPath/$strArchiveDir", 0, 1);
pg_start($strPgBinPath, "$strTestPath/$strDbDir/common", $strPort, $strArchiveCommand);
$dbh = DBI->connect("dbi:Pg:dbname=postgres;port=$strPort;host=127.0.0.1", $strUser,
'password', {AutoCommit => 1});
# Write another value into the test table
pg_execute($dbh, "insert into test values (2)");
pg_execute($dbh, "select pg_switch_xlog()");
# Test for archive log file 000000010000000000000002
wait_for_file("$strTestPath/$strArchiveDir/0000000100000000", "^000000010000000000000002-([a-f]|[0-9]){40}\$", 5);
# Turn on log compression and checksum for the next test
$dbh->disconnect();
pg_stop($strPgBinPath, "$strTestPath/$strDbDir/common");
$strArchiveCommand = archive_command_build($strBackRestBinPath, "$strTestPath/$strArchiveDir", 1, 1);
pg_start($strPgBinPath, "$strTestPath/$strDbDir/common", $strPort, $strArchiveCommand);
$dbh = DBI->connect("dbi:Pg:dbname=postgres;port=$strPort;host=127.0.0.1", $strUser,
'password', {AutoCommit => 1});
# Write another value into the test table
pg_execute($dbh, "insert into test values (3)");
pg_execute($dbh, "select pg_switch_xlog()");
# Test for archive log file 000000010000000000000003
wait_for_file("$strTestPath/$strArchiveDir/0000000100000000", "^000000010000000000000003-([a-f]|[0-9]){40}\\.gz\$", 5);
$dbh->disconnect();
################################################################################
# Stop the server
################################################################################
#pg_stop($strPgBinPath, "$strTestPath/$strDbDir/common");
################################################################################
# Start an offline backup
################################################################################
#pgbr_backup($strBackRestBinPath, "db");
if (!$bDryRun)
{
&log(ASSERT, 'TESTS COMPLETED SUCCESSFULLY (DESPITE ANY ERROR MESSAGES YOU SAW)');
}
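
With the rewritten driver in place, a typical test run looks something like this (paths and module selections are illustrative; per the option handling above, only --pgsql-bin is strictly required):

```
./test.pl --pgsql-bin=/usr/lib/postgresql/9.3/bin --test-path=/home/backrest/test \
    --module=backup --module-test=full --module-test-run=1 --log-level=info
```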