diff --git a/README.md b/README.md
index 5810d195a..336234248 100644
--- a/README.md
+++ b/README.md
@@ -59,6 +59,8 @@ cpanm Net::OpenSSH
cpanm IPC::System::Simple
cpanm Digest::SHA
cpanm Compress::ZLib
+cpanm threads (update this package)
+cpanm Thread::Queue (update this package)
```
* Install PgBackRest
@@ -196,7 +198,7 @@ Perform a database restore. This command is generally run manually, but there ar
The backup set to be restored. `latest` will restore the latest backup, otherwise provide the name of the backup to restore.
```
required: n
-default: default
+default: latest
example: --set=20150131-153358F_20150131-153401I
```
@@ -470,9 +472,9 @@ The `general` section defines settings that are shared between multiple operatio
Set the buffer size used for copy, compress, and uncompress functions. A maximum of 3 buffers will be in use at a time per thread. An additional maximum of 256K per thread may be used for zlib buffers.
```
required: n
-default: 1048576
-allow: 4096 - 8388608
-example: buffer-size=16384
+default: 4194304
+allow: 16384 - 8388608
+example: buffer-size=32768
```
##### `compress` key
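With the new defaults above, the worst-case copy-buffer memory per thread can be estimated as three buffers at `buffer-size` plus up to 256K of zlib buffers. A quick sketch of that arithmetic (plain Perl; the numbers are taken directly from the defaults documented above):

```perl
use strict;
use warnings;

# Worst-case buffer memory per thread, using the defaults documented above
my $iBufferSize = 4194304;      # new default buffer-size (4MiB)
my $iBufferMax  = 3;            # at most 3 buffers in use at a time per thread
my $iZlibMax    = 256 * 1024;   # up to an additional 256K of zlib buffers

my $iPerThread = $iBufferSize * $iBufferMax + $iZlibMax;

printf("worst case per thread: %d bytes (%.2f MiB)\n",
       $iPerThread, $iPerThread / 1048576);
```

With the 4MiB default this works out to roughly 12.25MiB per thread, which is worth keeping in mind when raising `thread-max`.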
@@ -621,7 +623,7 @@ If this occurs then the archive log stream will be interrupted and PITR will not
The purpose of this feature is to prevent the log volume from filling up at which point Postgres will stop completely. Better to lose the backup than have the database go down.
-To start normal archiving again you'll need to remove the stop file which will be located at `${archive-path}/lock/${stanza}-archive.stop` where `${archive-path}` is the path set in the `archive` section, and `${stanza}` is the backup stanza.
+To start normal archiving again you'll need to remove the stop file, which will be located at `${repo-path}/lock/${stanza}-archive.stop`, where `${repo-path}` is the path set in the `general` section and `${stanza}` is the backup stanza.
```
required: n
example: archive-max-mb=1024
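As a sketch of the relocated stop-file path described above (the `$strRepoPath` and `$strStanza` values here are hypothetical examples; in practice they come from the `general` section of pg_backrest.conf and the `--stanza` option):

```perl
use strict;
use warnings;

# Hypothetical example values - substitute your actual repo path and stanza
my $strRepoPath = '/var/lib/backrest';
my $strStanza   = 'main';

# The stop file now lives under ${repo-path}/lock, not ${archive-path}/lock
my $strStopFile = "${strRepoPath}/lock/${strStanza}-archive.stop";

# Removing the file re-enables normal archiving (no error if it is absent)
unlink($strStopFile);

print "removed stop file (if present): ${strStopFile}\n";
```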
@@ -696,83 +698,93 @@ example: db-path=/data/db
## Release Notes
+### v0.60: better version support and WAL improvements
+
+* Pushing duplicate WAL now generates an error. This worked before only if checksums were disabled.
+
+* Database System IDs are used to make sure that all WAL in an archive matches up. This should help prevent misconfigurations that send WAL from multiple clusters to the same archive.
+
+* Regression tests working back to PostgreSQL 8.3.
+
+* Improved threading model by starting threads early and terminating them late.
+
### v0.50: restore and much more
-- Added restore functionality.
+* Added restore functionality.
-- All options can now be set on the command-line making pg_backrest.conf optional.
+* All options can now be set on the command-line making pg_backrest.conf optional.
-- De/compression is now performed without threads and checksum/size is calculated in stream. That means file checksums are no longer optional.
+* De/compression is now performed without threads and checksum/size is calculated in stream. That means file checksums are no longer optional.
-- Added option `--no-start-stop` to allow backups when Postgres is shut down. If `postmaster.pid` is present then `--force` is required to make the backup run (though if Postgres is running an inconsistent backup will likely be created). This option was added primarily for the purpose of unit testing, but there may be applications in the real world as well.
+* Added option `--no-start-stop` to allow backups when Postgres is shut down. If `postmaster.pid` is present then `--force` is required to make the backup run (though if Postgres is running an inconsistent backup will likely be created). This option was added primarily for the purpose of unit testing, but there may be applications in the real world as well.
-- Fixed broken checksums and now they work with normal and resumed backups. Finally realized that checksums and checksum deltas should be functionally separated and this simplied a number of things. Issue #28 has been created for checksum deltas.
+* Fixed broken checksums and now they work with normal and resumed backups. Finally realized that checksums and checksum deltas should be functionally separated and this simplified a number of things. Issue #28 has been created for checksum deltas.
-- Fixed an issue where a backup could be resumed from an aborted backup that didn't have the same type and prior backup.
+* Fixed an issue where a backup could be resumed from an aborted backup that didn't have the same type and prior backup.
-- Removed dependency on Moose. It wasn't being used extensively and makes for longer startup times.
+* Removed dependency on Moose. It wasn't being used extensively and makes for longer startup times.
-- Checksum for backup.manifest to detect corrupted/modified manifest.
+* Checksum for backup.manifest to detect corrupted/modified manifest.
-- Link `latest` always points to the last backup. This has been added for convenience and to make restores simpler.
+* Link `latest` always points to the last backup. This has been added for convenience and to make restores simpler.
-- More comprehensive unit tests in all areas.
+* More comprehensive unit tests in all areas.
### v0.30: Core Restructuring and Unit Tests
-- Complete rewrite of BackRest::File module to use a custom protocol for remote operations and Perl native GZIP and SHA operations. Compression is performed in threads rather than forked processes.
+* Complete rewrite of BackRest::File module to use a custom protocol for remote operations and Perl native GZIP and SHA operations. Compression is performed in threads rather than forked processes.
-- Fairly comprehensive unit tests for all the basic operations. More work to be done here for sure, but then there is always more work to be done on unit tests.
+* Fairly comprehensive unit tests for all the basic operations. More work to be done here for sure, but then there is always more work to be done on unit tests.
-- Removed dependency on Storable and replaced with a custom ini file implementation.
+* Removed dependency on Storable and replaced with a custom ini file implementation.
-- Added much needed documentation
+* Added much needed documentation
-- Numerous other changes that can only be identified with a diff.
+* Numerous other changes that can only be identified with a diff.
### v0.19: Improved Error Reporting/Handling
-- Working on improving error handling in the file object. This is not complete, but works well enough to find a few errors that have been causing us problems (notably, find is occasionally failing building the archive async manifest when system is under load).
+* Working on improving error handling in the file object. This is not complete, but works well enough to find a few errors that have been causing us problems (notably, find is occasionally failing building the archive async manifest when system is under load).
-- Found and squashed a nasty bug where `file_copy()` was defaulted to ignore errors. There was also an issue in file_exists that was causing the test to fail when the file actually did exist. Together they could have resulted in a corrupt backup with no errors, though it is very unlikely.
+* Found and squashed a nasty bug where `file_copy()` was defaulted to ignore errors. There was also an issue in file_exists that was causing the test to fail when the file actually did exist. Together they could have resulted in a corrupt backup with no errors, though it is very unlikely.
### v0.18: Return Soft Error When Archive Missing
-- The `archive-get` operation returns a 1 when the archive file is missing to differentiate from hard errors (ssh connection failure, file copy error, etc.) This lets Postgres know that that the archive stream has terminated normally. However, this does not take into account possible holes in the archive stream.
+* The `archive-get` operation returns a 1 when the archive file is missing to differentiate from hard errors (ssh connection failure, file copy error, etc.) This lets Postgres know that the archive stream has terminated normally. However, this does not take into account possible holes in the archive stream.
### v0.17: Warn When Archive Directories Cannot Be Deleted
-- If an archive directory which should be empty could not be deleted backrest was throwing an error. There's a good fix for that coming, but for the time being it has been changed to a warning so processing can continue. This was impacting backups as sometimes the final archive file would not get pushed if the first archive file had been in a different directory (plus some bad luck).
+* If an archive directory which should be empty could not be deleted backrest was throwing an error. There's a good fix for that coming, but for the time being it has been changed to a warning so processing can continue. This was impacting backups as sometimes the final archive file would not get pushed if the first archive file had been in a different directory (plus some bad luck).
### v0.16: RequestTTY=yes for SSH Sessions
-- Added `RequestTTY=yes` to ssh sesssions. Hoping this will prevent random lockups.
+* Added `RequestTTY=yes` to ssh sessions. Hoping this will prevent random lockups.
### v0.15: RequestTTY=yes for SSH Sessions
-- Added archive-get functionality to aid in restores.
+* Added archive-get functionality to aid in restores.
-- Added option to force a checkpoint when starting the backup `start-fast=y`.
+* Added option to force a checkpoint when starting the backup `start-fast=y`.
### v0.11: Minor Fixes
-- Removed `master_stderr_discard` option on database SSH connections. There have been occasional lockups and they could be related to issues originally seen in the file code.
+* Removed `master_stderr_discard` option on database SSH connections. There have been occasional lockups and they could be related to issues originally seen in the file code.
-- Changed lock file conflicts on backup and expire commands to ERROR. They were set to DEBUG due to a copy-and-paste from the archive locks.
+* Changed lock file conflicts on backup and expire commands to ERROR. They were set to DEBUG due to a copy-and-paste from the archive locks.
### v0.10: Backup and Archiving are Functional
-- No restore functionality, but the backup directories are consistent Postgres data directories. You'll need to either uncompress the files or turn off compression in the backup. Uncompressed backups on a ZFS (or similar) filesystem are a good option because backups can be restored locally via a snapshot to create logical backups or do spot data recovery.
+* No restore functionality, but the backup directories are consistent Postgres data directories. You'll need to either uncompress the files or turn off compression in the backup. Uncompressed backups on a ZFS (or similar) filesystem are a good option because backups can be restored locally via a snapshot to create logical backups or do spot data recovery.
-- Archiving is single-threaded. This has not posed an issue on our multi-terabyte databases with heavy write volume. Recommend a large WAL volume or to use the async option with a large volume nearby.
+* Archiving is single-threaded. This has not posed an issue on our multi-terabyte databases with heavy write volume. Recommend a large WAL volume or to use the async option with a large volume nearby.
-- Backups are multi-threaded, but the Net::OpenSSH library does not appear to be 100% threadsafe so it will very occasionally lock up on a thread. There is an overall process timeout that resolves this issue by killing the process. Yes, very ugly.
+* Backups are multi-threaded, but the Net::OpenSSH library does not appear to be 100% threadsafe so it will very occasionally lock up on a thread. There is an overall process timeout that resolves this issue by killing the process. Yes, very ugly.
-- Checksums are lost on any resumed backup. Only the final backup will record checksum on multiple resumes. Checksums from previous backups are correctly recorded and a full backup will reset everything.
+* Checksums are lost on any resumed backup. Only the final backup will record checksum on multiple resumes. Checksums from previous backups are correctly recorded and a full backup will reset everything.
-- The backup.manifest is being written as Storable because Config::IniFile does not seem to handle large files well. Would definitely like to save these as human-readable text.
+* The backup.manifest is being written as Storable because Config::IniFile does not seem to handle large files well. Would definitely like to save these as human-readable text.
-- Absolutely no documentation (outside the code). Well, excepting these release notes.
+* Absolutely no documentation (outside the code). Well, excepting these release notes.
## Recognition
diff --git a/VERSION b/VERSION
index c49766cb9..08072c181 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.50
+0.60
diff --git a/bin/pg_backrest.pl b/bin/pg_backrest.pl
index f2f95314b..33f9b6fd3 100755
--- a/bin/pg_backrest.pl
+++ b/bin/pg_backrest.pl
@@ -13,10 +13,16 @@ use Carp qw(confess);
use File::Basename;
use lib dirname($0) . '/../lib';
+use BackRest::Exception;
use BackRest::Utility;
use BackRest::Config;
-use BackRest::Remote;
+use BackRest::Remote qw(DB BACKUP NONE);
+use BackRest::Db;
use BackRest::File;
+use BackRest::Archive;
+use BackRest::Backup;
+use BackRest::Restore;
+use BackRest::ThreadGroup;
####################################################################################################################################
# Usage
@@ -62,7 +68,7 @@ pg_backrest.pl [options] [operation]
time - timestamp target
xid - transaction id target
preserve - preserve the existing recovery.conf
- none - no recovery past database becoming consistent
+ none - no recovery.conf generated
--target recovery target if type is name, time, or xid.
--target-exclusive stop just before the recovery target (default is inclusive).
--target-resume do not pause after recovery (default is to pause).
@@ -70,91 +76,30 @@ pg_backrest.pl [options] [operation]
=cut
-####################################################################################################################################
-# Global variables
-####################################################################################################################################
-my $oRemote; # Remote protocol object
-my $oLocal; # Local protocol object
-my $strRemote; # Defines which side is remote, DB or BACKUP
-
-####################################################################################################################################
-# REMOTE_GET - Get the remote object or create it if not exists
-####################################################################################################################################
-sub remote_get
-{
- my $bForceLocal = shift;
- my $iCompressLevel = shift;
- my $iCompressLevelNetwork = shift;
-
- # Return the remote if is already defined
- if (defined($oRemote))
- {
- return $oRemote;
- }
-
- # Return the remote when required
- if ($strRemote ne NONE && !$bForceLocal)
- {
- $oRemote = new BackRest::Remote
- (
- $strRemote eq DB ? optionGet(OPTION_DB_HOST) : optionGet(OPTION_BACKUP_HOST),
- $strRemote eq DB ? optionGet(OPTION_DB_USER) : optionGet(OPTION_BACKUP_USER),
- optionGet(OPTION_COMMAND_REMOTE),
- optionGet(OPTION_BUFFER_SIZE),
- $iCompressLevel, $iCompressLevelNetwork
- );
-
- return $oRemote;
- }
-
- # Otherwise return local
- if (!defined($oLocal))
- {
- $oLocal = new BackRest::Remote
- (
- undef, undef, undef,
- optionGet(OPTION_BUFFER_SIZE),
- $iCompressLevel, $iCompressLevelNetwork
- );
- }
-
- return $oLocal;
-}
-
####################################################################################################################################
# SAFE_EXIT - terminate all SSH sessions when the script is terminated
####################################################################################################################################
sub safe_exit
-{
- remote_exit();
-
- my $iTotal = backup_thread_kill();
-
- confess &log(ERROR, "process was terminated on signal, ${iTotal} threads stopped");
-}
-
-$SIG{TERM} = \&safe_exit;
-$SIG{HUP} = \&safe_exit;
-$SIG{INT} = \&safe_exit;
-
-####################################################################################################################################
-# REMOTE_EXIT - Close the remote object if it exists
-####################################################################################################################################
-sub remote_exit
{
my $iExitCode = shift;
- if (defined($oRemote))
- {
- $oRemote->thread_kill()
- }
+ &log(DEBUG, "safe exit called, terminating threads");
+
+ my $iTotal = threadGroupDestroy();
+ remoteDestroy();
if (defined($iExitCode))
{
exit $iExitCode;
}
+
+ &log(ERROR, "process terminated on signal or exception, ${iTotal} threads stopped");
}
+$SIG{TERM} = \&safe_exit;
+$SIG{HUP} = \&safe_exit;
+$SIG{INT} = \&safe_exit;
+
####################################################################################################################################
# START EVAL BLOCK TO CATCH ERRORS AND STOP THREADS
####################################################################################################################################
@@ -172,215 +117,22 @@ log_level_set(optionGet(OPTION_LOG_LEVEL_FILE), optionGet(OPTION_LOG_LEVEL_CONSO
!optionGet(OPTION_TEST) or test_set(optionGet(OPTION_TEST), optionGet(OPTION_TEST_DELAY));
####################################################################################################################################
-# DETERMINE IF THERE IS A REMOTE
+# Process archive commands
####################################################################################################################################
-# First check if backup is remote
-if (optionTest(OPTION_BACKUP_HOST))
+if (operationTest(OP_ARCHIVE_PUSH) || operationTest(OP_ARCHIVE_GET))
{
- $strRemote = BACKUP;
-}
-# Else check if db is remote
-elsif (optionTest(OPTION_DB_HOST))
-{
- # Don't allow both sides to be remote
- if (defined($strRemote))
- {
- confess &log(ERROR, 'db and backup cannot both be configured as remote');
- }
-
- $strRemote = DB;
-}
-else
-{
- $strRemote = NONE;
+ safe_exit(new BackRest::Archive()->process());
}
####################################################################################################################################
-# ARCHIVE-PUSH Command
+# Open the log file
####################################################################################################################################
-if (operationTest(OP_ARCHIVE_PUSH))
-{
- # Make sure the archive push operation happens on the db side
- if ($strRemote eq DB)
- {
- confess &log(ERROR, 'archive-push operation must run on the db host');
- }
-
- # If an archive section has been defined, use that instead of the backup section when operation is OP_ARCHIVE_PUSH
- my $bArchiveAsync = optionTest(OPTION_ARCHIVE_ASYNC);
- my $strArchivePath = optionGet(OPTION_REPO_PATH);
-
- # If logging locally then create the stop archiving file name
- my $strStopFile;
-
- if ($bArchiveAsync)
- {
- $strStopFile = "${strArchivePath}/lock/" . optionGet(OPTION_STANZA) . "-archive.stop";
- }
-
- # If an archive file is defined, then push it
- if (defined($ARGV[1]))
- {
- # If the stop file exists then discard the archive log
- if (defined($strStopFile))
- {
- if (-e $strStopFile)
- {
- &log(ERROR, "archive stop file (${strStopFile}) exists , discarding " . basename($ARGV[1]));
- remote_exit(0);
- }
- }
-
- # Get the compress flag
- my $bCompress = $bArchiveAsync ? false : optionGet(OPTION_COMPRESS);
-
- # Create the file object
- my $oFile = new BackRest::File
- (
- optionGet(OPTION_STANZA),
- $bArchiveAsync || $strRemote eq NONE ? optionGet(OPTION_REPO_PATH) : optionGet(OPTION_REPO_REMOTE_PATH),
- $bArchiveAsync ? NONE : $strRemote,
- remote_get($bArchiveAsync, optionGet(OPTION_COMPRESS_LEVEL),
- optionGet(OPTION_COMPRESS_LEVEL_NETWORK))
- );
-
- # Init backup
- backup_init
- (
- undef,
- $oFile,
- undef,
- $bCompress,
- undef
- );
-
- &log(INFO, 'pushing archive log ' . $ARGV[1] . ($bArchiveAsync ? ' asynchronously' : ''));
-
- archive_push(optionGet(OPTION_DB_PATH, false), $ARGV[1], $bArchiveAsync);
-
- # Exit if we are archiving async
- if (!$bArchiveAsync)
- {
- remote_exit(0);
- }
-
- # Fork and exit the parent process so the async process can continue
- if (!optionTest(OPTION_TEST_NO_FORK) && fork())
- {
- remote_exit(0);
- }
- # Else the no-fork flag has been specified for testing
- else
- {
- &log(INFO, 'No fork on archive local for TESTING');
- }
-
- # Start the async archive push
- &log(INFO, 'starting async archive-push');
- }
-
- # Create a lock file to make sure async archive-push does not run more than once
- my $strLockPath = "${strArchivePath}/lock/" . optionGet(OPTION_STANZA) . "-archive.lock";
-
- if (!lock_file_create($strLockPath))
- {
- &log(DEBUG, 'archive-push process is already running - exiting');
- remote_exit(0);
- }
-
- # Build the basic command string that will be used to modify the command during processing
- my $strCommand = $^X . ' ' . $0 . " --stanza=" . optionGet(OPTION_STANZA);
-
- # Get the new operational flags
- my $bCompress = optionGet(OPTION_COMPRESS);
- my $iArchiveMaxMB = optionGet(OPTION_ARCHIVE_MAX_MB, false);
-
- # Create the file object
- my $oFile = new BackRest::File
- (
- optionGet(OPTION_STANZA),
- $strRemote eq NONE ? optionGet(OPTION_REPO_PATH) : optionGet(OPTION_REPO_REMOTE_PATH),
- $strRemote,
- remote_get(false, optionGet(OPTION_COMPRESS_LEVEL),
- optionGet(OPTION_COMPRESS_LEVEL_NETWORK))
- );
-
- # Init backup
- backup_init
- (
- undef,
- $oFile,
- undef,
- $bCompress,
- undef,
- 1, #optionGet(OPTION_THREAD_MAX),
- undef,
- optionGet(OPTION_THREAD_TIMEOUT, false)
- );
-
- # Call the archive_xfer function and continue to loop as long as there are files to process
- my $iLogTotal;
-
- while (!defined($iLogTotal) || $iLogTotal > 0)
- {
- $iLogTotal = archive_xfer($strArchivePath . "/archive/" . optionGet(OPTION_STANZA) . "/out", $strStopFile,
- $strCommand, $iArchiveMaxMB);
-
- if ($iLogTotal > 0)
- {
- &log(DEBUG, "${iLogTotal} archive logs were transferred, calling archive_xfer() again");
- }
- else
- {
- &log(DEBUG, 'no more logs to transfer - exiting');
- }
- }
-
- lock_file_remove();
- remote_exit(0);
-}
+log_file_set(optionGet(OPTION_REPO_PATH) . '/log/' . optionGet(OPTION_STANZA) . '-' . lc(operationGet()));
####################################################################################################################################
-# ARCHIVE-GET Command
+# Create the thread group that will be used for parallel processing
####################################################################################################################################
-if (operationTest(OP_ARCHIVE_GET))
-{
- # Make sure the archive file is defined
- if (!defined($ARGV[1]))
- {
- confess &log(ERROR, 'archive file not provided');
- }
-
- # Make sure the destination file is defined
- if (!defined($ARGV[2]))
- {
- confess &log(ERROR, 'destination file not provided');
- }
-
- # Init the file object
- my $oFile = new BackRest::File
- (
- optionGet(OPTION_STANZA),
- $strRemote eq BACKUP ? optionGet(OPTION_REPO_REMOTE_PATH) : optionGet(OPTION_REPO_PATH),
- $strRemote,
- remote_get(false,
- optionGet(OPTION_COMPRESS_LEVEL),
- optionGet(OPTION_COMPRESS_LEVEL_NETWORK))
- );
-
- # Init the backup object
- backup_init
- (
- undef,
- $oFile
- );
-
- # Info for the Postgres log
- &log(INFO, 'getting archive log ' . $ARGV[1]);
-
- # Get the archive file
- remote_exit(archive_get(optionGet(OPTION_DB_PATH, false), $ARGV[1], $ARGV[2]));
-}
+threadGroupCreate();
####################################################################################################################################
# Initialize the default file object
@@ -388,11 +140,9 @@ if (operationTest(OP_ARCHIVE_GET))
my $oFile = new BackRest::File
(
optionGet(OPTION_STANZA),
- $strRemote eq BACKUP ? optionGet(OPTION_REPO_REMOTE_PATH) : optionGet(OPTION_REPO_PATH),
- $strRemote,
- remote_get(false,
- operationTest(OP_EXPIRE) ? OPTION_DEFAULT_COMPRESS_LEVEL : optionGet(OPTION_COMPRESS_LEVEL),
- operationTest(OP_EXPIRE) ? OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK : optionGet(OPTION_COMPRESS_LEVEL_NETWORK))
+ optionRemoteTypeTest(BACKUP) ? optionGet(OPTION_REPO_REMOTE_PATH) : optionGet(OPTION_REPO_PATH),
+ optionRemoteType(),
+ optionRemote()
);
####################################################################################################################################
@@ -400,20 +150,16 @@ my $oFile = new BackRest::File
####################################################################################################################################
if (operationTest(OP_RESTORE))
{
- if ($strRemote eq DB)
+ if (optionRemoteTypeTest(DB))
{
confess &log(ASSERT, 'restore operation must be performed locally on the db server');
}
- # Open the log file
- log_file_set(optionGet(OPTION_REPO_PATH) . '/log/' . optionGet(OPTION_STANZA) . '-restore');
-
# Set the lock path
my $strLockPath = optionGet(OPTION_REPO_PATH) . '/lock/' .
optionGet(OPTION_STANZA) . '-' . operationGet() . '.lock';
# Do the restore
- use BackRest::Restore;
new BackRest::Restore
(
optionGet(OPTION_DB_PATH),
@@ -434,17 +180,14 @@ if (operationTest(OP_RESTORE))
optionGet(OPTION_CONFIG)
)->restore;
- remote_exit(0);
+ safe_exit(0);
}
####################################################################################################################################
# GET MORE CONFIG INFO
####################################################################################################################################
-# Open the log file
-log_file_set(optionGet(OPTION_REPO_PATH) . '/log/' . optionGet(OPTION_STANZA));
-
# Make sure backup and expire operations happen on the backup side
-if ($strRemote eq BACKUP)
+if (optionRemoteTypeTest(BACKUP))
{
confess &log(ERROR, 'backup and expire operations must run on the backup host');
}
@@ -455,11 +198,10 @@ my $strLockPath = optionGet(OPTION_REPO_PATH) . '/lock/' . optionGet(OPTION_STA
if (!lock_file_create($strLockPath))
{
&log(ERROR, 'backup process is already running for stanza ' . optionGet(OPTION_STANZA) . ' - exiting');
- remote_exit(0);
+ safe_exit(0);
}
# Initialize the db object
-use BackRest::Db;
my $oDb;
if (operationTest(OP_BACKUP))
@@ -468,6 +210,7 @@ if (operationTest(OP_BACKUP))
{
$oDb = new BackRest::Db
(
+ optionGet(OPTION_DB_PATH),
optionGet(OPTION_COMMAND_PSQL),
optionGet(OPTION_DB_HOST, false),
optionGet(OPTION_DB_USER, optionTest(OPTION_DB_HOST))
@@ -494,7 +237,6 @@ if (operationTest(OP_BACKUP))
####################################################################################################################################
if (operationTest(OP_BACKUP))
{
- use BackRest::Backup;
backup(optionGet(OPTION_DB_PATH), optionGet(OPTION_START_FAST));
operationSet(OP_EXPIRE);
@@ -527,7 +269,7 @@ if (operationTest(OP_EXPIRE))
}
backup_cleanup();
-remote_exit(0);
+safe_exit(0);
};
####################################################################################################################################
@@ -540,9 +282,9 @@ if ($@)
# If a backrest exception then return the code - don't confess
if ($oMessage->isa('BackRest::Exception'))
{
- remote_exit($oMessage->code());
+ safe_exit($oMessage->code());
}
- remote_exit();
+ safe_exit();
confess $@;
}
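The restructured exit path above routes signals and exceptions through a single cleanup function before the process ends. A minimal sketch of the same pattern (simplified; the `threadGroupDestroy` and `remoteDestroy` subs below are stand-ins for the real BackRest::ThreadGroup and BackRest::Remote routines, not their actual implementations):

```perl
use strict;
use warnings;

# Stand-ins for the real cleanup routines (assumptions, not the actual API)
sub threadGroupDestroy { return 0; }   # returns the number of threads stopped
sub remoteDestroy      { return; }     # closes the remote protocol object

sub safe_exit
{
    my $iExitCode = shift;

    my $iTotal = threadGroupDestroy();  # stop worker threads first
    remoteDestroy();                    # then tear down the remote connection

    # Normal completion passes an explicit exit code
    exit $iExitCode if defined($iExitCode);

    # Otherwise we got here via a signal or exception
    warn "process terminated on signal or exception, ${iTotal} threads stopped\n";
}

# Route all termination signals through the same cleanup path
$SIG{TERM} = \&safe_exit;
$SIG{HUP}  = \&safe_exit;
$SIG{INT}  = \&safe_exit;
```

The design point is that cleanup happens in exactly one place, whether the script finishes normally, dies on an exception caught by the surrounding eval block, or is killed by a signal.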
diff --git a/bin/pg_backrest_remote.pl b/bin/pg_backrest_remote.pl
index 910bec28d..f8264f35f 100755
--- a/bin/pg_backrest_remote.pl
+++ b/bin/pg_backrest_remote.pl
@@ -18,6 +18,7 @@ use BackRest::Utility;
use BackRest::File;
use BackRest::Remote;
use BackRest::Exception;
+use BackRest::Archive;
####################################################################################################################################
# Operation constants
@@ -64,12 +65,16 @@ my $oRemote = new BackRest::Remote
# Create the file object
my $oFile = new BackRest::File
(
- undef,
- undef,
+ $oRemote->stanza(),
+ $oRemote->repoPath(),
undef,
$oRemote,
);
+
+# Create the archive object
+my $oArchive = new BackRest::Archive();
+
# Command string
my $strCommand = OP_NOOP;
@@ -197,6 +202,16 @@ while ($strCommand ne OP_EXIT)
$oRemote->output_write($strOutput);
}
+ # Archive push checks
+ elsif ($strCommand eq OP_ARCHIVE_PUSH_CHECK)
+ {
+ $oArchive->pushCheck($oFile,
+ param_get(\%oParamHash, 'wal-segment'),
+ param_get(\%oParamHash, 'db-version'),
+ param_get(\%oParamHash, 'db-sys-id'));
+
+ $oRemote->output_write('Y');
+ }
# Continue if noop or exit
elsif ($strCommand ne OP_NOOP && $strCommand ne OP_EXIT)
{
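The remote loop above dispatches on a command string and pulls named parameters out of a hash before calling the handler. A stripped-down sketch of that dispatch shape (the `param_get` and `push_check` subs here are simplified stand-ins; the real protocol framing and `Archive->pushCheck` live in BackRest::Remote and BackRest::Archive):

```perl
use strict;
use warnings;

# Minimal stand-in for param_get() in pg_backrest_remote.pl: look up a
# required parameter and fail loudly if the client did not send it.
sub param_get
{
    my ($oParamHashRef, $strParam) = @_;

    defined($$oParamHashRef{$strParam})
        or die "param ${strParam} must be defined";

    return $$oParamHashRef{$strParam};
}

# Hypothetical handler standing in for Archive->pushCheck()
sub push_check
{
    my ($strWalSegment, $strDbVersion, $strDbSysId) = @_;

    # ... version/system-id validation would happen here ...
    return 'Y';   # the real handler writes 'Y' back over the protocol
}

# Example parameter hash as the remote might decode it (values are made up)
my %oParamHash =
(
    'wal-segment' => '000000010000000100000023',
    'db-version'  => '9.3',
    'db-sys-id'   => '6999999999999999999'
);

my $strOutput = push_check(param_get(\%oParamHash, 'wal-segment'),
                           param_get(\%oParamHash, 'db-version'),
                           param_get(\%oParamHash, 'db-sys-id'));
```

Failing fast on a missing parameter keeps protocol errors on the remote side loud instead of silently passing undef into the archive checks.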
diff --git a/doc/doc.pl b/doc/doc.pl
index cb792f9f2..135bf1a2e 100755
--- a/doc/doc.pl
+++ b/doc/doc.pl
@@ -668,7 +668,7 @@ sub doc_render
if ($bChildList)
{
- $strBuffer .= '- ';
+ $strBuffer .= '* ';
}
$strBuffer .= doc_render_text($$oDoc{field}{text}, $strType);
diff --git a/doc/doc.xml b/doc/doc.xml
index f1b135e2d..acc630b8a 100644
--- a/doc/doc.xml
+++ b/doc/doc.xml
@@ -61,6 +61,8 @@
cpanm IPC::System::Simple
cpanm Digest::SHA
cpanm Compress::ZLib
+ cpanm threads (update this package)
+ cpanm Thread::Queue (update this package)
* Install PgBackRest
@@ -447,8 +449,8 @@ Run a full backup on the db stanza. --type can
Set the buffer size used for copy, compress, and uncompress functions. A maximum of 3 buffers will be in use at a time per thread. An additional maximum of 256K per thread may be used for zlib buffers.
- 4096 - 8388608
- 16384
+ 16384 - 8388608
+ 32768
@@ -578,7 +580,7 @@ Run a full backup on the db stanza. --type can
The purpose of this feature is to prevent the log volume from filling up at which point Postgres will stop completely. Better to lose the backup than have the database go down.
- To start normal archiving again you'll need to remove the stop file which will be located at ${archive-path}/lock/${stanza}-archive.stop where ${archive-path} is the path set in the archive section, and ${stanza} is the backup stanza.
+ To start normal archiving again you'll need to remove the stop file which will be located at ${repo-path}/lock/${stanza}-archive.stop where ${repo-path} is the path set in the general section, and ${stanza} is the backup stanza.
1024
@@ -654,6 +656,23 @@ Run a full backup on the db stanza. --type can
+
+
+
+ Pushing duplicate WAL now generates an error. This worked before only if checksums were disabled.
+
+
+ Database System IDs are used to make sure that all WAL in an archive matches up. This should help prevent misconfigurations that send WAL from multiple clusters to the same archive.
+
+
+ Regression tests working back to 8.3.
+
+
+ Improved threading model by starting threads early and terminating them late.
+
+
+
+
diff --git a/lib/BackRest/Archive.pm b/lib/BackRest/Archive.pm
new file mode 100644
index 000000000..22b6aff0f
--- /dev/null
+++ b/lib/BackRest/Archive.pm
@@ -0,0 +1,758 @@
+####################################################################################################################################
+# ARCHIVE MODULE
+####################################################################################################################################
+package BackRest::Archive;
+
+use strict;
+use warnings FATAL => qw(all);
+use Carp qw(confess);
+
+use File::Basename qw(dirname basename);
+use Fcntl qw(SEEK_CUR O_RDONLY O_WRONLY O_CREAT O_EXCL);
+use Exporter qw(import);
+
+use lib dirname($0);
+use BackRest::Utility;
+use BackRest::Exception;
+use BackRest::Config;
+use BackRest::File;
+use BackRest::Remote;
+
+####################################################################################################################################
+# Operation constants
+####################################################################################################################################
+use constant
+{
+ OP_ARCHIVE_PUSH_CHECK => 'Archive->pushCheck'
+};
+
+our @EXPORT = qw(OP_ARCHIVE_PUSH_CHECK);
+
+####################################################################################################################################
+# File constants
+####################################################################################################################################
+use constant
+{
+ ARCHIVE_INFO_FILE => 'archive.info'
+};
+
+push @EXPORT, qw(ARCHIVE_INFO_FILE);
+
+####################################################################################################################################
+# constructor
+####################################################################################################################################
+sub new
+{
+ my $class = shift; # Class name
+
+ # Create the class hash
+ my $self = {};
+ bless $self, $class;
+
+ return $self;
+}
+
+####################################################################################################################################
+# process
+#
+# Process archive commands.
+####################################################################################################################################
+sub process
+{
+ my $self = shift;
+
+ # Process push
+ if (operationTest(OP_ARCHIVE_PUSH))
+ {
+ return $self->pushProcess();
+ }
+
+ # Process get
+ if (operationTest(OP_ARCHIVE_GET))
+ {
+ return $self->getProcess();
+ }
+
+ # Error if any other operation is found
+ confess &log(ASSERT, "Archive->process() called with invalid operation: " . operationGet());
+}
+
+################################################################################################################################
+# getProcess
+################################################################################################################################
+sub getProcess
+{
+ my $self = shift;
+
+ # Make sure the archive file is defined
+ if (!defined($ARGV[1]))
+ {
+ confess &log(ERROR, 'WAL segment not provided', ERROR_PARAM_REQUIRED);
+ }
+
+ # Make sure the destination file is defined
+ if (!defined($ARGV[2]))
+ {
+        confess &log(ERROR, 'WAL segment destination not provided', ERROR_PARAM_REQUIRED);
+ }
+
+ # Info for the Postgres log
+ &log(INFO, 'getting WAL segment ' . $ARGV[1]);
+
+ # Get the WAL segment
+ return $self->get($ARGV[1], $ARGV[2]);
+}
+
+####################################################################################################################################
+# walFileName
+#
+# Returns the filename in the archive of a WAL segment. Optionally, a wait time can be specified. In this case an error will be
+# thrown when the WAL segment is not found.
+####################################################################################################################################
+sub walFileName
+{
+ my $self = shift;
+ my $oFile = shift;
+ my $strWalSegment = shift;
+ my $iWaitSeconds = shift;
+
+ # Record the start time
+ my $lTime = time();
+ my $fSleep = .1;
+
+ # Determine the path where the requested WAL segment is located
+ my $strArchivePath = dirname($oFile->path_get(PATH_BACKUP_ARCHIVE, $strWalSegment));
+
+ do
+ {
+ # Get the name of the requested WAL segment (may have hash info and compression extension)
+ my @stryWalFileName = $oFile->list(PATH_BACKUP_ABSOLUTE, $strArchivePath,
+ "^${strWalSegment}(-[0-f]+){0,1}(\\.$oFile->{strCompressExtension}){0,1}\$", undef, true);
+
+ # If there is only one result then return it
+ if (@stryWalFileName == 1)
+ {
+ return $stryWalFileName[0];
+ }
+
+ # If there is more than one matching archive file then there is a serious issue - likely a bug in the archiver
+ if (@stryWalFileName > 1)
+ {
+ confess &log(ASSERT, @stryWalFileName . " duplicate files found for ${strWalSegment}", ERROR_ARCHIVE_DUPLICATE);
+ }
+
+ # If waiting then sleep before trying again
+ if (defined($iWaitSeconds))
+ {
+ hsleep($fSleep);
+ $fSleep = $fSleep * 2 < $iWaitSeconds - (time() - $lTime) ? $fSleep * 2 : ($iWaitSeconds - (time() - $lTime)) + .1;
+ }
+ }
+ while (defined($iWaitSeconds) && (time() - $lTime) < $iWaitSeconds);
+
+ # If waiting and no WAL segment was found then throw an error
+ if (defined($iWaitSeconds))
+ {
+ confess &log(ERROR, "could not find WAL segment ${strWalSegment} after " . (time() - $lTime) . ' second(s)');
+ }
+
+ return undef;
+}
+
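The wait loop above doubles its sleep interval on each pass while capping it near the remaining wait time, so the total wait stays close to the requested timeout. A minimal Python sketch of the same backoff policy (the function and its names are illustrative, not part of BackRest):

```python
import time

def wait_with_backoff(check, wait_seconds):
    """Poll check() until it returns a value or wait_seconds elapses.

    The sleep starts at 0.1s and doubles each pass, but is capped near
    the remaining wait time, the same policy as the loop in walFileName.
    """
    start = time.time()
    sleep = 0.1
    while True:
        result = check()
        if result is not None:
            return result
        if time.time() - start >= wait_seconds:
            return None
        time.sleep(sleep)
        remaining = wait_seconds - (time.time() - start)
        sleep = sleep * 2 if sleep * 2 < remaining else remaining + 0.1
```

The cap matters: without it, a doubling sleep could overshoot the deadline by nearly the full timeout on the final pass.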
+####################################################################################################################################
+# walInfo
+#
+# Retrieve information such as db version and system identifier from a WAL segment.
+####################################################################################################################################
+sub walInfo
+{
+ my $self = shift;
+ my $strWalFile = shift;
+
+ # Set operation and debug strings
+ my $strOperation = 'Archive->walInfo';
+ &log(TRACE, "${strOperation}: " . PATH_ABSOLUTE . ":${strWalFile}");
+
+ # Open the WAL segment
+ my $hFile;
+ my $tBlock;
+
+ sysopen($hFile, $strWalFile, O_RDONLY)
+ or confess &log(ERROR, "unable to open ${strWalFile}", ERROR_FILE_OPEN);
+
+ # Read magic
+ sysread($hFile, $tBlock, 2) == 2
+ or confess &log(ERROR, "unable to read xlog magic");
+
+ my $iMagic = unpack('S', $tBlock);
+
+ # Make sure the WAL magic is supported
+ my $strDbVersion;
+ my $iSysIdOffset;
+
+ if ($iMagic == hex('0xD07E'))
+ {
+ $strDbVersion = '9.4';
+ $iSysIdOffset = 20;
+ }
+ elsif ($iMagic == hex('0xD075'))
+ {
+ $strDbVersion = '9.3';
+ $iSysIdOffset = 20;
+ }
+ elsif ($iMagic == hex('0xD071'))
+ {
+ $strDbVersion = '9.2';
+ $iSysIdOffset = 12;
+ }
+ elsif ($iMagic == hex('0xD066'))
+ {
+ $strDbVersion = '9.1';
+ $iSysIdOffset = 12;
+ }
+ elsif ($iMagic == hex('0xD064'))
+ {
+ $strDbVersion = '9.0';
+ $iSysIdOffset = 12;
+ }
+ elsif ($iMagic == hex('0xD063'))
+ {
+ $strDbVersion = '8.4';
+ $iSysIdOffset = 12;
+ }
+ elsif ($iMagic == hex('0xD062'))
+ {
+ $strDbVersion = '8.3';
+ $iSysIdOffset = 12;
+ }
+ # elsif ($iMagic == hex('0xD05E'))
+ # {
+ # $strDbVersion = '8.2';
+ # $iSysIdOffset = 12;
+ # }
+ # elsif ($iMagic == hex('0xD05D'))
+ # {
+ # $strDbVersion = '8.1';
+ # $iSysIdOffset = 12;
+ # }
+ else
+ {
+ confess &log(ERROR, "unexpected xlog magic 0x" . sprintf("%X", $iMagic) . ' (unsupported PostgreSQL version?)',
+ ERROR_VERSION_NOT_SUPPORTED);
+ }
+
+ # Read flags
+ sysread($hFile, $tBlock, 2) == 2
+ or confess &log(ERROR, "unable to read xlog info");
+
+ my $iFlag = unpack('S', $tBlock);
+
+ $iFlag & 2
+ or confess &log(ERROR, "expected long header in flags " . sprintf("%x", $iFlag));
+
+ # Get the database system id
+ sysseek($hFile, $iSysIdOffset, SEEK_CUR)
+ or confess &log(ERROR, "unable to read padding");
+
+ sysread($hFile, $tBlock, 8) == 8
+ or confess &log(ERROR, "unable to read database system identifier");
+
+ length($tBlock) == 8
+ or confess &log(ERROR, "block is incorrect length");
+
+ close($hFile);
+
+ my $ullDbSysId = unpack('Q', $tBlock);
+
+ &log(TRACE, sprintf("${strOperation}: WAL magic = 0x%X, database system id = ", $iMagic) . $ullDbSysId);
+
+ return $strDbVersion, $ullDbSysId;
+}
+
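For readers unfamiliar with the WAL long page header, walInfo boils down to: read the 2-byte magic, check the long-header flag, skip version-dependent padding, then read the 8-byte database system identifier. A rough Python equivalent using the same magic table and offsets as the Perl above (little-endian layout assumed; not part of BackRest):

```python
import struct

# xlog magic -> (PostgreSQL version, bytes to skip before the system id),
# matching the table in walInfo above.
WAL_MAGIC = {
    0xD07E: ('9.4', 20), 0xD075: ('9.3', 20), 0xD071: ('9.2', 12),
    0xD066: ('9.1', 12), 0xD064: ('9.0', 12), 0xD063: ('8.4', 12),
    0xD062: ('8.3', 12),
}

def wal_info(path):
    """Return (db_version, system_id) from a WAL segment's long header."""
    with open(path, 'rb') as f:
        magic, info = struct.unpack('<HH', f.read(4))
        if magic not in WAL_MAGIC:
            raise ValueError('unexpected xlog magic 0x%X' % magic)
        if not info & 2:
            raise ValueError('expected long header in flags 0x%x' % info)
        version, pad = WAL_MAGIC[magic]
        f.seek(pad, 1)                      # skip padding to the system id
        (sys_id,) = struct.unpack('<Q', f.read(8))
    return version, sys_id
```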
+####################################################################################################################################
+# get
+####################################################################################################################################
+sub get
+{
+ my $self = shift;
+ my $strSourceArchive = shift;
+ my $strDestinationFile = shift;
+
+ # Create the file object
+ my $oFile = new BackRest::File
+ (
+ optionGet(OPTION_STANZA),
+ optionRemoteTypeTest(BACKUP) ? optionGet(OPTION_REPO_REMOTE_PATH) : optionGet(OPTION_REPO_PATH),
+ optionRemoteType(),
+ optionRemote()
+ );
+
+ # If the destination file path is not absolute then it is relative to the db data path
+    if (index($strDestinationFile, '/') != 0)
+ {
+ if (!optionTest(OPTION_DB_PATH))
+ {
+ confess &log(ERROR, 'database path must be set if relative xlog paths are used');
+ }
+
+ $strDestinationFile = optionGet(OPTION_DB_PATH) . "/${strDestinationFile}";
+ }
+
+ # Get the wal segment filename
+ my $strArchiveFile = $self->walFileName($oFile, $strSourceArchive);
+
+ # If there are no matching archive files then there are two possibilities:
+ # 1) The end of the archive stream has been reached, this is normal and a 1 will be returned
+ # 2) There is a hole in the archive stream and a hard error should be returned. However, holes are possible due to
+ # async archiving and threading - so when to report a hole? Since a hard error will cause PG to terminate, for now
+ # treat as case #1.
+ if (!defined($strArchiveFile))
+ {
+ &log(INFO, "${strSourceArchive} was not found in the archive repository");
+
+ return 1;
+ }
+
+ &log(DEBUG, "archive_get: cp ${strArchiveFile} ${strDestinationFile}");
+
+ # Determine if the source file is already compressed
+ my $bSourceCompressed = $strArchiveFile =~ "^.*\.$oFile->{strCompressExtension}\$" ? true : false;
+
+ # Copy the archive file to the requested location
+ $oFile->copy(PATH_BACKUP_ARCHIVE, $strArchiveFile, # Source file
+ PATH_DB_ABSOLUTE, $strDestinationFile, # Destination file
+ $bSourceCompressed, # Source compression based on detection
+ false); # Destination is not compressed
+
+ return 0;
+}
+
+####################################################################################################################################
+# pushProcess
+####################################################################################################################################
+sub pushProcess
+{
+ my $self = shift;
+
+ # Make sure the archive push operation happens on the db side
+ if (optionRemoteTypeTest(DB))
+ {
+ confess &log(ERROR, OP_ARCHIVE_PUSH . ' operation must run on the db host');
+ }
+
+ # Load the archive object
+ use BackRest::Archive;
+
+    # Get the async archiving flag and the repository path
+ my $bArchiveAsync = optionGet(OPTION_ARCHIVE_ASYNC);
+ my $strArchivePath = optionGet(OPTION_REPO_PATH);
+
+    # If archiving asynchronously then build the stop file name
+ my $strStopFile;
+
+ if ($bArchiveAsync)
+ {
+ $strStopFile = "${strArchivePath}/lock/" . optionGet(OPTION_STANZA) . "-archive.stop";
+ }
+
+ # If an archive file is defined, then push it
+ if (defined($ARGV[1]))
+ {
+ # If the stop file exists then discard the archive log
+ if ($bArchiveAsync)
+ {
+ if (-e $strStopFile)
+ {
+                &log(ERROR, "archive stop file (${strStopFile}) exists, discarding " . basename($ARGV[1]));
+ remote_exit(0);
+ }
+ }
+
+ &log(INFO, 'pushing WAL segment ' . $ARGV[1] . ($bArchiveAsync ? ' asynchronously' : ''));
+
+ $self->push($ARGV[1], $bArchiveAsync);
+
+ # Exit if we are not archiving async
+ if (!$bArchiveAsync)
+ {
+ return 0;
+ }
+
+ # Fork and exit the parent process so the async process can continue
+ if (!optionTest(OPTION_TEST_NO_FORK) || !optionGet(OPTION_TEST_NO_FORK))
+ {
+ if (fork())
+ {
+ return 0;
+ }
+ }
+ # Else the no-fork flag has been specified for testing
+ else
+ {
+ &log(INFO, 'No fork on archive local for TESTING');
+ }
+
+ # Start the async archive push
+ &log(INFO, 'starting async archive-push');
+ }
+
+ # Create a lock file to make sure async archive-push does not run more than once
+ my $strLockPath = "${strArchivePath}/lock/" . optionGet(OPTION_STANZA) . "-archive.lock";
+
+ if (!lock_file_create($strLockPath))
+ {
+ &log(DEBUG, 'archive-push process is already running - exiting');
+ return 0;
+ }
+
+ # Open the log file
+ log_file_set(optionGet(OPTION_REPO_PATH) . '/log/' . optionGet(OPTION_STANZA) . '-archive-async');
+
+ # Build the basic command string that will be used to modify the command during processing
+ my $strCommand = $^X . ' ' . $0 . " --stanza=" . optionGet(OPTION_STANZA);
+
+ # Call the archive_xfer function and continue to loop as long as there are files to process
+ my $iLogTotal;
+
+ while (!defined($iLogTotal) || $iLogTotal > 0)
+ {
+ $iLogTotal = $self->xfer($strArchivePath . "/archive/" . optionGet(OPTION_STANZA) . "/out", $strStopFile);
+
+ if ($iLogTotal > 0)
+ {
+ &log(DEBUG, "${iLogTotal} WAL segments were transferred, calling Archive->xfer() again");
+ }
+ else
+ {
+ &log(DEBUG, 'no more WAL segments to transfer - exiting');
+ }
+ }
+
+ lock_file_remove();
+ return 0;
+}
+
+####################################################################################################################################
+# push
+####################################################################################################################################
+sub push
+{
+ my $self = shift;
+ my $strSourceFile = shift;
+ my $bAsync = shift;
+
+ # Create the file object
+ my $oFile = new BackRest::File
+ (
+ optionGet(OPTION_STANZA),
+ $bAsync || optionRemoteTypeTest(NONE) ? optionGet(OPTION_REPO_PATH) : optionGet(OPTION_REPO_REMOTE_PATH),
+ $bAsync ? NONE : optionRemoteType(),
+ optionRemote($bAsync)
+ );
+
+ # If the source file path is not absolute then it is relative to the data path
+    if (index($strSourceFile, '/') != 0)
+ {
+ if (!optionTest(OPTION_DB_PATH))
+ {
+ confess &log(ERROR, 'database path must be set if relative xlog paths are used');
+ }
+
+ $strSourceFile = optionGet(OPTION_DB_PATH) . "/${strSourceFile}";
+ }
+
+ # Get the destination file
+ my $strDestinationFile = basename($strSourceFile);
+
+ # Get the compress flag
+ my $bCompress = $bAsync ? false : optionGet(OPTION_COMPRESS);
+
+ # Determine if this is an archive file (don't do compression or checksum on .backup, .history, etc.)
+ my $bArchiveFile = basename($strSourceFile) =~ /^[0-F]{24}$/ ? true : false;
+
+ # Check that there are no issues with pushing this WAL segment
+ if ($bArchiveFile)
+ {
+ my ($strDbVersion, $ullDbSysId) = $self->walInfo($strSourceFile);
+ $self->pushCheck($oFile, substr(basename($strSourceFile), 0, 24), $strDbVersion, $ullDbSysId);
+ }
+
+ # Append compression extension
+ if ($bArchiveFile && $bCompress)
+ {
+ $strDestinationFile .= '.' . $oFile->{strCompressExtension};
+ }
+
+ # Copy the WAL segment
+ $oFile->copy(PATH_DB_ABSOLUTE, $strSourceFile, # Source type/file
+ $bAsync ? PATH_BACKUP_ARCHIVE_OUT : PATH_BACKUP_ARCHIVE, # Destination type
+ $strDestinationFile, # Destination file
+ false, # Source is not compressed
+ $bArchiveFile && $bCompress, # Destination compress is configurable
+ undef, undef, undef, # Unused params
+ true, # Create path if it does not exist
+ undef, undef, # User and group
+ $bArchiveFile); # Append checksum if archive file
+}
+
+####################################################################################################################################
+# pushCheck
+####################################################################################################################################
+sub pushCheck
+{
+ my $self = shift;
+ my $oFile = shift;
+ my $strWalSegment = shift;
+ my $strDbVersion = shift;
+ my $ullDbSysId = shift;
+
+ # Set operation and debug strings
+ my $strOperation = OP_ARCHIVE_PUSH_CHECK;
+ &log(DEBUG, "${strOperation}: " . PATH_BACKUP_ARCHIVE . ":${strWalSegment}");
+
+ if ($oFile->is_remote(PATH_BACKUP_ARCHIVE))
+ {
+ # Build param hash
+ my %oParamHash;
+
+ $oParamHash{'wal-segment'} = $strWalSegment;
+ $oParamHash{'db-version'} = $strDbVersion;
+ $oParamHash{'db-sys-id'} = $ullDbSysId;
+
+ # Output remote trace info
+ &log(TRACE, "${strOperation}: remote (" . $oFile->{oRemote}->command_param_string(\%oParamHash) . ')');
+
+ # Execute the command
+ $oFile->{oRemote}->command_execute($strOperation, \%oParamHash);
+ }
+ else
+ {
+ # Create the archive path if it does not exist
+ if (!$oFile->exists(PATH_BACKUP_ARCHIVE))
+ {
+ $oFile->path_create(PATH_BACKUP_ARCHIVE);
+ }
+
+ # If the info file exists check db version and system-id
+ my %oDbConfig;
+
+ if ($oFile->exists(PATH_BACKUP_ARCHIVE, ARCHIVE_INFO_FILE))
+ {
+ ini_load($oFile->path_get(PATH_BACKUP_ARCHIVE, ARCHIVE_INFO_FILE), \%oDbConfig);
+
+ if ($oDbConfig{database}{'version'} ne $strDbVersion)
+ {
+ confess &log(ERROR, "WAL segment version ${strDbVersion} " .
+ "does not match archive version $oDbConfig{database}{'version'}", ERROR_ARCHIVE_MISMATCH);
+ }
+
+ if ($oDbConfig{database}{'system-id'} ne $ullDbSysId)
+ {
+ confess &log(ERROR, "WAL segment system-id ${ullDbSysId} " .
+ "does not match archive system-id $oDbConfig{database}{'system-id'}", ERROR_ARCHIVE_MISMATCH);
+ }
+ }
+ # Else create the info file from the current WAL segment
+ else
+ {
+ $oDbConfig{database}{'system-id'} = $ullDbSysId;
+ $oDbConfig{database}{'version'} = $strDbVersion;
+ ini_save($oFile->path_get(PATH_BACKUP_ARCHIVE, ARCHIVE_INFO_FILE), \%oDbConfig);
+ }
+
+ # Check if the WAL segment already exists in the archive
+ if (defined($self->walFileName($oFile, $strWalSegment)))
+ {
+ confess &log(ERROR, "WAL segment ${strWalSegment} already exists in the archive", ERROR_ARCHIVE_DUPLICATE);
+ }
+ }
+}
+
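pushCheck's local branch implements a simple invariant: create `archive.info` from the first WAL segment pushed, and thereafter refuse any segment whose version or system-id differs. A hedged Python sketch of that check (the ini layout shown is an assumption for illustration, not the exact BackRest format):

```python
import configparser
import os

def check_archive_info(info_path, db_version, db_sys_id):
    """Verify a pushed WAL segment against archive.info, creating the
    info file from the segment's values on first use."""
    ini = configparser.ConfigParser()
    if os.path.exists(info_path):
        ini.read(info_path)
        if ini['database']['version'] != db_version:
            raise ValueError('WAL segment version %s does not match archive version %s'
                             % (db_version, ini['database']['version']))
        if ini['database']['system-id'] != str(db_sys_id):
            raise ValueError('WAL segment system-id %s does not match archive system-id %s'
                             % (db_sys_id, ini['database']['system-id']))
    else:
        # First push for this archive: record the cluster's identity
        ini['database'] = {'version': db_version, 'system-id': str(db_sys_id)}
        with open(info_path, 'w') as f:
            ini.write(f)
```

This is what prevents two clusters from silently interleaving WAL in the same archive: the second cluster's first push fails with a mismatch rather than corrupting the stream.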
+####################################################################################################################################
+# xfer
+####################################################################################################################################
+sub xfer
+{
+ my $self = shift;
+ my $strArchivePath = shift;
+ my $strStopFile = shift;
+
+ # Create the file object
+ my $oFile = new BackRest::File
+ (
+ optionGet(OPTION_STANZA),
+ optionRemoteTypeTest(NONE) ? optionGet(OPTION_REPO_PATH) : optionGet(OPTION_REPO_REMOTE_PATH),
+ optionRemoteType(),
+ optionRemote()
+ );
+
+ # Load the archive manifest - all the files that need to be pushed
+ my %oManifestHash;
+ $oFile->manifest(PATH_DB_ABSOLUTE, $strArchivePath, \%oManifestHash);
+
+ # Get all the files to be transferred and calculate the total size
+ my @stryFile;
+ my $lFileSize = 0;
+ my $lFileTotal = 0;
+
+    foreach my $strFile (sort(keys %{$oManifestHash{name}}))
+ {
+ if ($strFile =~ "^[0-F]{24}(-[0-f]{40})(\\.$oFile->{strCompressExtension}){0,1}\$" ||
+ $strFile =~ /^[0-F]{8}\.history$/ || $strFile =~ /^[0-F]{24}\.[0-F]{8}\.backup$/)
+ {
+ CORE::push(@stryFile, $strFile);
+
+ $lFileSize += $oManifestHash{name}{"${strFile}"}{size};
+ $lFileTotal++;
+ }
+ }
+
+ if (optionTest(OPTION_ARCHIVE_MAX_MB))
+ {
+ my $iArchiveMaxMB = optionGet(OPTION_ARCHIVE_MAX_MB);
+
+ if ($iArchiveMaxMB < int($lFileSize / 1024 / 1024))
+ {
+ &log(ERROR, "local archive store has exceeded limit of ${iArchiveMaxMB}MB, archive logs will be discarded");
+
+ my $hStopFile;
+            open($hStopFile, '>', $strStopFile) or confess &log(ERROR, "unable to create stop file ${strStopFile}");
+ close($hStopFile);
+ }
+ }
+
+ if ($lFileTotal == 0)
+ {
+ &log(DEBUG, 'no archive logs to be copied to backup');
+
+ return 0;
+ }
+
+ # Modify process name to indicate async archiving
+    $0 = $^X . ' ' . $0 . " --stanza=" . optionGet(OPTION_STANZA) .
+         " archive-push-async " . $stryFile[0] . '-' . $stryFile[scalar @stryFile - 1];
+
+ # Output files to be moved to backup
+ &log(INFO, "archive to be copied to backup total ${lFileTotal}, size " . file_size_format($lFileSize));
+
+ # Transfer each file
+ foreach my $strFile (sort @stryFile)
+ {
+ # Construct the archive filename to backup
+ my $strArchiveFile = "${strArchivePath}/${strFile}";
+
+ # Determine if the source file is already compressed
+ my $bSourceCompressed = $strArchiveFile =~ "^.*\.$oFile->{strCompressExtension}\$" ? true : false;
+
+ # Determine if this is an archive file (don't want to do compression or checksum on .backup files)
+ my $bArchiveFile = basename($strFile) =~
+ "^[0-F]{24}(-[0-f]+){0,1}(\\.$oFile->{strCompressExtension}){0,1}\$" ? true : false;
+
+ # Figure out whether the compression extension needs to be added or removed
+ my $bDestinationCompress = $bArchiveFile && optionGet(OPTION_COMPRESS);
+ my $strDestinationFile = basename($strFile);
+
+ if (!$bSourceCompressed && $bDestinationCompress)
+ {
+ $strDestinationFile .= ".$oFile->{strCompressExtension}";
+ }
+ elsif ($bSourceCompressed && !$bDestinationCompress)
+ {
+ $strDestinationFile = substr($strDestinationFile, 0, length($strDestinationFile) - 3);
+ }
+
+ &log(DEBUG, "archive ${strFile}, is WAL ${bArchiveFile}, source_compressed = ${bSourceCompressed}, " .
+ "destination_compress ${bDestinationCompress}, default_compress = " . optionGet(OPTION_COMPRESS));
+
+ # Check that there are no issues with pushing this WAL segment
+ if ($bArchiveFile)
+ {
+ my ($strDbVersion, $ullDbSysId) = $self->walInfo($strArchiveFile);
+ $self->pushCheck($oFile, substr(basename($strArchiveFile), 0, 24), $strDbVersion, $ullDbSysId);
+ }
+
+ # Copy the archive file
+ $oFile->copy(PATH_DB_ABSOLUTE, $strArchiveFile, # Source file
+ PATH_BACKUP_ARCHIVE, $strDestinationFile, # Destination file
+                     $bSourceCompressed,                                  # Source compression based on detection
+ $bDestinationCompress, # Destination compress is configurable
+ undef, undef, undef, # Unused params
+ true); # Create path if it does not exist
+
+ # Remove the source archive file
+ unlink($strArchiveFile)
+ or confess &log(ERROR, "copied ${strArchiveFile} to archive successfully but unable to remove it locally. " .
+ 'This file will need to be cleaned up manually. If the problem persists, check if ' .
+ OP_ARCHIVE_PUSH . ' is being run with different permissions in different contexts.');
+ }
+
+ # Return number of files indicating that processing should continue
+ return $lFileTotal;
+}
+
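The extension handling above covers four cases, depending on whether the spooled file is already compressed and whether the repository wants compression. A small Python sketch of just that naming decision (function name and the generalized extension handling are illustrative; the Perl strips a fixed 3-character suffix):

```python
def archive_dest_name(name, source_compressed, dest_compress, ext='gz'):
    """Add or strip the compression extension when transferring a file
    from the async spool to the repository (mirrors the xfer logic)."""
    if not source_compressed and dest_compress:
        return name + '.' + ext              # compress on the way in
    if source_compressed and not dest_compress:
        return name[:-(len(ext) + 1)]        # strip the '.gz'-style suffix
    return name                              # already in the right form
```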
+
+####################################################################################################################################
+# range
+#
+# Generates a range of archive log file names given the start and end log file name. For pre-9.3 databases, use bSkipFF to exclude
+# the FF that prior versions did not generate.
+####################################################################################################################################
+sub range
+{
+ my $self = shift;
+ my $strArchiveStart = shift;
+ my $strArchiveStop = shift;
+ my $bSkipFF = shift;
+
+    # bSkipFF defaults to false
+ $bSkipFF = defined($bSkipFF) ? $bSkipFF : false;
+
+    if ($bSkipFF)
+    {
+        &log(TRACE, 'Archive->range: pre-9.3 database, skipping log FF');
+    }
+    else
+    {
+        &log(TRACE, 'Archive->range: 9.3+ database, including log FF');
+    }
+
+ # Get the timelines and make sure they match
+ my $strTimeline = substr($strArchiveStart, 0, 8);
+ my @stryArchive;
+ my $iArchiveIdx = 0;
+
+ if ($strTimeline ne substr($strArchiveStop, 0, 8))
+ {
+ confess &log(ERROR, "Timelines between ${strArchiveStart} and ${strArchiveStop} differ");
+ }
+
+ # Iterate through all archive logs between start and stop
+ my $iStartMajor = hex substr($strArchiveStart, 8, 8);
+ my $iStartMinor = hex substr($strArchiveStart, 16, 8);
+
+ my $iStopMajor = hex substr($strArchiveStop, 8, 8);
+ my $iStopMinor = hex substr($strArchiveStop, 16, 8);
+
+ $stryArchive[$iArchiveIdx] = uc(sprintf("${strTimeline}%08x%08x", $iStartMajor, $iStartMinor));
+ $iArchiveIdx += 1;
+
+ while (!($iStartMajor == $iStopMajor && $iStartMinor == $iStopMinor))
+ {
+ $iStartMinor += 1;
+
+ if ($bSkipFF && $iStartMinor == 255 || !$bSkipFF && $iStartMinor == 256)
+ {
+ $iStartMajor += 1;
+ $iStartMinor = 0;
+ }
+
+ $stryArchive[$iArchiveIdx] = uc(sprintf("${strTimeline}%08x%08x", $iStartMajor, $iStartMinor));
+ $iArchiveIdx += 1;
+ }
+
+    &log(TRACE, "    Archive->range: $strArchiveStart:$strArchiveStop (@stryArchive)");
+
+ return @stryArchive;
+}
+
+1;
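As a worked example of the range generation above, including the pre-9.3 FF skip, here is an equivalent Python sketch (function name is illustrative):

```python
def wal_range(start, stop, skip_ff=False):
    """List WAL segment names from start to stop on one timeline.

    Pre-9.3 servers never generate the FF segment, so skip_ff=True rolls
    the minor number over at 0xFF instead of 0x100, as in Archive->range.
    """
    if start[:8] != stop[:8]:
        raise ValueError('timelines between %s and %s differ' % (start, stop))
    tli = start[:8]
    major, minor = int(start[8:16], 16), int(start[16:24], 16)
    stop_major, stop_minor = int(stop[8:16], 16), int(stop[16:24], 16)
    names = ['%s%08X%08X' % (tli, major, minor)]
    while (major, minor) != (stop_major, stop_minor):
        minor += 1
        if minor == (0xFF if skip_ff else 0x100):
            major += 1
            minor = 0
        names.append('%s%08X%08X' % (tli, major, minor))
    return names
```

For example, spanning a major boundary on a pre-9.3 cluster jumps straight from `...FE` to the next major's `...00`.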
diff --git a/lib/BackRest/Backup.pm b/lib/BackRest/Backup.pm
index 5604c2073..5594a38e7 100644
--- a/lib/BackRest/Backup.pm
+++ b/lib/BackRest/Backup.pm
@@ -11,6 +11,7 @@ use Carp qw(confess);
use File::Basename;
use File::Path qw(remove_tree);
use Scalar::Util qw(looks_like_number);
+use Fcntl 'SEEK_CUR';
use Thread::Queue;
use lib dirname($0);
@@ -20,31 +21,24 @@ use BackRest::Config;
use BackRest::Manifest;
use BackRest::File;
use BackRest::Db;
+use BackRest::ThreadGroup;
+use BackRest::Archive;
+use BackRest::BackupFile;
use Exporter qw(import);
-our @EXPORT = qw(backup_init backup_cleanup backup_thread_kill archive_push archive_xfer archive_get archive_compress
- backup backup_expire archive_list_get);
+our @EXPORT = qw(backup_init backup_cleanup backup backup_expire archive_list_get);
my $oDb;
my $oFile;
my $strType; # Type of backup: full, differential (diff), incremental (incr)
my $bCompress;
my $bHardLink;
-my $iThreadMax;
-my $iThreadLocalMax;
-#my $iThreadThreshold = 10;
-my $iSmallFileThreshold = 65536;
my $bNoStartStop;
my $bForce;
+my $iThreadMax;
my $iThreadTimeout;
-# Thread variables
-my @oThread;
-my @oThreadQueue;
-my @oMasterQueue;
-my %oFileCopyMap;
-
####################################################################################################################################
# BACKUP_INIT
####################################################################################################################################
@@ -69,16 +63,6 @@ sub backup_init
$iThreadTimeout = $iThreadTimeoutParam;
$bNoStartStop = $bNoStartStopParam;
$bForce = $bForceParam;
-
- if (!defined($iThreadMax))
- {
- $iThreadMax = 1;
- }
-
- if ($iThreadMax < 1 || $iThreadMax > 32)
- {
- confess &log(ERROR, 'thread_max must be between 1 and 32');
- }
}
####################################################################################################################################
@@ -89,344 +73,6 @@ sub backup_cleanup
undef($oFile);
}
-####################################################################################################################################
-# THREAD_INIT
-####################################################################################################################################
-sub thread_init
-{
- my $iThreadRequestTotal = shift; # Number of threads that were requested
-
- my $iThreadActualTotal; # Number of actual threads assigned
-
- if (!defined($iThreadRequestTotal))
- {
- $iThreadActualTotal = $iThreadMax;
- }
- else
- {
- $iThreadActualTotal = $iThreadRequestTotal < $iThreadMax ? $iThreadRequestTotal : $iThreadMax;
-
- if ($iThreadActualTotal < 1)
- {
- $iThreadActualTotal = 1;
- }
- }
-
- for (my $iThreadIdx = 0; $iThreadIdx < $iThreadActualTotal; $iThreadIdx++)
- {
- $oThreadQueue[$iThreadIdx] = Thread::Queue->new();
- $oMasterQueue[$iThreadIdx] = Thread::Queue->new();
- }
-
- return $iThreadActualTotal;
-}
-
-####################################################################################################################################
-# BACKUP_THREAD_KILL
-####################################################################################################################################
-sub backup_thread_kill
-{
- my $iTotal = 0;
-
- for (my $iThreadIdx = 0; $iThreadIdx < scalar @oThread; $iThreadIdx++)
- {
- if (defined($oThread[$iThreadIdx]))
- {
- if ($oThread[$iThreadIdx]->is_running())
- {
- $oThread[$iThreadIdx]->kill('KILL')->join();
- }
- elsif ($oThread[$iThreadIdx]->is_joinable())
- {
- $oThread[$iThreadIdx]->join();
- }
-
- undef($oThread[$iThreadIdx]);
- $iTotal++;
- }
- }
-
- return($iTotal);
-}
-
-####################################################################################################################################
-# BACKUP_THREAD_COMPLETE
-####################################################################################################################################
-sub backup_thread_complete
-{
- my $iTimeout = shift;
- my $bConfessOnError = shift;
-
- if (!defined($bConfessOnError))
- {
- $bConfessOnError = true;
- }
-
-# if (!defined($iTimeout))
-# {
-# &log(WARN, "no thread timeout was set");
-# }
-
- # Wait for all threads to complete and handle errors
- my $iThreadComplete = 0;
- my $lTimeBegin = time();
-
- # Rejoin the threads
- while ($iThreadComplete < $iThreadLocalMax)
- {
- # If a timeout has been defined, make sure we have not been running longer than that
- if (defined($iTimeout))
- {
- if (time() - $lTimeBegin >= $iTimeout)
- {
- confess &log(ERROR, "threads have been running more than ${iTimeout} seconds, exiting...");
-
- #backup_thread_kill();
-
- #confess &log(WARN, "all threads have exited, aborting...");
- }
- }
-
- for (my $iThreadIdx = 0; $iThreadIdx < $iThreadLocalMax; $iThreadIdx++)
- {
- if (defined($oThread[$iThreadIdx]))
- {
- if (defined($oThread[$iThreadIdx]->error()))
- {
- backup_thread_kill();
-
- if ($bConfessOnError)
- {
- confess &log(ERROR, 'error in thread ' . (${iThreadIdx} + 1) . ': check log for details');
- }
- else
- {
- return false;
- }
- }
-
- if ($oThread[$iThreadIdx]->is_joinable())
- {
- &log(DEBUG, "thread ${iThreadIdx} exited");
- $oThread[$iThreadIdx]->join();
- &log(TRACE, "thread ${iThreadIdx} object undef");
- undef($oThread[$iThreadIdx]);
- $iThreadComplete++;
- }
- }
- }
-
- # Sleep before trying again
- hsleep(.1);
- }
-
- &log(DEBUG, 'all threads exited');
-
- return true;
-}
-
-####################################################################################################################################
-# ARCHIVE_GET
-####################################################################################################################################
-sub archive_get
-{
- my $strDbClusterPath = shift;
- my $strSourceArchive = shift;
- my $strDestinationFile = shift;
-
- # If the destination file path is not absolute then it is relative to the data path
- if (index($strDestinationFile, '/',) != 0)
- {
- if (!defined($strDbClusterPath))
- {
- confess &log(ERROR, 'database path must be set if relative xlog paths are used');
- }
-
- $strDestinationFile = "${strDbClusterPath}/${strDestinationFile}";
- }
-
- # Determine the path where the requested archive file is located
- my $strArchivePath = dirname($oFile->path_get(PATH_BACKUP_ARCHIVE, $strSourceArchive));
-
- # Get the name of the requested archive file (may have hash info and compression extension)
- my @stryArchiveFile = $oFile->list(PATH_BACKUP_ABSOLUTE, $strArchivePath,
- "^${strSourceArchive}(-[0-f]+){0,1}(\\.$oFile->{strCompressExtension}){0,1}\$", undef, true);
-
- # If there is more than one matching archive file then there is a serious issue - likely a bug in the archiver
- if (scalar @stryArchiveFile > 1)
- {
- confess &log(ASSERT, (scalar @stryArchiveFile) . " archive files found for ${strSourceArchive}.");
- }
-
- # If there are no matching archive files then there are two possibilities:
- # 1) The end of the archive stream has been reached, this is normal and a 1 will be returned
- # 2) There is a hole in the archive stream so return a hard error (!!! However if turns out that due to race conditions this
- # is harder than it looks. Postponed and added to the backlog. For now treated as case #1.)
- elsif (scalar @stryArchiveFile == 0)
- {
- &log(INFO, "${strSourceArchive} was not found in the archive repository");
-
- return 1;
- }
-
- &log(DEBUG, "archive_get: cp ${stryArchiveFile[0]} ${strDestinationFile}");
-
- # Determine if the source file is already compressed
- my $bSourceCompressed = $stryArchiveFile[0] =~ "^.*\.$oFile->{strCompressExtension}\$" ? true : false;
-
- # Copy the archive file to the requested location
- $oFile->copy(PATH_BACKUP_ARCHIVE, $stryArchiveFile[0], # Source file
- PATH_DB_ABSOLUTE, $strDestinationFile, # Destination file
- $bSourceCompressed, # Source compression based on detection
- false); # Destination is not compressed
-
- return 0;
-}
-
-####################################################################################################################################
-# ARCHIVE_PUSH
-####################################################################################################################################
-sub archive_push
-{
- my $strDbClusterPath = shift;
- my $strSourceFile = shift;
- my $bAsync = shift;
-
- # If the source file path is not absolute then it is relative to the data path
- if (index($strSourceFile, '/',) != 0)
- {
- if (!defined($strDbClusterPath))
- {
- confess &log(ERROR, 'database path must be set if relative xlog paths are used');
- }
-
- $strSourceFile = "${strDbClusterPath}/${strSourceFile}";
- }
-
- # Get the destination file
- my $strDestinationFile = basename($strSourceFile);
-
- # Determine if this is an archive file (don't want to do compression or checksum on .backup files)
- my $bArchiveFile = basename($strSourceFile) =~ /^[0-F]{24}$/ ? true : false;
-
- # Append compression extension
- if ($bArchiveFile && $bCompress)
- {
- $strDestinationFile .= ".$oFile->{strCompressExtension}";
- }
-
- # Copy the archive file
- $oFile->copy(PATH_DB_ABSOLUTE, $strSourceFile, # Source type/file
- $bAsync ? PATH_BACKUP_ARCHIVE_OUT : PATH_BACKUP_ARCHIVE, # Destination type
- $strDestinationFile, # Destination file
- false, # Source is not compressed
- $bArchiveFile && $bCompress, # Destination compress is configurable
- undef, undef, undef, # Unused params
- true, # Create path if it does not exist
- undef, undef, # User and group
- $bArchiveFile); # Append checksum if archive file
-}
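The `^[0-F]{24}$` test above gates compression and checksumming to genuine WAL segments, leaving `.backup` label files alone. One subtlety worth noting: inside a character class, the range `0-F` also spans the ASCII punctuation `:;<=>?@`, not just hex digits. A stricter standalone sketch of the check (helper names are illustrative, not part of the codebase):

```python
import re

# WAL segment names are 24 upper-case hex digits (timeline + log + segment).
# The class is spelled out because a 0-F range would also match ':;<=>?@'.
WAL_SEGMENT_RE = re.compile(r'^[0-9A-F]{24}$')

def is_wal_segment(name):
    """True for plain WAL segments; .backup and .history files are excluded."""
    return bool(WAL_SEGMENT_RE.match(name))
```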
-
-####################################################################################################################################
-# ARCHIVE_XFER
-####################################################################################################################################
-sub archive_xfer
-{
- my $strArchivePath = shift;
- my $strStopFile = shift;
- my $strCommand = shift;
- my $iArchiveMaxMB = shift;
-
- # Load the archive manifest - all the files that need to be pushed
- my %oManifestHash;
- $oFile->manifest(PATH_DB_ABSOLUTE, $strArchivePath, \%oManifestHash);
-
- # Get all the files to be transferred and calculate the total size
- my @stryFile;
- my $lFileSize = 0;
- my $lFileTotal = 0;
-
- foreach my $strFile (sort(keys $oManifestHash{name}))
- {
- if ($strFile =~ /^[0-F]{24}.*/ || $strFile =~ /^[0-F]{8}\.history$/)
- {
- push @stryFile, $strFile;
-
- $lFileSize += $oManifestHash{name}{"${strFile}"}{size};
- $lFileTotal++;
- }
- }
-
- if (defined($iArchiveMaxMB))
- {
- if ($iArchiveMaxMB < int($lFileSize / 1024 / 1024))
- {
- &log(ERROR, "local archive store has exceeded limit of ${iArchiveMaxMB}MB, archive logs will be discarded");
-
- my $hStopFile;
- open($hStopFile, '>', $strStopFile) or confess &log(ERROR, "unable to create stop file ${strStopFile}");
- close($hStopFile);
- }
- }
-
- if ($lFileTotal == 0)
- {
- &log(DEBUG, 'no archive logs to be copied to backup');
-
- return 0;
- }
-
- # Modify process name to indicate async archiving
- $0 = "${strCommand} archive-push-async " . $stryFile[0] . '-' . $stryFile[scalar @stryFile - 1];
-
- # Output files to be moved to backup
- &log(INFO, "archive to be copied to backup total ${lFileTotal}, size " . file_size_format($lFileSize));
-
- # Transfer each file
- foreach my $strFile (sort @stryFile)
- {
- # Construct the archive filename to backup
- my $strArchiveFile = "${strArchivePath}/${strFile}";
-
- # Determine if the source file is already compressed
- my $bSourceCompressed = $strArchiveFile =~ "^.*\.$oFile->{strCompressExtension}\$" ? true : false;
-
- # Determine if this is an archive file (don't want to do compression or checksum on .backup files)
- my $bArchiveFile = basename($strFile) =~
- "^[0-F]{24}(-[0-f]+){0,1}(\\.$oFile->{strCompressExtension}){0,1}\$" ? true : false;
-
- # Figure out whether the compression extension needs to be added or removed
- my $bDestinationCompress = $bArchiveFile && $bCompress;
- my $strDestinationFile = basename($strFile);
-
- if (!$bSourceCompressed && $bDestinationCompress)
- {
- $strDestinationFile .= ".$oFile->{strCompressExtension}";
- }
- elsif ($bSourceCompressed && !$bDestinationCompress)
- {
- $strDestinationFile = substr($strDestinationFile, 0, length($strDestinationFile) - 3);
- }
-
- &log(DEBUG, "backup archive file ${strFile}, archive ${bArchiveFile}, source_compressed = ${bSourceCompressed}, " .
- "destination_compress ${bDestinationCompress}, default_compress = ${bCompress}");
-
- # Copy the archive file
- $oFile->copy(PATH_DB_ABSOLUTE, $strArchiveFile, # Source file
- PATH_BACKUP_ARCHIVE, $strDestinationFile, # Destination file
- $bSourceCompressed, # Source compression based on detection
- $bDestinationCompress, # Destination compress is configurable
- undef, undef, undef, # Unused params
- true); # Create path if it does not exist
-
- # Remove the source archive file
- unlink($strArchiveFile) or confess &log(ERROR, "unable to remove ${strArchiveFile}");
- }
-
- # Return number of files indicating that processing should continue
- return $lFileTotal;
-}
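The extension handling in `archive_xfer` strips a hardcoded 3 characters (`substr(..., length - 3)`), which silently assumes the `.gz` suffix. The decision table it implements can be sketched generically (hypothetical helper, not part of the codebase):

```python
def destination_name(name, source_compressed, dest_compress, ext='gz'):
    """Add the compression extension when compressing, strip it when
    decompressing, and pass the name through when nothing changes."""
    if not source_compressed and dest_compress:
        return name + '.' + ext
    if source_compressed and not dest_compress:
        return name[:-(len(ext) + 1)]
    return name
```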
-
####################################################################################################################################
# BACKUP_REGEXP_GET - Generate a regexp depending on the backups that need to be found
####################################################################################################################################
@@ -668,78 +314,54 @@ sub backup_file
my $oBackupManifest = shift; # Manifest for the current backup
# Variables used for parallel copy
- my $lTablespaceIdx = 0;
+ my %oFileCopyMap;
my $lFileTotal = 0;
- my $lFileLargeSize = 0;
- my $lFileLargeTotal = 0;
- my $lFileSmallSize = 0;
- my $lFileSmallTotal = 0;
-
- # Decide if all the paths will be created in advance
- my $bPathCreate = $bHardLink || $strType eq BACKUP_TYPE_FULL;
+ my $lSizeTotal = 0;
# Iterate through the path sections of the manifest to backup
- foreach my $strSectionPath ($oBackupManifest->keys())
+ foreach my $strPathKey ($oBackupManifest->keys(MANIFEST_SECTION_BACKUP_PATH))
{
- # Skip non-path sections
- if ($strSectionPath !~ /\:path$/ || $strSectionPath =~ /^backup\:path$/)
- {
- next;
- }
-
# Determine the source and destination backup paths
my $strBackupSourcePath; # Absolute path to the database base directory or tablespace to backup
my $strBackupDestinationPath; # Relative path to the backup directory where the data will be stored
my $strSectionFile; # Manifest section that contains the file data
# Process the base database directory
- if ($strSectionPath =~ /^base\:/)
+ if ($strPathKey =~ /^base$/)
{
- $lTablespaceIdx++;
$strBackupSourcePath = $strDbClusterPath;
$strBackupDestinationPath = 'base';
- $strSectionFile = 'base:file';
# Create the archive log directory
$oFile->path_create(PATH_BACKUP_TMP, 'base/pg_xlog');
}
# Process each tablespace
- elsif ($strSectionPath =~ /^tablespace\:/)
+ elsif ($strPathKey =~ /^tablespace\:/)
{
- $lTablespaceIdx++;
- my $strTablespaceName = (split(':', $strSectionPath))[1];
+ my $strTablespaceName = (split(':', $strPathKey))[1];
$strBackupSourcePath = $oBackupManifest->get(MANIFEST_SECTION_BACKUP_TABLESPACE, $strTablespaceName,
MANIFEST_SUBKEY_PATH);
$strBackupDestinationPath = "tablespace/${strTablespaceName}";
$strSectionFile = "tablespace:${strTablespaceName}:file";
# Create the tablespace directory and link
- if ($bPathCreate)
+ if ($bHardLink || $strType eq BACKUP_TYPE_FULL)
{
- $oFile->path_create(PATH_BACKUP_TMP, $strBackupDestinationPath);
-
- $oFile->link_create(PATH_BACKUP_TMP, ${strBackupDestinationPath},
- PATH_BACKUP_TMP,
- 'base/pg_tblspc/' . $oBackupManifest->get(MANIFEST_SECTION_BACKUP_TABLESPACE, $strTablespaceName,
- MANIFEST_SUBKEY_LINK),
- false, true);
+ $oFile->link_create(PATH_BACKUP_TMP, $strBackupDestinationPath,
+ PATH_BACKUP_TMP,
+ 'base/pg_tblspc/' . $oBackupManifest->get(MANIFEST_SECTION_BACKUP_TABLESPACE, $strTablespaceName,
+ MANIFEST_SUBKEY_LINK),
+ false, true, true);
}
}
else
{
- confess &log(ASSERT, "cannot find type for section ${strSectionPath}");
+ confess &log(ASSERT, "cannot find type for path ${strPathKey}");
}
- # Create all the sub paths if this is a full backup or hardlinks are requested
- if ($bPathCreate)
- {
- foreach my $strPath ($oBackupManifest->keys($strSectionPath))
- {
- $oFile->path_create(PATH_BACKUP_TMP, "${strBackupDestinationPath}/${strPath}", undef, true);
- }
- }
+ # Possible for the file section to exist with no files (i.e. empty tablespace)
+ $strSectionFile = "$strPathKey:file";
- # Possible for the path section to exist with no files (i.e. empty tablespace)
if (!$oBackupManifest->test($strSectionFile))
{
next;
@@ -775,7 +397,7 @@ sub backup_file
&log(DEBUG, "hard-linking ${strBackupSourceFile} from ${strReference}");
$oFile->link_create(PATH_BACKUP_CLUSTER, "${strReference}/${strBackupDestinationPath}/${strFile}",
- PATH_BACKUP_TMP, "${strBackupDestinationPath}/${strFile}", true, false, !$bPathCreate);
+ PATH_BACKUP_TMP, "${strBackupDestinationPath}/${strFile}", true, false, true);
}
}
# Else copy/compress the file and generate a checksum
@@ -791,291 +413,101 @@ sub backup_file
# Setup variables needed for threaded copy
$lFileTotal++;
- $lFileLargeSize += $lFileSize > $iSmallFileThreshold ? $lFileSize : 0;
- $lFileLargeTotal += $lFileSize > $iSmallFileThreshold ? 1 : 0;
- $lFileSmallSize += $lFileSize <= $iSmallFileThreshold ? $lFileSize : 0;
- $lFileSmallTotal += $lFileSize <= $iSmallFileThreshold ? 1 : 0;
+ $lSizeTotal += $lFileSize;
- # Load the hash used by threaded copy
- my $strKey = sprintf('ts%012x-fs%012x-fn%012x', $lTablespaceIdx,
- $lFileSize, $lFileTotal);
-
- $oFileCopyMap{"${strKey}"}{db_file} = $strBackupSourceFile;
- $oFileCopyMap{"${strKey}"}{file_section} = $strSectionFile;
- $oFileCopyMap{"${strKey}"}{file} = ${strFile};
- $oFileCopyMap{"${strKey}"}{backup_file} = "${strBackupDestinationPath}/${strFile}";
- $oFileCopyMap{"${strKey}"}{size} = $lFileSize;
- $oFileCopyMap{"${strKey}"}{modification_time} =
- $oBackupManifest->get($strSectionFile, $strFile, MANIFEST_SUBKEY_MODIFICATION_TIME);
- $oFileCopyMap{"${strKey}"}{checksum_only} = $bProcessChecksumOnly;
- $oFileCopyMap{"${strKey}"}{checksum} =
+ $oFileCopyMap{$strPathKey}{$strFile}{db_file} = $strBackupSourceFile;
+ $oFileCopyMap{$strPathKey}{$strFile}{file_section} = $strSectionFile;
+ $oFileCopyMap{$strPathKey}{$strFile}{file} = ${strFile};
+ $oFileCopyMap{$strPathKey}{$strFile}{backup_file} = "${strBackupDestinationPath}/${strFile}";
+ $oFileCopyMap{$strPathKey}{$strFile}{size} = $lFileSize;
+ $oFileCopyMap{$strPathKey}{$strFile}{checksum_only} = $bProcessChecksumOnly;
+ $oFileCopyMap{$strPathKey}{$strFile}{checksum} =
$oBackupManifest->get($strSectionFile, $strFile, MANIFEST_SUBKEY_CHECKSUM, false);
}
}
}
- # Build the thread queues
- $iThreadLocalMax = thread_init($iThreadMax);
- &log(DEBUG, "actual threads ${iThreadLocalMax}/${iThreadMax}");
-
- # Initialize the thread size array
- my @oyThreadData;
-
- for (my $iThreadIdx = 0; $iThreadIdx < $iThreadLocalMax; $iThreadIdx++)
+ # If there are no files to backup then we'll exit with a warning unless in test mode. The other way this could happen is if
+ # the database is down and backup is called with --no-start-stop twice in a row.
+ if ($lFileTotal == 0)
{
- $oyThreadData[$iThreadIdx]{size} = 0;
- $oyThreadData[$iThreadIdx]{total} = 0;
- $oyThreadData[$iThreadIdx]{large_size} = 0;
- $oyThreadData[$iThreadIdx]{large_total} = 0;
- $oyThreadData[$iThreadIdx]{small_size} = 0;
- $oyThreadData[$iThreadIdx]{small_total} = 0;
- }
-
- # Assign files to each thread queue
- my $iThreadFileSmallIdx = 0;
- my $iThreadFileSmallTotalMax = int($lFileSmallTotal / $iThreadLocalMax);
-
- my $iThreadFileLargeIdx = 0;
- my $fThreadFileLargeSizeMax = $lFileLargeSize / $iThreadLocalMax;
-
- &log(INFO, "file total ${lFileTotal}");
- &log(DEBUG, "file small total ${lFileSmallTotal}, small size: " . file_size_format($lFileSmallSize) .
- ', small thread avg total ' . file_size_format(int($iThreadFileSmallTotalMax)));
- &log(DEBUG, "file large total ${lFileLargeTotal}, large size: " . file_size_format($lFileLargeSize) .
- ', large thread avg size ' . file_size_format(int($fThreadFileLargeSizeMax)));
-
- foreach my $strFile (sort (keys %oFileCopyMap))
- {
- my $lFileSize = $oFileCopyMap{"${strFile}"}{size};
-
- if ($lFileSize > $iSmallFileThreshold)
+ if (!optionGet(OPTION_TEST))
{
- $oThreadQueue[$iThreadFileLargeIdx]->enqueue($strFile);
-
- $oyThreadData[$iThreadFileLargeIdx]{large_size} += $lFileSize;
- $oyThreadData[$iThreadFileLargeIdx]{large_total}++;
- $oyThreadData[$iThreadFileLargeIdx]{size} += $lFileSize;
-
- if ($oyThreadData[$iThreadFileLargeIdx]{large_size} >= $fThreadFileLargeSizeMax &&
- $iThreadFileLargeIdx < $iThreadLocalMax - 1)
- {
- $iThreadFileLargeIdx++;
- }
- }
- else
- {
- $oThreadQueue[$iThreadFileSmallIdx]->enqueue($strFile);
-
- $oyThreadData[$iThreadFileSmallIdx]{small_size} += $lFileSize;
- $oyThreadData[$iThreadFileSmallIdx]{small_total}++;
- $oyThreadData[$iThreadFileSmallIdx]{size} += $lFileSize;
-
- if ($oyThreadData[$iThreadFileSmallIdx]{small_total} >= $iThreadFileSmallTotalMax &&
- $iThreadFileSmallIdx < $iThreadLocalMax - 1)
- {
- $iThreadFileSmallIdx++;
- }
- }
- }
-
- if ($iThreadLocalMax > 1)
- {
- # End each thread queue and start the backup_file threads
- for (my $iThreadIdx = 0; $iThreadIdx < $iThreadLocalMax; $iThreadIdx++)
- {
- # Output info about how much work each thread is going to do
- &log(DEBUG, "thread ${iThreadIdx} large total $oyThreadData[$iThreadIdx]{large_total}, " .
- "size $oyThreadData[$iThreadIdx]{large_size}");
- &log(DEBUG, "thread ${iThreadIdx} small total $oyThreadData[$iThreadIdx]{small_total}, " .
- "size $oyThreadData[$iThreadIdx]{small_size}");
-
- # Start the thread
- $oThread[$iThreadIdx] = threads->create(\&backup_file_thread, true, $iThreadIdx, !$bPathCreate,
- $oyThreadData[$iThreadIdx]{size}, $oBackupManifest);
+ confess &log(ERROR, "no files have changed since the last backup - this seems unlikely");
}
- # Wait for the threads to complete
- backup_thread_complete($iThreadTimeout);
+ return;
+ }
- # Read the messages that we passed back from the threads. These should be two types:
- # 1) remove - files that were skipped because they were removed from the database during backup
- # 2) checksum - file checksums calculated by the threads
- for (my $iThreadIdx = 0; $iThreadIdx < $iThreadLocalMax; $iThreadIdx++)
+ # Create backup and result queues
+ my $oResultQueue = Thread::Queue->new();
+ my @oyBackupQueue;
+
+ # Variables used for local copy
+ my $lSizeCurrent = 0; # Running total of bytes copied
+ my $bCopied; # Was the file copied?
+ my $lCopySize; # Size reported by copy
+ my $strCopyChecksum; # Checksum reported by copy
+
+ # Iterate all backup files
+ foreach my $strPathKey (sort (keys %oFileCopyMap))
+ {
+ if ($iThreadMax > 1)
{
- while (my $strMessage = $oMasterQueue[$iThreadIdx]->dequeue_nb())
- {
- &log (DEBUG, "message received in master queue: ${strMessage}");
-
- # Split the message. Currently using | as the split character. Not ideal, but it will do for now.
- my @strSplit = split(/\|/, $strMessage);
-
- my $strCommand = $strSplit[0]; # Command to execute on a file
- my $strFileSection = $strSplit[1]; # File section where the file is located
- my $strFile = $strSplit[2]; # The file to act on
-
- # These three parts are required
- if (!defined($strCommand) || !defined($strFileSection) || !defined($strFile))
- {
- confess &log(ASSERT, 'thread messages must have strCommand, strFileSection and strFile defined');
- }
-
- &log (DEBUG, "command = ${strCommand}, file_section = ${strFileSection}, file = ${strFile}");
-
- # If command is 'remove' then mark the skipped file in the manifest
- if ($strCommand eq 'remove')
- {
- $oBackupManifest->remove($strFileSection, $strFile);
-
- &log (INFO, "removed file ${strFileSection}:${strFile} from the manifest (it was removed by db during backup)");
- }
- # If command is 'checksum' then record the checksum in the manifest
- elsif ($strCommand eq 'checksum')
- {
- my $strChecksum = $strSplit[3]; # File checksum calculated by the thread
- my $lFileSize = $strSplit[4]; # File size calculated by the thread
-
- # Checksum must be defined
- if (!defined($strChecksum))
- {
- confess &log(ASSERT, 'thread checksum messages must have strChecksum defined');
- }
-
- # File size must be defined
- if (!defined($lFileSize))
- {
- confess &log(ASSERT, 'thread checksum messages must have lFileSize defined');
- }
-
- $oBackupManifest->set($strFileSection, $strFile, MANIFEST_SUBKEY_CHECKSUM, $strChecksum);
- $oBackupManifest->set($strFileSection, $strFile, MANIFEST_SUBKEY_SIZE, $lFileSize + 0);
-
- # Log the checksum
- &log (DEBUG, "write checksum ${strFileSection}:${strFile} into manifest: ${strChecksum} (${lFileSize})");
- }
- }
- }
- }
- else
- {
- &log(DEBUG, "starting backup in main process");
- backup_file_thread(false, 0, !$bPathCreate, $oyThreadData[0]{size}, $oBackupManifest);
- }
-}
-
-sub backup_file_thread
-{
- my $bMulti = shift; # Is this thread one of many?
- my $iThreadIdx = shift; # Defines the index of this thread
- my $bPathCreate = shift; # Should paths be created automatically?
- my $lSizeTotal = shift; # Total size of the files to be copied by this thread
- my $oBackupManifest = shift; # Backup manifest object (only used when single-threaded)
-
- my $lSize = 0; # Size of files currently copied by this thread
- my $strLog; # Store the log message
- my $strLogProgress; # Part of the log message that shows progress
- my $oFileThread; # Thread local file object
- my $bCopyResult; # Copy result
- my $strCopyChecksum; # Copy checksum
- my $lCopySize; # Copy Size
-
- # If multi-threaded, then clone the file object
- if ($bMulti)
- {
- $oFileThread = $oFile->clone($iThreadIdx);
- }
- else
- {
- $oFileThread = $oFile;
- }
-
- # When a KILL signal is received, immediately abort
- $SIG{'KILL'} = sub {threads->exit();};
-
- # Iterate through all the files in this thread's queue to be copied from the database to the backup
- while (my $strFile = $oThreadQueue[$iThreadIdx]->dequeue_nb())
- {
- # Add the size of the current file to keep track of percent complete
- $lSize += $oFileCopyMap{$strFile}{size};
-
- if (!$oFileCopyMap{$strFile}{checksum_only})
- {
- # Output information about the file to be copied
- $strLog = "thread ${iThreadIdx} backing up file";
-
- # Copy the file from the database to the backup (will return false if the source file is missing)
- ($bCopyResult, $strCopyChecksum, $lCopySize) =
- $oFileThread->copy(PATH_DB_ABSOLUTE, $oFileCopyMap{$strFile}{db_file},
- PATH_BACKUP_TMP, $oFileCopyMap{$strFile}{backup_file} .
- ($bCompress ? '.' . $oFile->{strCompressExtension} : ''),
- false, # Source is not compressed since it is the db directory
- $bCompress, # Destination should be compressed based on backup settings
- true, # Ignore missing files
- $oFileCopyMap{$strFile}{modification_time}, # Set modification time
- undef, # Do not set original mode
- true); # Create the destination directory if it does not exist
-
- if (!$bCopyResult)
- {
- # If file is missing assume the database removed it (else corruption and nothing we can do!)
- &log(INFO, "thread ${iThreadIdx} skipped file removed by database: " . $oFileCopyMap{$strFile}{db_file});
-
- # Remove file from the manifest
- if ($bMulti)
- {
- # Write a message into the master queue to have the file removed from the manifest
- $oMasterQueue[$iThreadIdx]->enqueue("remove|$oFileCopyMap{$strFile}{file_section}|".
- "$oFileCopyMap{$strFile}{file}");
- }
- else
- {
- # remove it directly
- $oBackupManifest->remove($oFileCopyMap{$strFile}{file_section}, $oFileCopyMap{$strFile}{file});
- }
-
- # Move on to the next file
- next;
- }
+ $oyBackupQueue[@oyBackupQueue] = Thread::Queue->new();
}
- $strLogProgress = "$oFileCopyMap{$strFile}{db_file} (" . file_size_format($lCopySize) .
- ($lSizeTotal > 0 ? ', ' . int($lSize * 100 / $lSizeTotal) . '%' : '') . ')';
-
- # Generate checksum for file if configured
- if ($lCopySize != 0)
+ foreach my $strFile (sort (keys $oFileCopyMap{$strPathKey}))
{
- # Store checksum in the manifest
- if ($bMulti)
+ my $oFileCopy = $oFileCopyMap{$strPathKey}{$strFile};
+
+ if ($iThreadMax > 1)
{
- # Write the checksum message into the master queue
- $oMasterQueue[$iThreadIdx]->enqueue("checksum|$oFileCopyMap{$strFile}{file_section}|" .
- "$oFileCopyMap{$strFile}{file}|${strCopyChecksum}|${lCopySize}");
+ $oyBackupQueue[@oyBackupQueue - 1]->enqueue($oFileCopy);
}
else
{
- # Write it directly
- $oBackupManifest->set($oFileCopyMap{$strFile}{file_section}, $oFileCopyMap{$strFile}{file},
- MANIFEST_SUBKEY_CHECKSUM, $strCopyChecksum);
- $oBackupManifest->set($oFileCopyMap{$strFile}{file_section}, $oFileCopyMap{$strFile}{file},
- MANIFEST_SUBKEY_SIZE, $lCopySize + 0);
+ # Backup the file
+ ($bCopied, $lSizeCurrent, $lCopySize, $strCopyChecksum) =
+ backupFile($oFile, $$oFileCopy{db_file}, $$oFileCopy{backup_file}, $bCompress,
+ $$oFileCopy{checksum}, $$oFileCopy{checksum_only},
+ $$oFileCopy{size}, $lSizeTotal, $lSizeCurrent);
+
+ backupManifestUpdate($oBackupManifest, $$oFileCopy{file_section}, $$oFileCopy{file},
+ $bCopied, $lCopySize, $strCopyChecksum);
}
-
- # Output information about the file to be checksummed
- if (!defined($strLog))
- {
- $strLog = "thread ${iThreadIdx} checksum-only ${strLogProgress}";
- }
-
- &log(INFO, $strLog . " checksum ${strCopyChecksum}");
}
- else
- {
- &log(INFO, $strLog . ' ' . $strLogProgress);
- }
-
- &log(TRACE, "thread waiting for new file from queue");
}
- &log(DEBUG, "thread ${iThreadIdx} exiting");
+ # If multi-threaded then create threads to copy files
+ if ($iThreadMax > 1)
+ {
+ for (my $iThreadIdx = 0; $iThreadIdx < $iThreadMax; $iThreadIdx++)
+ {
+ my %oParam;
+
+ $oParam{compress} = $bCompress;
+ $oParam{size_total} = $lSizeTotal;
+ $oParam{queue} = \@oyBackupQueue;
+ $oParam{result_queue} = $oResultQueue;
+
+ threadGroupRun($iThreadIdx, 'backup', \%oParam);
+ }
+
+ # Complete thread queues
+ threadGroupComplete();
+
+ # Read the messages that are passed back from the backup threads
+ while (my $oMessage = $oResultQueue->dequeue_nb())
+ {
+ &log(TRACE, "message received in master queue: section = $$oMessage{file_section}, file = $$oMessage{file}" .
+ ", copied = $$oMessage{copied}");
+
+ backupManifestUpdate($oBackupManifest, $$oMessage{file_section}, $$oMessage{file},
+ $$oMessage{copied}, $$oMessage{size}, $$oMessage{checksum});
+ }
+ }
}
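The rewrite above drops the per-thread small/large size balancing in favor of one queue per path key, drained by a pool of workers that report results on a shared queue. A minimal Python analogue of that shape (the real code uses Perl `threads` and `Thread::Queue` via `threadGroupRun`; names here are illustrative):

```python
import queue
import threading

def worker(path_queues, result_queue):
    # Drain every per-path queue; each dequeued item stands in for one file copy.
    for q in path_queues:
        while True:
            try:
                item = q.get_nowait()
            except queue.Empty:
                break
            result_queue.put({'file': item, 'copied': True})

def run_backup(files_by_path, thread_max=2):
    # One queue per path key, loaded before any worker starts.
    path_queues = []
    for path in sorted(files_by_path):
        q = queue.Queue()
        for name in sorted(files_by_path[path]):
            q.put(name)
        path_queues.append(q)

    # Workers share the queues and push outcomes onto a single result queue.
    result_queue = queue.Queue()
    threads = [threading.Thread(target=worker, args=(path_queues, result_queue))
               for _ in range(thread_max)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Collect results once all workers have finished.
    results = []
    while not result_queue.empty():
        results.append(result_queue.get_nowait())
    return results
```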
####################################################################################################################################
@@ -1298,7 +730,7 @@ sub backup
# If archive logs are required to complete the backup, then fetch them. This is the default, but can be overridden if the
# archive logs are going to a different server. Be careful here because there is no way to verify that the backup will be
- # consistent - at least not in this routine.
+ # consistent - at least not here.
if (!optionGet(OPTION_NO_START_STOP) && optionGet(OPTION_BACKUP_ARCHIVE_CHECK))
{
# Save the backup manifest a second time - before getting archive logs in case that fails
@@ -1309,33 +741,24 @@ sub backup
 # After the backup has been stopped, copy the archive logs needed to make the db consistent
&log(DEBUG, "retrieving archive logs ${strArchiveStart}:${strArchiveStop}");
- my @stryArchive = archive_list_get($strArchiveStart, $strArchiveStop, $oDb->db_version_get() < 9.3);
+ my $oArchive = new BackRest::Archive();
+ my @stryArchive = $oArchive->range($strArchiveStart, $strArchiveStop, $oDb->db_version_get() < 9.3);
foreach my $strArchive (@stryArchive)
{
- my $strArchivePath = dirname($oFile->path_get(PATH_BACKUP_ARCHIVE, $strArchive));
-
- wait_for_file($strArchivePath, "^${strArchive}(-[0-f]+){0,1}(\\.$oFile->{strCompressExtension}){0,1}\$", 600);
-
- my @stryArchiveFile = $oFile->list(PATH_BACKUP_ABSOLUTE, $strArchivePath,
- "^${strArchive}(-[0-f]+){0,1}(\\.$oFile->{strCompressExtension}){0,1}\$");
-
- if (scalar @stryArchiveFile != 1)
- {
- confess &log(ERROR, "Zero or more than one file found for glob: ${strArchivePath}");
- }
+ my $strArchiveFile = $oArchive->walFileName($oFile, $strArchive, 600);
if (optionGet(OPTION_BACKUP_ARCHIVE_COPY))
{
- &log(DEBUG, "archiving: ${strArchive} (${stryArchiveFile[0]})");
+ &log(DEBUG, "archiving: ${strArchive} (${strArchiveFile})");
# Copy the log file from the archive repo to the backup
my $strDestinationFile = "base/pg_xlog/${strArchive}" . ($bCompress ? ".$oFile->{strCompressExtension}" : '');
my ($bCopyResult, $strCopyChecksum, $lCopySize) =
- $oFile->copy(PATH_BACKUP_ARCHIVE, $stryArchiveFile[0],
+ $oFile->copy(PATH_BACKUP_ARCHIVE, $strArchiveFile,
PATH_BACKUP_TMP, $strDestinationFile,
- $stryArchiveFile[0] =~ "^.*\.$oFile->{strCompressExtension}\$",
+ $strArchiveFile =~ "^.*\.$oFile->{strCompressExtension}\$",
$bCompress, undef, $lModificationTime);
# Add the archive file to the manifest so it can be part of the restore and checked in validation
@@ -1345,10 +768,10 @@ sub backup
my $strFileLog = "pg_xlog/${strArchive}";
# Compare the checksum against the one already in the archive log name
- if ($stryArchiveFile[0] !~ "^${strArchive}-${strCopyChecksum}(\\.$oFile->{strCompressExtension}){0,1}\$")
+ if ($strArchiveFile !~ "^${strArchive}-${strCopyChecksum}(\\.$oFile->{strCompressExtension}){0,1}\$")
{
- confess &log(ERROR, "error copying log '$stryArchiveFile[0]' to backup - checksum recorded with file does " .
- "not match actual checksum of '${strCopyChecksum}'", ERROR_CHECKSUM);
+ confess &log(ERROR, "error copying WAL segment '${strArchiveFile}' to backup - checksum recorded with " .
+ "file does not match actual checksum of '${strCopyChecksum}'", ERROR_CHECKSUM);
}
# Set manifest values
@@ -1406,69 +829,6 @@ sub backup
$oFile->link_create(PATH_BACKUP_CLUSTER, $strBackupPath, PATH_BACKUP_CLUSTER, "latest", undef, true);
}
-####################################################################################################################################
-# ARCHIVE_LIST_GET
-#
-# Generates a range of archive log file names given the start and end log file name. For pre-9.3 databases, use bSkipFF to exclude
-# the FF that prior versions did not generate.
-####################################################################################################################################
-sub archive_list_get
-{
- my $strArchiveStart = shift;
- my $strArchiveStop = shift;
- my $bSkipFF = shift;
-
- # bSkipFF defaults to false
- $bSkipFF = defined($bSkipFF) ? $bSkipFF : false;
-
- if ($bSkipFF)
- {
- &log(TRACE, 'archive_list_get: pre-9.3 database, skipping log FF');
- }
- else
- {
- &log(TRACE, 'archive_list_get: 9.3+ database, including log FF');
- }
-
- # Get the timelines and make sure they match
- my $strTimeline = substr($strArchiveStart, 0, 8);
- my @stryArchive;
- my $iArchiveIdx = 0;
-
- if ($strTimeline ne substr($strArchiveStop, 0, 8))
- {
- confess &log(ERROR, "Timelines between ${strArchiveStart} and ${strArchiveStop} differ");
- }
-
- # Iterate through all archive logs between start and stop
- my $iStartMajor = hex substr($strArchiveStart, 8, 8);
- my $iStartMinor = hex substr($strArchiveStart, 16, 8);
-
- my $iStopMajor = hex substr($strArchiveStop, 8, 8);
- my $iStopMinor = hex substr($strArchiveStop, 16, 8);
-
- $stryArchive[$iArchiveIdx] = uc(sprintf("${strTimeline}%08x%08x", $iStartMajor, $iStartMinor));
- $iArchiveIdx += 1;
-
- while (!($iStartMajor == $iStopMajor && $iStartMinor == $iStopMinor))
- {
- $iStartMinor += 1;
-
- if ($bSkipFF && $iStartMinor == 255 || !$bSkipFF && $iStartMinor == 256)
- {
- $iStartMajor += 1;
- $iStartMinor = 0;
- }
-
- $stryArchive[$iArchiveIdx] = uc(sprintf("${strTimeline}%08x%08x", $iStartMajor, $iStartMinor));
- $iArchiveIdx += 1;
- }
-
- &log(TRACE, " archive_list_get: $strArchiveStart:$strArchiveStop (@stryArchive)");
-
- return @stryArchive;
-}
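For reference, the removed range logic (now `BackRest::Archive->range`) walks WAL segment numbers from start to stop, rolling the low 8 hex digits over at 0xFF for pre-9.3 servers (which never emit segment FF) and at 0x100 otherwise. A Python transliteration, assuming well-formed 24-digit names:

```python
def archive_list_get(start, stop, skip_ff=False):
    """Generate the inclusive list of WAL segment names from start to stop."""
    timeline = start[:8]
    if timeline != stop[:8]:
        raise ValueError('timelines between %s and %s differ' % (start, stop))

    # Split each name into its major (log) and minor (segment) counters.
    major, minor = int(start[8:16], 16), int(start[16:24], 16)
    stop_major, stop_minor = int(stop[8:16], 16), int(stop[16:24], 16)

    archive = ['%s%08X%08X' % (timeline, major, minor)]

    while (major, minor) != (stop_major, stop_minor):
        minor += 1

        # Pre-9.3 servers skip segment FF, so roll over one step early.
        if minor == (0xFF if skip_ff else 0x100):
            major += 1
            minor = 0

        archive.append('%s%08X%08X' % (timeline, major, minor))

    return archive
```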
-
####################################################################################################################################
# BACKUP_EXPIRE
#
diff --git a/lib/BackRest/BackupFile.pm b/lib/BackRest/BackupFile.pm
new file mode 100644
index 000000000..edc6161df
--- /dev/null
+++ b/lib/BackRest/BackupFile.pm
@@ -0,0 +1,132 @@
+####################################################################################################################################
+# BACKUP FILE MODULE
+####################################################################################################################################
+package BackRest::BackupFile;
+
+use threads;
+use strict;
+use Thread::Queue;
+use warnings FATAL => qw(all);
+use Carp qw(confess);
+
+use File::Basename qw(dirname);
+use Exporter qw(import);
+
+use lib dirname($0);
+use BackRest::Utility;
+use BackRest::Exception;
+use BackRest::Manifest;
+use BackRest::File;
+
+####################################################################################################################################
+# backupFile
+####################################################################################################################################
+sub backupFile
+{
+ my $oFile = shift; # File object
+ my $strSourceFile = shift; # Source file to backup
+ my $strDestinationFile = shift; # Destination backup file
+ my $bDestinationCompress = shift; # Compress destination file
+ my $strChecksum = shift; # File checksum to be checked
+ my $bChecksumOnly = shift; # Checksum destination only
+ my $lSizeFile = shift; # Size of this file
+ my $lSizeTotal = shift; # Total size of the files to be copied
+ my $lSizeCurrent = shift; # Size of files copied so far
+
+ my $strLog; # Store the log message
+ my $strLogProgress; # Part of the log message that shows progress
+ my $bCopyResult; # Copy result
+ my $strCopyChecksum; # Copy checksum
+ my $lCopySize; # Copy Size
+
+ # Add the size of the current file to keep track of percent complete
+ $lSizeCurrent += $lSizeFile;
+
+ if ($bChecksumOnly)
+ {
+ $lCopySize = $lSizeFile;
+ $strCopyChecksum = 'dude';
+ # !!! Need to put checksum code in here
+ }
+ else
+ {
+ # Output information about the file to be copied
+ $strLog = "backed up file";
+
+ # Copy the file from the database to the backup (will return false if the source file is missing)
+ ($bCopyResult, $strCopyChecksum, $lCopySize) =
+ $oFile->copy(PATH_DB_ABSOLUTE, $strSourceFile,
+ PATH_BACKUP_TMP, $strDestinationFile .
+ ($bDestinationCompress ? '.' . $oFile->{strCompressExtension} : ''),
+ false, # Source is not compressed since it is the db directory
+ $bDestinationCompress, # Destination should be compressed based on backup settings
+ true, # Ignore missing files
+ undef, # Do not set original modification time
+ undef, # Do not set original mode
+ true); # Create the destination directory if it does not exist
+
+ if (!$bCopyResult)
+ {
+ # If file is missing assume the database removed it (else corruption and nothing we can do!)
+ &log(INFO, "skipped file removed by database: " . $strSourceFile);
+
+ return false, $lSizeCurrent, undef, undef;
+ }
+ }
+
+ $strLogProgress = "$strSourceFile (" . file_size_format($lCopySize) .
+ ($lSizeTotal > 0 ? ', ' . int($lSizeCurrent * 100 / $lSizeTotal) . '%' : '') . ')';
+
+ # Generate checksum for file if configured
+ if ($lCopySize != 0)
+ {
+ # Output information about the file to be checksummed
+ if (!defined($strLog))
+ {
+ $strLog = "checksum-only";
+ }
+
+ &log(INFO, $strLog . " ${strLogProgress} checksum ${strCopyChecksum}");
+ }
+ else
+ {
+ &log(INFO, $strLog . ' ' . $strLogProgress);
+ }
+
+ return true, $lSizeCurrent, $lCopySize, $strCopyChecksum;
+}
+
+our @EXPORT = qw(backupFile);
+
+####################################################################################################################################
+# backupManifestUpdate
+####################################################################################################################################
+sub backupManifestUpdate
+{
+ my $oManifest = shift;
+ my $strSection = shift;
+ my $strFile = shift;
+ my $bCopied = shift;
+ my $lSize = shift;
+ my $strChecksum = shift;
+
+ # If copy was successful store the checksum and size
+ if ($bCopied)
+ {
+ $oManifest->set($strSection, $strFile, MANIFEST_SUBKEY_SIZE, $lSize + 0);
+
+ if ($lSize > 0)
+ {
+ $oManifest->set($strSection, $strFile, MANIFEST_SUBKEY_CHECKSUM, $strChecksum);
+ }
+ }
+ # Else the file was removed during backup so remove from manifest
+ else
+ {
+ $oManifest->remove($strSection, $strFile);
+ }
+}
+
+push @EXPORT, qw(backupManifestUpdate);
+
+1;
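
The `backupManifestUpdate` function above applies a simple rule: a successful copy records the file's size (plus a checksum for non-empty files), while a failed copy means the database removed the file mid-backup, so the manifest entry is dropped. A minimal Python sketch of that rule, with a plain dict standing in for the Manifest object (section and file names are illustrative, not the module's real keys):

```python
def manifest_update(manifest, section, file_name, copied, size, checksum):
    """Record a copied file in the manifest, or drop it if it vanished during backup."""
    if copied:
        entry = manifest.setdefault(section, {}).setdefault(file_name, {})
        entry["size"] = int(size)
        if size > 0:
            entry["checksum"] = checksum  # zero-length files carry no checksum
    else:
        # The file was removed by the database while the backup was running
        manifest.get(section, {}).pop(file_name, None)


manifest = {"target:file": {"base/123": {"size": 0}}}
manifest_update(manifest, "target:file", "base/123", True, 8192, "abc123")
manifest_update(manifest, "target:file", "base/456", False, None, None)
```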
diff --git a/lib/BackRest/Config.pm b/lib/BackRest/Config.pm
index 6cfe7da0f..46d96c6e8 100644
--- a/lib/BackRest/Config.pm
+++ b/lib/BackRest/Config.pm
@@ -19,7 +19,20 @@ use BackRest::Utility;
# Export functions
####################################################################################################################################
our @EXPORT = qw(configLoad optionGet optionTest optionRuleGet optionRequired optionDefault operationGet operationTest
- operationSet);
+ operationSet operationWrite optionRemoteType optionRemoteTypeTest optionRemote optionRemoteTest
+ remoteDestroy);
+
+####################################################################################################################################
+# DB/BACKUP Constants
+####################################################################################################################################
+use constant
+{
+ DB => 'db',
+ BACKUP => 'backup',
+ NONE => 'none'
+};
+
+push @EXPORT, qw(DB BACKUP NONE);
####################################################################################################################################
# Operation constants - basic operations that are allowed in backrest
@@ -47,6 +60,17 @@ use constant
push @EXPORT, qw(BACKUP_TYPE_FULL BACKUP_TYPE_DIFF BACKUP_TYPE_INCR);
+
+####################################################################################################################################
+# SOURCE Constants
+####################################################################################################################################
+use constant
+{
+ SOURCE_CONFIG => 'config',
+ SOURCE_PARAM => 'param',
+ SOURCE_DEFAULT => 'default'
+};
+
####################################################################################################################################
# RECOVERY Type Constants
####################################################################################################################################
@@ -182,8 +206,8 @@ push @EXPORT, qw(OPTION_CONFIG OPTION_DELTA OPTION_FORCE OPTION_NO_START_STOP OP
####################################################################################################################################
use constant
{
- OPTION_DEFAULT_BUFFER_SIZE => 1048576,
- OPTION_DEFAULT_BUFFER_SIZE_MIN => 4096,
+ OPTION_DEFAULT_BUFFER_SIZE => 4194304,
+ OPTION_DEFAULT_BUFFER_SIZE_MIN => 16384,
OPTION_DEFAULT_BUFFER_SIZE_MAX => 8388608,
OPTION_DEFAULT_COMPRESS => true,
@@ -348,7 +372,7 @@ my %oOptionRule =
{
&OP_RESTORE =>
{
- &OPTION_RULE_DEFAULT => OPTION_DEFAULT_RESTORE_TYPE,
+ &OPTION_RULE_DEFAULT => OPTION_DEFAULT_RESTORE_SET,
}
}
},
@@ -938,6 +962,8 @@ my %oOptionRule =
####################################################################################################################################
my %oOption; # Option hash
my $strOperation; # Operation (backup, archive-get, ...)
+my $strRemoteType; # Remote type (DB, BACKUP, NONE)
+my $oRemote; # Global remote object that is created on first request (NOT THREADSAFE!)
####################################################################################################################################
# configLoad
@@ -972,10 +998,14 @@ sub configLoad
$oOptionAllow{$strOption} = $strOption;
# Check if the option can be negated
- if (defined($oOptionRule{$strKey}{&OPTION_RULE_NEGATE}) && $oOptionRule{$strKey}{&OPTION_RULE_NEGATE})
+ if ((defined($oOptionRule{$strKey}{&OPTION_RULE_NEGATE}) &&
+ $oOptionRule{$strKey}{&OPTION_RULE_NEGATE}) ||
+ ($oOptionRule{$strKey}{&OPTION_RULE_TYPE} eq OPTION_TYPE_BOOLEAN &&
+ defined($oOptionRule{$strKey}{&OPTION_RULE_SECTION})))
{
$strOption = "no-${strKey}";
$oOptionAllow{$strOption} = $strOption;
+ $oOptionRule{$strKey}{&OPTION_RULE_NEGATE} = true;
}
}
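
The negation change above means any boolean option that can appear in pg_backrest.conf automatically gains a `no-` prefixed command-line form, in addition to options explicitly marked negatable. A rough Python sketch of that rule (the dict keys are illustrative stand-ins for the module's OPTION_RULE_* constants):

```python
def allowed_switches(option_rules):
    """Compute the command-line switch set, adding no- forms for negatable options."""
    allowed = set()
    for name, rule in option_rules.items():
        allowed.add(name)
        # Negatable if explicitly marked, or a boolean that may come from a config section
        negatable = rule.get("negate", False) or (
            rule.get("type") == "boolean" and "section" in rule
        )
        if negatable:
            rule["negate"] = True  # remember the derived negatability for later checks
            allowed.add(f"no-{name}")
    return allowed


switches = allowed_switches({
    "compress": {"type": "boolean", "section": "general"},
    "set": {"type": "string"},
})
```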
@@ -1017,13 +1047,34 @@ sub configLoad
# Replace command psql options if set
if (optionTest(OPTION_COMMAND_PSQL) && optionTest(OPTION_COMMAND_PSQL_OPTION))
{
- $oOption{&OPTION_COMMAND_PSQL} =~ s/\%option\%/$oOption{&OPTION_COMMAND_PSQL_OPTION}/g;
+ $oOption{&OPTION_COMMAND_PSQL}{value} =~ s/\%option\%/$oOption{&OPTION_COMMAND_PSQL_OPTION}{value}/g;
}
# Set repo-remote-path to repo-path if it is not set
if (optionTest(OPTION_REPO_PATH) && !optionTest(OPTION_REPO_REMOTE_PATH))
{
- $oOption{&OPTION_REPO_REMOTE_PATH} = optionGet(OPTION_REPO_PATH);
+ $oOption{&OPTION_REPO_REMOTE_PATH}{value} = optionGet(OPTION_REPO_PATH);
+ }
+
+ # Check if the backup host is remote
+ if (optionTest(OPTION_BACKUP_HOST))
+ {
+ $strRemoteType = BACKUP;
+ }
+ # Else check if db is remote
+ elsif (optionTest(OPTION_DB_HOST))
+ {
+ # Don't allow both sides to be remote
+ if (defined($strRemoteType))
+ {
+ confess &log(ERROR, 'db and backup cannot both be configured as remote', ERROR_CONFIG);
+ }
+
+ $strRemoteType = DB;
+ }
+ else
+ {
+ $strRemoteType = NONE;
}
}
@@ -1102,13 +1153,64 @@ sub optionValid
{
confess &log(ERROR, "option '${strOption}' cannot be both set and negated", ERROR_OPTION_NEGATE);
}
+
+ if ($bNegate && $oOptionRule{$strOption}{&OPTION_RULE_TYPE} eq OPTION_TYPE_BOOLEAN)
+ {
+ $strValue = false;
+ }
+ }
+
+ # If the operation has rules store them for later evaluation
+ my $oOperationRule = optionOperationRule($strOption, $strOperation);
+
+ # Check dependency for the operation then for the option
+ my $bDependResolved = true;
+ my $oDepend = defined($oOperationRule) ? $$oOperationRule{&OPTION_RULE_DEPEND} :
+ $oOptionRule{$strOption}{&OPTION_RULE_DEPEND};
+ my $strDependOption;
+ my $strDependValue;
+ my $strDependType;
+
+ if (defined($oDepend))
+ {
+ # Check if the depend option has a value
+ $strDependOption = $$oDepend{&OPTION_RULE_DEPEND_OPTION};
+ $strDependValue = $oOption{$strDependOption}{value};
+
+ # Make sure the depend option has been resolved, otherwise skip this option for now
+ if (!defined($oOptionResolved{$strDependOption}))
+ {
+ $bDependUnresolved = true;
+ next;
+ }
+
+ if (!defined($strDependValue))
+ {
+ $bDependResolved = false;
+ $strDependType = 'source';
+ }
+
+ # If a depend value exists, make sure the option value matches
+ if ($bDependResolved && defined($$oDepend{&OPTION_RULE_DEPEND_VALUE}) &&
+ $$oDepend{&OPTION_RULE_DEPEND_VALUE} ne $strDependValue)
+ {
+ $bDependResolved = false;
+ $strDependType = 'value';
+ }
+
+ # If a depend list exists, make sure the value is in the list
+ if ($bDependResolved && defined($$oDepend{&OPTION_RULE_DEPEND_LIST}) &&
+ !defined($$oDepend{&OPTION_RULE_DEPEND_LIST}{$strDependValue}))
+ {
+ $bDependResolved = false;
+ $strDependType = 'list';
+ }
}
# If the option value is undefined and not negated, see if it can be loaded from pg_backrest.conf
if (!defined($strValue) && !$bNegate && $strOption ne OPTION_CONFIG &&
- $oOptionRule{$strOption}{&OPTION_RULE_SECTION})
+ $oOptionRule{$strOption}{&OPTION_RULE_SECTION} && $bDependResolved)
{
-
# If the config option has not been resolved yet then continue processing
if (!defined($oOptionResolved{&OPTION_CONFIG}) || !defined($oOptionResolved{&OPTION_STANZA}))
{
@@ -1117,12 +1219,12 @@ sub optionValid
}
# If the config option is defined try to get the option from the config file
- if ($bConfigExists && defined($oOption{&OPTION_CONFIG}))
+ if ($bConfigExists && defined($oOption{&OPTION_CONFIG}{value}))
{
# Attempt to load the config file if it has not been loaded
if (!defined($oConfig))
{
- my $strConfigFile = $oOption{&OPTION_CONFIG};
+ my $strConfigFile = $oOption{&OPTION_CONFIG}{value};
$bConfigExists = -e $strConfigFile;
if ($bConfigExists)
@@ -1205,82 +1307,51 @@ sub optionValid
ERROR_OPTION_INVALID_VALUE);
}
}
+
+ $oOption{$strOption}{source} = SOURCE_CONFIG;
}
}
}
- # If the operation has rules store them for later evaluation
- my $oOperationRule = optionOperationRule($strOption, $strOperation);
-
- # Check dependency for the operation then for the option
- my $bDependResolved = true;
- my $oDepend = defined($oOperationRule) ? $$oOperationRule{&OPTION_RULE_DEPEND} :
- $oOptionRule{$strOption}{&OPTION_RULE_DEPEND};
-
- if (defined($oDepend))
+ if (defined($oDepend) && !$bDependResolved && defined($strValue))
{
- # Make sure the depend option has been resolved, otherwise skip this option for now
- my $strDependOption = $$oDepend{&OPTION_RULE_DEPEND_OPTION};
-
- if (!defined($oOptionResolved{$strDependOption}))
- {
- $bDependUnresolved = true;
- next;
- }
-
- # Check if the depend option has a value
- my $strDependValue = $oOption{$strDependOption};
my $strError = "option '${strOption}' not valid without option '${strDependOption}'";
- $bDependResolved = defined($strDependValue) ? true : false;
-
- if (!$bDependResolved && defined($strValue))
+ if ($strDependType eq 'source')
{
confess &log(ERROR, $strError, ERROR_OPTION_INVALID);
}
# If a depend value exists, make sure the option value matches
- if ($bDependResolved && defined($$oDepend{&OPTION_RULE_DEPEND_VALUE}) &&
- $$oDepend{&OPTION_RULE_DEPEND_VALUE} ne $strDependValue)
+ if ($strDependType eq 'value')
{
- $bDependResolved = false;
-
- if (defined($strValue))
+ if ($oOptionRule{$strDependOption}{&OPTION_RULE_TYPE} eq OPTION_TYPE_BOOLEAN)
{
- if ($oOptionRule{$strDependOption}{&OPTION_RULE_TYPE} eq OPTION_TYPE_BOOLEAN)
+ if (!$$oDepend{&OPTION_RULE_DEPEND_VALUE})
{
- if (!$$oDepend{&OPTION_RULE_DEPEND_VALUE})
- {
- confess &log(ASSERT, "no error has been created for unused case where depend value = false");
- }
+ confess &log(ASSERT, "no error has been created for unused case where depend value = false");
}
- else
- {
- $strError .= " = '$$oDepend{&OPTION_RULE_DEPEND_VALUE}'";
- }
-
- confess &log(ERROR, $strError, ERROR_OPTION_INVALID);
}
+ else
+ {
+ $strError .= " = '$$oDepend{&OPTION_RULE_DEPEND_VALUE}'";
+ }
+
+ confess &log(ERROR, $strError, ERROR_OPTION_INVALID);
}
# If a depend list exists, make sure the value is in the list
- if ($bDependResolved && defined($$oDepend{&OPTION_RULE_DEPEND_LIST}) &&
- !defined($$oDepend{&OPTION_RULE_DEPEND_LIST}{$strDependValue}))
+ if ($strDependType eq 'list')
{
- $bDependResolved = false;
+ my @oyValue;
- if (defined($strValue))
+        foreach my $strValue (sort(keys(%{$$oDepend{&OPTION_RULE_DEPEND_LIST}})))
{
- my @oyValue;
-
- foreach my $strValue (sort(keys($$oDepend{&OPTION_RULE_DEPEND_LIST})))
- {
- push(@oyValue, "'${strValue}'");
- }
-
- $strError .= @oyValue == 1 ? " = $oyValue[0]" : " in (" . join(", ", @oyValue) . ")";
- confess &log(ERROR, $strError, ERROR_OPTION_INVALID);
+ push(@oyValue, "'${strValue}'");
}
+
+ $strError .= @oyValue == 1 ? " = $oyValue[0]" : " in (" . join(", ", @oyValue) . ")";
+ confess &log(ERROR, $strError, ERROR_OPTION_INVALID);
}
}
@@ -1347,18 +1418,24 @@ sub optionValid
# Check that the key has not already been set
my $strKey = substr($strItem, 0, $iEqualPos);
- if (defined($oOption{$strOption}{$strKey}))
+ if (defined($oOption{$strOption}{$strKey}{value}))
{
                        confess &log(ERROR, "'${strItem}' already defined for '${strOption}' option",
ERROR_OPTION_DUPLICATE_KEY);
}
- $oOption{$strOption}{$strKey} = substr($strItem, $iEqualPos + 1);
+ $oOption{$strOption}{value}{$strKey} = substr($strItem, $iEqualPos + 1);
}
}
else
{
- $oOption{$strOption} = $strValue;
+ $oOption{$strOption}{value} = $strValue;
+ }
+
+ # If not config sourced then it must be a param
+ if (!defined($oOption{$strOption}{source}))
+ {
+ $oOption{$strOption}{source} = SOURCE_PARAM;
}
}
# Else try to set a default
@@ -1366,6 +1443,9 @@ sub optionValid
(!defined($oOptionRule{$strOption}{&OPTION_RULE_OPERATION}) ||
defined($oOptionRule{$strOption}{&OPTION_RULE_OPERATION}{$strOperation})))
{
+ # Source is default for this option
+ $oOption{$strOption}{source} = SOURCE_DEFAULT;
+
# Check for default in operation then option
my $strDefault = optionDefault($strOption, $strOperation);
@@ -1373,7 +1453,7 @@ sub optionValid
if (defined($strDefault))
{
# Only set default if dependency is resolved
- $oOption{$strOption} = $strDefault if !$bNegate;
+ $oOption{$strOption}{value} = $strDefault if !$bNegate;
}
# Else check required
elsif (optionRequired($strOption, $strOperation))
@@ -1494,16 +1574,50 @@ sub optionGet
my $strOption = shift;
my $bRequired = shift;
- if (!defined($oOption{$strOption}) && (!defined($bRequired) || $bRequired))
+ if (!defined($oOption{$strOption}{value}) && (!defined($bRequired) || $bRequired))
{
confess &log(ASSERT, "option ${strOption} is required");
}
- return $oOption{$strOption};
+ return $oOption{$strOption}{value};
}
####################################################################################################################################
-# optionTest
+# operationWrite
+#
+# Using the options that were passed to the current operation, write the command string for another operation. For example, this
+# can be used to write the archive-get command for recovery.conf during a restore.
+####################################################################################################################################
+sub operationWrite
+{
+ my $strNewOperation = shift;
+
+ my $strCommand = "$0";
+
+ foreach my $strOption (sort(keys(%oOption)))
+ {
+ if ((!defined($oOptionRule{$strOption}{&OPTION_RULE_OPERATION}) ||
+ defined($oOptionRule{$strOption}{&OPTION_RULE_OPERATION}{$strNewOperation})) &&
+ $oOption{$strOption}{source} eq SOURCE_PARAM)
+ {
+ my $strParam = "--${strOption}=$oOption{$strOption}{value}";
+
+ if (index($oOption{$strOption}{value}, " ") != -1)
+ {
+ $strCommand .= " \"${strParam}\"";
+ }
+ else
+ {
+ $strCommand .= " ${strParam}";
+ }
+ }
+ }
+
+    $strCommand .= " ${strNewOperation}";
+
+    return $strCommand;
+}
+
+####################################################################################################################################
+# optionTest
#
 # Test an option value.
####################################################################################################################################
@@ -1517,7 +1631,103 @@ sub optionTest
return optionGet($strOption) eq $strValue;
}
- return defined($oOption{$strOption});
+ return defined($oOption{$strOption}{value});
+}
+
+####################################################################################################################################
+# optionRemoteType
+#
+# Returns the remote type.
+####################################################################################################################################
+sub optionRemoteType
+{
+ return $strRemoteType;
+}
+
+####################################################################################################################################
+# optionRemoteTypeTest
+#
+# Test the remote type.
+####################################################################################################################################
+sub optionRemoteTypeTest
+{
+ my $strTest = shift;
+
+ return $strRemoteType eq $strTest ? true : false;
+}
+
+####################################################################################################################################
+# optionRemote
+#
+# Get the remote object, creating it if it does not exist. Remotes are shared because each one opens an SSH connection to the
+# remote host and the number of these connections should be minimized. A remote can only be shared within a single thread - new
+# threads should call clone() on the shared remote.
+####################################################################################################################################
+sub optionRemote
+{
+ my $bForceLocal = shift;
+ my $bStore = shift;
+
+ # If force local or remote = NONE then create a local remote and return it
+ if ((defined($bForceLocal) && $bForceLocal) || optionRemoteTypeTest(NONE))
+ {
+ return new BackRest::Remote
+ (
+ undef, undef, undef, undef, undef,
+ optionGet(OPTION_BUFFER_SIZE),
+ operationTest(OP_EXPIRE) ? OPTION_DEFAULT_COMPRESS_LEVEL : optionGet(OPTION_COMPRESS_LEVEL),
+ operationTest(OP_EXPIRE) ? OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK : optionGet(OPTION_COMPRESS_LEVEL_NETWORK)
+ );
+ }
+
+    # Return the remote if it is already defined
+ if (defined($oRemote))
+ {
+ return $oRemote;
+ }
+
+    # Otherwise create a new remote
+ my $oRemoteTemp = new BackRest::Remote
+ (
+ optionRemoteTypeTest(DB) ? optionGet(OPTION_DB_HOST) : optionGet(OPTION_BACKUP_HOST),
+ optionRemoteTypeTest(DB) ? optionGet(OPTION_DB_USER) : optionGet(OPTION_BACKUP_USER),
+ optionGet(OPTION_COMMAND_REMOTE),
+ optionGet(OPTION_STANZA),
+ optionGet(OPTION_REPO_REMOTE_PATH),
+ optionGet(OPTION_BUFFER_SIZE),
+ operationTest(OP_EXPIRE) ? OPTION_DEFAULT_COMPRESS_LEVEL : optionGet(OPTION_COMPRESS_LEVEL),
+ operationTest(OP_EXPIRE) ? OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK : optionGet(OPTION_COMPRESS_LEVEL_NETWORK)
+ );
+
+ if ($bStore)
+ {
+ $oRemote = $oRemoteTemp;
+ }
+
+ return $oRemoteTemp;
+}
+
+####################################################################################################################################
+# remoteDestroy
+#
+# Undefine the remote if it is stored locally.
+####################################################################################################################################
+sub remoteDestroy
+{
+ if (defined($oRemote))
+ {
+ undef($oRemote);
+ }
+}
+
+####################################################################################################################################
+# optionRemoteTest
+#
+# Test whether the remote is DB or BACKUP (i.e. not NONE).
+####################################################################################################################################
+sub optionRemoteTest
+{
+ return $strRemoteType ne NONE ? true : false;
}
####################################################################################################################################
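
`operationWrite` reuses only the options the user passed on the command line (source = param), skipping config-file and default values, so the generated command inherits exactly the user's explicit settings. A rough Python equivalent of the filtering and quoting logic (the executable path and option names are illustrative placeholders):

```python
def operation_write(executable, options, new_operation, valid_for):
    """Build a command string from param-sourced options valid for new_operation."""
    command = executable
    for name in sorted(options):
        opt = options[name]
        if opt["source"] != "param" or name not in valid_for:
            continue
        param = f"--{name}={opt['value']}"
        # Quote the whole parameter if the value contains a space
        command += f' "{param}"' if " " in str(opt["value"]) else f" {param}"
    return command + f" {new_operation}"


cmd = operation_write(
    "/usr/bin/pg_backrest.pl",
    {
        "stanza": {"value": "main", "source": "param"},
        "buffer-size": {"value": 16384, "source": "default"},  # skipped: not a param
        "db-path": {"value": "/var/lib/pg data", "source": "param"},
    },
    "archive-get",
    {"stanza", "db-path"},
)
```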
diff --git a/lib/BackRest/Db.pm b/lib/BackRest/Db.pm
index 659885c17..88b1a0e66 100644
--- a/lib/BackRest/Db.pm
+++ b/lib/BackRest/Db.pm
@@ -13,6 +13,7 @@ use IPC::System::Simple qw(capture);
use Exporter qw(import);
use lib dirname($0);
+use BackRest::Exception;
use BackRest::Utility;
####################################################################################################################################
@@ -27,7 +28,8 @@ our @EXPORT = qw(FILE_POSTMASTER_PID);
####################################################################################################################################
sub new
{
- my $class = shift; # Class name
+ my $class = shift; # Class name
+ my $strDbPath = shift; # Database path
my $strCommandPsql = shift; # PSQL command
my $strDbHost = shift; # Database host name
my $strDbUser = shift; # Database user name (generally postgres)
@@ -70,6 +72,20 @@ sub is_remote
return defined($self->{oDbSSH}) ? true : false;
}
+####################################################################################################################################
+# versionSupport
+#
+# Returns a reference to an array of the supported Postgres versions.
+####################################################################################################################################
+sub versionSupport
+{
+ my @strySupportVersion = ('8.3', '8.4', '9.0', '9.1', '9.2', '9.3', '9.4');
+
+ return \@strySupportVersion;
+}
+
+push @EXPORT, qw(versionSupport);
+
####################################################################################################################################
# PSQL_EXECUTE
####################################################################################################################################
@@ -131,6 +147,13 @@ sub db_version_get
&log(DEBUG, "database version is $self->{fVersion}");
+ my $strVersionSupport = versionSupport();
+
+ if ($self->{fVersion} < ${$strVersionSupport}[0])
+ {
+        confess &log(ERROR, "unsupported Postgres version $self->{fVersion}, minimum supported is ${$strVersionSupport}[0]", ERROR_VERSION_NOT_SUPPORTED);
+ }
+
return $self->{fVersion};
}
@@ -143,6 +166,14 @@ sub backup_start
my $strLabel = shift;
my $bStartFast = shift;
+ $self->db_version_get();
+
+ if ($self->{fVersion} < 8.4 && $bStartFast)
+ {
+ &log(WARN, 'start-fast option is only available in PostgreSQL >= 8.4');
+ $bStartFast = false;
+ }
+
my @stryField = split("\t", trim($self->psql_execute("set client_min_messages = 'warning';" .
"copy (select to_char(current_timestamp, 'YYYY-MM-DD HH24:MI:SS.US TZ'), pg_xlogfile_name(xlog) from pg_start_backup('${strLabel}'" .
($bStartFast ? ', true' : '') . ') as xlog) to stdout')));
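
The version gate in `db_version_get` rejects any server older than the first entry in the supported list. Below is a sketch of the same check in Python, comparing (major, minor) tuples rather than the patch's float comparison, which avoids ordering surprises for hypothetical two-digit minors (the version list is copied from the patch; the function name is illustrative):

```python
SUPPORTED_VERSIONS = ["8.3", "8.4", "9.0", "9.1", "9.2", "9.3", "9.4"]

def version_tuple(version):
    """Parse 'major.minor' into a tuple so comparisons are numeric per component."""
    return tuple(int(part) for part in version.split("."))

def check_version_supported(version):
    """Raise if the server version predates the oldest supported release."""
    if version_tuple(version) < version_tuple(SUPPORTED_VERSIONS[0]):
        raise RuntimeError(
            f"unsupported Postgres version {version}, "
            f"minimum supported is {SUPPORTED_VERSIONS[0]}"
        )
    return version
```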
diff --git a/lib/BackRest/Exception.pm b/lib/BackRest/Exception.pm
index dd1bb3920..e7caf5fd9 100644
--- a/lib/BackRest/Exception.pm
+++ b/lib/BackRest/Exception.pm
@@ -28,13 +28,20 @@ use constant
ERROR_OPTION_REQUIRED => 112,
ERROR_POSTMASTER_RUNNING => 113,
ERROR_PROTOCOL => 114,
- ERROR_RESTORE_PATH_NOT_EMPTY => 115
+ ERROR_RESTORE_PATH_NOT_EMPTY => 115,
+ ERROR_FILE_OPEN => 116,
+ ERROR_FILE_READ => 117,
+ ERROR_PARAM_REQUIRED => 118,
+ ERROR_ARCHIVE_MISMATCH => 119,
+ ERROR_ARCHIVE_DUPLICATE => 120,
+ ERROR_VERSION_NOT_SUPPORTED => 121
};
our @EXPORT = qw(ERROR_ASSERT ERROR_CHECKSUM ERROR_CONFIG ERROR_FILE_INVALID ERROR_FORMAT ERROR_OPERATION_REQUIRED
ERROR_OPTION_INVALID ERROR_OPTION_INVALID_VALUE ERROR_OPTION_INVALID_RANGE ERROR_OPTION_INVALID_PAIR
ERROR_OPTION_DUPLICATE_KEY ERROR_OPTION_NEGATE ERROR_OPTION_REQUIRED ERROR_POSTMASTER_RUNNING ERROR_PROTOCOL
- ERROR_RESTORE_PATH_NOT_EMPTY);
+ ERROR_RESTORE_PATH_NOT_EMPTY ERROR_FILE_OPEN ERROR_FILE_READ ERROR_PARAM_REQUIRED ERROR_ARCHIVE_MISMATCH
+ ERROR_ARCHIVE_DUPLICATE ERROR_VERSION_NOT_SUPPORTED);
####################################################################################################################################
# CONSTRUCTOR
diff --git a/lib/BackRest/File.pm b/lib/BackRest/File.pm
index 5d70746c7..e77fadbfd 100644
--- a/lib/BackRest/File.pm
+++ b/lib/BackRest/File.pm
@@ -13,12 +13,13 @@ use File::Copy qw(cp);
use File::Path qw(make_path remove_tree);
use Digest::SHA;
use File::stat;
-use Fcntl ':mode';
+use Fcntl qw(:mode O_RDONLY O_WRONLY O_CREAT O_EXCL);
use Exporter qw(import);
use lib dirname($0) . '/../lib';
use BackRest::Exception;
use BackRest::Utility;
+use BackRest::Config;
use BackRest::Remote;
####################################################################################################################################
@@ -179,6 +180,16 @@ sub clone
);
}
+####################################################################################################################################
+# stanza
+####################################################################################################################################
+sub stanza
+{
+ my $self = shift;
+
+ return $self->{strStanza};
+}
+
####################################################################################################################################
# PATH_TYPE_GET
####################################################################################################################################
@@ -279,15 +290,7 @@ sub path_get
# Get the backup archive path
if ($strType eq PATH_BACKUP_ARCHIVE_OUT || $strType eq PATH_BACKUP_ARCHIVE)
{
- my $strArchivePath = "$self->{strBackupPath}/archive";
-
- if ($bTemp)
- {
- return "${strArchivePath}/temp/$self->{strStanza}-archive" .
- (defined($self->{iThreadIdx}) ? "-$self->{iThreadIdx}" : '') . ".tmp";
- }
-
- $strArchivePath .= "/$self->{strStanza}";
+ my $strArchivePath = "$self->{strBackupPath}/archive/$self->{strStanza}";
if ($strType eq PATH_BACKUP_ARCHIVE)
{
@@ -303,13 +306,25 @@ sub path_get
}
}
- return $strArchivePath . (defined($strArchive) ? '/' . substr($strArchive, 0, 16) : '') .
- (defined($strFile) ? '/' . $strFile : '');
+ $strArchivePath = $strArchivePath . (defined($strArchive) ? '/' . substr($strArchive, 0, 16) : '') .
+ (defined($strFile) ? '/' . $strFile : '');
}
else
{
- return "${strArchivePath}/out" . (defined($strFile) ? '/' . $strFile : '');
+ $strArchivePath = "${strArchivePath}/out" . (defined($strFile) ? '/' . $strFile : '');
}
+
+ if ($bTemp)
+ {
+ if (!defined($strFile))
+ {
+ confess &log(ASSERT, 'archive temp must have strFile defined');
+ }
+
+ $strArchivePath = "${strArchivePath}.tmp";
+ }
+
+ return $strArchivePath;
}
if ($strType eq PATH_BACKUP_CLUSTER)
@@ -377,6 +392,7 @@ sub link_create
&log(DEBUG, "${strOperation}: ${strDebug}");
# If the destination path is backup and does not exist, create it
+    # TODO: this should only happen when the link create errors
if ($bPathCreate && $self->path_type_get($strDestinationPathType) eq PATH_BACKUP)
{
$self->path_create(PATH_BACKUP_ABSOLUTE, dirname($strDestination));
@@ -476,30 +492,31 @@ sub move
{
if (!rename($strPathOpSource, $strPathOpDestination))
{
- my $strError = "${strPathOpDestination} could not be moved: " . $!;
- my $iErrorCode = COMMAND_ERR_FILE_READ;
-
- if (!$self->exists(PATH_ABSOLUTE, dirname($strPathOpDestination)))
+ if ($bDestinationPathCreate)
{
- $strError = "${strPathOpDestination} does not exist";
- $iErrorCode = COMMAND_ERR_FILE_MISSING;
+ $self->path_create(PATH_ABSOLUTE, dirname($strPathOpDestination), undef, true);
}
- if (!($bDestinationPathCreate && $iErrorCode == COMMAND_ERR_FILE_MISSING))
+ if (!$bDestinationPathCreate || !rename($strPathOpSource, $strPathOpDestination))
{
- if ($strSourcePathType eq PATH_ABSOLUTE)
+ my $strError = "unable to move file ${strPathOpSource} to ${strPathOpDestination}: " . $!;
+ my $iErrorCode = COMMAND_ERR_FILE_READ;
+
+ if (!$self->exists(PATH_ABSOLUTE, dirname($strPathOpDestination)))
{
- confess &log(ERROR, $strError, $iErrorCode);
+ $strError = "${strPathOpDestination} does not exist";
+ $iErrorCode = COMMAND_ERR_FILE_MISSING;
}
- confess &log(ERROR, "${strDebug}: " . $strError);
- }
+ if (!($bDestinationPathCreate && $iErrorCode == COMMAND_ERR_FILE_MISSING))
+ {
+ if ($strSourcePathType eq PATH_ABSOLUTE)
+ {
+ confess &log(ERROR, $strError, $iErrorCode);
+ }
- $self->path_create(PATH_ABSOLUTE, dirname($strPathOpDestination));
-
- if (!rename($strPathOpSource, $strPathOpDestination))
- {
- confess &log(ERROR, "unable to move file ${strPathOpSource}: " . $!);
+ confess &log(ERROR, "${strDebug}: " . $strError);
+ }
}
}
}
@@ -789,7 +806,7 @@ sub hash_size
{
my $hFile;
- if (!open($hFile, '<', $strFileOp))
+ if (!sysopen($hFile, $strFileOp, O_RDONLY))
{
my $strError = "${strFileOp} could not be read: " . $!;
my $iErrorCode = 2;
@@ -1342,7 +1359,7 @@ sub copy
if (!$bSourceRemote)
{
- if (!open($hSourceFile, '<', $strSourceOp))
+ if (!sysopen($hSourceFile, $strSourceOp, O_RDONLY))
{
my $strError = $!;
my $iErrorCode = COMMAND_ERR_FILE_READ;
@@ -1377,32 +1394,33 @@ sub copy
if (!$bDestinationRemote)
{
# Open the destination temp file
- if (!open($hDestinationFile, '>', $strDestinationTmpOp))
+ if (!sysopen($hDestinationFile, $strDestinationTmpOp, O_WRONLY | O_CREAT))
{
- my $strError = "${strDestinationTmpOp} could not be opened: " . $!;
- my $iErrorCode = COMMAND_ERR_FILE_READ;
-
- if (!$self->exists(PATH_ABSOLUTE, dirname($strDestinationTmpOp)))
+ if ($bDestinationPathCreate)
{
- $strError = dirname($strDestinationTmpOp) . ' does not exist';
- $iErrorCode = COMMAND_ERR_FILE_MISSING;
+ $self->path_create(PATH_ABSOLUTE, dirname($strDestinationTmpOp), undef, true);
}
- if (!($bDestinationPathCreate && $iErrorCode == COMMAND_ERR_FILE_MISSING))
+ if (!$bDestinationPathCreate || !sysopen($hDestinationFile, $strDestinationTmpOp, O_WRONLY | O_CREAT))
{
- if ($strSourcePathType eq PATH_ABSOLUTE)
+ my $strError = "unable to open ${strDestinationTmpOp}: " . $!;
+ my $iErrorCode = COMMAND_ERR_FILE_READ;
+
+ if (!$self->exists(PATH_ABSOLUTE, dirname($strDestinationTmpOp)))
{
- confess &log(ERROR, $strError, $iErrorCode);
+ $strError = dirname($strDestinationTmpOp) . ' does not exist';
+ $iErrorCode = COMMAND_ERR_FILE_MISSING;
}
- confess &log(ERROR, "${strDebug}: " . $strError);
- }
+ if (!($bDestinationPathCreate && $iErrorCode == COMMAND_ERR_FILE_MISSING))
+ {
+ if ($strSourcePathType eq PATH_ABSOLUTE)
+ {
+ confess &log(ERROR, $strError, $iErrorCode);
+ }
- $self->path_create(PATH_ABSOLUTE, dirname($strDestinationTmpOp));
-
- if (!open($hDestinationFile, '>', $strDestinationTmpOp))
- {
- confess &log(ERROR, "unable to open destination file ${strDestinationOp}: " . $!);
+ confess &log(ERROR, "${strDebug}: " . $strError);
+ }
}
}
}
@@ -1697,7 +1715,7 @@ sub copy
}
# Move the file from tmp to final destination
- $self->move(PATH_ABSOLUTE, $strDestinationTmpOp, PATH_ABSOLUTE, $strDestinationOp, true);
+ $self->move(PATH_ABSOLUTE, $strDestinationTmpOp, PATH_ABSOLUTE, $strDestinationOp, $bDestinationPathCreate);
}
return $bResult, $strChecksum, $iFileSize;
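
The reworked error handling in `move` and `copy` above follows one pattern: attempt the operation, and only if it fails with destination-path-create enabled, create the parent directory and retry exactly once before reporting an error. A hedged Python sketch of that retry shape (file names are illustrative only):

```python
import os
import tempfile

def open_with_path_create(path, create_path=False):
    """Open path for writing; on failure, optionally create the parent dir and retry once."""
    try:
        return open(path, "wb")
    except FileNotFoundError:
        if not create_path:
            raise  # surface the original error when retry is not allowed
        # First attempt failed: create the missing parent directory, then retry once.
        os.makedirs(os.path.dirname(path), exist_ok=True)
        return open(path, "wb")  # a second failure propagates to the caller

# Demonstrate against a throwaway directory with a missing nested path
base = tempfile.mkdtemp()
target = os.path.join(base, "archive", "main", "000000010000000000000001.tmp")
with open_with_path_create(target, create_path=True) as handle:
    handle.write(b"")
created = os.path.exists(target)
```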
diff --git a/lib/BackRest/Remote.pm b/lib/BackRest/Remote.pm
index 9cc305329..7689e6cbf 100644
--- a/lib/BackRest/Remote.pm
+++ b/lib/BackRest/Remote.pm
@@ -17,22 +17,7 @@ use IO::String qw();
use lib dirname($0) . '/../lib';
use BackRest::Exception qw(ERROR_PROTOCOL);
use BackRest::Utility qw(log version_get trim TRACE ERROR ASSERT true false);
-
-####################################################################################################################################
-# Exports
-####################################################################################################################################
-use Exporter qw(import);
-our @EXPORT = qw(DB BACKUP NONE);
-
-####################################################################################################################################
-# DB/BACKUP Constants
-####################################################################################################################################
-use constant
-{
- DB => 'db',
- BACKUP => 'backup',
- NONE => 'none'
-};
+use BackRest::Config qw(optionGet OPTION_STANZA OPTION_REPO_REMOTE_PATH);
####################################################################################################################################
# CONSTRUCTOR
@@ -43,6 +28,8 @@ sub new
my $strHost = shift; # Host to connect to for remote (optional as this can also be used on the remote)
my $strUser = shift; # User to connect to for remote (must be set if strHost is set)
my $strCommand = shift; # Command to execute on remote ('remote' if this is the remote)
+ my $strStanza = shift; # Stanza
+ my $strRepoPath = shift; # Remote Repository Path
my $iBlockSize = shift; # Buffer size
my $iCompressLevel = shift; # Set compression level
my $iCompressLevelNetwork = shift; # Set compression level for network only compression
@@ -54,6 +41,10 @@ sub new
# Create the greeting that will be used to check versions with the remote
$self->{strGreeting} = 'PG_BACKREST_REMOTE ' . version_get();
+ # Set stanza and repo path
+ $self->{strStanza} = $strStanza;
+ $self->{strRepoPath} = $strRepoPath;
+
# Set default block size
$self->{iBlockSize} = $iBlockSize;
@@ -91,12 +82,14 @@ sub new
master_opts => [-o => $strOptionSSHCompression, -o => $strOptionSSHRequestTTY]);
$self->{oSSH}->error and confess &log(ERROR, "unable to connect to $self->{strHost}: " . $self->{oSSH}->error);
+ &log(TRACE, 'connected to remote ssh host ' . $self->{strHost});
# Execute remote command
($self->{hIn}, $self->{hOut}, $self->{hErr}, $self->{pId}) = $self->{oSSH}->open3($self->{strCommand});
$self->greeting_read();
- $self->setting_write($self->{iBlockSize}, $self->{iCompressLevel}, $self->{iCompressLevelNetwork});
+ $self->setting_write($self->{strStanza}, $self->{strRepoPath},
+ $self->{iBlockSize}, $self->{iCompressLevel}, $self->{iCompressLevelNetwork});
}
elsif (defined($strCommand) && $strCommand eq 'remote')
{
@@ -104,7 +97,8 @@ sub new
$self->greeting_write();
# Read settings from master
- ($self->{iBlockSize}, $self->{iCompressLevel}, $self->{iCompressLevelNetwork}) = $self->setting_read();
+ ($self->{strStanza}, $self->{strRepoPath}, $self->{iBlockSize}, $self->{iCompressLevel},
+ $self->{iCompressLevelNetwork}) = $self->setting_read();
}
# Check block size
@@ -127,22 +121,47 @@ sub new
return $self;
}
+
####################################################################################################################################
-# THREAD_KILL
+# DESTROY
####################################################################################################################################
-sub thread_kill
+sub DESTROY
{
my $self = shift;
+
+ # Only send the exit command if the process is running
+ if (defined($self->{pId}))
+ {
+ &log(TRACE, "sending exit command to process");
+ $self->command_write('exit');
+
+ # &log(TRACE, "waiting for remote process");
+ # if (!$self->wait_pid(5, false))
+ # {
+ # &log(TRACE, "killed remote process");
+ # kill('KILL', $self->{pId});
+ # }
+ }
}
####################################################################################################################################
-# DESTRUCTOR
+# repoPath
####################################################################################################################################
-sub DEMOLISH
+sub repoPath
{
my $self = shift;
- $self->thread_kill();
+ return $self->{strRepoPath};
+}
+
+####################################################################################################################################
+# stanza
+####################################################################################################################################
+sub stanza
+{
+ my $self = shift;
+
+ return $self->{strStanza};
}
####################################################################################################################################
@@ -157,6 +176,8 @@ sub clone
$self->{strHost},
$self->{strUser},
$self->{strCommand},
+ $self->{strStanza},
+ $self->{strRepoPath},
$self->{iBlockSize},
$self->{iCompressLevel},
$self->{iCompressLevelNetwork}
@@ -200,6 +221,12 @@ sub setting_read
{
my $self = shift;
+ # Get Stanza
+ my $strStanza = $self->read_line(*STDIN);
+
+ # Get Repo Path
+ my $strRepoPath = $self->read_line(*STDIN);
+
# Tokenize the settings
my @stryToken = split(/ /, $self->read_line(*STDIN));
@@ -216,7 +243,7 @@ sub setting_read
}
# Return the settings
- return $stryToken[1], $stryToken[2], $stryToken[3];
+ return $strStanza, $strRepoPath, $stryToken[1], $stryToken[2], $stryToken[3];
}
####################################################################################################################################
@@ -227,10 +254,14 @@ sub setting_read
sub setting_write
{
my $self = shift;
+ my $strStanza = shift; # Database stanza
+ my $strRepoPath = shift; # Path to the repository on the remote
my $iBlockSize = shift; # Optionally, set the block size (defaults to DEFAULT_BLOCK_SIZE)
my $iCompressLevel = shift; # Set compression level
my $iCompressLevelNetwork = shift; # Set compression level for network only compression
+ $self->write_line($self->{hIn}, $strStanza);
+ $self->write_line($self->{hIn}, $strRepoPath);
$self->write_line($self->{hIn}, "setting ${iBlockSize} ${iCompressLevel} ${iCompressLevelNetwork}");
}
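The settings exchange above is a simple line-oriented protocol: the master writes the stanza, the repo path, and a single space-delimited `setting` line, and the remote reads them back in the same order. A minimal Python sketch of the idea (names and stream types are hypothetical, not the module's API):

```python
import io

def setting_write(out, stanza, repo_path, block_size, level, level_network):
    # Write one value (or one tokenized group) per line, in a fixed order
    out.write(f"{stanza}\n")
    out.write(f"{repo_path}\n")
    out.write(f"setting {block_size} {level} {level_network}\n")

def setting_read(inp):
    # Read values back in exactly the order the master wrote them
    stanza = inp.readline().rstrip("\n")
    repo_path = inp.readline().rstrip("\n")
    token = inp.readline().split()  # ['setting', block_size, level, level_network]
    return stanza, repo_path, int(token[1]), int(token[2]), int(token[3])
```

Because both sides agree on the order, no field names or delimiters beyond whitespace are needed; the cost is that any new setting must be added to both ends at once, which is why the protocol also checks versions via the greeting.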
@@ -382,27 +413,70 @@ sub write_line
sub wait_pid
{
my $self = shift;
+ my $fWaitTime = shift;
+ my $bReportError = shift;
- if (defined($self->{pId}) && waitpid($self->{pId}, WNOHANG) != 0)
+ # Record the start time and set initial sleep interval
+ my $fStartTime = defined($fWaitTime) ? gettimeofday() : undef;
+ my $fSleep = defined($fWaitTime) ? .1 : undef;
+
+ if (defined($self->{pId}))
{
- my $strError = 'no error on stderr';
-
- if (!defined($self->{hErr}))
+ do
{
- $strError = 'no error captured because stderr is already closed';
- }
- else
- {
- $strError = $self->pipe_to_string($self->{hErr});
- }
+ my $iResult = waitpid($self->{pId}, WNOHANG);
- $self->{pId} = undef;
- $self->{hIn} = undef;
- $self->{hOut} = undef;
- $self->{hErr} = undef;
+ if (defined($fWaitTime))
+ {
+ &log(TRACE, "waitpid result = $iResult");
+ }
- confess &log(ERROR, "remote process terminated: ${strError}");
+ # If there is no such process
+ if ($iResult == -1)
+ {
+ return true;
+ }
+
+ if ($iResult > 0)
+ {
+ if (!defined($bReportError) || $bReportError)
+ {
+ my $strError = 'no error on stderr';
+
+ if (!defined($self->{hErr}))
+ {
+ $strError = 'no error captured because stderr is already closed';
+ }
+ else
+ {
+ $strError = $self->pipe_to_string($self->{hErr});
+ }
+
+ $self->{pId} = undef;
+ $self->{hIn} = undef;
+ $self->{hOut} = undef;
+ $self->{hErr} = undef;
+
+ confess &log(ERROR, "remote process terminated: ${strError}");
+ }
+
+ return true;
+ }
+
+ &log(TRACE, "waiting for pid");
+
+ # If waiting then sleep before trying again
+ if (defined($fWaitTime))
+ {
+ hsleep($fSleep);
+ $fSleep = $fSleep * 2 < $fWaitTime - (gettimeofday() - $fStartTime) ?
+ $fSleep * 2 : ($fWaitTime - (gettimeofday() - $fStartTime)) + .001;
+ }
+ }
+ while (defined($fWaitTime) && (gettimeofday() - $fStartTime) < $fWaitTime);
}
+
+ return false;
}
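The rewritten `wait_pid` polls with an exponentially growing sleep that is capped by the time remaining, so the total wait never overshoots the deadline by more than about a millisecond. An illustrative Python sketch of that backoff loop (function name and `check` callback are hypothetical):

```python
import time

def wait_with_backoff(check, wait_time):
    """Poll check() until it returns True or wait_time seconds elapse."""
    start = time.monotonic()
    sleep = 0.1
    while True:
        if check():
            return True
        elapsed = time.monotonic() - start
        if elapsed >= wait_time:
            return False
        remaining = wait_time - elapsed
        # Double the interval, but never sleep past the deadline
        sleep = sleep * 2 if sleep * 2 < remaining else remaining + 0.001
        time.sleep(sleep)
```

Doubling keeps the polling cheap when the child lingers, while the cap guarantees the caller regains control close to the requested timeout.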
####################################################################################################################################
@@ -1024,7 +1098,7 @@ sub output_read
}
# If output is required and there is no output, raise exception
- if ($bOutputRequired && !defined($strOutput))
+ if (defined($bOutputRequired) && $bOutputRequired && !defined($strOutput))
{
confess &log(ERROR, (defined($strErrorPrefix) ? "${strErrorPrefix}: " : '') . 'output is not defined');
}
diff --git a/lib/BackRest/Restore.pm b/lib/BackRest/Restore.pm
index 09be1d423..2c07cac42 100644
--- a/lib/BackRest/Restore.pm
+++ b/lib/BackRest/Restore.pm
@@ -17,6 +17,7 @@ use lib dirname($0);
use BackRest::Exception;
use BackRest::Utility;
use BackRest::ThreadGroup;
+use BackRest::RestoreFile;
use BackRest::Config;
use BackRest::Manifest;
use BackRest::File;
@@ -496,9 +497,7 @@ sub recovery
# Write the restore command
if (!$bRestoreCommandOverride)
{
- $strRecovery .= "restore_command = '$self->{strBackRestBin} --stanza=$self->{strStanza}" .
- (defined($self->{strConfigFile}) ? " --config=$self->{strConfigFile}" : '') .
- " archive-get %f \"%p\"'\n";
+ $strRecovery .= "restore_command = '" . operationWrite(OP_ARCHIVE_GET) . " %f \"%p\"'\n";
}
# If RECOVERY_TYPE_DEFAULT do not write target options
@@ -510,20 +509,20 @@ sub recovery
# Write recovery_target_inclusive
if ($self->{bTargetExclusive})
{
- $strRecovery .= "recovery_target_inclusive = false\n";
+ $strRecovery .= "recovery_target_inclusive = 'false'\n";
}
}
# Write pause_at_recovery_target
if ($self->{bTargetResume})
{
- $strRecovery .= "pause_at_recovery_target = false\n";
+ $strRecovery .= "pause_at_recovery_target = 'false'\n";
}
# Write recovery_target_timeline
if (defined($self->{strTargetTimeline}))
{
- $strRecovery .= "recovery_target_timeline = $self->{strTargetTimeline}\n";
+ $strRecovery .= "recovery_target_timeline = '$self->{strTargetTimeline}'\n";
}
# Write recovery.conf
@@ -566,20 +565,46 @@ sub restore
# Build paths/links in the restore paths
$self->build($oManifest);
- # Create thread queues
+ # Get variables required for restore
+ my $lCopyTimeBegin = $oManifest->epoch(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_TIMESTAMP_COPY_START);
+ my $bSourceCompression = $oManifest->get(MANIFEST_SECTION_BACKUP_OPTION, MANIFEST_KEY_COMPRESS) eq 'y' ? true : false;
+ my $strCurrentUser = getpwuid($<);
+ my $strCurrentGroup = getgrgid($();
+
+ # Create thread queues (or do restore if single-threaded)
my @oyRestoreQueue;
+ if ($self->{iThreadTotal} > 1)
+ {
+ &log(TRACE, "building thread queues");
+ }
+ else
+ {
+ &log(TRACE, "starting restore in main process");
+ }
+
foreach my $strPathKey ($oManifest->keys(MANIFEST_SECTION_BACKUP_PATH))
{
my $strSection = "${strPathKey}:file";
if ($oManifest->test($strSection))
{
- $oyRestoreQueue[@oyRestoreQueue] = Thread::Queue->new();
+ if ($self->{iThreadTotal} > 1)
+ {
+ $oyRestoreQueue[@oyRestoreQueue] = Thread::Queue->new();
+ }
foreach my $strName ($oManifest->keys($strSection))
{
- $oyRestoreQueue[@oyRestoreQueue - 1]->enqueue("${strPathKey}|${strName}");
+ if ($self->{iThreadTotal} > 1)
+ {
+ $oyRestoreQueue[@oyRestoreQueue - 1]->enqueue("${strPathKey}|${strName}");
+ }
+ else
+ {
+ restoreFile($strPathKey, $strName, $lCopyTimeBegin, $self->{bDelta}, $self->{bForce}, $self->{strBackupPath},
+ $bSourceCompression, $strCurrentUser, $strCurrentGroup, $oManifest, $self->{oFile});
+ }
}
}
}
@@ -587,179 +612,29 @@ sub restore
# If multi-threaded then create threads to copy files
if ($self->{iThreadTotal} > 1)
{
- # Create threads to process the thread queues
- my $oThreadGroup = thread_group_create();
-
for (my $iThreadIdx = 0; $iThreadIdx < $self->{iThreadTotal}; $iThreadIdx++)
{
- &log(DEBUG, "starting restore thread ${iThreadIdx}");
- thread_group_add($oThreadGroup, threads->create(\&restore_thread, $self, true,
- $iThreadIdx, \@oyRestoreQueue, $oManifest));
+ my %oParam;
+
+ $oParam{copy_time_begin} = $lCopyTimeBegin;
+ $oParam{delta} = $self->{bDelta};
+ $oParam{force} = $self->{bForce};
+ $oParam{backup_path} = $self->{strBackupPath};
+ $oParam{source_compression} = $bSourceCompression;
+ $oParam{current_user} = $strCurrentUser;
+ $oParam{current_group} = $strCurrentGroup;
+ $oParam{queue} = \@oyRestoreQueue;
+ $oParam{manifest} = $oManifest;
+
+ threadGroupRun($iThreadIdx, 'restore', \%oParam);
}
# Complete thread queues
- thread_group_complete($oThreadGroup);
- }
- # Else copy in the main process
- else
- {
- &log(DEBUG, "starting restore in main process");
- $self->restore_thread(false, 0, \@oyRestoreQueue, $oManifest);
+ threadGroupComplete();
}
# Create recovery.conf file
$self->recovery();
}
-####################################################################################################################################
-# RESTORE_THREAD
-#
-# Worker threads for the restore process.
-####################################################################################################################################
-sub restore_thread
-{
- my $self = shift; # Class hash
- my $bMulti = shift; # Is this thread one of many?
- my $iThreadIdx = shift; # Defines the index of this thread
- my $oyRestoreQueueRef = shift; # Restore queues
- my $oManifest = shift; # Backup manifest
-
- my $iDirection = $iThreadIdx % 2 == 0 ? 1 : -1; # Size of files currently copied by this thread
- my $oFileThread; # Thread local file object
-
- # If multi-threaded, then clone the file object
- if ($bMulti)
- {
- $oFileThread = $self->{oFile}->clone($iThreadIdx);
- }
- # Else use the master file object
- else
- {
- $oFileThread = $self->{oFile};
- }
-
- # Initialize the starting and current queue index based in the total number of threads in relation to this thread
- my $iQueueStartIdx = int((@{$oyRestoreQueueRef} / $self->{iThreadTotal}) * $iThreadIdx);
- my $iQueueIdx = $iQueueStartIdx;
-
- # Time when the backup copying began - used for size/timestamp deltas
- my $lCopyTimeBegin = $oManifest->epoch(MANIFEST_SECTION_BACKUP, MANIFEST_KEY_TIMESTAMP_COPY_START);
-
- # Set source compression
- my $bSourceCompression = $oManifest->get(MANIFEST_SECTION_BACKUP_OPTION, MANIFEST_KEY_COMPRESS) eq 'y' ? true : false;
-
- # When a KILL signal is received, immediately abort
- $SIG{'KILL'} = sub {threads->exit();};
-
- # Get the current user and group to compare with stored mode
- my $strCurrentUser = getpwuid($<);
- my $strCurrentGroup = getgrgid($();
-
- # Loop through all the queues to restore files (exit when the original queue is reached
- do
- {
- while (my $strMessage = ${$oyRestoreQueueRef}[$iQueueIdx]->dequeue_nb())
- {
- my $strSourcePath = (split(/\|/, $strMessage))[0]; # Source path from backup
- my $strSection = "${strSourcePath}:file"; # Backup section with file info
- my $strDestinationPath = $oManifest->get(MANIFEST_SECTION_BACKUP_PATH, # Destination path stored in manifest
- $strSourcePath);
- $strSourcePath =~ s/\:/\//g; # Replace : with / in source path
- my $strName = (split(/\|/, $strMessage))[1]; # Name of file to be restored
-
- # If the file is a reference to a previous backup and hardlinks are off, then fetch it from that backup
- my $strReference = $oManifest->test(MANIFEST_SECTION_BACKUP_OPTION, MANIFEST_KEY_HARDLINK, undef, 'y') ? undef :
- $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_REFERENCE, false);
-
- # Generate destination file name
- my $strDestinationFile = $oFileThread->path_get(PATH_DB_ABSOLUTE, "${strDestinationPath}/${strName}");
-
- if ($oFileThread->exists(PATH_DB_ABSOLUTE, $strDestinationFile))
- {
- # Perform delta if requested
- if ($self->{bDelta})
- {
- # If force then use size/timestamp delta
- if ($self->{bForce})
- {
- my $oStat = lstat($strDestinationFile);
-
- # Make sure that timestamp/size are equal and that timestamp is before the copy start time of the backup
- if (defined($oStat) &&
- $oStat->size == $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_SIZE) &&
- $oStat->mtime == $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME) &&
- $oStat->mtime < $lCopyTimeBegin)
- {
- &log(DEBUG, "${strDestinationFile} exists and matches size " . $oStat->size .
- " and modification time " . $oStat->mtime);
- next;
- }
- }
- else
- {
- my ($strChecksum, $lSize) = $oFileThread->hash_size(PATH_DB_ABSOLUTE, $strDestinationFile);
-
- if (($lSize == $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_SIZE) && $lSize == 0) ||
- ($strChecksum eq $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM)))
- {
- &log(DEBUG, "${strDestinationFile} exists and is zero size or matches backup checksum");
-
- # Even if hash is the same set the time back to backup time. This helps with unit testing, but also
- # presents a pristine version of the database.
- utime($oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME),
- $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME),
- $strDestinationFile)
- or confess &log(ERROR, "unable to set time for ${strDestinationFile}");
-
- next;
- }
- }
- }
-
- $oFileThread->remove(PATH_DB_ABSOLUTE, $strDestinationFile);
- }
-
- # Set user and group if running as root (otherwise current user and group will be used for restore)
- # Copy the file from the backup to the database
- my ($bCopyResult, $strCopyChecksum, $lCopySize) =
- $oFileThread->copy(PATH_BACKUP_CLUSTER, (defined($strReference) ? $strReference : $self->{strBackupPath}) .
- "/${strSourcePath}/${strName}" .
- ($bSourceCompression ? '.' . $oFileThread->{strCompressExtension} : ''),
- PATH_DB_ABSOLUTE, $strDestinationFile,
- $bSourceCompression, # Source is compressed based on backup settings
- undef, undef,
- $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODIFICATION_TIME),
- $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_MODE),
- undef,
- $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_USER),
- $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_GROUP));
-
- if ($lCopySize != 0 && $strCopyChecksum ne $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM))
- {
- confess &log(ERROR, "error restoring ${strDestinationFile}: actual checksum ${strCopyChecksum} " .
- "does not match expected checksum " .
- $oManifest->get($strSection, $strName, MANIFEST_SUBKEY_CHECKSUM), ERROR_CHECKSUM);
- }
- }
-
- # Even number threads move up when they have finished a queue, odd numbered threads move down
- $iQueueIdx += $iDirection;
-
- # Reset the queue index when it goes over or under the number of queues
- if ($iQueueIdx < 0)
- {
- $iQueueIdx = @{$oyRestoreQueueRef} - 1;
- }
- elsif ($iQueueIdx >= @{$oyRestoreQueueRef})
- {
- $iQueueIdx = 0;
- }
-
- &log(TRACE, "thread waiting for new file from queue: queue ${iQueueIdx}, start queue ${iQueueStartIdx}");
- }
- while ($iQueueIdx != $iQueueStartIdx);
-
- &log(DEBUG, "thread ${iThreadIdx} exiting");
-}
-
1;
diff --git a/lib/BackRest/RestoreFile.pm b/lib/BackRest/RestoreFile.pm
new file mode 100644
index 000000000..ea9f1ea63
--- /dev/null
+++ b/lib/BackRest/RestoreFile.pm
@@ -0,0 +1,126 @@
+####################################################################################################################################
+# RESTORE FILE MODULE
+####################################################################################################################################
+package BackRest::RestoreFile;
+
+use threads;
+use threads::shared;
+use Thread::Queue;
+use strict;
+use warnings FATAL => qw(all);
+use Carp qw(confess);
+
+use File::Basename qw(dirname);
+use File::stat qw(lstat);
+use Exporter qw(import);
+
+use lib dirname($0);
+use BackRest::Exception;
+use BackRest::Utility;
+use BackRest::Config;
+use BackRest::Manifest;
+use BackRest::File;
+
+####################################################################################################################################
+# restoreFile
+#
+# Restores a single file.
+####################################################################################################################################
+sub restoreFile
+{
+ my $strSourcePath = shift; # Source path of the file
+ my $strFileName = shift; # File to restore
+ my $lCopyTimeBegin = shift; # Time that the backup began - used for size/timestamp deltas
+ my $bDelta = shift; # Is restore a delta?
+ my $bForce = shift; # Force flag
+ my $strBackupPath = shift; # Backup path
+ my $bSourceCompression = shift; # Is the source compressed?
+ my $strCurrentUser = shift; # Current OS user
+ my $strCurrentGroup = shift; # Current OS group
+ my $oManifest = shift; # Backup manifest
+ my $oFile = shift; # File object (only provided in single-threaded mode)
+
+ my $strSection = "${strSourcePath}:file"; # Backup section with file info
+ my $strDestinationPath = $oManifest->get(MANIFEST_SECTION_BACKUP_PATH, # Destination path stored in manifest
+ $strSourcePath);
+ $strSourcePath =~ s/\:/\//g; # Replace : with / in source path
+
+ # If the file is a reference to a previous backup and hardlinks are off, then fetch it from that backup
+ my $strReference = $oManifest->test(MANIFEST_SECTION_BACKUP_OPTION, MANIFEST_KEY_HARDLINK, undef, 'y') ? undef :
+ $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_REFERENCE, false);
+
+ # Generate destination file name
+ my $strDestinationFile = $oFile->path_get(PATH_DB_ABSOLUTE, "${strDestinationPath}/${strFileName}");
+
+ if ($oFile->exists(PATH_DB_ABSOLUTE, $strDestinationFile))
+ {
+ # Perform delta if requested
+ if ($bDelta)
+ {
+ # If force then use size/timestamp delta
+ if ($bForce)
+ {
+ my $oStat = lstat($strDestinationFile);
+
+ # Make sure that timestamp/size are equal and that timestamp is before the copy start time of the backup
+ if (defined($oStat) &&
+ $oStat->size == $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_SIZE) &&
+ $oStat->mtime == $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_MODIFICATION_TIME) &&
+ $oStat->mtime < $lCopyTimeBegin)
+ {
+ &log(DEBUG, "${strDestinationFile} exists and matches size " . $oStat->size .
+ " and modification time " . $oStat->mtime);
+ return;
+ }
+ }
+ else
+ {
+ my ($strChecksum, $lSize) = $oFile->hash_size(PATH_DB_ABSOLUTE, $strDestinationFile);
+ my $strManifestChecksum = $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_CHECKSUM, false, 'INVALID');
+
+ if (($lSize == $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_SIZE) && $lSize == 0) ||
+ ($strChecksum eq $strManifestChecksum))
+ {
+ &log(DEBUG, "${strDestinationFile} exists and is zero size or matches backup checksum");
+
+ # Even if hash is the same set the time back to backup time. This helps with unit testing, but also
+ # presents a pristine version of the database.
+ utime($oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_MODIFICATION_TIME),
+ $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_MODIFICATION_TIME),
+ $strDestinationFile)
+ or confess &log(ERROR, "unable to set time for ${strDestinationFile}");
+
+ return;
+ }
+ }
+ }
+
+ $oFile->remove(PATH_DB_ABSOLUTE, $strDestinationFile);
+ }
+
+ # Set user and group if running as root (otherwise current user and group will be used for restore)
+ # Copy the file from the backup to the database
+ my ($bCopyResult, $strCopyChecksum, $lCopySize) =
+ $oFile->copy(PATH_BACKUP_CLUSTER, (defined($strReference) ? $strReference : $strBackupPath) .
+ "/${strSourcePath}/${strFileName}" .
+ ($bSourceCompression ? '.' . $oFile->{strCompressExtension} : ''),
+ PATH_DB_ABSOLUTE, $strDestinationFile,
+ $bSourceCompression, # Source is compressed based on backup settings
+ undef, undef,
+ $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_MODIFICATION_TIME),
+ $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_MODE),
+ undef,
+ $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_USER),
+ $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_GROUP));
+
+ if ($lCopySize != 0 && $strCopyChecksum ne $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_CHECKSUM))
+ {
+ confess &log(ERROR, "error restoring ${strDestinationFile}: actual checksum ${strCopyChecksum} " .
+ "does not match expected checksum " .
+ $oManifest->get($strSection, $strFileName, MANIFEST_SUBKEY_CHECKSUM), ERROR_CHECKSUM);
+ }
+}
+
+our @EXPORT = qw(restoreFile);
+
+1;
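The delta logic in `restoreFile` above has two modes: with `--force` a destination file is kept when its size and mtime match the manifest and the mtime predates the backup's copy start; without it, the file must be zero-length per the manifest or its checksum must match. A hedged sketch of that decision, as a standalone predicate (the function and parameter names are illustrative, not the module's interface):

```python
def can_skip_restore(force, stat_size, stat_mtime, manifest_size,
                     manifest_mtime, copy_time_begin,
                     actual_checksum=None, manifest_checksum=None):
    if force:
        # Size/timestamp delta: mtime must also predate the backup copy start
        return (stat_size == manifest_size and
                stat_mtime == manifest_mtime and
                stat_mtime < copy_time_begin)
    # Checksum delta: a zero-length match needs no hash comparison
    if stat_size == manifest_size and stat_size == 0:
        return True
    return actual_checksum is not None and actual_checksum == manifest_checksum
```

The mtime-before-copy-start guard matters: a file modified during or after the backup could coincidentally match size and timestamp while holding different contents.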
diff --git a/lib/BackRest/ThreadGroup.pm b/lib/BackRest/ThreadGroup.pm
index 4e8cd85d1..8611c28bf 100644
--- a/lib/BackRest/ThreadGroup.pm
+++ b/lib/BackRest/ThreadGroup.pm
@@ -12,50 +12,249 @@ use File::Basename;
use lib dirname($0) . '/../lib';
use BackRest::Utility;
+use BackRest::Config;
+use BackRest::RestoreFile;
+use BackRest::BackupFile;
####################################################################################################################################
# MODULE EXPORTS
####################################################################################################################################
use Exporter qw(import);
-our @EXPORT = qw(thread_group_create thread_group_add thread_group_complete);
+our @EXPORT = qw(threadGroupCreate threadGroupRun threadGroupComplete threadGroupDestroy);
+
+my @oyThread;
+my @oyMessageQueue;
+my @oyCommandQueue;
+my @oyResultQueue;
+my @byThreadRunning;
####################################################################################################################################
-# CONSTRUCTOR
+# threadGroupCreate
####################################################################################################################################
-sub thread_group_create
+sub threadGroupCreate
{
- # Create the class hash
- my $self = {};
+ # If thread-max is not defined then this operation does not use threads
+ if (!optionTest(OPTION_THREAD_MAX))
+ {
+ return;
+ }
- # Initialize variables
- $self->{iThreadTotal} = 0;
+ # Get thread-max
+ my $iThreadMax = optionGet(OPTION_THREAD_MAX);
- return $self;
+ # Only create threads when thread-max > 1
+ if ($iThreadMax > 1)
+ {
+ for (my $iThreadIdx = 0; $iThreadIdx < $iThreadMax; $iThreadIdx++)
+ {
+ push @oyCommandQueue, Thread::Queue->new();
+ push @oyMessageQueue, Thread::Queue->new();
+ push @oyResultQueue, Thread::Queue->new();
+ push @oyThread, (threads->create(\&threadGroupThread, $iThreadIdx));
+ push @byThreadRunning, false;
+ }
+ }
}
####################################################################################################################################
-# ADD
-#
-# Add a thread to the group. Once a thread is added, it can be tracked as part of the group.
+# threadGroupThread
####################################################################################################################################
-sub thread_group_add
+sub threadGroupThread
{
- my $self = shift;
- my $oThread = shift;
+ my $iThreadIdx = shift;
- $self->{oyThread}[$self->{iThreadTotal}] = $oThread;
- $self->{iThreadTotal}++;
+ # When a KILL signal is received, immediately abort
+ $SIG{'KILL'} = sub {threads->exit();};
- return $self->{iThreadTotal} - 1;
+ while (my $oCommand = $oyCommandQueue[$iThreadIdx]->dequeue())
+ {
+ # Exit thread
+ if ($$oCommand{function} eq 'exit')
+ {
+ &log(TRACE, 'thread terminated');
+ return;
+ }
+
+ &log(TRACE, "$$oCommand{function} thread started");
+
+ # Create a file object
+ my $oFile = new BackRest::File
+ (
+ optionGet(OPTION_STANZA),
+ optionRemoteTypeTest(BACKUP) ? optionGet(OPTION_REPO_REMOTE_PATH) : optionGet(OPTION_REPO_PATH),
+ optionRemoteType(),
+ optionRemote(undef, false),
+ undef, undef,
+ $iThreadIdx + 1
+ );
+
+ # Notify parent that init is complete
+ threadMessage($oyResultQueue[$iThreadIdx], 'init');
+
+ my $iDirection = $iThreadIdx % 2 == 0 ? 1 : -1; # Even threads walk the queues up, odd threads walk down
+
+ # Initialize the starting and current queue index based in the total number of threads in relation to this thread
+ my $iQueueStartIdx = int((@{$$oCommand{param}{queue}} / $$oCommand{thread_total}) * $iThreadIdx);
+ my $iQueueIdx = $iQueueStartIdx;
+
+ # Keep track of progress
+ my $lSizeCurrent = 0; # Running total of bytes copied
+
+ # Loop through all the queues (exit when the original queue is reached)
+ do
+ {
+ &log(TRACE, "reading queue ${iQueueIdx}, start queue ${iQueueStartIdx}");
+
+ while (my $oMessage = ${$$oCommand{param}{queue}}[$iQueueIdx]->dequeue_nb())
+ {
+ if ($$oCommand{function} eq 'restore')
+ {
+ my $strSourcePath = (split(/\|/, $oMessage))[0];
+ my $strFileName = (split(/\|/, $oMessage))[1];
+
+ restoreFile($strSourcePath, $strFileName, $$oCommand{param}{copy_time_begin}, $$oCommand{param}{delta},
+ $$oCommand{param}{force}, $$oCommand{param}{backup_path}, $$oCommand{param}{source_compression},
+ $$oCommand{param}{current_user}, $$oCommand{param}{current_group}, $$oCommand{param}{manifest},
+ $oFile);
+ }
+ elsif ($$oCommand{function} eq 'backup')
+ {
+ # Result hash that can be passed back to the master process
+ my $oResult = {};
+
+ # Backup the file
+ ($$oResult{copied}, $lSizeCurrent, $$oResult{size}, $$oResult{checksum}) =
+ backupFile($oFile, $$oMessage{db_file}, $$oMessage{backup_file}, $$oCommand{param}{compress},
+ $$oMessage{checksum}, $$oMessage{checksum_only},
+ $$oMessage{size}, $$oCommand{param}{size_total}, $lSizeCurrent);
+
+ # Send a message to update the manifest
+ $$oResult{file_section} = $$oMessage{file_section};
+ $$oResult{file} = $$oMessage{file};
+
+ $$oCommand{param}{result_queue}->enqueue($oResult);
+ }
+ else
+ {
+ confess &log(ERROR, "unknown command: $$oCommand{function}");
+ }
+ }
+
+ # Even numbered threads move up when they have finished a queue, odd numbered threads move down
+ $iQueueIdx += $iDirection;
+
+ # Reset the queue index when it goes over or under the number of queues
+ if ($iQueueIdx < 0)
+ {
+ $iQueueIdx = @{$$oCommand{param}{queue}} - 1;
+ }
+ elsif ($iQueueIdx >= @{$$oCommand{param}{queue}})
+ {
+ $iQueueIdx = 0;
+ }
+ }
+ while ($iQueueIdx != $iQueueStartIdx);
+
+ # Notify parent of shutdown
+ threadMessage($oyResultQueue[$iThreadIdx], 'shutdown');
+ threadMessageExpect($oyMessageQueue[$iThreadIdx], 'continue');
+
+ # Destroy the file object
+ undef($oFile);
+
+ # Notify the parent process of thread exit
+ $oyResultQueue[$iThreadIdx]->enqueue('complete');
+
+ &log(TRACE, "$$oCommand{function} thread exiting");
+ }
}
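The queue traversal in `threadGroupThread` starts each thread at a slice of the queue ring proportional to its index, with even threads walking upward and odd threads downward, so threads rarely contend on the same queue at the same time. A small Python sketch of just the visit order (illustrative only):

```python
def queue_visit_order(thread_idx, thread_total, queue_total):
    # Even threads ascend, odd threads descend; start offset spreads threads out
    direction = 1 if thread_idx % 2 == 0 else -1
    start = int(queue_total / thread_total * thread_idx)
    order = []
    idx = start
    while True:
        order.append(idx)
        idx += direction
        # Wrap around the ring in either direction
        if idx < 0:
            idx = queue_total - 1
        elif idx >= queue_total:
            idx = 0
        if idx == start:
            break
    return order
```

Every thread still visits every queue exactly once, so work left behind by a slow thread is eventually picked up by its neighbors.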
####################################################################################################################################
-# COMPLETE
+# threadMessage
+####################################################################################################################################
+sub threadMessage
+{
+ my $oQueue = shift;
+ my $strMessage = shift;
+ my $iThreadIdx = shift;
+
+ # Send the message
+ $oQueue->enqueue($strMessage);
+
+ # Log the message that was sent
+ &log(TRACE, "sent message '${strMessage}' to " . (defined($iThreadIdx) ? 'thread ' . ($iThreadIdx + 1) : 'controller'));
+}
+
+####################################################################################################################################
+# threadMessageExpect
+####################################################################################################################################
+sub threadMessageExpect
+{
+ my $oQueue = shift;
+ my $strExpected = shift;
+ my $iThreadIdx = shift;
+ my $bNoBlock = shift;
+
+ # Set timeout based on the message type
+ my $iTimeout = defined($bNoBlock) ? undef : 600;
+
+ # Define calling context
+ my $strContext = defined($iThreadIdx) ? 'thread ' . ($iThreadIdx + 1) : 'controller';
+
+ # Wait for the message
+ my $strMessage;
+
+ if (defined($iTimeout))
+ {
+ &log(TRACE, "waiting for '${strExpected}' message from ${strContext}");
+ $strMessage = $oQueue->dequeue_timed($iTimeout);
+ }
+ else
+ {
+ $strMessage = $oQueue->dequeue_nb();
+
+ return false if !defined($strMessage);
+ }
+
+ # Throw an exception when the expected message was not received
+ if (!defined($strMessage) || $strMessage ne $strExpected)
+ {
+ confess &log(ASSERT, "expected message '$strExpected' from ${strContext} but " .
+ (defined($strMessage) ? "got '$strMessage'" : "timed out after ${iTimeout} second(s)"));
+ }
+
+ &log(TRACE, "got '${strExpected}' message from ${strContext}");
+
+ return true;
+}
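The `threadMessage`/`threadMessageExpect` pair implements a strict handshake: the receiver blocks (with a timeout) for exactly one expected token and fails loudly on anything else. A minimal sketch of the expect side using Python's thread-safe queues (names hypothetical; the 600-second default mirrors the code above):

```python
import queue

def message_expect(q, expected, timeout=600):
    """Block for a message and fail loudly if it is not the expected one."""
    try:
        message = q.get(timeout=timeout)
    except queue.Empty:
        raise AssertionError(f"expected '{expected}' but timed out")
    if message != expected:
        raise AssertionError(f"expected '{expected}' but got '{message}'")
    return True
```

Treating an unexpected message as a hard failure keeps the controller and workers in lockstep: a worker that skips 'shutdown' or 'complete' surfaces immediately instead of hanging the join loop.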
+
+####################################################################################################################################
+# threadGroupRun
+####################################################################################################################################
+sub threadGroupRun
+{
+ my $iThreadIdx = shift;
+ my $strFunction = shift;
+ my $oParam = shift;
+
+ my %oCommand;
+ $oCommand{function} = $strFunction;
+ $oCommand{thread_total} = @oyThread;
+ $oCommand{param} = $oParam;
+
+ $oyCommandQueue[$iThreadIdx]->enqueue(\%oCommand);
+
+ threadMessageExpect($oyResultQueue[$iThreadIdx], 'init', $iThreadIdx);
+ $byThreadRunning[$iThreadIdx] = true;
+}
+
+####################################################################################################################################
+# threadGroupComplete
#
# Wait for threads to complete.
####################################################################################################################################
-sub thread_group_complete
+sub threadGroupComplete
{
my $self = shift;
my $iTimeout = shift;
@@ -67,9 +266,13 @@ sub thread_group_complete
# Wait for all threads to complete and handle errors
my $iThreadComplete = 0;
my $lTimeBegin = time();
+ my $strFirstError;
+ my $iFirstErrorThreadIdx;
+
+ &log(DEBUG, "waiting for " . @oyThread . " threads to complete");
# Rejoin the threads
- while ($iThreadComplete < $self->{iThreadTotal})
+ while ($iThreadComplete < @oyThread)
{
hsleep(.1);
@@ -79,37 +282,43 @@ sub thread_group_complete
if (time() - $lTimeBegin >= $iTimeout)
{
confess &log(ERROR, "threads have been running more than ${iTimeout} seconds, exiting...");
-
- #backup_thread_kill();
-
- #confess &log(WARN, "all threads have exited, aborting...");
}
}
- for (my $iThreadIdx = 0; $iThreadIdx < $self->{iThreadTotal}; $iThreadIdx++)
+ for (my $iThreadIdx = 0; $iThreadIdx < @oyThread; $iThreadIdx++)
{
- if (defined($self->{oyThread}[$iThreadIdx]))
+ if ($byThreadRunning[$iThreadIdx])
{
- if (defined($self->{oyThread}[$iThreadIdx]->error()))
- {
- $self->kill();
+ my $oError = $oyThread[$iThreadIdx]->error();
- if ($bConfessOnError)
+ if (defined($oError))
+ {
+ my $strError;
+
+ if ($oError->isa('BackRest::Exception'))
{
- confess &log(ERROR, 'error in thread ' . (${iThreadIdx} + 1) . ': check log for details');
+ $strError = $oError->message();
}
else
{
- return false;
+ $strError = $oError;
+ &log(ERROR, "thread " . ($iThreadIdx + 1) . ": ${strError}");
}
- }
- if ($self->{oyThread}[$iThreadIdx]->is_joinable())
+ if (!defined($strFirstError))
+ {
+ $strFirstError = $strError;
+ $iFirstErrorThreadIdx = $iThreadIdx;
+ }
+
+ $byThreadRunning[$iThreadIdx] = false;
+ $iThreadComplete++;
+ }
+ elsif (threadMessageExpect($oyResultQueue[$iThreadIdx], 'shutdown', $iThreadIdx, true))
{
- &log(DEBUG, "thread ${iThreadIdx} exited");
- $self->{oyThread}[$iThreadIdx]->join();
- &log(TRACE, "thread ${iThreadIdx} object undef");
- undef($self->{oyThread}[$iThreadIdx]);
+ threadMessage($oyMessageQueue[$iThreadIdx], 'continue', $iThreadIdx);
+ threadMessageExpect($oyResultQueue[$iThreadIdx], 'complete', $iThreadIdx);
+ $byThreadRunning[$iThreadIdx] = false;
$iThreadComplete++;
}
}
@@ -118,48 +327,46 @@ sub thread_group_complete
&log(DEBUG, 'all threads exited');
- return true;
+ if (defined($strFirstError) && $bConfessOnError)
+ {
+        confess &log(ERROR, 'error in thread ' . ($iFirstErrorThreadIdx + 1) . ": $strFirstError");
+ }
}
####################################################################################################################################
-# KILL
+# threadGroupDestroy
####################################################################################################################################
-sub thread_group_destroy
+sub threadGroupDestroy
{
my $self = shift;
- # Total number of threads killed
- my $iTotal = 0;
+ &log(TRACE, "waiting for " . @oyThread . " threads to be destroyed");
- for (my $iThreadIdx = 0; $iThreadIdx < $self->{iThreadTotal}; $iThreadIdx++)
+ for (my $iThreadIdx = 0; $iThreadIdx < @oyThread; $iThreadIdx++)
{
- if (defined($self->{oyThread}[$iThreadIdx]))
- {
- if ($self->{oyThread}[$iThreadIdx]->is_running())
- {
- $self->{oyThread}[$iThreadIdx]->kill('KILL')->join();
- }
- elsif ($self->{oyThread}[$iThreadIdx]->is_joinable())
- {
- $self->{oyThread}[$iThreadIdx]->join();
- }
-            undef($self->{oyThread}[$iThreadIdx]);
-            $iTotal++;
+        my %oCommand;
+        $oCommand{function} = 'exit';
+        $oyCommandQueue[$iThreadIdx]->enqueue(\%oCommand);
+ hsleep(.1);
+
+ if ($oyThread[$iThreadIdx]->is_running())
+ {
+ $oyThread[$iThreadIdx]->kill('KILL')->join();
+ &log(TRACE, "thread ${iThreadIdx} killed");
}
+ elsif ($oyThread[$iThreadIdx]->is_joinable())
+ {
+ $oyThread[$iThreadIdx]->join();
+ &log(TRACE, "thread ${iThreadIdx} joined");
+ }
+
+ undef($oyThread[$iThreadIdx]);
}
- return($iTotal);
+ &log(TRACE, @oyThread . " threads destroyed");
+
+ return(@oyThread);
}
-####################################################################################################################################
-# DESTRUCTOR
-####################################################################################################################################
-# sub thread_group_destroy
-# {
-# my $self = shift;
-#
-# $self->kill();
-# }
-
1;
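The queue-based shutdown handshake introduced above (each worker posts `shutdown` on its result queue, the coordinator replies `continue` on the message queue, and the worker answers `complete` before exiting) can be sketched outside Perl with standard thread-safe queues. This Python version is purely illustrative; the function and variable names are made up and do not exist in the codebase:

```python
import threading
import queue

def worker(result_q, message_q):
    # After finishing its work, the worker performs the handshake:
    # announce shutdown, wait for permission, then confirm completion.
    result_q.put('shutdown')
    assert message_q.get(timeout=5) == 'continue'
    result_q.put('complete')

def join_workers(n):
    """Coordinator side of the handshake, mirroring the loop in
    threadGroupComplete(): wait for each shutdown, grant continue,
    then count the thread as complete."""
    pairs = [(queue.Queue(), queue.Queue()) for _ in range(n)]
    threads = [threading.Thread(target=worker, args=p) for p in pairs]

    for t in threads:
        t.start()

    complete = 0

    for result_q, message_q in pairs:
        assert result_q.get(timeout=5) == 'shutdown'
        message_q.put('continue')
        assert result_q.get(timeout=5) == 'complete'
        complete += 1

    for t in threads:
        t.join()

    return complete
```

The point of the two-queue round trip is that the coordinator, not the worker, decides when a thread may finish, which is what lets the error path above drain threads one at a time.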
diff --git a/lib/BackRest/Utility.pm b/lib/BackRest/Utility.pm
index 0db4c202c..34b40d811 100644
--- a/lib/BackRest/Utility.pm
+++ b/lib/BackRest/Utility.pm
@@ -13,6 +13,7 @@ use File::Path qw(remove_tree);
use Time::HiRes qw(gettimeofday usleep);
use POSIX qw(ceil);
use File::Basename;
+use Cwd qw(abs_path);
use JSON;
use lib dirname($0) . '/../lib';
@@ -21,7 +22,7 @@ use BackRest::Exception;
use Exporter qw(import);
our @EXPORT = qw(version_get
- data_hash_build trim common_prefix wait_for_file file_size_format execute
+ data_hash_build trim common_prefix file_size_format execute
log log_file_set log_level_set test_set test_get test_check
lock_file_create lock_file_remove hsleep wait_remainder
ini_save ini_load timestamp_string_get timestamp_file_string_get
@@ -94,21 +95,34 @@ my $strVersion;
sub version_get
{
my $hVersion;
- my $strVersion;
- if (!open($hVersion, '<', dirname($0) . '/../VERSION'))
+ # If version is already stored then return it (should never change during execution)
+ if (defined($strVersion))
{
- confess &log(ASSERT, 'unable to open VERSION file');
+ return $strVersion;
}
+ # Construct the version file name
+ my $strVersionFile = abs_path(dirname($0) . '/../VERSION');
+
+ # Open the file
+ if (!open($hVersion, '<', $strVersionFile))
+ {
+ confess &log(ASSERT, "unable to open VERSION file: ${strVersionFile}");
+ }
+
+ # Read version and trim
if (!($strVersion = readline($hVersion)))
{
- confess &log(ASSERT, 'unable to read VERSION file');
+ confess &log(ASSERT, "unable to read VERSION file: ${strVersionFile}");
}
+ $strVersion = trim($strVersion);
+
+ # Close file
close($hVersion);
- return trim($strVersion);
+ return $strVersion;
}
####################################################################################################################################
@@ -173,7 +187,7 @@ sub lock_file_remove
sub wait_remainder
{
my $lTimeBegin = gettimeofday();
- my $lSleepMs = ceil(((int($lTimeBegin) + 1) - $lTimeBegin) * 1000);
+ my $lSleepMs = ceil(((int($lTimeBegin) + 1.05) - $lTimeBegin) * 1000);
usleep($lSleepMs * 1000);
@@ -315,15 +329,15 @@ sub file_size_format
if ($lFileSize < (1024 * 1024))
{
- return int($lFileSize / 1024) . 'KB';
+ return (int($lFileSize / 102.4) / 10) . 'KB';
}
if ($lFileSize < (1024 * 1024 * 1024))
{
- return int($lFileSize / 1024 / 1024) . 'MB';
+ return (int($lFileSize / 1024 / 102.4) / 10) . 'MB';
}
- return int($lFileSize / 1024 / 1024 / 1024) . 'GB';
+ return (int($lFileSize / 1024 / 1024 / 102.4) / 10) . 'GB';
}
####################################################################################################################################
@@ -516,7 +530,8 @@ sub log
# Format the message text
my ($sec, $min, $hour, $mday, $mon, $year, $wday, $yday, $isdst) = localtime(time);
- $strMessageFormat = timestamp_string_get() . sprintf(' T%02d', threads->tid()) .
+ $strMessageFormat = timestamp_string_get() . sprintf('.%03d T%02d', (gettimeofday() - int(gettimeofday())) * 1000,
+ threads->tid()) .
(' ' x (7 - length($strLevel))) . "${strLevel}: ${strMessageFormat}\n";
# Output to console depending on log level and test flag
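The revised `file_size_format` arithmetic divides by 102.4 (rather than 1024) and then by 10 so that each result keeps one truncated decimal place without any sprintf formatting. The same truncation expressed in Python, as a quick check of the idea; the sub-kilobyte branch is assumed from context, since it is not shown in the hunk:

```python
def file_size_format(size):
    """One-decimal truncating size formatter, mirroring the revised
    Perl logic: int(size / 102.4) / 10 yields e.g. 1.9 for 2000 bytes."""
    if size < 1024:
        return f"{size}B"
    if size < 1024 * 1024:
        return f"{int(size / 102.4) / 10}KB"
    if size < 1024 * 1024 * 1024:
        return f"{int(size / 1024 / 102.4) / 10}MB"
    return f"{int(size / 1024 / 1024 / 102.4) / 10}GB"
```

Note that this truncates rather than rounds: 2000 bytes reports as 1.9KB, not 2.0KB, matching Perl's `int()` semantics.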
diff --git a/test/data/test.table.bin b/test/data/test.table.bin
new file mode 100644
index 000000000..7c52b23cb
Binary files /dev/null and b/test/data/test.table.bin differ
diff --git a/test/lib/BackRestTest/BackupTest.pm b/test/lib/BackRestTest/BackupTest.pm
index 5533f8b9a..38586f6b2 100755
--- a/test/lib/BackRestTest/BackupTest.pm
+++ b/test/lib/BackRestTest/BackupTest.pm
@@ -25,11 +25,13 @@ use BackRest::Config;
use BackRest::Manifest;
use BackRest::File;
use BackRest::Remote;
+use BackRest::Archive;
use BackRestTest::CommonTest;
use Exporter qw(import);
-our @EXPORT = qw(BackRestTestBackup_Test);
+our @EXPORT = qw(BackRestTestBackup_Test BackRestTestBackup_Create BackRestTestBackup_Drop BackRestTestBackup_ClusterStop
+ BackRestTestBackup_PgSelectOne BackRestTestBackup_PgExecute);
my $strTestPath;
my $strHost;
@@ -43,15 +45,43 @@ my $hDb;
####################################################################################################################################
sub BackRestTestBackup_PgConnect
{
+ my $iWaitSeconds = shift;
+
# Disconnect user session
BackRestTestBackup_PgDisconnect();
- # Connect to the db (whether it is local or remote)
- $hDb = DBI->connect('dbi:Pg:dbname=postgres;port=' . BackRestTestCommon_DbPortGet .
- ';host=' . BackRestTestCommon_DbPathGet(),
- BackRestTestCommon_UserGet(),
- undef,
- {AutoCommit => 0, RaiseError => 1});
+ # Default
+ $iWaitSeconds = defined($iWaitSeconds) ? $iWaitSeconds : 30;
+
+ # Record the start time
+ my $lTime = time();
+
+ do
+ {
+ # Connect to the db (whether it is local or remote)
+ eval
+ {
+ $hDb = DBI->connect('dbi:Pg:dbname=postgres;port=' . BackRestTestCommon_DbPortGet .
+ ';host=' . BackRestTestCommon_DbPathGet(),
+ BackRestTestCommon_UserGet(),
+ undef,
+ {AutoCommit => 0, RaiseError => 1});
+ };
+
+ if (!$@)
+ {
+ return;
+ }
+
+ # If waiting then sleep before trying again
+ if (defined($iWaitSeconds))
+ {
+ hsleep(.1);
+ }
+ }
+ while ($lTime > time() - $iWaitSeconds);
+
+ confess &log(ERROR, "unable to connect to Postgres after ${iWaitSeconds} second(s)");
}
####################################################################################################################################
@@ -195,7 +225,7 @@ sub BackRestTestBackup_ClusterStop
BackRestTestBackup_PgDisconnect();
# Drop the cluster
- BackRestTestCommon_ClusterStop
+ BackRestTestCommon_ClusterStop($strPath, $bImmediate);
}
####################################################################################################################################
@@ -206,11 +236,13 @@ sub BackRestTestBackup_ClusterStart
my $strPath = shift;
my $iPort = shift;
my $bHotStandby = shift;
+ my $bArchive = shift;
# Set default
$iPort = defined($iPort) ? $iPort : BackRestTestCommon_DbPortGet();
$strPath = defined($strPath) ? $strPath : BackRestTestCommon_DbCommonPathGet();
$bHotStandby = defined($bHotStandby) ? $bHotStandby : false;
+ $bArchive = defined($bArchive) ? $bArchive : true;
# Make sure postgres is not running
if (-e $strPath . '/postmaster.pid')
@@ -223,12 +255,38 @@ sub BackRestTestBackup_ClusterStart
' --config=' . BackRestTestCommon_DbPathGet() . '/pg_backrest.conf archive-push %p';
# Start the cluster
- BackRestTestCommon_Execute(BackRestTestCommon_PgSqlBinPathGet() . "/pg_ctl start -o \"-c port=${iPort}" .
- ' -c checkpoint_segments=1' .
- " -c wal_level=hot_standby -c archive_mode=on -c archive_command='${strArchive}'" .
- ($bHotStandby ? ' -c hot_standby=on' : '') .
- " -c unix_socket_directories='" . BackRestTestCommon_DbPathGet() . "'\" " .
- "-D ${strPath} -l ${strPath}/postgresql.log -w -s");
+ my $strCommand = BackRestTestCommon_PgSqlBinPathGet() . "/pg_ctl start -o \"-c port=${iPort}" .
+ ' -c checkpoint_segments=1';
+
+ if ($bArchive)
+ {
+ if (BackRestTestCommon_DbVersion() >= '8.3')
+ {
+ $strCommand .= " -c archive_mode=on";
+ }
+
+ $strCommand .= " -c archive_command='${strArchive}'";
+
+ if (BackRestTestCommon_DbVersion() >= '9.0')
+ {
+ $strCommand .= " -c wal_level=hot_standby";
+
+ if ($bHotStandby)
+ {
+ $strCommand .= ' -c hot_standby=on';
+ }
+ }
+ }
+ else
+ {
+ $strCommand .= " -c archive_mode=on -c wal_level=archive -c archive_command=true";
+ }
+
+ $strCommand .= " -c unix_socket_director" . (BackRestTestCommon_DbVersion() < '9.3' ? "y='" : "ies='") .
+ BackRestTestCommon_DbPathGet() . "'\" " .
+ "-D ${strPath} -l ${strPath}/postgresql.log -s";
+
+ BackRestTestCommon_Execute($strCommand);
# Connect user session
BackRestTestBackup_PgConnect();
@@ -261,10 +319,14 @@ sub BackRestTestBackup_ClusterCreate
{
my $strPath = shift;
my $iPort = shift;
+ my $bArchive = shift;
+
+ # Defaults
+ $strPath = defined($strPath) ? $strPath : BackRestTestCommon_DbCommonPathGet();
BackRestTestCommon_Execute(BackRestTestCommon_PgSqlBinPathGet() . "/initdb -D ${strPath} -A trust");
- BackRestTestBackup_ClusterStart($strPath, $iPort);
+ BackRestTestBackup_ClusterStart($strPath, $iPort, undef, $bArchive);
# Connect user session
BackRestTestBackup_PgConnect();
@@ -302,6 +364,7 @@ sub BackRestTestBackup_Create
{
my $bRemote = shift;
my $bCluster = shift;
+ my $bArchive = shift;
# Set defaults
$bRemote = defined($bRemote) ? $bRemote : false;
@@ -331,20 +394,12 @@ sub BackRestTestBackup_Create
BackRestTestCommon_PathCreate(BackRestTestCommon_LocalPathGet());
}
- # Create the backup directory
- if ($bRemote)
- {
- BackRestTestCommon_Execute('mkdir -m 700 ' . BackRestTestCommon_RepoPathGet(), true);
- }
- else
- {
- BackRestTestCommon_PathCreate(BackRestTestCommon_RepoPathGet());
- }
+ BackRestTestCommon_CreateRepo($bRemote);
# Create the cluster
if ($bCluster)
{
- BackRestTestBackup_ClusterCreate(BackRestTestCommon_DbCommonPathGet(), BackRestTestCommon_DbPortGet());
+ BackRestTestBackup_ClusterCreate(undef, undef, $bArchive);
}
}
@@ -1388,6 +1443,8 @@ sub BackRestTestBackup_Test
$strHost, # Host
$strUserBackRest, # User
BackRestTestCommon_CommandRemoteGet(), # Command
+ $strStanza, # Stanza
+ '', # Repo Path
OPTION_DEFAULT_BUFFER_SIZE, # Buffer size
OPTION_DEFAULT_COMPRESS_LEVEL, # Compress level
OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK, # Compress network level
@@ -1398,6 +1455,8 @@ sub BackRestTestBackup_Test
undef, # Host
undef, # User
undef, # Command
+ undef, # Stanza
+ undef, # Repo Path
OPTION_DEFAULT_BUFFER_SIZE, # Buffer size
OPTION_DEFAULT_COMPRESS_LEVEL, # Compress level
OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK, # Compress network level
@@ -1425,10 +1484,9 @@ sub BackRestTestBackup_Test
"rmt ${bRemote}, cmp ${bCompress}, " .
"arc_async ${bArchiveAsync}")) {next}
- # Create the test directory
+ # Create the file object
if ($bCreate)
{
- # Create the file object
$oFile = (new BackRest::File
(
$strStanza,
@@ -1437,11 +1495,12 @@ sub BackRestTestBackup_Test
$bRemote ? $oRemote : $oLocal
))->clone();
- BackRestTestBackup_Create($bRemote, false);
-
$bCreate = false;
}
+ # Create the test directory
+ BackRestTestBackup_Create($bRemote, false);
+
BackRestTestCommon_ConfigCreate('db',
($bRemote ? BACKUP : undef),
$bCompress,
@@ -1452,7 +1511,7 @@ sub BackRestTestBackup_Test
undef);
my $strCommand = BackRestTestCommon_CommandMainGet() . ' --config=' . BackRestTestCommon_DbPathGet() .
- '/pg_backrest.conf --stanza=db archive-push';
+ '/pg_backrest.conf --no-fork --stanza=db archive-push';
# Loop through backups
for (my $iBackup = 1; $iBackup <= 3; $iBackup++)
@@ -1487,6 +1546,54 @@ sub BackRestTestBackup_Test
BackRestTestCommon_Execute($strCommand . " ${strSourceFile}");
+ if ($iArchive == $iBackup)
+ {
+                    # Load the archive info file so it can be munged for testing
+ my $strInfoFile = $oFile->path_get(PATH_BACKUP_ARCHIVE, ARCHIVE_INFO_FILE);
+ my %oInfo;
+ BackRestTestCommon_iniLoad($strInfoFile, \%oInfo, $bRemote);
+ my $strDbVersion = $oInfo{database}{version};
+ my $ullDbSysId = $oInfo{database}{'system-id'};
+
+ # Break the database version
+ $oInfo{database}{version} = '8.0';
+ BackRestTestCommon_iniSave($strInfoFile, \%oInfo, $bRemote);
+
+ &log(INFO, ' test db version mismatch error');
+
+ BackRestTestCommon_Execute($strCommand . " ${strSourceFile}", undef, undef, undef,
+ ERROR_ARCHIVE_MISMATCH);
+
+                    # Restore the version and break the database system-id
+ $oInfo{database}{version} = $strDbVersion;
+ $oInfo{database}{'system-id'} = '5000900090001855000';
+ BackRestTestCommon_iniSave($strInfoFile, \%oInfo, $bRemote);
+
+ &log(INFO, ' test db system-id mismatch error');
+
+ BackRestTestCommon_Execute($strCommand . " ${strSourceFile}", undef, undef, undef,
+ ERROR_ARCHIVE_MISMATCH);
+
+ # Move settings back to original
+ $oInfo{database}{'system-id'} = $ullDbSysId;
+ BackRestTestCommon_iniSave($strInfoFile, \%oInfo, $bRemote);
+
+ # Now it should break on archive duplication
+ &log(INFO, ' test archive duplicate error');
+
+ BackRestTestCommon_Execute($strCommand . " ${strSourceFile}", undef, undef, undef,
+ ERROR_ARCHIVE_DUPLICATE);
+
+ if ($bArchiveAsync && $bRemote)
+ {
+ my $strDuplicateWal = BackRestTestCommon_LocalPathGet() . "/archive/${strStanza}/out/" .
+ "${strArchiveFile}-1c7e00fd09b9dd11fc2966590b3e3274645dd031";
+
+ unlink ($strDuplicateWal)
+ or confess "unable to remove duplicate WAL segment created for testing: ${strDuplicateWal}";
+ }
+ }
+
# Build the archive name to check for at the destination
my $strArchiveCheck = "${strArchiveFile}-${strArchiveChecksum}";
@@ -1626,11 +1733,8 @@ sub BackRestTestBackup_Test
}
else
{
- if (BackRestTestCommon_Execute($strCommand . " 000000090000000900000009 ${strXlogPath}/RECOVERYXLOG",
- false, true) != 1)
- {
- confess 'archive-get should return 1 when archive log is not present';
- }
+ BackRestTestCommon_Execute($strCommand . " 000000090000000900000009 ${strXlogPath}/RECOVERYXLOG",
+ undef, undef, undef, 1);
}
$bCreate = true;
@@ -2097,8 +2201,7 @@ sub BackRestTestBackup_Test
# Create the test directory
if ($bCreate)
{
- BackRestTestBackup_Create($bRemote);
- $bCreate = false;
+ BackRestTestBackup_Create($bRemote, false);
}
# Create db config
@@ -2124,6 +2227,13 @@ sub BackRestTestBackup_Test
undef); # compress-async
}
+ # Create the cluster
+ if ($bCreate)
+ {
+ BackRestTestBackup_ClusterCreate();
+ $bCreate = false;
+ }
+
# Static backup parameters
my $bSynthetic = false;
my $fTestDelay = .1;
@@ -2210,7 +2320,11 @@ sub BackRestTestBackup_Test
BackRestTestBackup_PgExecute("update test set message = '$strNameMessage'", false, true);
BackRestTestBackup_PgSwitchXlog();
- BackRestTestBackup_PgExecute("select pg_create_restore_point('${strNameTarget}')", false, false);
+
+ if (BackRestTestCommon_DbVersion() >= 9.1)
+ {
+ BackRestTestBackup_PgExecute("select pg_create_restore_point('${strNameTarget}')", false, false);
+ }
&log(INFO, " name target is ${strNameTarget}");
@@ -2233,7 +2347,7 @@ sub BackRestTestBackup_Test
$strComment = 'postmaster running';
$iExpectedExitStatus = ERROR_POSTMASTER_RUNNING;
- BackRestTestBackup_Restore($oFile, $strFullBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
+ BackRestTestBackup_Restore($oFile, 'latest', $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
$strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
$oRecoveryHashRef, $strComment, $iExpectedExitStatus);
@@ -2243,7 +2357,7 @@ sub BackRestTestBackup_Test
$strComment = 'path not empty';
$iExpectedExitStatus = ERROR_RESTORE_PATH_NOT_EMPTY;
- BackRestTestBackup_Restore($oFile, $strFullBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
+ BackRestTestBackup_Restore($oFile, 'latest', $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
$strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
$oRecoveryHashRef, $strComment, $iExpectedExitStatus);
@@ -2255,7 +2369,7 @@ sub BackRestTestBackup_Test
$strComment = undef;
$iExpectedExitStatus = undef;
- BackRestTestBackup_Restore($oFile, $strFullBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
+ BackRestTestBackup_Restore($oFile, 'latest', $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
$strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
$oRecoveryHashRef, $strComment, $iExpectedExitStatus);
@@ -2269,7 +2383,7 @@ sub BackRestTestBackup_Test
$strType = RECOVERY_TYPE_XID;
$strTarget = $strXidTarget;
$bTargetExclusive = undef;
- $bTargetResume = true;
+ $bTargetResume = BackRestTestCommon_DbVersion() >= 9.1 ? true : undef;
$strTargetTimeline = undef;
$oRecoveryHashRef = undef;
$strComment = undef;
@@ -2279,7 +2393,7 @@ sub BackRestTestBackup_Test
BackRestTestBackup_ClusterStop();
- BackRestTestBackup_Restore($oFile, $strFullBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
+ BackRestTestBackup_Restore($oFile, $strIncrBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
$strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
$oRecoveryHashRef, $strComment, $iExpectedExitStatus);
@@ -2313,7 +2427,7 @@ sub BackRestTestBackup_Test
$oFile->move(PATH_ABSOLUTE, BackRestTestCommon_TestPathGet() . '/recovery.conf',
PATH_ABSOLUTE, BackRestTestCommon_DbCommonPathGet() . '/recovery.conf');
- BackRestTestBackup_Restore($oFile, $strFullBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
+ BackRestTestBackup_Restore($oFile, 'latest', $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
$strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
$oRecoveryHashRef, $strComment, $iExpectedExitStatus);
@@ -2364,7 +2478,7 @@ sub BackRestTestBackup_Test
BackRestTestBackup_ClusterStop();
- BackRestTestBackup_Restore($oFile, $strFullBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
+ BackRestTestBackup_Restore($oFile, $strIncrBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
$strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
$oRecoveryHashRef, $strComment, $iExpectedExitStatus);
@@ -2373,52 +2487,57 @@ sub BackRestTestBackup_Test
# Restore (restore type = name)
#-----------------------------------------------------------------------------------------------------------------------
- $bDelta = true;
- $bForce = true;
- $strType = RECOVERY_TYPE_NAME;
- $strTarget = $strNameTarget;
- $bTargetExclusive = undef;
- $bTargetResume = undef;
- $strTargetTimeline = undef;
- $oRecoveryHashRef = undef;
- $strComment = undef;
- $iExpectedExitStatus = undef;
+ if (BackRestTestCommon_DbVersion() >= 9.1)
+ {
+ $bDelta = true;
+ $bForce = true;
+ $strType = RECOVERY_TYPE_NAME;
+ $strTarget = $strNameTarget;
+ $bTargetExclusive = undef;
+ $bTargetResume = undef;
+ $strTargetTimeline = undef;
+ $oRecoveryHashRef = undef;
+ $strComment = undef;
+ $iExpectedExitStatus = undef;
- &log(INFO, " testing recovery type = ${strType}");
+ &log(INFO, " testing recovery type = ${strType}");
- BackRestTestBackup_ClusterStop();
+ BackRestTestBackup_ClusterStop();
- BackRestTestBackup_Restore($oFile, $strFullBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
- $strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
- $oRecoveryHashRef, $strComment, $iExpectedExitStatus);
+ BackRestTestBackup_Restore($oFile, 'latest', $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
+ $strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
+ $oRecoveryHashRef, $strComment, $iExpectedExitStatus);
- BackRestTestBackup_ClusterStart();
- BackRestTestBackup_PgSelectOneTest('select message from test', $strNameMessage);
+ BackRestTestBackup_ClusterStart();
+ BackRestTestBackup_PgSelectOneTest('select message from test', $strNameMessage);
+ }
# Restore (restore type = default, timeline = 3)
#-----------------------------------------------------------------------------------------------------------------------
- $bDelta = true;
- $bForce = false;
- $strType = RECOVERY_TYPE_DEFAULT;
- $strTarget = undef;
- $bTargetExclusive = undef;
- $bTargetResume = undef;
- $strTargetTimeline = 3;
- $oRecoveryHashRef = {'standy-mode' => 'on'};
- $oRecoveryHashRef = undef;
- $strComment = undef;
- $iExpectedExitStatus = undef;
+ if (BackRestTestCommon_DbVersion() >= 8.4)
+ {
+ $bDelta = true;
+ $bForce = false;
+ $strType = RECOVERY_TYPE_DEFAULT;
+ $strTarget = undef;
+ $bTargetExclusive = undef;
+ $bTargetResume = undef;
+ $strTargetTimeline = 3;
+ $oRecoveryHashRef = BackRestTestCommon_DbVersion() >= 9.0 ? {'standby-mode' => 'on'} : undef;
+ $strComment = undef;
+ $iExpectedExitStatus = undef;
- &log(INFO, " testing recovery type = ${strType}");
+ &log(INFO, " testing recovery type = ${strType}");
- BackRestTestBackup_ClusterStop();
+ BackRestTestBackup_ClusterStop();
- BackRestTestBackup_Restore($oFile, $strFullBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
- $strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
- $oRecoveryHashRef, $strComment, $iExpectedExitStatus);
+ BackRestTestBackup_Restore($oFile, $strIncrBackup, $strStanza, $bRemote, undef, undef, $bDelta, $bForce,
+ $strType, $strTarget, $bTargetExclusive, $bTargetResume, $strTargetTimeline,
+ $oRecoveryHashRef, $strComment, $iExpectedExitStatus);
- BackRestTestBackup_ClusterStart(undef, undef, true);
- BackRestTestBackup_PgSelectOneTest('select message from test', $strTimelineMessage, 120);
+ BackRestTestBackup_ClusterStart(undef, undef, true);
+ BackRestTestBackup_PgSelectOneTest('select message from test', $strTimelineMessage, 120);
+ }
$bCreate = true;
}
@@ -2449,8 +2568,8 @@ sub BackRestTestBackup_Test
(
$strStanza,
BackRestTestCommon_RepoPathGet(),
- undef,
- undef
+ NONE,
+ $oLocal
))->clone();
# Create the test database
@@ -2576,8 +2695,8 @@ sub BackRestTestBackup_Test
(
$strStanza,
BackRestTestCommon_RepoPathGet(),
- undef,
- undef
+ NONE,
+ $oLocal
))->clone();
# Create the test database
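The do/while loop added to `BackRestTestBackup_PgConnect` above implements a common retry-until-deadline pattern: attempt the connection, swallow the error, sleep briefly, and give up with a clear message once the wait window has elapsed. A generic sketch of that pattern in Python (names are illustrative, not from the codebase):

```python
import time

def connect_with_retry(connect, wait_seconds=30, poll=0.1):
    """Call `connect` until it succeeds or `wait_seconds` elapse,
    mirroring the retry loop in BackRestTestBackup_PgConnect
    (default 30s wait, 0.1s sleep between attempts)."""
    start = time.time()

    while True:
        try:
            return connect()
        except Exception as last_error:
            # Out of time: surface a clear error, chaining the last failure
            if time.time() - start >= wait_seconds:
                raise RuntimeError(
                    f"unable to connect after {wait_seconds} second(s)"
                ) from last_error
            time.sleep(poll)
```

The short poll interval matters in tests: the cluster usually becomes reachable within a few hundred milliseconds of `pg_ctl start`, so a coarse sleep would add noticeable wall time per test.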
diff --git a/test/lib/BackRestTest/CommonTest.pm b/test/lib/BackRestTest/CommonTest.pm
index 23ec01fb2..e233dd64b 100755
--- a/test/lib/BackRestTest/CommonTest.pm
+++ b/test/lib/BackRestTest/CommonTest.pm
@@ -21,9 +21,11 @@ use File::Copy qw(move);
use lib dirname($0) . '/../lib';
use BackRest::Utility;
+use BackRest::Config;
use BackRest::Remote;
use BackRest::File;
use BackRest::Manifest;
+use BackRest::Db;
use Exporter qw(import);
our @EXPORT = qw(BackRestTestCommon_Create BackRestTestCommon_Drop BackRestTestCommon_Setup BackRestTestCommon_ExecuteBegin
@@ -37,7 +39,8 @@ our @EXPORT = qw(BackRestTestCommon_Create BackRestTestCommon_Drop BackRestTestC
BackRestTestCommon_UserBackRestGet BackRestTestCommon_TestPathGet BackRestTestCommon_DataPathGet
BackRestTestCommon_RepoPathGet BackRestTestCommon_LocalPathGet BackRestTestCommon_DbPathGet
BackRestTestCommon_DbCommonPathGet BackRestTestCommon_ClusterStop BackRestTestCommon_DbTablespacePathGet
- BackRestTestCommon_DbPortGet);
+ BackRestTestCommon_DbPortGet BackRestTestCommon_iniLoad BackRestTestCommon_iniSave BackRestTestCommon_DbVersion
+ BackRestTestCommon_CommandPsqlGet BackRestTestCommon_DropRepo BackRestTestCommon_CreateRepo);
my $strPgSqlBin;
my $strCommonStanza;
@@ -56,6 +59,7 @@ my $strCommonDbPath;
my $strCommonDbCommonPath;
my $strCommonDbTablespacePath;
my $iCommonDbPort;
+my $strCommonDbVersion;
my $iModuleTestRun;
my $bDryRun;
my $bNoCleanup;
@@ -68,7 +72,6 @@ my $hOut;
my $pId;
my $strCommand;
-
####################################################################################################################################
# BackRestTestCommon_ClusterStop
####################################################################################################################################
@@ -89,6 +92,41 @@ sub BackRestTestCommon_ClusterStop
}
}
+####################################################################################################################################
+# BackRestTestCommon_DropRepo
+####################################################################################################################################
+sub BackRestTestCommon_DropRepo
+{
+ # Remove the backrest private directory
+ while (-e BackRestTestCommon_RepoPathGet())
+ {
+ BackRestTestCommon_PathRemove(BackRestTestCommon_RepoPathGet(), true, true);
+ BackRestTestCommon_PathRemove(BackRestTestCommon_RepoPathGet(), false, true);
+ hsleep(.1);
+ }
+}
+
+####################################################################################################################################
+# BackRestTestCommon_CreateRepo
+####################################################################################################################################
+sub BackRestTestCommon_CreateRepo
+{
+ my $bRemote = shift;
+
+ BackRestTestCommon_DropRepo();
+
+ # Create the backup directory
+ if ($bRemote)
+ {
+ BackRestTestCommon_Execute('mkdir -m 700 ' . BackRestTestCommon_RepoPathGet(), true);
+ }
+ else
+ {
+ BackRestTestCommon_PathCreate(BackRestTestCommon_RepoPathGet());
+ }
+}
+
####################################################################################################################################
# BackRestTestCommon_Drop
####################################################################################################################################
@@ -98,12 +136,7 @@ sub BackRestTestCommon_Drop
BackRestTestCommon_ClusterStop(BackRestTestCommon_DbCommonPathGet(), true);
# Remove the backrest private directory
- while (-e BackRestTestCommon_RepoPathGet())
- {
- BackRestTestCommon_PathRemove(BackRestTestCommon_RepoPathGet(), true, true);
- BackRestTestCommon_PathRemove(BackRestTestCommon_RepoPathGet(), false, true);
- hsleep(.1);
- }
+ BackRestTestCommon_DropRepo();
# Remove the test directory
BackRestTestCommon_PathRemove(BackRestTestCommon_TestPathGet());
@@ -468,6 +501,73 @@ sub BackRestTestCommon_Setup
$iModuleTestRun = $iModuleTestRunParam;
$bDryRun = $bDryRunParam;
$bNoCleanup = $bNoCleanupParam;
+
+ BackRestTestCommon_Execute($strPgSqlBin . '/postgres --version');
+
+ # Get the Postgres version
+ my @stryVersionToken = split(/ /, $strOutLog);
+ @stryVersionToken = split(/\./, $stryVersionToken[2]);
+ $strCommonDbVersion = $stryVersionToken[0] . '.' . $stryVersionToken[1];
+
+ # Don't run unit tests for unsupported versions
+ my $strVersionSupport = versionSupport();
+
+ if ($strCommonDbVersion < ${$strVersionSupport}[0])
+ {
+ confess "currently only version ${$strVersionSupport}[0] and up are supported";
+ }
+}
+
+####################################################################################################################################
+# BackRestTestCommon_iniLoad
+####################################################################################################################################
+sub BackRestTestCommon_iniLoad
+{
+ my $strFileName = shift;
+ my $oIniRef = shift;
+ my $bRemote = shift;
+
+ # Defaults
+ $bRemote = defined($bRemote) ? $bRemote : false;
+
+ if ($bRemote)
+ {
+ BackRestTestCommon_Execute("chmod g+x " . BackRestTestCommon_RepoPathGet(), $bRemote);
+ }
+
+ ini_load($strFileName, $oIniRef);
+
+ if ($bRemote)
+ {
+ BackRestTestCommon_Execute("chmod g-x " . BackRestTestCommon_RepoPathGet(), $bRemote);
+ }
+}
+
+####################################################################################################################################
+# BackRestTestCommon_iniSave
+####################################################################################################################################
+sub BackRestTestCommon_iniSave
+{
+ my $strFileName = shift;
+ my $oIniRef = shift;
+ my $bRemote = shift;
+
+ # Defaults
+ $bRemote = defined($bRemote) ? $bRemote : false;
+
+ if ($bRemote)
+ {
+ BackRestTestCommon_Execute("chmod g+x " . BackRestTestCommon_RepoPathGet(), $bRemote);
+ BackRestTestCommon_Execute("chmod g+w " . $strFileName, $bRemote);
+ }
+
+ ini_save($strFileName, $oIniRef);
+
+ if ($bRemote)
+ {
+ BackRestTestCommon_Execute("chmod g-w " . $strFileName, $bRemote);
+ BackRestTestCommon_Execute("chmod g-x " . BackRestTestCommon_RepoPathGet(), $bRemote);
+ }
}
####################################################################################################################################
@@ -562,11 +662,11 @@ sub BackRestTestCommon_ConfigRecovery
}
# Rewrite remap section
- delete($oConfig{"${strStanza}:recovery:option"});
+ delete($oConfig{"${strStanza}:restore:recovery-setting"});
foreach my $strOption (sort(keys $oRecoveryHashRef))
{
- $oConfig{"${strStanza}:recovery:option"}{$strOption} = ${$oRecoveryHashRef}{$strOption};
+ $oConfig{"${strStanza}:restore:recovery-setting"}{$strOption} = ${$oRecoveryHashRef}{$strOption};
}
# Resave the config file
@@ -614,7 +714,7 @@ sub BackRestTestCommon_ConfigCreate
$oParamHash{$strCommonStanza}{'db-user'} = $strCommonUser;
}
- $oParamHash{'global:log'}{'log-level-console'} = 'error';
+ $oParamHash{'global:log'}{'log-level-console'} = 'debug';
$oParamHash{'global:log'}{'log-level-file'} = 'trace';
if ($strLocal eq BACKUP)
@@ -627,7 +727,7 @@ sub BackRestTestCommon_ConfigCreate
if (defined($strRemote))
{
- $oParamHash{'global:log'}{'log-level-console'} = 'trace';
+# $oParamHash{'global:log'}{'log-level-console'} = 'trace';
# if ($bArchiveAsync)
# {
@@ -718,6 +818,11 @@ sub BackRestTestCommon_StanzaGet
return $strCommonStanza;
}
+sub BackRestTestCommon_CommandPsqlGet
+{
+ return $strCommonCommandPsql;
+}
+
sub BackRestTestCommon_CommandMainGet
{
return $strCommonCommandMain;
@@ -793,4 +898,9 @@ sub BackRestTestCommon_DbPortGet
return $iCommonDbPort;
}
+sub BackRestTestCommon_DbVersion
+{
+ return $strCommonDbVersion;
+}
+
1;
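`BackRestTestCommon_Setup` above derives the cluster version with two splits over the `postgres --version` output: first on spaces to isolate the version token, then on dots to keep only major.minor. The same parsing in Python, assuming the classic `postgres (PostgreSQL) X.Y.Z` output format of pre-10 releases (the function name is illustrative):

```python
def db_version(version_line):
    """Extract 'X.Y' from `postgres --version` output, following the
    two-split approach in BackRestTestCommon_Setup."""
    token = version_line.split(' ')[2]        # third token, e.g. '9.3.4'
    major, minor = token.split('.')[0:2]      # keep major and minor only
    return f"{major}.{minor}"
```

Keeping only major.minor is what makes the later numeric comparisons like `BackRestTestCommon_DbVersion() >= 9.1` work, since `'9.3'` compares cleanly as a number while `'9.3.4'` would not.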
diff --git a/test/lib/BackRestTest/CompareTest.pm b/test/lib/BackRestTest/CompareTest.pm
new file mode 100755
index 000000000..49cdf2b79
--- /dev/null
+++ b/test/lib/BackRestTest/CompareTest.pm
@@ -0,0 +1,127 @@
+#!/usr/bin/perl
+####################################################################################################################################
+# CompareTest.pm - Performance comparison tests between rsync and backrest
+####################################################################################################################################
+package BackRestTest::CompareTest;
+
+####################################################################################################################################
+# Perl includes
+####################################################################################################################################
+use strict;
+use warnings FATAL => qw(all);
+use Carp qw(confess);
+
+use File::Basename qw(dirname);
+use Time::HiRes qw(gettimeofday);
+use File::stat;
+use Exporter qw(import);
+
+use lib dirname($0) . '/../lib';
+use BackRest::Utility;
+use BackRestTest::CommonTest;
+use BackRestTest::BackupTest;
+
+####################################################################################################################################
+# Exports
+####################################################################################################################################
+our @EXPORT = qw(BackRestTestCompare_Test);
+
+####################################################################################################################################
+# BackRestTestCompare_BuildDb
+####################################################################################################################################
+sub BackRestTestCompare_BuildDb
+{
+ my $iTableTotal = shift;
+ my $iTableSize = shift;
+
+ &log(INFO, "build database: " . file_size_format($iTableTotal * $iTableSize * 1024 * 1024));
+
+ for (my $iTableIdx = 0; $iTableIdx < $iTableTotal; $iTableIdx++)
+ {
+ my $strSourceFile = BackRestTestCommon_DataPathGet() . "/test.table.bin";
+ my $strTableFile = BackRestTestCommon_DbCommonPathGet() . "/test-${iTableIdx}";
+
+ for (my $iTableSizeIdx = 0; $iTableSizeIdx < $iTableSize; $iTableSizeIdx++)
+ {
+ BackRestTestCommon_Execute("cat ${strSourceFile} >> ${strTableFile}");
+ }
+ }
+}
+
+####################################################################################################################################
+# BackRestTestCompare_Test
+####################################################################################################################################
+sub BackRestTestCompare_Test
+{
+ my $strTest = shift;
+
+ #-------------------------------------------------------------------------------------------------------------------------------
+ # Test rsync
+ #-------------------------------------------------------------------------------------------------------------------------------
+ if ($strTest eq 'all' || $strTest eq 'rsync')
+ {
+ my $iRun = 0;
+ my $bRemote = false;
+
+ &log(INFO, "Test rsync\n");
+
+ # Increment the run, log, and decide whether this unit test should be run
+ if (!BackRestTestCommon_Run(++$iRun,
+ "rmt ${bRemote}")) {return}
+
+ # Create the cluster and paths
+ BackRestTestBackup_Create($bRemote, false);
+ BackRestTestCommon_PathCreate(BackRestTestCommon_DbCommonPathGet() . '/pg_tblspc');
+
+ BackRestTestCompare_BuildDb(48, 10);
+ BackRestTestCommon_Execute('sync');
+
+ for (my $bRemote = true; $bRemote <= true; $bRemote++)
+ {
+ for (my $bRsync = true; $bRsync >= false; $bRsync--)
+ {
+ my $strCommand;
+ BackRestTestCommon_CreateRepo($bRemote);
+
+ &log(INFO, ($bRsync ? 'rsync' : 'backrest') . " test");
+
+ if ($bRsync)
+ {
+ $strCommand = 'rsync --compress-level=6 -zvlhprtogHS --delete ' .
+ ($bRemote ? BackRestTestCommon_UserGet . '@' . BackRestTestCommon_HostGet . ':' : '') .
+ BackRestTestCommon_DbCommonPathGet() . '/ ' . BackRestTestCommon_RepoPathGet() . ';' .
+ 'gzip -r "' . BackRestTestCommon_RepoPathGet() . '"';
+ }
+ else
+ {
+ $strCommand = BackRestTestCommon_CommandMainGet() .
+ ' --stanza=main' .
+ ($bRemote ? ' "--db-host=' . BackRestTestCommon_HostGet . '"' .
+ ' "--db-user=' . BackRestTestCommon_UserGet . '"' : '') .
+# ' --log-level-file=debug' .
+ ' --no-start-stop' .
+# ' --no-compress' .
+ ' --thread-max=4' .
+ ' "--db-path=' . BackRestTestCommon_DbCommonPathGet() . '"' .
+ ' "--repo-path=' . BackRestTestCommon_RepoPathGet() . '"' .
+ ' --type=full backup';
+ }
+
+ my $fTimeBegin = gettimeofday();
+ BackRestTestCommon_Execute($strCommand, $bRemote);
+ BackRestTestCommon_Execute('sync');
+ my $fTimeEnd = gettimeofday();
+
+ &log(INFO, " time = " . (int(($fTimeEnd - $fTimeBegin) * 100) / 100));
+ }
+ }
+
+ if (BackRestTestCommon_Cleanup())
+ {
+ &log(INFO, 'cleanup');
+ BackRestTestBackup_Drop();
+ }
+ }
+}
+
+1;
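The timing logic in `BackRestTestCompare_Test` truncates the elapsed wall-clock time to hundredths of a second via `int(($fTimeEnd - $fTimeBegin) * 100) / 100` — truncation, not rounding. A minimal Python sketch of the same measurement (the `truncate_hundredths` and `time_command` names are illustrative, not part of the test suite):

```python
import time

def truncate_hundredths(seconds):
    # Mirror the Perl expression int(delta * 100) / 100: truncate, don't round.
    return int(seconds * 100) / 100

def time_command(fn):
    # Time a callable the way CompareTest brackets each backup command.
    begin = time.monotonic()
    fn()
    end = time.monotonic()
    return truncate_hundredths(end - begin)

print(truncate_hundredths(1.239))  # 1.23 — truncated down, not rounded to 1.24
```

Note that a `sync` is executed inside the timed region in the Perl code, so filesystem flush time is deliberately included in the comparison.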
diff --git a/test/lib/BackRestTest/ConfigTest.pm b/test/lib/BackRestTest/ConfigTest.pm
index 0454387ed..1e9e935d7 100755
--- a/test/lib/BackRestTest/ConfigTest.pm
+++ b/test/lib/BackRestTest/ConfigTest.pm
@@ -507,6 +507,25 @@ sub BackRestTestConfig_Test
optionTestExpect(OPTION_RESTORE_RECOVERY_SETTING, 'db.domain.net', 'primary-conn-info');
}
+ if (BackRestTestCommon_Run(++$iRun, OP_RESTORE . ' values passed to ' . OP_ARCHIVE_GET))
+ {
+ optionSetTest($oOption, OPTION_STANZA, $strStanza);
+ optionSetTest($oOption, OPTION_DB_PATH, '/db path/main');
+ optionSetTest($oOption, OPTION_REPO_PATH, '/repo');
+ optionSetTest($oOption, OPTION_BACKUP_HOST, 'db.mydomain.com');
+
+ configLoadExpect($oOption, OP_RESTORE);
+
+ my $strCommand = operationWrite(OP_ARCHIVE_GET);
+ my $strExpectedCommand = "$0 --backup-host=db.mydomain.com \"--db-path=/db path/main\"" .
+ " --repo-path=/repo --stanza=main " . OP_ARCHIVE_GET;
+
+ if ($strCommand ne $strExpectedCommand)
+ {
+ confess "expected command '${strExpectedCommand}' but got '${strCommand}'";
+ }
+ }
+
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' valid value ' . OPTION_COMMAND_PSQL))
{
optionSetTest($oOption, OPTION_STANZA, $strStanza);
@@ -749,6 +768,18 @@ sub BackRestTestConfig_Test
optionTestExpect(OPTION_RESTORE_RECOVERY_SETTING, '/path/to/pg_backrest.pl', 'archive-command');
}
+ if (BackRestTestCommon_Run(++$iRun, OP_RESTORE . ' option ' . OPTION_RESTORE_RECOVERY_SETTING))
+ {
+ $oConfig = {};
+ $$oConfig{$strStanza . ':' . &CONFIG_SECTION_RESTORE_RECOVERY_SETTING}{'standby-mode'} = 'on';
+ ini_save($strConfigFile, $oConfig);
+
+ optionSetTest($oOption, OPTION_STANZA, $strStanza);
+ optionSetTest($oOption, OPTION_CONFIG, $strConfigFile);
+
+ configLoadExpect($oOption, OP_ARCHIVE_GET);
+ }
+
if (BackRestTestCommon_Run(++$iRun, OP_BACKUP . ' option ' . OPTION_DB_PATH))
{
$oConfig = {};
diff --git a/test/lib/BackRestTest/FileTest.pm b/test/lib/BackRestTest/FileTest.pm
index e5be572f7..25cffe385 100755
--- a/test/lib/BackRestTest/FileTest.pm
+++ b/test/lib/BackRestTest/FileTest.pm
@@ -97,6 +97,8 @@ sub BackRestTestFile_Test
$strHost, # Host
$strUser, # User
BackRestTestCommon_CommandRemoteGet(), # Command
+ $strStanza, # Stanza
+ '', # Repo Path
OPTION_DEFAULT_BUFFER_SIZE, # Buffer size
OPTION_DEFAULT_COMPRESS_LEVEL, # Compress level
OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK, # Compress network level
@@ -107,6 +109,8 @@ sub BackRestTestFile_Test
undef, # Host
undef, # User
undef, # Command
+ undef, # Stanza
+ undef, # Repo Path
OPTION_DEFAULT_BUFFER_SIZE, # Buffer size
OPTION_DEFAULT_COMPRESS_LEVEL, # Compress level
OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK, # Compress network level
@@ -455,7 +459,7 @@ sub BackRestTestFile_Test
&log(DEBUG, "begin ${lTimeBegin}, check ${lTimeBeginCheck}, end " . time());
# Current time should have advanced by 1 second
- if (time() == int($lTimeBegin))
+ if (int(time()) == int($lTimeBegin))
{
confess "time was not advanced by 1 second";
}
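The one-line FileTest change matters because `Time::HiRes` makes `time()` return fractional seconds: comparing the raw float against `int($lTimeBegin)` is almost never true, so the "clock did not advance" check could never fire. A Python illustration of the pitfall (timestamp values are made up):

```python
def same_second(now, begin):
    # Truncate both timestamps to whole seconds before comparing,
    # mirroring the corrected int(time()) == int($lTimeBegin).
    return int(now) == int(begin)

# Raw float vs truncated int: false even within the same second,
# which is why the old comparison silently passed.
assert (100.7 == int(100.2)) is False
# Truncating both sides behaves as intended:
assert same_second(100.7, 100.2) is True    # same second
assert same_second(101.1, 100.9) is False   # clock advanced
```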
diff --git a/test/lib/BackRestTest/UtilityTest.pm b/test/lib/BackRestTest/UtilityTest.pm
index bdd1289a7..cfe0a9037 100755
--- a/test/lib/BackRestTest/UtilityTest.pm
+++ b/test/lib/BackRestTest/UtilityTest.pm
@@ -46,6 +46,8 @@ sub BackRestTestUtility_Test
undef, # Host
undef, # User
undef, # Command
+ undef, # Stanza
+ undef, # Repo Path
OPTION_DEFAULT_BUFFER_SIZE, # Buffer size
OPTION_DEFAULT_COMPRESS_LEVEL, # Compress level
OPTION_DEFAULT_COMPRESS_LEVEL_NETWORK, # Compress network level
diff --git a/test/test.pl b/test/test.pl
index 45d383be7..d1ab7db00 100755
--- a/test/test.pl
+++ b/test/test.pl
@@ -7,8 +7,8 @@
# Perl includes
####################################################################################################################################
use strict;
-use warnings;
-use Carp;
+use warnings FATAL => qw(all);
+use Carp qw(confess);
use File::Basename;
use Getopt::Long;
@@ -17,6 +17,7 @@ use Pod::Usage;
#use Test::More;
use lib dirname($0) . '/../lib';
+use BackRest::Db;
use BackRest::Utility;
use lib dirname($0) . '/lib';
@@ -25,6 +26,7 @@ use BackRestTest::UtilityTest;
use BackRestTest::ConfigTest;
use BackRestTest::FileTest;
use BackRestTest::BackupTest;
+use BackRestTest::CompareTest;
####################################################################################################################################
# Usage
@@ -46,6 +48,7 @@ test.pl [options]
--dry-run show only the tests that would be executed but don't execute them
--no-cleanup don't cleanup after the last test is complete - useful for debugging
--infinite repeat selected tests forever
+ --db-version version of postgres to test ('max', 'all', or a specific version)
Configuration Options:
--psql-bin path to the psql executables (e.g. /usr/lib/postgresql/9.3/bin/)
@@ -61,7 +64,7 @@ test.pl [options]
####################################################################################################################################
# Command line parameters
####################################################################################################################################
-my $strLogLevel = 'info'; # Log level for tests
+my $strLogLevel = 'info';
my $strModule = 'all';
my $strModuleTest = 'all';
my $iModuleTestRun = undef;
@@ -74,6 +77,7 @@ my $bVersion = false;
my $bHelp = false;
my $bQuiet = false;
my $bInfinite = false;
+my $strDbVersion = 'max';
GetOptions ('q|quiet' => \$bQuiet,
'version' => \$bVersion,
@@ -87,7 +91,8 @@ GetOptions ('q|quiet' => \$bQuiet,
'thread-max=s' => \$iThreadMax,
'dry-run' => \$bDryRun,
'no-cleanup' => \$bNoCleanup,
- 'infinite' => \$bInfinite)
+ 'infinite' => \$bInfinite,
+ 'db-version=s' => \$strDbVersion)
or pod2usage(2);
# Display version and exit if requested
@@ -104,7 +109,11 @@ if ($bVersion || $bHelp)
exit 0;
}
-# Test::More->builder->output('/dev/null');
+if (@ARGV > 0)
+{
+ print "invalid parameter\n\n";
+ pod2usage();
+}
####################################################################################################################################
# Setup
@@ -131,29 +140,39 @@ if (defined($iModuleTestRun) && $strModuleTest eq 'all')
}
# Search for psql bin
+my @stryTestVersion;
+my $strVersionSupport = versionSupport();
+
if (!defined($strPgSqlBin))
{
my @strySearchPath = ('/usr/lib/postgresql/VERSION/bin', '/Library/PostgreSQL/VERSION/bin');
foreach my $strSearchPath (@strySearchPath)
{
- for (my $fVersion = 9; $fVersion >= 0; $fVersion -= 1)
+ for (my $iVersionIdx = @{$strVersionSupport} - 1; $iVersionIdx >= 0; $iVersionIdx--)
{
- my $strVersionPath = $strSearchPath;
- $strVersionPath =~ s/VERSION/9\.$fVersion/g;
-
- if (-e "${strVersionPath}/initdb")
+ if ($strDbVersion eq 'all' || ($strDbVersion eq 'max' && @stryTestVersion == 0) ||
+ $strDbVersion eq ${$strVersionSupport}[$iVersionIdx])
{
- &log(INFO, "found pgsql-bin at ${strVersionPath}\n");
- $strPgSqlBin = ${strVersionPath};
+ my $strVersionPath = $strSearchPath;
+ $strVersionPath =~ s/VERSION/${$strVersionSupport}[$iVersionIdx]/g;
+
+ if (-e "${strVersionPath}/initdb")
+ {
+ &log(INFO, "FOUND pgsql-bin at ${strVersionPath}");
+ push @stryTestVersion, $strVersionPath;
+ }
}
}
}
- if (!defined($strPgSqlBin))
- {
- confess 'pgsql-bin was not defined and could not be located';
- }
+ # Make sure at least one version of postgres was found
+ @stryTestVersion > 0
+ or confess 'pgsql-bin was not defined and postgres could not be located automatically';
+}
+else
+{
+ push @stryTestVersion, $strPgSqlBin;
}
# Check thread total
@@ -213,8 +232,6 @@ if (-e './test.pl' && -e '../bin/pg_backrest.pl' && open($hVersion, '<', '../VER
####################################################################################################################################
# Runs tests
####################################################################################################################################
-BackRestTestCommon_Setup($strTestPath, $strPgSqlBin, $iModuleTestRun, $bDryRun, $bNoCleanup);
-
# &log(INFO, "Testing with test_path = " . BackRestTestCommon_TestPathGet() . ", host = {strHost}, user = {strUser}, " .
# "group = {strGroup}");
@@ -222,6 +239,10 @@ my $iRun = 0;
do
{
+ BackRestTestCommon_Setup($strTestPath, $stryTestVersion[0], $iModuleTestRun, $bDryRun, $bNoCleanup);
+
+ &log(INFO, "TESTING psql-bin = $stryTestVersion[0]\n");
+
if ($bInfinite)
{
$iRun++;
@@ -246,6 +267,21 @@ do
if ($strModule eq 'all' || $strModule eq 'backup')
{
BackRestTestBackup_Test($strModuleTest, $iThreadMax);
+
+ if (@stryTestVersion > 1 && ($strModuleTest eq 'all' || $strModuleTest eq 'full'))
+ {
+ for (my $iVersionIdx = 1; $iVersionIdx < @stryTestVersion; $iVersionIdx++)
+ {
+ BackRestTestCommon_Setup($strTestPath, $stryTestVersion[$iVersionIdx], $iModuleTestRun, $bDryRun, $bNoCleanup);
+ &log(INFO, "TESTING psql-bin = $stryTestVersion[$iVersionIdx] for backup/full\n");
+ BackRestTestBackup_Test('full', $iThreadMax);
+ }
+ }
+ }
+
+ if ($strModule eq 'compare')
+ {
+ BackRestTestCompare_Test($strModuleTest);
}
}
while ($bInfinite);
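The psql-bin search loop added to test.pl substitutes each supported version into the path templates, walking newest-first and keeping one match for `max`, every match for `all`, or only the requested version. A Python sketch of that selection logic (the function name and the injectable `exists` check are illustrative, not part of test.pl):

```python
import os

def find_psql_bins(search_paths, supported, db_version='max', exists=os.path.isdir):
    # Walk supported versions newest-first, as test.pl iterates the
    # versionSupport() list from the end.
    found = []
    for template in search_paths:
        for version in reversed(supported):
            if (db_version == 'all'
                    or (db_version == 'max' and not found)
                    or db_version == version):
                path = template.replace('VERSION', version)
                if exists(path):
                    found.append(path)
    return found

# Pretend every version is installed so the selection rules are visible:
every = lambda path: True
paths = ['/usr/lib/postgresql/VERSION/bin']
print(find_psql_bins(paths, ['9.0', '9.3', '9.4'], 'max', every))
# ['/usr/lib/postgresql/9.4/bin']
```

With `--db-version=all` the same call returns all three paths (newest first), which is what drives the extra `backup/full` runs per version at the bottom of test.pl.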