
[Issue #116] WAL archive information

Grigory Smolkin 2019-09-17 17:35:27 +03:00
parent 11ab44fb04
commit d34a6a3ad8
25 changed files with 1976 additions and 397 deletions


@ -36,6 +36,7 @@ Current version - 2.1.5
* [Running pg_probackup on Parallel Threads](#running-pg_probackup-on-parallel-threads)
* [Configuring pg_probackup](#configuring-pg_probackup)
* [Managing the Backup Catalog](#managing-the-backup-catalog)
* [Viewing WAL Archive Information](#viewing-wal-archive-information)
* [Configuring Backup Retention Policy](#configuring-backup-retention-policy)
* [Merging Backups](#merging-backups)
* [Deleting Backups](#deleting-backups)
@ -126,7 +127,7 @@ As compared to other backup solutions, pg_probackup offers the following benefit
- Remote operations: backup PostgreSQL instance located on remote machine or restore backup on it
- Backup from replica: avoid extra load on the master server by taking backups from a standby
- External directories: add to backup content of directories located outside of the PostgreSQL data directory (PGDATA), such as scripts, configs, logs and pg_dump files
- Backup Catalog: get list of backups and corresponding meta information in `plain` or `json` formats
- Backup Catalog: get the list of backups and corresponding meta information in `plain` or `json` formats, and view WAL archive information.
- Partial Restore: restore only the specified databases or skip the specified databases.
To manage backup data, pg_probackup creates a `backup catalog`. This is a directory that stores all backup files with additional meta information, as well as WAL archives required for point-in-time recovery. You can store backups for different instances in separate subdirectories of a single backup catalog.
@ -300,7 +301,9 @@ Making backups in PAGE backup mode, performing [PITR](#performing-point-in-time-
Where *backup_dir* and *instance_name* refer to the already initialized backup catalog instance for this database cluster, and the optional parameters [remote_options](#remote-mode-options) should be used to archive WAL to the remote host. For details about all possible `archive-push` parameters, see the section [archive-push](#archive-push).
Once these steps are complete, you can start making backups with ARCHIVE WAL mode, backups in PAGE backup mode and perform [PITR](#performing-point-in-time-pitr-recovery).
Once these steps are complete, you can start making backups in [ARCHIVE](#archive-mode) WAL mode, take backups in PAGE backup mode, and perform [PITR](#performing-point-in-time-pitr-recovery).
The current state of the WAL archive can be obtained via the [show](#show) command. For details, see the section [Viewing WAL Archive Information](#viewing-wal-archive-information).
If you are planning to make PAGE backups and/or backups in [ARCHIVE](#archive-mode) WAL mode from a standby of a server that generates a small amount of WAL traffic, and you do not want to wait long for a WAL segment to fill up, consider setting the [archive_timeout](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT) PostgreSQL parameter **on the master**. It is advisable to set it slightly lower than the pg_probackup `--archive-timeout` parameter (default: 5 min), so that there is enough time for the rotated segment to be streamed to the replica and sent to the archive before the backup is aborted because of `--archive-timeout`. For example, with the default `--archive-timeout` of 300 seconds, an `archive_timeout` of about 240 seconds leaves a one-minute margin.
@ -408,7 +411,7 @@ Where *backup_mode* can take one of the following values:
- FULL — creates a full backup that contains all the data files of the cluster to be restored.
- DELTA — reads all data files in the data directory and creates an incremental backup for pages that have changed since the previous backup.
- PAGE — creates an incremental PAGE backup based on the WAL files that have changed since the previous full or incremental backup was taken.
- PAGE — creates an incremental PAGE backup based on the WAL files that have been generated since the previous full or incremental backup was taken. Only changed blocks are read from data files.
- PTRACK — creates an incremental PTRACK backup tracking page changes on the fly.
When restoring a cluster from an incremental backup, pg_probackup relies on the parent full backup and all the incremental backups between the full backup and the target backup; this sequence is called `the backup chain`. You must create at least one full backup before taking incremental ones.
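As an illustration only, a backup chain can be reconstructed from the catalog's JSON output (shown later in this document) by following each backup's `parent-backup-id` field. This is a hedged Python sketch, not part of pg_probackup; it assumes the instance-list JSON layout used by the `show` command, and `backup_dir`/`instance` are placeholders:
```
import json
import subprocess

def backup_chain(backup_dir, instance, backup_id):
    # Assumed layout: a list of {"instance": ..., "backups": [...]} objects,
    # where each backup carries "id" and, for incremental backups,
    # "parent-backup-id" (see the JSON samples in this document).
    out = subprocess.check_output(
        ["pg_probackup", "show", "-B", backup_dir,
         "--instance", instance, "--format=json"])
    backups = {b["id"]: b
               for inst in json.loads(out)
               for b in inst["backups"]}
    chain = []
    backup = backups.get(backup_id)
    while backup is not None:
        chain.append(backup["id"])
        backup = backups.get(backup.get("parent-backup-id"))
    return list(reversed(chain))  # full backup first, target backup last
```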
@ -668,6 +671,7 @@ If nothing is given, the default values are taken. By default pg_probackup tries
With pg_probackup, you can manage backups from the command line:
- View available backups
- View available WAL archive information
- Validate backups
- Merge backups
- Delete backups
@ -794,6 +798,266 @@ The sample output is as follows:
]
```
#### Viewing WAL Archive Information
To view information about the WAL archive for every instance, run the command:
pg_probackup show -B backup_dir [--instance instance_name] --archive
pg_probackup displays the list of all available WAL files, grouped by timeline. For example:
```
ARCHIVE INSTANCE 'node'
===================================================================================================================
TLI Parent TLI Switchpoint Min Segno Max Segno N segments Size Zratio N backups Status
===================================================================================================================
5 1 0/B000000 000000000000000B 000000000000000C 2 685kB 48.00 0 OK
4 3 0/18000000 0000000000000018 000000000000001A 3 648kB 77.00 0 OK
3 2 0/15000000 0000000000000015 0000000000000017 3 648kB 77.00 0 OK
2 1 0/B000108 000000000000000B 0000000000000015 5 892kB 94.00 1 DEGRADED
1 0 0/0 0000000000000001 000000000000000A 10 8774kB 19.00 1 OK
```
For each timeline, the following information is provided:
- TLI — timeline identifier.
- Parent TLI — identifier of the parent timeline from which this timeline branched off.
- Switchpoint — LSN of the moment when the timeline branched off from "Parent TLI".
- Min Segno — number of the first existing WAL segment belonging to the timeline.
- Max Segno — number of the last existing WAL segment belonging to the timeline.
- N segments — number of WAL segments belonging to the timeline.
- Size — the size the timeline's WAL files take on disk.
- Zratio — compression ratio, calculated as "N segments" * wal_seg_size / "Size"; a worked example follows this list.
- N backups — number of backups belonging to the timeline. To get the details about these backups, use the JSON format.
- Status — archive status for this timeline. Possible values:
- OK — all WAL segments between Min Segno and Max Segno are present.
- DEGRADED — some WAL segments between Min Segno and Max Segno are lost. To get details about the lost files, use the JSON format.
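For instance, plugging the numbers for timeline 1 of the sample output into the Zratio formula gives the following quick Python sanity check, assuming the default 16MB WAL segment size; the exact rounding pg_probackup applies to the displayed value may differ:
```
# Timeline 1 of the sample: 10 segments, 8774805 bytes on disk
# (the byte-exact size comes from the JSON output below).
wal_seg_size = 16 * 1024 * 1024   # assumed default WAL segment size, in bytes
n_segments = 10
size = 8774805

print(round(n_segments * wal_seg_size / size, 2))  # ~19.12, displayed as 19.00
```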
To get more detailed information about the WAL archive in the JSON format, run the command:
pg_probackup show -B backup_dir [--instance instance_name] --archive --format=json
The sample output is as follows:
```
[
{
"instance": "replica",
"timelines": [
{
"tli": 5,
"parent-tli": 1,
"switchpoint": "0/B000000",
"min-segno": "000000000000000B",
"max-segno": "000000000000000C",
"n-segments": 2,
"size": 685320,
"zratio": 48.00,
"closest-backup-id": "PXS92O",
"status": "OK",
"lost-segments": [],
"backups": []
},
{
"tli": 4,
"parent-tli": 3,
"switchpoint": "0/18000000",
"min-segno": "0000000000000018",
"max-segno": "000000000000001A",
"n-segments": 3,
"size": 648625,
"zratio": 77.00,
"closest-backup-id": "PXS9CE",
"status": "OK",
"lost-segments": [],
"backups": []
},
{
"tli": 3,
"parent-tli": 2,
"switchpoint": "0/15000000",
"min-segno": "0000000000000015",
"max-segno": "0000000000000017",
"n-segments": 3,
"size": 648911,
"zratio": 77.00,
"closest-backup-id": "PXS9CE",
"status": "OK",
"lost-segments": [],
"backups": []
},
{
"tli": 2,
"parent-tli": 1,
"switchpoint": "0/B000108",
"min-segno": "000000000000000B",
"max-segno": "0000000000000015",
"n-segments": 5,
"size": 892173,
"zratio": 94.00,
"closest-backup-id": "PXS92O",
"status": "DEGRADED",
"lost-segments": [
{
"begin-segno": "000000000000000D",
"end-segno": "000000000000000E"
},
{
"begin-segno": "0000000000000010",
"end-segno": "0000000000000012"
}
],
"backups": [
{
"id": "PXS9CE",
"backup-mode": "FULL",
"wal": "ARCHIVE",
"compress-alg": "none",
"compress-level": 1,
"from-replica": "false",
"block-size": 8192,
"xlog-block-size": 8192,
"checksum-version": 1,
"program-version": "2.1.5",
"server-version": "10",
"current-tli": 2,
"parent-tli": 0,
"start-lsn": "0/C000028",
"stop-lsn": "0/C000160",
"start-time": "2019-09-13 21:43:26+03",
"end-time": "2019-09-13 21:43:30+03",
"recovery-xid": 0,
"recovery-time": "2019-09-13 21:43:29+03",
"data-bytes": 104674852,
"wal-bytes": 16777216,
"primary_conninfo": "user=backup passfile=/var/lib/pgsql/.pgpass port=5432 sslmode=disable sslcompression=1 target_session_attrs=any",
"status": "OK"
}
]
},
{
"tli": 1,
"parent-tli": 0,
"switchpoint": "0/0",
"min-segno": "0000000000000001",
"max-segno": "000000000000000A",
"n-segments": 10,
"size": 8774805,
"zratio": 19.00,
"closest-backup-id": "",
"status": "OK",
"lost-segments": [],
"backups": [
{
"id": "PXS92O",
"backup-mode": "FULL",
"wal": "ARCHIVE",
"compress-alg": "none",
"compress-level": 1,
"from-replica": "true",
"block-size": 8192,
"xlog-block-size": 8192,
"checksum-version": 1,
"program-version": "2.1.5",
"server-version": "10",
"current-tli": 1,
"parent-tli": 0,
"start-lsn": "0/4000028",
"stop-lsn": "0/6000028",
"start-time": "2019-09-13 21:37:36+03",
"end-time": "2019-09-13 21:38:45+03",
"recovery-xid": 0,
"recovery-time": "2019-09-13 21:37:30+03",
"data-bytes": 25987319,
"wal-bytes": 50331648,
"primary_conninfo": "user=backup passfile=/var/lib/pgsql/.pgpass port=5432 sslmode=disable sslcompression=1 target_session_attrs=any",
"status": "OK"
}
]
}
]
},
{
"instance": "master",
"timelines": [
{
"tli": 1,
"parent-tli": 0,
"switchpoint": "0/0",
"min-segno": "0000000000000001",
"max-segno": "000000000000000B",
"n-segments": 11,
"size": 8860892,
"zratio": 20.00,
"status": "OK",
"lost-segments": [],
"backups": [
{
"id": "PXS92H",
"parent-backup-id": "PXS92C",
"backup-mode": "PAGE",
"wal": "ARCHIVE",
"compress-alg": "none",
"compress-level": 1,
"from-replica": "false",
"block-size": 8192,
"xlog-block-size": 8192,
"checksum-version": 1,
"program-version": "2.1.5",
"server-version": "10",
"current-tli": 1,
"parent-tli": 1,
"start-lsn": "0/4000028",
"stop-lsn": "0/50000B8",
"start-time": "2019-09-13 21:37:29+03",
"end-time": "2019-09-13 21:37:31+03",
"recovery-xid": 0,
"recovery-time": "2019-09-13 21:37:30+03",
"data-bytes": 1328461,
"wal-bytes": 33554432,
"primary_conninfo": "user=backup passfile=/var/lib/pgsql/.pgpass port=5432 sslmode=disable sslcompression=1 target_session_attrs=any",
"status": "OK"
},
{
"id": "PXS92C",
"backup-mode": "FULL",
"wal": "ARCHIVE",
"compress-alg": "none",
"compress-level": 1,
"from-replica": "false",
"block-size": 8192,
"xlog-block-size": 8192,
"checksum-version": 1,
"program-version": "2.1.5",
"server-version": "10",
"current-tli": 1,
"parent-tli": 0,
"start-lsn": "0/2000028",
"stop-lsn": "0/2000160",
"start-time": "2019-09-13 21:37:24+03",
"end-time": "2019-09-13 21:37:29+03",
"recovery-xid": 0,
"recovery-time": "2019-09-13 21:37:28+03",
"data-bytes": 24871902,
"wal-bytes": 16777216,
"primary_conninfo": "user=backup passfile=/var/lib/pgsql/.pgpass port=5432 sslmode=disable sslcompression=1 target_session_attrs=any",
"status": "OK"
}
]
}
]
}
]
```
Most fields are consistent with the plain format, with some exceptions:
- 'size' is in bytes.
- The 'closest-backup-id' attribute contains the ID of the valid backup closest to the timeline, located on one of the previous timelines. This backup is the closest starting point from which the timeline can be reached by PITR from other timelines. If no such backup exists, the string is empty.
- DEGRADED timelines contain a 'lost-segments' array with information about the intervals of missing segments. In OK timelines the 'lost-segments' array is empty.
- The 'N backups' attribute is replaced with a 'backups' array containing the backups belonging to the timeline. If the timeline has no backups, the 'backups' array is empty.
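As a usage sketch, this JSON output lends itself to scripting. The following hedged Python example (where `backup_dir` is a placeholder) reports DEGRADED timelines together with their lost segment intervals:
```
import json
import subprocess

def report_degraded(backup_dir):
    # Parse `pg_probackup show --archive --format=json` and print every
    # DEGRADED timeline with its intervals of missing WAL segments.
    out = subprocess.check_output(
        ["pg_probackup", "show", "-B", backup_dir,
         "--archive", "--format=json"])
    for instance in json.loads(out):
        for tl in instance["timelines"]:
            if tl["status"] == "DEGRADED":
                gaps = ", ".join(
                    "{0}..{1}".format(g["begin-segno"], g["end-segno"])
                    for g in tl["lost-segments"])
                print("instance {0}, tli {1}: lost segments {2}".format(
                    instance["instance"], tl["tli"], gaps))
```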
### Configuring Backup Retention Policy
By default, all backup copies created with pg_probackup are stored in the specified backup catalog. To save disk space, you can configure a retention policy and periodically clean up redundant backup copies accordingly.
@ -963,12 +1227,15 @@ To edit pg_probackup.conf, use the [set-config](#set-config) command.
#### show
pg_probackup show -B backup_dir
[--help] [--instance instance_name [-i backup_id]] [--format=plain|json]
[--help] [--instance instance_name [-i backup_id | --archive]] [--format=plain|json]
Shows the contents of the backup catalog. If *instance_name* and *backup_id* are specified, shows detailed information about this backup. You can specify the `--format=json` option to return the result in the JSON format.
Shows the contents of the backup catalog. If *instance_name* and *backup_id* are specified, shows detailed information about this backup. You can specify the `--format=json` option to return the result in the JSON format. If the `--archive` option is specified, shows the contents of the WAL archive of the backup catalog.
By default, the contents of the backup catalog are shown as plain text.
For details on usage, see the sections [Managing the Backup Catalog](#managing-the-backup-catalog) and [Viewing WAL Archive Information](#viewing-wal-archive-information).
#### backup
pg_probackup backup -B backup_dir -b backup_mode --instance instance_name


@ -13,7 +13,7 @@
#include <unistd.h>
static void push_wal_file(const char *from_path, const char *to_path,
bool is_compress, bool overwrite);
bool is_compress, bool overwrite, int compress_level);
static void get_wal_file(const char *from_path, const char *to_path);
#ifdef HAVE_LIBZ
static const char *get_gz_error(gzFile gzf, int errnum);
@ -31,11 +31,10 @@ static void copy_file_attributes(const char *from_path,
* --wal-file-path %p --wal-file-name %f', to move backups into arclog_path.
* Where arclog_path is $BACKUP_PATH/wal/system_id.
* Currently it just copies wal files to the new location.
* TODO: Planned options: list the arclog content,
* compute and validate checksums.
*/
int
do_archive_push(char *wal_file_path, char *wal_file_name, bool overwrite)
do_archive_push(InstanceConfig *instance,
char *wal_file_path, char *wal_file_name, bool overwrite)
{
char backup_wal_file_path[MAXPGPATH];
char absolute_wal_file_path[MAXPGPATH];
@ -60,33 +59,33 @@ do_archive_push(char *wal_file_path, char *wal_file_name, bool overwrite)
/* verify that archive-push --instance parameter is valid */
system_id = get_system_identifier(current_dir);
if (instance_config.pgdata == NULL)
if (instance->pgdata == NULL)
elog(ERROR, "cannot read pg_probackup.conf for this instance");
if(system_id != instance_config.system_identifier)
if(system_id != instance->system_identifier)
elog(ERROR, "Refuse to push WAL segment %s into archive. Instance parameters mismatch."
"Instance '%s' should have SYSTEM_ID = " UINT64_FORMAT " instead of " UINT64_FORMAT,
wal_file_name, instance_name, instance_config.system_identifier,
wal_file_name, instance->name, instance->system_identifier,
system_id);
/* Create 'archlog_path' directory. Do nothing if it already exists. */
fio_mkdir(arclog_path, DIR_PERMISSION, FIO_BACKUP_HOST);
fio_mkdir(instance->arclog_path, DIR_PERMISSION, FIO_BACKUP_HOST);
join_path_components(absolute_wal_file_path, current_dir, wal_file_path);
join_path_components(backup_wal_file_path, arclog_path, wal_file_name);
join_path_components(backup_wal_file_path, instance->arclog_path, wal_file_name);
elog(INFO, "pg_probackup archive-push from %s to %s", absolute_wal_file_path, backup_wal_file_path);
if (instance_config.compress_alg == PGLZ_COMPRESS)
if (instance->compress_alg == PGLZ_COMPRESS)
elog(ERROR, "pglz compression is not supported");
#ifdef HAVE_LIBZ
if (instance_config.compress_alg == ZLIB_COMPRESS)
if (instance->compress_alg == ZLIB_COMPRESS)
is_compress = IsXLogFileName(wal_file_name);
#endif
push_wal_file(absolute_wal_file_path, backup_wal_file_path, is_compress,
overwrite);
overwrite, instance->compress_level);
elog(INFO, "pg_probackup archive-push completed successfully");
return 0;
@ -97,7 +96,8 @@ do_archive_push(char *wal_file_path, char *wal_file_name, bool overwrite)
* Move files from arclog_path to pgdata/wal_file_path.
*/
int
do_archive_get(char *wal_file_path, char *wal_file_name)
do_archive_get(InstanceConfig *instance,
char *wal_file_path, char *wal_file_name)
{
char backup_wal_file_path[MAXPGPATH];
char absolute_wal_file_path[MAXPGPATH];
@ -118,7 +118,7 @@ do_archive_get(char *wal_file_path, char *wal_file_name)
elog(ERROR, "getcwd() error");
join_path_components(absolute_wal_file_path, current_dir, wal_file_path);
join_path_components(backup_wal_file_path, arclog_path, wal_file_name);
join_path_components(backup_wal_file_path, instance->arclog_path, wal_file_name);
elog(INFO, "pg_probackup archive-get from %s to %s",
backup_wal_file_path, absolute_wal_file_path);
@ -134,7 +134,7 @@ do_archive_get(char *wal_file_path, char *wal_file_name)
*/
void
push_wal_file(const char *from_path, const char *to_path, bool is_compress,
bool overwrite)
bool overwrite, int compress_level)
{
FILE *in = NULL;
int out = -1;
@ -183,7 +183,7 @@ push_wal_file(const char *from_path, const char *to_path, bool is_compress,
{
snprintf(to_path_temp, sizeof(to_path_temp), "%s.part", gz_to_path);
gz_out = fio_gzopen(to_path_temp, PG_BINARY_W, instance_config.compress_level, FIO_BACKUP_HOST);
gz_out = fio_gzopen(to_path_temp, PG_BINARY_W, compress_level, FIO_BACKUP_HOST);
if (gz_out == NULL)
{
partial_file_exists = true;
@ -246,7 +246,7 @@ push_wal_file(const char *from_path, const char *to_path, bool is_compress,
#ifdef HAVE_LIBZ
if (is_compress)
{
gz_out = fio_gzopen(to_path_temp, PG_BINARY_W, instance_config.compress_level, FIO_BACKUP_HOST);
gz_out = fio_gzopen(to_path_temp, PG_BINARY_W, compress_level, FIO_BACKUP_HOST);
if (gz_out == NULL)
elog(ERROR, "Cannot open destination temporary WAL file \"%s\": %s",
to_path_temp, strerror(errno));


@ -196,7 +196,7 @@ do_backup_instance(PGconn *backup_conn, PGNodeInfo *nodeInfo)
char prev_backup_filelist_path[MAXPGPATH];
/* get list of backups already taken */
backup_list = catalog_get_backup_list(INVALID_BACKUP_ID);
backup_list = catalog_get_backup_list(instance_name, INVALID_BACKUP_ID);
prev_backup = catalog_get_last_data_backup(backup_list, current.tli, current.start_time);
if (prev_backup == NULL)
@ -1681,7 +1681,9 @@ pg_stop_backup(pgBackup *backup, PGconn *pg_startbackup_conn,
* In case of backup from replica >= 9.6 we do not trust minRecPoint
* and stop_backup LSN, so we use latest replayed LSN as STOP LSN.
*/
if (backup->from_replica)
/* current is used here because of cleanup */
if (current.from_replica)
stop_backup_query = "SELECT"
" pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot()),"
" current_timestamp(0)::timestamptz,"


@ -53,13 +53,14 @@ unlink_lock_atexit(void)
* If no backup matches, return NULL.
*/
pgBackup *
read_backup(time_t timestamp)
read_backup(const char *instance_name, time_t timestamp)
{
pgBackup tmp;
char conf_path[MAXPGPATH];
tmp.start_time = timestamp;
pgBackupGetPath(&tmp, conf_path, lengthof(conf_path), BACKUP_CONTROL_FILE);
pgBackupGetPathInInstance(instance_name, &tmp, conf_path,
lengthof(conf_path), BACKUP_CONTROL_FILE, NULL);
return readBackupControlFile(conf_path);
}
@ -71,11 +72,12 @@ read_backup(time_t timestamp)
* status.
*/
void
write_backup_status(pgBackup *backup, BackupStatus status)
write_backup_status(pgBackup *backup, BackupStatus status,
const char *instance_name)
{
pgBackup *tmp;
tmp = read_backup(backup->start_time);
tmp = read_backup(instance_name, backup->start_time);
if (!tmp)
{
/*
@ -302,6 +304,69 @@ IsDir(const char *dirpath, const char *entry, fio_location location)
return fio_stat(path, &st, false, location) == 0 && S_ISDIR(st.st_mode);
}
/*
* Create list of instances in given backup catalog.
*
* Returns parray of "InstanceConfig" structures, filled with
* actual config of each instance.
*/
parray *
catalog_get_instance_list(void)
{
char path[MAXPGPATH];
DIR *dir;
struct dirent *dent;
parray *instances;
instances = parray_new();
/* open directory and list contents */
join_path_components(path, backup_path, BACKUPS_DIR);
dir = opendir(path);
if (dir == NULL)
elog(ERROR, "Cannot open directory \"%s\": %s",
path, strerror(errno));
while (errno = 0, (dent = readdir(dir)) != NULL)
{
char child[MAXPGPATH];
struct stat st;
InstanceConfig *instance;
/* skip entries pointing to the current dir or parent dir */
if (strcmp(dent->d_name, ".") == 0 ||
strcmp(dent->d_name, "..") == 0)
continue;
join_path_components(child, path, dent->d_name);
if (lstat(child, &st) == -1)
elog(ERROR, "Cannot stat file \"%s\": %s",
child, strerror(errno));
if (!S_ISDIR(st.st_mode))
continue;
instance = readInstanceConfigFile(dent->d_name);
parray_append(instances, instance);
}
/* TODO 3.0: switch to ERROR */
if (parray_num(instances) == 0)
elog(WARNING, "This backup catalog contains no backup instances. Backup instance can be added via 'add-instance' command.");
if (errno)
elog(ERROR, "Cannot read directory \"%s\": %s",
path, strerror(errno));
if (closedir(dir))
elog(ERROR, "Cannot close directory \"%s\": %s",
path, strerror(errno));
return instances;
}
/*
* Create list of backups.
* If 'requested_backup_id' is INVALID_BACKUP_ID, return list of all backups.
@ -309,12 +374,16 @@ IsDir(const char *dirpath, const char *entry, fio_location location)
* If valid backup id is passed only matching backup will be added to the list.
*/
parray *
catalog_get_backup_list(time_t requested_backup_id)
catalog_get_backup_list(const char *instance_name, time_t requested_backup_id)
{
DIR *data_dir = NULL;
struct dirent *data_ent = NULL;
parray *backups = NULL;
int i;
char backup_instance_path[MAXPGPATH];
sprintf(backup_instance_path, "%s/%s/%s",
backup_path, BACKUPS_DIR, instance_name);
/* open backup instance backups directory */
data_dir = fio_opendir(backup_instance_path, FIO_BACKUP_HOST);
@ -420,6 +489,7 @@ err_proc:
* Create list of backup datafiles.
* If 'requested_backup_id' is INVALID_BACKUP_ID, exit with error.
* If valid backup id is passed only matching backup will be added to the list.
* TODO this function only used once. Is it really needed?
*/
parray *
get_backup_filelist(pgBackup *backup)
@ -1195,6 +1265,33 @@ pgBackupGetPath2(const pgBackup *backup, char *path, size_t len,
base36enc(backup->start_time), subdir1, subdir2);
}
/*
* Independent of the global variable backup_instance_path.
* Still depends on backup_path.
*/
void
pgBackupGetPathInInstance(const char *instance_name,
const pgBackup *backup, char *path, size_t len,
const char *subdir1, const char *subdir2)
{
char backup_instance_path[MAXPGPATH];
sprintf(backup_instance_path, "%s/%s/%s",
backup_path, BACKUPS_DIR, instance_name);
/* If "subdir1" is NULL do not check "subdir2" */
if (!subdir1)
snprintf(path, len, "%s/%s", backup_instance_path,
base36enc(backup->start_time));
else if (!subdir2)
snprintf(path, len, "%s/%s/%s", backup_instance_path,
base36enc(backup->start_time), subdir1);
/* "subdir1" and "subdir2" is not NULL */
else
snprintf(path, len, "%s/%s/%s/%s", backup_instance_path,
base36enc(backup->start_time), subdir1, subdir2);
}
/*
* Check if multiple backups consider target backup to be their direct parent
*/


@ -312,10 +312,12 @@ do_set_config(bool missing_ok)
}
void
init_config(InstanceConfig *config)
init_config(InstanceConfig *config, const char *instance_name)
{
MemSet(config, 0, sizeof(InstanceConfig));
config->name = pgut_strdup(instance_name);
/*
* Starting from PostgreSQL 11 WAL segment size may vary. Prior to
* PostgreSQL 10 xlog_seg_size is equal to XLOG_SEG_SIZE.
@ -342,6 +344,236 @@ init_config(InstanceConfig *config)
config->remote.proto = (char*)"ssh";
}
/*
* read instance config from file
*/
InstanceConfig *
readInstanceConfigFile(const char *instance_name)
{
char path[MAXPGPATH];
InstanceConfig *instance = pgut_new(InstanceConfig);
char *log_level_console = NULL;
char *log_level_file = NULL;
char *compress_alg = NULL;
int parsed_options;
ConfigOption instance_options[] =
{
/* Instance options */
{
's', 'D', "pgdata",
&instance->pgdata, SOURCE_CMD, 0,
OPTION_INSTANCE_GROUP, 0, option_get_value
},
{
'U', 200, "system-identifier",
&instance->system_identifier, SOURCE_FILE_STRICT, 0,
OPTION_INSTANCE_GROUP, 0, option_get_value
},
#if PG_VERSION_NUM >= 110000
{
'u', 201, "xlog-seg-size",
&instance->xlog_seg_size, SOURCE_FILE_STRICT, 0,
OPTION_INSTANCE_GROUP, 0, option_get_value
},
#endif
{
's', 'E', "external-dirs",
&instance->external_dir_str, SOURCE_CMD, 0,
OPTION_INSTANCE_GROUP, 0, option_get_value
},
/* Connection options */
{
's', 'd', "pgdatabase",
&instance->conn_opt.pgdatabase, SOURCE_CMD, 0,
OPTION_CONN_GROUP, 0, option_get_value
},
{
's', 'h', "pghost",
&instance->conn_opt.pghost, SOURCE_CMD, 0,
OPTION_CONN_GROUP, 0, option_get_value
},
{
's', 'p', "pgport",
&instance->conn_opt.pgport, SOURCE_CMD, 0,
OPTION_CONN_GROUP, 0, option_get_value
},
{
's', 'U', "pguser",
&instance->conn_opt.pguser, SOURCE_CMD, 0,
OPTION_CONN_GROUP, 0, option_get_value
},
/* Replica options */
{
's', 202, "master-db",
&instance->master_conn_opt.pgdatabase, SOURCE_CMD, 0,
OPTION_REPLICA_GROUP, 0, option_get_value
},
{
's', 203, "master-host",
&instance->master_conn_opt.pghost, SOURCE_CMD, 0,
OPTION_REPLICA_GROUP, 0, option_get_value
},
{
's', 204, "master-port",
&instance->master_conn_opt.pgport, SOURCE_CMD, 0,
OPTION_REPLICA_GROUP, 0, option_get_value
},
{
's', 205, "master-user",
&instance->master_conn_opt.pguser, SOURCE_CMD, 0,
OPTION_REPLICA_GROUP, 0, option_get_value
},
{
'u', 206, "replica-timeout",
&instance->replica_timeout, SOURCE_CMD, SOURCE_DEFAULT,
OPTION_REPLICA_GROUP, OPTION_UNIT_S, option_get_value
},
/* Archive options */
{
'u', 207, "archive-timeout",
&instance->archive_timeout, SOURCE_CMD, SOURCE_DEFAULT,
OPTION_ARCHIVE_GROUP, OPTION_UNIT_S, option_get_value
},
/* Logging options */
{
's', 208, "log-level-console",
&log_level_console, SOURCE_CMD, 0,
OPTION_LOG_GROUP, 0, option_get_value
},
{
's', 209, "log-level-file",
&log_level_file, SOURCE_CMD, 0,
OPTION_LOG_GROUP, 0, option_get_value
},
{
's', 210, "log-filename",
&instance->logger.log_filename, SOURCE_CMD, 0,
OPTION_LOG_GROUP, 0, option_get_value
},
{
's', 211, "error-log-filename",
&instance->logger.error_log_filename, SOURCE_CMD, 0,
OPTION_LOG_GROUP, 0, option_get_value
},
{
's', 212, "log-directory",
&instance->logger.log_directory, SOURCE_CMD, 0,
OPTION_LOG_GROUP, 0, option_get_value
},
{
'U', 213, "log-rotation-size",
&instance->logger.log_rotation_size, SOURCE_CMD, SOURCE_DEFAULT,
OPTION_LOG_GROUP, OPTION_UNIT_KB, option_get_value
},
{
'U', 214, "log-rotation-age",
&instance->logger.log_rotation_age, SOURCE_CMD, SOURCE_DEFAULT,
OPTION_LOG_GROUP, OPTION_UNIT_MS, option_get_value
},
/* Retention options */
{
'u', 215, "retention-redundancy",
&instance->retention_redundancy, SOURCE_CMD, 0,
OPTION_RETENTION_GROUP, 0, option_get_value
},
{
'u', 216, "retention-window",
&instance->retention_window, SOURCE_CMD, 0,
OPTION_RETENTION_GROUP, 0, option_get_value
},
/* Compression options */
{
's', 217, "compress-algorithm",
&compress_alg, SOURCE_CMD, 0,
OPTION_LOG_GROUP, 0, option_get_value
},
{
'u', 218, "compress-level",
&instance->compress_level, SOURCE_CMD, 0,
OPTION_COMPRESS_GROUP, 0, option_get_value
},
/* Remote backup options */
{
's', 219, "remote-proto",
&instance->remote.proto, SOURCE_CMD, 0,
OPTION_REMOTE_GROUP, 0, option_get_value
},
{
's', 220, "remote-host",
&instance->remote.host, SOURCE_CMD, 0,
OPTION_REMOTE_GROUP, 0, option_get_value
},
{
's', 221, "remote-port",
&instance->remote.port, SOURCE_CMD, 0,
OPTION_REMOTE_GROUP, 0, option_get_value
},
{
's', 222, "remote-path",
&instance->remote.path, SOURCE_CMD, 0,
OPTION_REMOTE_GROUP, 0, option_get_value
},
{
's', 223, "remote-user",
&instance->remote.user, SOURCE_CMD, 0,
OPTION_REMOTE_GROUP, 0, option_get_value
},
{
's', 224, "ssh-options",
&instance->remote.ssh_options, SOURCE_CMD, 0,
OPTION_REMOTE_GROUP, 0, option_get_value
},
{
's', 225, "ssh-config",
&instance->remote.ssh_config, SOURCE_CMD, 0,
OPTION_REMOTE_GROUP, 0, option_get_value
},
{ 0 }
};
init_config(instance, instance_name);
sprintf(instance->backup_instance_path, "%s/%s/%s",
backup_path, BACKUPS_DIR, instance_name);
canonicalize_path(instance->backup_instance_path);
sprintf(instance->arclog_path, "%s/%s/%s",
backup_path, "wal", instance_name);
canonicalize_path(instance->arclog_path);
join_path_components(path, instance->backup_instance_path,
BACKUP_CATALOG_CONF_FILE);
if (fio_access(path, F_OK, FIO_BACKUP_HOST) != 0)
{
elog(WARNING, "Control file \"%s\" doesn't exist", path);
pfree(instance);
return NULL;
}
parsed_options = config_read_opt(path, instance_options, WARNING, true, true);
if (parsed_options == 0)
{
elog(WARNING, "Control file \"%s\" is empty", path);
pfree(instance);
return NULL;
}
if (log_level_console)
instance->logger.log_level_console = parse_log_level(log_level_console);
if (log_level_file)
instance->logger.log_level_file = parse_log_level(log_level_file);
if (compress_alg)
instance->compress_alg = parse_compress_alg(compress_alg);
return instance;
}
static void
assign_log_level_console(ConfigOption *opt, const char *arg)
{


@ -37,7 +37,7 @@ do_delete(time_t backup_id)
TimeLineID oldest_tli = 0;
/* Get complete list of backups */
backup_list = catalog_get_backup_list(INVALID_BACKUP_ID);
backup_list = catalog_get_backup_list(instance_name, INVALID_BACKUP_ID);
delete_list = parray_new();
@ -133,7 +133,7 @@ int do_retention(void)
backup_merged = false;
/* Get a complete list of backups. */
backup_list = catalog_get_backup_list(INVALID_BACKUP_ID);
backup_list = catalog_get_backup_list(instance_name, INVALID_BACKUP_ID);
if (parray_num(backup_list) == 0)
backup_list_is_empty = true;
@ -634,7 +634,7 @@ do_retention_wal(void)
int i;
/* Get list of backups. */
backup_list = catalog_get_backup_list(INVALID_BACKUP_ID);
backup_list = catalog_get_backup_list(instance_name, INVALID_BACKUP_ID);
if (parray_num(backup_list) == 0)
backup_list_is_empty = true;
@ -697,7 +697,7 @@ delete_backup_files(pgBackup *backup)
* Update STATUS to BACKUP_STATUS_DELETING in preparation for the case which
* the error occurs before deleting all backup files.
*/
write_backup_status(backup, BACKUP_STATUS_DELETING);
write_backup_status(backup, BACKUP_STATUS_DELETING, instance_name);
/* list files to be deleted */
files = parray_new();
@ -853,7 +853,7 @@ do_delete_instance(void)
/* Delete all backups. */
backup_list = catalog_get_backup_list(INVALID_BACKUP_ID);
backup_list = catalog_get_backup_list(instance_name, INVALID_BACKUP_ID);
catalog_lock_backup_list(backup_list, 0, parray_num(backup_list) - 1);


@ -118,11 +118,11 @@ typedef struct TablespaceCreatedList
TablespaceCreatedListCell *tail;
} TablespaceCreatedList;
static int BlackListCompare(const void *str1, const void *str2);
static int pgCompareString(const void *str1, const void *str2);
static char dir_check_file(pgFile *file);
static void dir_list_file_internal(parray *files, pgFile *parent, bool exclude,
bool follow_symlink, parray *black_list,
bool follow_symlink,
int external_dir_num, fio_location location);
static void opt_path_map(ConfigOption *opt, const char *arg,
TablespaceList *list, const char *type);
@ -450,7 +450,7 @@ pgFileCompareSize(const void *f1, const void *f2)
}
static int
BlackListCompare(const void *str1, const void *str2)
pgCompareString(const void *str1, const void *str2)
{
return strcmp(*(char **) str1, *(char **) str2);
}
@ -491,45 +491,6 @@ dir_list_file(parray *files, const char *root, bool exclude, bool follow_symlink
bool add_root, int external_dir_num, fio_location location)
{
pgFile *file;
parray *black_list = NULL;
char path[MAXPGPATH];
join_path_components(path, backup_instance_path, PG_BLACK_LIST);
/* List files with black list */
if (root && instance_config.pgdata &&
strcmp(root, instance_config.pgdata) == 0 &&
fileExists(path, FIO_BACKUP_HOST))
{
FILE *black_list_file = NULL;
char buf[MAXPGPATH * 2];
char black_item[MAXPGPATH * 2];
black_list = parray_new();
black_list_file = fio_open_stream(path, FIO_BACKUP_HOST);
if (black_list_file == NULL)
elog(ERROR, "cannot open black_list: %s", strerror(errno));
while (fgets(buf, lengthof(buf), black_list_file) != NULL)
{
black_item[0] = '\0';
join_path_components(black_item, instance_config.pgdata, buf);
if (black_item[strlen(black_item) - 1] == '\n')
black_item[strlen(black_item) - 1] = '\0';
if (black_item[0] == '#' || black_item[0] == '\0')
continue;
parray_append(black_list, pgut_strdup(black_item));
}
if (ferror(black_list_file))
elog(ERROR, "Failed to read from file: \"%s\"", path);
fio_close_stream(black_list_file);
parray_qsort(black_list, BlackListCompare);
}
file = pgFileNew(root, "", follow_symlink, external_dir_num, location);
if (file == NULL)
@ -553,17 +514,11 @@ dir_list_file(parray *files, const char *root, bool exclude, bool follow_symlink
if (add_root)
parray_append(files, file);
dir_list_file_internal(files, file, exclude, follow_symlink, black_list,
dir_list_file_internal(files, file, exclude, follow_symlink,
external_dir_num, location);
if (!add_root)
pgFileFree(file);
if (black_list)
{
parray_walk(black_list, pfree);
parray_free(black_list);
}
}
#define CHECK_FALSE 0
@ -772,7 +727,7 @@ dir_check_file(pgFile *file)
*/
static void
dir_list_file_internal(parray *files, pgFile *parent, bool exclude,
bool follow_symlink, parray *black_list,
bool follow_symlink,
int external_dir_num, fio_location location)
{
DIR *dir;
@ -829,15 +784,6 @@ dir_list_file_internal(parray *files, pgFile *parent, bool exclude,
continue;
}
/* Skip if the directory is in black_list defined by user */
if (black_list && parray_bsearch(black_list, file->path,
BlackListCompare))
{
elog(LOG, "Skip \"%s\": it is in the user's black list", file->path);
pgFileFree(file);
continue;
}
if (exclude)
{
check_res = dir_check_file(file);
@ -863,7 +809,7 @@ dir_list_file_internal(parray *files, pgFile *parent, bool exclude,
*/
if (S_ISDIR(file->mode))
dir_list_file_internal(files, file, exclude, follow_symlink,
black_list, external_dir_num, location);
external_dir_num, location);
}
if (errno && errno != ENOENT)
@ -1662,7 +1608,7 @@ make_external_directory_list(const char *colon_separated_dirs, bool remap)
p = strtok(NULL, EXTERNAL_DIRECTORY_DELIMITER);
}
pfree(tmp);
parray_qsort(list, BlackListCompare);
parray_qsort(list, pgCompareString);
return list;
}
@ -1690,7 +1636,7 @@ backup_contains_external(const char *dir, parray *dirs_list)
if (!dirs_list) /* There is no external dirs in backup */
return false;
search_result = parray_bsearch(dirs_list, dir, BlackListCompare);
search_result = parray_bsearch(dirs_list, dir, pgCompareString);
return search_result != NULL;
}


@ -166,7 +166,7 @@ help_pg_probackup(void)
printf(_("\n %s show -B backup-path\n"), PROGRAM_NAME);
printf(_(" [--instance=instance_name [-i backup-id]]\n"));
printf(_(" [--format=format]\n"));
printf(_(" [--format=format] [--archive]\n"));
printf(_(" [--help]\n"));
printf(_("\n %s delete -B backup-path --instance=instance_name\n"), PROGRAM_NAME);
@ -543,11 +543,12 @@ help_show(void)
{
printf(_("\n%s show -B backup-path\n"), PROGRAM_NAME);
printf(_(" [--instance=instance_name [-i backup-id]]\n"));
printf(_(" [--format=format]\n\n"));
printf(_(" [--format=format] [--archive]\n\n"));
printf(_(" -B, --backup-path=backup-path location of the backup storage area\n"));
printf(_(" --instance=instance_name show info about specific instance\n"));
printf(_(" -i, --backup-id=backup-id show info about specific backups\n"));
printf(_(" --archive show WAL archive\n"));
printf(_(" --format=format show format=PLAIN|JSON\n\n"));
}


@ -49,21 +49,21 @@ do_init(void)
}
int
do_add_instance(void)
do_add_instance(InstanceConfig *instance)
{
char path[MAXPGPATH];
char arclog_path_dir[MAXPGPATH];
struct stat st;
/* PGDATA is always required */
if (instance_config.pgdata == NULL)
if (instance->pgdata == NULL)
elog(ERROR, "Required parameter not specified: PGDATA "
"(-D, --pgdata)");
/* Read system_identifier from PGDATA */
instance_config.system_identifier = get_system_identifier(instance_config.pgdata);
instance->system_identifier = get_system_identifier(instance->pgdata);
/* Starting from PostgreSQL 11 read WAL segment size from PGDATA */
instance_config.xlog_seg_size = get_xlog_seg_size(instance_config.pgdata);
instance->xlog_seg_size = get_xlog_seg_size(instance->pgdata);
/* Ensure that all root directories already exist */
if (access(backup_path, F_OK) != 0)
@ -78,18 +78,18 @@ do_add_instance(void)
elog(ERROR, "%s directory does not exist.", arclog_path_dir);
/* Create directory for data files of this specific instance */
if (stat(backup_instance_path, &st) == 0 && S_ISDIR(st.st_mode))
elog(ERROR, "instance '%s' already exists", backup_instance_path);
dir_create_dir(backup_instance_path, DIR_PERMISSION);
if (stat(instance->backup_instance_path, &st) == 0 && S_ISDIR(st.st_mode))
elog(ERROR, "instance '%s' already exists", instance->backup_instance_path);
dir_create_dir(instance->backup_instance_path, DIR_PERMISSION);
/*
* Create directory for wal files of this specific instance.
* Existence check is extra paranoid because if we don't have such a
* directory in data dir, we shouldn't have it in wal as well.
*/
if (stat(arclog_path, &st) == 0 && S_ISDIR(st.st_mode))
elog(ERROR, "arclog_path '%s' already exists", arclog_path);
dir_create_dir(arclog_path, DIR_PERMISSION);
if (stat(instance->arclog_path, &st) == 0 && S_ISDIR(st.st_mode))
elog(ERROR, "arclog_path '%s' already exists", instance->arclog_path);
dir_create_dir(instance->arclog_path, DIR_PERMISSION);
/*
* Write initial configuration file.
@ -99,9 +99,9 @@ do_add_instance(void)
* We need to manually set options source to save them to the configuration
* file.
*/
config_set_opt(instance_options, &instance_config.system_identifier,
config_set_opt(instance_options, &instance->system_identifier,
SOURCE_FILE);
config_set_opt(instance_options, &instance_config.xlog_seg_size,
config_set_opt(instance_options, &instance->xlog_seg_size,
SOURCE_FILE);
/* pgdata was set through command line */
do_set_config(true);


@ -66,7 +66,7 @@ do_merge(time_t backup_id)
elog(INFO, "Merge started");
/* Get list of all backups sorted in order of descending start time */
backups = catalog_get_backup_list(INVALID_BACKUP_ID);
backups = catalog_get_backup_list(instance_name, INVALID_BACKUP_ID);
/* Find destination backup first */
for (i = 0; i < parray_num(backups); i++)
@ -253,8 +253,8 @@ merge_backups(pgBackup *to_backup, pgBackup *from_backup)
if (from_backup->status == BACKUP_STATUS_DELETING)
goto delete_source_backup;
write_backup_status(to_backup, BACKUP_STATUS_MERGING);
write_backup_status(from_backup, BACKUP_STATUS_MERGING);
write_backup_status(to_backup, BACKUP_STATUS_MERGING, instance_name);
write_backup_status(from_backup, BACKUP_STATUS_MERGING, instance_name);
create_data_directories(files, to_database_path, from_backup_path, false, FIO_BACKUP_HOST);


@ -280,7 +280,7 @@ validate_backup_wal_from_start_to_stop(pgBackup *backup,
* If we don't have WAL between start_lsn and stop_lsn,
* the backup is definitely corrupted. Update its status.
*/
write_backup_status(backup, BACKUP_STATUS_CORRUPT);
write_backup_status(backup, BACKUP_STATUS_CORRUPT, instance_name);
elog(WARNING, "There are not enough WAL records to consistenly restore "
"backup %s from START LSN: %X/%X to STOP LSN: %X/%X",


@ -126,6 +126,7 @@ static bool file_overwrite = false;
/* show options */
ShowFormat show_format = SHOW_PLAIN;
bool show_archive = false;
/* current settings */
pgBackup current;
@ -203,6 +204,7 @@ static ConfigOption cmd_options[] =
{ 'b', 152, "overwrite", &file_overwrite, SOURCE_CMD_STRICT },
/* show options */
{ 'f', 153, "format", opt_show_format, SOURCE_CMD_STRICT },
{ 'b', 160, "archive", &show_archive, SOURCE_CMD_STRICT },
/* options for backward compatibility */
{ 's', 136, "time", &target_time, SOURCE_CMD_STRICT },
@ -251,7 +253,7 @@ main(int argc, char *argv[])
pgBackupInit(&current);
/* Initialize current instance configuration */
init_config(&instance_config);
init_config(&instance_config, instance_name);
PROGRAM_NAME = get_progname(argv[0]);
PROGRAM_FULL_PATH = palloc0(MAXPGPATH);
@ -445,10 +447,28 @@ main(int argc, char *argv[])
*/
if ((backup_path != NULL) && instance_name)
{
/*
* Fill global variables used to generate paths inside the instance's
* backup catalog.
* TODO replace global variables with InstanceConfig structure fields
*/
sprintf(backup_instance_path, "%s/%s/%s",
backup_path, BACKUPS_DIR, instance_name);
sprintf(arclog_path, "%s/%s/%s", backup_path, "wal", instance_name);
/*
* Fill InstanceConfig structure fields used to generate paths inside
* the instance's backup catalog.
* TODO continue refactoring to use these fields instead of global vars
*/
sprintf(instance_config.backup_instance_path, "%s/%s/%s",
backup_path, BACKUPS_DIR, instance_name);
canonicalize_path(instance_config.backup_instance_path);
sprintf(instance_config.arclog_path, "%s/%s/%s",
backup_path, "wal", instance_name);
canonicalize_path(instance_config.arclog_path);
/*
* Ensure that the requested backup instance exists
* for all commands except init, which doesn't take this parameter
@ -641,11 +661,13 @@ main(int argc, char *argv[])
switch (backup_subcmd)
{
case ARCHIVE_PUSH_CMD:
return do_archive_push(wal_file_path, wal_file_name, file_overwrite);
return do_archive_push(&instance_config, wal_file_path,
wal_file_name, file_overwrite);
case ARCHIVE_GET_CMD:
return do_archive_get(wal_file_path, wal_file_name);
return do_archive_get(&instance_config,
wal_file_path, wal_file_name);
case ADD_INSTANCE_CMD:
return do_add_instance();
return do_add_instance(&instance_config);
case DELETE_INSTANCE_CMD:
return do_delete_instance();
case INIT_CMD:
@ -682,7 +704,7 @@ main(int argc, char *argv[])
recovery_target_options,
restore_params);
case SHOW_CMD:
return do_show(current.backup_id);
return do_show(instance_name, current.backup_id, show_archive);
case DELETE_CMD:
if (delete_expired && backup_id_string)
elog(ERROR, "You cannot specify --delete-expired and (-i, --backup-id) options together");


@ -58,7 +58,6 @@ extern const char *PROGRAM_EMAIL;
#define BACKUP_CATALOG_PID "backup.pid"
#define DATABASE_FILE_LIST "backup_content.control"
#define PG_BACKUP_LABEL_FILE "backup_label"
#define PG_BLACK_LIST "black_list"
#define PG_TABLESPACE_MAP_FILE "tablespace_map"
#define EXTERNAL_DIR "external_directories/externaldir"
#define DATABASE_MAP "database_map"
@ -227,6 +226,10 @@ typedef struct ConnectionArgs
*/
typedef struct InstanceConfig
{
char *name;
char arclog_path[MAXPGPATH];
char backup_instance_path[MAXPGPATH];
uint64 system_identifier;
uint32 xlog_seg_size;
@ -382,6 +385,29 @@ typedef struct
} backup_files_arg;
typedef struct timelineInfo timelineInfo;
/* struct to collect info about timelines in WAL archive */
struct timelineInfo {
TimeLineID tli; /* this timeline */
TimeLineID parent_tli; /* parent timeline. 0 if none */
timelineInfo *parent_link; /* link to parent timeline */
XLogRecPtr switchpoint; /* if this timeline has a parent
* switchpoint contains switchpoint LSN,
* otherwise 0 */
XLogSegNo begin_segno; /* first present segment in this timeline */
XLogSegNo end_segno; /* last present segment in this timeline */
int n_xlog_files; /* number of segments that actually exist;
* does not include lost segments */
size_t size; /* space on disk taken by regular WAL files */
parray *backups; /* array of pgBackup structures with info
* about backups belonging to this timeline */
parray *lost_segments; /* array of intervals of lost segments */
pgBackup *closest_backup; /* link to backup, closest to timeline */
};
/*
* When copying datafiles to backup we validate and compress them block
* by block. Thus special header is required for each data block.
@ -525,6 +551,7 @@ extern parray *get_dbOid_exclude_list(pgBackup *backup, parray *datname_list,
PartialRestoreType partial_restore_type);
extern parray *get_backup_filelist(pgBackup *backup);
extern parray *read_timeline_history(const char *arclog_path, TimeLineID targetTLI);
/* in merge.c */
extern void do_merge(time_t backup_id);
@ -534,21 +561,22 @@ extern parray *read_database_map(pgBackup *backup);
/* in init.c */
extern int do_init(void);
extern int do_add_instance(void);
extern int do_add_instance(InstanceConfig *instance);
/* in archive.c */
extern int do_archive_push(char *wal_file_path, char *wal_file_name,
bool overwrite);
extern int do_archive_get(char *wal_file_path, char *wal_file_name);
extern int do_archive_push(InstanceConfig *instance, char *wal_file_path,
char *wal_file_name, bool overwrite);
extern int do_archive_get(InstanceConfig *instance, char *wal_file_path,
char *wal_file_name);
/* in configure.c */
extern void do_show_config(void);
extern void do_set_config(bool missing_ok);
extern void init_config(InstanceConfig *config);
extern void init_config(InstanceConfig *config, const char *instance_name);
extern InstanceConfig *readInstanceConfigFile(const char *instance_name);
/* in show.c */
extern int do_show(time_t requested_backup_id);
extern int do_show(const char *instance_name, time_t requested_backup_id, bool show_archive);
/* in delete.c */
extern void do_delete(time_t backup_id);
@ -573,15 +601,17 @@ extern void pgBackupValidate(pgBackup* backup, pgRestoreParams *params);
extern int do_validate_all(void);
/* in catalog.c */
extern pgBackup *read_backup(time_t timestamp);
extern pgBackup *read_backup(const char *instance_name, time_t timestamp);
extern void write_backup(pgBackup *backup);
extern void write_backup_status(pgBackup *backup, BackupStatus status);
extern void write_backup_status(pgBackup *backup, BackupStatus status,
const char *instance_name);
extern void write_backup_data_bytes(pgBackup *backup);
extern bool lock_backup(pgBackup *backup);
extern const char *pgBackupGetBackupMode(pgBackup *backup);
extern parray *catalog_get_backup_list(time_t requested_backup_id);
extern parray *catalog_get_instance_list(void);
extern parray *catalog_get_backup_list(const char *instance_name, time_t requested_backup_id);
extern void catalog_lock_backup_list(parray *backup_list, int from_idx,
int to_idx);
extern pgBackup *catalog_get_last_data_backup(parray *backup_list,
@ -595,6 +625,9 @@ extern void pgBackupGetPath(const pgBackup *backup, char *path, size_t len,
const char *subdir);
extern void pgBackupGetPath2(const pgBackup *backup, char *path, size_t len,
const char *subdir1, const char *subdir2);
extern void pgBackupGetPathInInstance(const char *instance_name,
const pgBackup *backup, char *path, size_t len,
const char *subdir1, const char *subdir2);
extern int pgBackupCreateDir(pgBackup *backup);
extern void pgNodeInit(PGNodeInfo *node);
extern void pgBackupInit(pgBackup *backup);
@ -621,7 +654,8 @@ extern const char* deparse_compress_alg(int alg);
/* in dir.c */
extern void dir_list_file(parray *files, const char *root, bool exclude,
bool follow_symlink, bool add_root, int external_dir_num, fio_location location);
bool follow_symlink, bool add_root,
int external_dir_num, fio_location location);
extern void create_data_directories(parray *dest_files,
const char *data_dir,


@ -42,7 +42,6 @@ static void create_recovery_conf(time_t backup_id,
pgRecoveryTarget *rt,
pgBackup *backup,
pgRestoreParams *params);
static parray *read_timeline_history(TimeLineID targetTLI);
static void *restore_files(void *arg);
static void set_orphan_status(parray *backups, pgBackup *parent_backup);
@ -70,7 +69,7 @@ set_orphan_status(parray *backups, pgBackup *parent_backup)
if (backup->status == BACKUP_STATUS_OK ||
backup->status == BACKUP_STATUS_DONE)
{
write_backup_status(backup, BACKUP_STATUS_ORPHAN);
write_backup_status(backup, BACKUP_STATUS_ORPHAN, instance_name);
elog(WARNING,
"Backup %s is orphaned because his parent %s has status: %s",
@ -125,7 +124,7 @@ do_restore_or_validate(time_t target_backup_id, pgRecoveryTarget *rt,
elog(LOG, "%s begin.", action);
/* Get list of all backups sorted in order of descending start time */
backups = catalog_get_backup_list(INVALID_BACKUP_ID);
backups = catalog_get_backup_list(instance_name, INVALID_BACKUP_ID);
/* Find backup range we should restore or validate. */
while ((i < parray_num(backups)) && !dest_backup)
@ -196,7 +195,7 @@ do_restore_or_validate(time_t target_backup_id, pgRecoveryTarget *rt,
elog(LOG, "target timeline ID = %u", rt->target_tli);
/* Read timeline history files from archives */
timelines = read_timeline_history(rt->target_tli);
timelines = read_timeline_history(arclog_path, rt->target_tli);
if (!satisfy_timeline(timelines, current_backup))
{
@ -273,7 +272,7 @@ do_restore_or_validate(time_t target_backup_id, pgRecoveryTarget *rt,
if (backup->status == BACKUP_STATUS_OK ||
backup->status == BACKUP_STATUS_DONE)
{
write_backup_status(backup, BACKUP_STATUS_ORPHAN);
write_backup_status(backup, BACKUP_STATUS_ORPHAN, instance_name);
elog(WARNING, "Backup %s is orphaned because his parent %s is missing",
base36enc(backup->start_time), missing_backup_id);
@ -920,7 +919,7 @@ create_recovery_conf(time_t backup_id,
* based on readTimeLineHistory() in timeline.c
*/
parray *
read_timeline_history(TimeLineID targetTLI)
read_timeline_history(const char *arclog_path, TimeLineID targetTLI)
{
parray *result;
char path[MAXPGPATH];

src/show.c: 1004 changed lines (diff suppressed because it is too large)


@ -69,7 +69,7 @@ pgBackupValidate(pgBackup *backup, pgRestoreParams *params)
{
elog(WARNING, "Backup %s has status %s, change it to ERROR and skip validation",
base36enc(backup->start_time), status2str(backup->status));
write_backup_status(backup, BACKUP_STATUS_ERROR);
write_backup_status(backup, BACKUP_STATUS_ERROR, instance_name);
corrupted_backup_found = true;
return;
}
@ -167,7 +167,7 @@ pgBackupValidate(pgBackup *backup, pgRestoreParams *params)
/* Update backup status */
write_backup_status(backup, corrupted ? BACKUP_STATUS_CORRUPT :
BACKUP_STATUS_OK);
BACKUP_STATUS_OK, instance_name);
if (corrupted)
elog(WARNING, "Backup %s data files are corrupted", base36enc(backup->start_time));
@ -426,7 +426,7 @@ do_validate_instance(void)
elog(INFO, "Validate backups of the instance '%s'", instance_name);
/* Get list of all backups sorted in order of descending start time */
backups = catalog_get_backup_list(INVALID_BACKUP_ID);
backups = catalog_get_backup_list(instance_name, INVALID_BACKUP_ID);
/* Examine backups one by one and validate them */
for (i = 0; i < parray_num(backups); i++)
@ -456,7 +456,7 @@ do_validate_instance(void)
if (current_backup->status == BACKUP_STATUS_OK ||
current_backup->status == BACKUP_STATUS_DONE)
{
write_backup_status(current_backup, BACKUP_STATUS_ORPHAN);
write_backup_status(current_backup, BACKUP_STATUS_ORPHAN, instance_name);
elog(WARNING, "Backup %s is orphaned because his parent %s is missing",
base36enc(current_backup->start_time),
parent_backup_id);
@ -480,7 +480,7 @@ do_validate_instance(void)
if (current_backup->status == BACKUP_STATUS_OK ||
current_backup->status == BACKUP_STATUS_DONE)
{
write_backup_status(current_backup, BACKUP_STATUS_ORPHAN);
write_backup_status(current_backup, BACKUP_STATUS_ORPHAN, instance_name);
elog(WARNING, "Backup %s is orphaned because his parent %s has status: %s",
base36enc(current_backup->start_time), backup_id,
status2str(tmp_backup->status));
@ -553,7 +553,7 @@ do_validate_instance(void)
if (backup->status == BACKUP_STATUS_OK ||
backup->status == BACKUP_STATUS_DONE)
{
write_backup_status(backup, BACKUP_STATUS_ORPHAN);
write_backup_status(backup, BACKUP_STATUS_ORPHAN, instance_name);
elog(WARNING, "Backup %s is orphaned because his parent %s has status: %s",
base36enc(backup->start_time),


@ -7,6 +7,7 @@ from datetime import datetime, timedelta
import subprocess
from sys import exit
from time import sleep
from distutils.dir_util import copy_tree
module_name = 'archive'
@ -263,8 +264,7 @@ class ArchiveTest(ProbackupTest, unittest.TestCase):
log_content)
else:
self.assertIn(
"ERROR: Switched WAL segment 000000010000000000000002 "
"could not be archived in 60 seconds",
"ERROR: WAL segment 000000010000000000000002 could not be archived in 60 seconds",
log_content)
log_file = os.path.join(node.logs_dir, 'postgresql.log')
@ -1115,3 +1115,349 @@ class ArchiveTest(ProbackupTest, unittest.TestCase):
# Clean after yourself
pg_receivexlog.kill()
self.del_test_dir(module_name, fname)
# @unittest.expectedFailure
# @unittest.skip("skip")
def test_archive_catalog(self):
"""
ARCHIVE replica:
t6 |-----------------------
t5 | |-------
| |
t4 | |--------------
| |
t3 | |--B1--|/|--B2-|/|-B3---
| |
t2 |--A1--------A2---
t1 ---------Y1--Y2--
ARCHIVE master:
t1 -Z1--Z2---
"""
fname = self.id().split('.')[3]
backup_dir = os.path.join(self.tmp_path, module_name, fname, 'backup')
master = self.make_simple_node(
base_dir=os.path.join(module_name, fname, 'master'),
set_replication=True,
initdb_params=['--data-checksums'],
pg_options={
'archive_timeout': '30s',
'checkpoint_timeout': '30s',
'autovacuum': 'off'})
self.init_pb(backup_dir)
self.add_instance(backup_dir, 'master', master)
self.set_archiving(backup_dir, 'master', master)
master.slow_start()
# FULL
master.safe_psql(
"postgres",
"create table t_heap as select i as id, md5(i::text) as text, "
"md5(repeat(i::text,10))::tsvector as tsvector "
"from generate_series(0,10000) i")
self.backup_node(backup_dir, 'master', master)
# PAGE
master.safe_psql(
"postgres",
"insert into t_heap select i as id, md5(i::text) as text, "
"md5(repeat(i::text,10))::tsvector as tsvector "
"from generate_series(10000,20000) i")
self.backup_node(
backup_dir, 'master', master, backup_type='page')
replica = self.make_simple_node(
base_dir=os.path.join(module_name, fname, 'replica'))
replica.cleanup()
self.restore_node(backup_dir, 'master', replica)
self.set_replica(master, replica)
self.add_instance(backup_dir, 'replica', replica)
self.set_archiving(backup_dir, 'replica', replica, replica=True)
copy_tree(
os.path.join(backup_dir, 'wal', 'master'),
os.path.join(backup_dir, 'wal', 'replica'))
# Check data correctness on replica
replica.slow_start(replica=True)
# FULL backup replica
Y1 = self.backup_node(
backup_dir, 'replica', replica,
options=['--stream', '--archive-timeout=60s'])
master.pgbench_init(scale=5)
# PAGE backup replica
Y2 = self.backup_node(
backup_dir, 'replica', replica,
backup_type='page', options=['--stream', '--archive-timeout=60s'])
# create timeline t2
replica.promote()
# do checkpoint to increment timeline ID in pg_control
replica.safe_psql(
'postgres',
'CHECKPOINT')
# FULL backup replica
A1 = self.backup_node(
backup_dir, 'replica', replica)
replica.pgbench_init(scale=5)
replica.safe_psql(
'postgres',
"CREATE TABLE t1 (a text)")
target_xid = None
with replica.connect("postgres") as con:
res = con.execute(
"INSERT INTO t1 VALUES ('inserted') RETURNING (xmin)")
con.commit()
target_xid = res[0][0]
# DELTA backup replica
A2 = self.backup_node(
backup_dir, 'replica', replica, backup_type='delta')
# create timeline t3
replica.cleanup()
self.restore_node(
backup_dir, 'replica', replica,
options=[
'--recovery-target-xid={0}'.format(target_xid),
'--recovery-target-timeline=2',
'--recovery-target-action=promote'])
replica.slow_start()
B1 = self.backup_node(
backup_dir, 'replica', replica)
replica.pgbench_init(scale=2)
B2 = self.backup_node(
backup_dir, 'replica', replica, backup_type='page')
replica.pgbench_init(scale=2)
target_xid = None
with replica.connect("postgres") as con:
res = con.execute(
"INSERT INTO t1 VALUES ('inserted') RETURNING (xmin)")
con.commit()
target_xid = res[0][0]
B3 = self.backup_node(
backup_dir, 'replica', replica, backup_type='page')
replica.pgbench_init(scale=2)
# create timeline t4
replica.cleanup()
self.restore_node(
backup_dir, 'replica', replica,
options=[
'--recovery-target-xid={0}'.format(target_xid),
'--recovery-target-timeline=3',
'--recovery-target-action=promote'])
replica.slow_start()
replica.safe_psql(
'postgres',
'CREATE TABLE '
't2 as select i, '
'repeat(md5(i::text),5006056) as fat_attr '
'from generate_series(0,6) i')
target_xid = None
with replica.connect("postgres") as con:
res = con.execute(
"INSERT INTO t1 VALUES ('inserted') RETURNING (xmin)")
con.commit()
target_xid = res[0][0]
replica.safe_psql(
'postgres',
'CREATE TABLE '
't3 as select i, '
'repeat(md5(i::text),5006056) as fat_attr '
'from generate_series(0,10) i')
# create timeline t5
replica.cleanup()
self.restore_node(
backup_dir, 'replica', replica,
options=[
'--recovery-target-xid={0}'.format(target_xid),
'--recovery-target-timeline=4',
'--recovery-target-action=promote'])
replica.slow_start()
replica.safe_psql(
'postgres',
'CREATE TABLE '
't4 as select i, '
'repeat(md5(i::text),5006056) as fat_attr '
'from generate_series(0,6) i')
# create timeline t6
replica.cleanup()
self.restore_node(
backup_dir, 'replica', replica, backup_id=A1,
options=[
'--recovery-target=immediate',
'--recovery-target-action=promote'])
replica.slow_start()
replica.pgbench_init(scale=2)
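# by now the replica archive covers six timelines; smoke-test the plain-text output, then parse the json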
self.show_archive(backup_dir, as_text=True)
show = self.show_archive(backup_dir)
for instance in show:
if instance['instance'] == 'replica':
replica_timelines = instance['timelines']
if instance['instance'] == 'master':
master_timelines = instance['timelines']
# check that all timelines are ok
for timeline in replica_timelines:
self.assertTrue(timeline['status'], 'OK')
# check that all timelines are ok
for timeline in master_timelines:
self.assertTrue(timeline['status'], 'OK')
# create holes in t3
wals_dir = os.path.join(backup_dir, 'wal', 'replica')
wals = sorted(
f for f in os.listdir(wals_dir)
if os.path.isfile(os.path.join(wals_dir, f))
and not f.endswith('.backup')
and not f.endswith('.history')
and f.startswith('00000003'))
# check that t3 is ok
self.show_archive(backup_dir)
for segno in (
'000000030000000000000012',
'000000030000000000000013',
'000000030000000000000017'):
file = os.path.join(backup_dir, 'wal', 'replica', segno)
if self.archive_compress:
file = file + '.gz'
os.remove(file)
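# segments 0x12-0x13 and 0x17 of timeline 3 are now missing, which should yield two lost-segment ranges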
# check that t3 is not OK
show = self.show_archive(backup_dir)
for instance in show:
if instance['instance'] == 'replica':
replica_timelines = instance['timelines']
# sanity
timelines_by_tli = {tl['tli']: tl for tl in replica_timelines}
timeline_1 = timelines_by_tli[1]
timeline_2 = timelines_by_tli[2]
timeline_3 = timelines_by_tli[3]
timeline_4 = timelines_by_tli[4]
timeline_5 = timelines_by_tli[5]
timeline_6 = timelines_by_tli[6]
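# timeline 3 must be DEGRADED because of the removed segments; every other timeline stays OK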
self.assertEqual(timeline_6['status'], "OK")
self.assertEqual(timeline_5['status'], "OK")
self.assertEqual(timeline_4['status'], "OK")
self.assertEqual(timeline_3['status'], "DEGRADED")
self.assertEqual(timeline_2['status'], "OK")
self.assertEqual(timeline_1['status'], "OK")
self.assertEqual(len(timeline_3['lost-segments']), 2)
self.assertEqual(timeline_3['lost-segments'][0]['begin-segno'], '0000000000000012')
self.assertEqual(timeline_3['lost-segments'][0]['end-segno'], '0000000000000013')
self.assertEqual(timeline_3['lost-segments'][1]['begin-segno'], '0000000000000017')
self.assertEqual(timeline_3['lost-segments'][1]['end-segno'], '0000000000000017')
self.assertEqual(len(timeline_6['backups']), 0)
self.assertEqual(len(timeline_5['backups']), 0)
self.assertEqual(len(timeline_4['backups']), 0)
self.assertEqual(len(timeline_3['backups']), 3)
self.assertEqual(len(timeline_2['backups']), 2)
self.assertEqual(len(timeline_1['backups']), 2)
# check closest backup correctness
self.assertEqual(timeline_6['closest-backup-id'], A1)
self.assertEqual(timeline_5['closest-backup-id'], B2)
self.assertEqual(timeline_4['closest-backup-id'], B2)
self.assertEqual(timeline_3['closest-backup-id'], A1)
self.assertEqual(timeline_2['closest-backup-id'], Y2)
# check parent tli correctness
self.assertEqual(timeline_6['parent-tli'], 2)
self.assertEqual(timeline_5['parent-tli'], 4)
self.assertEqual(timeline_4['parent-tli'], 3)
self.assertEqual(timeline_3['parent-tli'], 2)
self.assertEqual(timeline_2['parent-tli'], 1)
self.assertEqual(timeline_1['parent-tli'], 0)
self.del_test_dir(module_name, fname)
# important: the switchpoint may be a NullOffset LSN whose segment does not actually exist in the archive,
# so any validation code must handle that case
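# a minimal sketch of such a check (illustrative only): assumes the
# 'switchpoint' field of the 'show --archive' json and a hypothetical
# segno_from_lsn() helper
#
# def switchpoint_is_covered(timeline, archived_segments):
#     if timeline['switchpoint'] == '0/0':  # NullOffset: nothing to look up
#         return True
#     return segno_from_lsn(timeline['switchpoint']) in archived_segments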
# change wal-seg-size
#
#
#t3 ----------------
# /
#t2 ----------------
# /
#t1 -A--------
#
View File
@ -3,6 +3,8 @@ import os
from time import sleep
from .helpers.ptrack_helpers import ProbackupTest, ProbackupException
import shutil
+ from distutils.dir_util import copy_tree
+ from testgres import ProcessType
module_name = 'backup'
@ -2014,21 +2016,32 @@ class BackupTest(ProbackupTest, unittest.TestCase):
self.add_instance(backup_dir, 'replica', replica)
self.set_archiving(backup_dir, 'replica', replica, replica=True)
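# copy the node's archived WAL into the replica's archive so the replica instance sees the full history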
copy_tree(
os.path.join(backup_dir, 'wal', 'node'),
os.path.join(backup_dir, 'wal', 'replica'))
replica.slow_start(replica=True)
# freeze bgwriter to get rid of RUNNING XACTS records
bgwriter_pid = node.auxiliary_pids[ProcessType.BackgroundWriter][0]
gdb_bgwriter = self.gdb_attach(bgwriter_pid)
# FULL backup from replica
self.backup_node(
backup_dir, 'replica', replica,
datname='backupdb', options=['--stream', '-U', 'backup'])
self.switch_wal_segment(node)
self.backup_node(
backup_dir, 'replica', replica, datname='backupdb',
- options=['-U', 'backup', '--log-level-file=verbose'])
+ options=['-U', 'backup', '--log-level-file=verbose', '--archive-timeout=100s'])
# PAGE backup from replica
self.backup_node(
backup_dir, 'replica', replica, backup_type='page',
- datname='backupdb', options=['-U', 'backup'])
+ datname='backupdb', options=['-U', 'backup', '--archive-timeout=100s'])
self.backup_node(
backup_dir, 'replica', replica, backup_type='page',
datname='backupdb', options=['--stream', '-U', 'backup'])
@ -2036,7 +2049,7 @@ class BackupTest(ProbackupTest, unittest.TestCase):
# DELTA backup from replica
self.backup_node(
backup_dir, 'replica', replica, backup_type='delta',
- datname='backupdb', options=['-U', 'backup'])
+ datname='backupdb', options=['-U', 'backup', '--archive-timeout=100s'])
self.backup_node(
backup_dir, 'replica', replica, backup_type='delta',
datname='backupdb', options=['--stream', '-U', 'backup'])
View File
@ -1275,7 +1275,7 @@ class DeltaTest(ProbackupTest, unittest.TestCase):
content = f.read()
self.assertIn(
"LOG: File: {0} blknum 1, empty page".format(file),
"VERBOSE: File: {0} blknum 1, empty page".format(file),
content)
self.assertNotIn(
"Skipping blknum 1 in file: {0}".format(file),
View File
@ -947,6 +947,39 @@ class ProbackupTest(object):
return specific_record
def show_archive(
self, backup_dir, instance=None, options=[],
as_text=False, as_json=True, old_binary=False
):
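"""Run 'show --archive' against the catalog; return parsed json by default, raw text when as_text=True."""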
cmd_list = [
'show',
'--archive',
'-B', backup_dir,
]
if instance:
cmd_list += ['--instance={0}'.format(instance)]
# ACHTUNG: WARNING-level console messages will break json parsing
if as_json:
cmd_list += ['--format=json', '--log-level-console=error']
if as_text:
# the caller is responsible for printing the returned text
return self.run_pb(cmd_list + options, old_binary=old_binary)
if as_json:
return json.loads(
self.run_pb(cmd_list + options, old_binary=old_binary))
# neither format was requested: dump the raw output and bail out
show_splitted = self.run_pb(
cmd_list + options, old_binary=old_binary).splitlines()
print(show_splitted)
exit(1)
def validate_pb(
self, backup_dir, instance=None,
backup_id=None, options=[], old_binary=False, gdb=False
View File
@ -100,7 +100,8 @@ class OptionTest(ProbackupTest, unittest.TestCase):
repr(self.output), self.cmd))
except ProbackupException as e:
self.assertIn(
- 'ERROR: You must specify at least one of the delete options: --expired |--wal |--merge-expired |--delete-invalid |--backup_id',
+ 'ERROR: You must specify at least one of the delete options: '
+ '--delete-expired |--delete-wal |--merge-expired |(-i, --backup-id)',
e.message,
'\n Unexpected Error Message: {0}\n CMD: {1}'.format(repr(e.message), self.cmd))
View File
@ -696,8 +696,8 @@ class PageBackupTest(ProbackupTest, unittest.TestCase):
self.output, self.cmd))
except ProbackupException as e:
self.assertTrue(
- 'INFO: Wait for LSN' in e.message and
- 'in archived WAL segment' in e.message and
+ 'INFO: Wait for WAL segment' in e.message and
+ 'to be archived' in e.message and
'Could not read WAL record at' in e.message and
'is absent' in e.message,
'\n Unexpected Error Message: {0}\n CMD: {1}'.format(
@ -721,8 +721,8 @@ class PageBackupTest(ProbackupTest, unittest.TestCase):
self.output, self.cmd))
except ProbackupException as e:
self.assertTrue(
- 'INFO: Wait for LSN' in e.message and
- 'in archived WAL segment' in e.message and
+ 'INFO: Wait for WAL segment' in e.message and
+ 'to be archived' in e.message and
'Could not read WAL record at' in e.message and
'is absent' in e.message,
'\n Unexpected Error Message: {0}\n CMD: {1}'.format(
@ -811,8 +811,8 @@ class PageBackupTest(ProbackupTest, unittest.TestCase):
self.output, self.cmd))
except ProbackupException as e:
self.assertTrue(
- 'INFO: Wait for LSN' in e.message and
- 'in archived WAL segment' in e.message and
+ 'INFO: Wait for WAL segment' in e.message and
+ 'to be archived' in e.message and
'Could not read WAL record at' in e.message and
'incorrect resource manager data checksum in record at' in e.message and
'Possible WAL corruption. Error has occured during reading WAL segment' in e.message,
@ -836,8 +836,8 @@ class PageBackupTest(ProbackupTest, unittest.TestCase):
self.output, self.cmd))
except ProbackupException as e:
self.assertTrue(
- 'INFO: Wait for LSN' in e.message and
- 'in archived WAL segment' in e.message and
+ 'INFO: Wait for WAL segment' in e.message and
+ 'to be archived' in e.message and
'Could not read WAL record at' in e.message and
'incorrect resource manager data checksum in record at' in e.message and
'Possible WAL corruption. Error has occured during reading WAL segment "{0}"'.format(
@ -933,8 +933,8 @@ class PageBackupTest(ProbackupTest, unittest.TestCase):
self.output, self.cmd))
except ProbackupException as e:
self.assertTrue(
- 'INFO: Wait for LSN' in e.message and
- 'in archived WAL segment' in e.message and
+ 'INFO: Wait for WAL segment' in e.message and
+ 'to be archived' in e.message and
'Could not read WAL record at' in e.message and
'WAL file is from different database system: WAL file database system identifier is' in e.message and
'pg_control database system identifier is' in e.message and
View File
@ -126,7 +126,7 @@ class BugTest(ProbackupTest, unittest.TestCase):
'recovery.conf', "recovery_target = 'immediate'")
replica.append_conf(
'recovery.conf', "recovery_target_action = 'promote'")
- replica.slow_start()
+ replica.slow_start(replica=True)
if self.get_version(node) < 100000:
script = '''
View File
@ -57,8 +57,8 @@ class ArchiveCheck(ProbackupTest, unittest.TestCase):
except ProbackupException as e:
self.assertTrue(
'INFO: Wait for WAL segment' in e.message and
- 'ERROR: Switched WAL segment' in e.message and
- 'could not be archived' in e.message,
+ 'ERROR: WAL segment' in e.message and
+ 'could not be archived in 10 seconds' in e.message,
'\n Unexpected Error Message: {0}\n CMD: {1}'.format(
repr(e.message), self.cmd))
View File
@ -13,6 +13,36 @@ module_name = 'validate'
class ValidateTest(ProbackupTest, unittest.TestCase):
# @unittest.skip("skip")
# @unittest.expectedFailure
def test_validate_all_empty_catalog(self):
"""
"""
fname = self.id().split('.')[3]
node = self.make_simple_node(
base_dir=os.path.join(module_name, fname, 'node'),
initdb_params=['--data-checksums'])
backup_dir = os.path.join(self.tmp_path, module_name, fname, 'backup')
self.init_pb(backup_dir)
try:
self.validate_pb(backup_dir)
self.assertEqual(
1, 0,
"Expecting Error because backup_dir is empty.\n "
"Output: {0} \n CMD: {1}".format(
repr(self.output), self.cmd))
except ProbackupException as e:
self.assertIn(
'ERROR: This backup catalog contains no backup instances',
e.message,
'\n Unexpected Error Message: {0}\n CMD: {1}'.format(
repr(e.message), self.cmd))
# Clean after yourself
self.del_test_dir(module_name, fname)
# @unittest.skip("skip")
# @unittest.expectedFailure
def test_basic_validate_nullified_heap_page_backup(self):
@ -843,11 +873,18 @@ class ValidateTest(ProbackupTest, unittest.TestCase):
backup_dir, 'node', node, backup_type='page')
# PAGE4
+ node.safe_psql(
+ "postgres",
+ "insert into t_heap select i as id, md5(i::text) as text, "
+ "md5(repeat(i::text,10))::tsvector as tsvector "
+ "from generate_series(20000,30000) i")
target_xid = node.safe_psql(
"postgres",
"insert into t_heap select i as id, md5(i::text) as text, "
"md5(repeat(i::text,10))::tsvector as tsvector "
"from generate_series(20000,30000) i RETURNING (xmin)")[0][0]
"from generate_series(30001, 30001) i RETURNING (xmin)").rstrip()
backup_id_5 = self.backup_node(
backup_dir, 'node', node, backup_type='page')
@ -899,8 +936,7 @@ class ValidateTest(ProbackupTest, unittest.TestCase):
self.validate_pb(
backup_dir, 'node',
options=[
- '-i', backup_id_4, '--xid={0}'.format(target_xid),
- "-j", "4"])
+ '-i', backup_id_4, '--xid={0}'.format(target_xid), "-j", "4"])
self.assertEqual(
1, 0,
"Expecting Error because of data files corruption.\n "
@ -3599,7 +3635,9 @@ class ValidateTest(ProbackupTest, unittest.TestCase):
"md5(repeat(i::text,10))::tsvector as tsvector "
"from generate_series(0,100) i")
- gdb = self.backup_node(backup_dir, 'node', node, gdb=True)
+ gdb = self.backup_node(
+ backup_dir, 'node', node,
+ options=['--log-level-console=LOG'], gdb=True)
gdb.set_breakpoint('pg_stop_backup')
gdb.run_until_break()
@ -3614,6 +3652,8 @@ class ValidateTest(ProbackupTest, unittest.TestCase):
self.show_pb(backup_dir, 'node', backup_id)['status'],
'Backup STATUS should be "ERROR"')
self.switch_wal_segment(node)
target_lsn = self.show_pb(backup_dir, 'node', backup_id)['start-lsn']
self.validate_pb(