Introduction

pgBackRest aims to be a simple, reliable backup and restore system that can seamlessly scale up to the largest databases and workloads.

Instead of relying on traditional backup tools like tar and rsync, pgBackRest implements all backup features internally and uses a custom protocol for communicating with remote systems. Removing the reliance on tar and rsync allows better solutions to database-specific backup challenges. The custom remote protocol allows for more flexibility and limits the types of connections required to perform a backup, which increases security.

v{[version]} is the current stable release.

Features
Multithreaded Backup & Restore

Compression is usually the bottleneck during backup operations, but even with now-ubiquitous multi-core servers, most database backup solutions are still single-threaded. pgBackRest solves the compression bottleneck with multithreading.

Utilizing multiple cores for compression makes it possible to achieve 1TB/hr raw throughput even on a 1Gb/s link. More cores and a larger pipe lead to even higher throughput.
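The idea can be sketched in a few lines: compress many files concurrently instead of one at a time. This is an illustrative sketch, not pgBackRest's implementation; the helper names (`compress_file`, `compress_all`) and the use of threads are assumptions made for the example.

```python
import gzip
from concurrent.futures import ThreadPoolExecutor

def compress_file(path):
    """Compress one file to <path>.gz and return the new path."""
    out = path + ".gz"
    with open(path, "rb") as src, gzip.open(out, "wb") as dst:
        dst.write(src.read())
    return out

def compress_all(paths, workers=4):
    """Compress a set of files concurrently. zlib releases the GIL,
    so threads stand in here for the per-core workers a real backup
    tool would use."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_file, paths))
```

With one worker per core, total backup time approaches the time needed to compress the single largest file rather than the sum of all files.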

Local or Remote Operation

A custom protocol allows pgBackRest to back up, restore, and archive locally or remotely via SSH with minimal configuration. An interface to query PostgreSQL is also provided via the protocol layer so that remote access to PostgreSQL is never required, which enhances security.

Full, Incremental, & Differential Backups

Full, differential, and incremental backups are supported. pgBackRest is not susceptible to the time-resolution issues of rsync, making differential and incremental backups completely safe.

Backup Rotation & Archive Expiration

Retention policies can be set for full and differential backups to create coverage for any timeframe. The WAL archive can be maintained for all backups or strictly for the most recent backups. In the latter case, the WAL required to make older backups consistent will still be maintained in the archive.
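A minimal sketch of how full-backup retention can drive archive expiration (simplified to full backups only; the `expire` function and its tuple layout are assumptions for illustration, not pgBackRest's actual logic):

```python
def expire(backups, wal_segments, retention_full):
    """Expire backups and WAL under a full-backup retention policy.

    `backups` is a list of (label, start_wal) tuples, oldest first;
    `wal_segments` is a sorted list of WAL segment names, which sort
    lexically in timeline/LSN order.
    """
    keep = backups[-retention_full:]
    oldest_start = keep[0][1]
    # WAL older than the start of the oldest retained backup is no
    # longer needed to make any retained backup consistent.
    kept_wal = [w for w in wal_segments if w >= oldest_start]
    return keep, kept_wal
```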

Backup Integrity

Checksums are calculated for every file in the backup and rechecked during a restore. After a backup finishes copying files, it waits until every WAL segment required to make the backup consistent reaches the repository.
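The recheck step amounts to hashing every restored file and comparing against the manifest. A sketch under assumed names (`sha1_of`, `verify`, and a plain dict manifest are illustrative; they are not pgBackRest's API):

```python
import hashlib
import os

def sha1_of(path):
    """Hash a file in chunks so large files do not load into memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(restore_dir, manifest):
    """Return files that are missing or whose checksum does not match.
    `manifest` maps relative path -> expected hex SHA-1."""
    bad = []
    for rel, expected in manifest.items():
        path = os.path.join(restore_dir, rel)
        if not os.path.exists(path) or sha1_of(path) != expected:
            bad.append(rel)
    return bad
```

An empty result from `verify` means every file in the restore matches what was backed up.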

Backups in the repository are stored in the same format as a standard cluster (including tablespaces). If compression is disabled and hard links are enabled it is possible to snapshot a backup in the repository and bring up a cluster directly on the snapshot. This is advantageous for terabyte-scale databases that are time consuming to restore in the traditional way.

All operations utilize file and directory level fsync to ensure durability.

Backup Resume

An aborted backup can be resumed from the point where it was stopped. Files that were already copied are compared with the checksums in the manifest to ensure integrity. Since this operation can take place entirely on the backup server, it reduces load on the database server and saves time since checksum calculation is faster than compressing and retransmitting data.

Streaming Compression & Checksums

Compression and checksum calculations are performed in stream while files are being copied to the repository, whether the repository is located locally or remotely.
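Single-pass streaming means each chunk is checksummed and compressed as it flows through, so the file is read once. A hedged sketch (the `copy_stream` helper is invented for illustration; gzip and SHA-1 stand in for whatever codecs a real tool uses):

```python
import gzip
import hashlib

def copy_stream(src, dst, chunk_size=65536):
    """Copy the binary stream `src` into `dst`, gzip-compressing and
    checksumming in a single pass. Returns the SHA-1 of the
    uncompressed data."""
    h = hashlib.sha1()
    with gzip.GzipFile(fileobj=dst, mode="wb") as gz:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)   # checksum the raw bytes
            gz.write(chunk)   # compress the same chunk in-stream
    return h.hexdigest()
```

Because both operations share one read of the data, the cost of adding checksums to a compressed copy is close to zero extra I/O.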

If the repository is on a backup server, compression is performed on the database server and files are transmitted in compressed format and simply stored on the backup server. When compression is disabled for the repository, a lower level of compression is still utilized during transmission to make efficient use of available bandwidth while keeping CPU cost to a minimum.

Delta Restore

The manifest contains checksums for every file in the backup so that during a restore it is possible to use these checksums to speed processing enormously. On a delta restore any files not present in the backup are first removed and then checksums are taken for the remaining files. Files that match the backup are left in place and the rest of the files are restored as usual. Since this process is multithreaded, it can lead to a dramatic reduction in restore times.
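The two phases described above can be sketched as follows. This is a simplified illustration, not pgBackRest's code: the `delta_restore` name, the dict manifest, and the `fetch` callback (standing in for pulling a file from the repository) are all assumptions.

```python
import hashlib
import os

def delta_restore(target_dir, manifest, fetch):
    """Restore into target_dir using a manifest of path -> hex SHA-1.
    `fetch(rel)` returns the backed-up contents of one file."""
    # Phase 1: remove files that are not part of the backup.
    for root, _, files in os.walk(target_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.relpath(path, target_dir) not in manifest:
                os.remove(path)
    # Phase 2: checksum remaining files; restore only mismatches.
    restored = []
    for rel, expected in manifest.items():
        path = os.path.join(target_dir, rel)
        if os.path.exists(path):
            actual = hashlib.sha1(open(path, "rb").read()).hexdigest()
            if actual == expected:
                continue  # file already matches the backup, leave it
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(fetch(rel))
        restored.append(rel)
    return restored
```

Only the files in `restored` cost any transfer; everything that already matched is reused in place.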

Advanced Archiving

Dedicated commands are included for both pushing WAL to the archive and retrieving WAL from the archive.

The push command automatically detects WAL segments that are pushed multiple times and de-duplicates when the segment is identical; otherwise an error is raised. The push and get commands both ensure that the database and repository match by comparing PostgreSQL versions and system identifiers. This precludes the possibility of misconfiguring the WAL archive location.
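The de-duplication rule reduces to a checksum comparison at the archive destination. A sketch under assumed names (`archive_push` and its string return values are invented for the example; this is not pgBackRest's interface):

```python
import hashlib
import os

def archive_push(archive_dir, segment_name, data):
    """Push a WAL segment into the archive. Re-pushing an identical
    segment is a harmless no-op; pushing different contents under the
    same name is an error."""
    dest = os.path.join(archive_dir, segment_name)
    if os.path.exists(dest):
        existing = hashlib.sha1(open(dest, "rb").read()).hexdigest()
        incoming = hashlib.sha1(data).hexdigest()
        if existing == incoming:
            return "duplicate"  # identical segment, de-duplicated
        raise RuntimeError(
            "WAL segment %s already exists with different contents" % segment_name)
    with open(dest, "wb") as f:
        f.write(data)
    return "pushed"
```

The retry case matters because PostgreSQL may re-run `archive_command` for a segment that was already archived; only a genuine content conflict should fail.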

Asynchronous archiving allows compression and transfer to be offloaded to another process which maintains a continuous connection to the remote server, improving throughput significantly. This can be a critical feature for databases with extremely high write volume.

Compatibility with PostgreSQL >= 8.3

pgBackRest includes support for PostgreSQL versions down to 8.3, since older versions of PostgreSQL are still regularly utilized.

Getting Started

pgBackRest strives to be easy to configure and operate.

Contributions

Contributions to pgBackRest are always welcome! Code fixes or new features can be submitted via pull requests. Ideas for new features and improvements to existing functionality or documentation can be submitted as issues. You may want to check the Feature Backlog to see if your suggestion has already been submitted. Bug reports should be submitted as issues. Please provide as much information as possible to aid in determining the cause of the problem. You will always receive credit in the change log for your contributions.

Support

pgBackRest is completely free and open source under the MIT license. You may use it for personal or commercial purposes without any restrictions whatsoever. Bug reports are taken very seriously and will be addressed as quickly as possible.

Creating a robust disaster recovery policy with proper replication and backup strategies can be a very complex and daunting task. You may find that you need help during the architecture phase and ongoing support to ensure that your enterprise continues running smoothly. Crunchy Data provides packaged versions of pgBackRest for major operating systems and expert full life-cycle commercial support for pgBackRest and all things PostgreSQL. Crunchy Data is committed to providing open source solutions with no vendor lock-in, ensuring that cross-compatibility with the community version of pgBackRest is always strictly maintained. Please visit Crunchy Data for more information.

Recognition

Primary recognition goes to Stephen Frost for all his valuable advice and criticism during the development of pgBackRest. Crunchy Data has contributed significant time and resources to pgBackRest and continues to actively support development. Resonate also contributed to the development of pgBackRest and allowed early (but well-tested) versions to be installed as their primary backup solution.