Yes, the deduplication technique used by Borg makes sure only the modified parts of the file are stored. Also, we have optional simple sparse file support for extract.
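For example, to have all-zero chunks extracted as holes in a sparse file, pass --sparse to borg extract (repo path and archive name are placeholders):

borg extract --sparse /path/to/repo::my-archive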
If you use non-snapshotting backup tools like Borg to back up virtual machines, then the VMs should be turned off for the duration of the backup. Backing up live VMs can (and will) result in corrupted or inconsistent backup contents: a VM image is just a regular file to Borg with the same issues as regular files when it comes to concurrent reading and writing from the same file.
For backing up live VMs use file system snapshots on the VM host, which establishes crash-consistency for the VM images. This means that with most file systems (that are journaling) the FS will always be fine in the backup (but may need a journal replay to become accessible).
Usually this does not mean that file contents on the VM are consistent, since file contents are normally not journaled. Notable exceptions are ext4 in data=journal mode, ZFS and btrfs (unless nodatacow is used).
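For example, on a Linux host that keeps VM images on an LVM volume, a crash-consistent backup might look like this minimal sketch (volume group, volume, mount point and repo path are all hypothetical):

lvcreate --snapshot --size 1G --name vmdata-snap vg0/vmdata
mount -o ro /dev/vg0/vmdata-snap /mnt/vmdata-snap   # a journaling fs may replay its journal here
borg create /path/to/repo::vm-{now} /mnt/vmdata-snap
umount /mnt/vmdata-snap
lvremove -f vg0/vmdata-snap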
Applications designed with crash-consistency in mind (most relational databases like PostgreSQL, SQLite etc. but also for example Borg repositories) should always be able to recover to a consistent state from a backup created with crash-consistent snapshots (even on ext4 with data=writeback or XFS).
Hypervisor snapshots capturing most of the VM’s state can also be used for backups and can be a better alternative to pure file system based snapshots of the VM’s disk, since no state is lost. Depending on the application this can be the easiest and most reliable way to create application-consistent backups.
Other applications may require a lot of work to reach application-consistency: it's a broad and complex issue that cannot be explained in its entirety here.
Borg doesn’t intend to address these issues due to their huge complexity and platform/software dependency. Combining Borg with the mechanisms provided by the platform (snapshots, hypervisor features) will be the best approach to start tackling them.
Yes, but in order for the deduplication used by Borg to work, it needs to keep a local cache containing checksums of all file chunks already stored in the repository. This cache is stored in ~/.cache/borg/. If Borg detects that a repository has been modified since the local cache was updated it will need to rebuild the cache. This rebuild can be quite time consuming.
So, yes it’s possible. But it will be most efficient if a single repository is only modified from one place. Also keep in mind that Borg will keep an exclusive lock on the repository while creating or deleting archives, which may make simultaneous backups fail.
Yes, you could just copy all the files. Make sure you do that while no backup is running (use borg with-lock ...). So what you get here is this:
- client machine ---borg create---> repo1
- repo1 ---copy---> repo2
There is no special borg command to do the copying, just use cp or rsync if you want to do that.
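For example, to copy with rsync while holding the repository lock (paths are placeholders):

borg with-lock /path/to/repo rsync -av /path/to/repo/ /path/to/repo-copy/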
But think about whether that is really what you want. If something goes wrong in repo1, you will have the same issue in repo2 after the copy.
If you want to have 2 independent backups, it is better to do it like this:
- client machine ---borg create---> repo1
- client machine ---borg create---> repo2
- UNIX domain sockets (because it does not make sense - they are meaningless without the running process that created them and the process needs to recreate them in any case). So, don’t panic if your backup misses a UDS!
- The precise on-disk (or rather: not-on-disk) representation of the holes in a sparse file. Archive creation has no special support for sparse files; holes are backed up as (deduplicated and compressed) runs of zero bytes. Archive extraction has optional support to extract all-zero chunks as holes in a sparse file.
- Some filesystem specific attributes, like btrfs NOCOW, see Support for file metadata.
- For hardlinked symlinks, the hardlinking cannot be archived (and thus the hardlinking will not be done at extraction time). The symlinks will be archived and extracted as non-hardlinked symlinks, see #2379.
A single archive can only reference a limited volume of file/dir metadata, usually corresponding to tens or hundreds of millions of files/dirs. When trying to go beyond that limit, you will get a fatal IntegrityError exception telling you that the (archive) object is too big. An easy workaround is to create multiple archives with fewer items each. See also the Note about archive limitations and #1452.
borg info shows how large (relative to the maximum size) existing archives are.
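An illustrative sketch of that workaround, splitting one huge backup by top-level directory (repo path and archive names are placeholders):

borg create /path/to/repo::home-{now} /home
borg create /path/to/repo::srv-{now} /srv
# each archive now stays well below the per-archive metadata limit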
Yes, Borg supports resuming backups.
During a backup, a special checkpoint archive named <archive-name>.checkpoint is saved every checkpoint interval (the default value for this is 30 minutes), containing all the data backed up until that point.
This checkpoint archive is a valid archive, but it is only a partial backup (not all files that you wanted to backup are contained in it). Having it in the repo until a successful, full backup is completed is useful because it references all the transmitted chunks up to the checkpoint. This means that in case of an interruption, you only need to retransfer the data since the last checkpoint.
If a backup was interrupted, you do not need to take any special steps; just invoke borg create as you always do. You may use the same archive name as in the previous attempt or a different one (e.g. if you always include the current datetime), it does not matter.
Borg always does full single-pass backups, so it will start again from the beginning - but it will be much faster, because some of the data was already stored in the repo (and is still referenced by the checkpoint archive), so it does not need to be transmitted and stored again.
Once your backup has finished successfully, you can delete all <archive-name>.checkpoint archives. If you run borg prune, it will also take care of deleting unneeded checkpoints.
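If 30 minutes is too frequent or not frequent enough, borg create accepts --checkpoint-interval with a value in seconds; a sketch (repo path and name are placeholders):

borg create --checkpoint-interval 600 /path/to/repo::data-{now} /data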
Note: the checkpointing mechanism creates hidden, partial files in an archive, so that checkpoints even work while a big file is being processed. They are named <filename>.borg_part_<N> and all operations usually ignore these files, but you can have them considered by passing the --consider-part-files option. You usually only need that option if you are really desperate (e.g. if you have no completed backup of that file and you'd rather get a partial file extracted than nothing). You do not want to give that option under any normal circumstances.
This is not a problem anymore.
For more details, see If a backup stops mid-way, does the already-backed-up data stay there?.
If you cannot manage to extract the whole big file in one go, you can extract all the part files and manually concatenate them together.
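A minimal sketch of that manual route, assuming a checkpoint archive and a two-part file (all names are hypothetical):

borg extract --consider-part-files /path/to/repo::backup.checkpoint
# concatenate the parts in numeric order
cat bigfile.borg_part_1 bigfile.borg_part_2 > bigfile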
For more details, see If a backup stops mid-way, does the already-backed-up data stay there?.
No, it can't. While that at first sounds like a good idea to defend against defective HDD sectors or SSD flash blocks, dealing with this in a reliable way needs a lot of low-level storage layout information and control that we do not have (and cannot get, even if we wanted to).
So, if you need that, consider RAID or a filesystem that offers redundant storage or just make backups to different locations / different hardware.
See also #225.
Yes. If you want to detect accidental data damage (like bit rot), use the check operation. It will notice corruption using CRCs and hashes. If you also want to be able to detect malicious tampering, use an encrypted repo. It will then be able to check using CRCs and HMACs.
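For example (repo path is a placeholder):

borg check -v /path/to/repo
# some versions also offer --verify-data for a full (slow) verification of all data chunks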
SMR (shingled magnetic recording) hard drives are very different from regular hard drives. Applications have to behave in certain ways or performance will be heavily degraded.
Borg 1.1 ships with default settings suitable for SMR drives, and has been successfully tested on Seagate Archive v2 drives using the ext4 file system.
Some Linux kernel versions between 3.19 and 4.5 had various bugs handling device-managed SMR drives, leading to IO errors, unresponsive drives and unreliable operation in general.
For more details, refer to #2252.
A single error does not necessarily indicate bad hardware or a Borg bug. All hardware exhibits a bit error rate (BER). Hard drives are typically specified as exhibiting less than one error every 12 to 120 TB (one bit error in 10^14 to 10^15 bits). The specification is often called unrecoverable read error rate (URE rate).
Apart from these very rare errors there are two main causes of errors:
Finding defective hardware
Note
Hardware diagnostics are operating system dependent and do not apply universally. The commands shown apply to popular Unix-like systems. Refer to your operating system's manual.
Find the drive containing the repository and use findmnt, mount or lsblk to learn the device path (typically /dev/...) of the drive. Then, smartmontools can retrieve self-diagnostics of the drive in question:
# smartctl -a /dev/sdSomething
The Offline_Uncorrectable, Current_Pending_Sector and Reported_Uncorrect attributes indicate data corruption. A high UDMA_CRC_Error_Count usually indicates a bad cable.
I/O errors logged by the system (refer to the system journal or dmesg) can point to issues as well. I/O errors only affecting the file system easily go unnoticed, since they are not reported to applications (e.g. Borg), while these errors can still corrupt data.
Drives can corrupt some sectors in one event, while remaining reliable otherwise. Conversely, drives can fail completely with no advance warning. If in doubt, copy all data from the drive in question to another drive – just in case it fails completely.
If any of these are suspicious, a self-test is recommended:
# smartctl -t long /dev/sdSomething
Running fsck, if not done already, might yield further insights.
Intermittent issues, such as borg check finding errors inconsistently between runs, are frequently caused by bad memory.
Run memtest86+ (or an equivalent memory tester) to verify that the memory subsystem is operating correctly.
Processors rarely cause errors. If they do, they are usually overclocked or otherwise operated outside their specifications. We do not recommend operating hardware outside its specifications for production use.
Tools to verify correct processor operation include Prime95 (mprime), linpack, and the Intel Processor Diagnostic Tool (applies only to Intel processors).
Repairing a damaged repository
Once any defective hardware has been found and replaced, the damage done to the repository needs to be ascertained and fixed.
borg check provides diagnostics and a --repair option for repositories with issues. We recommend first running without --repair to assess the situation. If the issues found and the proposed repairs seem right, re-run "check" with --repair enabled.
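A typical sequence (repo path is a placeholder):

borg check /path/to/repo           # diagnose first, without modifying anything
borg check --repair /path/to/repo  # only after reviewing the reported issues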
There are several ways to specify a passphrase without human intervention:
BORG_PASSPHRASE
The passphrase can be specified using the BORG_PASSPHRASE environment variable. This is often the simplest option, but can be insecure if the script that sets it is world-readable.
Note
Be careful how you set the environment; using the env command, a system() call or inline shell scripts (e.g. BORG_PASSPHRASE=hunter2 borg ...) might expose the credentials in the process list directly and they will be readable to all users on a system. Using export in a shell script file should be safe, however, as the environment of a process is accessible only to that user.
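For example, a backup script that only you can read (chmod 700) could safely contain (passphrase and paths are placeholders):

export BORG_PASSPHRASE='use-a-strong-passphrase-here'
borg create /path/to/repo::archive-{now} ~/important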
BORG_PASSCOMMAND with a properly permissioned file
Another option is to create a file with a password in it in your home directory and use permissions to keep anyone else from reading it. For example, first create a key:
head -c 1024 /dev/urandom | base64 > ~/.borg-passphrase
chmod 400 ~/.borg-passphrase
Then in an automated script one can put:
export BORG_PASSCOMMAND="cat ~/.borg-passphrase"
and Borg will automatically use that passphrase.
keyfile mode with a blank passphrase
It is also possible to encrypt your repository in keyfile mode instead of the default repokey mode and use a blank passphrase for the key file (simply press Enter twice when borg init asks for the password). See Repository encryption for more details.
BORG_PASSCOMMAND with macOS Keychain
macOS has a native manager for secrets (such as passphrases) which is safer than just using a file, as it is encrypted at rest and unlocked manually (fortunately, the login keyring automatically unlocks when you login). With the built-in security command, you can access it from the command line, making it useful for BORG_PASSCOMMAND.
First generate a passphrase and use security to save it to your login (default) keychain:
security add-generic-password -D secret -U -a $USER -s borg-passphrase -w $(head -c 1024 /dev/urandom | base64)
In your backup script, retrieve it in the BORG_PASSCOMMAND:
export BORG_PASSCOMMAND="security find-generic-password -a $USER -s borg-passphrase -w"
BORG_PASSCOMMAND with GNOME Keyring
GNOME also has a keyring daemon that can be used to store a Borg passphrase. First ensure libsecret-tools, gnome-keyring and libpam-gnome-keyring are installed. If libpam-gnome-keyring wasn't already installed, ensure it runs on login:
sudo sh -c "echo session optional pam_gnome_keyring.so auto_start >> /etc/pam.d/login"
sudo sh -c "echo password optional pam_gnome_keyring.so >> /etc/pam.d/passwd"
# you may need to relogin afterwards to activate the login keyring
Then add a secret to the login keyring:
head -c 1024 /dev/urandom | base64 | secret-tool store borg-repository repo-name --label="Borg Passphrase"
If a dialog box pops up prompting you to pick a password for a new keychain, use your login password. If there is a checkbox for automatically unlocking on login, check it to allow backups without any user intervention whatsoever.
Once the secret is saved, retrieve it in a backup script using BORG_PASSCOMMAND:
export BORG_PASSCOMMAND="secret-tool lookup borg-repository repo-name"
Note
For this to automatically unlock the keychain it must be run in the dbus session of an unlocked terminal; for example, running a backup script as a cron job might not work unless you also export DISPLAY=:0 so secret-tool can pick up your open session. It gets even more complicated when you are running the tool as a different user (e.g. running a backup as root with the password stored in the user keyring).
BORG_PASSCOMMAND with KWallet
KDE also has a keychain feature in the form of KWallet. The command-line tool kwalletcli can be used to store and retrieve secrets. Ensure kwalletcli is installed, generate a passphrase, and store it in your "wallet":
head -c 1024 /dev/urandom | base64 | kwalletcli -Pe borg-passphrase -f Passwords
Once the secret is saved, retrieve it in a backup script using BORG_PASSCOMMAND:
export BORG_PASSCOMMAND="kwalletcli -e borg-passphrase -f Passwords"
Yes. File and directory metadata and data are encrypted locally, before leaving the local machine. We do not mean transport layer encryption by that, but the data/metadata itself. Transport layer encryption (e.g. when ssh is used as a transport) applies additionally.
Yes and No.
No, as far as data confidentiality is concerned - if you use encryption, all your files/dirs data and metadata are stored in their encrypted form into the repository.
Yes, as an attacker with access to the remote server could delete (or otherwise make unavailable) all your backups.
Assume you back up your backup client machine C to the backup server S and C gets hacked. In a simple push setup, the attacker could then use borg on C to delete all backups residing on S.
These are your options to protect against that:
- Use a pull-mode setup using ssh -R, see #900.
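Another documented option is append-only mode: on S, force client C's SSH key to run a restricted borg serve command so the client cannot delete data. A minimal sketch of an authorized_keys line on S (path and key are examples):

command="borg serve --append-only --restrict-to-path /srv/borg/clientC",restrict ssh-ed25519 AAAA...example client-c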
Just in case you got the impression that pull-mode backups are way safer than push-mode, you also need to consider the case that your backup server S gets hacked. In case S has access to a lot of clients C, that might get you into even bigger trouble than a hacked backup client in the previous FAQ entry.
These are your options to protect against that:
In general: if your only backup medium is near the machine being backed up and always connected, you can easily get into trouble: they likely share the same fate if something goes really wrong.
Thus:
Send a private email to the security contact if you think you have discovered a security issue. Please disclose security issues responsibly.
When doing a backup to a remote server (using a ssh: repo URL), it sometimes stops after a while (some minutes, hours, ... - not immediately) with “connection closed by remote” error message. Why?
That’s a good question and we are trying to find a good answer in #636.
Maybe the ssh connection between client and server broke down and that was not yet noticed on the server. Try these settings:
# /etc/ssh/sshd_config on borg repo server - kill connection to client
# after ClientAliveCountMax * ClientAliveInterval seconds with no response
ClientAliveInterval 20
ClientAliveCountMax 3
If you have multiple borg create ... ; borg create ... commands in an already serialized way in a single script, you need to give them --lock-wait N (with N being a bit more than the time the server needs to terminate broken-down connections and release the lock).
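For example, assuming the server may need up to a minute to notice a dead connection (values and paths are examples):

borg create --lock-wait 120 /path/to/repo::first-{now} /data/first
borg create --lock-wait 120 /path/to/repo::second-{now} /data/second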
There is a temporary (but maybe long lived) hack to avoid using lots of disk space for chunks.archive.d (see #235 for details):
# this assumes you are working with the same user as the backup.
# you can get the REPOID from the "config" file inside the repository.
cd ~/.cache/borg/<REPOID>
rm -rf chunks.archive.d ; touch chunks.archive.d
This deletes all the cached archive chunk indexes and replaces the directory that kept them with a file, so borg won’t be able to store anything “in” there in future.
This has some pros and cons, though:
The long term plan to improve this is called “borgception”, see #474.
Backing up your entire root partition works just fine, but remember to exclude directories that make no sense to back up, such as /dev, /proc, /sys, /tmp and /run, and to use --one-file-system if you only want to back up the root partition (and not any mounted devices, for example).
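A sketch of such an invocation (repo path is a placeholder):

borg create --one-file-system \
    --exclude /dev --exclude /proc --exclude /sys --exclude /tmp --exclude /run \
    /path/to/repo::rootfs-{now} /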
Check if your encoding is set correctly. For most POSIX-like systems, try:
export LANG=en_US.UTF-8 # or similar, important is correct charset
This might be due to different ways to represent some characters in Unicode or due to other non-ASCII encoding issues.
If you run into that, try this:
The files cache is used to determine whether Borg already "knows" / has backed up a file and, if so, to skip chunking the file again. It intentionally does not contain files that have a modification time (mtime) equal to the newest mtime in the created archive.
So, if you see an ‘A’ status for unchanged file(s), they are likely the files with the most recent mtime in that archive.
This is expected: it is to avoid data loss with files that are backed up from a snapshot and that are immediately changed after the snapshot (but within mtime granularity time, so the mtime would not change). Without the code that removes these files from the files cache, the change that happened right after the snapshot would not be contained in the next backup as Borg would think the file is unchanged.
This does not affect deduplication: the file will be chunked, but as the chunks will often be the same and already stored in the repo (except in the rare condition mentioned above), Borg will just reuse them as usual and not store new data chunks.
If you want to avoid unnecessary chunking, just create or touch a small or empty file in your backup source file set (so that one has the latest mtime, not your 50GB VM disk image) and, if you do snapshots, do the snapshot after that.
Since only the files cache is used in the display of file status, those files are reported as being added even though their chunks are, in most cases, already stored in the repo and will be reused.
Borg maintains a files cache where it remembers the mtime, size and inode of files. When Borg does a new backup and starts processing a file, it first looks whether the file has changed (compared to the values stored in the files cache). If the values are the same, the file is assumed unchanged and thus its contents won’t get chunked (again).
Borg can’t keep an infinite history of files of course, thus entries in the files cache have a “maximum time to live” which is set via the environment variable BORG_FILES_CACHE_TTL (and defaults to 20). Every time you do a backup (on the same machine, using the same user), the cache entries’ ttl values of files that were not “seen” are incremented by 1 and if they reach BORG_FILES_CACHE_TTL, the entry is removed from the cache.
So, for example, if you do daily backups of 26 different data sets A, B, C, ..., Z on one machine (using the default TTL), the files from A will be already forgotten when you repeat the same backups on the next day and it will be slow because it would chunk all the files each time. If you set BORG_FILES_CACHE_TTL to at least 26 (or maybe even a small multiple of that), it would be much faster.
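For the 26-data-set example, a sketch (repo path, archive name and TTL value are illustrative):

export BORG_FILES_CACHE_TTL=52   # comfortably above the number of data sets
borg create /path/to/repo::A-{now} /data/A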
Another possible reason is that files don't always have the same path; for example, if you mount a filesystem without stable mount points for each backup, or if you are running the backup from a filesystem snapshot whose name is not stable. If the directory where you mount a filesystem is different every time, Borg assumes they are different files.
To limit upload (i.e. borg create) bandwidth, use the --remote-ratelimit option.
There is no built-in way to limit download (i.e. borg extract) bandwidth, but limiting download bandwidth can be accomplished with pipeviewer:
Create a wrapper script: /usr/local/bin/pv-wrapper
#!/bin/sh
## -q, --quiet do not output any transfer information at all
## -L, --rate-limit RATE limit transfer to RATE bytes per second
RATE=307200
pv -q -L $RATE | "$@"
Add BORG_RSH environment variable to use pipeviewer wrapper script with ssh.
export BORG_RSH='/usr/local/bin/pv-wrapper ssh'
Now Borg will be bandwidth limited. A nice thing about pv is that you can change the rate limit on the fly:
pv -R $(pidof pv) -L 102400
Borg is doing nothing special in the filesystem, it only uses very common and compatible operations (even the locking is just “mkdir”).
So, if you are encountering issues like slowness, corruption or malfunction when using a specific filesystem, please check whether you can reproduce the issues with a local (non-network) and proven filesystem (like ext4 on Linux).
If you cannot reproduce the issue there, you may have found an issue in the filesystem code you used (not in Borg). In that case, it is recommended that you talk to the developers / support of the network filesystem and maybe open an issue in their issue tracker. Do not file an issue in the Borg issue tracker.
If you can reproduce the issue with the proven filesystem, please file an issue in the Borg issue tracker about that.
Repair usually works for recovering data in a corrupted archive. However, it’s impossible to predict all modes of corruption. In some very rare instances, such as malfunctioning storage hardware, additional repo corruption may occur. If you can’t afford to lose the repo, it’s strongly recommended that you perform repair on a copy of the repo.
In the case of malfunctioning hardware, such as a drive or USB hub corrupting data when read or written, it's best to diagnose and fix the cause of the initial corruption before attempting to repair the repo. If the corruption is caused by a one-time event such as a power outage, running borg check --repair will fix most problems.
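A sketch of the safe route (paths are placeholders):

cp -a /path/to/repo /path/to/repo-copy   # keep the untouched original
borg check --repair /path/to/repo-copy   # repair the copy, not the original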
Some borg runs take quite a while, so it would be nice to see a progress display, maybe even including an ETA (expected time of "arrival" [here rather "completion"]).
For some functionality, this can be done: if the total amount of work is more or less known, we can display progress. So check if there is a --progress option.
But sometimes the total amount is unknown (e.g. for borg create we just do a single pass over the filesystem, so we do not know the total file count or data volume before reaching the end). Adding another pass just to determine that would take additional time and could be incorrect if the filesystem is changing.
Even if the fs did not change and we knew the count and size of all files, we still could not compute a borg create ETA, as we do not know the amount of changed chunks, or how the bandwidth of source and destination or the system performance might fluctuate.
You see, trying to display an ETA would be futile. The borg developers prefer not to implement a progress / ETA display at all rather than make futile attempts.
See also: https://xkcd.com/612/
We try to build the binary on old, but still supported systems - to keep the minimum requirement for the (g)libc low. The (g)libc can’t be bundled into the binary as it needs to fit your kernel and OS, but Python and all other required libraries will be bundled into the binary.
If your system fulfills the minimum (g)libc requirement (see the README that is released with the binary), there should be no problem. If you are slightly below the required version, maybe just try. Due to the dynamic loading (or not loading) of some shared libraries, it might still work depending on what libraries are actually loaded and used.
In the borg git repository, there is scripts/glibc_check.py that can determine (based on the symbols’ versions they want to link to) whether a set of given (Linux) binaries works with a given glibc version.
Borg was created in May 2015 in response to the difficulty of getting new code or larger changes incorporated into Attic and establishing a bigger developer community / more open development.
More details can be found in ticket 217 that led to the fork.
Borg intends to be:
Borg is a fork of Attic and maintained by “The Borg collective”.
Here's an (incomplete) list of some major changes:
Please read the Changelog (or docs/changes.rst in the source distribution) for more information.
Borg is not compatible with original Attic (but there is a one-way conversion).
Use borg upgrade. This is a one-way process that cannot be reversed.
There are some caveats:
- The repository ID can be found in the config file in the repository.
- The Attic key files are stored in ~/.attic/keys/. The correct key for the repository starts with the line ATTIC_KEY <repository id>.
- Copy that key file to ~/.config/borg/keys/.
- Change the first line from ATTIC_KEY ... to BORG_KEY ....
- Verify that the repository is now accessible (e.g. borg list <repository>).
Attic was rather inflexible when it comes to compression: it always compressed using zlib level 6 (there was no way to switch compression off or to adjust the level or algorithm).
The default in Borg is lz4, which is fast enough to not use significant CPU time in most cases, but can only achieve modest compression. It still compresses easily compressed data fairly well.
zlib compression with all levels (1-9) as well as LZMA (levels 1-6) is also available, for cases where it is worth the extra CPU time.
Which choice is the best option depends on a number of factors, like bandwidth to the repository, how well the data compresses, available CPU power and so on.
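For example (repo path and archive names are placeholders):

borg create --compression lz4 /path/to/repo::arch-{now} /data      # fast, modest ratio (default)
borg create --compression zlib,9 /path/to/repo::arch-{now} /data   # slower, better ratio
borg create --compression lzma,6 /path/to/repo::arch-{now} /data   # slowest, best ratio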