Installation instructions for BackupAFS

The sample commands in this document assume you're running a recent version of Ubuntu. If you're running another Linux distribution or Unix flavor, you'll need to substitute the appropriate commands.

Most of these commands will need to be completed as root. Text in blue indicates commands to be typed. As always, please sanity-check that the instructions are correct for your setup before blindly following them.

Prerequisites / Dependencies

  1. Disable current AFS backups

    This step is optional, but recommended. BackupAFS performs a "vos backup" of each volume to create a new .backup volume to immediately dump. This ensures that the dump is the freshest possible. If no other process in the cell is creating .backup volumes (such as a "vos backupsys" cron job), then it is also possible to very quickly tell when a volume was last dumped by BackupAFS by simply looking at the Backup time in the output of "vos examine volume".

    If you plan to run BackupAFS alongside another vos dump or afs backup based mechanism, you should take steps to ensure that volume operations (vos backup, vos dump, and backup dump) do not attempt to operate on the same .backup volume at the same time to prevent errors due to volume locking.
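The last-dump check described above can be done with vos examine; the volume and cell names below are placeholders, not part of any required setup:

```
# The "Backup" timestamp on a volume reflects the last .backup clone,
# i.e. the last BackupAFS dump, if no other process runs "vos backup"
vos examine user.jdoe -cell your.cell.name | grep Backup
```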

  2. Compress::Zlib

    To enable log compression, you will need to install Compress::Zlib. Most recent perl installations include it by default. If yours does not, you may install it via your distribution's package management system or obtain it from CPAN.
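A quick way to check for the module, and install it on Ubuntu if it is missing (the Debian/Ubuntu package name is an assumption; other distributions differ):

```
# Print the module version if Compress::Zlib is already installed...
perl -MCompress::Zlib -e 'print "$Compress::Zlib::VERSION\n"' \
  || apt-get install libcompress-zlib-perl   # ...otherwise install the Ubuntu package
```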

  3. XML::RSS

    To support the (experimental) RSS feature, you will need to install XML::RSS. On Ubuntu, you may run:

    aptitude install libxml-rss-perl

    Or install the module via your distribution's package management system or obtain it from CPAN.

  4. OpenAFS Client

    The BackupAFS server must be an AFS client for your AFS cell. So, if you have not already, you should install and configure the OpenAFS client using the settings for your cell. Many popular Linux distributions include a pre-packaged version of the client. If your distribution does not, or if you prefer not to use it, you can download it from the OpenAFS website.
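On Ubuntu/Debian, the installation might look like the following sketch (the package names are an assumption; adjust for your distribution):

```
# Install the OpenAFS client, Kerberos integration, and kernel module
apt-get install openafs-client openafs-krb5 openafs-modules-dkms
# After configuring your cell name, verify the client knows its cell:
fs wscell
```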

  5. Pigz or Gzip binary

    BackupAFS can optionally compress volume dumps using either gzip or pigz. Both applications perform compression using the gzip algorithm; pigz is simply a parallel implementation designed to leverage the multiple cores in modern multi-CPU servers. If your distribution does not include the one you choose, you may download gzip or pigz from their respective project websites. If you have a multi-processor system, pigz is strongly recommended.
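On Ubuntu, pigz is a stock package (the package name is an assumption; gzip itself is installed by default). Because pigz writes standard gzip streams, dumps compressed with one tool can be decompressed with the other:

```
apt-get install pigz
# pigz output is gzip-compatible; a stream compressed with pigz
# can be decompressed with plain gunzip, and vice versa:
echo "sanity check" | pigz -c | gunzip -c
```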

Install Apache

Ubuntu offers a version of apache2 with SSL support by default. Sending usernames and passwords over a non-SSL-encrypted HTTP session is not advised.

apt-get install apache2
Stop the apache server (we will customize it shortly)
/etc/init.d/apache2 stop
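Since the CGI should eventually be served over https, you may also want to enable SSL now. The following sketch assumes the stock Ubuntu apache2 layout, which ships an ssl module and a ready-made default-ssl site:

```
# Enable Apache's SSL module and Ubuntu's shipped default SSL site
a2enmod ssl
a2ensite default-ssl
```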

Install BackupAFS

  • Create a backupafs user

    It is recommended to run the BackupAFS daemon as a separate user. If you have not already done so, create the user.

    adduser --home /home/backupafs --shell /bin/bash backupafs
  • Create a directory for data storage and set the ownership

    Feel free to change this location. It is critical that it either have sufficient space for the data you plan to back up (fulls plus incrementals) or serve as a mount point for another disk or partition that does. Mounting a large RAID array at this location is recommended.

    mkdir -p /srv/backupafs
    chown backupafs:backupafs /srv/backupafs
    chmod 700 /srv/backupafs
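If you mount a dedicated array at this location, an /etc/fstab entry might look like the following (the device, filesystem type, and mount options are all hypothetical; substitute your own):

```
/dev/md0  /srv/backupafs  ext4  defaults,noatime  0  2
```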
  • Download the BackupAFS source

    Visit the BackupAFS project page and click the "Download BackupAFS" link. Save the file to a convenient location such as /home/backupafs.

  • Unpack and install BackupAFS

    As of the date this document was written, the newest release is version 1.0.0rc1. Please use the newest version of BackupAFS available.

    Change to the download directory and unpack the distribution

    tar -zxvpf BackupAFS-1.0.0rc1.tar.gz 

    Perform the installation. The cgi-bin directory shown below is correct for apache on Debian Etch. Please change it to your distribution's value if it's different.

    cd BackupAFS-1.0.0
    perl ./
    Is this a new installation or upgrade for BackupAFS?  If this is
    an upgrade please tell me the full path of the existing BackupAFS
    configuration file (eg: /etc/BackupAFS/  Otherwise, just
    hit return.
    --> Full path to existing main []? 
    I found the following locations for these programs:
        cat          => /bin/cat
        df           => /bin/df
        gzip         => /bin/gzip
        hostname     => /bin/hostname
        perl         => /usr/bin/perl
        pigz         => /usr/bin/pigz
        ping         => /bin/ping
        sendmail     => /usr/sbin/sendmail
        ssh/ssh2     => /usr/bin/ssh
        vos          => /usr/bin/vos
    --> Are these paths correct? [y]? y
    Please tell me the hostname of the machine that BackupAFS will run on.
    --> BackupAFS will run on host [vm1]? vm1
    BackupAFS should run as a dedicated user with limited privileges.  You
    need to create a user.  This user will need read/write permission on
    the main data directory and read/execute permission on the install
    directory (these directories will be setup shortly).
    The primary group for this user should also be chosen carefully.
    The data directories and files will have group read permission,
    so group members can access backup files.
    --> BackupAFS should run as user [backupafs]? backupafs
    Please specify an install directory for BackupAFS.  This is where the
    BackupAFS scripts, library and documentation will be installed.
    --> Install directory (full path) [/opt/BackupAFS]? /opt/BackupAFS
    Please specify a data directory for BackupAFS.  This is where all the
    PC backup data is stored.  This file system needs to be big enough to
    accommodate all the PCs you expect to backup (eg: at least several GB
    per machine).
    --> Data directory (full path) [/srv/BackupAFS]? /srv/BackupAFS
    BackupAFS can compress files, providing around a 40% reduction in backup
    size (your mileage may vary). Specify the compression level (0 turns
    off compression, and 1 to 9 represent good/fastest to best/slowest).
    The recommended values are 0 (off) or 3 (reasonable compression and speed).
    Increasing the compression level to 5 will use around 20% more cpu time
    and give perhaps 2-3% more compression.
    --> Compression level [3]? 3
    BackupAFS has a powerful CGI perl interface that runs under Apache.
    A single executable needs to be installed in a cgi-bin directory.
    This executable needs to run as set-uid backupafs, or
    it can be run under mod_perl with Apache running as user backupafs.
    Leave this path empty if you don't want to install the CGI interface.
    --> CGI bin directory (full path) []? /usr/lib/cgi-bin
    BackupAFS's CGI script needs to display various GIF images that
    should be stored where Apache can serve them.  They should be
    placed somewhere under Apache's DocumentRoot.  BackupAFS also
    needs to know the URL to access these images.  Example:
        Apache image directory:  /usr/local/apache/htdocs/BackupAFS
        URL for image directory: /BackupAFS
    The URL for the image directory should start with a slash.
    --> Apache image directory (full path) []? /var/www/BackupAFS
    --> URL for image directory (omit http://host; starts with '/') []? /BackupAFS
    Ok, we're about to:
      - install the binaries, lib and docs in /opt/BackupAFS,
      - create the data directory /srv/BackupAFS,
      - create/update the file /etc/BackupAFS/,
      - optionally install the cgi-bin interface.
    --> Do you want to continue? [y]? y
    Created /srv/BackupAFS
    Created /srv/BackupAFS/volsets
    Created /srv/BackupAFS/trash
    Created /etc/BackupAFS
    Created /var/log/BackupAFS
    Installing binaries in /opt/BackupAFS/bin
    Installing library in /opt/BackupAFS/lib
    Installing images in /var/www/BackupAFS
    Making init.d scripts
    Making Apache configuration file for suid-perl
    Installing docs in /opt/BackupAFS/doc
    Installing and VolumeSet-List in /etc/BackupAFS
    Making backup copy of /etc/BackupAFS/ -> /etc/BackupAFS/
    Installing cgi script BackupAFS_Admin in /usr/lib/cgi-bin
    Ok, it looks like we are finished.  There are several more things you
    will need to do:
      - Browse through the config file, /etc/BackupAFS/,
        and make sure all the settings are correct.  In particular,
        you will need to set $Conf{CgiAdminUsers} so you have
        administration privileges in the CGI interface.
      - Edit the list of VolumeSets to backup in /etc/BackupAFS/VolumeSet-List.
        The easiest way to do this is via the CGI once you're logged in as
        an admin user.
      - Read the documentation in /opt/BackupAFS/doc/BackupAFS.html.
        Please pay special attention to the security section.
      - Verify that the CGI script BackupAFS_Admin runs correctly.  You might
        need to change the permissions or group ownership of BackupAFS_Admin.
        If this is an upgrade and you are using mod_perl, you will need
        to restart Apache.  Otherwise it will have stale code.
      - BackupAFS should be ready to start.  Don't forget to run it
        as user backupafs!  The installation also contains an
        init.d/backupafs script that can be copied to /etc/init.d
        so that BackupAFS can auto-start on boot.  This will also enable
        administrative users to start the server from the CGI interface.
        See init.d/README.

Configure Apache

    Entire books can be written about configuring Apache. This will just attempt to help you get it working. If you need more assistance, please consult the Apache documentation.

  • Disable setuid cgi script

    Ubuntu's apache doesn't support suid cgi-bins. Disable the suid bit, and instead run apache2 as the backupafs user (or use mod_perl).

    chmod u-s /usr/lib/cgi-bin/BackupAFS_Admin 

    If your installation does not support setuid cgi-bin scripts, make Apache run as the backupafs user by changing its User and Group settings. In Ubuntu 10.04 and later, this is configured in the file /etc/apache2/envvars; other distributions may set it in httpd.conf or other locations.

    diff envvars.dist envvars
    < export APACHE_RUN_USER=www-data
    < export APACHE_RUN_GROUP=www-data
    > export APACHE_RUN_USER=backupafs
    > export APACHE_RUN_GROUP=backupafs
  • Create an appropriate .htaccess file in the cgi-bin directory

    BackupAFS uses apache's REMOTE_USER variable to control access. This can be set using any existing authentication method, but is normally set via an .htaccess file. Enable .htaccess files for the apache cgi-bin directory by adding an AllowOverride to the cgi-bin Directory entry in your site's site-available file:

          <Directory /usr/lib/cgi-bin/>
    -         AllowOverride None
    +         AllowOverride All
              Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>

    Review the instructions in the full documentation for help creating a generic or kerberized .htaccess file.
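As a sketch of the generic (non-kerberized) case, a minimal Basic-auth .htaccess placed alongside BackupAFS_Admin might look like this. The .htpasswd path is an assumption; any Apache authentication method that sets REMOTE_USER will work:

```apache
AuthType Basic
AuthName "BackupAFS"
AuthUserFile /etc/BackupAFS/.htpasswd
Require valid-user
```

Create the password file with, for example, `htpasswd -c /etc/BackupAFS/.htpasswd backupafs`, and ensure the user Apache runs as can read both files.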

Allow yourself to access the CGI as an administrator

  • Add BackupAFS admin users

    Add one or more users as BackupAFS admins. This is done by adding them to the $Conf{CgiAdminUsers} variable in /etc/BackupAFS/ Separate multiple users with spaces.

    non-kerberized example

    $Conf{CgiAdminUsers} = 'backupafs otheruser';

    kerberized example

    $Conf{CgiAdminUsers} = 'someuser@MY.KRB5REALM.COM someotheruser@MY.KRB5REALM.COM';

Start Apache and BackupAFS

  • Start apache
    /etc/init.d/apache2 start
    Starting web server apache2							[OK]
  • Activate backupafs init script and start the daemon
    cp /home/backupafs/BackupAFS-1.0.0/init.d/debian-backupafs /etc/init.d/backupafs
    update-rc.d backupafs defaults
     Adding system startup for /etc/init.d/backupafs ...
        /etc/rc0.d/K20backupafs -> ../init.d/backupafs
        /etc/rc1.d/K20backupafs -> ../init.d/backupafs
        /etc/rc6.d/K20backupafs -> ../init.d/backupafs
        /etc/rc2.d/S20backupafs -> ../init.d/backupafs
        /etc/rc3.d/S20backupafs -> ../init.d/backupafs
        /etc/rc4.d/S20backupafs -> ../init.d/backupafs
        /etc/rc5.d/S20backupafs -> ../init.d/backupafs
    /bin/sh /etc/init.d/backupafs start
    Starting backupafs: ok.

Configure BackupAFS

  • Add AFS cell KeyFile

    In order to perform vos dump operations without tokens, we need to copy the AFS cell's KeyFile to the backupafs server. This KeyFile must be protected. Only the backupafs user should be able to read it. This means that the backupafs server should be kept as secure as your AFS fileservers. This install document won't devolve into a security lecture, but suffice it to say you should turn off all unnecessary services and wrap or firewall the remaining services.

    Assuming that root can ssh into your AFS fileservers, the following example should work. If root ssh is disallowed (a very good idea), you may wish to tar up the contents of /etc/openafs/server on a fileserver and (temporarily) allow your user account permission to read the tarball.

    mkdir -p /etc/openafs/server
    chmod 700 /etc/openafs/server
    cd /etc/openafs
    scp root@fs1:"/etc/openafs/server/*" server/
    root@fs1's password: Type your password; it will not echo
    KeyFile                                       100%   16     0.0KB/s   00:00    
    ThisCell                                      100%   16     0.0KB/s   00:00    
    CellServDB                                    100%   16     0.0KB/s   00:00    
    UserList                                      100%   16     0.0KB/s   00:00    
    chown -R backupafs:backupafs .
    chmod 600 server/KeyFile
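Before relying on the KeyFile, it's worth confirming that -localauth operations work from the BackupAFS server (the cell name below is a placeholder):

```
# Should list VLDB entries without requiring tokens;
# failure suggests a KeyFile or permissions problem
su -c "vos listvldb -cell your.cell.name -localauth" backupafs | head
```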
  • Access the web interface

    BackupAFS is most easily configured via the CGI web interface.

    Visit https://your.server/cgi-bin/BackupAFS_Admin (or whatever exact URL your apache2 configuration dictates) and log in using the username and password you previously added to your .htaccess file.

    If you do not see the prompt for username and password, double-check your .htaccess setup and file permissions. The user running apache (backupafs in this example) must be able to read the .htaccess file and the .htpasswd file if it exists. The Apache2 AllowOverride directive must be set to All for the cgi-bin directory.

    Once logged in, if you don't see all of the expected links under "Server", including "Edit Server Config" and "Edit Volumesets", double-check that the username you're using is one of the listed CgiAdminUsers in and that you restarted backupafs after making any changes to

  • Add AFS volumeset(s)

    BackupAFS operates on volume sets, which are collections of one or more volumes with common characteristics (generally the same location and/or names that match specified patterns). All of the volumes in a volume set are backed up at the same time and at the same dump level. Volumes may be individually restored, however.

    Unlike BackupPC4AFS, which leveraged the AFS "backup" database to store the list of volume entries in a given volume set, BackupAFS keeps track of this itself, and allows admins to easily edit it via the CGI.

    Once logged into the CGI as an admin user, volumesets may be added by clicking the "Edit Volumesets" link. Each volumeset must have at least the following fields completed: volset (the name of the VolumeSet), Entry1_Servers, Entry1_Partitions, Entry1_Volumes

    The remaining fields are optional: user, moreUsers, Entry2_Servers, Entry2_Partitions, Entry2_Volumes, Entry3_Servers, Entry3_Partitions, Entry3_Volumes, Entry4_Servers, Entry4_Partitions, Entry4_Volumes, Entry5_Servers, Entry5_Partitions, Entry5_Volumes.

    Under normal circumstances, you want to back up the AFS .backup volumes rather than the read-write volumes. To create a volumeset named "testvolumeset1" which will back up volumes on any server and any partition, with volume names matching the regular expression "user\..*\.backup" (i.e. "user.<anything>.backup"), you would enter the following into each field. Note that "." matches any single character and ".*" matches zero or more characters; literal periods must be escaped with a backslash: "\.".

      volset              testvolumeset1
      user                (optional; may be blank)
      moreUsers           (optional; may be blank)
      Entry1_Servers      .*
      Entry1_Partitions   .*
      Entry1_Volumes      user\..*\.backup
      Entry2_Servers      (leave blank)
      Entry2_Partitions   (leave blank)
      Entry2_Volumes      (leave blank)
      Entry3_Servers      (leave blank)
      Entry3_Partitions   (leave blank)
      Entry3_Volumes      (leave blank)
      Entry4_Servers      (leave blank)
      Entry4_Partitions   (leave blank)
      Entry4_Volumes      (leave blank)
      Entry5_Servers      (leave blank)
      Entry5_Partitions   (leave blank)
      Entry5_Volumes      (leave blank)

    Remember to click outside the last text entry box, then click "Save" at the top of the page.
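Before saving, you can sanity-check a pattern against sample volume names with grep. This sketch assumes BackupAFS matches the pattern against the whole volume name; drop the anchors if it does substring matching:

```shell
# Does the Entry1_Volumes pattern match the volumes you expect?
echo "user.testuser1.backup" | grep -Eq '^user\..*\.backup$' && echo "matches"    # prints "matches"
echo "proj.www.backup"       | grep -Eq '^user\..*\.backup$' || echo "no match"   # prints "no match"
```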

    Repeat as necessary for additional volumesets. After changing any settings, restart the backupafs process (for example, /etc/init.d/backupafs restart, or via the CGI interface).

    The list and definition of the volumesets is stored in the VolumeSet-List file in BackupAFS's config directory. The full documentation describes this file in more detail, along with a method of populating this file from existing volumeset definitions stored in the AFS backup database.

  • Verify AFS backup configuration

    Once configured correctly, the "BackupAFS_getVols" script should yield a list of volumes when given a volumeset. You will need to run this script as the BackupAFS user (probably backupafs).

    su -c "/opt/BackupAFS/bin/BackupAFS_getVols testvolumeset1" backupafs
    Querying for volumes in VolumeSet testvolumeset1
    Looking for volumes matching
     server:.* partition:.* volume:user\..*\.backup
    ping delay to fs1.yourdomain.dom: 0.319
    ping delay to fs2.yourdomain.dom: 0.465
    user.testuser1.backup
    user.anotheruser.backup

    The volumes listed above, user.testuser1.backup and user.anotheruser.backup, are the volumes which will be dumped when a backup of the volume set "testvolumeset1" triggers.

  • Set sensible backup defaults

    At a minimum, you'll want to configure the following settings. Suggested values are shown, but please read /etc/BackupAFS/ or the BackupAFS configuration documentation (available by clicking the "Documentation" link in the CGI) for full details. These settings can be changed via the web by a user previously added to CgiAdminUsers by clicking on "Edit Config", or by editing directly.

    Server | Wakeup Schedule | WakeupSchedule: 23
    Server | Concurrent Jobs | MaxBackups: 4 (You may wish to vary this. It is the max number of simultaneous vos dumps commands. A sane value depends on many factors including the horsepower of your BackupAFS server and the number and configuration of your AFS fileservers.)
    Schedule | Full Backups | FullPeriod: 62.67
    Schedule | Full Backups | FullKeepCnt: 3,0,1,1,1 This illustrates exponential expiry (optional).
    Schedule | Full Backups | FullKeepCntMin: 4
    Schedule | Full Backups | FullAgeMax: 90
    Schedule | Incremental Backups | IncrPeriod: 0.67
    Schedule | Incremental Backups | IncrKeepCnt: 60
    Schedule | Incremental Backups | IncrKeepCntMin: 60
    Schedule | Incremental Backups | IncrAgeMax: 180
    Schedule | Incremental Backups | IncrLevels: 1,3,2,5,4,7,6,9,8,1,3,2,5,4,7,6,9,8,1,3,2,5,4,7,6,9,8,1,3,2,5,4,7,6,9,8,1,3,2,5,4,7,6,9,8,1,3,2,5,4,7,6,9,8
    Xfer | Xfer Settings | XferMethod: vos
    CGI | Paths | CgiURL: (set this to an https:// URL for SSL access)

    Note that IncrLevels does not include the level 0 (full) dump. The levels list presented here is based on the Towers of Hanoi dump sequence. A full discussion of backup rotations is beyond the scope of this installation doc. For background on designing backup schedules, I recommend UNIX(R) System Administration Handbook by Nemeth, Snyder, Seebass, and Hein. (It's old, but good.)

    Don't forget to Save your changes.
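In, the same settings look roughly like the following. The exact Perl data types (array vs. scalar) are assumptions based on BackupPC conventions; check the comments in your shipped

```perl
$Conf{WakeupSchedule} = [23];             # hour(s) at which BackupAFS wakes up
$Conf{MaxBackups}     = 4;                # max simultaneous "vos dump" jobs
$Conf{FullPeriod}     = 62.67;            # days between full dumps
$Conf{FullKeepCnt}    = [3, 0, 1, 1, 1];  # exponential expiry (optional)
$Conf{FullKeepCntMin} = 4;
$Conf{FullAgeMax}     = 90;
$Conf{IncrPeriod}     = 0.67;
$Conf{IncrKeepCnt}    = 60;
$Conf{IncrKeepCntMin} = 60;
$Conf{IncrAgeMax}     = 180;
$Conf{XferMethod}     = 'vos';
```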

Test your results

  • Perform a test backup

    Test the setup by performing a backup. Click "VolumeSet Summary", "testvolumeset1" (or whatever you called your volumeset), then "Start Full Backup" (twice in a row). The full docs, available via the CGI, have more information on backups and backup concepts (fulls, incrementals, etc).

  • Perform a test restore

    It is highly recommended that you test not only backups, but also restores directly into AFS. There is a section in the full documentation entitled "Restore functions".

Migrating from BackupPC4AFS

    While both BackupPC4AFS and BackupAFS share the common heritage of being based on BackupPC, there are several notable functional differences between them.

  • Migrating VolumeSets

    BackupPC4AFS stored the definition of exactly which volumes are included in a specific volumeset in the AFS backup database. BackupAFS stores this definition in the VolumeSet-List file. The exact format of the VolumeSet-List file is covered in Step 6: Setting up the VolumeSet-List file portion of the installation instructions.

    BackupPC4AFS prefixed all AFS volume sets with "afs_", which was used internally to indicate to BackupPC4AFS that a specific volumeset (host) actually represented a set of volumes to dump. BackupAFS does NOT do this. The name of the volumeset is used as recorded in the VolumeSet-List file.

    To facilitate the migration of volumesets and their definitions (volume entries) from the AFS database to the BackupAFS VolumeSet-List file, a migration script, BackupAFS_migrate_populate_VolSet-List is included with the distribution.

    "BackupAFS_migrate_populate_VolSet-List" takes no arguments. It queries the AFS backup database and outputs its best guess at correct VolumeSet names and corresponding volume entries on STDOUT. It does expect to be able to query the backup database using the -localauth option, therefore it should be executed after the cell's KeyFile is already installed on the BackupAFS server.


    It is recommended to run a test first:

        cd /etc/BackupAFS
        /opt/BackupAFS/bin/BackupAFS_migrate_populate_VolSet-List
        backup: waiting for job termination

    Note that the line beginning "backup: waiting..." is on STDERR, not STDOUT. If the output looks amenable, append it to the VolumeSet-List file:

        cd /etc/BackupAFS
        /opt/BackupAFS/bin/BackupAFS_migrate_populate_VolSet-List >> VolumeSet-List
        backup: waiting for job termination

    Any volume entries beyond the fifth in a volset will be omitted, and BackupAFS_migrate_populate_VolSet-List will warn you. Check the results to make sure they look correct for your cell.

        cd /etc/BackupAFS
        /opt/BackupAFS/bin/BackupAFS_migrate_populate_VolSet-List >> VolumeSet-List
        backup: waiting for job termination
        test1 has more than 5 volentries. Omitting "    Entry   6: server .*, partition .*, volumes: e30.*\.backup"
        test1 has more than 5 volentries. Omitting "    Entry   7: server .*, partition .*, volumes: d70.*\.backup"
        test1 has more than 5 volentries. Omitting "    Entry   8: server .*, partition .*, volumes: oar.*\.backup"
        test1 has more than 5 volentries. Omitting "    Entry   9: server .*, partition .*, volumes: r90.*\.backup"
        test2 has more than 5 volentries. Omitting "    Entry   6: server .*, partition .*, volumes: d70\..*\.backup"

  • Unmangling the Existing Backups

    BackupPC4AFS stored its dump data in files which had "mangled" names. Name mangling is a concept used by BackupPC (the product on which BackupPC4AFS and BackupAFS are based) to avoid namespace collisions and to allow it to store files' metadata separately from the file itself. BackupPC4AFS went with the flow and mangled filenames because the CGI interface understood mangled names by default.

    BackupAFS does not mangle file names. Therefore it is recommended to unmangle the existing backups to prevent confusion.

    Additionally, the default backup directory (datadir) in BackupPC4AFS is $Conf{TopDir}/pc (for example, /srv/BackupPC/pc), while BackupAFS stores its volume backups in $Conf{TopDir}/volsets (for example, /srv/BackupAFS/volsets).

    As mentioned in the section above, BackupPC4AFS prefixed all AFS volume sets with "afs_" to mark a volumeset (host) as a set of volumes to dump; BackupAFS does NOT do this, and uses the volumeset name exactly as recorded in the VolumeSet-List file. BackupAFS_migrate_unmangle_datadir therefore renames the data directories to remove the "afs_" prefix ($volsetname =~ s/^afs_//;).

    To ease migration, BackupAFS comes with a script, BackupAFS_migrate_unmangle_datadir, which moves any existing backups from the BackupPC4AFS directory structure and names to those expected by BackupAFS.

    BackupAFS_migrate_unmangle_datadir takes one argument, the defined TopDir, passed as --topdir=path. When executed, the script takes no action itself; it merely writes to STDOUT a Bourne shell script containing the necessary actions. This gives the admin a chance to review the output for sanity prior to execution.

        /opt/BackupAFS/bin/BackupAFS_migrate_unmangle_datadir --topdir=/srv/BackupAFS >> /tmp/
        less /tmp/                                        # Please review for correctness.
        chmod u+rx /tmp/
        /tmp/                                             # This may take some time to execute.

  • Compression of Existing Backups

    BackupPC4AFS stored its dump data in files which were uncompressed. If available and requested, BackupAFS can compress dumps immediately after they occur in order to save disk space.

    The backup and restore routines handle both compressed and uncompressed dumps, so compressing existing dumps is not strictly necessary; however, it is recommended and can save considerable space (35% or more is not uncommon).

    If you do compress, please do so as instructed here. Manually compressing files outside of these guidelines will not store the compression statistics, and BackupAFS will not be able to accurately report compression savings.

    In order to facilitate compression of existing dumps, a script named BackupAFS_migrate_compress_volsets is included in the distribution. This script may be used to schedule the compression of an entire data directory or a single volumeset, to give the administrator flexibility. Compressing the entire data directory may be very time consuming, depending on the volume of existing data and the speed and quantity of processors.

    A real-world example: compressing 9.7 TB of data (375 full dumps, 6000 incrementals) on a server with eight 2.66 GHz Xeon X5355 processors and 4 GB of RAM took approximately 20 hours.

    BackupAFS_migrate_compress_volsets takes either 2 or 3 arguments. --datadir= and --backupuser= are mandatory. --volset is optional, and if specified will operate on only the specified volumeset. If --volset is omitted, then all volumesets will be processed.

    The action of the script is to locate all volume dump files (.vdmp files) for each volumeset and add them to a "NewFileList.backupnumber" file inside the volset's directory. Once the "NewFileList" file is constructed, BackupAFS can be requested to perform the compression immediately or it will be performed during the next regularly-scheduled wakeup period (along with any files backed up during that wakeup period).

    To compress all backups at once, an admin might do:

        /opt/BackupAFS/bin/BackupAFS_migrate_compress_volsets --datadir=/srv/BackupAFS --backupuser=backupafs
        # Output snipped for documentation purposes, but each file found is echoed on STDOUT

    To compress all backups for ONLY ONE volume set, an admin might do:

        /opt/BackupAFS/bin/BackupAFS_migrate_compress_volsets --datadir=/srv/BackupAFS --backupuser=backupafs --volset=test1
        # Output snipped for documentation purposes, but each file found is echoed on STDOUT

    After performing either of the above steps, to request BackupAFS perform a compression for a given volset immediately, issue the following command, substituting the name of one of your volumesets in the place of "test1" and substituting in the name of the backup user if it is not backupafs.

        su -c "/opt/BackupAFS/bin/BackupAFS_serverMesg compress test1" backupafs
        Got reply: ok

    The above command may be repeated for each volumeset. Additional compressions will be queued since only one compress operation may occur at any given time. Compressions scheduled via the "BackupAFS_serverMesg compress" method will show up in the CGI (in Status and Current queues) and statistics for it will be recorded in each volumeset's "backups" file.

    NOTE that the maximum number of pending jobs is defined by $Conf{MaxPendingCmds}. If the number of pending jobs equals or exceeds that value, no new dumps will occur until it decreases. You may therefore wish to temporarily raise this value so that compressions can proceed without hampering backups; this is useful when the expected duration of compression exceeds your backup interval.
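For example, while a large backlog of compressions is queued, you might temporarily raise the ceiling in (the value 50 is an arbitrary illustration; restore your normal setting once the migration completes):

```perl
$Conf{MaxPendingCmds} = 50;   # temporary: let queued compressions coexist with dumps
```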