This section documents the Server tab, which has the following subtabs:

  • Scheduled Jobs
  • Active Jobs
  • Importing Jobs
  • Deduping Jobs
  • Recent Jobs
  • Storage
  • Reports
  • Advanced
info If you are running replication, some of the subtabs listed above are not shown in the Management Console of the secondary CFA.

Whenever you make changes, remember to save them. (See Activating Changes.)

Scheduled Jobs subtab

CFA uses a schedule to determine when a given client will be backed up. You can define as many schedules as you need. There are no restrictions on how many clients can use a specific schedule.

The Scheduled Jobs subtab lists all jobs scheduled to run within the next 24 hours.

Active Jobs subtab

The Active Jobs subtab shows jobs that are currently running or waiting to be executed.

Information

Information on the Active Jobs subtab is presented in table format with the following default columns:

Column name Possible values Description
Status   Indicates whether the job is running or waiting 1.
  Waiting For Start Time A backup started, erred out and was rescheduled because it has retry on error set in the client configuration. The client may be down (which will eventually cause an error), or the client is up and something else caused the job to error out (network, OS issue, etc.).
  Waiting For File Daemon The CFA cannot talk to the agent and is waiting for it to come online. Once the connection times out, the job will error out and either fail or reschedule (and go to Waiting For Start Time). It’s likely the client computer is down, the agent is not running, or a firewall is blocking access. If none of those are true, then restarting the agent (on the server) may help.
  Waiting for Storage Daemon Either the storage daemon is not running (the CFA is out of space), or Bacula is hung and the backup daemons need to be restarted (Settings > Director Status > Restart Backup Service).
  Waiting for Storage Resource Usually seen only when more than one job for a particular agent is running. Only one job per client is allowed to run at a time, so only one job should be running and the rest should be Waiting for Storage Resource. If the storage daemon thinks jobs are running but the Director does not (or vice versa), Bacula is in an inconsistent state and needs to be restarted on both the server and the CFA.
  Waiting For Mount A reboot is necessary after a configuration is loaded. This error can appear if the reboot does not happen after uploading a configuration.
  Waiting Max Priority There are jobs with a higher priority in queue that need to finish in order for this backup to run. Bacula can only process one priority level at a time, so all jobs of a higher priority need to complete before it will begin backups for the next level of priority.
  Waiting For Client Resources The job is waiting for a plugin on the client to become free.
Type BMR Backup, DR Backup, File Backup, Hyper-V Backup, VMware Backup, Restore Type of the job.
Level   Level of the backup.
  Full Saves all files specified for the client, whether or not they have been modified since the last backup.
  Synthetic Full Creates a new full backup on the CFA by combining the most recent full backup with subsequent incremental backups, without transferring the data from the client again.
  Differential Saves all files that have been modified since the last full backup.
  Incremental Saves files that have been modified since the most recent successful backup of any level (full, differential, or incremental).
Job Id   The job ID number.
Priority   The job priority. Lower-numbered jobs are executed first. Jobs with a priority greater than 500 are executed only when no other higher-priority jobs are running. (See Priority.) A sketch of this rule follows the table.
Client   Name of the client the job belongs to.
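
The Priority and Waiting Max Priority entries above describe the same rule: the backup system runs the lowest-numbered priority level first and holds everything else back. Below is a minimal, hypothetical Python sketch of that rule, for illustration only; it is not Infrascale or Bacula code, and it ignores the special handling of priorities greater than 500.

    def runnable_jobs(queued, running):
        """queued and running are lists of (job_id, priority) tuples."""
        active = {prio for _, prio in running}
        if active:
            current = min(active)
        elif queued:
            current = min(prio for _, prio in queued)
        else:
            return []
        # Only jobs at the current (lowest-numbered) priority level may start now;
        # the rest wait and show as "Waiting Max Priority".
        return [job for job in queued if job[1] == current]

    # With a priority-10 job still running, the priority-20 job keeps waiting.
    print(runnable_jobs(queued=[(101, 10), (102, 20)], running=[(100, 10)]))
    # -> [(101, 10)]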

Special actions

All special actions on the Active Jobs subtab are available via

  • the toolbar on the top left, or

  • the context menu of a job

Action name Action description
Message Logs View details for an active job
Show Client Properties View properties of the client for which the job is run
Cancel Job Cancel an active job 2

Importing Jobs subtab

The Importing Jobs subtab lists all backup jobs that have finished and are being imported into the catalog. The file catalog allows access to individual files in each backup.

The screen lists jobs that are pending, the job currently being imported, and jobs that were most recently imported.

You cannot browse or restore individual files until the job containing them has been completely imported.

Deduping Jobs subtab

Currently, Infrascale offers a post-process deduplication feature. This requires a backup job’s data to occupy space on your RAID while deduplication occurs. Deduplication works by reading the backup job data and copying unique files to a repository (also on the RAID), where duplicate files can be referenced rather than copied. During this time, both the original backup job data and the repository copies of unique files are on the RAID. In the worst case (a backup job that consists entirely of unique files), free space equal to the backup job’s size is required for the repository copies. Consequently, before running a backup job that will be deduplicated, the free space required to back up and deduplicate may be as much as double the size of the data to be backed up. When deduplication completes, the original backup job data is deleted from the RAID.

Given these requirements, the system checks the following before deduplicating a backup job on the RAID (see the sketch after this list):

  1. There must be at least 1.5 GB of free space on the RAID.
  2. There must be more free space on the RAID than the size of the backup job. For example, if a 100 GB backup job completes, the system requires at least 100 GB of free space to begin the deduplication process.
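
A minimal sketch of these two checks, in Python for illustration only (the function and variable names are hypothetical, not the CFA’s actual code):

    MIN_FREE_BYTES = 1.5 * 1024**3          # check 1: at least 1.5 GB free on the RAID

    def can_start_dedup(raid_free_bytes, backup_job_bytes):
        if raid_free_bytes < MIN_FREE_BYTES:
            return False
        # check 2: more free space than the size of the just-completed backup job
        return raid_free_bytes > backup_job_bytes

    # A 100 GB job with only 90 GB free cannot start deduplicating yet.
    print(can_start_dedup(raid_free_bytes=90 * 1024**3, backup_job_bytes=100 * 1024**3))   # False
    print(can_start_dedup(raid_free_bytes=120 * 1024**3, backup_job_bytes=100 * 1024**3))  # True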

If your CFA is nearing full capacity, try these suggestions to allow deduplication to succeed:

  1. Stagger the full jobs of large clients. The time between each job should account for its backup, importing, and deduplication time. For example, if you have three clients that each take 30 minutes to back up, import, and deduplicate, schedule the full backups at least 45-60 minutes apart. The length of time it takes to back up and deduplicate varies with the size and number of files.

  2. If a single client’s full backup is too large and will not deduplicate, we recommend breaking the client into two or more separate clients, each with a non-overlapping file set containing roughly the same amount of data. For example, if you have a client SRV with 1 TB of data on C: and 1 TB of data on D:, create an SRV-A client with C:/ in its file set and an SRV-B client with D:/ in its file set. Stagger these clients’ schedules per the recommendation above. Separating the data in this manner requires a minimum of 2 TB of free space instead of 4 TB, as illustrated in the sketch below.
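
As a rough illustration of the arithmetic behind the 2 TB versus 4 TB figure (worst case: free space up to double the size of the data being backed up, while staggered jobs only need room for one job’s repository copies at a time), using the hypothetical SRV example:

    TB = 1                              # work in whole terabytes for readability

    srv_data = 1 * TB + 1 * TB          # SRV: 1 TB on C: plus 1 TB on D:
    print(srv_data * 2)                 # 4 TB of free space may be needed for one combined full job

    largest_split_job = 1 * TB          # SRV-A and SRV-B, staggered, 1 TB each
    print(largest_split_job * 2)        # 2 TB of free space may be needed at any one time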

The Deduping Jobs subtab lists backup jobs that have been deduplicated, meaning that only one copy of a given file is kept in the backup.

The screen lists jobs that are pending, the job currently being deduped, and jobs that were most recently deduped.

Recent Jobs subtab

The Recent Jobs subtab displays a summary of the last several jobs that have completed execution.

If a job terminated with an error, the job summary will be highlighted in red for easy identification. You may consult the message logs or the system-generated email for the specific cause of the error.

Storage subtab

The Storage subtab displays information about your UCAR (unique content-addressable repository) garbage collection system.

For clients using deduplication, the UCAR system runs a garbage collection process every day to find and purge any files that are no longer referenced. Data can become unreferenced “garbage” when, for example, clients are deleted without their jobs being purged, or when old jobs were not removed completely. We recommend running garbage collection after deleting jobs to ensure the data is cleared completely. For more about jobs with unreferenced data, see Unreferenced Data.

info Starting with IDR v3.1, garbage collection will not occur if jobs are currently being deduped. The progress bar for the garbage collection process will say deferred in this case.

Garbage collection can be deferred for up to 12 hours; if it still cannot start, the attempt is abandoned and retried at the next regularly scheduled time. There is one exception: if the system is running low on space, garbage collection proceeds whether or not jobs are deduplicating.
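
A minimal, hypothetical sketch of this start-or-defer rule (Python, for illustration only; not the CFA’s actual code):

    MAX_DEFER_HOURS = 12

    def gc_decision(dedup_jobs_running, hours_deferred, low_on_space):
        if low_on_space:
            return "run"        # exception: proceed even while jobs are deduplicating
        if not dedup_jobs_running:
            return "run"
        if hours_deferred < MAX_DEFER_HOURS:
            return "deferred"   # shown on the garbage collection progress bar
        return "skip"           # give up and retry at the next regular time

    print(gc_decision(dedup_jobs_running=True, hours_deferred=3, low_on_space=False))   # deferred
    print(gc_decision(dedup_jobs_running=True, hours_deferred=13, low_on_space=False))  # skip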

The Storage screen has the following settings:

Setting Description
Garbage Collection Starts garbage collection manually
Garbage Collection Time of Day Sets the time of day when garbage collection runs automatically
Compact Online Starts Online DDFS Compact manually
Compact Online Time of Day Sets the time of day when Online DDFS Compact starts automatically
Verify UCAR Verifies UCAR integrity. This systematically reads all the files in UCAR and checks whether each file’s computed signature matches the recorded one; if not, the file is quarantined. The process is extremely I/O intensive and can take weeks to complete on systems with a large amount of stored data. Use only when instructed by Infrascale Support. A sketch of this process follows the table
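
For illustration only, the following is a hypothetical Python sketch of the verification process the Verify UCAR setting describes: recompute each file’s signature, compare it with the recorded one, and quarantine mismatches. SHA-256 and the function names are assumptions; the actual signature algorithm and implementation are not documented here.

    import hashlib
    from pathlib import Path

    def verify_repository(files_with_recorded_sig, quarantine_dir):
        """files_with_recorded_sig: iterable of (path, recorded_signature) pairs."""
        quarantine = Path(quarantine_dir)
        quarantine.mkdir(parents=True, exist_ok=True)
        for path, recorded in files_with_recorded_sig:
            computed = hashlib.sha256(Path(path).read_bytes()).hexdigest()
            if computed != recorded:
                # Quarantine the file instead of leaving corrupt data in the repository.
                Path(path).rename(quarantine / Path(path).name)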

By scrolling down the same page, you will find the Block Deduplication Statistics (raw) section. It looks similar to the following:

Data name Data description
Blocks Written The total number of full blocks that have been written to DDFS since it was configured initially
Block Size The size of the blocks that files are divided into during the deduplication process. This option is not configurable
Total Blocks The total number of blocks that have been written to DDFS since it was configured initially. It includes both full and partial blocks
Total Bytes The total number of bytes that have been written to DDFS since it was configured initially
Partial Blocks The number of partial blocks that have been written to DDFS since it was configured initially. Partial blocks occur at the end of a file that does not divide evenly into blocks. For example, a 96 KB file is divided into one 64 KB full block and one 32 KB partial block (see the sketch after this table)
Partial Bytes The total size of all the partial blocks that have been written to DDFS
Duplicate Blocks The number of times a block already existed in the block store and did not need to be written again, thus saving space
Duplicate Bytes The number of bytes that did not have to be written to the RAID because a copy of the block already existed
Free Blocks The number of blocks marked as free in the block store
Free Bytes The sum of the sizes of all the blocks marked as free in the block store
Blocks Read A counter of the times blocks have been read back from DDFS
Allocated Bytes The size of the block stores, including both the used and the free blocks
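
A small worked example of the full/partial block split described for Partial Blocks, assuming a 64 KB block size to match the 96 KB example (the function name is illustrative only):

    BLOCK_SIZE = 64 * 1024   # assumed 64 KB block size

    def split_into_blocks(file_size_bytes):
        full_blocks, partial_bytes = divmod(file_size_bytes, BLOCK_SIZE)
        partial_blocks = 1 if partial_bytes else 0
        return full_blocks, partial_blocks, partial_bytes

    # A 96 KB file: one 64 KB full block and one 32 KB (32768-byte) partial block.
    print(split_into_blocks(96 * 1024))   # (1, 1, 32768)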

Reports subtab

The Reports subtab displays daily and weekly summary reports.

The Daily Report Preview section is a simple summary of the results for the jobs run for the day.

The Weekly Report Preview section is a simple summary of the results for the jobs run for the week.

You can change the day and time that each report is sent.

Advanced subtab

The Advanced subtab lets you reset to factory defaults, create a client for backing up the NAS share space on the CFA, and change some global settings if required.

notification_important This subtab is hidden unless the advanced view mode is enabled. Do not change these settings without first consulting an Infrascale support representative.

Administration Settings

The first setting in the Administration group is Create NAS Backup Client.

Create NAS Backup Client

To create a NAS backup client, click Create. A results window appears; click OK to close it.

Then click Activate Configuration in the upper-right corner of the screen to complete the creation.

Global Settings

The Global Settings group provides an interface to some of the advanced settings used by the backup system’s backup program. In most cases, the settings on this page should not require any changes.

After you make any changes, click Apply to save your changes.

File Daemon

Setting Description
Listening TCP Port 3 The TCP port that the backup system File Daemon (client agent) binds to. The default is 9102. This port has been registered, so conflicts with another application are unlikely. If you encounter a conflict in your network environment, you can change the port specification.
Heartbeat interval This setting defines an interval of time. For each heartbeat that the client agent receives from the Storage daemon, it forwards it to the Director. In addition, if no heartbeat has been received from the Storage daemon (and thus none forwarded), the File daemon sends a heartbeat signal to the Director and to the Storage daemon to keep the channels active. The default interval is 0, which disables the heartbeat. This setting affects only the client agent running on the CFA itself.
It is particularly useful if you have a router that does not follow Internet standards and times out an inactive connection after a short duration.

Storage Daemon

Setting Description
Listening TCP Port 4 The TCP port that the backup system Storage Daemon binds to. The default is 9103. This port has been registered, so conflicts with another application are unlikely. If you encounter a port conflict in your network environment, you can change the port specification.
Listening (bind to) Address The IP Address that the backup system Storage Daemon will bind to. Normally the backup system Storage Daemon will bind to all addresses present on the CFA. To limit the daemon to one of the two network interfaces, select the IP address of the appropriate network interface.
Maximum Concurrent Jobs The maximum number of backup jobs that the backup storage system will accept simultaneously. This number should be equal to or greater than the Maximum Concurrent Jobs for the Director.
  1. Common reasons for waiting:

    • Execution — the job is waiting for running jobs to finish
    • Higher priority job — the job is waiting for higher priority jobs to finish
    • Max client jobs — the client for which this job will run is already running another job

  2. If a canceled job does not clear from the Active Jobs subtab, see Director Status for troubleshooting. 

  3. If you change this parameter, you will need to add each client back in, and download each client’s configuration file again. You can also edit each client’s configuration file by changing the line that reads FD Port = 9102 to indicate the new port number. If you change a port to one already in use, errors will occur. 

  4. If you change a port to one that is already in use, errors will occur.