Throughput is the rate at which a backup job gathers its data from a client over the life of the job. When throughput is too low and backups are taking too long, consider the following when troubleshooting:

  • The Jobs > History > Throughput calculation is for the life of the job. This means that if there is preparation work on the client, such as Volume Shadow Copy Service (VSS) snapshots being taken, and no data is transferred during that time, then the overall throughput will appear lower than the potential throughput. When compressing on the client, CPU contention can also slow things down. Disk contention on the client is a factor that is not easily diagnosed from job messages on the CFA.
    • You can disable VSS to start getting backup data as soon as the job starts, but files in use will not be backed up. You might consider creating a different client for the VSS and system state data.
    • Disabling compression is recommended on 100 Mbps and 1 Gbps networks. Compression can save space on the CFA, but deduplication is usually a better means of doing that. Compression is performed on the client when sending data to the CFA, and can significantly slow down throughput on large amounts of already-compressed data. Compression can be disabled per client in the File Set editor for each entry.
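As a rough illustration of the first point above, the Jobs > History figure divides total data by total elapsed time, so any no-transfer preparation time dilutes it. A minimal sketch with hypothetical numbers (the function and figures are illustrative only, not part of the product):

```python
def overall_throughput_mbps(data_mb, transfer_mbps, prep_seconds):
    """Overall job throughput: total data over total elapsed time,
    including prep time (e.g. VSS snapshots) when no data moves."""
    transfer_seconds = data_mb / transfer_mbps
    return data_mb / (prep_seconds + transfer_seconds)

# Hypothetical job: 50 GB moved at a steady 60 MBps, after 10 minutes
# of snapshot preparation during which nothing is transferred.
print(round(overall_throughput_mbps(50 * 1024, 60, 10 * 60), 1))  # 35.2
```

Even though data moved at 60 MBps once the transfer started, the reported overall throughput is only about 35 MBps.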
  • While a job is running, live throughput can be seen using Clients > Summary > right-click the client > Properties > Real-time Status > Throughput column, if the client is running version 3.4. This can help you determine whether the throughput is fast for some filesystems and slow for others.
    • While watching the throughput, if a single large file is backing up and the throughput varies widely, consider what may be causing disk or network contention on your client during the backup. If other maintenance tasks occur during the backup, consider rescheduling them or the backup so they do not overlap. Microsoft's Performance Monitor (for Windows Server 2008) or Server Performance Advisor (for Windows Server 2003) may be used to gather performance data during a backup, which is useful in determining I/O and other types of contention.
    • Clients prior to 3.4 only report the overall throughput.
  • With modern disk speeds, the maximum throughput is usually the smaller of the disk throughput and the network link speed. Data arriving on the CFA's 1 Gbps link cannot usually exceed the write speed of its array, regardless of the number of jobs - the array is much faster than that. So a good baseline throughput rate for 1 Gbps links is 30 MBps when the source is a standard SATA disk, and up to 60 MBps for a faster disk or RAID array on the client. 100 Mbps links usually top out at 8 MBps.
    • While a job is running, ensure there is little to no contention for disk or network I/O on the client. Memory and CPU usage should not significantly affect throughput, unless using compression.
    • Backups performed at the same time from different clients contend for the CFA's network bandwidth. Ensure that clients which must back up within a more rigorous window run at different times, or run exclusively using the Clients > Edit > Priority setting (all higher-priority jobs must complete before lower-priority jobs start).
  • Ensure there are no network segments between the client and the CFA that are slower than your desired link speed, such as a 100 Mbps switch between two gigabit switches.
  • Check that the System > Settings > Networking > Current Speed is 1000 for 1 Gbps and 100 for 100 Mbps. If the reported speed differs, try a different port on your switch or troubleshoot the CFA's network settings with our specialist.
  • Incremental backups scan the filesystems looking for updated files, and can take a long time to do so while finding only a few changed files. Only changed files are sent and contribute to throughput, so throughput rates for incrementals are typically much lower than for fulls of the same client. Differentials behave similarly, depending on how much data has changed since the last full.
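To see why incremental throughput can look alarmingly low even when nothing is wrong, note that scan time for unchanged files counts toward elapsed time while contributing no data. A hedged sketch with made-up scan rates (both the function and the figures are assumptions for illustration):

```python
def incremental_throughput_mbps(changed_mb, transfer_mbps,
                                files_scanned, scan_files_per_sec):
    """Effective incremental throughput: only changed data is sent, but the
    whole filesystem is still scanned, and scan time counts as elapsed time."""
    scan_seconds = files_scanned / scan_files_per_sec
    transfer_seconds = changed_mb / transfer_mbps
    return changed_mb / (scan_seconds + transfer_seconds)

# Hypothetical: 2 GB changed out of 2 million files, scanned at 1,000 files/s,
# transferred at 60 MBps once found.
print(round(incremental_throughput_mbps(2 * 1024, 60, 2_000_000, 1000), 1))  # 1.0
```

The transfer itself runs at 60 MBps, but the half-hour spent scanning drags the reported figure down to about 1 MBps.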
  • Backing up many small files can be significantly slower than backing up fewer large files of the same total size.
  • All other client-local disk I/O considerations come into play when doing a backup. Ensure your disks are defragmented, for example.