Throughput general information

Throughput is the rate at which an entire backup job is able to gather all of its data from a client over the life of the job.

When throughput is not high enough and backups are taking too long, consider the following when troubleshooting:

  • The Jobs > History > Throughput calculation is for the life of the job.

    This means that if there is preparation work on the client, such as Volume Shadow Copy Service (VSS) snapshots being taken, and no data is being transferred during that time, then the overall throughput will appear lower than the potential throughput. When compressing on the client, CPU contention can also slow things down. Disk contention on the client is a factor that is not easily diagnosed from job messages on the Backup & Disaster Recovery appliance. A worked example of the calculation follows the bullets below.

    • You can disable VSS to start getting backup data as soon as the job starts, but files in use will not be backed up. You might consider creating a different client for the VSS and system state data.

    • Disabling compression is recommended on 100 Mbit/s and 1 Gbit/s networks. Compression can save space on the appliance, but deduplication is usually a better means of doing that. Compression is performed on the client when sending data to the appliance and can significantly slow down throughput on large amounts of already-compressed data. Compression can be disabled per client in the File Set editor for each entry.
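
    To see how preparation time lowers the reported figure, here is a quick calculation in Python. The size, times, and rate are hypothetical, chosen only to illustrate the arithmetic:

      # Illustrative only: hypothetical job size and times, not measured values.
      data_gb = 60                  # total data transferred by the job
      prep_minutes = 10             # VSS snapshot / preparation time (no data moving)
      transfer_rate_mb_s = 30       # sustained rate once data starts flowing

      data_mb = data_gb * 1024
      transfer_seconds = data_mb / transfer_rate_mb_s
      total_seconds = prep_minutes * 60 + transfer_seconds

      # Throughput reported for the life of the job vs. the actual transfer rate.
      overall_mb_s = data_mb / total_seconds
      print(f"Transfer-only rate: {transfer_rate_mb_s:.1f} MB/s")
      print(f"Job-lifetime rate : {overall_mb_s:.1f} MB/s")
      # With these numbers the job-lifetime figure drops to roughly 23 MB/s,
      # even though the network never ran slower than 30 MB/s.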

  • Live throughput can be seen while a job is running, if you are running a 3.4 client, using the Throughput column in Clients > Summary (right-click a client, and then click Properties > Real-time Status). This can help you determine whether the throughput is fast for some file systems and slow for others.

    • While watching the throughput, if a single large file is backing up and the throughput varies widely, consider what may be causing disk or network contention on your client during the backup.

      If there are other maintenance tasks occurring during the backup, consider rescheduling them or the backup so they do not overlap. Microsoft Performance Monitor (for Windows Server 2008) or Server Performance Advisor (for Windows Server 2003) can be used to gather performance data during a backup that is useful in determining I/O and other types of contention. A simple sampling script is also sketched after this list item.

    • Clients prior to 3.4 only report the overall throughput.
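
    As a rough, do-it-yourself alternative to the Windows tools above, a short script run on the client can sample disk and network activity while a job runs. This is a minimal sketch that assumes the third-party Python psutil package is installed on the client; it is not part of the backup product, and the sample interval is arbitrary:

      # Minimal sketch, assuming the third-party psutil package is installed on the client.
      # Samples client-wide disk and network counters while a backup job runs.
      import time
      import psutil

      INTERVAL = 5  # seconds between samples (arbitrary)

      disk_prev = psutil.disk_io_counters()
      net_prev = psutil.net_io_counters()

      while True:
          time.sleep(INTERVAL)
          disk_now = psutil.disk_io_counters()
          net_now = psutil.net_io_counters()

          read_mb_s = (disk_now.read_bytes - disk_prev.read_bytes) / INTERVAL / 1024**2
          sent_mb_s = (net_now.bytes_sent - net_prev.bytes_sent) / INTERVAL / 1024**2
          cpu_pct = psutil.cpu_percent()

          # If disk reads far exceed bytes sent, something else is competing for the
          # disks; if both are low, look at CPU (compression) or the network path.
          print(f"disk read {read_mb_s:6.1f} MB/s | net sent {sent_mb_s:6.1f} MB/s | cpu {cpu_pct:5.1f}%")

          disk_prev, net_prev = disk_now, net_now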

  • With modern disk speeds, the maximum throughput is usually the smaller of the disk throughput and the network link speed.

    Data received on the appliance's 1 Gbit/s link usually cannot exceed the write speed of its array, regardless of the number of jobs; the array is much faster than that. So a good baseline throughput rate for 1 Gbit/s links is 30 MB/s, assuming the source is a standard SATA disk, but it can be up to 60 MB/s for a faster disk or RAID array on the client. 100 Mbit/s links usually top out at 8 MB/s. A quick window estimate based on these rates follows this item's sub-bullets.

    • While a job is running, ensure there is little to no contention for disk or network I/O on the client. Memory and CPU usage should not significantly affect throughput, unless compression is in use.

    • Backups performed at the same time from different clients contend for the appliance's network bandwidth. Ensure that clients which must back up within a tighter window run at different times, or run exclusively by using the Clients > Edit > Priority setting (all higher-priority jobs must complete before lower-priority jobs start).
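
    These baseline rates give a quick way to check whether a backup window is realistic before suspecting the appliance. A rough estimate, with a hypothetical 500 GB full backup:

      # Rough backup-window estimate from the baseline rates above; the data size is hypothetical.
      BASELINE_MB_S = {
          "100 Mbit/s link": 8,             # typical ceiling on a 100 Mbit/s segment
          "1 Gbit/s, SATA source": 30,      # standard single SATA disk on the client
          "1 Gbit/s, fast RAID source": 60, # faster disk or RAID array on the client
      }

      data_gb = 500  # hypothetical full-backup size for one client

      for label, rate in BASELINE_MB_S.items():
          hours = data_gb * 1024 / rate / 3600
          print(f"{label:27s}: about {hours:4.1f} hours for {data_gb} GB")
      # A 500 GB full backup at 8 MB/s needs roughly 18 hours, so a nightly window
      # on a 100 Mbit/s segment is not realistic for that much data.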

  • Ensure there are no network segments between the client and the appliance that are slower than your desired link speed, such as a 100 Mbit/s switch between their gigabit switches.

  • Check that the current speed in Settings > Networking is 1000 for 1 Gbit/s and 100 for 100 Mbit/s.

    If the reported speed differs from the expected link speed, try a different port on your switch or troubleshoot the appliance network settings with our specialist.

  • Incremental backups scan the file systems looking for updated files, and can take a long time to do so while finding only a few changed files.

    Only changed files are sent and contribute to throughput, so the throughput rates for incremental backups are typically much lower than for full backups of the same client. Differential backups behave similarly, depending on how much data has changed since the last full. A worked example follows.
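
    The same job-lifetime arithmetic explains the low incremental figures: scan time counts against the throughput even though very little data moves. With hypothetical numbers again:

      # Hypothetical incremental job: a long scan and little changed data.
      scan_minutes = 45            # time spent walking the file systems
      changed_gb = 2               # data that actually changed since the last backup
      transfer_rate_mb_s = 30      # same wire speed as a full backup

      changed_mb = changed_gb * 1024
      total_seconds = scan_minutes * 60 + changed_mb / transfer_rate_mb_s

      print(f"Incremental job-lifetime rate: {changed_mb / total_seconds:.2f} MB/s")
      # About 0.7 MB/s here, even though the changed data moved at 30 MB/s; the low
      # figure reflects scan time, not a network or appliance problem.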

  • Backing up many small files can be significantly slower than backing up fewer large files of the same total size; a rough illustration follows.
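
    Each file carries fixed costs (open, attribute and metadata handling) on top of its data, and that overhead adds up. A hypothetical comparison, with an assumed 5 ms of per-file overhead:

      # Hypothetical comparison; the 5 ms per-file overhead is an assumption for illustration.
      per_file_overhead_s = 0.005
      transfer_rate_mb_s = 30
      total_gb = 100

      for label, file_count in (("1,000,000 small files", 1_000_000),
                                ("1,000 large files", 1_000)):
          transfer_s = total_gb * 1024 / transfer_rate_mb_s
          overhead_s = file_count * per_file_overhead_s
          print(f"{label:21s}: {(transfer_s + overhead_s) / 3600:.1f} hours for {total_gb} GB")
      # The same 100 GB takes roughly 1.4 extra hours as a million small files,
      # purely from per-file overhead.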

  • All other client-local disk I/O considerations come into play when doing a backup. Ensure your disks are defragmented.