The reason it is not freeing up space lies in the way the Backup & Disaster Recovery appliance stores data after deduplication, and in how it references blocks in the block store.
If you go to the appliance Management Console › Settings › Deduplicated File System, you will see two important fields in the Block Deduplication Statistics section. The first, the bottom-right number, is labelled Allocated Bytes; the second, directly above it, is labelled Free Bytes. These represent, respectively, the total size your block store has grown to and the amount of free space available inside it.
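As a quick illustration of how to read those two numbers together, here is a small worked example in Python. The values are hypothetical, not taken from a real appliance; the point is only that the space actually occupied by live blocks is Allocated Bytes minus Free Bytes.

```python
# Hypothetical values read from the Block Deduplication Statistics page.
allocated_bytes = 2_199_023_255_552   # Allocated Bytes: total size the block store has grown to (~2 TiB)
free_bytes = 549_755_813_888          # Free Bytes: reusable holes inside the block store (~0.5 TiB)

# Space actually occupied by live, referenced blocks.
used_bytes = allocated_bytes - free_bytes

print(f"Block store allocated:   {allocated_bytes / 2**40:.2f} TiB")
print(f"Free inside block store: {free_bytes / 2**40:.2f} TiB")
print(f"Actually in use:         {used_bytes / 2**40:.2f} TiB")
```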
Here is how this works. When a backup runs and is deduplicated, the job is first broken up into its constituent files, and those files are compared against files that already exist. If an exact duplicate copy of a file already exists, the duplicate is discarded. If it does not, the file is stored in a shared directory where it can be checked against files from future jobs. A file stored in this directory lives inside the block-level deduplicated file system, or DDFS. There the file is broken down into 64-kB blocks and run through a similar process: each block is compared against the blocks that already exist, and if an exact copy exists the block is discarded; if not, it is stored. When a block is stored, it is added to the block store volume, which is what the Allocated Bytes number measures.
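To make the block-level step concrete, the sketch below models it in Python. The class name, the hashing scheme, and the in-memory index are assumptions made for illustration, not the appliance's actual implementation; it only shows how splitting files into 64-kB blocks and keeping one copy of each unique block means the block store (the Allocated Bytes figure) grows only when a genuinely new block arrives.

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # 64-kB blocks, as described above


class BlockStoreSketch:
    """Toy model of block-level deduplication (not the appliance's real code)."""

    def __init__(self):
        self.store = bytearray()   # the append-only block store volume
        self.index = {}            # block hash -> offset of the existing copy

    def add_file(self, data: bytes) -> list[int]:
        """Split a file into blocks and store only the blocks not seen before.

        Returns the offsets the file's blocks map to.
        """
        offsets = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).digest()
            if digest in self.index:
                # An exact copy already exists: discard the duplicate and
                # reference the existing block instead.
                offsets.append(self.index[digest])
            else:
                # New block: append it, growing "Allocated Bytes".
                offset = len(self.store)
                self.store.extend(block)
                self.index[digest] = offset
                offsets.append(offset)
        return offsets

    @property
    def allocated_bytes(self) -> int:
        return len(self.store)


# Two files that share a block only grow the store once for the shared block.
store = BlockStoreSketch()
store.add_file(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE)
store.add_file(b"A" * BLOCK_SIZE + b"C" * BLOCK_SIZE)
print(store.allocated_bytes)  # 3 blocks' worth, not 4
```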
When we map blocks into this block store, we reference each block by its offset from zero. So if you later delete data and its blocks are no longer referenced by any files, we cannot simply delete those blocks, truncate that section of the block store, and compress it out, because doing so would require recalculating where every block after the deleted section now resides, which would take a massive amount of disk I/O. Instead, we mark those sections as available in the database, and the next time something needs to write to the block store we fill these holes first before expanding the block store again.
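The sketch below extends the same toy model with that hole-reuse behaviour. The free-list structure and the names are assumptions made for illustration; the point is only that deleting a block records its offset as available rather than shrinking the file, and later writes fill those holes before the store is extended.

```python
class HoleReusingStore:
    """Toy block store that marks freed sections and reuses them (illustrative only)."""

    def __init__(self, block_size: int = 64 * 1024):
        self.block_size = block_size
        self.store = bytearray()
        self.free_offsets = []          # "holes" recorded in the database

    def delete_block(self, offset: int) -> None:
        # We do NOT truncate or shift later blocks (that would mean
        # recalculating every subsequent offset). We only record the hole.
        self.free_offsets.append(offset)

    def write_block(self, block: bytes) -> int:
        assert len(block) == self.block_size
        if self.free_offsets:
            # Fill an existing hole first; Allocated Bytes stays the same.
            offset = self.free_offsets.pop()
            self.store[offset:offset + self.block_size] = block
        else:
            # No holes left: grow the block store.
            offset = len(self.store)
            self.store.extend(block)
        return offset

    @property
    def allocated_bytes(self) -> int:
        return len(self.store)

    @property
    def free_bytes(self) -> int:
        return len(self.free_offsets) * self.block_size


s = HoleReusingStore()
first = s.write_block(b"x" * s.block_size)
s.write_block(b"y" * s.block_size)
s.delete_block(first)               # Allocated Bytes unchanged, Free Bytes grows
s.write_block(b"z" * s.block_size)  # the hole is reused; the store does not grow
```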
This method works fine until the block store grows to the point that there is no longer enough space outside it for the appliance to work and process its jobs. At that point you start encountering backup failures because the director refuses connections.
In appliances with firmware version 6.15 or later, you can schedule compaction or run it manually while the appliance is online. This prevents the appliance from occupying additional space after space has been freed on the device, for example after deleting some jobs or clients.
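Compaction itself is handled by the appliance firmware, so there is nothing for you to implement. Purely as a conceptual illustration, and under the assumption that compacting means relocating live blocks into earlier holes and remapping their references so the tail of the store can be reclaimed, the sketch below shows why this is the expensive remapping work the appliance avoids during normal operation.

```python
def compact(live_blocks: dict[int, bytes], allocated_bytes: int):
    """Conceptual compaction pass (an illustrative assumption, not the appliance's algorithm).

    `live_blocks` maps the offset of every still-referenced block to its data.
    Live blocks are rewritten contiguously from offset zero, and every
    reference must be updated to its new offset.
    """
    remap = {}                 # old offset -> new offset
    new_store = bytearray()
    for old_offset in sorted(live_blocks):
        remap[old_offset] = len(new_store)
        new_store.extend(live_blocks[old_offset])
    reclaimed = allocated_bytes - len(new_store)
    return new_store, remap, reclaimed
```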