
vSphere 6.7 core storage “what’s new” series:

What’s New in Core Storage in vSphere 6.7 Part I: In-Guest UNMAP and Snapshots
What’s New in Core Storage in vSphere 6.7 Part II: Sector Size and VMFS-6
What’s New in Core Storage in vSphere 6.7 Part III: Increased Storage Limits
What’s New in Core Storage in vSphere 6.7 Part IV: NVMe Controller In-Guest UNMAP Support
What’s New in Core Storage in vSphere 6.7 Part V: Rate Control for Automatic VMFS UNMAP
What’s New in Core Storage in vSphere 6.7 Part VI: Flat LUN ID Addressing Support

VMware has continued to improve and refine automatic UNMAP in vSphere 6.7. In vSphere 6.5, VMFS-6 introduced automatic space reclamation, so that you no longer had to run UNMAP manually to reclaim space after virtual disks or VMs had been deleted. The problem with this was that it was asynchronous and also very slow–it could take up to a day to reclaim space. VMware did this for a valid reason: a lot of legacy arrays did not like heavy UNMAP I/O. Modern all-flash arrays, on the other hand, handle UNMAP exceptionally well–especially metadata-based AFAs like the FlashArray.

To make those customers happier, VMware introduced the ability to tune the UNMAP rate limit on a datastore-by-datastore basis. In vSphere 6.5, there were two options: low or off. Low had a reclamation limit of around 25 MB/s. In vSphere 6.7, this is now configurable to a specific throughput limit.

Note that these are my observations–I cannot 100% confirm these statements yet, so stay tuned as I continue to hunt this down. UPDATE: I have been talking with some other storage vendors and most of this seems to be accurate for them as well, except for point “3” below. If you don’t want to run through my investigation, here are my findings:

1. UNMAP starts as soon as you create dead space (delete a VM or disk, or move one).
2. The rate limits are per host, per datastore. If you set the limit to 500 MB/s for a datastore, each host actively using that datastore can run UNMAP at up to 500 MB/s on it.
3. If your dead space builds up in small increments, UNMAP throttle rates are irrelevant.
4. If the existing dead space is 400 GB or less, it will be reclaimed at 1 GB/s. Once a host reclaims 400 GB, throttling kicks in. Though if it finishes reclaiming 400 GB and then a minute later another 400 GB of dead space is created, it will resume at 1 GB/s.
5. If only a single host is running UNMAP on a volume, it will max out at 1 GB/s. Even if you set the rate higher, a single ESXi host will not surpass that throughput level.
6. The host that the VMs are deleted from is the host that issues the automatic UNMAP for those VMs.
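For reference, the per-datastore limit is set through esxcli in vSphere 6.7. This is a sketch from my notes–the datastore name DS01 and the 500 MB/s value are placeholders, and you should check `esxcli storage vmfs reclaim config set --help` on your build for the exact option names:

```shell
# Show the current space-reclamation settings for a datastore
# (DS01 is a placeholder volume label)
esxcli storage vmfs reclaim config get --volume-label=DS01

# vSphere 6.7: switch from priority-based reclamation to a fixed
# bandwidth, capping automatic UNMAP at 500 MB/s on this datastore.
# Remember this limit applies per host, per datastore.
esxcli storage vmfs reclaim config set --volume-label=DS01 --reclaim-method=fixed --reclaim-bandwidth=500
```

Because the limit is per host, the aggregate UNMAP rate the array sees can be the configured bandwidth multiplied by the number of hosts actively reclaiming on that datastore.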
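To make the throttle observations concrete, here is a toy model (mine, not VMware’s) of the behavior I saw: a host reclaims at the ~1 GB/s full rate until it has reclaimed 400 GB, after which the configured per-host limit takes over. The function name and constants are my own illustration of the unconfirmed numbers above:

```python
# Toy model of observed (unconfirmed) automatic-UNMAP throttling:
# a host runs at ~1 GB/s until 400 GB has been reclaimed, then the
# configured per-host, per-datastore rate limit applies.

FULL_RATE_MB_S = 1024          # ~1 GB/s unthrottled burst rate
BURST_BUDGET_MB = 400 * 1024   # ~400 GB reclaimed before throttling

def reclaim_seconds(dead_space_mb: float, limit_mb_s: float) -> float:
    """Seconds one host needs to reclaim dead_space_mb of dead space."""
    burst = min(dead_space_mb, BURST_BUDGET_MB)
    remainder = dead_space_mb - burst
    return burst / FULL_RATE_MB_S + remainder / limit_mb_s

# 400 GB or less runs entirely at the burst rate: ~6.7 minutes.
print(reclaim_seconds(400 * 1024, 500) / 60)
# 800 GB with a 500 MB/s limit: the second 400 GB is throttled, ~20.3 minutes.
print(reclaim_seconds(800 * 1024, 500) / 60)
```

This also shows why throttle rates are irrelevant when dead space builds up in small increments: each small batch is reclaimed within the burst budget and never hits the configured limit.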