This is another installment in my series on Windows 8 Storage & Hyper-V. Previous blogs in the series can be found here:
Another promising new storage feature in Windows Server 8 is a transparent fast-copy mechanism called Offloaded Data Transfer, or ODX. If you know VMware’s vStorage APIs for Array Integration (VAAI), you probably know where to place ODX, because it plays in more or less the same league.
What’s the Challenge?
If you have a large Hyper-V guest with multi-terabyte VHDX files, how long it takes to Live Migrate that VM to another node in your Hyper-V cluster depends on the amount of memory, the activity of the VM and the available bandwidth. However, it is an entirely different story if you also need to move those very large VHDX files from one disk to another, from one array to another, from one cluster to another or even from one cloud to another. Done the classic way, this would take ages: every read and every write, including its confirmation, would have to pass through the sending server and the receiving server. Even if only one Hyper-V server were involved (copying between two CSVs on the same server), this is highly inefficient. After all, the VHD(X) is already on the storage array. Why let the data travel all the way from CSV1 through server A to server B and then back to CSV2 again? Why would the data have to leave the storage array at all?
With ODX we can avoid taking the long route and let the storage array controller(s) do the hard work. All the host needs is status information (how much has been copied; whether the copy has completed).
Even when you are moving a Windows 8 Hyper-V VM between hosts in different locations with multiple arrays (replicating synchronously or asynchronously), there is no need to push the bulk of the data through the Hyper-V servers and the network.
ODX takes advantage of the more advanced features of modern storage arrays to speed up the movement of data. Instead of passing the data around, it passes around a token that represents a point-in-time view of the data. The good thing is that it supports fast copying of data within a machine or between machines, in the same location or across multiple locations. It is not constrained by any protocol or transport type. Even better, any application can use this efficient way of transferring data.
ODX instructs the storage array to generate and return a token that represents the data to be transferred. The offload read that produces the token behaves like a non-cached read and is totally transparent to the operating system. The subsequent offload write, which hands the token to the array, is functionally identical to a normal non-cached write: the storage array is in full control, determines whether the write can take place, performs the data movement itself and confirms its success or failure. The only requirement is that the destination disk space must be available and pre-allocated.
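To make the token flow concrete, here is a small toy model of the exchange described above. This is emphatically not the real Windows API or any array's firmware; the names (StorageArray, offload_read, offload_write) are invented for this sketch. The point is simply that only a token and status information cross between host and array, while the data moves inside the array.

```python
# Toy model of the ODX-style token exchange -- an illustration only,
# not the actual Windows or storage-array interface.
import secrets


class StorageArray:
    """Simulates an array that copies data internally and returns only
    a token and status information to the host."""

    def __init__(self):
        self.luns = {}      # lun_id -> bytearray (the array's disks)
        self._tokens = {}   # token -> point-in-time snapshot of a range

    def offload_read(self, lun_id, offset, length):
        # Like a non-cached read, but no payload travels to the host:
        # the host only receives an opaque token for a point-in-time view.
        snapshot = bytes(self.luns[lun_id][offset:offset + length])
        token = secrets.token_hex(16)
        self._tokens[token] = snapshot
        return token

    def offload_write(self, token, lun_id, offset):
        # The array performs the data movement itself and merely reports
        # success or failure back to the host.
        data = self._tokens.get(token)
        if data is None:
            return False  # unknown/expired token: host would fall back to a classic copy
        dest = self.luns[lun_id]
        if offset + len(data) > len(dest):
            return False  # destination space must be available and pre-allocated
        dest[offset:offset + len(data)] = data
        return True


# Host-side "copy" between two volumes on the same array:
# only the token and a status boolean ever reach the host.
array = StorageArray()
array.luns["CSV1"] = bytearray(b"multi-terabyte VHDX stand-in")
array.luns["CSV2"] = bytearray(len(array.luns["CSV1"]))

token = array.offload_read("CSV1", 0, len(array.luns["CSV1"]))
ok = array.offload_write(token, "CSV2", 0)
print(ok, bytes(array.luns["CSV2"]))
```

Note how the pre-allocation requirement shows up naturally: the offload write fails if the destination range does not already exist at full size.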
ODX is so transparent because the offload transfer technique is integrated into the Win32 CopyFile API. This means that any copy, xcopy, robocopy or even drag & drop will benefit from ODX if the underlying storage array supports this functionality.
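That transparency means ordinary copy code needs no changes at all. As a minimal illustration (plain Python, with a throwaway temp file standing in for a VHDX), nothing in this script refers to ODX; on a Windows 8 host with a supporting array the storage stack could offload the transfer underneath the standard copy call, and anywhere else it simply performs a classic buffered copy:

```python
# A completely ordinary file copy -- no ODX-specific calls anywhere.
# Whether the transfer is offloaded is decided below the copy API,
# invisibly to this code.
import os
import shutil
import tempfile

src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"VHDX payload stand-in")
src.close()

dst = src.name + ".copy"
shutil.copyfile(src.name, dst)   # the storage stack decides how the bytes move

with open(dst, "rb") as f:
    copied = f.read()
print(copied)

os.remove(src.name)
os.remove(dst)
```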
Are there any limitations?
ODX works fine on any NTFS-formatted volume. However, compressed and encrypted files are not supported, and neither are sparse files or BitLocker-protected volumes.
ODX and file system filters
ODX was written with file system filters in mind and tries to avoid compatibility issues with filters that are not ODX-aware. When an ODX operation is issued, the I/O Manager checks all of the filters on the file system stack to determine whether the feature is supported and takes the appropriate action. You can run fltmc instances at a command prompt to see all the filters attached to a given volume.
What does it require on the array side?
Microsoft is currently working with several array vendors to get ODX supported. Hopefully a firmware update of the array controllers will be enough to get things started.
If any vendors would like to comment, please do!
Source: Windows 8 File System Performance and Reliability Enhancements, SDC11 presentation, SNIA