Among the various storage enhancements in Windows Server 2012 is the introduction of support for Thin storage devices. In the past, Windows Server would merely tolerate a Thin device: the operating system did nothing special for Thin storage. As long as the LUN could service Read and Write operations, it would happily work with Windows Server. But that limited some of the benefits that Thin devices bring, specifically around storage pool efficiency.
Why Thin devices at all?
The real benefit of Thin storage is that Thin devices only allocate actual storage when it is needed. Effectively, a Thin LUN is connected to some backend storage pool, and the pool is where all the data really lives. The Thin LUN itself might be considered a collection of pointers representing blocks (or Logical Block Address ranges). Where no data has been written to the LUN, the pointers don’t point to anything. When data is written, a block is allocated from the pool, the pointer is set to reference that block, and the data is written.
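The pointer model above can be sketched in a few lines of PowerShell. This is a purely illustrative toy, not how any array actually implements it: a hashtable maps written LBAs to pool blocks, and unwritten ranges simply have no entry.

```powershell
$map  = @{}        # thin LUN: LBA -> backend pool block (no entry = unallocated)
$next = 0          # next free block in the backend pool

# Two distinct LBAs are written; the second write to LBA 100 reuses its block.
foreach ($lba in 100, 7000, 100) {
    if (-not $map.ContainsKey($lba)) { $map[$lba] = $next++ }
}
"Pool blocks consumed: $($map.Count)"   # 2 -- only written LBAs consume pool space
```

However large the LUN’s advertised LBA range, only the entries in the map consume pool capacity.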
Thin implementations are efficient because they don’t consume any storage until it is required, and let’s face it: in the non-Thin world, people padded volumes with a bunch of additional space “for growth”. Such padding created “white space” and drove storage efficiency down. Thin LUNs fix that because, for the most part, it doesn’t matter how big they appear; it only matters how much storage they allocate. You can have a 20 TB Thin LUN, but if you only ever write 5 GB to it, then only 5 GB will ever be consumed from the Pool.
Of course, if the Pool is actually smaller than the sum of the “advertised” space of the LUNs, and everyone starts to allocate all that space, you do end up with an issue. But that’s a discussion for another time.
What does Thin device support help us do?
In effect, Windows Server 2012 does a couple of things if it detects a Thin storage LUN. The first is support for a concept of Threshold notifications. These notifications are Windows event log entries that are generated when new allocations against a LUN cross a certain percentage of its capacity. As an example, consider a 1 TB Thin LUN with a threshold notification set at 80% of the “advertised” size of the LUN: when a write pushes consumption past 800 GB, a notification is sent from the storage array back to the Windows Server 2012 instance, which records the event in the Event log.
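If you want to spot these entries, something like the following Get-WinEvent query is a starting point. Note that the message filter is an assumption on my part: the exact provider, event ID, and wording depend on the array vendor and the Windows build, so adjust it to match your environment.

```powershell
# Scan recent System log entries for thin-provisioning threshold messages.
# The 'threshold' text match is an assumption -- verify against your array's
# documented provider and event IDs.
Get-WinEvent -LogName System -MaxEvents 500 |
    Where-Object { $_.Message -match 'threshold' } |
    Format-List TimeCreated, ProviderName, Id, Message
```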
The second (and honestly, more interesting) piece is that Windows Server 2012 will now send UNMAP operations to the LUN when a filesystem object is deleted, or when it attempts to Optimize the volume. In the past, Windows Server did nothing to tell the LUN that a file had been removed and that the space allocated for it was no longer required. That meant that any blocks that had at some point been written to would remain allocated against the Thin device forever. The only way to resolve this was to use a variety of manual techniques to free the unused space. Windows Server 2012 removes the need for manual intervention and makes this space reclamation automatic.
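Whether NTFS sends these delete notifications at all is governed by a system-wide setting, which you can inspect and change with fsutil from an elevated prompt:

```powershell
# 0 = delete notifications (TRIM/UNMAP) are sent (the default); 1 = disabled.
fsutil behavior query DisableDeleteNotify

# Turn the behavior off or back on (requires elevation):
fsutil behavior set DisableDeleteNotify 1   # stop sending UNMAP on delete
fsutil behavior set DisableDeleteNotify 0   # restore the default
```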
When is Thin device detection enabled?
Thin LUN detection occurs dynamically between a storage array that supports the relevant Windows Server 2012 specification and the Windows Server 2012 installation itself. As this is a new feature, existing arrays running their pre-existing firmware or microcode may not support it. In general, customers should expect to update the firmware or microcode on their systems to get this feature. Of course, each product is different, so it will be necessary to check with each specific vendor.
For EMC storage arrays, this support is being made available in the VNX and VMAX product lines, delivered as FLARE and Enginuity updates respectively. Unfortunately, for customers with prior generations of CLARiiON and Symmetrix DMX products, there are currently no plans to offer Windows Server 2012 Thin-compliant implementations. Thin devices will work the same way as always on these earlier EMC arrays, but they will not be seen as Thin by Windows Server 2012.
How does Thin device support execute?
For the most part, the interactions between a Windows Server 2012 instance and a Thin LUN are managed automatically. Nothing new really happens when a write is sent to a Thin LUN; it is serviced by the LUN as it always has been, aside from the threshold notification activity described above.
It is really in the activity after a file is deleted that the difference in behavior occurs. After a file deletion, Windows Server 2012 NTFS identifies the logical block address ranges that the file occupied and then submits UNMAP operations to the LUN. This causes the storage array to deallocate (UNMAP) those blocks from the pool backing the LUN.
While this is automatic for the most part, it is also possible to manually have the Windows Server 2012 instance re-check the volume and issue deallocations for any blocks that can be freed. This is done either through the “Optimize-Volume” PowerShell cmdlet or from the Optimize Drives GUI. There is also an automatic task that Windows Server 2012 executes on a regular cycle; this too is visible from the Optimize Drives GUI.
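As a sketch, a manual pass might look like the following. The drive letter is just an example, and the -ReTrim switch asks Optimize-Volume to send UNMAP for free space that is still allocated on the thin LUN.

```powershell
# Re-scan volume D: and issue UNMAP for free space still allocated on the LUN.
Optimize-Volume -DriveLetter D -ReTrim -Verbose

# The built-in maintenance task that performs the same pass on a schedule:
Get-ScheduledTask -TaskPath '\Microsoft\Windows\Defrag\' -TaskName 'ScheduledDefrag'
```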
The Fix Is In
Invariably, a newly released feature has items that need some tweaking, and support for Thin LUNs is no different in this regard. Optimizations to Thin device and UNMAP handling have been made available in KB2870270 (as of July 2013). Head straight over to http://support.microsoft.com/kb/2870270, download it, test it in your environment, and when you deploy a server don’t forget to include it.
Thin Device support – When It All Works
When fully implemented, Thin device support happens quietly in the background … but in the following demonstration, we’ve attempted to show you how it all happens.