Provisioning EMC Storage for Windows 4X Faster with ESI


The EMC Storage Integrator for Windows (ESI) was designed to simplify and automate many of the mundane tasks associated with provisioning storage in a Windows environment. It is a free download and comes complete with a simple MMC interface as well as PowerShell cmdlets, SharePoint provisioning wizards, and System Center plug-ins for Operations Manager and Orchestrator. Good stuff here.


GeekVent Door 12, 12/12/2013: The Power of ESI Combined with AD: Getting Storage Usage per OU Computer Objects

Documentation is for diligent people...

In a dynamic datacenter, reporting on what you have is essential.

 

In lab environments, people tend to build faster than they document.

This is the issue I currently face in the lab migration of my test boxes: 7 storage systems, with LUNs widely spread across Hyper-V servers and heavy use of NPIV.

 

So, how do I find out what to migrate?

How do I automate the migration?

For my workflows I needed an easy way to figure out the objects (LUNs) to be migrated.

For my ESI tools and PowerShell to work properly, I was interested in getting the LUN name, ArrayLunID and WWN.

 

But how do I get to the connected servers? How do I make sure not to miss one?

One central location for me is the computer object in our AD.

[Screenshot: computer objects in Active Directory]

 

So I run a filtered query against the OU where my computers are located.

With the Active Directory PowerShell module installed, it might look like this:

$Searchfilter = "*" # The Filterobject for Computernames
$Searchbase = "OU=10030101_Computers,OU=100301_Microsoft,OU=1003_Applications,OU=10_Demo,DC=lsc,DC=muc" #DSN of Computerobjects
$MyComputers = Get-ADComputer -LDAPFilter "(name=$Searchfilter)" -SearchScope Subtree -SearchBase $Searchbase


This enables me to connect to the host systems. I check which systems are online with Test-Connection, connect each host to ESI in PowerShell, and then run a query for the host LUNs and combine it with the array LUNs.
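Below is a minimal sketch of that loop. Test-Connection, Where-Object and Out-GridView are standard cmdlets; the ESI cmdlet names and the property names on the returned disk objects (Get-EmcHostSystem, Get-EmcHostDisk, ArrayLunId, Wwn) are assumptions written from memory and should be checked against the cmdlets shipped with your version of the ESI PowerShell Toolkit.

# Sketch: keep only hosts that answer a ping, then query each for its LUNs
$OnlineHosts = $MyComputers |
    Where-Object { Test-Connection -ComputerName $_.Name -Count 1 -Quiet }

$LunReport = foreach ($Computer in $OnlineHosts) {
    # Hypothetical ESI calls: register the host and enumerate its disks
    $EsiHost = Get-EmcHostSystem -ID $Computer.Name          # assumption
    foreach ($Disk in ($EsiHost | Get-EmcHostDisk)) {        # assumption
        [pscustomobject]@{
            HostName   = $Computer.Name
            LunName    = $Disk.Name
            ArrayLunId = $Disk.ArrayLunId                    # assumption
            WWN        = $Disk.Wwn                           # assumption
        }
    }
}

$LunReport | Out-GridView -Title "LUNs to migrate"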

The result is displayed in a PowerShell array, ready for handing over to my migration job or just for reporting. Here is an example grid view:

[Screenshot: example grid view]

In the script I predefined hashtables for all my arrays. When I run the script, I connect to the selected array via a mandatory parameter:

[Screenshot: report-storage script]

So if you want to use the script, you have to adjust the hashtables to your needs. Feel free to modify the script for your requirements; a sketch of the parameter and hashtable pattern follows below.
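For reference, here is a minimal sketch of the mandatory-parameter-plus-hashtable pattern described above. The array names, keys and addresses are placeholders, not values from the original script.

param(
    [Parameter(Mandatory = $true)]
    [ValidateSet('VNX01', 'VNX02', 'VMAX01')]   # placeholder array names
    [string]$ArrayName
)

# One entry per storage system; adjust keys and addresses to your environment
$Arrays = @{
    'VNX01'  = @{ SPA = '10.0.0.10'; SPB = '10.0.0.11' }
    'VNX02'  = @{ SPA = '10.0.1.10'; SPB = '10.0.1.11' }
    'VMAX01' = @{ SmisProvider = '10.0.2.10' }
}

$Selected = $Arrays[$ArrayName]
Write-Verbose "Connecting to $ArrayName"
# ... connect to the selected array with the ESI toolkit here ...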

This is just an idea of what is possible with the power of PowerShell and ESI.

Another idea would be to get the cost center from AD to charge back on the report ...

Evolution of the Microsoft Windows Private Cloud – Windows Azure Pack

For a rather long period of time EMC has been working on enabling various Cloud solutions.  In the Microsoft space, we have been developing and delivering Private Clouds of varying styles. All of them providing high levels of performance, all of them elastic, all of them automated … but it always felt like there could be more.  The infrastructure piece was always robust, but the way that users would consume resources from the system seemed to need more attention.  We have demonstrated multiple ways to consume these services, and even Microsoft System Center took multiple runs at this interaction.  Self-service portals are now becoming a necessary component of Private Cloud solutions.

Consumers of cloud services should not be burdened with the details of which physical server runs their service, or other physical aspects that relate to the infrastructure.  They should be more concerned about access to, and availability of, their services.  They may also care about service levels for access and performance.  But they should not be concerned about which physical server, or even which Cloud their services are running in … rather that they are running, and running optimally.

For those that have used a public cloud service, you will rarely have been presented details on the physical server your service will be running on.  Certainly you may have geographic information, but not that your service is running on a particular physical server in a given datacenter.  You do have choice about characteristics of the service, for example, if you are deploying a Virtual Machine in an Infrastructure as a Service model, you will want to define CPU, memory and possibly storage sizing.

Ideally, Private Cloud solutions should abstract away the physical infrastructure from the consumer, leaving them with choices that matter to their service.  For hybrid cloud solutions, this would in fact be a mandatory requirement.  Services in a hybrid world should be able to move dynamically between an on-premises solution and a public cloud solution.  So the choices offered would need to be limited to those valid in both offerings (or you would need to be able to translate between characteristics in one versus the other).

So it’s great to see that the next evolution from Microsoft seems like it’s going to fit the self-service bill!  This incarnation is called the Windows Azure Pack, and it’s delivered on the Windows Server 2012 R2 and System Center 2012 R2 family.  While much of the discussion on the Microsoft sites talks in terms of using the Windows Azure Pack (WAP) for Service Providers/Hosters … an Enterprise-style customer also acts very much like a Service Provider to its internal business groups.  It’s a great way to deliver services internally!

From the Public to the Private Cloud

With the introduction of Windows Azure Pack, Microsoft Private Cloud consumers can now enjoy many of the benefits of an “as a Service” model.  Be that Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), Windows Azure Pack can fit the bill.

Implementing both an Administrative Portal and a Consumer (Tenant) Portal, IT organizations are now able to behave much like a Service Provider to their internal customers.  Customers, as consumers, can then select from a gallery the service offerings that their IT organization develops for them.  The mechanics of what happens during deployment are then fully automated within the System Center 2012 R2 framework.  For example, should virtual machines need to be deployed to a cloud, then System Center Virtual Machine Manager 2012 R2 will execute the required steps to deploy the necessary templates from its library and execute any required customizations.  The consumer can then access the resources once they are deployed.  Importantly, no IT operations involvement is required – it’s fully automated.

IT staff are now able to focus on building service offerings that meet the requirements of the consumers.  They are also able to look at the overall status of their environment, including consumption rates, availability, etc.  They are also given tools that allow for chargeback to the consumers of the provided services.  These are the sorts of functions that Public Clouds have provided for some time – features that Private Clouds have been wanting to deliver.

Being Scalable and Elastic

There’s still a very important role for the infrastructure in all of this.  Private Clouds, like Public Clouds, are assumed to have limitless scale and elasticity … and a good degree of automation.  Nothing will derail a good Private Cloud more than having to call an IT person at each and every step.  Indeed, in many cases the scale and size of a consumer’s environment may change over time, and they may want to mitigate costs by sizing their system appropriately for different events.  Classically, finance systems need to scale to a much larger degree as they approach end-of-month, end-of-quarter and end-of-year processing.  Allowing the customer to increase and decrease resources dynamically is ideal (of course this assumes that the service itself is designed for such functionality).

If the Private Cloud needs to have the elasticity, scale and automation that the consumers are looking for, then so too does the underlying infrastructure.  Given that the Private Cloud solution offering is based on Windows Server 2012 R2 and System Center 2012 R2, features like Offloaded Data Transfer (ODX), SMB 3.0 and even UNMAP operations can benefit the solution, providing performance, flexibility and optimizations that the environment can utilize.  We’ve dealt with many of these features in earlier posts, and they all apply to Windows Azure Pack, as it consumes these services implicitly.

Deploying Windows Azure Pack

As mentioned, Windows Azure Pack consumes the services of the underlying infrastructure, both hardware and software.  As a result, the minimum requirement is a System Center 2012 R2 deployment that manages one or more Clouds as defined within System Center Virtual Machine Manager.  These clouds are surfaced up to the Windows Azure Pack through integration with System Center Orchestrator and its Service Provider Foundation service (a separately installable feature within Orchestrator).

There is guidance provided at the TechNet site here.  A minimal installation can be a great starting point, and that’s available as the “Azure Pack: Portal and API Express” option.

Summary of EMC & Windows Azure Pack

Demo of Windows Azure Pack on an EMC Private Cloud


Windows Server 2012 – SMB 3.0 and VNX Unified

With the advent of Windows Server 2012, support for the next iteration of Server Message Block (SMB) was released.  SMB 3.0 introduced a slew of new capabilities that provided scalability, resiliency and high availability features.  With these, a much broader adoption of file-based storage solutions for Windows Server 2012 and layered applications is possible.

It’s probably worthwhile to reiterate the point that SMB functionality is a combination of client and server component parts.  SMB 3.0, for example, is implemented as part of Windows Server 2012 and the Windows 8 client.  Existing Windows 7 clients will not support SMB 3.0, nor will Windows Server 2008 R2 and earlier.  What ends up happening with server/client combinations is that they will negotiate to the highest level of SMB that they can (together) support.  So a Windows 7 client connecting to a Windows Server 2012 file share will negotiate down to an SMB 2.x dialect, and the same is true for a Windows Server 2008 R2 client connecting to a Windows Server 2012 server.  In these combinations, none of the SMB 3.0 features are available. As a result, you should assume that the following only applies to Windows Server 2012 and Windows 8 – acting as file servers and clients as appropriate.
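If you want to confirm what was actually negotiated, the SMB client cmdlets introduced with Windows 8 and Windows Server 2012 can show the dialect per active connection; the output below is illustrative only.

# On an SMB client (Windows 8 / Server 2012 or later), list active connections
# and the SMB dialect negotiated for each one
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect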

Now, while saying that it’s only Windows Server 2012 … there are solutions like the VNX Unified platform that provide SMB 3.0 services.  The VNX file implementation has been upgraded to support SMB 3.0.  In fact, EMC was the first storage vendor to bring an SMB 3.0 implementation to market.

SMB 3.0 … The Gift that keeps giving

The feature set of SMB 3.0 is rather large, and we’re not going to dissect each feature here, but there are features that add functionality, like Remote VSS for backup/restore, and SMB 3.0 Encryption, which protects communications over IP networks by effectively scrambling the data for anyone trying to eavesdrop on the network link.  There are also a number of caching solutions like BranchCache and directory lease caching that help accelerate performance for remote office users.  All these features are also part of the VNX Unified File implementation.
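On the Windows Server 2012 side, SMB encryption can be enabled per share or server-wide with the built-in SMB cmdlets; the share name and path here are examples only.

# Enable SMB 3.0 encryption on a single share (example name and path)
New-SmbShare -Name 'SecureData' -Path 'D:\Shares\SecureData' -EncryptData $true

# Or require encryption for every share on this server
Set-SmbServerConfiguration -EncryptData $true -Force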

There are, however, some rather important features …

Continuous Availability

This feature is implemented to support high availability configurations.  If you are planning on running applications that consume storage from a file share, then there’s an expectation that the share provides a degree of high availability such that it does not become a single point of failure.

In the Windows world, Continuous Availability is provided when implementing the Scale-Out File Server role within Windows Failover Clustering. This role allows all nodes in the cluster to service file share requests and protects against outage of the file share in the situation where a single server fails, or goes offline during something like an OS patch installation.
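On the Windows side, Continuous Availability is a property of the share itself. Here is a minimal sketch using the built-in SMB cmdlets; the share name, path and account are examples, and on a clustered file server the setting is typically on by default.

# On a Scale-Out File Server cluster node, create a continuously available share
New-SmbShare -Name 'VMStore' -Path 'C:\ClusterStorage\Volume1\VMStore' `
             -FullAccess 'DOMAIN\HyperVHosts$' -ContinuouslyAvailable $true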

For VNX, Continuous Availability is a feature enabled on specific file shares. When enabled, the file share implements additional functionality and is protected against the outage of a single DataMover (the component providing the share) by persisting state to a redundant, secondary DataMover.  In the event that the primary DataMover fails, the persisted state for files open on the shares is resumed by the secondary DataMover. File locks, application handles, etc. are all transparently resumed on the secondary DataMover. As DataMovers are implemented in an Active/Passive configuration, both scale and incremental redundancy are added.

Multi-Channel Connections

The main theme for SMB 3.0 is to provide the storage services for a range of applications. The support for SMB 3.0 extends from general purpose file shares to SQL Server databases and even as the location of Hyper-V Virtual Machines and the applications that they run.  As a result, the workloads can be significant.  It’s clear that scalable performance is a requisite.

For a Windows Scale-Out File Server implementation, SMB 3.0 clients can connect to the various IP addresses advertised by the servers themselves. For a VNX File Share, any given DataMover can have multiple connections into the environment, and as a result, scale is provided by connectivity to these different end-points (that is shown in the video below). The discovery of the multiple end-points and utilizing them is automatically managed by the client/server connection.

But Multi-Channel connectivity not only provides scale, it also provides high availability.  Should a single network connection fail, communications will remain active on any remaining connections.
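From an SMB 3.0 client you can check whether multiple channels have actually been established; nothing in this snippet is VNX-specific.

# List the channels established to each SMB server; multiple rows per server
# indicate that Multichannel is in use
Get-SmbMultichannelConnection

# Multichannel is enabled by default on Windows 8 / Server 2012 clients,
# but it can be toggled if needed
Set-SmbClientConfiguration -EnableMultiChannel $true -Force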

Remote Direct Memory Access (RDMA)

Networking, in general terms, generates work for server CPUs.  This happens because the TCP/IP traffic needs to be assembled and disassembled, and that’s generally done by the CPU.  As a result, the more data you store on a file share, the more work the CPUs tend to have to do.  But the whole point is to put lots of data out on SMB 3.0 targets … so what’s the admin to do?

The recommendation from Microsoft is to move to RDMA deployments.  The RDMA implementation mitigates the additional work for the CPU in constructing and deconstructing packets by essentially turning the transfers into memory requests (that would be the direct memory part).  There is still work to get the data into memory on the source, and to extract it on the target, but all the packet overheads are removed.

RDMA is implemented by a number of vendors using varying technology. The one that currently touts the best performance (that’s a moving target) would seem to be InfiniBand. This solution does require InfiniBand (IB) adapters in the servers, and a switch or two for communications, but it offers very low latency and large bandwidth capabilities.

IB is not the only RDMA game in town .. other vendors are delivering RDMA over Converged Ethernet (RoCE), which may be able to use existing converged infrastructure.

Because we are talking about the VNX, we should add that as of the date of this post, VNX does not have an RDMA solution, at least not natively in the File head. But if there were a desperate need to implement RDMA, then the storage could certainly be surfaced up as block storage to a Windows cluster and shared out via Scale-Out File Server services. If those servers had IB adapters, then you can have your cake and eat it too. It’s just that it’s a layer cake :-)


Windows Server 2012 – Thin LUNs and UNMAP

Amongst the various storage enhancements in Windows Server 2012 we have seen the introduction of support for Thin storage devices.  In the past, Windows Server environments would merely tolerate a Thin device, in that these operating systems did nothing special for Thin storage.  As long as the LUN was able to deal with Read and Write operations, it could happily work with Windows Server.  But that limited some of the benefits that Thin devices brought, specifically around storage pool efficiencies.

Why Thin devices at all?

The real benefit of Thin storage is in the fact that Thin devices only do an allocation of actual storage when needed.  Effectively, a Thin LUN is connected to some backend storage pool.  The pool is where all the data really lives.  The Thin LUN itself, might be considered a bunch of pointers that represents blocks (or Logical Block Address ranges).  Where no data has been written against the LUN, the pointers don’t point to anything.  When data is written, blocks are allocated from the pool, the pointer now points to that block, and the data is written.

Thin implementations are efficient, since they don’t consume any storage until required, and let’s face it, in the non-Thin world people padded volumes with a bunch of additional space “for growth”.  Such padding created “white space” and meant that storage efficiency went down.  Thin LUNs fix that because, for the most part, it doesn’t matter how big they appear, it only matters how much storage they allocate.  You can have a 20 TB Thin LUN, but if you only ever write 5 GB to it, then only 5 GB will ever be consumed from the Pool.

Of course, if the Pool is actually smaller than the sum of the “advertised” space of the LUNs, and everyone starts to allocate all that space, you do end up with an issue.  But that’s a discussion for another time.

What does Thin device support help us do?

In effect, Windows Server 2012 does a couple of things if it detects a Thin storage LUN. The first is that it supports the concept of threshold notifications.  These notifications are Windows log entries that are generated when new allocations are made against a LUN and a certain percentage of the advertised capacity has been consumed.  As an example, consider that you have a 1 TB Thin LUN with a threshold notification at 80% of the “advertised” size of the LUN – when the next write takes the allocation over 800 GB, a notification is sent from the storage array back to the Windows Server 2012 instance, and it will log the event in the Event log.

The second (and honestly, more interesting) piece is that Windows Server 2012 will now send UNMAP operations to the LUN when a filesystem object is deleted, or when it attempts to optimize the volume.  In the past, Windows Server environments did nothing to tell the LUN that a file had been removed and that the space allocated for it was no longer required.  That meant that any blocks that were at some point written to would always remain allocated against the Thin device.  The only way to resolve this was to use a variety of manual techniques to free this now-unused space.  Windows Server 2012 removes the need to manually intervene, and makes this space reclamation automatic.
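If you want to confirm that the operating system is actually allowed to send these delete notifications, the standard fsutil tool can report and change the setting (0 means delete notifications/UNMAP are enabled, which is the default):

# Check whether delete notifications (TRIM/UNMAP) are enabled; 0 = enabled
fsutil behavior query DisableDeleteNotify

# Re-enable them if they have been turned off
fsutil behavior set DisableDeleteNotify 0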

When is Thin device detection enabled?

Thin LUN detection occurs dynamically between a storage array that supports the relevant Windows Server 2012 specification and the Windows Server 2012 installation. As this is a new feature, you will find that existing arrays with their pre-existing version of firmware or microcode may not support it.  In general, customers should expect that they will need to update the firmware or microcode on their systems to get this feature.  Of course each product is different, so it will be necessary to check the exact details with the specific vendor.

For EMC storage arrays, support for this functionality is being made available in the VNX and VMAX product lines. For both products, it will become available as FLARE and Enginuity updates for VNX and VMAX, respectively. Unfortunately, for customers with prior generations of CLARiiON and Symmetrix DMX products, there are currently no plans to offer Windows Server 2012 Thin compliance implementations.  Thin devices will work in the same way as always for these earlier EMC arrays – but they will not be seen as Thin by Windows Server 2012.

How does Thin device support execute?

For the most part, the interactions between a Windows Server 2012 instance and a Thin LUN are automatically managed.  Nothing new really happens when a write is sent to a Thin LUN: it will, as always, be serviced by the LUN as it always has been, notwithstanding the threshold notification activity.

It is really the activity that happens after a file is deleted that is where the difference in behavior occurs.  After a file deletion, Windows Server 2012 NTFS will identify the logical block address ranges that the file occupied, and will then submit UNMAP operations back to the LUN.  This will cause the storage array to deallocate (UNMAP) the blocks from the pool used for the LUN.

While this is automatic for the most part, it is also possible to manually get the Windows Server 2012 instance to re-check the volume and, where blocks could be deallocated, issue those deallocations.  This is done either through the Optimize-Volume PowerShell cmdlet or from the Optimize Drives GUI.  There is also an automatic task that is executed on a regular cycle by Windows Server 2012 – this is also visible from the Optimize Drives GUI.
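For a thin-provisioned volume, the manual re-check is the retrim pass of Optimize-Volume; the drive letter below is just an example.

# Re-scan the volume and send UNMAP for free space still allocated on the array
Optimize-Volume -DriveLetter E -ReTrim -Verbose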

The Fix Is In

Invariably there are items that need some tweaking when you release a new feature.  Support for Thin LUNs is no different in this regard.  Optimizations have been added to the support of Thin devices and UNMAP operations, and these have been made available in KB2870270 (as of July 2013).  Go straight over to http://support.microsoft.com/kb/2870270, download it, test it in your environment, and when you deploy a server don’t forget to include it.

Thin Device support – When It All Works

When fully implemented, Thin device support happens quietly in the background … but in the following demonstration, we’ve attempted to show you how this all happens.