Windows File Server or EMC File Services? You decide…

I was just asked if EMC Marketing had a competitive slide deck covering the differences between Windows File Services in Windows Server 2012 R2 and the EMC file portfolio of products.  Hmmm, I thought… What ARE the differences?  There are so many, I'm not sure where to start.  So let's do this: let's start with the old expression, "necessity is the mother of invention".  In this context, that means, "what was the initial set of requirements?"  Let's take a look at a potential list:

  1. Store "home" directories
  2. Store high-performance VHDx files in a Hyper-V Server Cluster
  3. Replicate advertising content (videos, PowerPoint docs, and graphics files used for placing ads in our magazines and web sites) to another site 120 miles away so it can use the unified file space
  4. Link a bunch of file servers together using DFS-R (Distributed File System - Replication) Services
  5. Store a single department's files in a single remote location
  6. Store user files so that they can access them on any mobile device anywhere in the world without using VPN

Ok… that's a diverse set of requirements.  So many use cases, all of them using file servers to reach their goals.  What should we do?

Here's where it gets easy, but also interesting. By clearly defining what you need to accomplish, you can easily match needs to features or offerings.  For example, #1, store home directories: How large are they? Are there capacity quotas? Are they scanned for corporate governance compliance (data that might contain credit card numbers)? Are the directories synchronized to laptops using My Documents redirection?  If all of these are true, you may need the advanced features offered by an Isilon scale-out file system. Check out Isilon Family - Big Data Storage, Scale-out NAS Storage - EMC

What about #2, store high-performance VHDx files for use in a Hyper-V cluster?  Well, now we are in a completely different place, eh? All of a sudden we have concerns about performance, availability, clustering support, metadata handlers, snapshots, recoveries, and potential replication, and we need to arrange VSS backups of individual files.  It's as if we aren't talking about a file server anymore.  We are finding more and more customers clamoring for the advanced, scale-to-fit, extremely efficient, and trusted VNX Family - Unified Storage Hardware and Software for Midrange - EMC.

#3 - Replicated and shared file spaces have been a challenge for IT professionals since the dawn of IT.  Microsoft's Distributed File System Replication is a highly evolved solution for specific use cases.  Sharing replicated file spaces is a tricky task, and Windows Server 2012 R2 delivers a unique, highly optimized set of tools that allow small sets of data to be replicated among sites efficiently.  There are complex setup steps, but virtualizing your Windows file servers can ease management headaches and reduce costs!  Our paper on Microsoft private clouds is a great starting place for your journey: http://www.emc.com/collateral/whitepaper/h11228-management-integration-cloud-wp.pdf  Likewise, #4 specifically calls for DFS-R; EMC technologies, working WITH Microsoft technologies, will lower costs, reduce downtime, reduce support issues, and let you reach your business goals faster.

#5 -- Store a single department's files in a single location? This is one of those requirements that can go in many directions precisely because it seems simple.  The problem here is latent -- it doesn't present itself at first.  At first blush, you might think, "hey, no problem: stick a VM out at the remote office, back it up with Avamar - Backup and Recovery, Data Deduplication - EMC, and I'm done."  Maybe you are, maybe you're not.  What if the remote location has power issues or physical security issues, or the staff constantly deletes files and routinely needs point-in-time recoveries?  The simple problem just consumed your week.  EMC knows that these little "ankle biter" issues are derailers for IT shops. Handling remote file usage has become the bane of many IT shops and IT managers.  EMC sees that there is more to remote file access than placing a VM at a remote location; that's why we introduced leading technologies to help you and your users get what they want, when they want it.  My ol' pal Paul Galjan has posted an article to get you thinking about the possibilities: http://flippingbits.typepad.com/blog/2014/05/mobilize-sharepoint-with-syncplicity.html

#6.  Access-anywhere files.  Oddly, sometimes Dropbox just isn't good enough.  That's why EMC launched Syncplicity.  Please take a moment to see all it can do for your growing group of remote and mobile users.

The summary of all this is that "file" doesn't mean "file server" anymore.  It means storage.  Every storage scenario is different, and that's why EMC has a proud portfolio of offerings.  Not just services, not just products, not just software.  EMC has become the answer to an ever-increasing number of questions and scenarios.  Thanks for reading.

Getting Started with Azure Hybrid Cloud PowerShell

Leveraging the power of PowerShell, it's easier than ever for an EMC customer to combine their on-premises investment in, say, Hyper-V and EMC VNX with the power of Microsoft's cloud offering, Azure. While many of our customers are leveraging Hyper-V 2012 to manage their INTERNAL cloud, the embrace of a hybrid model is still in its very early stages.


Here's how to get your feet wet and see how this thing drives.


Please note that you will need a Windows Azure account with an active subscription. You can start a free trial subscription by visiting the following site:

http://aka.ms/DoAzureTrial

 

It will require a credit card, but this is primarily for identity verification!

 

Once the account is active and you have an active subscription, you can proceed with your PowerShell experimentation pretty easily. Download the PowerShell bits and bobs at the following location:

http://www.windowsazure.com/en-us/downloads/

 

You are looking for Windows PowerShell under the command-line tools section. Once the installation has completed, you are ready to rock and roll! Open the appropriate PowerShell window by going to the Windows Azure folder in your Start menu and clicking on Windows Azure PowerShell:

Screenshot (14).png

Execute a Get-Module command and you will see that there is a loaded module – appropriately named Azure – ready for business. Now I want to connect it to my active Azure account. To achieve this, it’s as easy as typing:

PS C:\> Add-AzureAccount

 

auth.png

 

Without parameters, it opens an authentication window (shown above) into which you enter your Windows Live Azure account credentials.

That’s it. You are now ready to run some stuff against your Azure cloud account. Let’s say you want to see what locations are available for your planned new VM. To see what’s available, simply type:

PS C:\> $locations = Get-AzureLocation

 

To see what VM Images are available in the service catalog, simply type:

PS C:\> $imagelist = Get-AzureVMImage

 

At the time of this blog post, there are more than 200 VM base images available – right out of the gate. Windows and Linux are the available platforms (multiple flavors of each). Take a look at the list of what’s available by typing something like the following:

PS C:\> $imagelist | Format-List label,description,imagename,os

 

This will list some descriptive fields of the object list. There are images with SQL Server 2012 Preview Edition preloaded, SQL 2014 Data Warehouse Preview Edition, etc. Obviously you could search the labels for what you need to make this very dynamic (but that’s outside the scope of what we are doing today). Platforms and OS are listed below:

Microsoft            | Linux                        | Microsoft SQL       | Oracle SQL
Windows 2008 R2      | OpenSUSE                     | SQL Server 2008     | Oracle 11g
Windows 2012 and R2  | CentOS 6.3                   | SQL Server 2012     | Oracle 12c
                     | Ubuntu 12.04/12.10/13.04     | SQL Server 2014     | WebLogic 11g
                     | SUSE Linux Enterprise 11 SP3 | BizTalk Server 2013 | WebLogic 12c
                     | Oracle Linux 6.4             |                     |

Oracle 11g and 12c are available, running on the Windows platform. We know this by checking the underlying OS in the list provided above. It is very interesting to see what’s available in the catalog. Now that I have an idea of what’s available, as well as the possible locations for the VMs to reside, I need to define the service name and the password for the administrative account. These can be simple strings.

PS C:\> $mySvc = "myservicename"

PS C:\> $myPwd = "yourpassword"
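
Rather than indexing into the image list by position, you could also pick an image dynamically, along the lines of the label search mentioned earlier. A minimal sketch (the label text below is an assumption for illustration; inspect your own $imagelist output for the exact wording):

```powershell
# Pick the first catalog image whose label mentions Windows Server 2012 R2.
# "Windows Server 2012 R2" is an illustrative match string, not a guaranteed label.
$myImage = $imagelist |
    Where-Object { $_.Label -like "*Windows Server 2012 R2*" } |
    Select-Object -First 1

$myImage.ImageName
```

The resulting $myImage.ImageName can then be passed to -ImageName in the deployment command below.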


Ready to deploy a VM! Here is what the command line looks like:

PS C:\> New-AzureQuickVM -Windows -name "SomeVMName" -ImageName $imagelist[4].imagename -ServiceName $mySvc -Location $locations[0].name -Password $myPwd


Pretty simple, powerful – and ready to play with. I literally had this up and running in around 15 minutes – from opening the account and downloading the PowerShell bits to deploying workstations in the “cloud”.
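
A quick way to check on the new VM afterward, sketched with the same names as above:

```powershell
# Retrieve the VM from the cloud service and inspect its status.
$vm = Get-AzureVM -ServiceName $mySvc -Name "SomeVMName"
$vm | Select-Object Name, InstanceStatus, IpAddress
```

InstanceStatus should eventually read ReadyRole once provisioning completes.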


You also have the option of uploading your own sysprepped images to cloud blob storage and leveraging your own optimized images. With Orchestrator runbooks and System Center Operations Manager plugins, it's an interesting approach to hybrid cloud.

When Disaster Strikes Back

This post is dedicated to my friends at Adlon.

We at EMC talk a lot about disaster recovery. We have plenty of demos on it. We live DR. At least at our customers' sites.

And in our own production: no outages at all.

But sometimes the shoemaker's son goes barefoot.

Not because he does not have the tools, but maybe because he gets lazy about his own :-)

I am outing myself here :-) about my demo lab ...

Due to a lack of DR hosts (and not due to a lack of storage :-) ), not all of my Hyper-V VMs are disaster-protected by a DR host or cluster. (Well, supporters can donate hosts if they want. There is enough open space on my EMC cycling shirt for sponsors.)

So, does my demo environment really need to be disaster-protected? Not really; it's a sandbox. I could reinstall it from scratch ...

 

But what if disaster really happens?

When does it happen?

Do I need the environment when it happens?

Do I want to reinstall from scratch? Not really.

 

It happened to me yesterday.

One day before vacation . . .

 

Here is, briefly, what happened:

Thursday, 1pm, partner workshop with Adlon, one of our Cloud OS partners in Ravensburg.

My personal disaster happens.

We had already spent half a day on integrations from EMC into Microsoft, like ESI and PowerShell.

 

Then it was time to do some Azure Pack and SCVMM demos.

I tried to connect to my SCVMM host, but the console failed.

Pretty soon I figured out that the Hyper-V host running SCVMM and one node of my SQL Servers had crashed.

Well, it did not really crash; it was in a dimension between here and the cloud. It still responded to pings, but remote management with WMI/CIM/WS-Man no longer worked.

 

OK. Plan B: I ran through the demos with videos, a plan I normally don't like.

 

Friday, 10am, home office.

Six hours until vacation, and I do not want to leave my lab in a nonworking state.

Encouraged by the Adlon folks, I thought it was a good idea to practice what I am always preaching.

Yes, dogfood: eat your own meal! Do your DR!

Also, I had to finish a presentation on SCVMM with some screenshots, so I needed my SCVMM VM.

The easy way would be to power the failed host off and on.

But how about doing a disaster test rather than a "reboot"?

 

My failed host, Agent-J, was still only reacting to pings. The lab admin was not available, and unfortunately I do not currently have KVM access to that host.

What are my options? I have 17 VMs on that host; 5 are guest-clustered. Good, ignore them.

The others are application-replicated (Exchange machines).

But one is not protected:

My SCVMM host. Normally not that important... unless you use some self-servicing stuff like Azure Pack :-)

Last backup from Tuesday. That could be Plan C.

Plan B: Unmask the production LUN from the host and present it to another host. Might be an option, but if the data is corrupted ...

Plan A: The host is running on a RecoverPoint-protected VNX5300 array. Why not replicate the LUN to the DR site and do some testing first?

Obviously, the LUN is not in a consistency group (CG) right now. No problem.

 

Step 1: Create a CG using ESI's RecoverPoint plugin. Pretty simple; it creates a DR LUN on my array in the DR site.

(You may notice the red cross for my production host in the picture :-) )

Creating the CG took only a few minutes, plus some minutes to sync the 2 TB of data to the remote array.

Not too long a wait, but a good time to grab a coffee.

strega.jpg

 

A few weeks ago I blogged about something I called SRM_4_Hyperv. A good chance to test this thing now in a real-world disaster.

 

Rather than running the full automation, I wanted to test the individual steps to verify that everything works in a real disaster :-)

RecoverPoint has a PowerShell integration with EMC's ESIPSToolKit, so the automation is a no-brainer.

So I ran the script in the PowerShell ISE.

The first thing I test is which of my two sites is currently running production for that host/volume:

 

Fine: production is on Agent-J, array VNX5300C, protected by RecoverPoint site RPA_C.

Then I check whether I can do an ordered shutdown/unmount of the production host and volumes:

 

 

Since the host is no longer manageable, I enter my forced process.

This is the point where I enable volume access on the remote site.

Now comes the fine trick: I do not need to specify a DR host up front. I can select one dynamically, based on DR capacity in my DR site.

In this example, Agent-K was selected as the DR host.

 

After presenting and rescanning the LUNs, the volume gets discovered and mounted on Agent-K.

 

Now is the best time to do a selective import. One method is automated testing of the VM configs using Compare-VM; Ben Armstrong has written a good description of that. This is what my script normally uses when I do an automated failover.

When I do a selective failover, I prefer to use the Import wizard in Hyper-V Manager:
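
For reference, a minimal sketch of the Compare-VM route mentioned above (the configuration path is hypothetical; point it at the VM's XML config on the newly mounted volume):

```powershell
# Build a compatibility report for the VM configuration found on the failed-over LUN.
$report = Compare-VM -Path "E:\SCVMM01\Virtual Machines\<GUID>.xml" -Register

# Review anything that would block the import, e.g. missing virtual switches.
$report.Incompatibilities | Format-List

# If the report is clean (or after fixing the incompatibilities), complete the import.
Import-VM -CompatibilityReport $report
```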

 

I did the above for my SCVMM VM. Since the machine was not shut down cleanly and files were open, the guest needed to run chkdsk once to get into a consistent state.

After that, my VM was up and running.

 

Once satisfied that the state is good, it is a good time to fail over the complete replication to the remote site and replicate back to the old production site:

 

For people who do not know RecoverPoint: when doing a failover, we first take the last point in time of synchronization. That guarantees that every I/O has made it to the remote site. If, for whatever reason, that image does not work (e.g. in a rolling disaster), we can choose from:

A: A consistent bookmark (triggered by backups like VSS, self-defined bookmarks, events, etc.)

B: Any point in time, on a timescale down to microseconds.

This gives us by far the most granular disaster recovery points!

 

 

To sum up:

Hyper-V is rock solid and fully crash-recoverable with 2012 R2. You may also want to consider VM checkpoints in combination with array-based snapshots or bookmarks for consistency points, but a crash recovery always works.

 

BTW: It took me 20 minutes during my breakfast to sync the 2 TB of data and do the assisted failover to my remote site.

The original site's LUN is now blocked from host access, so if the host reboots, it will not be able to start the failed-over machines.

Having a DR strategy: good.

Having Hyper-V VMs protected by RecoverPoint: priceless.

 

 

Friday, 11am. Work: done. DR: done. Ready to go skiing for a week ...

Managing Fibre Channel in VMM with SMI-S or How I Got in the Zone

Greetings from the Microsoft Technology Center in Silicon Valley (MTCSV) in Mountain View, CA.  I have been putting in a lot of time lately on the new System Center 2012 R2 Virtual Machine Manager infrastructure that is hosting all the operational compute and storage for the MTC.   There are numerous blade chassis and rack mount servers from various vendors as well as multiple storage devices including 2 EMC VMAX arrays and a new 2nd generation VNX 5400.  We have been using the SMI-S provider from EMC to provision storage to Windows hosts for a while now.  There is a lot of material available on the EMC SMI-S provider and VMM so I am not going to write about that today.  I want to focus on something new in the 2012 R2 release of VMM – integration with SMI-S for fibre channel fabrics.

 

There are many advantages to provisioning storage to Windows host and virtual machines over fibre channel networks or fabrics.  Most enterprise customers have expressed interest in continuing to utilize their existing investments in fibre channel and would like to see better tools and integration for management.  Microsoft has been supporting integration with many types of hardware devices through VMM and other System Center tools to enable centralized data center management.  The Storage Management Initiative Standard (SMI-S) has been a tremendously useful architecture for bringing together devices from different vendors into a unified management framework.  This article is focused on SMI-S providers for fibre channel fabrics.

 

If you right-click the Fibre Channel Fabrics item under Storage in the Fabric view and select the Add Storage Devices option, a wizard will open.

FC Fabric menu.PNG.png

The first screen of the wizard shows the new option for 2012 R2 highlighted below.

Add SMIS FC.PNG.png

We are using the Brocade SMI-S provider for Fibre Channel fabrics.  The provider ships with the Brocade Network Advisor (BNA) fabric management tools; we are using BNA version 12.0.3 in the MTCSV environment.  The wizard will ask you for the FQDN or IP of the SMI-S provider that you wish to connect to.  It will also ask for credentials.  We are doing a non-SSL implementation, and we left the provider listening on the default port of 5988.  That is all there is to the discovery wizard.  The VMM server will bring back the current configuration data from the fibre channel fabric(s) that the SMI-S provider knows about.  In our case we have fully redundant A/B networks with 4 switches per fabric.  Here is what the VMM UI shows after discovery is complete.

Discovered Fabrics.PNG.png

Once we have discovered the fabrics, we can go to the properties of a server that has FC adapters connected to one or more of our managed switches.  The first highlight below shows that VMM now knows what fabric each adapter is connected to.  This allows VMM to intelligently select what storage devices and ports can be accessed by this server adapter when creating new zones.  That’s right: with VMM 2012 R2 and the appropriate SMI-S providers for your storage and FC fabric, you can do zoning right from within the VMM environment.  This is huge!

BL-460 properties.PNG.png

The second highlight above shows the Hyper-V virtual SAN that we created in VMM for each of the adapters.  The virtual SAN feature set was released with Windows Server 2012 Hyper-V.  It is the technology that allows direct access to fibre channel LUNs from a virtual machine, which can replace pass-through disks in most cases.  That is also a really big topic, so I’m going to write more about it in the context of VMM and fibre channel fabrics in a later article.  For today I want to focus on the use of VMM for provisioning fibre channel storage to Hyper-V parent clusters.  Now let’s take a look at the zoning operations in VMM.

 

The next figure shows the Storage properties for a server that is part of a 5-node cluster.  The properties show what storage arrays are available through fibre channel zoning.  You can also see the zones (active and inactive) that map this server to storage arrays.

storage properties.PNG.png

Lastly, I want to show you how to zone this server to another storage array.  The place to start is the storage properties window shown above.  Click the Add | Add storage array icons to get to this screen.

create new zone.PNG.png

As you can see from the window title, this is the correct place to create a new zone.  This is the same regardless of whether this is the first or, as in this case, the third array you are zoning to the selected server.  I highlighted the Show aliases check box that I selected while making the above selections.  In order for the friendly-name zoning aliases to be available, they must be created in the BNA zoning UI after the server has been connected to one of the switches in this fabric.  You can also see the zone name that I entered; it will be important in the final steps of this example.

Now that the zone has been created let’s take a look at the Fibre channel fabrics details.

FC Fabrics and Zones.png

I’ve highlighted the total zones defined in the Inactive and Active sets for the A fabric.  This shows that new zones have been created but have not yet been moved into the Active zone set.  If you open the properties of the Inactive zone set and sort the Zone Name column, you can see the zone that we created two steps above.

FAB_A Properties.png

To activate this zone, use the Activate Zoneset button on the ribbon.  One important detail is that you can activate either all of the staged zones or none of them.  There are two zones in the Inactive zoneset that will be activated if you push the button.  Be sure to coordinate the staging and activation of zones if the tool is being shared with multiple teams or users.

Activate Zoneset.png

The world of private cloud management is changing rapidly.  VMM and the other System Center products have made huge advancements in the last two releases.  The investments that Microsoft and storage product vendors have been making in SMI-S integration tools are starting to bring real value to private cloud management.  Take a look; I think you’ll be surprised.

 

Phil Hummel

BigBang Update! BRS2GO now with SQL 2012 and SCVMM 2012 R2!

I finally finished an important part of BRS2GO:

SQL and SCVMM are now included in version 1.38-79481 on machine HyperVN1.

 

Just run the BRS2GO updater, or download the scripts from Test Networker 8.10 with NMM 3.0 for Exchange 2013, SQL 2012 SP1 and Hyper-V on VMware Workstation!

 

To install BRS2GO with NetWorker, Hyper-V and SQL, just run:

Install-brs2go.ps1 -action install -nw 1 -HV 1 -sql 1 -EX 0

 

 

After installation you can back up a Hyper-V VM (I installed a new empty VM; feel free to create your own):

 

 

The SQL Server on HyperVN1 hosts the SCVMM database, which can also be backed up by NMM:

 

 

 

As a last bit, I will work on an automated installation of the SSMS plugin (SQL Server Management Studio), but I have to figure out how to install it unattended.

Meanwhile, just run the NMM setup manually again to install the plugin.

 

 

 

 

I packed the required Sources2.vhd, containing evals of SCVMM, SQL 2012 and WAIK, into multiple files to ease the download :-)

SCCVMMSQLsources2.7z.001

SCCVMMSQLsources2.7z.002

SCCVMMSQLsources2.7z.003

SCCVMMSQLsources2.7z.004

 

I am also working on Exchange 2013 SP1 on Windows Server 2012 R2. If you are a TAP customer and interested in the new version, PM me!