Announcing ESI 1.2

ESI (EMC Storage Integrator) is a new tool from EMC designed to simplify storage administration for the Microsoft administrator (you may recall I blogged about the initial product announcement a few months ago). It's not that managing EMC storage is difficult; we already have several wizards built into our storage management tools, easy-to-use dashboards, and application-based wizards designed for provisioning storage.  The problem is that provisioning storage is only one of many steps when setting up SAN drives for a Windows host or Microsoft application.

Typically you configure the storage, including modifying settings and granting permissions so the server can access the new storage drives. You configure the fabric layer to make sure the host and the storage have permission to talk to one another. Then you use Windows Disk Administrator to find the new drive, set the proper disk offset for Windows, format it with a file system, assign a drive letter, and mount it in Windows. If the drive will be used for clustering, you also need to use the Failover Cluster Manager tool to configure clustering. Whew... and that's just getting the new drive ready for Windows to use!
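For readers who like to see the moving parts, here is a minimal sketch (not ESI itself) of what the Windows disk-prep portion of that workflow looks like when scripted against the built-in diskpart tool. The disk number, 1 MB alignment, volume label and drive letter F: are illustrative assumptions you would adjust for your own host, and it must run from an elevated prompt:

import subprocess
import tempfile

# Assumed values for illustration only: the new SAN LUN shows up as disk 2
# and should become drive F: with a 1 MB partition offset.
DISKPART_SCRIPT = """\
select disk 2
attributes disk clear readonly
online disk noerr
create partition primary align=1024
format fs=ntfs quick label="AppData"
assign letter=F
"""

def prepare_new_san_disk():
    """Write a diskpart script to a temp file and run it non-interactively."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script:
        script.write(DISKPART_SCRIPT)
        path = script.name
    # diskpart /s <file> executes the commands in the script file in order.
    subprocess.run(["diskpart", "/s", path], check=True)

if __name__ == "__main__":
    prepare_new_san_disk()

ESI collapses these host-side steps (plus the array-side provisioning and permissions) into a single wizard, which is the point of the time comparison below.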

Once that is done you go to the application and create the new application database or SharePoint farm and place the files on this new drive.  In the case of SharePoint Portal Server this includes interacting with SQL Server (and maybe more SAN drives) for the content database.
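To make that hand-off concrete, here is a hypothetical sketch of the last step: creating a content database whose data and log files land on the newly presented drive. The SQL Server name (SQL01), the paths on F:, and the database name are illustrative assumptions (the folders must already exist), and it uses the pyodbc package rather than any EMC tooling:

import pyodbc

# Assumed database name and file locations on the new F: drive.
CREATE_DB_SQL = """
CREATE DATABASE WSS_Content_Projects
ON (NAME = WSS_Content_Projects_Data,
    FILENAME = 'F:\\SQLData\\WSS_Content_Projects.mdf')
LOG ON (NAME = WSS_Content_Projects_Log,
        FILENAME = 'F:\\SQLLogs\\WSS_Content_Projects.ldf')
"""

def create_content_database():
    # autocommit is required because CREATE DATABASE cannot run inside a transaction.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=SQL01;Trusted_Connection=yes",
        autocommit=True,
    )
    try:
        conn.cursor().execute(CREATE_DB_SQL)
    finally:
        conn.close()

if __name__ == "__main__":
    create_content_database()

In a real SharePoint deployment you would normally let SharePoint's own farm provisioning create the content database, but the point stands: the files still have to be directed at the drive you just finished preparing.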

All in all, we calculated that this can take up to 86 minutes!  And that's if only one person is doing the tasks.  What if you need to rely on others to finish their part of the work before you can proceed?!

Enter ESI.  ESI handles all of these tasks, including provisioning the storage, setting permissions, formatting the drives and assigning drive letters for Windows, and even provisioning your SharePoint farm for you!  Using MMC, ESI completes these tasks as background operations in seconds to minutes, so you can minimize the number of management screens you have to access and you don't need to be a storage expert to configure drives for Windows. ESI also provides reporting capabilities, so you can log on and look at a storage array, see what hosts are connected to it, what drives are presented, how they are connected, and so forth.


What's the best part about ESI? It's FREE! The only requirement is that you are running EMC storage and a Microsoft operating system.

To get ESI, current customers and partners can log into EMC Powerlink and download it. You can also ask your EMC sales team for a copy and they will be happy to help you.

Still not convinced? Head over to YouTube for a short video showing how ESI works when adding new drives and configuring a new SharePoint application:

ESI has many enhancements in the works so stay tuned for more updates to see how ESI continues to simplify managing storage for Windows!

Multi Data Center Cluster, Been There, Done That!!! Hyper-V & EMC VPLEX Baby!!!

by Sam Marraccini | Twitter @EMCMSFT | Sam.Marraccini@emc.com


I hear it from customers all the time: “My {insert application} needs to be always-on. It can’t go down. My users and my business demand it.” This availability question goes back to the roots of EMC. Game-changing technologies like TimeFinder and SRDF (Symmetrix Remote Data Facility) have been around since the late '90s and, arguably, set the stage for the beginning of the storage software industry.

From an EMC/Microsoft integration perspective, EMC has long provided functionality like Geo-Span and SRDF/CE (Cluster Enabler) to extend shared storage clusters across data centers. In fact, you can find an outstanding business continuity white paper detailing Hyper-V with EMC Symmetrix VMAX and SRDF/CE. In an EMC Symmetrix environment, SRDF/CE continues to provide an excellent solution for extending a Microsoft cluster across data centers, and EMC customers continue to deploy it as the primary solution for high availability and for extending Microsoft clusters.

The introduction of advanced virtualization capabilities has extended the possibilities. No longer are we limited to cluster failover across data centers. We now have the ability to extend the production volume itself across data centers, allowing the HA solution to reach beyond the Microsoft cluster. Now we can also include the “other” servers that need to be available for the application to be, well… always-on. Enter the EMC VPLEX family.

From VPLEX marketing material:

The EMC VPLEX family is a solution for federating EMC and non-EMC storage. The VPLEX platform logically resides between the servers and heterogeneous storage assets supporting a variety of arrays from various vendors. VPLEX simplifies storage management by allowing LUNs, provisioned from various arrays, to be managed through centralized management interfaces.


The EMC VPLEX platform removes physical barriers within, across and between data centers. VPLEX Local provides simplified management and non-disruptive data mobility across heterogeneous arrays. VPLEX Metro provides mobility, availability, and collaboration between two VPLEX clusters within synchronous distance. VPLEX Geo further dissolves those distances by extending these use cases to asynchronous distances.



EMC VPLEX is not limited to EMC storage, providing a solution that can extend to all application components across storage arrays. This federation provides endless possibilities. In comparison, SRDF/CE can extend your existing Microsoft cluster across data centers, leveraging EMC Symmetrix and EMC Symmetrix Remote Data Facility. EMC VPLEX allows the logical extension of the LUN between data centers across heterogeneous storage arrays (and that LUN can easily be a CSV, or Cluster Shared Volume, for your Microsoft cluster).

In addition to the Microsoft cluster integration, EMC has been working to ensure Hyper-V and all Microsoft applications are fully integrated for a true business continuance solution.

The last two episodes of Inside The Partnership (EMC/MSFT) highlighted the advanced integration the EMC Solutions Team has been developing. The obvious integration point is Hyper-V Live Migration. I spent some time with James Baldwin at Microsoft TechEd 2011 reviewing exactly that:

Episode #8 covers EMC VPLEX & Microsoft Hyper-V. Take a traditional Microsoft cluster and extend the single CSV (Cluster Shared Volume) across data centers, allowing clusters to extend beyond traditional limitations. The cluster nodes have no idea they are physically separated. They act and provide availability as if they were in the same server rack.

Inside The Partnership (EMC/MSFT) Episode #8 - (http://tiny.cc/jv7to)

A detailed white paper on the VPLEX family can be found here: http://tiny.cc/u0s84, and the white paper "Microsoft Hyper-V Live Migration with EMC VPLEX Geo" can be found here: http://tiny.cc/k763s.

Shortly after Episode #8, the EMC Solutions team introduced an additional white paper covering long-distance application mobility. This architecture, "Long-Distance Application Mobility Enabled by EMC VPLEX Geo," highlights a typical customer with Windows Server, Hyper-V, SharePoint and SAP. (You can find the white paper here.) I quickly introduced the white paper at the beginning of Episode #9, then covered Big Data… a future blog topic.

Inside The Partnership (EMC/MSFT) Episode #9 – (http://tiny.cc/4q5jb)

Thanks for the feedback! Follow me on Twitter @EMCMSFT and subscribe to the YouTube channel to get the latest Inside The Partnership videos. I can also be reached via e-mail (sam.marraccini@emc.com).

Considerations for Customers Deploying Hyper-V

I recently visited a mid-size customer running Hyper-V for test and development who is interested in expanding Microsoft virtualization into their production environment, with a goal of being 100% virtualized by the end of next calendar year.  As part of this project they will be implementing a new SAN to consolidate their virtual machines, as well as to provide advanced data protection and scalability.

Since the customer has only been running Hyper-V on servers used primarily for in-house development, they asked me for some advice and best practices as they move forward with a production deployment of Microsoft virtualization.  Of course, this is one of those "it depends" answers, but below is a list of some of the items I came up with that I thought would be good to share:

  • Perform an assessment of the current hardware to determine which servers will be able to support virtualization.  There are several different tools available to assist, but Microsoft's MAP (Microsoft Assessment and Planning Toolkit) is the best fit, as it analyzes the hardware to determine what is available to use for Hyper-V. MAP will also identify servers that are underutilized and even create an ROI report to help you understand the potential cost savings of virtualizing.
  • When designing the application for a virtual deployment, be sure to treat the applications as you would in a physical deployment.  This is one of the biggest problems I see customers run into.  Just because virtualization allows customers to easily mix workloads on the same server, or even mix different data types on the same physical spindles, doesn’t mean you should take these shortcuts.  Use tools like Microsoft Performance Monitor (perfmon) to understand the performance requirements of your applications before deciding which applications will reside on which Hyper-V parent server (see the sketch after this list for one way to capture such a baseline).
  • Use pass-through disks when possible.  This tends to give you the best performance, and you avoid the 2 TB limit that VHDs have.
  • Use multiple NICs so you can separate Hyper-V administration from general network traffic.  This will help ensure you can always access the Hyper-V server even if there are performance problems due to high traffic between the applications.  Additional NICs will be needed for storage traffic if iSCSI is used to access the storage.  Taking the time to design the network and VM layout and to separate workloads will help if/when troubleshooting needs to occur.
  • Test, document and adjust.  If new hardware is being deployed as part of the migration to virtualization, you’ll have the opportunity to build out the workloads in a virtual environment beforehand to do some testing.  Be sure to take the time to test the application while virtualized, including backup, restore and tasks such as Live Migration, to make sure things work as expected and are fully understood and documented by the IT team.
  • Use Microsoft System Center.  System Center has advanced features for managing the virtual (and physical) infrastructure, as well as monitoring, management and software packaging.  With SCVMM you can create and use templates for rapid deployment of virtual machines, and use PRO packs for dynamic management of the virtual environment, including partner packs like EMC’s PRO pack.
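As a concrete example of the baselining suggested in the performance bullet above, here is a rough sketch that wraps Windows' built-in typeperf counter collector. The counter set, sampling interval and duration are assumptions to tune for your own assessment window:

import subprocess

# A small, assumed set of counters covering CPU, memory, disk and network.
COUNTERS = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available MBytes",
    r"\PhysicalDisk(_Total)\Disk Reads/sec",
    r"\PhysicalDisk(_Total)\Disk Writes/sec",
    r"\Network Interface(*)\Bytes Total/sec",
]

def collect_baseline(output_csv="baseline.csv", interval_sec=15, samples=240):
    """Collect roughly an hour of samples (240 x 15 s) into a CSV for review."""
    cmd = ["typeperf", *COUNTERS,
           "-si", str(interval_sec),   # sample interval in seconds
           "-sc", str(samples),        # number of samples to collect
           "-f", "CSV",                # output format
           "-o", output_csv,           # output file
           "-y"]                       # overwrite the output file without prompting
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    collect_baseline()

Run it against each candidate workload during a representative busy period, then compare the results before deciding which applications should share a Hyper-V parent or a set of spindles.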

In the end, the best advice is to just get started.  Most companies tend to put it off, and in my opinion the benefits are too great to wait.  Companies also tend to shy away from virtualizing specific applications because they are afraid it can’t be done due to potential performance problems.

EMC regularly works with enterprise companies (including a software company located in Redmond) that virtualize high-end workloads and critical applications with Hyper-V on EMC hardware with great success; see our white paper on how we virtualized a 16-node cluster with over 1,000 VMs. The only true barrier is fear of change.

The Hidden ROI in eDiscovery…Faster, Better, Cheaper…! Part II

by Ted O'Neil

Part II: Benchmarking People, Process & Technology

Start by identifying all the key players in the legal & regulatory processes that request ESI (electronically stored information), the consumers, and why they need it. Then find all the key players and stakeholders who identify, preserve & collect ESI, along with the tools currently in use. This helps you understand the processes and the level of effort associated with eDiscovery, both from an internal resource perspective and from a third-party cost perspective… and it helps you understand risk.

Each organization is unique…understanding who touches the process is critical…knowing this early saves resources in the long run!!

Most organizations face a mix of needs for ESI:

• Internal Audits & Board driven actions
• Regulatory Investigations
• State & Federal Litigation

Consumers of ESI may include:

• Human Resources
• Litigation Counsel
• Regulatory Affairs
• General Counsel
• Internal Audit

Personnel involved in Preservation & Collection of ESI may include:

• Security
• IT
• 3rd Parties / Service Providers
• Custodians
• Legal Service Providers
• Outside Counsel
• Internal Counsel
• Human Resources

Understanding all these different elements of the People, Process & Technology in your eDiscovery process is the key to controlling costs & mitigating risks.

Our team has developed an easy-to-use “eDiscovery ROI Calculator”, which is now available for the iPad.

If you would like to discuss this topic further…please comment below or send an email to ted.oneil@emc.com.


The Hidden ROI in eDiscovery…Faster, Better, Cheaper…! Part I

by Ted O'Neil

Faster, better, cheaper was the mantra at NASA as it set goals to improve quality and efficiency and to better manage costs after several setbacks…it was a way to set goals and measure success from a “top down” approach, looking at it from all perspectives and seeking to better quantify risks & rewards in various programs…expect quality, but demand efficiency!!!

Faster, better, cheaper was a clear theme from LegalTech 2011…good Information Governance makes good business sense!

The hidden return on investment in eDiscovery lies in understanding the entire spend…not just the obvious third-party costs…and in understanding and quantifying the risks in the current process.

I have been working with several clients to develop business cases and ROI models that frame the various challenges and drive strategic initiatives.  The key to success is developing a “baseline” understanding of the current process & identifying all key players.  The nature of eDiscovery tends to affect IT, RM (records management), the business and, of course, Legal.

As “beauty is in the eye of the beholder”…each part of the organization affected by the process sees it from a different perspective and should be included in the dialogue.

The key drivers in these initiatives are cost take-out and optimization of eDiscovery & regulatory compliance processes.  Understanding the current state of the process, the people involved, and the key processes & technology is what gives you visibility & control…this is typically an evolution over time and requires a continued commitment to total quality management.

Best Practices that cut across the broad theme of Faster, better, cheaper:

  • Identify all the “stakeholders” & process owners
  • Identify all “Consumers” of ESI
  • Define the cost of the current process
  • Understand the organization’s “Legal Profile”
  • Form a cross-functional team to drive change

Understanding all these different elements of the People, Process & Technology in your eDiscovery process is the key to controlling costs & mitigating risks.

Our team has developed an easy-to-use “eDiscovery ROI Calculator”, which is now available for the iPad.

If you would like to discuss this topic further…please comment below or send an email to ted.oneil@emc.com.