Is a SAN Better, Cheaper, and More Secure Than Office 365?

SANs – especially SANs from market leader EMC – are always under attack from companies hoping to cash in on the rising data growth across all industries and market segments.

  • Some say DAS is best.
  • Some say keep the storage in the servers.
  • Some say you should build your own shared array.

But when it comes to Microsoft environments, it often helps to have independent experts investigate the matter to get a fresh perspective.

In a recent study, Wikibon determined that a shared storage infrastructure (powered by EMC’s next-generation VNX2 storage systems) can match the prices of the Office 365 public cloud while offering more capabilities, more security, and more control.

This, however, assumes a completely consolidated approach to deploying multiple mixed workloads – Exchange, SharePoint, SQL Server, Lync – where the VNX2 really shines. We use FAST VP, FAST Cache, and a combination of drive types to achieve the best balance of performance and cost.

Looking for more information about deploying Microsoft applications on VNX? Check here for the most recent best practices guides!

Also check out the recent webinar I did with James Baldwin, who leads our Proven Solutions EMC/Microsoft engineering team. We had a lot of fun doing this one – I hope you enjoy it.


SQL14 Buffer Pool Extension & EMC ExtremSF/ExtremSW

Microsoft SQL Server is introducing a good number of new features with the upcoming “SQL14” release. Most notably, it implements an in-memory solution (code-named Hekaton), but there are other features as well. The SQL14 Community Technology Preview has been available for some time, and the CTP1 media was used for this exercise, running on Windows Server 2012. I decided to look at one of these new features, Buffer Pool Extension (BPE), and wondered how it would compare to similar EMC technology that we have used and validated previously for SQL Server environments.

Buffer Pool Extension

As the name suggests, BPE is a mechanism to extend the buffer pool space for SQL Server. The buffer pool is where data pages reside as they are processed to execute a query, and it is generally limited by the main memory (DRAM) available on the server itself. While available memory in servers has been increasing, so have database sizes. An adequately sized buffer pool helps keep data pages around, and keeping them around means you don’t have to go to disk when a subsequent query references those same pages. In general, the less disk I/O you have to generate, the better the performance.
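As a quick aside (not part of the original test), you can see what currently occupies the buffer pool with the sys.dm_os_buffer_descriptors DMV; each buffered page is 8 KB, so a rough per-database breakdown looks like this:

-- Approximate buffer pool usage per database (each page is 8 KB)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS buffered_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffered_mb DESC;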

Performance is of course based on speeds and feeds. DRAM is very, very fast; disks, on the other hand, are orders of magnitude slower than DRAM. More recently, a wave of new “server flash” solutions has come to market – generally PCIe-based. In performance terms, these server flash solutions fall between DRAM and disks: they sit on the PCIe bus and therefore have a more efficient path for servicing I/O, and it helps that they are also flash based (no moving parts). These devices can deliver very large throughput and generally quite low latencies, thanks to the performance characteristics of the PCIe bus. They also deliver large amounts of storage at a lower cost than DRAM. Arguably, solid state disks (SSDs, or Enterprise Flash Drives) share some of these characteristics, but drives of this type are much slower than server flash, because they live behind IDE or SAS controllers, Fibre Channel controllers, or some other HBA.

So if you want to expand the SQL Server buffer pool to keep more data pages around (beyond what fits in DRAM), SQL14 BPE will help with that. Effectively, you define a Buffer Pool Extension as a physical file. You specify where the file lives (so the storage needs to be presented as an NTFS volume), and once it is defined, SQL14 will start to use this space to keep data pages around. Which data pages, and for how long, depends on the active dataset size and the space defined. One interesting rule is that “dirty pages” cannot exist on the BPE device. A dirty page is a page that has been updated but has not been flushed to disk yet (the change is, of course, always written to the log file). Dirty pages are flushed to disk by something like the lazy writer or a checkpoint operation. Once a page no longer has changes to flush, it can be moved to the BPE storage. Equally, a page that is read to satisfy a query but is not updated may be put out on the BPE storage – if you’ve already read the page, it’s trivial to push it to the BPE.

Configuring BPE is done via the ALTER SERVER CONFIGURATION T-SQL statement; for example, in this environment:

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\BPE\EMCExtremeSF.BPE', SIZE = 200GB);
GO
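Once enabled, the configuration can be verified, and you can watch clean pages land on the extension. A quick sanity check, assuming the CTP exposes the DMVs documented for the release:

-- Confirm the BPE file location, state, and size
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;

-- Count the (clean) pages currently resident on the extension
SELECT COUNT(*) AS pages_in_bpe
FROM sys.dm_os_buffer_descriptors
WHERE is_in_bpool_extension = 1;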

You can also imagine that various algorithms are in place to age the pages on the BPE storage, discarding the oldest/unused pages in preference to data that has just been read and might be re-read. It’s a complex beast when you consider the various activities going on. But the goal is simple: keep more data available on high-performance (and, I would argue, low-latency) storage, so that you improve the overall efficiency of the database environment.

EMC Server Flash

For the testing, EMC’s ExtremSF device was used – in this instance, a 300 GB SLC version of the ExtremSF line, which now includes eMLC versions that provide over 1 TB of storage. The ExtremSF card was used in two ways to look at optimizing the SQL14 environment. First, it was used as a traditional storage device, partitioned through Disk Management to provide a 200 GB storage allocation (the volume was actually a little larger than 200 GB; the BPE file itself was created at 200 GB). In the second set of tests, the ExtremSF card was used in combination with the ExtremSW product to cache the SQL14 data files – more on that later.

The Test System

Because the testing needed to put some pressure on the SQL14 environment, I did what I would probably never recommend in any production environment: I reduced the memory available to the SQL14 instance to 20 GB. That severely limits buffer pool space, and limits scale by requiring much more I/O. It also forces behavioral changes in SQL Server, such as more writing of data pages – again, I would never recommend doing this in practice! But these limits were constant across the tests, as was the workload; the only variables ended up being the use of BPE, and subsequently ExtremSW.
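For anyone curious how that cap is applied, instance memory is limited via sp_configure; a minimal sketch matching the 20 GB used here:

-- Cap SQL Server instance memory at 20 GB (not recommended in production!)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 20480;
RECONFIGURE;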

The database itself contained around 750 GB of data and indexes – a dataset 37.5 times larger than the total memory allocated to SQL Server. Of the 20 GB allocated, only a portion is used by the instance for the buffer pool, so the ratio of data to buffer pool would actually be a little more extreme. But what’s more important to consider is the “active” dataset. For example, you could have a 1 TB set of data and indexes, but if you are only actively accessing a very small portion of it, then it doesn’t really matter how large the dataset is; what matters is the data you are actively touching. In this case, the OLTP workload was fairly random across all the data and indexes.

Another aspect that remained constant throughout all testing was the underlying storage used for the database itself. This was, unsurprisingly, an EMC array. Here I even limited the total number of spindles, thus forcing more I/O pressure onto the data files and increasing latency.

Prior to each test, the database was restored from a backup. Multiple runs were executed for each configuration (Baseline, BPE, and ExtremSW), and the average was used in the presentation of results (unless stated otherwise).

So what did the relative performance look like?

The Results

The performance is presented in terms of relative difference, since the actual numbers themselves don’t matter – only how the workload changed across the various configurations. The metric used is transactions per minute (tpm).

[Image: relative tpm results by configuration]

So for the same workload and the same DB configuration, varying only whether the server flash was used as SQL14 Buffer Pool Extension or as EMC ExtremSW cache, the system processed 1.72 times more transactions with Buffer Pool Extension, and 2.32 times more with ExtremSW on the same infrastructure.

But there’s more…

Efficiency vs. Time

How long it takes to make effective use of the performance-enhancing server flash is also interesting, so here is a quick comparison of the workloads against the ExtremSF card for both the BPE implementation and the ExtremSW implementation.

SQL Server Batch Requests/sec is one metric that can be used to gauge how much “work” SQL Server is doing, since it counts the number of statements being executed. Given that the workload is the same in every run, it allows a direct comparison, and doing so, we see the following. (In this case, these are the numbers from two specific runs – not averages across runs.)
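(You don’t need PerfMon to watch this counter; it is also exposed through sys.dm_os_performance_counters. The stored value is cumulative, so a sketch like the following samples it twice and takes the difference:)

-- Approximate Batch Requests/sec by sampling the cumulative counter
DECLARE @s1 BIGINT, @s2 BIGINT;
SELECT @s1 = cntr_value FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';
WAITFOR DELAY '00:00:10';
SELECT @s2 = cntr_value FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec';
SELECT (@s2 - @s1) / 10 AS batch_requests_per_sec; -- averaged over the 10-second window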

[Image: Batch Requests/sec comparison, BPE vs. ExtremSW]

The X-axis shows time (Hrs:Min:Sec) from the start of each run. Since the two runs executed for different periods of time (the test with BPE enabled was run much longer to allow utilization to reach steady state), you can see that the ExtremSW test was terminated after about 7 hours, although steady state was attained after only about 3 hours. The test run with BPE reached its steady state after about 10 hours. Also worth noting is the slope of the change: ExtremSW showed a much more aggressive improvement over a shorter period of time. Overall, at steady state, the ExtremSW environment was processing more batches/sec than the Buffer Pool Extension implementation.

If it’s the same card, why is there such a difference?

Given that the hardware used was the same, it is implementation characteristics that alter the performance – not least the aforementioned fact that the BPE storage can only hold non-dirty pages. As a result, any updated page must be pushed to durable media before being moved into the BPE. That is likely a small, but not necessarily trivial, impact.

ExtremSW, on the other hand, is a rather different beast. In the Windows environment, ExtremSW is effectively implemented as a filter driver. When configured as cache, the storage allocation on the ExtremSF card is used as a central storage (cache) pool by the driver. In the following image, the ExtremSF card is seen as HardDisk0 (Disk0 in the GUI). The “ExtremeSF” NTFS volume was created to consume space from the device, such that when the ExtremSW implementation was activated, it would only use 200 GB – the “OEM Partition” seen on that device.

[Image: ExtremSW configuration in Windows Disk Management]

Individual LUNs (HardDisks, as seen by Windows) are then bound to this cache pool. As data is read from the disks, it is stored in the cache immediately, and it remains there until it becomes stale, at which point it is effectively dismissed. Data that is updated is also stored in the cache pool; as of this release of ExtremSW, all writes are implemented as pass-through, so the write goes to the backing disk in all cases – but the updated state is retained in cache (you don’t need to re-read what you have written).

Thus all data is cached, on both reads and writes, so there is a tendency to be more efficient – at least when compared with mechanisms that first need to destage data out.

Again, in this configuration, the ExtremSW cache size was limited to 200 GB, so it was effectively the same space on the ExtremSF card as the BPE file. There were 12 data LUNs in use (HardDisk4 through HardDisk15, carrying NTFS volumes DATA01 through DATA12, as seen in the previous image), and these were bound to the ExtremSW cache pool by executing the following calls to the VFCMT utility (the management tool for ExtremSW):

vfcmt add -source_dev harddisk4 -cache_dev harddisk0
vfcmt add -source_dev harddisk5 -cache_dev harddisk0
vfcmt add -source_dev harddisk6 -cache_dev harddisk0
vfcmt add -source_dev harddisk7 -cache_dev harddisk0
vfcmt add -source_dev harddisk8 -cache_dev harddisk0
vfcmt add -source_dev harddisk9 -cache_dev harddisk0
vfcmt add -source_dev harddisk10 -cache_dev harddisk0
vfcmt add -source_dev harddisk11 -cache_dev harddisk0
vfcmt add -source_dev harddisk12 -cache_dev harddisk0
vfcmt add -source_dev harddisk13 -cache_dev harddisk0
vfcmt add -source_dev harddisk14 -cache_dev harddisk0
vfcmt add -source_dev harddisk15 -cache_dev harddisk0

SQL Server transaction log devices don’t really benefit from ExtremSW, and including the transaction log is not really recommended. In this environment, the transaction log was on a separate LUN (HardDisk16), and that was left out of the ExtremSW configuration.

Conclusions?

It’s clear that Buffer Pool Extension has a positive impact on this SQL Server workload. Its performance impact is directly related to the characteristics of the storage used for the BPE file. Server-based flash storage devices, like ExtremSF, have the performance characteristics to improve the throughput of SQL Server environments. This testing was based on CTP1 of the SQL14 product, and much could change in the intervening time before launch; as a result, performance may change with respect to the efficiencies of BPE.

ExtremSW is definitely very efficient at improving the performance of SQL Server databases – there are a number of papers covering solutions using SQL Server 2008 and later. It’s also true that ExtremSW is not specifically tied to SQL Server. As mentioned, it’s a filter driver that binds to LUNs. What those LUNs are used for is irrelevant to ExtremSW, because the implementation simply caches the data on those devices. So any application that re-reads the same data will see a benefit.

SQL Server Buffer Pool Extension is obviously a SQL Server feature, so its benefits are limited to that application. Conversely, BPE is included in the appropriate editions of SQL Server, so you get it with the product; ExtremSW is an incremental cost, as it is a separately licensed solution. ExtremSF (the server flash card) is assumed to be common to both the BPE and ExtremSW implementations.

In the end, overall efficiency is also tied to the application and the size of the active dataset. Again, in this case the dataset size was around 750 GB and the cache size (for both BPE and ExtremSW) was 200 GB. As the ExtremSF card size, the dataset size, and/or the active portion of the data change, so will the results and the overall effect on any given environment. Alas, it is the great “it depends” … because it does.


EMC’s VNX = Award Winning storage for Microsoft environments

Microsoft’s TechEd 2013 is next week, and I’m looking forward to spending time with my longtime industry friends and making some new connections on the show floor in New Orleans.

This year, I’ll attend as part of the Unified Storage Division, and I felt I should share a little about the success of the VNX and VNXe arrays in Microsoft environments:

[Image: VNX industry awards]

EMC’s VNX Unified Storage Platform has been recognized with awards from independent analysts such as Gartner, IDC, and Wikibon, as well as media publications such as ComputerWorld, CRN, and Virtualization Review, thanks to the VNX family’s ability to power mission-critical applications, integrate with virtual environments, and solve SMB IT challenges, among other accolades. We take pride in being the #1 storage platform for most Microsoft Windows-based applications.

BUT… DOES MICROSOFT WINDOWS NEED A SAN? CAN’T WE DO IT OURSELVES?

Well, after speaking with Windows Server 2012, SQL Server, and EMC customers, partners, and employees, the independent analyst firm Wikibon published a before-and-after comparison model based on an enterprise customer environment. The conclusion: the total cost of bolting together your own solution isn’t worth it.

[Image: Wikibon Windows study cost comparison]

The findings showed that by moving from a physical, non-tiered environment to a virtualized environment with flash and tiered storage, SQL Server customers realized a 30% lower overall TCO over a 3-year period, including hardware, software, maintenance, and management costs for their database infrastructure.

The graphic shows that a do-it-yourself approach saves very little, if anything, in hardware costs and diverts operational effort to building and maintaining the infrastructure. Risks and costs are likely to be higher with this approach.

In the end, EMC’s VNX infrastructure was shown to deliver a lower-cost and lower-risk solution for Windows Server 2012 versus a direct-attached storage (DAS) or JBOD (just a bunch of disks) model. Full study here.

Video of EMC’s Adrian Simays and Wikibon Analysts discussing these results is here on YouTube.

MICROSOFT INTEGRATIONS AND INNOVATIONS  

EMC’s VNX platform considers Microsoft applications, databases, and file shares to be our sweet spot, as evidenced by our early integration of the latest Windows Server 2012 features that increase performance, efficiency, availability, and simplicity for our joint customers.

Performance, Efficiency, Availability, Simplicity

1. Performance

Within Windows, we were the first storage array to support SMB 3 and ODX copy offload (part of SMB 3), enabling large file copies over the SAN instead of consuming network bandwidth and host CPU cycles.

[Image: ODX before/after impact]

This test highlights the speed difference before (left) and after (right) ODX was implemented. With EMC VNX and ODX enabled, you can accelerate your VM copies by a factor of 7 while reducing server CPU utilization by a factor of 30!

For applications and databases, VNX FAST Cache and FAST VP automatically tune your storage to match your workload requirements, saving up to 80% of the time it would take to manually balance workloads.

The Enterprise Strategy Group (ESG) Lab confirmed that Windows Server 2012 with Hyper-V, Microsoft SQL Server 2012 with the new columnstore indexing, and VNX FAST technologies on VNX storage form a complete data warehouse solution that meets the business requirements of mid-tier organizations and beyond. An 800 GB data warehouse was deployed, which is fairly typical for a medium-sized business. With EMC FAST enabled, throughput reached up to 379 MB/sec, over a 100% improvement on SQL Server 2012’s baseline rowstore indexing. The DSS performance workload with EMC FAST enabled completed up to nine times faster than with rowstore indexing.
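(For context, the columnstore indexing ESG tested is standard SQL Server 2012 syntax; a minimal sketch on a hypothetical fact table – the names are illustrative, not from the ESG lab setup:)

-- SQL Server 2012 nonclustered columnstore index on a hypothetical fact table
-- Note: in SQL Server 2012, the table becomes read-only while this index exists
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (OrderDateKey, ProductKey, Quantity, SalesAmount);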

2. Efficiency

IT managers and storage administrators frequently adopt well-known forecasting models to pre-allocate storage space according to the growth rate of storage demand. The main challenge is pre-allocating just enough capacity for the application. Reports from many storage array vendors indicate that 31% to 50% of allocated storage is either stranded or unused – meaning 31% to 50% of the capital investment in the initial storage installment is wasted.

The VNX supports both Windows host-level and built-in storage-level thin provisioning to drastically reduce initial disk requirements. Windows Server 2012 can detect thin-provisioned storage on EMC arrays and reclaim unused space once it is freed by Hyper-V. An ODX-aware host connected to an EMC intelligent storage array can automatically reclaim freed space – 10 GB in the example scenario – and return it to the pool, where it can be used by other applications.

Furthermore, for application storage we partner with companies like Kroll and Metalogix to provide better solutions for Exchange single-item recovery and SharePoint remote BLOB storage, which can reduce SQL-stored SharePoint objects by about 80-90% and improve SQL response times by 20-40%.

3. Availability

Our first-to-market integration with SMB 3 not only provides performance improvements, it also enables SMB 3 Continuous Availability, allowing applications to run on clustered volumes with failovers that are transparent to end users. For example, SQL Server may store system tables on file shares, so any disruption to access to the file share could interrupt SQL Server operation. Continuous Availability is accomplished via cluster failover on the host side and Data Mover or Shared Folder failover on the VNX side.
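(As an aside, SQL Server 2012 supports placing database files directly on an SMB share; a minimal sketch with hypothetical share and database names:)

-- Create a database whose data and log files live on SMB 3 file shares
CREATE DATABASE SalesDB
ON PRIMARY (NAME = SalesDB_data, FILENAME = N'\\vnx-smb\sqldata\SalesDB.mdf')
LOG ON (NAME = SalesDB_log, FILENAME = N'\\vnx-smb\sqllog\SalesDB.ldf');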

Other SMB 3.0 Features supported include:

  • Multi-Channel / Multipath I/O (MPIO) – Multiple TCP connections can now be associated with a single SMB 3.0 session, and a client application can use several connections to transfer I/O on a CIFS share. This optimizes bandwidth and enables failover and load balancing with multiple NICs.
  • Offload Copy – Copying data within the same Data Mover can now be offloaded to the storage, which reduces the workload on the client and network.
  • SMB Encryption – Provides secure access to data on CIFS shares, protecting data on untrusted networks and providing end-to-end encryption of data in flight.
  • Directory Lease – SMB 2 introduced a directory cache that allowed clients to cache a directory listing to save network bandwidth, but it would not see new updates. SMB 3 introduces a directory lease, and the client is now automatically made aware of changes in a cached directory.
  • Remote Volume Shadow Copy Service (RVSS) – With RVSS, point-in-time snapshots can be taken across multiple CIFS shares, improving backup and restore performance.
  • BranchCache – A caching solution that keeps business data in a local cache; the main use case is remote office and branch office storage.

EMC also offers a wide range of application availability and protection solutions that are built into the VNX including snapshots, remote replication, and a new RecoverPoint virtual replication appliance.

4. Simplicity

When it comes to provisioning storage for their applications, admins often have to navigate too many repetitive tasks, touching different UIs and increasing the risk of human error. They also likely need to coordinate with other administrators each time they provision space. This is not very efficient. Take, for example, a user who wants to provision space for SharePoint: you need to work in Unisphere to create a LUN and add it to a storage group; then log onto the server and run Disk Manager to import the volume; then work with Hyper-V, then SQL Server Management Studio, then SharePoint Central Administration. Tedious, to say the least.

[Image: EMC Storage Integrator (ESI) provisioning workflow]

EMC Storage Integrator (ESI), on the other hand, streamlines everything we just talked about. Forget how much faster it actually is – just consider the convenience and elegance of this workflow compared to the manual steps outlined above. ESI is a free MMC-based download that takes provisioning all the way into the Microsoft applications. Currently only SharePoint is supported, but SQL Server and Exchange wizards are coming soon. This is a feature that surprises and delights our customers!

SO WHAT DO VNX CUSTOMERS SAY?

EMC’s VNX not only provides a rock-solid core infrastructure foundation, but also delivers significant features and benefits for application owners and DBAs. Here are some quotes from customers who have transformed their Microsoft environments using the VNX and VNXe platforms.

Peter Syngh Senior Manager, IT Operations, Toronto District School Board

 “EMC’s VNX unified storage has the best of everything at a very cost-effective price. It integrates with Microsoft Hyper-V, which is crucial to our cloud strategy, and with its higher performance, automated tiering and thin provisioning, VNX was a no-brainer.”

Marshall Bose Manager of IT Operations, Ensco (Oil/Gas)

 “A prime reason for choosing EMC over NetApp was that VNX is such a great fit for virtualization. With all the automation tools and tight integration with VMware, VNX is far easier than NetApp when it comes to spinning up and managing virtual machines.”

Rocco Hoffmann, IT Architect, BNP Paribas (German Bank)

“We are achieving significant savings in energy and rack space. In fact, our VNX requires only half the rack space and has reduced our power and cooling costs.”

Charles Rosse, Systems Administrator II, Baptist Memorial Health Care

“Since the VNX has been built into the design of our VDI from the beginning, it can easily accommodate growth – all we need to do is plug in another drive or tray of drives and we get incrementally better performance.”

Erich Becker, Director of Information Systems, AeroSpec (Manufacturing)

“…We loved the fact that VNXe and VMware worked extremely well together …we have dramatically cut operating costs, increased reliability and data access is now twice as fast as before.”

BOTTOM LINE

Many more customers have praised the VNX family for powering their Microsoft applications than I have room to quote here. EMC is a trusted brand in storage, and the VNX today is an outstanding unified platform that successfully balances our customers’ block and file needs for their Microsoft file and application data – and wins awards for it. Feel free to find out more about the VNX and VNXe product lines here and here.

Also, come talk to us next week at TechEd; we will be there to help customers and partners learn more about our technology.

Find out more about our TechEd plans here.

Also, download the VNXe Simulator executable right here. It’s pretty awesome, and it shows you the unique VNXe management interface.


Great EMC Blog on Microsoft TechNet – Leveraging Flash Across the Microsoft SQL Server Stack

Our own Sam Marraccini wrote the following blog post on Microsoft TechNet. We hear from many of our customers that as their databases grow, costs increase and performance decreases. Sam presents a very compelling case for using flash in a SQL Server environment. Enjoy his post here:

http://blogs.technet.com/b/dataplatforminsider/archive/2013/05/01/leveraging-flash-across-the-microsoft-sql-server-stack.aspx

 

The Hype vs. Reality of Software-Led Storage and Windows Server 2012… Infrastructure matters, to the tune of 15% lower TCO and 33% lower staff costs!

I recently read an interesting perspective on software-defined data centers and storage: the Wikibon blog entitled Windows Server 2012 Falls Short on Software-Defined Storage (http://wikibon.org/wiki/v/Windows_Server_2012_Falls_Short_on_Software-Defined_Storage). Dave Vellante and his team, including David Floyer, recently analyzed Windows Server 2012 and its new features such as Offloaded Data Transfer (ODX), Storage Spaces, and SMB 3.0.

The folks at Wikibon have been predicting for some time that ISVs like Microsoft (and Oracle) would increasingly try to grab more storage function and put pressure on traditional storage models.

In this analysis, Wikibon interviewed a number of customers with some level of experience with Windows Server 2012, representing a range of industries and account sizes. What they found was that Microsoft Windows Server 2012 is an important and successful new release.

The blog further elaborates that, in Wikibon’s view, “specifically as it relates to Windows Server 2012, arrays that integrate with this new platform will provide better tactical ROI in the near term.”

But the potential challenge is in the robustness and maturity of this new functionality and deployment model. As Wikibon elaborates, it is Windows Server 2012’s lack of robustness and storage-function maturity that leads them to caution that true software-led storage from Microsoft is still a release cycle or two away. Specifically, array-based storage will continue to provide the best ROI for many small and mid-sized Microsoft shops over the next 18-24 months.

In both cases, Wikibon’s research found that to the extent an array can exploit these new features, the value proposition of array-based storage is significantly better than relying solely on a Microsoft-led (software-led) storage stack (a.k.a. the do-it-yourself model). As such, the array-based capability that Wikibon modeled to evaluate the business case demonstrated significantly better value than a Microsoft software-led approach using commodity disks.

What array did they use in the modeling and analysis? The EMC VNX platform. Wikibon further cautions anyone looking at Windows Server 2012 to ensure that their arrays can exploit the new functionality.

Wikibon highlighted in their analysis that: 

  • Spending 10% more on disk array hardware that can exploit Windows Server 2012 capabilities can lead to 14% lower overall costs relative to today’s Microsoft software-led approach using JBOD.
  • While server costs will be somewhat lower, largely offsetting more expensive array costs, the real savings come from infrastructure management costs (i.e., lower people costs).
  • By utilizing array-based hardware that can integrate with and exploit Windows Server 2012 functionality, IT organizations will free up staff time and reduce management complexity by approximately 33%. This can lead to better IT staff productivity and a reduction in time spent doing non-differentiated heavy lifting for storage.

Wikibon sums it up: “Microsoft’s Windows Server 2012 delivers some compelling function, but critical storage capabilities are lacking, such that true Software-Defined Storage from Microsoft remains elusive. In the near-to-mid term, to achieve maximum efficiency IT organizations must either investigate alternative software-defined offerings or stick with array-based storage solutions that integrate with Windows 2012. Importantly, to the extent these traditional arrays exploit key new features in Windows Server 2012, business value will likely exceed all-Microsoft storage stack approach.”

No surprise here at EMC… we agree that Windows Server 2012 is very interesting. EMC was the first to announce intent to deliver support (June 2012 – http://www.emc.com/about/news/press/2012/20120910-01.htm); we were the first storage provider to deliver support for SMB 3.0, and our award-winning VNX platform was used in the TechEd keynote to showcase the ODX functionality. We are also leading the pack with quite a bit of integration with Windows Server 2012 and System Center. And let us not forget our recent announcements around VSPEX support for Windows Server 2012. In addition to checking out the Wikibon blogs, I would recommend checking out:

Everything Microsoft at EMC

Also, there are additional assets that may help you understand the role infrastructure plays when deploying Windows Server 2012 in a private cloud environment. More information can also be found on emc.com:

EMC Perspective:  The Power of Windows Server 2012 and EMC Infrastructure – http://www.emc.com/collateral/emc-perspective/power-windows-server-2012-emc-infrastruction-ms-pce.pdf

Whitepaper: EMC VNXe Introduction to SMB 3.0 – http://www.emc.com/collateral/white-papers/h11383-vnxe-introduction-wp.pdf

There is a wealth of opportunities at #EMCworld to learn more about EMC and Windows Server 2012.   I am also looking forward to talking with the Wikibon folks at EMCworld to learn more about their analysis.