Dynamic Memory for Microsoft Hyper-V

While it didn't make a lot of noise, Microsoft released SP1 for Windows Server 2008 R2 a few days ago. So how does this impact Microsoft's virtualization customers? Two changes in SP1 are relevant to virtualization: Dynamic Memory and RemoteFX. While RemoteFX is a significant enhancement for running virtual desktops over the network, we'll come back to it in another blog post and focus for now on the memory management changes in Hyper-V.

Dynamic Memory for Hyper-V is a new feature that helps distribute memory among the virtual machines hosted on a physical server. Prior to SP1, you allocated memory to a virtual machine based on what you assumed the VM would need. With SP1 for Windows Server 2008 R2, you can choose whether a VM's memory is static or dynamic, and with dynamic memory you can set a startup RAM value, a maximum RAM value, and the amount of memory to reserve as a buffer.

Startup RAM is the amount of memory allocated to a virtual machine at boot and reported to the guest operating system's BIOS, while maximum RAM is the most memory that will ever be given to the virtual machine. The default for maximum RAM is 64 GB.

You can also set memory priority, which specifies how the availability of memory for a virtual machine ranks against the other VMs on the parent server. For instance, if you have several VMs on a physical server and the server is low on memory, the VMs with lower memory priority may not start while higher-priority VMs are running.
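To make the mechanics concrete, here's a small illustrative model of how a memory-constrained host might hand out startup RAM by priority. This is not Hyper-V code and these are not Hyper-V APIs; every name and value here is hypothetical, and the real scheduler is far more sophisticated:

```python
# Illustrative model only -- not actual Hyper-V behavior or APIs.
# Each VM declares a startup amount, a maximum, and a priority;
# the host grants startup RAM in priority order and declines to
# start VMs it cannot fully satisfy.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    startup_mb: int   # RAM reported to the guest BIOS at boot
    maximum_mb: int   # ceiling the host will ever grant (default 64 GB)
    priority: int     # higher number = higher memory priority

def start_vms(vms, host_free_mb):
    """Grant startup RAM in priority order; skip VMs that don't fit."""
    started, skipped = [], []
    for vm in sorted(vms, key=lambda v: v.priority, reverse=True):
        if vm.startup_mb <= host_free_mb:
            host_free_mb -= vm.startup_mb
            started.append(vm.name)
        else:
            skipped.append(vm.name)  # low-priority VM may not start
    return started, skipped

vms = [
    VM("sql01", startup_mb=4096, maximum_mb=65536, priority=80),
    VM("web01", startup_mb=2048, maximum_mb=16384, priority=50),
    VM("test01", startup_mb=4096, maximum_mb=8192, priority=10),
]
print(start_vms(vms, host_free_mb=8192))
# -> (['sql01', 'web01'], ['test01'])
```

With only 8 GB free, the low-priority test VM is the one left unstarted, which is exactly the behavior described above.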


How is this different from VMware? Well, Hyper-V decides how much memory to give each VM and then relies on the operating system within the VM to decide how much memory it needs to operate. The idea is that the VM will only take what it needs, based on what the guest operating system reports back.

VMware overcommits memory, essentially giving VMs more memory than they will likely need or use, and more memory than is physically available on the server. VMware's method bets that all of the VMs on the physical server won't need their maximum amount of memory at the same time.

I don't think it's fair to say which solution is best just yet, but this is another example of the significant feature enhancements Microsoft has made to keep Hyper-V competitive with VMware. Something to consider: Microsoft is relying on the guest operating system to manage memory for this feature to succeed…an operating system developed by none other than Microsoft. Which do you think will be more successful in the long run?


Exchange 2010 Tested Solutions Update

A quick update on Microsoft's publication of the Tested Exchange Solutions from EMC and our partners that we've blogged about before. Microsoft has now published one of our two Tested Exchange Solutions on the Tested Exchange Solutions page on TechNet:


This whitepaper was a joint collaboration between EMC, Microsoft, Brocade, and Dell. Formally titled "Zero Data Loss Disaster Recovery for Exchange 2010," it featured our Replication Enabler for Exchange plug-in to the Exchange 2010 DAG (10,000 users per site, 20,000 users total), with EMC CX4-480 storage, Brocade ServerIron ADX load balancing and fabric, and Dell R910 servers, all supporting a virtualized Exchange 2010 infrastructure on Hyper-V.

We are still waiting on the other Tested Exchange Solution, which was performed in collaboration with Cisco on their excellent Unified Computing System (UCS) platform. In the meantime, you can get the full EMC version, blogged about here late last year.

Lastly, stay tuned for some exciting solution updates on what we have been cooking up for our VMAX/SRDF/VMware HA and Site Recovery Manager solution for Exchange 2010, along with an Exchange 2010 DAG/RecoverPoint solution. We also have exciting material coming out on what we've been up to with our new EMC VNX platform and Exchange 2010.


EMC’s Data Storage Products of 2010 Finalists

This one took me a while to get to, and it seemed to go under the radar… EMC is mentioned in three of the five major categories covered by SearchStorage.com:

And to think… so much has happened since 2010 – you have Isilon under EMC, you have VNX and VNXe, Data Domain getting better and more powerful, and the VMAX… it continues to do amazing things.

Backup and DR Software and Services

Typically thought of as a data protection appliance, the TwinStrata CloudArray is now also available as software so it can be run as a virtual appliance under Citrix, Microsoft or VMware hypervisors. CloudArray links backups to cloud storage services, including AT&T Synaptic, Amazon and services based on EMC Atmos.

Backup Hardware

The EMC Data Domain Global Deduplication Array (GDA) with EMC Data Domain Boost boasts a throughput of 12.8 TB/hour and can accommodate more than 14 petabytes of data. The EMC Data Domain Boost option can distribute parts of the deduplication process to the backup server to accelerate performance by up to 50%.

Disks and Disk Subsystems

EMC Corp. Clariion CX4 Software and Unified Management: The comprehensive software update for EMC’s midrange Clariion CX4 arrays includes Fully Automated Storage Tiering (FAST) for sub-LUN tiering, FAST Cache performance acceleration, primary data compression and virtual provisioning. It also uses new EMC Unisphere management software for common management of Clariion and EMC Celerra unified storage.

EMC Corp. VPLEX: This new architecture encompasses a set of products for federating storage inside data centers and across geographic distances. EMC VPLEX lets organizations move thousands of virtual machines and petabytes of information non-disruptively.

Isilon Systems Inc. Unified Scale-Out Storage: With its Unified Scale-Out Storage, Isilon has added iSCSI support to its OneFS operating system, giving it block storage capabilities to go with its fundamental scale-out NAS technology in a single system.

Windows Disk Alignment

A quasi-authoritative guide to improving the performance of Microsoft SQL Server running on EMC Symmetrix arrays.

Overview

While attending the SQL PASS Summit in Seattle this last November (http://www.sqlpass.org/summit/na2010/), I was asked an interesting question. A DBA approached me after attending a session featuring Jimmy May from the Microsoft…

Dissecting Database Availability Groups

I wanted to post a good picture of dissection at the beginning of this post, and it brought me back to middle school biology. I was in eighth or ninth grade. "We Are the World" was a chart-topper, and the Asian tiger mosquito (awesome name for a species, eh?) had managed to migrate to Houston. The important thing is that I wore an onion on my belt (which was the style of the time).

Well, no, I guess that wasn't the most important thing. The really important thing is that I couldn't find a picture of a real frog dissection that I wanted to post on this particular blog. So I'll post this instead:


Onto what I'd really like to get at with this post:

DAG is not the only way to achieve high availability or remote-site disaster recovery with Exchange 2010. The reality is that there are quite a few options for replication and HA, and they run the gamut of requirements: from synchronous replication with automatic, lossless failover to asynchronous replication with selectable recovery points, and from a single copy of the database(s) at each site to as many copies as you choose.

So over the next few posts, I'll be going over the options that administrators and managers can consider when deploying Exchange 2010.  Most importantly, I'll be covering the factors that should be included in that consideration.

Before I start, I need to conceptually take Database Availability Groups (or DAG) apart, show a little bit about how the technology works, and define the terminology associated with it.

Most Exchange administrators have a good idea of what DAG is, and know enough about how it works to implement it.  But many administrators don't know that there are two components to DAG.  As with any true remote disaster recovery technology, there are two mechanisms:

  1. There's a technology to replicate the data, known as the replication engine.  In native Exchange 2010, we'll call this "DAG Replication." It consists of asynchronous block-level replication of the transaction logs over an IP network between the members of a DAG.
  2. There's a technique or technology that manages which copy of the data is "active."  A "technique" would be something a human does to decide which copy is active.  A "technology" is an automated process that determines which copy is active.  In Exchange 2010, this is a technology (in the form of a role) called Active Manager that runs on ALL Exchange mailbox servers (even standalone servers).

I'll take a few bullet points here to describe the various types of replication. All replication I'll discuss in these posts involves taking the writes destined for storage or an application, duplicating them, and sending them to another location. If these locations are on the same server, disk, array, or site, it can generally be termed local replication. If the writes are sent to another site, it can be called remote replication. The different replication types are:

  • Application replication:  This is where an application determines what is to be replicated at the application level.  In most cases, this is the application itself.  An example would be DAG replication, where writes to the Exchange transaction log are replicated to other members of the DAG.  Other examples would be SQL log shipping, Oracle DataGuard, or Active Directory replication.  An example where a middleware application determines what's to be replicated at the app level would be GoldenGate.
  • Host replication:  This is where an agent running on the server, but outside the context of the application, is configured to replicate data directed to one or more disks.  Examples of this type of replication would be EMC RepliStor, NSI Double-Take, and so forth.
  • Array replication:  This is where the storage controller is configured to replicate writes destined for certain LUNs (or logical disks) configured on the array.  Examples of this type of replication would be EMC MirrorView and SRDF, NetApp SnapMirror and the like.
  • SAN replication:  This is where the replication is handled by an appliance residing on a storage area network (SAN), but outside of the host or array.  Examples of this would include EMC RecoverPoint, NetApp ReplicatorX, IBM SVC, and technologies of that ilk.
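One way to picture the four buckets is as the same logical write being intercepted at different layers of the stack. The toy sketch below makes that explicit; the class names are mine, not any vendor's, and it deliberately ignores everything that distinguishes real products:

```python
# Toy taxonomy only -- vendor products differ substantially.
# Each replicator intercepts the same logical write at a different
# layer and forwards a copy toward the remote site.

class Replicator:
    layer = "abstract"
    def __init__(self, remote_site):
        self.remote_site = remote_site   # list standing in for a remote site
    def replicate(self, write):
        self.remote_site.append((self.layer, write))

class ApplicationReplicator(Replicator):
    layer = "application"   # e.g., the app ships its own transaction logs
class HostReplicator(Replicator):
    layer = "host"          # e.g., an agent on the server, below the app
class ArrayReplicator(Replicator):
    layer = "array"         # e.g., the storage controller replicates LUNs
class SANReplicator(Replicator):
    layer = "san"           # e.g., an appliance in the fabric

remote_site = []
for cls in (ApplicationReplicator, HostReplicator,
            ArrayReplicator, SANReplicator):
    cls(remote_site).replicate("block-42")
print(remote_site)
```

Whichever layer does the interception, the remote site ends up with the same data; what changes is who owns the mechanism, what it can see, and what it costs.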

Here's a little cartoon showing the types of replication:


Clearly, this is a simplification, and the details differ between vendors' implementations. Some third-party products include both replication and application failover. A book could be written on replication schemes, so I apologize for the brief treatment of the topic. But every replication technology I'm aware of fits fairly comfortably into one of those four buckets, and that should suffice for this series of blog posts.


Getting back to the topic at hand:

How can you replicate and fail over Exchange (other than DAG)?  Turns out there are a lot of possibilities.  The rules are pretty simple:

  • You can't use DAG replication without Active Manager
  • You should only use one high availability technique at a time

But you CAN use Active Manager without DAG replication.  This bears repeating:  You can get Exchange high availability without having multiple copies of your database.  And believe it or not, you don't have to use Active Manager at all.

I'm going to cover four different options for Exchange 2010 HA and replication, and this is not intended to be an exhaustive list: there are variants on each, and there might even be different methods.  Stay tuned for more...