Recently, we released some great new whitepapers for Exchange 2010, proven out on our new EMC VNX series arrays. If you missed all of the EMC launch buzz around VNX, you can read more about it here.
The VNX series is EMC’s new unified storage line, replacing the previous CX and Celerra lines. A quick overview of the VNX line is below:
One of the goals of the VNX Exchange 2010 testing was to get our hands on the new 2TB NL-SAS drives, which we knew would be popular with Exchange customers looking to use large, inexpensive drives to enable large mailboxes. Two HA copies via DAGs were also deployed in each of these solutions to provide local high availability.
First up is the VNX5700 testing, where we set up a simulated customer environment of approximately 16,000 mailboxes, 2GB per mailbox, 150 messages sent/received per day (0.15 IOPS per mailbox), and 2 HA copies for each database. This was done with the 2TB NL-SAS drives, and the hypervisor tested in our case was VMware vSphere 4.1. The diagram of the test environment looked like this:
In this particular test, we utilized two vSphere hosts with active/passive copies spread across the VMs in each host. Each VM was configured for 2,000 mailboxes in a normal run and 4,000 in a single-failure scenario (2,000 active, 2,000 passive).
In the whitepaper for this solution, we show you how to do all of the IOPS and capacity calculations manually, but you can obviously use tools like the Exchange Role Calculator as well. We also show you how to use a building block (BB) approach to scale the solution. In our case, the requirement was for 16,000 mailboxes, so we determined that 16 × 2TB disks in RAID 10 was the best mix of performance and capacity for each BB. A total of 4 BBs were used, for a total of 64 × 2TB disks (note: this covers your performance and database capacity requirements only; extra disks are required for things like snapshot protection, restore LUNs, etc., which we did not show).
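If you want to sanity-check the math before opening the calculator, the building block arithmetic above is easy to reproduce. Here's a minimal sketch (function and variable names are mine, not from the whitepaper) that derives the required IOPS and disk count for the 16,000-mailbox example:

```python
# Hypothetical sizing sketch -- names are illustrative, not from the paper.

def required_iops(mailboxes, iops_per_mailbox):
    """Total transactional IOPS the storage must sustain."""
    return mailboxes * iops_per_mailbox

def building_blocks(mailboxes, users_per_block):
    """Number of building blocks needed, rounding up."""
    return -(-mailboxes // users_per_block)  # ceiling division

MAILBOXES = 16_000
IOPS_PER_MAILBOX = 0.15   # 150 messages sent/received per day profile
USERS_PER_BLOCK = 4_000   # one BB = 16 x 2TB NL-SAS disks in RAID 10
DISKS_PER_BLOCK = 16

blocks = building_blocks(MAILBOXES, USERS_PER_BLOCK)
print(required_iops(MAILBOXES, IOPS_PER_MAILBOX))  # 2400.0 IOPS required
print(blocks)                                      # 4 building blocks
print(blocks * DISKS_PER_BLOCK)                    # 64 disks total
```

That 2,400 IOPS figure is the target the Jetstress validation below is measured against.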
To do some performance validation, we ran a two-hour Jetstress performance test on the four building blocks (32TB). We saw 2,859 Jetstress IOPS against the required 2,400, with all four servers at around 14ms read latency and 3ms write latency. Very good results:
We also wanted to show some of the goodness that comes with our VMware integration in EMC Unisphere, which provides administrators with great visibility into the vSphere environment:
On the VNX5300, we did a mix of FC and iSCSI testing to show efficiencies with both types of connectivity, knowing that we all like options. We also tested the VNX5300 running on Microsoft Hyper-V (again, we all love options).
In the VNX5300 testing, the 4,000-user building block was also utilized, with a similar 0.15 IOPS per mailbox profile, 2GB mailboxes, and the same 16 × 2TB NL-SAS drives. Even with 1Gb/s iSCSI in our 2-BB testing, Jetstress performance looked very good. In the diagram below, you will see where we compared the 2-BB results to the 1-BB results to get an idea of the numbers:
As we saw in the testing, iSCSI network utilization climbs to about 70% in the 2-VM test, so this is something that should be considered in your design. We covered the iSCSI best practices we used in our labs in the paper, so please keep those in mind during your planning.
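It is worth understanding why a gigabit link fills up at these I/O rates. A rough back-of-the-envelope estimate (illustrative assumptions, not figures from the paper) converts database IOPS into link utilization, using Exchange 2010's 32KB database page size:

```python
# Rough estimate of iSCSI link utilization from database IOPS.
# Assumptions (mine, not from the paper): 32 KB I/O size matching the
# Exchange 2010 database page size, and a single 1 Gb/s iSCSI link.

IO_SIZE_BYTES = 32 * 1024   # Exchange 2010 DB page size
LINK_GBPS = 1.0             # 1 Gb/s iSCSI link

def link_utilization(iops, io_size_bytes=IO_SIZE_BYTES, link_gbps=LINK_GBPS):
    """Fraction of the link consumed by the given IOPS, ignoring
    protocol overhead (TCP/IP and iSCSI headers add several percent more)."""
    bits_per_sec = iops * io_size_bytes * 8
    return bits_per_sec / (link_gbps * 1e9)

# For example, 2,400 IOPS of 32 KB I/O on one 1 Gb/s link:
print(f"{link_utilization(2_400):.0%}")  # about 63% before overhead
```

Numbers in that ballpark, plus protocol overhead, are why utilization figures like the ~70% we observed show up quickly on gigabit iSCSI, and why the best practices in the paper (multiple NICs, MPIO, jumbo frames) matter.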
You can read the full details of the Exchange 2010 testing with the VNX5700 in the Exchange Server 2010 Performance Review with VNX5700 here: http://www.emc.com/collateral/hardware/white-papers/h8152-exchange-performance-vnx-wp.pdf
The Performance Review for the VNX5300 can be found here: http://www.emc.com/collateral/hardware/white-papers/h8158-exchange-performance-vnx-wp.pdf
Until next time,