Provisioning EMC Storage for Windows 4X Faster with ESI


The EMC Storage Integrator (ESI) for Windows was designed to simplify and automate many of the mundane tasks associated with provisioning storage in a Windows environment. It is a free download and comes complete with a simple MMC interface as well as PowerShell cmdlets, SharePoint provisioning wizards, and System Center plugins for Operations Manager and Orchestrator. Good stuff here.

Three Paths to the Cloud: Evolving Up the Technology Stack

Guest Blogger: Tom Tormey

Vice President, Strategic Technology Alliances, EMC Corporation

EMC and Cisco's success in helping customers transform their IT has evolved over an 11-year partnership built on understanding joint customers' preference for choice and flexibility in building out their cloud infrastructure. These factors are particularly important because, as the overall IT landscape evolves, companies need to be able to design their infrastructure to meet their unique use cases without being locked into a single way of moving forward.

 Recognizing that there isn’t a one-size-fits-all solution for cloud infrastructure, EMC, Cisco, and VCE came together and developed the Three Paths to the Cloud strategy, which offers a spectrum of choice for implementation.

 Over the past two years, we’ve worked together to deliver simplicity and speed in deploying cloud infrastructure across these three paths – but in the past few months, the Three Paths have evolved to drive simplicity further up the technology stack – to management and the application workloads that the infrastructure powers.

 Let’s take a look at some of the recent developments in each of the paths to the cloud.

For Build Your Own: We recently developed the EMC Enterprise Private Cloud solution, which integrates best-of-breed EMC and VMware products with Cisco UCS. This empowers our customers to accelerate the implementation and adoption of private cloud while still enabling choice of compute and networking infrastructure within the data center, and it will become the foundation for enabling IaaS.

For VSPEX: We have introduced VSPEX solutions with Cisco to virtualize your data centers, run mission-critical applications such as Microsoft SharePoint on VSPEX, and deploy virtual desktop infrastructure. These solutions make it simple to deploy workloads on your terms – and the numbers back it up: Gartner recently named VSPEX the industry's #1 Integrated Reference Architecture System. We are also automating VSPEX: powered by the Cisco UCS Director Solution for VSPEX, Microsoft System Center, or the VMware vCloud Suite, the new VSPEX solutions for Infrastructure-as-a-Service dramatically simplify the process of provisioning application infrastructure by empowering users to do it themselves via self-service portals.

For Vblock: With VCE Vblock™ Systems already being the world's most advanced converged infrastructure, the VCE team has gone a step further and developed Vblock Specialized Systems that come optimized with pre-installed core enterprise workloads. VCE has launched three specialized systems: the Vblock Specialized System for High-Performance Databases, the Vblock Specialized System for SAP HANA, and the Vblock Specialized System for Extreme Applications.

 Are we making the right moves? Gartner says so, as VSPEX is ranked #1 for Integrated Reference Architecture Systems, and VCE Vblock Systems is ranked #1 for Integrated Infrastructure Systems.

All of this is done in partnership with Cisco – why? Because it is what the customer wants: simplicity in their infrastructure, simplicity in the management of that infrastructure, and simplicity in the application workloads.

EMC and Cisco Cloud Infrastructure solutions enable our clients to increase agility, improve operational efficiency, and provide more cost-effective service delivery to their constituents.

 We will continue to deliver innovative world-class technology solutions with Cisco and VCE that will help our customers transform their IT.





[howto] Networker2go: Accessing Documentation in Exchange

by @hyperv_guy Karsten Bott

There are several ways to access the NetWorker documentation in Networker2go when the Exchange option is installed.

The first step is to browse to the release notes by accessing the MMBackubusers mailbox after installation.


In the Inbox, you will find a welcome message with the release notes for the latest version.


To get to the complete documentation set, you will need to browse to the Public Folders.

To do so, go to the tree view on the right, right-click Favorites, and select Add Public Folder.

The public folder selection dialog will appear.

Select the public folder named according to your domain:


The Public Folder is added to your Favorites.

Now you have access to all relevant NetWorker / NMM documentation.

SQL Server Performance: Flash is applied at various points in the IO chain, not just at the DB! Part 1

This is part one… part two next week!

SQL Server: Where is Flash most effective?  Everywhere!


I am, what is commonly referred to as, a “reluctant blogger”.  I’m the guy that drives my Marketing department crazy.  They keep telling me, “Jim! You know stuff!  You need to share it!”  It’s not that I disagree, it’s that I’m fully aware that the Internet is loaded with content created by people who know lots of stuff. This time, however, I have information that I know won’t be somewhere else on the Internet.  This time, I can offer content that will make a difference. I’ll begin by telling you a bunch of stuff that you already know… In an effort to establish a baseline, a starting place…

SQL Server is a complex platform, but not so complex that we can’t talk about big concepts.  I’ll attempt to take the significant areas of SQL Server and explain the details that matter most to this discussion.  In this post, I’ll explain three things:

  1. The three common types of SQL Server workloads,
  2. How SQL Server processes data and uses the cache, the log stream, and the database files themselves, and
  3. How EMC Xtrem technology ties into SQL Server and into each workload.

Three typical SQL Server workloads:

  1. OLTP – this is your run-of-the-mill on-line transaction processing database.  The kind that stores transactions like orders you place at Amazon or Zappo's or even Home Depot.  The transaction contains an order number, the items you purchased, and how you paid – maybe even your credit card number.  A typical OLTP database will read 80% of the time and write 20% of the time, with 64KB IOs to the database files and 8KB writes to the log file.
  2. DSS/BI/Reporting – most SQL admins will resent the fact that I just globbed all three of those "clearly different" workloads into one category, but stick with me for a minute: all three have generally similar workload patterns.  All three rely on heavy use of indexes to increase query speeds, and all three generally receive updates in a dribble or data-stream manner. Occasionally there will be a nightly load of data, but the newly imported dataset is generally small compared to the overall size of the existing data.  The most important element these three share is the use of TEMPDB. They all make extensive use of outer joins to answer queries, and this use of joins dictates the use of TEMPDB.  As we know, there is only one TEMPDB per SQL instance, so it's very difficult (read "impossible") to dedicate TEMPDB resources to an individual database.
  3. OLAP – these are SQL Server applications that build cubes on a nightly basis.  These cubes are a collection of pre-analyzed data that allows business units to make decisions based on "what the data shows".  Cubes are not ad-hoc queries – they are a predefined "view" of the data with multiple dimensions.  They allow a business analyst to quickly answer questions such as "How many white plastic forks did we sell in Dayton after we ran the flyer in the Sunday newspaper where we advertised a sale on picnic blankets?" Cube builds are hard.  Depending on how they are executed, they can read every bit of data in an entire database more than once.  A cleverly engineered cube build will run in twelve minutes; a complex set of data can keep a SQL Server working on the problem for several hours.
Now let's talk about how SQL Server processes data and why these various tasks create distinct issues for each SQL Server they run on.  SQL Server has three basic parts that make it work:
    1. The SQL Cache – this is the space in SQL memory set aside to hold actual database data that SQL Server is using to fulfill queries, update rows/fields, delete data elements, and generally go about its business.  SQL cache is finite – limited ultimately by the available RAM in the SQL Server OS.  (This will change in SQL Server 2014… teaser…)  The important concept is that everything a SQL Server does goes through the cache, and the cache is limited.
    2. The Log Writer – every SQL Server instance has one (and only one) log writer. Regardless of the number of databases and log files within an instance, each SQL Server instance has and uses a single log writer.  What does this mean?  Log writes are not parallelized.  They are serialized.  They happen one at a time.  The Log Writer is arguably the essence of data integrity and database consistency. ACID (Atomicity, Consistency, Isolation, Durability) relies on the ordering, finite nature, and timing of log writes to fulfill its mission.  In order for SQL Server to continue processing, each data element that denotes a change to the database must result in a posting of the operation to the log file.  Everything goes into the log file.  As the operation and the data hit the log file, the data/metadata is considered "hardened".  As data is replicated between servers, it is write-order consistency that allows ACID to hold.  Without write-order integrity, ACID breaks.
    3. The database itself – comprised of .mdf and .ndf files.  As the DBA adds files for the storage of database pages, the pages (in eight-page extents) are stored in a round-robin fashion amongst the database files. The DBA or storage admin is free to place these database files on a single volume or separate them across a multitude of volumes.  Each volume can then be placed on a single storage device or on many separate storage devices: one disk, one RAID group, one storage array, or many of those – even across separate datacenters.  As long as the database server can access the volume (even SMB!), the file can be placed there.  Pages are fetched from the database files when the database server fails to find the pages it needs in SQL cache.  If a page is not in cache, a disk request is issued to the file that holds the page, and the page is returned (along with the seven other pages in its extent).  Data is written to the files when SQL Server has spare cycles.  The Lazy Writer grabs processor slices as they become available and "commits" the data that was hardened in the log file into the database files.  The very interesting part of this process is that SQL Server aggregates the writes before it makes the disk requests.  That means that SQL Server is very clever at assembling, sorting, and consolidating the data that it flushes out to disk.  Detailed analysis shows that SQL Server's writer optimizes data-to-disk operations by as much as 3.5X.  This means that the raw IO that SQL Server "wants" to write to disk as it comes into the database is actually reduced by roughly 70%!  The evidence of this is seen in black and white when you look at the relationship between a primary SQL Server's write IOs and the write IOs of its Database Mirroring partner.  The secondary server will display between 2 and 3.5 times as many IOPS to the disk volumes as observed on the primary server.  The reason is that the Mirror, the secondary, cannot aggregate database writes as the primary does.  It is so fixated on getting the data into the database that the aggregation code has been disabled!
    4. TEMPDB – now, I said there were only three parts of a SQL Server – and there are – but number 4, here, is added for clarification.  TEMPDB exists just once for every SQL Server instance – keep in mind that any given server OS installation can contain a number of SQL instances; this is simply the number of times the SQL engine is running on a given server OS. Why would you want more than one? Clustering is one reason – you may want to allow SQL databases to run on all nodes of your OS cluster.  Another reason is TEMPDB.  As was just mentioned, a single TEMPDB is allowed to operate for any single instance of SQL Server.  TEMPDB can be distributed among multiple database files, just like any database.  Logging occurs much as it does for any other database, but not exactly the same way.  For example, you cannot back up TEMPDB – there is no reason to – and you cannot restore it either.  TEMPDB exists for SQL Server to store temporary data while it's working on the final answer to queries that require outer joins.  The data from these table joins lands in TEMPDB and is then used again as the source for the question that was actually asked.  After the query is completed, TEMPDB ditches all the data that was fetched into it.  It is not uncommon for ad-hoc queries to make extensive use of TEMPDB. It is also not uncommon to see a 40/60 read/write ratio, where more than half of the data that SQL Server fetches is discarded – only to be fetched again for another ad-hoc query.
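The write aggregation described above can be sketched in a few lines of Python. This is a toy model – the page counts and workload are invented for illustration, not measured SQL Server behavior – but it shows the essential mechanism: repeated logical writes to the same hot page collapse into a single physical flush, which is why a primary server issues far fewer disk writes than its mirroring partner.

```python
# Toy model of write coalescing: many logical writes concentrated on a
# small set of hot pages collapse into one physical flush per dirty page.
import random

def coalesce_writes(logical_writes):
    """Collapse repeated writes to the same page into one flush,
    keeping only the last value written per page."""
    flushed = {}
    for page_id, value in logical_writes:
        flushed[page_id] = value   # a later write overwrites an earlier one
    return flushed                 # one physical IO per dirty page

random.seed(42)
# 1,000 logical writes spread over 300 hot pages (illustrative numbers)
workload = [(random.randrange(300), n) for n in range(1000)]

physical = coalesce_writes(workload)
factor = len(workload) / len(physical)
print(f"logical writes: {len(workload)}, physical flushes: {len(physical)}")
print(f"aggregation factor: {factor:.2f}x")
```

A mirror that must apply every logical write in order, without coalescing, pays the full 1,000 IOs – which is the asymmetry between primary and secondary described above.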

Ok…  Now, I realize that many DBAs who are reading this are thinking, "Dude, that is the most simplified, stupid, made-for-idiots explanation ever."  And perhaps it is, but, as simple and basic as the explanation is, it's a solid and factual representation of what happens inside a SQL Server.

With this information, we can start to understand how accelerating each of these areas can help a SQL Server go faster.  Keep in mind that performance is not inexpensive. Performance can be achieved in a number of ways – beginning with cleverly assembled queries, indexes, and insert operations.  Beyond making "good" code run faster, though, we can begin to apply various hardware technologies to SQL Server architectures to improve their speed.

There are three significant Xtrem technologies at this time (more coming in 2014 and 2015!).  The three are:

  1. XtremSF (Server Flash) – a hardware device that fits directly into a server.  The device is a multi-lane PCIe card that brings ultra-fast storage to a single server.  By itself, it is high-speed storage.  It is available in a multitude of capacities and at least two NAND cell types.
  2. XtremSW Cache (Software) – a device driver and management interface that allows a single host to store previously fetched data on a local storage device (it does not need to be XtremSF, but there are management benefits if it is).  The caching software allows a SQL Server (or any application: Exchange, SharePoint, Dynamics, etc.) to fetch a recently used and discarded data page without going all the way out to the rotating disk. The intention is that a relatively large dataset can reside on a local flash device (200ns!) but be protected by enterprise-class storage (3-6ms).  It's a matter of bringing milliseconds down to nanoseconds.
  3. XtremIO (Storage Device) – this is an array built from the ground up around the operating characteristics of SSDs – but it is not tied to SSDs.  If you can look into a crystal ball and see "the next thing" that's faster than an SSD, XtremIO will use it.  It can absorb data like nobody's business.  And the more you pump at it, the faster it seems to go.
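To make the XtremSW Cache idea concrete, here is a minimal read-through LRU cache in Python. This is purely illustrative – a toy model of the concept, not EMC's implementation – but it shows the essential behavior: repeat reads are served from the fast local store, and only misses go out to the backing array.

```python
# Toy read-through LRU cache: the idea behind host-side flash caching.
# Repeat reads hit the local store; misses fall through to the backend.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity, backend_read):
        self.capacity = capacity
        self.backend_read = backend_read   # slow path: the array (ms)
        self.store = OrderedDict()         # fast path: local flash (ns)
        self.hits = self.misses = 0

    def read(self, page_id):
        if page_id in self.store:
            self.store.move_to_end(page_id)    # refresh LRU position
            self.hits += 1
            return self.store[page_id]
        self.misses += 1
        data = self.backend_read(page_id)      # go out to backing storage
        self.store[page_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        return data

cache = ReadCache(capacity=2, backend_read=lambda p: f"page-{p}")
for p in [1, 2, 1, 3, 1]:
    cache.read(p)
print(f"hits={cache.hits} misses={cache.misses}")
```

The design point mirrors the text: the cache holds a copy of data that is still protected on the enterprise array, so losing the cache loses nothing but speed.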

Based on these ten elements, let’s begin to create linkages. So, just for review: we have three types of workloads, three types of SQL Server components (with a special mention for a fourth, TEMPdB), and three Xtrem technologies.  Where do each of these technologies fit each of these various workloads and/or SQL components? And why?

Combo number 1) OLTP makes heavy use of a small number of data elements as it processes orders/transactions.  These data elements could be a customer list, a parts list, a table of ZIP codes, or shipping costs based on the number of shipping zones that a package needs to traverse on its way to a customer's location.  The idea is that the SQL Server needs to have these lists handy at all times.  The traditional approach to keeping these data elements "close" is to increase server RAM.  This is a great plan, but it has two potential downsides:

1) RAM is ultimately finite – if I virtualize, I am taking and potentially reserving pooled RAM for a single server and I might actually exhaust all of the RAM that a physical server has available. There are many Intel-based servers that “max-out” at 96GB or even 192GB of physical RAM.  If I have a large number of VMs “fighting” for that RAM, I may run out. And,

2) assigning RAM to a VM is not all upside. As I add RAM, I also need to facilitate paging file capacity on physical disk.  The more I add, the more physical hard disk capacity I burn.  Now, disk is “cheap” so this is not a huge concern, but it needs to be addressed as I add server RAM.
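Those trade-offs are why the flash-cache alternative is compelling. A quick back-of-the-envelope calculation – using the illustrative latency figures mentioned earlier, roughly 200ns for local flash and a few milliseconds for an array round trip – shows how strongly the cache hit ratio drives effective read latency:

```python
# Weighted-average read latency for a flash cache in front of an array.
# Numbers are illustrative (from the post): ~200 ns flash, ~4 ms array.
FLASH_NS = 200
ARRAY_NS = 4_000_000   # 4 ms expressed in nanoseconds

def effective_latency_ns(hit_ratio):
    """Expected read latency given the fraction of reads served from flash."""
    return hit_ratio * FLASH_NS + (1 - hit_ratio) * ARRAY_NS

for hit in (0.0, 0.5, 0.9, 0.99):
    print(f"hit ratio {hit:4.0%}: {effective_latency_ns(hit) / 1000:10.1f} µs")
```

Even a modest hit ratio takes a large bite out of average latency, and unlike RAM, the flash tier doesn't consume the virtualization host's pooled memory or inflate paging-file requirements.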




Part 2 next week!

Windows Azure offers DCs in the cloud

Not often discussed, but very interesting… Microsoft Windows Azure offers, among all sorts of other things, the ability to create a DC and join it to your corporate domain!  You can imagine the possibilities.

Install a replica domain controller in Windows Azure

You can imagine the benefits… the question becomes, "Is Azure the best place for me to keep one of my Active Directory Domain Controllers?"  There are a multitude of local and national EMC-Powered Infrastructure-as-a-Service providers who can offer not only DCs in the cloud, but all sorts of other customizable options as well.  Take a quick look at EMC Cloud Service Providers - Americas.