Windows Server 2012 R2 Update

What, an R2 Update? You're kidding?

Yes, the Update for Windows Server 2012 R2, Windows 8.1, and Windows RT 8.1 will be generally available by April 8th.

Subscribers can already download the updates and new install images from Microsoft.

 

What's in it?

 

Very simple: a set of KBs, the largest about 700 MB in size:

Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 Update April, 2014

 

Important All future security and nonsecurity updates for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 require this update to be installed. We recommend that you install this update on your Windows RT 8.1, Windows 8.1, or Windows Server 2012 R2-based computer in order to receive continued future updates.

 

A servicing stack update is available for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2: March 2014

 

Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 Update April, 2014

 

This update includes the following new features and improvements:

  • Enables a more familiar mouse and keyboard functionality for modern apps and controls.
  • Improves the web application compatibility of the Internet Explorer 8 emulation mode in Internet Explorer 11 F12 Developer Tools.
  • Increases performance and reliability when you use multi-display configurations for portrait-first device experiences.

 

 

OK, keep calm and get your HoHoHo...

Providers, ready to patch your installed base?

 

There is not even an installer for all packages; it comes with a simple readme:

 

[Image: update.png]

 

The update comes with a new image version:

 

 

After applying it, my image master went from 9 GB to 16 GB...

Cleaning up with DISM and trimming also does not seem to shrink the master...

 

Master before Update:

 

Master after Update:

 

It took MS three releases of Modern UI to bring Power and Search buttons to the Start screen for admins not used to working with Modern UI :-)

Have fun Patching :-)

What's in your SLA?

People have been considering and comparing public (hosted) and private (on-premises) cloud solutions for some time in the messaging world, and at increasing rates for database and other application workloads. I'm often surprised at how many people either don't know the contents and implications of their service provider's service level agreement (SLA), or fail to adjust the architecture of the private cloud solution before directly comparing cost.

Here are my five lessons for evaluating SaaS, PaaS, and IaaS provider SLAs:

Lesson 1: Make sure that what’s important to you is covered in the SLA

Lesson 2: Make sure that the availability guarantee is what you require of the service

Lesson 3: Evaluate the gap between a service outage’s cost to business and the financial relief from the provider

Lesson 4: Architect public and private clouds to similar levels of availability for cost estimate purposes

Lesson 5: Layer availability features onto private clouds for business requirement purposes

I’ll use the Office 365 SLA to explore this topic – not because I want to pick on Microsoft,  but because it’s a very typical SLA, and one of the services it offers (email) is so universal that it’s easy to translate the SLA’s components into the business value that you’re purchasing from them.

Defining availability

The math is simple. It's a 99.9% uptime guarantee with a periodicity of one month:

[Image: the SLA's monthly uptime percentage formula]

If that number falls below 99.9, then they have not met their guarantee. For what it's worth, during a 30-day month, that allows about 43 minutes of downtime before they enter the penalty, or about 8.7 hours per year.
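
To put those figures in context, here is a quick back-of-the-envelope sketch of the downtime budget implied by an uptime guarantee (this assumes a 30-day month; the numbers are approximate):

```python
# Downtime budget implied by an uptime guarantee (approximate).
# Monthly uptime % = (total minutes - downtime minutes) / total minutes * 100

def downtime_budget_hours(uptime_pct: float, period_hours: float) -> float:
    """Hours of downtime allowed before the guarantee is missed."""
    return period_hours * (1 - uptime_pct / 100)

month_hours = 30 * 24    # 720 hours in a 30-day month
year_hours = 365 * 24    # 8,760 hours in a year

print(downtime_budget_hours(99.9, month_hours) * 60)  # ~43 minutes per month
print(downtime_budget_hours(99.9, year_hours))        # ~8.76 hours per year
```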

But what does “Downtime” mean?  Well, it’s stated clearly for each service.  This is the definition of downtime for Exchange Online:

“Any period of time when end users are unable to send or receive email with Outlook Web Access.”

Here’s what’s missing:

  • Data: The mailbox can be completely empty of email the user has previously sent and received. In fact, the email could disappear as soon as it is received. As long as they can log in via OWA, the service is considered to be "up".
  • Clients: The full Outlook client, BlackBerry, and Exchange ActiveSync (iPhone/iPad/Windows Phone, and most Android) clients are not covered in any way under the SLA.

Lesson 1: Make sure that what’s important to you is covered in the SLA

Lesson 2: Make sure that the availability guarantee is what you require of the service

Balancing SLA penalties with business impact

My Internet service is important to me. When it's down, I lose more productivity than the $1/day or so I spend on it. Likewise, email services are probably worth more than the $8/month/user or so that you might pay your provider for them. That doesn't mean that you should spend more than you need for email services. But it does mean that if you do suffer an extended or widespread outage, there will likely be a large gap between the productivity cost of the downtime and the financial relief you'll receive from the provider in the form of free services.


Callahan Auto Parts also offers a guarantee

I'll put this in real numbers. Let's say I have a 200-person organization. I might pay $1600/month for email services from a provider. If my email is down for a day during the month, my organization experiences roughly 97% uptime for that month, and as a result, my organization is entitled to a service credit from the provider worth about $800 (a 50% credit on that month's fees).


The actual cost of my downtime will very likely exceed $800. To calculate that cost, we need the number of employees, the loaded cost per hour for the average employee, and the productivity cost of the loss of email services. For our example of 200 employees, let's imagine a $50/hour average loaded cost to business and a 25% loss of productivity when email is down:

200 employees x $50 cost per hour x 0.25 productivity loss x 8-hour outage = $20,000 of lost productivity

Subtract the $800 in free services the organization will receive the next month, and the organization's liability is $19,200 for that outage.
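
For anyone who wants to plug in their own numbers, here is a minimal sketch of that gap calculation. It uses the illustrative figures above; the 50% service credit tier is an assumption based on a typical credit table, not a quote from any particular SLA:

```python
# Gap between the business cost of an outage and the SLA service credit.
# All inputs are the illustrative example figures from the text above.

employees = 200
loaded_cost_per_hour = 50.0   # average loaded cost per employee, per hour
productivity_loss = 0.25      # fraction of productivity lost while email is down
outage_hours = 8
monthly_fee = 1600.0          # 200 users x $8/user/month
service_credit = 0.50         # assumed credit tier for falling below 99% uptime

outage_cost = employees * loaded_cost_per_hour * productivity_loss * outage_hours
credit = monthly_fee * service_credit

print(f"Lost productivity: ${outage_cost:,.0f}")           # $20,000
print(f"Service credit:    ${credit:,.0f}")                # $800
print(f"Unrecovered gap:   ${outage_cost - credit:,.0f}")  # $19,200
```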

Now how do you fill that gap?  I’m not entirely sure.  It could be just the risk of doing business – after all, the business would just absorb that cost if they were hosting email internally and suffered an outage.  If the risk and impact were large enough, I would probably seek to hedge against it – exploring options to bring services in house quickly, or even looking to an insurance company to defray the cost of outages – if Merv Hughes can insure his mustache for $370,000, then surely you can insure the availability of your IT services.  Regardless, it’s wise not to confuse a “financially backed guarantee” with actual insurance or assurance against outage.

File Photo:  What a $370k mustache may look like.  Strong.

Lesson 3: Evaluate the gap between a service outage’s cost to business and the financial relief from the provider

Comparing Apples to Oranges


See what I did there?

Doing a cost comparison between a public cloud designed to deliver 99.9% availability and a private cloud designed to provide 99.99% or 99.999% availability makes little sense, but I see people do it very frequently. Usually it's because the internal IT group's mandate is to "make it as highly available as possible within the budget". So I'll see a private cloud solution with redundancy at every level, capabilities to quickly recover from logical corruption, and automated failover between sites in the event of a regional failure, compared to a public cloud solution that provides nothing but a slim guarantee of 99.9% availability. In this instance, it's obvious why the public cloud provider is less expensive, even without factoring in efficiencies of scale.

To illustrate this, I usually refer to Maslow's handy-dandy Hierarchy of Needs, customized for IT high availability.


Single Site and Multi-site Hierarchies of Need

If I want to make an accurate comparison between a public cloud provider's service and pricing and what I can do internally, I often have to strip out a lot of the services that are normally delivered internally. Here are the steps:

  1. Architect for equivalence. If the public cloud provider offers just three nines and no option for site-to-site failover, then for my database services I might just deploy a standalone database server. Maybe I'd add a cheap rapid-recovery solution (like snapshots or clones) to hedge against complete storage failure, and cluster at the hypervisor layer to provide some level of hardware redundancy. If my cloud provider offers disaster recovery, I'd figure out their target RPO/RTO and insert a solution that matches that capability. (A rough availability sketch follows this list.)
  2. Do a baseline price comparison.  Once I’ve got similar solutions to compare, I can compare price.  We’ll call this the price of entry.
  3. Add capabilities to the private cloud solution after the baseline.  I only start layering features that add availability and flexibility to the solution after I’ve obtained my baseline price.  Only then can I illustrate the true cost of those features, and compare them to the business benefits.
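
As a rough illustration of step 1, the composite-availability math below shows why a standalone design and a layered, redundant design are not comparable. The component figures are made-up examples, not vendor numbers:

```python
# Rough composite-availability math (illustrative figures only).
# Components in a chain multiply their availabilities;
# redundant copies of a component multiply their *un*availabilities.

def serial(*availabilities: float) -> float:
    """Availability when every component in the chain must be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(availability: float, copies: int) -> float:
    """Availability when any one of N redundant copies is enough."""
    return 1 - (1 - availability) ** copies

# Standalone server: host, storage, and network each at roughly 99.9%.
standalone = serial(0.999, 0.999, 0.999)                             # ~99.7%

# Same stack with clustered hosts, mirrored storage, and a hardened network.
redundant = serial(parallel(0.999, 2), parallel(0.999, 2), 0.9999)   # ~99.99%

print(f"standalone: {standalone:.4%}   redundant: {redundant:.4%}")
```

The point is not the exact figures: every extra layer of redundancy carries a cost, and that cost only shows up honestly when the baseline comparison starts from equivalent designs.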

Lesson 4: Architect public and private clouds to the same levels of availability for cost estimate purposes

Lesson 5: Layer availability features onto private clouds for business requirement purposes

Open Records and FOIA – Pushing Government Technology into the 21st Century

At a recent conference for compliance and IT professionals working in the state government sector, it quickly became evident that one of their main concerns was the tremendous increase in the number of open records requests that they have to deal with. Both the federal and state governments give much lip service to the idea of transparency, but few have made the necessary changes to properly deal with the onslaught of requests that arrive almost daily. The administration of Wisconsin's governor, Scott Walker, has already produced 60,586 pages of open records in response to 222 requests in 13 months. Compare that to 312 requests filled during the previous governor's first 4 years[1]. It's not just Wisconsin that is dealing with an explosion of open records and FOIA requests: the U.S. Department of Defense received 67,434 requests in 2009 and 74,573 in 2010, and the National Archives and Records Administration received 14,075 in 2008 and 18,129 in 2011[2]. Most government entities handle open records requests the same way they handle eDiscovery for litigation: manually and on an ad hoc basis. Unfortunately for government agencies, the required turnaround for a response is much quicker than for litigation. Federal agencies have a statutory requirement to respond to requests within 20 business days[3], and state agencies have time limits ranging from 10 to 30 days or within "a reasonable time." For this reason, IT departments are struggling to keep up, and there is a substantial backlog at most agencies.

Adding to their concerns, metadata could be a factor in public disclosure requests. On the hard drive of any standard computer, metadata is created with each underlying electronic document; it describes the document's history, tracking, and management. In Arizona and Washington, that metadata, when requested, is now also subject to public disclosure, along with the underlying document itself. On a national level, a ruling by Judge Scheindlin in February 2011 stated that responses by the federal government to FOIA requests must include metadata and be in a searchable format[4]. Although she withdrew her opinion later that year (she said it was not based on a full and complete record), her original ruling will undoubtedly influence other courts grappling with public disclosure disputes, especially as they become more technologically savvy.

At this same conference, we heard from the CIO of a large state agency who revealed the tremendous cost of dealing with open records requests, especially in a year when his agency has been the subject of several news stories and litigation[5]. The agency took several steps to reduce the cost and time associated with responding to these requests. The first step was to perform file remediation on data that was not a record and met no legal, regulatory, or business requirement for retention. Next, they began implementing an email archive in order to enforce retention and have one repository of record for all email, instead of dealing with local email storage on each hardware device. In the meantime, they have installed in-house search technology that allows the agency to find and copy the requested information in a matter of minutes, whereas the same action used to take several days. When the occasional litigation notice comes through, they have been able to use the same technology to place the requested information on hold.

Another concern for government agencies is the prospect of moving some or all of their data to the cloud. In fact, federal agencies were directed by President Obama to consider cloud-based services or storage systems for recordkeeping[6]. The challenge then becomes how to facilitate cloud management of that information while still responding quickly to public records requests. Any agency contemplating that move must ensure that the data being managed by the cloud provider is maintained in an easily accessible manner and that the provider is contractually bound to have technology in place for easy and fast retrieval of data in response to eDiscovery. Otherwise, each request may be billed as a special project, and the cost savings initially realized can quickly dissipate.

A possible step in the right direction is the common web portal for FOIA requests launching in the fall of 2012. According to The FOIA Ombudsman, the $1.3 million portal, being built mostly with funds from the Environmental Protection Agency and the Commerce Department, with some participation from NARA, could save the federal government $200 million over 5 years were it to be adopted government-wide[7]. This is a big step toward giving the public a self-service model (similar to a tool utilized by government agencies in Mexico[8]). However, a portal is only as good as the data behind it, so only time will tell whether it can serve as a national model.

Although the government is notoriously behind the private sector in modernizing its technology, the public's need for an open and transparent government does appear to be speeding up the process, to the benefit of agency budgets and, more importantly, the taxpaying public.


[2] http://www.foia.gov/

[3] 5 U.S.C. § 552(a)(6)(A)

[4] Nat. Day Laborer Org. Network v. U.S. Immigration and Customs Enforcement Agency (“NDLON”) 2011 WL 381625 (S.D.N.Y. Feb. 7, 2011)

[5] While public disclosure rules allow for collecting fees and recovering costs, some requesters who qualify for placement in favored fee categories may be charged less or may not be charged at all. Educational, news media and noncommercial scientific requesters typically pay no search or review fees and only duplication costs after a certain number of pages (usually 100 or more).  The amount of paper that is created by these responses is unacceptable.  Taxpayers are right to question why so much of their money is spent creating paper documents when 93%+ of all communication is in an electronic format (David W. Degnan, Accessing Arizona’s Government: Open Records Requests for Metadata and other Electronically Stored Information after Lake v. Phoenix, 3 Phoenix L. Rev. 69 (2010)).

[6] http://www.whitehouse.gov/the-press-office/2011/11/28/presidential-memorandum-managing-government-records

[7] http://blogs.archives.gov/foiablog/2012/01/09/foia-portal-moving-from-idea-to-reality/


A New Year’s Wish List

Jim Shook


Rather than trying to make predictions for 2012, which I tend to avoid, I thought it might be interesting to put together a short wish list of things that I hope for in 2012. The usual suspects immediately sprang to mind: that Legal and IT learn to communicate effectively, that companies begin to defensibly delete their stale and legacy data, that more eDiscovery moves in-house, and so on. Those all seemed to be a little much to absorb in January, so instead I put together a much more achievable "To Do" list with some additional resources to help.

Don’t Be Scared Of  “Archiving”

Despite surveys suggesting otherwise, our experience is that email remains the most important and painful eDiscovery repository in a company.  Email sprawl also creates operational costs and risks when it’s not properly managed.  Yet many legal departments either block or fail to assist the efforts of their IT counterparts when they decide to do something about email.  Many times, this failure is because they really do not understand email, or their understanding of an “archive” implies that they will be keeping everything forever.

In reality, modern archives enable companies to implement and enforce retention policies on email, which is a strong foundation for the defensible deletion of email. Better archives can also enable similar management of other content repositories, such as SharePoint and file shares. A good archive, with associated policies, will improve operations and reduce their cost, and make eDiscovery cheaper and easier.


Dive Into Machine Classification and Coding

Machine-based coding for document review is a hot topic. We're learning that in many cases, people just do not do a great job of reviewing and coding large volumes of information. Machines, however, are built for this type of work: they are consistent, never tire, and are cheaper than human review. An open-and-shut case, right?

In reality, there remains a misunderstanding about how these technologies actually work, and how they can be successfully deployed and defended in a litigation matter.  Clearly they hold great promise, but there’s a lot of work to be done before they become mainstream.


Be Proactive With Social Media

Many companies are using different types of “social media” to more effectively and rapidly reach their customers, partners and even their own employees.  Technologies such as Twitter, Facebook, wikis and blogs are being used daily, and it’s likely we’ll see some even newer technologies develop in 2012.

Yet social media is not a free ride. Gartner's Debra Logan predicted a year ago that by year-end 2013, half of all companies will have produced social media content in response to an eDiscovery request. But today, most companies do not have policies to regulate social media content, nor do they have much of an idea of how they might preserve and collect that ESI in response to a regulatory or litigation matter.


Understand “The Cloud”

Ahhh, the Cloud.   Depending on your vantage point, Cloud Computing may be the answer to every issue you have or the most overhyped idea since push computing in the 90s.  The IT department is attracted to the cloud’s operational efficiencies and flexibility, and the business enjoys the rapid rate of deployment.

But don’t dive in without being informed.  “Cloud Computing” is actually an umbrella term representing a number of different deployment and service models.  Operational and cost benefits found with cloud computing should be weighed against the loss of control that comes with those deployments.  In some cases, that’s an easy trade-off.  In others, particularly where compliance is concerned, it can be more difficult.  Even in tougher cases, better informed teams might be able to get the best of both worlds by leveraging private or hybrid cloud deployments.


Are you moving your data smartly?

Bryant Bell, eDiscovery Expert, EMC Information Intelligence Group

In my last posting I wrote about what you can do to protect your company's assets if you decide to move your ESI (electronically stored information) into the cloud. I pointed out that you should be sure that your cloud provider adheres to, or is at least aware of, the US–EU Safe Harbor framework. This has been a topic of concern for multinational, or at least transatlantic, corporations. But now, with the advent of the cloud, your data could be stored in Dublin, Ireland or Stuttgart, Germany even though you may be a medium-sized business in Laredo, TX. The cloud will essentially force you to start thinking about your data as if you were a multinational, even if your business doesn't expand past Texas. This is because you have now tossed your ESI into the cloud, and it will reside in whatever country your provider sees fit. So as you take that "Journey to the Cloud," I want to share some suggestions from Greg Buckles at eDiscovery Journal: http://ediscoveryjournal.com/2011/06/moving-your-esi-to-the-cloud/

You need to understand, and ask your cloud provider questions about, the basic infrastructure and data flow process that your ESI will experience:

  • How is it transferred to the cloud?
  • Where does it physically reside?
  • Is it transformed for storage?
  • How is it kept separate from other customers?
  • Does the company own all the infrastructure outright?
  • What is the disaster recovery or co-location arrangement?
  • What are your guarantees on uptime, accessibility and Service Level Agreements (SLAs) for issues?
  • What are the company policies on data privacy, subpoenas and security?
  • How can your ESI be accessed, searched and retrieved?
  • What are reasonable restoration rates for retrievals?
  • Is there an established migration/transfer mechanism in case you want to change providers?

From a regulatory, internal investigation, and litigation perspective, the points to pay particular attention to are: where your data resides, the company's policies on data privacy, subpoenas, and security, and how your ESI can be accessed, searched, and retrieved.

Moving to the cloud may be inevitable, but make sure you have a plan and are taking safeguards.