Category Archives: Windows Server 2008 R2

Is a SAN Better, Cheaper, and More Secure Than Office 365?

SANs – especially SANs from market leader EMC – are always under attack from companies hoping to cash in on a piece of the rising data growth across all industries and market segments.

  • Some say DAS is best.
  • Some say keep the storage in the servers.
  • Some say you should build your own shared array.

But when it comes to Microsoft environments, it often helps to have independent experts investigate the matter to get a fresh perspective.

In a recent study, Wikibon determined that a shared storage infrastructure (powered by EMC’s next generation VNX2 storage systems) can match the prices offered by Office 365 public cloud and offer more capabilities, more security, and more control.

This, however, assumes a completely consolidated approach to deploying multiple mixed workloads such as Exchange, SharePoint, SQL Server, and Lync – where the VNX2 really shines. We use FAST VP, FAST Cache, and a combination of drive types to achieve the best balance of performance and cost.

Are you looking for more information about deploying Microsoft applications on VNX?    Definitely check here for the most recent best practices guides!

Also check out the recent webinar I did with James Baldwin, who leads our Proven Solutions EMC/Microsoft engineering team. We had a lot of fun doing this one – hope you enjoy it.

EMC’s VNX = Award Winning storage for Microsoft environments

Microsoft’s TechEd 2013 is next week, and I’m looking forward to spending time with my longtime industry friends and making some new connections on the show floor in New Orleans.

This year, I’ll attend as part of the Unified Storage Division, and I wanted to share a little about the success of the VNX and VNXe arrays in Microsoft environments:


EMC’s VNX Unified Storage Platform has been recognized with awards from a slew of independent analysts such as Gartner, IDC, and Wikibon, as well as media publications such as ComputerWorld, CRN, and Virtualization Review, thanks to the VNX family’s ability to power mission-critical applications, integrate with virtual environments, and solve SMB IT challenges, among other accolades. We take pride in being the #1 storage for most Microsoft Windows-based applications.


Well, after speaking with Windows Server 2012 and SQL Server customers, partners, and EMC employees, the independent analyst firm Wikibon posted a before-and-after comparison model based on an enterprise customer environment. The idea is that the total cost of bolting together your own solution isn’t worth it.


The findings showed that by moving from a physical, non-tiered environment to a virtualized environment with flash and tiered storage, SQL Server customers realized a 30% lower overall TCO over a 3-year period, including hardware, software, maintenance, and management costs for their database infrastructure.

The graphic shows that a do-it-yourself approach saves very little if anything in hardware costs and will divert operational effort to build and maintain the infrastructure. Risks and costs are likely to be higher with this approach.

In the end, EMC’s VNX infrastructure was shown to deliver a lower-cost, lower-risk solution for Windows Server 2012 versus a direct-attached storage (DAS) or JBOD (just a bunch of disks) model.  Full study here.

Video of EMC’s Adrian Simays and Wikibon Analysts discussing these results is here on YouTube.


EMC’s VNX platform considers Microsoft applications, databases, and file shares to be our sweet spot, as evidenced by our early integration of the latest Windows Server 2012 features that increase performance, efficiency, availability, and simplicity for our joint customers.

Performance, Efficiency, Availability, Simplicity

1. Performance

Within Windows environments, VNX was the first storage array to support SMB 3 and ODX Copy Offload (part of SMB 3), which enables large file copies over the SAN instead of consuming network bandwidth and host CPU cycles.


This test highlights the speed difference before (left) and after (right) ODX was implemented. With EMC VNX and ODX enabled, you can accelerate your VM copies by a factor of 7 while reducing server CPU utilization by a factor of 30!

For applications and databases, VNX FAST Cache and FAST VP automatically tune your storage to match your workload requirements, saving up to 80% of the time it would take to manually balance workloads.

The Enterprise Storage Group (ESG) Lab confirmed that Windows Server 2012 with Hyper-V, Microsoft SQL Server 2012 with the new Columnstore indexing, and VNX storage with VNX FAST technologies form a complete data warehouse solution that meets the business requirements of mid-tier organizations and beyond. An 800 GB data warehouse was deployed, which is fairly typical for a medium-sized business. With EMC FAST enabled, throughput reached up to 379 MB/sec, over 100% better than SQL Server 2012’s baseline Rowstore indexing, and the DSS performance workload completed up to nine times faster than with Rowstore indexing.

2. Efficiency

IT managers and storage administrators frequently adopt well-known forecasting models to pre-allocate storage space according to the storage demand growth rate. The main challenge is pre-allocating just enough capacity for the application. Reports from many storage array vendors indicate that 31% to 50% of allocated storage is either stranded or unused – meaning 31% to 50% of the capital investment in the initial storage installation is wasted.

The VNX supports Windows host-level and built-in storage-level thin provisioning to drastically reduce initial disk requirements.  Windows Server 2012 can detect thin-provisioned storage on EMC storage arrays and reclaim unused space once it is freed by Hyper-V. In this scenario, an ODX-aware host connected to an EMC intelligent storage array automatically reclaims the 10 GB of freed storage and returns it to the pool, where other applications can use it.
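The allocate-on-write / reclaim-on-free behavior can be sketched with a toy model. This is a hypothetical illustration of thin-pool accounting, not the VNX implementation:

```python
# Toy model of thin-provisioned pool accounting (illustrative only,
# not how the VNX actually implements thin provisioning).

class ThinPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0

    def write(self, gb):
        # Thin provisioning: physical blocks are consumed only when written.
        if self.allocated_gb + gb > self.capacity_gb:
            raise RuntimeError("pool out of space")
        self.allocated_gb += gb

    def reclaim(self, gb):
        # A thin-aware host signals freed space; the pool returns those
        # blocks so other applications can use them.
        self.allocated_gb = max(0, self.allocated_gb - gb)

    @property
    def free_gb(self):
        return self.capacity_gb - self.allocated_gb

pool = ThinPool(capacity_gb=100)
pool.write(30)     # Hyper-V writes 30 GB of VHD data
pool.reclaim(10)   # host frees 10 GB; the pool gets it back
print(pool.free_gb)  # -> 80
```

The point of the sketch: without reclaim, the pool would sit at 70 GB free forever even though the host no longer needs those 10 GB.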

Furthermore, for application storage we partner with companies like Kroll and Metalogix to provide better solutions for Exchange single-item recovery and SharePoint remote BLOB storage, which can reduce SharePoint objects stored in SQL Server by about 80-90% and improve SQL response times by 20-40%.

3. Availability

Our first-to-market integration with SMB 3 not only provides performance improvements, it also enables SMB 3 Continuous Availability, allowing applications to run on clustered volumes with failovers that are transparent to end users.  For example, SQL Server may store system tables on file shares, so any disruption to file share access can interrupt SQL Server operation. Continuous Availability is accomplished via cluster failover on the host side and Data Mover or Shared Folder failover on the VNX side.

Other SMB 3.0 Features supported include:

  • Multi-Channel / Multipath I/O (MPIO) – Multiple TCP connections can now be associated with a single SMB 3.0 session, and a client application can use several connections to transfer I/O on a CIFS share.  This optimizes bandwidth and enables failover and load balancing with multiple NICs.
  • Offload Copy – Copying data within the same Data Mover can now be offloaded to the storage, which reduces the workload on the client and network.
  • SMB Encryption – Provides secure access to data on CIFS shares, protecting data on untrusted networks and providing end-to-end encryption of data in-flight.
  • Directory Lease – SMB 2 introduced a directory cache, which allowed clients to cache a directory listing to save network bandwidth but would not see new updates.  SMB 3 introduces a directory lease, so the client is now automatically aware of changes made in a cached directory.
  • Remote Volume Shadow Copy Service (RVSS) – With RVSS, point-in-time snapshots can be taken across multiple CIFS shares, providing improved performance in backup and restore.
  • BranchCache – A caching solution that keeps business data in a local cache. The main use case is remote office and branch office storage.

EMC also offers a wide range of application availability and protection solutions that are built into the VNX including snapshots, remote replication, and a new RecoverPoint virtual replication appliance.

4. Simplicity

When it comes to provisioning storage for their applications, admins often have to navigate too many repetitive tasks, touching different UIs and increasing the risk of human error. Admins also likely need to coordinate with other administrators each time they need to provision space. This is not very efficient. Take, for example, a user who wants to provision space for SharePoint: you need to work with Unisphere to create a LUN and add it to a storage group; then log onto the server and run Disk Manager to import the volume; then work with Hyper-V, then SQL Server Management Studio, then SharePoint Central Admin. Tedious, to say the least.


EMC Storage Integrator (ESI), on the other hand, streamlines everything we just talked about. Forget about how much faster it actually is – just think about the convenience and elegance of this workflow compared to the manual steps outlined in the last paragraph. ESI is a free MMC-based download that takes provisioning all the way into Microsoft applications. Currently only SharePoint is supported, but SQL Server and Exchange wizards are coming soon. This is a feature that surprises and delights our customers!


EMC’s VNX not only provides a rock-solid core infrastructure foundation, but also delivers significant features and benefits for application owners and DBAs. Here are some quotes from customers who have transformed their Microsoft environments using the VNX and VNXe platforms.

Peter Syngh Senior Manager, IT Operations, Toronto District School Board

 “EMC’s VNX unified storage has the best of everything at a very cost-effective price. It integrates with Microsoft Hyper-V, which is crucial to our cloud strategy, and with its higher performance, automated tiering and thin provisioning, VNX was a no-brainer.”

Marshall Bose Manager of IT Operations, Ensco (Oil/Gas)

 “A prime reason for choosing EMC over NetApp was that VNX is such a great fit for virtualization. With all the automation tools and tight integration with VMware, VNX is far easier than NetApp when it comes to spinning up and managing virtual machines.”

Rocco Hoffmann, IT Architect BNP Paribas (German Bank)

“We are achieving significant savings in energy and rack space. In fact our VNX requires only half the rack space and has reduced our power and cooling costs.”

Charles Rosse, Systems Administrator II Baptist Memorial Health Care

“Since the VNX has been built into the design of our VDI from the beginning, it can easily accommodate growth – all we need to do is plug in another drive or tray of drives and we get incrementally better performance.”

Erich Becker,  Director of Information Systems, AeroSpec (Manufacturing)

“…We loved the fact that VNXe and VMware worked extremely well together …we have dramatically cut operating costs, increased reliability and data access is now twice as fast as before.”


There are many more customers who have praised the VNX Family for powering their Microsoft applications, but I don’t have room to include them all. EMC is a trusted brand in storage, and the VNX today is an outstanding unified platform that successfully balances our customers’ block and file needs for their Microsoft file and application data – and gets awards for it. Feel free to find out more about the VNX and VNXe product lines here and here.

Also come talk to us next week at TechEd, we will be there to help customers and partners learn more about our technology.

Find out more about our TechEd plans here.

Also download the VNXe Simulator executable right here.  It’s pretty awesome and shows you the unique VNXe management interface.

2013: A Mobile Datacenter Odyssey

At EMC World last week, Avnet Technology Solutions introduced the Avnet Mobile Data Center Solution for EMC VSPEX.

Click here or on the picture below to access my latest video which provides a bit more about this rolling datacenter-in-a-box environment and features Stefan Voss, Business Development Manager from EMC.


What is the Avnet Mobile Data Center Solution for EMC VSPEX?

Exclusively available through Avnet’s U.S. and Canadian partner community, this mobile data center solution leverages VSPEX Proven Infrastructures to create private clouds. Channel partners’ enterprise customers will benefit by being able to deploy data centers that have been ‘hardened’ to operate in harsh environments, supporting business continuity, data center moves, disaster recovery, large-scale special events, and remote field locations.

It was named one of the Top 3 hottest products at EMC World this year by Channel Reseller News / CRN (link) and includes System Center, SharePoint, Metalogix, and many more partners.

Find more information here and here

What’s new in Exchange 2013, 2 Webcasts, and More!

Next week I’ll be on a couple of webcasts related to Exchange server protection:

In these webcasts, we’ll blend solid best-practices content with information about some of our latest products. I promise not to waste your time!

Webcast 1:  Introducing EMC AppSync: Advanced Application Protection Made Easy for VNX Platforms

In this webinar, we’ll describe how to set up a protection service catalog for any company and how easy EMC AppSync makes using snapshot and continuous data protection technology on a VNX storage array… As a bonus, we’ll show a cool demo.

Sign up here.

Webcast 2: Protecting Exchange from Disaster: The Choices and Consequences

In this webcast, we’ll explore the 3 common Exchange DR options available to customers with an advanced storage array like an EMC VNX.  One of the highlights is that I’ll be joined by independent Microsoft guru Brien Posey, who has the lowdown on what’s new in Exchange 2013 related to storage and DR enhancements and will describe how many things change in Exchange 2013 – and how many stay the same.  Oh, and of course we’ll have a cool demo for this one too!

Sign up here.

2 Great AppSync Exchange 2010 Single Item Restore Demos

Our friend Ernes Taljic from Presidio launched the Presidio technical blog “Converging Clouds” with a post about EMC’s new replication management software, EMC AppSync.

He also made two excellent videos that showcase virtualized Exchange 2010 Protection and Single Item Restore with RecoverPoint and VNX Snapshots – all managed by AppSync.


AppSync and ItemPoint with VNX Snapshots

AppSync and ItemPoint with RecoverPoint

Application Protection: There’s Something Happening Here

There’s something happening here
What it is ain’t exactly clear
There’s a man with a gun over there
Telling me I got to beware

Yes, it’s blasphemy to simply change a classic like Buffalo Springfield’s “For What It’s Worth” – but I will anyway to prove my point.

There’s something happening here

If you haven’t noticed, IT is changing rapidly. Just search for IT transformation, IT as a Service, and converged infrastructure to see how far we’ve come in only the past few years.  This industry moves!

What it is ain’t exactly clear

We know a cloud is built differently, operated differently, and consumed differently. So we know companies have begun re-architecting IT to offer more of a service and to react faster to user needs. They know they must change their operational models and, in many cases, their organizational structure. They might also adopt converged infrastructures to get moving faster.    But… has protection changed to keep pace with this transformation?

There’s a man with a gun over there
Telling me I got to beware

It’s been said that in the song the gun is more of a metaphor for the tension between groups within the US before Vietnam. And in a much less violent analogy, the tension between the IT team and the application owners has never been stronger.

The application teams want great performance and protection for their applications. But they’ve never been empowered by the IT department to protect themselves with storage-level tools. The storage team wants to let them, but fears they might create too many copies of their data. So instead, the app owners went out and used tools specific to their own applications, creating their own protection strategies that might not deliver the best protection available.  To win back the hearts and minds of the application owners and DBAs, the IT department and the storage teams need to get better at protecting applications as a service.

On the Road to Application Protection as a Service

Many companies have attempted this in the past – with products that help you protect and restore your applications and critical virtual machines. These tools install on the server and can “freeze” and “thaw” the current transactions in the database, so that when a snapshot is taken, there is a clean copy that can be easily restored.  The major benefit of these tools is SPEED: the copy process is incremental and the restore process is lightning fast – restoring a 1 TB database in minutes.

It needs to get easier. Like any “enterprise” tool, many of these products designed for snapshots and replication require a significant learning curve. We need something simple that integrates with the tools we know and love.

We should provide self-service capabilities. Instead of IT spending hours and hours making sure application owners get the protection they need, those owners should be empowered to simply protect and restore their own data.

We are driven by service levels. IT departments and storage teams need to offer “protection service catalogs” with various levels of protection (e.g. Platinum, Gold, Silver, Bronze) varied by RPO – from very low data loss (synchronous replication) to more sporadic application-consistent snapshots – all from one interface. This makes it easy for the app team and the people with the checkbooks to understand the value placed on the different applications in your catalog.
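At its core, a protection service catalog is a small lookup table mapping tiers to RPOs. Here's a minimal sketch – the tier names, RPO values, and methods are illustrative examples, not an actual EMC catalog or AppSync configuration:

```python
from datetime import timedelta

# Illustrative protection service catalog. Tiers, RPOs, and methods
# are hypothetical examples, not an actual EMC configuration.
CATALOG = {
    "Platinum": {"rpo": timedelta(0),          "method": "synchronous replication"},
    "Gold":     {"rpo": timedelta(minutes=15), "method": "continuous data protection"},
    "Silver":   {"rpo": timedelta(hours=4),    "method": "app-consistent snapshots"},
    "Bronze":   {"rpo": timedelta(hours=24),   "method": "nightly snapshot"},
}

def max_data_loss(tier):
    """Worst-case data loss (the RPO) promised for a catalog tier."""
    return CATALOG[tier]["rpo"]

print(max_data_loss("Silver"))  # -> 4:00:00
```

A table like this is exactly what makes the conversation with the checkbook holders concrete: each tier has a price, a method, and a worst-case data-loss number.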

There truly is something happening here
And what it is will be made clear at EMC World 2012!

Hope to see you there!

EMC PowerPath vs MPIO – Take the High Road

Guest post by Mark Prahl

If you live in New England like I do, you have experienced some of the wettest weather on record in recent times. And if you live in an old town like I do, dating back to before the American Revolution, you know that some of those old paved paths can get flooded and become impassable when the rain comes.
Flooded road at Blackwater NWR

Photo by Leon Reed

Well, if you’re using one of those data path solutions native to an operating system or hypervisor, you can expect some limitations in the paths at your disposal. Most use a basic method like round robin, which distributes I/O among all available data paths in sequence because it considers all paths to be equal.

Now, just imagine your data paths are roads and you’re in the Northeast like I am. What do you do when a road is underwater? Keep sending traffic down the same roads in sequence because the software treats them all as equal? I think not.  You might eventually get to your destination if you’re lucky, but you’re just as likely to arrive late or not at all.

Well, the same goes for multipathing software.

Want to deliver a clear path for your customers?  Want to ensure the best performance?   Take the high road and get EMC PowerPath Multipathing. Right out of the box, PowerPath automatically selects the right optimized data path algorithm for your data center environment.
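The difference between naive round robin and an adaptive policy can be sketched with a toy model. This is purely illustrative – PowerPath's actual algorithms are proprietary and far more sophisticated than this:

```python
import itertools

# Toy path-selection policies (illustrative only; PowerPath's real
# load-balancing algorithms are proprietary and more sophisticated).

paths = {"path_a": 0, "path_b": 0, "path_c": 0}  # outstanding I/Os per path

rr = itertools.cycle(paths)  # round robin treats every path as equal

def round_robin():
    # Blindly rotates through all paths, congested or not.
    return next(rr)

def least_busy():
    # Adaptive policy: route the next I/O down the least-loaded path,
    # steering around a congested ("flooded") one.
    return min(paths, key=paths.get)

paths["path_b"] = 50       # path_b is congested
print(round_robin())       # still hands out path_b when its turn comes
print(least_busy())        # picks a path that is not path_b
```

Round robin is the "keep driving down the flooded road" policy; the adaptive version checks the traffic first.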

The Enterprise Strategy Group recently compared PowerPath Multipathing with Windows native MPIO and showed the performance advantages of PowerPath in Windows environments.  Results ranged from about 20% to over 200% better performance with PowerPath depending on the application.

But don’t take my word for it. Read the report yourself!

Mark Prahl is a high-tech business and marketing professional who has been running businesses and talking or writing about products and gadgets for business or personal consumption for some time. Currently, he is a member of the infrastructure management group at EMC crafting his own corner of the world to share thoughts about infrastructure management software and more. When not defining or promoting technology products, Mark can be found playing guitar around the greater Boston area with whomever may invite him up on stage.

The Windows IT Pro Community Has Spoken! EMC Takes Gold… and Silver!

Best Hardware: Storage
  • Gold Community Choice Award for EMC CLARiiON
  • “Great build quality, powerful performance, and low price—the CLARiiON series is a winner.”

Best SharePoint Product
  • Silver Community Choice Award for EMC SourceOne

Windows Geoclusters, Stretch-Clusters, and RecoverPoint/CE Failover

Taking a page out of Chief EMC Blogger Chuck Hollis‘ playbook, I’m attaching the graphics from the entire PPT file that I thought would be important to highlight for this blog and its readers.  Some of the graphics didn’t fit the page as well as I thought they would (I need to shrink them further). So if you like what you see, you can download the whole PPT right here: RecoverPointCE-MSfailoverclusterPPT

In a nutshell, EMC’s RecoverPoint/Cluster Enabler extends a Microsoft cluster across two sites.  A Microsoft cluster normally provides local-site “HA,” or high availability of server nodes, and RecoverPoint/CE adds “DR,” or disaster recovery, by stretching the second node to anywhere outside of your primary datacenter.  This presentation walks you through the basics behind that simple idea and provides some additional background.   Slide-building credit goes to Gary Archer, a great guy who is always keeping me sharp on RecoverPoint’s latest features.

Recovery Time Objective: Targeted amount of time to restart a business service after a disaster event

Recovery Point Objective: Amount of data lost from a failure, measured as the span of time preceding the disaster event
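A quick worked example makes the two definitions concrete. All numbers here are hypothetical, chosen only to illustrate how RPO and RTO are counted:

```python
# Hypothetical RPO/RTO arithmetic (numbers are illustrative only).

# RPO: with an app-consistent snapshot every 15 minutes, the worst case
# is losing everything written since the last snapshot.
snapshot_interval_min = 15
worst_case_rpo_min = snapshot_interval_min

# RTO: the time to restart the business service is the sum of the
# recovery steps after the disaster event.
restart_steps_min = {
    "detect failure": 5,
    "fail over storage": 10,
    "restart application": 10,
}
rto_min = sum(restart_steps_min.values())

print(worst_case_rpo_min)  # -> 15 (minutes of data at risk)
print(rto_min)             # -> 25 (minutes until the service is back)
```

Synchronous replication drives the RPO toward zero; automation like RecoverPoint/CE attacks the RTO side.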

Various approaches for DR and their RTO rankings

Microsoft Failover Clusters (formerly MSCS, or Wolfpack if you go back really far) provide local HA, not DR across sites.  For that, you need to S-T-R-E-T-C-H your cluster. EMC’s Cluster Enabler is one way to do it, and using RecoverPoint with it would be like having your iPhone on Verizon.  Not the best analogy, but you get my point, I hope!

Basic requirements – use SYNCHRONOUS or ASYNCHRONOUS replication. Distance is not the issue; the limits are 400 ms of latency for ASYNC and 4 ms for SYNC.

Leverages majority node set clustering.    If you have 2 nodes/servers on Site A and 2 nodes/servers on Site B, you will need a “tiebreaker” to decide who remains online after a failure – the most common method is a File Share Witness.  Many articles can give you additional background on majority node set clustering – it’s a good thing to know – and I will point you to the blog of an old friend of mine, John Toner, who writes about geographically dispersed clusters.
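The tiebreaker math is simple vote counting: a partition stays online only if it can see a strict majority of the votes. A simplified sketch (real cluster quorum handles more states than this):

```python
# Simplified majority-node-set quorum math (illustrative only;
# real Windows cluster quorum logic handles many more cases).

def has_quorum(votes_visible, total_votes):
    """A partition stays online only if it sees a strict majority."""
    return votes_visible > total_votes // 2

# 2 nodes at Site A + 2 nodes at Site B + 1 File Share Witness = 5 votes.
total = 5

# The inter-site link fails. Site A sees its 2 nodes plus the witness:
print(has_quorum(2 + 1, total))  # -> True: Site A stays online
# Site B sees only its own 2 nodes:
print(has_quorum(2, total))      # -> False: Site B shuts down cleanly
```

Without the witness it would be 2 votes against 2 on a link failure, neither side would have a majority, and you'd risk exactly the split-brain scenario the filter driver works to prevent.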

The architecture. 

What each piece does:  CE is a filter driver that “catches” Microsoft Cluster failure events and lets the RecoverPoint-managed disk systems know to fail over as appropriate.  Very sophisticated logic is built in to prevent cluster split-brain – scenarios where the link is down and the application (such as a SQL Server database) doesn’t know which node is the correct owner of the disk resources.

Notice what is happening above – AUTOMATIC FAILOVER.

Integrates with and supports Hyper-V

Works with the latest features like Live Migration – so you can Live Migrate workloads locally for HA and fail over remotely for DR.  You can control whether you want to fail over locally before failing over across sites.

Self explanatory – the failover steps in detail.

More detail of Live Migration support – note synchronous requirement.

Multi-array support.  We can create consistency groups with storage devices from multiple arrays in the same group.  This allows for a lot of interesting failover implementations (failing over locally first rather than remotely, for example) and lets you keep components grouped together… like an entire SharePoint farm.

Hey, it works with Oracle on Windows too.

Recap of the benefits – hopefully it makes sense, and it’s the reason customers love this integration: with RecoverPoint/CE you get more control, less bandwidth required (3-12x bandwidth savings as reported by RP customers), and integration with Microsoft Clusters for seamless failover.

Now that is a cool product.

Cool Infographic Poster for Hyper-V

Step 1.  Download large infographic poster here (or click on the picture). Hat tip to Techhead for sharing this one.

Step 2.  Find a BAP (Big A–* Printer)

This thing is 40 inches by 25 inches!

Maybe send it to your local photo developer or print shop.

Or maybe just download the PDF and keep it handy.

I love graphics that have an insane amount of detail embedded into them!


* this word is blocked by Microsoft Forefront 🙂

CLARiiON/CX SCOM Management Pack

Did you know that EMC has a SCOM management pack for the CLARiiON?  Yes, you can indeed get CLARiiON alerts through SCOM.

The idea behind it is that you set up a Windows host as a CX event-monitoring station.  The station grabs events from the CX and inserts them into its event log.  The pack can then take action on an event (raise a SCOM alert) if configured to do so.

It works against alerts; this is not a facility to import array performance data into SCOM.  There are a variety of very robust tools that can do performance monitoring for you.  Take a look at this screen shot to see the types of events and actions you can take.

It’s a very handy way to monitor and track events generated on the CLARiiON.  Almost best of all, it’s free to EMC customers.  Just log into Powerlink and go to:

Home > Support > Software Downloads and Licensing > Downloads J-O > Navisphere Server Software.

More over here via Jedi Princess.

Automating Perfmon with Perfcollect

[ Post by Paul Galjan ]

When I started here at EMC, I was pleased to see that most of us use actual host data (perfmon) to size out our storage.  We have a variety of cool tools that will analyze perfmon output and help visualize trends, size out the replication bandwidth required, and a whole lot more.

But performance data gives you only half of what you need in order to size storage.  You need the capacity of the disks in order to do a complete sizing.  This resulted in a fair number of conversations that went like this:

Sales Rep: Hey Paul, did you get that perfmon data from the customer?
Paul: Sure did.  Based on the performance data, they’ll need about 12 15k disks for the database, and 4 for the logs.
Sales Rep: So sixteen 146G 15k disks?
Paul: Well, I’m not sure; I don’t have the capacity information.
Sales Rep: But you looked at the performance data.
Paul: Yes.
Sales Rep: And based on that, it looks like they’ll need sixteen disks to address the performance requirement.
Paul: Yes

Sales Rep:  So I can go ahead and quote 16 146G 15k disks, right?

You only need to have that conversation about three dozen times before you realize that something must be done.
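The moral of that conversation – you need both numbers – boils down to a few lines of arithmetic. The disk figures below are hypothetical examples, not sizing guidance:

```python
import math

# Sizing sketch: the disk count must satisfy BOTH the performance (IOPS)
# and capacity requirements. All figures here are hypothetical examples.

def disks_needed(required_iops, required_gb, iops_per_disk, gb_per_disk):
    for_performance = math.ceil(required_iops / iops_per_disk)
    for_capacity = math.ceil(required_gb / gb_per_disk)
    # Take whichever requirement demands more spindles.
    return max(for_performance, for_capacity)

# 16 disks cover the IOPS, but the capacity requirement pushes it to 21:
print(disks_needed(required_iops=2800, required_gb=3000,
                   iops_per_disk=180, gb_per_disk=146))  # -> 21
```

Quote sixteen disks off the performance data alone and you'd be five short – which is exactly why perfcollect grabs the capacity information too.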

So I wrote a tool called perfcollect.  It runs on Windows 2003 and later, tested on x86, x64, and even ia64.  Once I started writing it, I figured out that I could solve a lot more problems than I actually set out to solve.

First, I decided not to limit the counter to just storage.  The tool collects a wide variety of counters related to CPU, memory and even the application context.  It will collect up to 350 counters, based on the XML profiles from the very cool PAL tool. The counters include all sorts of stuff relevant to Exchange, SQL Server, SharePoint, AD, Hyper-V and more.

Second, it collects configuration information that is available only by doing WMI queries on the server, but is nonetheless still relevant to performance troubleshooting.

Operation of the tool is very simple.  You download the tool from my own blog site, and run it as administrator on your server (it automatically elevates privileges if you’re running it on 2008, Vista, or 7).  Select the duration of the collection and the sample frequency, hit Enter, and let it go.  Come back and look in c:\perflogs\EMC, and you’ll see a directory tree of text files and CSVs.

Here’s the progress of what perfcollect actually does:

  • Detects the version of Windows running.  If it’s running 2000 or earlier, it exits
  • Presents the UI portion, where you select the duration and frequency of samples
  • Detects all available counters on the system
  • Builds a list of relevant “interesting” counters based on what is available
  • Builds a list of services running on the machine
  • Gets boot options of the machine
  • Builds a list of applications installed on the machine
  • Builds a list of disks on the system and their capacity information – outputs in CSV and human-friendly text formats
  • Dumps event logs of error and above to CSV
  • Builds a list of disks on the system and their offsets
  • Gets network configuration information
  • Enumerates hardware on the system – processors and types, disks, SCSI and iSCSI adapters, tape drives, and media changers
  • Executes “systeminfo”
  • Executes “driverquery”
  • Consolidates information relevant to PAL (Number of processors, boot options, system type, and memory)
  • Starts the perfmon collection – output in CSV format.
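One of the steps above – building the disk list with capacity information in both CSV and human-friendly text – might look roughly like this. Perfcollect itself uses WMI on Windows; this portable standard-library version is only an illustrative sketch:

```python
import csv
import os
import shutil
import tempfile

# Rough sketch of perfcollect's "disk list with capacity" step.
# The real tool uses WMI queries on Windows; this stdlib version
# is a portable illustration, not the actual implementation.

def collect_disks(mount_points, csv_path):
    rows = []
    for mp in mount_points:
        usage = shutil.disk_usage(mp)
        rows.append({
            "mount": mp,
            "total_gb": round(usage.total / 1024**3, 1),
            "free_gb": round(usage.free / 1024**3, 1),
        })
    # CSV output for tooling...
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["mount", "total_gb", "free_gb"])
        writer.writeheader()
        writer.writerows(rows)
    # ...and a human-friendly text version alongside it.
    for r in rows:
        print(f"{r['mount']}: {r['total_gb']} GB total, {r['free_gb']} GB free")
    return rows

out_csv = os.path.join(tempfile.gettempdir(), "disks.csv")
collect_disks([tempfile.gettempdir()], out_csv)
```

The dual output mirrors the tool's design: CSVs feed PAL and Excel, while the text files are for the human reading the collection by hand.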

The whole process usually takes less than a minute – excluding the time it takes to actually sample the data, of course.

Once you’re done, the perfmon CSVs are ready for use with any tool you use to manipulate and visualize perfmon data: PAL, perfmon itself, Excel, etc.  If you’re worried about size, I’ve never seen an uncompressed perfmon file generated by perfcollect over 40 MB.

The tool “belongs” to EMC (in that I used EMC’s money to feed my family while I was developing it, and I tested it in EMC’s incredible labs).  But it’s free of charge to use, and the output is yours.  If you use it to collect data just to get a baseline of your servers’ performance, or troubleshoot a problem, we’re cool with that.

You can get more information in the Perfcollect README file.

The software is licensed “as-is.” The contributors give no express warranties, guarantees or conditions. You may have additional consumer rights under your local laws which this license cannot change. To the extent permitted under your local laws, the contributors exclude the implied warranties of merchantability, fitness for a particular purpose and non-infringement.
The Microsoft Corporation tools are packaged herein through permission granted by Microsoft Corporation through the premier contract with EMC.  Grep, gawk, and printf are distributed unmodified under the GNU General Public License.

Notice of Copyright
This program is the confidential unpublished intellectual property of EMC Corporation.  It includes without limitation exclusive copyright and trade secret rights of EMC throughout the world.

Large Scale Hyper-V Clusters, Cluster Enabler, and VPLEX

I talked with Partner Engineering Manager Txomin Barturen about how to get ultimate scale from Hyper-V with Cluster Shared Volumes. He also spoke about his session on multi-site Windows clustering configurations and EMC’s Cluster Enablers, which plug directly into Microsoft’s clustering framework, as well as EMC’s storage virtualization and transportation appliance, VPLEX.

Recorded at Microsoft TechEd 2010.

Virtual Winfrastructure – EMC and Hyper-V

I’d like to help introduce a new blogger in the house at EMC.

Adrian Simays will be blogging and advocating EMC’s approach towards Microsoft’s Hyper-V.

His blog, named Virtual Winfrastructure, aims to highlight the fact that EMC has a large group of people and projects dedicated to Microsoft’s Hyper-V.  We also have a lot of customers using a hybrid approach of both Hyper-V and VMware in their virtualization efforts.   And each of our product teams has been busy working on documentation that shows how it all works.

Here’s a small sample here:

I am subscribing, and look forward to reading more good stuff from Adrian – a really smart dude who can put it all in simple terms.  Learn more about Adrian here.  Subscribe here.

W2K8 R2 Hyper-V Live Migration with Exchange 2010, SQL 2008 R2, SCVMM, and EMC CLARiiON NQM

Longest title ever?  Thankfully I abbreviated SCVMM down from System Center Virtual Machine Manager.  Anyway…

Microsoft has announced their launch dates for Windows 7, Windows 2008 R2, and Exchange 2010.

EMC will be there to support them in many cities including Baltimore, NYC, Irvine, Raleigh, St Louis (to name a few).

I was asked to see if we could put together a quick demo showcasing some of the cool stuff we could do, and we hooked it up FAST.

My colleague Ryan Kucera and I put together a quick little proof of concept showing a combination of dynamic storage and server load balancing. In a little over a week (just before his next proof-of-concept build-out), we were able to crank out a demo that showcases:

  • System Center Virtual Machine Manager R2 (beta)
  • Hyper-V R2 Live Migration (not released yet)
  • Exchange 2010 (not released yet)
  • SQL 2008 R2 (not released yet)
  • CLARiiON Virtual Provisioning (creation of thin LUNs)
  • Storage IOPS thresholds (Navsphere Quality of Service Manager aka NQM)

The setup of the demo was this:

You’re setting up your virtual servers on Hyper-V hosts and moving things around pretty quickly…  You place two busy VMs on the same host.  Performance is bad. You need to move the VMs without downtime – we use Windows 2008 R2 Live Migration to show this.  Then you notice that, because we’re using CLARiiON Virtual Provisioning and thin LUNs for simplified management, multiple heavily utilized LUNs for different VMs are competing with each other on the same set of disks.  No problem. NQM gives you the ability to place a threshold on LUNs (like a 500 IOPS max for SQL 2008 R2 in the video) and let others (like the standalone Exchange 2010 VM in the video) have more IOPS to service more requests.

Too many people don’t know most EMC storage devices can do this (in both physical and virtual environments).

But now you do.

(looking for higher resolution on the video – click here)

Windows 7 Release Dates Set

Microsoft has confirmed the general availability (GA) date for Windows 7 and Windows 2008 R2 as October 22, 2009.  Release to Manufacturing (RTM) code will be delivered to partners and TechNet/MSDN subscribers at the end of July.

According to an informal survey on ZDNet, this could mean the beginning of a delay period for many new PC purchasers.

Here’s a snippet and poll results from the ZDNet article:


“Now, this presents anyone thinking about buying a new PC with a dilemma. Do they buy a PC now and skip 7 until they buy another? Buy a PC now and upgrade it when 7 is released? Wait until the tech guarantee is on offer, buy a PC with Vista on it and upgrade when the OEM delivers the upgrade? Or do they just wait for a PC with 7 pre-loaded on it? What would you do?”

  • Wait until you can pick up a PC with 7 pre-loaded (54%)
  • Wait until the tech guarantee is on offer, upgrade when 7 is available (27%)
  • Buy the PC now with Vista on it, upgrade to 7 when it’s available (5%)


It’s apparent from this data that, in general, people would rather avoid upgrades (54%) than perform one themselves (32%), so they may delay their purchases until PCs pre-loaded with the fresh OS are available in October.

Could this mean trouble for Microsoft?

Probably not.  First of all, not many people shopping for a laptop online or at the local tech superstore will be aware of these dates (unlike us early information seekers).  Microsoft is also a large company with a broad array of revenue streams, so a slight downward turn in Vista revenue would likely be only a short blip on the radar.

I am guessing Windows 7 is going to be a huge hit, with the overwhelming majority of users who’ve tried the OS providing very positive reviews.  Unlike Vista, it’s small and fast enough to be loaded onto netbooks.  Early tablet users (shout out to Ryan!) are enjoying the enhanced multitouch capabilities.  Native VHD and boot from VHD support will be huge, once people understand it. Built-in Wireless anywhere will allow you to have mobile-phone like access to the web from anywhere.  And Wordpad.  Wordpad looks nicer. 🙂

How to Build an Efficient Application Infrastructure through Virtualization

I couldn’t figure out how to embed this into my blog, but this is a great, short video which shows how EMC is working with Microsoft to virtualize applications like Exchange, SQL, and SharePoint.  It’s only ten minutes long and showcases one of EMC’s great technologists, Brian Martin, as he speaks with Microsoft’s Jim Schwartz.

Windows Server 2008 Foundation

The low end of Windows 7 – Starter Edition – has already been discussed quite a bit across the blogosphere, and its big brother is a bit jealous.  Like the future desktop version, Windows Server 2008 Foundation is intended to go lower (in price and functionality) than prior releases of the operating system.  Get em hooked, get em to upgrade is always the name of the game in software, isn’t it?

Windows 7 Starter Edition will have a “cap” of 3 applications and doesn’t support customized wallpaper from what I hear.

Windows Server 2008 Foundation will have a “cap” of 15 users, doesn’t include or support Hyper-V, and cannot be used on multiprocessor machines (multiple cores in a single socket are OK).  It will be targeted primarily at small companies and emerging economies such as China, Brazil, and India.  Better to have something cheap than to promote further piracy, I guess. Cheap is about $150-200, compared with Windows 2008 Standard, which you can find for about $500.

In the US market, I think it could be a new option for home users who are looking to build their own sandboxes with “real” servers and don’t want to pay full price for a higher-end product.  The lack of Hyper-V support is unfortunate, because I think a lot of people would want to get their hands on this technology, at least to test out and learn.  And if people can put apps (Exchange, SQL) on it, they (Microsoft) should theoretically get the licensing revenue anyway…

BUT – where does Home Server fit?  What about MinWin?  🙂

Press Release

Paul Thurrott’s SuperSite for Windows Coverage

Video Coverage

Maximum LUN Size for Windows Servers

(numbers last updated 5/30/13) Sometimes people ask hypothetical questions of EMC’s field TC’s (technical consultants) just to see what we say.

Here’s one example of a hypothetical question that a few of us were discussing internally:

Question: What is the maximum size of a LUN you can present to a Windows server?

Answer: 2TB, or 16TB, or 256TB, or 16 EB (exabytes), depending on which website you read.  🙂


It all comes down to the addressing scheme of the underlying disk partitioning format and the filesystem you choose when creating the volume that gets presented to the Windows server.

Address Spaces and Phone Numbers

I remember a time when I could dial a five-digit local phone number, such as 5-1212, to reach local people.  As the town grew, numbers extended to seven digits (555-1212), and eventually to ten-digit strings such as (123) 555-1212.  We began using a larger addressing space.

Address Spaces and IP Addresses

A better analogy is IPv6, which, although still in its infancy, is being put into place to provide a significantly larger “address space” for IP addresses.  IPv4 (the current standard) allows 2^32 addresses. The new address space supports 2^128 (about 3.4×10^38) addresses and could eliminate the need for clunky workarounds like NAT.  A typical IPv6 address looks like 2001:db8:cafe::1, considerably longer than a dotted-quad IPv4 address.  You can read more about IPv6 in its Wikipedia entry or in the free book, The Second Internet. If your ISP does not offer IPv6 connectivity yet, you can use IPv6 tunnels, and various online test sites can verify your IPv6 connectivity.
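If you like seeing the numbers, the difference in scale is easy to compute (a quick illustrative sketch, nothing more):

```python
# Address-space sizes for IPv4 (32-bit) and IPv6 (128-bit).
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")   # about 4.3 billion
print(f"IPv6: {ipv6_addresses:.3e} addresses")  # about 3.4 x 10^38

# IPv6 offers 2^96 times as many addresses as IPv4.
print(ipv6_addresses // ipv4_addresses == 2 ** 96)  # → True
```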

Back to the Disks

Master Boot Record (MBR) disks have 32-bit address spaces.  Both the partition length and partition start address are stored as 32-bit quantities. When the block size is 512 bytes (one sector, in most cases), this implies that neither the maximum size of a partition nor the maximum start address (both in bytes) can exceed 2^32 × 512 bytes, or 2 TB.

Alleviating this capacity limitation is one of the prime motivations for the development of the GUID Partition Table (GPT).

GUID Partition Table (GPT) disks use 64-bit addressing, or 2^64 × 512 bytes, giving you much more room to grow. From a Windows perspective, Windows Server 2003 SP1 introduced GPT support, and it has been carried forward since.
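The arithmetic behind both limits is simple, assuming the common 512-byte sector (a quick sketch):

```python
SECTOR = 512  # bytes; the common case

mbr_max = 2 ** 32 * SECTOR   # 32-bit LBA fields in the MBR partition table
gpt_max = 2 ** 64 * SECTOR   # 64-bit LBA fields in the GPT

print(mbr_max // 2 ** 40, "TiB")  # → 2 TiB
print(gpt_max // 2 ** 70, "ZiB")  # → 8 ZiB
```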


Windows Filesystems Maximums

  • FAT volumes have a maximum size of 4GB and a file size limit of 2GB.
  • FAT32 file systems have a maximum volume size of 32GB with a file size limit of 4GB.
  • NTFS volumes can be up to 2TB on an MBR disk and 16 Exabytes (EB) on GPT disks.
  • The maximum size NTFS volume that has been tested by Microsoft is 16 TB.
  • The maximum size of a VHD is 2040 GB (8 GB short of 2 TB).
  • The maximum size of a VHDX is 64 TB.

In reality, there are other considerations when creating huge volumes:

SCSI limitations.  Microsoft Windows operating systems support two different SCSI command forms for reads and writes (Read10/Write10 and Read16/Write16), and each uses a different address width.  Read10/Write10 has room for 4 bytes of block address (a max of 2TB with 512-byte sectors), while Read16/Write16 has room for 8 bytes, for a max of about 8 zettabytes.

Backup and Restore. How do you do this effectively for such huge volumes?  Not a streaming backup, I hope.  Hopefully you’d never have to restore one of these from tape…

So, while this is not an official EMC blog, unofficially I’d recommend presenting no LUN bigger than 2 TB to a Windows server.  Anything larger, and you should check with your storage vendor to make sure it’s going to work the way you want without causing any problems for you down the road…



Thin/Virtual Provisioning and Windows 2008 Formatting Confusion


Lately there’s been some confusion about how Quick Format differs from a Full Format in Windows 2008.  To understand it fully, let’s first go back to Windows 2003 and compare it to Windows 2008.

Windows 2003 (and Windows XP)

Quick format will remove files from the partition, but does not scan the disk for bad sectors. Microsoft has stated to “only use this option if your hard disk has been previously formatted and you are sure that your hard disk is not damaged.”

Full format will remove files from the volume that you are formatting and the hard disk will be scanned for bad sectors. The scan for bad sectors is responsible for the majority of the time that it takes to format a volume.

Windows 2008 (and Windows Vista)

Quick format will remove files from the partition, but does not scan the disk for bad sectors.

Full format will remove files, scan the disk for errors, and write zeros along the full length of the disk (zero filling).  This will not only lengthen the amount of time the format takes, but will also destroy any thin or virtual provisioning benefits your storage array is offering.

So, if you are going to use a disk array that supports thin or virtual provisioning, make sure you understand that a full format in Windows 2008 will cause the “thin” volume to become fully allocated or “thick.”
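If it helps to visualize why, here’s a toy model (purely illustrative, with made-up chunk counts) of a thin LUN that allocates backing chunks only on first write:

```python
class ThinLUN:
    """Toy model of a thin LUN: backing chunks are allocated on first write."""
    def __init__(self, size_chunks):
        self.size_chunks = size_chunks
        self.allocated = set()

    def write(self, chunk_index):
        self.allocated.add(chunk_index)  # any write -- even zeros -- allocates

    def consumed(self):
        return len(self.allocated)

lun = ThinLUN(size_chunks=1000)

# Quick format: touches only a little filesystem metadata near the front.
for chunk in range(5):
    lun.write(chunk)
print("after quick format:", lun.consumed(), "of 1000 chunks")  # → 5

# Full format (Windows 2008): zero-fills the whole volume, so every chunk
# gets written and the "thin" LUN becomes fully allocated ("thick").
for chunk in range(1000):
    lun.write(chunk)
print("after full format:", lun.consumed(), "of 1000 chunks")  # → 1000
```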

Database and Application Considerations

Also be aware that some databases and applications also have this same tendency when creating databases and log files – such as SQL Server.

SQL Server 2005 uses the instant file initialization feature for data files (kinda like a quick format). For log files, or when instant file initialization is not enabled, SQL Server performs zero stamping (zero filling). Versions earlier than SQL Server 2005 always perform zero stamping.

Learn more about EMC Symmetrix implementation and considerations for virtual provisioning here (scroll to bottom of page)

Reference: KB article 941961

Windows XP vs. Windows Vista

Why is formatting a disk so confusing?


New Whitepaper on Windows 2008 R2 and Hyper-V Live Migration

Windows Server 2008 R2 & Microsoft Hyper-V Server 2008 R2 – Hyper-V Live Migration Overview & Architecture can be downloaded from here:

Description: One of the most highly anticipated new features in Windows Server® 2008 R2 Hyper-V™ is live migration. This document describes the live migration feature of Windows Server® 2008 R2 Hyper-V™ in detail, including how live migration moves running VMs and requirements for implementing live migration.

Because no one likes to read manuals, I will excerpt from the paper:

How Live Migration Works:

  1. All VM memory pages are transferred from the source Hyper-V™ physical host to the destination Hyper-V™ physical host.  While this is occurring, any modifications the VM makes to its memory pages are tracked.
  2. Pages that were modified during step 1 are transferred to the destination physical computer.
  3. The storage handle for the VM’s VHD files is moved to the destination physical computer.
  4. The VM is brought online on the destination Hyper-V™ server.

Further details and pictures are in the whitepaper.
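The memory-copy phase in steps 1 and 2 is the classic iterative pre-copy approach. Here’s a rough sketch of the idea in Python – an illustration of the general algorithm, not Hyper-V’s actual implementation:

```python
def live_migrate(memory_pages, rounds=5, dirty_fn=None):
    """Iterative pre-copy: keep re-sending pages dirtied during the previous
    round until the dirty set is empty (or we give up), then pause the VM
    and send whatever remains."""
    transferred = {}
    dirty = set(memory_pages)                    # round 1: everything is "dirty"
    for _ in range(rounds):
        if not dirty:
            break
        for page in dirty:
            transferred[page] = memory_pages[page]   # copy to destination
        dirty = dirty_fn() if dirty_fn else set()    # pages touched meanwhile
    # Brief pause: VM stopped, final dirty pages and state sent, VM resumed
    for page in dirty:
        transferred[page] = memory_pages[page]
    return transferred

pages = {i: f"page-{i}" for i in range(8)}
result = live_migrate(pages)
print(len(result))  # → 8
```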


Requirements:

  • Hyper-V™ live migration is supported on the following editions of Windows Server 2008 R2: Windows Server 2008 R2 x64 Enterprise Edition and Windows Server 2008 R2 x64 Datacenter Edition.
  • Live migration is also supported on Microsoft® Hyper-V™ Server 2008 R2.
  • Microsoft Failover Clustering must be configured on all physical hosts that will use live migration.
  • Failover Clustering supports up to 16 nodes per cluster.
  • The cluster should be configured with a dedicated network for live migration traffic.
  • Physical host servers must use a processor or processors from the same manufacturer.
  • Physical hosts must be configured on the same TCP/IP subnet.
  • Physical hosts must have access to shared storage.

Recommendations and Notes:

  • A clustered shared volume is recommended for VM storage in a cluster where live migration will be used.
  • One live migration can be active between any two cluster nodes at any time, so a cluster supports number_of_nodes/2 simultaneous live migrations.  For example, a 16-node cluster supports 8 simultaneous live migrations, with no more than one live migration session active on each node of the cluster.
  • A dedicated 1 Gigabit Ethernet connection is recommended for the live migration network between cluster nodes to transfer the large number of memory pages typical for a virtual machine.
  • The cluster configurations that have been validated by vendors can be found through the listings in the FCCP program, under the heading The Microsoft Support Policy for Windows Server 2008 Failover Clusters (Microsoft KB article 943984).

Deploying Live Migration Steps

  1. Configure Windows Server 2008 R2 Failover Clustering.
  2. Connect both physical hosts to networks and storage.
  3. Install Hyper-V™ and Failover Clustering on both physical hosts.
  4. Enable Cluster Shared Volumes.
  5. Make the Virtual Machines highly available.
  6. Test a Live Migration.

For detailed, step-by-step instructions see the deploying live migration whitepaper at this URL:

Some Interesting Tidbits about Windows 2008 R2 and Hyper-V:

  • Enhanced Processor Support: Hyper-V hosts will support up to 32 cores (guests remain at 4, I believe)
  • Networking Enhancements: jumbo frame support, TCP Chimney support, and a new Virtual Machine Queue (VMQ) feature that lets physical network cards use direct memory access (DMA) to place packet contents directly into memory.
  • Dynamic VM Storage: Now supports the addition and removal of VHDs and pass-through disks while a VM is running. Hot add and removal of storage requires Hyper-V Integration Services.

I am sorry if in fact you do like to read manuals (this is the Cliff Notes version).

Windows 2008 R2 Beta Testers: Free Licenses of Sanbolic Melio FS

Sanbolic offers an interesting proposition for Windows 2008 Server/Hyper-V users today: replace NTFS with Melio FS.  Melio FS is similar to VMFS in that it is a clustered filesystem allowing for multiple servers/VMs to access a shared LUN at the same time.

Sanbolic is aware of the upcoming Cluster Shared Volumes (CSVs) feature in Windows 2008 R2 and wants people to see how unique their capabilities really are.


Sanbolic’s free trial license of Melio FS can be used to:

  • Provide shared LUN access for multiple VMs.
  • Eliminate the “one LUN, one VM” rule imposed by Quick Migration.

More information about Sanbolic and Melio FS here.

Hat tip to Scott Lowe

Here’s a wmv video of John Savill describing Melio FS.


Demo: Hyper-V Live Migration using EMC SRDF/Cluster Enabler

How EMC’s SRDF/CE provides Live Migration and Disaster Recovery protection of Hyper-V virtual machines between Symmetrix storage arrays in a non-disruptive manner, including bidirectional failover and failback across geographically dispersed clusters.  Peter Griffin would describe this functionality as frickin sweet!

Hyper-V Server 2008 R2 Beta Released


Holy Crap, Batman. It's "Hyper-V Server 2008 R2" Beta.

Like a stealth bomber flying too low to be detected by radar, Microsoft’s Hyper-V Server 2008 R2 was just put into public beta.  With all the hoopla over Windows 7, this very interesting piece of beta software was largely overlooked.

This edition adds:

  • Support for VMotion-like “Live Migration”
  • Support for Failover Clustering
  • Improved Memory and CPU Support
  • Improved Config utility

If interested, you can get it here.

How to Get Windows 7 and Windows Server 2008 R2 Beta

Last night at CES, Ballmer and friends presented some announcements for Microsoft in the upcoming year.

Instead of focusing their session on gadgets (such as the much discussed/rumored Zune Mobile device), they focused on their next big operating system which aims to fix all of Vista’s ailments. They also announced several other partnerships and plans for 2009.  More on Microsoft’s CES summaries here, here, and here.

How to Get Windows 7 and Windows Server 2008 R2 Beta

Windows 7 Beta (and Windows Server 2008 R2) is now available to TechNet Plus subscribers.  Friday it will be open to the public. Enter this into an RSS reader if you want to find out the instant it is publicly available:

Initial reports are very good; I’m anxious to hear your Windows 7 stories.