Category Archives: Virtualization

Is a SAN Better, Cheaper, and More Secure Than Office 365?

SANs – especially SANs from market leader EMC – are always under attack from companies hoping to cash in on the rising tide of data growth across all industries and market segments.

  • Some say DAS is best.
  • Some say keep the storage in the servers.
  • Some say you should build your own shared array.

But when it comes to Microsoft environments, it often helps to have independent experts investigate the matter to get a fresh perspective.

In a recent study, Wikibon determined that a shared storage infrastructure (powered by EMC’s next-generation VNX2 storage systems) can match the pricing of the Office 365 public cloud while offering more capabilities, more security, and more control.

This, however, assumes a completely consolidated approach for deploying multiple mixed workloads such as Exchange, SharePoint, SQL Server, and Lync – where the VNX2 really shines. We use FAST VP, FAST Cache, and a combination of drive types to achieve the best balance of performance and cost.

Are you looking for more information about deploying Microsoft applications on VNX? Definitely check here for the most recent best practices guides!

Also check out the recent webinar I did with James Baldwin, who leads our Proven Solutions EMC/Microsoft engineering team. We had a lot of fun doing this one – hope you enjoy it.

Building a Microsoft Azure Private Cloud – Powered by EMC VNX Storage

Recently EMC held a Microsoft Summit, where many of the Microsoft-savvy engineers and business folks within EMC get together to share their stories and lessons learned.

One of the highlights of these sessions is always the work of Txomin Barturen, our resident Microsoft expert in EMC’s Office of the CTO.

His blog can be found here:  http://datatothepeople.wordpress.com/
(Bookmark it, and look for videos and blog posts soon to follow)

This year his session focused on our work with Microsoft Hyper-V, Microsoft System Center, private clouds, and the powerful Azure Pack for Windows.

Sure, everyone knows about EMC’s affinity towards VMware (EMC’s VNX was rated best storage for VMware 3 years in a row), but many don’t know how focused we are on Hyper-V and helping customers power their Microsoft Cloud.

EMC is committed to being the best storage for private clouds – for enterprises, and for service providers who deploy clouds for their customers – on VMware or Hyper-V.

Evidence of EMC’s Microsoft Private Cloud Work

To get to this stage, we’ve had to do a lot of work.

And beyond our engineering ability, we also showcased our agility.

  • VNX was the first storage platform to support SMB 3.0 (VNX & VNXe)
  • VNX was the first storage platform to demonstrate ODX (TechEd 2012)
  • Our E-Lab aggressively submits Windows Logo certifications (EMC currently has the most Windows Server 2012 R2 certs)

Where do you find these materials? 

We’ve built Microsoft Private Cloud (Proven) solutions on VNXe, VNX & VMAX leveraging SMI-S and PowerShell, delivered through EMC’s VSPEX program or as part of our Microsoft Private Cloud Fast Track solutions (which are Microsoft-validated, ready-to-run reference architectures). You can find more about this work here.

Getting to a More Agile Cloud

Txomin’s presentation talked about how customers want everything an Azure public cloud model offers in terms of agility and management, but without the loss of control – in other words, an on-premises cloud deployment. They want to offer *-as-a-Service models, elastic scale, and a self-service model for tenants, but without the SLA risks that sit outside IT’s control when deploying on a full public cloud.

The Middle Ground:  The Azure Pack for Windows

Microsoft is putting together some really interesting cloud management software with the Azure Pack for Windows: a free, downloadable set of services that offers the same interface as the Azure public cloud but provides more control for companies unwilling to deploy on the public cloud due to performance, reliability, security, or compliance concerns.

Since we’ve done all of the baseline private cloud work, we can now use it as a foundation for building a Microsoft private cloud on-premises on a VNX storage platform using the new Azure Pack for Windows.

Built atop the new Windows Server 2012 R2 platform, the Windows Azure Pack (WAP) enables public-cloud-like management and services without the risk. It layers right on top of EMC’s Windows Fast Track and Private Cloud offerings with no additional technology required.

Although it offers a limited subset of services today, we expect that Microsoft will introduce more services as customers adopt this new model.

One of the first use cases Microsoft is focusing on is service providers who want better management for their Microsoft clouds. This allows for new integrations and capabilities that weren’t previously available: IT staff can treat business units as tenants, offer pre-configured solutions via the Gallery, and enable self-service management by tenants (delegated administration). Utilization and reporting are available through System Center and third-party integrations, fully extensible through Operations Manager, Orchestrator, and Virtual Machine Manager.

This is truly the future of Microsoft’s virtualization strategy, and EMC is right there to enable customers to build the best, most reliable, secure, manageable private cloud.

But what about Data Protection?

Well, our colleagues in the Backup and Recovery Systems division of EMC are no slackers.  They saw the same trends and are eager to help customers stay protected as they move to the cloud.

In this demo, Alex Almeida, Sr. Technical Marketing Manager for EMC’s Backup and Recovery Systems, demonstrates how the EMC Data Protection Suite provides full support for Windows Azure private cloud backup and recovery.

So let me correct my statement…  EMC is right there to enable customers to build the best, most reliable, secure, manageable private cloud – AND PROTECT IT.

EMC’s VNX = Award Winning storage for Microsoft environments

Microsoft’s TechEd 2013 is next week, and I’m looking forward to spending time with my longtime industry friends and making some new connections on the show floor in New Orleans.

This year, I’ll attend as part of the Unified Storage Division, and I felt I should share a little about the success of VNX and VNXe arrays in Microsoft environments:

EMC’s VNX Unified Storage Platform has been recognized with awards from a slew of independent analysts such as Gartner, IDC, and Wikibon, as well as media publications such as ComputerWorld, CRN, and Virtualization Review, thanks to the VNX family’s ability to power mission-critical applications, integrate with virtual environments, and solve SMB IT challenges. We take pride in being the #1 storage for most Microsoft Windows-based applications.

BUT… DOES MICROSOFT WINDOWS NEED A SAN? CAN’T WE DO IT OURSELVES?

Well, after speaking with Windows Server 2012, SQL Server, and EMC customers, partners, and employees, the independent analyst firm Wikibon posted a before-and-after comparison model based on an enterprise customer environment. The takeaway: the total cost of bolting together your own solution isn’t worth it.

[Figure: Wikibon before/after cost comparison for a Windows infrastructure]

The findings showed that by moving a physical, non-tiered environment to a virtualized environment with flash and tiered storage, SQL Server customers realized a 30% lower overall TCO over a three-year period, including hardware, software, maintenance, and management costs for their database infrastructure.

The graphic shows that a do-it-yourself approach saves little if anything in hardware costs while diverting operational effort to building and maintaining the infrastructure. Risks and costs are likely to be higher with this approach.

In the end, EMC’s VNX infrastructure was shown to deliver a lower-cost, lower-risk solution for Windows Server 2012 than a direct-attached storage (DAS) or JBOD (just a bunch of disks) model. Full study here.

A video of EMC’s Adrian Simays and Wikibon analysts discussing these results is here on YouTube.

MICROSOFT INTEGRATIONS AND INNOVATIONS  

EMC’s VNX platform treats Microsoft applications, databases, and file shares as our sweet spot, as evidenced by our early integration of the latest Windows Server 2012 features that increase performance, efficiency, availability, and simplicity for our joint customers.

Performance, Efficiency, Availability, Simplicity

1. Performance

Within Windows, we were the first storage array to support SMB 3.0 and ODX Copy Offload (part of SMB 3.0), enabling large file copies to be handled by the array over the SAN instead of consuming network bandwidth and host CPU cycles.

[Figure: VM copy performance before (left) and after (right) ODX]

This test highlights the speed difference before (left) and after (right) ODX was implemented. With EMC VNX and ODX enabled, you can accelerate your VM copies by a factor of 7 while reducing server CPU utilization by a factor of 30!
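
If you want to verify that ODX is in play on a Windows Server 2012 host, one quick check is the FilterSupportedFeaturesMode registry value (0 means offload is allowed, 1 means it is disabled). A minimal PowerShell sketch:

```powershell
# Check whether ODX (Offloaded Data Transfer) is enabled on this host.
# FilterSupportedFeaturesMode = 0 -> ODX allowed; 1 -> ODX disabled.
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' |
    Select-Object FilterSupportedFeaturesMode
```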

For applications and databases, VNX FAST Cache and FAST VP automatically tune your storage to match your workload requirements, saving up to 80% of the time it would take to manually balance workloads.

The Enterprise Strategy Group (ESG) Lab confirmed that Windows Server 2012 with Hyper-V, Microsoft SQL Server 2012 with the new columnstore indexing, and VNX FAST technologies on VNX storage form a complete data warehouse solution that meets the business requirements of mid-tier organizations and beyond. An 800 GB data warehouse was deployed, which is fairly typical for a medium-sized business. With EMC FAST enabled, throughput reached up to 379 MB/s, over 100% improvement on SQL Server 2012’s baseline rowstore indexing, and the DSS performance workload completed up to nine times faster than with rowstore indexing.
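
For reference, creating a columnstore index in SQL Server 2012 is a one-statement change. Here is a minimal sketch driven from PowerShell via Invoke-Sqlcmd; the instance, database, table, and column names are hypothetical:

```powershell
# Requires the SQLPS module that ships with SQL Server 2012.
Import-Module SQLPS -DisableNameChecking

# In SQL Server 2012, columnstore indexes are nonclustered and make the table read-only.
Invoke-Sqlcmd -ServerInstance 'SQL01\DW' -Database 'SalesDW' -Query @'
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (DateKey, ProductKey, StoreKey, SalesAmount);
'@
```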

2. Efficiency

IT managers and storage administrators frequently adopt well-known forecasting models to pre-allocate storage space according to the storage demand growth rate. The main challenge is pre-allocating just enough capacity for the application: reports from many storage array vendors indicate that 31% to 50% of allocated storage ends up stranded or unused, which means 31% to 50% of the capital investment in the initial storage installment is wasted.

The VNX supports Windows host-level and built-in storage-level thin provisioning to drastically reduce initial disk requirements. Windows Server 2012 can detect thin-provisioned storage on EMC storage arrays and reclaim unused space once it is freed by Hyper-V. For example, when space is freed (say, 10 GB released by deleting a virtual disk), an ODX-aware host connected to an EMC intelligent storage array automatically reclaims that storage and returns it to the pool, where it can be used by other applications.
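
On the Windows side, you can confirm that delete notifications (UNMAP/TRIM) are being passed down to the array, and nudge the host to re-issue them for already-freed space. A minimal sketch, assuming a thin LUN mounted as drive D::

```powershell
# 0 means delete notifications (UNMAP/TRIM) are sent to the array; 1 means they are not.
fsutil behavior query DisableDeleteNotify

# Re-send UNMAP for free space on a thin-provisioned volume (Windows Server 2012).
Optimize-Volume -DriveLetter D -ReTrim -Verbose
```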

Furthermore, for application storage we partner with companies like Kroll and Metalogix to provide better solutions for Exchange single item recovery and SharePoint remote BLOB storage, which can reduce SQL-stored SharePoint objects by about 80-90% and improve SQL response times by 20-40%.

3. Availability

Our first-to-market integration with SMB 3.0 not only provides performance improvements, it also enables SMB 3.0 Continuous Availability, allowing applications to run on clustered volumes with failovers that are transparent to end users. For example, SQL Server may store system tables on file shares, such that any disruption to access of the file share can interrupt SQL Server operation. Continuous Availability is accomplished via cluster failover on the host side and Data Mover or Shared Folder failover on the VNX side.

Other SMB 3.0 Features supported include:

  • Multi-Channel / Multipath I/O (MPIO) – Multiple TCP connections can now be associated with a single SMB 3.0 session, and a client application can use several connections to transfer I/O on a CIFS share. This optimizes bandwidth and enables failover and load balancing with multiple NICs (see the PowerShell sketch after this list).
  • Offload Copy – Copying data within the same Data Mover can now be offloaded to the storage, which reduces the workload on the client and network.
  • SMB Encryption – Provides secure access to data on CIFS shares, protecting data on untrusted networks with end-to-end encryption of data in flight.
  • Directory Lease – SMB 2 introduced a directory cache that allowed clients to cache a directory listing to save network bandwidth, but it would not see new updates. SMB 3 introduces a directory lease, so the client is automatically made aware of changes in a cached directory.
  • Remote Volume Shadow Copy Service (RVSS) – With RVSS, point-in-time snapshots can be taken across multiple CIFS shares, improving backup and restore performance.
  • BranchCache – A caching solution that keeps business data in a local cache; the main use case is remote office/branch office storage.
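
To see several of these features from the Windows Server 2012 client side, the built-in SMB cmdlets are handy. A minimal sketch, run on a host with shares mapped from the VNX:

```powershell
# Which SMB dialect was negotiated for each connection (3.0 when SMB 3 is in use)?
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect

# Is Multichannel actually spreading the session across multiple NICs?
Get-SmbMultichannelConnection

# Confirm Multichannel is enabled on the client.
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
```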

EMC also offers a wide range of application availability and protection solutions that are built into the VNX including snapshots, remote replication, and a new RecoverPoint virtual replication appliance.

4. Simplicity

When it comes to provisioning storage for their applications, admins often have to navigate too many repetitive tasks, touching different UIs and increasing the risk of human error. Admins also likely need to coordinate with other administrators each time they need to provision space. This is not very efficient. Take, for example, a user who wants to provision space for SharePoint: you need to work in Unisphere to create a LUN and add it to a storage group; then log onto the server and run Disk Manager to import the volume; then work with Hyper-V, then SQL Server Management Studio, then SharePoint Central Admin. A bit tedious, to say the least.
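
For a sense of what just the host-side portion of that manual workflow involves, here is a minimal sketch using the Windows Server 2012 storage cmdlets (the volume label is hypothetical, and the Unisphere and application steps still happen elsewhere):

```powershell
# Find the newly presented (uninitialized) LUN, then bring it into Windows.
$disk = Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' } | Select-Object -First 1
Initialize-Disk -Number $disk.Number -PartitionStyle GPT

# Create a partition, assign a drive letter, and format it for SharePoint data.
New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SharePointData' -Confirm:$false
```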

[Screenshot: ESI provisioning workflow]

EMC Storage Integrator (ESI), on the other hand, streamlines everything we just talked about. Forget about how much faster it actually is… just think about the convenience and elegance of this workflow compared to the manual steps outlined above. ESI is a free MMC-based download that takes provisioning all the way into Microsoft applications. Currently only SharePoint is supported, but SQL and Exchange wizards are coming soon. This is a feature that surprises and delights our customers!

SO WHAT DO VNX CUSTOMERS SAY?

EMC’s VNX not only provides a rock-solid core infrastructure foundation, but also delivers significant features and benefits for application owners and DBAs. Here are some quotes from customers who have transformed their Microsoft environments using the VNX and VNXe platforms.

Peter Syngh, Senior Manager, IT Operations, Toronto District School Board

“EMC’s VNX unified storage has the best of everything at a very cost-effective price. It integrates with Microsoft Hyper-V, which is crucial to our cloud strategy, and with its higher performance, automated tiering and thin provisioning, VNX was a no-brainer.”

Marshall Bose, Manager of IT Operations, Ensco (Oil/Gas)

“A prime reason for choosing EMC over NetApp was that VNX is such a great fit for virtualization. With all the automation tools and tight integration with VMware, VNX is far easier than NetApp when it comes to spinning up and managing virtual machines.”

Rocco Hoffmann, IT Architect, BNP Paribas (German Bank)

“We are achieving significant savings in energy and rack space. In fact, our VNX requires only half the rack space and has reduced our power and cooling costs.”

Charles Rosse, Systems Administrator II, Baptist Memorial Health Care

“Since the VNX has been built into the design of our VDI from the beginning, it can easily accommodate growth – all we need to do is plug in another drive or tray of drives and we get incrementally better performance.”

Erich Becker, Director of Information Systems, AeroSpec (Manufacturing)

“…We loved the fact that VNXe and VMware worked extremely well together… we have dramatically cut operating costs, increased reliability, and data access is now twice as fast as before.”

BOTTOM LINE

There are many more customers who have praised the VNX family for powering their Microsoft applications, but I don’t have room to include them all. EMC is a trusted brand in storage, and the VNX today is an outstanding unified platform that successfully balances our customers’ block and file needs for their Microsoft file and application data – and gets awards for it. Feel free to find out more about the VNX and VNXe product lines here and here.

Also come talk to us next week at TechEd; we will be there to help customers and partners learn more about our technology.

Find out more about our TechEd plans here.

Also download the VNXe Simulator executable right here. It’s pretty awesome and shows you the unique VNXe management interface.

What’s New for Exchange 2013 Storage?

By: Brien M. Posey

Many of Exchange Server 2013’s most noteworthy improvements are behind-the-scenes architectural changes rather than new product features. Perhaps nowhere is this more true than in Exchange Server’s storage architecture. Once again, Microsoft invested heavily in Exchange’s storage subsystem in an effort to drive down overall storage costs while improving performance and reliability. This article outlines some of the most significant storage-related improvements in Exchange Server 2013.

Lower IOPS on Passive Database Copies

In failure situations, failover from an active mailbox database to a passive database copy needs to happen as quickly as possible. In Exchange Server 2010, Microsoft expedited the failover process by maintaining a low checkpoint depth (5 MB) on the passive database copy. Microsoft’s reasoning was that failing over from an active to a passive database copy required the database cache to be flushed, and a large checkpoint depth would have increased the time needed to flush the cache, causing the failover to take longer to complete.

The problem was that maintaining a low checkpoint depth came at a cost. The server hosting the passive database copy had to do a lot of work in terms of pre-read operations in order to keep pace with demand while still maintaining a minimal checkpoint depth. The end result was that a passive database copy produced nearly the same level of IOPS as its active counterpart.

In Exchange Server 2013, Microsoft made a simple decision that greatly reduced IOPS for passive database copies, while also reducing the database failover time. Because much of the disk I/O activity on the passive database copy was related to maintaining a low checkpoint depth and because the checkpoint depth had a direct impact on the failover time, Microsoft realized that the best way to improve performance was to change the way that the caching process worked.

In Exchange 2013, the cache is no longer flushed during a failover. Instead, the cache is treated as a persistent object. Because the cache no longer has to be flushed, its size has little bearing on how long a failover takes. As such, Microsoft designed Exchange 2013 with a much larger checkpoint depth (100 MB). The larger checkpoint depth means the passive database doesn’t have to work as hard at pre-reading data, which cuts IOPS on the passive database copy roughly in half. Furthermore, failovers normally complete in about 20 seconds.
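
You can watch this behavior from the Exchange Management Shell; Get-MailboxDatabaseCopyStatus shows the health and queue depths for each copy. A minimal sketch (the database name is hypothetical):

```powershell
# Show health, copy queue, and replay queue for every copy of database DB01.
Get-MailboxDatabaseCopyStatus -Identity 'DB01' |
    Format-Table Name, Status, CopyQueueLength, ReplayQueueLength -AutoSize
```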

Although the idea of driving down IOPS for passive database copies might sound somewhat appealing, some might question the benefit. After all, passive database copies are not actively being used, so driving down the IOPS should theoretically have no impact on the end user experience.

One of the reasons why reducing the IOPS produced by passive database copies is so important has to do with another architectural change that Microsoft has made in Exchange Server 2013. Unlike previous versions of Exchange Server, Exchange Server 2013 allows active and passive database copies to be stored together on the same volume.

If an organization does choose to use a single volume to store a mixture of active and passive databases, then reducing the IOPS produced by passive databases has a direct impact on the performance of the active databases.

This new architecture also makes it easier to recover from disk failures within a reasonable amount of time. Exchange Server 2013 supports volume sizes of up to 8 TB. With that in mind, imagine what would happen if a disk failed and needed to be reseeded. Assuming that the majority of the space on the volume was in use, it would normally take a very long time to regenerate the contents of the failed disk.

Part of the reason for this has to do with the sheer volume of data that must be copied, but there is more to it than that. Passive database copies are normally reseeded from an active database copy. If all of the active database copies reside on a common volume, then that volume’s performance will be the limiting factor in how long it takes to rebuild the failed disk.

In Exchange Server 2013, however, volumes can contain a mixture of active and passive database copies, which means the active copies likely reside on different volumes (typically on different servers). The data needed to rebuild the failed volume is therefore pulled from a variety of sources, so the data source is no longer the limiting factor in how long a reseed takes. Assuming the disk being reseeded can keep pace, the reseeding process completes much more quickly than it would if all of the data were coming from a single source.

In addition, Exchange Server 2013 periodically performs an integrity check of passive database copies. If a database copy is found to have a status of FailedAndSuspended, Exchange checks whether any spare disks are available. If a valid spare is found, Exchange Server automatically remaps the spare and initiates an automatic reseeding process.
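
A quick way to spot copies that this automatic reseeding (AutoReseed) would act on, and to review the DAG folder settings it relies on, is sketched below (the server and DAG names are hypothetical):

```powershell
# Find database copies in the state that triggers automatic reseeding.
Get-MailboxDatabaseCopyStatus -Server 'MBX01' |
    Where-Object { $_.Status -eq 'FailedAndSuspended' }

# Review the AutoReseed volume/database root folders configured on the DAG.
Get-DatabaseAvailabilityGroup -Identity 'DAG1' | Format-List AutoDag*
```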

Conclusion

As you can see, Microsoft has made a tremendous number of improvements to the way Exchange Server manages storage in DAG environments. Passive database copies generate fewer IOPS, and failovers happen more quickly than ever before. Furthermore, Exchange Server can even use spare disks to quickly recover from certain types of disk failures.

Why Storage Networks Can Be Better Than Direct Attached Storage for Exchange

Guest Post By: Brien M. Posey

Of all the decisions that must be made when planning an Exchange Server deployment, perhaps none are as critical as deciding which type of storage to use. Exchange Server is very flexible with regard to the types of storage that it supports. However, some types of storage offer better performance and reliability than others.

When it comes to larger scale Exchange Server deployments, it is often better to use a Storage Area Network than it is to use direct attached storage. Storage networks provide a number of advantages over local storage in terms of costs, reliability, performance, and functionality.

Cost Considerations

Storage Area Networks have gained something of a reputation for being more expensive than other types of storage. However, if your organization already has a Storage Area Network in place, you may find that the cost per gigabyte of Exchange Server storage is lower on your storage network than it would be if you were to invest in local storage.

While this statement might seem completely counterintuitive, it is based on the idea that physical hardware is often grossly underutilized. To put this into perspective, consider the process of purchasing an Exchange mailbox server that uses Direct Attached Storage.

Organizations that choose to use local storage must estimate the amount of storage needed to accommodate Exchange Server databases, plus room for future growth. This means making a significant investment in storage hardware. In doing so, an organization purchases the necessary storage space, but it may also be spending money on storage space that is not immediately needed.

In contrast, Exchange servers that are connected to storage networks can take advantage of thin provisioning. This means that the Exchange Server only uses the storage space that it needs. When a thinly provisioned volume is created, the volume typically consumes less than 1 GB of physical storage space, regardless of the volume’s logical size. The volume will consume physical storage space on an as needed basis as data is written to the volume.

In essence, a thinly provisioned volume residing on a SAN could be thought of as “pay as you go” storage. Unlike Direct Attached Storage, the organization is not forced to make a large up-front investment in dedicated storage that may never be used.

Reliability

Another advantage to using Storage Area Networks for Exchange Server storage is that when properly constructed, SANs are far more reliable than Direct Attached Storage.

The problem with using Direct Attached Storage is that there are a number of ways in which the storage can become a single point of failure. For example, a disk controller failure can easily corrupt an entire storage array. Although some servers have multiple array controllers for Direct Attached Storage, lower-end servers are often limited to a single array controller.

Some Exchange mailbox servers implement Direct Attached Storage through an external storage array. Such an array is considered to be a local component, but makes use of an external case as a way of compensating for the lack of drive bays within the server itself. In these types of configurations, the connectivity between the server and external storage array can become a single point of failure (depending on the hardware configuration that is used).

When SAN storage is used, potential single points of failure can be eliminated through the use of multipath I/O. The basic idea behind multipath I/O is that fault tolerance can be achieved by providing multiple physical paths between a server and a storage device. If, for example, an organization wanted to establish fault-tolerant connectivity between an Exchange Server and SAN storage, it could install multiple Fibre Channel Host Bus Adapters in the Exchange Server. Each Host Bus Adapter could be connected to a separate Fibre Channel switch, and each switch could in turn provide a path to mirrored storage arrays. This approach prevents any of the storage components from becoming single points of failure.
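
On Windows Server 2012, the building blocks for this are the MPIO feature plus either EMC PowerPath or the in-box Microsoft DSM. A minimal sketch using the in-box MPIO cmdlets; the vendor/product strings below are placeholders, so take the exact values from EMC's host connectivity documentation:

```powershell
# Install the Multipath I/O feature.
Install-WindowsFeature -Name Multipath-IO

# Tell the Microsoft DSM to claim the array's devices (placeholder hardware IDs shown).
New-MSDSMSupportedHw -VendorId 'DGC' -ProductId 'VRAID'

# Use round-robin across the available paths by default.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```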

Performance

Although Microsoft has taken measures to drive down mailbox server I/O requirements in the last couple of versions of Exchange Server, mailbox databases still tend to be I/O intensive. As such, large mailbox servers depend on high-performance hardware.

While there is no denying that high-performance Direct Attached Storage is available, SAN storage can potentially provide a higher level of performance due to its scalability. One of the major factors that impacts a storage array’s performance is the number of spindles used by the array. Direct Attached Storage limits the total number of spindles that can be used: not only is the number of drive bays in the case a factor, but there is also a limit to the number of disks that can be attached to the array controller.

SAN environments make it possible to create high performance disk arrays by using large numbers of physical disks. Of course capitalizing on the disk I/O performance also means that you must have a high speed connection between the server and the SAN, but this usually isn’t a problem. Multipath I/O allows storage traffic to be distributed across multiple Fibre Channel ports for optimal performance.

Virtualization

Finally, SAN environments are ideal for use in virtualized datacenters. Although neither Microsoft nor VMware still requires shared storage for clustered virtualization hosts, using shared storage is widely considered a best practice. SANs make it easy to create cluster shared volumes that can be shared among the nodes in your virtualization host cluster.

Conclusion

Exchange mailbox servers are almost always considered to be mission critical. As such, it makes sense to invest in SAN storage for your Exchange Server since it can deliver better performance and reliability than is possible with Direct Attached Storage.

 

3 Benefits of Running Exchange Server in a Virtualized Environment

Guest post by: Brien M. Posey

One of the big decisions that administrators must make when preparing to deploy Exchange Server is whether to run Exchange on physical hardware, virtual hardware, or a mixture of the two. Prior to the release of Exchange Server 2010, most organizations chose to run Exchange on physical hardware: earlier versions of Exchange mailbox servers were often simply too I/O intensive for virtual environments. Furthermore, it took a while for Microsoft’s Exchange Server support policy to catch up with the virtualization trend.

Today these issues are not the stumbling blocks that they once were. Exchange Server 2010 and 2013 are far less I/O intensive than their predecessors. Likewise, Exchange Server is fully supported in virtual environments. Of course administrators must still answer the question of whether it is better to run Exchange Server on physical or on virtual hardware.

Typically there are far greater advantages to running Exchange Server in a virtual environment than running it in a physical environment. Virtual environments can help to expedite Exchange Server deployment, and they often make better use of hardware resources, while also offering some advanced protection options.

Improved Deployment

At first the idea that deploying Exchange Server in a virtual environment is somehow easier or more efficient might seem a little strange. After all, the Exchange Server setup program works in exactly the same way whether Exchange is being deployed on a physical or a virtual server. However, virtualized environments provide some deployment options that simply do not exist in physical environments.

Virtual environments make it quick and easy to deploy additional Exchange Servers. This is important for any organization that needs to quickly scale its Exchange organization to meet evolving business needs. Virtual environments allow administrators to build templates that can be used to quickly deploy new servers in a uniform way.

Depending upon the virtualization platform that is being used, it is sometimes even possible to set up a self-service portal that allows authorized users to deploy new Exchange Servers with only a few mouse clicks. Because the servers are based on preconfigured templates, they will already be configured according to the corporate security policy.
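
As an illustration of the template approach on Hyper-V, deploying a new server can be reduced to copying a sysprepped template disk and registering a VM around it. A minimal sketch; all names and paths are hypothetical:

```powershell
# Clone the sysprepped template disk for the new server.
Copy-Item 'D:\Templates\ExchTemplate.vhdx' 'D:\VMs\EXCH02\EXCH02.vhdx'

# Create and start the VM around the cloned disk (Hyper-V module).
New-VM -Name 'EXCH02' -MemoryStartupBytes 8GB -VHDPath 'D:\VMs\EXCH02\EXCH02.vhdx' -SwitchName 'Production'
Start-VM -Name 'EXCH02'
```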

Hardware Resource Allocation

Another advantage that virtualized environments offer over physical environments is that virtual environments typically make more efficient use of server hardware. In virtual environments, multiple virtualized workloads share a finite pool of physical hardware resources. As such, virtualization administrators have gotten into the habit of using the available hardware resources efficiently and making every resource count.

Of course it isn’t just these habits that lead to more efficient resource usage. Virtualized environments contain mechanisms that help to ensure that virtual machines receive exactly the hardware resources that are necessary, but without wasting resources in the process. Perhaps the best example of this is dynamic memory.

The various hypervisor vendors each implement dynamic memory in their own way. As a general rule, however, each virtual machine is assigned a certain amount of memory at startup, and the administrator also assigns maximum and minimum memory limits to the virtual machines. This allows the virtual machines to claim the memory they need without consuming an excessive percentage of the server’s overall physical memory. When memory is no longer actively needed by a virtual machine, it is released and becomes available to other virtual machines running on the server.
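
On Hyper-V, those startup/minimum/maximum knobs look like the sketch below (the VM name and sizes are hypothetical). Note that Microsoft’s own guidance recommends fixed memory for Exchange VMs, so treat this as a general illustration of dynamic memory rather than an Exchange best practice:

```powershell
# Enable dynamic memory with startup, floor, and ceiling values (Hyper-V module).
Set-VMMemory -VMName 'APP01' -DynamicMemoryEnabled $true `
    -StartupBytes 8GB -MinimumBytes 4GB -MaximumBytes 16GB
```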

Although mechanisms such as dynamic memory can certainly help a virtual machine to make the most efficient use possible of physical hardware resources, resource usage can be thought of in another way as well.

When Exchange Server is deployed onto physical hardware, all of the server’s resources are dedicated to running the operating system and Exchange Server. While this may initially sound desirable, there are problems with it when you consider hardware allocation from a financial standpoint.

In a physical server environment, the hardware must be purchased up front. The problem is that administrators cannot simply purchase the resources Exchange Server needs based on current usage. Workloads tend to increase over time, so administrators must typically purchase more memory, more CPU cores, and faster disks than are currently needed. These resources are essentially wasted until the day the Exchange Server workload grows to the point where they are suddenly needed. In a virtual environment this is simply not the case: whatever resources are not needed by a virtual machine can be placed in a pool of physical resources accessible to other virtualized workloads.

Protection Options

One last reason why it is often more beneficial to operate Exchange Server in a virtual environment is because virtual environments provide certain protection options that are not natively available with Exchange Server.

Perhaps the best example of this is failover clustering. Exchange Server offers failover clustering in the form of Database Availability Groups, but DAGs only protect the mailbox server role, so Exchange administrators must look for creative ways to protect the remaining server roles against failure. One of the easiest ways to achieve this protection is to install Exchange Server onto virtual machines. The underlying hypervisor can be clustered in a way that allows virtual machines to fail over from one host to another if necessary, regardless of the limits of the operating system or application software running within individual virtual machines. In other words, virtualization gives you the benefits of failover clustering for Exchange server roles that don’t normally support clustering.
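
With the Exchange roles running in VMs on a Windows failover cluster, moving a guest between hosts is a one-liner. A minimal sketch; the VM and node names are hypothetical:

```powershell
# Live-migrate the clustered VM to another Hyper-V node (FailoverClusters module).
Move-ClusterVirtualMachineRole -Name 'EXCH01' -Node 'HV-NODE2' -MigrationType Live
```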

Conclusion

As you can see, there are a number of benefits to running Exchange Server in a virtual environment. In almost every case, it is preferable to run Exchange Server on virtual hardware over physical hardware.

What’s new in Exchange 2013, 2 Webcasts, and More!

Next week I’ll be on a couple of webcasts related to Exchange server protection:

In these webcasts, we’ll balance a solid blend of best-practices content with information about some of our latest products. I promise not to waste your time!

Webcast 1:  Introducing EMC AppSync: Advanced Application Protection Made Easy for VNX Platforms

In this webinar, we’ll describe how to set up a protection service catalog for any company and how easy EMC AppSync makes it to use snapshot and continuous data protection technology on a VNX storage array… As a bonus, we’ll show a cool demo.

Sign up here.

Webcast 2: Protecting Exchange from Disaster: The Choices and Consequences

In this webcast, we’ll explore the three common Exchange DR options available to customers with an advanced storage array like an EMC VNX. One of the highlights: I’ll be joined by independent Microsoft guru Brien Posey, who has the lowdown on what’s new in Exchange 2013 related to storage and DR enhancements, and who will describe how much changes in Exchange 2013 – and how much stays the same. Oh, and of course we’ll have a cool demo for this one too!

Sign up here.