The Future of Exchange Protection

If you could look into a crystal ball and predict what would come next for Exchange protection, what would it be?

Join us live on Feb 7th to learn what Ernes Taljec (a data architect from Presidio) and I think is coming next for Exchange 2013 and beyond. We will talk about the evolution of Microsoft’s built-in technologies and complementary EMC technologies, and take a look into the future!

Also – because everyone likes free stuff – we will pick one person from the audience to win an iPad 3 live during the event.

Sign up today!

  • Webinar Date: Feb 7th at 12:00 PM EST
  • Webinar Link: https://www.brighttalk.com/webcast/7397/65443
  • Presenters: Brian Henderson, AppSync Technical Marketing Manager, EMC & Ernes Taljec, Data Center Architect, Presidio
  • Duration: 60 mins

What’s new for Exchange Server 2013 Database Availability Groups?

By: Brien M. Posey

When Microsoft created Exchange Server 2010, they introduced the concept of Database Availability Groups. Database Availability Groups are the mechanism that makes it possible for a mailbox database to fail over from one mailbox server to another. In retrospect, Database Availability Groups worked really well for organizations whose operations were confined to a single data center. Although it was possible to stretch a Database Availability Group across multiple data centers, performing site level failovers was anything but simple. Microsoft has made a number of enhancements to Database Availability Groups in Exchange Server 2013. Some of these enhancements are geared toward making site level failovers less complex.

Site Resilience

Although site resilience could be achieved using Exchange Server 2010, there were a number of factors preventing organizations from achieving the level of resilience that they might have liked. For starters, site level resilience had to be planned before Exchange Server 2010 was put into place. One of the reasons for this was that all of the Database Availability Group members had to belong to the same Active Directory domain. This meant that site resilience could only be achieved if the Active Directory domain spanned multiple data centers.

Another major limitation was that Microsoft designed Exchange Server 2010 so that a simple WAN failure would not trigger a site failover. One of the ways that they did this was to make it so that the failover process had to be initiated manually. Furthermore, the primary data center had to contain enough Database Availability Group members to allow the site to retain quorum in the event that the WAN link failed. Because of these limitations, there was really no such thing as true site resilience in Exchange Server 2010.

In Exchange Server 2013, it is finally possible to achieve full site resilience – with enough planning. As was the case with Exchange Server 2010, a DAG can only function if it is able to maintain quorum. Maintaining quorum means that at least half plus one of the DAG members are online and able to communicate with one another at any given time. This can be accomplished by placing an equal number of DAG members in each datacenter and then placing a witness server into a remote location that is accessible to each datacenter.
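
For reference, the witness placement described above is established when the DAG is created. The sketch below uses hypothetical names (EX01 through EX04 as DAG members split evenly across two datacenters, FS01 as a file server in a third location reachable from both) and would be run from the Exchange Management Shell:

# Create the DAG with a witness server in a third site
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS01 -WitnessDirectory C:\DAG1

# Two members in the primary datacenter, two in the secondary
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX01
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX02
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX03
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX04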

This approach will allow a datacenter level failover in the event of a major outage or a WAN failure. It is worth noting however, that this approach to site resiliency still does not achieve fully comprehensive protection for mailbox databases because situations could still occur that cause the DAG to lose quorum. Imagine for example, that a WAN link failure occurs between two datacenters. In that situation, whichever datacenter is still able to communicate with the witness server will retain quorum. Now, imagine that one of the DAG members in this datacenter were to fail before the WAN link is fixed. This failure would cause the datacenter to lose quorum, resulting in a DAG failure.

Lagged Copies

Another major change that Microsoft has made to DAGs has to do with the way that lagged copies work. Lagged copies are database replicas for which transaction log replay is delayed so as to facilitate point in time recovery.
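
To make the idea concrete, a lagged copy is simply a database copy that has a replay lag time configured when the copy is added. The following one-liner is a sketch with hypothetical database and server names; the lag value uses the days.hours:minutes:seconds format:

# Add a copy of DB01 on EX03 and delay transaction log replay by seven days
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EX03 -ReplayLagTime 7.0:0:0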

In Exchange 2013, Microsoft has built some intelligence into lagged copies to detect and correct instances of corruption or low disk space. It is worth noting however, that in these types of circumstances you could end up losing the lag.

One of the big problems with lagged copies in Exchange 2010 was the fact that transaction logs had to be stored for the full lag period and could grow to a considerable size. As such, there have been instances in which organizations underestimated the volume of transaction logs that would be stored for lagged copies, resulting in the mailbox server running out of disk space.

Exchange 2013 monitors the available disk space. If the volume containing the transaction logs begins to run short on space, Exchange will initiate an automatic play down, which commits the contents of the transaction logs to the lagged copy so that disk space can be freed on the transaction log volume.

Exchange uses a similar log file play down if it detects a corrupt database page. According to Microsoft however, “Lagged copies aren’t patchable with the ESE single page restore feature. If a lagged copy encounters database page corruption (for example, a -1018 error), it will have to be reseeded (which will lose the lagged aspect of the copy)”.

Another change that Microsoft has made to lagged copies is that it is now possible to activate a lagged copy and bring it to a current state, even if the transaction logs are not available. This is due to a new feature called the Safety Net. The Safety Net replaces the transport dumpster. Its job is to store copies of every message that has been successfully delivered to an active mailbox database. If a lagged database copy needs to be activated and the transaction logs are not available, Exchange can use the Safety Net’s contents to bring the database into a current state.
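
Activating a lagged copy remains a manual operation. As a rough sketch (hypothetical names again), an administrator might activate the lagged copy and allow the Safety Net to bring the database current:

# Activate the lagged copy on EX03; -SkipLagChecks permits activation even
# though the copy still has unreplayed transaction logs
Move-ActiveMailboxDatabase DB01 -ActivateOnServer EX03 -SkipLagChecks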

Public Folders

One of the most welcome changes that Microsoft has made to DAGs is that it is now possible to use DAGs to protect your public folders. In Exchange Server 2010, DAGs could only protect mailbox databases, not public folder databases. Public folder databases no longer exist in Exchange 2013. Instead, public folders are stored in mailbox databases, which makes it possible to use DAGs to protect public folders.

Conclusion

The most significant changes that Microsoft has made to DAGs include the ability to fail over at the datacenter level, the ability to use DAGs to provide high availability for public folders, and automated maintenance for lagged copies. In addition, Microsoft has also built in some minor improvements such as automatic database reseeding after a storage failure, and automated notification in situations in which only a single healthy copy of a DAG exists.

What’s New for Exchange 2013 Storage?

By: Brien M. Posey

Many of Exchange Server 2013’s most noteworthy improvements are behind-the-scenes architectural improvements rather than new product features. Perhaps nowhere is this more true than in Exchange Server’s storage architecture. Once again Microsoft invested heavily in Exchange’s storage subsystem in an effort to drive down overall storage costs while at the same time improving performance and reliability. This article outlines some of the most significant storage related improvements in Exchange Server 2013.

Lower IOPS on Passive Database Copies

In failure situations, failover from an active mailbox database to a passive database copy needs to happen as quickly as possible. In Exchange Server 2010, Microsoft expedited the failover process by maintaining a low checkpoint depth (5 MB) on the passive database copy. Microsoft’s reason for doing this was that failing over from an active to a passive database copy required the database cache to be flushed. Having a large checkpoint depth would have increased the amount of time that it took to flush the cache, thereby causing the failover process to take longer to complete.

The problem was that maintaining a low checkpoint depth came at a cost. The server hosting the passive database copy had to do a lot of work in terms of pre-read operations in order to keep pace with demand while still maintaining a minimal checkpoint depth. The end result was that a passive database copy produced nearly the same level of IOPS as its active counterpart.

In Exchange Server 2013, Microsoft made a simple decision that greatly reduced IOPS for passive database copies, while also reducing the database failover time. Because much of the disk I/O activity on the passive database copy was related to maintaining a low checkpoint depth and because the checkpoint depth had a direct impact on the failover time, Microsoft realized that the best way to improve performance was to change the way that the caching process worked.

In Exchange 2013, the cache is no longer flushed during a failover. Instead, the cache is treated as a persistent object. Because the cache no longer has to be flushed, the size of the cache has little bearing on the amount of time that it takes to perform the failover. As such, Microsoft designed Exchange 2013 to have a much larger checkpoint depth (100 MB). Having a larger checkpoint depth means that the passive database doesn’t have to work as hard to pre-read data, which drives down the IOPS on the passive database copy by about half. Furthermore, failovers normally complete in about 20 seconds.

Although the idea of driving down IOPS for passive database copies might sound somewhat appealing, some might question the benefit. After all, passive database copies are not actively being used, so driving down the IOPS should theoretically have no impact on the end user experience.

One of the reasons why reducing the IOPS produced by passive database copies is so important has to do with another architectural change that Microsoft has made in Exchange Server 2013. Unlike previous versions of Exchange Server, Exchange Server 2013 allows active and passive database copies to be stored together on the same volume.

If an organization does choose to use a single volume to store a mixture of active and passive databases, then reducing the IOPS produced by passive databases will have a direct impact on the performance of active databases.

This new architecture also makes it easier to recover from disk failures within a reasonable amount of time. Exchange Server 2013 supports volume sizes of up to 8 TB. With that in mind, imagine what would happen if a disk failed and needed to be reseeded. Assuming that the majority of the space on the volume was being used, it would normally take a very long time to regenerate the contents of the failed disk.

Part of the reason for this has to do with the sheer volume of data that must be copied, but there is more to it than that. Passive database copies are normally reseeded from an active database copy. If all of the active database copies reside on a common volume, then that volume’s performance will be the limiting factor in the amount of time that it takes to rebuild the failed disk.

In Exchange Server 2013 however, volumes can contain a mixture of active and passive database copies. This means that the active database copies will likely reside on different volumes (typically on different servers), and that the data necessary for rebuilding the failed volume will be pulled from a variety of sources. As such, the data source is no longer the limiting factor in the amount of time that it takes to reseed the disk. Assuming that the disk that is being reseeded can keep pace, the reseeding process can occur much more quickly than it would if all of the data were coming from a single source.

In addition, Exchange Server 2013 periodically performs an integrity check of passive database copies. If a database copy is found to have a status of FailedAndSuspended, Exchange will check to see whether any spare disks are available. If a valid spare is found, Exchange Server will automatically remap the spare and initiate an automatic reseeding process.
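
A quick way to see whether any copies have entered that state (and whether the automatic reseed has work to do) is the copy status cmdlet. This is a sketch; the wildcard returns every database copy:

# List any database copies that are currently failed and suspended
Get-MailboxDatabaseCopyStatus * | Where-Object { $_.Status -eq "FailedAndSuspended" }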

Conclusion

As you can see, Microsoft has made a tremendous number of improvements with the way that Exchange Server manages storage in DAG environments. Passive database copies generate fewer IOPS, and failovers happen more quickly than ever before. Furthermore, Exchange Server can even use spare disks to quickly recover from certain types of disk failures.

The Pros and Cons of Using Database Availability Groups

Guest Post By: Brien M. Posey

Database Availability Groups (DAGs) are Microsoft’s go-to solution for providing high availability for Exchange 2010 (and Exchange 2013) mailbox servers. Even so, it is critically important for administrators to consider whether or not a DAG is the most appropriate high availability solution for their organization.

The primary advantage offered by DAGs is that of high availability for mailbox servers within an Exchange Server organization. DAGs make use of failover clustering. As such, the failure of a DAG member results in any active mailbox databases failing over to another DAG member.

At first this behavior likely seems ideal, but depending on an organization’s needs DAGs can leave a lot to be desired. One of the first considerations that administrators must take into account is the fact that DAGs only provide high availability for mailbox databases. This means that administrators must find other ways to protect the other Exchange Server roles and any existing public folder databases. Incidentally, Exchange Server 2013 adds high availability for public folders through DAGs, but DAGs cannot be used to protect any additional Exchange Server components.

In spite of the limitations that were just mentioned, DAGs have historically proven to be an acceptable high availability solution for medium-sized organizations. While it is true that DAGs fail to protect the individual server roles, Exchange stores all of its configuration information in Active Directory, which means that entire servers can be rebuilt by following these steps:

  1. Reset the Active Directory account for the failed server (reset the account, do not delete it).
  2. Install Windows onto a new server and give it the same name as the failed server.
  3. Install onto the new server any Windows patches and service packs that were running on the failed server.
  4. Join the server to the Active Directory domain.
  5. Create an Exchange Server installation DVD that contains the same service pack level that was used on the failed server.
  6. Insert the Exchange installation media that you just created and run Setup /m:RecoverServer (see the sketch below).
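
As a rough sketch of steps 4 and 6 (the domain and server names are hypothetical, and both commands are run on the replacement server):

# Join the replacement server to the Active Directory domain
netdom join EX02 /domain:contoso.com /userd:contoso\administrator /passwordd:*

# Rebuild Exchange using the configuration stored in Active Directory
Setup /m:RecoverServer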

The method outlined above can be used to recreate a failed Exchange Server. The only things that are not recreated using this method are the databases, but the databases are protected by DAGs. As such, these two mechanisms provide relatively comprehensive protection against a disaster. Even so, the level of protection afforded by these mechanisms often proves to be inadequate for larger organizations.

One of the reasons for this has to do with the difficulty of rolling a database back to an earlier point in time. Microsoft allows DAG members to be configured as lagged copies. This means that transaction logs are not committed to the lagged copy as quickly as they would otherwise be. This lag gives administrators the ability to activate an older version of the database if necessary. The problem is that activating a lagged copy is not an intuitive process. Furthermore, activating a lagged copy always results in data loss.

The other reason why DAGs are not always an adequate solution for larger organizations has to do with the difficulty of providing off-site protection. Exchange Server 2010 supports the creation of stretched DAGs, which are DAGs that span multiple datacenters. Although being able to fail over to an off-site datacenter sounds like a true enterprise class feature, the reality of the situation is that architectural limitations often prevent organizations from being able to achieve such functionality.

The most common barriers to implementing a stretched DAG are network latency and Active Directory design. Stretched DAGs are only supported on networks with a maximum round trip latency of 500 milliseconds. Additionally, DAGs cannot span multiple Active Directory domains, which means that the domain in which the DAG members reside must span datacenters.
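
One sanity check worth performing before committing to a stretched DAG design is a round-trip latency measurement between the candidate sites. The sketch below (Windows PowerShell, hypothetical remote host name) averages fifty pings:

# Average round-trip time, in milliseconds, to a server in the remote datacenter
Test-Connection -ComputerName EX-DR01 -Count 50 |
    Measure-Object -Property ResponseTime -Average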

Even if an organization is able to meet the criteria outlined above, they must construct the DAG in a way that will ensure continued functionality both in times of disaster and during minor outages. In order for a DAG to function, it must maintain quorum. This means that at least half plus one of the total number of existing DAG members must be functional in order for the DAG to remain online. This requirement is relatively easy to meet in a single datacenter deployment, but is quite challenging in stretched DAG environments.

One of the issues that must be considered when building a stretched DAG is that Exchange cannot tell the difference between a WAN failure and the failure of the Exchange servers on the other side of the WAN link. As such, the primary site must have enough DAG members to maintain quorum even in the event of a WAN failure. Ideally, the primary site should have enough DAG members to retain quorum during a WAN failure and still be able to absorb the failure of at least one member in the primary site.
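
To make the arithmetic concrete, consider a hypothetical seven-member stretched DAG with five members in the primary site and two in the secondary site. Quorum requires four votes. If the WAN link fails, the five primary members retain quorum, and the primary site can still absorb the failure of one additional member before dropping below four votes and taking the DAG offline.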

Another problem with stretched DAGs is that the requirement for the primary site to contain enough DAG members to maintain quorum means that the DAG will never fail over to the remote site on its own, even if the entire primary datacenter is destroyed. The secondary site lacks enough DAG members to achieve quorum without an administrator manually evicting nodes from the DAG.

As you can see, DAGs tend to deliver an acceptable level of functionality in single datacenter environments, but the limitations that are inherent in stretched DAGs make them impractical for use in multi-datacenter deployments. Larger organizations are typically better off implementing other types of redundancy rather than depending on DAGs. One possible solution for example is to virtualize an organization’s Exchange servers and then replicate the virtual machines to a standby datacenter. This approach will usually make the process of failing over to an alternate datacenter much simpler and more efficient.

An Up Close Look at the Volume Shadow Copy Services
Guest Post by: Brien M. Posey

One of the big problems with backing up database applications is that oftentimes the data is modified before the backup can complete. Needless to say, modifying data while a backup is running can result in a corrupt backup. In an effort to keep this from happening, Microsoft uses the Volume Shadow Copy Services (VSS) to make sure that database applications are backed up in a consistent state. This article explains how the Volume Shadow Copy Services work.

The VSS Components

There are four main components that make up the Volume Shadow Copy Services.

These components include:

  • the VSS service
  • the VSS requestor
  • the VSS provider
  • the VSS writer

These components work together to provide the VSS backup capabilities.

  • The VSS service component could best be thought of as the centralized operating system service that ties the various VSS components together. The VSS service ensures that the VSS requestor, VSS writer, and VSS provider are all able to communicate with one another.
  • At a high level, the VSS requestor generally refers to the backup software. The VSS requestor is the component that asks the VSS service to create a shadow copy. The requestor itself is built into the backup software. This is true for Windows Server Backup, System Center Data Protection Manager, and third-party backup applications such as EMC’s AppSync.
  • The third component is the VSS provider. The VSS provider links the VSS service to the hardware on which the shadow copy will be created. The Windows Server operating system includes a built-in VSS provider. This provider exists at the software level and allows Windows to interact with the server’s storage subsystem. A VSS provider can exist at the hardware level as well. A hardware level provider offloads shadow copy operations to the storage hardware so that the server operating system does not have to carry the workload. However, when a hardware level VSS provider is used, there is usually a driver that is required to make Windows aware of the storage hardware’s capabilities.
  • The fourth component of the Volume Shadow Copy Services is the VSS writer. The VSS writer’s job is to ensure that data is backed up in a consistent manner. It is important to understand that in most cases there are a number of different VSS writers that work together in parallel to ensure that various types of data are backed up properly. Server applications such as Exchange Server and SQL Server include their own VSS writers that plug into the operating system’s existing VSS infrastructure and allow the application to be backed up.

Creating a Volume Shadow Copy

The process of creating a volume shadow copy begins when the requestor (which is usually built into backup software) notifies the Volume Shadow Copy Service that a shadow copy needs to be created. When the Volume Shadow Copy Service receives this request, it in turn notifies all of the individual VSS writers of the impending shadow copy.

When the individual writers receive the request, they take steps to place data into a consistent state that is suitable for shadow copy creation. The exact tasks that the writer performs vary from one application to another, but generally writers prepare by flushing caches and completing any database transactions that are currently in progress. If the application makes use of transaction logs, the logs may be committed as a part of the process as well.

After all of the VSS writers have prepared for the shadow copy, the Volume Shadow Copy Service instructs the writers to freeze their corresponding applications. This prevents write operations from occurring for the duration of the shadow copy (which takes less than ten seconds to complete).

When all of the applications have been frozen the Volume Shadow Copy Service instructs the provider to create the shadow copy. When the shadow copy creation is complete, the provider notifies the Volume Shadow Copy Service of the completion. At this point, the Volume Shadow Copy Service once again allows file system I/O and it instructs the writers to resume normal application activity.
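
You can watch this sequence drive an actual shadow copy by using DiskShadow, a scriptable VSS requestor included with Windows Server 2008 and later. The script below is a minimal sketch (the volume letter is an assumption); save it to a file and run it with diskshadow /s <file>:

# Create a persistent shadow copy of the C: volume
SET CONTEXT PERSISTENT
BEGIN BACKUP
ADD VOLUME C:
CREATE
END BACKUP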

The shadow copy creation process revolves largely around the task of coordinating VSS writers so that the various components of the operating system and any applications that are running on the server can be backed up in a reliable and consistent manner. Even though the individual writers do most of the heavy lifting, it is the provider that ultimately creates the shadow copy.

There are actually several different methods that can be used for shadow copy creation, and the method used varies from one provider to the next. As you may recall, a provider can exist as an operating system component or at the hardware level, and the shadow copy creation process varies depending on the type of provider that is being used. There are three main methods that providers typically use for creating shadow copies.

The first method is known as a complete copy. A complete copy is usually based on mirroring. A mirror set is created between the original volume and the shadow copy volume. When the shadow copy creation process is complete, the mirror is broken so that the shadow copy volume can remain in a pristine state as it existed at the point in time when it was created.

The second method that is sometimes used for shadow copy creation is known as Redirect on Write. Redirect on Write is based on the use of differencing disks. The shadow copy process designates the original volume as read only so that it can be kept in a pristine state as it existed at the point in time when the shadow copy was created. All future write operations are redirected to a differencing disk. This method is also sometimes referred to as snapshotting.

The third method that providers sometimes use for shadow copy creation is known as copy on write. This is a block-level operation that is designed to preserve storage blocks that would ordinarily be overwritten. When a write operation occurs, any blocks that would be overwritten are copied to the shadow copy volume prior to the write operation.

Conclusion

As you can see, the process of creating a shadow copy is relatively straightforward. You can gain some additional insight into the process by opening a Command Prompt window and entering the following command:

VSSADMIN List Writers

This command displays all of the VSS writers that are present on the system and also shows you each writer’s status.

Hope you found this helpful!

Why Storage Networks Can Be Better Than Direct Attached Storage for Exchange

Guest Post By: Brien M. Posey

Of all the decisions that must be made when planning an Exchange Server deployment, perhaps none are as critical as deciding which type of storage to use. Exchange Server is very flexible with regard to the types of storage that it supports. However, some types of storage offer better performance and reliability than others.

When it comes to larger scale Exchange Server deployments, it is often better to use a Storage Area Network than it is to use direct attached storage. Storage networks provide a number of advantages over local storage in terms of costs, reliability, performance, and functionality.

Cost Considerations

Storage Area Networks have gained something of a reputation for being more expensive than other types of storage. However, if your organization already has a Storage Area Network in place then you may find that the cost per gigabyte of Exchange Server storage is less expensive on your storage network than it would be if you were to invest in local storage.

While this statement might seem completely counterintuitive, it is based on the idea that physical hardware is often grossly underutilized. To put this into perspective, consider the process of purchasing an Exchange mailbox server that uses Direct Attached Storage.

Organizations that choose to use local storage must estimate the amount of storage that will be needed to accommodate Exchange Server databases, plus leave room for future growth. This means making a significant investment in storage hardware. In doing so, an organization is purchasing the necessary storage space, but it may also be spending money on storage space that is not immediately needed.

In contrast, Exchange servers that are connected to storage networks can take advantage of thin provisioning. This means that the Exchange Server only uses the storage space that it needs. When a thinly provisioned volume is created, the volume typically consumes less than 1 GB of physical storage space, regardless of the volume’s logical size. The volume will consume physical storage space on an as needed basis as data is written to the volume.
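
Thin provisioning is configured on the storage array itself, and the syntax is vendor specific, but the same concept is easy to demonstrate at the hypervisor layer. The sketch below uses Hyper-V’s PowerShell module and a hypothetical path; the resulting 1 TB virtual disk consumes only a few megabytes until data is written to it:

# Create a dynamically expanding (thinly provisioned) 1 TB virtual disk
New-VHD -Path D:\VHDs\ExchangeDB01.vhdx -SizeBytes 1TB -Dynamic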

In essence, a thinly provisioned volume residing on a SAN could be thought of as “pay as you go” storage. Unlike Direct Attached Storage, the organization is not forced to make a large up-front investment in dedicated storage that may never be used.

Reliability

Another advantage to using Storage Area Networks for Exchange Server storage is that when properly constructed, SANs are far more reliable than Direct Attached Storage.

The problem with using Direct Attached Storage is that there are a number of ways in which the storage can become a single point of failure. For example, a disk controller failure can easily corrupt an entire storage array. Although there are servers that have multiple array controllers for Direct Attached Storage, lower-end servers are often limited to a single array controller.

Some Exchange mailbox servers implement Direct Attached Storage through an external storage array. Such an array is considered to be a local component, but makes use of an external case as a way of compensating for the lack of drive bays within the server itself. In these types of configurations, the connectivity between the server and external storage array can become a single point of failure (depending on the hardware configuration that is used).

When SAN storage is used, potential single points of failure can be eliminated through the use of multipath I/O. The basic idea behind multipath I/O is that fault tolerance can be achieved by providing multiple physical paths between a server and a storage device. If for example an organization wanted to establish fault tolerant connectivity between an Exchange Server and SAN storage, they could install multiple Fibre Channel Host Bus Adapters into the Exchange Server. Each Host Bus Adapter could be connected to a separate Fibre Channel switch. Each switch could in turn provide a path to mirrored storage arrays. This approach prevents any of the storage components from becoming single points of failure.
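
On the Windows side, multipath I/O is a feature that must be installed and configured before redundant paths are consolidated into a single logical device. A sketch using the built-in MPIO module on Windows Server 2012 or later:

# Install the Multipath I/O feature
Install-WindowsFeature -Name Multipath-IO

# Use round robin as the default load-balancing policy across paths
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR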

Performance

Although Microsoft has taken measures to drive down mailbox server I/O requirements in the last couple of versions of Exchange Server, mailbox databases still tend to be I/O intensive. As such, large mailbox servers depend on high-performance hardware.

While there is no denying the fact that high-performance Direct Attached Storage is available, SAN storage can potentially provide a higher level of performance due to its scalability. One of the major factors that impacts a storage array’s performance is the number of spindles that are used by the array. Direct Attached Storage limits the total number of spindles that can be used. Not only is the number of drive bays in the case a factor, but there is also a limit to the number of disks that can be attached to the array controller.

SAN environments make it possible to create high performance disk arrays by using large numbers of physical disks. Of course capitalizing on the disk I/O performance also means that you must have a high speed connection between the server and the SAN, but this usually isn’t a problem. Multipath I/O allows storage traffic to be distributed across multiple Fibre Channel ports for optimal performance.

Virtualization

Finally, SAN environments are ideal for use in virtualized datacenters. Although neither Microsoft nor VMware still require the use of shared storage for clustered virtualization hosts, using shared storage is still widely considered to be a best practice. SANs make it easy to create cluster shared volumes that can be shared among the nodes in your host virtualization cluster.

Conclusion

Exchange mailbox servers are almost always considered to be mission critical. As such, it makes sense to invest in SAN storage for your Exchange Server since it can deliver better performance and reliability than is possible with Direct Attached Storage.


3 Benefits of Running Exchange Server in a Virtualized Environment

Guest post by: Brien M. Posey

One of the big decisions that administrators must make when preparing to deploy Exchange Server is whether to run Exchange on physical hardware, virtual hardware, or a mixture of the two. Prior to the release of Exchange Server 2010 most organizations chose to run Exchange on physical hardware. Earlier versions of Exchange mailbox servers were often simply too I/O intensive for virtual environments. Furthermore, it took a while for Microsoft’s Exchange Server support policy to catch up with the virtualization trend.

Today these issues are not the stumbling blocks that they once were. Exchange Server 2010 and 2013 are far less I/O intensive than their predecessors. Likewise, Exchange Server is fully supported in virtual environments. Of course administrators must still answer the question of whether it is better to run Exchange Server on physical or on virtual hardware.

Typically there are far greater advantages to running Exchange Server in a virtual environment than running it in a physical environment. Virtual environments can help to expedite Exchange Server deployment, and they often make better use of hardware resources, while also offering some advanced protection options.

Improved Deployment

At first the idea that deploying Exchange Server in a virtual environment is somehow easier or more efficient might seem a little strange. After all, the Exchange Server setup program works in exactly the same way whether Exchange is being deployed on a physical or a virtual server. However, virtualized environments provide some deployment options that simply do not exist in physical environments.

Virtual environments make it quick and easy to deploy additional Exchange Servers. This is important for any organization that needs to quickly scale its Exchange organization to meet evolving business needs. Virtual environments allow administrators to build templates that can be used to quickly deploy new servers in a uniform way.

Depending upon the virtualization platform that is being used, it is sometimes even possible to set up a self-service portal that allows authorized users to deploy new Exchange Servers with only a few mouse clicks. Because the servers are based on preconfigured templates, they will already be configured according to the corporate security policy.

Hardware Resource Allocation

Another advantage that virtualized environments offer over physical environments is that virtual environments typically make more efficient use of server hardware. In virtual environments, multiple virtualized workloads share a finite pool of physical hardware resources. As such, virtualization administrators have gotten into the habit of using the available hardware resources efficiently and making every resource count.

Of course it isn’t just these habits that lead to more efficient resource usage. Virtualized environments contain mechanisms that help to ensure that virtual machines receive exactly the hardware resources that are necessary, but without wasting resources in the process. Perhaps the best example of this is dynamic memory.

The various hypervisor vendors each implement dynamic memory in their own way. As a general rule however, each virtual machine is assigned a certain amount of memory at startup. The administrator also assigns maximum and minimum memory limits to the virtual machines. This allows the virtual machines to claim the memory that they need, but without consuming an excessive percentage of the server’s overall physical memory. When memory is no longer actively needed by the virtual machine, that memory is released so that it becomes available to other virtual machines that are running on the server.
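
As an illustration of how this looks in practice, Hyper-V exposes these settings through the Set-VMMemory cmdlet (VMware offers equivalent controls). The virtual machine name and the limits below are hypothetical:

# Start the VM with 4 GB and let it balloon between 2 GB and 16 GB as demand changes
Set-VMMemory EX01 -DynamicMemoryEnabled $true -StartupBytes 4GB -MinimumBytes 2GB -MaximumBytes 16GB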

Although mechanisms such as dynamic memory can certainly help a virtual machine to make the most efficient use possible of physical hardware resources, resource usage can be thought of in another way as well.

When Exchange Server is deployed onto physical hardware, all of the server’s resources are dedicated to running the operating system and Exchange Server. While this may initially sound desirable, there are problems with it when you consider hardware allocation from a financial standpoint.

In a physical server environment, the hardware must be purchased up front. The problem with this is that administrators cannot simply purchase the resources that Exchange Server needs based on current usage. Workloads tend to increase over time, so administrators must typically purchase more memory, CPU cores, and faster disks than are currently needed. These resources are essentially wasted until the day that the Exchange Server workload grows to the point that those resources are suddenly needed. In a virtual environment this is simply not the case. Whatever resources are not needed by a virtual machine can be put into a pool of physical resources that are accessible to other virtualized workloads.

Protection Options

One last reason why it is often more beneficial to operate Exchange Server in a virtual environment is because virtual environments provide certain protection options that are not natively available with Exchange Server.

Perhaps the best example of this is failover clustering. Exchange Server offers failover clustering in the form of Database Availability Groups. The problem is that Database Availability Groups only protect the mailbox server role. Exchange administrators must look for creative ways to protect the remaining server roles against failure. One of the easiest ways to achieve this protection is to install Exchange Server onto virtual machines. The underlying hypervisor can be clustered in a way that allows virtual machines to fail over from one host to another if necessary. Such a failover can be performed regardless of the limits of the operating system or application software that might be running within individual virtual machines. In other words, virtualization allows you to receive the benefits of failover clustering for Exchange server roles that don’t normally support clustering.

Conclusion

As you can see, there are a number of benefits to running Exchange Server in a virtual environment. In almost every case, it is preferable to run Exchange Server on virtual hardware over physical hardware.