Category Archives: Exchange Server

Is a SAN Better, Cheaper, and More Secure Than Office 365?

SANs – especially SANs from market leader EMC – are always under attack from companies hoping to cash in on the data growth sweeping every industry and market segment.

  • Some say DAS is best.
  • Some say keep the storage in the servers.
  • Some say you should build your own shared array.

But when it comes to Microsoft environments, it often helps to have independent experts investigate the matter to get a fresh perspective.

In a recent study, Wikibon determined that a shared storage infrastructure (powered by EMC’s next-generation VNX2 storage systems) can match the pricing of the Office 365 public cloud while offering more capabilities, more security, and more control.

This, however, assumes a fully consolidated approach to deploying multiple mixed workloads such as Exchange, SharePoint, SQL Server, and Lync – which is where the VNX2 really shines. We use FAST VP, FAST Cache, and a combination of drive types to achieve the best balance of performance and cost.

Are you looking for more information about deploying Microsoft applications on VNX? Definitely check here for the most recent best practices guides!

Also check out the recent webinar I did with James Baldwin, who leads our EMC/Microsoft Proven Solutions engineering team. We had a lot of fun doing this one – hope you enjoy it.

EMC’s VNX = Award-Winning storage for Microsoft environments

Microsoft’s TechEd 2013 is next week, and I’m looking forward to spending time with my longtime industry friends and making some new connections on the show floor in New Orleans.

This year I’ll be attending as part of the Unified Storage Division, so I wanted to share a little about the success of VNX and VNXe arrays in Microsoft environments:

[Image: industry awards for the VNX family]

EMC’s VNX Unified Storage Platform has been recognized with awards from a slew of independent analysts such as Gartner, IDC, and Wikibon, as well as media publications such as ComputerWorld, CRN, and Virtualization Review, for the VNX family’s ability to power mission-critical applications, integrate with virtual environments, and solve SMB IT challenges, among other accolades. We take pride in being the #1 storage platform for most Microsoft Windows-based applications.

BUT… DOES MICROSOFT WINDOWS NEED A SAN? CAN’T WE DO IT OURSELVES?

Well, after speaking with Windows Server 2012, SQL Server, and EMC customers, partners, and employees, the independent analyst firm Wikibon published a before-and-after comparison model based on an enterprise customer environment. The conclusion: the total cost of bolting together your own solution isn’t worth it.

[Image: Wikibon Windows study cost comparison]

The findings showed that by moving from a physical, non-tiered environment to a virtualized environment with flash and tiered storage, SQL Server customers realized a 30% lower overall TCO over a three-year period, including hardware, software, maintenance, and management costs for their database infrastructure.

The graphic shows that a do-it-yourself approach saves very little, if anything, in hardware costs while diverting operational effort to building and maintaining the infrastructure. Risks and costs are likely to be higher with this approach.

In the end, EMC’s VNX infrastructure was proven to deliver a lower-cost and lower-risk solution for Windows Server 2012 versus a direct-attached storage (DAS) or JBOD (just a bunch of disks) model.  Full study here.

Video of EMC’s Adrian Simays and Wikibon Analysts discussing these results is here on YouTube.

MICROSOFT INTEGRATIONS AND INNOVATIONS  

EMC’s VNX platform treats Microsoft applications, databases, and file shares as its sweet spot, as evidenced by our early integration of the latest Windows Server 2012 features that increase performance, efficiency, availability, and simplicity for our joint customers.

Performance, Efficiency, Availability, Simplicity

1. Performance

Within Windows, we were the first storage array to support SMB 3 and ODX Copy Offload (part of SMB 3), which lets large file copies be offloaded to the array instead of consuming network bandwidth and host CPU cycles.

[Image: ODX before/after copy performance comparison]

This test highlights the speed difference before (left) and after (right) ODX was implemented. With EMC VNX and ODX enabled, you can accelerate your VM copies by a factor of 7 while reducing server CPU utilization by a factor of 30!
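
If you want to confirm that a Windows Server 2012 host is positioned to take advantage of ODX, here is a minimal PowerShell sketch. The registry value is the documented ODX on/off switch (0 means offload is allowed, 1 means it has been disabled); the file paths are just examples and not from the test above.

# Check whether copy offload has been disabled on this host
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" |
    Select-Object FilterSupportedFeaturesMode

# Any ordinary large copy between array-backed volumes can then be offloaded transparently
Copy-Item -Path D:\VMs\web01.vhdx -Destination E:\VMs\web01.vhdx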

For applications and databases, VNX FAST Cache and FAST VP automatically tune your storage to match workload requirements, saving up to 80% of the time it would take to balance workloads manually.

The Enterprise Strategy Group (ESG) Lab confirmed that Windows Server 2012 with Hyper-V, Microsoft SQL Server 2012 with the new columnstore indexing, and VNX storage with FAST technologies form a complete solution that meets the business requirements of mid-tier organizations and beyond. An 800 GB data warehouse was deployed, which is fairly typical for a medium-sized business. With EMC FAST enabled, throughput reached up to 379 MB/sec, more than a 100% improvement over SQL Server 2012’s baseline rowstore indexing, and the DSS performance workload completed up to nine times faster than with rowstore indexing.

2. Efficiency

IT managers and storage administrators frequently adopt well-known forecasting models to pre-allocate storage space according to the storage demand growth rate. The main challenge is pre-allocating just enough capacity for the application. Reports from many storage array vendors indicate that 31% to 50% of allocated storage ends up stranded or unused – meaning 31% to 50% of the capital invested in the initial storage installation is wasted.

The VNX supports both Windows host-level and built-in storage-level thin provisioning to drastically reduce initial disk requirements. Windows Server 2012 can detect thin-provisioned storage on EMC arrays and reclaim unused space once it is freed by Hyper-V. For example, when Hyper-V frees 10 GB on a volume, an ODX-aware host connected to an EMC intelligent storage array automatically reclaims that 10 GB and returns it to the pool, where it can be used by other applications.
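
As a rough illustration of the reclaim workflow from the Windows side, the sketch below assumes a thin LUN mounted as drive E: (a placeholder). Optimize-Volume asks Windows Server 2012 to send UNMAP/TRIM for blocks that have been freed so the array can return them to the storage pool.

# After deleting VMs or files on the thin-provisioned volume, trigger space reclamation
Optimize-Volume -DriveLetter E -ReTrim -Verbose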

Furthermore, for application storage we partner with companies like Kroll and Metalogix to provide better solutions for Exchange single-item recovery and SharePoint remote BLOB storage, which can reduce SharePoint content stored in SQL Server by about 80-90% and improve SQL response times by 20-40%.

3. Availability

Our first-to-market integration with SMB 3 not only provides performance improvements, it also enables SMB 3 Continuous Availability, allowing applications to run on clustered volumes with failovers that are transparent to end users. For example, SQL Server may store databases on file shares, so any disruption to file share access can interrupt SQL Server operation. Continuous Availability is accomplished via cluster failover on the host side and Data Mover or Shared Folder failover on the VNX side.

Other SMB 3.0 Features supported include:

  • Multi-Channel / Multipath I/O (MPIO) – Multiple TCP connections can now be associated with a single SMB 3.0 session, and a client application can use several connections to transfer I/O on a CIFS share. This optimizes bandwidth and enables failover and load balancing across multiple NICs (see the sketch after this list).
  • Offload Copy – Copying data within the same Data Mover can now be offloaded to the storage, which reduces the workload on the client and network.
  • SMB Encryption – Provides secure access to data on CIFS shares, protecting data on untrusted networks with end-to-end encryption of data in flight.
  • Directory Lease – SMB 2 introduced a directory cache that let clients cache a directory listing to save network bandwidth, but the cache would not see new updates. SMB 3 introduces a directory lease, so the client is automatically made aware of changes in a cached directory.
  • Remote Volume Shadow Copy Service (RVSS) – With RVSS, point-in-time snapshots can be taken across multiple CIFS shares, improving backup and restore performance.
  • BranchCache – A caching solution that keeps business data in a local cache; the main use case is remote office and branch office storage.
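
Here is the client-side sketch referenced in the Multi-Channel bullet above. It assumes a hypothetical VNX CIFS server named vnx-cifs; from a Windows 8 or Windows Server 2012 client, these cmdlets simply show which SMB dialect is negotiated and which NIC pairs are carrying multichannel traffic.

# Confirm the dialect (3.0) negotiated with the VNX file server
Get-SmbConnection -ServerName vnx-cifs

# Show the interface pairs being used for SMB Multichannel
Get-SmbMultichannelConnection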

EMC also offers a wide range of application availability and protection solutions that are built into the VNX including snapshots, remote replication, and a new RecoverPoint virtual replication appliance.

4. Simplicity

When it comes to provisioning storage for their applications, admins often have to navigate too many repetitive tasks, touching different UIs and increasing the risk of human error. They also likely need to coordinate with other administrators each time they need to provision space, which is not very efficient. Take, for example, provisioning space for SharePoint: you work in Unisphere to create a LUN and add it to a storage group, then log onto the server and run Disk Management to bring the volume online, then move on to Hyper-V, SQL Server Management Studio, and finally SharePoint Central Administration. A bit tedious, to say the least.
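
To make the manual path concrete, here is a minimal sketch of just the Windows-side portion of that workflow, assuming the LUN has already been created in Unisphere and presented to the host. The disk number, drive letter, and label are placeholders.

# Find the newly presented, uninitialized LUN
Get-Disk | Where-Object PartitionStyle -Eq 'RAW'

# Initialize, partition, and format it for SharePoint content
Initialize-Disk -Number 3 -PartitionStyle GPT
New-Partition -DiskNumber 3 -UseMaximumSize -DriveLetter S
Format-Volume -DriveLetter S -FileSystem NTFS -NewFileSystemLabel "SharePoint_Content"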

[Image: EMC Storage Integrator (ESI) provisioning wizard]

EMC Storage Integrator (ESI), on the other hand, streamlines everything we just talked about. Forget about how much faster it actually is – just think about the convenience and elegance of this workflow compared to the manual steps outlined above. ESI is a free MMC-based download that takes provisioning all the way into the Microsoft applications. Currently only SharePoint is supported, but SQL Server and Exchange wizards are coming soon. This is a feature that surprises and delights our customers!

 SO WHAT DO VNX CUSTOMERS SAY?

EMC’s VNX not only provides a rock-solid core infrastructure foundation, it also delivers significant features and benefits for application owners and DBAs. Here are some quotes from customers who have transformed their Microsoft environments using the VNX and VNXe platforms.

Peter Syngh, Senior Manager, IT Operations, Toronto District School Board

 “EMC’s VNX unified storage has the best of everything at a very cost-effective price. It integrates with Microsoft Hyper-V, which is crucial to our cloud strategy, and with its higher performance, automated tiering and thin provisioning, VNX was a no-brainer.”

Marshall Bose, Manager of IT Operations, Ensco (Oil/Gas)

 “A prime reason for choosing EMC over NetApp was that VNX is such a great fit for virtualization. With all the automation tools and tight integration with VMware, VNX is far easier than NetApp when it comes to spinning up and managing virtual machines.”

Rocco Hoffmann, IT Architect, BNP Paribas (German Bank)

“We are achieving significant savings in energy and rack space. In fact our VNX requires only half the rack space and has reduced our power and cooling costs”

Charles Rosse, Systems Administrator II, Baptist Memorial Health Care

“Since the VNX has been built into the design of our VDI from the beginning, it can easily accommodate growth – all we need to do is to plug in another drive or tray of drives and we get incrementally better performance.”

Erich Becker, Director of Information Systems, AeroSpec (Manufacturing)

“…We loved the fact that VNXe and VMware worked extremely well together …we have dramatically cut operating costs, increased reliability and data access is now twice as fast as before.”

BOTTOM LINE

There are many more customers who have praised the VNX family for powering their Microsoft applications, but I don’t have room to include them all. EMC is a trusted brand in storage, and the VNX today is an outstanding unified platform that successfully balances our customers’ block and file needs for their Microsoft file and application data – and wins awards for it. Feel free to find out more about the VNX and VNXe product lines here and here.

Also, come talk to us next week at TechEd – we will be there to help customers and partners learn more about our technology.

Find out more about our TechEd plans here.

Also download the VNXe Simulator executable right here.  It’s pretty awesome and shows you the unique VNXe management interface.

The Future of Exchange Protection


If you could look into a crystal ball and predict what would come next for Exchange protection, what would it be?

Join us live on Feb 7th to learn what Ernes Taljec (a data architect from Presidio) and I think is coming next for Exchange 2013 and beyond. We will talk about the evolution of Microsoft built-in protection and EMC technologies, and take a look into the future!

Also – because everyone likes free stuff – we will pick one person from the audience to win an iPad 3 live during the event.

Sign up today!

  • Webinar Date: Feb 7th at 12:00 PM EST
  • Webinar Link: https://www.brighttalk.com/webcast/7397/65443
  • Presenters: Brian Henderson, AppSync Technical Marketing Manager, EMC & Ernes Taljec, Data Center Architect, Presidio
  • Duration: 60 mins

What’s new for Exchange Server 2013 Database Availability Groups?

By: Brien M. Posey

When Microsoft created Exchange Server 2010, it introduced the concept of Database Availability Groups (DAGs). Database Availability Groups are the mechanism that makes it possible for a mailbox database to fail over from one mailbox server to another. In retrospect, Database Availability Groups worked really well for organizations whose operations were confined to a single data center. Although it was possible to stretch a Database Availability Group across multiple data centers, performing site-level failovers was anything but simple. Microsoft has made a number of enhancements to Database Availability Groups in Exchange Server 2013, and some of them are geared toward making site-level failovers less complex.

Site Resilience

Although site resilience could be achieved with Exchange Server 2010, a number of factors prevented organizations from achieving the level of resilience they might have liked. For starters, site-level resilience had to be planned before Exchange Server 2010 was put into place. One reason for this was that all of the Database Availability Group members had to belong to the same Active Directory domain, which meant that site resilience could only be achieved if the Active Directory domain spanned multiple data centers.

Another major limitation was that Microsoft designed Exchange Server 2010 so that a simple WAN failure would not trigger a site failover. One of the ways that they did this was to make it so that the failover process had to be initiated manually. Furthermore, the primary data center had to contain enough Database Availability Group members to allow the site to retain quorum in the event that the WAN link failed. Because of these limitations, there was really no such thing as true site resilience in Exchange Server 2010.

In Exchange Server 2013, it is finally possible to achieve full site resilience – with enough planning. As was the case with Exchange Server 2010, a DAG can only function if it is able to maintain quorum, meaning that at least half plus one of the DAG members are online and able to communicate with one another at any given time. This can be accomplished by placing an equal number of DAG members in each datacenter and then placing a witness server in a third location that is accessible from both datacenters.
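
As a rough Exchange Management Shell sketch of that layout (all server names are hypothetical): two members in each datacenter plus a witness in a third site gives five votes, so the DAG keeps quorum as long as any three can still talk to one another.

# Create the DAG with a witness in a third site, then add two members per datacenter
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS-SITE3 -WitnessDirectory C:\DAG1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-A1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-A2
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-B1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-B2
# Five voters total (four members plus the witness); a majority of three maintains quorum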

This approach allows a datacenter-level failover in the event of a major outage or a WAN failure. It is worth noting, however, that this approach to site resiliency still does not provide fully comprehensive protection for mailbox databases, because situations can still arise that cause the DAG to lose quorum. Imagine, for example, that a WAN link failure occurs between the two datacenters. In that situation, whichever datacenter can still communicate with the witness server retains quorum. Now imagine that one of the DAG members in that datacenter fails before the WAN link is repaired. This failure would cause the datacenter to lose quorum, resulting in a DAG outage.

Lagged Copies

Another major change Microsoft has made to DAGs has to do with the way lagged copies work. Lagged copies are database replicas for which transaction log replay is delayed in order to facilitate point-in-time recovery.

In Exchange 2013, Microsoft has built some intelligence into lagged copies to detect and correct instances of corruption or low disk space. It is worth noting however, that in these types of circumstances you could end up losing the lag.

One of the big problems with lagged copies in Exchange 2010 was that transaction logs had to be stored for the full lag period and could grow to a considerable size. As such, there have been instances in which organizations underestimated the volume of transaction logs that would be stored for lagged copies, resulting in the mailbox server running out of disk space.

Exchange 2013 monitors the available disk space. If the volume containing the transaction logs begins to run short on space, Exchange initiates an automatic play down, which commits the contents of the transaction logs to the lagged copy so that disk space can be freed on the transaction log volume.

Exchange uses a similar log file play down if it detects a corrupt database page. According to Microsoft however, “Lagged copies aren’t patchable with the ESE single page restore feature. If a lagged copy encounters database page corruption (for example, a -1018 error), it will have to be reseeded (which will lose the lagged aspect of the copy)”.

Another change that Microsoft has made to lagged copies is that it is now possible to activate a lagged copy and bring it to a current state, even if the transaction logs are not available. This is due to a new feature called the Safety Net. The Safety Net replaces the transport dumpster. Its job is to store copies of every message that has been successfully delivered to an active mailbox database. If a lagged database copy needs to be activated and the transaction logs are not available, Exchange can use the Safety Net’s contents to bring the database into a current state.
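
For reference, here is a minimal Exchange Management Shell sketch of adding a lagged copy with a seven-day replay lag and then checking its replay queue. The database and server names are placeholders.

# Add a lagged copy of DB01 on a second-site server with a 7-day replay lag
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer MBX-B2 `
    -ReplayLagTime 7.00:00:00 -ActivationPreference 4

# Review the status and replay queue of every copy of DB01
Get-MailboxDatabaseCopyStatus DB01\* | Format-Table Name,Status,ReplayQueueLength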

Public Folders

One of the most welcome changes Microsoft has made to DAGs is that it is now possible to use them to protect your public folders. In Exchange Server 2010, DAGs could only protect mailbox databases, not public folder databases. Public folder databases no longer exist in Exchange 2013; instead, public folders are stored in mailbox databases, which makes it possible to use DAGs to protect them.
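
A short sketch of what that looks like in practice (names are hypothetical): create a public folder mailbox in a DAG-protected mailbox database, and the folder hierarchy inherits that database's copies.

# Create a public folder mailbox in a database that already has DAG copies
New-Mailbox -PublicFolder -Name PF-Hierarchy01 -Database DB01

# Create a public folder inside it
New-PublicFolder -Name "Projects" -Mailbox PF-Hierarchy01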

Conclusion

The most significant changes that Microsoft has made to DAGs include the ability to fail over at the datacenter level, the ability to use DAGs to provide high availability for public folders, and automated maintenance for lagged copies. In addition, Microsoft has built in some smaller improvements, such as automatic database reseeding after a storage failure and automated notification when only a single healthy copy of a database exists.

What’s New for Exchange 2013 Storage?

By: Brien M. Posey

Many of Exchange Server 2013’s most noteworthy improvements are behind-the-scenes architectural changes rather than new product features. Perhaps nowhere is this more true than in Exchange Server’s storage architecture. Once again, Microsoft invested heavily in Exchange’s storage subsystem in an effort to drive down overall storage costs while improving performance and reliability. This article outlines some of the most significant storage-related improvements in Exchange Server 2013.

Lower IOPS on Passive Database Copies

In failure situations, failover from an active mailbox database to a passive database copy needs to happen as quickly as possible. In Exchange Server 2010, Microsoft expedited the failover process by maintaining a low checkpoint depth (5 MB) on the passive database copy. The reason was that failing over from an active to a passive database copy required the database cache to be flushed, and a large checkpoint depth would have increased the time needed to flush the cache, making the failover take longer to complete.

The problem was that maintaining a low checkpoint depth came at a cost. The server hosting the passive database copy had to do a lot of work in terms of pre-read operations in order to keep pace with demand while still maintaining a minimal checkpoint depth. The end result was that a passive database copy produced nearly the same level of IOPS as its active counterpart.

In Exchange Server 2013, Microsoft made a simple decision that greatly reduced IOPS for passive database copies, while also reducing the database failover time. Because much of the disk I/O activity on the passive database copy was related to maintaining a low checkpoint depth and because the checkpoint depth had a direct impact on the failover time, Microsoft realized that the best way to improve performance was to change the way that the caching process worked.

In Exchange 2013, the cache is no longer flushed during a failover. Instead, the cache is treated as a persistent object. Because the cache no longer has to be flushed, its size has little bearing on the time it takes to perform a failover. As such, Microsoft designed Exchange 2013 with a much larger checkpoint depth (100 MB). The larger checkpoint depth means the passive database doesn’t have to work as hard at pre-reading data, which drives down IOPS on the passive database copy by about half. Furthermore, failovers normally complete in about 20 seconds.
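
To put the faster failover in context, a planned switchover to a passive copy is a one-liner in the Exchange Management Shell; the database and server names below are placeholders.

# Activate the copy of DB01 on MBX-B1, then check the health of all copies
Move-ActiveMailboxDatabase DB01 -ActivateOnServer MBX-B1 -Confirm:$false
Get-MailboxDatabaseCopyStatus DB01\* | Format-Table Name,Status,CopyQueueLength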

Although the idea of driving down IOPS for passive database copies might sound somewhat appealing, some might question the benefit. After all, passive database copies are not actively being used, so driving down the IOPS should theoretically have no impact on the end user experience.

One of the reasons why reducing the IOPS produced by passive database copies is so important has to do with another architectural change that Microsoft has made in Exchange Server 2013. Unlike previous versions of Exchange Server, Exchange Server 2013 allows active and passive database copies to be stored together on the same volume.

If an organization does choose to use a single volume to store a mixture of active and passive databases, then reducing the IOPS produced by passive databases has a direct impact on the performance of the active databases.

This new architecture also makes it easier to recover from disk failures within a reasonable amount of time. Exchange Server 2013 supports volume sizes of up to 8 TB. With that in mind, imagine what would happen if a disk failed and needed to be reseeded. Assuming that the majority of the space on the volume was in use, it would normally take a very long time to regenerate the contents of the failed disk.

Part of the reason has to do with the sheer volume of data that must be copied, but there is more to it than that. Passive database copies are normally reseeded from an active database copy. If all of the active database copies reside on a common volume, then that volume’s performance will be the limiting factor in the amount of time it takes to rebuild the failed disk.

In Exchange Server 2013, however, volumes can contain a mixture of active and passive database copies, which means the active database copies likely reside on different volumes (typically on different servers). The data needed to rebuild the failed volume is therefore pulled from a variety of sources, so the data source is no longer the limiting factor in the time it takes to reseed the disk. Assuming the disk being reseeded can keep pace, the reseeding process can complete much more quickly than it would if all of the data were coming from a single source.

In addition, Exchange Server 2013 periodically performs an integrity check of passive database copies. If a database copy is found to have a status of FailedAndSuspended, Exchange checks to see whether any spare disks are available. If a valid spare is found, Exchange Server automatically remaps the spare and initiates an automatic seeding process.
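
This AutoReseed behavior is driven by a handful of DAG properties that tell Exchange where the database and spare-volume mount points live. A hedged sketch with example paths and names:

# Point the DAG at the mount-point roots used for databases and spare volumes
Set-DatabaseAvailabilityGroup -Identity DAG1 `
    -AutoDagVolumesRootFolderPath C:\ExchangeVolumes `
    -AutoDagDatabasesRootFolderPath C:\ExchangeDatabases `
    -AutoDagDatabaseCopiesPerVolume 1

# Copies in this state are the ones AutoReseed will try to recover onto a spare
Get-MailboxDatabaseCopyStatus -Server MBX-B1 | Where-Object Status -Eq 'FailedAndSuspended'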

Conclusion

As you can see, Microsoft has made a tremendous number of improvements with the way that Exchange Server manages storage in DAG environments. Passive database copies generate fewer IOPS, and failovers happen more quickly than ever before. Furthermore, Exchange Server can even use spare disks to quickly recover from certain types of disk failures.

An Up Close Look at the Volume Shadow Copy Services
Guest Post by: Brien M. Posey

One of the big problems with backing up database applications is that data is often modified before the backup can complete. Needless to say, modifying data while a backup is running can result in a corrupt backup. In an effort to keep this from happening, Microsoft uses the Volume Shadow Copy Services (VSS) to make sure that database applications are backed up in a consistent state. This article explains how the Volume Shadow Copy Services work.

The VSS Components

[Image: VSS component diagram]

There are four main components that make up the Volume Shadow Copy Services.

These components include:

  • the VSS service
  • the VSS requestor
  • the VSS writer
  • the VSS provider

These components work together to provide the VSS backup capabilities.

  • The VSS Service component can best be thought of as the centralized operating system service that ties the various VSS components together. The VSS service ensures that the VSS requestor, VSS writer, and VSS provider are all able to communicate with one another.
  • At a high level, the VSS requestor generally refers to the backup software. The VSS requestor is the component that asks the VSS service to create a shadow copy. The requestor itself is built into the backup software. This is true for Windows Server Backup, System Center Data Protection Manager, and third-party backup applications such as EMC’s AppSync.
  • The third component is the VSS provider. The VSS provider links the VSS service to the hardware on which the shadow copy will be created. The Windows Server operating system includes a built-in VSS provider. This provider exists at the software level and allows Windows to interact with the server’s storage subsystem. A VSS provider can also exist at the hardware level. A hardware-level provider offloads shadow copy operations to the storage hardware so that the server operating system does not have to carry the workload. However, when a hardware-level VSS provider is used, a driver is usually required to make Windows aware of the storage hardware’s capabilities.
  • The fourth component of the Volume Shadow Copy Services is the VSS writer. The VSS writer’s job is to ensure that data is backed up in a consistent manner. It is important to understand that in most cases a number of different VSS writers work together in parallel to ensure that various types of data are backed up properly. Server applications such as Exchange Server and SQL Server include their own VSS writers that plug into the operating system’s existing VSS infrastructure and allow the application to be backed up.

Creating a Volume Shadow Copy

The process of creating a volume shadow copy begins when the requestor (which is usually built into backup software) notifies the Volume Shadow Copy Service that a shadow copy needs to be created. When the Volume Shadow Copy Service receives this request, it in turn notifies all of the individual VSS writers of the impending shadow copy.

When the individual writers receive the request, they take steps to place data into a consistent state that is suitable for shadow copy creation. The exact tasks that a writer performs vary from one application to another, but generally writers prepare by flushing caches and completing any database transactions that are currently in progress. If the application makes use of transaction logs, the logs may be committed as part of the process as well.

After all of the VSS writers have prepared for the shadow copy, the Volume Shadow Copy Service instructs the writers to freeze their corresponding applications. This prevents write operations from occurring for the duration of the shadow copy (which takes less than ten seconds to complete).

When all of the applications have been frozen the Volume Shadow Copy Service instructs the provider to create the shadow copy. When the shadow copy creation is complete, the provider notifies the Volume Shadow Copy Service of the completion. At this point, the Volume Shadow Copy Service once again allows file system I/O and it instructs the writers to resume normal application activity.
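
You can watch this interplay on a small scale by acting as a bare-bones requester yourself. The sketch below uses the documented Win32_ShadowCopy WMI class against the in-box system provider (run from an elevated PowerShell prompt); a hardware provider such as the one an array supplies would normally be invoked by your backup software instead.

# Ask the system provider to create a shadow copy of the C: volume
$result = (Get-WmiObject -List Win32_ShadowCopy).Create("C:\", "ClientAccessible")

# List the shadow copies now present on the system
Get-WmiObject Win32_ShadowCopy | Select-Object ID,InstallDate,VolumeName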

The shadow copy creation process revolves largely around the task of coordinating VSS writers so that the various components of the operating system and any applications that are running on the server can be backed up in a reliable and consistent manner. Even though the individual writers do most of the heavy lifting, it is the provider that ultimately creates the shadow copy.

There are actually several different methods that can be used for shadow copy creation, and the method used varies from one provider to the next. As you may recall, a provider can exist as an operating system component or at the hardware level, and the shadow copy creation process depends on the type of provider being used. There are three main methods that providers typically use for creating shadow copies.

The first method is known as a complete copy. A complete copy is usually based on mirroring. A mirror set is created between the original volume and the shadow copy volume. When the shadow copy creation process is complete, the mirror is broken so that the shadow copy volume can remain in a pristine state as it existed at the point in time when it was created.

The second method that is sometimes used for shadow copy creation is known as Redirect on Write. Redirect on Write is based on the use of differencing disks. The shadow copy process designates the original volume as read only so that it can be kept in a pristine state as it existed at the point in time when the shadow copy was created. All future write operations are redirected to a differencing disk. This method is also sometimes referred to as snapshotting.

The third method that providers sometimes use for shadow copy creation is known as copy on write. This is a block-level operation that is designed to preserve storage blocks that would ordinarily be overwritten. When a write operation occurs, any blocks that would be overwritten are copied to the shadow copy volume prior to the write operation.

Conclusion

As you can see, the process of creating a shadow copy is relatively straightforward. You can gain some additional insight into the process by opening a Command Prompt window and entering the following command:

VSSADMIN List Writers

This command displays all of the VSS writers that are present on the system and also shows you each writer’s status.

Hope you found this helpful!

Why Storage Networks Can Be Better Than Direct Attached Storage for Exchange

Guest Post By: Brien M. Posey


Of all the decisions that must be made when planning an Exchange Server deployment, perhaps none are as critical as deciding which type of storage to use. Exchange Server is very flexible with regard to the types of storage that it supports. However, some types of storage offer better performance and reliability than others.

When it comes to larger scale Exchange Server deployments, it is often better to use a Storage Area Network than it is to use direct attached storage. Storage networks provide a number of advantages over local storage in terms of costs, reliability, performance, and functionality.

Cost Considerations

Storage Area Networks have gained something of a reputation for being more expensive than other types of storage. However, if your organization already has a Storage Area Network in place then you may find that the cost per gigabyte of Exchange Server storage is less expensive on your storage network than it would be if you were to invest in local storage.

While this statement might seem completely counterintuitive, it is based on the idea that physical hardware is often grossly underutilized. To put this into perspective, consider the process of purchasing an Exchange mailbox server that uses Direct Attached Storage.

Organizations that choose to use local storage must estimate the amount of storage that will be needed to accommodate Exchange Server databases plus leave room for future growth. This means making a significant investment in storage hardware.  In doing so, an organization is purchasing the necessary storage space, but they may also be spending money for storage space that is not immediately needed.

In contrast, Exchange servers that are connected to storage networks can take advantage of thin provisioning. This means that the Exchange Server only uses the storage space that it needs. When a thinly provisioned volume is created, the volume typically consumes less than 1 GB of physical storage space, regardless of the volume’s logical size. The volume will consume physical storage space on an as needed basis as data is written to the volume.

In essence, a thinly provisioned volume residing on a SAN could be thought of as “pay as you go” storage. Unlike Direct Attached Storage, the organization is not forced to make a large up-front investment in dedicated storage that may never be used.
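
The same pay-as-you-go behavior is easy to see at the hypervisor layer, which makes for a handy analogy: a dynamically expanding VHDX advertises its full logical size while consuming almost no physical space until data lands on it. A quick sketch, assuming the Hyper-V module is available and the path is just an example:

# Create a 2 TB dynamic disk, then compare logical size to actual file size on disk
New-VHD -Path D:\VHDs\thin-demo.vhdx -SizeBytes 2TB -Dynamic
Get-VHD D:\VHDs\thin-demo.vhdx | Select-Object Size,FileSize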

Reliability

Another advantage to using Storage Area Networks for Exchange Server storage is that when properly constructed, SANs are far more reliable than Direct Attached Storage.

The problem with using Direct Attached Storage is that there are a number of ways in which the storage can become a single point of failure. For example, a disk controller failure can easily corrupt an entire storage array. Although there are servers that have multiple array controllers for Direct Attached Storage, lower-end servers are often limited to a single array controller.

Some Exchange mailbox servers implement Direct Attached Storage through an external storage array. Such an array is considered to be a local component, but makes use of an external case as a way of compensating for the lack of drive bays within the server itself. In these types of configurations, the connectivity between the server and external storage array can become a single point of failure (depending on the hardware configuration that is used).

When SAN storage is used, potential single points of failure can be eliminated through the use of multipath I/O. The basic idea behind multipath I/O is that fault tolerance can be achieved by providing multiple physical paths between a server and a storage device. If for example an organization wanted to establish fault tolerant connectivity between an Exchange Server and SAN storage, they could install multiple Fibre Channel Host Bus Adapters into the Exchange Server. Each Host Bus Adapter could be connected to a separate Fibre Channel switch. Each switch could in turn provide a path to mirrored storage arrays. This approach prevents any of the storage components from becoming single points of failure.
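
For completeness, here is what enabling the in-box Microsoft MPIO stack looks like in PowerShell. Many EMC environments use PowerPath instead, and the vendor/product strings below are placeholders you would take from your array documentation.

# Install the Multipath I/O feature, claim the array's devices, and set a load-balance policy
Install-WindowsFeature Multipath-IO
New-MSDSMSupportedHW -VendorId "DGC" -ProductId "VRAID"
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR   # round robin across the available paths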

Performance

Although Microsoft has taken measures to drive down mailbox server I/O requirements in the last couple of versions of Exchange Server, mailbox databases still tend to be I/O intensive. As such, large mailbox servers depend on high-performance hardware.

While there is no denying that high-performance Direct Attached Storage is available, SAN storage can potentially provide a higher level of performance due to its scalability. One of the major factors that impacts a storage array’s performance is the number of spindles used by the array. Direct Attached Storage limits the total number of spindles that can be used: not only is the number of drive bays in the case a factor, but there is also a limit to the number of disks that can be attached to the array controller.

SAN environments make it possible to create high performance disk arrays by using large numbers of physical disks. Of course capitalizing on the disk I/O performance also means that you must have a high speed connection between the server and the SAN, but this usually isn’t a problem. Multipath I/O allows storage traffic to be distributed across multiple Fibre Channel ports for optimal performance.

Virtualization

Finally, SAN environments are ideal for use in virtualized datacenters. Although neither Microsoft nor VMware still requires shared storage for clustered virtualization hosts, using shared storage is still widely considered a best practice. SANs make it easy to create Cluster Shared Volumes that can be shared among the nodes of your virtualization host cluster.
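
A brief sketch of that last step with the Failover Clustering cmdlets; the disk name is whatever the cluster assigned when the SAN LUN was added as cluster storage.

# Add the presented LUN to the cluster, then convert it to a Cluster Shared Volume
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"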

Conclusion

Exchange mailbox servers are almost always considered to be mission critical. As such, it makes sense to invest in SAN storage for your Exchange Server since it can deliver better performance and reliability than is possible with Direct Attached Storage.