Author Archives: Brian

About Brian

Brian Henderson lives in Quincy, MA with 1 wife, 2 dogs, & 3 kids.

EMC’s new VNXe3200: Come Fly with Me

I am very excited to write about the new VNXe3200, which brings a new level of power and flexibility to EMC’s entry-level VNXe line while retaining its simplicity for non-storage experts.

KittyHawk was a fitting codename for EMC’s third-generation VNXe platform – it flies at a higher altitude than any VNXe you may have seen before, yet remains simple and affordable, with US list prices starting under $12,000.

So let’s jump right in and take a quick fly-by of the important features, shall we?


The VNXe3200 inherits many new features from the larger VNX models:

  • FAST Suite (autotiering and SSD caching)
  • Fibre Channel Support
  • MCx Multicore Optimization
  • VMware support: VAAI, VASA
  • Microsoft support: SMB3 support, ODX support, on-array SMI-S provider, NPIV

The VNXe3200 also has features that aren’t on the larger VNX models – most of them focused on the IT generalist.

  • A revised configuration wizard that lets you set up NAS or SAN in under 15 minutes
  • Unified Snapshots for file and block with new LUN groups (for grouping databases and logs together, for example)
  • New system metrics for simple performance collection and charting
  • Expanded EMC Connect Proactive Support ecosystem to help resolve issues 5X faster

And the hardware got a makeover too.

  • Compared to a VNXe3150, the VNXe3200 has upgraded processors, MCx multicore optimization, and a bump in system memory from 8GB to 48GB!
  • Like the VNXe3150, we support SSD, SAS, and large NL-SAS drives, but the VNXe3200 adds the FAST Suite – FAST VP autotiering and FAST Cache – which can deliver up to a 3X overall performance boost over the VNXe3150.
  • We’ve expanded connectivity from Ethernet only to Ethernet and Fibre Channel, with support for both file and block protocols – iSCSI, FC, NFS, and CIFS (including SMB3).

There’s far too much VNXe3200 goodness to cover in a single blog post – but that should be enough information to get you started.

Here are some more links to check out:

Provisioning EMC Storage for Windows 4X Faster with ESI

The EMC Storage Integrator for Windows was made to simplify and automate many of the mundane tasks associated with provisioning storage in a Windows environment. It is a free download and comes complete with a simple MMC interface as well as PowerShell cmdlets, SharePoint provisioning wizards, and System Center plugins for Operations Manager and Orchestrator. Good stuff here.
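
If you want to poke around the PowerShell side of ESI before committing to the GUI, the standard module-discovery cmdlets are an easy way in. The module name below is an assumption – check what the ESI installer actually registers on your system – but the discovery commands themselves are stock PowerShell.

    # Discover and explore the ESI PowerShell toolkit (module name is an assumption - verify on your system)
    Get-Module -ListAvailable | Where-Object { $_.Name -like "*ESI*" -or $_.Name -like "*EMC*" }
    Import-Module "ESIPSToolkit"          # adjust the name to match whatever the previous command returned
    Get-Command -Module "ESIPSToolkit"    # lists the provisioning cmdlets that ship with ESI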

Is a SAN Better, Cheaper, and More Secure Than Office 365?

SANs – especially SANs from market leader EMC – are always under attack from companies wishing to cash in on a piece of the rising data growth across all industries and market segments.

  • Some say DAS is best.
  • Some say keep the storage in the servers.
  • Some say you should build your own shared array.

But when it comes to Microsoft environments, it often helps to have independent experts investigate the matter to get a fresh perspective.

In a recent study, Wikibon determined that a shared storage infrastructure (powered by EMC’s next generation VNX2 storage systems) can match the prices offered by Office 365 public cloud and offer more capabilities, more security, and more control.

This, however, assumes a completely consolidated approach for deploying multiple mixed workloads such as Exchange, SharePoint, SQL Server, and Lync – which is where the VNX2 really shines. We use FAST VP, FAST Cache, and a combination of drive types to achieve the best balance of performance and cost.

Are you looking for more information about deploying Microsoft applications on VNX?    Definitely check here for the most recent best practices guides!

Also check out the recent webinar I did with James Baldwin, who leads our EMC/Microsoft Proven Solutions engineering team. We had a lot of fun doing this one; hope you enjoy it.

Building a Microsoft Azure Private Cloud – Powered by EMC VNX Storage

Recently EMC held a Microsoft Summit, where a lot of the Microsoft-savvy engineers and business folks within EMC get together to share their stories and lessons learned.

One of the highlights of these sessions is always the work of Txomin Barturen – our resident Microsoft expert in EMC’s Office of the CTO.

His blog can be found here:  http://datatothepeople.wordpress.com/
(Bookmark it, and look for videos and blog posts soon to follow)

This year his session focused on our work within Microsoft Hyper-V, Microsoft System Center, Private Clouds and the powerful Azure Pack for Windows.

Sure, everyone knows about EMC’s affinity towards VMware (EMC’s VNX was rated best storage for VMware 3 years in a row), but many don’t know how focused we are on Hyper-V and helping customers power their Microsoft Cloud.

EMC is committed to becoming the best storage for enterprises and service providers who wish to deploy private and/or public clouds for their customers – on VMware or Hyper-V.

Evidence of EMC’s Microsoft Private Cloud Work

To get to this stage, we’ve had to do a lot of work.

And beyond our engineering ability, we also showcased our agility.

  • VNX was the first storage platform to support SMB 3.0 (VNX & VNXe)
  • VNX was the first storage platform to demonstrate ODX (TechEd 2012)
  • Our E-Lab aggressively submits Windows logo certifications (EMC currently has the most Windows Server 2012 R2 certifications)

Where do you find these materials? 

We’ve built Microsoft Private Cloud (Proven) solutions on VNXe, VNX & VMAX leveraging SMI-S / PowerShell that can be found and delivered through EMC’s VSPEX program or as part of our Microsoft Private Cloud Fast Track solutions (which are Microsoft validated, ready-to-run reference architectures).  You can find more about this work here.

Getting to a More Agile Cloud

Txomin’s presentation talked about how customers want everything that the Azure public cloud model offers in terms of agility and management, but without the loss of control (i.e., an on-premises cloud deployment). They want to offer *-as-a-Service models, elastic scale, and self-service for tenants, but without the SLA risks that are out of IT’s control when deploying on a full public cloud.

The Middle Ground:  The Azure Pack for Windows

Microsoft is putting together some really interesting cloud management software with the Azure Pack for Windows. The Azure Pack for Windows is a free downloadable set of services that offers the same interface as the Azure public cloud, but provides more control for companies that are not willing to deploy on the public cloud due to performance, reliability, security, or compliance concerns.


Since we’ve done all of the baseline private cloud work, we can now use it as a foundation for building a Microsoft private cloud on-premises with a VNX storage platform and the new Azure Pack for Windows.

Built atop the new Windows Server 2012 R2 platform, the Windows Azure Pack (WAP) enables public cloud-like management and services without the risk. This layers right on top of EMC’s Windows Fast Track and Private Cloud offerings without any additional technology required.

Although it offers a limited subset of services today, we expect that Microsoft will introduce more services as customers adopt this new model.

One of the first use cases Microsoft is focusing on is service providers who want better management for their Microsoft clouds. This allows for new integrations and capabilities that weren’t previously available: IT staff can treat business units as tenants, offer pre-configured solutions via the Gallery, and enable self-service management by tenants (delegated administration). They can also view utilization and reporting through System Center and third-party integrations, which are fully extensible through Operations Manager, Orchestrator, and Virtual Machine Manager.

This is truly the future of Microsoft’s virtualization strategy, and EMC is right there to enable customers to build the best, most reliable, secure, manageable private cloud.

But what about Data Protection?

Well, our colleagues in the Backup and Recovery Systems division of EMC are no slackers.  They saw the same trends and are eager to help customers stay protected as they move to the cloud.

In this demo, Alex Almeida, Sr. Technical Marketing Manager for EMC’s Backup and Recovery Systems, demonstrates how the EMC Data Protection Suite provides full support for Windows Azure private cloud backup and recovery:

So let me correct my statement…  EMC is right there to enable customers to build the best, most reliable, secure, manageable private cloud – AND PROTECT IT.

EMC’s Next Generation VNX Arrives!

Finally, the big day is here. The next generation VNX models are now available and shipping.

We’ve got a big launch day that started at 11 AM Milan, Italy time. We’ve partnered with Lotus around a speed and agility theme, and there’s also a limited edition VNX Lotus bezel available for the VNX5400.

Press Releases include:

New EMC Storage Arrays, Systems & Software-Defined Storage Speed IT Transformation

New EMC VNX Shatters the Definition and Economics of Midrange Storage

EMC Transforms EMC VSPEX Proven Infrastructure, Supports 2x More Virtual Machines at Same Price

EMC Announces Availability of ViPR Software-Defined Storage Platform

These 2 “search links” will provide insight into how the announcement is progressing throughout the day.

Articles   Videos

Some people I know who are going to have a lot to say include:

Eric Herzog

Chad Sakac

Jeremy Burton

Even if you aren’t on Twitter a lot, it’s going to be one of those days where it will be a lot of fun to stay tuned.

#emc #SpeedtoLead

VNX Data Protection Ask the Experts
I will be answering questions in a VNX Data Protection Ask the Experts Session / online forum Sept 4th – Sept 19th.  Sign up for email updates and ask away!

How to Follow VMworld from your Couch


The VMworld effect?

This week is VMworld, and if you are in IT, it’s likely that you are hearing about it in some way. There are tweets, press releases, pictures of grown men dressed up like bunnies, and people you had forgotten about suddenly professing their love for the software-defined datacenter on LinkedIn. Yes, this is the VMworld effect, and admit it – you want to be there!

So how do you find out about all the new tech goodies without missing a beat – all from your desk or couch?

The easy way is to simply follow Chad Sakac –  Blog  YouTube  Twitter.

He’s now running Presales for EMC and he’s still the core “voice” for VMware virtualization at EMC. He gets access to top secret previews of new technology like no other person at EMC. Did you know he also has a huge datacenter 2 miles beneath his house on a glacier up there in Canada? He doesn’t – I made that up. But he does have a lot of EMC storage in his house!

One of the coolest demos I saw on his YouTube channel was this one which shows “vSphere Replication on steroids” using a beta version of RecoverPoint which supports VM-Level Granularity.  This is something that I know our customers have been asking about, and it was great to see the wraps finally come off of this one.

VMworld 2013: VM-Granular replication with EMC Recoverpoint/VE

 

Another thing to check out is the latest video from Fred Nix and gang. Stay for the Chad and Nick Weaver cameo at the end.

VMware Still Dominant, Hyper-V Gaining Ground

The results from Wikibon’s Multi-Hypervisor study are back, and there’s a good amount of data that you can dig into.

On virtualization trends, the survey provides evidence that virtualization will continue to grow (the percentage of virtualized servers is predicted to rise from 69% today to 84% in two years). The survey also indicates that many companies experiment with multi-hypervisor strategies, but 55% expect to move to a single hypervisor within 18 months.

The three big takeaways for me include:

1. VMware is still the dominant hypervisor with growth leveling off.

  • VMware was perceived to be dominant in functionality and all workload types were being run in production under VMware.
  • There is continued movement toward VMware (of the installations with a single hypervisor in 18 months’ time, two-thirds would be VMware).

2. Hyper-V is becoming good enough for many use cases and growing fast.

  • Like many enterprise Microsoft products, the third release is the game changer.  Hyper-V v3 (Windows Server 2012) is really gaining momentum in the small-to-midsize section of the market.
  • Planned adoption was measured in the survey results – of the installations with a single hypervisor in 18 months’ time, one-third would be Hyper-V.
  • But the features need to be there – ODX support is an essential requirement for storage arrays in Microsoft environments (VNX was first to market here, by the way).

3. EMC VNX continues to lead in virtualization integration.

  • EMC VNX had the most VMware integration of all the storage arrays analyzed and led in the overall group, the block-only group, and the file-only group.
  • EMC had a clean sweep of the block-only group, with the VNX, VMAX 10K and VMAX 20-40K in first, second, and third place.

But like I said, there’s a lot to dig into if you like data and colorful charts, including:

  1. A storage array/VMware integration feature matrix showing each vendor and the VMware integration they provide (thumbnail above)
  2. A VMware Storage Integration Assessment by Vendor Array (leads to a score).
  3. Advice for peers regarding hypervisor strategy – good advice in here.

Check out the full results on the Wikibon blog post here.

Survey Says… A Dual Hypervisor Strategy?

If you are virtualizing physical servers and want to see what others in the industry are doing, then you should definitely take this survey.

The Wikibon guys running the survey are top notch and will share the no-spin facts with the industry.

Some of the focus areas are around:

  • Does your company have a long-term strategy around hypervisor deployment?
  • Which hypervisor for which applications?
  • Should your cloud strategy impact your hypervisor decision?

Taking the survey gets you access to the results AND you could also win an iPad.

Link to Wikibon Page  http://wikibon.org/blog/time-to-create-a-hypervisor-strategy/

Link to the Survey   http://www.surveymonkey.com/s/MWBR2V9

Backup! And Watch These New NetWorker Microsoft Demos

NetWorker has added some great functionality in the Microsoft backup space recently, including a lot of great integration with SQL Server, Exchange, and Hyper-V.

The videos below are meant to show you the basics of why you would use these new features and how to configure them.

These great, get-right-to-the-point videos were made by Deanna Hoover, a Sr. Technical Marketing Manager in EMC’s Backup Recovery Systems (BRS) Division.

Happy Networking!

Demo 1: EMC NetWorker Module for Microsoft SQL Server Management Studio Backup

Let the SQL DBA make their own backups with native SQL tools!

Demo 2: EMC NetWorker Configuration of Microsoft SQL Server Management Studio for Federated Backup

Shows support for AlwaysOn Availability Groups – letting you back up the active or passive copy (secondary replica).

Demo 3: EMC NetWorker Module for Microsoft Application 3_0 Exchange Alternate Mailbox Recovery

Get granular recoveries for operational restore of emails – useful for quick recoveries and legal discovery.

Demo 4: EMC NetWorker 8.1 Hyper-V Configuration Wizard

Covers guest and image-level backup, CSVs, and SMB 3.0, and it’s useful for granular recoveries too.

Hope you enjoyed these videos, let me know if you want to see anything else and I’ll see what I can do!

Get Inside the EMC/Microsoft Partnership

I’ve had the privilege of knowing and working with Sam Marraccini for about 10 years now and although he’s a Pittsburgh Penguins fan, he’s always been a great guy to hang out with and a prolific creator of EMC/Microsoft related content – mostly videos.

My friend Sam makes it easy to learn more about what EMC is doing in the Microsoft market and isn’t afraid to wheel up his portable video equipment (laptop + webcam + guitar hero microphone) to anyone on a conference show floor to find out more about what they’re doing.

If  you are interested in what EMC is doing in the Microsoft space to increase the performance, availability, and management of your application and database infrastructure, then you need to check out Sam’s page over at:   http://www.insidethepartnership.com/

Happy Customer: City of Lenexa on EMC VNX (w/FAST) and VMware

In the video below, Justin Rairden talks about using FAST-enabled VNX storage for his VMware environment which hosts Microsoft Exchange and SQL Server.

SUMMARY:  they went from 14 SANs with nothing centrally managed, taking hours and hours to manage… to a VNX 5500 running FAST for VMware datastores (which runs critical apps including a SQL-based police system).  Things are a lot better now.

WHAT IS FAST?

EMC fully automated storage tiering (FAST) automatically moves active data to high-performance storage tiers and inactive data to low-cost, high-capacity storage tiers. The result is higher performance, lower costs, and a denser footprint than conventional systems. With FAST, flash drives (aka SSDs) increase application performance by up to 800 percent, and lower cost (NL-SAS or SATA) disk drives lower costs by up to 80 percent. more here.

WHAT IS FAST CACHE?

EMC fully automated storage tiering (FAST) cache is a storage performance optimization feature that provides immediate access to frequently accessed data. FAST cache complements FAST by automatically absorbing unpredicted spikes in application workloads. FAST cache results in a significant increase in performance for all read and write workloads. more here

SOME NOTES:

  • “We can just throw it out there and automatically tier that up… You don’t waste money on the disks you don’t need”
  • Exchange 2010 is completely virtualized. We run SQL with VNX and VMware and performance has increased… “Performance is way better than before.”
  • Using VMware Plugin to manage EMC components from VMware GUIs and VMware components from EMC GUI – “I could literally live off of one pane – EMC or VMware”
  • Exploring VPLEX for business continuity

Why EMC VNX for Microsoft Exchange, SQL, SharePoint

We’re having a great week so far in New Orleans at Microsoft TechEd 2013!

Yesterday I had a chance to record a couple videos including this one which describes why customers are using VNX for Microsoft databases and applications.

I also had a great conversation with Jose Barreto, whose blog I’ve followed for a while; it was great to finally meet him in person.

Check it out here:

http://blogs.technet.com/b/josebda/

Jose Barreto and Brian Henderson

EMC’s VNX = Award Winning storage for Microsoft environments

Microsoft’s TechEd 2013 is next week, and I’m looking forward to spending time with my longtime industry friends and making some new connections on the show floor in New Orleans.

This year, I’ll attend as part of the Unified Storage Division, and I felt I needed to share a little about the success of VNX and VNXe arrays in Microsoft environments:


EMC’s VNX Unified Storage Platform has been recognized with awards from a slew of independent analysts such as Gartner, IDC, and Wikibon, as well as media publications such as ComputerWorld, CRN, and Virtualization Review, thanks to the VNX family’s ability to power mission-critical applications, integrate with virtual environments, and solve SMB IT challenges, among other things. We take pride in being the #1 storage for most Microsoft Windows-based applications.

BUT… DOES  MICROSOFT WINDOWS NEED A SAN?  CAN’T WE DO IT OURSELVES?

Well, after speaking with Windows Server 2012, SQL Server, and EMC customers, partners and employees, the independent analyst firm Wikibon posted a before and after comparison model based on an enterprise customer environment. The idea is that the total cost of bolting together your own solution isn’t worth it.


The findings showed that by moving from a physical, non-tiered environment to a virtualized environment with flash and tiered storage, SQL Server customers realized a 30% lower overall TCO over a three-year period, including hardware, software, maintenance, and management costs for their database infrastructure.

The graphic shows that a do-it-yourself approach saves very little if anything in hardware costs and will divert operational effort to build and maintain the infrastructure. Risks and costs are likely to be higher with this approach.

In the end, EMC’s VNX infrastructure was proven to deliver a lower cost and lower risk solution for Windows 2012 versus a direct-attached storage (DAS) or JBOD (just a bunch of disks) model.  Full study here.

Video of EMC’s Adrian Simays and Wikibon Analysts discussing these results is here on YouTube.

MICROSOFT INTEGRATIONS AND INNOVATIONS  

EMC’s VNX platform considers Microsoft applications, databases, and file shares to be our sweet spot, as evidenced by our early integration of the latest Windows Server 2012 features that increase performance, efficiency, availability, and simplicity for our joint customers.

Performance, Efficiency, Availability, Simplicity

1. Performance

Within Windows, we were the first storage array to support SMB 3 and ODX copy offload (introduced alongside SMB 3 in Windows Server 2012), enabling large file copies to be handled by the SAN instead of consuming network bandwidth and host CPU cycles.


This test highlights the speed difference before (left) and after (right) ODX was implemented. With EMC VNX and ODX enabled, you can accelerate your VM copies by a factor of 7 while reducing server CPU utilization by a factor of 30!
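
If you want to verify that copy offload is actually allowed on a Windows Server 2012 (or later) host before running a test like this, the documented FilterSupportedFeaturesMode registry value is a quick check. This is a generic Windows-side sketch, not an EMC tool, and the value may simply be absent when the default (offload enabled) is in effect.

    # Check whether ODX copy offload is permitted on this host
    # FilterSupportedFeaturesMode: 0 (or missing) = ODX allowed, 1 = ODX disabled
    Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" |
        Select-Object FilterSupportedFeaturesMode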

For applications and databases, VNX FAST Cache and FAST VP automatically tune your storage to match your workload requirements, saving up to 80% of the time it would take to manually balance workloads.

The Enterprise Strategy Group (ESG) Lab confirmed that Windows Server 2012 with Hyper-V, Microsoft SQL Server 2012 with the new columnstore indexing, and VNX storage with FAST technologies form a complete solution that meets the business requirements of mid-tier organizations and beyond. An 800 GB data warehouse was deployed, which is fairly typical for a medium-sized business. With EMC FAST enabled, throughput reached up to 379 MB/sec, an improvement of more than 100% over SQL Server 2012’s baseline rowstore indexing. The DSS performance workload with EMC FAST enabled completed up to nine times faster than with rowstore indexing.

2. Efficiency

IT managers and storage administrators frequently adopt well-known forecasting models to pre-allocate storage space according to the storage demand growth rate. The main challenge is how to pre-allocate just enough storage capacity for the application. Reports from many storage array vendors indicate that 31% to 50% of allocated storage is either stranded or unused. Thus, 31% to 50% of the capital investment from the initial storage installation is wasted.

The VNX supports both Windows host-level and built-in storage-level thin provisioning to drastically reduce initial disk requirements. Windows Server 2012 can detect thin-provisioned storage on EMC storage arrays and reclaim unused space once it is freed by Hyper-V: an ODX-aware host connected to an EMC intelligent storage array automatically reclaims the freed storage and returns it to the pool, where it can be used by other applications.
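
As a rough sketch of what the host side of this looks like, Windows Server 2012 exposes the UNMAP/TRIM plumbing through standard in-box tools; the drive letter below is just an example of a thin-provisioned VNX LUN mounted on the host.

    # Confirm that Windows will send delete (UNMAP/TRIM) notifications to the array (0 = enabled)
    fsutil behavior query DisableDeleteNotify

    # Re-send UNMAP for the free space on a thin-provisioned volume (drive letter is an example)
    Optimize-Volume -DriveLetter E -ReTrim -Verbose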

Furthermore, for application storage we partner with companies like Kroll and Metalogix to provide better solutions for Exchange single item recovery and SharePoint remote BLOB storage, which can reduce SQL-stored SharePoint objects by about 80-90% and improve SQL response times by 20-40%.

3. Availability

Our first-to-market integration with SMB 3 not only provides performance improvements, it also enables SMB 3 Continuous Availability, allowing applications to run on clustered volumes with failovers that are transparent to end users. For example, SQL Server may store system tables on file shares, so any disruption to file share access can interrupt SQL Server operation. Continuous Availability is accomplished via cluster failover on the host side and Data Mover or Shared Folder failover on the VNX side.

Other SMB 3.0 features supported include (a quick PowerShell check follows the list):

  • Multi-Channel / Multipath I/O (MPIO) – Multiple TCP connections can now be associated with a single SMB 3.0 session, and a client application can use several connections to transfer I/O on a CIFS share. This optimizes bandwidth and enables failover and load balancing with multiple NICs.
  • Offload Copy – Copying data within the same Data Mover can now be offloaded to the storage, which reduces the workload on the client and the network.
  • SMB Encryption – Provides secure access to data on CIFS shares, protecting data on untrusted networks and providing end-to-end encryption of data in-flight.
  • Directory Lease – SMB 2 introduced a directory cache that allowed clients to cache a directory listing to save network bandwidth, but it would not see new updates. SMB 3 introduces a directory lease, so the client is now automatically made aware of changes made in a cached directory.
  • Remote Volume Shadow Copy Service (RVSS) – With RVSS, point-in-time snapshots can be taken across multiple CIFS shares, providing improved performance in backup and restore.
  • BranchCache – A caching solution that keeps business data in a local cache. The main use case is remote office and branch office storage.
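
Here is the quick PowerShell check mentioned above. These are standard Windows Server 2012 SMB cmdlets run from the client side, so they work regardless of whether the share lives on a VNX Data Mover or a Windows file server; treat this as a minimal verification sketch rather than a tuning guide.

    # Confirm the negotiated SMB dialect for current connections (look for 3.0 or higher)
    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect

    # See whether SMB Multichannel has established multiple connections/interfaces to the server
    Get-SmbMultichannelConnection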

EMC also offers a wide range of application availability and protection solutions that are built into the VNX including snapshots, remote replication, and a new RecoverPoint virtual replication appliance.

4. Simplicity

When it comes to provisioning storage for their applications, admins often have to navigate too many repetitive tasks, requiring them to touch different UIs and increasing the risk of human error. Admins also likely need to coordinate with other administrators each time they need to provision space. This is not very efficient. Take, for example, a user who wants to provision space for SharePoint. You need to work with Unisphere to create a LUN and add it to a storage group. Next you need to log onto the server and run Disk Management to import the volume. Next you need to work with Hyper-V, then SQL Server Management Studio, then SharePoint Central Administration. A bit tedious to say the least – the Windows-side portion alone looks something like the sketch below.
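
For illustration, here is roughly what just the Windows-side portion of that manual workflow looks like in PowerShell once the LUN has been presented; the disk number, drive letter, and label are examples only.

    # Manual Windows-side provisioning after the LUN is presented (disk number and drive letter are examples)
    Get-Disk | Where-Object PartitionStyle -Eq "RAW"            # find the newly presented LUN
    Initialize-Disk -Number 3 -PartitionStyle GPT               # bring it online with a GPT layout
    New-Partition -DiskNumber 3 -DriveLetter S -UseMaximumSize  # carve out a partition
    Format-Volume -DriveLetter S -FileSystem NTFS -NewFileSystemLabel "SharePointData"

And that’s before you ever touch Hyper-V, SQL Server Management Studio, or SharePoint Central Administration.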


EMC Storage Integrator (ESI), on the other hand, streamlines everything we just talked about. Forget about how much faster it actually is… just think about the convenience and elegance of this workflow compared to the manual steps outlined above. ESI is a free MMC-based download that takes provisioning all the way into Microsoft applications. Currently only SharePoint is supported, but SQL and Exchange wizards are coming soon. This is a feature that surprises and delights our customers!

 SO WHAT DO VNX CUSTOMERS SAY?

EMC’s VNX not only provides a rock solid core infrastructure foundation, but also delivers significant features and benefits for application owners and DBAs. Here are some quotes from customers who have transformed their Microsoft environments using the VNX and VNXe platforms.

Peter Syngh Senior Manager, IT Operations, Toronto District School Board

 “EMC’s VNX unified storage has the best of everything at a very cost-effective price. It integrates with Microsoft Hyper-V, which is crucial to our cloud strategy, and with its higher performance, automated tiering and thin provisioning, VNX was a no-brainer.”

Marshall Bose Manager of IT Operations, Ensco (Oil/Gas)

 “A prime reason for choosing EMC over NetApp was that VNX is such a great fit for virtualization. With all the automation tools and tight integration with VMware, VNX is far easier than NetApp when it comes to spinning up and managing virtual machines.”

Rocco Hoffmann, IT Architect, BNP Paribas (German Bank)

“We are achieving significant savings in energy and rack space. In fact our VNX requires only half the rack space and has reduced our power and cooling costs.”

Charles Rosse, Systems Administrator II, Baptist Memorial Health Care

“Since the VNX has been built into the design of our VDI from the beginning, it can easily accommodate growth – all we need to do is plug in another drive or tray of drives and we get incrementally better performance.”

Erich Becker,  Director of Information Systems, AeroSpec (Manufacturing)

“…We loved the fact that VNXe and VMware worked extremely well together …we have dramatically cut operating costs, increased reliability and data access is now twice as fast as before.”

BOTTOM LINE

There are many more customers that have praised the VNX family for powering their Microsoft applications, but I don’t have the room to include them all. EMC is a trusted brand in storage, and the VNX today is an outstanding unified platform that successfully balances our customers’ block and file needs for their Microsoft file and application data – and gets awards for it. Feel free to find out more about the VNX and VNXe product lines here and here.

Also, come talk to us next week at TechEd; we will be there to help customers and partners learn more about our technology.

Find out more about our TechEd plans here.

Also download the VNXe Simulator executable right here.  It’s pretty awesome and shows you the unique VNXe management interface.

2013: A Mobile Datacenter Odyssey

At EMC World last week, Avnet Technology Solutions introduced the Avnet Mobile Data Center Solution for EMC VSPEX.

Click here or on the picture below to access my latest video which provides a bit more about this rolling datacenter-in-a-box environment and features Stefan Voss, Business Development Manager from EMC.


What is the Avnet Mobile Data Center Solution for EMC VSPEX?

Exclusively available through Avnet’s U.S. and Canadian partner community, this mobile data center solution leverages VSPEX Proven Infrastructures to create private clouds. Channel partners’ enterprise customers will benefit from being able to deploy data centers that have been ‘hardened’ to operate in harsh environments, supporting business continuity, data center moves, disaster recovery, large-scale special events, and remote field locations.

It was named one of the Top 3 hottest products at EMC World this year by Channel Reseller News / CRN (link) and includes System Center, SharePoint, Metalogix, and many more partners.

Find more information here and here

The Future of Exchange Protection


If you could look into a crystal ball and predict what would come next for Exchange protection, what would it be?

Join us live on Feb 7th to learn what Ernes Taljec (a data architect from Presidio) and I think is coming next for Exchange 2013 and beyond. We will talk about the evolution of Microsoft’s built-in protection and complementary EMC technologies, and take a look into the future!

Also – because everyone likes free stuff – we will pick one person from the audience to win an iPad 3 live during the event.

Sign up today!

  • Webinar Date:    Feb 7th at 12:00 PM EST
  • Webinar Link:    https://www.brighttalk.com/webcast/7397/65443
  • Presenters:          Brian Henderson, AppSync Technical Marketing Manager,  EMC & Ernes Taljec, Data Center Architect, Presidio
  • Duration:               60 mins

What’s new for Exchange Server 2013 Database Availability Groups?

By: Brien M. Posey

When Microsoft created Exchange Server 2010, it introduced the concept of Database Availability Groups. Database Availability Groups are the mechanism that makes it possible for a mailbox database to fail over from one mailbox server to another. In retrospect, Database Availability Groups worked really well for organizations whose operations were confined to a single data center. Although it was possible to stretch a Database Availability Group across multiple data centers, performing site-level failovers was anything but simple. Microsoft has made a number of enhancements to Database Availability Groups in Exchange Server 2013. Some of these enhancements are geared toward making site-level failovers less complex.

Site Resilience

Although site resilience could be achieved using Exchange Server 2010, there were a number of factors preventing organizations from achieving the level of resilience that they might have liked. For starters, site-level resilience was something that had to be planned before Exchange Server 2010 was put into place. One of the reasons for this was that all of the Database Availability Group members had to belong to the same Active Directory domain. This meant that site resilience could only be achieved if the Active Directory domain spanned multiple data centers.

Another major limitation was that Microsoft designed Exchange Server 2010 so that a simple WAN failure would not trigger a site failover. One of the ways that they did this was to make it so that the failover process had to be initiated manually. Furthermore, the primary data center had to contain enough Database Availability Group members to allow the site to retain quorum in the event that the WAN link failed. Because of these limitations, there was really no such thing as true site resilience in Exchange Server 2010.

In Exchange Server 2013, it is finally possible to achieve full site resilience – with enough planning. As was the case with Exchange Server 2010, a DAG can only function if it is able to maintain quorum. Maintaining quorum means that at least half plus one of the DAG members are online and able to communicate with one another at any given time. This can be accomplished by placing an equal number of DAG members in each datacenter and then placing a witness server into a remote location that is accessible to each datacenter.
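
For reference, the witness placement described above comes down to a couple of DAG properties; here is a minimal sketch using the standard Exchange Management Shell cmdlets, with server and path names as examples.

    # Point the DAG at a witness server reachable from both datacenters (names and path are examples)
    Set-DatabaseAvailabilityGroup -Identity "DAG1" -WitnessServer "FS01" -WitnessDirectory "C:\DAG1"

    # Review membership and witness status when doing the quorum math
    Get-DatabaseAvailabilityGroup -Identity "DAG1" -Status |
        Format-List Name, Servers, WitnessServer, WitnessShareInUse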

This approach will allow a datacenter level failover in the event of a major outage or a WAN failure. It is worth noting however, that this approach to site resiliency still does not achieve fully comprehensive protection for mailbox databases because situations could still occur that cause the DAG to lose quorum. Imagine for example, that a WAN link failure occurs between two datacenters. In that situation, whichever datacenter is still able to communicate with the witness server will retain quorum. Now, imagine that one of the DAG members in this datacenter were to fail before the WAN link is fixed. This failure would cause the datacenter to lose quorum, resulting in a DAG failure.

Lagged Copies

Another major change that Microsoft has made to DAGs has to do with the way that lagged copies work. Lagged copies are database replicas for which transaction log replay is delayed so as to facilitate point in time recovery.
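
Mechanically, a lagged copy is just a regular database copy created with a replay lag; a minimal Exchange Management Shell sketch, with database and server names as examples:

    # Add a database copy whose log replay is delayed by seven days (names are examples)
    Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX03" -ReplayLagTime 7.00:00:00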

In Exchange 2013, Microsoft has built some intelligence into lagged copies to detect and correct instances of corruption or low disk space. It is worth noting however, that in these types of circumstances you could end up losing the lag.

One of the big problems with lagged copies in Exchange 2010 was the fact that transaction logs had to be stored for the full lag period and could grow to a considerable size. As such, there have been instances in which organizations underestimated the volume of transaction logs that would be stored for lagged copies, resulting in the mailbox server running out of disk space.

Exchange 2013 monitors the available disk space. If the volume containing the transaction logs begins to run short on space, then Exchange will initiate an automatic play down, which commits the contents of the transaction logs to the lagged copy so that disk space can be freed on the transaction log volume.

Exchange uses a similar log file play down if it detects a corrupt database page. According to Microsoft however, “Lagged copies aren’t patchable with the ESE single page restore feature. If a lagged copy encounters database page corruption (for example, a -1018 error), it will have to be reseeded (which will lose the lagged aspect of the copy)”.

Another change that Microsoft has made to lagged copies is that it is now possible to activate a lagged copy and bring it to a current state, even if the transaction logs are not available. This is due to a new feature called the Safety Net. The Safety Net replaces the transport dumpster. Its job is to store copies of every message that has been successfully delivered to an active mailbox database. If a lagged database copy needs to be activated and the transaction logs are not available, Exchange can use the Safety Net’s contents to bring the database into a current state.

Public Folders

One of the most welcome changes that Microsoft has made to DAGs is that it is now possible to use DAGs to protect your public folders. In Exchange Server 2010, DAGs could only protect mailbox databases, not public folder databases. Public folder databases no longer exist in Exchange 2013. Instead, public folders are stored in mailbox databases, which makes it possible to use DAGs to protect public folders.

Conclusion

The most significant changes that Microsoft has made to DAGs include the ability to fail over at the datacenter level, the ability to use DAGs to provide high availability for public folders, and automated maintenance for lagged copies. In addition, Microsoft has also built in some minor improvements such as automatic database reseeding after a storage failure, and automated notification in situations in which only a single healthy copy of a DAG exists.

What’s New for Exchange 2013 Storage?

By: Brien M. Posey

Many of Exchange Server 2013’s most noteworthy improvements are behind-the-scenes architectural improvements rather than new product features. Perhaps nowhere is this more true than in Exchange Server’s storage architecture. Once again Microsoft invested heavily in Exchange’s storage subsystem in an effort to drive down overall storage costs while at the same time improving performance and reliability. This article outlines some of the most significant storage-related improvements in Exchange Server 2013.

Lower IOPS on Passive Database Copies

In failure situations, failover from an active mailbox database to a passive database copy needs to happen as quickly as possible. In Exchange Server 2010, Microsoft expedited the failover process by maintaining a low checkpoint depth (5 MB) on the passive database copy. Microsoft’s reason for doing this was that failing over from an active to a passive database copy required the database cache to be flushed. Having a large checkpoint depth would have increased the amount of time that it took to flush the cache, thereby causing the failover process to take longer to complete.

The problem was that maintaining a low checkpoint depth came at a cost. The server hosting the passive database copy had to do a lot of work in terms of pre-read operations in order to keep pace with demand while still maintaining a minimal checkpoint depth. The end result was that a passive database copy produced nearly the same level of IOPS as its active counterpart.

In Exchange Server 2013, Microsoft made a simple decision that greatly reduced IOPS for passive database copies, while also reducing the database failover time. Because much of the disk I/O activity on the passive database copy was related to maintaining a low checkpoint depth and because the checkpoint depth had a direct impact on the failover time, Microsoft realized that the best way to improve performance was to change the way that the caching process worked.

In Exchange 2013, the cache is no longer flushed during a failover. Instead, the cache is treated as a persistent object. Because the cache no longer has to be flushed, the size of the cache has little bearing on the amount of time that it takes to perform the failover. As such, Microsoft designed Exchange 2013 to have a much larger checkpoint depth (100 MB). Having a larger checkpoint depth means that the passive database doesn’t have to work as hard to pre-read data, which drives down the IOPS on the passive database copy by about half. Furthermore failovers normally occur in about 20 seconds.

Although the idea of driving down IOPS for passive database copies might sound somewhat appealing, some might question the benefit. After all, passive database copies are not actively being used, so driving down the IOPS should theoretically have no impact on the end user experience.

One of the reasons why reducing the IOPS produced by passive database copies is so important has to do with another architectural change that Microsoft has made in Exchange Server 2013. Unlike previous versions of Exchange Server, Exchange Server 2013 allows active and passive database copies to be stored together on the same volume.

If an organization does choose to use a single volume to store a mixture of active and passive databases then reducing the IOPS produced by passive database will have a direct impact on the performance of active databases.

This new architecture also makes it easier to recover from disk failures within a reasonable amount of time. Exchange Server 2013 supports volume sizes of up to 8 TB. With that in mind, imagine what would happen if a disk failed and needed to be reseeded. Assuming that the majority of the space on the volume was being used, it would normally take a very long time to regenerate the contents of the failed disk.

Part of the reason for this has to do with the sheer volume of data that must be copied, but there is more to it than that. Passive database copies are normally reseeded from an active database copy. If all of the active database copies reside on a common volume, then that volume’s performance will be the limiting factor affecting the amount of time that it takes to rebuild the failed disk.

In Exchange Server 2013 however, volumes can contain a mixture of active and passive database copies. This means that the active database copies likely reside on different volumes (typically on different servers), and that the data necessary for rebuilding the failed volume will be pulled from a variety of sources. As such, the data source is no longer the limiting factor in the amount of time that it takes to reseed the disk. Assuming that the disk that is being reseeded can keep pace, the reseeding process can occur much more quickly than it would if all of the data were coming from a single source.

In addition, Exchange Server 2013 periodically performs an integrity check of passive database copies. If any database copy is found to have a status of FailedAndSuspended, Exchange will check to see whether any spare disks are available. If a valid spare is found, Exchange Server will automatically remap the spare and initiate an automatic seeding process.
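
This automatic reseeding behavior (AutoReseed) is driven by a handful of DAG properties that tell Exchange where the database and spare-volume mount points live; a minimal sketch with example paths is below (the mount-point folder structure itself also has to be laid out to match).

    # AutoReseed settings on the DAG (paths and copies-per-volume value are examples)
    Set-DatabaseAvailabilityGroup -Identity "DAG1" `
        -AutoDagVolumesRootFolderPath "C:\ExchangeVolumes" `
        -AutoDagDatabasesRootFolderPath "C:\ExchangeDatabases" `
        -AutoDagDatabaseCopiesPerVolume 4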

Conclusion

As you can see, Microsoft has made a tremendous number of improvements with the way that Exchange Server manages storage in DAG environments. Passive database copies generate fewer IOPS, and failovers happen more quickly than ever before. Furthermore, Exchange Server can even use spare disks to quickly recover from certain types of disk failures.

The Pros and Cons of Using Database Availability Groups

Guest Post By: Brien M. Posey

Database Availability Groups (DAGs) are Microsoft’s go-to solution for providing high availability for Exchange 2010 (and Exchange 2013) mailbox servers. Even so, it is critically important for administrators to consider whether or not a DAG is the most appropriate high availability solution for their organization.

The primary advantage offered by DAGs is that of high availability for mailbox servers within an Exchange Server organization. DAGs make use of failover clustering. As such, the failure of a DAG member results in any active mailbox databases failing over to another DAG member.

At first this behavior likely seems ideal, but depending on an organization’s needs DAGs can leave a lot to be desired. One of the first considerations that administrators must take into account is the fact that DAGs only provide high availability for mailbox databases. This means that administrators must find other ways to protect the other Exchange Server roles and any existing public folder databases. Incidentally, Exchange Server 2013 adds high availability for public folders through DAGs, but DAGs cannot be used to protect any additional Exchange Server components.

In spite of the limitations that were just mentioned, DAGs have historically proven to be an acceptable high availability solution for medium-sized organizations. While it is true that DAGs fail to protect the individual server roles, Exchange stores all of its configuration information in Active Directory, which means that entire servers can be rebuilt by following these steps (a command-line sketch follows the list):

  1. Reset the Active Directory account for the failed server (reset the account, do not delete it).
  2. Install Windows onto a new server and give it the same name as the failed server.
  3. Install any Windows patches or service packs onto the new server that were running on the failed server.
  4. Join the server to the Active Directory domain.
  5. Create an Exchange Server installation DVD that contains the same service pack level that was used on the failed server.
  6. Insert the Exchange installation media that you just created and run Setup /m:RecoverServer
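
For steps 4 and 6, the commands look roughly like this; the domain name is an example, and the license-acceptance switch shown follows the Exchange 2013 setup convention.

    # Step 4: join the rebuilt server to the domain (domain name is an example)
    Add-Computer -DomainName "corp.contoso.com" -Restart

    # Step 6: run recovery setup from the Exchange installation media you created
    .\Setup.exe /m:RecoverServer /IAcceptExchangeServerLicenseTerms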

The method outlined above can be used to recreate a failed Exchange Server. The only things that are not recreated using this method are the databases, but the databases are protected by DAGs. As such, these two mechanisms provide relatively comprehensive protection against a disaster. Even so, the level of protection afforded by these mechanisms often proves to be inadequate for larger organizations.

One of the reasons for this has to do with the difficulty of rolling a database back to an earlier point in time. Microsoft allows DAG members to be configured as lagged copies. This means that transaction logs are not committed to the lagged copy as quickly as they would otherwise be. This lag gives administrators the ability to activate an older version of the database if necessary. The problem is that activating a lagged copy is not an intuitive process. Furthermore, activating a lagged copy always results in data loss.

The other reason why DAGs are not always an adequate solution for larger organizations has to do with the difficulty of providing off-site protection. Exchange Server 2010 supports the creation of stretched DAGs, which are DAGs that span multiple datacenters. Although being able to fail over to an off-site datacenter sounds like a true enterprise-class feature, the reality of the situation is that architectural limitations often prevent organizations from being able to achieve such functionality.

The most common barriers to implementing a stretched DAG are network latency and Active Directory design. Stretched DAGs are only supported on networks with a maximum round trip latency of 500 milliseconds. Additionally, DAGs cannot span multiple Active Directory domains, which means that the domain in which the DAG members reside must span datacenters.

Even if an organization is able to meet the criteria outlined above, they must construct the DAG in a way that will ensure continued functionality both in times of disaster and during minor outages. In order for a DAG to function, it must maintain quorum. This means that at least half plus one of the total number of existing DAG members must be functional in order for the DAG to remain online. This requirement is relatively easy to meet in a single datacenter deployment, but is quite challenging in stretched DAG environments.

One of the issues that must be considered when building a stretched DAG is that Exchange cannot tell the difference between a WAN failure and the failure of the Exchange servers on the other side of the WAN link. As such, the primary site must have enough DAG members to maintain quorum even in the event of a WAN failure. Ideally, the primary site should have enough DAG members to retain quorum during a WAN failure and still be able to absorb the failure of at least one member in the primary site.

Another problem with stretched DAGs is that the requirement for the primary site to have enough DAG members to always maintain quorum means that the DAG will never fail over to the remote site automatically, even if the entire primary datacenter is destroyed. The secondary site lacks enough DAG members to achieve quorum without an administrator manually evicting nodes from the DAG.

As you can see, DAGs tend to deliver an acceptable level of functionality in single datacenter environments, but the limitations that are inherent in stretched DAGs make them impractical for use in multi-datacenter deployments. Larger organizations are typically better off implementing other types of redundancy rather than depending on DAGs. One possible solution for example is to virtualize an organization’s Exchange servers and then replicate the virtual machines to a standby datacenter. This approach will usually make the process of failing over to an alternate datacenter much simpler and more efficient.

An Up Close Look at the Volume Shadow Copy Services

Guest Post by: Brien M. Posey

One of the big problems with backing up database applications is that oftentimes the data is modified before the backup can complete. Needless to say, modifying data while a backup is running can result in a corrupt backup. In an effort to keep this from happening, Microsoft uses the Volume Shadow Copy Services (VSS) to make sure that database applications are backed up in a consistent state.   This article explains how the Volume Shadow Copy Services work.

The VSS Components


There are four main components that make up the Volume Shadow Copy Services.

These components include:

  • the VSS service
  • the VSS requester
  • the VSS writer
  • the VSS provider

These components work together to provide the VSS backup capabilities.

  • The VSS Service component could best be thought of as the centralized operating system service that ties the various VSS components together. The VSS service ensures that the VSS requestor, VSS writer, and VSS provider are all able to communicate with one another.
  • At a high level, the VSS requestor generally refers to the backup software. The VSS requestor is the component that asks the VSS service to create a shadow copy. The requestor itself is built into the backup software. This is true for Windows Server Backup, System Center Data Protection Manager, and third-party backup applications such as EMC’s AppSync.
  • The third component is the VSS provider. The VSS provider links the VSS service to the hardware on which the shadow copy will be created. The Windows Server operating system includes a built-in VSS provider. This provider exists at the software level and allows Windows to interact with the server’s storage subsystem. A VSS provider can exist at the hardware level as well. A hardware-level provider offloads shadow copy operations to the storage hardware so that the server operating system does not have to carry the workload. However, when a hardware-level VSS provider is used, there is usually a driver that is required to make Windows aware of the storage hardware’s capabilities.
  • The fourth component of the Volume Shadow Copy Services is the VSS writer. The VSS writer’s job is to ensure that data is backed up in a consistent manner. It is important to understand that in most cases there are a number of different VSS writers that work together in parallel to ensure that various types of data are backed up properly. Server applications such as Exchange Server and SQL Server include their own VSS writers that plug into the operating system’s existing VSS infrastructure and allow the application to be backed up.

Creating a Volume Shadow Copy

The process of creating a volume shadow copy begins when the requestor (which is usually built into backup software) notifies the Volume Shadow Copy Service that a shadow copy needs to be created. When the Volume Shadow Copy Service receives this request, it in turn notifies all of the individual VSS writers of the impending shadow copy.

When the individual writers receive the request, they take steps to place data into a consistent state that is suitable for shadow copy creation. The exact tasks that the writer performs varies from one application to another, but generally writers prepare by flushing caches and completing any database transactions that are currently in progress. If the application makes use of transaction logs, the logs may be committed as a part of the process as well.

After all of the VSS writers have prepared for the shadow copy, the Volume Shadow Copy Service instructs the writers to freeze their corresponding applications. This prevents write operations from occurring for the duration of the shadow copy (which takes less than ten seconds to complete).

When all of the applications have been frozen, the Volume Shadow Copy Service instructs the provider to create the shadow copy. When the shadow copy creation is complete, the provider notifies the Volume Shadow Copy Service of the completion. At this point, the Volume Shadow Copy Service once again allows file system I/O and instructs the writers to resume normal application activity.
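
You can watch this requester/writer/provider handshake for yourself using the in-box tooling on Windows Server; the volume letter below is just an example, and note that vssadmin create shadow is only available on server editions.

    vssadmin create shadow /for=C:     # ask the in-box requester to create a shadow copy of C:
    vssadmin list shadows              # confirm the new shadow copy exists
    vssadmin list writers              # check that the writers all report a stable state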

The shadow copy creation process revolves largely around the task of coordinating VSS writers so that the various components of the operating system and any applications that are running on the server can be backed up in a reliable and consistent manner. Even though the individual writers do most of the heavy lifting, it is the provider that ultimately creates the shadow copy.

There are actually several different methods that can be used for shadow copy creation. The actual method used varies from one provider to the next. As you may recall, providers can exist as an operating system component, or it can exist at the hardware level. The shadow copy creation process that is used varies depending on the type of provider that is being used. There are three main methods that providers typically use for creating shadow copies.

The first method is known as a complete copy. A complete copy is usually based on mirroring. A mirror set is created between the original volume and the shadow copy volume. When the shadow copy creation process is complete, the mirror is broken so that the shadow copy volume can remain in a pristine state as it existed at the point in time when it was created.

The second method that is sometimes used for shadow copy creation is known as Redirect on Write. Redirect on Write is based on the use of differencing disks. The shadow copy process designates the original volume as read only so that it can be kept in a pristine state as it existed at the point in time when the shadow copy was created. All future write operations are redirected to a differencing disk. This method is also sometimes referred to as snapshotting.

The third method that providers sometimes use for shadow copy creation is known as copy on write. This is a block-level operation that is designed to preserve storage blocks that would ordinarily be overwritten. When a write operation occurs, any blocks that would be overwritten are copied to the shadow copy volume prior to the write operation.

Conclusion

As you can see, the process of creating a shadow copy is relatively straightforward. You can gain some additional insight into the process by opening a Command Prompt window and entering the following command:

VSSADMIN List Writers

This command displays all of the VSS writers that are present on the system and also shows you each writer’s status.

Hope you found this helpful!

Why Storage Networks Can Be Better Than Direct Attached Storage for Exchange

Guest Post By: Brien M. Posey


Of all the decisions that must be made when planning an Exchange Server deployment, perhaps none are as critical as deciding which type of storage to use. Exchange Server is very flexible with regard to the types of storage that it supports. However, some types of storage offer better performance and reliability than others.

When it comes to larger scale Exchange Server deployments, it is often better to use a Storage Area Network than it is to use direct attached storage. Storage networks provide a number of advantages over local storage in terms of costs, reliability, performance, and functionality.

Cost Considerations

Storage Area Networks have gained something of a reputation for being more expensive than other types of storage. However, if your organization already has a Storage Area Network in place then you may find that the cost per gigabyte of Exchange Server storage is less expensive on your storage network than it would be if you were to invest in local storage.

While this statement might seem completely counterintuitive, it is based on the idea that physical hardware is often grossly underutilized. To put this into perspective, consider the process of purchasing an Exchange mailbox server that uses Direct Attached Storage.

Organizations that choose to use local storage must estimate the amount of storage that will be needed to accommodate Exchange Server databases, plus leave room for future growth. This means making a significant investment in storage hardware. In doing so, an organization purchases the storage space it needs now, but it may also be spending money on storage space that is not immediately needed.

In contrast, Exchange servers that are connected to storage networks can take advantage of thin provisioning. This means that the Exchange Server only uses the storage space that it needs. When a thinly provisioned volume is created, the volume typically consumes less than 1 GB of physical storage space, regardless of the volume’s logical size. The volume will consume physical storage space on an as needed basis as data is written to the volume.

In essence, a thinly provisioned volume residing on a SAN could be thought of as “pay as you go” storage. Unlike Direct Attached Storage, the organization is not forced to make a large up-front investment in dedicated storage that may never be used.
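
The exact mechanics vary from array to array, but the concept is easy to demonstrate even with the in-box Windows Storage Spaces cmdlets (the pool and disk names below are placeholders, and this is just an illustration of thin provisioning in general, not of any particular SAN):

# Create a 2 TB thinly provisioned virtual disk from an existing storage pool.
# The disk advertises its full 2 TB logical size, but physical capacity is
# consumed only as data is actually written to it.
New-VirtualDisk -StoragePoolFriendlyName 'ExchangePool' `
                -FriendlyName 'EX-MDB01' `
                -Size 2TB `
                -ProvisioningType Thin `
                -ResiliencySettingName Mirror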

Reliability

Another advantage to using Storage Area Networks for Exchange Server storage is that when properly constructed, SANs are far more reliable than Direct Attached Storage.

The problem with using Direct Attached Storage is that there are a number of ways in which the storage can become a single point of failure. For example, a disk controller failure can easily corrupt an entire storage array. Although there are servers that have multiple array controllers for Direct Attached Storage, lower-end servers are often limited to a single array controller.

Some Exchange mailbox servers implement Direct Attached Storage through an external storage array. Such an array is considered to be a local component, but makes use of an external case as a way of compensating for the lack of drive bays within the server itself. In these types of configurations, the connectivity between the server and external storage array can become a single point of failure (depending on the hardware configuration that is used).

When SAN storage is used, potential single points of failure can be eliminated through the use of multipath I/O. The basic idea behind multipath I/O is that fault tolerance can be achieved by providing multiple physical paths between a server and a storage device. If for example an organization wanted to establish fault tolerant connectivity between an Exchange Server and SAN storage, they could install multiple Fibre Channel Host Bus Adapters into the Exchange Server. Each Host Bus Adapter could be connected to a separate Fibre Channel switch. Each switch could in turn provide a path to mirrored storage arrays. This approach prevents any of the storage components from becoming single points of failure.
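
On the Windows side, the plumbing for this is the in-box Multipath I/O feature. As a rough sketch (the vendor and product IDs are placeholders; use the values your array actually reports), the setup on the Exchange server looks something like this once the HBAs and zoning are in place:

# Install the Multipath I/O feature (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO

# Tell the Microsoft DSM to claim the array's Fibre Channel LUNs
# (VendorId/ProductId are placeholders for whatever your array reports)
New-MSDSMSupportedHW -VendorId 'VENDOR' -ProductId 'PRODUCT'

# Review the hardware IDs MPIO can see and the default load balancing policy
Get-MPIOAvailableHW
Get-MSDSMGlobalDefaultLoadBalancePolicy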

Performance

Although Microsoft has taken measures to drive down mailbox server I/O requirements in the last couple of versions of Exchange Server, mailbox databases still tend to be I/O intensive. As such, large mailbox servers depend on high-performance hardware.

While there is no denying the fact that high-performance Direct Attached Storage is available, SAN storage can potentially provide a higher level of performance due to its scalability. One of the major factors that impacts a storage array's performance is the number of spindles that are used by the array. Direct Attached Storage limits the total number of spindles that can be used. Not only is the number of drive bays in the case a factor, but there is also a limit to the number of disks that can be attached to the array controller.

SAN environments make it possible to create high performance disk arrays by using large numbers of physical disks. Of course capitalizing on the disk I/O performance also means that you must have a high speed connection between the server and the SAN, but this usually isn’t a problem. Multipath I/O allows storage traffic to be distributed across multiple Fibre Channel ports for optimal performance.

Virtualization

Finally, SAN environments are ideal for use in virtualized datacenters. Although neither Microsoft nor VMware requires shared storage for clustered virtualization hosts any longer, using shared storage is still widely considered to be a best practice. SANs make it easy to create cluster shared volumes that can be shared among the nodes in your host virtualization cluster.
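
For example, once a SAN LUN has been presented to every node and added as clustered storage, turning it into a Cluster Shared Volume is a one-liner with the failover clustering cmdlets (the disk name below is a placeholder):

# Convert an available clustered disk into a Cluster Shared Volume
Add-ClusterSharedVolume -Name 'Cluster Disk 3'

# Confirm the CSV; it will be mounted under C:\ClusterStorage on every node
Get-ClusterSharedVolume | Select-Object Name, State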

Conclusion

Exchange mailbox servers are almost always considered to be mission critical. As such, it makes sense to invest in SAN storage for your Exchange Server since it can deliver better performance and reliability than is possible with Direct Attached Storage.

3 Benefits of Running Exchange Server in a Virtualized Environment

Guest post by: Brien M. Posey

One of the big decisions that administrators must make when preparing to deploy Exchange Server is whether to run Exchange on physical hardware, virtual hardware, or a mixture of the two. Prior to the release of Exchange Server 2010 most organizations chose to run Exchange on physical hardware. Earlier versions of Exchange mailbox servers were often simply too I/O intensive for virtual environments. Furthermore, it took a while for Microsoft’s Exchange Server support policy to catch up with the virtualization trend.

Today these issues are not the stumbling blocks that they once were. Exchange Server 2010 and 2013 are far less I/O intensive than their predecessors. Likewise, Exchange Server is fully supported in virtual environments. Of course administrators must still answer the question of whether it is better to run Exchange Server on physical or on virtual hardware.

Typically there are far greater advantages to running Exchange Server in a virtual environment than running it in a physical environment. Virtual environments can help to expedite Exchange Server deployment, and they often make better use of hardware resources, while also offering some advanced protection options.

Improved Deployment

At first the idea that deploying Exchange Server in a virtual environment is somehow easier or more efficient might seem a little strange. After all, the Exchange Server setup program works in exactly the same way whether Exchange is being deployed on a physical or a virtual server. However, virtualized environments provide some deployment options that simply do not exist in physical environments.

Virtual environments make it quick and easy to deploy additional Exchange Servers. This is important for any organization that needs to quickly scale their Exchange organization to meet evolving business needs. Virtual environments allow administrators to build templates that can be used to quickly deploy new servers in a uniform way.

Depending upon the virtualization platform that is being used, it is sometimes even possible to set up a self-service portal that allows authorized users to deploy new Exchange Servers with only a few mouse clicks. Because the servers are based on preconfigured templates, they will already be configured according to the corporate security policy.
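
What that looks like depends on the hypervisor and management stack you use, but as a simple Hyper-V sketch (all names and paths are placeholders), a new Exchange server VM can be stamped out from a sysprepped template disk in a few commands:

# Copy a sysprepped template disk for the new VM (paths and names are examples)
Copy-Item -Path 'D:\Templates\Exchange-Template.vhdx' -Destination 'D:\VMs\EX02\EX02.vhdx'

# Create and start the VM using the copied disk
New-VM -Name 'EX02' `
       -MemoryStartupBytes 16GB `
       -VHDPath 'D:\VMs\EX02\EX02.vhdx' `
       -SwitchName 'Production'

Start-VM -Name 'EX02'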

Hardware Resource Allocation

Another advantage that virtualized environments offer over physical environments is that virtual environments typically make more efficient use of server hardware. In virtual environments, multiple virtualized workloads share a finite pool of physical hardware resources. As such, virtualization administrators have gotten into the habit of using the available hardware resources efficiently and making every resource count.

Of course it isn’t just these habits that lead to more efficient resource usage. Virtualized environments contain mechanisms that help to ensure that virtual machines receive exactly the hardware resources that are necessary, but without wasting resources in the process. Perhaps the best example of this is dynamic memory.

The various hypervisor vendors each implement dynamic memory in their own way. As a general rule, however, each virtual machine is assigned a certain amount of memory at startup. The administrator also assigns maximum and minimum memory limits to the virtual machines. This allows the virtual machines to claim the memory that they need, but without consuming an excessive percentage of the server's overall physical memory. When memory is no longer actively needed by a virtual machine, that memory is released so that it becomes available to other virtual machines that are running on the server.
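
In Hyper-V, for instance, those startup, minimum, and maximum values map directly onto the Set-VMMemory cmdlet (the VM name and the figures below are just examples):

# Enable dynamic memory: the VM starts with 8 GB, can shrink to 4 GB,
# and can grow up to 16 GB as its workload demands
Set-VMMemory -VMName 'EX01' `
             -DynamicMemoryEnabled $true `
             -StartupBytes 8GB `
             -MinimumBytes 4GB `
             -MaximumBytes 16GB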

Although mechanisms such as dynamic memory can certainly help a virtual machine to make the most efficient use possible of physical hardware resources, resource usage can be thought of in another way as well.

When Exchange Server is deployed onto physical hardware, all of the server's resources are dedicated to running the operating system and Exchange Server. While this may initially sound desirable, there are problems with it when you consider hardware allocation from a financial standpoint.

In a physical server environment, the hardware must be purchased up front. The problem with this is that administrators cannot simply purchase the resources that Exchange Server needs based on current usage. Workloads tend to increase over time, so administrators must typically purchase more memory, CPU cores, and faster disks than are currently needed. These resources are essentially wasted until the day that the Exchange Server workload grows to the point that they are suddenly needed. In a virtual environment this is simply not the case. Whatever resources are not needed by a virtual machine can be put into a pool of physical resources that is accessible to other virtualized workloads.

Protection Options

One last reason why it is often more beneficial to operate Exchange Server in a virtual environment is that virtual environments provide certain protection options that are not natively available with Exchange Server.

Perhaps the best example of this is failover clustering. Exchange Server offers failover clustering in the form of Database Availability Groups. The problem is that Database Availability Groups only protect the mailbox server role. Exchange administrators must look for creative ways to protect the remaining server roles against failure. One of the easiest ways to achieve this protection is to install Exchange Server onto virtual machines. The underlying hypervisor can be clustered in a way that allows virtual machines to fail over from one host to another if necessary. Such a failover can be performed regardless of the limits of the operating system or application software that might be running within individual virtual machines. In other words, virtualization allows you to receive the benefits of failover clustering for Exchange server roles that don’t normally support clustering.
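
In a Hyper-V failover cluster, for example, making an existing Exchange VM highly available takes a single cmdlet; the cluster can then restart or live migrate the VM to another node when a host fails, regardless of which Exchange role runs inside it (the VM name is a placeholder):

# Make an existing virtual machine a highly available clustered role
Add-ClusterVirtualMachineRole -VMName 'EX-CAS01'

# List the clustered VM roles, their owning nodes, and their states
Get-ClusterGroup | Where-Object { $_.GroupType -eq 'VirtualMachine' } |
    Select-Object Name, OwnerNode, State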

Conclusion

As you can see, there are a number of benefits to running Exchange Server in a virtual environment. In almost every case, it is preferable to run Exchange Server on virtual hardware over physical hardware.

What’s new in Exchange 2013, 2 Webcasts, and More!

Next week I’ll be on a couple of webcasts related to Exchange server protection:

In these webcasts, we'll blend best-practices content with information about some of our latest products. I promise not to waste your time!

Webcast 1:  Introducing EMC AppSync: Advanced Application Protection Made Easy for VNX Platforms

In this webinar, we’ll describe how to setup a protection service catalog for any company and how easy EMC AppSync makes using snapshot and continuous data protection technology on a VNX storage array… As a bonus we will show a cool demo.

Sign up here.

Webcast 2: Protecting Exchange from Disaster: The Choices and Consequences

In this webcast, we'll explore the 3 common Exchange DR options available to customers with an advanced storage array like an EMC VNX. One of the highlights is that I will be joined by independent Microsoft guru Brien Posey, who has the lowdown on what's new in Exchange 2013 related to storage and DR enhancements and will describe how much changes in Exchange 2013 and how much stays the same. Oh, and of course we will have a cool demo for this one too!

Sign up here.

Revenge of the (SharePoint) BLOB and Backronyms

BLOBs stored in SQL databases can be horrific. Oh, THOSE kinds of BLOBs.

I was on a call with a customer this week who said they were reaching SharePoint content database file size limits (100GB) and they needed to get data out of SQL Server – bad.

But first let’s take a quick step back.

What Is a BLOB? And What Is a Backronym?

A blob (alternately known as a binary large object, basic large object, BLOB, or BLOb) is a collection of binary data stored as a single entity in a database management system. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob. Database support for blobs is not universal.

Blobs were originally just amorphous chunks of data invented by Jim Starkey at DEC, who describes them as “the thing that ate Cincinnati, Cleveland, or whatever”. Later, Terry McKiever, a marketing person for Apollo, felt that it needed to be an acronym and invented the backronym Basic Large Object. Then Informix invented an alternative backronym, Binary Large Object.  [Wikipedia]

Problems with BLOBs?

  • Pushes content databases toward the recommended file size limit (100GB)
  • Poor performance for large files, especially write-intensive workloads
  • Long upload times for large files
  • Can't easily and economically scale
  • Poor asset utilization of SQL Servers

Microsoft says BLOBs are bad:

“Typically, as much as 80 percent of data for an enterprise-scale deployment of SharePoint Foundation consists of file-based data streams that are stored as BLOB data. These BLOB objects comprise data associated with SharePoint files. However, maintaining large quantities of BLOB data in a SQL Server database is a suboptimal use of SQL Server resources. You can achieve equal benefit at lower cost with equivalent efficiency by using an external data store to contain BLOB data.”    Source: http://msdn.microsoft.com/en-us/library/bb802976.aspx

Up to 95% of your SharePoint data (stored in SQL content databases) is BLOB data!
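
If you want to see where your own farm stands relative to that 100GB guidance, a quick check from the SharePoint Management Shell will list each content database and its current size (a minimal sketch, nothing EMC-specific):

# List each content database with its current size in GB, largest first
Get-SPContentDatabase |
    Select-Object Name,
                  @{ Name = 'SizeGB'; Expression = { [math]::Round($_.DiskSizeRequired / 1GB, 1) } } |
    Sort-Object SizeGB -Descending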

The Solution: Get BLOBs out of SQL

Keep the metadata in, and get the BLOBs out!

The solution we've designed is perfect for those of you who are reaching SharePoint content database size limits, or who just want to run things better and more efficiently.

You can easily get 90% of SharePoint content data out of SQL Server and onto less expensive tiers of disk.
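
The webcast covers the EMC-specific pieces, but for a generic picture of how BLOBs get externalized, SharePoint's Remote BLOB Storage (RBS) hooks are enabled per content database roughly like this (a sketch only; it assumes an RBS provider has already been installed for the database, and the database name is a placeholder):

# Enable Remote BLOB Storage on a content database (a provider must already be installed)
$cdb = Get-SPContentDatabase -Identity 'WSS_Content'
$rbs = $cdb.RemoteBlobStorageSettings

$rbs.Installed()                                       # True if an RBS provider is present
$rbs.Enable()                                          # turn RBS on for this content database
$rbs.SetActiveProviderName($rbs.GetProviderNames()[0])

# New BLOBs written to this database now land in the external BLOB store
$rbs.Enabled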

Join us for a live webcast that will take you through exactly how it all works – on October 10th.

Please sign up today to learn more; even if you can't make it, you will still receive the recorded webcast afterwards.

2 Great AppSync Exchange 2010 Single Item Restore Demos

Our friend Ernes Taljic from Presidio launched the Presidio Technical Blog "Converging Clouds" with a post about EMC's new replication management software, EMC AppSync.

He also made two excellent videos that showcase virtualized Exchange 2010 Protection and Single Item Restore with RecoverPoint and VNX Snapshots – all managed by AppSync.

Enjoy:

AppSync and ItemPoint with VNX Snapshots

AppSync and ItemPoint with RecoverPoint