Category Archives: Virtualization

Is a SAN Better, Cheaper, and More Secure Than Office 365?

SANs – especially SANs from market leader EMC – are always under attack from companies wishing to cash in on a piece of the rising data growth across all industries and market segments.

  • Some say DAS is best.
  • Some say keep the storage in the servers.
  • Some say you should build your own shared array.

But when it comes to Microsoft environments, it often helps to have independent experts investigate the matter to get a fresh perspective.

In a recent study, Wikibon determined that a shared storage infrastructure (powered by EMC’s next generation VNX2 storage systems) can match the prices offered by Office 365 public cloud and offer more capabilities, more security, and more control.

This, however, assumes a completely consolidated approach for deploying multiple mixed workloads such as Exchange, SharePoint, SQL Server, and Lync – where the VNX2 really shines.  We use FAST VP, FAST Cache, and a combination of drive types to achieve the best balance of performance and cost.

Are you looking for more information about deploying Microsoft applications on VNX?    Definitely check here for the most recent best practices guides!

Also check out the recent webinar I did with James Baldwin, who leads our Proven Solutions EMC/Microsoft engineering team.  We had a lot of fun doing this one – hope you enjoy it.

Building a Microsoft Azure Private Cloud – Powered by EMC VNX Storage

Recently EMC held a Microsoft Summit, where a lot of the Microsoft-savvy engineers and business folks within EMC get together to share their stories and lessons learned.

One of the highlights of these sessions is always the work of Txomin Barturen, our resident Microsoft expert in EMC’s Office of the CTO.

His blog can be found here:  http://datatothepeople.wordpress.com/
(Bookmark it, and look for videos and blog posts soon to follow)

This year his session focused on our work within Microsoft Hyper-V, Microsoft System Center, Private Clouds and the powerful Azure Pack for Windows.

Sure, everyone knows about EMC’s affinity towards VMware (EMC’s VNX was rated best storage for VMware 3 years in a row), but many don’t know how focused we are on Hyper-V and helping customers power their Microsoft Cloud.

EMC is committed to becoming the best storage for enterprises and service providers who wish to deploy private and/or public clouds for their customers – on VMware or Hyper-V.

Evidence of EMC’s Microsoft Private Cloud Work

To get to this stage, we’ve had to do a lot of work.

And beyond our engineering ability, we also showcased our agility.

  • VNX was the first storage platform to support SMB 3.0 (VNX & VNXe)
  • VNX was the first storage platform to demonstrate ODX (TechEd 2012)
  • Our Elab aggressively submits Windows Logo certifications (EMC currently has the most Windows 2012 R2 certs)

Where do you find these materials? 

We’ve built Microsoft Private Cloud (Proven) solutions on VNXe, VNX & VMAX leveraging SMI-S / PowerShell that can be found and delivered through EMC’s VSPEX program or as part of our Microsoft Private Cloud Fast Track solutions (which are Microsoft validated, ready-to-run reference architectures).  You can find more about this work here.

Getting to a More Agile Cloud

Txomin’s presentation talked about how customers want all that the Azure public cloud model offers in terms of agility and management, but without the loss of control – in other words, an on-premises cloud deployment.  They want to offer *-as-a-Service models, elastic scale, and a self-service model for tenants, but without the SLA risks that are out of IT’s control when deploying on a full public cloud.

The Middle Ground:  The Azure Pack for Windows

Microsoft is putting together some really interesting cloud management software with the Azure Pack for Windows.  The Azure Pack for Windows is a free downloadable set of services that offers the same interface as the Azure public cloud, but provides more control for companies that are not willing to deploy on the public cloud due to performance, reliability, security, or compliance concerns.


Since we’ve done all of the baseline private cloud work, now we can use these as a foundation for building a Microsoft Private Cloud on-premises with a VNX storage platform using the new Azure Pack for Windows.

Built atop the new Windows Server 2012 R2 platform, the Windows Azure Pack (WAP) enables public-cloud-like management and services without the risk.  This layers right on top of EMC’s Windows Fast Track & Private Cloud offerings without any additional technology required.

Although it offers a limited subset of services, we expect that Microsoft will introduce more services as customers adopt this new model.

One of the first use cases Microsoft is focusing on is service providers who want better management for their Microsoft clouds.  This will allow for new integrations and capabilities that weren’t previously available.  IT staff can treat business units as tenants, offer pre-configured solutions via the Gallery, and enable self-service management by tenants (delegated administration).  They can also view utilization and reporting through System Center and 3rd-party integrations, which are fully extensible through Operations Manager, Orchestrator, and Virtual Machine Manager.

This is truly the future of Microsoft’s virtualization strategy, and EMC is right there to enable customers to build the best, most reliable, secure, manageable private cloud.

But what about Data Protection?

Well, our colleagues in the Backup and Recovery Systems division of EMC are no slackers.  They saw the same trends and are eager to help customers stay protected as they move to the cloud.

In this demo Alex Almeida, Sr. Technical Marketing Manager for EMC’s Backup and Recovery Systems demonstrates how the EMC Data Protection Suite provides full support for Windows Azure Private Cloud Backup and Recovery:

So let me correct my statement…  EMC is right there to enable customers to build the best, most reliable, secure, manageable private cloud – AND PROTECT IT.

EMC’s VNX = Award Winning storage for Microsoft environments

Microsoft’s TechEd 2013 is next week, and I’m looking forward to spending time with my longtime industry friends and making some new connections on the show floor in New Orleans.

This year, I’ll attend as part of the Unified Storage Division, and felt I needed to share a little about the success of VNX and VNXe arrays in Microsoft environments:


EMC’s VNX Unified Storage Platform has been recognized with awards from a slew of independent analysts such as Gartner, IDC, and Wikibon, as well as media publications such as ComputerWorld, CRN, and Virtualization Review, thanks to the VNX family’s ability to power mission-critical applications, integrate with virtual environments, and solve SMB IT challenges, among other strengths.  We take pride in being the #1 storage for most Microsoft Windows-based applications.

BUT… DOES  MICROSOFT WINDOWS NEED A SAN?  CAN’T WE DO IT OURSELVES?

Well, after speaking with Windows Server 2012, SQL Server, and EMC customers, partners, and employees, the independent analyst firm Wikibon posted a before-and-after comparison model based on an enterprise customer environment. The conclusion: the total cost of bolting together your own solution isn’t worth it.

[Figure: Wikibon before/after Windows storage cost comparison]

The findings showed that by moving from a physical, non-tiered environment to a virtualized environment with flash and tiered storage, SQL Server customers realized a 30% lower overall TCO over a 3-year period, including hardware, software, maintenance, and management costs for their database infrastructure.

The graphic shows that a do-it-yourself approach saves very little if anything in hardware costs and will divert operational effort to build and maintain the infrastructure. Risks and costs are likely to be higher with this approach.

In the end, EMC’s VNX infrastructure was proven to deliver a lower cost and lower risk solution for Windows Server 2012 versus a direct-attached storage (DAS) or JBOD (just a bunch of disks) model.  Full study here.

Video of EMC’s Adrian Simays and Wikibon Analysts discussing these results is here on YouTube.

MICROSOFT INTEGRATIONS AND INNOVATIONS  

EMC’s VNX platform considers Microsoft applications, databases, and file shares to be our sweet spot, as evidenced by our early integration of the latest Windows Server 2012 features that increase performance, efficiency, availability, and simplicity for our joint customers.

Performance, Efficiency, Availability, Simplicity

1. Performance

Within Windows, we were the first storage array to support SMB 3 and ODX Copy Offload (part of SMB 3), enabling large file copies to be handled over the SAN instead of consuming network bandwidth and host CPU cycles.

[Figure: copy performance before and after enabling ODX]

This test highlights the speed difference before (left) and after (right) ODX was implemented. With EMC VNX and ODX enabled, you can accelerate your VM copies by a factor of 7 while reducing server CPU utilization by a factor of 30!
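
If you want to verify whether ODX is actually in play on a given Windows Server 2012 host before running a test like this, a minimal check looks like the sketch below. It assumes the standard registry value Microsoft documents for ODX control; whether your array supports ODX is a separate question.

    # ODX (copy offload) is controlled by the FilterSupportedFeaturesMode registry value:
    # 0 = ODX enabled (the default), 1 = ODX disabled.
    $key  = 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem'
    $mode = (Get-ItemProperty -Path $key -Name FilterSupportedFeaturesMode -ErrorAction SilentlyContinue).FilterSupportedFeaturesMode
    if ($mode -eq 1) {
        Write-Host 'ODX is disabled on this host.'
        # To re-enable it (requires admin rights and an ODX-capable array):
        # Set-ItemProperty -Path $key -Name FilterSupportedFeaturesMode -Value 0
    }
    else {
        Write-Host 'ODX is enabled on this host (or the value is unset, which defaults to enabled).'
    }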

For applications and databases, VNX FAST Cache and FAST VP automatically tune your storage to match your workload requirements, saving up to 80% of the time it would take to manually balance workloads.

The Enterprise Strategy Group (ESG) Lab confirmed that Windows Server 2012 with Hyper-V, Microsoft SQL Server 2012 with the new columnstore indexing, and VNX FAST technologies form a complete data warehouse solution that meets the business requirements of mid-tier organizations and beyond. An 800 GB data warehouse was deployed, which is fairly typical for a medium-sized business. With EMC FAST enabled, throughput reached up to 379 MB/sec, over 100% better than SQL Server 2012’s baseline rowstore indexing, and the DSS performance workload completed up to nine times faster than with rowstore indexing.

2. Efficiency

IT managers and storage administrators frequently adopt well-known forecasting models to pre-allocate storage space according to the storage demand growth rate. The main challenge is how to pre-allocate just enough storage capacity for the application. Reports from many storage array vendors indicate that 31% to 50% of allocated storage is either stranded or unused. Thus, 31% to 50% of the capital investment in the initial storage purchase is wasted.

The VNX supports Windows host-level and built-in storage-level thin provisioning to drastically reduce initial disk requirements.  Windows Server 2012 can detect thin-provisioned storage on EMC storage arrays and reclaim unused space once it is freed by Hyper-V: an ODX-aware host connected to an EMC intelligent storage array automatically reclaims the freed space and returns it to the pool, where it can be used by other applications.
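
As a rough illustration of the host side of this (the drive letter below is just an example), Windows Server 2012 exposes the reclaim behavior through standard in-box tooling:

    # Confirm Windows will send delete (UNMAP/TRIM) notifications to the array.
    # DisableDeleteNotify = 0 means notifications are sent, so freed space can be reclaimed.
    fsutil behavior query DisableDeleteNotify

    # Re-send reclaim hints for space that has already been freed on a thin-provisioned volume.
    Optimize-Volume -DriveLetter D -ReTrim -Verbose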

Furthermore, for application storage we partner with companies like Kroll and Metalogix to provide better solutions for Exchange single item recovery and SharePoint remote BLOB storage, which can reduce SQL-stored SharePoint objects by about 80-90% and improve SQL response times by 20-40%.

3. Availability

Our first-to-market integration with SMB 3.0 not only provides performance improvements, it also enables SMB 3 Continuous Availability, allowing applications to run on clustered volumes with failovers that are transparent to end users.  For example, SQL Server may store system tables on file shares, where any disruption to file share access could interrupt SQL Server operation. Continuous Availability is accomplished via cluster failover on the host side and Data Mover or Shared Folder failover on the VNX side.

Other SMB 3.0 Features supported include:

  • Multi-Channel / Multipath I/O (MPIO) – Multiple TCP connections can now be associated with a single SMB 3.0 session, and a client application can use several connections to transfer I/O on a CIFS share.  This optimizes bandwidth and enables failover and load balancing with multiple NICs (a quick client-side check is shown after this list).
  • Offload Copy – Copying data within the same Data Mover can now be offloaded to the storage, which reduces the workload on the client and the network.
  • SMB Encryption – Provides secure access to data on CIFS shares, protecting data on untrusted networks and providing end-to-end encryption of data in-flight.
  • Directory Lease – SMB 2 introduced a directory cache that allowed clients to cache a directory listing to save network bandwidth, but the cache would not see new updates.  SMB 3 introduces a directory lease, so the client is automatically made aware of changes made in a cached directory.
  • Remote Volume Shadow Copy Service (RVSS) – With RVSS, point-in-time snapshots can be taken across multiple CIFS shares, providing improved performance for backup and restore.
  • BranchCache – A caching solution that keeps business data in a local cache. The main use case is remote office and branch office storage.
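
A quick way to sanity-check a couple of these features from a Windows Server 2012 or Windows 8 client is with the in-box SMB cmdlets. This is just a sketch of the verification steps; the shares you query and the results you see depend entirely on your environment.

    # Is SMB Multichannel enabled on this client?
    Get-SmbClientConfiguration | Select-Object EnableMultiChannel

    # After mapping a share on the VNX, check the negotiated dialect (3.00 or later means SMB 3.0).
    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect

    # Which NICs and TCP connections is Multichannel actually using?
    Get-SmbMultichannelConnection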

EMC also offers a wide range of application availability and protection solutions that are built into the VNX including snapshots, remote replication, and a new RecoverPoint virtual replication appliance.

4. Simplicity

When it comes to provisioning storage for their applications, admins often have to navigate too many repetitive tasks, touching different UIs and increasing the risk of human error. Admins also likely need to coordinate with other administrators each time they need to provision space. This is not very efficient. Take, for example, a user who wants to provision space for SharePoint. You need to work with Unisphere to create a LUN and add it to a storage group. Next you need to log onto the server and run Disk Management to import the volume. Next you need to work with Hyper-V, then SQL Server Management Studio, then SharePoint Central Administration. A bit tedious, to say the least.
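
To make the contrast concrete, here is a rough sketch of just the Windows host-side portion of that manual flow, using the standard Windows Server 2012 storage cmdlets. The drive letter and volume label are made-up examples, and the Unisphere step still happens outside of anything shown here.

    # The storage team has already created the LUN in Unisphere and added it to the
    # host's storage group; the Windows admin still has to bring it online by hand.

    # Rescan the bus and grab the newly presented, uninitialized disk.
    Update-HostStorageCache
    $disk = Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' } | Select-Object -First 1

    # Initialize, partition, and format it.
    Initialize-Disk -Number $disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $disk.Number -UseMaximumSize -DriveLetter S |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SP_Content' -Confirm:$false

    # ...and Hyper-V, SQL Server Management Studio, and SharePoint Central Administration
    # still come after all of this.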

[Figure: EMC Storage Integrator (ESI) provisioning workflow]

EMC Storage Integrator (ESI), on the other hand, streamlines everything we just talked about. Forget about how much faster it actually is… Just think about the convenience and elegance of this workflow compared to the manual steps outlined above. ESI is a free MMC-based download that takes provisioning all the way into Microsoft applications. Currently only SharePoint is supported, but SQL and Exchange wizards are coming soon. This is a feature that surprises and delights our customers!

 SO WHAT DO VNX CUSTOMERS SAY?

EMC’s VNX not only provides a rock solid core infrastructure foundation, but also delivers significant features and benefits for application owners and DBAs.  Here are some quotes from customers who have transformed their Microsoft environments using the VNX and VNXe platforms.

Peter Syngh Senior Manager, IT Operations, Toronto District School Board

 “EMC’s VNX unified storage has the best of everything at a very cost-effective price. It integrates with Microsoft Hyper-V, which is crucial to our cloud strategy, and with its higher performance, automated tiering and thin provisioning, VNX was a no-brainer.”

Marshall Bose Manager of IT Operations, Ensco (Oil/Gas)

 “A prime reason for choosing EMC over NetApp was that VNX is such a great fit for virtualization. With all the automation tools and tight integration with VMware, VNX is far easier than NetApp when it comes to spinning up and managing virtual machines.”

Rocco Hoffmann, IT Architect BNP Paribas (German Bank)

“We are achieving significant savings in energy and rack space. In fact, our VNX requires only half the rack space and has reduced our power and cooling costs.”

Charles Rosse, Systems Administrator II Baptist Memorial Health Care

“Since the VNX has been built into the design of our VDI from the beginning, it can easily accommodate growth – all we need to do is to plug in another drive or tray of drives and we get incrementally better performance.”

Erich Becker,  Director of Information Systems, AeroSpec (Manufacturing)

“…We loved the fact that VNXe and VMware worked extremely well together …we have dramatically cut operating costs, increased reliability and data access is now twice as fast as before.”

BOTTOM LINE

There are many more customers that have praised the VNX family for powering their Microsoft applications, but I don’t have room to include them all.  EMC is a trusted brand in storage, and the VNX today is an outstanding unified platform that successfully balances our customers’ block and file needs for their Microsoft file and application data – and gets awards for it.  Feel free to find out more about the VNX and VNXe product lines here and here.

Also come talk to us next week at TechEd, we will be there to help customers and partners learn more about our technology.

Find out more about our TechEd plans here.

Also download the VNXe Simulator executable right here.  It’s pretty awesome and shows you the unique VNXe management interface.

What’s New for Exchange 2013 Storage?

By: Brien M. Posey

Many of Exchange Server 2013’s most noteworthy improvements are behind-the-scenes architectural improvements rather than new product features. Perhaps nowhere is this more true than in Exchange Server’s storage architecture. Once again Microsoft invested heavily in Exchange’s storage subsystem in an effort to drive down overall storage costs while at the same time improving performance and reliability. This article outlines some of the most significant storage-related improvements in Exchange Server 2013.

Lower IOPS on Passive Database Copies

In failure situations, failover from an active mailbox database to a passive database copy needs to happen as quickly as possible. In Exchange Server 2010, Microsoft expedited the failover process by maintaining a low checkpoint depth (5 MB) on the passive database copy. Microsoft’s reason for doing this was that failing over from an active to a passive database copy required the database cache to be flushed. Having a large checkpoint depth would have increased the amount of time that it took to flush the cache, thereby causing the failover process to take longer to complete.

The problem was that maintaining a low checkpoint depth came at a cost. The server hosting the passive database copy had to do a lot of work in terms of pre-read operations in order to keep pace with demand while still maintaining a minimal checkpoint depth. The end result was that a passive database copy produced nearly the same level of IOPS as its active counterpart.

In Exchange Server 2013, Microsoft made a simple decision that greatly reduced IOPS for passive database copies, while also reducing the database failover time. Because much of the disk I/O activity on the passive database copy was related to maintaining a low checkpoint depth and because the checkpoint depth had a direct impact on the failover time, Microsoft realized that the best way to improve performance was to change the way that the caching process worked.

In Exchange 2013, the cache is no longer flushed during a failover. Instead, the cache is treated as a persistent object. Because the cache no longer has to be flushed, the size of the cache has little bearing on the amount of time that it takes to perform the failover. As such, Microsoft designed Exchange 2013 to have a much larger checkpoint depth (100 MB). Having a larger checkpoint depth means that the passive database doesn’t have to work as hard to pre-read data, which drives down the IOPS on the passive database copy by about half. Furthermore, failovers normally complete in about 20 seconds.

Although the idea of driving down IOPS for passive database copies might sound somewhat appealing, some might question the benefit. After all, passive database copies are not actively being used, so driving down the IOPS should theoretically have no impact on the end user experience.

One of the reasons why reducing the IOPS produced by passive database copies is so important has to do with another architectural change that Microsoft has made in Exchange Server 2013. Unlike previous versions of Exchange Server, Exchange Server 2013 allows active and passive database copies to be stored together on the same volume.

If an organization does choose to use a single volume to store a mixture of active and passive databases, then reducing the IOPS produced by passive database copies will have a direct impact on the performance of the active databases.

This new architecture also makes it easier to recover from disk failures within a reasonable amount of time. Exchange Server 2013 supports volume sizes of up to 8 TB. With that in mind, imagine what would happen if a disk failed and needed to be reseeded. Assuming that the majority of the space on the volume was being used, it would normally take a very long time to regenerate the contents of the failed disk.

Part of the reason for this has to do with the sheer volume of data that must be copied, but there is more to it than that. Passive database copies are normally reseeded from an active database copy. If all of the active database copies reside on a common volume, then that volume’s performance will be the limiting factor in the amount of time that it takes to rebuild the failed disk.

In Exchange Server 2013, however, volumes can contain a mixture of active and passive database copies. This means that the active database copies will likely reside on different volumes (typically on different servers), so the data necessary for rebuilding the failed volume will be pulled from a variety of sources. As such, the data source is no longer the limiting factor in the amount of time that it takes to reseed the disk. Assuming that the disk being reseeded can keep pace, the reseeding process can complete much more quickly than it would if all of the data were coming from a single source.

In addition, Exchange Server 2013 periodically performs an integrity check of passive database copies. If any database copy is found to have a status of FailedAndSuspended, Exchange will check to see whether any spare disks are available. If a valid spare is found, Exchange Server will automatically remap the spare and initiate an automatic reseeding process.
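
For reference, a minimal way for an administrator to see the same state that Exchange acts on is from the Exchange Management Shell; the server name below is hypothetical.

    # List any database copies on this server that are candidates for automatic reseeding.
    Get-MailboxDatabaseCopyStatus -Server MBX01 |
        Where-Object { $_.Status -eq 'FailedAndSuspended' } |
        Select-Object Name, Status, ContentIndexState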

Conclusion

As you can see, Microsoft has made a tremendous number of improvements with the way that Exchange Server manages storage in DAG environments. Passive database copies generate fewer IOPS, and failovers happen more quickly than ever before. Furthermore, Exchange Server can even use spare disks to quickly recover from certain types of disk failures.

Why Storage Networks Can Be Better Than Direct Attached Storage for Exchange

Guest Post By: Brien M. Posey


Of all the decisions that must be made when planning an Exchange Server deployment, perhaps none are as critical as deciding which type of storage to use. Exchange Server is very flexible with regard to the types of storage that it supports. However, some types of storage offer better performance and reliability than others.

When it comes to larger scale Exchange Server deployments, it is often better to use a Storage Area Network than it is to use direct attached storage. Storage networks provide a number of advantages over local storage in terms of costs, reliability, performance, and functionality.

Cost Considerations

Storage Area Networks have gained something of a reputation for being more expensive than other types of storage. However, if your organization already has a Storage Area Network in place then you may find that the cost per gigabyte of Exchange Server storage is less expensive on your storage network than it would be if you were to invest in local storage.

While this statement might seem completely counterintuitive, it is based on the idea that physical hardware is often grossly underutilized. To put this into perspective, consider the process of purchasing an Exchange mailbox server that uses Direct Attached Storage.

Organizations that choose to use local storage must estimate the amount of storage that will be needed to accommodate Exchange Server databases plus leave room for future growth. This means making a significant investment in storage hardware.  In doing so, an organization is purchasing the necessary storage space, but they may also be spending money for storage space that is not immediately needed.

In contrast, Exchange servers that are connected to storage networks can take advantage of thin provisioning. This means that the Exchange Server only uses the storage space that it needs. When a thinly provisioned volume is created, the volume typically consumes less than 1 GB of physical storage space, regardless of the volume’s logical size. The volume will consume physical storage space on an as needed basis as data is written to the volume.

In essence, a thinly provisioned volume residing on a SAN could be thought of as “pay as you go” storage. Unlike Direct Attached Storage, the organization is not forced to make a large up-front investment in dedicated storage that may never be used.

Reliability

Another advantage to using Storage Area Networks for Exchange Server storage is that when properly constructed, SANs are far more reliable than Direct Attached Storage.

The problem with using Direct Attached Storage is that there are a number of ways in which the storage can become a single point of failure. For example, a disk controller failure can easily corrupt an entire storage array. Although there are servers that have multiple array controllers for Direct Attached Storage, lower-end servers are often limited to a single array controller.

Some Exchange mailbox servers implement Direct Attached Storage through an external storage array. Such an array is considered to be a local component, but makes use of an external case as a way of compensating for the lack of drive bays within the server itself. In these types of configurations, the connectivity between the server and external storage array can become a single point of failure (depending on the hardware configuration that is used).

When SAN storage is used, potential single points of failure can be eliminated through the use of multipath I/O. The basic idea behind multipath I/O is that fault tolerance can be achieved by providing multiple physical paths between a server and a storage device. If for example an organization wanted to establish fault tolerant connectivity between an Exchange Server and SAN storage, they could install multiple Fibre Channel Host Bus Adapters into the Exchange Server. Each Host Bus Adapter could be connected to a separate Fibre Channel switch. Each switch could in turn provide a path to mirrored storage arrays. This approach prevents any of the storage components from becoming single points of failure.
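
On the Windows side, a rough sketch of enabling multipath I/O for Fibre Channel LUNs with the in-box Windows Server 2012 tooling is shown below. Many storage vendors supply their own multipathing software (EMC PowerPath, for example) in place of the Microsoft DSM, and the vendor/product IDs below are placeholders.

    # Install the Multipath I/O feature (a reboot may be required).
    Add-WindowsFeature -Name Multipath-IO

    # See which SAN devices (vendor/product ID pairs) are visible to MPIO.
    Get-MPIOAvailableHW

    # Have the Microsoft DSM claim that hardware -- substitute the IDs reported above.
    New-MSDSMSupportedHW -VendorId 'VENDOR' -ProductId 'PRODUCT'

    # Review the MPIO timers and load-balancing settings once both HBA paths are zoned.
    Get-MPIOSetting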

Performance

Although Microsoft has taken measures to drive down mailbox server I/O requirements in the last couple of versions of Exchange Server, mailbox databases still tend to be I/O intensive. As such, large mailbox servers depend on high-performance hardware.

While there is no denying the fact that high-performance Direct Attached Storage is available, SAN storage can potentially provide a higher level of performance due to its scalability. One of the major factors that impacts a storage array’s performance is the number of spindles that are used by the array. Direct Attached Storage limits the total number of spindles that can be used. Not only is the number of drive bays in the case a factor, but there is also a limit to the number of disks that can be attached to the array controller.

SAN environments make it possible to create high performance disk arrays by using large numbers of physical disks. Of course capitalizing on the disk I/O performance also means that you must have a high speed connection between the server and the SAN, but this usually isn’t a problem. Multipath I/O allows storage traffic to be distributed across multiple Fibre Channel ports for optimal performance.

Virtualization

Finally, SAN environments are ideal for use in virtualized datacenters. Although neither Microsoft nor VMware still requires shared storage for clustered virtualization hosts, using shared storage is still widely considered to be a best practice. SANs make it easy to create cluster shared volumes that can be shared among the nodes in your host virtualization cluster.

Conclusion

Exchange mailbox servers are almost always considered to be mission critical. As such, it makes sense to invest in SAN storage for your Exchange Server since it can deliver better performance and reliability than is possible with Direct Attached Storage.

 

3 Benefits of Running Exchange Server in a Virtualized Environment

Guest post by: Brien M. Posey


One of the big decisions that administrators must make when preparing to deploy Exchange Server is whether to run Exchange on physical hardware, virtual hardware, or a mixture of the two. Prior to the release of Exchange Server 2010 most organizations chose to run Exchange on physical hardware. Earlier versions of Exchange mailbox servers were often simply too I/O intensive for virtual environments. Furthermore, it took a while for Microsoft’s Exchange Server support policy to catch up with the virtualization trend.

Today these issues are not the stumbling blocks that they once were. Exchange Server 2010 and 2013 are far less I/O intensive than their predecessors. Likewise, Exchange Server is fully supported in virtual environments. Of course administrators must still answer the question of whether it is better to run Exchange Server on physical or on virtual hardware.

Typically there are far greater advantages to running Exchange Server in a virtual environment than running it in a physical environment. Virtual environments can help to expedite Exchange Server deployment, and they often make better use of hardware resources, while also offering some advanced protection options.

Improved Deployment

At first the idea that deploying Exchange Server in a virtual environment is somehow easier or more efficient might seem a little strange. After all, the Exchange Server setup program works in exactly the same way whether Exchange is being deployed on a physical or a virtual server. However, virtualized environments provide some deployment options that simply do not exist in physical environments.

Virtual environments make it quick and easy to deploy additional Exchange Servers. This is important for any organization that needs to quickly scale their Exchange organization to meet evolving business needs. Virtual environments allow administrators to build templates that can be used to quickly deploy new servers in a uniform way.

Depending upon the virtualization platform that is being used, it is sometimes even possible to set up a self-service portal that allows authorized users to deploy new Exchange Servers with only a few mouse clicks. Because the servers are based on preconfigured templates, they will already be configured according to the corporate security policy.

 Hardware Resource Allocation

Another advantage that virtualized environments offer over physical environments is that virtual environments typically make more efficient use of server hardware. In virtual environments, multiple virtualized workloads share a finite pool of physical hardware resources. As such, virtualization administrators have gotten into the habit of using the available hardware resources efficiently and making every resource count.

Of course it isn’t just these habits that lead to more efficient resource usage. Virtualized environments contain mechanisms that help to ensure that virtual machines receive exactly the hardware resources that are necessary, but without wasting resources in the process. Perhaps the best example of this is dynamic memory.

The various hypervisor vendors each implement dynamic memory in their own way. As a general rule however, each virtual machine is assigned a certain amount of memory at startup. The administrator also assigns maximum and minimum memory limits to the virtual machines. This allows the virtual machines to claim the memory that they need, but without consuming an excessive percentage of the server’s overall physical memory. When memory is no longer actively needed by the virtual machine, that memory is released so that it becomes available to other virtual machines that are running on the server.
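
As a minimal sketch of what that looks like on Hyper-V (the VM name and sizes are just examples, not recommendations for any particular workload):

    # Give a VM a modest startup allocation, but let it grow under load and shrink when idle.
    Set-VMMemory -VMName 'APP-VM01' -DynamicMemoryEnabled $true `
        -StartupBytes 4GB -MinimumBytes 2GB -MaximumBytes 16GB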

Although mechanisms such as dynamic memory can certainly help a virtual machine to make the most efficient use possible of physical hardware resources, resource usage can be thought of in another way as well.

When Exchange Server is deployed onto physical hardware, all of the server’s resources are dedicated to running the operating system and Exchange Server. While this may initially sound desirable, there are problems with it when you consider hardware allocation from a financial standpoint.

In a physical server environment, the hardware must be purchased up front. The problem with this is that administrators cannot simply purchase the resources that Exchange Server needs based on current usage. Workloads tend to increase over time, so administrators must typically purchase more memory, more CPU cores, and faster disks than are currently needed. These resources are essentially wasted until the day that the Exchange Server workload grows to the point that those resources are suddenly needed. In a virtual environment this is simply not the case. Whatever resources are not needed by a virtual machine can be put into a pool of physical resources that are accessible to other virtualized workloads.

 Protection Options

One last reason why it is often more beneficial to operate Exchange Server in a virtual environment is because virtual environments provide certain protection options that are not natively available with Exchange Server.

Perhaps the best example of this is failover clustering. Exchange Server offers failover clustering in the form of Database Availability Groups. The problem is that Database Availability Groups only protect the mailbox server role. Exchange administrators must look for creative ways to protect the remaining server roles against failure. One of the easiest ways to achieve this protection is to install Exchange Server onto virtual machines. The underlying hypervisor can be clustered in a way that allows virtual machines to fail over from one host to another if necessary. Such a failover can be performed regardless of the limits of the operating system or application software that might be running within individual virtual machines. In other words, virtualization allows you to receive the benefits of failover clustering for Exchange server roles that don’t normally support clustering.
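
As a rough sketch (the VM and cluster names are hypothetical), once a virtual machine's files sit on shared cluster storage, making it a highly available clustered role on a Hyper-V failover cluster is a one-liner:

    # Register an existing VM as a clustered role so it can fail over between Hyper-V hosts.
    Add-ClusterVirtualMachineRole -VMName 'EXCH-CAS01' -Cluster 'HV-CLUSTER01'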

 Conclusion

As you can see, there are a number of benefits to running Exchange Server in a virtual environment. In almost every case, it is preferable to run Exchange Server on virtual hardware over physical hardware.

What’s new in Exchange 2013, 2 Webcasts, and More!

Next week I’ll be on a couple of webcasts related to Exchange server protection:

In these webcasts, we will offer a solid blend of best practices content along with information about some of our latest products.  I promise not to waste your time!

Webcast 1:  Introducing EMC AppSync: Advanced Application Protection Made Easy for VNX Platforms

In this webinar, we’ll describe how to set up a protection service catalog for any company and how easy EMC AppSync makes it to use snapshot and continuous data protection technology on a VNX storage array… As a bonus, we will show a cool demo.

Sign up here.

Webcast 2: Protecting Exchange from Disaster: The Choices and Consequences

In this webcast, we’ll explore the three common Exchange DR options available to customers with an advanced storage array like an EMC VNX.  One of the highlights is that I will be joined by independent Microsoft guru Brien Posey, who has the lowdown on the storage and DR enhancements in Exchange 2013 and will describe how many things change in Exchange 2013 and how many things stay the same.  Oh, and of course we will have a cool demo for this one too!

Sign up here.

2 Great AppSync Exchange 2010 Single Item Restore Demos

Our friend Ernes Taljic from Presidio launched the Presidio Technical Blog “Converging Clouds” with a post about EMC’s new replication management software, EMC AppSync.

He also made two excellent videos that showcase virtualized Exchange 2010 Protection and Single Item Restore with RecoverPoint and VNX Snapshots – all managed by AppSync.

Enjoy:

AppSync and ItemPoint with VNX Snapshots

AppSync and ItemPoint with RecoverPoint

VNX Replication: Ask the Experts… Now!

For the next three weeks we’re inviting anyone and everyone to ask anything about data replication on an EMC VNX storage array.

This is part of our Ask the Experts Series on the EMC Community Network forums.

Possible topics:

  • Application considerations
  • Bandwidth considerations
  • Determining which replication product makes most sense to use
  • How virtualization can affect your configuration

The forum is open and ready for any of your questions!

EMC AppSync for VNX / Microsoft Environments

We had a great time launching EMC AppSync in Las Vegas a few weeks back!

Some of the highlights were an on stage demo, an appearance on Chad’s World Live, 4 breakout sessions, and so much more. We got interviewed by industry analysts and taught our TCs what AppSync was all about.

We also launched a new ECN (EMC Community Network) space where I’ll be spending a lot of time in the future. The product becomes officially available later this year and now we’re handling all of the customer requests to join our beta program and learn more about the product.

If you want to find out more about the launch and if you want to ask a question – go ahead and ask one over here!

Application Protection: There’s Something Happening Here

There’s something happening here
What it is ain’t exactly clear
There’s a man with a gun over there
Telling me I got to beware

Yes, it’s blasphemy to simply change a classic like Buffalo Springfield’s “For What It’s Worth” – but I will anyway to prove my point.

There’s something happening here

If you haven’t noticed, IT is changing rapidly. Just search for IT transformation, IT as a Service, and converged infrastructure to see how far we’ve come in only the past few years.  This industry moves!

What it is ain’t exactly clear

We know a Cloud is built differently, operated differently, and consumed differently. So we know companies have begun re-architecting IT to offer more of a service and to react faster to user needs. They know they must change their operational models and, in many cases, their organizational structure. They might also adopt converged infrastructures to get moving faster.    But… has protection changed to keep pace with this transformation?

There’s a man with a gun over there
Telling me I got to beware

It’s been said that in the song the gun is more of a metaphor for the tension between groups within the US before Vietnam. And in a much less violent analogy, the tension between the IT team and the application owners has never been stronger.

The application teams want great performance and protection for their applications. But they’ve never been empowered by the IT department to protect themselves with storage-level tools. The storage team wants to let them, but fears they might create too many copies of their data. Instead, the app owners went out and used tools specific to their own application, creating their own protection strategy, which might not deliver the best protection available.  To win back the hearts and minds of the application owners and DBAs, the IT department and the storage teams need to get better at protecting applications as a service.

On the Road to Application Protection as a Service

Many companies have attempted to do this in the past – with products that help you protect and restore your applications and critical virtual machines. These tools install on the server and can “freeze” and “thaw” the in-flight transactions in the database, so that when a snapshot is taken, there is a clean copy that can be easily restored.  The major benefit of these tools is SPEED: the copy process is incremental and the restore process is lightning fast – think restoring a 1 TB database in minutes.

It needs to get easier. Like any “enterprise” tool, many of these products designed for snapshots and replication require a significant learning curve. We need something simple that integrates with the tools we know and love.

We should provide self-service capabilities. Instead of spending hours and hours making sure application owners are getting the protection they need, they should be empowered to simply protect and restore their own data.

We are driven by service levels. IT departments and storage teams need to offer “protection service catalogs” with various levels of protection (e.g., Platinum, Gold, Silver, Bronze) differentiated by RPO – from very low data loss (synchronous replication) to more sporadic application-consistent snapshots – all from one interface. This makes it easy for the app team and the people with the checkbooks to really understand the value placed on the different applications in your catalog.
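
As a purely illustrative sketch (the tier names and RPO targets are invented for this example, not a product definition), such a catalog can start out as something as simple as a lookup the application team can actually read:

    # A toy protection service catalog: each tier maps to its RPO target and the technique behind it.
    $protectionCatalog = @{
        Platinum = @{ RPO = 'Near zero' ; Method = 'Synchronous replication' }
        Gold     = @{ RPO = '5 minutes' ; Method = 'Continuous data protection (journaled)' }
        Silver   = @{ RPO = '1 hour'    ; Method = 'Application-consistent snapshots' }
        Bronze   = @{ RPO = '24 hours'  ; Method = 'Nightly backup copy' }
    }

    # Application owners pick a tier instead of picking a technology.
    $protectionCatalog['Gold']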

There truly is something happening here
And what it is will be made clear at EMC World 2012!

Hope to see you there!
Brian

ESI = EMC Storage Integrator (for Windows Environments)

In the video below, Sam talks with Giri Basava about the latest EMC Storage Integrator, a free download that makes setup for Windows hosts a breeze.

You can get this plug-in at Powerlink (Support > Software Downloads and Licensing > Downloads E-I > EMC Storage Integrator)

Here’s the official product description.

EMC Storage Integrator (ESI) for Windows simplifies the management and provisioning of EMC storage for Microsoft Windows servers and applications in physical as well as virtual (Hyper-V) environments. It maps application resources to Windows resources and, in turn, to the underlying storage resources. With ESI, administrators can provision block and file storage for Microsoft Windows and Microsoft SharePoint farms. ESI supports the EMC CLARiiON CX4 series, EMC VNX series, EMC VNXe series, EMC Symmetrix VMAX, and EMC Symmetrix VMAXe.  Version 1.3 adds virtualization capability using Hyper-V and support for FILESTREAM Remote Blob Store.

 

Is SQL Server 2008 on VMware ESXi 4.1 supported? Find out using Microsoft’s SVVP Wizards

For DBAs who have concerns about support for their SQL Server environments on virtualization technologies other than Hyper-V™ and Virtual Server, Microsoft provides the Server Virtualization Validation Program (SVVP).

This article shows the simple steps required to complete the SVVP Support Policy Wizard to check support of your configuration.

  • Step 3 Select Virtualization Technology, Guest OS and Guest Architecture

  • Step 4 Review the Summary Support Statement

Thanks to Mike Morris for the blog post idea…

Windows Geoclusters, Stretch-Clusters, and RecoverPoint/CE Failover

Taking a page out of Chief EMC Blogger Chuck Hollis’ playbook, I’m attaching the graphics from the entire PPT file that I thought would be important to highlight for this blog and its readers.  Some of the graphics didn’t fit the page as well as I thought they would (I need to shrink them further). So if you like what you see, you can download the whole PPT right here: RecoverPointCE-MSfailoverclusterPPT

In a nutshell, EMC’s RecoverPoint/Cluster Enabler extends a Microsoft cluster across two sites.  A Microsoft cluster normally provides local site “HA” or high availability of server nodes, and RecoverPoint/CE adds “DR” or disaster recovery by stretching the second node to anywhere outside of your primary datacenter.  This presentation walks you through the basics behind that simple idea and provides some additional background.   Slide building credit goes to Gary Archer, a great guy who is always keeping me sharp on RecoverPoint’s latest features.

Recovery Time Objective: Targeted amount of time to restart a business service after a disaster event

Recovery Point Objective: Amount of data lost from failure, measured as the amount of time from a disaster event
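
To make those two terms concrete: if an outage hits at 2:00 PM, the most recent usable copy of the data is from 1:55 PM, and the service is back online at 2:30 PM, then the RPO achieved is 5 minutes (of lost data) and the RTO achieved is 30 minutes (of downtime).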

Various approaches for DR and their RTO rankings

Microsoft Failover Clusters (formerly MSCS, or Wolfpack if you go back really far) provide local HA, not DR across sites.  For that, you need to S-T-R-E-T-C-H your cluster. EMC’s Cluster Enabler is one way to do it, and using RecoverPoint with it would be like having your iPhone on Verizon.  Not the best analogy, but I hope you get my point!

Basic requirements – use SYNCHRONOUS or ASYNCHRONOUS replication – distance is not the issue, but latency is: 400 ms for ASYNC and 4 ms for SYNC.

Leverages majority node set clustering.    If you have two nodes/servers on Site A and two nodes/servers on Site B, you will need a “tiebreaker” to decide how to remain online after a failure – the most common method for this tiebreaker is a File Share Witness.  Many articles can give you additional background on majority node set clustering – it’s a good thing to know – and I will point you to the blog of an old friend of mine, John Toner, who writes about geographically dispersed clusters.
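
A minimal sketch of configuring that tiebreaker on a Windows Server 2012 cluster is shown below; the share path is hypothetical and should live on a server that is independent of both sites.

    # Point the cluster quorum at a file share witness.
    Set-ClusterQuorum -NodeAndFileShareMajority '\\WITNESS01\ClusterWitness'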

The architecture. 

What each piece does:  CE is a filter driver that “catches” Microsoft Cluster failure events and lets the RecoverPoint-managed disk systems know to fail over as appropriate.  Very sophisticated logic is built in to prevent cluster split-brain – scenarios where the link is down and the application (such as a SQL Server database) doesn’t know the correct owner of the disk resources.

See if you can spot what is happening above – AUTOMATIC FAILOVER.

Integrates with and supports Hyper-V

Works with the latest features like Live Migration – so you can Live Migrate workloads locally for HA and fail over remotely for DR.  You can control whether you want to fail over locally before failing over across sites.

Self-explanatory – the failover steps in detail.

More detail of Live Migration support – note synchronous requirement.

Multi-array support.  We can create consistency groups with storage devices from multiple arrays in the same group.  This allows for a lot of interesting failover implementations (failing over locally first rather than remotely, for example) and lets you keep components grouped together… like an entire SharePoint farm.

Hey, it works with Oracle on Windows too.

Recap of the benefits – hopefully it makes sense and it’s the reason that customers love this integration – with RecoverPoint/CE you get more control, less bandwidth required (3-12x savings on bandwidth as reported by RP customers), and it’s integrated with Microsoft Clusters to enable seamless failover.

Now that is a cool product.


Storage and Virtualization for SQL DBA’s

I try to keep up with as many people as I can who are doing interesting and important work in the field of Microsoft’s enterprise application products such as Exchange, SQL Server, and SharePoint, and now and then it surprises me when someone really “gets it” in terms of their audience.

Denny Cherry is one of those guys.  He presents topics not for the storage geeks among us, but for SQL geeks who could benefit from understanding more about how a SAN is set up and configured.  A couple of days ago, he wrote a post summarizing a recent presentation he gave to a group of SQL pros – a dry run for his upcoming SQL PASS presentation.  What caught my eye is that he was generous enough to share his presentation materials online, for all to share and digest – great stuff on storage and virtualization for the SQL DBA.

A very smart dude – and I’m happy to say I’ve had the pleasure of meeting him at EMC World in the bloggers’ lounge.  I think I also walked past him a few times on Bourbon Street at TechEd, but … I can’t be sure 🙂

Anyway… as an EMC employee, I was very happy to see the references to EMC, but (there’s always a but, isn’t there) I did want to offer two minor corrections to the materials:

1. Slide 22 indicates Exchange belongs on FC disks.  I would just mention that at EMC, we’ve seen a lot of people put Exchange 2010 on SATA.  RAID-protected SATA nonetheless, but SATA combined with Virtual Provisioning (EMC’s term for Thin Provisioning) works very well for most situations.  Caveats apply when using SATA and replicating that data, of course – but tread carefully, and it can be done.  Thin Provisioning is great for Exchange 2010 because Exchange teams want to give their users enormous mailboxes (up to 25GB in some cases) and they don’t want to buy and allocate all of that space upfront.  SATA with Virtual Provisioning is a great way to cut the cost of an Exchange infrastructure that used to demand those Tier 1 FC disks.

2. Slide 43 indicates EMC can only do EMC-to-EMC array replication. One nice surprise comes from EMC’s RecoverPoint product.  It’s a journaling appliance that implements a write-splitter that sits on the host, in the SAN switch, or in the storage array itself.  This replication appliance splits the writes, keeps a copy in a local or remote journal, and uses policy-driven bandwidth reduction and data compression technologies to shrink bandwidth significantly – sometimes by a factor of 5-10x.  This is usually enough to justify the purchase of the product, due to the cost savings in bandwidth.  Oh… so back to the main point… we have a lot of customers that use EMC storage at their primary site and another vendor’s storage in a secondary location.  It’s heterogeneous.  You don’t even need an EMC array and it works with FC or iSCSI protocols.

I’m hoping Denny doesn’t take this the wrong way – I learned a lot about what SQL DBAs need to know after reading this… and I bet you can too.  Remember to check out his PPT, consider following him on Twitter, and check out his blog here.

Cool Infographic Poster for Hyper-V

Step 1.  Download large infographic poster here (or click on the picture). Hat tip to Techhead for sharing this one.

Step 2.  Find a BAP (Big A–* Printer)

This thing is 40 inches by 25 inches!

Maybe send it to your local photo developer or print shop.

Or maybe just download the PDF and keep it handy.

I love graphics that have an insane amount of detail embedded into them!

————————————————–

* this word is blocked by Microsoft Forefront 🙂

Automating Perfmon with Perfcollect

[ Post by Paul Galjan ]

When I started here at EMC, I was pleased to see that most of us would use actual host data (perfmon) to size out our storage.  We have a variety of cool tools that will analyze perfmon output and help visualize trends, size the replication bandwidth required, and a whole lot more.

But performance data gives you only half of what you need in order to size storage.  You need the capacity of the disks in order to do a complete sizing.  This resulted in a fair number of conversations that went like this:

Sales Rep: Hey Paul, did you get that perfmon data from the customer?
Paul: Sure did.  Based on the performance data, they’ll need about 12 15k disks for the database, and 4 for the logs.
Sales Rep: So sixteen 146G 15k disks?
Paul: Well, I’m not sure; I don’t have the capacity information.
Sales Rep: But you looked at the performance data.
Paul: Yes.
Sales Rep: And based on that, it looks like they’ll need sixteen disks to address the performance requirement.
Paul: Yes

Sales Rep:  So I can go ahead and quote 16 146G 15k disks, right?

You only need to have that conversation about three dozen times before you realize that something must be done.

So I wrote a tool called perfcollect.  It runs on Windows 2003 and later, tested on x86, x64, and even ia64.  Once I started writing it, I figured out that I could solve a lot more problems than I actually set out to solve.

First, I decided not to limit the counters to just storage.  The tool collects a wide variety of counters related to CPU, memory, and even the application context.  It will collect up to 350 counters, based on the XML profiles from the very cool PAL tool. The counters include all sorts of stuff relevant to Exchange, SQL Server, SharePoint, AD, Hyper-V, and more.

Second, it collects configuration information that is available only by doing WMI queries on the server, but is nonetheless still relevant to performance troubleshooting.

Operation of the tool is very simple.  You download the tool from my own blog site and run it as administrator on your server (it automatically escalates privileges if you’re running it on 2008, Vista, or 7).  Select the duration of the collection and the sample frequency, hit Enter, and let it go.  Come back and look in c:\perflogs\EMC, and you’ll see a directory tree of text files and CSVs.

Here’s the progress of what perfcollect actually does:

  • Detects the version of Windows running.  If it’s running 2000 or earlier, it exits
  • Presents the UI portion, where you select the duration and frequency of samples
  • Detects all available counters on the system
  • Builds a list of relevant “interesting” counters based on what is available
  • Builds a list of services running on the machine
  • Gets boot options of the machine
  • Builds a list of applications installed on the machine
  • Builds a list of disks on the system and their capacity information – outputs in CSV and human-friendly text formats
  • Dumps event logs of error and above to CSV
  • Builds a list of disks on the system and their offsets
  • Gets network configuration information
  • Enumerates hardware on the system – processors and types, disks, SCSI and iSCSI adapters, tape drives, and media changers
  • Executes “systeminfo”
  • Executes “driverquery”
  • Consolidates information relevant to PAL (Number of processors, boot options, system type, and memory)
  • Starts the perfmon collection – output in CSV format.

The whole process usually takes less than a minute – excluding the time it takes to actually sample the data, of course.
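
To give a feel for the kind of data involved, here is a tiny PowerShell sketch that samples a few similar counters along with the disk capacity information perfcollect pairs with them. This is not perfcollect itself; the counter list, interval, and output paths are just examples.

    # Sample a handful of storage-relevant counters every 15 seconds for 10 minutes, to CSV.
    $counters = '\PhysicalDisk(*)\Disk Transfers/sec',
                '\PhysicalDisk(*)\Avg. Disk sec/Transfer',
                '\Processor(_Total)\% Processor Time',
                '\Memory\Available MBytes'
    Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 40 |
        Export-Counter -Path 'C:\PerfLogs\perf-sample.csv' -FileFormat CSV

    # Capacity information, so the sizing covers both performance and space.
    Get-WmiObject Win32_LogicalDisk -Filter 'DriveType=3' |
        Select-Object DeviceID,
                      @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB, 1) } },
                      @{ n = 'FreeGB'; e = { [math]::Round($_.FreeSpace / 1GB, 1) } } |
        Export-Csv -Path 'C:\PerfLogs\capacity.csv' -NoTypeInformation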

Once you’re done, the perfmon CSVs are ready for use with any tool you use to manipulate and visualize perfmon data: PAL, perfmon itself, Excel, etc.  If you’re worried about size, I’ve never seen an uncompressed perfmon file generated by perfcollect exceed 40 MB.

The tool “belongs” to EMC (in that I used EMC’s money to feed my family while I was developing it, and I tested it in EMC’s incredible labs).  But it’s free of charge to use, and the output is yours.  If you use it to collect data just to get a baseline of your servers’ performance, or troubleshoot a problem, we’re cool with that.

You can get more information in the Perfcollect README file.

License
The software is licensed “as-is.” The contributors give no express warranties, guarantees or conditions. You may have additional consumer rights under your local laws which this license cannot change. To the extent permitted under your local laws, the contributors exclude the implied warranties of merchantability, fitness for a particular purpose and non-infringement.
The Microsoft Corporation tools are packaged herein through permission granted by Microsoft Corporation through the premier contract with EMC.  Grep, gawk, and printf are distributed unmodified under the GNU General Public License.

Notice of Copyright
This program is the confidential unpublished intellectual property of EMC Corporation.  It includes without limitation exclusive copyright and trade secret rights of EMC throughout the world.

Large Scale Hyper-V Clusters, Cluster Enabler, and VPLEX

I talked with Partner Engineering Manager Txomin Barturen about how to get ultimate scale from Hyper-V with Cluster Shared Volumes. He also spoke about his session on multi-site Windows clustering configurations and EMC’s Cluster Enablers, which plug directly into Microsoft’s clustering framework, as well as EMC’s storage virtualization and transportation appliance, VPLEX.

Recorded at Microsoft TechEd 2010.

EMC’s Most Popular Microsoft SQL Server Documents

These are the most popular Microsoft SQL-focused documents that were downloaded within the past 3 months.  Simply click the link, download, and read whatever might interest you.

There’s a lot more where these came from… So what are you looking for?

Let me know and I can find it.

VPLEX and SharePoint Distance VMotion

Wondering how EMC’s storage federation device VPLEX might solve SharePoint DR concerns?

A few of the engineers teamed up to create a reference architecture that shows you how it’s done.

First, they listened to early customer feedback on what use cases might be most beneficial.

What came up again and again: SharePoint (and SAP and Oracle) DR is not that easy.  Virtualizing the servers helps a great deal, but the ability to perform VMotion across two sites without a major network upgrade is game-changing.  If you weren’t aware of VMotion, it’s a feature that migrates operational (live, running) guest virtual machines between similar but separate hardware hosts that share the same storage.  Each of these transitions is completely transparent to any users on the virtual machine at the time it is being migrated.  Since VPLEX allows two sites to share the same storage, it doesn’t take a genius to realize that this enables VMotion across those two sites.

Second, they built out the environment and tested a farm.

Specific to SharePoint, they configured the farm according to EMC and Microsoft best practices with about half a TB of total space.  They used KnowledgeLake’s DocLoaderLite to populate SharePoint with random user data and then fired up a simulated load using Microsoft Visual Studio Team System (VSTS).  The Proven Solutions mantra held true – we wanted to get this as close to real world as possible and push the thresholds of performance and scalability at the same time.  Our SharePoint guru and friend James Baldwin put a great deal of effort into putting this together, alongside a team of several others (Don, Brahim, Haji, Patrick, Joe, Brian C, and many others).

Third, they provided the results and key findings.

I’d boil it down to this.  VPLEX can safely be inserted in the data path between your hosts and your existing storage array.  You can stretch your clusters and enable a much better HA and DR strategy for your SharePoint farms.  Failover across sites used to be one of the most challenging IT procedures out there… it’s why many companies don’t even have a DR plan.  Now it can be as simple as a local failover, once it’s set up.  And swapping out or upgrading storage becomes a seamless procedure compared with what it might have been yesterday.  You can bounce your servers back and forth without much of a blip.

OK, so where’s some detail on that “blip”?

Page 35 shows the VMotion durations, with and without latency added.

This section of the paper describes how the SharePoint farm’s response time is affected before, during, and after the VMotion.

I think skeptics like Eddie would have liked to see data like this.  Oh yeah, we did the same with non-SharePoint SQL databases (1 TB), Oracle E-Business Suite (with 11g), and SAP (ERP and BW).  And it’s all in this little reference architecture here.

Virtual Winfrastructure – EMC and Hyper-V

I’d like to help introduce a new blogger in the house at EMC.

Adrian Simays will be blogging and advocating EMC’s approach towards Microsoft’s Hyper-V.

His blog, named Virtual Winfrastructure, aims to highlight the fact that EMC has a large group of people and projects dedicated to Microsoft’s Hyper-V.  We also have a lot of customers using a hybrid approach of both Hyper-V and VMware in their virtualization efforts.  And each of our product teams has been busy working on documentation that shows how it all works.

Here’s a small sample:

I am subscribing, and look forward to reading some more good stuff from Adrian – a really smart dude who can put it in simple terms.  Learn more about Adrian here.  Subscribe here.

W2K8 R2 Hyper-V Live Migration with Exchange 2010, SQL 2008 R2, SCVMM, and EMC CLARiiON NQM

Longest title ever?  Thankfully I abbreviated SCVMM down from System Center Virtual Machine Manager.  Anyway…

Microsoft has announced their launch dates for Windows 7, Windows 2008 R2, and Exchange 2010.

EMC will be there to support them in many cities including Baltimore, NYC, Irvine, Raleigh, St Louis (to name a few).

I was asked to see if we could put together a quick demo showcasing some of the cool stuff we could do, and we hooked it up FAST.

My colleague Ryan Kucera and I teamed up on a quick little proof of concept showing a combination of dynamic storage and server load balancing.  In a little over a week (just before his next proof-of-concept build-out), we were able to crank out a demo that showcases:

  • System Center Virtual Machine Manager R2 (beta)
  • Hyper-V R2 Live Migration (not released yet)
  • Exchange 2010 (not released yet)
  • SQL 2008 R2 (not released yet)
  • CLARiiON Virtual Provisioning (creation of thin LUNs)
  • Storage IOPS thresholds (Navisphere Quality of Service Manager, aka NQM)

The setup of the demo was this:

You’re setting up your virtual servers on Hyper-V hosts and you’re moving things around pretty quickly…  You place two busy VMs on the same host.  Performance is bad.  You need to move the VMs without downtime – we use Windows 2008 R2 Live Migration to show this.  Then you notice that, because we’re using CLARiiON Virtual Provisioning and thin LUNs for simplified management, multiple heavily utilized LUNs belonging to different VMs are competing with each other on the same set of disks.  No problem.  NQM lets you place a threshold on a LUN (like the 500 IOPS max for SQL 2008 R2 in the video) so that others (like the standalone Exchange 2010 VM in the video) get more IOPS to service more requests.
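
If the idea of an IOPS ceiling feels abstract, here’s a toy sketch in Python – nothing to do with how NQM is actually implemented, and every number except the 500 IOPS cap is invented – that shows why throttling the SQL LUN leaves more headroom for the Exchange VM on the same disks:

    def split_iops(total, demands, caps=None):
        """Toy model: share `total` IOPS among competing workloads, honouring optional caps."""
        caps = caps or {}
        # Start with a simple proportional share of the spindles...
        share = {w: total * d / sum(demands.values()) for w, d in demands.items()}
        # ...clamp any capped workload, never granting more than it asked for...
        granted = {w: min(s, caps.get(w, s), demands[w]) for w, s in share.items()}
        # ...and hand the freed-up IOPS to the uncapped neighbours, up to their demand.
        freed = total - sum(granted.values())
        for w in granted:
            if w not in caps:
                bump = min(demands[w] - granted[w], freed)
                granted[w] += bump
                freed -= bump
        return granted

    demands = {"SQL 2008 R2": 1500, "Exchange 2010": 900}        # offered load in IOPS (made up)
    print(split_iops(1800, demands))                             # contention: both fall short
    print(split_iops(1800, demands, caps={"SQL 2008 R2": 500}))  # cap SQL; Exchange gets its full 900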

Too many people don’t know most EMC storage devices can do this (in both physical and virtual environments).

But now you do.

(looking for higher resolution on the video – click here)

Iomega StorCenter ix4-200d NAS and iSCSI Storage Array

I hear stories all the time about people who became celebrities and how often people from their past come out of the woodwork to pester them.

That’s how I feel about Iomega!

They have come FAR beyond those annoying Zip drives that I’d dismissed in my head and are delivering the most advanced storage arrays in their class, complete with dual GigE, iSCSI support, VMware certification, and Windows Server 2003/2008 HCL certification.  It could be argued that the Iomega division is the most innovative group within EMC.  And dammit, now that they are the most popular group in EMC, they won’t return my calls (because I am pestering them for a freebie).

What is this ix4-200d that I desire?

Simply put… a 4-drive desktop storage unit in 2, 4, and 8 TB options – starting at $699.99 with an AMAZING number of options.  I’m quite proud (and surprised) that my company did this as we go further and further down-market into the SMB and prosumer space.

Cool features for small businesses and networking:

  • Dual GbE connection
  • Easy file sharing
  • iSCSI block access
  • Multiple RAID configurations
  • UPS support
  • Print serving
  • Folder quotas
  • Device-to-device replication (yes, it’s true – one in your basement, another in a buddy’s basement, and replicate ’em!)
  • User-replaceable drives for business continuity and disaster recovery
  • Active Directory support
  • Remote access from the web

Cool features for home users:

Who might want this?

1. Small businesses who need advanced capabilities without a large budget.  As somebody said, even your dentist needs their files kept intact, protected, and replicated.  Solopreneurs, partnerships, and small businesses will love it.

2. Prosumer types who need to play with the latest and greatest technologies but don’t have the budget for something large scale.  They will put this on their cube desk, in their office, or in their own basement.  Not as noisy as pulling in an AX4-5 🙂

I think it’s cool and I’m going to do my best to get Jay and Marc to send me a freebie.  Then I can finally stop pestering them (now that they are celebrities).

More coverage:

http://chucksblog.emc.com/chucks_blog/2009/08/i-love-a-good-disruption.html

http://virtualgeek.typepad.com/virtual_geek/2009/08/and-now-something-just-awesome-for-small-vmware-shops.html

http://blog.fosketts.net/2009/08/27/iomega-ix4-200d/

The Hyper-V Blue Screen Video Drama

Act 1.  The Posting

A VMware employee posts a video of a blue screen in a Hyper-V VM that takes down the whole physical box.

It was available here.

Act 2.  The Revolt

The VM community rises up and demands facts.  Read the comments here.

Microsoft gives some facts.

The video is largely discredited as FUD, without much supporting detail.

Scott Drummonds apologizes.

Microsoft piles on: here, here and here.

Act 3.  The Aftermath

Bruce Herndon from VMware posts detailed results from the testing. Summary here. Very interesting.  Maybe it wasn’t FUD?   How will the saga end?

Oh, and here’s a Microsoft “myth-busting video”.

I’d say we’re starting to see the beginnings of a not-so-peaceful coexistence.  These FUD battles are only part of a larger war that could play out over the next 5–10 years.  No one knows where things will end up, but one thing we can be sure of: it sure is fun to watch!

[Update June 15th 2009: Microsoft cannot get the parent partition to crash; however, the claim of 750,000 downloads and fastest-growing hypervisor could be seen as hyperbole – does that include downloads suggested through Windows Update?]

Please take my 4-question, anonymous survey.

How to Build an Efficient Application Infrastructure through Virtualization

I couldn’t figure out how to embed this into my blog, but it’s a great, short video that shows how EMC is working with Microsoft to virtualize applications like Exchange, SQL, and SharePoint.  It’s only ten minutes long and showcases one of EMC’s great technologists, Brian Martin, as he speaks with Microsoft’s Jim Schwartz.