I work on the Proven Solutions team at EMC. Our entire group recently got involved in testing the new VMAX, and in the Microsoft arena there were a lot of meetings and a lot of speculation about what we should test. It certainly consumed the free time of a lot of people…
If you work within a company, you probably know there are dreamers and realists on any project. For this one, the dreamers wanted an all-in-one solution that would showcase the new platform's features while consolidating multiple mixed workloads without the array even breathing heavy. The realists were quoted as saying popular phrases such as “scope creep” and “timelines.” Well, thank goodness, the dreamers won.
Once the scope of the all-in-one (SQL, SharePoint, Exchange) project was decided, Project (insert code name) got many teams working together in a way that I think is fairly unique in the industry. Just look at some of the MSIT whitepapers. Look at HP’s papers. Look at any company and you find islands of documentation and knowledge. A paper by the Exchange team for Exchange people only. Or a deep SQL paper that about one guy in your company can understand. A SharePoint backup guide that only a few people will ever look at. Sure, these papers provide unique material to a focused audience, but it also means the authors likely don’t have a clue about what happens outside their little area of focus. None of these papers gives you the perspective that ALL of your Windows servers can be, and should be, closer together.
Having your Windows environment closer probably means its stored data is closer, which means data backups and replication are probably going to be easier to think about. It means the SQL gal can tell the Exchange guy what she thinks about log shipping, based on her experience, and have an intelligent conversation about it. It means the teams can work together like a plumber, an electrician, and a framer – sticking to their core area but retaining a surprising amount of knowledge about the big picture. This gets them thinking like architects or CIOs, not like single-product people.
I wrote a fairly well-received post back in December about consolidation and de-consolidation, and the point remains true today for your company, or even for your home.
When your data is in close proximity to you, and in fewer places – it’s easier to manage. When your data is distributed and in more places – it’s harder to manage.
Unless you have a really intelligent system to distribute your information ($), then this will always remain true.
So instead of doing configurations for SQL, and then for SharePoint, and then for Exchange – we worked together to build a mixed workload that showcased the standard division of control and priority settings of the VMAX, but also utilized the newer features that enable easier storage allocations, online LUN migration, and thin provisioning. We did this all on the same storage hardware – the VMAX.
We took a combination of Exchange 2007, SQL 2008, and SharePoint 2007 and as usual beat the hell out of them until we understood the thresholds and the limits. We used ESX 3.5 (ESX 4.0 would have been new and might have presented delays at this scale) to virtualize everything – including all of the SQL databases. We’ve been a little conservative on this in the past, but we were happy to find performance exceeded our expectations.
We tried a few unique cases:
- Live Migrations of SharePoint Web Front End – under 3 minutes without disruption. Check.
- Live Migration of a Heavy (75,000 user) SQL environment – 17 minutes without disruption or downtime. Check.
- Simulated power failures – all servers back up and running in under 6 minutes. Check.
- Migrating a SQL database from RAID1 to RAID5 while the application stays online and processing requests. Check.
- Utilizing Replication Manager to clone each copy of the production database and logs. Check.
- Adding Kroll Ontrack PowerControls to be able to get file-level restore granularity from SQL or Exchange or SharePoint databases. Check.
Using standard performance testing tools for Exchange (LoadGen/JetStress), SQL (TPC-E workload) and SharePoint (Knowledge Lake VSTS scripts), we achieved a consolidated, virtualized workload that can perform, scale, and move around the system as needs change over time.
What about Hyper-V?
Well, don’t get confused – although much of the testing we did was based on VMware, there is no reason the functionality embedded in this new platform can’t be leveraged with Hyper-V. The V-Max is an excellent platform for simplified provisioning of your virtualized Microsoft environment – whether the hypervisor is Hyper-V or VMware. In my role, I really don’t care. We do what our customers ask for.
Virtual Servers and Virtual Storage
When I say virtual servers need virtual storage – there are a few key enabling technologies that come to mind that are new to the V-Max platform, and they are quickly becoming the new requirements for today’s datacenter:
Autoprovisioning Groups—Autoprovisioning Groups provide an easier, faster way to provision storage in Symmetrix V-Max arrays. In virtual server environments, applications running on V-Max arrays require a fault-tolerant environment with clustered servers, as well as multiple paths to devices for guest virtual machines (VMs). Autoprovisioning Groups were developed to make storage allocation easier and faster, especially with these types of configurations.
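To make the grouping idea concrete, here is a minimal conceptual sketch in Python – not actual Symmetrix CLI syntax, and all class and variable names are hypothetical. The idea being modeled: a masking view ties together an initiator group (host HBAs), a port group (array front-end ports), and a storage group (devices), so adding one device to the storage group exposes it down every host/port path in one step instead of masking it host-by-host:

```python
# Conceptual model of autoprovisioning groups.
# Hypothetical names -- this is NOT Symmetrix CLI syntax.

class MaskingView:
    """Ties an initiator group (hosts), a port group (array ports),
    and a storage group (devices) into one provisioning unit."""
    def __init__(self, initiators, ports, devices):
        self.initiators = set(initiators)  # host HBA WWNs
        self.ports = set(ports)            # front-end array ports
        self.devices = set(devices)        # LUNs in the storage group

    def add_device(self, dev):
        # One operation exposes the device to every host and path
        # in the view, instead of a masking step per host per path.
        self.devices.add(dev)

    def visible_paths(self):
        # Every (initiator, port, device) combination is provisioned.
        return {(i, p, d) for i in self.initiators
                          for p in self.ports
                          for d in self.devices}

view = MaskingView(["hba_a", "hba_b"], ["fa_7e", "fa_8e"], ["lun_01"])
view.add_device("lun_02")
print(len(view.visible_paths()))  # 2 hosts x 2 ports x 2 devices = 8 paths
```

The point of the model: the cluster/multipath configurations mentioned above multiply the number of masking operations, and grouping collapses them back to one.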
Advanced Tiered Storage Management—The Virtual Provisioning component of this package allows you to present a large amount of capacity to a host and then consume space only as needed from a shared pool. This improves total cost of ownership (TCO) by reducing initial over-allocation of storage capacity and simplifies management by reducing the steps required to support growth.
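A small sketch of the thin-provisioning behavior described above, as an illustrative Python model (extent size, names, and numbers are hypothetical): the host sees a large device, but physical extents are consumed from the shared pool only when an extent is first written:

```python
# Conceptual model of virtual (thin) provisioning.
# Hypothetical names and sizes -- illustration only.

class ThinPool:
    """Shared pool of physical extents backing many thin devices."""
    def __init__(self, physical_extents):
        self.free = physical_extents

    def allocate(self, n):
        if n > self.free:
            raise RuntimeError("pool exhausted -- add physical capacity")
        self.free -= n

class ThinDevice:
    def __init__(self, pool, presented_extents):
        self.pool = pool
        self.presented = presented_extents  # capacity the host sees
        self.allocated = set()              # extents actually backed

    def write(self, extent_index):
        # Physical space is consumed only on first write to an extent.
        if extent_index not in self.allocated:
            self.pool.allocate(1)
            self.allocated.add(extent_index)

pool = ThinPool(physical_extents=1000)
dev = ThinDevice(pool, presented_extents=10000)  # 10x oversubscribed
for i in range(250):
    dev.write(i)
print(len(dev.allocated), pool.free)  # 250 written extents, 750 still free
```

This is where the TCO claim comes from: the host was presented ten times the physical capacity, but the pool only shrinks as data is actually written, so growth is handled by adding capacity to the pool rather than re-provisioning each host.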
Virtual LUNs—Virtual LUN technology enables data migration within an array without host or application disruption. Virtual LUN brings a tiered storage strategy to life by easily moving information throughout the storage system as its value changes over time. It can assist in system reconfiguration, performance improvement, and consolidation efforts while helping maintain vital service levels.
I’ll post some more on the specifics of the testing we performed over the next couple of days and cap it off with the Reference Architecture and WhitePaper for this solution.