Category Archives: Politics

There is no “DAS vs SAN” and Exchange 2010 Tested Solutions from EMC

It’s refreshing to see the most recent post on the Exchange team blog promoting real-world, tested Exchange 2010 configurations. EMC has two white papers, produced with Cisco and Brocade, highlighting the benefits of virtualizing Exchange for local and remote availability.  I’m sure EMC’s Dustin Smith will have more to say on this soon enough.

The program – led by Microsoft’s Rob Simpson – was a great idea that highlights both DAS and SAN connectivity options. In the Exchange world, we’ve seen plenty of documents and programs and marketing pushing anti-SAN approaches, to the dismay of many very smart folks we talk to in the data center. This program was a healthy dose of accepting reality by a very logical thinker within Microsoft.

But… there is no “DAS vs SAN.”  The DAS versus SAN debate is not a technology debate and it’s not a cost debate and it’s not a Microsoft versus storage vendors debate.    It’s a control issue.

Let me explain.

DEBATING COST AND HOW TCO IS ALWAYS WRONG

There have been a ton of articles written and TCO studies done showing TCO swinging in both directions.

When it’s a DAS discussion, TCO slides in favor of DAS.   Big surprise.  Here are a few reasons why these TCO models are usually wrong:

  • JBOD style DAS TCO calculations never take into account RAID protection – and I haven’t seen any large customer stop using RAID protection (mirroring or otherwise) on their crown jewels – their email systems.
  • DAS-skewed calculations also crank up the array price by using very old technology pricing…  so maybe we should be more transparent about pricing to show that these numbers are typically WAY off and don’t take into account list and street price factors.
  • They also assume the wrong disk configurations, which cranks up the perceived price of a storage array – the configurations they are matching against DAS typically use thickly provisioned Fibre Channel drives – way more than you need for Exchange 2010.  Thin provisioning lets you allocate storage capacity only as you need it and manage a simple pool of storage with multiple Exchange databases across it.  Large-capacity, lower cost/GB SATA drives often make up the bulk of EMC Exchange 2010 configs – unless you have a storage admin who just likes to put Tier 1 applications on FC as a rule (this does happen) and was not expressly told that Exchange 2010 should be put on SATA.
  • Also, the server-to-storage connection itself is wrong.  When I’ve seen competitive TCOs against EMC, I see FC connections, FC switches, FC cards … and heck, if I were selling against EMC I’d do the same.  But EMC can offer iSCSI, and sure, you can buy a lower cost iSCSI switch (but don’t get one that’s too cheap), and you can now get away with pure software iSCSI initiators in most circumstances these days (look out for embedded TOEs in the footnotes 🙂

However – when it’s a SAN discussion, we obviously suffer from similar blinders.  We don’t always know about the competitive storage option and what special sauce ingredient they might be using to make our TCO model look invalid.   The one thing we do is try to show TCO over a few years, and we don’t see much of that from DAS models (we stress operational cost savings; DAS models focus on short-term acquisition, or CAPEX, costs).

Since we are on the topic, the word SAN is often used incorrectly when companies compete against EMC – the world leader in storage hardware, software, and services.   Taken literally, DAS and SAN are only differences in connectivity.  DAS is simply direct attached storage – going straight from a server to a storage array without a switch in between.  A SAN is a storage area network, formed with server(s), switch(es), and storage array(s).   Used in this context, storage vendors like EMC love both DAS and SAN.  Although there are numerous benefits to having multiple servers share storage (for simplified management, protection, and virtualization), we can also let you connect directly from your server to an EMC array with your protocol and connection of choice.

While EMC does make the world’s best storage arrays – please do not think EMC = SAN and all SANs = expensive.  We have some amazing products in the lower-end price bands that can be configured “DAS style” to lower costs.  Also, thin provisioning is regularly used on the Exchange mailbox database volumes to decrease the initial storage outlay (running the numbers on this is quite easy:  5000 users at 5GB thick versus 5000 users thinly provisioned with only 500MB consumed.  The savings can be tremendous).
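Running the numbers on that example as a quick back-of-the-envelope sketch (the per-GB cost below is a made-up placeholder, not EMC pricing):

```python
# Thin vs. thick provisioning: initial capacity actually provisioned.
# Illustrative numbers only; cost_per_gb is a hypothetical placeholder.
users = 5000
thick_gb_per_user = 5.0      # full 5GB mailbox allocated up front
thin_gb_per_user = 0.5       # only ~500MB actually consumed at first
cost_per_gb = 2.00           # assumed $/GB for SATA capacity

thick_total_gb = users * thick_gb_per_user   # capacity bought day one, thick
thin_total_gb = users * thin_gb_per_user     # capacity bought day one, thin

savings_gb = thick_total_gb - thin_total_gb
savings_dollars = savings_gb * cost_per_gb

print(f"Thick: {thick_total_gb:,.0f} GB, Thin: {thin_total_gb:,.0f} GB")
print(f"Initial outlay deferred: {savings_gb:,.0f} GB (${savings_dollars:,.0f})")
```

At these assumed numbers, thin provisioning defers 22,500 GB of day-one capacity – you still grow into it, but you buy the disks when the mailboxes actually fill.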

THE TECHNOLOGY DEBATE

Technology debates are almost always irrelevant. It’s usually a mindset debate. What is the mindset of the decision maker in the company for their Exchange deployment?   Is the IT manager responsible for final decisions on the storage for Exchange, or does the storage manager make that call?  Or is the Exchange administrator empowered to make their own choices?  You get three different storage choices, depending on who decides:

The IT manager

wants something that has great price/performance – something that works, but with a reasonable price tag.  They like “standardized solutions.”  Virtualization integration has become big on this person’s list.  And they can’t forget the big cloud in the room either – hosted/public cloud options.

The storage manager

wants more of what they know – keep it simple and make it easy for them to manage growing volumes of storage.  Integration with other tools and virtualization is big for them too.  They are into the storage hardware and you need all their requirements met to make them happy.

The Exchange admin

will go with what they know.  After a few classes and conferences and blog posts, they learn a mantra of “go with DAS… not expensive SAN,” and they may associate EMC with SAN and forget that EMC storage arrays can be direct-connected as well…  We could make a voice-controlled storage robot that costs $10 and an Exchange admin still may not like it, as long as it’s controlled by someone else and has the letters EMC on it.

Our storage arrays keep getting better and better (like the storage-robot-like, zero-management, set-it-and-forget-it functionality in EMC’s Fully Automated Storage Tiering), but again, it’s not a technology debate…

It’s all about control.

For years, putting Exchange on high-end storage arrays was almost required, to get that many FC disks in one place.  Now that Exchange 2010 is here, anyone can use lower cost SATA drives.  It doesn’t mean they have to go with a direct attached deployment model…  If the Exchange administrator is in control, they will choose the deployment model prescribed to them again and again.  It’s not their fault, it’s just what they are told and it’s all they know to recommend.  Just about anyone can ride their bicycle to work – but do they?

The same exact discussion is also taking place with regards to public or private cloud for email…. who controls it?  You or the hosting provider?  And are you comfortable with that?   Some folks are, and some aren’t, and some never will be.

CONCLUSION

In the end, I am very happy to see an alternative approach with this new program.  I’m happy that we participated in the program and drove two successful solutions from it.  And I’m happy that we offer a choice to customers – whether it’s the Exchange admin, or the storage team, or the IT manager that wants to decide to control the storage direction for Exchange… we have a wide range of price points (wait until you see what we have coming next week) and allow any connectivity type our customers want.

I’m disappointed when Exchange 2010 deployment discussions turn into a vendor vs. vendor debate, when bad TCO data is shown, and when our own reps are not aware that Exchange should be on SATA and not FC drives in most cases.

It’s tough educating a lot of people at once and it’s even tougher to change their mind.  The best thing to do as companies is to work together and let our customers tell us where to go next.


ReBlog: SAN 101 for the DBA

Merrill Aldrich (SQL DBA) writes a great post that highlights some excellent points when thinking about your storage choice for SQL databases – or any mission critical applications for that matter.

His rules are solid pieces of advice that I would advocate to any application or database owner that has data that could potentially get put on a storage array or on the storage area network (SAN).

Rule 1: There is no Magic

Basic idea: For the most part, storage is a bunch of spinning disks.  Sure, some advanced features can enable advanced capabilities, but try not to overthink it.

Rule 2: Performance Costs More than Space

Basic idea: Always size based on performance – make sure you have enough spinning disks to service your workload and then, make sure you have enough capacity.
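That “performance first, then capacity” rule can be sketched as a quick calculation – count the spindles the workload’s IOPS demand (including a RAID write penalty), then check that the same disks also cover capacity. All figures below are illustrative assumptions, not vendor sizing guidance:

```python
import math

# Sketch: size a disk group for performance first, then check capacity.
# Every number here is an assumed, illustrative figure.
workload_read_iops = 1800
workload_write_iops = 600
raid_write_penalty = 2        # RAID 1/10: each host write becomes 2 disk writes
iops_per_disk = 80            # rough figure for a 7.2k RPM SATA drive
disk_capacity_gb = 2000
required_capacity_gb = 12000

backend_iops = workload_read_iops + workload_write_iops * raid_write_penalty
disks_for_perf = math.ceil(backend_iops / iops_per_disk)
disks_for_capacity = math.ceil(required_capacity_gb / disk_capacity_gb)

disks_needed = max(disks_for_perf, disks_for_capacity)
print(f"Performance needs {disks_for_perf} disks; capacity needs {disks_for_capacity}.")
print(f"Provision {disks_needed} disks.")
```

With these assumptions, capacity alone would be satisfied by 6 large SATA drives, but the IOPS load demands 38 – exactly the trap the rule warns about when you size on GB alone.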

Rule 3: Yes, Direct Attached Storage is Cheaper … But

Basic idea: You can get good deals on 2TB SATA drives and drop them into a cheap disk housing and call it a storage array.  But you could really lose out on the flexibility that comes with networked storage (SAN) to rebalance workloads, or to reclaim underutilized space and provision it to another server… DAS has its place, but make sure you aren’t making a cost-based decision that might not suit your goals or your company’s goals.

Rule 4: You Need a Good Relationship with the SAN Admin

Direct Quote: “I’ve blogged about this before, but suffice it to say that bad communication with the SAN admin = FAIL.” SQL Server often has unique and demanding IO requirements that don’t go away just because you have a fancy array. You have to be able to work that out with the storage admins, if you have them, or the vendor, if you are in a smaller shop. Together you will have to talk through the need to separate logs, data and backups, and what the performance profile of each “virtual” disk system needs to be, backed by perf counter data, to prevent the SAN nightmare: “We spent our $5,000,000 and the VP wants to know why it’s SLOW.”

Please check out this article and let me know what you think

The Hyper-V Blue Screen Video Drama

Act 1.  The Posting

VMware employee posts a video of a bluescreen in a Hyper-V VM which takes down the whole physical box.

It was available here.

Act 2.  The Revolt

The VM community rises up and demands facts.  Read comments here.

Microsoft gives some facts.

Video largely discredited as FUD without much detail.

Scott Drummonds apologizes.

Microsoft piles on: here, here and here.

Act 3.  The Aftermath

Bruce Herndon from VMware posts detailed results from the testing. Summary here. Very interesting.  Maybe it wasn’t FUD?   How will the saga end?

Oh, and here’s a Microsoft “myth-busting video“.

I’d say we’re starting to see the beginnings of a not-so-peaceful co-existence.   These FUD battles are only part of a larger war that could take place over the next 5 – 10 years.   No one knows where things will end up, but one thing that we can be sure of, it sure is fun to watch!

[Update June 15th 2009: Microsoft cannot get the parent partition to crash; however, the claim of 750,000 downloads and fastest-growing hypervisor could be seen as hyperbole – does that include downloads suggested through Windows Update?]

Please take my four-question, anonymous survey.

Former Microsoft Exec Tod Nielsen Joins VMware

When you build a business or a group, you tend to reach out to people you trust/respect and try to hire them.

In this case, VMware CEO Paul Maritz is reaching out to a person who has run a publicly traded company in the past (Borland).  Tod Nielsen will be brought on as the Chief Operating Officer (COO).

In time, maybe he will get his own Wikipedia page…

Microsoft Promotes Bob Muglia to President

Just in case you didn’t see this interesting take on the Muglia promotion from Steve Gillmor, writing for TechCrunchIT.

Muglia now commands a Microsoft unit with some 22% of the company’s $60 billion in revenue…

Steve Ballmer’s decision to solidify Muglia’s status after moving Muglia from a report headcount of 3,000 to 1 in 2001 also provides Chief Software Architect Ray Ozzie with a powerful ally…

Muglia’s strategic assets (significant revenue and the ability to survive in the nation state politics of Redmond’s Windows and Office groups) are complementary to Ozzie’s command of the direction the company must take to avoid being marginalized by Google and its disruptive advertising model.

Read the whole article here.

UPDATE 1/10/2011

http://www.betanews.com/joewilcox/article/End-of-an-era-Bob-Muglia-is-leaving-Microsoft/1294683968

Bob Muglia let go.

Tips for Dealing with Your Storage Admin

It happens every day.  Yet another SQL admin falls victim to the technical prejudice of those who provide their storage.  It’s a sad story seen every day throughout the developed world – at least in relatively well-funded companies.  UNIX guys make fun of Windows people. Mainframe guys refuse to believe Windows is even available.  Oracle guys say their DB is the king. The reality is that Windows servers are here to stay in today’s datacenter – whether in house or in the cloud is a story for the future.

My point is to reach out to all you application owners, and although I’m more of a storage guy, I do know a bit about applications and getting them to perform better.  I know more about how companies bicker internally after visiting many, many customers with EMC.  I give workshops to SQL, Exchange, and SharePoint app owners, and especially when the storage guy is sitting in the room, it seems like a therapy session.

Bad Performance = Bad Code or a Bad Config?

When it comes to SQL performance troubleshooting, a friend of mine always says, “it’s either bad code, or a bad config.” This is not entirely true – there are many more things that can cause slow performance on a server – but it’s how the situation typically plays out. The storage team says it’s something with SQL. The SQL admins revolt, insist it’s not their fault, and demand to see how the heck their storage was set up.  These two groups are often at war with each other!

So, my friends, I come in peace as a friendly mediator, and I’d offer you these tips:

  • Befriend a storage person. If you have to request storage from a storage team or a person, you should identify these people, seek them out, and take them out to lunch or something – it will make life a bit easier.
  • Learn about storage a little bit. It’s not all that hard, and the variables are outlined quite simply. Using these words can help you make a proper request for a storage allocation for your servers which of course are running your applications and databases.  If you need more, I’d suggest starting with a nice site called SearchStorage, which is part of a larger IT web network.
  • Before deploying anything on EMC gear, assume we have a piece of documentation which outlines the steps necessary to put SQL (or Exchange or SharePoint or whatever) on one of our SANs.  If we don’t; well, shame on us – and you should push a bit to see if we can whip something up quickly for you.  I am not saying RTFM, I am simply stating that we at least owe you the M part.
  • Ask about application integration of the in-house storage vendor. Now that you are buddies with someone on the storage team, sit down and ask them about the cool stuff that each storage platform that they run can provide for you like 1) app-consistent snapshots to slice your backup windows down to nothing or 2) app-consistent replication to make sure your app comes up with minimal pain (or data loss) after an outage.
  • Wait a Virtual Second. Well, if you are like the rest of the business world, you are probably looking to virtualize some percentage of your servers and applications – or at least specific ones – in your IT environment.  If the storage team has done this, you have to make sure you understand a bit about that too, because for better or for worse – it changes everything.
  • Get them to build a storage request form. Many companies are already doing this. Some as simple as your name, your email, your capacity, and your IOPS.  This is enough for most situations, but I would add RAID-type, latency, and LUN sizes if at all possible.  If you’ve done all of the above, and taken an appropriate performance measure from the items you are able to monitor, then you should know what specifics should be included in your storage request form (and of course who it goes to).
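A minimal storage request form along those lines could be sketched as a simple data structure – the field names and values here are illustrative, not any particular company’s template:

```python
from dataclasses import dataclass, field

# Sketch of a storage request form: the four basics, plus the
# extra fields suggested above (RAID type, latency, LUN sizes).
@dataclass
class StorageRequest:
    requester_name: str
    requester_email: str
    capacity_gb: int
    iops: int
    # Optional but worth adding, per the tips above:
    raid_type: str = "unspecified"
    max_latency_ms: float = 20.0
    lun_sizes_gb: list = field(default_factory=list)

    def summary(self) -> str:
        return (f"{self.requester_name}: {self.capacity_gb} GB, "
                f"{self.iops} IOPS, RAID {self.raid_type}, "
                f"<= {self.max_latency_ms} ms latency")

req = StorageRequest("Pat the DBA", "pat@example.com",
                     capacity_gb=4000, iops=3000,
                     raid_type="10", lun_sizes_gb=[500, 500, 3000])
print(req.summary())
```

Even on paper or in a ticketing system, capturing those seven fields up front saves the back-and-forth that usually happens after the LUNs are already carved.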

The more technical version of this needs to make its way onto this site soon, and that would have to include a quick review of the performance chain in the SAN world, and how to find the weakest link.  Stay tuned.