Merrill Aldrich (SQL DBA) has written a great post that highlights some excellent points to consider when choosing storage for SQL databases – or for any mission-critical application, for that matter.
His rules are solid advice that I would pass along to any application or database owner whose data could end up on a storage array or a storage area network (SAN).
Rule 1: There is no Magic
Basic idea: For the most part, storage is a bunch of spinning disks. Sure, some advanced features enable impressive capabilities, but try not to overthink it.
Rule 2: Performance Costs More than Space
Basic idea: Always size for performance first – make sure you have enough spinning disks to service your workload – and then make sure you have enough capacity.
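To make "size for performance" concrete, here is a minimal back-of-the-envelope sketch. The per-disk IOPS figure and RAID write penalties are illustrative assumptions of mine, not numbers from the post – plug in your own workload and hardware figures.

```python
import math

# Illustrative RAID write penalties (assumption): each logical write
# costs this many back-end IOs on the spindles.
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def disks_needed(read_iops, write_iops, raid="raid10", per_disk_iops=180):
    """Return the spindle count needed to service the workload.

    per_disk_iops: a conservative figure for a 10K RPM drive (assumption).
    """
    backend_iops = read_iops + write_iops * RAID_WRITE_PENALTY[raid]
    return math.ceil(backend_iops / per_disk_iops)

# Example workload: 3,000 reads/s and 1,000 writes/s.
print(disks_needed(3000, 1000, raid="raid10"))  # 28 spindles
print(disks_needed(3000, 1000, raid="raid5"))   # 39 spindles
```

Notice that the spindle count falls out of the workload, not the capacity: 28 spindles might give you far more space than the database needs, and that's the point of the rule.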
Rule 3: Yes, Direct Attached Storage is Cheaper … But
Basic idea: You can get a good deal on 2TB SATA drives, drop them into a cheap disk enclosure, and call it a storage array. But you lose the flexibility that comes with networked storage (SAN): rebalancing workloads, reclaiming underutilized space, and provisioning it to another server. DAS has its place, but make sure you aren't making a cost-based decision that doesn't suit your goals or your company's goals.
Rule 4: You Need a Good Relationship with the SAN Admin
Direct Quote: “I’ve blogged about this before, but suffice it to say that bad communication with the SAN admin = FAIL.” SQL Server often has unique and demanding IO requirements that don’t go away just because you have a fancy array. You have to be able to work that out with the storage admins, if you have them, or the vendor, if you are in a smaller shop. Together you will have to talk through the need to separate logs, data and backups, and what the performance profile of each “virtual” disk system needs to be, backed by perf counter data, to prevent the SAN nightmare: “We spent our $5,000,000 and the VP wants to know why it’s SLOW.”
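That conversation with the SAN admin goes better when you arrive with numbers. As a hedged sketch of what "backed by perf counter data" might look like, here is a small script that summarizes per-volume read-latency samples (e.g. exported from Performance Monitor's "Avg. Disk sec/Read" counter). The volume names, sample values, and 10 ms threshold are illustrative assumptions, not from the post.

```python
import statistics

# Hypothetical latency samples per logical volume, in seconds per read
# (the kind of data you would export from perfmon over a busy period).
samples = {
    "E: (data)":    [0.004, 0.006, 0.005, 0.007],
    "L: (logs)":    [0.002, 0.001, 0.002, 0.003],
    "B: (backups)": [0.020, 0.035, 0.030, 0.025],
}

def profile(latency_samples, threshold_s=0.010):
    """Return {volume: (avg_latency, over_threshold)} for each volume."""
    report = {}
    for volume, values in latency_samples.items():
        avg = statistics.mean(values)
        report[volume] = (avg, avg > threshold_s)
    return report

for volume, (avg, slow) in profile(samples).items():
    flag = "REVIEW" if slow else "ok"
    print(f"{volume}: {avg * 1000:.1f} ms avg read latency [{flag}]")
```

A per-volume summary like this turns "it feels slow" into "the backup volume averages 27 ms per read while data and logs are under 6 ms" – exactly the kind of evidence that keeps the $5,000,000 conversation productive.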
Please check out his article and let me know what you think.