I was wondering if anyone had any specific experience using Intel DC S3700 SSDs (or similar) in the HP (DL380p) Gen8 servers?
I'm upgrading a set of database servers that use direct-attached storage. We typically use HP-branded everything in our server configs, and beyond a few SSD-equipped desktops (all of which have worked flawlessly), I have not otherwise used SSDs, certainly not in a server.
The servers we're upgrading run SQL Server 2005 on Windows; we're moving to SQL Server 2012. The current boxes host a single 200GB database on DL370 G6s provisioned with 72GB 15K SFF drives in RAID 1+0 as follows: OS (2 spindles), tempdb (4 spindles), t-logs (8 spindles), data (20 spindles). Performance is not an issue (CPU load is typically 20%, peaking at 30%; disk queue lengths are typically 1). The data-volume disks run in MSA50s off a P800, so there's probably 5K IOPS there, tops. The hardware is approaching 4 years old, so it's time for a refresh.
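For context, here's the back-of-the-envelope arithmetic behind that IOPS figure; the per-spindle number is a common rule of thumb for 15K drives, not something I've measured:

```python
# Rough IOPS ceiling for the 20-spindle data volume.
# ~200 random IOPS per 15K spindle is a rule of thumb, not a measurement.
SPINDLES = 20
IOPS_PER_15K_DISK = 200

read_ceiling = SPINDLES * IOPS_PER_15K_DISK
# RAID 1+0 mirrors every write, so each logical write costs two physical IOs.
write_ceiling = read_ceiling // 2

print(f"Read ceiling:  ~{read_ceiling} IOPS")   # ~4000
print(f"Write ceiling: ~{write_ceiling} IOPS")  # ~2000
```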
Write volume, as reported by the individual hard disks, is < 100TB since deployment on the data volume, < 10TB on the transaction log volume, and ~1TB on tempdb.
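For scale, a quick sketch of how those write rates compare against a single S3700's rating, assuming Intel's published endurance spec of 10 drive writes per day for 5 years (so ~2 TB/day for the 200GB model):

```python
# Compare observed write rates to Intel's published endurance rating
# for the DC S3700 (10 drive-writes/day for 5 years; I'm assuming the
# 200GB model is therefore rated for ~2 TB written per day).
YEARS_DEPLOYED = 4
volumes_tb_written = {"data": 100, "t-logs": 10, "tempdb": 1}

RATED_TB_PER_DAY = 0.2 * 10   # 200GB drive x 10 DWPD

for name, tb in volumes_tb_written.items():
    gb_per_day = tb * 1000 / (YEARS_DEPLOYED * 365)
    pct = 100 * (gb_per_day / 1000) / RATED_TB_PER_DAY
    print(f"{name:7s} ~{gb_per_day:5.1f} GB/day "
          f"({pct:.1f}% of one drive's rating)")
```

At roughly 68 GB/day on the busiest volume, we'd be using a few percent of one drive's rated endurance, before even spreading writes across an array.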
That's the use case. Now consider a new, identical disk subsystem. It's going to run ~$15K per server (34x 15K HDD @ $250 + 2x D2700 shelves + a Smart Array P421 for the external storage).
Consider a similar SSD deployment: say, 6x 200GB SSDs for the data volume, and 2x 100GB each for OS, tempdb, and logging. Perhaps overkill, but using the Intel DC S3700 throughout with a second array card brings me in around $5K per server. Plus, it fits in one 2U box (using the expansion cage on the DL380p) and saves several hundred dollars in electricity every year. With the increased SSD performance, this might even cover some sloppy queries ;-).
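To put rough numbers on the cost and power deltas (build prices from above; the per-drive wattages and electricity rate are my assumptions, and cooling overhead is ignored):

```python
# Upfront and running-cost comparison of the two builds.
HDD_BUILD = 15_000      # 34x 15K SFF + 2x D2700 + P421, from the quote above
SSD_BUILD = 5_000       # 12x Intel DC S3700 + second array card

WATTS_PER_HDD = 8       # assumed active draw per 2.5" 15K drive
WATTS_PER_SSD = 3       # assumed active draw per S3700
DOLLARS_PER_KWH = 0.12  # assumed utility rate; cooling overhead ignored

saved_watts = 34 * WATTS_PER_HDD - 12 * WATTS_PER_SSD
saved_kwh_per_year = saved_watts * 24 * 365 / 1000

print(f"Upfront savings: ${HDD_BUILD - SSD_BUILD:,}")
print(f"Power savings:   ~{saved_kwh_per_year:.0f} kWh/yr "
      f"(~${saved_kwh_per_year * DOLLARS_PER_KWH:.0f}/yr)")
```

That lands in the $200-250/year range at the plug; once you count cooling, "several hundred dollars" per server seems about right.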
An equivalent "no-worries" HP SSD solution is going to run ~$10.5K: twice the price, with a shorter warranty and lower endurance, but guaranteed performance and manageability.
Certainly, there are loads of in-between solutions that could work. I'm also quite aware of the vendor-supported vs. 3rd-party trade-offs. What I don't have is experience integrating these specific products to help quantify those trade-offs. I'm hoping someone out there does, and is willing to share their experience.
Questions that come to my mind are:
Does the S3700 play well in the ProLiant environment with the Smart Array P42x/P822 cards?
If using the S3700, would there be an advantage to a 3rd-party card, say the LSI 9270-8i?
How easily are firmware upgrades and management alerts handled with the third-party solution as I've outlined it?
If there are particular issues with such an assembly, how have you worked around them (assuming you have)?
With the changes that SSDs have introduced into the storage arena, storage solutions are far less straightforward than they were even a few years ago. I'm sure they will be very different again in another few years, and we had expected to wait another cycle before seriously considering SSDs in any server application.
Before I head too far down this road, is there anyone who would share their relevant experience with any of this? Please tell us why we're smart, crazy, or something in between.
Answer
Here's an update to summarize my takeaways from this question. Thanks for the contributions!
It's fair to say that the original question presumes an OEM storage solution (HP SSDs in this case) provides a supported or "guaranteed" working solution in terms of component compatibility and system performance. This obviously comes at a premium price, and the perceived value informs how reasonable the premium is.
While I had largely discarded the notion of using SSDs in this hardware refresh, the press on the Intel S3700 specifically made an SSD solution attractive enough to consider. Looking at the equivalent HP products, I found that (1) they aren't currently available, and (2) the expected price premium is 2.4x the Intel product. So the question becomes: how much effort would it take to integrate and validate the Intel solution? Answering that leads to a very product-specific discussion that runs counter to the aim of Server Fault, so I'll generalize my thinking process using the answers provided:
Whether vendor-integrated or DIY, there are still a lot of variables in hanging SSDs behind RAID controllers optimized for spinning disks. HP recommends assorted tweaks for SSD use, and the HP SmartPath software that ewwhite mentioned (Gen8 RAID + Windows only) essentially short-circuits much of the RAID firmware when SSDs are in use. Add HP's "protectionism" with the Gen8 drive carriers and the burden of managing firmware updates for 3rd-party SSDs (which I'd expect to be more critical than for HDDs), and the whole approach looks a little too immature, or at least too management-intensive, for prime time in a complex setup.
Before I ran back to spinning disks, though, I took another look at the Fusion-io product, as Tom O'Connor suggested. Since performance isn't really an issue for us, the biggest benefit is that it's an integrated storage module, which makes compatibility and configuration much more straightforward. Another important point is that HP OEMs these cards, so you can get "genuine" HP product in this line, making integration even less of an issue. Furthermore, and in stark contrast to the SATA/SAS SSDs I was considering, HP's advertised (online) prices are actually better than Fusion-io's. Go figure.
Rethinking the deployment with this post in mind, I considered building availability nodes around single Fusion-io cards. That took the solution cost from "can't consider" down to "let's investigate further." Finally, when the actual quote came in at a better-than-expected level, I was sold.
So the bottom line is that we have two Gen8 servers sporting HP-branded Fusion-io cards running in the sandbox. Endurance will be far beyond our expected use, the cost was lower than a 15K SAS disk solution, and we'll substantially reduce power consumption and rack space. The redundancy model is different, sure, but the only thing I expect people will miss is all the blinking LEDs.
My original thinking on SSDs for a mission-critical database system was to wait a few years, when there would be many more mature, proven solutions at better price points. No doubt that will still be the case, but I was surprised to find something today that looks like it will do the job well.