
database - Third-party SSD solutions in ProLiant Gen8 servers




I was wondering if anyone had any specific experience using Intel DC S3700 SSDs (or similar) in HP (DL380p) Gen8 servers?



I'm upgrading a set of database servers that use direct-attached storage. We typically use HP-branded everything in our server configs, and beyond a few SSD-equipped desktops (all of which have worked flawlessly), I have not otherwise used SSDs - certainly not in a server.



The servers we're upgrading run SQL Server 2005 on Windows; we're moving to SQL Server 2012. The current boxes host a single 200GB database on DL370 G6 servers provisioned with 72GB 15K SFF drives in RAID 1+0 as follows: OS (2 spindles), tempdb (4 spindles), t-logs (8 spindles), data (20 spindles). Performance is not an issue (CPU load is typically 20%, peaking at 30%; disk queue length is typically 1). The data-volume disks run in MSA50s off a P800 - so there's probably 5K IOPS there, tops. The hardware is approaching 4 years old, so it's time for a refresh.



Data usage, as reported by the individual hard disks, shows write volume of < 100TB since deployment on the data volume; < 10TB write on the transaction log volume; and ~ 1TB on tempdb.



That's the use case. Now consider a new, identical disk subsystem. It's going to run ~ $15K per server (34x 15K HDD @ $250, plus 2x D2700 shelves and a Smart Array P421 for the external storage).




Consider a similar SSD deployment, say 6x 200GB SSD for the data volume, and 2 each (100GB) for OS, tempdb, and logging. Perhaps overkill, but using Intel DC S3700 for all with a second array card brings me in around $5K per server. Plus, it fits in one 2U box (use the expansion cage on the DL380p) and saves several hundred dollars in electricity every year. With the increased SSD performance, this might even cover some sloppy queries ;-).
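Given the write volumes reported above, a back-of-envelope endurance check suggests the S3700 has enormous headroom here. This is only a sketch: the 10 drive-writes-per-day over 5 years figure is Intel's advertised rating for the DC S3700 (confirm against the datasheet), and the per-drive write share assumes even striping across the proposed 6-drive RAID 1+0 data volume.

```python
# Back-of-envelope endurance check for the proposed 6x 200GB S3700
# data volume. Assumption: Intel advertises the DC S3700 at 10 full
# drive writes per day for 5 years; confirm against the datasheet.
drive_capacity_tb = 0.2
dwpd, rated_years = 10, 5
endurance_per_drive_tb = drive_capacity_tb * dwpd * 365 * rated_years

# Observed workload: < 100 TB written to the data volume in ~4 years.
observed_writes_tb = 100

# RAID 1+0 across 6 drives (3 mirrored pairs): every logical write
# lands on 2 drives, so each drive sees ~1/3 of the volume's writes.
per_drive_writes_tb = observed_writes_tb * 2 / 6

headroom = endurance_per_drive_tb / per_drive_writes_tb
print(f"~{per_drive_writes_tb:.0f} TB per drive in 4 years; "
      f"endurance headroom ~{headroom:.0f}x")
```

Even treating the < 100 TB figure as a hard 100 TB, each drive would see roughly 33 TB of writes in four years against a rated endurance in the petabyte range - two orders of magnitude of margin.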



An equivalent "no-worries" HP SSD solution is going to run ~ $10.5K: twice the price, with a shorter warranty and lower endurance, but with guaranteed performance and manageability.



Certainly, there are loads of in-between solutions that could work. I'm also quite aware of the trade-offs between vendor-supported and 3rd-party solutions. What I don't have is experience integrating these specific products to help quantify those trade-offs. I'm hoping someone out there does, and is willing to share their experience.



Questions that come to my mind are:



Does the S3700 play well in the ProLiant environment with the Smart Array P42x/P822 cards?

If using the S3700, would there be an advantage to using a 3rd party card, say the LSI 9270-8i?
How easily are firmware upgrades and management alerts handled with the third-party solution I've outlined?



If there are particular issues with the assembly, how have you worked around them - assuming you have?



With the changes that SSDs have introduced into the storage arena, storage solutions are far less straightforward than they were even a few years ago. I'm sure they will be very different in another few years, and we had expected to wait another cycle before seriously considering SSDs in any server application.



Before I head too far down this road, is there anyone who would share their relevant experience with any of this? Please tell us why we're smart, crazy, or something in between.


Answer



Here's an update to summarize my takeaways from this question. Thanks for the contributions!




It's fair to say that the original question presumes an OEM storage solution (HP SSDs in this case) provides a supported or "guaranteed" working solution in terms of component compatibility and system performance. This obviously comes at a premium price, and the perceived value informs how reasonable the premium is.



While I had really discarded the notion of using SSDs in this hardware refresh, the press on the Intel S3700, specifically, made an SSD solution attractive enough to consider. Looking at the equivalent HP products, I found (1) they aren't currently available, and (2) the expected price premium is 2.4x the Intel product. So the question becomes: how much effort would it take to integrate and validate the Intel solution? Answering that leads to a very product-specific solution that runs counter to the aims of Server Fault, so I'll generalize my thinking process using the answers provided:




  1. Whether vendor-integrated or DIY, there are still a lot of variables in hanging SSDs behind RAID controllers optimized for spinning disks. HP recommends assorted tweaks for SSD use, and the HP SmartPath software that ewwhite mentioned (Gen8 RAID + Windows only) basically short-circuits much of the RAID firmware when using SSDs. HP's additional "protectionism" with the Gen8 carriers, and managing firmware updates for 3rd party SSDs (that I would expect to be more critical than for HDDs) also makes this all just look a little too immature (or too management intensive) for prime time in a complex setup.


  2. Before I ran back to spinning disks, though, I took another look at the FusionIO product, as Tom O'Connor suggested. Since performance isn't really an issue for us, the biggest benefit is that it is an integrated storage module, which makes compatibility and configuration much more straightforward. Another important point is that HP OEMs these cards, so you can get "genuine" HP product in this line, and integration becomes even less of an issue. Furthermore, and in stark contrast to the SATA/SAS SSDs I was considering, HP's advertised (online) prices are actually better than FusionIO's. Go figure.





Re-thinking the deployment with this post in mind, I considered building availability nodes with single FusionIO cards. This took the solution cost from "can't consider" down to "let's investigate further." Finally, when the actual quote came in at a better-than-expected level, I was sold.



So the bottom line is that we have two Gen8 servers sporting HP-branded FusionIO cards running in the sandbox. Endurance will be far beyond our expected use, the cost was lower than for a 15K SAS disk solution, and we'll substantially reduce power consumption and rack space. The redundancy model is different, sure, but the only thing I expect people will miss is all the blinking LEDs.
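For what it's worth, here's the rough power math behind that claim. The wattages and electricity price are my assumptions, not vendor datasheet figures: roughly 8 W per 15K SFF drive, roughly 25 W per PCIe flash card, and electricity at $0.12/kWh.

```python
# Rough annual electricity savings: the old 34-spindle data subsystem
# vs. two PCIe flash cards. All wattages and the kWh price below are
# assumptions, not datasheet figures.
hdd_watts = 8.0        # typical 15K SFF SAS drive under load (assumed)
card_watts = 25.0      # typical PCIe flash card (assumed)
kwh_price = 0.12       # USD per kWh (assumed)
hours_per_year = 24 * 365

old_kwh = 34 * hdd_watts * hours_per_year / 1000
new_kwh = 2 * card_watts * hours_per_year / 1000

savings_usd = (old_kwh - new_kwh) * kwh_price
print(f"~${savings_usd:.0f}/year before counting cooling overhead")
```

Under those assumptions the drives alone come out a couple of hundred dollars a year ahead, and the real figure is higher once you count the cooling load and the retired D2700 shelves themselves.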



My original thinking regarding SSDs for a mission-critical database system was to wait a few years, as there will be many more mature and proven solutions at better price points. No doubt that will still be the case, but I was surprised to find something today that looks like it will do the job well.

