
storage - Are SSD drives as reliable as mechanical drives (2013)?




SSD drives have been around for several years now. But the issue of reliability still comes up.



I guess this is a follow-up to this question posted 4 years ago, and last updated in 2011. It's now 2013; has much changed? I'm looking for some real evidence, more than just a gut feel. Maybe you're using them in your DC. What's been your experience?



Reliability of ssd drives






UPDATE:




It's now 2016. I think the answer is probably yes (a pity they still cost more per GB though).



This report gives some evidence:



Flash Reliability in Production: The Expected and the Unexpected



And some interesting data on (consumer) mechanical drives:



Backblaze: Hard Drive Data and Stats



Answer



This is going to be a function of your workload and the class of drive you purchase...



In my server deployments, I have not had a properly-spec'd SSD fail. That's across many different types of drives, applications and workloads.



Remember, not all SSDs are the same!!



So what does "properly-spec'd" mean?



If your question is about SSD use in enterprise and server applications, quite a bit has changed over the past few years since the original question. Here are a few things to consider:





  • Identify your use-case: There are consumer drives, enterprise drives and even ruggedized industrial application SSDs. Don't buy a cheap disk meant for desktop use and run a write-intensive database on it.


  • Many form factors are available: Today's SSDs come as PCIe cards and as SATA and SAS drives in 1.8", 2.5", 3.5" and other variants.


  • Use RAID for your servers: You wouldn't depend on a single mechanical drive in a server situation. Why would you depend on a single SSD?


  • Drive composition: There are DRAM-based SSDs, as well as the MLC, eMLC and SLC flash types. The latter have finite lifetimes, but they're well-defined by the manufacturer; for example, you'll see daily write limits like 5 TB/day for 3 years (see the endurance arithmetic sketch after this list).


  • Drive application matters: Some drives are for general use, while others are read-optimized or write-optimized. DRAM-based drives like the sTec ZeusRAM and DDRDrive won't wear out. These are ideal for high-write environments and for fronting slower disks. MLC drives tend to be larger and optimized for reads. SLC drives have a better lifetime than MLC drives, but enterprise MLC really appears to be good enough for most scenarios.


  • TRIM doesn't seem to matter: Hardware RAID controllers still don't seem to fully support it. And most of the time I use SSDs, it's going to be on a hardware RAID setup. It isn't something I've worried about in my installations. Maybe I should?


  • Endurance: Over-provisioning is common in server-class SSDs. Sometimes this is done at the firmware level, and sometimes simply by partitioning the drive to leave unused space. Wear-leveling algorithms are better across the board as well. Some drives even report lifetime and endurance statistics (a sketch for reading those counters via SMART follows this list). For example, some of my HP-branded Sandisk enterprise SSDs show 98% life remaining after two years of use.


  • Prices have fallen considerably: SSDs hit the right price:performance ratio for many applications. When performance is really needed, it's rare to default to mechanical drives now.



  • Reputations have been solidified: e.g. Intel is safe but not high-performance. OCZ is unreliable. Sandforce-based drives are good. sTec/STEC is extremely solid and is the OEM for a lot of high-end array drives. Sandisk/Pliant is similar. OWC has great SSD solutions with a superb warranty for low-impact servers and for workstation/laptop deployment.


  • Power-loss protection is important: Look at drives with supercapacitors/supercaps to handle outstanding writes during power events. Some drives boost performance with onboard caches or use them to reduce wear; supercaps ensure that those cached writes are flushed to stable storage when power is lost.


  • Hybrid solutions: Hardware RAID controller vendors offer the ability to augment standard disk arrays with SSDs to accelerate reads/writes or serve as intelligent cache. LSI has CacheCade and its Nytro hardware/software offerings. Software and OS-level solutions also exist to provide local cache on application, database or hypervisor systems. Advanced filesystems like ZFS make very intelligent use of read- and write-optimized SSDs; ZFS can be configured to use separate devices for secondary caching and for the intent log, and SSDs are often used in that capacity even for HDD pools.


  • Top-tier flash has arrived: PCIe flash solutions like FusionIO have matured to the point where organizations are comfortable deploying critical applications that rely on the increased performance. Appliance and SAN solutions like RamSan and Violin Memory are still out there as well, with more entrants coming into that space.
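As a concrete illustration of the endurance figures mentioned in the list (the "5 TB/day for 3 years" style of rating), here is a minimal Python sketch that converts a vendor's daily write limit into total terabytes written (TBW) and drive writes per day (DWPD). The capacity and limits used are hypothetical examples, not taken from any particular datasheet.

    # Illustrative endurance arithmetic; the figures below are hypothetical,
    # not taken from a specific drive's datasheet.
    def endurance_stats(capacity_tb: float, daily_write_limit_tb: float, warranty_years: float):
        """Convert a vendor daily write limit into TBW and DWPD figures."""
        tbw = daily_write_limit_tb * 365 * warranty_years   # total TB written over the warranty
        dwpd = daily_write_limit_tb / capacity_tb           # drive writes per day
        return tbw, dwpd

    if __name__ == "__main__":
        # Example: a 1.6 TB enterprise SSD rated for 5 TB/day over a 3-year warranty
        tbw, dwpd = endurance_stats(capacity_tb=1.6, daily_write_limit_tb=5.0, warranty_years=3)
        print(f"Rated endurance: ~{tbw:.0f} TBW, ~{dwpd:.1f} DWPD")
        # -> Rated endurance: ~5475 TBW, ~3.1 DWPD

A drive with a higher DWPD rating (typically SLC or eMLC) is the one to pick for write-heavy workloads; read-mostly workloads are usually fine on lower-endurance MLC.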
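The lifetime and endurance statistics mentioned above are usually exposed through SMART. Here is a rough sketch of reading them on a SATA SSD under Linux, assuming smartmontools 7.0 or newer (for the -j JSON output) and root privileges; the attribute names checked are vendor-specific examples rather than a standard, and NVMe drives report wear differently (via a percentage_used field), so treat this as an outline, not a portable tool.

    # Sketch: read SSD wear indicators from smartmontools' JSON output.
    # Assumes smartctl >= 7.0 and root privileges; the attribute names below
    # are vendor-specific examples (Samsung, Intel, Micron), not a standard.
    import json
    import subprocess

    WEAR_ATTRS = {"Wear_Leveling_Count", "Media_Wearout_Indicator", "Percent_Lifetime_Remain"}

    def ssd_wear(device: str = "/dev/sda") -> dict:
        out = subprocess.run(["smartctl", "-j", "-A", device],
                             capture_output=True, text=True, check=False)
        data = json.loads(out.stdout)
        table = data.get("ata_smart_attributes", {}).get("table", [])
        # "value" is the normalized health figure (typically starts at 100 and counts down)
        return {attr["name"]: attr["value"] for attr in table if attr["name"] in WEAR_ATTRS}

    if __name__ == "__main__":
        print(ssd_wear("/dev/sda"))

Polling these counters periodically is a cheap way to confirm that a drive's real-world wear matches the vendor's rating, like the 98% figure quoted above.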





