
raid - Can "enterprise" drives be safely replaced by near/midline in some situations?



When specifying servers, like (I would assume) many engineers who aren't experts in storage, I'll generally play it safe (and perhaps be a slave to marketing) by standardising on a minimum of 10k SAS drives (which are therefore "enterprise"-grade with a 24x7 duty cycle, etc.) for "system" data (usually the OS and sometimes applications), and reserve the use of 7.2k mid/nearline drives for storage of non-system data where performance isn't a significant factor. This all assumes 2.5" (SFF) disks, as 3.5" (LFF) disks are only really relevant for high-capacity, low-IOPS requirements.



In situations where there isn't a massive amount of non-system data, I'll generally place it on the same disks/array as the system data, meaning the server only has 10k SAS drives (generally a "One Big RAID10" type of setup these days). Only if the size of the non-system data is significant do I usually consider putting it on a separate array of 7.2k mid/nearline disks to keep the cost/GB down.
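As a rough illustration of that cost/GB trade-off, here's a minimal sketch; the drive prices and capacities are assumptions for illustration only, not real quotes:

```python
# Usable capacity and cost per GB for two layouts (illustrative numbers only).

def raid10_usable_gb(disks: int, gb_per_disk: int) -> int:
    """RAID10 mirrors every disk, so usable space is half the raw total."""
    return disks // 2 * gb_per_disk

# Option A: one big RAID10 of 6x 900GB 10k SAS (assumed $400/drive)
a_usable = raid10_usable_gb(6, 900)
a_cost = 6 * 400

# Option B: 2x 900GB 10k for system + 4x 2TB 7.2k for data (assumed $350/drive)
b_usable = raid10_usable_gb(2, 900) + raid10_usable_gb(4, 2000)
b_cost = 2 * 400 + 4 * 350

for name, gb, cost in [("One big 10k RAID10", a_usable, a_cost),
                       ("Split 10k + 7.2k arrays", b_usable, b_cost)]:
    print(f"{name}: {gb} GB usable, ${cost / gb:.2f}/GB")
```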



This has led me to wonder: in some situations, could those 10k disks in the RAID10 array have been replaced with 7.2k disks without any significant negative consequences? In other words, am I sometimes over-spec'ing (and keeping the hardware vendors happy) by sticking to a minimum of 10k "enterprise"-grade disks, or is there a good reason to always stick to that as a minimum?



For example, take a server that acts as a hypervisor with a couple of VMs for a typical small company (say 50 users). The company has average I/O patterns with no special requirements. Typical 9-5, Mon-Fri office, with backups running for a couple of hours a night. The VMs could perhaps be a DC and a file/print/app server. The server has a RAID10 array with 6 disks to store all the data (system and non-system data). To my non-expert eye, it looks as though mid/nearline disks may do just fine. Taking HP disks as an example:





  • Workload: Midline disks are rated for a <40% workload. With the office only open for 9 hours a day and average I/O during that period unlikely to be anywhere near the maximum, it seems unlikely the workload would exceed 40%. Even with a couple of hours of intense I/O at night for backups, my guess is it would still be below 40% (a rough estimate is sketched after this list)

  • Speed: Although the disks spin at only 7.2k RPM, performance is improved by spreading the I/O across six spindles
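
As a rough sanity check on both points, here is a minimal back-of-the-envelope sketch; the per-disk IOPS figures, the 70% read mix and the busy-hours arithmetic are all assumptions (typical ballpark values), not HP specifications:

```python
# Rough estimates for the 6-disk RAID10 scenario.
# Per-disk IOPS figures and the duty-cycle arithmetic are assumptions
# (typical ballpark values, not HP specifications).

PER_DISK_IOPS = {"7.2k": 80, "10k": 130}  # typical random IOPS per spindle

def raid10_iops(disks: int, per_disk: int, read_frac: float = 0.7) -> float:
    """Crude RAID10 estimate: raw spindle IOPS divided by the average
    number of physical ops per logical op (each write is mirrored, so 2x)."""
    raw = disks * per_disk
    penalty = read_frac + 2 * (1 - read_frac)
    return raw / penalty

for speed, iops in PER_DISK_IOPS.items():
    print(f"6x {speed} RAID10: ~{raid10_iops(6, iops):.0f} IOPS")

# Duty cycle: 9 office hours + ~2 backup hours per weekday, idle otherwise.
busy_hours_per_week = (9 + 2) * 5
duty_cycle = busy_hours_per_week / (24 * 7)
print(f"Busy-hours duty cycle: ~{duty_cycle:.0%}")  # ~33%, under the 40% rating
```

Under these assumptions, six 7.2k spindles in RAID10 still deliver a few hundred random IOPS and the busy-hours duty cycle lands under the 40% rating, which is why the idea seems reasonable on paper.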



So, my question: is it sensible to stick to a minimum of 10k SAS drives, or are 7.2k midline/nearline disks actually more than adequate in many situations? If so, how do I gauge where the line is and avoid being a slave to ignorance by playing it safe?



My experience is mostly with HP servers, so the above may have a bit of an HP slant to it, but I would assume the principles are fairly vendor-independent.


Answer



There's an interesting intersection of server design, disk technology and economics here:




Also see: Why are Large Form Factor (LFF) disks still fairly prevalent?




  • The move toward dense rackmount and small form-factor servers. E.g. you don't see many tower offerings anymore from the major manufacturers, whereas the denser product lines enjoy more frequent revisions and have more options/availability.

  • Stagnation in 3.5" enterprise (15k) disk development - 600GB 15k 3.5" is about as large as you can go.

  • Slow advancement in 2.5" near line (7.2k) disk capacities - 2TB is the largest you'll find there.

  • Increased availability and lower pricing of high capacity SSDs.

  • Storage consolidation onto shared storage. Single-server workloads that require high capacity can sometimes be serviced via SAN.

  • The maturation of all-flash and hybrid storage arrays, plus the influx of storage startups.




The above are why you generally find manufacturers focusing on 1U/2U servers with 8-24 2.5" disk drive bays.



3.5" disks are for low-IOPs high-capacity use cases (2TB+). They're best for external storage enclosures or SAN storage fronted by some form of caching. In enterprise 15k RPM speeds, they are only available up to 600GB.



2.5" 10k RPM spinning disks are for higher IOPS needs and are generally available up to 1.8TB capacity.



2.5" 7.2k RPM spinning disks are a bad call because they offer neither capacity, performance, longevity nor price advantages. E.g. The cost of a 900GB SAS 10k drive is very close to that of a 1TB 7.2k RPM SAS. Given the small price difference, the 900GB drive is the better buy. In the example of 1.8TB 10k SAS versus 2.0TB 7.2k SAS, the prices are also very close. The warranties are 3-year and 1-year, respectively.




So for servers and 2.5" internal storage, use SSD or 10k. If you need capacity needs and have 3.5" drive bays available internally or externally, use 7.2k RPM.



For the use cases you've described, you're not over-configuring the servers. If the servers have 2.5" drive bays, you should really just be using 10k SAS or SSD. The midline disks lose on performance, offer little extra capacity, carry a significantly shorter warranty, and won't save much money.

