
cache - How effective is LSI CacheCade SSD storage tiering?



LSI offers their CacheCade storage tiering technology, which allows SSD devices to be used as read and write caches to augment traditional RAID arrays.



Other vendors have adopted similar technologies: HP SmartArray controllers have SmartCache, Adaptec has MaxCache, and there are a number of software-based acceleration tools as well (sTec EnhanceIO, Velobit, FusionIO ioTurbine, Intel CAS, Facebook flashcache?).




Coming from a ZFS background, I make use of different types of SSDs to handle read caching (L2ARC) and write caching (ZIL) duties. Different traits are needed for their respective workloads: low latency and endurance for write caching, high capacity for read caching.
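
For comparison, this is the kind of visibility I'm used to on the ZFS side. A quick sketch (assuming OpenZFS on Linux, where the counters are exposed in /proc/spl/kstat/zfs/arcstats) of how I check ARC and L2ARC hit ratios:

```python
#!/usr/bin/env python3
"""Rough ARC/L2ARC hit-ratio check for OpenZFS on Linux.

Assumes /proc/spl/kstat/zfs/arcstats exists (ZFS on Linux); other
platforms expose the same counters via kstat instead.
"""

ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

def read_arcstats(path=ARCSTATS):
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            # data lines look like: "hits    4    278734"
            if len(parts) == 3 and parts[2].isdigit():
                stats[parts[0]] = int(parts[2])
    return stats

def ratio(hits, misses):
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

if __name__ == "__main__":
    s = read_arcstats()
    print("ARC hit ratio:   %.1f%%" % ratio(s["hits"], s["misses"]))
    print("L2ARC hit ratio: %.1f%%" % ratio(s.get("l2_hits", 0), s.get("l2_misses", 0)))
```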




  • Since CacheCade SSDs can be used for write and read cache, what purpose does the RAID controller's onboard NVRAM play?

  • When used as a write cache, what danger is there to the CacheCade SSDs in terms of write endurance? Using consumer SSDs seems to be encouraged.

  • Do writes go straight to SSD or do they hit the controller's cache first?

  • How intelligent is the read caching algorithm? I understand how the ZFS ARC and L2ARC function. Is there any insight into the CacheCade tiering process?

  • What metrics exist to monitor the effectiveness of the CacheCade setup? Is there a method to observe a cache hit ratio or percentage? How can you tell if it's really working?




I'm interested in opinions and feedback on the LSI solution. Any caveats? Tips?


Answer




Since CacheCade SSDs can be used for write and read cache, what purpose does the RAID controller's onboard NVRAM play?




If you leave the write caching feature of the controller enabled, the NVRAM will still be used primarily. The SSD write cache will typically only be used for larger quantities of write data, where the NVRAM alone is not enough to keep up.
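
As a rough illustration of that spill-over behaviour, here is a toy model. The sizes, drain rate, and policy below are invented for the sake of the example; they are not LSI's actual algorithm or defaults:

```python
# Toy model of the cascading write path described above: writes land in
# controller NVRAM first and only spill to the SSD cache when the NVRAM
# cannot drain to the disks fast enough. Purely illustrative -- the sizes,
# rates, and policy are assumptions, not LSI's implementation.

NVRAM_MB = 1024          # controller cache size (assumed)
DISK_DRAIN_MB_S = 300    # what the RAID array can absorb per second (assumed)

def route_writes(burst_mb_per_s, duration_s):
    """Return (MB left in NVRAM, MB that overflowed to the SSD cache)."""
    nvram_used = 0.0
    ssd_overflow = 0.0
    for _ in range(duration_s):
        drained = min(nvram_used + burst_mb_per_s, DISK_DRAIN_MB_S)
        nvram_used = nvram_used + burst_mb_per_s - drained
        if nvram_used > NVRAM_MB:          # NVRAM full: excess spills to SSD
            ssd_overflow += nvram_used - NVRAM_MB
            nvram_used = NVRAM_MB
    return nvram_used, ssd_overflow

# A modest workload never touches the SSD write cache...
print(route_writes(burst_mb_per_s=200, duration_s=60))   # -> (0.0, 0.0)
# ...while a sustained heavy burst overflows into it.
print(route_writes(burst_mb_per_s=600, duration_s=60))   # -> (1024.0, 16976.0)
```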




When used as a write cache, what danger is there to the CacheCade SSDs in terms of write endurance? Using consumer SSDs seems to be encouraged.





This depends on how often your writes actually make the SSD write cache necessary... in other words, whether or not your drives can handle the write load quickly enough that the NVRAM doesn't fill up. In most scenarios I've seen, the write cache gets little to no action most of the time, so I wouldn't expect this to have a big impact on write endurance; most writes to the SSDs are likely to be part of your read caching.
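
If you want to put rough numbers on the endurance question, the arithmetic is simple. Every figure in this sketch is an assumption for illustration; substitute your own measured write volume and the TBW rating from the drive's data sheet:

```python
# Back-of-the-envelope SSD endurance estimate. All figures below are
# assumptions used to illustrate the calculation, not measured CacheCade
# behaviour.

TBW_RATING_TB = 72             # endurance rating of a typical consumer SSD (assumed)
CACHE_WRITES_GB_PER_DAY = 50   # writes that actually reach the SSD cache (assumed)
WRITE_AMPLIFICATION = 2.0      # controller + NAND overhead factor (assumed)

nand_writes_tb_per_day = CACHE_WRITES_GB_PER_DAY * WRITE_AMPLIFICATION / 1024
lifetime_days = TBW_RATING_TB / nand_writes_tb_per_day

print("Projected NAND writes: %.3f TB/day" % nand_writes_tb_per_day)
print("Projected endurance:   %.1f years" % (lifetime_days / 365))
```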




Do writes go straight to SSD or do they hit the controller's cache first?




Answered above... Controller cache is hit first, SSD cache is more of a 2nd line of defense.





How intelligent is the read caching algorithm? I understand how the ZFS ARC and L2ARC functions. Is there any insight into the CacheCade tiering process?




Sorry... no knowledge to contribute on that - hopefully someone else will have some insight?




What metrics exist to monitor the effectiveness of the CacheCade setup? Is there a method to observe a cache hit ratio or percentage? How can you tell if it's working?





It doesn't look like any monitoring tools are available for this, as there are with other SAN implementations of this feature set... and since the CacheCade virtual disk isn't presented to the OS, you may not have any way to monitor activity manually either. This may just require further testing to verify effectiveness...
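
For that kind of testing, one crude approach from the OS side is to run the same random-read job twice against the same region and compare the cold pass with the cache-warmed pass. A sketch driving fio from Python; assumes fio is installed, and the device path, region size, and runtime are placeholders for your own CacheCade-backed virtual disk:

```python
#!/usr/bin/env python3
"""Crude cold-vs-warm read benchmark to infer whether the SSD read cache
is doing anything. /dev/sdb and the job sizes are placeholders -- point
this at the CacheCade-backed virtual disk. The job is read-only, but
double-check the target device before running."""

import json
import subprocess

DEVICE = "/dev/sdb"   # placeholder: the RAID virtual disk behind CacheCade

def randread_iops(runtime_s=60):
    cmd = [
        "fio", "--name=ccheck", "--filename=" + DEVICE,
        "--rw=randread", "--bs=4k", "--iodepth=32", "--ioengine=libaio",
        "--direct=1", "--time_based", "--runtime=" + str(runtime_s),
        "--size=10G", "--output-format=json",
    ]
    out = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)
    return out["jobs"][0]["read"]["iops"]

if __name__ == "__main__":
    cold = randread_iops()   # first pass: cache likely cold
    warm = randread_iops()   # second pass over the same region: cache warmed
    print("cold: %.0f IOPS, warm: %.0f IOPS (%.1fx)" % (cold, warm, warm / cold))
```

If the second pass isn't noticeably faster for a working set that fits in the SSD cache, the read cache probably isn't earning its keep for that workload.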



Opinion/observation: In a lot of cases (when used correctly, read cache appropriately sized for the working data set) this feature makes things FLY. But in the end, it can be hit-and-miss.

