
cache - How effective is LSI CacheCade SSD storage tiering?



LSI offers its CacheCade storage tiering technology, which allows SSD devices to be used as read and write caches to augment traditional RAID arrays.



Other vendors have adopted similar technologies: HP SmartArray controllers have SmartCache, Adaptec has MaxCache, and there are a number of software-based acceleration tools as well (sTec EnhanceIO, Velobit, FusionIO ioTurbine, Intel CAS, Facebook flashcache?).




Coming from a ZFS background, I make use of different types of SSDs to handle read caching (L2ARC) and write caching (ZIL) duties. Different traits are needed for the respective workloads: low latency and endurance for write caching, high capacity for read caching.




  • Since CacheCade SSDs can be used for write and read cache, what purpose does the RAID controller's onboard NVRAM play?

  • When used as a write cache, what danger is there to the CacheCade SSDs in terms of write endurance? Using consumer SSDs seems to be encouraged.

  • Do writes go straight to SSD or do they hit the controller's cache first?

  • How intelligent is the read caching algorithm? I understand how the ZFS ARC and L2ARC function. Is there any insight into the CacheCade tiering process?

  • What metrics exist to monitor the effectiveness of the CacheCade setup? Is there a method to observe a cache hit ratio or percentage? How can you tell if it's really working?




I'm interested in opinions and feedback on the LSI solution. Any caveats? Tips?


Answer




Since CacheCade SSDs can be used for write and read cache, what purpose does the RAID controller's onboard NVRAM play?




If you leave the write caching feature of the controller enabled, the NVRAM will still be used primarily. The SSD write cache will typically only be used for larger quantities of write data, where the NVRAM alone is not enough to keep up.
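
To make that write path a bit more concrete, here is a toy model of the "NVRAM first, spill to SSD on overflow" behavior described above. The capacities, spill policy, and drain order are assumptions for illustration only, not LSI's documented firmware behavior:

```python
# Toy model of the described write path, purely for intuition.
# Capacities, spill policy, and drain order are assumptions,
# not LSI-documented behavior.

class TwoTierWriteCache:
    def __init__(self, nvram_capacity_mb, ssd_capacity_mb):
        self.nvram_capacity = nvram_capacity_mb
        self.ssd_capacity = ssd_capacity_mb
        self.nvram_used = 0
        self.ssd_used = 0
        self.spilled_to_ssd = 0   # total MB that overflowed to the SSD cache

    def write(self, size_mb):
        """Accept a write burst: NVRAM first, spill the remainder to SSD."""
        to_nvram = min(size_mb, self.nvram_capacity - self.nvram_used)
        self.nvram_used += to_nvram

        overflow = size_mb - to_nvram
        to_ssd = min(overflow, self.ssd_capacity - self.ssd_used)
        self.ssd_used += to_ssd
        self.spilled_to_ssd += to_ssd
        return to_nvram, to_ssd

    def drain(self, size_mb):
        """Backend disks destage data; empty NVRAM before the SSD tier."""
        from_nvram = min(size_mb, self.nvram_used)
        self.nvram_used -= from_nvram
        self.ssd_used -= min(size_mb - from_nvram, self.ssd_used)

# A 1 GB NVRAM absorbs small bursts; only a sustained burst spills to SSD.
cache = TwoTierWriteCache(nvram_capacity_mb=1024, ssd_capacity_mb=100 * 1024)
print(cache.write(512))    # (512, 0)    -> NVRAM only
print(cache.write(2048))   # (512, 1536) -> NVRAM fills, the rest spills to SSD
```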




When used as a write cache, what danger is there to the CacheCade SSDs in terms of write endurance? Using consumer SSDs seems to be encouraged.





This depends on how often your writes actually make the SSD write cache necessary, i.e. whether your drives can handle the write load quickly enough that the NVRAM doesn't fill up. In most scenarios I've seen, the write cache gets little to no action most of the time, so I wouldn't expect this to have a big impact on write endurance; most writes to the SSDs are likely to be part of your read caching.
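
If you want rough numbers on the endurance question, the usual back-of-the-envelope math works from the drive's rated TBW and the volume of data that actually spills over to the SSD write cache. A sketch with assumed example figures (the TBW rating, spill rates, and write-amplification factor below are illustrative, not measured CacheCade values):

```python
# Back-of-the-envelope endurance math, not a CacheCade-specific formula:
# how long an SSD's rated endurance lasts under a given spill rate.
# All numbers here are made-up examples.

def years_of_endurance(rated_tbw, spill_gb_per_day, write_amplification=1.5):
    """Rough lifetime estimate for an SSD used as a write cache."""
    nand_writes_gb_per_day = spill_gb_per_day * write_amplification
    days = (rated_tbw * 1000) / nand_writes_gb_per_day
    return days / 365

# Consumer-class drive, ~70 TBW rating, 50 GB/day actually spilling to the
# SSD write cache (i.e. the NVRAM could not absorb it):
print(f"{years_of_endurance(70, 50):.1f} years")   # ~2.6 years

# If the NVRAM absorbs nearly everything and only 5 GB/day spills over:
print(f"{years_of_endurance(70, 5):.1f} years")    # ~26 years
```

The point of the example: endurance only becomes a real concern if the write cache is actually being exercised heavily, which per the above is the exception rather than the rule.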




Do writes go straight to SSD or do they hit the controller's cache first?




Answered above: the controller cache is hit first; the SSD cache is more of a second line of defense.





How intelligent is the read caching algorithm? I understand how the ZFS ARC and L2ARC function. Is there any insight into the CacheCade tiering process?




Sorry... no knowledge to contribute on that - hopefully someone else will have some insight?




What metrics exist to monitor the effectiveness of the CacheCade setup? Is there a method to observe a cache hit ratio or percentage? How can you tell if it's working?





It doesn't look like any monitoring tools are available for this, as there are with SAN implementations of similar feature sets. And since the CacheCade virtual disk isn't presented to the OS, you may not have any way to monitor activity manually either. This may just require further testing to verify effectiveness.
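
One way to do that further testing, absent any hit-ratio counters from the controller: re-read a fixed random working set several times and watch whether later passes speed up as the SSD cache warms. A rough Python sketch; the device path, working-set size, and pass count are placeholders, and it should only be pointed at a test volume:

```python
# Crude "is the read cache doing anything?" probe. It re-reads the same
# random working set several times; if an SSD read cache is being
# populated, later passes should get noticeably faster.
# DEVICE, WORKING_SET_MB, and PASSES are placeholders. Run as root
# against a test volume, not production data.

import os
import random
import time

DEVICE = "/dev/sdb"          # placeholder: the CacheCade-backed volume
WORKING_SET_MB = 4096        # should fit inside the SSD read cache
IO_SIZE = 4096               # 4 KiB random reads
PASSES = 5

def run_pass(fd, offsets):
    start = time.time()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.read(fd, IO_SIZE)
    return time.time() - start

def main():
    region = WORKING_SET_MB * 1024 * 1024
    # Fixed random working set, identical in every pass.
    random.seed(42)
    offsets = [random.randrange(0, region // IO_SIZE) * IO_SIZE
               for _ in range(20000)]

    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        for i in range(PASSES):
            # Drop the OS page cache for this device so we measure the
            # controller/SSD path rather than RAM.
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
            elapsed = run_pass(fd, offsets)
            print(f"pass {i + 1}: {elapsed:.2f}s "
                  f"({len(offsets) / elapsed:.0f} IOPS)")
    finally:
        os.close(fd)

if __name__ == "__main__":
    main()
```

A benchmarking tool like fio (repeated random reads with direct=1 over the same fixed region) does the same job more rigorously, but the idea is identical: if repeated passes over a cache-sized working set don't get faster, the read cache probably isn't helping that workload.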



Opinion/observation: In a lot of cases (when used correctly, with the read cache sized appropriately for the working data set) this feature makes things FLY. But in the end, it can be hit-and-miss.

