
filesystems - Safety of write cache on SATA drives with barriers

I've been reading lately about write caching, NCQ, firmware bugs, barriers, etc., regarding SATA drives, and I'm not sure which settings would keep my data safe in case of a power failure.



From what I understand, NCQ allows the drive to reorder writes to optimize performance, while keeping the kernel informed about which requests have been physically written.



The write cache lets the drive acknowledge a request much faster, because it doesn't wait for the data to actually reach the platters.




I'm not sure how NCQ and the write cache interact here...



Filesystems, especially journalled ones, need to know for sure when a particular request has been written down. Also, user-space processes use fsync() to force the flush of a particular file. That call to fsync() shouldn't return until the filesystem is sure that the data is on disk.
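Just to make the contract concrete, here's a minimal sketch of what I mean by "fsync() shouldn't return until the data is on disk" (the filename is only an example):

```python
import os

# Write a record and force it to stable storage before continuing.
# With barriers enabled, fsync() should also flush the drive's write
# cache, so the data ought to survive a power loss at this point.
with open("important.log", "w") as f:
    f.write("critical record\n")
    f.flush()             # flush Python's userspace buffer to the kernel
    os.fsync(f.fileno())  # ask the kernel to push it down to the device
```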



There's a feature (FUA, Force Unit Access), which I've seen only on SAS drives, that forces the drive to bypass the cache and write directly to disk. For everything else, there are write barriers, a mechanism provided by the kernel that can trigger a cache flush on the drive. This forces the whole cache to be written down, not just the critical data, so abusing it (with frequent fsync() calls, for example) slows down the entire system.



And then there are drives with firmware bugs, or that deliberately lie about when data has been physically written.



Having said this, there are several ways to set up the drives/filesystems:

A) NCQ and Write cache disabled
B) Just NCQ enabled
C) Just Write cache enabled
D) Both NCQ and write cache enabled
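For reference, here's a rough sketch of how I'd query the current state of a drive to tell these setups apart (assuming /dev/sda; as far as I know, hdparm -W reports the write-cache setting, and NCQ can be throttled down to depth 1 via the sysfs queue_depth attribute):

```python
import subprocess

def write_cache_on(hdparm_w_output: str) -> bool:
    # `hdparm -W /dev/sda` prints a line like " write-caching =  1 (on)"
    return "1 (on)" in hdparm_w_output

def ncq_active(queue_depth: int) -> bool:
    # NCQ is effectively off at queue depth 1; typically 31 when enabled.
    # Writing 1 to the sysfs attribute disables NCQ without a reboot.
    return queue_depth > 1

def query_drive(dev="/dev/sda", sysdev="sda"):
    # Requires root and the hdparm tool installed.
    out = subprocess.run(["hdparm", "-W", dev],
                         capture_output=True, text=True).stdout
    with open(f"/sys/block/{sysdev}/device/queue_depth") as f:
        depth = int(f.read())
    return write_cache_on(out), ncq_active(depth)
```

(`hdparm -W0 /dev/sda` should then disable the write cache for setups A and B.)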



I'm assuming barriers are enabled. BTW, how can I check whether they are actually enabled?
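The best I've come up with so far is inspecting the mount options in /proc/mounts (my understanding is that ext3 needs barrier=1 explicitly, while ext4 and XFS enable barriers by default unless barrier=0/nobarrier shows up); a rough sketch:

```python
def barrier_status(fstype: str, opts: str) -> str:
    # Classify a filesystem's barrier state from its mount options.
    o = set(opts.split(","))
    if "nobarrier" in o or "barrier=0" in o:
        return "disabled"
    if "barrier" in o or "barrier=1" in o:
        return "enabled (explicit)"
    # ext4 and XFS turn barriers on by default; mainline ext3 does not.
    return "enabled (default)" if fstype in ("ext4", "xfs") else "disabled (default)"

with open("/proc/mounts") as f:
    for line in f:
        dev, mnt, fstype, opts = line.split()[:4]
        if fstype in ("ext3", "ext4", "xfs"):
            print(f"{mnt} ({fstype}): {barrier_status(fstype, opts)}")
```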



In case of power loss while actively writing to the disk, my guess is that option B (NCQ, no cache) is safe for both the filesystem journal and the data, though there may be a performance penalty.



Option D (NCQ + cache), if using barriers or FUA, should be safe for the filesystem journal and for applications that use fsync(). It would be bad for data still sitting in the cache, and it's up to the filesystem to detect that (checksumming); at least the filesystem won't (hopefully) be left in an unstable state. Performance-wise, it should be better.




My question, however, stands... Am I missing anything? Is there any other variable to take into account? Is there any tool that could confirm this, and confirm that my drives behave as they should?
