
linux - Removing files takes too long

Short version: rm -rf mydir, with mydir (recursively) containing 2.5 million files, takes about 12 hours on a mostly idle machine.



More information: Most of the files being deleted are hard links to files in other directories (the directory being deleted is actually the oldest backup made by rsnapshot; the rm command is actually issued by rsnapshot). So it's mostly directory entries being deleted; the file content itself isn't much, on the order of some tens of GB.
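As a sanity check (my addition, not part of the original measurements), the claim that most entries are hard links can be verified with find; mydir stands for the snapshot directory being deleted:

# Regular files with a link count above 1, i.e. hard links to data
# that also lives in other snapshots.
find mydir -type f -links +1 | wc -l

# Total regular files, for comparison.
find mydir -type f | wc -l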



I'm far from certain that btrfs is the culprit. I recall that backups were also very slow before I started using btrfs, but I'm not certain the slowness was in the deletion.




The machine is an Intel Core i5 2.67 GHz with 4 GB RAM. It has two SATA disks: one has the OS and some other stuff, and the backup disk is a 1 TB WDC WD1002FAEX-00Z3A0. The motherboard is an Asus P7P55D.



Edit: The machine runs Debian wheezy with kernel 3.16.3-2~bpo70+1. This is how the filesystem is mounted:



root@thames:~# mount|grep rsnapshot
/dev/sdb1 on /var/backups/rsnapshot type btrfs (rw,relatime,compress=zlib,space_cache)


Edit: Using rsync -a --delete /some/empty/dir mydir takes about 6 hours. A significant improvement over rm -rf, but still too much, I think. (Explanation of why rsync is faster than rm: "[M]ost filesystems store their directory structures in a btree format, the order [in] which you delete files is ... important. One needs to avoid rebalancing the btree when you perform the unlink.... rsync -a --delete ... does deletions in-order.")




Edit: I attached another disk which had 2.2 million files (recursively) in a directory, but on XFS. Here are some comparative results:



                    On the XFS disk    On the BTRFS disk
Cached reads [1]    10 GB/s            10 GB/s
Buffered reads [1]  80 MB/s            115 MB/s
Walk tree [2]       11 minutes         43 minutes
rm -rf mydir [3]    7 minutes          12 hours


[1] With hdparm -T /dev/sdX and hdparm -t /dev/sdX.
[2] Time taken to run find mydir -print|wc -l immediately after boot.
[3] On the XFS disk, this was soon after walking the tree with find. On the BTRFS disk it is the old measurement (and I don't think it was with the tree cached).
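Putting it together, the measurements behind the table boil down to these commands (sdX and mydir are placeholders, as above):

hdparm -T /dev/sdX              # [1] cached reads
hdparm -t /dev/sdX              # [1] buffered reads
time find mydir -print | wc -l  # [2] walk the tree
time rm -rf mydir               # [3] the deletion itself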




It appears to be a problem with btrfs.
