
linux - DD copy works at terminal but not by cron


On an RHEL 5.4 system I set up a script to back up a drive with a dd copy every night via cron. I send the dd output to a log file and email it. The job is in both /etc/crontab and /var/spool/cron/root, from back when I thought it wouldn't even run under cron.



The script is supposed to copy /dev/sda to /mnt/backup/sda.img (/mnt/backup is a mounted 250 GB external drive).



When I run it as root at the terminal it works fine: I can see data being written to the disk and sda.img getting bigger.



However, when it runs from cron, I get output from dd saying it copied 147 GB, but I cannot find where it put that 147 GB. It didn't go into sda.img, and it's not anywhere else on the filesystem, as there is only 50 GB free.




Where did it go? And how can I make sure the same thing happens under cron as happens at the terminal?



I do stop crond before the backup and start it again afterwards, but I was under the impression that cron kicks the job off, the script shuts crond down, the backup runs, crond starts again, and everything goes on its merry way.



Thanks.



EDIT:
Sorry, the dd line is
dd if=/dev/sda of=/mnt/backup/sda.img bs=400K



And the cron line is
01 0 * * * 2-6 /root/applog_backup.sh
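
For reference, /etc/crontab takes an extra "run as user" field that a per-user crontab in /var/spool/cron does not (a generic illustration of the two formats, not my exact entries):

# /etc/crontab (system crontab): minute hour day-of-month month day-of-week user command
01 0 * * 2-6 root /root/applog_backup.sh

# /var/spool/cron/root (per-user crontab): same time fields, no user column
01 0 * * 2-6 /root/applog_backup.sh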




When the backup works, I can access the files with
mount -o loop,offset=32256 sda.img /mnt/restore
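
The 32256-byte offset is simply where the first partition starts on a standard DOS-partitioned disk: 63 sectors × 512 bytes. A sketch of how to check and reuse that (assuming fdisk will read an image file, which it normally does):

fdisk -lu sda.img                                        # list partitions of the image; note the first partition's start sector
mount -o loop,offset=$((63 * 512)) sda.img /mnt/restore  # 63 * 512 = 32256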



I shut down cron to prevent hourly jobs from modifying the disk during the backup. I have also shut down other services and the production database to minimize disk writes in the important places.



Answer




You have your "backup" script being executed by cron... and that script shuts down cron in order to prevent cron jobs from running during the "backup". Can you really not see the problem here? Your script shuts down crond, but crond is the thing running your script, so shutting down crond closes the descriptors connected to your script, which then dies, either from a broken pipe or from a termination signal sent by crond itself.
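
Sketched out, the sequence you describe looks roughly like this (a hypothetical reconstruction, not your actual script; the log path is made up):

#!/bin/bash
# This script is started BY crond...
service crond stop        # ...and then stops crond, its own supervisor
dd if=/dev/sda of=/mnt/backup/sda.img bs=400K > /var/log/dd_backup.log 2>&1
                          # the script (and the dd it launched) is liable to be killed here:
                          # broken pipe or termination signal as crond goes down
service crond start       # never reached if the script was killed above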



Since the script died, crond never gets restarted. That is what we call "shooting yourself in the foot".



Even after you restart crond, it won't have registered that the job completed, since it was shut down during the job's execution and/or had to signal its termination. Either crond itself or anacron (depending on which cron scheduler you are using) will have to run the job again, potentially going into an infinite loop.




Your problem is an excellent example of everything that is wrong with inventing your own "backup" solution when you have no real-life experience with reliability management and disaster recovery and, worse, no knowledge of how the system works.



First, and most important: you do not make a raw disk dump of a live filesystem. Filesystems were invented precisely so that you do not touch the raw disk contents directly. What you want to save are the files stored in the filesystem; that is what matters to you, so you have to access them through the filesystem, not as the raw bytes stored on the disk. If the partition is mounted, there is absolutely no guarantee that your data has actually been written to the disk, or that the disk will stay in a consistent state during the copy.



Even if you could snapshot the state of the disk in a recoverable manner (like a sudden power failure, from which a journaled filesystem like ext3 can usually recover quickly), that is never true of a hot disk dump. A disk dump takes a long time to complete, there are virtually infinite intermediate states between the beginning and the end of the dump, and the dump will contain a mixture of those states, which is potentially unrecoverable even with a journaled filesystem.



And that is before mentioning everything else that is wrong with raw disk dump backups:





  • There is no difference between used and free space. It doesn't matter whether you have a single 100 kB file or 250 GB in tens of thousands of files; everything will be copied. It is extremely inefficient. You use this approach only if you need an identical clone of your disk, and only with the disk unmounted.

  • You can't do differential or incremental backups; every backup must be a full backup. That brings all kinds of inefficiencies:


    1. Since this takes a lot of space, you will usually keep only a single copy of all the data. If your files are damaged or deleted before the backup and you don't notice, the damaged or deleted data is copied over the previous backup, making it useless.

    2. Since you write over the previous backup, if your system fails in the middle of the dump (which takes a long time, since you are copying the whole disk), both your original system and the backup are lost in a single shot.

    3. If only 100 kB of data changed since the previous backup, you will still dump the whole disk. In your case, that is at least a million times less efficient than necessary.


  • You can't restore this dump to a disk with a different geometry. If your replacement disk is smaller, there is no discussion; if it is bigger, you may be able to restore, either losing the extra space or making some manual (and, for the uninitiated, dangerous) changes to the partition table and partition superblocks. Do you want to trust your files, your work, to such a hack?

  • Even if you mount the raw image using a loop device and copy the files manually... you end up copying your files manually! So what did you gain from doing a raw disk dump? Just copy your damn files!




Many people have been there and have a lot of experience to share regarding disaster recovery. Don't try to invent your own backup solution; you will end up messing things up. Use proper backup tools, like dump, tar or rsync. If you need something more robust, use Amanda or Bacula, or one of the hundreds of other solutions ready to use.
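
For example, a file-level nightly backup can be as simple as this (a minimal sketch, assuming /mnt/backup is your mounted external drive; the paths and excludes are illustrative):

#!/bin/bash
# Copy files, not raw disk blocks; on later runs only changed files are transferred.
rsync -a --delete \
      --exclude=/proc --exclude=/sys --exclude=/dev \
      --exclude=/mnt --exclude=/media \
      / /mnt/backup/sda-files/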



Probably not the answer you were expecting, but it had to be said.

