

May 7th, 2007

11:07 am
Ahh, Monday. A quiet Monday at work, in fact, since about half the team here, and most of the people who are likely to call us with work to do, are off on a bank holiday.

The weekend was fairly quiet too - shopped and gymmed on Saturday, started on recovering data from a dying hard drive in the evening, caught up on some more BSG (7 episodes to go in S3).
I also went back and found and tagged most of my posts which have photos in them - you can see them here. I should do other tags for fencing and touristing and stuff, but that's going to be much more work.

Notes to self, I: When you disconnect one IDE drive because you need its power connector to hook up an old drive that you're trying to recover data from, remember to set the master/slave jumper on the old drive correctly. Otherwise it looks like the old drive has completely died, rather than just the start of the disk being unusable (and thus no partition tables).

Notes to self, II: When trying to create a disk image of a 120Gb drive (which is dying in various ways), make sure that it is using DMA first. Transfer rate went from 250Mb/minute to 2Gb/minute ... eventually. Could have saved myself a few hours there.
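To see how much time that is: a back-of-envelope calculation, treating the drive as a flat 120000Mb and using the two rates above.

```shell
mb=120000               # rough size of the drive in Mb
slow=$((mb / 250))      # minutes at 250Mb/minute (no DMA)
fast=$((mb / 2000))     # minutes at 2Gb/minute (with DMA)
echo "$slow minutes without DMA, $fast minutes with DMA"
# 480 minutes without DMA, 60 minutes with DMA
```

Eight hours versus one - hence the "few hours" regret.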

Note to others : It looks like most of the partition recovery tools in Linux assume that the drive is still functional. rescuept doesn't - so you can take an image of a dying disk with "dd", find the partitions from the image, and then mount them via the loopback device (with an offset into the file) - but rescuept has a 2Gb file size limit (!). I found the source, hacked in #defines for __USE_LARGEFILES64 and __FILE_OFFSET_BITS=64, added O_LARGEFILE to the open call, and recompiled it. Amazingly, it worked!

Here's roughly how I'd go about this recovery, if I had to do it again :
1. hdparm -d1 /dev/hde
2. dd if=/dev/hde of=hde.dsk bs=4096 conv=noerror,sync skip=1000 > dd.log 2>&1
3. rescuept hde.dsk > parts_hde
4. mount hde.dsk /mnt/old -o loop=/dev/loop0,offset=NNNNNNNNNN
5. Copy data from /mnt/old and make plans for a better backup strategy.

Notes on the above :
1. I actually did this while the dd command was running - didn't appear to cause any harm, but probably better to do it first! It did produce an immediate speedup, though.

2. I used "skip=1000" because the damage on the disk appeared to be right at the start. "conv=noerror,sync" means that dd doesn't stop on any further errors, and any blocks which are read uncompleted are padded with nulls - probably only useful if there isn't too much damage.

3. This was the recompiled version of rescuept as above.

4. The offset in the mount command is in bytes - the output from rescuept gives the partition start point in sectors, so multiply that by 512 to get the offset.
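For example, if rescuept reported a partition starting at sector 63 (a common start point for old DOS-style layouts; the number is just illustrative):

```shell
start_sector=63
offset=$((start_sector * 512))    # sectors are 512 bytes each
echo "$offset"                    # 32256
# then: mount hde.dsk /mnt/old -o loop,offset=$offset
```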

5. I used something like :
cd /mnt/old
find . -print0 | cpio -ocB0 | ( cd /mnt/newhd; cpio -vicmudB0 )

Current Mood: geeky



Date:May 7th, 2007 05:42 pm (UTC)
If the BIOS didn't inform Linux to set DMA, hdparm -d1 probably isn't optimal. It's a hell of a lot better than no DMA, but "-X69" for ATA/100 might be useful too. If I've wanted to copy a dodgy disk, I've just done a dd straight from one disk to another. I keep copies of partition tables (sfdisk -d) in backups in case that needs to be rebuilt, though I've not needed to use it to do the rebuilding. For the backup itself, this should do:
find . -print | cpio -pdmuv /mnt/newhd
(possibly with the 0 arg too, I guess ... but anyone who has literal newlines in filenames deserves to have the file deleted)
