I am a big fan of solid state disks. But not those fancy expensive SATA disks which are all the rage. I mean low end, dirt cheap SSDs: CompactFlash and SD cards. If you have an old laptop, you can either buy a new hard drive for $80, or buy a CF card and CF-IDE adapter for $30. Sure, the hard drive has ten or twenty times the capacity. But flash is ten times as awesome.
Only problem is that quality control is all over the map. Some cheap flash cards will work for years. I've had one with an internal short that started melting and bubbling the moment it was plugged in. Another worked great for a few months and then died completely. This is the story of that card and its recovery.
The superblock was gone. The backup superblocks were gone. The drive was in shambles, and fsck could not fix it. At this point, the normal procedure is to dd the drive and do more aggressive work on the image.
dd could not read the drive. Oh, it would try for a while. Three gigabytes into the 8GB drive, dd would stop, and the device node jumped from /dev/sdc1 to /dev/sdd1. dd really was not built to handle that kind of insanity.
There is an app called ddrescue, meant to be like dd but more thorough. It too was stymied by the disappearing device node. Instead of freezing, it would fill the remainder of the image with zeros. Hardly useful.
Basically, CF cards do not fail like hard drives. If a hard drive hits a bad cluster, it promptly returns an error. If a CF card hits a bad chunk, it locks up. dd needs a slight redesign to handle these conditions.
Here's my wrapper for dd: dd-decrepit.py. I might rewrite it as a shell script; Python seems like overkill, and I did not know about the timeout command when I wrote the first version.
dd-decrepit works by reading the drive in 1024KB chunks. If a chunk takes more than a few seconds to read, the dd process is killed and the read is retried as two 512KB blocks. This recursion can continue down to a single kilobyte, if need be. There was some really funky stuff going on: in many instances a single 1024KB read would fail, but both 512KB reads would succeed.
It also periodically checks that the device node has not vanished. If it has, it pauses and asks you to reconnect the drive, then continues where it left off.
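That shell rewrite might look something like the sketch below. This is not dd-decrepit itself, just an illustration of the same idea, assuming GNU coreutils' timeout(1), dd(1), and blockdev(8); the function and variable names are mine.

```shell
# Read one span of the source; if the read hangs, halve it and recurse
# down to 1KB. $1 = offset in KB, $2 = length in KB.
# DEV and IMG are set by rescue() below.
read_chunk() {
    off=$1; kb=$2
    if timeout 5 dd if="$DEV" of="$IMG" bs=1024 skip="$off" seek="$off" \
            count="$kb" conv=notrunc >/dev/null 2>&1; then
        return 0
    fi
    if [ "$kb" -le 1 ]; then
        # Give up on this kilobyte; the image keeps a hole here.
        echo "unreadable 1KB block at offset ${off}KB" >&2
        return 1
    fi
    half=$((kb / 2))
    read_chunk "$off" "$half" || true
    read_chunk $((off + half)) $((kb - half)) || true
}

rescue() {
    DEV=$1; IMG=$2
    # Size in bytes: blockdev for real devices, stat as a fallback so the
    # sketch can also be exercised against an ordinary file.
    size=$( blockdev --getsize64 "$DEV" 2>/dev/null || stat -c %s "$DEV" )
    total_kb=$((size / 1024))
    off=0
    while [ "$off" -lt "$total_kb" ]; do
        # Pause if the device node has vanished, as dying CF cards do.
        while [ ! -e "$DEV" ]; do
            echo "device gone; reconnect the drive" >&2
            sleep 5
        done
        read_chunk "$off" 1024
        off=$((off + 1024))
    done
}

# Usage: rescue /dev/sdc1 mirror.img
```

The seek= mirrors skip= so every chunk lands at its true offset in the image, and conv=notrunc keeps earlier chunks intact across retries.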
This worked flawlessly, but it was very, very slow. Imaging an 8GB partition took three hours. About 100MB of the partition could not be recovered, but that was expected considering the massive damage.
Once you have the complete image, retrieving the old files is a snap. I am assuming the drive is ext2, the most stable option for flash file systems. Tweak the commands accordingly for other file systems.
- Copy the image. This recovery process could mess up the image, and you don't want to wait another three hours to clone the drive again.
- mke2fs -S mirror.img
This is the crucial step. It formats the image, but only writes the high-level structures; your data stays intact. It also needs the identical settings used to initially format the partition. You did write those down somewhere, right?
- e2fsck -y mirror.img
Expect this to take a very long time, and expect to run it multiple times. My first pass segfaulted. The second pass took 90 minutes and placed many files in lost+found.
- mount -o loop -t ext2 mirror.img /tmp/loop
And there is the ghost of your drive.
- find /tmp/loop/lost+found/ -type f -print0 | xargs -0 file > /tmp/filelist
And that pretty much wraps up the recovery process. You've got a lost+found directory full of your files, but the file names and paths are all missing. The find command runs each of those files through file and dumps the result to /tmp/filelist. Look through filelist and identify the types of files you are interested in recovering.
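From there, pulling one type of file out of lost+found can be scripted against the filelist. A small sketch, with a hypothetical helper name; it leans on the fact that lost+found names (#12345 and friends) never contain a colon, so splitting file(1)'s "path: description" output on the first colon is safe here.

```shell
# Copy every recovered file whose file(1)-reported type matches a pattern.
# $1 = filelist path, $2 = type pattern, $3 = destination directory.
recover_type() {
    list=$1; pattern=$2; dest=$3
    mkdir -p "$dest"
    grep -i "$pattern" "$list" | cut -d: -f1 | while IFS= read -r f; do
        cp "$f" "$dest"/
    done
}

# Usage: recover_type /tmp/filelist 'JPEG image data' /tmp/photos
```

Run it once per type you care about, grepping filelist first to see what descriptions file actually produced.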