I have been using Linux for about 7 years now, but I just joined the mailing list about a month ago and have been lurking and watching since. The Mailman page for the list does describe what the list is actually for, but I have seen a few questions posted along with the comments.
I've got a problem with a data drive (ext2, mounted as /home) that I need some direction on where to go to find help with. Specifically, is there any way to recover from a "bad superblock" error when attempting to mount the drive? I've Googled for an answer, but without success.
Any suggestions?
Hi,
On Fri, Sep 17, 2004 at 11:32:19AM -0500, docv wrote:
I've got a problem with a data drive (ext2, mounted as /home) that I need some direction on where to go to find help with. Specifically, is there any way to recover from a "bad superblock" error when attempting to mount the drive? I've Googled for an answer, but without success.
Try to mount the filesystem using one of the backup superblocks. From the mount man page under the section "Mount options for ext2":
sb=n Instead of block 1, use block n as superblock. This could be useful when the filesystem has been damaged. (Earlier, copies of the superblock would be made every 8192 blocks: in block 1, 8193, 16385, ... (and one got hundreds or even thousands of copies on a big filesystem). Since version 1.08, mke2fs has a -s (sparse superblock) option to reduce the number of backup superblocks, and since version 1.15 this is the default. Note that this may mean that ext2 filesystems created by a recent mke2fs cannot be mounted r/w under Linux 2.0.*.) The block number here uses 1k units. Thus, if you want to use logical block 32768 on a filesystem with 4k blocks, use "sb=131072".
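For the archives: the arithmetic in that last sentence trips people up, so here's a quick untested sketch. /dev/hdb1 is just a placeholder for whatever device the bad filesystem actually lives on.

```shell
# sb= takes the superblock location in 1k units, no matter what the
# filesystem's actual block size is. So for a backup superblock at
# logical block 32768 on a 4k-block filesystem:
BLOCKSIZE=4096
LOGICAL_BLOCK=32768
SB=$((LOGICAL_BLOCK * BLOCKSIZE / 1024))
echo "sb=$SB"    # prints "sb=131072"

# Then something like (read-only, just in case):
#   mount -t ext2 -o ro,sb=$SB /dev/hdb1 /mnt/recover
# e2fsck can also be pointed at a backup superblock directly, e.g.
#   e2fsck -b 32768 -B 4096 /dev/hdb1
```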
I've not had much luck with this, but that is because when I hose the superblock I REALLY hose the rest of the filesystem. I know it works because I have mounted good filesystems with this option when trying to locate backup superblocks (because I forgot to write down the numbers when I ran mke2fs). Good luck.
Hi,
On Sat, Sep 18, 2004 at 11:06:31AM -0500, Jonathan Hutchins wrote:
sb=n Instead of block 1, use block n as superblock.
Given that sparse superblock backups are now the norm, how does one find them if one didn't think of this before the crash?
Good question. I doubt you are the first to ask it. My guess is that someone smarter/lazier than you and I has asked this question and then written a tool to answer it but I don't know what that tool is.
Looking at the man page for ext2fs I found a reference to the mke2fs command that could help:
Additional backup superblocks can be determined by using the mke2fs program using the -n option to print out where the superblocks were created. The -b option to mke2fs, which specifies the blocksize of the filesystem, must be specified in order for the superblock locations that are printed out to be accurate.
In the man page for mke2fs I found this:
-n causes mke2fs to not actually create a filesystem, but display what it would do if it were to create a filesystem. This can be used to determine the location of the backup superblocks for a particular filesystem, so long as the mke2fs parameters that were passed when the filesystem was originally created are used again. (With the -n option added, of course!)
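Since the filesystem here was probably made with mke2fs defaults, a sketch like this might narrow down the candidates (untested; /dev/hdb1 is a placeholder, and the -b 4096 is just a guess that has to match whatever was used originally):

```shell
# Dry run: print where mke2fs WOULD put the superblocks, touching nothing.
#   mke2fs -n -b 4096 /dev/hdb1

# If you can't run that, the locations are predictable anyway: with
# sparse superblocks, backups live in block groups 0, 1, and powers of
# 3, 5, and 7. For 1k blocks (8192 blocks per group, data starting at
# block 1) the first several backups are:
for group in 1 3 5 7 9 25 27 49; do
    echo $((group * 8192 + 1))
done
# prints 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
```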
The command "tune2fs -l" will tell you what the blocksize is for a partition. If you have a good partition created at about the same time you could use this to make a first guess at the blocksize for the bad partition.
From here it is a lot of trial and error and luck. Kinda makes you want to go write down a list of backup superblocks for your favorite partitions, doesn't it?
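Something like this is what I had in mind, in case anyone searching the archives wants a starting point (untested; the device names and output file are placeholders):

```shell
# While a partition is still healthy, record its block size and the
# list of backup superblocks mke2fs would have created:
#   tune2fs -l /dev/hda2 | grep 'Block size'
#   mke2fs -n -b 4096 /dev/hda2 > /root/superblocks-hda2.txt

# tune2fs -l prints a line like "Block size:               4096";
# pulling just the number out of it:
echo 'Block size:               4096' | awk -F: '{gsub(/ /, "", $2); print $2}'
# prints 4096
```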
I guess I need to take this as both "good" and "bad" news. I gave both of these commands a try and kept getting the same error over and over: 'bad superblock'. So I take it the bad news is that the data on the drive is likely gone, and the good news is that I now know how to prevent it in the future?
The problem occurred after the power supply committed suicide. Everything else (except this new 250 gig drive) is working just fine. As a matter of fact, the BIOS still recognizes the drive, and when I use Webmin to look at the hard drives I have the option of creating a new partition, but I have opted not to in hopes I might still be able to reclaim the data before attempting repartitioning.
Any other suggestions before I bite the bullet and repartition? Anybody?
Uncle Jim wrote:
The command "tune2fs -l" will tell you what the blocksize is for a partition. If you have a good partition created at about the same time you could use this to make a first guess at the blocksize for the bad partition.
From here it is a lot of trial and error and luck. Kinda makes you want to go write down a list of backup superblocks for your favorite partitions, doesn't it?
Is it worth the cost a data recovery company would charge you? They can do just about anything, I've heard...
On Sun, 19 Sep 2004, docv wrote:
I guess I need to take this as both "good" and "bad" news. I gave both of these commands a try and kept getting the same error over and over: 'bad superblock'. So I take it the bad news is that the data on the drive is likely gone, and the good news is that I now know how to prevent it in the future?
The problem occurred after the power supply committed suicide. Everything else (except this new 250 gig drive) is working just fine. As a matter of fact, the BIOS still recognizes the drive, and when I use Webmin to look at the hard drives I have the option of creating a new partition, but I have opted not to in hopes I might still be able to reclaim the data before attempting repartitioning.
Any other suggestions before I bite the bullet and repartition? Anybody?
Uncle Jim wrote:
The command "tune2fs -l" will tell you what the blocksize is for a partition. If you have a good partition created at about the same time you could use this to make a first guess at the blocksize for the bad partition.
From here it is a lot of trial and error and luck. Kinda makes you want to go write down a list of backup superblocks for your favorite partitions, doesn't it?
Having used On-Track to recover about 36GB of corporate data after a RAID controller sh*t the bed on me, I can tell you this is a VERY EXPENSIVE operation. It ran right at $11,000.00 when it was all said and done, and their rates are pretty standard.
[snip]
The problem occurred after the power supply committed suicide. Everything else (except this new 250 gig drive) is working just fine. As a matter of fact the bios still recognizes the drive and when I use webmin to look at the hard drives I have the option of creating a new partition but have opted not to in hopes I might still be able to reclaim the data before attempting repartitioning.
[snip] In all honesty, I don't know that I would "trust" the drive again at this point. If it is still under warranty, I would perhaps "drop" it and have it exchanged. It might also be useful, if you can spare the time, to let some folks at a meeting take their best shot at recovering the data.
Just a thought.
Dustin
No, it would not be worth it. It's mainly just my personal stuff, which can be replaced, although it will be time consuming. I only had a few dirs of business stuff that I'll need to re-create, plus some backup dirs.
lowell wrote:
Is it worth the cost a data recovery company would charge you? They can do just about anything, I've heard...