While using Hadoop, the NameNode was stuck in safe mode and would not leave it.

In my particular case, analyzing the NameNode log made it clear that there was a problem with the Hadoop file system (HDFS).

This was fixed in two steps.

Step 1. Forcefully exiting safe mode:

hdfs dfsadmin -safemode leave
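Before forcing an exit, it can be worth checking the current safe-mode status. A short sketch of the relevant `dfsadmin` subcommands (to be run as the HDFS superuser; output depends on your cluster):

```shell
# Check whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode get

# Leave safe mode immediately (the step used here)
hdfs dfsadmin -safemode leave

# Alternatively, block until the NameNode exits safe mode on its own
hdfs dfsadmin -safemode wait
```

Note that forcing an exit does not fix the underlying problem; if blocks are missing or corrupt, the file system still has to be checked and repaired afterwards, as in Step 2.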

Step 2. Correcting the HDFS file system.

Since I was using a replication factor of 1 on the data nodes, there was no chance of recovering the affected files, so we opted to delete the corrupt blocks.

To check the file system and find corrupt blocks:

  hadoop fsck /    
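For a more targeted view than the full report, `fsck` can list just the files with corrupt blocks and show per-file block details. A sketch, assuming a recent Hadoop CLI; the path below is a placeholder, not from the original incident:

```shell
# List only the files that have corrupt blocks
hdfs fsck / -list-corruptfileblocks

# Show block-level details (and replica locations) for a suspect path
hdfs fsck /path/to/suspect/dir -files -blocks -locations
```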

And finally, to delete the file containing the corrupt block:

hadoop fs -rm /path/to/corrupt/File 
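As an alternative to removing corrupt files one by one, `fsck` itself can move or delete every corrupted file it finds. A sketch of those options; both are destructive to some degree, so run a plain `hdfs fsck /` first to see what would be affected:

```shell
# Move corrupted files into /lost+found (the less destructive option)
hdfs fsck / -move

# Permanently delete all corrupted files
hdfs fsck / -delete
```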

That brought HDFS back to a healthy state, and the NameNode no longer got stuck in safe mode 🙂
