Building a Hadoop cluster on your own computer always runs into all sorts of strange problems, because a virtual machine is always far rougher than a real environment, and some of those weird issues really got me down, ugh~~
With this one, I set the cluster up three times and hit every kind of problem, until I finally found the ultimate trick:
But, but, the premise is that this is a freshly built cluster. If HDFS already has data in it, do not use this, because it will wipe out everything in your hadoopdata directory.
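If you're not sure whether HDFS is actually empty, it's worth checking before wiping anything. A quick way to look (assuming the cluster is currently up):

    # List the HDFS root; if nothing but system directories shows up, it is likely empty
    hdfs dfs -ls /

    # Or look at the overall usage report for the whole cluster
    hdfs dfsadmin -report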
And if, if, you are really sure the build process went fine and the configuration files are correct, then:
Delete everything in the hadoopdata folder on the master node and on all the slave nodes; whether you delete the logs as well is up to you.
Then reformat the NameNode on the master, restart the cluster, and see how it goes.
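A minimal sketch of those steps, assuming hadoopdata lives at ~/hadoopdata on every node and slave1/slave2 are your slave hostnames (adjust the path and hostnames to whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to in your hdfs-site.xml):

    # --- Run on the master node ---
    # Clear the local hadoopdata directory (NameNode metadata lives here);
    # ~/hadoopdata is an assumed path, use your own configured one
    rm -rf ~/hadoopdata/*

    # Clear hadoopdata on each slave (slave1/slave2 are assumed hostnames)
    ssh slave1 'rm -rf ~/hadoopdata/*'
    ssh slave2 'rm -rf ~/hadoopdata/*'

    # Deleting the logs is optional; it makes no difference to the fix
    rm -rf "$HADOOP_HOME"/logs/*

    # Reformat the NameNode, then bring the cluster back up
    hdfs namenode -format
    start-dfs.sh
    start-yarn.sh

After restarting, running jps on each node should show the expected daemons: NameNode (and ResourceManager, if you started YARN) on the master, DataNode and NodeManager on the slaves.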
So once again: this method only works for a freshly built cluster, because it deletes all the data in HDFS. Use it with caution.