Question: How do I start cluster operation?


Starting Cluster Operation
Verify that all machines in FH342 are running Fedora.
The cluster should be ready for Hadoop operations by default.
Restarting Hadoop
If Hadoop is not running, you may have to restart it:
ssh to hadoop110 as user hadoop.
Probe for hadoop processes/daemons running on hadoop110 with the Java Virtual Machine Process Status Tool (jps):
hadoop@hadoop110:~$ jps
16404 NameNode
16775 Jps
16576 SecondaryNameNode
16648 JobTracker
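The jps check above can be scripted so you don't have to eyeball the listing. Below is a minimal sketch: the helper name check_master_daemons is hypothetical, and the daemon names (NameNode, SecondaryNameNode, JobTracker) are taken from the Hadoop 1.x-style output shown above.

```shell
#!/bin/sh
# check_master_daemons (hypothetical helper): reads jps output on stdin,
# prints any missing master daemon, and returns non-zero if one is absent.
# Daemon names match the jps listing for hadoop110 shown above.
check_master_daemons() {
  jps_out=$(cat)
  missing=0
  for d in NameNode SecondaryNameNode JobTracker; do
    if ! printf '%s\n' "$jps_out" | grep -qw "$d"; then
      echo "missing: $d"
      missing=1
    fi
  done
  return $missing
}

# Usage on hadoop110:
#   jps | check_master_daemons && echo "master daemons up"
```

If the function prints nothing and exits 0, all three master daemons are running; otherwise the cluster is down and needs to be restarted.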
If you don't see any of the processes above, the cluster is down. In this case, bring it up with start-all.sh (run on hadoop110); you should see output like:
starting namenode, logging to /mnt/win/data/logs/hadoop-hadoop-namenode-hadoop110.out
hadoop110: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop110.out
hadoop101: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop101.out
hadoop106: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop106.out
hadoop104: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop104.out
hadoop102: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop102.out
hadoop105: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop105.out
hadoop109: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop109.out
hadoop103: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop103.out
hadoop108: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop108.out
hadoop107: starting datanode, logging to /mnt/win/data/logs/hadoop-hadoop-datanode-hadoop107.out
hadoop110: starting secondarynamenode, logging to /mnt/win/data/logs/hadoop-hadoop-secondarynamenode-hadoop110.out
starting jobtracker, logging to /mnt/win/data/logs/hadoop-hadoop-jobtracker-hadoop110.out
hadoop103: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop103.out
hadoop109: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop109.out
hadoop106: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop106.out
hadoop110: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop110.out
hadoop104: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop104.out
hadoop107: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop107.out
hadoop108: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop108.out
hadoop105: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop105.out
hadoop102: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop102.out
hadoop101: starting tasktracker, logging to /mnt/win/data/logs/hadoop-hadoop-tasktracker-hadoop101.out
For completeness, you should know that the command for taking the cluster down is stop-all.sh, but, very likely, you will never have to use it.
Just to make sure, connect to hadoop102 to verify that it, too, is running some Hadoop processes:
hadoop@hadoop110:~$ ssh hadoop102
hadoop@hadoop102:~$ jps
18571 TaskTracker
18749 Jps
18447 DataNode
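Rather than checking one worker by hand, you can loop over all of them. The sketch below assumes passwordless ssh as user hadoop; the helper names worker_ok and check_workers are hypothetical, and the hostnames hadoop101 through hadoop110 are taken from the startup log above.

```shell
#!/bin/sh
# worker_ok (hypothetical helper): reads jps output on stdin and succeeds
# only if both worker daemons (DataNode and TaskTracker) appear, matching
# the hadoop102 listing above.
worker_ok() {
  out=$(cat)
  printf '%s\n' "$out" | grep -qw DataNode &&
  printf '%s\n' "$out" | grep -qw TaskTracker
}

# check_workers (hypothetical helper): ssh to each worker from the startup
# log above and report whether its daemons are running. Assumes passwordless
# ssh as user hadoop.
check_workers() {
  for h in hadoop101 hadoop102 hadoop103 hadoop104 hadoop105 \
           hadoop106 hadoop107 hadoop108 hadoop109 hadoop110; do
    if ssh "$h" jps | worker_ok; then
      echo "$h: OK"
    else
      echo "$h: daemons missing"
    fi
  done
}

# Usage on hadoop110:
#   check_workers
```

Any node reported as "daemons missing" is a sign the cluster did not come up cleanly; check that node's datanode/tasktracker log under /mnt/win/data/logs.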

asked Sep 13, 2013 in Hadoop by anonymous
edited Sep 12, 2013