Step #3 – Setting Hadoop Configuration
$ vim etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:50000</value>
  </property>
</configuration>
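Note: fs.default.name still works but is the older key; on Hadoop 2.x the equivalent, non-deprecated property is fs.defaultFS, which may be used in its place:
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:50000</value>
  </property>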
$ vim etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
$ vim etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/apache-hadoop/namenode-dir</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/apache-hadoop/datanode-dir</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
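The NameNode and DataNode directories above must exist and be writable by the user that runs Hadoop; a minimal way to create them (adjust the ownership to your own Hadoop user):
$ sudo mkdir -p /usr/local/apache-hadoop/namenode-dir /usr/local/apache-hadoop/datanode-dir
$ sudo chown -R $USER:$USER /usr/local/apache-hadoop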
$ vim etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
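Note: many Hadoop 2.x tarballs ship only a template for this file; if etc/hadoop/mapred-site.xml was not present, create it from the bundled template before adding the property above:
$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml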
$ vim etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_151
$ vim etc/hadoop/mapred-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_151
$ vim etc/hadoop/yarn-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_151
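Before moving on, it is worth confirming that the JAVA_HOME set in these files points at a real JDK; a quick check using the path from this guide:
$ /usr/local/jdk1.8.0_151/bin/java -version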
$ vim etc/hadoop/slaves
For a single-node setup the file contains only one entry:
localhost
Step #4 Set up SSH keys for passwordless authentication
$ sudo apt-get install openssh-server
$ ssh-keygen -t rsa (generates the key pair used for passwordless SSH to localhost and to the slaves)
$ cd ~/.ssh
$ cat id_rsa.pub >> authorized_keys (appends the public key to the authorized keys)
Copy id_rsa.pub from the NameNode into authorized_keys on every machine in the cluster.
$ ssh localhost (should now log in without asking for a password)
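On a multi-node cluster, the copy step above can be done with ssh-copy-id; the user and slave hostnames below are placeholders for your own machines:
$ ssh-copy-id hduser@slave1
$ ssh-copy-id hduser@slave2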
Step #5 Format the NameNode
$ cd $HADOOP_HOME
$ bin/hadoop namenode -format
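On Hadoop 2.x the hadoop script prints a deprecation notice for HDFS commands; the equivalent form via the hdfs script is:
$ bin/hdfs namenode -format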
Step #6 Start Hadoop Cluster Service
To start all services:
$ sbin/start-all.sh
or start DFS and YARN separately:
$ sbin/start-dfs.sh
$ sbin/start-yarn.sh
Run jps and verify that the following daemons are running: NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager.
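On a healthy single-node cluster the output looks similar to the following (the process IDs are illustrative and will differ on your machine):
$ jps
4231 NameNode
4388 DataNode
4590 SecondaryNameNode
4742 ResourceManager
4890 NodeManager
5012 Jps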
Step #7 View the Cluster member status in the Web GUI
NameNode - http://localhost:50070
Resource Manager - http://localhost:8088
Node Manager - http://localhost:8042
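The same URLs can be checked from the shell; a quick sanity check on the NameNode UI (assumes curl is installed):
$ curl -s http://localhost:50070 | head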
Step #8 Stop Hadoop Cluster service
$ sbin/stop-all.sh
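As with starting the cluster, DFS and YARN can also be stopped separately:
$ sbin/stop-dfs.sh
$ sbin/stop-yarn.sh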