Friday, April 29, 2016

Hadoop Installation on Linux

Here are the steps to install Hadoop 2.x on a Linux machine.

Step 1: Installing Java

Check Java installation on your machine

# java -version 

java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

If you don’t have Java installed on your system, use the link below to install it.
https://www.java.com/en/download/help/linux_x64_install.xml
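Alternatively, most distributions ship OpenJDK packages that you can install through the package manager. For example (package names vary by distribution and release, so treat these as illustrative):

On RHEL/CentOS:
# yum install java-1.8.0-openjdk-devel

On Debian/Ubuntu:
# apt-get install openjdk-8-jdk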

Step 2: Creating Hadoop User
We recommend creating a dedicated, non-root account for Hadoop. Create a system account using the following commands.
# adduser hadoop
# passwd hadoop
After creating the account, you also need to set up key-based SSH to the account itself. To do this, execute the following commands.
# su - hadoop
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Let's verify the key-based login. The command below should not ask for a password, but the first time it will prompt you to add the RSA key to the list of known hosts.

$ ssh localhost
$ exit
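If ssh still prompts for a password, a common cause is loose permissions on the .ssh directory itself; you can tighten them with:

$ chmod 0700 ~/.ssh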
Step 3. Install Hadoop 2.6.0
3.1 Download Hadoop 2.6.0

Download the Hadoop 2.6.0 archive using the commands below. You can also select an alternate download mirror to increase download speed.

$ cd ~
$ wget http://apache.claz.org/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz

3.2 Install Hadoop 2.6.0

$ tar xzf hadoop-2.6.0.tar.gz
$ mv hadoop-2.6.0 hadoop

Step 4. Configure Hadoop Pseudo-Distributed Mode

4.1. Setup Environment Variables

First we need to set the environment variables used by Hadoop. Create a hadoop.env file and add the following values.
export HADOOP_HOME=/scratch/vchennar/hadoop/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Now apply the changes to the current running environment:
$ source /scratch/vchennar/hadoop/hadoop.env
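To make these variables available in every new shell, you can also append the same source line to the hadoop user's ~/.bashrc (adjust the path if you keep hadoop.env elsewhere):

$ echo 'source /scratch/vchennar/hadoop/hadoop.env' >> ~/.bashrc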
Now edit the $HADOOP_HOME/etc/hadoop/hadoop-env.sh file and set the JAVA_HOME environment variable. Change the Java path to match the installation on your system.
export JAVA_HOME=/ade_autofs/gd29_3rdparty/JDK8_MAIN_LINUX.X64.rdd/LATEST/jdk8

4.2. Edit Configuration Files

Hadoop has many configuration files, which need to be configured according to the requirements of your Hadoop infrastructure. Let's start with the configuration for a basic Hadoop single-node cluster setup. First, navigate to the location below:
$ cd $HADOOP_HOME/etc/hadoop

Edit core-site.xml

<configuration>
<property>
  <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>
</configuration>

Edit hdfs-site.xml

<configuration>
<property>
 <name>dfs.replication</name>
 <value>1</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
</property>
</configuration>
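The NameNode and DataNode directories configured above are created automatically on first use, but you can also create them up front as the hadoop user (paths taken from the hdfs-site.xml values above):

$ mkdir -p /home/hadoop/hadoopdata/hdfs/namenode
$ mkdir -p /home/hadoop/hadoopdata/hdfs/datanode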

Edit mapred-site.xml
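Note: in a fresh Hadoop 2.6.0 extract this file does not exist yet; it ships as a template, so copy it first:

$ cp mapred-site.xml.template mapred-site.xml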

<configuration>
 <property>
  <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>
</configuration>

Edit yarn-site.xml

<configuration>
 <property>
  <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
 </property>
</configuration>

4.3. Format Namenode

Now format the NameNode using the following command. Make sure the storage directories configured in hdfs-site.xml exist and are writable by the hadoop user.
$ hdfs namenode -format
Sample output:
28/04/16 09:58:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ........
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
...
...

28/04/16 09:58:57 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
28/04/16 09:58:57 INFO util.ExitUtil: Exiting with status 0
28/04/16 09:58:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at .....
************************************************************/

Step 5. Start Hadoop Cluster

Let's start your Hadoop cluster using the scripts provided by Hadoop. Just navigate to your Hadoop sbin directory and execute the scripts one by one.
$ cd $HADOOP_HOME/sbin/
Now run the start-dfs.sh script.
$ start-dfs.sh
Sample output:
28/04/16 10:00:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to .../logs/...out
localhost: starting datanode, logging to .../logs/...out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 3c:c4:f6:f1:72:d9:84:f9:71:73:4a:0d:55:2c:f9:43.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to .../logs/hadoop-hadoop-secondarynamenode-....out
28/04/16 10:01:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Now run the start-yarn.sh script.
$ start-yarn.sh
Sample output:
starting yarn daemons
starting resourcemanager, logging to .../logs/yarn-hadoop-resourcemanager-...out
localhost: starting nodemanager, logging to .../logs/yarn-hadoop-nodemanager-...out
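To confirm that all five daemons came up, you can use the jps tool that ships with the JDK. On a healthy pseudo-distributed setup the output should look roughly like this (process IDs are illustrative):

$ jps
2418 NameNode
2564 DataNode
2753 SecondaryNameNode
2941 ResourceManager
3067 NodeManager
3204 Jps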

Step 6. Access Hadoop Services in Browser

By default, the Hadoop NameNode web interface starts on port 50070. Access your server on port 50070 in your favorite web browser.
http://svr1.tecadmin.net:50070/
[Screenshot: Hadoop single-node NameNode web UI]
Now access port 8088 to get information about the cluster and all applications.
http://svr1.tecadmin.net:8088/
[Screenshot: Hadoop single-node applications overview]
Access port 50090 to get details about the secondary NameNode.
http://svr1.tecadmin.net:50090/
[Screenshot: Hadoop single-node secondary NameNode]
Access port 50075 to get details about the DataNode.
http://svr1.tecadmin.net:50075/
[Screenshot: Hadoop single-node DataNode]
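As a final sanity check, you can run a few HDFS commands and one of the example jobs bundled with Hadoop. The jar path below matches the standard Hadoop 2.6.0 layout; the input and output directory names are just examples:

$ hdfs dfs -mkdir -p /user/hadoop/input
$ hdfs dfs -put $HADOOP_HOME/etc/hadoop/core-site.xml /user/hadoop/input/
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hadoop/input /user/hadoop/output
$ hdfs dfs -cat /user/hadoop/output/part-r-00000

When you are done, the matching stop scripts shut the cluster down:

$ stop-yarn.sh
$ stop-dfs.sh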




