Hadoop 2.6.1 pseudo-distributed installation on CentOS

0. Prerequisites
CentOS 6, with JDK 7 already installed
1. Add a hadoop user
[root@capaatest ~]# useradd hadoop
[root@capaatest ~]# passwd hadoop
2. Set up passwordless SSH login with a key
[hadoop@capaatest ~]$ ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
c2:38:41:3f:9c:e8:5b:a8:3b:57:7b:e9:82:80:0c:59 hadoop@capaatest
The key's randomart image is:
+--[ RSA 2048]----+
| . |
| E. + . |
| o o = |
|o . = . |
|o. = + S |
|…. +.. |
| …o . . |
| ..o o o |
| .o +. |
+-----------------+

[hadoop@capaatest ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[hadoop@capaatest .ssh]$ chmod 644 authorized_keys

At this point, ssh to the local machine should work without a password.
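The key-setup commands above can be condensed into a re-runnable sketch. The paths, empty passphrase, and 644 mode mirror the session above; the existence checks are our addition so running it twice does no harm:

```shell
# Re-runnable version of the key setup; skips generation if a key already exists.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa -q
# Append the public key only if it is not already authorized.
grep -qxF "$(cat ~/.ssh/id_rsa.pub)" ~/.ssh/authorized_keys 2>/dev/null \
  || cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys
```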
If needed, change the hostname:
vi /etc/sysconfig/network

3. Unpack the Hadoop tarball
[hadoop@capaatest ~]$ tar -xvf hadoop-2.6.1.tar.gz
Keep it under /home/hadoop.

4. Add Hadoop environment variables
[hadoop@capaatest ~]$ vi .bash_profile
Append at the end:
export HADOOP_PREFIX=/home/hadoop/hadoop-2.6.1
export PATH=$HADOOP_PREFIX/bin:$PATH

[hadoop@capaatest ~]$ source .bash_profile
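As a quick sanity check (a hypothetical one-off, not part of the install), you can confirm the new variables took effect; the block re-exports them so it stands alone:

```shell
# Confirm HADOOP_PREFIX is set and its bin directory now leads PATH.
export HADOOP_PREFIX=/home/hadoop/hadoop-2.6.1
export PATH=$HADOOP_PREFIX/bin:$PATH
case "$PATH" in
  "$HADOOP_PREFIX/bin:"*) echo "PATH ok" ;;          # prints "PATH ok"
  *) echo "PATH missing hadoop bin" ;;
esac
```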

5. Edit hadoop-env.sh
[hadoop@capaatest ~]$ cd hadoop-2.6.1
[hadoop@capaatest hadoop-2.6.1]$ cd etc/hadoop/
[hadoop@capaatest hadoop]$ vi hadoop-env.sh
Set JAVA_HOME:
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_79

6. Edit core-site.xml
[hadoop@capaatest hadoop]$ vi core-site.xml

Add the following between the <configuration> tags:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-2.6.1/tmp</value>
    <description>A base for other temporary directories.</description>
</property>
<property>
    <name>fs.default.name</name>
    <value>hdfs://capaatest:9000</value>
</property>

(fs.default.name still works but is deprecated in Hadoop 2.x; fs.defaultFS is the current name.)

7. Edit mapred-site.xml
[hadoop@capaatest hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@capaatest hadoop]$ vi mapred-site.xml

Add between the <configuration> tags:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
8. Edit yarn-site.xml
[hadoop@capaatest hadoop]$ vi yarn-site.xml

Add between the <configuration> tags:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
9. Edit hdfs-site.xml
[hadoop@capaatest hadoop]$ vi hdfs-site.xml
The defaults are fine here for a pseudo-distributed setup.
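Optionally, single-node guides often set the replication factor to 1 explicitly (the HDFS default is 3, which a lone DataNode can never satisfy). If you choose to, the property would look like:

```xml
<!-- Optional: keep a single copy of each block, since there is only one DataNode. -->
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
```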

10. Format the NameNode
[hadoop@capaatest hadoop-2.6.1]$ bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/10/15 15:18:44 INFO common.Storage: Storage directory /home/hadoop/hadoop-2.6.1/tmp/dfs/name has been successfully formatted.
Seeing "successfully formatted" in the output means the format worked. (As the warning notes, bin/hdfs namenode -format is the preferred, non-deprecated invocation.)

11. Start the daemons (run from the sbin directory)
[hadoop@capaatest sbin]$ ./start-dfs.sh
[hadoop@capaatest sbin]$ ./start-yarn.sh

12. Verify the processes
[hadoop@capaatest sbin]$ jps
12659 Jps
11669 NameNode
12247 ResourceManager
11796 DataNode
12349 NodeManager
11963 SecondaryNameNode
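A small helper can check a jps listing for the five expected daemons. The function name is ours, not part of Hadoop; it only inspects the text you pass it:

```shell
# check_daemons: given jps output as its argument, report any missing daemon.
check_daemons() {
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    # -w matches whole words, so "NameNode" does not match "SecondaryNameNode".
    printf '%s\n' "$1" | grep -qw "$d" || echo "missing: $d"
  done
}

# Usage on the cluster node: check_daemons "$(jps)"
```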

13. Web UIs
YARN ResourceManager: http://capaatest:8088/
HDFS NameNode: http://capaatest:50070/