I. CDH4 Installation
Download the CDH4 package for your operating system from https://ccp.cloudera.com/display/CDH4DOC/CDH4+Installation; the operating system here is Red Hat 5.4.
Step 1a: Optionally Add a Repository Key
rpm --import http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera
Step 2: Install CDH4 with MRv1
yum -y install hadoop-0.20-mapreduce-jobtracker
(on the worker nodes, also install hadoop-0.20-mapreduce-tasktracker; its service is started in Step 10 below)
Step 3: Install CDH4 with YARN
yum -y install hadoop-yarn-resourcemanager
yum -y install hadoop-hdfs-namenode
yum -y install hadoop-hdfs-secondarynamenode
yum -y install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
yum -y install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
yum -y install hadoop-client
Note: install the JDK and PostgreSQL beforehand.
II. CDH4 Configuration
1.Configure the cluster hosts
(1).Configure network hosts
To ensure the hosts can resolve one another, keep the contents of /etc/hosts and the hostname set in /etc/sysconfig/network consistent with each machine's actual IP address.
(2).Copy the Hadoop configuration
cp -r /etc/hadoop/conf.empty /etc/hadoop/conf.my_cluster
(3).Customize the configuration files
/etc/hadoop/conf/core-site.xml
fs.default.name (the old name, deprecated but still supported for compatibility) or fs.defaultFS: specifies the NameNode URI used as the default filesystem.
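For example, a minimal sketch (the hostname mynamenode and port 8020 are placeholders, not values from these notes):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mynamenode:8020/</value>
</property>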
/etc/hadoop/conf/hdfs-site.xml
dfs.permissions.superusergroup: the UNIX group whose members are treated as HDFS superusers.
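A sketch, assuming the superuser group is named hadoop (the group name is an assumption; use whatever group your administrators belong to):

<property>
  <name>dfs.permissions.superusergroup</name>
  <value>hadoop</value>
</property>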
(4).Configure local storage directories
①./etc/hadoop/conf/hdfs-site.xml
NameNode:
dfs.name.dir (deprecated, but still supported) or dfs.namenode.name.dir: the directories in which the NameNode stores its metadata and edit logs. Cloudera recommends specifying at least two directories, one of them located on an NFS mount point.
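For example, using the two directories created in step ② below (one local, one on the NFS mount):

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/1/dfs/nn,/nfsmount/dfs/nn</value>
</property>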
DataNode:
dfs.data.dir (deprecated, but still supported) or dfs.datanode.data.dir: the directories in which the DataNode stores its blocks. Cloudera recommends mounting each of them on a separate dedicated disk.
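For example, using the four directories created in step ② below, each on its own disk:

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn,/data/4/dfs/dn</value>
</property>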
②.Create the directories used above
mkdir -p /data/1/dfs/nn /nfsmount/dfs/nn
mkdir -p /data/1/dfs/dn /data/2/dfs/dn /data/3/dfs/dn /data/4/dfs/dn
chown -R hdfs:hdfs /data/1/dfs/nn /nfsmount/dfs/nn /data/1/dfs/dn /data/2/dfs/dn /data/3/dfs/dn /data/4/dfs/dn
③.The final correct ownership and permissions are:
dfs.name.dir or dfs.namenode.name.dir | hdfs:hdfs | drwx------
dfs.data.dir or dfs.datanode.data.dir | hdfs:hdfs | drwx------
④.Note: the Hadoop daemons automatically set the correct permissions for dfs.data.dir or dfs.datanode.data.dir. For dfs.name.dir or dfs.namenode.name.dir, however, the permissions are currently left at the filesystem default, usually drwxr-xr-x (755). Set the dfs.name.dir or dfs.namenode.name.dir directories to drwx------ (700) with either of the following commands:
chmod 700 /data/1/dfs/nn /nfsmount/dfs/nn
or
chmod go-rx /data/1/dfs/nn /nfsmount/dfs/nn
(5).Format the NameNode
service hadoop-hdfs-namenode init
(6).Configure a remote directory for NameNode storage
mount -t nfs -o tcp,soft,intr,timeo=10,retrans=10 <server>:<export> <mount_point>
For an HA (high availability) cluster, use instead:
mount -t nfs -o tcp,soft,intr,timeo=50,retrans=12 <server>:<export> <mount_point>
(<server>:<export> and <mount_point> stand for your NFS server export and the local mount point, e.g. the /nfsmount directory used above.)
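To make the mount survive reboots, an /etc/fstab sketch with the same options (same placeholders as above; for an HA cluster substitute timeo=50,retrans=12):

<server>:<export> <mount_point> nfs tcp,soft,intr,timeo=10,retrans=10 0 0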
2.Deploy MRv1 MapReduce on the cluster
(1).Step 1: Configuring Properties for MRv1 Clusters
/etc/hadoop/conf/mapred-site.xml
mapred.job.tracker: specifies the hostname and (optionally) the port of the JobTracker's RPC server.
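A sketch; the hostname jobtracker-host is a placeholder, and 8021 is the port conventionally used for the MRv1 JobTracker:

<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker-host:8021</value>
</property>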
(2).Step 2: Configure Local Storage Directories for Use by MRv1 Daemons
/etc/hadoop/conf/mapred-site.xml
mapred.local.dir: specifies the directories used to store temporary data and intermediate files.
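For example, using the four directories created just below:

<property>
  <name>mapred.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local,/data/3/mapred/local,/data/4/mapred/local</value>
</property>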
Create these directories:
mkdir -p /data/1/mapred/local /data/2/mapred/local /data/3/mapred/local /data/4/mapred/local
Set the owner and group:
chown -R mapred:hadoop /data/1/mapred/local /data/2/mapred/local /data/3/mapred/local /data/4/mapred/local
(3).Step 3: Configure a Health Check Script for DataNode Processes
Health check for the DataNode; here is the script from the official documentation:
#!/bin/bash
# Report an error if no DataNode JVM is running on this host
if ! jps | grep -q DataNode ; then
  echo ERROR: datanode not up
fi
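For the TaskTracker to run this script, point the MRv1 health-checker property at it in mapred-site.xml; the script path below is an assumed install location, not one given in these notes:

<property>
  <name>mapred.healthChecker.script.path</name>
  <value>/usr/lib/hadoop/bin/check_datanode.sh</value>
</property>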
(4).Step 4: Deploy your Custom Configuration to your Entire Cluster
Set the configuration to use on every node:
alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster
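If conf.my_cluster has not yet been registered as an alternative, install it first (the priority 50 is an arbitrary choice), then display the result to verify which configuration is active:

alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
alternatives --display hadoop-conf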
(5).Step 5: Start HDFS
for service in /etc/init.d/hadoop-hdfs-*
do
  sudo $service start
done
(6).Step 6: Create the HDFS /tmp Directory
sudo -u hdfs hadoop fs -mkdir /tmp
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
Note: this /tmp is created in HDFS, not on the local filesystem; the local temporary root is governed by hadoop.tmp.dir.
(7).Step 7: Create MapReduce /var directories
sudo -u hdfs hadoop fs -mkdir /var
sudo -u hdfs hadoop fs -mkdir /var/lib
sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs
sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs/cache
sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs/cache/mapred
sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs/cache/mapred/mapred
sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
sudo -u hdfs hadoop fs -chmod 1777 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
sudo -u hdfs hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred
(8).Step 8: Verify the HDFS File Structure
Check the HDFS file structure:
sudo -u hdfs hadoop fs -ls -R /
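The listing should resemble the following sketch, derived from the chmod/chown commands above (dates elided; the group supergroup is the HDFS default and may differ on your cluster):

drwxrwxrwt   - hdfs   supergroup   0 ... /tmp
drwxr-xr-x   - hdfs   supergroup   0 ... /var
drwxr-xr-x   - hdfs   supergroup   0 ... /var/lib
drwxr-xr-x   - hdfs   supergroup   0 ... /var/lib/hadoop-hdfs
drwxr-xr-x   - hdfs   supergroup   0 ... /var/lib/hadoop-hdfs/cache
drwxr-xr-x   - mapred supergroup   0 ... /var/lib/hadoop-hdfs/cache/mapred
drwxr-xr-x   - mapred supergroup   0 ... /var/lib/hadoop-hdfs/cache/mapred/mapred
drwxrwxrwt   - mapred supergroup   0 ... /var/lib/hadoop-hdfs/cache/mapred/mapred/staging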
(9).Step 9: Create and Configure the mapred.system.dir Directory in HDFS
①.sudo -u hdfs hadoop fs -mkdir /mapred/system
sudo -u hdfs hadoop fs -chown mapred:hadoop /mapred/system
②.The correct ownership and permissions are:
mapred.system.dir | mapred:hadoop | drwx------
/ (the HDFS root directory) | hdfs:hadoop | drwxr-xr-x
(10).Step 10: Start MapReduce
On each TaskTracker system:
sudo service hadoop-0.20-mapreduce-tasktracker start
On the JobTracker system:
sudo service hadoop-0.20-mapreduce-jobtracker start
(11).Step 11: Create a Home Directory for each MapReduce User
Create a home directory for each MapReduce user:
sudo -u hdfs hadoop fs -mkdir /user/<user>
sudo -u hdfs hadoop fs -chown <user> /user/<user>
(<user> stands for the Linux username of each MapReduce user)