Hadoop 2.6.0 Distributed Deployment Reference Manual

Contents

1. Environment
1.1 Installation environment
1.2 Hadoop cluster layout
2. Base environment installation and configuration
2.1 Add the hadoop user
2.2 Install JDK 1.7
2.3 Configure passwordless SSH login
2.4 Edit the hosts mapping file
3. Hadoop installation and configuration
3.1 Common installation and configuration
3.2 Per-node configuration
4. Formatting and starting the cluster
4.1 Format the cluster HDFS filesystem
4.2 Start the Hadoop cluster
Appendix 1: Key configuration reference (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, hadoop-env.sh, slaves)
Appendix 2: Full configuration reference (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, hadoop-env.sh, slaves)
Appendix 3: Configuration parameter reference (conf/core-site.xml; conf/hdfs-site.xml: NameNode, DataNode; conf/yarn-site.xml: ResourceManager and NodeManager, ResourceManager, NodeManager, History Server; conf/mapred-site.xml: MapReduce applications, MapReduce JobHistory Server)

1. Environment

1.1 Installation environment

In this guide the operating system is CentOS 7.0, the JDK is Oracle HotSpot 1.7, the Hadoop release is Apache Hadoop 2.6.0, and the operating user is hadoop.

1.2 Hadoop cluster layout

The cluster nodes are laid out as follows:

  Hostname           IP address   Role(s)
  ResourceManager    172.15.0.2   ResourceManager & MR JobHistory Server
  NameNode           172.15.0.3   NameNode
  SecondaryNameNode  172.15.0.4   SecondaryNameNode
  DataNode01         172.15.0.5   DataNode & NodeManager
  DataNode02         172.15.0.6   DataNode & NodeManager
  DataNode03         172.15.0.7   DataNode & NodeManager
  DataNode04         172.15.0.8   DataNode & NodeManager
  DataNode05         172.15.0.9   DataNode & NodeManager

Note: "&" in the table joins multiple roles; for example, the host ResourceManager carries two roles, ResourceManager and MR JobHistory Server.

2. Base environment installation and configuration

2.1 Add the hadoop user

  useradd hadoop

The user "hadoop" is the user that installs and operates the Hadoop cluster.
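If the account was just created, it also needs a login password before the SSH setup in section 2.3 (ssh-copy-id, used there, prompts for it once per host); a minimal check, with the password value left to your site policy:

  passwd hadoop   # set the login password for the hadoop user
  id hadoop       # verify the user and its primary group exist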

2.2 Install JDK 1.7

CentOS 7 ships with OpenJDK 1.7; in this guide it is replaced with Oracle HotSpot 1.7, installed by unpacking the binary archive into the /opt/ directory.

1) List the JDK rpm packages currently installed:

  rpm -qa | grep jdk

2) Remove the bundled JDK, once for each package reported by the previous command:

  rpm -e --nodeps <package-name>

3) Install the chosen JDK: change to the directory holding the downloaded archive and unpack it (see the sketch after this list).

4) Configure environment variables: edit ~/.bashrc or /etc/profile and append:

  #JAVA
  export JAVA_HOME=/opt/jdk1.7
  export PATH=$PATH:$JAVA_HOME/bin
  export CLASSPATH=$JAVA_HOME/lib
  export CLASSPATH=$CLASSPATH:$JAVA_HOME/jre/lib
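A minimal sketch for step 3, assuming the archive was downloaded to /opt; the archive name jdk-7u80-linux-x64.tar.gz and the unpacked directory name are placeholders for whichever 1.7 build you downloaded, symlinked so that the JAVA_HOME=/opt/jdk1.7 set above matches:

  cd /opt                               # directory holding the downloaded archive
  tar -zxvf jdk-7u80-linux-x64.tar.gz   # unpacks to a directory such as jdk1.7.0_80
  ln -s /opt/jdk1.7.0_80 /opt/jdk1.7    # match the JAVA_HOME set above
  source /etc/profile                   # reload the environment variables
  java -version                         # should now report the HotSpot JVM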

2.3 Configure passwordless SSH login

1) Passwordless SSH login must be set up among the eight hosts listed in the table above.

2) As the hadoop user, go to the home directory and generate a key pair:

  ssh-keygen -t rsa

3) Create the public-key authentication file authorized_keys and append the content of the generated ~/.ssh/id_rsa.pub file to it:

  more ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

4) Set the permissions of the ~/.ssh directory and of authorized_keys:

  chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys

5) Repeat the steps above on every node, and copy each node's ~/.ssh/id_rsa.pub public key to all other hosts, appending it to their authorized_keys files (see the sketch below).

All of the above can also be done in one line:

  rm -rf ~/.ssh; ssh-keygen -t rsa; chmod 700 ~/.ssh; more ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys

Note: on CentOS 6 passwordless login can also be set up with a DSA key (ssh-keygen -t dsa); on CentOS 7 only the RSA method works. With DSA you can ssh into the local host without a password, but not into the other hosts.
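A minimal sketch of step 5, assuming the host names from section 2.4 already resolve and that ssh-copy-id is available (it ships with openssh-clients on CentOS); you are prompted for the hadoop password once per host:

  # run on every node as the hadoop user
  for host in ResourceManager NameNode SecondaryNameNode \
              DataNode01 DataNode02 DataNode03 DataNode04 DataNode05; do
    ssh-copy-id hadoop@$host   # appends id_rsa.pub to the remote authorized_keys
  done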

2.4 Edit the hosts mapping file

Edit /etc/hosts on each node and append the following entries:

  172.15.0.2 ResourceManager
  172.15.0.3 NameNode
  172.15.0.4 SecondaryNameNode
  172.15.0.5 DataNode01
  172.15.0.6 DataNode02
  172.15.0.7 DataNode03
  172.15.0.8 DataNode04
  172.15.0.9 DataNode05
  172.15.0.5 NodeManager01
  172.15.0.6 NodeManager02
  172.15.0.7 NodeManager03
  172.15.0.8 NodeManager04
  172.15.0.9 NodeManager05
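A quick check, from the current node, that name resolution and passwordless login both work; each iteration should print the remote hostname without asking for a password:

  for host in ResourceManager NameNode SecondaryNameNode \
              DataNode01 DataNode02 DataNode03 DataNode04 DataNode05; do
    ssh hadoop@$host hostname
  done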

3. Hadoop installation and configuration

3.1 Common installation and configuration

The following steps are common to all nodes and identical on each of them; repeat them on every node:

1) Copy the Hadoop archive hadoop-2.6.0.tar to the /opt directory and unpack it; the resulting hadoop-2.6.0 directory (/opt/hadoop-2.6.0) is the Hadoop installation root (a consolidated sketch follows this list).

2) Change the owner of the installation directory hadoop-2.6.0 to the hadoop user:

  chown -R hadoop.hadoop /opt/hadoop-2.6.0

3) Add environment variables:

  #hadoop
  export HADOOP_HOME=/opt/hadoop-2.6.0
  export PATH=$PATH:$HADOOP_HOME/bin
  export PATH=$PATH:$HADOOP_HOME/sbin
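A minimal consolidated sketch of the three steps plus a verification, assuming the archive already sits in /opt and the environment variables were added to /etc/profile:

  cd /opt
  tar -xvf hadoop-2.6.0.tar              # unpacks to /opt/hadoop-2.6.0
  chown -R hadoop.hadoop /opt/hadoop-2.6.0
  source /etc/profile                    # reload the environment variables
  hadoop version                         # should report Hadoop 2.6.0
  ls -ld /opt/hadoop-2.6.0               # owner and group should be hadoop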

3.2 Per-node configuration

Unpack the prepared configuration files and distribute them into the Hadoop configuration directory "$HADOOP_HOME/etc/hadoop" on each node; if prompted whether to overwrite existing files, confirm.

Note: for the configuration parameter settings of the individual nodes, refer to "Appendix 1" or "Appendix 2" below.
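A minimal sketch for pushing an already edited configuration directory from the current node to all the others, assuming the passwordless SSH and host names from section 2 are in place:

  for host in NameNode SecondaryNameNode \
              DataNode01 DataNode02 DataNode03 DataNode04 DataNode05; do
    scp -r $HADOOP_HOME/etc/hadoop/* hadoop@$host:$HADOOP_HOME/etc/hadoop/
  done

The loop above assumes it is run from the ResourceManager host; when running from another node, adjust the host list accordingly.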

4. Formatting and starting the cluster

4.1 Format the cluster HDFS filesystem

After installation, log in to the NameNode node (or any DataNode node) and format the cluster HDFS filesystem:

  hdfs namenode -format

Note: if this is not the first time the HDFS filesystem is formatted, then before formatting you must empty the NameNode's dfs.namenode.name.dir directory as well as each DataNode's dfs.datanode.data.dir directory (in this guide, /home/hadoop/hadoopdata).

4.2 Start the Hadoop cluster

Log in to the following hosts and run the corresponding commands:

1) Log in to ResourceManager and run start-yarn.sh to start the cluster resource management system, YARN.

2) Log in to NameNode and run start-dfs.sh to start the cluster HDFS filesystem.

3) Log in to each node in turn (ResourceManager, NameNode, SecondaryNameNode, and DataNode01 through DataNode05) and run jps to check that the expected Java processes are running:

  ResourceManager node: ResourceManager
  NameNode node: NameNode
  SecondaryNameNode node: SecondaryNameNode
  each DataNode node: DataNode & NodeManager

If all of the above checks pass, the Hadoop cluster has started correctly.
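As a further smoke test once HDFS and YARN are up, a minimal sketch run from any node (the path /tmp/smoke is an arbitrary example):

  hdfs dfsadmin -report           # all 5 DataNodes should be listed as live
  hdfs dfs -mkdir -p /tmp/smoke   # a simple write against HDFS
  hdfs dfs -ls /tmp               # the new directory should appear
  yarn node -list                 # all 5 NodeManagers should be RUNNING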

Appendix 1: Key configuration reference

1) core-site.xml

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://NameNode:9000</value>
    <description>NameNode URI</description>
  </property>

- The property fs.defaultFS is the NameNode address, of the form hdfs://hostname(or IP):port.

2) hdfs-site.xml

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hadoopdata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hadoopdata/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>SecondaryNameNode:50090</value>
  </property>

- The property dfs.namenode.name.dir is the local filesystem directory in which the NameNode stores the namespace and edit-log metadata; it defaults to /tmp/hadoop-username/dfs/name.
- The property dfs.datanode.data.dir is the local filesystem directory in which a DataNode stores HDFS blocks; it defaults to /tmp/hadoop-username/dfs/data.
- The property dfs.namenode.secondary.http-address is the SecondaryNameNode host and port; if no separate SecondaryNameNode role is needed, this property can be omitted.
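To confirm which values a node actually picked up once the files are distributed, the stock Hadoop 2.6 CLI can query the live configuration, for example:

  hdfs getconf -confKey fs.defaultFS   # expect hdfs://NameNode:9000
  hdfs getconf -confKey dfs.namenode.secondary.http-address
  hdfs getconf -confKey dfs.replication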

3) mapred-site.xml

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Execution framework set to Hadoop YARN.</description>
  </property>

- The property mapreduce.framework.name selects the runtime framework used to execute MapReduce jobs; it defaults to local and must be changed to yarn.

4) yarn-site.xml

  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ResourceManager</value>
    <description>ResourceManager host</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service that needs to be set for Map Reduce applications.</description>
  </property>

- The property yarn.resourcemanager.hostname specifies the ResourceManager host address.
- The property yarn.nodemanager.aux-services names the shuffle service used by MR applications.

5) hadoop-env.sh

JAVA_HOME here is the current Java installation directory:

  export JAVA_HOME=/opt/jdk1.7

6) slaves

The master nodes of the cluster (NameNode and ResourceManager) must each list the slave nodes they own.

The slaves file on the NameNode node contains:

  DataNode01
  DataNode02
  DataNode03
  DataNode04
  DataNode05

The slaves file on the ResourceManager node contains:

  NodeManager01
  NodeManager02
  NodeManager03
  NodeManager04
  NodeManager05

Appendix 2: Full configuration reference

Note: the parameters shown in red in the original document are the ones that must be set (they correspond to the settings in Appendix 1); all other entries below are defaults.

1) core-site.xml

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://NameNode:9000</value>
    <description>NameNode URI</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Size of read/write buffer used in SequenceFiles. The default value is 131072.</description>
  </property>

- The property fs.defaultFS is the NameNode address, of the form hdfs://hostname(or IP):port.

2) hdfs-site.xml

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hadoopdata/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>SecondaryNameNode:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hadoopdata/hdfs/datanode</value>
  </property>

- The property dfs.namenode.name.dir is the local filesystem directory in which the NameNode stores the namespace and edit-log metadata; it defaults to /tmp/hadoop-username/dfs/name.
- The property dfs.datanode.data.dir is the local filesystem directory in which a DataNode stores HDFS blocks; it defaults to /tmp/hadoop-username/dfs/data.
- The property dfs.namenode.secondary.http-address is the SecondaryNameNode host and port; if no separate SecondaryNameNode role is needed, this property can be omitted.

3) mapred-site.xml

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Execution framework set to Hadoop YARN.</description>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for maps.</description>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
    <description>Larger heap-size for child jvms of maps.</description>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
    <description>Larger resource limit for reduces.</description>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>10</value>
    <description>More streams merged at once while sorting files.</description>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>5</value>
    <description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>ResourceManager:10020</value>
    <description>MapReduce JobHistory Server host:port. Default port is 10020.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>ResourceManager:19888</value>
    <description>MapReduce JobHistory Server Web UI host:port. Default port is 19888.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>
    <description>Directory where history files are written by MapReduce jobs. Default is /mr-history/tmp.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>
    <description>Directory where history files are managed by the MR JobHistory Server. Default value is /mr-history/done.</description>
  </property>

- The property mapreduce.framework.name selects the runtime framework used to execute MapReduce jobs; it defaults to local and must be changed to yarn.
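With mapred-site.xml in place, MR-on-YARN and the JobHistory Server can be exercised end to end; a minimal sketch (the examples jar ships with Hadoop 2.6.0, and mr-jobhistory-daemon.sh is the stock script in $HADOOP_HOME/sbin):

  # on the ResourceManager host, which also carries the MR JobHistory Server role
  mr-jobhistory-daemon.sh start historyserver
  # from any node: run the bundled pi estimator as a test job
  yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 10 100
  # the finished job should then be visible at http://ResourceManager:19888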

4) yarn-site.xml

  <property>
    <name>yarn.acl.enable</name>
    <value>false</value>
    <description>Enable ACLs? Defaults to false. The value is true or false.</description>
  </property>
  <property>
    <name>yarn.admin.acl</name>
    <value>*</value>
    <description>ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to special value of * which means anyone. Special value of just space means no one has access.</description>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>false</value>
    <description>Configuration to enable or disable log aggregation.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>ResourceManager:8032</value>
    <description>ResourceManager host:port for clients to submit jobs. NOTES: host:port, if set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>ResourceManager:8030</value>
    <description>ResourceManager host:port for ApplicationMasters to talk to Scheduler to obtain resources. NOTES: host:port, if set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>ResourceManager:8031</value>
    <description>ResourceManager host:port for NodeManagers. NOTES: host:port, if set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>ResourceManager:8033</value>
    <description>ResourceManager host:port for administrative commands. NOTES: host:port, if set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>ResourceManager:8088</value>
    <description>ResourceManager web-ui host:port. NOTES: host:port, if set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ResourceManager</value>
    <description>ResourceManager host.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    <description>ResourceManager Scheduler class: CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler. The default value is org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
    <description>Minimum limit of memory to allocate to each container request at the Resource Manager. NOTES: in MBs.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value>
    <description>Maximum limit of memory to allocate to each container request at the Resource Manager. NOTES: in MBs.</description>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>-1</value>
    <description>How long to keep aggregation logs before deleting them. -1 disables. Be careful, set this too small and you will spam the name node.</description>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>-1</value>
    <description>Time between checks for aggregated log retention. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful, set this too small and you will spam the name node.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
    <description>Resource i.e. available physical memory, in MB, for given NodeManager. The default value is 8192. NOTES: defines total available resources on the NodeManager to be made available to running containers.</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
    <description>Maximum ratio by which virtual memory usage of tasks may exceed physical memory.</description>
  </property>
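Taken together, these values mean each NodeManager offers 8192 MB to containers, the scheduler grants each container between 1024 MB and 8192 MB, and a container may use up to 2.1 times its physical grant in virtual memory. A sketch for checking what each NodeManager actually registered (yarn node -list and -status are stock Hadoop 2.6 commands; the node ID, including its port, must be taken from the -list output rather than the placeholder below):

  yarn node -list                      # shows registered NodeManagers and their states
  yarn node -status DataNode01:45454   # per-node memory capacity; the port varies per installation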
