01 Hadoop-2.7.2 + Zookeeper-3.4.6 Fully Distributed Setup (HDFS and YARN HA)

Published: 2016-09-14 11:52:05


Versions

Host Plan

Directory Plan

Common Scripts and Commands

1. Start the cluster

start-dfs.sh

start-yarn.sh

2. Stop the cluster

stop-yarn.sh

stop-dfs.sh

3. Monitor the cluster

hdfs dfsadmin -report

4. Start/stop individual daemons

hadoop-daemon.sh start|stop namenode|datanode|journalnode

yarn-daemon.sh start|stop resourcemanager|nodemanager

http://blog.chinaunix.net/uid-25723371-id-4943894.html

Environment Preparation

1. Set the IP address (on all 5 machines)

[root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"

BOOTPROTO="static"

DNS1="172.16.101.63"

DNS2="172.16.101.64"

GATEWAY="172.16.101.1"

HWADDR="00:50:56:82:50:1E"

IPADDR="172.16.101.55"

NETMASK="255.255.255.0"

NM_CONTROLLED="yes"

ONBOOT="yes"

TYPE="Ethernet"

UUID="257c075f-6c6a-47ef-a025-e625367cbd9c"

Run: service network restart

Verify: ifconfig

2. Stop the firewall (on all 5 machines)

Run: service iptables stop

Verify: service iptables status

3. Disable the firewall on boot (on all 5 machines)

Run: chkconfig iptables off

Verify: chkconfig --list | grep iptables

4. Set the hostname (on all 5 machines)

Run: (1) hostname sht-sgmhadoopnn-01

(2) vi /etc/sysconfig/network

[root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=sht-sgmhadoopnn-01.telenav.cn

GATEWAY=172.16.101.1

5. Bind IPs to hostnames (on all 5 machines)

[root@sht-sgmhadoopnn-01 ~]# vi /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.101.55 sht-sgmhadoopnn-01.telenav.cn sht-sgmhadoopnn-01

172.16.101.56 sht-sgmhadoopnn-02.telenav.cn sht-sgmhadoopnn-02

172.16.101.58 sht-sgmhadoopdn-01.telenav.cn sht-sgmhadoopdn-01

172.16.101.59 sht-sgmhadoopdn-02.telenav.cn sht-sgmhadoopdn-02

172.16.101.60 sht-sgmhadoopdn-03.telenav.cn sht-sgmhadoopdn-03

Verify: ping sht-sgmhadoopnn-01

6. Set up passwordless SSH among the 5 machines

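The screenshot that originally illustrated this step did not survive extraction. As a rough sketch of the usual procedure (the host list comes from the plan above; the exact commands are this editor's assumption, not the original author's):

```shell
# Sketch: set up passwordless SSH from this node to all 5 machines (as root).
HOSTS="sht-sgmhadoopnn-01 sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03"

# Generate a key pair once, with no passphrase (skip if ~/.ssh/id_rsa exists):
#   ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every host; repeat this on each of the 5 machines.
for h in $HOSTS; do
  echo "ssh-copy-id root@$h"   # remove the echo wrapper to actually run it
done
```

Verify with something like `ssh sht-sgmhadoopnn-02 date` from each machine; it should not prompt for a password.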

7. Install the JDK (on all 5 machines)

(1) Run:

[root@sht-sgmhadoopnn-01 ~]# cd /usr/java

[root@sht-sgmhadoopnn-01 java]# cp /tmp/jdk-7u67-linux-x64.gz ./

[root@sht-sgmhadoopnn-01 java]# tar -xzvf jdk-7u67-linux-x64.gz

(2) vi /etc/profile and append:

export JAVA_HOME=/usr/java/jdk1.7.0_67

export HADOOP_HOME=/hadoop/hadoop-2.7.2

export ZOOKEEPER_HOME=/hadoop/zookeeper

export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH

# Configure HADOOP_HOME and ZOOKEEPER_HOME up front as well

# The lab machines already have jdk1.7.0_67-cloudera installed

(3) Run source /etc/profile

(4) Verify: java -version

8. Create the base directory (on all 5 machines)

mkdir /hadoop

Install Zookeeper

(on sht-sgmhadoopdn-01/02/03)

1. Download and extract zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-01 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-02 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-03 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-01 tmp]# tar -xvf zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-02 tmp]# tar -xvf zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-03 tmp]# tar -xvf zookeeper-3.4.6.tar.gz

[root@sht-sgmhadoopdn-01 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper

[root@sht-sgmhadoopdn-02 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper

[root@sht-sgmhadoopdn-03 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper

2. Edit the configuration

[root@sht-sgmhadoopdn-01 tmp]# cd /hadoop/zookeeper/conf

[root@sht-sgmhadoopdn-01 conf]# cp zoo_sample.cfg zoo.cfg

[root@sht-sgmhadoopdn-01 conf]# vi zoo.cfg

Change dataDir:

dataDir=/hadoop/zookeeper/data

Add the following three lines:

server.1=sht-sgmhadoopdn-01:2888:3888

server.2=sht-sgmhadoopdn-02:2888:3888

server.3=sht-sgmhadoopdn-03:2888:3888

[root@sht-sgmhadoopdn-01 conf]# cd ../

[root@sht-sgmhadoopdn-01 zookeeper]# mkdir data

[root@sht-sgmhadoopdn-01 zookeeper]# touch data/myid

[root@sht-sgmhadoopdn-01 zookeeper]# echo 1 > data/myid

[root@sht-sgmhadoopdn-01 zookeeper]# more data/myid

1

## On sht-sgmhadoopdn-02/03, apply the same configuration; only myid differs:

[root@sht-sgmhadoopdn-02 zookeeper]# echo 2 > data/myid

[root@sht-sgmhadoopdn-03 zookeeper]# echo 3 > data/myid
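The three myid writes can also be scripted from one node. A sketch (hypothetical helper, assuming the same /hadoop/zookeeper layout and root SSH trust on all three nodes); the key point is that server.N in zoo.cfg must match the N written to that host's data/myid:

```shell
# Sketch: write each ensemble member's myid over ssh.
i=1
for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  cmd="ssh root@$h 'mkdir -p /hadoop/zookeeper/data && echo $i > /hadoop/zookeeper/data/myid'"
  echo "$cmd"        # remove this echo wrapper to actually run the command
  i=$((i + 1))
done
```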

Install Hadoop (HDFS HA + YARN HA)

# Steps 3-7 are edited over an SSH session in SecureCRT. If you copy content from Windows into Linux and the Chinese text becomes garbled, see http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html

1. Download and extract hadoop-2.7.2.tar.gz

[root@sht-sgmhadoopnn-01 tmp]# wget https://www.apache.org/dist/hadoop/core/hadoop-2.7.2/hadoop-2.7.2.tar.gz --no-check-certificate

[root@sht-sgmhadoopnn-01 tmp]# tar -xvf hadoop-2.7.2.tar.gz

[root@sht-sgmhadoopnn-01 tmp]# mv /tmp/hadoop-2.7.2 /hadoop/hadoop-2.7.2

[root@sht-sgmhadoopnn-01 tmp]# cd /hadoop/hadoop-2.7.2/etc/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# pwd

/hadoop/hadoop-2.7.2/etc/hadoop

2. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh

export JAVA_HOME="/usr/java/jdk1.7.0_67-cloudera"

3. Edit $HADOOP_HOME/etc/hadoop/core-site.xml

<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
  <property><name>dfs.permissions.superusergroup</name><value>root</value></property>
  <property><name>fs.trash.checkpoint.interval</name><value>0</value></property>
  <property><name>fs.trash.interval</name><value>1440</value></property>
</configuration>

4. Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml

<configuration>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <!-- Local directory where the NameNode stores the name table (fsimage); change as needed -->
  <property><name>dfs.namenode.name.dir</name><value>/hadoop/hadoop-2.7.2/data/dfs/name</value></property>
  <!-- Local directory where the NameNode stores the transaction file (edits); change as needed -->
  <property><name>dfs.namenode.edits.dir</name><value>${dfs.namenode.name.dir}</value></property>
  <!-- Local directory where the DataNode stores blocks; change as needed -->
  <property><name>dfs.datanode.data.dir</name><value>/hadoop/hadoop-2.7.2/data/dfs/data</value></property>
  <property><name>dfs.replication</name><value>3</value></property>
  <property><name>dfs.blocksize</name><value>268435456</value></property>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>sht-sgmhadoopnn-01:8020</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>sht-sgmhadoopnn-02:8020</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn1</name><value>sht-sgmhadoopnn-01:50070</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn2</name><value>sht-sgmhadoopnn-02:50070</value></property>
  <property><name>dfs.journalnode.http-address</name><value>0.0.0.0:8480</value></property>
  <property><name>dfs.journalnode.rpc-address</name><value>0.0.0.0:8485</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://sht-sgmhadoopdn-01:8485;sht-sgmhadoopdn-02:8485;sht-sgmhadoopdn-03:8485/mycluster</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/hadoop/hadoop-2.7.2/data/dfs/jn</value></property>
  <property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>ha.zookeeper.quorum</name><value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value></property>
  <property><name>ha.zookeeper.session-timeout.ms</name><value>2000</value></property>
</configuration>

5. Edit $HADOOP_HOME/etc/hadoop/yarn-env.sh

#Yarn Daemon Options

#export YARN_RESOURCEMANAGER_OPTS

#export YARN_NODEMANAGER_OPTS

#export YARN_PROXYSERVER_OPTS

#export HADOOP_JOB_HISTORYSERVER_OPTS

#Yarn Logs

export YARN_LOG_DIR="/hadoop/hadoop-2.7.2/logs"

6. Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml

[root@sht-sgmhadoopnn-01 hadoop]# cp mapred-site.xml.template mapred-site.xml

[root@sht-sgmhadoopnn-01 hadoop]# vi mapred-site.xml

<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>0.0.0.0:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>0.0.0.0:19888</value></property>
</configuration>

7. Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml

<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <!-- Address where the localizer IPC is -->
  <property><name>yarn.nodemanager.localizer.address</name><value>0.0.0.0:23344</value></property>
  <!-- NM Webapp address -->
  <property><name>yarn.nodemanager.webapp.address</name><value>0.0.0.0:23999</value></property>
  <property><name>yarn.resourcemanager.connect.retry-interval.ms</name><value>2000</value></property>
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.ha.automatic-failover.embedded</name><value>true</value></property>
  <property><name>yarn.resourcemanager.cluster-id</name><value>yarn-cluster</value></property>
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value></property>
  <property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name><value>5000</value></property>
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value></property>
  <property><name>yarn.resourcemanager.zk.state-store.address</name><value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value></property>
  <property><name>yarn.resourcemanager.address.rm1</name><value>sht-sgmhadoopnn-01:23140</value></property>
  <property><name>yarn.resourcemanager.address.rm2</name><value>sht-sgmhadoopnn-02:23140</value></property>
  <property><name>yarn.resourcemanager.scheduler.address.rm1</name><value>sht-sgmhadoopnn-01:23130</value></property>
  <property><name>yarn.resourcemanager.scheduler.address.rm2</name><value>sht-sgmhadoopnn-02:23130</value></property>
  <property><name>yarn.resourcemanager.admin.address.rm1</name><value>sht-sgmhadoopnn-01:23141</value></property>
  <property><name>yarn.resourcemanager.admin.address.rm2</name><value>sht-sgmhadoopnn-02:23141</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address.rm1</name><value>sht-sgmhadoopnn-01:23125</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address.rm2</name><value>sht-sgmhadoopnn-02:23125</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm1</name><value>sht-sgmhadoopnn-01:8088</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm2</name><value>sht-sgmhadoopnn-02:8088</value></property>
  <property><name>yarn.resourcemanager.webapp.https.address.rm1</name><value>sht-sgmhadoopnn-01:23189</value></property>
  <property><name>yarn.resourcemanager.webapp.https.address.rm2</name><value>sht-sgmhadoopnn-02:23189</value></property>
</configuration>

8. Edit slaves

[root@sht-sgmhadoopnn-01 hadoop]# vi slaves

sht-sgmhadoopdn-01

sht-sgmhadoopdn-02

sht-sgmhadoopdn-03

9. Distribute the folder

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopnn-02:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-01:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-02:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-03:/hadoop

Start the Cluster

An alternative startup procedure: http://www.micmiu.com/bigdata/hadoop/hadoop2-cluster-ha-setup/

1. Start zookeeper

Command: ./zkServer.sh start|stop|status

[root@sht-sgmhadoopdn-01 bin]# ./zkServer.sh start

JMX enabled by default

Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@sht-sgmhadoopdn-01 bin]# jps

2073 QuorumPeerMain

2106 Jps

[root@sht-sgmhadoopdn-02 bin]# ./zkServer.sh start

JMX enabled by default

Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@sht-sgmhadoopdn-02 bin]# jps

2073 QuorumPeerMain

2106 Jps

[root@sht-sgmhadoopdn-03 bin]# ./zkServer.sh start

JMX enabled by default

Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@sht-sgmhadoopdn-03 bin]# jps

2073 QuorumPeerMain

2106 Jps

2. Start hadoop (HDFS + YARN)

a. Before formatting, first start the JournalNode process on each journalnode machine

[root@sht-sgmhadoopdn-01 ~]# cd /hadoop/hadoop-2.7.2/sbin

[root@sht-sgmhadoopdn-01 sbin]# hadoop-daemon.sh start journalnode

starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-01.telenav.cn.out

[root@sht-sgmhadoopdn-01 sbin]# jps

16722 JournalNode

16775 Jps

15519 QuorumPeerMain

[root@sht-sgmhadoopdn-02 ~]# cd /hadoop/hadoop-2.7.2/sbin

[root@sht-sgmhadoopdn-02 sbin]# hadoop-daemon.sh start journalnode

starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-02.telenav.cn.out

[root@sht-sgmhadoopdn-02 sbin]# jps

16722 JournalNode

16775 Jps

15519 QuorumPeerMain

[root@sht-sgmhadoopdn-03 ~]# cd /hadoop/hadoop-2.7.2/sbin

[root@sht-sgmhadoopdn-03 sbin]# hadoop-daemon.sh start journalnode

starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out

[root@sht-sgmhadoopdn-03 sbin]# jps

16722 JournalNode

16775 Jps

15519 QuorumPeerMain

b. Format the NameNode

[root@sht-sgmhadoopnn-01 bin]# hadoop namenode -format

16/02/25 14:05:04 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = sht-sgmhadoopnn-01.telenav.cn/172.16.101.55

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 2.7.2

STARTUP_MSG: classpath =

……………..

………………

16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033

16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0

16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000

16/02/25 14:05:07 INFO metrics.TopMetrics: NNTopconf: dfs.namenode.top.window.num.buckets = 10

16/02/25 14:05:07 INFO metrics.TopMetrics: NNTopconf: dfs.namenode.top.num.users = 10

16/02/25 14:05:07 INFO metrics.TopMetrics: NNTopconf: dfs.namenode.top.windows.minutes = 1,5,25

16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled

16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis

16/02/25 14:05:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache

16/02/25 14:05:07 INFO util.GSet: VM type = 64-bit

16/02/25 14:05:07 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB

16/02/25 14:05:07 INFO util.GSet: capacity = 2^15 = 32768 entries

16/02/25 14:05:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1182930464-172.16.101.55-1456380308394

16/02/25 14:05:08 INFO common.Storage: Storage directory /hadoop/hadoop-2.7.2/data/dfs/name has been successfully formatted.

16/02/25 14:05:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid>= 0

16/02/25 14:05:08 INFO util.ExitUtil: Exiting with status 0

16/02/25 14:05:08 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at sht-sgmhadoopnn-01.telenav.cn/172.16.101.55

************************************************************/

c. Sync the NameNode metadata

Copy the metadata from sht-sgmhadoopnn-01 to sht-sgmhadoopnn-02.

This mainly covers dfs.namenode.name.dir and dfs.namenode.edits.dir; also make sure the shared storage directory (dfs.namenode.shared.edits.dir) contains all of the NameNode's metadata.

[root@sht-sgmhadoopnn-01 hadoop-2.7.2]# pwd

/hadoop/hadoop-2.7.2

[root@sht-sgmhadoopnn-01 hadoop-2.7.2]# scp -r data/ root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2

seen_txid 100% 2 0.0KB/s 00:00

fsimage_0000000000000000000 100% 351 0.3KB/s 00:00

fsimage_0000000000000000000.md5 100% 62 0.1KB/s 00:00

VERSION 100% 205 0.2KB/s 00:00
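The raw scp copy above works; note that Hadoop's HA tooling also ships a built-in way to seed the standby NameNode's metadata. A sketch (not what the author ran; it rewrites the standby's name directory from the active NameNode and the JournalNodes):

```shell
# Run on the standby NameNode (sht-sgmhadoopnn-02), after the active NameNode
# has been formatted and while the JournalNodes are up:
hdfs namenode -bootstrapStandby
```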

d. Initialize the ZKFC

[root@sht-sgmhadoopnn-01 bin]# hdfs zkfc -formatZK

……………..

……………..

16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.home=/root

16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.dir=/hadoop/hadoop-2.7.2/bin

16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=2000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@5f4298a5

16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Opening socket connection to server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181. Will not attempt to authenticate using SASL (unknown error)

16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, initiating session

16/02/25 14:14:42 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, sessionid = 0x15316c965750000, negotiated timeout = 4000

16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Session connected.

16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.

16/02/25 14:14:42 INFO zookeeper.ClientCnxn: EventThread shut down

16/02/25 14:14:42 INFO zookeeper.ZooKeeper: Session: 0x15316c965750000 closed

e. Start HDFS

To start the cluster, run start-dfs.sh on sht-sgmhadoopnn-01.

To stop the cluster, run stop-dfs.sh on sht-sgmhadoopnn-01.

##### Cluster start ############

[root@sht-sgmhadoopnn-01 sbin]# start-dfs.sh

16/02/25 14:21:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]

sht-sgmhadoopnn-01: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-01.telenav.cn.out

sht-sgmhadoopnn-02: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-02.telenav.cn.out

sht-sgmhadoopdn-01: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-01.telenav.cn.out

sht-sgmhadoopdn-02: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-02.telenav.cn.out

sht-sgmhadoopdn-03: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-03.telenav.cn.out

Starting journal nodes [sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03]

sht-sgmhadoopdn-01: journalnode running as process 6348. Stop it first.

sht-sgmhadoopdn-03: journalnode running as process 16722. Stop it first.

sht-sgmhadoopdn-02: journalnode running as process 7197. Stop it first.

16/02/25 14:21:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting ZK Failover Controllers on NN hosts [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]

sht-sgmhadoopnn-01: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-01.telenav.cn.out

sht-sgmhadoopnn-02: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-02.telenav.cn.out

You have mail in /var/spool/mail/root

#### Per-daemon start ###########

NameNode(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):

hadoop-daemon.sh start namenode

DataNode(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):

hadoop-daemon.sh start datanode

JournalNode(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):

hadoop-daemon.sh start journalnode

ZKFC(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):

hadoop-daemon.sh start zkfc

f. Verify the namenode, datanode, and zkfc

1) Processes

[root@sht-sgmhadoopnn-01 sbin]# jps

12712 Jps

12593 DFSZKFailoverController

12278 NameNode

[root@sht-sgmhadoopnn-02 ~]# jps

29714 NameNode

29849 DFSZKFailoverController

30229 Jps

[root@sht-sgmhadoopdn-01 ~]# jps

6348 JournalNode

8775 Jps

559 QuorumPeerMain

8509 DataNode

[root@sht-sgmhadoopdn-02 ~]# jps

9430 Jps

9160 DataNode

7197 JournalNode

2073 QuorumPeerMain

[root@sht-sgmhadoopdn-03 ~]# jps

16722 JournalNode

17369 Jps

15519 QuorumPeerMain

17214 DataNode

2) Web UIs

sht-sgmhadoopnn-01:

http://172.16.101.55:50070/

sht-sgmhadoopnn-02:

http://172.16.101.56:50070/
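Besides the two web UIs, the HA state can be checked from the command line with the stock hdfs haadmin tool (nn1/nn2 are the NameNode IDs defined in hdfs-site.xml earlier):

```shell
# Ask each NameNode for its current HA role; one should report "active",
# the other "standby":
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```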

g. Start the YARN computation framework

##### Cluster start ############

1) Start Yarn on sht-sgmhadoopnn-01; the scripts live in $HADOOP_HOME/sbin

[root@sht-sgmhadoopnn-01 sbin]# start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-01.telenav.cn.out

sht-sgmhadoopdn-03: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-03.telenav.cn.out

sht-sgmhadoopdn-02: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-02.telenav.cn.out

sht-sgmhadoopdn-01: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-01.telenav.cn.out

2) Start the standby ResourceManager on sht-sgmhadoopnn-02

[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh start resourcemanager

starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-02.telenav.cn.out

#### Per-daemon start ###########

1) ResourceManager(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02)

yarn-daemon.sh start resourcemanager

2) NodeManager(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03)

yarn-daemon.sh start nodemanager

###### Stop #############

[root@sht-sgmhadoopnn-01 sbin]# stop-yarn.sh

# stops the resourcemanager process on the namenode host and the nodemanager processes on the datanode hosts

[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh stop resourcemanager

h. Verify the resourcemanager and nodemanager

1) Processes

[root@sht-sgmhadoopnn-01 sbin]# jps

13611 Jps

12593 DFSZKFailoverController

12278 NameNode

13384 ResourceManager

[root@sht-sgmhadoopnn-02 sbin]# jps

32265 ResourceManager

32304 Jps

29714 NameNode

29849 DFSZKFailoverController

[root@sht-sgmhadoopdn-01 ~]# jps

6348 JournalNode

559 QuorumPeerMain

8509 DataNode

10286 NodeManager

10423 Jps

[root@sht-sgmhadoopdn-02 ~]# jps

9160 DataNode

10909 NodeManager

11937 Jps

7197 JournalNode

2073 QuorumPeerMain

[root@sht-sgmhadoopdn-03 ~]# jps

18031 Jps

16722 JournalNode

17710 NodeManager

15519 QuorumPeerMain

17214 DataNode

2) Web UIs

ResourceManager (Active): http://172.16.101.55:8088

ResourceManager (Standby): http://172.16.101.56:8088/cluster/cluster
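As with HDFS, the ResourceManager HA roles can also be confirmed from the command line with the stock yarn rmadmin tool (rm1/rm2 are the IDs defined in yarn-site.xml earlier):

```shell
# Ask each ResourceManager for its current HA role ("active" or "standby"):
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```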

Monitor the Cluster

[root@sht-sgmhadoopnn-01 ~]# hdfs dfsadmin -report

Attachments and References

#http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.5.2.tar.gz

#http://archive-primary.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.5.2.tar.gz

hadoop: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

zookeeper: http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

References:

Hadoop-2.3.0-cdh5.0.1 fully distributed setup (NameNode, ResourceManager HA):

http://blog.itpub.net/30089851/viewspace-1987620/

How to fix this kind of error: The string "--" is not permitted within comments:

http://blog.csdn.net/free4294/article/details/38681095

Fixing garbled Chinese in a Linux terminal connected via SecureCRT:

http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html
