Hadoop Day01: Installing and Configuring Hadoop

1 Case 1: Install Hadoop
1.1 Task
This case requires installing Hadoop in standalone mode:

Install Hadoop in standalone mode
Install the Java environment
Set environment variables, then start and run it
1.2 Steps
Follow the steps below to complete this case.

Step 1: Prepare the environment

1) Set the hostname to nn01 and the IP address to 192.168.1.21, and configure the yum repository (system repo)

Note: these were all covered in earlier cases and are not repeated here; refer back to those cases if needed

2) Install the Java environment

[root@nn01 ~]# yum -y install java-1.8.0-openjdk-devel
[root@nn01 ~]# java -version
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (build 1.8.0_131-b12)
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode)
[root@nn01 ~]# jps
1235 Jps
3) Install Hadoop

[root@nn01 ~]# tar -xf hadoop-2.7.6.tar.gz
[root@nn01 ~]# mv hadoop-2.7.6 /usr/local/hadoop
[root@nn01 ~]# cd /usr/local/hadoop/
[root@nn01 hadoop]# ls
bin include libexec NOTICE.txt sbin
etc lib LICENSE.txt README.txt share
[root@nn01 hadoop]# ./bin/hadoop //fails: JAVA_HOME cannot be found
Error: JAVA_HOME is not set and could not be found.
[root@nn01 hadoop]#
4) Fix the error

[root@nn01 hadoop]# rpm -ql java-1.8.0-openjdk
[root@nn01 hadoop]# cd ./etc/hadoop/
[root@nn01 hadoop]# vim hadoop-env.sh
25 export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64/jre"
33 export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
[root@nn01 ~]# cd /usr/local/hadoop/
[root@nn01 hadoop]# ./bin/hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
CLASSNAME run the class named CLASSNAME
or
where COMMAND is one of:
fs run a generic filesystem user client
version print the version
jar <jar> run a jar file
note: please use "yarn jar" to launch
YARN applications, not this command.
checknative [-a|-h] check native hadoop and compression libraries availability
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath prints the class path needed to get the
Hadoop jar and the required libraries
credential interact with credential providers
daemonlog get/set the log level for each daemon
trace view and modify Hadoop tracing settings
Most commands print help when invoked w/o parameters.
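Rather than copying the long JRE path by hand, the path can be derived from the java binary itself, since /usr/bin/java is a symlink chain managed by alternatives. A minimal sketch, assuming OpenJDK was installed via yum as above:

# Resolve /usr/bin/java -> /etc/alternatives/java -> the real JRE binary,
# then strip the trailing /bin/java to get a value suitable for JAVA_HOME.
readlink -f /usr/bin/java | sed 's:/bin/java$::'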
[root@nn01 hadoop]# mkdir /usr/local/hadoop/aa
[root@nn01 hadoop]# ls
bin etc include lib libexec LICENSE.txt NOTICE.txt aa README.txt sbin share
[root@nn01 hadoop]# cp *.txt /usr/local/hadoop/aa
[root@nn01 hadoop]# ./bin/hadoop jar \
share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount aa bb //wordcount is the job to run: count the words under the aa directory and write the result to the bb directory (bb must not already exist; if it does, the job fails, which guards against overwriting data)
[root@nn01 hadoop]# cat bb/part-r-00000 //view the result
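Because the job refuses to start when the output directory already exists, re-running the example means removing bb first. A small wrapper sketch (the aa/bb paths are just this lab's choices):

#!/bin/bash
# Re-run the wordcount example, clearing the previous output first,
# since MapReduce will not overwrite an existing output directory.
cd /usr/local/hadoop || exit 1
rm -rf bb
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar \
    wordcount aa bb
cat bb/part-r-00000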

The full transcript follows:
1. First get the virtual machine ready: configure the IP, hostname, and yum repos (to keep it simple I use the stock CentOS repos plus a local yum repo)
[root@nn01 ~]# yum repolist
Loaded plugins: fastestmirror
10local_rhscon-2-main-rpms | 2.9 kB 00:00:00
1local_devtools-rpms | 2.9 kB 00:00:00
2local_optools-rpms | 2.9 kB 00:00:00
3local_rpms | 2.9 kB 00:00:00
4local_tools-rpms | 2.9 kB 00:00:00
5local_mon-rpms | 2.9 kB 00:00:00
6local_osd-rpms | 2.9 kB 00:00:00
7local_rhceph-2-tools-rpms | 2.9 kB 00:00:00
8local_agent-rpms | 2.9 kB 00:00:00
9local_installer-rpms | 2.9 kB 00:00:00
base | 3.6 kB 00:00:00
dvd | 3.6 kB 00:00:00
extras | 3.4 kB 00:00:00
local_extras | 2.9 kB 00:00:00
local_repo | 3.6 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/19): 10local_rhscon-2-main-rpms/primary_db | 21 kB 00:00:00
(2/19): 1local_devtools-rpms/primary_db | 3.7 kB 00:00:00
(3/19): 4local_tools-rpms/primary_db | 35 kB 00:00:00
(4/19): 2local_optools-rpms/primary_db | 41 kB 00:00:00
(5/19): 5local_mon-rpms/primary_db | 37 kB 00:00:00
(6/19): 6local_osd-rpms/primary_db | 29 kB 00:00:00
(7/19): 7local_rhceph-2-tools-rpms/primary_db | 30 kB 00:00:00
(8/19): 8local_agent-rpms/primary_db | 13 kB 00:00:00
(9/19): 3local_rpms/primary_db | 318 kB 00:00:00
(10/19): 9local_installer-rpms/primary_db | 44 kB 00:00:00
(11/19): dvd/group_gz | 156 kB 00:00:00
(12/19): local_extras/primary_db | 43 kB 00:00:00
(13/19): local_repo/group_gz | 156 kB 00:00:00
(14/19): local_repo/primary_db | 3.1 MB 00:00:00
(15/19): dvd/primary_db | 3.1 MB 00:00:00
base/7/x86_64/primary_db FAILED
http://mirrors.cqu.edu.cn/CentOS/7.5.1804/os/x86_64/repodata/03d0a660eb33174331aee3e077e11d4c017412d761b7f2eaa8555e7898e701e0-primary.sqlite.bz2: [Errno 14] curl#56 - "Recv failure: Connection reset by peer"
Trying other mirror.
(16/19): base/7/x86_64/group_gz | 166 kB 00:00:00
(17/19): base/7/x86_64/primary_db | 5.9 MB 00:00:01
(18/19): extras/7/x86_64/primary_db | 205 kB 00:00:02
(19/19): updates/7/x86_64/primary_db | 6.0 MB 00:00:03
Determining fastest mirrors
* base: mirrors.nwsuaf.edu.cn
* extras: mirrors.163.com
* updates: mirrors.nwsuaf.edu.cn
repo id repo name status
10local_rhscon-2-main-rpms rhscon-2-main-rpms 29
1local_devtools-rpms devtools-rpms 3
2local_optools-rpms optools-rpms 99
3local_rpms rpms 680
4local_tools-rpms tools-rpms 84
5local_mon-rpms mon-rpms 41
6local_osd-rpms osd-rpms 28
7local_rhceph-2-tools-rpms rhceph-2-tools-rpms 35
8local_agent-rpms agent-rpms 19
9local_installer-rpms installer-rpms 46
base/7/x86_64 CentOS-7 – Base 9,911
dvd dvd 3,894
extras/7/x86_64 CentOS-7 – Extras 434
local_extras extras 76
local_repo CentOS-7 – Base 3,894
updates/7/x86_64 CentOS-7 – Updates 1,614
repolist: 20,887

[root@nn01 hadoop]# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.21 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::5054:ff:fe2a:6ecf prefixlen 64 scopeid 0x20<link>
ether 52:54:00:2a:6e:cf txqueuelen 1000 (Ethernet)
RX packets 46994 bytes 365892055 (348.9 MiB)
RX errors 0 dropped 1026 overruns 0 frame 0
TX packets 42421 bytes 3085934 (2.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@nn01 hadoop]# hostname
nn01
2. Copy the prepared Hadoop bundle up to the virtual machine
[root@nn01 ~]# ls /root/ /root/hadoop/
/root/:
hadoop Hadoop.zip RPM-GPG-KEY-CentOS-7

/root/hadoop/:
hadoop-2.7.6.tar.gz kafka_2.10-0.10.2.1.tgz zookeeper-3.4.10.tar.gz

3. Install the Java environment
[root@nn01 ~]# yum install -y java-1.8.0-openjdk-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.nwsuaf.edu.cn
* extras: mirrors.163.com
* updates: mirrors.nwsuaf.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package java-1.8.0-openjdk-devel.x86_64 1:1.8.0.191.b12-0.el7_5 will be installed
--> Processing Dependency: java-1.8.0-openjdk(x86-64) = 1:1.8.0.191.b12-0.el7_5 for package: 1:java-1.8.0-openjdk-devel-1.8.0.191.b12-0.el7_5.x86_64
#######################################

Installed:
java-1.8.0-openjdk-devel.x86_64 1:1.8.0.191.b12-0.el7_5

Dependency Installed:
copy-jdk-configs.noarch 0:3.3-10.el7_5
fontconfig.x86_64 0:2.10.95-11.el7
fontpackages-filesystem.noarch 0:1.44-8.el7
giflib.x86_64 0:4.1.6-9.el7
java-1.8.0-openjdk.x86_64 1:1.8.0.191.b12-0.el7_5
java-1.8.0-openjdk-headless.x86_64 1:1.8.0.191.b12-0.el7_5
javapackages-tools.noarch 0:3.4.1-11.el7
libICE.x86_64 0:1.0.9-9.el7
libSM.x86_64 0:1.2.2-2.el7
libX11.x86_64 0:1.6.5-1.el7
libX11-common.noarch 0:1.6.5-1.el7
libXau.x86_64 0:1.0.8-2.1.el7
libXcomposite.x86_64 0:0.4.4-4.1.el7
libXext.x86_64 0:1.3.3-3.el7
libXfont.x86_64 0:1.5.2-1.el7
libXi.x86_64 0:1.7.9-1.el7
libXrender.x86_64 0:0.9.10-1.el7
libXtst.x86_64 0:1.2.3-1.el7
libfontenc.x86_64 0:1.1.3-3.el7
libjpeg-turbo.x86_64 0:1.2.90-5.el7
libpng.x86_64 2:1.5.13-7.el7_2
libxcb.x86_64 0:1.12-1.el7
libxslt.x86_64 0:1.1.28-5.el7
lksctp-tools.x86_64 0:1.0.17-2.el7
python-javapackages.noarch 0:3.4.1-11.el7
python-lxml.x86_64 0:3.2.1-5.el7ost
stix-fonts.noarch 0:1.1.0-5.el7
ttmkfdir.x86_64 0:3.0.9-42.el7
tzdata-java.noarch 0:2018f-2.el7
xorg-x11-font-utils.x86_64 1:7.5-20.el7
xorg-x11-fonts-Type1.noarch 0:7.5-9.el7

Dependency Updated:
nspr.x86_64 0:4.19.0-1.el7_5 nss.x86_64 0:3.36.0-7.el7_5
nss-softokn.x86_64 0:3.36.0-5.el7_5 nss-softokn-freebl.x86_64 0:3.36.0-5.el7_5
nss-sysinit.x86_64 0:3.36.0-7.el7_5 nss-tools.x86_64 0:3.36.0-7.el7_5
nss-util.x86_64 0:3.36.0-1.el7_5

Complete!

Verify the Java environment
[root@nn01 ~]# java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
[root@nn01 ~]# jps
988 Jps

4. Install Hadoop
[root@nn01 ~]# unzip Hadoop.zip
Archive: Hadoop.zip
inflating: hadoop/hadoop-2.7.6.tar.gz
extracting: hadoop/kafka_2.10-0.10.2.1.tgz
inflating: hadoop/zookeeper-3.4.10.tar.gz
[root@nn01 ~]# ll
total 283216
drwxr-xr-x 2 root root 95 Nov 30 13:23 hadoop
-rw-r--r-- 1 root root 290007891 Nov 30 13:22 Hadoop.zip
-rw-r--r--. 1 root root 1690 Dec 10 2015 RPM-GPG-KEY-CentOS-7
[root@nn01 ~]# cd hadoop/
[root@nn01 hadoop]# ll
total 283416
-rw-r--r-- 1 root root 216745683 May 29 2018 hadoop-2.7.6.tar.gz
-rw-r--r-- 1 root root 38424081 Apr 27 2017 kafka_2.10-0.10.2.1.tgz
-rw-r--r-- 1 root root 35042811 Apr 1 2017 zookeeper-3.4.10.tar.gz
[root@nn01 hadoop]# tar -xf hadoop-2.7.6.tar.gz
[root@nn01 hadoop]# ll
total 283416
drwxr-xr-x 9 20415 101 149 Apr 18 2018 hadoop-2.7.6
-rw-r--r-- 1 root root 216745683 May 29 2018 hadoop-2.7.6.tar.gz
-rw-r--r-- 1 root root 38424081 Apr 27 2017 kafka_2.10-0.10.2.1.tgz
-rw-r--r-- 1 root root 35042811 Apr 1 2017 zookeeper-3.4.10.tar.gz
[root@nn01 hadoop]# mv hadoop-2.7.6 /usr/local/hadoop
[root@nn01 hadoop]# cd /usr/local/hadoop/
[root@nn01 hadoop]# ll
total 112
drwxr-xr-x 2 20415 101 194 Apr 18 2018 bin
drwxr-xr-x 3 20415 101 20 Apr 18 2018 etc
drwxr-xr-x 2 20415 101 106 Apr 18 2018 include
drwxr-xr-x 3 20415 101 20 Apr 18 2018 lib
drwxr-xr-x 2 20415 101 239 Apr 18 2018 libexec
-rw-r--r-- 1 20415 101 86424 Apr 18 2018 LICENSE.txt
-rw-r--r-- 1 20415 101 14978 Apr 18 2018 NOTICE.txt
-rw-r--r-- 1 20415 101 1366 Apr 18 2018 README.txt
drwxr-xr-x 2 20415 101 4096 Apr 18 2018 sbin
drwxr-xr-x 4 20415 101 31 Apr 18 2018 share

5. Running it directly fails, because the Java path has not been set yet
[root@nn01 hadoop]# ./bin/hadoop
Error: JAVA_HOME is not set and could not be found.
Check where Java was installed
[root@nn01 hadoop]# rpm -ql java-1.8.0-openjdk
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre/bin/policytool
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre/lib/amd64/libawt_xawt.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre/lib/amd64/libjawt.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre/lib/amd64/libjsoundalsa.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre/lib/amd64/libsplashscreen.so
/usr/share/applications/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64-policytool.desktop
/usr/share/icons/hicolor/16x16/apps/java-1.8.0.png
/usr/share/icons/hicolor/24x24/apps/java-1.8.0.png
/usr/share/icons/hicolor/32x32/apps/java-1.8.0.png
/usr/share/icons/hicolor/48x48/apps/java-1.8.0.png
Then set the Java path accordingly. It must match the path that rpm -ql printed above (the 1.8.0.191 build installed here, not the 1.8.0.131 build from the earlier run):
[root@nn01 hadoop]# vim etc/hadoop/hadoop-env.sh
25 export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre"
33 export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
Re-run it and it now works
[root@nn01 hadoop]# ./bin/hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
CLASSNAME run the class named CLASSNAME
or
where COMMAND is one of:
fs run a generic filesystem user client
version print the version
jar <jar> run a jar file
note: please use "yarn jar" to launch
YARN applications, not this command.
checknative [-a|-h] check native hadoop and compression libraries availability
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath prints the class path needed to get the
Hadoop jar and the required libraries
credential interact with credential providers
daemonlog get/set the log level for each daemon
trace view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.
Copy some files over to test with
[root@nn01 hadoop]# mkdir /usr/local/hadoop/aa
[root@nn01 hadoop]# ll
total 112
drwxr-xr-x 2 root root 6 Nov 30 13:26 aa
drwxr-xr-x 2 20415 101 194 Apr 18 2018 bin
drwxr-xr-x 3 20415 101 20 Apr 18 2018 etc
drwxr-xr-x 2 20415 101 106 Apr 18 2018 include
drwxr-xr-x 3 20415 101 20 Apr 18 2018 lib
drwxr-xr-x 2 20415 101 239 Apr 18 2018 libexec
-rw-r--r-- 1 20415 101 86424 Apr 18 2018 LICENSE.txt
-rw-r--r-- 1 20415 101 14978 Apr 18 2018 NOTICE.txt
-rw-r--r-- 1 20415 101 1366 Apr 18 2018 README.txt
drwxr-xr-x 2 20415 101 4096 Apr 18 2018 sbin
drwxr-xr-x 4 20415 101 31 Apr 18 2018 share
[root@nn01 hadoop]# cp *.txt ./aa/
[root@nn01 hadoop]# ll aa/
total 108
-rw-r--r-- 1 root root 86424 Nov 30 13:26 LICENSE.txt
-rw-r--r-- 1 root root 14978 Nov 30 13:26 NOTICE.txt
-rw-r--r-- 1 root root 1366 Nov 30 13:26 README.txt
[root@nn01 hadoop]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar wordcount aa bb
18/11/30 13:27:13 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
##################################################
The job is done; check the word counts
[root@nn01 hadoop]# cat bb/part-r-00000
""AS 2
"AS 17
"COPYRIGHTS 1
"Contribution" 2
"Contributor" 2
"Derivative 1
"GCC 1
"Legal 1
"License" 1
"License"); 2
"Licensed 1
#####################################################

2 Case 2: Install and Configure Hadoop
2.1 Task
This case requires:

Prepare three more virtual machines and install Hadoop on them
Make sure all nodes can ping each other, and set up SSH trust
Verify the nodes
2.2 Plan
Four virtual machines are needed in total; one was prepared earlier, so only three new ones are required. Install Hadoop, make sure all nodes can ping each other, and set up SSH trust.
Host                Role                          Software
192.168.1.21 nn01   NameNode / SecondaryNameNode  HDFS
192.168.1.22 node1  DataNode                      HDFS
192.168.1.23 node2  DataNode                      HDFS
192.168.1.24 node3  DataNode                      HDFS

2.3 Steps
Follow the steps below to complete this case.

Step 1: Prepare the environment

1) Set the hostnames of the three new machines to node1, node2, and node3, configure their IP addresses (as listed in the plan above), and configure the yum repository (system repo)

2) Edit /etc/hosts (same on all four hosts; nn01 shown as the example)

[root@nn01 ~]# vim /etc/hosts
192.168.1.21 nn01
192.168.1.22 node1
192.168.1.23 node2
192.168.1.24 node3
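Since the file is identical on all four hosts, one option is to edit it once on nn01 and push it out. A sketch; until SSH trust is configured in step 4, each scp will prompt for the root password:

# Hypothetical convenience loop: copy nn01's /etc/hosts to the other nodes
# (nn01's own /etc/hosts already resolves the node names).
for h in node1 node2 node3; do
    scp /etc/hosts root@$h:/etc/hosts
done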
3) Install the Java environment on node1, node2, and node3 (node1 shown as the example)

[root@node1 ~]# yum -y install java-1.8.0-openjdk-devel
4) Set up SSH trust

[root@nn01 ~]# vim /etc/ssh/ssh_config //so the first login does not prompt for yes
Host *
GSSAPIAuthentication yes
StrictHostKeyChecking no
[root@nn01 .ssh]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Ucl8OCezw92aArY5+zPtOrJ9ol1ojRE3EAZ1mgndYQM root@nn01
The key's randomart image is:
+---[RSA 2048]----+
| o*E*=. |
| +XB+. |
| ..=Oo. |
| o.+o... |
| .S+.. o |
| + .=o |
| o+oo |
| o+=.o |
| o==O. |
+----[SHA256]-----+
[root@nn01 .ssh]# for i in 21 22 23 24 ; do ssh-copy-id 192.168.1.$i; done
//push the public key to nn01, node1, node2, and node3
5) Test the trust relationship

[root@nn01 .ssh]# ssh node1
Last login: Fri Sep 7 16:52:00 2018 from 192.168.1.21
[root@node1 ~]# exit
logout
Connection to node1 closed.
[root@nn01 .ssh]# ssh node2
Last login: Fri Sep 7 16:52:05 2018 from 192.168.1.21
[root@node2 ~]# exit
logout
Connection to node2 closed.
[root@nn01 .ssh]# ssh node3
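The same test can be run as one non-interactive loop; with BatchMode set, ssh fails instead of prompting, so any host where trust is still missing shows up as an error (a sketch):

# Each node should print its hostname with no password prompt.
for h in nn01 node1 node2 node3; do
    ssh -o BatchMode=yes $h hostname
done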

The full transcript follows:
node1 is shown as the example; the others are the same
[root@node1 ~]# hostname
node1
[root@node1 ~]# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.22 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::5054:ff:feb3:4f9 prefixlen 64 scopeid 0x20<link>
ether 52:54:00:b3:04:f9 txqueuelen 1000 (Ethernet)
RX packets 208 bytes 18908 (18.4 KiB)
RX errors 0 dropped 36 overruns 0 frame 0
TX packets 90 bytes 10323 (10.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@node1 ~]# ll /etc/yum.repos.d/
total 36
drwxr-xr-x. 2 root root 187 Nov 25 17:13 bak
-rw-r--r-- 1 root root 1664 Nov 25 17:32 CentOS-Base.repo
-rw-r--r-- 1 root root 1309 Nov 25 17:32 CentOS-CR.repo
-rw-r--r-- 1 root root 649 Nov 25 17:32 CentOS-Debuginfo.repo
-rw-r--r-- 1 root root 314 Nov 25 17:32 CentOS-fasttrack.repo
-rw-r--r-- 1 root root 630 Nov 25 17:32 CentOS-Media.repo
-rw-r--r-- 1 root root 1331 Nov 25 17:32 CentOS-Sources.repo
-rw-r--r-- 1 root root 3830 Nov 25 17:32 CentOS-Vault.repo
-rw-r--r--. 1 root root 71 Nov 25 17:16 dvd.repo
-rw-r--r-- 1 root root 1524 Nov 30 14:21 local.repo

[root@node1 ~]# yum repolist
Loaded plugins: fastestmirror
10local_rhscon-2-main-rpms | 2.9 kB 00:00:00
1local_devtools-rpms | 2.9 kB 00:00:00
2local_optools-rpms | 2.9 kB 00:00:00
3local_rpms | 2.9 kB 00:00:00
4local_tools-rpms | 2.9 kB 00:00:00
5local_mon-rpms | 2.9 kB 00:00:00
6local_osd-rpms | 2.9 kB 00:00:00
7local_rhceph-2-tools-rpms | 2.9 kB 00:00:00
8local_agent-rpms | 2.9 kB 00:00:00
9local_installer-rpms | 2.9 kB 00:00:00
base | 3.6 kB 00:00:00
dvd | 3.6 kB 00:00:00
extras | 3.4 kB 00:00:00
local_extras | 2.9 kB 00:00:00
local_repo | 3.6 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/19): 10local_rhscon-2-main-rpms/primary_db | 21 kB 00:00:00
(2/19): 1local_devtools-rpms/primary_db | 3.7 kB 00:00:00
(3/19): 2local_optools-rpms/primary_db | 41 kB 00:00:00
(4/19): 5local_mon-rpms/primary_db | 37 kB 00:00:00
(5/19): 3local_rpms/primary_db | 318 kB 00:00:00
(6/19): 6local_osd-rpms/primary_db | 29 kB 00:00:00
(7/19): 4local_tools-rpms/primary_db | 35 kB 00:00:00
(8/19): 8local_agent-rpms/primary_db | 13 kB 00:00:00
(9/19): dvd/group_gz | 156 kB 00:00:00
(10/19): 7local_rhceph-2-tools-rpms/primary_db | 30 kB 00:00:00
(11/19): 9local_installer-rpms/primary_db | 44 kB 00:00:00
(12/19): dvd/primary_db | 3.1 MB 00:00:00
(13/19): local_extras/primary_db | 43 kB 00:00:00
(14/19): local_repo/group_gz | 156 kB 00:00:00
(15/19): local_repo/primary_db | 3.1 MB 00:00:00
(16/19): base/7/x86_64/group_gz | 166 kB 00:00:00
(17/19): extras/7/x86_64/primary_db | 205 kB 00:00:01
(18/19): updates/7/x86_64/primary_db | 6.0 MB 00:00:02
base/7/x86_64/primary_db FAILED
http://ftp.sjtu.edu.cn/centos/7.5.1804/os/x86_64/repodata/03d0a660eb33174331aee3e077e11d4c017412d761b7f2eaa8555e7898e701e0-primary.sqlite.bz2: [Errno 12] Timeout on http://ftp.sjtu.edu.cn/centos/7.5.1804/os/x86_64/repodata/03d0a660eb33174331aee3e077e11d4c017412d761b7f2eaa8555e7898e701e0-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
(19/19): base/7/x86_64/primary_db | 5.9 MB 00:00:09
Determining fastest mirrors
* base: mirror.lzu.edu.cn
* extras: mirrors.cn99.com
* updates: mirrors.163.com
repo id repo name status
10local_rhscon-2-main-rpms rhscon-2-main-rpms 29
1local_devtools-rpms devtools-rpms 3
2local_optools-rpms optools-rpms 99
3local_rpms rpms 680
4local_tools-rpms tools-rpms 84
5local_mon-rpms mon-rpms 41
6local_osd-rpms osd-rpms 28
7local_rhceph-2-tools-rpms rhceph-2-tools-rpms 35
8local_agent-rpms agent-rpms 19
9local_installer-rpms installer-rpms 46
base/7/x86_64 CentOS-7 – Base 9,911
dvd dvd 3,894
extras/7/x86_64 CentOS-7 – Extras 434
local_extras extras 76
local_repo CentOS-7 – Base 3,894
updates/7/x86_64 CentOS-7 – Updates 1,614
repolist: 20,887

2. Write the local hosts file on all 4 machines
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.21 nn01
192.168.1.22 node1
192.168.1.23 node2
192.168.1.24 node3

3. Install the Java environment
[root@node1 ~]# yum install -y java-1.8.0-openjdk-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.lzu.edu.cn
* extras: mirrors.cn99.com
* updates: mirrors.163.com
Resolving Dependencies
--> Running transaction check
---> Package java-1.8.0-openjdk-devel.x86_64 1:1.8.0.191.b12-0.el7_5 will be installed
--> Processing Dependency: java-1.8.0-openjdk(x86-64) = 1:1.8.0.191.b12-0.el7_5 for package: 1:java-1.8.0-openjdk-devel-1.8.0.191.b12-0.el7_5.x86_64
--> Processing Dependency: libjvm.so()(64bit) for package: 1:java-1.8.0-openjdk-devel-1.8.0.191.b12-0.el7_5.x86_64
--> Processing Dependency: libjava.so()(64bit) for package: 1:java-1.8.0-openjdk-devel-1.8.0.191.b12-0.el7_5.x86_64
--> Processing Dependency: libX11.so.6()(64bit) for package: 1:java-1.8.0-openjdk-devel-1.8.0.191.b12-0.el7_5.x86_64
##################################
Installed:
java-1.8.0-openjdk-devel.x86_64 1:1.8.0.191.b12-0.el7_5

4. Configure passwordless SSH login
[root@node1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:eLC1ivkxGX3LviFW3kHvs+Ds/v3DiSRhfQzNZ0SZoYE root@node1
The key's randomart image is:
+---[RSA 2048]----+
| ..o+*|
| E .o=o|
| . . o.o..|
| * . + o o |
| + S + o o |
| o = = + + |
| o = o = = = .|
| . + o + o B |
| . o+=.o =|
+----[SHA256]-----+
[root@node1 ~]#
[root@node1 ~]# for i in {nn01,node1,node2,node3} ; do ssh-copy-id $i ;done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'nn01 (192.168.1.21)' can't be established.
ECDSA key fingerprint is SHA256:OGu5BChujFALtDvZ860w673bww507mEzfcTAP5CHXpA.
ECDSA key fingerprint is MD5:91:52:6e:2a:24:f3:94:1b:fc:4a:41:71:b6:c1:e2:b6.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@nn01's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'nn01'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.1.22)' can't be established.
ECDSA key fingerprint is SHA256:Nw0LMMvdUx1oOws/2DI6D1PaZrAotg+HnUiO7sBzAz4.
ECDSA key fingerprint is MD5:21:59:ad:29:77:65:11:ff:e0:d6:4a:5e:ab:4f:a7:01.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.1.23)' can't be established.
ECDSA key fingerprint is SHA256:3PyPHaUstzjL2HpmZ+UllCW19ZaeBYJ9bn9Fsp64NlI.
ECDSA key fingerprint is MD5:b7:7e:27:bf:fd:f4:d0:2c:00:d3:e3:25:a7:66:b5:91.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node3 (192.168.1.24)' can't be established.
ECDSA key fingerprint is SHA256:7Cj7gj3IyiZXuzcrERWKEpxJd+CA3B9z5TCeh5lh/kc.
ECDSA key fingerprint is MD5:26:e1:5b:f5:d2:6c:c8:b9:c6:20:4e:16:30:d3:4f:ae.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node3's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'node3'"
and check to make sure that only the key(s) you wanted were added.

5. Test the trust relationship
[root@node1 ~]# ssh node3
Last login: Fri Nov 30 14:22:51 2018 from 192.168.1.254
[root@node3 ~]# exit
logout
Connection to node3 closed.

Step 2: Configure Hadoop

1) Edit the slaves file

[root@nn01 ~]# cd /usr/local/hadoop/etc/hadoop
[root@nn01 hadoop]# vim slaves
node1
node2
node3
2) Hadoop's core configuration file, core-site.xml

[root@nn01 hadoop]# vim core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://nn01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/hadoop</value>
</property>
</configuration>
[root@nn01 hadoop]# mkdir /var/hadoop //Hadoop's data root directory
[root@nn01 hadoop]# ssh node1 mkdir /var/hadoop
[root@nn01 hadoop]# ssh node2 mkdir /var/hadoop
[root@nn01 hadoop]# ssh node3 mkdir /var/hadoop
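The three ssh mkdir commands can also be written as a loop, and hdfs getconf gives a quick sanity check that Hadoop resolves the values just written to core-site.xml (a sketch):

# Create the data root on the remaining nodes in one pass (-p makes it idempotent).
for h in node1 node2 node3; do
    ssh $h mkdir -p /var/hadoop
done
# Confirm the values Hadoop actually reads from core-site.xml.
/usr/local/hadoop/bin/hdfs getconf -confKey fs.defaultFS     # should print hdfs://nn01:9000
/usr/local/hadoop/bin/hdfs getconf -confKey hadoop.tmp.dir   # should print /var/hadoop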
3) Configure the hdfs-site.xml file

[root@nn01 hadoop]# vim hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.http-address</name>
<value>nn01:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>nn01:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
4) Sync the configuration to node1, node2, and node3

[root@nn01 hadoop]# yum -y install rsync //every host being synced needs rsync installed
[root@nn01 hadoop]# for i in 22 23 24 ; do rsync -aSH --delete /usr/local/hadoop/ \
192.168.1.$i:/usr/local/hadoop/ -e 'ssh' & done
[1] 23260
[2] 23261
[3] 23262
5) Check that the sync succeeded

[root@nn01 hadoop]# ssh node1 ls /usr/local/hadoop/
aa
bb
bin
etc
include
lib
libexec
LICENSE.txt
NOTICE.txt
README.txt
sbin
share
[root@nn01 hadoop]# ssh node2 ls /usr/local/hadoop/
aa
bb
bin
etc
include
lib
libexec
LICENSE.txt
NOTICE.txt
README.txt
sbin
share
[root@nn01 hadoop]# ssh node3 ls /usr/local/hadoop/
aa
bb
bin
etc
include
lib
libexec
LICENSE.txt
NOTICE.txt
README.txt
sbin
share

The full transcript follows:
1. Mirror the setup on nn01: first copy the bundle to each node, one at a time
[root@nn01 ~]# ll
total 283216
drwxr-xr-x 2 root root 95 Nov 30 13:23 hadoop
-rw-r--r-- 1 root root 290007891 Nov 30 13:22 Hadoop.zip
-rw-r--r--. 1 root root 1690 Dec 10 2015 RPM-GPG-KEY-CentOS-7
[root@nn01 ~]# for i in {node1,node2,node3} ;do scp Hadoop.zip $i:/root/ ;done
Hadoop.zip 100% 277MB 165.4MB/s 00:01
Hadoop.zip 100% 277MB 167.6MB/s 00:01
Hadoop.zip 100% 277MB 162.9MB/s 00:01
[root@nn01 ~]#

2. Back on nn01, edit the slaves file
[root@nn01 hadoop]# cat slaves
node1
node2
node3

3. On nn01, edit core-site.xml
[root@nn01 hadoop]# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://nn01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/hadoop</value>
</property>
</configuration>
[root@nn01 hadoop]#

4. Similarly, configure the hdfs-site.xml file
[root@nn01 hadoop]# vim hdfs-site.xml
[root@nn01 hadoop]# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.namenode.http-address</name>
<value>nn01:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>nn01:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>

5. Install rsync on all 4 machines, then sync the configuration files
[root@nn01 hadoop]# yum install -y rsync
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.nwsuaf.edu.cn
* extras: mirrors.163.com
* updates: mirrors.nwsuaf.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package rsync.x86_64 0:3.1.2-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
Package Arch Version Repository Size
====================================================================================================
Installing:
rsync x86_64 3.1.2-4.el7 base 403 k

Transaction Summary
====================================================================================================
Install 1 Package

Total download size: 403 k
Installed size: 815 k
Downloading packages:
rsync-3.1.2-4.el7.x86_64.rpm | 403 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : rsync-3.1.2-4.el7.x86_64 1/1
Verifying : rsync-3.1.2-4.el7.x86_64 1/1

Installed:
rsync.x86_64 0:3.1.2-4.el7

Complete!

[root@nn01 hadoop]# for i in {22,23,24};do rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop/ -e 'ssh'&done
[1] 10993
[2] 10994
[3] 10995
[root@nn01 hadoop]#
[1] Done rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop/ -e 'ssh'
[2]- Done rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop/ -e 'ssh'
[3]+ Done rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop/ -e 'ssh'
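Before relying on an ll spot check, rsync itself can confirm the trees match: re-run the same command with -n (dry run) and -i (itemize changes). Empty output means nothing is left to transfer (a sketch):

# Dry-run comparison: any line printed names a file that is still out of sync.
for i in 22 23 24; do
    rsync -aSHni --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop/
done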

6. Pick any machine and verify that the sync succeeded
[root@node3 ~]# ll /usr/local/hadoop/
total 112
drwxr-xr-x 2 root root 61 Nov 30 13:26 aa
drwxr-xr-x 2 root root 88 Nov 30 13:27 bb
drwxr-xr-x 2 20415 101 194 Apr 18 2018 bin
drwxr-xr-x 3 20415 101 20 Apr 18 2018 etc
drwxr-xr-x 2 20415 101 106 Apr 18 2018 include
drwxr-xr-x 3 20415 101 20 Apr 18 2018 lib
drwxr-xr-x 2 20415 101 239 Apr 18 2018 libexec
-rw-r--r-- 1 20415 101 86424 Apr 18 2018 LICENSE.txt
-rw-r--r-- 1 20415 101 14978 Apr 18 2018 NOTICE.txt
-rw-r--r-- 1 20415 101 1366 Apr 18 2018 README.txt
drwxr-xr-x 2 20415 101 4096 Apr 18 2018 sbin
drwxr-xr-x 4 20415 101 31 Apr 18 2018 share

7. Format Hadoop; everything here is done on nn01
[root@nn01 hadoop]# cd /usr/local/hadoop/
[root@nn01 hadoop]# pwd
/usr/local/hadoop
[root@nn01 hadoop]# ls
aa bb bin etc include lib libexec LICENSE.txt NOTICE.txt README.txt sbin share
[root@nn01 hadoop]# ./bin/hdfs namenode -format
18/11/30 14:57:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = nn01/192.168.1.21
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.6
#######################################################################
18/11/30 14:57:44 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/11/30 14:57:44 INFO util.ExitUtil: Exiting with status 0
18/11/30 14:57:44 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at nn01/192.168.1.21
************************************************************/
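Formatting only writes the NameNode metadata; no daemon is running yet. A quick way to confirm the format took effect (a sketch, assuming hadoop.tmp.dir=/var/hadoop as configured above, which puts the NameNode metadata under its default dfs/name subdirectory):

# The format should have created the metadata directory under the data root.
ls /var/hadoop/dfs/name/current/
cat /var/hadoop/dfs/name/current/VERSION   # records the new clusterID and layoutVersion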

Start the cluster
[root@nn01 hadoop]# ./sbin/start-dfs.sh
Starting namenodes on [nn01]
nn01: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-nn01.out
node1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node1.out
node2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node2.out
node3: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node3.out
Starting secondary namenodes [nn01]
nn01: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-nn01.out

Verify the roles; check every machine
[root@nn01 hadoop]# jps
11333 SecondaryNameNode
11448 Jps
11146 NameNode

The other 3 machines are all DataNodes
[root@node1 ~]# jps
1360 DataNode
1434 Jps
[root@node1 ~]#

[root@node2 ~]# jps
1417 Jps
1342 DataNode

[root@node3 ~]# jps
1360 DataNode
1435 Jps
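Checking every machine by hand works, but with SSH trust in place the same check runs from nn01 in one loop (a sketch):

# Print the Java daemons on every node; nn01 should show NameNode and
# SecondaryNameNode, the others a DataNode each.
for h in nn01 node1 node2 node3; do
    echo "== $h =="
    ssh $h jps
done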

Report the cluster status; you can see that all 3 DataNodes came up
[root@nn01 hadoop]# ./bin/hdfs dfsadmin -report
Configured Capacity: 51505004544 (47.97 GB)
Present Capacity: 45306929152 (42.20 GB)
DFS Remaining: 45306916864 (42.20 GB)
DFS Used: 12288 (12 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

————————————————-
Live datanodes (3):

Name: 192.168.1.24:50010 (node3)
Hostname: node3
Decommission Status : Normal
Configured Capacity: 17168334848 (15.99 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2066079744 (1.92 GB)
DFS Remaining: 15102251008 (14.07 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.97%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Nov 30 14:58:43 CST 2018

Name: 192.168.1.22:50010 (node1)
Hostname: node1
Decommission Status : Normal
Configured Capacity: 17168334848 (15.99 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2065956864 (1.92 GB)
DFS Remaining: 15102373888 (14.07 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.97%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Nov 30 14:58:43 CST 2018

Name: 192.168.1.23:50010 (node2)
Hostname: node2
Decommission Status : Normal
Configured Capacity: 17168334848 (15.99 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2066038784 (1.92 GB)
DFS Remaining: 15102291968 (14.07 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.97%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Nov 30 14:58:43 CST 2018
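For a quick pass/fail check, grep the summary line instead of reading the whole report (a sketch):

# Expect "Live datanodes (3):" once every DataNode has registered.
./bin/hdfs dfsadmin -report | grep 'Live datanodes'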

That concludes today's lab.

Lab follow-up: I carelessly upgraded Java, which broke the configured JAVA_HOME path; here is the recovery
[root@nn01 ~]# jps
823 Jps
[root@nn01 ~]# cd /usr/local/hadoop/
[root@nn01 hadoop]# ./sbin/start-dfs.sh
Starting namenodes on [nn01]
nn01: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-nn01.out
node2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node2.out
node3: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node3.out
node1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node1.out
node2: /usr/local/hadoop/bin/hdfs: line 304: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre/bin/java: No such file or directory
node3: /usr/local/hadoop/bin/hdfs: line 304: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre/bin/java: No such file or directory
node1: /usr/local/hadoop/bin/hdfs: line 304: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64/jre/bin/java: No such file or directory
Starting secondary namenodes [nn01]
nn01: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-nn01.out
As you can see, the Java directory cannot be found.
Re-check the directory: it has changed to /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre
[root@node1 ~]# rpm -ql java-1.8.0-openjdk
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/bin/policytool
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/lib/amd64/libawt_xawt.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/lib/amd64/libjawt.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/lib/amd64/libjsoundalsa.so
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/lib/amd64/libsplashscreen.so
/usr/share/applications/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64-policytool.desktop
/usr/share/icons/hicolor/16x16/apps/java-1.8.0.png
/usr/share/icons/hicolor/24x24/apps/java-1.8.0.png
/usr/share/icons/hicolor/32x32/apps/java-1.8.0.png
/usr/share/icons/hicolor/48x48/apps/java-1.8.0.png

So re-point JAVA_HOME at the new directory; this has to be done on every machine. Note that my nn01 did not get the Java upgrade, so its path stays unchanged.
[root@node1 hadoop]# vim etc/hadoop/hadoop-env.sh
[root@node1 hadoop]# pwd
/usr/local/hadoop
[root@node1 hadoop]# grep “JAVA_HOME” etc/hadoop/hadoop-env.sh
# The only required environment variable is JAVA_HOME. All others are
# set JAVA_HOME in this file, so that it is correctly defined on
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre"
[root@node1 hadoop]#
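Since every upgraded node needs the identical one-line change, the edit can be pushed from nn01 with sed instead of opening vim on each node. A sketch; the path is the one rpm -ql printed above, so verify it per node before running:

# Hypothetical fix-up loop: point hadoop-env.sh at the post-upgrade JRE path.
NEW_JRE="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre"
for h in node1 node2 node3; do
    ssh $h "sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=\"$NEW_JRE\"|' /usr/local/hadoop/etc/hadoop/hadoop-env.sh"
done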

Restart again and everything works
[root@nn01 hadoop]# ./sbin/start-dfs.sh
Starting namenodes on [nn01]
nn01: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-nn01.out
node2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node2.out
node1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node1.out
node3: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-node3.out
Starting secondary namenodes [nn01]
nn01: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-nn01.out
[root@nn01 hadoop]# ./bin/hdfs dfsadmin -report
Configured Capacity: 51505004544 (47.97 GB)
Present Capacity: 45398556672 (42.28 GB)
DFS Remaining: 45398532096 (42.28 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

————————————————-
Live datanodes (3):

Name: 192.168.1.24:50010 (node3)
Hostname: node3
Decommission Status : Normal
Configured Capacity: 17168334848 (15.99 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 2035486720 (1.90 GB)
DFS Remaining: 15132839936 (14.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.14%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Dec 07 10:41:46 CST 2018

Name: 192.168.1.22:50010 (node1)
Hostname: node1
Decommission Status : Normal
Configured Capacity: 17168334848 (15.99 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 2035470336 (1.90 GB)
DFS Remaining: 15132856320 (14.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.14%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Dec 07 10:41:47 CST 2018

Name: 192.168.1.23:50010 (node2)
Hostname: node2
Decommission Status : Normal
Configured Capacity: 17168334848 (15.99 GB)
DFS Used: 8192 (8 KB)
Non DFS Used: 2035490816 (1.90 GB)
DFS Remaining: 15132835840 (14.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.14%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Dec 07 10:41:47 CST 2018
