Prepare three or five virtual machines (this guide describes three; I actually used five).
When installing CentOS, the screen where you choose between the minimal install and the graphical interface also lets you configure the network; set it up there so you do not have to reconfigure it later.
Configure one host first and clone the other two from it. Here the master node uses the graphical interface and the slave nodes use the minimal install.
IP addresses and hostnames:

IP | Hostname
---|---
192.168.228.138 | chun1
192.168.228.139 | chun2
192.168.228.140 | chun3
Write the IP/hostname pairs into /etc/hosts on every machine; these mappings let each host reach the others by name.
Pick IPs and hostnames that suit your own environment; how to set the IP address is covered below.
echo '192.168.228.138 chun1' >>/etc/hosts
echo '192.168.228.139 chun2' >>/etc/hosts
echo '192.168.228.140 chun3' >>/etc/hosts
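Once all three machines are up with the addresses above, you can verify the name resolution from any of them:
cat /etc/hosts
ping -c 3 chun2   # should resolve to 192.168.228.139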
Next, set each machine's own IP address and hostname to match the table.
1. Change the IP. Go into the network-scripts directory:
[root@chun1 /]# cd /etc/sysconfig/network-scripts/
1.1 List the directory with ls.
You will see the ifcfg-ens33 file (this differs between CentOS 6 and 7; on some systems it is ifcfg-eth0, but the configuration is the same). Edit this file with vi to set the host's IP address:
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.228.138   # your host IP
NETMASK=255.255.255.0    # your subnet mask
GATEWAY=192.168.228.3    # your gateway
DNS1=192.168.228.3       # first DNS server, same as the gateway here
DNS2=114.114.114.114     # second DNS server, 114.114.114.114
How do you know which IP range, subnet mask, and gateway to use? Check the NAT network settings in VMware's Virtual Network Editor; it shows the subnet and gateway assigned to your virtual network, and the host IPs should be chosen within that subnet.
With that, the basic network configuration is done.
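To apply the new settings, restart the network service and confirm the address (assuming CentOS 7 with the ens33 interface; CentOS 6 uses service network restart instead):
systemctl restart network
ip addr show ens33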
1.2 Change the hostname
On each machine, edit the hostname to match its IP from the table: vi /etc/hostname
Replace localhost with that machine's hostname.
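On CentOS 7 you can also set the name with hostnamectl instead of editing the file, for example on the master:
hostnamectl set-hostname chun1
Log out and back in (or reboot) for the new name to show up in the shell prompt.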
2. Configure passwordless SSH login:
The idea is simple: first generate a key pair on every machine with ssh-keygen -t rsa (press Enter three times). Then send each slave node's /root/.ssh/id_rsa.pub to the master, renaming the copies so they do not overwrite each other.
On chun2 run: scp /root/.ssh/id_rsa.pub root@chun1:/root/.ssh/id_rsa.pub002
On chun3 run: scp /root/.ssh/id_rsa.pub root@chun1:/root/.ssh/id_rsa.pub003
Then, on chun1, append all three id_rsa.pub files to authorized_keys:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
cat /root/.ssh/id_rsa.pub002 >> /root/.ssh/authorized_keys
cat /root/.ssh/id_rsa.pub003 >> /root/.ssh/authorized_keys
Finally, copy authorized_keys to chun2 and chun3 so that all three machines can log in to each other without passwords:
scp /root/.ssh/authorized_keys root@chun2:/root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys root@chun3:/root/.ssh/authorized_keys
Test it (use exit to log out).
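For example, from chun1:
ssh chun2    # should log in without asking for a password
exit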
Everything below is installed on the master (chun1) first and then distributed to the other nodes.
3. Install the JDK
Download JDK 1.8.0 from the official site, extract it, and configure the environment variables.
I put it under /usr/local/java; the java directory is one I created myself:
mkdir /usr/local/java
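Assuming the downloaded archive is named jdk-8u221-linux-x64.tar.gz (adjust to whatever file you actually downloaded), extract it into that directory:
tar -zxf jdk-8u221-linux-x64.tar.gz -C /usr/local/java/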
Configure the environment variables:
vi /etc/profile
Append the following at the end of the file:
#JAVA
JAVA_HOME=/usr/local/java/jdk1.8.0_221  # name of the extracted JDK directory
JRE_HOME=/usr/local/java/jdk1.8.0_221/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JRE_HOME CLASS_PATH PATH
Reload the environment variables:
source /etc/profile
Test with java -version, java, and javac; if each command prints output, the installation succeeded.
[root@chun1 ~]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
4. Install Hadoop. This guide uses 2.7.7; avoid 3.x and 2.8.5 for now, because they cause incompatibilities when HBase is configured later (see my HBase setup post for details).
Download the 2.7.7 binary tarball from the official site and extract it to /usr/local/hadoop.
The hadoop directory is one I created myself.
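For reference, a sketch of creating the target directory and fetching the tarball from the Apache archive (use whichever mirror you prefer):
mkdir -p /usr/local/hadoop
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz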
tar -zxf hadoop-2.7.7.tar.gz -C /usr/local/hadoop/
Configure the environment variables by adding the following to /etc/profile:
#HADOOP
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.7
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
#zookeeper
export ZOOKEEPER_PREFIX=/usr/local/zookeeper/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin
Reload the environment variables:
source /etc/profile
Test that it worked:
[root@chun1 ~]# hadoop version
Hadoop 2.7.7
Subversion Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
Compiled by stevel on 2018-07-18T22:47Z
Compiled with protoc 2.5.0
From source with checksum 792e15d20b12c74bd6f19a1fb886490
This command was run using /usr/local/hadoop/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7.jar
5. Configure Hadoop
Create the working directories.
# create the following directories under /usr/local/hadoop
cd /usr/local/hadoop/
mkdir tmp
mkdir var
mkdir dfs
mkdir dfs/name
mkdir dfs/data
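Equivalently, the same directories can be created with one command:
mkdir -p /usr/local/hadoop/{tmp,var,dfs/name,dfs/data}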
Modify the configuration files.
Go into hadoop-2.7.7/etc/hadoop:
cd /usr/local/hadoop/hadoop-2.7.7/etc/hadoop
(1) hadoop-env.sh
Below the line # JAVA_HOME=/usr/java/testing hdfs dfs -ls, add the following:
export JAVA_HOME=/usr/local/java/jdk1.8.0_221
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.7
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
(2) Edit slaves: delete localhost and add the slave node names (in Hadoop 3 the file to edit is workers instead).
chun2
chun3
For each of the files below, add the properties inside the <configuration></configuration> tags.
(3) core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.228.138:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
</configuration>
(4)hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop/dfs/name</value>
<description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/local/hadoop/dfs/data</value>
<description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>192.168.228.138:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>192.168.228.138:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>4</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
<description>need not permissions</description>
</property>
(5)mapred-site.xml
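In a fresh Hadoop 2.7.7 unpack this file usually does not exist yet, only mapred-site.xml.template; if that is the case, create it from the template first:
cp mapred-site.xml.template mapred-site.xml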
<property>
<name>mapred.job.tracker</name>
<value>chun1:49001</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/usr/local/hadoop/var</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
(6) yarn-site.xml
Run hadoop classpath on the command line and copy its output; it goes into yarn.application.classpath below.
[root@chun1 ~]# hadoop classpath
/usr/local/hadoop/hadoop-2.7.7/etc/hadoop:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/common/lib/*:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/common/*:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/hdfs:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/hdfs/lib/*:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/hdfs/*:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/yarn/lib/*:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/yarn/*:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/hadoop-2.7.7/share/hadoop/mapreduce/*:/usr/local/hadoop/hadoop-2.7.7/contrib/capacity-scheduler/*.jar
<property>
<name>yarn.resourcemanager.hostname</name>
<value>chun1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>paste the hadoop classpath output from above here</value>  (this part is adapted from another guide)
</property>
This completes the basic configuration. Now sync everything to the other two virtual machines:
scp -r /usr/local/java chun2:/usr/local/java
scp -r /usr/local/hadoop chun2:/usr/local/hadoop
scp -r /etc/profile chun2:/etc/
scp -r /usr/local/java chun3:/usr/local/java
scp -r /usr/local/hadoop chun3:/usr/local/hadoop
scp -r /etc/profile chun3:/etc/
One caveat: if the other virtual machines already have a hadoop or java directory under /usr/local, scp will place the copy inside that existing directory, leaving you with an extra nested level. You can either simply delete the existing directory first, or transfer like this instead:
Go into /usr/local and use the command scp -r hadoop/ chun2:$PWD
Transferring this way overwrites the existing contents; $PWD expands to the directory you are currently in.
Then reload the environment variables on both slave nodes:
ssh chun2
source /etc/profile
exit
ssh chun3
source /etc/profile
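You can quickly confirm on each slave that the variables took effect:
java -version
hadoop version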
Format the NameNode.
Run on the master node:
hdfs namenode -format
If it finishes without errors and "successfully formatted" appears around the fifth or sixth line from the end of the output, the format succeeded.
Start the Hadoop cluster services:
start-all.sh
Run jps to check the running processes:
chun1 should show the NameNode process.
chun2 and chun3 should show DataNode processes.
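As a further check, you can list the live DataNodes and open the NameNode web UI (the port comes from dfs.namenode.http-address configured above):
hdfs dfsadmin -report    # chun2 and chun3 should appear as live datanodes
Web UI: http://192.168.228.138:50070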