2. Federation mode configuration --- scale-out and load balancing

Architecture diagram: two clusters --- purpose: scale-out

HA federation mode addresses the performance bottleneck of a plain HA cluster (chiefly the NameNode and ResourceManager). It splits one HA cluster into two or more clusters connected through Federation, which gives the HA deployment the ability to scale horizontally. In theory, this mode can keep up with unbounded data growth simply by adding compute nodes. The federation configuration below is a set of adjustments on top of the original HA configuration.

Configuration procedure

    # create the federation config directory as a copy of the existing HA (local) config
    $>cp -r local/ federation

    1. Plan the cluster
        ns1: nn1(s101) + nn2(s102)
        ns2: nn3(s103) + nn4(s104)
    2. Prepare
        [nn1 ~ nn4] need passwordless SSH access to all nodes.
 
    3. Stop the entire cluster
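        A sketch of this step, assuming the cluster was brought up with the standard scripts (run on the master, s101):
            $>stop-all.sh      # or stop-dfs.sh plus stop-yarn.sh on newer releases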
        
    4. Configuration files
        4.1) hdfs-site.xml for s101 and s102
            [hadoop/federation/hdfs-site.xml]
            <?xml version="1.0" encoding="UTF-8"?>
            <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
            <configuration>
                    <property>
                                    <name>dfs.nameservices</name>
                                    <value>ns1,ns2</value>
                    </property>
                    <!-- **************ns1********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns1</name>
                                    <value>nn1,nn2</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn1</name>
                                    <value>s101:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn2</name>
                                    <value>s102:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn1</name>
                                    <value>s101:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn2</name>
                                    <value>s102:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns1</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!-- **************ns2********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns2</name>
                                    <value>nn3,nn4</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn3</name>
                                    <value>s103:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn4</name>
                                    <value>s104:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn3</name>
                                    <value>s103:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn4</name>
                                    <value>s104:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns2</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!--***********************************************-->
 
                    <property>
                                    <name>dfs.namenode.shared.edits.dir</name>
                                    <value>qjournal://s102:8485;s103:8485;s104:8485/ns1</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.fencing.methods</name>
                                    <value>
                                                    sshfence
                                                    shell(/bin/true)
                                    </value>
                    </property>
                    <property>
                                    <name>dfs.ha.fencing.ssh.private-key-files</name>
                                    <value>/home/centos/.ssh/id_rsa</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.automatic-failover.enabled</name>
                                    <value>true</value>
                    </property>
                    <property>
                                    <name>dfs.replication</name>
                                    <value>3</value>
                    </property>
            </configuration>
 
        4.2) hdfs-site.xml for s103 and s104
            [hadoop/federation/hdfs-site.xml]
            <?xml version="1.0" encoding="UTF-8"?>
            <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
            <configuration>
                    <property>
                                    <name>dfs.nameservices</name>
                                    <value>ns1,ns2</value>
                    </property>
                    <!-- **************ns1********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns1</name>
                                    <value>nn1,nn2</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn1</name>
                                    <value>s101:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn2</name>
                                    <value>s102:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn1</name>
                                    <value>s101:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn2</name>
                                    <value>s102:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns1</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!-- **************ns2********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns2</name>
                                    <value>nn3,nn4</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn3</name>
                                    <value>s103:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn4</name>
                                    <value>s104:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn3</name>
                                    <value>s103:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn4</name>
                                    <value>s104:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns2</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!--***********************************************-->
 
                    <property>
                                    <name>dfs.namenode.shared.edits.dir</name>
                                    <value>qjournal://s102:8485;s103:8485;s104:8485/ns2</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.fencing.methods</name>
                                    <value>
                                                    sshfence
                                                    shell(/bin/true)
                                    </value>
                    </property>
                    <property>
                                    <name>dfs.ha.fencing.ssh.private-key-files</name>
                                    <value>/home/centos/.ssh/id_rsa</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.automatic-failover.enabled</name>
                                    <value>true</value>
                    </property>
                    <property>
                                    <name>dfs.replication</name>
                                    <value>3</value>
                    </property>
            </configuration>
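            Note: this file is identical to 4.1 except for dfs.namenode.shared.edits.dir, whose journal identifier is /ns2 instead of /ns1 -- the same three JournalNodes keep a separate edit log per nameservice, keyed by that identifier. A quick way to confirm only that line differs (file names here are illustrative):
                $>diff hdfs-site-ns1.xml hdfs-site-ns2.xml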
        
        4.3) core-site.xml for s101 ~ s104 (identical on all four nodes; the xi:include pulls the mount table into the configuration, and fs.defaultFS points clients at the viewfs virtual namespace)
            [hadoop/federation/core-site.xml]
            <?xml version="1.0"?>
            <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
                    <xi:include href="mountTable.xml" />
                    <property>
                                    <name>fs.defaultFS</name>
                                    <value>viewfs://ClusterX</value>
                    </property>
                    <property>
                                    <name>dfs.journalnode.edits.dir</name>
                                    <value>/home/centos/hadoop/federation/journalnode</value>
                    </property>
                    <property>
                                     <name>hadoop.tmp.dir</name>
                                    <value>/home/centos/hadoop/federation</value>
                    </property>
                    <property>
                                    <name>ha.zookeeper.quorum</name>
                                    <value>s102:2181,s103:2181,s104:2181</value>
                    </property>
            </configuration>
        
        4.4) mountTable.xml (the mount table file)
            [hadoop/federation/mountTable.xml]
            <configuration>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.homedir</name>
                            <value>/home</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./home</name>
                            <value>hdfs://ns1/home</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./tmp</name>
                            <value>hdfs://ns2/tmp</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./projects/foo</name>
                            <value>hdfs://ns1/projects/foo</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./projects/bar</name>
                            <value>hdfs://ns2/projects/bar</value>
                    </property>
            </configuration>
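            With this mount table, client paths are routed per mount point once the cluster is up (illustrative commands):
                # fs.defaultFS is viewfs://ClusterX, so plain paths go through the table
                $>hdfs dfs -ls /home          # served by ns1 (hdfs://ns1/home)
                $>hdfs dfs -ls /tmp           # served by ns2 (hdfs://ns2/tmp)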
 
    5. Operations
        5.1) Delete the logs and local temp directories on all nodes
            $>xcall.sh rm -rf /soft/hadoop/logs/*
            $>xcall.sh rm -rf /home/centos/hadoop/federation/*
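            xcall.sh is not shown in this post; it is assumed to be a small helper that runs the given command on every node over SSH. A minimal sketch (host list per the plan above):
                #!/bin/bash
                # xcall.sh -- run the given command on each cluster host via ssh
                for host in s101 s102 s103 s104; do
                    echo "------ $host ------"
                    ssh $host "$@"
                done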
        
        5.2) Re-point the hadoop config symlink on all nodes
            $>xcall.sh ln -sfT /soft/hadoop/etc/federation /soft/hadoop/etc/hadoop
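            To confirm the federation config is now the one in effect (getconf reads the active client configuration):
                $>hdfs getconf -confKey dfs.nameservices     # should print ns1,ns2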
 
        5.3) Format and initialize the ns1 cluster
            a) Start the JournalNode cluster
                Log in to s102 ~ s104 and start the journalnode process on each.
                $>hadoop-daemon.sh start journalnode
            b) Format the nn1 node
                [s101]
                $>hdfs namenode -format
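                # formatting generates the cluster ID that ns2 must reuse in step 5.4a;
                # it can be read back from the VERSION file (path follows hadoop.tmp.dir above)
                $>grep clusterID /home/centos/hadoop/federation/dfs/name/current/VERSION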
            c) Copy s101's metadata to s102 (same form as the command in step 5.4b).
                [s101]
                $>scp -r /home/centos/hadoop/federation centos@s102:/home/centos/hadoop/
            d) Run the bootstrap on s102
                # start the namenode on s101
                $>hadoop-daemon.sh start namenode
                # run the bootstrap on s102; if prompted to re-format, answer N
                $>hdfs namenode -bootstrapStandby
 
            e) On s102, initialize the shared edit log onto the JN cluster (answer N if prompted)
                $>hdfs namenode -initializeSharedEdits
            f) On s102, format the zkfc state in ZooKeeper (answer Y).
                $>hdfs zkfc -formatZK
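                # optional check: formatZK creates a znode under /hadoop-ha; assuming the
                # ZooKeeper CLI is available on one of the ZK hosts:
                $>zkCli.sh -server s102:2181 ls /hadoop-ha    # should list ns1 (later ns2 too)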
            g) Start the remaining namenode and zkfc processes on s101 and s102 (the s101 namenode is already running from step d).
                [s101]
                $>hadoop-daemon.sh start zkfc
                
                [s102]
                $>hadoop-daemon.sh start namenode
                $>hadoop-daemon.sh start zkfc
 
            h) Test the web UI
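                Per the http-address settings above, open http://s101:50070 and http://s102:50070; one NameNode should report active and the other standby. If your release supports the -ns flag, the state can also be queried from the shell:
                $>hdfs haadmin -ns ns1 -getServiceState nn1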
                
 
            
        5.4) Format and initialize the ns2 cluster
            a) Format nn3; be sure to pass -clusterId with ns1's cluster ID so both nameservices belong to one cluster (the value below came from formatting nn1).
                [s103]
                $>hdfs namenode -format -clusterId CID-e16c5e2f-c0a5-4e51-b789-008e36b7289a
                
            b) Copy s103's metadata to s104.
                $>scp -r /home/centos/hadoop/federation centos@s104:/home/centos/hadoop/
            c) Run the bootstrap on s104
                # start the namenode on s103
                $>hadoop-daemon.sh start namenode
                # run the bootstrap on s104
                $>hdfs namenode -bootstrapStandby
            d) On s104, initialize the shared edit log onto the JN cluster
                $>hdfs namenode -initializeSharedEdits
            e) On s104, format the zkfc state in ZooKeeper (answer Y).
                $>hdfs zkfc -formatZK
            f) Start the remaining namenode and zkfc processes on s103 and s104 (the s103 namenode is already running from step c).
                [s103]
                $>hadoop-daemon.sh start zkfc
                
                [s104]
                $>hadoop-daemon.sh start namenode
                $>hadoop-daemon.sh start zkfc
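            A quick sanity check with the xcall.sh helper sketched earlier:
                $>xcall.sh jps
                # expect NameNode + DFSZKFailoverController on s101~s104 and JournalNode
                # on s102~s104 (DataNodes only appear after start-dfs.sh in step 5.6)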
        5.5) Stop the cluster
            $>stop-dfs.sh
 
        5.6) Restart the DFS cluster
            $>start-dfs.sh
                
        5.7) Create a directory
            # note the -p flag: the mount target hdfs://ns1/home may not exist on ns1 yet
            $>hdfs dfs -mkdir -p /home/data

            # upload a file and inspect it in the web UI
            $>hdfs dfs -put 1.txt /home/data
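            Because fs.defaultFS is viewfs://ClusterX, the upload above lands on ns1 per the mount table; the same data is visible through the physical namespace as well (illustrative check):
                $>hdfs dfs -ls /home/data                 # through the viewfs mount
                $>hdfs dfs -ls hdfs://ns1/home/data       # directly against ns1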
 
 

Reposted from: https://www.cnblogs.com/star521/p/9703171.html

