2. Federation Mode Configuration: Scaling Out and Load Balancing

Architecture: two HA clusters joined by Federation, with scale-out as the goal.

HA federation mode addresses the performance bottleneck of plain HA mode (chiefly the NameNode and ResourceManager). It splits one HA cluster into two or more clusters connected through Federation, giving the deployment the ability to scale horizontally. In principle, this mode can keep up with unbounded data growth simply by adding nodes. The federation configuration below is a partial adaptation of the earlier HA setup.

Configuration procedure

Create the federation configuration directory from the existing HA configuration:

cp -r local/ federation
    1. Plan the cluster
        ns1: nn1 (s101) + nn2 (s102)
        ns2: nn3 (s103) + nn4 (s104)
    2. Prepare
        [nn1 ~ nn4] must have passwordless ssh access to every node.

    3. Stop the entire cluster

    4. Configuration files
        4.1) hdfs-site.xml for s101 and s102
            [hadoop/federation/hdfs-site.xml]
            <?xml version="1.0" encoding="UTF-8"?>
            <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
            <configuration>
                    <property>
                                    <name>dfs.nameservices</name>
                                    <value>ns1,ns2</value>
                    </property>
                    <!-- **************ns1********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns1</name>
                                    <value>nn1,nn2</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn1</name>
                                    <value>s101:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn2</name>
                                    <value>s102:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn1</name>
                                    <value>s101:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn2</name>
                                    <value>s102:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns1</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!-- **************ns2********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns2</name>
                                    <value>nn3,nn4</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn3</name>
                                    <value>s103:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn4</name>
                                    <value>s104:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn3</name>
                                    <value>s103:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn4</name>
                                    <value>s104:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns2</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!--***********************************************-->
 
                    <property>
                                    <name>dfs.namenode.shared.edits.dir</name>
                                    <value>qjournal://s102:8485;s103:8485;s104:8485/ns1</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.fencing.methods</name>
                                    <value>
                                                    sshfence
                                                    shell(/bin/true)
                                    </value>
                    </property>
                    <property>
                                    <name>dfs.ha.fencing.ssh.private-key-files</name>
                                    <value>/home/centos/.ssh/id_rsa</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.automatic-failover.enabled</name>
                                    <value>true</value>
                    </property>
                    <property>
                                    <name>dfs.replication</name>
                                    <value>3</value>
                    </property>
            </configuration>
 
        4.2) hdfs-site.xml for s103 and s104 (identical to 4.1 except that dfs.namenode.shared.edits.dir ends in /ns2 instead of /ns1)
            [hadoop/federation/hdfs-site.xml]
            <?xml version="1.0" encoding="UTF-8"?>
            <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
            <configuration>
                    <property>
                                    <name>dfs.nameservices</name>
                                    <value>ns1,ns2</value>
                    </property>
                    <!-- **************ns1********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns1</name>
                                    <value>nn1,nn2</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn1</name>
                                    <value>s101:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns1.nn2</name>
                                    <value>s102:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn1</name>
                                    <value>s101:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns1.nn2</name>
                                    <value>s102:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns1</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!-- **************ns2********************* -->
                    <property>
                                    <name>dfs.ha.namenodes.ns2</name>
                                    <value>nn3,nn4</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn3</name>
                                    <value>s103:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.rpc-address.ns2.nn4</name>
                                    <value>s104:8020</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn3</name>
                                    <value>s103:50070</value>
                    </property>
                    <property>
                                    <name>dfs.namenode.http-address.ns2.nn4</name>
                                    <value>s104:50070</value>
                    </property>
                    <property>
                                    <name>dfs.client.failover.proxy.provider.ns2</name>
                                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                    </property>
                    <!--***********************************************-->
 
                    <property>
                                    <name>dfs.namenode.shared.edits.dir</name>
                                    <value>qjournal://s102:8485;s103:8485;s104:8485/ns2</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.fencing.methods</name>
                                    <value>
                                                    sshfence
                                                    shell(/bin/true)
                                    </value>
                    </property>
                    <property>
                                    <name>dfs.ha.fencing.ssh.private-key-files</name>
                                    <value>/home/centos/.ssh/id_rsa</value>
                    </property>
 
                    <property>
                                    <name>dfs.ha.automatic-failover.enabled</name>
                                    <value>true</value>
                    </property>
                    <property>
                                    <name>dfs.replication</name>
                                    <value>3</value>
                    </property>
            </configuration>
        
        4.3) core-site.xml for s101 ~ s104
            [hadoop/federation/core-site.xml]
            <?xml version="1.0"?>
            <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
                    <xi:include href="mountTable.xml" />
                    <property>
                                    <name>fs.defaultFS</name>
                                    <value>viewfs://ClusterX</value>
                    </property>
                    <property>
                                    <name>dfs.journalnode.edits.dir</name>
                                    <value>/home/centos/hadoop/federation/journalnode</value>
                    </property>
                    <property>
                                     <name>hadoop.tmp.dir</name>
                                    <value>/home/centos/hadoop/federation</value>
                    </property>
                    <property>
                                    <name>ha.zookeeper.quorum</name>
                                    <value>s102:2181,s103:2181,s104:2181</value>
                    </property>
            </configuration>
        
        4.4) mountTable.xml, the viewfs mount-table file included from core-site.xml
            [hadoop/federation/mountTable.xml]
            <configuration>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.homedir</name>
                            <value>/home</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./home</name>
                            <value>hdfs://ns1/home</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./tmp</name>
                            <value>hdfs://ns2/tmp</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./projects/foo</name>
                            <value>hdfs://ns1/projects/foo</value>
                    </property>
                    <property>
                            <name>fs.viewfs.mounttable.ClusterX.link./projects/bar</name>
                            <value>hdfs://ns2/projects/bar</value>
                    </property>
            </configuration>
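The mount table above gives clients a single viewfs namespace whose top-level paths fan out to the two nameservices by prefix match. A rough shell sketch of the mapping it defines (the resolve helper is hypothetical, for illustration only; real resolution is done by the viewfs client):

```shell
# Hypothetical illustration of the viewfs prefix mapping defined above.
resolve() {
    case "$1" in
        /home/*|/home)                 echo "hdfs://ns1$1" ;;
        /tmp/*|/tmp)                   echo "hdfs://ns2$1" ;;
        /projects/foo/*|/projects/foo) echo "hdfs://ns1$1" ;;
        /projects/bar/*|/projects/bar) echo "hdfs://ns2$1" ;;
        *) echo "no mount point covers $1" >&2; return 1 ;;
    esac
}
resolve /home/data/1.txt   # -> hdfs://ns1/home/data/1.txt
resolve /tmp/job-output    # -> hdfs://ns2/tmp/job-output
```

This is why the mkdir/put in step 5.7 against /home/data actually lands on the ns1 cluster.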
 
    5. Operations
        5.1) Remove the logs and local temp directories on every node
            $>xcall.sh rm -rf /soft/hadoop/logs/*
            $>xcall.sh rm -rf /home/centos/hadoop/federation/*

        5.2) Repoint the hadoop config symlink on every node
            $>xcall.sh ln -sfT /soft/hadoop/etc/federation /soft/hadoop/etc/hadoop
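The -sfT flags matter here: -f replaces an existing link, and -T (GNU ln) treats the destination as a plain name, so the new link is not created inside the directory the old link points to. A quick local sketch with throwaway names standing in for the real /soft/hadoop/etc paths:

```shell
# Demonstrate ln -sfT with throwaway names (stand-ins for the real
# /soft/hadoop/etc/{local,federation} config directories).
mkdir -p conf-local conf-federation
ln -sfT conf-local hadoop-conf        # hadoop-conf -> conf-local
ln -sfT conf-federation hadoop-conf   # link replaced, not nested inside conf-local/
readlink hadoop-conf                  # prints: conf-federation
```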
 
        5.3) Format and initialize the ns1 cluster
            a) Start the JournalNode cluster
                Log in to s102 ~ s104 and start the journalnode process.
                $>hadoop-daemon.sh start journalnode
            b) Format the nn1 node
                [s101]
                $>hdfs namenode -format
            c) Copy s101's metadata to s102.
                [s101]
                $>scp -r
            d) Run the standby bootstrap on s102
                # start the namenode on s101
                $>hadoop-daemon.sh start namenode
                # run the bootstrap on s102; answer N so the metadata is not reformatted
                $>hdfs namenode -bootstrapStandby

            e) On s102, initialize the shared edit log on the JN cluster (answer N if prompted)
                $>hdfs namenode -initializeSharedEdits
            f) On s102, format the zkfc znode in ZooKeeper (answer Y).
                $>hdfs zkfc -formatZK
            g) Start the namenode and zkfc processes on s101 and s102.
                [s101]  # the namenode on s101 is already running from step d)
                $>hadoop-daemon.sh start zkfc

                [s102]
                $>hadoop-daemon.sh start namenode
                $>hadoop-daemon.sh start zkfc

            h) Verify through the web UI
                http://s101:50070 and http://s102:50070
 
            
        5.4) Format and initialize the ns2 cluster
            a) Format nn3; be sure to pass -clusterId so it matches ns1's cluster ID.
                [s103]
                $>hdfs namenode -format -clusterId CID-e16c5e2f-c0a5-4e51-b789-008e36b7289a
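The cluster ID passed above is read from nn1's metadata after ns1 was formatted: it is the clusterID field in the VERSION file under the NameNode's name directory (with the hadoop.tmp.dir configured above, that would be /home/centos/hadoop/federation/dfs/name/current/VERSION; the exact path is an assumption based on the default name dir layout). A minimal sketch that parses the field, using a fabricated VERSION file:

```shell
# Sketch: extract clusterID from a NameNode VERSION file.
# The file is fabricated here; on a real nn1 you would read
# /home/centos/hadoop/federation/dfs/name/current/VERSION instead
# (path assumes the default name dir under hadoop.tmp.dir).
cat > VERSION <<'EOF'
namespaceID=1386648076
clusterID=CID-e16c5e2f-c0a5-4e51-b789-008e36b7289a
cTime=0
EOF
grep '^clusterID=' VERSION | cut -d= -f2
```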
                
            b) Copy s103's metadata to s104.
                $>scp -r /home/centos/hadoop/federation centos@s104:/home/centos/hadoop/
            c) Run the standby bootstrap on s104
                # start the namenode on s103
                $>hadoop-daemon.sh start namenode
                # run the bootstrap on s104
                $>hdfs namenode -bootstrapStandby
            d) On s104, initialize the shared edit log
                $>hdfs namenode -initializeSharedEdits
            e) On s104, format the zkfc znode in ZooKeeper (answer Y).
                $>hdfs zkfc -formatZK
            f) Start the namenode and zkfc processes on s103 and s104.
                [s103]
                $>hadoop-daemon.sh start zkfc

                [s104]
                $>hadoop-daemon.sh start namenode
                $>hadoop-daemon.sh start zkfc
        5.5) Stop the cluster
            $>stop-dfs.sh

        5.6) Restart the dfs cluster
            $>start-dfs.sh

        5.7) Create directories
            # note the -p flag
            $>hdfs dfs -mkdir -p /home/data

            # upload a file and inspect it in the web UI
            $>hdfs dfs -put 1.txt /home/data
 
 

Reprinted from: https://www.cnblogs.com/star521/p/9703171.html
