Deploying Hadoop in Pseudo-Distributed Mode on Linux

Hadoop version: 1.2.1

Distribution: Fedora 19; the installation is done under the hadoop account.

Step 1: Configure SSH key-based login to the local machine (even in pseudo-distributed mode, Hadoop still uses SSH: its start/stop scripts log in to the node over SSH)

[hadoop@promote ~]$ which ssh
/usr/bin/ssh
[hadoop@promote ~]$ which ssh-keygen
/usr/bin/ssh-keygen
[hadoop@promote ~]$ which sshd
/usr/sbin/sshd
[hadoop@promote ~]$ ssh-keygen -t rsa

Then just press Enter at the prompts (leave the passphrase empty):

Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Passphrases do not match.  Try again.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
2f:a9:60:c7:dc:38:8f:c7:bb:70:de:d4:39:c3:39:87 hadoop@promote.cache-dns.local
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|                 |
|        S        |
|     o o o o +   |
|    o B.= o E .  |
|   . o Oo+   =   |
|      o.=o.      |
+-----------------+

This generates the private key id_rsa and the public key id_rsa.pub under /home/hadoop/.ssh/:

[hadoop@promote ~]$ cd /home/hadoop/.ssh/
[hadoop@promote .ssh]$ ls
id_rsa  id_rsa.pub

Edit the sshd service configuration file:

[hadoop@promote .ssh]$ su root
Password:
[root@promote .ssh]# vi /etc/ssh/sshd_config 

Enable RSA public-key authentication:

RSAAuthentication yes
PubkeyAuthentication yes

# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile      .ssh/authorized_keys

Save and exit, then restart the sshd service:

[root@promote .ssh]# service sshd restart
Redirecting to /bin/systemctl restart  sshd.service

Then switch back to the hadoop user and append the SSH public key to /home/hadoop/.ssh/authorized_keys:

[root@promote .ssh]# su hadoop
[hadoop@promote .ssh]$ cat id_rsa.pub >> authorized_keys

Set the permissions of ~/.ssh/authorized_keys to 644, of the ~/.ssh directory to 700, and of /home/hadoop to 700 (correct permissions are a prerequisite for key authentication to succeed); a sketch of the full set of commands is shown below.

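A minimal sketch of the full permission setup described above (assuming the home directory is /home/hadoop; the session below only shows the chmod on authorized_keys):

chmod 700 /home/hadoop                        # home directory must not be writable by group/others
chmod 700 /home/hadoop/.ssh                   # the .ssh directory itself
chmod 644 /home/hadoop/.ssh/authorized_keys   # public keys readable by all, writable only by the owner
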
[hadoop@promote .ssh]$ chmod 644 authorized_keys 
[hadoop@promote .ssh]$ ssh 192.168.211.129
The authenticity of host '192.168.211.129 (192.168.211.129)' can't be established.
RSA key fingerprint is 25:1f:be:72:7b:83:8e:c7:96:b6:71:35:fc:5d:2e:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.211.129' (RSA) to the list of known hosts.
Last login: Thu Feb 13 23:42:43 2014 

The first login stores the remote host's key in /home/hadoop/.ssh/known_hosts; subsequent logins no longer require a password:

[hadoop@promote .ssh]$ ssh 192.168.211.129
Last login: Thu Feb 13 23:46:04 2014 from 192.168.211.129

This completes the SSH key configuration.

Step 2: Install the JDK

[hadoop@promote ~]$ java -version
java version "1.7.0_25"
OpenJDK Runtime Environment (fedora-2.3.10.3.fc19-i386)
OpenJDK Client VM (build 23.7-b01, mixed mode)

Replace OpenJDK with Oracle's Java SE. First check the machine architecture to pick the right download:

[hadoop@promote .ssh]$ cd ~
[hadoop@promote ~]$ uname -i
i386

Download jdk-6u45-linux-i586.bin from Oracle's website, upload it to the server, make it executable, run the installer, and finally delete the installer:

[hadoop@promote ~]$ chmod u+x jdk-6u45-linux-i586.bin 
[hadoop@promote ~]$ ./jdk-6u45-linux-i586.bin 
[hadoop@promote ~]$ rm -rf jdk-6u45-linux-i586.bin 

The following output indicates that the JDK was installed successfully:

[hadoop@promote ~]$ /home/hadoop/jdk1.6.0_45/bin/java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) Client VM (build 20.45-b01, mixed mode, sharing)

Step 3: Install Hadoop

Download hadoop-1.2.1.tar.gz from the Hadoop website and upload it to /home/hadoop on the server:

[hadoop@promote ~]$ tar -xzf hadoop-1.2.1.tar.gz
[hadoop@promote ~]$ rm -rf hadoop-1.2.1.tar.gz 
[hadoop@promote ~]$ cd hadoop-1.2.1/conf/
[hadoop@promote conf]$ vi hadoop-env.sh

Point JAVA_HOME to the directory of the JDK installed in Step 2:

# The java implementation to use.  Required.
export JAVA_HOME=/home/hadoop/jdk1.6.0_45

Save and exit. Next, edit the hadoop user's ~/.bash_profile:

[hadoop@promote ~]$ vi ~/.bash_profile 

Append the Hadoop and JDK bin directories to the PATH environment variable:

......
PATH=$PATH:$HOME/.local/bin:$HOME/bin:/home/hadoop/hadoop-1.2.1/bin:/home/hadoop/jdk1.6.0_45/bin
export PATH

Save and exit, log out, and log back in as the hadoop user. If the output looks like the following, PATH has been set successfully:

[hadoop@promote ~]$ echo $PATH
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin:/home/hadoop/hadoop-1.2.1/bin:/home/hadoop/jdk1.6.0_45/bin
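
A quick way to confirm that both sets of binaries now resolve from the new PATH (a sketch; hadoop version is available in Hadoop 1.x):

which hadoop     # should print /home/hadoop/hadoop-1.2.1/bin/hadoop
which java       # should print /home/hadoop/jdk1.6.0_45/bin/java
hadoop version   # should report release 1.2.1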

Step 4: Edit the Hadoop configuration files

Edit core-site.xml (using an IP address rather than a hostname or localhost avoids having to modify /etc/hosts):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.211.129:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadooptmp</value>
  </property>
</configuration>
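
Because hadoop.tmp.dir points at /home/hadoop/hadooptmp, it is worth making sure that directory exists before formatting HDFS (a minimal sketch):

mkdir -p /home/hadoop/hadooptmp   # base directory for HDFS data and metadata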

Edit mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.211.129:9001</value>
  </property>
</configuration>

Edit hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

The SecondaryNameNode specified in the masters file and the slave node specified in the slaves file both point to the local machine:

[hadoop@promote conf]$ cat masters 
192.168.211.129
[hadoop@promote conf]$ cat slaves 
192.168.211.129

Step 5: Format the HDFS filesystem

Note in particular that Hadoop does not accept hostnames containing an underscore ("_"); if your hostname has one, be sure to change it first (see http://blog.csdn.net/a19881029/article/details/20485079 for details). A sketch of checking and changing the hostname follows.
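
A sketch of checking and, if needed, changing the hostname on Fedora 19 (hostnamectl is part of systemd; the name "fedora" here is just an example):

hostname                            # show the current hostname
hostnamectl set-hostname fedora     # run as root; pick a name without "_"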

[hadoop@fedora ~]$ hadoop namenode -format
14/03/04 22:13:41 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = fedora/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
-r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.6.0_45
************************************************************/
14/03/04 22:13:42 INFO util.GSet: Computing capacity for map BlocksMap
14/03/04 22:13:42 INFO util.GSet: VM type       = 32-bit
14/03/04 22:13:42 INFO util.GSet: 2.0% max memory = 1013645312
14/03/04 22:13:42 INFO util.GSet: capacity      = 2^22 = 4194304 entries
14/03/04 22:13:42 INFO util.GSet: recommended=4194304, actual=4194304
14/03/04 22:13:42 INFO namenode.FSNamesystem: fsOwner=hadoop
14/03/04 22:13:42 INFO namenode.FSNamesystem: supergroup=supergroup
14/03/04 22:13:42 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/03/04 22:13:42 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/03/04 22:13:42 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/03/04 22:13:42 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/03/04 22:13:42 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/03/04 22:13:53 INFO common.Storage: Image file /tmp/hadoop-hadoop/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
14/03/04 22:13:53 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
14/03/04 22:13:53 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-hadoop/dfs/name/current/edits
14/03/04 22:13:53 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
14/03/04 22:13:53 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at fedora/127.0.0.1
************************************************************/

Step 6: Start Hadoop

[hadoop@fedora logs]$ start-all.sh 
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-fedora.out
localhost: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-fedora.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-fedora.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-fedora.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-fedora.out
[hadoop@fedora logs]$ jps
2099 SecondaryNameNode
2184 JobTracker
1976 DataNode
2365 Jps
1877 NameNode
2289 TaskTracker

As you can see, all of the Hadoop daemons have started.
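
Another quick sanity check (a sketch, assuming the default Hadoop 1.x web UI ports, 50070 for the NameNode and 50030 for the JobTracker):

curl -sI http://192.168.211.129:50070/ | head -1   # NameNode web UI; any HTTP status line means it is up
curl -sI http://192.168.211.129:50030/ | head -1   # JobTracker web UI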

Also check the log files for errors (a quick way to do so is sketched below); if there are none, Hadoop has started successfully.

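A minimal sketch of scanning the daemon logs for problems (assuming the default log directory under the Hadoop install):

grep -iE "error|exception" /home/hadoop/hadoop-1.2.1/logs/*.log   # ideally prints nothing on a clean start

Then hadoop dfsadmin -report confirms that HDFS is up with one live DataNode:
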
[hadoop@fedora hadoop]$ hadoop dfsadmin -report
Configured Capacity: 39474135040 (36.76 GB)
Present Capacity: 33661652992 (31.35 GB)
DFS Remaining: 33661612032 (31.35 GB)
DFS Used: 40960 (40 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 192.168.211.129:50010
Decommission Status : Normal
Configured Capacity: 39474135040 (36.76 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 5812482048 (5.41 GB)
DFS Remaining: 33661612032(31.35 GB)
DFS Used%: 0%
DFS Remaining%: 85.28%
Last contact: Thu Mar 06 09:48:17 CST 2014

Running a MapReduce job also works without problems; a quick test with the bundled example jar is sketched below.

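As a final end-to-end test (a sketch, assuming the examples jar shipped in the Hadoop 1.2.1 tarball at /home/hadoop/hadoop-1.2.1/hadoop-examples-1.2.1.jar):

cd /home/hadoop/hadoop-1.2.1
hadoop jar hadoop-examples-1.2.1.jar pi 2 10   # submit a small example job: 2 maps, 10 samples per map

If the job completes and prints an estimated value of Pi, the JobTracker and TaskTracker are working end to end.
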
Reposted from: https://www.cnblogs.com/sean-zou/p/3709998.html
