Resource file file.txt:
hello hadoop
hello word
this is my first hadoop program
Analysis: the words on each line of the document are obtained by splitting on spaces. After the map phase, all the words are organized into the following form:
key:hello value:1
key:hadoop value:1
key:hello value:1
key:word value:1
key:this value:1
key:is value:1
key:my value:1
key:first value:1
key:hadoop value:1
key:program value:1
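As a minimal standalone sketch (the demo class name is made up for illustration and is not part of the program shown later), this is the tokenizing step the Mapper performs on the single line "hello hadoop":

import java.util.StringTokenizer;

public class MapPhaseDemo {
    public static void main(String[] args) {
        String line = "hello hadoop"; // one line of the input file
        // StringTokenizer splits on whitespace by default
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            // the real Mapper writes (word, 1) to the framework; here we just print the pair
            System.out.println("key:" + tokenizer.nextToken() + " value:1");
        }
    }
}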
After Hadoop groups and sorts these pairs, they are fed into Reduce in the following form:
key:hello value:{1,1}
key:hadoop value:{1,1}
key:word value:{1}
key:this value:{1}
key:is value:{1}
key:my value:{1}
key:first value:{1}
key:program value:{1}
Therefore Reduce receives the values as an Iterable<IntWritable> values parameter. In Reduce we can iterate over the values and add them up to obtain the number of times each word occurs; for example, the key hello arrives with values {1,1}, which sum to 2.
Implementation:
package com.bwzy.hadoop;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) {
            String line = value.toString();
            // Read each line and split it on spaces
            // (StringTokenizer splits on whitespace by default)
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                try {
                    context.write(word, one); // emit (word, 1) to the Reduce phase
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0; // sum up the occurrences of this word
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        int ret = ToolRunner.run(new WordCount(), args);
        System.exit(ret);
    }

    @Override
    public int run(String[] arg0) throws Exception {
        Job job = new Job(getConf());
        job.setJobName("wordcount");
        job.setOutputKeyClass(Text.class);          // set the output key type
        job.setOutputValueClass(IntWritable.class); // set the output value type
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(arg0[0]));
        FileOutputFormat.setOutputPath(job, new Path(arg0[1]));
        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }
}
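A note on the structure: because WordCount extends Configured and implements Tool, ToolRunner parses the generic Hadoop options (for example -D key=value) into the configuration returned by getConf() before calling run(), so run() only has to handle the remaining arguments, the input and output paths.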
Running:
1: Package the program.
Select the classes to package --> right-click --> Export --> Java --> JAR file --> enter a save path --> Finish.
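If you prefer the command line to Eclipse's Export wizard, a rough equivalent is the jar tool (assuming, hypothetically, that the compiled .class files sit under a bin/ directory):

jar cvf WordCount.jar -C bin/ .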
2: Copy the JAR into the Hadoop installation directory (the program depends on Hadoop's JAR libraries).
3: Upload the resource file (the file containing the words, assumed here to be /home/user/Document/file1.txt) to the chosen HDFS directory.
Command to create the HDFS directory (assuming Hadoop has already started successfully): hadoop fs -mkdir /your-dir/your-dir/input (here /your-dir/your-dir stands for directories of your choosing)
Upload the local resource file to HDFS: hadoop fs -copyFromLocal /home/user/Document/file1.txt /your-dir/your-dir/input
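To confirm the upload succeeded, you can list the directory (same placeholder paths as above):

hadoop fs -ls /your-dir/your-dir/input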
4: Run the MapReduce program:
hadoop jar /home/user/hadoop-1.0.4/WordCount.jar com.bwzy.hadoop.WordCount /your-dir/your-dir/input /your-dir/your-dir/output
Note: when the job runs, Hadoop automatically creates the /your-dir/your-dir/output directory, which will contain two files; one of them holds the result of the MapReduce run. To rerun the program you must first delete the /your-dir/your-dir/output directory, otherwise the system reports that the output already exists.
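In Hadoop 1.x the output directory can be deleted with, for example:

hadoop fs -rmr /your-dir/your-dir/output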
5: The results of the run are:
hello 2
hadoop 2
word 1
this 1
is 1
my 1
first 1
program 1
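These counts can also be read straight from HDFS. With the new mapreduce API the reducer output is typically written to a file named part-r-00000 (the exact file name is an assumption, not stated in the original):

hadoop fs -cat /your-dir/your-dir/output/part-r-00000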
Reposted from: https://blog.51cto.com/3157689/1350178