Write the code in Eclipse with the Scala plugin integrated.
Code:
package wordcount

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object WordCount {
  def main(args: Array[String]): Unit = {
    // Very important: this is the entry point to the Spark cluster
    val conf = new SparkConf().setAppName("WC")
    val sc = new SparkContext(conf)
    sc.textFile(args(0))          // read input file (path from first argument)
      .flatMap(_.split(" "))      // split each line into words
      .map((_, 1))                // pair each word with a count of 1
      .reduceByKey(_ + _)         // sum the counts per word
      .sortBy(_._2, false)        // sort by count, descending
      .saveAsTextFile(args(1))    // write results (path from second argument)
    sc.stop()
  }
}
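Before submitting to the cluster, the same flatMap → map → reduce-by-key pipeline can be sanity-checked on a plain Scala collection, with no Spark dependency. This is only an illustrative sketch of the logic; `WordCountLocal` and `countWords` are hypothetical names not in the original job:

```scala
object WordCountLocal {
  // Count words using the same pipeline shape as the Spark job:
  // flatMap to words, map to (word, 1), then sum counts per key.
  def countWords(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))
      .map((_, 1))
      .groupBy(_._1)                                    // stand-in for reduceByKey
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) }

  def main(args: Array[String]): Unit = {
    val counts = countWords(Seq("hello zeng", "hello ting hello"))
    // print in descending count order, like sortBy(_._2, false)
    counts.toSeq.sortBy(-_._2).foreach(println)
  }
}
```

Running this prints `(hello,3)` first, matching the descending sort the Spark job applies with `sortBy(_._2, false)`.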
Export the project as a jar package.
Upload it to the server.
Submit command:
./spark-submit --master spark://nbdo1:7077 \
--class wordcount.WordCount \
/home/hadoop/wordcount.jar \
"hdfs://nbdo1:9000/wordcount.txt" "hdfs://nbdo1:9000/out3"
Run result:
[hadoop@nbdo1 ~]$ hdfs dfs -cat /out3/part-*
(hello,6)
(zeng,4)
(ting,2)
(miao,2)
(gen,2)
(wen,2)
(biao,2)
(zhu,2)
(ye,1)
(,1)
(zhang,1)
(ai,1)
(lai,1)
(su,1)
(qi,1)
(sheng,1)
(xiao,1)
(xiang,1)
(lu,1)
(chang,1)
(ni,1)
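Note the `(,1)` entry in the output above: it is an empty-string token. Java/Scala's `String.split(" ")` keeps the empty strings produced by consecutive spaces (only trailing empty strings are dropped), so a line with a double space yields an empty "word". A minimal demonstration (`SplitDemo` is a hypothetical name):

```scala
object SplitDemo {
  def main(args: Array[String]): Unit = {
    // Two spaces between the words produce an empty token in the middle.
    val tokens = "hello  zeng".split(" ")
    println(tokens.mkString("[", ", ", "]"))  // prints [hello, , zeng]
  }
}
```

Splitting on a regex such as `"\\s+"` instead, or filtering out empty tokens after the split, would remove this entry from the word counts.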