This article shows how to turn in-memory Java Lists into RDDs with the Spark context, and then uses those RDDs to walk through the difference between the join and cogroup transformations.
I. Background
Among Spark's RDD transformation operators, join and cogroup are easy to confuse, so this article runs both on the same data to show exactly how their outputs differ.
II. Examples
1. Build the sample List data
import java.util.Arrays;
import java.util.List;

import scala.Tuple2;

// Student records: (id, name). Note that id 2 appears twice.
List<Tuple2<Integer, String>> studentsList = Arrays.asList(
        new Tuple2<Integer, String>(1, "xufengnian"),
        new Tuple2<Integer, String>(2, "xuyao"),
        new Tuple2<Integer, String>(2, "wangchudong"),
        new Tuple2<Integer, String>(3, "laohuang")
);

// Score records: (id, score). Every id appears more than once.
List<Tuple2<Integer, Integer>> scoresList = Arrays.asList(
        new Tuple2<Integer, Integer>(1, 100),
        new Tuple2<Integer, Integer>(2, 90),
        new Tuple2<Integer, Integer>(3, 80),
        new Tuple2<Integer, Integer>(1, 101),
        new Tuple2<Integer, Integer>(2, 91),
        new Tuple2<Integer, Integer>(3, 81),
        new Tuple2<Integer, Integer>(3, 71)
);
2. Convert the Lists to RDDs with the Spark context
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;

JavaPairRDD<Integer, String> studentsRDD = sc.parallelizePairs(studentsList);
JavaPairRDD<Integer, Integer> scoresRDD = sc.parallelizePairs(scoresList);

// studentsRDD now holds the pairs (1,xufengnian) (2,xuyao) (2,wangchudong) (3,laohuang).
// Print them to check (with foreach, output order is not guaranteed):
studentsRDD.foreach(new VoidFunction<Tuple2<Integer, String>>() {
    @Override
    public void call(Tuple2<Integer, String> tuple) throws Exception {
        System.out.println(tuple._1); // 1 2 2 3
        System.out.println(tuple._2); // xufengnian xuyao wangchudong laohuang
    }
});
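All of the snippets here assume an already-created JavaSparkContext named sc. For completeness, a minimal driver sketch (the class name, app name, and local[*] master are illustrative assumptions, not from the original article):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class JoinVsCogroupDemo {
    public static void main(String[] args) {
        // local[*] runs Spark in-process using all cores; replace with a real master URL on a cluster
        SparkConf conf = new SparkConf().setAppName("JoinVsCogroupDemo").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... build studentsList/scoresList and call sc.parallelizePairs(...) as shown above ...
        sc.close();
    }
}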
3. Perform the join
/*
Input:
(1,xufengnian) (2,xuyao) (2,wangchudong) (3,laohuang)
(1,100) (2,90) (3,80) (1,101) (2,91) (3,81) (3,71)
After join (order may vary):
(1,(xufengnian,100)) (1,(xufengnian,101)) (3,(laohuang,80)) (3,(laohuang,81)) (3,(laohuang,71))
(2,(xuyao,90)) (2,(xuyao,91)) (2,(wangchudong,90)) (2,(wangchudong,91))
*/
JavaPairRDD<Integer, Tuple2<String, Integer>> studentScores = studentsRDD.join(scoresRDD);
// join matches records with equal keys: the key is unchanged, and the value becomes
// a Tuple2<String, Integer> holding one value from each side, one record per combination.
studentScores.foreach(new VoidFunction<Tuple2<Integer, Tuple2<String, Integer>>>() {
    private static final long serialVersionUID = 1L;

    @Override
    public void call(Tuple2<Integer, Tuple2<String, Integer>> student)
            throws Exception {
        System.out.println("student id: " + student._1);       // e.g. 1 1 3 ...
        System.out.println("student name: " + student._2._1);  // e.g. xufengnian xufengnian laohuang ...
        System.out.println("student score: " + student._2._2); // e.g. 100 101 80 ...
        System.out.println("===================================");
    }
});
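Note that foreach runs on the executors, so on a real cluster the println output lands in executor logs and its order is never guaranteed. For demo-sized data like this, one option (a sketch, not part of the original code) is to collect the joined pairs back to the driver and print there:

// Safe only for small data: collect() pulls every record to the driver
for (Tuple2<Integer, Tuple2<String, Integer>> t : studentScores.collect()) {
    System.out.println(t._1 + " -> " + t._2._1 + " : " + t._2._2);
}
// join emits one record per matching combination per key: 1*2 + 2*2 + 1*3 = 9 records
System.out.println("total joined records: " + studentScores.count());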
4. Perform the cogroup
/*
Input:
(1,xufengnian) (2,xuyao) (2,wangchudong) (3,laohuang)
(1,100) (2,90) (3,80) (1,101) (2,91) (3,81) (3,71)
After cogroup (exactly one record per key; order may vary):
(1,([xufengnian],[100,101])) (3,([laohuang],[80,81,71])) (2,([xuyao,wangchudong],[90,91]))
*/
JavaPairRDD<Integer, Tuple2<Iterable<String>, Iterable<Integer>>> studentScores2 = studentsRDD.cogroup(scoresRDD);
studentScores2.foreach(new VoidFunction<Tuple2<Integer, Tuple2<Iterable<String>, Iterable<Integer>>>>() {
    @Override
    public void call(Tuple2<Integer, Tuple2<Iterable<String>, Iterable<Integer>>> stu) throws Exception {
        System.out.println("stu id:" + stu._1);      // 1 3 2 (one line per key)
        System.out.println("stu name:" + stu._2._1); // [xufengnian] [laohuang] [xuyao, wangchudong]
        System.out.println("stu score:" + stu._2._2); // [100,101] [80,81,71] [90,91]
        // The scores for each key come back grouped in a single Iterable:
        for (Integer score : stu._2._2) {
            System.out.println(score); // 100 101, then 80 81 71, then 90 91
        }
        System.out.println("===================================");
    }
});
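Because cogroup hands back all of a key's values in one record, per-key aggregates fall out naturally. As a rough sketch building on the studentScores2 RDD above (the avgScores name is just illustrative), the average score per student id:

import org.apache.spark.api.java.function.Function;

JavaPairRDD<Integer, Double> avgScores = studentScores2.mapValues(
        new Function<Tuple2<Iterable<String>, Iterable<Integer>>, Double>() {
            @Override
            public Double call(Tuple2<Iterable<String>, Iterable<Integer>> v) throws Exception {
                long sum = 0;
                long count = 0;
                for (Integer score : v._2) { // all scores for this key, already grouped
                    sum += score;
                    count++;
                }
                return count == 0 ? 0.0 : (double) sum / count;
            }
        });
// Expected: (1,100.5) (2,90.5) (3,77.33...)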
To sum up: join emits one output record per matching (left, right) value combination under each key, while cogroup emits exactly one record per key, with each side's values collected into an Iterable. Use join when you want flattened pairs, and cogroup when you want the grouped values themselves.