I have a Spark dataset that I need to group by date, zero-filling the data whenever an id has no row for a given date. I also need to do a 30-day transformation, because in the original dataset my end date spans 30 days. Below is a sample of the dataset I am working with. What is the best way to achieve this transformation?
val genre = sc.parallelize(List(
  ("id1", "2016-05-01", "action", 0),
  ("id1", "2016-05-03", "horror", 1),
  ("id2", "2016-05-03", "art",    0),
  ("id2", "2016-05-04", "action", 0)
)).toDF("id", "date", "genre", "score")
Output
+---+----------+------+-----+
| id| date| genre|score|
+---+----------+------+-----+
|id1|2016-05-01|action| 0|
|id1|2016-05-03|horror| 1|
|id2|2016-05-03| art| 0|
|id2|2016-05-04|action| 0|
+---+----------+------+-----+
Desired output
+----------+----------------------------+
|      date|                     grouped|
+----------+----------------------------+
|2016-05-01|[[id1,action,1],[id2,0,0]]  |
|2016-05-02|[[id1,0,0],[id2,0,0]]       |
|2016-05-03|[[id1,horror,1],[id2,art,0]]|
|2016-05-04|[[id1,0,0],[id2,action,0]]  |
+----------+----------------------------+
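One direction I have been considering is sketched below; it is my own rough attempt, not a working solution. It assumes Spark 2.x with a SparkSession named spark, and it hard-codes the date range to the four sample days instead of a real 30-day window. The idea is to build every (date, id) combination with a cross join, left-join the original rows onto that calendar, zero-fill the gaps, and then collect one array of [id, genre, score] structs per date:

import java.time.LocalDate
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().appName("zero-fill-by-date").master("local[*]").getOrCreate()
import spark.implicits._

val genre = Seq(
  ("id1", "2016-05-01", "action", 0),
  ("id1", "2016-05-03", "horror", 1),
  ("id2", "2016-05-03", "art",    0),
  ("id2", "2016-05-04", "action", 0)
).toDF("id", "date", "genre", "score")

// Calendar of dates to fill; hard-coded to the sample's four days here.
// For the real case this would be endDate.minusDays(29) through endDate.
val start = LocalDate.parse("2016-05-01")
val dates = (0 until 4).map(i => start.plusDays(i).toString).toDF("date")

// Every (date, id) pair that must appear in the output.
val calendar = dates.crossJoin(genre.select("id").distinct())

// Left-join the real rows onto the calendar and zero-fill the gaps.
val filled = calendar
  .join(genre, Seq("date", "id"), "left")
  .na.fill("0", Seq("genre"))
  .na.fill(0, Seq("score"))

// One row per date with an array of [id, genre, score] structs.
val grouped = filled
  .groupBy("date")
  .agg(sort_array(collect_list(struct($"id", $"genre", $"score"))).as("grouped"))
  .orderBy("date")

grouped.show(truncate = false)

Is something along these lines reasonable, or is there a better way to generate the 30-day calendar and do the zero fill?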