public class EmployeeBean implements Serializable {
private Long id;
private String name;
private Long salary;
private Integer age;
// getters and setters
}
Relevant Spark code:
// Requires: import static org.apache.spark.sql.functions.lit;
SparkSession spark = SparkSession.builder().master("local[2]").appName("play-with-spark").getOrCreate();
List<EmployeeBean> employees1 = populateEmployees(1, 10);
// Build a typed Dataset from the bean list using a bean encoder
Dataset<EmployeeBean> ds1 = spark.createDataset(employees1, Encoders.bean(EmployeeBean.class));
ds1.show();
ds1.printSchema();
// Split into null-age and non-null-age rows, tag each half, then union them back together
Dataset<Row> ds2 = ds1.where("age is null").withColumn("is_age_null", lit(true));
Dataset<Row> ds3 = ds1.where("age is not null").withColumn("is_age_null", lit(false));
Dataset<Row> ds4 = ds2.union(ds3);
ds4.show();
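(The populateEmployees helper called above is not shown in the post; the sketch below is hypothetical, and its id/name/salary/null-age generation logic is an assumption made only so the snippet can be run end to end.)

private static List<EmployeeBean> populateEmployees(int from, int to) {
    // Hypothetical helper, not part of the original code (requires java.util.List / java.util.ArrayList):
    // builds sample beans, leaving age null for odd ids so that both filter branches above get rows.
    List<EmployeeBean> employees = new ArrayList<>();
    for (int i = from; i <= to; i++) {
        EmployeeBean e = new EmployeeBean();
        e.setId((long) i);
        e.setName("dev" + i);
        e.setSalary(10000L + i * 1000L);
        e.setAge(i % 2 == 0 ? i : null); // assumption: null age for odd ids
        employees.add(e);
    }
    return employees;
}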
Relevant output:
DS1
+----+---+----+------+
| age| id|name|salary|
+----+---+----+------+
|null| 1|dev1| 11000|
| 2| 2|dev2| 12000|
|null| 3|dev3| 13000|
| 4| 4|dev4| 14000|
|null| 5|dev5| 15000|
+----+---+----+------+
DS4
+----+---+----+------+-----------+
| age| id|name|salary|is_age_null|
+----+---+----+------+-----------+
|null| 1|dev1| 11000| true|
|null| 3|dev3| 13000| true|
|null| 5|dev5| 15000| true|
| 2| 2|dev2| 12000| false|
| 4| 4|dev4| 14000| false|
+----+---+----+------+-----------+
Is there a better solution for adding this column to the dataset, rather than creating two datasets and performing a union?
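For context, a minimal sketch of the kind of single-pass alternative typically used here (assuming the same ds1 as above; Column.isNull() yields the boolean flag directly, so no split and union is needed):

// Requires: import static org.apache.spark.sql.functions.col;
Dataset<Row> ds4 = ds1.withColumn("is_age_null", col("age").isNull());
ds4.show();

Note that, unlike the union version, this keeps the original row order rather than grouping the null-age rows first.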