
spark-submit fails when an extra classpath jar is given from an HDFS location

  • Bikas Katwal  ·  asked 6 years ago

    I'm running my Spark job in client mode, so I run the spark-submit command below:

    spark-submit --class "com.bk.App" --master yarn --deploy-mode client --executor-cores 2  --driver-memory 1G --driver-cores 1 --driver-class-path /home/my_account/spark-jars/guava-19.0.jar  --conf spark.executor.extraClassPath=/home/my_account/spark-jars/guava-19.0.jar maprfs:///user/my_account/jobs/spark-jobs.jar parma1 parma2 
    

    The command fails while downloading the application jar, with this error:

    Downloading maprfs:///user/my_account/jobs/spark-jobs.jar to /tmp/tmp732578642370806645/user/my_account/jobs/spark-jobs.jar.
    2018-10-29 19:37:52,2025 ERROR JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:566 Thread: 6832 Client initialization failed due to mismatch of libraries. Please make sure that the java library version matches the native build version 5.0.0.32987.GA and native patch version $Id: mapr-version: 5.0.0.32987.GA 40889:3056362e419b $
    Exception in thread "main" java.io.IOException: Could not create FileClient
        at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:593)
        at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:654)
        at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1310)
        at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:942)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:345)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:297)
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2066)
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2035)
        at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2011)
        at org.apache.spark.deploy.SparkSubmit$.downloadFile(SparkSubmit.scala:874)
        at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$1.apply(SparkSubmit.scala:316)
        at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$1.apply(SparkSubmit.scala:316)
        at scala.Option.map(Option.scala:146)
        at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:316)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:153)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    

    I even tried putting my guava jar in an HDFS location and referencing it with hdfs:// and maprfs:// URIs in my spark-submit command. I tried local:// as well. All of these produce the same error.
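For concreteness, the variants described above amount to trying different values for the same option. A minimal sketch (the maprfs:// jar path is an assumption, mirroring the jobs path in the command above):

```shell
# Sketch of the extraClassPath values tried, per the question above.
# The maprfs:// path is assumed; none of these resolved the error.
JAR_LOCAL="/home/my_account/spark-jars/guava-19.0.jar"
JAR_MAPRFS="maprfs:///user/my_account/spark-jars/guava-19.0.jar"

VARIANTS=""
for JAR in "$JAR_LOCAL" "$JAR_MAPRFS" "local://$JAR_LOCAL"; do
  # Each variant only changes the URI scheme in front of the same jar.
  VARIANTS="$VARIANTS --conf spark.executor.extraClassPath=$JAR"
  echo "--conf spark.executor.extraClassPath=$JAR"
done
```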

    Note: the job works perfectly fine when no extra classpath jars are given to the driver and executors.

    Any suggestions? Am I using the classpath arguments the wrong way?
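A common alternative to per-node classpath entries is to ship the jar with --jars and let YARN distribute it to the driver and executors. This is only a sketch reusing the paths from the command above, not a confirmed fix for the MapR library mismatch; the command is printed rather than executed since no cluster is available here:

```shell
# Hypothetical variant of the original command: --jars uploads
# guava-19.0.jar and distributes it, so no HDFS/maprfs classpath URI
# is needed in extraClassPath.
CMD='spark-submit --class "com.bk.App" --master yarn --deploy-mode client --executor-cores 2 --driver-memory 1G --driver-cores 1 --jars /home/my_account/spark-jars/guava-19.0.jar maprfs:///user/my_account/jobs/spark-jobs.jar parma1 parma2'
echo "$CMD"
```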

    0 replies  |  as of 6 years ago