I am using the Cloudera Quickstart VM CDH 5.3.0 (in terms of parcels) with Spark 1.2.0, where

    $SPARK_HOME=/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark
and I am submitting the Spark application using the command

    ./bin/spark-submit --class Spark_App_Main_Class_Name --master spark://localhost.localdomain:7077 --deploy-mode client --executor-memory 4G ../apps/Spark_App_Target_Jar_Name.jar

Spark_App_Main_Class_Name.scala:
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkConf
    import org.apache.spark.mllib.util.MLUtils

    object Spark_App_Main_Class_Name {

        def main(args: Array[String]) {
            val hConf = new SparkConf()
                .set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
                .set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
            val sc = new SparkContext(hConf)
            val data = MLUtils.loadLibSVMFile(sc, "hdfs://localhost.localdomain:8020/analytics/data/mllib/sample_libsvm_data.txt")
            ...
        }
    }
But I am getting a ClassNotFoundException for org.apache.hadoop.hdfs.DistributedFileSystem while submitting the application in client mode:
    [cloudera@localhost bin]$ ./spark-submit --class Spark_App_Main_Class_Name --master spark://localhost.localdomain:7077 --deploy-mode client --executor-memory 4G ../apps/Spark_App_Target_Jar_Name.jar
    15/11/30 09:46:34 INFO SparkContext: Spark configuration:
    spark.app.name=Spark_App_Main_Class_Name
    spark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
    spark.eventLog.dir=hdfs://localhost.localdomain:8020/user/spark/applicationHistory
    spark.eventLog.enabled=true
    spark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop/lib/native
    spark.executor.memory=4G
    spark.jars=file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/bin/../apps/Spark_App_Target_Jar_Name.jar
    spark.logConf=true
    spark.master=spark://localhost.localdomain:7077
    spark.yarn.historyServer.address=http://localhost.localdomain:18088
    15/11/30 09:46:34 WARN Utils: Your hostname, localhost.localdomain resolves to a loopback address: 127.0.0.1; using 10.113.234.150 instead (on interface eth12)
    15/11/30 09:46:34 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
    15/11/30 09:46:34 INFO SecurityManager: Changing view acls to: cloudera
    15/11/30 09:46:34 INFO SecurityManager: Changing modify acls to: cloudera
    15/11/30 09:46:34 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(cloudera); users with modify permissions: Set(cloudera)
    15/11/30 09:46:35 INFO Slf4jLogger: Slf4jLogger started
    15/11/30 09:46:35 INFO Remoting: Starting remoting
    15/11/30 09:46:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.113.234.150:59473]
    15/11/30 09:46:35 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@10.113.234.150:59473]
    15/11/30 09:46:35 INFO Utils: Successfully started service 'sparkDriver' on port 59473.
    15/11/30 09:46:36 INFO SparkEnv: Registering MapOutputTracker
    15/11/30 09:46:36 INFO SparkEnv: Registering BlockManagerMaster
    15/11/30 09:46:36 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20151130094636-8c3d
    15/11/30 09:46:36 INFO MemoryStore: MemoryStore started with capacity 267.3 MB
    15/11/30 09:46:38 INFO HttpFileServer: HTTP File server directory is /tmp/spark-7d1f2861-a568-4919-8f7e-9a9fe6aab2b4
    15/11/30 09:46:38 INFO HttpServer: Starting HTTP Server
    15/11/30 09:46:38 INFO Utils: Successfully started service 'HTTP file server' on port 50003.
    15/11/30 09:46:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.
    15/11/30 09:46:38 INFO SparkUI: Started SparkUI at http://10.113.234.150:4040
    15/11/30 09:46:39 INFO SparkContext: Added JAR file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/spark/bin/../apps/Spark_App_Target_Jar_Name.jar at http://10.113.234.150:50003/jars/Spark_App_Target_Jar_Name.jar with timestamp 1448894799228
    15/11/30 09:46:39 INFO AppClient$ClientActor: Connecting to master spark://localhost.localdomain:7077...
    15/11/30 09:46:40 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151130094640-0000
    15/11/30 09:46:41 INFO NettyBlockTransferService: Server created on 56458
    15/11/30 09:46:41 INFO BlockManagerMaster: Trying to register BlockManager
    15/11/30 09:46:41 INFO BlockManagerMasterActor: Registering block manager 10.113.234.150:56458 with 267.3 MB RAM, BlockManagerId(<driver>, 10.113.234.150, 56458)
    15/11/30 09:46:41 INFO BlockManagerMaster: Registered BlockManager
    Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2047)
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2578)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
        at org.apache.spark.util.FileLogger.<init>(FileLogger.scala:90)
        at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:63)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:352)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:92)
        at Spark_App_Main_Class_Name$.main(Spark_App_Main_Class_Name.scala:22)
        at Spark_App_Main_Class_Name.main(Spark_App_Main_Class_Name.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hdfs.DistributedFileSystem not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1953)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2045)
        ... 16 more
It seems that the Spark application is not able to map to HDFS, because initially I was getting the error:
    Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
        at org.apache.spark.util.FileLogger.<init>(FileLogger.scala:90)
        at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:63)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:352)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:92)
        at LogisticRegressionwithBFGS$.main(LogisticRegressionwithBFGS.scala:21)
        at LogisticRegressionwithBFGS.main(LogisticRegressionwithBFGS.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
and I followed hadoop No FileSystem for scheme: file to add "fs.hdfs.impl" and "fs.file.impl" to the Spark configuration settings.
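For reference, as far as I can tell, plain SparkConf keys are not copied into the Hadoop Configuration that Spark builds internally (e.g. for the event log); only keys carrying the "spark.hadoop." prefix are forwarded. A minimal sketch of that variant (the object name FsImplSketch is hypothetical, and note this alone cannot cure a ClassNotFoundException, since the implementation class must still be on the classpath):

    import org.apache.spark.{SparkConf, SparkContext}

    object FsImplSketch {
        def main(args: Array[String]) {
            // Keys prefixed with "spark.hadoop." are copied into the Hadoop
            // Configuration that Spark creates internally.
            val conf = new SparkConf()
                .set("spark.hadoop.fs.hdfs.impl",
                     classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
                .set("spark.hadoop.fs.file.impl",
                     classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
            val sc = new SparkContext(conf)
            // ... use sc as before ...
        }
    }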
You need to have the hadoop-hdfs-2.x jar (maven link) in your classpath. While submitting your application, specify the additional jar location using the --jars option of spark-submit.
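A sketch of what that submit command could look like, assuming the hadoop-hdfs jar sits at its usual location inside the CDH 5.3.0 parcel (verify the exact path on your VM before running):

    # The hadoop-hdfs.jar path below is an assumption based on the parcel
    # layout from the question; adjust it to where the jar actually lives.
    ./spark-submit --class Spark_App_Main_Class_Name \
        --master spark://localhost.localdomain:7077 \
        --deploy-mode client \
        --executor-memory 4G \
        --jars /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hadoop-hdfs/hadoop-hdfs.jar \
        ../apps/Spark_App_Target_Jar_Name.jar

In client mode, jars passed via --jars are added to the driver's classpath as well as shipped to the executors; if the driver still cannot see the class, --driver-class-path is the usual fallback.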
On a different note, you should ideally be moving to CDH 5.5, which ships Spark 1.5.