I'm trying to use the YARN REST API to submit a job that I normally run via spark-submit on the command line.
My command-line spark-submit looks like this:
JAVA_HOME=/usr/local/java7/ HADOOP_CONF_DIR=/etc/hadoop/conf /usr/local/spark-1.5/bin/spark-submit \
  --driver-class-path "/etc/hadoop/conf" \
  --class MySparkJob \
  --master yarn-cluster \
  --conf "spark.executor.extraClassPath=/usr/local/hadoop/client/hadoop-*" \
  --conf "spark.driver.extraClassPath=/usr/local/hadoop/client/hadoop-*" \
  spark-job.jar --retry false --counter 10
Reading through the YARN REST API documentation at https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_APISubmit_Application, the JSON payload I'm trying to POST looks like:
{ "am-container-spec": { "commands": { "command": "JAVA_HOME=/usr/local/java7/ HADOOP_CONF_DIR=/etc/hadoop/conf org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster --jar spark-job.jar --class MySparkJob --arg --retry --arg false --arg --counter --arg 10" }, "local-resources": { "entry": [ { "key": "spark-job.jar", "value": { "resource": "hdfs:///spark-job.jar", "size": 3214567, "timestamp": 1452408423000, "type": "FILE", "visibility": "APPLICATION" } } ] } }, "application-id": "application_11111111111111_0001", "application-name": "test", "application-type": "Spark" }
The issue I see is that the Hadoop configs directory was previously local to the machine I was running the job from. Now that I'm submitting the job through the REST API, directly against the RM, I don't know how to provide those details?
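From the same documentation page, am-container-spec also accepts an environment map, so one thing I'm unsure about is whether setting the variables there is the right approach, something like the sketch below (untested, and it assumes /etc/hadoop/conf also exists on the cluster node that ends up running the AM, which may not be true):

"am-container-spec": {
  "environment": {
    "entry": [
      { "key": "JAVA_HOME", "value": "/usr/local/java7/" },
      { "key": "HADOOP_CONF_DIR", "value": "/etc/hadoop/conf" }
    ]
  },
  ...
}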