I'm trying to run a standalone Spark application from the command line on an EC2 YARN cluster. I'm submitting the job with the following spark-submit script:
./bin/spark-submit --class PageRankGraphX --master yarn-cluster --properties-file spark-defaults.conf.2 --executor-memory 2G --total-executor-cores 5 ./SparkPageRank-assembly-1.0.jar s3://linkfilefull/full/links_small.txt s3://conansoutputbucket/smalloutput.txt 10 0.15 2
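An aside on the flags, in case it is relevant (this is my reading of the spark-submit documentation, not something the error told me): --total-executor-cores only applies to Spark standalone and Mesos masters, so YARN may simply be ignoring it here. On YARN the resource shape is set with --num-executors and --executor-cores instead. A sketch of the same submission with those flags, the 5 executors / 1 core split being an arbitrary example:

# same job, with the YARN-specific resource flags
./bin/spark-submit \
  --class PageRankGraphX \
  --master yarn-cluster \
  --properties-file spark-defaults.conf.2 \
  --executor-memory 2G \
  --num-executors 5 \
  --executor-cores 1 \
  ./SparkPageRank-assembly-1.0.jar \
  s3://linkfilefull/full/links_small.txt \
  s3://conansoutputbucket/smalloutput.txt \
  10 0.15 2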
This is the output. No exception or error is thrown; the job simply fails after running:
15/04/15 21:27:03 INFO yarn.Client: Application report from ASM:
application identifier: application_1429126831428_0027
appId: 27
clientToAMToken: null
appDiagnostics:
appMasterHost: ip-172-31-1-67.eu-west-1.compute.internal
appQueue: default
appMasterRpcPort: 0
appStartTime: 1429133214320
yarnAppState: RUNNING
distributedFinalState: UNDEFINED
appTrackingUrl: http://ift.tt/1ysGuR4
appUser: hadoop
15/04/15 21:27:04 INFO yarn.Client: Application report from ASM:
application identifier: application_1429126831428_0027
appId: 27
clientToAMToken: null
appDiagnostics:
appMasterHost: ip-172-31-1-67.eu-west-1.compute.internal
appQueue: default
appMasterRpcPort: 0
appStartTime: 1429133214320
yarnAppState: FINISHED
distributedFinalState: FAILED
appTrackingUrl: http://ift.tt/1zhNICs
appUser: hadoop
Does anyone know what could be causing this, or how I could investigate further? When I try to access the YARN logs, it says the logs are disabled or not yet ready.
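For reference, this is how I understand log retrieval is supposed to work once the application has finished (the application ID below is copied from the report above):

# fetch the aggregated container logs for the failed application
yarn logs -applicationId application_1429126831428_0027

My assumption is that this only works once yarn.log-aggregation-enable is set to true in yarn-site.xml and YARN has been restarted; until then, yarn logs reports that aggregation is not enabled, which would match the "logs are disabled or not ready" message I'm seeing.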