I am getting an error when launching a standalone Spark driver in cluster mode. According to the documentation, cluster mode is supported as of the Spark 1.2.1 release, but it is currently not working for me. Please help me fix whatever is preventing it from running.
I have a 3-node cluster, and I launch the driver from the master node with the command below. The driver is launched on a slave node and fails with the error shown.
Command:
/usr/local/spark-1.2.1-bin-hadoop2.4/bin/spark-submit \
  --class com.mashery.firststep.aggregator.FirstStepMessageProcessor \
  --master spark://ec2-xx.xx.xx.compute-1.amazonaws.com:7077 \
  --deploy-mode cluster \
  --supervise \
  file:///home/xyz/sparkstreaming-0.0.1-SNAPSHOT.jar \
  /home/xyz/config.properties
Output:
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/02/28 17:41:16 INFO SecurityManager: Changing view acls to: root
15/02/28 17:41:16 INFO SecurityManager: Changing modify acls to: root
15/02/28 17:41:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/02/28 17:41:16 INFO Slf4jLogger: Slf4jLogger started
15/02/28 17:41:16 INFO Utils: Successfully started service 'driverClient' on port 48740.
Sending launch command to spark://ec2-xx.xx.xx.compute-1.amazonaws.com:7077
Driver successfully submitted as driver-20150228174117-0003
... waiting before polling master for driver state
... polling master for driver state
State of driver-20150228174117-0003 is RUNNING
Driver running on ec2-yy.yy.yy.compute-1.amazonaws.com:36323 (worker-20150228171635-ec2-yy.yy.yy.compute-1.amazonaws.com-36323)
Log from the driver's stderr:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/worker/driver-20150228174117-0003/sparkstreaming-0.0.1-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/spark-1.2.1-bin-hadoop2.4/lib/spark-assembly-1.2.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.net.BindException: Failed to bind to: http://ift.tt/1G3HkG7: Service 'Driver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:391)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:388)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
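In case it is relevant, one thing I have not yet tried is pinning the driver's bind address on each worker via conf/spark-env.sh. SPARK_LOCAL_IP is a standard Spark environment variable, but whether it addresses this particular BindException in my setup is only my assumption; the IP below is a placeholder:

```shell
# Sketch only: conf/spark-env.sh on each worker node.
# SPARK_LOCAL_IP tells Spark which local address to bind services to;
# the address shown is a hypothetical private IP for that worker.
export SPARK_LOCAL_IP=10.0.0.12

# Restart the worker so the setting takes effect
/usr/local/spark-1.2.1-bin-hadoop2.4/sbin/stop-slaves.sh
/usr/local/spark-1.2.1-bin-hadoop2.4/sbin/start-slaves.sh
```

Would this be the right direction, or is the failure caused by something else (e.g. hostname resolution in /etc/hosts on the EC2 instances)?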