I am new to Spark, and trying to start Spark on an EC2 cluster.
I have been following this guide [http://ift.tt/1zw4eOf]. Briefly, I:
- generated a key pair in the management console, saved it in my home directory, and ran chmod 400 mykey.pem
- created a cluster: ./spark-ec2 --spark-version=1.2.1 --key-pair=mykey --identity-file=/Users/myuser/mykey.pem launch test-cluster
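Since spark-ec2 hands the identity file straight to ssh, it can help to confirm up front that the key exists at the expected path and has owner-only permissions. A minimal sketch (the path /Users/myuser/mykey.pem is the one from my launch command; the helper name is mine):

```python
import os
import stat

def check_key(path):
    """Return (exists, octal_mode) for an ssh identity file.

    ssh rejects keys that are readable by group/others, so the
    mode should come back as '0o400' after chmod 400.
    """
    if not os.path.isfile(path):
        return False, None
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return True, oct(mode)

# Hypothetical check before launching the cluster:
print(check_key("/Users/myuser/mykey.pem"))
```

If this prints (False, None), ssh will fail with "No such file or directory" exactly as in the log below.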
This looks fine at first, but it then stalls for more than 15 minutes while displaying: Waiting for all instances in cluster to enter 'ssh-ready' state...
By that time, the instance is visible in the web management console ("running", not "initialising").
If I try to log in with ./spark-ec2 -k mykey -i mykey.pem login test-cluster, I get an error:
Found 1 master(s), 1 slaves
Logging into master ec2-54-152-145-139.compute-1.amazonaws.com...
Warning: Identity file mykey.pem not accessible: No such file or directory.
Warning: Permanently added 'ec2-54-152-145-139.compute-1.amazonaws.com,54.152.145.139' (RSA) to the list of known hosts.
Permission denied (publickey).
Traceback (most recent call last):
  File "./spark_ec2.py", line 1093, in <module>
    main()
  File "./spark_ec2.py", line 1085, in main
    real_main()
  File "./spark_ec2.py", line 1018, in real_main
    ssh_command(opts) + proxy_opt + ['-t', '-t', "%s@%s" % (opts.user, master)])
  File "/Users/myuser/anaconda/lib/python2.7/subprocess.py", line 540, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ssh', '-o', 'StrictHostKeyChecking=no', '-o', 'UserKnownHostsFile=/dev/null', '-i', 'mykey.pem', '-t', '-t', u'root@ec2-54-152-145-139.compute-1.amazonaws.com']' returned non-zero exit status 255
The error message is much the same once the cluster setup completes: the script automatically tries to connect via ssh, fails, and shuts the cluster down.
The key itself is not the problem, though: I have used it to create EC2 instances via the web console, and logging into those via ssh with mykey.pem works fine.
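One detail I noticed in the warning above ("Identity file mykey.pem not accessible"): a bare filename like mykey.pem is resolved against the current working directory, not against the directory where the key actually lives. A small sketch of that difference (the absolute path is the one from my launch command):

```python
import os

# ssh resolves a bare `-i mykey.pem` against the current working
# directory, so it only works when run from the key's directory.
relative = "mykey.pem"
absolute = "/Users/myuser/mykey.pem"  # assumed location of the key

print(os.path.isabs(relative))   # False: depends on os.getcwd()
print(os.path.isabs(absolute))   # True: works from any directory

# The relative name is just cwd + filename:
print(os.path.abspath(relative) == os.path.join(os.getcwd(), relative))
```

Which might explain why the same key works from the web console but not here, if the login command is run from a different directory.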
Any help would be appreciated.