Saturday, April 18, 2015

Apache Spark EC2 "Not enough memory"

When using Apache Spark on EC2, I sometimes get an error message when trying to run my .jar saying that the master is unresponsive or that there may not be enough memory on the nodes. The master's UI works fine: I can log in and out, upload files to the master, and rsync them to the nodes. Furthermore, I can see that the nodes are idle (the last time this happened I had 300 GB of RAM across the cluster and the UI reported 0 memory in use, while I was only trying to perform actions on a 256 MB file). I end up having to terminate the instances and set up a new cluster, after which things generally work fine. Does anyone know why this occurs? This failure, roughly 1 in 3 launches, is becoming expensive.
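For context, this kind of "not enough memory / master unresponsive" message usually relates to the resources the driver asks for versus what the standalone master can offer, which is controlled by a handful of settings on the job itself. Below is a minimal Scala sketch, assuming a hypothetical application name, master URL, input path, and memory values (none of these come from the original post), of where those settings typically live:

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch: configuring memory and cores for a job submitted to a
// standalone Spark master on EC2. All values and URLs below are placeholders.
object MemoryConfigExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("memory-config-example")
      // placeholder master URL; on EC2 this is the master's public DNS, port 7077
      .setMaster("spark://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7077")
      .set("spark.executor.memory", "4g") // memory requested per executor
      .set("spark.driver.memory", "2g")   // memory for the driver process
      .set("spark.cores.max", "8")        // cap on total cores the job claims

    val sc = new SparkContext(conf)

    // placeholder input path; e.g. read a file and count its lines
    val lines = sc.textFile("s3n://my-bucket/my-file.txt")
    println(s"Line count: ${lines.count()}")

    sc.stop()
  }
}

If the executors request more memory or cores than the workers actually advertise to the master, the job sits unscheduled and Spark reports that no resources were accepted, which matches the symptom of a cluster showing 0 memory in use.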
