I'm not sure of the best way to approach this scenario, so let me explain what I'm trying to do and go from there.
- I am currently using a single Ubuntu server running SlimerJS, Node.js, and Nginx.
- I'm using SlimerJS together with xvfb-run to run some custom HTML5 Canvas rendering, outputting a series of images that are then compiled into a video. In other words, it snapshots on the fly at 24 FPS for video output.
- This is a very CPU-intensive process, and the more jobs running concurrently, the longer each one takes. So I built a fairly beefy server (a 30 GB RAM, 16-CPU VM on Rackspace) for testing. I can run 30+ jobs on this box simultaneously and it works fairly well.
- What I'm trying to figure out is whether I can move this onto a scalable platform. When traffic is very low, a 2 GB, 2-CPU VM would work great, but as traffic and load increase it would auto-scale to use more CPUs as needed.
- I could easily create more large servers, put them all behind a load balancer, and send traffic to whichever node is available, but that requires keeping a bunch of dedicated VMs up and running.
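For context, the per-job rendering described above boils down to two commands. This is only a sketch: the SlimerJS script name, its `--out` argument, and the frame paths are assumptions, not the actual setup, and the ffmpeg flags are the standard way to encode an image sequence at 24 FPS.

```shell
# Render frames headlessly under a virtual framebuffer.
# render.js and its --out argument are hypothetical stand-ins
# for the actual SlimerJS rendering script.
xvfb-run -a slimerjs render.js --out /tmp/job42/frame_%05d.png

# Stitch the numbered frames into a 24 FPS H.264 video.
ffmpeg -framerate 24 -i /tmp/job42/frame_%05d.png \
       -c:v libx264 -pix_fmt yuv420p /tmp/job42/out.mp4
```

The `-a` flag lets xvfb-run pick a free X display number automatically, which matters when many jobs run on the same box.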
How would I go about provisioning a large VM with a small number of CPUs initially and scaling that CPU count up as jobs come in? Am I stuck provisioning multiple VMs that sit idle until they join in, or can I provision a single large VM and just draw CPU from a pool? Does that make sense?
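As far as I know, public clouds generally don't let you hot-add vCPUs to a running VM from a shared pool; the usual answer is horizontal scaling driven by queue depth. A minimal sketch of the sizing arithmetic, assuming a job queue whose depth you can read (all names and the jobs-per-worker figure are assumptions; the 16-CPU box above ran ~30 jobs, i.e. roughly 2 jobs per CPU):

```shell
#!/usr/bin/env bash
# Decide how many render workers are needed from the queue depth.

desired_workers() {
  local queue_depth=$1       # pending render jobs
  local jobs_per_worker=$2   # concurrent jobs one VM handles comfortably
  local min=$3 max=$4        # scaling bounds
  # Round up: ceil(queue_depth / jobs_per_worker)
  local n=$(( (queue_depth + jobs_per_worker - 1) / jobs_per_worker ))
  (( n < min )) && n=$min
  (( n > max )) && n=$max
  echo "$n"
}

desired_workers 45 30 1 10    # prints 2
desired_workers 0 30 1 10     # prints 1 (never below the floor)
```

A cron job or small daemon could run this against the queue and grow or shrink the worker pool through the cloud provider's API.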
I haven't played around with EC2 or Azure much, but I'm wondering if they have the tools to scale something like this on demand (in essence, IaaS on demand).
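On EC2 this pattern maps to an Auto Scaling group with a target-tracking policy on average CPU, which grows and shrinks the fleet between a floor and a ceiling. A rough sketch with the AWS CLI; the group name, launch template, subnet ID, and 70% target are placeholders, not recommendations:

```shell
# Create an Auto Scaling group from a launch template
# (all names and IDs here are hypothetical).
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name render-workers \
  --launch-template LaunchTemplateName=render-worker-template \
  --min-size 1 --max-size 10 \
  --vpc-zone-identifier subnet-0abc123

# Track average CPU across the group: instances are added when
# utilization rises above the target and removed when it falls.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name render-workers \
  --policy-name cpu-target \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 70.0
  }'
```

One caveat with CPU-based scaling for batch rendering: a long-running job pins its instance, so you'd also want scale-in protection (or queue-depth metrics instead of CPU) to avoid terminating a VM mid-render.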