This may be a silly question, but I have a big mental block right now: I can't see the connection between my SQS queue and the EC2 instances that will process the messages.
That is to say: a client completes a form on a web page (hosted on an EC2 instance), and that form is sent as a message to an SQS queue. After that, the goal of my cloud application is to take the form information from the message and run a .sh script with that information. An example of this process is shown in the picture below.
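For reference, the sending side looks roughly like this (just a sketch, not my real code; the region, queue name and attribute value are placeholders):

import boto.sqs
from boto.sqs.message import Message

conn = boto.sqs.connect_to_region('us-east-1')   # placeholder region
q = conn.get_queue('form-queue')                 # placeholder queue name

# Attach the form data as a message attribute so the consumer can read it
msg = Message()
msg.set_body('new form submission')
msg.message_attributes = {
    'configName': {
        'data_type': 'String',
        'string_value': 'value the user typed in the form'  # placeholder
    }
}
q.write(msg)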
So, how can my SQS queue trigger a .sh script on an EC2 instance? The only way I have figured out to do this with Python and boto is to create a "listener" that constantly reads messages and "does something" with each one:
import time

import boto.sqs

# Connect to SQS and look up the queue (region and queue name are placeholders)
conn = boto.sqs.connect_to_region('us-east-1')
q = conn.get_queue('form-queue')

while True:
    m = conn.receive_message(
        q,
        number_messages=1,
        message_attributes=['configName'],
        attributes='All'
    )
    if not m:
        time.sleep(5)
    else:
        # Read the 'configName' attribute and the receipt handle
        a = str(m[0].message_attributes.get('configName').get('string_value'))
        rh = str(m[0].receipt_handle)
        # Process the message on this EC2 instance (this is where the .sh would run)
        conn.delete_message_from_handle(q, rh)
        time.sleep(5)
So, as seen in the code above, I read the attributes of the message (only one here, for simplicity) and then I have to process it.
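By "process it" I mean something like passing the attribute value to the shell script; a minimal sketch, assuming the script takes the config name as its first argument (the path is a placeholder):

import subprocess

# Run the shell script with the attribute value read from the message
# (/home/ec2-user/process_form.sh is a placeholder path)
exit_code = subprocess.call(['/home/ec2-user/process_form.sh', a])
if exit_code != 0:
    print('script failed for config %s' % a)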
How can I process the incoming messages in parallel on different EC2 instances? I don't see how, because I only have one listener, and while it is busy processing one message it won't pick up another until it finishes the first. I want to process any number of messages in parallel (depending on the number of instances and the bill I'm willing to pay, of course). And how do I know which EC2 instance will run my .sh program?
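The only idea I have (untested, and it may be wrong) is that each consumer simply polls the queue on its own, and SQS hides an in-flight message from the other consumers during its visibility timeout, so the same loop could be started several times, either as extra processes on one instance or as the same script running on other EC2 instances. Something like this (all names and paths are placeholders):

import multiprocessing
import subprocess
import time

import boto.sqs


def listen():
    conn = boto.sqs.connect_to_region('us-east-1')   # placeholder region
    q = conn.get_queue('form-queue')                 # placeholder queue name
    while True:
        msgs = conn.receive_message(q, number_messages=1,
                                    message_attributes=['configName'])
        if not msgs:
            time.sleep(5)
            continue
        config = msgs[0].message_attributes['configName']['string_value']
        # Run the shell script with the form data (script path is a placeholder)
        subprocess.call(['/home/ec2-user/process_form.sh', config])
        conn.delete_message_from_handle(q, msgs[0].receipt_handle)


if __name__ == '__main__':
    # Several listeners on this instance; launching the same script on every
    # EC2 instance in the fleet would scale consumption horizontally.
    for _ in range(4):
        multiprocessing.Process(target=listen).start()

Is that the right mental model?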
Is there an easier way to do this with another Amazon service?
Thanks