Saturday, January 31, 2015

How does Mechanical Turk re-load page content after inner-frame re-load?

Mechanical Turk allows you to give it a URL for a site to embed as an iframe. For example:


http://ift.tt/1Kh6gI1


After a user submits the iframe's form data (with action = http://ift.tt/1yZNenw), the iframe reloads, and information on the Amazon page (the outer frame, which contains the iframe) changes as well.


How does Mechanical Turk do this? How does submitting the iframe form trigger the outer page to reload with new data?





AWS ec2 winreg not found

I'm trying to run a Python app on an Amazon EC2 large instance. However, it's complaining inside scipy because it can't find a module called _winreg.


I don't know how to reconfigure things so this is no longer an issue.



""" python2 app.py * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) * Restarting with stat import _winreg as winregTraceback (most recent call last): File "app.py", line 111, in app = create_app().run(debug=True) File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 772, in run run_simple(host, port, self, **options) File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 622, in run_simple reloader_type) File "/usr/local/lib/python2.7/dist-packages/werkzeug/_reloader.py", line 265, in run_with_reloader reloader.run() File "/usr/local/lib/python2.7/dist-packages/werkzeug/_reloader.py", line 155, in run for filename in chain(_iter_module_files(), self.extra_files): File "/usr/local/lib/python2.7/dist-packages/werkzeug/_reloader.py", line 70, in _iter_module_files for package_path in getattr(module, 'path', ()): File "/usr/lib/python2.7/dist-packages/scipy/lib/six.py", line 116, in getattr _module = self._resolve() File "/usr/lib/python2.7/dist-packages/scipy/lib/six.py", line 105, in _resolve return _import_module(self.mod) File "/usr/lib/python2.7/dist-packages/scipy/lib/six.py", line 76, in _import_module import(name) ImportError: No module named _winreg """






Add or remove an entry from a List type attribute in a DynamoDB table item

I have a very simple class, with a string type primary key and List type attributes. I want to write APIs for adding and removing an entry from the attribute list and saving the changes back to DDB.


The simplest solution I can think of is:

  • Read the list (if it exists)

  • If it exists, add or remove the entry in the List type attribute

  • Put the modified object back


Is there a cleaner/simpler way to do this via the DynamoDB Java API? I spent quite some time looking it up before posting this question here.
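
For comparison, DynamoDB's UpdateItem supports update expressions that append to or remove from a List attribute without the read-modify-write cycle, and the Java SDK exposes the same expressions. A minimal sketch in Python (boto3) to show the expressions themselves; the table name "MyTable", key "id", and attribute "tags" are hypothetical:

import boto3

table = boto3.resource('dynamodb').Table('MyTable')  # hypothetical table

# Append one entry to the list attribute in a single request.
table.update_item(
    Key={'id': 'item-1'},
    UpdateExpression='SET tags = list_append(if_not_exists(tags, :empty), :new)',
    ExpressionAttributeValues={':new': ['blue'], ':empty': []},
)

# Remove the element at index 0 of the list.
table.update_item(
    Key={'id': 'item-1'},
    UpdateExpression='REMOVE tags[0]',
)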





Node Express 4.0 Edit image before uploading to Amazon AWS

This is my first post here...so here goes nothing.


I'm currently working on a website and I have built it on Node Express 4.0 and Mongo. It's a marketplace-type website, and I need to allow the user to:



  1. Upload the image

  2. Preview the Image

  3. Crop/Rotate (basic editing)

  4. Then post it to the server.


As of right now, I'm using the blueimp file uploader linked with jQuery, and the image is uploaded to the AWS server as soon as the user selects it. The backend is completely JS, with no PHP or any of that mumbo jumbo. I need the user to be able to edit the image before it is sent to the server. I also need to be able to impose cropping requirements.


For streaming an image, if I wanted it to be rendered at a lower resolution, would that be done client side or server side?
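
On the lower-resolution question in particular, one common approach is to downscale on the server at upload time and stream the smaller rendition. A minimal sketch of that idea (shown in Python with Pillow for brevity, since the technique is the same regardless of the Node stack; paths are hypothetical):

from PIL import Image

def make_preview(src_path, dst_path, max_size=(800, 600)):
    # Write a downscaled copy of the image for lower-resolution delivery.
    img = Image.open(src_path)
    img.thumbnail(max_size)          # resizes in place, preserving aspect ratio
    img.save(dst_path, quality=85)   # smaller JPEG to stream to clients

make_preview('uploads/original.jpg', 'uploads/preview.jpg')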





DynamoDB query on boolean key

I'm new to DynamoDB (and to NoSQL in general) and am struggling a little to get my head around some of the concepts. One thing in particular is giving me some problems, which is querying a table based on a boolean key.


I realise that I can't create a primary or secondary index on a boolean key, but I can't see how I should ideally index and query a table with the following structure:


reportId: string (uuid)
reportText: string
isActive: boolean
category: string


I would like to be able to complete the following searches:


Access a specific report directly (a primary hash index on reportId)
List reports of a specific category (a primary hash index on category)


These are both straightforward, but I would like to perform two other queries:


List all reports that are marked as isActive = true
List all reports of a specific category that are marked as isActive = true


My first approach would be to create a primary hash key index on 'isActive', with a range key on 'category', but I'm only able to choose String, Number or Binary as the key type.


Storing isActive as a string (saved as 'true' rather than a boolean true) solves the problem, but it's horrible using a string for a boolean property.


Am I missing something? Is there a simple way to query the table directly on a boolean value?


Any advice duly appreciated.


Thanks in advance.
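
One pattern that may help (an assumption on my part, not something from the question): keep the hash index on category and filter on isActive at query time, accepting that filters narrow the results but not the read capacity consumed. A minimal boto3 sketch, assuming a hypothetical table "Reports" with a global secondary index named "category-index":

import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource('dynamodb').Table('Reports')  # hypothetical names

# List reports of a category that are active; the filter is applied
# server-side after the key lookup.
resp = table.query(
    IndexName='category-index',
    KeyConditionExpression=Key('category').eq('safety'),
    FilterExpression=Attr('isActive').eq(True),
)
for item in resp['Items']:
    print(item['reportId'])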





nginx+gunicorn+django+aws bad request

I was following a tutorial at http://ift.tt/1yUZa5b on deploying a django application. This is my current situation:


fab spawn instance created the AWS instance with nginx and gunicorn installed, but when I tried accessing the site on that machine I got a 400 Bad Request. I checked the nginx error log, but that was empty, and the nginx access log showed that it had received the requests. The supervisor log had the following:



[2015-01-31 21:26:20 +0000] [15823] [INFO] Starting gunicorn 19.2.0
[2015-01-31 21:26:20 +0000] [15823] [INFO] Listening at: http://127.0.0.1:8002/ (15823)
[2015-01-31 21:26:20 +0000] [15823] [INFO] Using worker: sync
[2015-01-31 21:26:20 +0000] [15832] [INFO] Booting worker with pid: 15832
[2015-01-31 21:26:20 +0000] [15833] [INFO] Booting worker with pid: 15833
[2015-01-31 21:26:20 +0000] [15834] [INFO] Booting worker with pid: 15834
[2015-01-31 21:26:20 +0000] [15835] [INFO] Booting worker with pid: 15835
[2015-01-31 21:26:20 +0000] [15836] [INFO] Booting worker with pid: 15836
[2015-01-31 21:26:31 +0000] [15837] [INFO] Starting gunicorn 19.2.0
[2015-01-31 21:26:31 +0000] [15837] [ERROR] Connection in use: ('127.0.0.1', 8002)
[2015-01-31 21:26:31 +0000] [15837] [ERROR] Retrying in 1 second.
[2015-01-31 21:26:32 +0000] [15837] [ERROR] Connection in use: ('127.0.0.1', 8002)
[2015-01-31 21:26:32 +0000] [15837] [ERROR] Retrying in 1 second.
[2015-01-31 21:26:33 +0000] [15837] [ERROR] Connection in use: ('127.0.0.1', 8002)
[2015-01-31 21:26:33 +0000] [15837] [ERROR] Retrying in 1 second.
[2015-01-31 21:26:34 +0000] [15837] [ERROR] Connection in use: ('127.0.0.1', 8002)
[2015-01-31 21:26:34 +0000] [15837] [ERROR] Retrying in 1 second.
[2015-01-31 21:26:35 +0000] [15837] [ERROR] Connection in use: ('127.0.0.1', 8002)
[2015-01-31 21:26:35 +0000] [15837] [ERROR] Retrying in 1 second.
[2015-01-31 21:26:36 +0000] [15837] [ERROR] Can't connect to ('127.0.0.1', 8002)
[2015-01-31 21:26:37 +0000] [15846] [INFO] Starting gunicorn 19.2.0
[2015-01-31 21:26:37 +0000] [15846] [INFO] Listening at: http://127.0.0.1:8002 (15846)
[2015-01-31 21:26:37 +0000] [15846] [INFO] Using worker: sync
[2015-01-31 21:26:37 +0000] [15855] [INFO] Booting worker with pid: 15855
[2015-01-31 21:26:37 +0000] [15856] [INFO] Booting worker with pid: 15856
[2015-01-31 21:26:37 +0000] [15857] [INFO] Booting worker with pid: 15857
[2015-01-31 21:26:38 +0000] [15858] [INFO] Booting worker with pid: 15858
[2015-01-31 21:26:38 +0000] [15859] [INFO] Booting worker with pid: 15859


I changed ALLOWED_HOSTS from [] to ['*'] and then to '*'. When I changed it to the string, I got "The requested URL / was not found on this server." On other instances, I got 400 Bad Request.
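
For reference, Django's ALLOWED_HOSTS is a list of host/domain strings the server will answer for, and a bare 400 with an empty error log is the usual symptom when the incoming Host header isn't in that list. A minimal sketch, with a hypothetical domain and EC2 public DNS name:

# settings.py -- hostnames Django will accept in the Host header
ALLOWED_HOSTS = [
    'example.com',                               # hypothetical domain
    '.example.com',                              # leading dot matches subdomains
    'ec2-203-0-113-10.compute-1.amazonaws.com',  # hypothetical EC2 public DNS
]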


This is the first time I am deploying a django app on nginx and I can’t figure out what the problem might be. Could you please help me debug this error? Thanks in advance!!


PS: Please let me know if I need to post any config files. So far I have just followed the tutorial and I have not changed any configurations.





Build a static website on Amazon S3

I have a static website and I'm trying to host it on Amazon S3 storage. I created a bucket named WEB-SITE, uploaded all the files into the bucket, then clicked on the bucket properties and enabled static website hosting. After that I clicked on the URL of the website, but I got this error:


"403 Forbidden - AccessDenied"


I read that I need to add the required permissions to the bucket, so I tried to add this policy:



{
  "Version": "2015-02-01",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::WEB-SITE/*"]
  }]
}


But I cannot save it; I get this error:


"The XML for Routing Rules is invalid"





Load testing tool to generate up to 50K req/sec in AWS

I'm looking for a lightweight open-source load testing tool which conforms to the following criteria:



  1. Capable of generating up to 50K requests per second (potentially more) against a REST API, with mature HTTP support: methods, automatic management of client connections and cookies, SSL, etc.

  2. Has a proven track record of successful load tests generating heavy workloads in AWS. Positive AWS experience is critical. Any comments (links) on load testing in AWS generating similar workloads are welcome too.

  3. It is really lightweight. We expect to use several load generator instances, not dozens of them.

  4. Flexible parametrization.

  5. Reasonable reporting features: aggregated metrics (avg, percentiles, throughput), graphs.


At the moment I'm leaning towards a tool called Tsung, but I'm still researching the topic.





AWSSQS trying to create queue and get messages

I am trying to connect to AWS SQS to create a new queue and also fetch messages I've placed on another queue. This is the code I'm using:



AWSSQS *sqs=[[AWSSQS alloc] initWithConfiguration:configuration];

AWSSQSCreateQueueRequest *createQueue = [[AWSSQSCreateQueueRequest alloc] init];
[createQueue setQueueName:@"TEST_Q2_NEW_3"];

BFTask *answer = [sqs createQueue:createQueue];
NSLog(@"Status queue creation: %@ %@", answer.result, answer.error);

AWSSQSReceiveMessageRequest *receiveMessageRequest = [[AWSSQSReceiveMessageRequest alloc] init];
receiveMessageRequest.QueueUrl = @"http://ift.tt/1KgpkpC";

answer = [sqs receiveMessage:receiveMessageRequest];
NSLog(@"Status messages received %@ %@", answer.result, answer.error);


The queue TEST_Q2_NEW_3 is created but the log message is:


Status queue creation: (null) (null)


No messages are fetched, but in the SQS Management Console some have the status "Messages in Flight". However, the log message is:


Status messages received (null) (null)


What am I missing?





Is there any other service like AWS that provides a free tier plan for 1 year?

A 1-year free tier plan for services like web hosting and storage? I have seen Google's offer, but it is only valid for 2 months.





Complete control of EC2 instances from python

I am new to EC2 and I am trying to find a way to get started easily. I have searched the internet for tutorials; however, I am unable to find a precise answer to my question. I am trying to use Amazon EC2 for some personal small-scale scientific computing.


I want to do the following programmatically from a single python program:

- create a new instance

- upload a script on that instance that I want to execute

- execute the script on that instance

- obtain the result and save it on my local PC

- close the instance when the script finishes running and the result is copied to my machine


I want to be able to do many such tasks in parallel. So, say I create 10 different variations of the script that I want to run, and run them in parallel on 10 different instances. I want to do everything from Python, and in fact the scripts are also written in Python.


Can anybody point me to the best way to do this?


If it can't be done with Python easily, what are other easy ways to do it? Is there some specific software for this? I take it I am not the first person who has such requirements; how are other people solving this problem?
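
A minimal sketch of the launch-run-terminate cycle using boto3 (my assumption for the tooling; the AMI ID, key pair, and S3 bucket below are hypothetical). The script is handed to the instance through EC2 user data, which runs at boot, and the result is written to S3 for the local machine to fetch:

import boto3

ec2 = boto3.resource('ec2')

# Hypothetical bootstrap: run the experiment at boot, save the output to
# S3, then shut down; with shutdown behavior set to 'terminate', shutting
# down also terminates the instance.
user_data = """#!/bin/bash
python /home/ubuntu/experiment.py > /tmp/result.txt
aws s3 cp /tmp/result.txt s3://my-results-bucket/run-1.txt
shutdown -h now
"""

instances = ec2.create_instances(
    ImageId='ami-12345678',          # hypothetical AMI with the code baked in
    MinCount=1, MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my-key-pair',           # hypothetical key pair
    UserData=user_data,
    InstanceInitiatedShutdownBehavior='terminate',
)
print('Launched %s' % instances[0].id)

# Later, from the local machine, fetch the result:
boto3.client('s3').download_file('my-results-bucket', 'run-1.txt', 'run-1.txt')

Running ten variations in parallel would then just be ten create_instances calls with different user data.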





How to handle multiple users of my Android app uploading and downloading files in Amazon S3?

I am developing an XMPP chat client for Android. For the file transfers, I want to use Amazon S3. I am very new to Amazon AWS: I created an Amazon account, created a bucket, created a pool, and configured IAM permissions.


Now the doubts are,



  • In my googling, I saw two terms, authorized and unauthorized roles. What are they?

  • And in my chat application there will be a number of users. How do I manage their file transfers?

  • I mean, do I have to create different pools for them?

  • Or do I have to use the single pool which I created?

  • If I want to create different pools for different users, how can I create and manage them?

  • Or, if I have to use a single pool ID, then how do I manage their transfers?


I am fully confused. Please help, friends.





django - Permission denied in uploading photos

When I try to upload an Image, I am getting an error:



OSError at /


[Errno 13] Permission denied: '/home/ubuntu/project/django-user-activities/django_user_activities/static/media/uploaded_files/1422722471_11_Tulips.jpg'



I think it's related to user permissions. But since I am a Windows user and I am hosting this on Ubuntu 14.04, I have no idea how to solve this. How do I resolve this error? I would be very grateful if you could help me. Thank you.





How to host PHP files on Amazon EC2?

Actually, I have implemented Android push notifications using GCM with the help of this link: http://ift.tt/1ht5mJr, and hosted my server on 000webhost.com. Now I want to move my server to AWS. I have created an instance of type t2.micro on Amazon EC2, but I am a complete noob as to how to proceed further. I do have MySQL and Apache installed. Note: all of the web files are PHP. Are there any tutorials that can help me out?





Deploying Java web application on Amazon (AWS) [on hold]

I am relatively new to developing Java web applications and am trying to understand the best ways to deploy new applications. I am particularly interested in leveraging AWS. I have been researching some of the posts on this topic, but some good ones are a couple of years old. Any updates on AWS structures to implement a Java web application? Any good alternatives?


Deploy Java Web application on Amazon Cloud @Sangram_Anand


Is the Cloud ready for an Enterprise Java web application? Seeking a Java EE hosting advice @sfussenegger





ELB for Websockets SSL

Does AWS support websockets with SSL?


Can AWS ELB be used for websockets over SSL?


What happens when an EC2 instance (machine) is added to or removed from this ELB? Especially removed: what if a machine goes down? Are the existing sockets routed to some other machine, or are they reset?


Can ELB become a bottleneck at any point in time?


Any other alternatives? Let me know.





PHP running from Apache cannot write to the filesystem

I cannot make PHP write a file to the filesystem when running from the Apache web server.


I have a simple PHP script:



<?php
print 'User : '.posix_getpwuid(posix_getuid())['name'];
print ' ';
print 'Group: '.posix_getgrgid(posix_getgid())['name'];
file_put_contents('./test.txt', 'OK');
?>


I'm logged in as user ec2-user:ec2-user and, just for testing, Apache is running as ec2-user:ec2-user.


ec2-user belongs to the following groups:



>groups
ec2-user adm wheel systemd-journal www


The script is located in Apache document root.



/var/www/html/test.php

drwxr-xr-x. 21 root root 4096 ene 31 05:45 var
drwxrwsr-x. 4 root www 31 ene 29 17:30 www
drwxrwsr-x. 2 root www 36 ene 31 06:16 html
-rw-rw-r--. 1 ec2-user www 172 ene 31 06:15 test.php


If I run the script via the PHP CLI, the file test.txt is created and the following output is generated.



>php ./test.php
User : ec2-user Group: ec2-user


But if I call the script via my browser as a normal web page, I get a file permissions error:



User : ec2-user Group: ec2-user
Warning: file_put_contents(./test.txt): failed to open stream: Permission denied in /var/www/html/test.php on line 6


I have also tried to run Apache as ec2-user:www, but the output is the same:



User : ec2-user Group: www
Warning: file_put_contents(./test.txt): failed to open stream: Permission denied in /var/www/html/test.php on line 6


I have checked PHP configuration and there is no open_basedir option configured.


I have tried to write to a /dummy folder with 777 permissions, with the same output.


Is there any configuration I'm missing?





Receive SQS message attributes using the Camel DSL?

Does anybody know how to receive SQS message attributes using the Camel DSL in Java? I'm getting the following error:


"Failed to create route payee route: Route(batch route)[[From[aws-sqs://myqueue?amazonSQSEndpoint=... because of Failed to resolve endpoint: http://aws-sqsmyqueue?amazonSQSEndpoint=sqs.us-west-1.amazonaws.com&accessKey=*****&secretKey=****************&maxMessagesPerPoll=1&messageAttributeNames=%5BuserID%5 due to: Could not find a suitable setter for property: messageAttributeNames as there isn't a setter method with same type: java.lang.String nor type conversion possible: No type converter available to convert from type: java.lang.String to the required type: java.util.Collection with value [userID] "


Please find my code below:



StringBuilder QueueURI = new StringBuilder();
QueueURI.append(PropertyUtils.AWS_SQS)
        .append(propertyUtils.queueName)
        .append(PropertyUtils.AMAZON_SQS_REGION)
        .append(propertyUtils.sqsRegion);
QueueURI.append(PropertyUtils.AWS_ACCESS_KEY).append(propertyUtils.awsAccessKey);
QueueURI.append(PropertyUtils.AWS_SECRET_KEY).append(propertyUtils.awsSecretKey);
QueueURI.append(PropertyUtils.MAX_MESSAGES_PER_POLL_1);


Collection<String> collection = new ArrayList<String>();
collection.add("userID");

//aws-sqs://myqueue?amazonSQSEndpoint=sqs.us-west-1.amazonaws.com&accessKey=*****&secretKey=****************&maxMessagesPerPoll=1&messageAttributeNames=[userID]



from(QueueURI.toString() + collection)
.routeId("batch route")
.process(userValidator);




Friday, January 30, 2015

AWS S3 storage access in PHP script?

This is a total newbie question, but I am wondering if there is an easy way to replace the following code:


A CODE - - - - - - - - - - -



foreach (glob('../music/*', GLOB_ONLYDIR) as $playlist) {
    // code here
}


with "something" like this:


B CODE - - - - - - - - - - -



foreach (glob('http://ift.tt/1BGHxuo', GLOB_ONLYDIR) as $playlist) {
    // code here
}


Code snippet "A" above works fine and the music directory is housed on the local server. Since the music directory houses lots of audio files (GBs), I am hoping I can somehow store all the audio files in an AWS S3 storage container and call them from AWS instead of storing them all on the local server. Code snippet B doesn't work of course. This illustrates what I am conceptually desiring to do. Can someone point me to an article that addresses this of offer any suggestions/solutions?





Issue with Java heap on an EC2 AWS micro instance

I'm trying to get the DBpedia Spotlight Java package working on an AWS EC2 micro instance (info here: http://ift.tt/1pVsQih).


The problem is that additional Java heap space is required, and I guess Amazon isn't so fond of giving it to me. Here are the command and output. I've tried -Xmx10G, etc.; no dice. I guess Amazon micro instances might be limited in memory / heap space, but I'm really not sure how to go about changing it or if that is the issue. Thanks!


ubuntu@ip-172-31-27-6:~/dbpedia-spotlight-quickstart-0.6.5$ java -Xmx1024m -jar dbpedia-spotlight.jar en http://localhost:2223/rest
Jan 31, 2015 6:48:04 AM org.dbpedia.spotlight.db.memory.MemoryStore$ load
INFO: Loading MemoryTokenTypeStore...
Jan 31, 2015 6:48:05 AM org.dbpedia.spotlight.db.memory.MemoryTokenTypeStore createReverseLookup
INFO: Creating reverse-lookup for Tokens.
Jan 31, 2015 6:48:06 AM org.dbpedia.spotlight.db.memory.MemoryStore$ load
INFO: Done (1527 ms)
Jan 31, 2015 6:48:06 AM org.dbpedia.spotlight.db.memory.MemoryStore$ load
INFO: Loading MemorySurfaceFormStore...
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ec7a8000, 153452544, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 153452544 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/ubuntu/dbpedia-spotlight-quickstart-0.6.5/hs_err_pid2347.log

ubuntu@ip-172-31-27-6:~/dbpedia-spotlight-quickstart-0.6.5$





Can't create pipeline in Amazon Elastic Transcoder

In AWS Elastic Transcoder I want to create a new pipeline, but when I try to do that I get the error shown in a screenshot (image not reproduced here).


What permission is needed to create a pipeline? How can I get that permission, or how can an admin grant me such permission?





Amazon Outbound MWS Fulfillment API: Understanding Shipments and Packages

I'm using the MWSOutboundAPI to create fulfillment orders on Amazon.com


In implementing the schema there is a design pattern that has me in a bind.


Amazon represents their FulfillmentShipment as a list on the Fulfillment Order. That makes sense because one order can have multiple shipments if, say, Amazon has to split up an order with multiple items across a few warehouses. This FulfillmentShipment contains the items that it comprises and it contains a list of packages.


Here's where the problems begin because there can be multiple FulfillmentShipmentPackages for one shipment. Each one of these FulfillmentShipmentPackages contains a tracking number, but no information about what items are being shipped in the package.


We would like to be able to communicate to our customers what items have been shipped in what package and this doesn't seem possible given the structure of the API. I was wondering why this is the case and if anyone knows how to determine this information.





Restrict access to amazon WorkSpace by IP Address?

I have a simple question which I don't think has a simple answer.


I would like to use Amazon WorkSpaces, but a requirement would be that I can restrict the IP addresses that can access a given workspace (or any workspace).


I kind of get the impression this should be possible through rules on the security group on the directory, but I'm not really sure, and I don't know where to start.


I've been unable to find any instructions for this or other examples of people having done this. Surely I'm not the first/only person to want to do this?!


Can anyone offer any pointers?





Camel, Amazon SQS - No type converter available to convert from type: java.lang.String to the required type: com.amazonaws.services.sqs.AmazonSQS

I am currently working on a Spring application using Camel which is going to poll SQS as an entry point into the application (1st route). I am successfully able to achieve this using Spring's XML-based approach.


My AmazonSQSClient Bean:



<bean id="sqsClient" class="com.amazonaws.services.sqs.AmazonSQSClient">
<constructor-arg ref="sqsCredentialsProvider" />
<property name="endpoint" value="${aws.sqs.endpoint}" />
</bean>


My Camel Route:



<route id="pollMessages">
<from id="sqsEndpoint" uri="http://ift.tt/1yNzCqQ" />
<to uri="direct:readSQSMessage" />
</route>


Everything works as I want at this point with the above approach.


Now I am trying to migrate all my beans and Camel configuration to a Java-based approach.


I have created my Amazon SQS client bean as follows:



@Bean
public AmazonSQS sqsClient() {
    ClientConfiguration clientConfiguration = new ClientConfiguration();
    AmazonSQSClient client = new AmazonSQSClient(sqsCredentialsProvider(), clientConfiguration);
    client.setEndpoint(sqsEndpoint);
    return client;
}


And I am creating the Camel route like this (snippet):



@Bean
public CamelContext camelContext() throws Exception {
    CamelContext camelContext = new DefaultCamelContext();

    camelContext.addRoutes(new RouteBuilder() {
        @Override
        public void configure() {
            from("aws-sqs://" + fulfillmentQueueName + "?amazonSQSClient=#sqsClient&delay=" + fulfillmentQueuePollInterval)
                .to("direct:parseSQSMessage");
        }
    });

    camelContext.start();
    return camelContext;
}


However, I am getting errors using this approach:


java.lang.IllegalArgumentException: Could not find a suitable setter for property: amazonSQSClient as there isn't a setter method with same type: java.lang.String nor type conversion possible: No type converter available to convert from type: java.lang.String to the required type: com.amazonaws.services.sqs.AmazonSQS with value #sqsClient


I read here how to construct a Java style Camel Route


I read here that I need to bind the AWS Client to Registry (registry.bind) but I am not able to find a bind method on any Registry except JNDI


I tried this as well:



SimpleRegistry registry = new SimpleRegistry();
registry.put("sqsClient", sqsClient());

CamelContext camelContext = new DefaultCamelContext(registry);


But got same error.


I have searched a lot and tried reading up, and plan to keep doing more, but I am unable to find any complete example doing what I need to do. Snippets aren't helping much here.


Any help is greatly appreciated. Thanks





Does someone have an example of a Docker Rails configuration for AWS?

I have been using OpsWorks with custom cookbooks and it works OK, but I have lately been reading about Docker, and it seems very interesting, mostly because of the resource optimisation.


So I'm looking for a blog post that could guide me through the process of setting this up using Docker on AWS.


Thanks in advance!





Serialize DynamoDB results

I'm using Python/Boto/Flask to build an API to get user GPS data.


The issue is that there is a Decimal('1422568178.40941') where I expect there to just be 1422568178.40941. I can get the data as expected through for loops, but I need to pass it back out as JSON with flask-restful.


Now when I hit the API I get a



TypeError: Decimal('1422642498.484733') is not JSON serializable



If I try



users={'tester1','tester2'}
locdata = blutrac.getallusers(users)
print locdata


I get



{'tester1': {u'lat': Decimal('68.2354'), u'user': u'tester1', u'epochtime': Decimal('1422568178.40941'), u'log': Decimal('-48.255')}, 'tester2': {u'lat': Decimal('68.2354'), u'user': u'tester2', u'epochtime': Decimal('1422642498.484733'), u'log': Decimal('-48.255')}}


But with



gpstracks = Table('gpstrack')
results = gpstracks.query_2(
    user__eq='tester1',
    reverse=True,
    limit=1
)
for result in results:
    for item in result:
        print item


I get:



68.2354 tester1 1422642498.484733 -48.255



Model:



from boto.dynamodb2.table import Table

def getallusers(users):
    locations = {}
    for user in users:
        userlocation = getuserlast(user)
        locations[user] = userlocation
    return locations

def getuserlast(user):
    userdata = {}
    gpstracks = Table('gpstrack')
    results = gpstracks.query_2(
        user__eq=user,
        reverse=True,
        limit=1
    )
    for result in results:
        return result._data


View:



# LocationList
# shows a list of most recent locations of listed users
class LocationList(Resource):
    def get(self):
        users = {'user1', 'user2'}
        data = blutrac.getallusers(users)
        print data
        return data

api.add_resource(LocationList, '/api/locations')


Browser JS code:



<script type="application/javascript">

function GetData() {
$( ".location").empty();
$.ajax({
type: "GET",
url: "/api/locations",
contentType: "application/json; charset=utf-8",
crossDomain: true,
dataType: "json",
success: function (data, status, jqXHR) {

console.log(data);
for (user in data ){
console.log(user);

}

},

error: function (jqXHR, status) {
// error handler
console.log(jqXHR);
alert('fail' + status.code);
}
});
}
</script>
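
Boto returns DynamoDB numbers as decimal.Decimal, which the stdlib json module refuses to serialize. A minimal sketch of one way around it, assuming float precision is acceptable for these GPS values:

import json
import decimal

class DecimalEncoder(json.JSONEncoder):
    # JSON encoder that converts DynamoDB Decimal values to floats.
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o)
        return super(DecimalEncoder, self).default(o)

data = {'epochtime': decimal.Decimal('1422642498.484733')}
print(json.dumps(data, cls=DecimalEncoder))  # {"epochtime": 1422642498.484733}

With flask-restful you could apply the same conversion by walking the result dict and casting Decimals to float before returning it.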




Handling Spring Boot Clustered Websockets on Amazon Beanstalk

I have an application using Spring Framework / Spring Boot / Spring Messaging/Websockets and am going to be deploying it to Elastic Beanstalk. You can think of the application as a chat application (it actually does have chat features)


Scenario


Here is an example scenario:



Client A <-> Server A
Client B <-> Server B
Client C <-> Server B


Now, if Client A posts a message and, using Spring Messaging, I send that message to all connected clients, only Client A will see it, because only Client A is connected to Server A; likewise, if Client B posts, only Clients B and C will see it, not Client A.


So this leaves me with a problem of what options I have.


Possible Solutions


If possible, I would like to use an Amazon service as I am already in their cloud platform.


I thought about using Amazon SQS, having each server subscribe to the same queue and then sending all notifications through it, but I believe all SQS requests are pull-based, so I would have to poll, which would create a significant delay.


Does anyone know of a good solution for this problem? I can set up a server to handle all websockets, but that is not optimal.


Thanks in advance!





InvalidParameterValue: Duplicate header 'Content-Type' when send RawMessage with Amazon SES

I'm trying to send a raw email message with the Amazon SES API in order to include an attachment.

SES responds with a 400 status code and I'm not sure what I'm doing wrong; here is the response:



<ErrorResponse xmlns="http://ift.tt/1vMoAG1">
<Error>
<Type>Sender</Type>
<Code>InvalidParameterValue</Code>
<Message>Duplicate header 'Content-Type'.</Message>
</Error>
<RequestId>ad8cb17c-a8a0-11e4-8898-8924aa87abfa</RequestId>
</ErrorResponse>


The request is signed and works OK with other requests, so I think it must be an issue with my email message only. Here is my message data:



Cc: my-verified-email-1@gmail.com
Subject: Hello testing email hahahahaha
Mime-Version: 1.0
Date: 30 Jan 15 23:54 +0700
Content-Type: multipart/mixed; boundary=7f1a313ee430f85a8b054f085ae67abd6ee9c52aa8d056e7f7e19c6e2887
From: my-verified-email-2@gmail.com
To: my-verified-email-3@gmail.com
--7f1a313ee430f85a8b054f085ae67abd6ee9c52aa8d056e7f7e19c6e2887
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hello <b>testing email</b> with some =E4=B8=96=E7=95=8C and Vi=E1=BB=87t ng=
=E1=BB=AF.
--7f1a313ee430f85a8b054f085ae67abd6ee9c52aa8d056e7f7e19c6e2887
Content-Type: text/plain; charset=utf-8; name="test1.txt"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="test1.txt"

dGVzdGluZyBzdHJpbmcgd2l0aCBWaeG7h3Qgbmfhu68K
--7f1a313ee430f85a8b054f085ae67abd6ee9c52aa8d056e7f7e19c6e2887--
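
One thing that stands out in the message data above: a blank line has to separate the top-level headers from the first boundary, otherwise the first part's headers get parsed as message headers, which would explain a duplicate Content-Type. For comparison, a minimal sketch using Python's stdlib email package, which gets the framing right automatically (addresses reused from the question):

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('mixed')
msg['From'] = 'my-verified-email-2@gmail.com'
msg['To'] = 'my-verified-email-3@gmail.com'
msg['Subject'] = 'Hello testing email'

# HTML body part
msg.attach(MIMEText('Hello <b>testing email</b>', 'html', 'utf-8'))

# Text attachment part
part = MIMEText('testing string', 'plain', 'utf-8')
part.add_header('Content-Disposition', 'attachment', filename='test1.txt')
msg.attach(part)

print(msg.as_string())  # note the empty line after the top-level headers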




Assign tag and public IP for instance via amazonica

I'm using amazonica to create an AMI and then launch an instance from the AMI when it's ready.


The problem I'm having with amazonica is that it has about zero documentation (that I can find), apart from what's in the readme. And what's in the readme is very little and covers very little.


I can currently successfully look at running instances, grab the latest/required instance, create an AMI off of it, wait until that's ready, and then launch an instance from it.


Only, I don't know what arguments the (run-instances) method takes. Looking at the Java API doc I have figured out most of the parameters with some trial and error, but I still need to set a few more things.


Where can I find what parameters to pass to this function?


Currently, I have:



(run-instances :image-id ami-id
:min-count 1
:max-count 1
:instance-type "t2.small"
:key-name "api-key-pair"
:sercurity-groups ["sg-1a2b3c4d"]
;:vpc-id "vpc-a1b2c3d4"
:subnet-id "subnet-a1b2c3d4"
:monitoring true
:ebs-optimized false
:tag [{:value instance-name
:key "Name"}])


And this sets most things. But I can't figure out how to set:



  • tag - I want to set a tag name: "prod-1.0"


  • security groups. I've tried the one above, and this:



    :security-groups [{:group-id "sg-1a2b3c4d"
    :group-name "SG_STRICT"}]



But no use: either the instance gets the default group, or I get strange errors like



...AmazonServiceException: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request


or



....s.AmazonServiceException: The security group '{:group-id "sg-1a2b3c4d", :group-name "SG_STRICT"}' does not exist


I've gone through that whole doc page a couple of times and can't find any other sensible options / keywords to pass.


I also want to start the instance with the auto-assign-public-IP option.


The amazonica source doesn't reveal much, unfortunately; as the doc says, it uses reflection heavily, and the tests aren't very elaborate.


So how do I set a security group and tags for this, please?





How do I set up AWS Route 53 to handle an EC2 single instance domain

I have thoroughly reviewed both the Amazon Web Services documentation and many Stackoverflow posts related to my issue, but have not yet resolved it. My situation:


I have successfully set up:



  1. an EC2 t2.micro instance with elastic IP, running Ubuntu 14.04.01 / Apache2 / PHP / MySQL (LAMP)

  2. WordPress 4.5 as a content management system

  3. CiviCRM 4.1 as a constituent management app


I am able to access and run this configuration with the public DNS linked to the Elastic IP. I have a custom domain (mydomain.org) registered through Route 53 and have set up the necessary record sets to connect (mydomain.org and www.mydomain.org) to the EIP. This configuration, accessed with HTTP, correctly serves the base page of the app, and I see what I expect from WordPress, with (mydomain.org) showing in the browser address window.


When I navigate to any other page, it breaks. I see the page, but the displayed URL is that of the EIP public DNS, not my custom domain. I suspect that rewriting the URL in the Virtual Hosts section of my Apache configuration may provide a solution, but I haven't been able to determine the proper statements.


Further, I need to have this configuration support TLS/HTTPS. I have successfully obtained and installed the necessary certificates and set them up in my server configuration. I have edited the ssl.conf Virtual Hosts file, and have even been able, using HTTPS, to successfully navigate to the base page of WordPress. It shows the basic HTML of the page, but all of the script-driven formatting is missing. Again, navigating to any other page of the app breaks the TLS by using the EIP URL, not my custom URL.


I suspect the same solution to the initial issue will fix this issue, as well.


Thank you, in advance, for your advice and suggestions.





Unable to connect to an AWS Oracle RDS instance

I'm new to working with AWS and RDS. What I'm trying to do is setup an Oracle DB instance on RDS, that part is fine but when I try to connect to it via an application I get various errors. Depending on the application I get a couple of different errors.


[Local computer trying to connect to RDS directly]


LinqPad - Connection Error: Server did not respond within the specified timeout interval. (I get this in both the Direct and OCI connection modes.)


SQL Developer - IO Error: The network adapter could not establish the connection.


[EC2 instance in same VPC]


LinqPad - Connection Error: NET: Connection was refused with error ORA-12504 (I get this on both the Direct and OCI connection modes also)


I've done some research, and it mostly points to the VPC not being set up correctly, but I'm able to connect to my EC2 instance fine.


Oddly enough, I did have one fluke incident where from my local computer it did connect, but as soon as I tried the test again I got more errors.


Any help, or new directions to look would be greatly appreciated. Will provide more information if needed also. Wasn't sure what to post settings wise. Thanks in advance.





Support vector machines in MLlib Apache Spark

I have installed Spark on AWS Elastic MapReduce (EMR) and have been running SVMs using the packages in MLlib. But there are no options to choose parameters for building the model, like kernel selection and cost of misclassification (like in the e1071 package in R). Can someone please tell me how to set these parameters while building the model?
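
For what it's worth, MLlib's SVM at this point implements a linear SVM trained with SGD, so there is no kernel choice; the regularization parameter and iteration count are the main knobs. A minimal PySpark sketch of the parameters that are exposed, assuming an RDD of LabeledPoints named "training" prepared elsewhere:

from pyspark.mllib.classification import SVMWithSGD

model = SVMWithSGD.train(
    training,
    iterations=200,         # number of SGD iterations
    step=1.0,               # SGD step size
    regParam=0.01,          # regularization strength (cost trade-off)
    miniBatchFraction=1.0,  # fraction of data used per iteration
)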





How to create a new user using the AWS SDK for PHP?

I know how to create a new user using the AWS Console, but is there a way to do the same via the PHP SDK?


AWS Create New User


Actually, I want to create users with the SDK in order to apply permissions (limited access to folders). There will be a folder for each user in a bucket, and each user will be able to access only the allotted folder.


So, in short, my script will be doing this:

  1. Create an AWS user

  2. Create a folder in a bucket for the user created in step #1

  3. Grant access to the folder created in step #2 to the user created in step #1

After this happens, another script running on another website will allow a logged-in user to access his/her own folder, so no user will be able to see another user's folder.


Is this possible with AWS?





Web and mobile app using a single database in Amazon

We are planning to develop a web (PHP) app and a mobile (iOS) app and then deploy them on Amazon Web Services. We would like our apps to reference the same data in real time.


Could you advise me how this can be achieved in general? Since mobile apps cannot directly access external databases, what kind of approach is used to deal with such cases? Should we create an additional API layer (web services) for the mobile app to let it exchange data with the database? Or are there other solutions? In particular, how can it be solved using Amazon Web Services?
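
To illustrate the API-layer idea: both the website and the mobile app call the same HTTP endpoints, and only the API server talks to the database. A minimal sketch using Flask (my choice for brevity; the endpoint, database, and table names are hypothetical):

import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/items')
def list_items():
    # Both the PHP front end and the iOS app would call this endpoint,
    # so they always see the same data from the one database.
    conn = sqlite3.connect('app.db')  # stand-in for RDS in production
    rows = conn.execute('SELECT id, name FROM items').fetchall()
    conn.close()
    return jsonify(items=[{'id': r[0], 'name': r[1]} for r in rows])

if __name__ == '__main__':
    app.run()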


I appreciate any advice.





Download an application from AWS Elastic Beanstalk

How can I download an application from Elastic Beanstalk? I uploaded the application via the web interface and made some changes live (it's a WordPress site), and now I want to download the whole site.


Thanks.





How to make code deploy automatically to ELB servers and update in the autoscale group

We have multiple servers running behind an ELB, and our repo is on GitHub. Right now I deploy code to the ELB servers using Fabric scripts in parallel and update the AMI for the autoscale group. The code is written in Python and we are using mod_wsgi with Apache2, so we have to restart Apache to pick up new code. Is there any way the code refresh can happen automatically, without restarting Apache2? I want my code to be deployed automatically to each server. Since there are multiple servers in the backend, I doubt I can use a webhook for deployment.


I do not want my presence to be required for deploying new code. I was looking at git-deploy, but I'm not able to understand how it deploys code to multiple servers. Please help me with how I can perform this task.


Thanks





Thursday, January 29, 2015

Port forwarding when running a Tomcat Docker container in an AWS Elastic Beanstalk application

I have a Tomcat 7.0 webapp running inside a docker container on AWS Elastic Beanstalk (EB) (I followed the tutorial here).


When I browse to my EB URL myapplication.elasticbeanstalk.com, I get a 502 Bad Gateway served by nginx. So it's immediately clear that my port 80 is not forwarding to my container. When I browse to myapplication.elasticbeanstalk.com:8888 (another port I exposed in my Dockerfile) the connection is refused (ERR_CONNECTION_REFUSED). So I SSH'ed into the AWS instance and checked the Docker logs, which show that my Tomcat server has started successfully, yet obviously hasn't processed any requests.


Does anyone have any idea why my port 8888 appears not to be forwarding to my container?


Executing the command (on the AWS instance):



sudo docker ps -a


gives:



CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c353e236da7a aws_beanstalk/current-app:latest "catalina.sh run" 28 minutes ago Up 13 minutes 80/tcp, 8080/tcp, 8888/tcp sharp_leakey


which shows port 80, 8080, and 8888 as being open on the docker container.


My Dockerfile is fairly simple:



FROM tomcat:7.0

EXPOSE 8080
EXPOSE 8888
EXPOSE 80


and my Dockerrun.aws.json file is:



{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myusername/mycontainer-repo"
  },
  "Authentication": {
    "Bucket": "mybucket",
    "Key": "docker/.dockercfg"
  },
  "Ports": [
    {
      "ContainerPort": "8888"
    }
  ]
}


Does anyone see where I could be going wrong? I'm not even sure where to look at this point.


Also, my AWS security group for the instance is open on ports 80, 8080, and 8888. Any advice would be greatly appreciated! I'm at a loss here.





How to combine 100 files in an Amazon S3 bucket into 1 large file

We have about 50,000 small files in an S3 bucket. I need to merge 100 small files into one big file and save the big file to a different bucket/folder. Any thoughts?
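
Since S3 itself has no server-side concatenation for small objects (multipart copy requires parts of at least 5 MB, except the last), one straightforward approach is to download, concatenate, and re-upload. A minimal boto3 sketch, with hypothetical bucket and prefix names:

import boto3

s3 = boto3.client('s3')

SRC_BUCKET, DST_BUCKET = 'my-small-files', 'my-merged-files'  # hypothetical

# Take up to 100 keys under a prefix and stitch them together in memory.
keys = [o['Key'] for o in s3.list_objects(
    Bucket=SRC_BUCKET, Prefix='batch-1/', MaxKeys=100)['Contents']]

merged = b''.join(
    s3.get_object(Bucket=SRC_BUCKET, Key=k)['Body'].read() for k in keys)

s3.put_object(Bucket=DST_BUCKET, Key='merged/batch-1.dat', Body=merged)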





Trying to build an automation script on AWS Data Pipeline

I am trying to use the AWS Data Pipeline service in the following manner:



  1. Select the activity type as Shell Command activity with the script uri set (to an s3 bucket) and Stage input set to true.

  2. Set the resource type of the activity as EC2.

  3. Use S3 as a data node.

  4. For the ec2 resource, I have selected the instance type as t2.medium and instance ID as a custom AMI created by me.

  5. Schedule the pipeline to run everyday at 10pm.


The script specified in step 1 (i.e. as part of the script uri in the activity) has 2 lines: 1. Copy the S3 bucket data to the instance. 2. Run the python command to execute my program. The AMI I have created is based on an Ubuntu EC2 instance, and it contains some Python software and also the code I would like to run.


Now, on initiation of the pipeline I notice that the EC2 instance is indeed created, and the S3 data is copied and made available to the instance, but the Python command is not run. The instance is in the running state, and the pipeline is in the "waiting for runner" state for some time; then the pipeline fails with the message "Resource stalled".


Can someone please let me know if I am doing something wrong, why my Python code is not being executed, or why I am getting the "Resource stalled" error? The code works fine if I run it manually without the pipeline.


Thanks in advance!!





Files not updating on Server - EC2, Sublime, FileZilla

I'm using filezilla + sublime with EC2


I'm able to connect to the server fine with FileZilla, but when I edit a file and save it using Sublime it seems to be updated: FileZilla responds "File transfer successful, transferred 16,384 bytes in 1 second".


Also if I close the file and then reopen it I can see the changes.


However, when I go to the site's URL I don't see any changes; it's like I'm editing a completely different server. My colleague is able to do the same thing on his computer (using the same login) and I can see his changes at the URL, but not when I open/edit the file with Sublime. I just see the changes I made in the source file, but not at the live URL.


I'm new to this kind of stuff; after hours of research I need some direction.





AWS OpsWorks app layer: environment variables not accessible from PHP app

I love the services AWS is offering. I'm using OpsWorks to deploy my PHP apps, but I can't access any environment variables from the PHP app for securely connecting to the databases, neither with getenv() nor with $_SERVER.


I've found the following question about this topic: Set environment variables with AWS OpsWorks, but I can't imagine this is the way to go.


Can anybody tell me how I can access those environment variables?


Thanks in advance.


Cheers.





How do I collect and store data from a 3rd party API?

I'm new to web development and mainly have had experience doing front-end things with JS, so I'm not quite sure how to go about this. I know how to manipulate and display the data on the front-end, but I need help understanding what tools I need to accomplish #1 and #2 below: essentially, collecting and storing the data remotely.


I assume since I won't be using my own server, I will need to use Amazon Web Services. But AWS has like 30 services and I'm not sure which does what.


Thank you!


Here's what I'd like to do (see the sketch after this list for steps 1 and 2):



  1. GET request to a 3rd party REST API every 60 minutes to collect JSON data.

  2. Store the data in a database.

  3. On the front-end, when a user loads the webpage, the webapp will pull the data from the database and do cool things with the data (e.g. charts, etc.)
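
For steps 1 and 2, the moving parts are small: an HTTP GET on a schedule plus a database write. On AWS this could be a single EC2 instance running the job from cron, with the data in RDS or DynamoDB. A minimal sketch in Python (the endpoint URL and local database are hypothetical stand-ins):

import sqlite3
import time
import requests

def poll_once():
    # Step 1: fetch JSON from the third-party REST API.
    data = requests.get('https://api.example.com/stats').json()  # hypothetical URL

    # Step 2: store the raw payload with a timestamp for the front end to query.
    conn = sqlite3.connect('data.db')  # stand-in for RDS/DynamoDB
    conn.execute('CREATE TABLE IF NOT EXISTS samples (ts REAL, payload TEXT)')
    conn.execute('INSERT INTO samples VALUES (?, ?)', (time.time(), str(data)))
    conn.commit()
    conn.close()

if __name__ == '__main__':
    poll_once()  # schedule this every 60 minutes, e.g. from cron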





Amazon S3 PHP SDK pre-signed requests

I am trying to generate a presigned request with the S3 PHP SDK like this:



$s3Client = Aws\S3\S3Client::factory([
'credentials' => new Aws\Common\Credentials\Credentials('my-access-code', 'xxx')
]);

$command = $s3Client->getCommand('GetObject', [
'Bucket' => 'my-bucket-name',
'Key' => 'awesome-cat-image.png',
]);

$signedUrl = $command->createPresignedUrl('+10 minutes');


But when I go to the URL I get an error saying:



<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
<AWSAccessKeyId>xxx</AWSAccessKeyId>
<StringToSign>GET 1422564095 /my-bucket-name/awesome-cat-image.png</StringToSign>
<SignatureProvided>xxx</SignatureProvided>
<StringToSignBytes>
xxx
</StringToSignBytes>
<RequestId>xxx</RequestId>
<HostId>
xxx
</HostId>
</Error>


Accessing http://ift.tt/1JPs13g works just fine with permissions set to allow non-authenticated users.





how to send attachments in emails using AWS SES

I'm trying to send an email programmatically using Amazon's SES library, with the code from here. After tweaking, I have the following pieces of code.


SESUtils.php



<?php

require_once('aws.phar');

use Aws\Ses\SesClient;

/**
* SESUtils is a tool to make it easier to work with Amazon Simple Email Service
* Features:
* A client to prepare emails for use with sending attachments or not
*
* There is no warranty - use this code at your own risk.
* @author sbossen
* http://ift.tt/1uFRinE
*
* Update: Error checking and new params input array provided by Michael Deal
*/
class SESUtils {

const version = "1.0";
const AWS_KEY = SES_KEY;
const AWS_SEC = SES_SECRET;
const AWS_REGION = "us-east-1";
const MAX_ATTACHMENT_NAME_LEN = 60;

/**
* Usage:
$params = array(
"to" => "email1@gmail.com",
"subject" => "Some subject",
"message" => "<strong>Some email body</strong>",
"from" => "sender@verifiedbyaws",
//OPTIONAL
"replyTo" => "reply_to@gmail.com",
//OPTIONAL
"files" => array(
1 => array(
"name" => "filename1",
"filepath" => "/path/to/file1.txt",
"mime" => "application/octet-stream"
),
2 => array(
"name" => "filename2",
"filepath" => "/path/to/file2.txt",
"mime" => "application/octet-stream"
),
)
);

$res = SESUtils::sendMail($params);

* NOTE: When sending a single file, omit the key (ie. the '1 =>')
* or use 0 => array(...) - otherwise the file will come out garbled
*
* use $res->success to check if it was successful
* use $res->message_id to check later with Amazon for further processing
* use $res->result_text to look for error text if the task was not successful
*
* @param array $params - array of parameters for the email
* @return \ResultHelper
*/
public static function sendMail($params) {

$to = self::getParam($params, 'to', true);
$subject = self::getParam($params, 'subject', true);
$body = self::getParam($params, 'message', true);
$from = self::getParam($params, 'from', true);
$replyTo = self::getParam($params, 'replyTo');
$files = self::getParam($params, 'files');

$res = new ResultHelper();

// get the client ready
$client = SesClient::factory(array(
'key' => self::AWS_KEY,
'secret' => self::AWS_SEC,
'region' => self::AWS_REGION
));

// build the message
if (is_array($to)) {
$to_str = rtrim(implode(',', $to), ',');
} else {
$to_str = $to;
}

$msg = "To: $to_str\n";
$msg .= "From: $from\n";

if ($replyTo) {
$msg .= "Reply-To: $replyTo\n";
}

// in case you have funny characters in the subject
$subject = mb_encode_mimeheader($subject, 'UTF-8');
$msg .= "Subject: $subject\n";
$msg .= "MIME-Version: 1.0\n";
$msg .= "Content-Type: multipart/alternative;\n";
$boundary = uniqid("_Part_".time(), true); //random unique string
$msg .= " boundary=\"$boundary\"\n";
$msg .= "\n";

// now the actual message
$msg .= "--$boundary\n";

// first, the plain text
$msg .= "Content-Type: text/plain; charset=utf-8\n";
$msg .= "Content-Transfer-Encoding: 7bit\n";
$msg .= "\n";
$msg .= strip_tags($body);
$msg .= "\n";

// now, the html text
$msg .= "--$boundary\n";
$msg .= "Content-Type: text/html; charset=utf-8\n";
$msg .= "Content-Transfer-Encoding: 7bit\n";
$msg .= "\n";
$msg .= $body;
$msg .= "\n";

// add attachments
if (is_array($files)) {
$count = count($files);
foreach ($files as $idx => $file) {
if ($idx !== 0)
$msg .= "\n";
$msg .= "--$boundary\n";
$msg .= "Content-Transfer-Encoding: base64\n";
$clean_filename = mb_substr($file["name"], 0, self::MAX_ATTACHMENT_NAME_LEN);
$msg .= "Content-Type: {$file['mime']}; name=$clean_filename;\n";
$msg .= "Content-Disposition: attachment; filename=$clean_filename;\n";
$msg .= "\n";
$msg .= base64_encode(file_get_contents($file['filepath']));
if (($idx + 1) === $count)
$msg .= "==\n";
$msg .= "--$boundary";
}
// close email
$msg .= "--\n";
}

// now send the email out
try {
file_put_contents("log.txt", $msg);
$ses_result = $client->sendRawEmail(
array(
'RawMessage' => array(
'Data' => base64_encode($msg)
)
), array(
'Source' => $from,
'Destinations' => $to_str
)
);
if ($ses_result) {
$res->message_id = $ses_result->get('MessageId');
} else {
$res->success = false;
$res->result_text = "Amazon SES did not return a MessageId";
}
} catch (Exception $e) {
$res->success = false;
$res->result_text = $e->getMessage().
" - To: $to_str, Sender: $from, Subject: $subject";
}
return $res;
}

private static function getParam($params, $param, $required = false) {
$value = isset($params[$param]) ? $params[$param] : null;
if ($required && empty($value)) {
throw new Exception('"'.$param.'" parameter is required.');
} else {
return $value;
}
}

}

class ResultHelper {

public $success = true;
public $result_text = "";
public $message_id = "";

}

?>


And the function I'm using to send the actual email



function sendAttachmentEmail($from, $to, $subject, $message, $attachmentPaths = array()) {
    $client = SesClient::factory(array('key' => SES_KEY, 'secret' => SES_SECRET, 'region' => 'us-east-1'));
    $attachments = array();
    foreach ($attachmentPaths as $path) {
        $fileName = explode("/", $path);
        $fileName = $fileName[count($fileName) - 1];
        $extension = explode(".", $fileName);
        $extension = strtoupper($extension[count($extension) - 1]);
        $mimeType = "";
        if ($extension == 'PDF') $mimeType = 'application/pdf';
        elseif ($extension == 'CSV') $mimeType = 'text/csv';
        elseif ($extension == 'XLS') $mimeType = 'application/vnd.ms-excel';
        array_push($attachments, array("name" => $fileName, "filepath" => $path, "mime" => $mimeType));
    }
    $params = array(
        "from" => $from,
        "to" => $to,
        "subject" => $subject,
        "message" => $message,
        "replyTo" => $from,
        "files" => $attachments
    );
    $res = SESUtils::sendMail($params);
    return $res;
}

sendAttachmentEmail("jesse@aol.com", "jesse@aol.com", 'test', 'test', array("/path/to/file.pdf"));


When I run this, the message returned is an error saying "Expected ';', got "Reports" - To: jesse@aol.com, Sender: jesse@aol.com, Subject: test". Does anyone know what I might be missing? The contents of the msg being sent are:



To: jesse@aol.com
From: jesse@aol.com
Reply-To: jesse@aol.com
Subject: test
MIME-Version: 1.0
Content-Type: multipart/alternative;
boundary="_Part_142255491754ca7725b0bf89.40746157"

--_Part_142255491754ca7725b0bf89.40746157
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

test
--_Part_142255491754ca7725b0bf89.40746157
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 7bit

test
--_Part_142255491754ca7725b0bf89.40746157
Content-Transfer-Encoding: base64
Content-Type: application/pdf; name=file.pdf;
Content-Disposition: attachment; filename=file.pdf;




AWS EC2 and Route 53 domain (transferred from a different provider) linking issues | nslookup doesn't resolve any IP address

I am new to AWS and am having trouble linking my recently transferred domain to the EC2 web application, which is running on Ubuntu.


Configuration


EC2 setup is done. Assigned an Elastic IP to the EC2 instance. Assigned the list of name servers to the recently transferred domain. Created a hosted zone with a new A record with Name: mydomain.com, Alias: no, Value:


When I try to access my web application using the static IP, it works fine. However, with the domain it doesn't seem to resolve the host.


nslookup for the domain gives the result below:


Server: UnKnown
Address: fe80::1


DNS request timed out.
    timeout was 2 seconds.
*** Request to UnKnown timed-out


Any help would be appreciated. Thanks in advance.





AWS signature version 4 sha256 hash not signing correctly?

I am trying to use the AWS signature version 4 to submit a request to S3.


When I submit my request I get the message "The request signature we calculated does not match the signature you provided. Check your key and signing method."


I am not sure where to go from here. I have tried several different hash algorithms (SHA1, MD5) but always get the same response. I have verified the access key and secret key. I just created a new pair on AWS, and it still fails.


Any help is appreciated!





<?php
$date = date('Ymd');
$x_date = $date . "T000000Z";
$credential = AWS_ACCESS_KEY . '/' . $date . '/us-west-2/s3/aws4_request';
$redirect = 'http://ift.tt/18znq87';

$conditions = array(
array('bucket' => 'tracescope'),
array('starts-with', '$key', 'user/user1/'),
array('acl' => 'public-read'),
array('success_action_redirect' => $redirect),
array("starts-with", "\$Content-Type", "image/"),
array("x-amz-credential" => $credential),
array("x-amz-algorithm"=> "AWS4-HMAC-SHA256"),
array("x-amz-date" => $x_date),
);

$policy_b64 = $this->aws->getPolicy(3600 * 24, $conditions);
$signature = hash_hmac('sha256', $policy_b64, AWS_SECRET_KEY);
?>

<form action="http://ift.tt/1Kc7vIi" method="post" enctype="multipart/form-data">
<input type="input" name="key" value="test/${filename}"/><br/>
<input type="hidden" name="acl" value="public-read"/>
<input type="hidden" name="success_action_redirect" value="<?= $redirect; ?>"/>
<input type="input" name="Content-Type" value="image/jpeg"/><br/>
<input type="text" name="X-Amz-Credential" value="<?= $credential; ?>"/>
<input type="text" name="X-Amz-Algorithm" value="AWS4-HMAC-SHA256"/>
<input type="text" name="X-Amz-Date" value="<?= $x_date; ?>"/>
<input type="hidden" name="Policy" value="<?= $policy_b64; ?>" />
<input type="hidden" name="X-Amz-Signature" value="<?= $signature; ?>"/>

<input type="file" name="file"/> <br/>

<input type="submit" name="submit" value="Upload to Amazon S3"/>
</form>






AWS Dynamodb limits for items size and list item count sounds contradictory

DynamoDB documentation [1] clearly states that:



  • "Item size" cannot exceed 400KB.

  • "Number of elements in a list": An attribute of type List can contain more than two billion elements.


I must be misunderstanding something here: if you have 2 billion elements in a list attribute, then the item containing this attribute is surely larger than 400KB, right?


What am I missing?


1- http://ift.tt/1senaCU





Refinerycms: file names lost after migrating files from one AWS S3 bucket to another

This is how I encountered the problem:


Uploaded files to S3


Tried to download the file, it works ok


Migrated the files in that bucket to another bucket (under the same AWS account)


Tried to download the file again, it works, the file is downloaded, but the name of the file is just "file", no extension.


If I change the file name to add the extension, it still opens. So the content of the file is ok, but the file name is lost after migrating to another bucket.


Has anybody encountered this problem before? Or does anyone know any potential causes of this?


Thanks





How to get the latest AWS volume snapshot ID# using python/boto?

I am new to Python; it would be very helpful if someone could share a sample script to get the latest snapshot ID of each AWS volume.


I am using the AWS API.
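
A minimal sketch with the classic boto 2 EC2 interface (the region is an assumption), grouping your own snapshots by volume and keeping the newest one per volume:

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')  # assumed region

latest = {}  # volume_id -> newest snapshot
for snap in conn.get_all_snapshots(owner='self'):
    # start_time is an ISO-8601 string, so string comparison orders by time.
    if snap.volume_id not in latest or snap.start_time > latest[snap.volume_id].start_time:
        latest[snap.volume_id] = snap

for volume_id, snap in latest.items():
    print('%s %s %s' % (volume_id, snap.id, snap.start_time))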





How to deploy to Amazon EC2 through Bitbucket

I have a Django project with my repository on Bitbucket and locally. Now I want to deploy it to an Amazon EC2 instance. I did a git clone and have added it, but I can't pull or push from the EC2 instance. What do I have to do to push, or at least pull, from origin master?


I guess it's a very simple question, but I am a newbie and very confused. Please elaborate in your answer. I would be very grateful if you could help me. Thank you.





AWS, Amazon, Cognito and DynamoDB from a NodeJS REST API

We are developing a REST API in NodeJS and use DynamoDB as the database.


I intend to use Cognito to handle auth for multiple accounts, but the examples always first get the Cognito ID and later use DynamoDB. Is it not possible to send the Cognito ID to the client and use this ID (or something similar, like a credential) to access DynamoDB without re-fetching the Cognito ID?



FB.login(function (response) {
    if (response.authResponse) { // logged in
        AWS.config.credentials = new AWS.CognitoIdentityCredentials({
            IdentityPoolId: 'us-east-1:1699ebc0-7900-4099-b910-2df94f52a030',
            Logins: {
                'graph.facebook.com': response.authResponse.accessToken
            }
        });

        AWS.config.credentials.get(function (err) {
            console.log('You are now logged in.');
            // Here I can use Dynamo without any problem,
            // but I always need to save the Facebook token ID and get the Cognito ID
            // for ALL the connections.
            // There is no way to save the Cognito ID and use that.
            // The ID of my users in Dynamo is the Cognito ID.
            var db = new AWS.DynamoDB();
            db.listTables(function (err, data) {
                console.log(data.TableNames);
            });
        });
    } else {
        console.log('There was a problem logging you in.');
    }
});




Trouble installing cPanel

I am trying to install the latest version of cPanel on a server running CentOS 6.6, and it is failing here:



[20150129.112152] Testing if it's possible to install a simple RPM
[20150129.112152] Retrieving http://ift.tt/1LlKoyo
[20150129.112152] Preparing... ##################################################
[20150129.112152] rpm_is_working ##################################################
[20150129.112152] Now removing the RPM
info [updatenow] upcp Notification => root@server.hollmanmedia.com via EMAIL [level => 1]
Cpanel::iContact: icontact /usr/sbin/sendmail is not executable by 0
[20150129.112152] W An attempt to up/downgrade to 11.46.2.4 was blocked. Please review blockers.
Can't exec "/usr/local/cpanel/scripts/cpanel_initial_install": No such file or directory at
/home/cPanelInstall/selfgz11290/install line 146.
2015-01-29 11:21:52 148 (FATAL): Failure to exec /usr/local/cpanel/scripts/cpanel_initial_install
Removing /root/installer.lock


Does anyone have any ideas? I am trying to install this on an AWS EC2 instance running CentOS 6.6. I tried opening my security group so that all traffic is allowed, but this did not help.





AWS signature for signed URL

I came across a .NET example of how to compute the signature needed for signing AWS API requests.


I am working on a Windows Phone 8 app and got stuck on the line



KeyedHashAlgorithm kha = KeyedHashAlgorithm.Create(algorithm);


It appears that Windows Phone 8 does not have the Create method; the error is below:



'System.Security.Cryptography.KeyedHashAlgorithm' does not contain a definition for Create



Is there an alternative way around this?


Here is the complete code snippet



public static byte[] HmacSHA256(String data, byte[] key)
{
String algorithm = "HmacSHA256";
KeyedHashAlgorithm kha = KeyedHashAlgorithm.Create(algorithm);
kha.Key = key;

return kha.ComputeHash(Encoding.UTF8.GetBytes(data));
}



static byte[] getSignatureKey(String key, String dateStamp, String regionName, String serviceName)
{
byte[] kSecret = Encoding.UTF8.GetBytes(("AWS4" + key).ToCharArray());
byte[] kDate = HmacSHA256(dateStamp, kSecret);
byte[] kRegion = HmacSHA256(regionName, kDate);
byte[] kService = HmacSHA256(serviceName, kRegion);
byte[] kSigning = HmacSHA256("aws4_request", kService);

return kSigning;
}




Java - Create a directory and add files from AWS

I am trying to retrieve files from AWS and store them in a directory. I keep getting the following error:



java.io.FileNotFoundException: C:\Users\Matthew\AppData\Local\Temp\UPDATE_TEMP_5190592302690883358\aws (Access is denied)


Here is the code where it throws the error:



File outputFile = new File(tmpDirPath + File.separator + "aws");

outputFile.mkdirs();

FileOutputStream fos = new FileOutputStream(outputFile);


It complains when creating the FileOutputStream. I have tried outputFile.getParentFile().mkdirs(); as well, but then aws is created as a file instead of a directory. Can someone explain why it throws access denied on the directory?





Amazon SQS Node.js worker

I've created a Node.js worker that processes the jobs stored in my queue. Every time a job comes into the SQS service I get a POST request to my Node.js worker; after that I try to read the messages from the queue, but the only thing I get is:



{"ResponseMetadata":{"RequestId":"9d6121e0-4241-5bc8-a000-6ca41ae431f9"}}


My question is: why can't I get the messages (SQS says they are in flight)? And secondly, since I know I get my message when the SQS service makes the POST request, why is there no ReceiptHandle in that request that I could use to delete the message?


my worker.js



AWS.config.update({
accessKeyId: AWS_CONFIG.ACCESS_KEY_ID,
secretAccessKey: AWS_CONFIG.SECRET_ACCESS_KEY,
region: AWS_CONFIG.REGION
});

var sqs = new AWS.SQS();

var i = 0;
router.post('/', function(req, res, next){

if (!dataIsSet(req.body)){
res.status(569).end();
return;
}
var msgId = req.header('x-aws-sqsd-msgid'),
name = req.body.name,
message = req.body.data;

console.log('Headers : ');
console.log(JSON.stringify(req.headers));
console.log('============ Request came in ============' + (i++) );
res.status(200).send('Msg');
var data = {};

sqs.receiveMessage({
QueueUrl: AWS_CONFIG.SQS.QUEUE_URL,
MaxNumberOfMessages: 10,
VisibilityTimeout: 20,
WaitTimeSeconds: 0
}, function (err, response) {

console.log('============ Message from queue : ============');
console.log(JSON.stringify(response));
if (err) {
console.log(JSON.stringify(err));
//callback(err);
} else {
console.log(JSON.stringify(response));
data = response;
}
});


});
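
For reference, this is how the receive/delete cycle normally uses the receipt handle in boto (a minimal sketch; the region and queue name are placeholders):


import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.get_queue('my-queue')  # hypothetical queue name

messages = queue.get_messages(num_messages=1, visibility_timeout=20)
for message in messages:
    print(message.get_body())
    # delete_message passes the message's receipt handle back to SQS
    queue.delete_message(message)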


Thanks!





Tagging AWS ElastiCache Nodes

Does anyone know a way to add tags to an ElastiCache node, like you're able to do with EC2 and RDS instances?



  • Is this not a feature?

  • If it is possible, is it available in the console, the command line tools, or both?





AWS Elastic Beanstalk Node.js Socket.io 400 (Bad Request)

I have looked around and attempted to resolve this, but I still cannot figure it out.


I set my AWS Elastic Beanstalk instance to run with 2 server instances and now I'm getting the following:


WebSocket connection to 'ws://.com/socket.io/?EIO=3&transport=websocket&sid=ygwpRoYU4e_WYr7QAADQ' failed: Error during WebSocket handshake: Unexpected response code: 400
socket.io.js:2919 POST http://.com/socket.io/?EIO=3&transport=polling&t=1422547055741-674&sid=ygwpRoYU4e_WYr7QAADQ 400 (Bad Request)
socket.io.js:2919 GET http://.com/socket.io/?EIO=3&transport=polling&t=1422547056603-676&sid=hDkZib1rxiodV10EAADR 400 (Bad Request)
socket.io.js:3509 WebSocket connection to 'ws://.com/socket.io/?EIO=3&transport=websocket&sid=hDkZib1rxiodV10EAADR' failed: WebSocket is closed before the connection is established.


Does anyone have any idea on how to resolve this?


Your help is really appreciated.





Running different stacks on same cloud instance

I'm running a Python-based application on an AWS instance, but now I want to install a WordPress (PHP) blog as a sub-domain or sub-folder addition to the application. Is it technically possible to run two applications with different stacks on a single cloud instance? I'm currently getting an inscrutable error installing the WordPress package with the Yum installer.





How to configure and run cronjob on AWS ElasticBeanStalk?

I have deployed my Laravel project on AWS Elastic Beanstalk and I have a cron job task. How do I configure it to run? I have seen multiple answers, but they are either no longer valid, too complex, or not well described. Is there any sort of guide or set of steps I can follow?


In many hosting services there is a cPanel where I could easily configure a cron job by stating the path to the script, but I can't find anything like that on AWS.





AWS: running multiple servers of the same type on the same instance?

AWS says that when we deploy multiple applications of the same type, it is better to deploy them on the same server instances.


I am not sure whether this is a best practice for deployment. Are there any further references for that?


http://ift.tt/1JMqLOn


Running Multiple Applications on the Same Application Server


If you have multiple applications of the same type, it is sometimes more cost-effective to run them on the same application server instances.


To run multiple applications on the same server


Add an app to the stack for each application.


Obtain a separate subdomain for each app and map the subdomains to the application server's or load balancer's IP address.


Edit each app's configuration to specify the appropriate subdomain.


For more information on how to perform these tasks, see Using Custom Domains.





Best combination of EC2 instance family and instance type for a PHP application with a large user base

I have gone through the Amazon documentation about instance families and instance types before selecting one, but I am still confused, and I have not found a related question on any platform. Here is my scenario:


I am in the process of developing a Cricket World Cup application that will have thousands of users in the first week after launch; let's say 50,000 (possibly up to 100,000) users on the website at one time. I want to select the right instance family and instance type up front, as I will not have time to scale up if the site goes down in the middle of the event.


Can anyone suggest the best combination? I am using the Drupal CMS to develop this application, if that helps in suggesting the right combination.





Ruby on Rails Direct AWS S3 Upload with JqueryFileUpload

I want to build a simple file upload system for my website (only I will be accessing it and uploading) to populate my portfolio page. My website is built on Ruby on Rails and hosted on Heroku.


So I was following the Heroku tutorial on uploading images to S3, which uses the aws-sdk gem. After working through the tutorial, when I try to upload a simple .png file I receive the following error:



Bad Request 400: Bucket POST must contain a field named 'key'. If it is specified, please check the order of the fields.


PortfolioController



def new
@s3_direct_post = S3_BUCKET.presigned_post(key: "${filename}", success_action_status: 201, acl: :public_read)
@portfolio = Portfolio.new()
end


Checking the JavaScript formData value in the view:



...
fileInput.fileupload({
formData: '<%=@s3_direct_post.fields.to_json.html_safe %>',
fileInput: fileInput,
url: '<%=@s3_direct_post.url%>',
type: 'POST',
autoUpload: true,
paramName: 'file',
dataType: 'XML',
replaceFileInput: false,

...


gives:



{
"AWSAccessKeyId"=>"my-access-key",
"key"=>"${filename}",
"policy"=> "long-string",
"signature"=>"randomg-signature-string",
"success_action_status"=>"201",
"acl"=>"public-read"
}


I've also tried adding the following to sync things up, as shown in /config/initializers/aws.rb:



AWS.config(access_key_id: ENV['AWS_ACCESS_KEY_ID'],
secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])

AWS::S3.const_set('DEFAULT_HOST', "s3-ap-southeast-1.amazonaws.com")

S3_BUCKET = AWS::S3.new.buckets[ENV['S3_BUCKET']]


After looking through Google and Stack Overflow, it seems that jQuery might be rebuilding the form data, and hence messing up the order of the POST fields.


The problem is that I'm relatively new to Ruby on Rails and JavaScript, so I'm not sure how to go about fixing this.


Any advice is appreciated. Thanks!





Docker Error - "jq: error: Cannot iterate over null"

So I'm trying to deploy a Dockerfile on Elastic Beanstalk, but I can't get past this error: "jq: error: Cannot iterate over null".



Successfully built [myContainerId]
Successfully built aws_beanstalk/staging-app
[2015-01-29T10:35:59.494Z] INFO [16343] - [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/04run.sh] : Starting activity...
[2015-01-29T10:36:05.507Z] INFO [16343] - [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/04run.sh] : Activity execution failed, because: command failed with error code 1: /opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh
jq: error: Cannot iterate over null
Docker container quit unexpectedly after launch: Docker container quit unexpectedly on Thu Jan 29 10:36:05 UTC 2015:. Check snapshot logs for details. (Executor::NonZeroExitStatus)
at /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.1.0/gems/executor-1.0/lib/executor/exec.rb:81:in `sh'
from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.1.0/gems/executor-1.0/lib/executor.rb:15:in `sh'
from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.1.0/gems/beanstalk-core-1.1/lib/elasticbeanstalk/executable.rb:63:in `execute!'
from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.1.0/gems/beanstalk-core-1.1/lib/elasticbeanstalk/hook-directory-executor.rb:29:in `block (2 levels) in run!'
from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.1.0/gems/beanstalk-core-1.1/lib/elasticbeanstalk/activity.rb:169:in `call'
from /opt/elasticbeanstalk/lib/ruby/lib/ruby/gems/2.1.0/gems/beanstalk-core-1.1/lib/elasticbeanstalk/activity.rb:169:in `exec'


There aren't any other errors in the logs. My Docker container is successfully built, so it seems unlikely the error is coming from there.


My Dockerrun.aws.json looks like :



{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "blah",
"Update": "false"
},
"Ports": [
{
"ContainerPort": "8080"
}
]
}


I'm banging my head against a wall with this one; nothing I change seems to affect it, and Googling hasn't been of any help.


Any ideas?





How to test if my gateway can communicate with an AWS server

I have an instance running healthily on AWS, and I need to write some form of Linux script that will be deployed on my gateway (router). This script has to test whether the gateway can communicate with the instance. I thought about using tools such as ping and traceroute, but then I realized that as long as the gateway has an internet connection, it will successfully ping the server.


So what approach can I implement to test this?
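
Would a TCP check against a port the instance actually serves be a better approach? A minimal sketch in Python (the host and port are placeholders):


import socket

def can_reach(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except (socket.timeout, socket.error):
        return False

print(can_reach('ec2-xx-xx-xx-xx.compute-1.amazonaws.com', 22))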


Many Thanks





Amazon Redshift UNLOAD encrypted option

For unloading data from Amazon Redshift, I want to use the client-side encryption feature. Does anyone know of a key generation tool for this?
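
If the documentation is to be believed, UNLOAD with the ENCRYPTED option expects a base64-encoded 256-bit AES key as the master symmetric key, so any cryptographically secure random source should do. A minimal sketch in Python:


import base64
import os

# 32 random bytes = a 256-bit AES key, base64-encoded for the credentials string
master_symmetric_key = base64.b64encode(os.urandom(32))
print(master_symmetric_key)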





Amazon EMR runs slower than an EC2-based Hadoop cluster

I am running a streaming Hadoop job written in Python on EMR as well as on a self-made Hadoop cluster on top of Amazon EC2 instances. My input is divided into around 50,000 files, and the mapper reads each file as an input. When I ran the job on the EC2 cluster it took 36 minutes for 1,000 files, while EMR took 1 hour 30 minutes. Jobs with larger inputs even seem to fail on EMR, while they run fine on EC2.


The EMR job ran with 4 m3.xlarge instances and EC2 with 4 m1.large instances. For instance info look here


I have gone through this link as well as this link to find the comparison between EMR and EC2.


Now, I am looking for:

  • the reason EMR is slow compared to EC2;

  • whether EMR keeps the input in S3 only, or copies it to HDFS and then runs the job;

  • whether EC2 is the better choice and, if so, what types of jobs are more suitable on EC2.


I would really appreciate references to some links, similar experiences, or any relevant documents.


P.S.: I am a beginner with AWS and some of my questions may sound silly. But as someone once said, no question is silly :).





Connecting to MySQL Database - GAE - AWS - Maven

I am receiving data in a Java servlet (Maven) and want to store it in a MySQL database hosted at Amazon. The application is hosted at Google.


I receive the error message "no suitable driver found for..."


I already added the mysql-connector-java-5.1.18-bin.jar to the folder WEB-INF/lib


This is my code:



package com.example.mail;

import java.io.IOException;
import java.util.Properties;
import javax.mail.Session;
import javax.mail.internet.MimeMessage;
import javax.servlet.http.*;
import javax.mail.Address;
import java.sql.Statement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.logging.Logger;

public class MailHandlerServlet extends HttpServlet {
@Override
public void doPost(HttpServletRequest req, HttpServletResponse resp)
throws IOException {
Properties props = new Properties();
Session session = Session.getDefaultInstance(props, null);

try{
MimeMessage message = new MimeMessage(session, req.getInputStream());
String summary = message.getSubject();
Address[] addresses = message.getFrom();
String text = message.getContent().toString();
System.out.println("Subject: " + summary);
System.out.println("Sender: " + addresses);
System.out.println("Text: " + text);

} catch (Exception e) {
e.printStackTrace();
}

String connectionUrl = "jdbc:mysql://aws-address:port/table";
String dbUser = "user";
String dbPwd = "pw";
Connection conn = null;

try {
conn = (Connection) DriverManager.getConnection(connectionUrl, dbUser, dbPwd);
System.out.println("conn Available");
} catch (SQLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
System.out.println("fetch otion error"+e.getLocalizedMessage());
}
}
}


Thank you for your help!





mercredi 28 janvier 2015

How to use local machine localhost services through boot2docker?

Amazon AWS doesn't allow ElastiCache/Redis instances to be accessed from outside EC2. This means my Docker containers need to reference the Redis instance running on my local Mac for dev and testing.


But how do I map the Redis server running on port 6379 on my localhost into my container, so that I can point the ENV config at it?





Use bootstrap to replace default jar on EMR

I am on an EMR cluster with AMI 3.0.4. Once the cluster is up, I SSH to the master and run the following manually:




cd /home/hadoop/share/hadoop/common/lib/
rm guava-11.0.2.jar
wget http://ift.tt/1yB8Lhz
chmod 777 guava-14.0.1.jar




Is it possible to do the above in a bootstrap action? Thanks!
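
For what it's worth, a bootstrap action is just a script in S3 that runs on each node before Hadoop starts, so the four commands above could live in such a script. A minimal boto sketch (the bucket, script name, and instance settings are hypothetical):


import boto.emr
from boto.emr.bootstrap_action import BootstrapAction

conn = boto.emr.connect_to_region('us-east-1')

# replace-guava.sh would contain the four commands shown above
action = BootstrapAction('replace-guava',
                         's3://my-bucket/replace-guava.sh', [])

jobflow_id = conn.run_jobflow(name='cluster-with-guava-14',
                              ami_version='3.0.4',
                              master_instance_type='m1.medium',
                              slave_instance_type='m1.medium',
                              num_instances=3,
                              bootstrap_actions=[action])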





Automate a data compiler with PHP (preferably)

I get data from a few APIs; it needs to be compiled and summarized daily and put on display. I have a PHP file that does this for me, after which I switch which table the index.php file displays the summarized data from.


The problem is, I previously had a basic HostGator plan where I could just edit the files freely, even inline in cPanel. I have recently switched to AWS and use Elastic Beanstalk with load balancing and auto-scaling (now it's next to impossible to edit a file inline while it's live on AWS). Is there a way to get a PHP script to run on a timer, so to speak?


Does AWS have an application for this? If so, does it work with PHP? There must be something like that out there. Thanks!





Amazon's Simple Email Service not Working in PHP

I recently decided to use vanilla PHP rather than a framework, mainly for speed reasons, and quickly ran into the problem of sending email. I previously sent all my email through Amazon's Simple Email Service, but right now it doesn't seem to work.


My code looks like:



$ses = new SimpleEmailService($id, $secret,'email-smtp.us-west-2.amazonaws.com');

$m = new SimpleEmailServiceMessage();
$m->addTo($_POST['email']);
$m->setFrom($main_email);
$m->setSubject('Hello, world!');
$m->setMessageFromString('This is the message body.');

$n = $ses->sendEmail($m);
var_dump($n);


I am currently using this PHP SES library http://ift.tt/1nPLBVQ as I do not want to use the Amazon SDK.


The error I typically get is: "Could not resolve host email-stmp.us-..."


UPDATE:


I fixed the typo, and now I'm getting a whole bunch of other errors.





ZooKeeper quorum issue with an external HBase client when running HBase on Amazon EMR

I am running HBase on Amazon EMR.



<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property><name>fs.hdfs.impl</name><value>emr.hbase.fs.BlockableFileSystem</value></property>
<property><name>hbase.regionserver.handler.count</name><value>100</value></property>
<property><name>hbase.zookeeper.quorum</name><value>ip-xx-xxx-aa-aa.us-west-1.compute.internal</value></property>
<property><name>hbase.rootdir</name><value>hdfs://xx.xxx.aa.aa:9000/hbase</value></property>
<property><name>hbase.cluster.distributed</name><value>true</value></property>
<property><name>hbase.tmp.dir</name><value>/mnt/var/lib/hbase/tmp-data</value></property>
<property><name>hbase.master.wait.for.log.splitting</name><value>true</value></property>
</configuration>


The above is the configuration. Now I am trying to start a new HBase client using:



val zk_quoroum = "xx.xxx.aa.aa"
val hBaseClient = new HBaseClient(zk_quoroum)


Somehow, I am not able to get a connection to ZooKeeper:



6:04:54.238 [main-SendThread()] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server /xx.xxx.aa.aa:2181
16:04:59.264 [main-SendThread(xx.xxx.aa.aa:2181)] INFO org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 5026ms for sessionid 0x0, closing socket connection and attempting reconnect


The reconnect keeps trying but never gets a connection. Does that have something to do with the fact that the quorum address is an internal URL? The client doesn't live in AWS. Has anyone encountered this before?





AWS Creating POST policy base64 encoded + signature

I am trying to generate an AWS POST policy with a signature in PHP, which will be sent to the client to allow uploading to S3 from the browser via JavaScript AJAX.


I have copied the example at http://ift.tt/1mEUcKx


I have 2 problems:




  1. I cannot generate a correctly encoded base64 string from the policy.



    $policy = '{ "expiration": "2013-08-06T12:00:00.000Z", "conditions": [{"bucket": "examplebucket"}, ["starts-with", "$key", "user/user1/"], {"acl": "public-read"}, {"success_action_redirect": "http://ift.tt/1yp4NeI"}, ["starts-with", "$Content-Type", "image/"], {"x-amz-meta-uuid": "14365123651274"}, ["starts-with", "$x-amz-meta-tag", ""], {"x-amz-credential": "AKIAIOSFODNN7EXAMPLE/20130806/us-east-1/s3/aws4_request"}, {"x-amz-algorithm": "AWS4-HMAC-SHA256"}, {"x-amz-date": "20130806T000000Z"}]}';
    $base64 = base64_encode($policy);
    //Result
    //eyAiZXhwaXJhdGlvbiI6ICIyMDEzLTA4LTA2VDEyOjAwOjAwLjAwMFoiLCAiY29uZGl0aW9ucyI6IFt7ImJ1Y2tldCI6ICJleGFtcGxlYnVja2V0In0sIFsic3RhcnRzLXdpdGgiLCAiJGtleSIsICJ1c2VyL3VzZXIxLyJdLCB7ImFjbCI6ICJwdWJsaWMtcmVhZCJ9LCB7InN1Y2Nlc3NfYWN0aW9uX3JlZGlyZWN0IjogImh0dHA6Ly9hY2w2LnMzLmFtYXpvbmF3cy5jb20vc3VjY2Vzc2Z1bF91cGxvYWQuaHRtbCJ9LCBbInN0YXJ0cy13aXRoIiwgIiRDb250ZW50LVR5cGUiLCAiaW1hZ2UvIl0sIHsieC1hbXotbWV0YS11dWlkIjogIjE0MzY1MTIzNjUxMjc0In0sIFsic3RhcnRzLXdpdGgiLCAiJHgtYW16LW1ldGEtdGFnIiwgIiJdLCB7IngtYW16LWNyZWRlbnRpYWwiOiAiQUtJQUlPU0ZPRE5ON0VYQU1QTEUvMjAxMzA4MDYvdXMtZWFzdC0xL3MzL2F3czRfcmVxdWVzdCJ9LCB7IngtYW16LWFsZ29yaXRobSI6ICJBV1M0LUhNQUMtU0hBMjU2In0sIHsieC1hbXotZGF0ZSI6ICIyMDEzMDgwNlQwMDAwMDBaIn1dfQ==



This is my utf8_encode()'d policy. When I try to base64-encode it, the result is different from the example's base64 policy, and I can't seem to get it to match no matter what I do. I did notice that changing the date to 2013-08-07T12:00:00.000Z makes it match for that part of the encoded string.



  2. I cannot generate a correct signature via SHA-256 using a correctly encoded base64 policy.


Creating the Signature from the example's base64 encoded policy:



//Using this example secret key:
$secret = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY';
$policy = 'eyAiZXhwaXJhdGlvbiI6ICIyMDEzLTA4LTA3VDEyOjAwOjAwLjAwMFoiLA0KICAiY29uZGl0aW9ucyI6IFsNCiAgICB7ImJ1Y2tldCI6ICJleGFtcGxlYnVja2V0In0sDQogICAgWyJzdGFydHMtd2l0aCIsICIka2V5IiwgInVzZXIvdXNlcjEvIl0sDQogICAgeyJhY2wiOiAicHVibGljLXJlYWQifSwNCiAgICB7InN1Y2Nlc3NfYWN0aW9uX3JlZGlyZWN0IjogImh0dHA6Ly9leGFtcGxlYnVja2V0LnMzLmFtYXpvbmF3cy5jb20vc3VjY2Vzc2Z1bF91cGxvYWQuaHRtbCJ9LA0KICAgIFsic3RhcnRzLXdpdGgiLCAiJENvbnRlbnQtVHlwZSIsICJpbWFnZS8iXSwNCiAgICB7IngtYW16LW1ldGEtdXVpZCI6ICIxNDM2NTEyMzY1MTI3NCJ9LA0KICAgIFsic3RhcnRzLXdpdGgiLCAiJHgtYW16LW1ldGEtdGFnIiwgIiJdLA0KDQogICAgeyJ4LWFtei1jcmVkZW50aWFsIjogIkFLSUFJT1NGT0ROTjdFWEFNUExFLzIwMTMwODA2L3VzLWVhc3QtMS9zMy9hd3M0X3JlcXVlc3QifSwNCiAgICB7IngtYW16LWFsZ29yaXRobSI6ICJBV1M0LUhNQUMtU0hBMjU2In0sDQogICAgeyJ4LWFtei1kYXRlIjogIjIwMTMwODA2VDAwMDAwMFoiIH0NCiAgXQ0KfQ==';


<?= hash_hmac('sha256', $policy, $secret); ?>

//Resulting Signature
d8ddc156c5d681b42c40a4224c07cdd64b938def8e8c34d616806175cb3c7119

//Signature in Example
21496b44de44ccb73d545f1a995c68214c9cb0d41c45a17a5daeec0b1a6db047


I'm not sure what I am missing here. I also have the PHP SDK, but I was unable to find a way to extract the policy and signature so I can send them to the JavaScript in the browser. Is there a way to generate a policy with my specified conditions from the PHP SDK? I have looked around on the web and in the SDK itself and came up empty...
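
Two things may explain the mismatches, judging from the SigV4 POST documentation. First, the example's base64 policy encodes the documentation's pretty-printed JSON, complete with line breaks and indentation (the recurring "CiAg" sequences decode to a newline plus spaces), so a reflowed single-line policy can never match it byte-for-byte. Second, with AWS4-HMAC-SHA256 the signature is computed over the base64 policy using a signing key derived from the secret, not the raw secret itself. A minimal sketch of that derivation in Python, using the example's date, region, and service:


import hashlib
import hmac

def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

secret = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
policy_b64 = '...'  # paste the full base64 policy from the example above

# Derive the signing key from the credential scope, then sign the policy
k_date = sign(('AWS4' + secret).encode('utf-8'), '20130806')
k_region = sign(k_date, 'us-east-1')
k_service = sign(k_region, 's3')
k_signing = sign(k_service, 'aws4_request')

print(hmac.new(k_signing, policy_b64.encode('utf-8'), hashlib.sha256).hexdigest())


Signing the example's own base64 policy with this derived key should reproduce the documented signature.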





AWS: "ERROR: The request could not be satisfied."

I am trying to use curl to push the images of my Rails project to my AWS CloudFront distribution with a POST request, but I get an error.


Here is how I made the request:



curl --data @general.png http://ift.tt/1ttIYw1


And this is the error i get:



ERROR: The request could not be satisfied
This distribution is not configured to allow the HTTP request method that was used for this request. The distribution supports only cachable requests.


I don't know if I need to create a bucket for my distribution. I also don't know where (or whether) I can set up my distribution to allow this kind of approach to uploading objects.





AWS Certified Solutions Architect - Associate

I would like to take the AWS Certified Solutions Architect - Associate certification exam.


Does anyone have a list of training courses, videos, or books that would be useful in preparing for the above test?


Thank you!





Fail to run Java Spark on EMR

I got a simple Java Spark program running locally, but it fails on Amazon EMR. I tried both AMI 3.2.1 and AMI 3.3.1, with the same errors on both. The failing code is at the JavaSparkContext below:



public static void main(String[] args) throws Exception {

if (args.length < 1) {
System.err.println("Usage: JavaWordCount <file>");
System.exit(1);
}

SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
sparkConf.setMaster("local").set("spark.executor.memory", "1g");
JavaSparkContext ctx = new JavaSparkContext(sparkConf);
:




The error I got is:



Exception in thread "main" java.lang.NoSuchMethodError: scala.collection.immutable.HashSet$.empty()Lscala/collection/immutable/HashSet;
at akka.actor.ActorCell$.<init>(ActorCell.scala:305)
at akka.actor.ActorCell$.<clinit>(ActorCell.scala)
at akka.actor.RootActorPath.$div(ActorPath.scala:152)
at akka.actor.LocalActorRefProvider.<init>(ActorRefProvider.scala:465)
at akka.remote.RemoteActorRefProvider.<init>(RemoteActorRefProvider.scala:124)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
at scala.util.Try$.apply(Try.scala:191)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
at scala.util.Success.flatMap(Try.scala:230)
at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:84)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:550)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:111)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:104)
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:104)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:152)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:202)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:53)
at examples.JavaSparkProfile.main(JavaSparkProfile.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)




Does anyone have any idea?


Thanks a lot!