Tuesday, March 31, 2015

Creating Signed Cookies for Amazon CloudFront

Amazon recently introduced CloudFront signed cookies in addition to signed URLs.


A similar question has been asked about signed URLs. Apparently there is support for signed URLs in the CloudFront SDK.


However, I cannot find support for this feature in the AWS Python SDK.


How can I go about creating a signed cookie?
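
For reference, a minimal sketch of building the three canned-policy cookies by hand, since boto doesn't expose a helper for this; it assumes the third-party rsa package, and the key-pair ID and private-key path are placeholders:

import base64
import json
import time

import rsa  # third-party 'rsa' package (assumption: available)


def make_signed_cookies(resource_url, key_pair_id, private_key_path, expires_in=3600):
    # Canned policy: allow access to resource_url until the expiry time
    expires = int(time.time()) + expires_in
    policy = json.dumps(
        {"Statement": [{"Resource": resource_url,
                        "Condition": {"DateLessThan": {"AWS:EpochTime": expires}}}]},
        separators=(',', ':'))  # no whitespace in the policy

    with open(private_key_path) as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())

    # RSA-SHA1 signature of the policy, base64 encoded with CloudFront's URL-safe substitutions
    signature = base64.b64encode(rsa.sign(policy, private_key, 'SHA-1'))
    signature = signature.replace('+', '-').replace('=', '_').replace('/', '~')

    return {
        'CloudFront-Expires': str(expires),
        'CloudFront-Signature': signature,
        'CloudFront-Key-Pair-Id': key_pair_id,
    }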





Giving anonymous access to AWS SQS queue using the Java SDK

I am trying to provide anonymous access to an SQS queue by providing * as principal but the API rejects it. I am using the AWS Java SDK.



List<String> principal = new ArrayList<String>();
principal.add("*");

List<String> actions = new ArrayList<String>();
actions.add("*");


sqsClient.addPermission(queueUrl,"realtimeEvents",principal,actions);


This throws the following exception:



Error Message: Value [*] for parameter PrincipalId is invalid. Reason: Unable to verify. (Service: AmazonSQS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: c749bd43-a485-508d-ba0d-f0d6dd92af7b)


'*' is a valid input when defining the policy file, as well as when using the console UI to grant access. Any idea how to make this work?
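
AddPermission takes AWS account IDs as principals, which appears to be why "*" is rejected. One workaround, sketched here with boto3 rather than the Java SDK purely for illustration (queue URL and ARN are placeholders), is to write a wildcard-principal policy through SetQueueAttributes instead; the Java equivalent would go through a SetQueueAttributesRequest.

import json

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'  # placeholder

# Wildcard principal applied via the queue's Policy attribute instead of AddPermission
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "realtimeEvents",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue"  # placeholder
    }]
}

sqs.set_queue_attributes(QueueUrl=queue_url,
                         Attributes={'Policy': json.dumps(policy)})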





How to scope Amazon autocomplete suggestions to books

I am using Amazon auto-complete suggestions on my site. It is working well, but I want to scope it to books only. My code is as follows:



$(document).ready(function () {
    jQuery('input.autocomplete').autocomplete({
        source: function (request, response) {
            jQuery.ajax('http://ift.tt/1d55rlk', {
                cache: true,
                data: {
                    client: 'amazon-search-ui',
                    mkt: 1,
                    'search-alias': 'aps', // 'aps' = all departments; amazon.com's book search alias is 'stripbooks'
                    q: request.term
                },
                error: function () {
                    response([]);
                },
                success: function (data) {
                    response(data[1]);
                },
                dataType: 'jsonp'
            });
        }
    });
});


This code is applied to the autocomplete CSS class. If you know anything about this, please let me know.





Executing a java file in PHP

Hi, I have a Java file that I want to execute from a PHP page. I have a PHP file with the following content:



<?php
exec('java sendMail');
?>


The PHP file is being run on AWS Elastic Beanstalk, and when I run the Java file in Eclipse, it works. Any ideas?





Amazon Redshift doing Hash Join even when joined on column that is both Dist Key and Sort Key

I have a fact table in Redshift having about 1.3 Billion rows with DISTribution key c1 and sort key c1, c2.


I need to join this table with itself with a join clause on c1 (i.e. c1 from 1st instance of table = c1 from 2nd instance of table).


Looking at the query plan, Redshift appears to be doing a Hash Join with DS_DIST_NONE. DS_DIST_NONE is expected, since I have both the dist key and sort key on column c1, but for the same reason I expected Redshift to do a Merge Join instead of a Hash Join.


I believe this is slowing down my query.


Can anyone please explain why Redshift may be doing a Hash Join instead of a Merge Join (even though I have both the DIST key and SORT key on the joining column) while still using DS_DIST_NONE for the query?





Get the subscription email from a specific topic

I'm trying to get the email address of a subscriber to a specific topic.


But with the code below I'm getting a dictionary, and I just want the email:



import boto.sns

conn = boto.sns.connect_to_region("us-east-1")
test = conn.get_all_subscriptions_by_topic("arn:aws:sns:us-east-2:8274142742:testtopic")
print test


I was trying something like this:



test2 = test['ListSubscriptionsByTopicResponse']['ListSubscriptionsByTopicResult']['Subscriptions'][0]


But again, I don't get only the email.


Do you know how I can get only the email here?
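
The subscriptions come back as a list of dicts under that nested response wrapper; a small sketch (assuming the standard SNS subscription keys, Protocol and Endpoint) of pulling out just the email addresses:

import boto.sns

conn = boto.sns.connect_to_region("us-east-1")
resp = conn.get_all_subscriptions_by_topic("arn:aws:sns:us-east-2:8274142742:testtopic")

subs = resp['ListSubscriptionsByTopicResponse'] \
           ['ListSubscriptionsByTopicResult']['Subscriptions']

# For email subscriptions the address is stored in 'Endpoint'
emails = [s['Endpoint'] for s in subs if s.get('Protocol') == 'email']
print emails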





How to save S3 object to a file using boto3

I'm trying to do a "hello world" with the new boto3 client for AWS.


The use case I have is fairly simple: get an object from S3 and save it to a file.


In boto 2.X I would do it like this:



import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')


In boto3, I can't find a clean way to do the same thing, so I'm manually iterating over the "Streaming" object:



import boto3

key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
    chunk = key['Body'].read(1024*8)
    while chunk:
        f.write(chunk)
        chunk = key['Body'].read(1024*8)


And it works fine. I was wondering: is there any "native" boto3 function that will do the same task?
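
For what it's worth, later boto3 releases added managed transfer helpers; assuming a boto3 version that includes them, the manual loop collapses to a single call:

import boto3

s3 = boto3.resource('s3')
# Streams the object to disk in chunks (and uses multipart downloads for large files)
s3.Bucket('fooo').download_file('docker/my-image.tar.gz', '/tmp/my-image.tar.gz')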





Amazon Europe Web Services - Invalid Ids Presented

I am trying to connect to Amazon Web Services Europe. I have opened an account with Amazon for Europe and obtained an AWSAccessKeyId and an AWSSecretKey for Amazon Europe. My customer has sent over their Amazon Europe credentials. Is something wrong with my request?


Request



POST http://ift.tt/19zikZA HTTP/1.1
User-Agent: CloudCartConnector/1 (Language=C#; CLI=4.0.30319.18444; Platform=Win32NT/6.1.7601.65536; MWSClientVersion=2014-09-30)
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Host: mws-eu.amazonservices.com
Content-Length: 359
Expect: 100-continue
Connection: Keep-Alive

AWSAccessKeyId=XX&Action=ListOrders&LastUpdatedAfter=2015-03-31T20%3A31%3A38Z&LastUpdatedBefore=2015-03-31T20%3A31%3A52Z&MarketplaceId.Id.1=XX&SellerId=XX&Signature=XX&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2015-03-31T20%3A36%3A52Z&Version=2013-09-01


Response



<?xml version="1.0"?>
<ErrorResponse xmlns="http://ift.tt/PeLt2L">
  <Error>
    <Type>Sender</Type>
    <Code>InvalidParameterValue</Code>
    <Message>Invalid ids presented</Message>
  </Error>
  <RequestId>7f89f886-b946-43d6-8bf5-bda9d03df186</RequestId>
</ErrorResponse>




Elasticsearch cluster CPU maxing out at 100 request per second search queries

The problem: I'm running an Elasticsearch cluster with four indices; one has 4.5 million documents and another has 13 million. The other indices are the Marvel and Kibana indices, which are very small.


Whenever I get to about 150 queries per second with JMeter (a testing framework), the CPU maxes out. The more I turn it up, the more the CPU maxes out.


From everything I've read online about performance tuning, the bottleneck is usually memory, but our boxes run out of CPU well before memory, causing 6-second response times during the test.


the setup:



3 x client nodes AWS m3.xlarge
4 cores 16gb
3 x master nodes AWS m3.medium
1 core 4gb <- I believe
3 x data nodes AWS c3.2xlarge
8 cores 30gb


Plugins:



AWS
Marvel


Document Count



account-index-v1.0 4.5 M
entry-index-v1.0 13.1 M


At 160 queries per second, using JMeter to execute the following query:



CPU LOAD(1m) MEM %FreeD IOPS
Client Nodes
hidden:9300 0.0 0.0 7.3 n/a n/a
hidden:9300 0.0 0.0 4.3 n/a n/a
hidden:9300 0.0 0.1 8.3 n/a n/a

Data Nodes
hidden:9300 99.0 10.2 11.7 69.7 GB 1.2
hidden:9300 71.0 3.0 15.0 69.6 GB 3.9
hidden:9300 16.7 0.3 12.7 69.8 GB 0.1

Master Nodes
hidden:9300 0.3 0.0 3.0 73.0 GB 0.2
hidden:9300 0.3 0.0 7.0 73.0 GB 0.1
hidden:9300 0.3 0.0 5.0 73.0 GB 0.1


queries



{
    "match": {"event_id": "10000"},
    "match": {"race_id_indexed": "10000"},
    "match": {"is_test": "F"},
    "match": {"status": "CONF"},
    "must_not": [{"match": {"type": "TEAM"}}],
    "query": {"match_all": {}}
}



marvel.agent.enabled: true
cluster.name: Vision
bootstrap.mlockall: true
http.enabled: true
index.number_of_shards: 3
index.number_of_replicas: 1

<%if node.has_key?("ec2") %>
plugin.mandatory: "cloud-aws"
discovery.type: "ec2"
discovery.ec2.groups: "<%= node["ec2"]["security_groups"][0] %>"
discovery.ec2.ping_timeout: "120s"
discovery.zen.ping.multicast.enabled: false
<% else %>
discovery.zen.ping.multicast.enabled: true
<% end %>




PHP File Upload Issues - Can't Upload to Desired Folder

I get a $tmp_name of "/tmp/phpv1K2Eh" but I can't move the temporary file to the "uploads/" folder.


This worked fine on my prior server, but on my new AWS server the folder structure is different. Do I need to alter the $tmp_name path?



<?php

$name = $_FILES['file']['name'];
$tmp_name = $_FILES['file']['tmp_name'];

if(isset($name) && !empty($name)){

    $location = 'uploads/' . $name;

    if(move_uploaded_file($tmp_name, $location)){
        echo 'File Uploaded!';
    } else {
        echo 'Error in upload';
    }

}
?>

<form action="test.php" method="POST" enctype="multipart/form-data">
<input type="file" name="file"><br><br>
<input type="submit" name="Submit">
</form>




Cannot Load Video from S3 AWS( Android Studio )

I'm trying to load video from my S3 AWS service. The problem is that every time I try to load video, I'm getting:



D/MediaPlayer﹕ Couldn't open file on client side, trying server side

E/MediaPlayer﹕ error (1, -2147483648)

E/MediaPlayer﹕ Error (1,-2147483648)

D/VideoView﹕ Error: 1,-2147483648


I don't know if it's a problem with permissions. This is my Android code:



AWSCredentials myCredentials = new BasicAWSCredentials("my-key", "secret-key");
AmazonS3 s3client = new AmazonS3Client(myCredentials);
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest("-", "AndroidCommercial.3gp");

objectURL = s3client.generatePresignedUrl(request);
videoview = (VideoView) findViewById(R.id.videoView);

getWindow().setFormat(PixelFormat.TRANSLUCENT);
MediaController mediaCtrl;
mediaCtrl = new MediaController( MainActivity.this );
mediaCtrl.setMediaPlayer(videoview);
videoview.setMediaController(mediaCtrl);
Uri clip = Uri.parse(String.valueOf(objectURL));
videoview.setVideoURI(clip);
videoview.requestFocus();

videoview.start();


I'm not sure if it's a problem with AWS or with my app... I've created a user with the AmazonS3FullAccess policy. I can download the file with the AWS Chrome extension. Can anyone help me?


Best Regards, Mateusz





Running a Docker container on AWS Elastic Beanstalk - Where is my web app?

Dockerfile



FROM ubuntu:14.04

RUN apt-get update && apt-get upgrade -y

RUN apt-get install -y git git-core wget zip nodejs npm

EXPOSE 8080

# startup
ADD start.sh /tmp/
RUN chmod +x /tmp/start.sh
CMD ./tmp/start.sh


start.sh



cd /tmp

rm -rf docker-node-test; true

git clone http://ift.tt/1CIxOpu

cd docker-node-test

npm install

nodejs app.js


Dockerrun.aws.json



{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "ubuntu:14.04"
},
"Ports": [
{
"ContainerPort": "8080"
}
]
}


Before going to Elastic Beanstalk, I put the 3 files into a .zip file; call it aws-test.zip.


Head to the AWS developer console and select "Elastic Beanstalk". Then pick "Create New Application".



  1. Pick an application name.

  2. Environment tier: Web Server

  3. Predefined Configuration: Docker

  4. Environment type: Load balancing, autoscaling

  5. On the next screen select Upload your own and find the zip you created.

  6. Additional Resources. Next.

  7. Configuration Details. Next.

  8. Environment Tags. Next.

  9. Scroll down and click Launch.


It always shows this web page:



Congratulations!
Your Docker Container is now running in Elastic Beanstalk on your own dedicated environment in the AWS Cloud


Where is my web app? Did I miss anything?





Remove a file in Amazon S3 using Django-storages

In my Django project I use django-storages to save media files to my Amazon S3 bucket.


I followed this tutorial (I also use Django REST framework). This works well for me: I can upload images and I can see them in my S3 storage.


But if I try to remove an instance of my model (which contains an ImageField), this does not remove the corresponding file in S3. Is this the expected behavior? I need to remove the resource in S3 as well.
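
That is in fact the expected behavior: since Django 1.3, deleting a model instance does not delete the underlying file from storage. A common workaround is a post_delete signal; a sketch, assuming a hypothetical model Photo with an ImageField named image:

from django.db.models.signals import post_delete
from django.dispatch import receiver

from myapp.models import Photo  # hypothetical model with an ImageField called 'image'


@receiver(post_delete, sender=Photo)
def delete_image_file(sender, instance, **kwargs):
    # save=False stops Django from trying to re-save the already-deleted row
    if instance.image:
        instance.image.delete(save=False)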





AWS Lambda: How to store secret to external API?

I need your help on this question. I'm currently building a monitoring tool based on AWS Lambda. Given a set of metrics, the Lambdas should be able to send SMS using the Twilio API. To be able to use the API, Twilio provides an account SID and an auth token.


My question is the following: "How and where should I store these secrets?"


I'm currently thinking of using AWS KMS, but there might be other, better solutions. What do you think?


Thanks a lot for your time and your answers.


Best regards, Jonathan.
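
A hedged sketch of the KMS route (in Python for illustration; it assumes the Twilio auth token was encrypted once with the AWS CLI, the resulting base64 ciphertext is bundled with the function, and the Lambda's role has kms:Decrypt on the key):

import base64

import boto3

kms = boto3.client('kms')

# Encrypted once, offline, e.g. with: aws kms encrypt --key-id <key-id> --plaintext <token>
ENCRYPTED_TWILIO_TOKEN = 'AQICAH...'  # placeholder base64 ciphertext


def get_twilio_token():
    resp = kms.decrypt(CiphertextBlob=base64.b64decode(ENCRYPTED_TWILIO_TOKEN))
    return resp['Plaintext']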





AWS Java SDK credentials linux ec2

I've created a Java web application on a Tomcat server which will start another instance using the AWS Java SDK. On Windows I just place the credentials under my user. I'm now trying to host my application on an AWS EC2 instance, and hence I'm trying to place my credentials on the Linux EC2 instance. I've followed the steps in the AWS SDK docs - http://ift.tt/1xTcU7q - but I'm still thrown the same error upon calling the method:



Cannot load the credentials from the credential profiles file. Please make sure that your credentials file is at the correct location (~/.aws/credentials), and is in valid format.



I've created a .aws folder in my home directory and placed the credentials file within it. I've also added the export statements to the .bashrc file, but it doesn't seem to work.


At Wits end here :(





How to increase network bandwidth of AWS EC2 instance?

We hosted a site on an AWS EC2 instance of type c4.8xlarge. It is a fairly large system with a lot of memory and compute resources. It saw planned heavy usage during a 2-hour time frame this weekend. While it did not crash, it slowed down quite a bit and failed to perform at the expected level. Analyzing the stats, I realized that network bandwidth is the main culprit. While the CPU usage stayed below 6%, the networkIn and networkOut metrics seem to have peaked during that timeframe, hence all the problems. While I'm not a networking expert, some reading online seemed to indicate that all the traffic going through one NIC could be the main source of limited network bandwidth. Is this true? Would hosting the site on a different type of EC2 instance help increase the network bandwidth? Here is how the networkIn and networkOut metrics looked under heavy load.


networkIn and networkOut metrics chart





How to fetch an image from one bucket and put it in another in S3 boto

I want to fetch an image from one bucket (bucket1) and put it in another bucket (bucket2). I have written the code snippet below in Python, but I am getting the error: IOError: cannot identify image file. Please help. Thanks!



import boto
import cStringIO
from PIL import Image

conn = boto.connect_s3()
filename = 'image2.jpg'
bucket = conn.get_bucket('bucket1')
key = bucket.get_key(filename)
fp = open(filename, "w+b")
key.get_file(fp)
fp.seek(0)

img = cStringIO.StringIO(fp.read())
im = Image.open(img)

b = conn.get_bucket('bucket2')
k = b.new_key('image2.jpg')
k.set_contents_from_string(img.getvalue())
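
As an aside, if the goal is only to move the image rather than transform it, S3 can copy the object server-side, which avoids the local round trip entirely; a sketch with boto 2's copy_key:

import boto

conn = boto.connect_s3()
dst_bucket = conn.get_bucket('bucket2')
# Server-side copy: the bytes never pass through this machine
dst_bucket.copy_key('image2.jpg', 'bucket1', 'image2.jpg')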




Using Spring Web: is there a way to use a views path in an AWS S3 bucket?

I'm using a normal view resolver, like:



<bean
    class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix">
        <value>/presentation/views/</value>
    </property>
    <property name="suffix">
        <value>.jsp</value>
    </property>
</bean>


Is there a way to use this resolver to point to an S3 bucket external to the application?





AccessDenied trying to access S3 bucket from URL

I'm using a link that I concatenate with a datetime variable:



import time

t = time.localtime(time.time())
datetime = str(t.tm_year) + str(t.tm_mon) + str(t.tm_mday) + str(t.tm_hour) + str(t.tm_min) + str(t.tm_sec)

link = 'http://ift.tt/19wWrtU' + datetime + '/test'


When I try to access the link, in the AWS console it appears like this: http://ift.tt/1xT03C5


And it works, I can access the above link.


But if I try to access the link the way I'm saving it, like this:


http://ift.tt/19wWrtW


I'm getting an XML response with the error "<Code>AccessDenied</Code>".


Do you see why I'm getting this error?





NodeJS connect() failed (111: Connection refused) while connecting to upstream

I am running into an issue today where all of a sudden my Elastic Beanstalk app sends me to a 502 Bad Gateway page. I have run into this issue in the past, and the reason it happened was that the Node command could not start my server. I fixed that by setting the Node command to node main.js, and I never ran into the issue again until this morning. All of a sudden it stopped working and I get this error in my error log:



2015/03/31 13:07:17 [error] 697#0: *519 connect() failed (111: Connection refused) while connecting to upstream, client: 54.146.12.189, server: , request: "HEAD / HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "54.152.12.19"
2015/03/31 13:07:17 [error] 697#0: *521 connect() failed (111: Connection refused) while connecting to upstream, client: 54.146.18.189, server: , request: "GET /clientaccesspolicy.xml HTTP/1.1", upstream: "http://ift.tt/19EVUGa", host: "54.152.12.19"
2015/03/31 13:16:02 [error] 697#0: *523 connect() failed (111: Connection refused) while connecting to upstream, client: 69.204.65.1321, server: , request: "GET /blog/the-differences-in-segmenting-your-data-by-users-and-sessions HTTP/1.1", upstream: "http://ift.tt/1OUSZKm", host: "www.mywebsite.com"


How should I approach solving this issue?


Here is my main.js file:



//Load express
var express = require('express');
var app = express();
var router = express.Router(); // get an instance of the router
var bodyParser = require('body-parser'); // configure app to use bodyParser()
var mongoose = require('mongoose');
var passport = require('passport');
var flash = require('connect-flash');
var morgan = require('morgan');
var cookieParser = require('cookie-parser');
var session = require('express-session');
var aws = require('aws-sdk');

app.use(bodyParser.urlencoded({ extended: true})); // get data from a POST method
app.use(bodyParser.json());
app.use(morgan('dev'));
app.use(cookieParser());


var port = process.env.PORT || 8080; // set the port

var DB_CONFIG = process.env.DB_CONFIGURATION;
var AWS_ACCESS_KEY = process.env.AWS_ACCESS_KEY;
var AWS_SECRET_KEY = process.env.AWS_SECRET_KEY;
var S3_BUCKET = process.env.S3_BUCKET;

var blogDB = require('./config/blogDB.js');
mongoose.connect(blogDB.url);




require('./config/passport.js')(passport);


app.set('view engine', 'ejs'); // set ejs as the view engine

app.use(express.static(__dirname + '/public')); // set the public directory

app.use(session({ secret: 'thisisatest' }));
app.use(passport.initialize());
app.use(passport.session());

app.use(flash());


var routes = require('./app/routes');

app.use(routes); // use routes.js


app.listen(port);
console.log('magic is happening on port' + port);




Create Jobs in Amazon Elastic Transcoder using iOS sdk

OK, so I have an in.Bucket on my S3 account with 300+ video files that I will distribute. For now, I'm not allowing file upload, so what I need is a one-time transcoding.


I wouldn't like to create 300+ jobs using the management console; instead, I'm looking for code to do this for me. I can retrieve the filenames from my parse.com database and use a for statement to go through each file inside the bucket, but how would I go about actually creating the jobs? Just to be clear, the jobs can all be identical, I mean, with the same presets and setups, changing only the filenames.


Any help will be greatly appreciated.
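
For what it's worth, the job-creation call looks much the same in every SDK; a rough sketch with boto3 rather than the iOS SDK, looping over filenames, where the pipeline ID and preset ID are placeholders:

import boto3

transcoder = boto3.client('elastictranscoder', region_name='us-east-1')

PIPELINE_ID = '1111111111111-abcde1'   # placeholder
PRESET_ID = '1351620000001-000010'     # placeholder system preset ID

filenames = ['video1.mp4', 'video2.mp4']  # e.g. pulled from the parse.com database

for name in filenames:
    # One identical job per file; only the input/output keys change
    transcoder.create_job(
        PipelineId=PIPELINE_ID,
        Input={'Key': name},
        Outputs=[{'Key': 'transcoded/' + name, 'PresetId': PRESET_ID}],
    )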





how to prevent downloading video from amazon cloudfront using signed urls

I am using an Amazon CloudFront distribution for video files on my website. To read those files I am using the signed URL method with a canned policy. Below is an example of a signed URL.


http://cloudfront-domainname/VideoFileName.mp4?Expires=1427805933&Signature=signature-of-policy-statement&Key-Pair-Id=cloudfront-key-pair-id


If I directly paste this URL into the address bar, I am able to download the video. How can I prevent the user from downloading the video while still allowing it to play in an HTML5 media player?





Sending email error using amazon ses ec2

The following code is used to send email with PHPMailer and Amazon SES from EC2. If I use a different port (25, 465, 587) I get a different error.



<?php
require 'class.phpmailer.php';
$to = "********@gmail.com";
$from = "info@itlowers.com"; // verified mail id
$subject = "a test subject";
$body = "email body content goes here";

$mail = new PHPMailer();
$mail->IsSMTP(true); // use SMTP

$mail->SMTPDebug = 2; // enables SMTP debug information (for testing)
// 1 = errors and messages
// 2 = messages only
$mail->SMTPAuth = true; // enable SMTP authentication
$mail->Host = "email-smtp.us-west-2.amazonaws.com"; // Amazon SES server, note "tls://" protocol
$mail->Port = 465; // set the SMTP port
$mail->Username = "*************"; // SES SMTP username
$mail->Password = "*****************"; // SES SMTP password

$mail->SetFrom($from, 'First Last');
$mail->AddReplyTo($from,'First Last');
$mail->Subject = $subject;
$mail->MsgHTML($body);
$address = $to;
$mail->AddAddress($address, $to);

if(!$mail->Send()) {
echo "Mailer Error: " . $mail->ErrorInfo;
} else {
echo "Message sent!";
}
?>


I'm getting the following errors:



SMTP -> FROM SERVER:
SMTP -> FROM SERVER:
SMTP -> ERROR: EHLO not accepted from server:
SMTP -> FROM SERVER:
SMTP -> ERROR: HELO not accepted from server:
SMTP -> ERROR: AUTH not accepted from server:
SMTP -> NOTICE: EOF caught while checking if connectedSMTP Error: Could not authenticate. Mailer Error: SMTP Error: Could not authenticate.


What did I write wrong in my code?





How to set region with Python boto AWS

I am trying to access my AWS buckets with Python boto.


I can access buckets with the region US Standard.



import boto
c = boto.connect_s3()
print c.lookup('boto-test-us-standard')


However if I try to access a bucket with a different region, it doesn't find the bucket.



print c.lookup('boto-test-frankfurt')


Where can I set the region boto uses?
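
A sketch of pointing boto at the bucket's own region instead of the default endpoint (Frankfurt is eu-central-1; note that it only accepts Signature Version 4, which boto has to be configured to use, so treat this as a starting point):

import boto.s3
from boto.s3.connection import OrdinaryCallingFormat

# Connect to the regional endpoint rather than the US Standard default
c = boto.s3.connect_to_region('eu-central-1',
                              calling_format=OrdinaryCallingFormat())
print c.lookup('boto-test-frankfurt')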





Cloudsearch Fuzzy terms and phrases

I am trying to get my head around how fuzzy search works on AWS CloudSearch


I want to find "Star Wars" but in my search, I spell it



ster wers


The logic of my app adds the fuzziness, but it never returns Star Wars. I have tried:



ster~1 wers~1
"ster wers"~2
"ster"~1 "wers"~1


What am I missing here?





AWS static IPs for whitelisting

We currently have 4 AWS instances managed by OpsWorks. We're working with an API provider that requires us to whitelist any of the servers communicating between our stack and theirs.


However the request we make to them can come from any of the 3 instances in our stack (the workers that actually perform the processing requests). Extra info: right now we have our web server on an ElasticIP that scales out on load as well.


We're wondering how we can contain all 3 of the worker instances (and the instances that they trigger under load) to a block or single IP address so that the service provider can whitelist us and allow our requests through. We don't want to have to update IPs over time, so providing a block/static IP is quite important.


Further info: the 3 worker instances are managed as separate OpsWorks stacks and each have their own subnet but are all assigned to the same VPC. I was wondering if this was a matter of setting up a VPC and NAT -- but I know very little about networking at this level.





AWS - Static IP

I was wondering if anyone had an answer to this question: if I add more instances to my stack, will they all be masked with the same Elastic IP or will they get new ones?


Thanks





boto get_bucket exception for some buckets

I am trying to access my AWS buckets with boto.


I can list all my buckets like this:



import boto

c = boto.connect_s3()

rs = c.get_all_buckets()
for b in rs:
    print b.name


However some buckets I cannot access like this:



print c.get_bucket('mybucket')


The error I get is the following:



File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 502, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/local/lib/python2.7/site-packages/boto/s3/connection.py", line 549, in head_bucket
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request


Is this related to the bucket policy? Or what else could the reason be?





integrate amazon gift card on magento for specific users

I am currently running a promo on my Magento store (ver. 1.7.0.2) which gives my customers a $25 Amazon gift card.


Currently it is a manual process where I send the code via email. Instead, I want to make it automatic via Magento: since users already have login credentials for my Magento website, they should be able to log in and redeem the offer.


I am searching for an extension which will provide a popup containing a separate Amazon gift code for each user.





AWS OpsWorks Chef + MongoDB

Presently I'm working with AWS OpsWorks to deploy a MongoDB cluster. I'm using the http://ift.tt/1joOSKh repository to deploy. Unfortunately, when I try to connect more than 7 instances, I get the error: "errmsg" : "exception: replSet bad config maximum number of voting members is 7", "code" : 13612, "ok" : 0. I thought it was an SSL certificate problem, but I've arranged PEM keys for each instance separately and still get the same error. What could be the problem?


Thank you in advance, hoping for a quick reply.





db:migrate gives ArgumentError: Missing required arguments: aws_secret_access_key

Every time I try to run db:migrate or heroku run console, I get ArgumentError: Missing required arguments: aws_secret_access_key


I have done heroku config:set for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Afterwards I run heroku config and see those two and the S3_BUCKET set correctly. Then I run heroku run console and I get the error.


I have also gone into my IAM management console and given my user the AmazonS3FullAccess policy, although this did nothing.


I am also using config/application.yml from the figaro gem to store my keys, but that's no different either.


I'm out of ideas on what to do to fix this; does anyone know what to do?


carrierwave.rb:



if Rails.env.production?
  CarrierWave.configure do |config|
    config.root = Rails.root.join('tmp') # adding these...
    config.cache_dir = 'carrierwave'     # ...two lines
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => 'us-west-2',
      :host                  => 's3.example.com',
      :endpoint              => 'http://ift.tt/1hhRBgu'
    }
    config.fog_directory  = ENV['S3_BUCKET']
    config.fog_public     = false # optional, defaults to true
    config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
  end
end




pptpd traffic control - Amazon EC2

Small question: I set up a VPN server with pptpd on Ubuntu 14 on Amazon EC2. Amazon provides 15 GB of free incoming and outgoing traffic. What is the best way to track it? Can I set a traffic limit somewhere in the settings, or where can I monitor it? Thank you!





Plot a graph using metric information fetched using AWS Cloudwatch

I have successfully retrieved metrics data of AWS EC2 service using AWS Cloudwatch Java API.


Now I need to show the metrics data as a real-time stream and plot a graph. I have never worked with any Java graphing library/API. Can someone suggest how to start?





How to add a gateway to eth0?

eth0 and eth1 are in the same IP segment, but only eth1 has a gateway, so I think this is why I can't connect to the IP of eth0 with PuTTY. I can connect to eth1 with PuTTY.



[root@ip-172-26-26-60 ~]# netstat -nr

Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.26.26.0 0.0.0.0 255.255.255.192 U 0 0 0 eth0
172.26.26.0 0.0.0.0 255.255.255.192 U 0 0 0 eth1
0.0.0.0 172.26.26.1 0.0.0.0 UG 0 0 0 eth1




Even after getting the certificate signed by Comodo I am getting "certificate not trusted"

I uploaded my SSL certificates as described in this link: http://ift.tt/1xTfQgl. Even after doing all the steps I am still getting an unverified certificate and it still shows as self-signed. Link to my website: www.advisorcircuit.com:8443. Please tell me why this is happening?





Monday, March 30, 2015

Magento not connected to AWS MYSQL RDS instance?

I tried to connect my Magento application to an AWS RDS instance, but instead it connects to the localhost MySQL.


But in phpMyAdmin it is connected to the RDS instance.


Here are the steps that I did:



  1. Created RDS instance with required settings.

  2. In the Magento directory, i.e. app/etc/local.xml, I changed the hostname to the RDS endpoint with port 3306, with the correct database user, password, and database name.





<host>
<![CDATA[XXXX.XXXXXXXXXXX.us-east-1.rds.amazonaws.com:3306]]></host>
<username><![CDATA[noones]]></username>
<password><![CDATA[admin1234]]></password>
<dbname><![CDATA[noones]]></dbname>
<initStatements><![CDATA[SET NAMES utf8]]></initStatements>
<model><![CDATA[mysql4]]></model>
<type><![CDATA[pdo_mysql]]></type>
<pdoType><![CDATA[]]></pdoType>
<active>1</active>


I used the Bitnami Magento Stack for the Magento installation. Are there any other files/configurations I need to change so that Magento points to the RDS instance instead of the localhost MySQL?


Thanks





Php - Amazon s3 how do I check my connection is success or not

I am using the amazon-php-sdk. In my application I accept the key and secret value from a form and pass them to connect to AWS. Here is my code:



<?php
require 'aws-autoloader.php';
use Aws\S3\S3Client;
$s3Client = S3Client::factory(array(
'key' => 'my key',
'secret' => 'my secret key'
));
?>



  1. How do I check whether the connection is success or not ?

  2. How can I check for an already created object using the passed key, so that I don't create the object again? I need to create the object only if the passed key is different from the already created one.





index was out of range amazon.glacier.archivetransfermanager

I'm getting an error "index was out of range. must be a non-negative and less than the size of the collection"


When I comment out all of the lines besides Console.WriteLine(archiveToUpload), the loop works correctly.


Any ideas?



namespace glacier.amazon.com.docsamples
{
    class ArchiveUploadHighLevel
    {
        public static void Main(string[] args)
        {
            string vaultName = "testvault";
            string archiveToUpload = "";

            DirectoryInfo info = new DirectoryInfo(@"c:\test\");
            FileInfo[] files = info.GetFiles().OrderByDescending(d => d.CreationTime).ToArray();

            foreach (FileInfo file in files)
            {
                try
                {
                    archiveToUpload = info + file.ToString();
                    Console.WriteLine(archiveToUpload);

                    var manager = new ArchiveTransferManager(Amazon.RegionEndpoint.APSoutheast2);
                    // Upload an archive.
                    string archiveId = manager.Upload(vaultName, "Test Document", archiveToUpload).ArchiveId;
                    Console.WriteLine("Archive ID: (Copy and save this ID for the next step) : {0}", archiveId);
                    Console.ReadKey();
                }
                catch (AmazonGlacierException e) { Console.WriteLine(e.Message); }
                catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
                catch (Exception e) { Console.WriteLine(e.Message); }
                Console.WriteLine("To continue, press Enter");
                Console.ReadKey();
            }
        }
    }
}




Can connect through ethernet shield but not when using MQTT on my arduino

I have been trying to send messages via MQTT from my Arduino to my Amazon web server. The following code connects the Ethernet client but not the MQTT client. Why would my MQTT client not be connecting?



#include <SPI.h>
#include <Ethernet.h>
#include <PubSubClient.h>

byte mac[] = { 0x12, 0x42, 0x98, 0x85, 0x49, 0x3A }; //MAC address of server
char server[] = "http://52.1.29.117/"; //web address of server
IPAddress ip(172, 31, 51, 13); //IP address of server

void callback(char* topic, byte* payload, unsigned int length) {
// handle message arrived
}

EthernetClient ethClient;
PubSubClient client(server, 80, callback, ethClient);

void setup()
{
Serial.begin(9600);
while (!Serial) {
; // wait for serial port to connect. Needed for Leonardo only
}

// start the Ethernet connection:
if (Ethernet.begin(mac) == 0) {
Serial.println("Failed to configure Ethernet using DHCP");
Ethernet.begin(mac, ip);
}
delay(1000);
Serial.println("connecting...");

if (ethClient.connect(server, 80)) {
Serial.println("connected");
// Make a HTTP request:
ethClient.println("GET /search?q=arduino HTTP/1.1");
ethClient.println("Host: www.google.com");
ethClient.println("Connection: close");
ethClient.println();
}
else {
Serial.println("connection failed");
}

// if (client.connect(server)) {
if (client.connect(server, "ubuntu", "")) {
Serial.print("Data sent/n");
client.publish("hello/world","hello world");
client.subscribe("hiWorld");
}
else {
Serial.print("nope");
}
}

void loop()
{
client.loop();
}




How to configure CouchDB authentication in Docker?

I'm trying to build a Dockerized CouchDB to run in AWS that bootstraps authentication for my app. I've got a Dockerfile that installs CouchDB 1.6.1 and sets up the rest of the environment the way I need it. However, before I put it on AWS and potentially expose it to the wild, I want to put some authentication in place. The docs show this:


http://ift.tt/1bLwR5M


which hardly explains the configuration properly or what is required for basic security. I've spent the afternoon reading SO questions, docs and blogs, all about how to do it, but there's no consistent story and I can't tell if what worked in 2009 will work now, or which parts are obsolete. I see a bunch of possible settings in the current ini files, but they don't match what I'm seeing in my web searches. I'm about to start trying various random suggestions I've gleaned from various readings, but thought I would ask before doing trial-and-error work.


Since I want it to run in AWS I need it to be able to start up without manual modifications. I need my Dockerfile to do the configuration, so using Futon isn't going to cut it. If I need to I can add a script to run on start to handle what can't be done there.


I believe that I need to set up an admin user, then define a role for users, provide a validation function that checks for the proper role, then create users that have that role. Then I can use the cookie authentication (over SSL) to restrict access to my app that provides the correct login and handles the session/cookie.


It looks like some of it can be done in the Dockerfile. Do I need to configure authentication_handlers, and an admin user in the ini file? And I'm guessing that the operations that modify the database will need to be done by some runtime script. Has anyone done this, or seen some example of it being done?





Not able to increase Hbase read throughput

I have 50 GB of data in HBase, with only one column in the table. My throughput is not increasing beyond 500 rps, and I need at least 5000 rps. I tried changing various parameters but it is not working. I found suggestions online that say to manually split the records across your regions, but I am not able to understand how to do a manual split. Can anyone explain in detail the process of manually splitting data into regions?



["-s","hbase.hregion.max.filesize=1073741824",
"-s","hfile.block.cache.size=0.6","-s","hfile.regionserver.global.memstore.size=0.2",
"-s","hbase.client.scanner.caching=10000",
"-s","hbase.regionserver.handler.count=64"]


Another problem I have is that after making changes in the XML I am not able to restart the cluster. It throws a "public key denied" error; I think it is because the master tries to connect to every core node, which requires the PEM file. Any idea how to restart the cluster?





How to do multistep or chained MapReduce on AWS (Using Python)

TL;DR, How can I use the output of one MapReduce step as the input to the next step?


Currently I'm trying to use MapReduce to count sets of 4 words from a sample data set.


Mapper.py:



#!/usr/bin/env python

import sys

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into 4-grams (4 word sets)
    ngrams = line.split()
    # write the results to STDOUT (standard output)
    print '%s %s %s %s<s>%s' % (ngrams[0], ngrams[1], ngrams[2], ngrams[3], ngrams[5])


Reducer.py:



#!/usr/bin/env python

import sys

current_word = None
current_count = 0
word = None

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()

    # parse the input we got from mapper.py
    words = line.split('<s>')
    word = words[0]
    count = words[1]

    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        continue

    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# do not forget to output the last word if needed!
if current_word == word:
    print '%s\t%s' % (current_word, current_count)


Original code came from this great tutorial!


This code works perfectly for the dataset that I have; however, the output is 38 separate files (15MB each). Thus, I would like to run MapReduce again on the 38 files and further reduce the output into a single file. (Note: I think there is a way to get MR to output a single file, but I'm interested in chaining steps.)


My first attempt was simply to run the same scripts (seen above) on the output files a second time, but I get the following error:



Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads():
subprocess failed with code 1


The images below show how I have my MapReduce steps set up. I have the output of the first step set as the input for the second step. What am I doing wrong?


Streaming Step 1 Streaming Step 2
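
One guess based only on the code above: the first step's reducer emits "four-gram<TAB>count", while mapper.py expects the raw space-separated input, so on the second pass ngrams[5] raises an IndexError and the streaming subprocess exits with code 1. A chained second step would need a mapper shaped for that intermediate format, roughly:

#!/usr/bin/env python
# Hypothetical mapper for the chained step: input lines are already
# 'four-gram<TAB>count', so re-emit them in the '<s>'-separated form
# that reducer.py expects.

import sys

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    ngram, count = line.split('\t', 1)
    print '%s<s>%s' % (ngram, count)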





Better initialization

I am making an API call to AWS to get a list of AMIs using the Go SDK. The DescribeImages function takes a DescribeImagesInput. I only want to see my own AMIs, so my code does this:



// Build input
self := "self"
ownerSelf := []*string{&self}
ownImages := &ec2.DescribeImagesInput{
    Owners: ownerSelf,
}

// Call the DescribeImages Operation
resp, err := svc.DescribeImages(ownImages)
if err != nil {
    panic(err)
}


Building the input like that is very ugly. I am sure there is a better technique, but being a Go n00b, I don't know it. What is a better way to do this?





Heroku and AWS Timeout

I recently started testing an app hosted on Heroku with AWS as the backend. I'm commonly seeing relatively short requests time out on the Heroku side, which sends the user to an error page, and then when looking at the application it is clear that the request actually completed.


In general, the application is much slower than I expected. I'm using the micro database tier because I do not expect more than 60 users and not more than a handful at any given time. The demand is low.


It is practically unusable as is, though. Simply going to a page where Rails queries for a list of 100 records causes the page to time out. When run just with Heroku it was lightning fast. I need a larger database than Heroku can provide, though.


Any suggestions?





How does AWS Data Pipeline run an EC2 instance?

I have an AWS Data Pipeline built and keep getting warnings on an EC2 resource's TerminateAfter field being missing. My DataPipeline is designed to use the same instance various times throughout the entire process, which is to run every hour (I haven't run the pipeline yet).


So if I set the Terminate After field to 3 minutes, I'm wondering if the EC2 instance is terminated 3 minutes after every time it is spun up. Or is the EC2 instance terminated 3 minutes after the last time it is used in the pipeline?





How do I debug Route 53 domain name based health checks that keep failing?

I just registered a domain with Route 53 yesterday and set up the hosted zone today. I left in the default SOA and NS records and added an A record to point to my servers IP address. I can connect with the IP address no problem but the domain name based health checks only report:


Failure: DNS resolution failed: DNS response error code SERVFAIL


How do I debug this? I've already gone through all the documentation plenty of times, as well as the Hosted Zones settings, and everything looks perfectly fine to me...





Amazon AWS Domain forwarding with masking issue

So I have my domain that is registered with GoDaddy. It's set up to forward to this address: ( http://ift.tt/1EsxPQK ) with masking, so you can only see the domain name and not the full URL.


When I view that page on my phone by entering the domain name, the page does not render properly (the media queries I've set don't work). However, if I view the page on my phone from the full AWS URL, the page renders as it should and the media queries work as they should. The page works perfectly fine on my desktop computer.


GoDaddy support said this is an iFrame issue in AWS or AWS doesn't allow forwarding with masking. How would I fix this problem and get the address to forward with no issues and render properly?


Thank you!





AWS EMR Step failed as jobs it created failed

I'm trying to analyse a Wikipedia article view dataset using Amazon EMR. This data set contains page view statistics over a three month period (1 Jan 2011 - 31 March 2011). I am trying to find the article with the most views over that time. Here is the code I am using:



public class mostViews {

public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {

private final static IntWritable views = new IntWritable(1);
private Text article = new Text();

public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {

String line = value.toString();

String[] words = line.split(" ");
article.set(words[1]);
views.set(Integer.parseInt(words[2]));
output.collect(article, views);
}
}

public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {

public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {

int sum = 0;

while (values.hasNext())
{
sum += values.next().get();
}
output.collect(key, new IntWritable(sum));
}
}

public static void main(String[] args) throws Exception {
JobConf conf = new JobConf(mostViews.class);
conf.setJobName("wordcount");

conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);

conf.setMapperClass(Map.class);
conf.setCombinerClass(Reduce.class);
conf.setReducerClass(Reduce.class);

conf.setInputFormat(TextInputFormat.class);
conf.setOutputFormat(TextOutputFormat.class);

FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));

JobClient.runJob(conf);
}
}


The code itself works, but when I create a cluster and add a custom JAR it sometimes fails and other times works. Using the entire dataset as input causes it to fail, but using one month, e.g. January, it completes. After running on the entire dataset, I looked at the 'controller' log file and found this, which I think is relevant:



2015-03-10T11:50:12.437Z INFO Synchronously wait child process to complete : hadoop jar /mnt/var/lib/hadoop/steps/s-22ZUAWNM...
2015-03-10T12:05:10.505Z INFO Process still running
2015-03-10T12:20:12.573Z INFO Process still running
2015-03-10T12:35:14.642Z INFO Process still running
2015-03-10T12:50:16.711Z INFO Process still running
2015-03-10T13:05:18.779Z INFO Process still running
2015-03-10T13:20:20.848Z INFO Process still running
2015-03-10T13:35:22.916Z INFO Process still running
2015-03-10T13:50:24.986Z INFO Process still running
2015-03-10T14:05:27.056Z INFO Process still running
2015-03-10T14:20:29.126Z INFO Process still running
2015-03-10T14:35:31.196Z INFO Process still running
2015-03-10T14:50:33.266Z INFO Process still running
2015-03-10T15:05:35.337Z INFO Process still running
2015-03-10T15:11:37.366Z INFO waitProcessCompletion ended with exit code 1 : hadoop jar /mnt/var/lib/hadoop/steps/s-22ZUAWNM...
2015-03-10T15:11:40.064Z INFO Step created jobs: job_1425988140328_0001
2015-03-10T15:11:50.072Z WARN Step failed as jobs it created failed. Ids:job_1425988140328_0001


Can anyone tell me what's going wrong, and what I can do to fix it? The fact that it works for one month but not for two or three months makes me think that the data set might be too big, but I am not sure. I'm still new to this whole Hadoop/EMR thing so if there's any information I left out just let me know. Any help or advice would be greatly appreciated.


Thanks in advance!





AWS SWF Flow: does the activity StartToClose timeout include activity retries?

I'm trying to configure one of my SWF activities using the Flow framework for Java, and I can't find any documentation about whether the StartToClose timeout applies to a single activity attempt or to all of the retry attempts for that activity.


Here is the configuration for my activity:



@Activity(name = "WaitForExternalTaskToFinish", version = "1.0")
@ActivityRegistrationOptions(
defaultTaskScheduleToStartTimeoutSeconds = 60,
defaultTaskStartToCloseTimeoutSeconds = 60)
@ExponentialRetry(
initialRetryIntervalSeconds = 60,
maximumRetryIntervalSeconds = 300,
retryExpirationSeconds = 7200,
exceptionsToRetry = IllegalStateException.class)
boolean waitForExternalTaskToFinish(long externalTaskId);


This activity is expected to take a very short execution time (e.g. 5 sec), but if it fails I want to keep retrying it for 2 hours.



  • Will this configuration do what I want?

  • Do I need to change defaultTaskStartToCloseTimeoutSeconds to 7200?

  • If I need to change defaultTaskStartToCloseTimeoutSeconds to 7200, then how do I get the activity to fail if the execution of a single activity attempt is "too long" (eg. 100 sec)?





Issues copying a bucketfile to local disk

I've run through the entire sequence that I believe I needed to do, but I am still getting an "Invalid argument type" error while trying to copy my file locally. What am I doing wrong here?



vagrant@dev:~$ aws s3 ls s://bucketname-vagrant

A client error (NoSuchBucket) occurred when calling the ListObjects operation: The specified bucket does not exist

vagrant@dev:~$ aws s3 ls bucketname-vagrant
2015-03-30 14:06:02 285061467 or_vagrant.sql.tar.gz
2015-03-30 13:55:01 102642228 or_vagrant.sql.xz

vagrant@dev:~$ aws s3 ls bucketname-vagrant/or_vagrant.sql.xz
2015-03-30 13:55:01 102642228 or_vagrant.sql.xz
vagrant@dev:~$ aws s3 cp bucketname-vagrant/or_vagrant.sql.xz /tmp/

usage: aws s3 cp <LocalPath> <S3Path> or <S3Path> <LocalPath> or <S3Path> <S3Path>
Error: Invalid argument type




In Apache Spark. How to set worker/executor's environment variables?

My spark program on EMR is constantly getting this error:



Caused by: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:421)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:128)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:397)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:148)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:149)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:121)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:573)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:425)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:334)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:281)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:942)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2148)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2075)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1093)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:548)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:172)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at org.apache.hadoop.fs.s3native.$Proxy8.retrieveMetadata(Unknown Source)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:414)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.create(NativeS3FileSystem.java:341)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)


I did some research and found out that this certificate check can be disabled in low-security situations by setting the following property:



com.amazonaws.sdk.disableCertChecking=true


but I can only set it with spark-submit.sh --conf, which only affects the driver, while most of the errors are on the workers.


Is there a way to propagate them to workers?


Thanks a lot.
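
Two config knobs that reach the executors rather than only the driver, sketched in PySpark form (the same keys work as spark-submit --conf settings; the property name is taken from the error above, and SOME_VAR is a placeholder):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        # JVM system property set on every executor
        .set('spark.executor.extraJavaOptions',
             '-Dcom.amazonaws.sdk.disableCertChecking=true')
        # or, for an actual environment variable on the workers:
        .set('spark.executorEnv.SOME_VAR', 'value'))

sc = SparkContext(conf=conf)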





AWS: Resizing a RESERVED t2.medium to a m3.large. How pricing works?

I want to hire a reserved instance in Amazon EC2 to host an application.


What really drives my choice is the RAM capacity. I know I will be running this app for years, but I do not know how the hardware requirements will change in the future.


So, let's say I buy a t2.medium reserved instance for a year, and in 6 months I have a boom in demand and realize I have to migrate to an m3.large.


Do I need to give up the remaining 6 months of the already paid t2.medium instance? Or can I just pay the difference for those 6 months between the t2.medium and the m3.large instance?


I think flexibility is what really makes AWS interesting... but I need to be able to scale up at a reasonable cost...





AWS S3. Multipart Upload. Can i start downloading file until it's 100% uploaded?

Actually, the title is the question :) Does AWS S3 support streaming a file that is not yet 100% uploaded? Client #1 splits a file into small chunks and starts uploading them using Multipart Upload. Client #2 starts downloading them from S3, so client #2 doesn't need to wait until client #1 has uploaded the whole file. Is it possible to do this without an additional streaming server?





s3.amazonaws.com SSL cert is about to expire - should I panic?



My bucket is linked with a CDN service (not CloudFront), and recently the support team of that CDN service contacted me to say that the SSL cert on my bucket (the cert for s3.amazonaws.com/mybucket) is going to expire.




  1. Does Amazon automatically update S3 SSL certs? Or do I have to do something about it on my own?




  2. If Amazon updates the S3 certificate automatically, under which CA will it be renewed? Because if it is a different CA I will have to take some action on the CDN side to make it work...







Can't deploy to AWS Elastic Beanstalk after timeout

I am newish to AWS Elastic Beanstalk and this is the first time I have encountered this issue. I tried deploying a new version of my app via zip upload to an instance, and the update completed with errors:


"Completed but with Command Line Timeouts", I increased the timeout in the config file and redeployed, after which I got this message.



During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.



This repeats each time try to redeploy.


I went into the EC2 instance and noticed that the /var/app/ondeck folder was still there.


I assume this is the issue, but I doubt that simply removing this directory would be the answer - unless it is that simple.


The /var/app/current version is still present, and the app runs fine using the version that was deployed prior to the initial timeout.


I inherited this app; it is Laravel, and the Composer scripts take a while to run.


Thanks for any help.





Can't run command rails console on EC2 after deploy

I use AWS with Elastic Beanstalk to deploy my applications, but I can't run the command rails console once I SSH to my server and go to /var/app/current.


I tried many commands, without success :



  • bundle exec rails c

  • RAILS_ENV=development rails c


I got the follow error:



Could not find addressable-2.3.6 in any of the sources
Run `bundle install` to install missing gems.


But when running bundle install everything is fine.


This error is a huge issue to me as I can't use the whenever cron job.


Can you help me ?





DoS Attacks launched from my java app on aws EC2 instance

I got an email from AWS saying that they limited access to my EC2 instance because a DoS attack was launched from it. I swear it wasn't me.


Basically, all I have on this EC2 instance is Tomcat 7 with a Java app deployed to it. The Java app is a simple web app with REST calls exposed through Jersey, so nothing super fancy, since I am only using it to learn web services. The Java app is also connected to an RDS MySQL instance to expose the database through the REST services.


I am trying to narrow down my search to find out how this DoS attack was launched from my instance. What could have caused it? What security measures do I need to take to prevent this from happening again?





How can I get the most recent record from an Amazon Kinesis stream?

I would like to get the most recent record from an Amazon Kinesis stream. I intend to extract the timestamp from this record and compare it to the timestamp of the last record checkpointed by a consumer app in order to check whether the consumer is falling behind.


I cannot use the shard iterator type LATEST. This is because LATEST points to just after the most recent record, so it cannot be used to access the most recent record.


Is there a simple way to get the latest record?


An approach I am considering is to get the shard iterator for the sequence number of the record most recently processed by the consumer, make a GetRecords request using that shard iterator, get the next shard iterator from the result of the request, and repeat until a GetRecords request doesn't return any records.


This approach would involve reading all records since the consumer's checkpoint, which seems unnecessarily wasteful. Is there any way around requesting all these records?
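
For completeness, a sketch of the checkpoint-forward approach described above, using boto3 (stream name, shard ID and checkpoint value are placeholders); it still reads everything after the checkpoint, so it answers the "how" rather than the "cheaper" part:

import boto3

kinesis = boto3.client('kinesis')


def latest_record(stream_name, shard_id, checkpoint_seq):
    """Walk forward from the consumer's checkpoint and keep the last record seen."""
    shard_iterator = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType='AFTER_SEQUENCE_NUMBER',
        StartingSequenceNumber=checkpoint_seq)['ShardIterator']

    last = None
    while shard_iterator:
        resp = kinesis.get_records(ShardIterator=shard_iterator, Limit=1000)
        if resp['Records']:
            last = resp['Records'][-1]
        elif resp.get('MillisBehindLatest', 0) == 0:
            break  # caught up with the tip of the shard
        shard_iterator = resp.get('NextShardIterator')
    return last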





Amazon Cloudwatch service not returning metrics information of EC2 service

I am writing Java code to retrieve Amazon EC2 metrics using Amazon CloudWatch. Below is the code:



AWSCredentials awsCredentials = new BasicAWSCredentials(aws_accessKey, aws_secretKey);
AmazonCloudWatchClient cloudWatch = new AmazonCloudWatchClient(awsCredentials);

Dimension instanceDimension = new Dimension();
instanceDimension.setName("InstanceId");
instanceDimension.setValue("i-480de11e");


GetMetricStatisticsRequest request = new GetMetricStatisticsRequest();
request.setNamespace("AWS/EC2");
request.setPeriod(60 * 5);

ArrayList<String> stats = new ArrayList<String>();
stats.add("Average");
request.setStatistics(stats);

ArrayList<Dimension> dimensions = new ArrayList<Dimension>();
dimensions.add(instanceDimension);
request.setDimensions(dimensions);
request.setMetricName("CPUUtilization");

SimpleDateFormat format = new SimpleDateFormat("EEE MMM dd HH:mm:ss z yyyy");
Calendar cal = Calendar.getInstance();
cal.setTime(new Date());
cal.add(Calendar.HOUR_OF_DAY, -5);
cal.add(Calendar.MINUTE, -30);
Date endTime = format.parse(cal.getTime().toString());
request.setEndTime(endTime);

cal.add(Calendar.MINUTE, -10);
Date startTime = format.parse(cal.getTime().toString());
request.setStartTime(startTime);

GetMetricStatisticsResult getMetricStatisticsResult = cloudWatch.getMetricStatistics(request);
System.out.println(getMetricStatisticsResult.getDatapoints().size());


The above returns zero data points, though I can see the metrics data in the AWS console. A few things I would like to clarify:


1) Do I need to set endpoint like cloudWatch.setEndpoint(....)?

2) Could there be an issue with setting start/end time related to format etc?


Any help will be appreciated.
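
For comparison, the same request sketched in Python with boto3 (the instance ID is taken from the question, the region is a placeholder; note that GetMetricStatistics expects UTC timestamps and that the metrics only exist in the region the instance runs in, so the client has to target that region):

from datetime import datetime, timedelta

import boto3

cw = boto3.client('cloudwatch', region_name='us-east-1')  # the instance's region

end = datetime.utcnow()
start = end - timedelta(minutes=30)

resp = cw.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-480de11e'}],
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=['Average'])

print len(resp['Datapoints'])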





Wait for signal to start activity (amazon Workflow)

I have built a workflow using the Java Flow Framework provided by AWS. I have created 4 activities. The first activity waits for a signal to start; then all the activities execute one after another, chained through Promise<> objects. The workflow implementation code is the following:



public class PaginationWorkflowImpl implements PaginationWorkflow
{
    private ManualUploadClient operations0 = new ManualUploadClientImpl();
    private DownloadActivityClient operations1 = new DownloadActivityClientImpl();
    private ConvertActivityClient operations2 = new ConvertActivityClientImpl();
    private UploadActivityClient operations3 = new UploadActivityClientImpl();
    final Settable<String> result = new Settable<String>();

    public void paginate()
    {
        Promise<String> UDone = operations0.Upload(result);
        Promise<String> dnDone = operations1.s3Download(UDone);
        Promise<String> convDone = operations2.pdfToTiff(dnDone);
        operations3.s3Upload(convDone);
    }

    @Override
    public void signal1(String data) {
        // result.set(data);
        // result.Void();
        Promise<String> ready = Promise.asPromise("ready");
        result.chain(ready);
    }
}


Here the activity Upload waits for the object result to be in the ready state, so when I signal the workflow, the method signal1 kicks off and puts the object into the ready state. But as soon as I signal the workflow, the workflow execution fails.
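For illustration only, a Settable is normally readied inside the signal handler by calling set() on it directly rather than chaining a second Promise; a rough sketch (not a confirmed fix, and it assumes a missing signal payload should still unblock the workflow):


@Override
public void signal1(String data) {
    // Ready the Settable directly so the waiting Upload activity can proceed.
    if (!result.isReady()) {
        result.set(data != null ? data : "ready");
    }
}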


I am using the Node.js AWS SDK to signal the workflow. Below is the code for that:



var AWS = require('aws-sdk');
AWS.config.update({accessKeyId: '', secretAccessKey: ''});
AWS.config.update({region: 'us-east-1'});

var swf = new AWS.SWF();
var params = {
  domain: 'HWdemo2', /* required */
  signalName: 'signal1', /* required */
  workflowId: 'PaginationWorkflow', /* required */
  //input: 'true'
  //runId: 'STRING_VALUE'
};
swf.signalWorkflowExecution(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});


And the error shown in the AWS workflow events console for the failed execution is the following:





["java.util.concurrent.CancellationException", {
"cause": ["java.lang.NullPointerException", {
"cause": null,
"stackTrace": [{
"methodName": "<init>",
"fileName": null,
"lineNumber": -1,
"className": "java.io.StringReader",
"nativeMethod": false
}, {
"methodName": "createParser",
"fileName": "JsonFactory.java",
"lineNumber": 835,
"className": "com.fasterxml.jackson.core.JsonFactory",
"nativeMethod": false
}, {
"methodName": "readValue",
"fileName": "ObjectMapper.java",
"lineNumber": 2098,
"className": "com.fasterxml.jackson.databind.ObjectMapper",
"nativeMethod": false
}, {
"methodName": "fromData",
"fileName": "JsonDataConverter.java",
"lineNumber": 96,
"className": "com.amazonaws.services.simpleworkflow.flow.JsonDataConverter",
"nativeMethod": false
}, {
"methodName": "signalRecieved",
"fileName": "POJOWorkflowDefinition.java",
"lineNumber": 111,
"className": "com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition",
"nativeMethod": false
}, {
"methodName": "doExecute",
"fileName": "AsyncDecider.java",
"lineNumber": 417,
"className": "com.amazonaws.services.simpleworkflow.flow.worker.AsyncDecider$1",
"nativeMethod": false
}, {
"methodName": "",
"fileName": "",
"lineNumber": 0,
"className": "--- continuation ---",
"nativeMethod": false
}, {
"methodName": "handleWorkflowExecutionSignaled",
"fileName": "AsyncDecider.java",
"lineNumber": 413,
"className": "com.amazonaws.services.simpleworkflow.flow.worker.AsyncDecider",
"nativeMethod": false
}, {
"methodName": "processEvent",
"fileName": "AsyncDecider.java",
"lineNumber": 251,
"className": "com.amazonaws.services.simpleworkflow.flow.worker.AsyncDecider",
"nativeMethod": false
}, {
"methodName": "decide",
"fileName": "AsyncDecider.java",
"lineNumber": 496,
"className": "com.amazonaws.services.simpleworkflow.flow.worker.AsyncDecider",
"nativeMethod": false
}, {
"methodName": "handleDecisionTask",
"fileName": "AsyncDecisionTaskHandler.java",
"lineNumber": 50,
"className": "com.amazonaws.services.simpleworkflow.flow.worker.AsyncDecisionTaskHandler",
"nativeMethod": false
}, {
"methodName": "pollAndProcessSingleTask",
"fileName": "DecisionTaskPoller.java",
"lineNumber": 201,
"className": "com.amazonaws.services.simpleworkflow.flow.worker.DecisionTaskPoller",
"nativeMethod": false
}, {
"methodName": "run",
"fileName": "GenericWorker.java",
"lineNumber": 94,
"className": "com.amazonaws.services.simpleworkflow.flow.worker.GenericWorker$PollServiceTask",
"nativeMethod": false
}, {
"methodName": "runWorker",
"fileName": null,
"lineNumber": -1,
"className": "java.util.concurrent.ThreadPoolExecutor",
"nativeMethod": false
}, {
"methodName": "run",
"fileName": null,
"lineNumber": -1,
"className": "java.util.concurrent.ThreadPoolExecutor$Worker",
"nativeMethod": false
}, {
"methodName": "run",
"fileName": null,
"lineNumber": -1,
"className": "java.lang.Thread",
"nativeMethod": false
}],
"message": null,
"localizedMessage": null,
"suppressed": ["[Ljava.lang.Throwable;", []]
}],
"stackTrace": [{
"methodName": "execute",
"fileName": "POJOWorkflowDefinition.java",
"lineNumber": 66,
"className": "com.amazonaws.services.simpleworkflow.flow.pojo.POJOWorkflowDefinition",
"nativeMethod": false
}, {
"methodName": "doAsync",
"fileName": "AsyncDecider.java",
"lineNumber": 70,
"className": "com.amazonaws.services.simpleworkflow.flow.worker.AsyncDecider$WorkflowExecuteAsyncScope",
"nativeMethod": false
}],
"message": null,
"localizedMessage": null,
"suppressed": ["[Ljava.lang.Throwable;", []]
}]



Can anyone please help me out with this error? Thanks a lot in advance.





Remove entire object directory tree using AWS-SDK

I'm using the aws-sdk, and I'm trying to delete an object with the #delete_object method, for example:


s3.delete_object(bucket: ENV["AWS_BUCKET"], key: "images/123/myimage.png")


How can I delete the whole path (that's "images/123") instead of only the .png file? I don't want empty "folders". I've tested passing only the first part of the path (s3.delete_object(bucket: ENV["AWS_BUCKET"], key: "images/")) as the key, but it doesn't work. Thanks!
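For illustration, there are no real folders in S3, only keys that share a prefix, so removing the "directory" means listing every key under images/123/ and deleting each one. Here is a rough sketch of that idea with the AWS SDK for Java (the question itself uses the Ruby SDK; the bucket and prefix are placeholders):


import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class DeletePrefix {
    public static void deletePrefix(AmazonS3Client s3, String bucket, String prefix) {
        // List every key under the prefix and delete it, paging through truncated listings.
        ObjectListing listing = s3.listObjects(
                new ListObjectsRequest().withBucketName(bucket).withPrefix(prefix));
        while (true) {
            for (S3ObjectSummary summary : listing.getObjectSummaries()) {
                s3.deleteObject(bucket, summary.getKey());
            }
            if (!listing.isTruncated()) {
                break;
            }
            listing = s3.listNextBatchOfObjects(listing);
        }
    }
}


Once the last key under the prefix is gone, the "folder" disappears from listings on its own.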





Website Often Unresponsive or Unavailable in Germany

The reliability of our website is suffering in Germany specifically.


Pages are often unresponsive or the site fails to load at all.


As far as I am aware, this does not happen in any other country. We're monitoring Google Analytics, but it doesn't reveal any unusual behaviour.


What are the next stages for diagnosing the problem?


The site is hosted with Heroku; the databases are hosted with AWS RDS.





Invalid instance_type while creating Amazon RDS db.r3 instance from Ansible playbook

I'm trying to create an Amazon RDS instance using the Ansible rds module and I'm getting the following error when instance_type is db.r3.large:



msg: value of instance_type must be one of: db.t1.micro,db.m1.small,db.m1.medium,db.m1.large,db.m1.xlarge,db.m2.xlarge,db.m2.2xlarge,db.m2.4xlarge,db.m3.medium,db.m3.large,db.m3.xlarge,db.m3.2xlarge,db.cr1.8xlarge, got: db.r3.large



However, db.r3.large is a valid type as described here, and I can create one manually from the AWS console (i.e. without an Ansible playbook).


Here is my vars file:



instance_name: my-instance-name
region: us-east-1
zone: us-east-1b
size: 100
instance_type: db.r3.large
db_engine: MySQL
engine_version: 5.6.21
subnet: my-subnet
parameter_group: default.mysql5.6
security_groups: my-security-group
db_name: my-db-name
db_username: root
db_password: my-password


And here is my playbook:



---
- name: Playbook to provision RDS instance
  hosts: localhost
  connection: local
  gather_facts: no

  vars_files:
    - vars/rds.yml

  tasks:
    - name: Create MySQL RDS Instance
      local_action:
        module: rds
        command: create
        instance_name: "{{ instance_name }}"
        region: "{{ region }}"
        zone: "{{ zone }}"
        size: "{{ size }}"
        instance_type: "{{ instance_type }}"
        db_engine: "{{ db_engine }}"
        engine_version: "{{ engine_version }}"
        subnet: "{{ subnet }}"
        parameter_group: "{{ parameter_group }}"
        multi_zone: no
        db_name: "{{ db_name }}"
        username: "{{ db_username }}"
        password: "{{ db_password }}"
        vpc_security_groups: "{{ security_groups }}"
        maint_window: Sun:04:00-Sun:08:00
        backup_retention: 30
        backup_window: 01:00-3:00


subnet, vpc_security_groups, region, zone, etc. are fine, as I'm able to create a db.r3.large instance with the same settings from the AWS console.


It seems that something is wrong with the Ansible module or boto, but I could not find anything helpful. My Ansible version is 1.6.1, boto is 2.36.0 and botocore is 0.94.0.


How can I create a db.r3.large instance from Ansible?





How to find non-shared aws ami

I'd like to delete all AMIs that I own and that are not shared.

Eg:



$aws ec2 describe-images --executable-users 804427628951




This will list all images for which user 804427628951 has explicit launch permissions, but I don't know how to list all AMIs that are not shared at all. Could you please help?

Thanks.
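One way to read "non-shared" is: owned by my account and with an empty launchPermission attribute. As an illustration with the AWS SDK for Java (rather than the CLI used above; error handling and pagination are omitted):


import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DescribeImageAttributeRequest;
import com.amazonaws.services.ec2.model.DescribeImagesRequest;
import com.amazonaws.services.ec2.model.Image;

public class NonSharedAmis {
    public static void listNonShared(AmazonEC2Client ec2) {
        // Start with the AMIs owned by this account...
        DescribeImagesRequest owned = new DescribeImagesRequest().withOwners("self");
        for (Image image : ec2.describeImages(owned).getImages()) {
            // ...and keep only those whose launchPermission attribute is empty,
            // i.e. not shared with another account or made public.
            boolean shared = !ec2.describeImageAttribute(
                    new DescribeImageAttributeRequest()
                            .withImageId(image.getImageId())
                            .withAttribute("launchPermission"))
                    .getImageAttribute().getLaunchPermissions().isEmpty();
            if (!shared) {
                System.out.println(image.getImageId());
            }
        }
    }
}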





dimanche 29 mars 2015

ElasticSearch on Elastic Beanstalk

I'm trying to get ElasticSearch running in an Elastic Beanstalk environment. Using a Docker image, it's fairly straightforward to get one instance running in a load-balanced environment. However, when I try to add more instances to the cluster, they fail to discover each other and every new one becomes a new_master.


My Dockerfile looks like the following:



FROM dockerfile/java:oracle-java8
RUN ... # Downloading and installing ElasticSearch
RUN /elasticsearch/bin/plugin install elasticsearch/elasticsearch-cloud-aws/2.5.0
VOLUME ["/data"]
ADD config/elasticsearch.yml /elasticsearch/config/elasticsearch.yml
WORKDIR /data
CMD ["/elasticsearch/bin/elasticsearch"]

EXPOSE 9200


And the configuration in config/elasticsearch.yml looks like the following:



cluster:
  name: elastic-env-dev
cloud:
  aws:
    region: ap-southeast-2
discovery:
  type: ec2
  ec2:
    tag:
      Name: elastic-env-dev
    ping_timeout: 120s


The name of the EB environment is elastic-env-dev.





How to check whether MFA is enabled for root account in AWS using boto?

I am working on Trusted Advisor and need to check whether MFA is enabled at the root level as well. It's in the Security section of the Trusted Advisor dashboard. I am working in Python using boto.
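The underlying IAM GetAccountSummary API returns an AccountMFAEnabled entry that is 1 when the root account has MFA turned on, and boto should expose the same call. Purely as an illustration of the call, here is a sketch with the AWS SDK for Java:


import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClient;
import com.amazonaws.services.identitymanagement.model.GetAccountSummaryResult;

public class RootMfaCheck {
    public static boolean rootMfaEnabled(AmazonIdentityManagementClient iam) {
        // GetAccountSummary reports account-level counters, including AccountMFAEnabled.
        GetAccountSummaryResult summary = iam.getAccountSummary();
        Integer flag = summary.getSummaryMap().get("AccountMFAEnabled");
        return flag != null && flag == 1;
    }
}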





Can I convert S3 backed AMI to EBS backed AMI?

Can I convert an S3-backed AMI to an EBS-backed AMI? If yes, how? Can I convert an EBS-backed AMI to an S3-backed AMI? If yes, how?





Elastic Beanstalk AWS ebextensions Windows ASP.net private bin files best practices

I'm deploying an ASP.NET web app to Elastic Beanstalk and trying to find the proper way to deploy it from Visual Studio using the AWS extensions. I've read up on the AWS ebextensions config files, but I've also seen something on SO about quirks with ebextension files and Windows deployments. What I want to do is deploy my web app, which relies on many private .NET DLLs that need to go in the web site's bin folder. Right now I reference them directly from the project itself and mark them as 'Content' and 'Copy If Newer' in VS. I'm wondering if the better alternative is to package up these bin files, upload them to S3 and use ebextensions to get them installed as part of the deployment process for an EB app. I'm struggling with how to properly do the latter. Any ideas on best practices for this case?





Unable to provision AWS server with salt-cloud: Key pair does not exist

I'm following the salt-cloud AWS guide and am having a little trouble with an error message I believe to be unclear. The error is:



$ sudo salt-cloud -p ubuntu_aws test-vm
[INFO ] salt-cloud starting
[INFO ] Creating Cloud VM test-vm in ap-southeast-1
[ERROR ] EC2 Response Status Code and Error: [400 400 Client Error: Bad Request] {'Errors': {'Error': {'Message': "The key pair 'testkey' does not exist", 'Code': 'InvalidKeyPair.NotFound'}}, 'RequestID': '******************************'}
[ERROR ] There was a profile error: 'str' object does not support item assignment


I've learned that the last portion is a Python error, which I thought might suggest that there's a syntax error in my configuration, but I can't find any issues with it.


ubuntu_aws config



ubuntu_aws:
provider: aws
image: ami-e2f1c1b0
size: Micro Instance
ssh_username: ec2-user


provider config



private_key: /path/to/testkey.pem
keyname: testkey
securitygroup: default


I also noticed there are 2 default groups, neither of which I'm able to delete.


When I visit http://ift.tt/1ES9j6v I can see that the key is indeed there on us-east-1.




My testkey.pem key has -rw------- permissions.


I'm still learning to use salt-cloud and AWS and I'm struggling to determine if it's an issue with my AWS config or something with my Salt config. Any guidance would be helpful.





Mongodb issues on AWS

I have been trying to install MongoDB on my AWS EC2 instance to work with a Node.js server I wrote.


Following this guide I was able to "install" MongoDB.


However, I set up aliases to the paths of mongod and mongo, since the paths were dropped when I logged out:



alias mongod='/home/ec2-user/mongodb/mongodb-linux-x86_64-3.0.0/bin/mongod'
alias mongo='/home/ec2-user/mongodb/mongodb-linux-x86_64-3.0.0/bin/mongo'


I can start the mongo server using this alias no problem.


However when I run node server.js, I get



/home/ec2-user/project/node_modules/mongodb/lib/mongodb/connection/base.js:246
throw message;
^
AssertionError: {"name":"MongoError","ok":0,"errmsg":"ns not found"} == null
at /home/ec2-user/project/server.js:38:13
at /home/ec2-user/project/node_modules/mongodb/lib/mongodb/db.js:1217:20
at /home/ec2-user/project/node_modules/mongodb/lib/mongodb/db.js:1194:16
at /home/ec2-user/project/node_modules/mongodb/lib/mongodb/db.js:1903:9
at Server.Base._callHandler (/home/ec2-user/project/node_modules/mongodb/lib/mongodb/connection/base.js:453:41)
at /home/ec2-user/project/node_modules/mongodb/lib/mongodb/connection/server.js:487:18
at MongoReply.parseBody (/home/ec2-user/project/node_modules/mongodb/lib/mongodb/responses/mongo_reply.js:68:5)
at null.<anonymous> (/home/ec2-user/project/node_modules/mongodb/lib/mongodb/connection/server.js:445:20)
at emit (events.js:95:17)
at null.<anonymous> (/home/ec2-user/project/node_modules/mongodb/lib/mongodb/connection/connection_pool.js:207:13)


I've searched around a bit for this error but couldn't find anything. The odd part is that I can see the incoming connections in the screened mongod process, as shown here:



2015-03-29T20:40:45.782+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37394 #34 (1 connection now open)
2015-03-29T20:40:45.782+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37395 #35 (2 connections now open)
2015-03-29T20:40:45.783+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37396 #36 (3 connections now open)
2015-03-29T20:40:45.783+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37397 #37 (4 connections now open)
2015-03-29T20:40:45.784+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37398 #38 (5 connections now open)
2015-03-29T20:40:46.721+0000 I COMMAND [conn35] CMD: drop data.page-1
2015-03-29T20:40:46.749+0000 I COMMAND [conn36] CMD: drop data.page-2
2015-03-29T20:40:46.772+0000 I COMMAND [conn37] CMD: drop data.page-4
2015-03-29T20:40:46.803+0000 I COMMAND [conn38] CMD: drop data.page-3
2015-03-29T20:40:46.822+0000 I COMMAND [conn34] CMD: drop data.page-5
2015-03-29T20:40:46.829+0000 I NETWORK [conn34] end connection 127.0.0.1:37394 (4 connections now open)
2015-03-29T20:40:46.829+0000 I NETWORK [conn35] end connection 127.0.0.1:37395 (3 connections now open)
2015-03-29T20:40:46.829+0000 I NETWORK [conn36] end connection 127.0.0.1:37396 (2 connections now open)
2015-03-29T20:40:46.829+0000 I NETWORK [conn37] end connection 127.0.0.1:37397 (1 connection now open)
2015-03-29T20:40:46.829+0000 I NETWORK [conn38] end connection 127.0.0.1:37398 (0 connections now open)




Elastic Beanstalk intermittently activates rack 1.5.2, but my Gemfile requires rack 1.6.0

I am running a standard Rails 4.2.0 app on Elastic Beanstalk. The container is the 64-bit Amazon Linux 2014.09 v1.0.9 box running Ruby 2.1.4, Puma 2.9.1 and Nginx 1.6.2.


When I push code to this environment, I get the following error in the puma.log: "You have already activated rack 1.5.2, but your Gemfile requires rack 1.6.0. Prepending bundle exec to your command may solve this."


I do not remember seeing these errors a few months ago when I was testing and it seems to be intermittent. Sometimes I push and everything works, other times I push and it fails.


http://ift.tt/19iUgtY suggests that there may be a bug in /opt/elasticbeanstalk/support/conf/puma.conf, but I've patched that file and the error still occurs. I've also made sure I have puma and rack in my Gemfile.


What is the most production ready and sustainable way to get my EC2 instances to load the right version of rack?





Main differences between SNS notifications and Webhooks

What are the main differences between AWS SNS notifications and Webhooks? When would you use one instead of another and why?





"The request signature we calculated does not match the signature you provided" using different access/secret keys

I've read a lot of issues like this here and I've got good answers that solved my other problems.


I've created my own AWSAccessKeyId and secretAccess and the request works, but when my client gives me his AWSAccessKeyId and secretAccess I get this message:



<ItemLookupErrorResponse xmlns="http://ift.tt/1DUnkoW">
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
</Message>
</Error>
<RequestId>c53a74b1-4db2-4356-aca9-1b7d1519737b</RequestId>
</ItemLookupErrorResponse>


Here is my code:



var today = new Date();
time = today.toISOString();
time = encodeURIComponent(time);

var AWSAccessKeyId = "My Key";
var secretAccess = "SECRET ACCESS";
var associateTag = "";

var messageToEncrypt ="GET\nwebservices.amazon.com\n/onca/xml\nAWSAccessKeyId="+AWSAccessKeyId+"&ItemId=0679722769&Operation=ItemLookup&ResponseGroup=SalesRank&Service=AWSECommerceService&Timestamp="+time+"&Version=2013-08-01";
var sig = CryptoJS.HmacSHA256(messageToEncrypt, secretAccess);

sig = sig.toString(CryptoJS.enc.Base64);
sig = encodeURIComponent(sig);

var request = "http://ift.tt/1ONCXlp"+AWSAccessKeyId+"&ItemId=0679722769&Operation=ItemLookup&ResponseGroup=SalesRank&Service=AWSECommerceService&Timestamp="+time+"&Version=2013-08-01&Signature="+sig;


What is causing the problem? Is there anything else I have to ask my client to do to get the right keys? My client is from another country; could this cause another problem?


I didn't change a lot, just the keys.





Amazon AWS SNS register application

I'm trying to register an application using the AWS SDK with the following code:



$AmazonSNS = SnsClient::factory(array(
    'key'    => $sns_id,
    'secret' => $sns_secret,
    'region' => 'us-east-1'
));

$app_details = array(
    'Name'       => $app_name,
    'Platform'   => 'APNS',
    'Attributes' => array(
        'PlatformCredential ' => $pem, //PEM IS A STRING
        'PlatformPrincipal'   => ''
    )
);

$results = $AmazonSNS->createPlatformApplication($app_details);


All I get is the following exception



Fatal error: Uncaught Aws\Sns\Exception\InvalidParameterException: AWS Error Code: InvalidParameter, Status Code: 400, AWS Request ID: 059d5491-75a5-5f7b-9183-190339511e06, AWS Error Type: client, AWS Error Message: Invalid parameter: Attributes Reason: Invalid attribute name: PlatformCredential , User-Agent: aws-sdk-php2/2.7.25 Guzzle/3.9.3 curl/7.38.0 PHP/5.5.22
thrown in /home/notifications/public_html/Aws/Aws/Common/Exception/NamespaceExceptionFactory.php on line 91


My goal is to register an application with AWS for future use in sending notifications to the app users.


What is wrong with my code?





Using IAM credentials in PHP AWS SDK for uploading files to S3

I'm using the AWS PHP SDK to upload files to an S3 bucket. When I'm using the root credentials, everything works (the files are uploaded, I can list them, etc.). However, I want to use IAM credentials (key/secret), but I'm getting:



AWS Error Code: SignatureDoesNotMatch, Status Code: 403, AWS Request ID: 4753C8291E073CE9, AWS Error Type: client, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.



This is my IAM policy:



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1427650841800",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::BUCKETNAME/*"
        }
    ]
}


I've tried applying a policy granting general access to all S3 buckets, but it didn't help.


I tried adding a bucket policy:



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1427647391802",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::791442585307:user/IAM_USERNAME"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::BUCKETNAME/*"
        }
    ]
}


But that didn't help either.


This is my CORS configuration for the bucket:



<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://ift.tt/1f8lKAh">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>


Is there anything I need to change or add somewhere? I don't understand what I'm doing wrong.


NOTE: I'm testing everything from a local environment, the application is a Laravel 4 application.





How to set up remote to EC2 config file when using a MAC

So I'm trying to deploy angular-fullstack to an ec2 instance.


I found an awesome example video on YouTube; however, one of its examples for a .git config file uses the Windows-specific key puttykeyfile, whereas I'd rather use a .pem.


Client Side - Create git repo and add a remote into config file



[remote "AWS_production"]
url = ssh://ubuntu@YOUR-IP/home/ubuntu/repo_do_not_delete/
fetch = +refs/heads/*:refs/remotes/repo_do_not_delete/*
puttykeyfile = C:\\Users\\YOUR-USER\\.ssh\\private.ppk


Question: using this block of text from Windows, how would I change puttykeyfile to point to my .pem file at /Users/matthew.harwood/.ssh/private.pem?





AWS SES Guzzle Error when sending email

I have set up the AWS SES service using the PHP SDK:



$this->client = SesClient::factory([
    'key'       => $params['key'],
    'secretKey' => $params['secret_key'],
    'region'    => 'eu-west-1',
    'base_url'  => 'http://ift.tt/1Czzwtj',
]);

$this->client->sendEmail($this->params());

public function params() {
    return array(
        'Source' => 'verified@gmail.com',
        'Destination' => array(
            'ToAddresses' => array('receiver@yahoo.com')
        ),
        'Message' => array(
            'Subject' => array(
                'Data' => 'SES Testing',
                'Charset' => 'UTF-8',
            ),
            // Body is required
            'Body' => array(
                'Text' => array(
                    'Data' => 'My plain text email',
                    'Charset' => 'UTF-8',
                ),
                'Html' => array(
                    'Data' => '<b>My HTML Email</b>',
                    'Charset' => 'UTF-8',
                ),
            ),
        ),
        'ReplyToAddresses' => array( 'replyTo@email.com' ),
        'ReturnPath' => 'bounce@email.com'
    );
}


After trying to send email, I receive this error message:



exception 'Guzzle\Http\Exception\CurlException' with message
'[curl] 23: Failed writing body (0 != 86) [url] http://ift.tt/1Czzwtj'
in C:\xampp\htdocs\myProject\protected\lib\vendor\guzzle\guzzle\src\Guzzle\Http\Curl\CurlMulti.php:338


Anyone know how to fix that error?