mercredi 31 décembre 2014

Laravel connect to SQL Server RDS

I am trying to connect to a SQL Server RDS instance from my Ubuntu EC2 instance using Laravel, but I always get this error:



PDOException (20002)
SQLSTATE[01002] Adaptive Server connection failed (severity 9)


I have added a security group that opens all traffic to the DB just to test the connection but still no luck.


My config looks like:



'sqlsrv' => array(
    'driver'   => 'sqlsrv',
    'host'     => 'mssql.afdafaws.ap-northeast-1.rds.amazonaws.com',
    'database' => 'database',
    'username' => 'username',
    'password' => 'password',
    'prefix'   => '',
),
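
Since "Adaptive Server connection failed" typically comes from the FreeTDS/dblib driver and usually means the TDS layer never reached the host at all, a quick sanity check is a raw TCP connection from the EC2 instance to the RDS endpoint. A minimal Python sketch, using the hostname from the config above; port 1433 is an assumption (the SQL Server default), since no port is set in the config:

import socket

host = "mssql.afdafaws.ap-northeast-1.rds.amazonaws.com"
port = 1433  # default SQL Server port; adjust if the RDS instance uses another

try:
    # If this fails, the problem is at the network/security-group level, not in Laravel.
    sock = socket.create_connection((host, port), timeout=5)
    print("TCP connection succeeded")
    sock.close()
except socket.error as exc:
    print("TCP connection failed:", exc)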




Kinesis stream account incorrect

I have set up my PC with Python and connections to AWS. This was successfully tested using the s3_sample.py file; I had to create an IAM user account with the credentials in a file, which worked fine for S3 buckets. My next task was to create an MQTT bridge and put some data into a Kinesis stream using awslabs/mqtt-kinesis-bridge. This all seems to be OK, except that I get an error when I run bridge.py. The error is:



Could not find ACTIVE stream:my_first_stream error:Stream my_first_stream under account 673480824415 not found.


Strangely, this is not the account I use in the .boto file that the bridge suggests setting up, which contains the same credentials I used for the S3 bucket:



[Credentials]
aws_access_key_id = AA1122BB
aws_secret_access_key = LlcKb61LTglis


It would seem to me that bridge.py has a hardcoded account, but I cannot see it, and I can't see where it is pointing to the .boto file for credentials. Thanks in advance.
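
One way to confirm which account and streams the credentials in the .boto file actually resolve to is to ask Kinesis directly with boto, the same library the bridge uses. A minimal sketch, assuming the stream lives in us-east-1 (adjust the region if the bridge targets another one):

import boto.kinesis

# boto reads ~/.boto (or environment variables) automatically
conn = boto.kinesis.connect_to_region("us-east-1")

print(conn.list_streams())                      # streams visible to these credentials
print(conn.describe_stream("my_first_stream"))  # raises if the stream is not in this account/region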





Starting Latchet server with artisan cmd - works on vagrant, not on EC2

I'm using Latchet with Laravel (4.2). I've set up Latchet on my vagrant box and start it using an artisan command on deployment:



public function fire()
{
    Log::info("Latchet server starting");
    $command = "php artisan latchet:listen --env " . App::environment();
    $process = new Process($command);
    $process->start();
    sleep(1);
    if ($process->isRunning()) {
        Log::info("Latchet Server is running.");
        Log::info("Latchet pid: " . $process->getPid());
    } else {
        Log::info("Latchet Server is not running.");
        Log::info($process->getOutput());
    }
}


This works perfectly fine on my vagrant box, but not when I run it on the Amazon EC2 (Linux) instance by SSH-ing in. On EC2 it says the server is running and using port 1111 as it should, but when I check open ports, 1111 is not there (and when I navigate to the site I can see that the websocket connection can't be made). When I actually run php artisan latchet:listen --env development while SSH-ed into the instance it works fine (but then I can't close the terminal, of course). So it seems it's just the artisan command that is not working.


At first I thought maybe the environment name isn't getting set properly with App::environment(), so I hard coded the environment name into the above after also checking that it was being set correctly via php artisan env - same result.


Because it works on my local vagrant box I don't think this is a Latchet problem. Something must be different between the EC2 and vagrant setups - maybe a security issue? I'm out of ideas - does anyone know what might be going wrong/if there is any special property in EC2 that prevents this from continuing to run?


Thanks!





Amazon MWS string to sign and signature: Python

I have to admit I'm very new to this, so this might be a dumb question or I might be going about this completely the wrong way. I'm trying to figure out the string to sign and the Base64 HMAC signature. At this point I would like to verify that the code I've found works. Here it is:



import hashlib
import hmac
import base64

message = "Message".encode('utf-8')
secret = "secret".encode('utf-8')

signature = base64.b64encode(hmac.new(secret, message, digestmod=hashlib.sha256).digest())
print(signature)


I had the impression that I could copy the string to sign off the Scratchpad, replace "Message" with it, and then paste in my secret key for "secret". However, my output doesn't match the Amazon Scratchpad's signature. Can someone point out the error in my ways?
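
In case it helps to see the whole flow, here is a hedged sketch of how the Signature Version 2 string to sign is normally assembled before the HMAC step. The endpoint, path, and parameter values below are placeholders; the point is only that the exact bytes of this canonical string (verb, host, path, and the sorted, percent-encoded query) must match what is actually sent, which is why copying the Scratchpad text with stray whitespace or a different encoding produces a different signature:

import base64
import hashlib
import hmac
from urllib.parse import quote

params = {
    "AWSAccessKeyId": "AKIAEXAMPLE",         # placeholder
    "Action": "GetMatchingProductForId",     # placeholder operation
    "SellerId": "A1EXAMPLE",                 # placeholder
    "SignatureMethod": "HmacSHA256",
    "SignatureVersion": "2",
    "Timestamp": "2014-12-31T12:00:00Z",
    "Version": "2011-10-01",
}

# Parameters sorted by name, percent-encoded the way AWS expects
canonical_query = "&".join(
    "%s=%s" % (quote(k, safe="-_.~"), quote(v, safe="-_.~"))
    for k, v in sorted(params.items())
)

string_to_sign = "\n".join([
    "POST",                      # HTTP verb actually used
    "mws.amazonservices.com",    # host, lowercase
    "/Products/2011-10-01",      # request URI
    canonical_query,
])

secret = "secret".encode("utf-8")   # your real secret key goes here
signature = base64.b64encode(
    hmac.new(secret, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
)
print(signature)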





Turning off Unauthenticated Identities in Amazon Cognito for IOS

I disabled access to Unauthenticated Identities and found that my Logging threw these messages:



2014-12-31 13:43:33.010 com.tharock[421:136403] AWSiOSSDKv2 [Verbose] AWSURLResponseSerialization.m line:263 | -[AWSXMLResponseSerializer responseObjectForResponse:originalRequest:currentRequest:data:error:] | Response body: [<ErrorResponse xmlns="http://ift.tt/17546ih">
<Error>
<Type>Sender</Type>
<Code>ValidationError</Code>
<Message>Request ARN is invalid</Message>
</Error>
<RequestId>111c34e1-9136-11e4-92c2-75de57cf7c5e</RequestId>
</ErrorResponse>
]
2014-12-31 13:43:33.027 com.tharock[421:136403] AWSiOSSDKv2 [Error] AWSCredentialsProvider.m line:433 | __40-[AWSCognitoCredentialsProvider refresh]_block_invoke293 | Unable to refresh. Error is [Error Domain=com.amazonaws.AWSSTSErrorDomain Code=0 "The operation couldn’t be completed. (com.amazonaws.AWSSTSErrorDomain error 0.)" UserInfo=0x1740f7a00 {Type=Sender, Message=Request ARN is invalid, Code=ValidationError, __text=(
"\n ",
"\n ",
"\n ",
"\n "
)}]
2014-12-31 13:43:33.028 com.tharock[421:136403] Error: Error Domain=com.amazonaws.AWSSTSErrorDomain Code=0 "The operation couldn’t be completed. (com.amazonaws.AWSSTSErrorDomain error 0.)" UserInfo=0x1740f7a00 {Type=Sender, Message=Request ARN is invalid, Code=ValidationError, __text=(
"\n ",
"\n ",
"\n ",
"\n "
)}


Is it complaining about my CognitoRoleUnauth parameter defined in Constants.h? I have a valid ARN supplied for CognitoRoleAuth, and a valid CognitoPoolID and account. It has been working well with the unauthenticated identity, but I must close that off now.





Newbie (myself) would like to pay to get an AWS requester pays bucket usage for arxiv answer - - is that allowed here?

Newbie (myself) would like to pay to get an AWS requester pays bucket usage for arxiv answer - - is that allowed here?


My amazon account is set up, and I have tried a few programs to try to locate and download the arxiv requester pays bucket, but no success.


If permitted here, I would like to pay for a detailed start-to-finish outline of how the HECK to do this on Windows 7 x64.


Yes, I have already checked here and on YouTube, and am so frustrated I am willing to pay $10 US via PayPal to the first brilliant person who solves this for me. Thanks in advance, miniscule





Firewall in Azure/AWS/IBM Virtual Machine ?

I have a Web Application, Web Service and SQL-Server in Azure.


I'm now trying to duplicate those services to AWS and IBM as well, to figure out which of these vendors suits me best.


But I have a question that relates to all cloud vendors:


Do I need to configure a firewall in any of the VMs that I'm using?


I know that in Azure I can choose "endpoints" (IP & ports) to expose, but is this the only protection I need? Don't I need to configure or install anything else?





AWS S3 - How to create x-amz-server-side-encryption-customer-key and customer-key md5 for direct upload?

I'm trying to create these keys but I cannot. Has anyone created them in Ruby? I tried different ways but none worked. Sorry, I don't have sample code, only the form!


'AES256'},{'x-amz-server-side-encryption-customer-key' => 'error'},{'x-amz-server-side-encryption-customer-key-MD5' => 'md5-error'}] do %>
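
For reference, the two header values are normally just the Base64 encoding of the raw 256-bit key and the Base64 encoding of that key's MD5 digest. The question is about Ruby, so the following is only a hedged Python sketch of the derivation (the key here is randomly generated for illustration):

import base64
import hashlib
import os

raw_key = os.urandom(32)  # a 256-bit customer-provided key

customer_key = base64.b64encode(raw_key).decode("ascii")
customer_key_md5 = base64.b64encode(hashlib.md5(raw_key).digest()).decode("ascii")

headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": customer_key,
    "x-amz-server-side-encryption-customer-key-MD5": customer_key_md5,
}
print(headers)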



How to connect to Amazon Web Service RDS on MySQL Workbench?

So I've set up a DB instance on AWS, and looking around at all the guides I should now be able to go into MySQL Workbench and connect successfully, as I have a hostname, port, user ID and password. However, when I enter all the details I specified when creating the instance, I get the error: "Failed to Connect to MySQL at with user", then below it says the same error with (10060) in brackets.


I looked up this error but couldn't find any relevant solution. I need help quite urgently as I'm meant to have finished this project by the end of the week!





Spring Cloud - SQS

I'm trying to get a simple queue handler working with the Spring Cloud framework. I've successfully got the message handler polling the queue. However, the problem I'm seeing is that when I post a message to the queue, my handler fails to unmarshall the payload into the required Java object.



@MessageMapping("MyMessageQueue")
@SuppressWarnings("UnusedDeclaration")
public void handleCreateListingMessage(@Headers Map<String, String> headers, MyMessage message) {
    //do something with the MyMessage object
}


The error I'm getting is



No converter found to convert to class MyMessage


As I understand it, @MessageMapping should use Jackson to unmarshall my JSON payload into a MyMessage object. However, it's complaining that it cannot find a converter.


Has anyone come across this?


I'm using the 1.0.0.BUILD-SNAPSHOT version of Spring Cloud.





How to deploy an Ionic Framework app to Amazon? (Elastic Beanstalk preferred)

I have a Node.js app which creates an HTTP server and handles socket connections, and I have an Ionic Framework application (Angular.js). Could you please give me some guidance on how to run it on AWS? On my local machine I simply do node app.js, then go to the Ionic app folder and type ionic serve.


Please help; I'm stuck after trying dozens of ideas, and even Amazon Support could not help me, for some reason.





Can anyone suggest an effective way to deal with S3 upload failures caused by a timezone difference issue?

I tried both SDK versions, V1 and V2. I have an application in which I am posting users' photos/videos to S3. When the device's timezone is not set to automatic, uploading often fails because of the timezone difference. I am not able to catch this error or exception consistently: didFailWithError never gets called for the timezone difference, so I have to catch it in didCompleteWithResponse.


I used the code below for the 1.7.1 SDK.



[AmazonLogger verboseLogging];
AmazonS3Client *s3 = [[AmazonS3Client alloc] initWithAccessKey:AWS_AccessKey withSecretKey:AWS_SecretKey];
s3.endpoint = [AmazonEndpoints s3Endpoint:US_EAST_1];

@try
{
    por = [[S3PutObjectRequest alloc] initWithKey:[aStrAWSPath lastPathComponent] inBucket:aStrFolder];
    por.contentType = aStrType;
    por.data = aDataToPost;
    por.delegate = self;
    [por setCannedACL:[S3CannedACL publicReadWrite]];
    [s3 putObject:por];
    aWSTotalBytesWritten = 0.0;
}
@catch (AmazonServiceException *exception)
{
    NSLog(@"%@", exception.description);
}
@catch (AmazonClientException *exception)
{
    NSLog(@"%@", exception.description);
}

-(void)request:(AmazonServiceRequest *)request didCompleteWithResponse:(AmazonServiceResponse *)response
{
    if (response.exception == nil)
    {
        // Success
    }
    else
    {
        if ([response.exception isKindOfClass:[AmazonServiceException class]])
        {
            AmazonServiceException *aServiceExceptionObj = (AmazonServiceException *)response.exception;
            if ([aServiceExceptionObj.errorCode isEqualToString:@"RequestTimeTooSkewed"])
            {
                // Ask the user to check the date & time settings; they should be set to automatic.
            }
        }
    }
}

-(void)request:(AmazonServiceRequest *)request didFailWithError:(NSError *)error
{
    NSLog(@"AWSError : %@", error.description);
}


In AWSiOSSDKv2, I used the code below:



AWSServiceConfiguration *aConfigObj = [AWSServiceConfiguration configurationWithRegion:AWSRegionUSEast1 credentialsProvider:CustomCredentialsProviderObj];
AWSS3TransferManager *transferManager = [[AWSS3TransferManager alloc] initWithConfiguration:aConfigObj identifier:@"testUplaod"];
AWSS3TransferManagerUploadRequest *uploadRequest = [AWSS3TransferManagerUploadRequest new];
uploadRequest.bucket = @"testsdkv2/testsdkv2internal";
uploadRequest.key = [NSString stringWithFormat:@"%d.jpg", (int)[[NSDate date] timeIntervalSince1970]];
NSURL *aUrlObj = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"test" ofType:@"jpg"]];
uploadRequest.body = aUrlObj;
uploadRequest.ACL = AWSS3BucketCannedACLPublicReadWrite;
uploadRequest.contentType = @"image/jpeg";

[[transferManager upload:uploadRequest] continueWithBlock:^id(BFTask *task) {

    if (task.error)
    {
        // Not uploaded
    }

    if (task.result)
    {
        // The file uploaded successfully.
    }

    return nil;
}];




How can I set the Cache-Control when using Write-S3Object

I am using Windows Powershell for AWS and I have tried the following:


Write-S3Object -BucketName 'user-ab-staging' -KeyPrefix 'content/css' -Folder 'content/css' -SearchPattern '*.css' -Metadata @{"Cache-Control" = "Value"} -CannedACLName PublicRead


It gives me a very strange error and only tries to load one css file:



Uploaded 1 object(s) to bucket 'user-ab-staging' from 'C:\g\ab-user\WebUserApp\content\css' with keyprefix
'content/css'
Write-S3Object :
At line:1 char:1
+ Write-S3Object `
+ ~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (Amazon.PowerShe...eS3ObjectCmdlet:WriteS3ObjectCmdlet) [Write-S3Objec
t], InvalidOperationException
+ FullyQualifiedErrorId : Amazon.S3.AmazonS3Exception,Amazon.PowerShell.Cmdlets.S3.WriteS3ObjectCmdlet


Can anyone help tell me what is wrong with this and how I can set the Cache-Control metadata for the object when I am using Write-S3Object and the PowerShell extension for AWS?


Thank you





PHP Upstart on Amazon EC2 Linux (Elasticbeanstalk)

I have a couple of PHP scripts that I have run for ages successfully on Ubuntu (AWS EC2) as Upstart daemon services. I am currently in the process of migrating the standalone EC2 instance to an Elastic Beanstalk worker application. Generally this has worked well and I have the daemons and some cron jobs set up fine using Elastic Beanstalk extensions. The problem I am having is that the daemons are falling over. I know the PHP is fine as I can run it from the command line (plus it has been running well on Ubuntu). Similarly, I am confident my Upstart .conf file (below) is fine as it came from Ubuntu and works perfectly there. It also starts as expected but continually fails with the unhelpful "terminated with status 1" error. Status 1 being pretty much anything, as I understand.


Extract from /var/log/messages



Dec 31 11:33:47 ip-172-31-0-74 init: init-pulses main process (8809) terminated with status 1
Dec 31 11:33:47 ip-172-31-0-74 init: init-pulses main process ended, respawning


init-pulses.conf



start on filesystem and started elastic-network-interfaces
stop on shutdown
respawn
respawn limit unlimited

script
sudo -u root php /var/www/html/index.php scripts init_pulses
end script


The PHP script contains a loop but it never hits the PHP. There is something up with how I am executing the command although as I said this is totally cool on Ubuntu. I have tried various forms of the same all with the same problem. Can anyone offer any suggestions on how to construct the script block so it actually manages to fire the command or any ideas on how to debug this?


Any help, as always, much appreciated





A script for Browser to Add Affiliate Code

I want to know how I can add an affiliate code to all links automatically (to online shopping sites like Flipkart.com, Amazon India and Amazon US) through a script in a browser. My retired uncle runs a free public library (always having financial trouble because no one donates). The library has a few computers which are often used by people to buy books or other stuff online through Amazon or Flipkart (an Indian online shopping portal). I was hoping to help him by generating some revenue through those purchases.


The affiliate format that Flipkart uses is (assuming the affiliate code is Code1):


http://flipkart.com/?affid=Code1 (if "?" has not already been used in the URL; if it has, the "?" is replaced by "&", i.e. "&affid=Code1")


Amazon India and Amazon US use the same format, but the affiliate codes are different.
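
Just to make that "?" versus "&" rule concrete, here is a hedged sketch of the logic in Python; an actual solution would live in a browser extension or userscript, and the affiliate code shown is the placeholder from above:

def add_affiliate_code(url, affid="Code1"):
    # use "?" if the URL has no query string yet, otherwise append with "&"
    if "affid=" in url:
        return url  # already tagged, leave it alone
    separator = "&" if "?" in url else "?"
    return url + separator + "affid=" + affid

print(add_affiliate_code("http://flipkart.com/"))          # -> http://flipkart.com/?affid=Code1
print(add_affiliate_code("http://flipkart.com/?q=books"))  # -> http://flipkart.com/?q=books&affid=Code1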


Please help and tell me whether there is any extension which I can use for doing the same.


Edit: I am a total noob when it comes to coding, so please.





Django ALLOWED_HOSTS with ELB HealthCheck

I have a django application deployed on Elastic Beanstalk. The HealthCheck for my app keeps failing because the IP of the ELB HealthCheck is not included in my ALLOWED_HOSTS settings variable.


How can I modify ALLOWED_HOSTS to make the HealthCheck pass? I would just pass in the explicit IP address, but I believe that this changes, so whenever the IP changes the check would fail again until I add the new IP.
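
One pattern people use for this (a hedged sketch, not necessarily the best answer) is to have settings.py ask the EC2 instance metadata service for the instance's private IP and append it to ALLOWED_HOSTS, since the health check addresses each instance directly by that IP. This assumes the app runs on EC2 and that the requests package is installed:

import requests

ALLOWED_HOSTS = ["example.com"]  # your real hosts

try:
    private_ip = requests.get(
        "http://169.254.169.254/latest/meta-data/local-ipv4", timeout=0.2
    ).text
    ALLOWED_HOSTS.append(private_ip)
except requests.exceptions.RequestException:
    pass  # not on EC2, or the metadata service is unreachable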





How can i use wildcards in EC2 commands

I have some EC2 instances. I want to use the 'ec2-describe-instances' command to get a list of instances based on a specific value of a tag.


The table shows my use-case.


Instance  | Value (key:Purpose)     | Outcome
InstanceA | Going                   | Filter
InstanceB | Shopping,Going          | Filter
InstanceC | Going,Shoping           | Filter
InstanceD | Shopping,Going,Chatting | Filter
InstanceE | GoingGreat              | DONT Filter
InstanceF | NotGoing                | DONT Filter


So I want to somehow use a wildcard in the ec2-describe-instances command so that I get the expected outcome.
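
For what it's worth, a plain wildcard such as *Going* would also match "GoingGreat" and "NotGoing", so one hedged alternative is to fetch the instances and do an exact-element match on the comma-separated tag value client-side. A minimal sketch with boto; the region is an assumption:

import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

wanted = "Going"
for instance in conn.get_only_instances():
    purpose = instance.tags.get("Purpose", "")
    # split the tag value on commas so "GoingGreat" and "NotGoing" don't match
    if wanted in [v.strip() for v in purpose.split(",")]:
        print(instance.id, purpose)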





How to recycle aws server app pool remotely

I am trying to recycle the AWS app pool remotely using the syntax below:



using (DirectoryEntry appPoolEntry = new DirectoryEntry(
    "IIS://" + appPoolModel.ServerName + "/W3SVC/AppPools/" + appPoolModel.AppPoolName))
{
    appPoolEntry.Invoke("Recycle", null);
    appPoolEntry.Close();
}


But I am getting the error below:



System.Runtime.InteropServices.COMException (0x800706BA): The RPC server is unavailable.



For the AWS server I am using the server name as follows:



"ec2-[Server Public IP].compute-1.amazonaws.com"




How can I make a macro to run a few PowerShell commands one after the other?

I am using the Amazon Web Services PowerShell extension. I have multiple commands that I want to run one after another, like this:



PS C:\g> Write-S3Object -BucketName "user-staging" -Key "index.html" -File "index.html"
PS C:\g> Write-S3Object -BucketName 'user-staging' -KeyPrefix 'lib/pagedown' -Folder 'lib/pagedown' -SearchPattern '*.js'


How can I combine these into a macro, batch file or something similar so I can run them all just with one simple command such as:



PS C:\g> publish-files




Django: UnicodeDecodeError while trying to read template 500.html

I am trying to deploy my django application to a production environment with AWS Elastic Beanstalk. In my staging environment, where I have DEBUG=True, everything is fine, but when DEBUG=False I am getting the error UnicodeDecodeError while trying to read template /home/docker/code/django-app/templates/500.html


Here is my 500.html template:



{% extends "base.html" %}
{% load i18n %}

{% block title_html %}{% trans 'Server error (500)' %}{% endblock %}

{% block content %}
<h1>{% trans 'Server Error <em>(500)</em>' %}</h1>
<p>
{% trans "There has been an error. It's been reported to the site administrators and should be fixed shortly. Thank you for your patience." %}
</p>

{% endblock %}
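
One hedged way to narrow this down: the UnicodeDecodeError is usually raised because the template file (or base.html, which it extends) contains bytes that are not valid UTF-8, so checking the raw files directly can confirm whether that is the case here. A small Python sketch using the path from the error message:

paths = [
    "/home/docker/code/django-app/templates/500.html",
    "/home/docker/code/django-app/templates/base.html",
]
for path in paths:
    with open(path, "rb") as f:
        data = f.read()
    try:
        data.decode("utf-8")
        print(path, "decodes cleanly as UTF-8")
    except UnicodeDecodeError as exc:
        print(path, "has a non-UTF-8 byte at offset", exc.start)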




Failed to rename compiled query -Redshift

While running multiple insert queries on Redshift, I am facing the following issue:



context: 'result == 0' - failed to rename compiled query /rds/bin/padb.1.0.871/data/exec/156/1.473317247000000


Some data is inserted properly, but it fails after inserting a few records. I have googled the issue but found no solution. Thanks.





mardi 30 décembre 2014

Amazon S3 Data retrieval in URL for all the images in bucket

I have a bucket on Amazon S3 filled with a lot of images. I want to develop an API that would hotlink all the images to my website. For this I want to write code that would fetch the URLs of all the images in the bucket into a PHP array.


I couldn't find code that would dynamically fetch the URLs of all the files in the bucket without passing a file name.
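
The general approach is to list every key in the bucket and build its URL from the bucket name and key, instead of naming files one by one. Here is a hedged sketch of that idea in Python/boto (the PHP SDK's listObjects call is the equivalent); the bucket name is a placeholder:

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket("my-bucket")

urls = [
    "https://s3.amazonaws.com/my-bucket/%s" % key.name
    for key in bucket.list()
]
print(urls[:10])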


Waiting for help!





How do you structure sequential AWS service calls within lambda given all the calls are asynchronous?

I'm coming from a java background so a bit of a newbie on Javascript conventions needed for Lambda.


I've got a lambda function which is meant to do several AWS tasks in a particular order, depending on the result of the previous task.


Given that each task reports its results asynchronously, I'm wondering what the right way is to make sure they all happen in the right sequence, and that the results of one operation are available to the invocation of the next function.


It seems like I have to invoke each function in the callback of the prior function, but it seems like that will produce some kind of deep nesting, and I'm wondering if that is the proper way to do this.


For example, one of these functions requires a DynamoDB getItem, followed by a call to SNS to get an endpoint, followed by an SNS call to send a message, followed by a DynamoDB write.


What's the right way to do that in lambda javascript, accounting for all that asynchronicity?





Which one is better to use between Parse, Firebase and AWS Cognito?

I am willing to use a synchronisation service for my application, but I want to choose the best one. I want to know which one is better among all of these. My application will run on Android, iOS, Windows and the web.





AWS ElasticLoad Balancer Inbound traffic security group rules, allow only my ip? [on hold]

My objective:

To make my AWS Elastic Load Balancer reachable only by traffic from my IP.


What I have tried:



  • created a security group in EC2 security groups

  • set an inbound rule that allows all traffic from my IP [all, all, all, /32]

  • assigned this ELB the newly created security group

  • attempted to hit the ELB from an IP outside my office


The results:

All traffic, even from IPs other than mine, could still hit my ELB (and thus get through to my app servers).


What am I doing wrong? How can I block inbound traffic to my ELB (and the EC2 instances behind it)?





Cannot log in to phpMyAdmin when trying to connect to remote Amazon RDS

I have a PHP application running on Amazon Elastic Beanstalk on their Linux web server OS, and a MySQL database using Amazon RDS.


I can run a php script on the application that accesses the database and returns data, so I don't believe I have an issue with the database firewall.


I've installed phpMyAdmin on the server running the application, and have edited the config files so that I can now access the phpMyAdmin login page.


But when trying to log in (with the same credentials used in the application script that can access the database) I am given the error: #2002 Cannot log in to the MySQL server.


I have tried looking at 6 or so different suggestions from other threads, but none have helped. I have even temporarily allowed access for all traffic from any IP in the firewall security group, to rule this out as a cause.


Any ideas what this error is being caused by?





JavaFX Gradle build error, java.util.zip.ZipException: duplicate entry: META-INF/LICENSE

I'm using Gradle to build a JavaFX application. The problem I keep running into is a "duplicate entry" error for META-INF/LICENSE.


My jar includes a dependency on the Amazon AWS SDK, so I'm assuming the error is generated from that. To this point, I've found a solution as described here:


Duplicate Zip Entry after Gradle Plugin v0.13.1


which describes my exact problem, but only in the context of Android Gradle.


Specifically, the solution was:



android.packagingOptions {
pickFirst 'META-INF/LICENSE.txt'
}


Of course, such an option is noticeably absent in Gradle. My question: Is there a straightforward way to address this issue in the build code rather than having to manually look for and remove duplicate META-INF/LICENSE occurrences?


For completeness, here's the error gradle assemble generates:



Caused by: java.util.zip.ZipException: duplicate entry: META-INF/LICENSE
at com.sun.javafx.tools.packager.PackagerLib.copyFromOtherJar(PackagerLib.java:1409)
at com.sun.javafx.tools.packager.PackagerLib.jar(PackagerLib.java:1366)
at com.sun.javafx.tools.packager.PackagerLib.packageAsJar(PackagerLib.java:288)
... 54 more


And my gradle.build script:



apply from: 'javafx.plugin'

repositories {
    mavenCentral()
}

dependencies {
    compile('com.amazonaws:aws-java-sdk:1.9.13') {
        exclude group: 'commons-io', module: 'commons-io'
    }
    testCompile group: 'junit', name: 'junit', version: '4.+'
}

jar {
    from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
    manifest {
        attributes 'Main-Class': 'com.buddyware.treefrog.Main'
    }
}




$evalAsync not working with AWS dynamodb call

I have an issue where an array ($scope.wallet) is not being updated with data from my DynamoDB query. Here is the query:



function queryForCoupon(couponID) {
    var couponParams = {
        "TableName": "coupons",
        "KeyConditions": {
            "couponID": {
                "AttributeValueList": [
                    {
                        "S": couponID
                    }
                ],
                "ComparisonOperator": "EQ"
            }
        },
        "Select": "ALL_ATTRIBUTES"
    };
    db.query(couponParams, function(err, data) {
        if (err) {
            console.log(err);
        } else {
            $scope.$evalAsync(function () { $scope.wallet.push(data.Items[0]); });
        }
    });
}


When I console.log(data.Items[0]) the correct data is there, but later on when I console.log() the wallet's contents nothing is there. I've tried $scope.apply and $scope.timeout as well, with no luck.


Any thoughts?





Locust.io: Controlling the request per second parameter

I have been trying to load test my API server using Locust.io on EC2 compute-optimized instances. It provides an easy-to-configure option for setting the consecutive request wait time and the number of concurrent users. In theory, RPS = #_users / wait time. However, while testing, this rule breaks down beyond a fairly low threshold of #_users (in my experiment, around 1,200 users). The variables hatch_rate and #_of_slaves, including in a distributed test setting, had little to no effect on the RPS.



Experiment info


The test has been done on a C3.4x AWS EC2 compute node (AMI image) with 16 vCPUs, with General SSD and 30GB RAM. During the test, CPU utilization peaked at 60% max (depends on the hatch rate - which controls the concurrent processes spawned), on an average staying under 30%.


Locust.io


Setup: uses pyzmq, with each vCPU core set up as a slave. Single POST request setup with request body ~20 bytes and response body ~25 bytes. Request failure rate: < 1%, with mean response time being 6 ms.


Variables: time between consecutive requests set to 450 ms (min: 100 ms and max: 1000 ms), hatch rate at a comfy 30 per second, and RPS measured by varying #_users.
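
For reference, a hedged sketch of roughly what the locustfile for this setup looks like with the old-style Locust API from that era; the endpoint and payload are placeholders:

from locust import HttpLocust, TaskSet, task

class ApiTasks(TaskSet):
    @task
    def post_payload(self):
        # ~20-byte request body; the endpoint path is an assumption
        self.client.post("/endpoint", data="x" * 20)

class ApiUser(HttpLocust):
    task_set = ApiTasks
    min_wait = 100   # ms
    max_wait = 1000  # ms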



Locust.io throughput graph


The RPS follows the equation as predicted for up to 1000 users. Increasing #_users after that has diminishing returns, with a cap reached at roughly 1200 users. #_users here isn't the independent variable; changing the wait time affects the RPS as well. However, changing the experiment setup to a 32-core instance (c3.8x instance) or 56 cores (in a distributed setup) doesn't affect the RPS at all.


So really, what is the way to control the RPS? Is there something obvious I am missing here?





Rackspace Cloudfiles publicly accessible Origin URL?

When using Rackspace Cloudfiles, and having Akamai CDN distribution enabled, a typical download http (publicly accessible) URL for a file is something like this:



http://7c3ca13c7a3lkld0f0ae5-83a5b57cdllakjeidk39d8.r36.cf2.rackcdn.com/css/style.css


But according to the API (and tools like Cyberduck), files have an Origin URL as well. Usually something sort of like this:



https://storage3kd.ord1.clouddrive.com/v1/MossoCloudFS_ra6c36-e36d-45c9/dist/css/style.css


When trying to access the Origin URL in the browser, I get an Unauthorized message. Is it possible to make a file publicly/anonymously accessible via its Origin URL?


With Amazon AWS S3 + CloudFront, you can set it up so you can access a file either through its cached CDN URL or via its uncached S3 origin bucket URL.


Can you do the same thing with Rackspace Cloudfiles? If so, how?





Redirect Assets Using AWS S3 / CloudFront

I have a file that I want to rename, and I want to redirect incoming requests to the new file name.


Can I do this with AWS settings? I'm hoping I can setup a permanent alias in S3, so that the correct file gets copied out to CloudFront.


I found this doc on aliasing, but I'm not sure if it's what I need.


http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html





How do I add an array of RecordSets into Cloud Formation using troposphere?

I'm using the Python module troposphere to create my CloudFormation template. Most of it is complete, but I seem to be confused about how to create my DNS entries for the load balancer with the RecordSets method/function. The output for this section is supposed to look like:



"devdevopsdemoELBDNSARecord0": {
"Type": "AWS::Route53::RecordSetGroup",
"Properties": {
"HostedZoneName": "FOO.net.",
"Comment": "Alias targeted to devdevopsdemoELB ELB.",
"RecordSets": [
{
"Name": "devopsdemo.dev.FOO.net.",
"Type": "A",
"AliasTarget": {
"HostedZoneId": {
"Fn::GetAtt": [
"devdevopsdemoELB",
"CanonicalHostedZoneNameID"
]
},
"DNSName": {
"Fn::GetAtt": [
"devdevopsdemoELB",
"CanonicalHostedZoneName"
]
}
}
},
{
"Name": "devopsdemo-dev.FOO.net.",
"Type": "A",
"AliasTarget": {
"HostedZoneId": {
"Fn::GetAtt": [
"devdevopsdemoELB",
"CanonicalHostedZoneNameID"
]
},
"DNSName": {
"Fn::GetAtt": [
"devdevopsdemoELB",
"CanonicalHostedZoneName"
]
}
}
}


I've started with:



hostedzone = "FOO.net"
myRecordSet = RecordSetType("devdevopsdemoELBDNSARecord0")
myRecordSet.HostedZoneName = Join("", [hostedzone, "."])
myRecordSet.Comment = "Alias targeted to devdevopsdemoELB ELB."


But then I'm not clear on how the RecordSets values should be entered.


I suppose I could just use the straight



myRecordSet.RecordSets =


And just put the json into place, but that seems a bit like a misuse of the purpose of using troposphere in the first place.


Update: Putting in the json results in this error


AttributeError: AWS::Route53::RecordSet object does not support attribute RecordSets



myRecordSet.RecordSets = [
    {
        "Name": "devopsdemo.dev.FOO.net.",
        "Type": "A",
        "AliasTarget": {
            "HostedZoneId": {
                "Fn::GetAtt": [
                    "devdevopsdemoELB",
                    "CanonicalHostedZoneNameID"
                ]
            },
            "DNSName": {
                "Fn::GetAtt": [
                    "devdevopsdemoELB",
                    "CanonicalHostedZoneName"
                ]
            }
        }
    },
    {
        "Name": "devopsdemo-dev.FOO.net.",
        "Type": "A",
        "AliasTarget": {
            "HostedZoneId": {
                "Fn::GetAtt": [
                    "devdevopsdemoELB",
                    "CanonicalHostedZoneNameID"
                ]
            },
            "DNSName": {
                "Fn::GetAtt": [
                    "devdevopsdemoELB",
                    "CanonicalHostedZoneName"
                ]
            }
        }
    }
]
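
Since the AttributeError above comes from RecordSets being a property of a RecordSetGroup resource rather than of a standalone RecordSetType resource, a hedged troposphere sketch of the intended structure might look like the following; exact class names and constructor signatures can differ between troposphere versions:

from troposphere import GetAtt, Template
from troposphere.route53 import AliasTarget, RecordSet, RecordSetGroup

template = Template()

def alias_record(name):
    # AliasTarget takes (HostedZoneId, DNSName)
    return RecordSet(
        Name=name,
        Type="A",
        AliasTarget=AliasTarget(
            GetAtt("devdevopsdemoELB", "CanonicalHostedZoneNameID"),
            GetAtt("devdevopsdemoELB", "CanonicalHostedZoneName"),
        ),
    )

template.add_resource(RecordSetGroup(
    "devdevopsdemoELBDNSARecord0",
    HostedZoneName="FOO.net.",
    Comment="Alias targeted to devdevopsdemoELB ELB.",
    RecordSets=[
        alias_record("devopsdemo.dev.FOO.net."),
        alias_record("devopsdemo-dev.FOO.net."),
    ],
))

print(template.to_json())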




Best way to view specific private images from Amazon S3 in a gallery?

I am using Fine Uploader to upload photos to Amazon S3. I am saving the generated image name in a MySQL database on my web hosting. The image names are related to a user in the database (users in the database should only be able to see their own images). Before I used Amazon S3, I had a query that looked up the images related to a user in the database and returned an array (return $query->fetchAll();). I have the following code in my view that loops through the array and displays the images.



<?php foreach ($images as $img) { ?>
<div class="col-lg-3 col-md-4 col-xs-6 thumb">
<div class="thumbnail">
<a href="<?php echo URL . 'album/deleteimage/' . htmlspecialchars($img->image_id, ENT_QUOTES, 'UTF-8'); ?>"><button class="close" type="button" >×</button></a>
<img style="height:130px;" class="img-responsive" src="<?php if (isset($img->image_name)) echo htmlspecialchars(URL . 'img/uploads/' . $img->image_name, ENT_QUOTES, 'UTF-8'); ?>">
</div>
</div>
<?php } ?>


How do I do this with Amazon S3? If I change all my uploaded photos to public (instead of the default private) on Amazon S3, can I still use my old code and just change the src in the img tag to



s3.amazonaws.com/bucket_name/key_name


But I guess it is preferable to (still) keep the images private on Amazon S3.
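
If you do keep the images private, one common approach is to hand the browser short-lived pre-signed URLs instead of public links. A hedged sketch of the idea in Python/boto (the PHP SDK exposes an equivalent call); the bucket name is a placeholder:

import boto

conn = boto.connect_s3()

def presigned_url(image_name, expires_in=3600):
    # the URL is only valid for an hour and only for this exact key
    return conn.generate_url(expires_in, "GET",
                             bucket="my-bucket", key=image_name)

print(presigned_url("uploads/example.png"))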





Amazon EC2 Free Tier - how to check how many free resources I used?

Amazon offers 750 hours of EC2 linux and windows instances per month in "free tier". Is there any way to see my "free tier" summary?


For example: "you used 153 out of 750 hours" ?





How to match up Amazon CloudSearch document update error with document

Amazon CloudSearch (at least v2) provides errors and warnings like:



{
"status": "success",
"warnings": [{
"message": "Multi-valued field \"color\" has no value (near operation with index 949)"
},
{
"message": "Multi-valued field \"color_id\" has no value (near operation with index 949)"
}],
"adds": 1000,
"deletes": 0
}


where the only way to figure out which document had an issue is parsing the "near operation with index X" and plucking that index.


I've seen a post somewhere that seems to indicate it can return document_id as well, but I'm not sure where that comes from.


So my question has two parts:


1) How can CloudSearch return the document id in the error/warning message? 2) If #1 isn't possible, is the CloudSearch error/warning message index 0- or 1-based?





Have custom Nginx error page when All backend servers unhealty

I am using the Nginx Plus AWS EC2 image for my production environment. I use the health checks module to see the availability of the backend servers. When all the servers are unhealthy, Nginx returns a 404 error screen.


I am unable to find the HTML for that page, and I need to configure it.


Any ideas how can I find that ?


Thanks





can I reference instance data in AWS::AutoScaling::LaunchConfiguration

We are spinning up Elasticsearch in docker containers as part of an auto scaling group and using the launchconfiguration to create the configuration file.


Ideally I would like to use config.files to create the configuration, but I also need to set the instance's IP address as the publish_host for ES.


Right now I have this in my config.files:



"/data/elasticsearch.yml" : {
"content" : { "Fn::Join" : ["", [
"path.plugins: /elasticsearch/plugins\n\n",
"network.publish_host: INSTANCE_IP\n",
"node.name: INSTANCE_NAME\n",
"cluster.name: ", { "Ref" : "AWS::StackName" }, "\n\n",
"cloud.aws.region:\n",
" ", { "Ref" : "AWS::Region" }, "\n",
"discovery:\n",
" type: ec2\n",
"\n",
"discovery.ec2.tag.Jarfish: ElasticSearch\n",
"discovery.ec2.tag.Stack: ", { "Ref" : "AWS::StackId" }, "\n",
"\n",
"cloud.node.auto_attributes: true\n",
"discovery.zen.minimum_master_nodes: 2\n"
]]}


and I later use sed to replace INSTANCE_IP and INSTANCE_NAME as part of a script in the UserData block. It works, but it has me creating the config in two places.


Is there a way to reference the current instance and its IP via Fn::GetAtt or Ref as part of the files like I can for the StackId and StackName?


It seems that it's possible to get the IP address of an Instance type, but since this is a LaunchConfiguration, I don't see a way to get the InstanceID that could then be used to get the IP ...





cannot get correct syntax for pljson

I've installed pljson 1.05 in Oracle XE 11g and written a PL/SQL function to extract values from the output of Amazon AWS describe-instances.


Trying to obtain the values of top-level items such as the reservation ID works, but I am unable to get values nested within lower levels of the JSON.


e.g. this example works (using the cut-down AWS JSON inline):



DECLARE
obj JSON;
reservations JSON_LIST;
l_tempobj JSON;
instance JSON;
L_id VARCHAR2(20);
BEGIN
obj:= json('{
"Reservations": [
{
"ReservationId": "r-5a33ea1a",
"Instances": [
{
"State": {
"Name": "stopped"
},
"InstanceId": "i-7e02503e"
}
]
},
{
"ReservationId": "r-e5930ea5",
"Instances": [
{
"State": {
"Name": "running"
},
"InstanceId": "i-77859692"
}
]
}
]
}');
reservations := json_list(obj.get('Reservations'));
l_tempobj := json(reservations);
DBMS_OUTPUT.PUT_LINE('============');
FOR i IN 1 .. l_tempobj.count
LOOP
DBMS_OUTPUT.PUT_LINE('------------');
instance := json(l_tempobj.get(i));
instance.print;
l_id := json_ext.get_string(instance, 'ReservationId');
DBMS_OUTPUT.PUT_LINE(i||'] Instance:'||l_id);
END LOOP;
END;


returning



============
------------
{
"ReservationId" : "r-5a33ea1a",
"Instances" : [{
"State" : {
"Name" : "stopped"
},
"InstanceId" : "i-7e02503e"
}]
}
1] Instance:r-5a33ea1a
------------
{
"ReservationId" : "r-e5930ea5",
"Instances" : [{
"State" : {
"Name" : "running"
},
"InstanceId" : "i-77859692"
}]
}
2] Instance:r-e5930ea5


but this example to return the instance ID doesnt



DECLARE
l_clob CLOB;
obj JSON;
reservations JSON_LIST;
l_tempobj JSON;
instance JSON;
L_id VARCHAR2(20);
BEGIN
obj:= json('{
"Reservations": [
{
"ReservationId": "r-5a33ea1a",
"Instances": [
{
"State": {
"Name": "stopped"
},
"InstanceId": "i-7e02503e"
}
]
},
{
"ReservationId": "r-e5930ea5",
"Instances": [
{
"State": {
"Name": "running"
},
"InstanceId": "i-77859692"
}
]
}
]
}');
reservations := json_list(obj.get('Reservations'));
l_tempobj := json(reservations);
DBMS_OUTPUT.PUT_LINE('============');
FOR i IN 1 .. l_tempobj.count
LOOP
DBMS_OUTPUT.PUT_LINE('------------');
instance := json(l_tempobj.get(i));
instance.print;
l_id := json_ext.get_string(instance, 'Instances.InstanceId');
DBMS_OUTPUT.PUT_LINE(i||'] Instance:'||l_id);
END LOOP;
END;


returning



============
------------
{
"ReservationId" : "r-5a33ea1a",
"Instances" : [{
"State" : {
"Name" : "stopped"
},
"InstanceId" : "i-7e02503e"
}]
}
1] Instance:
------------
{
"ReservationId" : "r-e5930ea5",
"Instances" : [{
"State" : {
"Name" : "running"
},
"InstanceId" : "i-77859692"
}]
}
2] Instance:


The only change from the first example to the second is replacing 'ReservationId' with 'Instances.InstanceId', but in the second example, although the function succeeds and the instance.print statement outputs the full JSON, this code doesn't populate the instance ID into l_id, so it is not shown in the DBMS_OUTPUT.


I also get the same result (i.e. no value in L_id) if I just use 'InstanceId'.


My assumption, from reading the examples, was that the JSON path syntax should allow me to select the values using dot notation for nested values, but it doesn't seem to work. I also tried extracting Instances into a temp variable using JSON_ALL and then accessing it individually, but I also wasn't able to get a working example.


Any help appreciated. Many Thanks.





AWS Lambda making video thumbnails

I want to make thumbnails from videos uploaded to S3; I know how to do it with Node.js + ffmpeg.


According to this forum post I can add libraries: https://forums.aws.amazon.com/message.jspa?messageID=583910


ImageMagick is the only external library that is currently provided by default, but you can include any additional dependencies in the zip file you provide when you create a Lambda function. Note that if this is a native library or executable, you will need to ensure that it runs on Amazon Linux.


But how can I put a static ffmpeg binary on AWS Lambda?


And how can I call this static binary (ffmpeg) from Node.js with Amazon Lambda?


I'm a newbie with Amazon AWS and Linux.


Can anyone help me?





boto.sqs connect to non-aws endpoint

I currently need to connect to a fake_sqs server for dev purposes, but I can't find an easy way to specify the endpoint for the boto.sqs connection. Currently in Java and Node.js there are ways to specify the queue endpoint, and by passing something like 'localhost:someport' I can connect to my own SQS-like instance. I've tried the following with boto:



fake_region = regioninfo.SQSRegionInfo(name=name, endpoint=endpoint)
conn = fake_region.connect(aws_access_key_id="TEST", aws_secret_access_key="TEST", port=9324, is_secure=False);


but it fails to retrieve the queue object. Has anyone managed to connect to their own SQS instance?
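
A hedged variation on the same idea, constructing the connection directly and passing the fake region to SQSConnection; the host and port are whatever fake_sqs is actually listening on:

from boto.sqs import regioninfo
from boto.sqs.connection import SQSConnection

fake_region = regioninfo.SQSRegionInfo(name="fake-sqs", endpoint="localhost")

conn = SQSConnection(
    aws_access_key_id="TEST",
    aws_secret_access_key="TEST",
    region=fake_region,
    port=9324,
    is_secure=False,
)

queue = conn.create_queue("test-queue")
print(conn.get_all_queues())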





send responsive email with aws ses

I've developed a responsive email and used Mailchimp + Litmus to test it. When it worked as expected, I sent it to the engineers, but they're using AWS SES in combination with a Velocity template to send the emails, and somehow the result does not respect the responsive layout.


This is what my layout looks like:



<style type="text/css">
* { margin: 0px; padding: 0px; }

body { background: #e7e6e5; }
.wrapper { padding: 100px 0 50px 0; width: 100%; background: #e7e6e5; }
.centering { position: relative; margin: 0 auto; width: 500px; overflow: hidden; }
.inner { border-radius: 4px; }
.book { padding: 25px 0px; background: #8dc763; }
.content { padding: 45px 50px 40px 50px; text-align: left; }
.content p { padding: 0px; margin: 0px; font-family: Arial, Helvetica, sans-serif; color: #424444; font-size: 17px; line-height: 24px;}
.order, .order1 { border-bottom: #dcdcde solid 2px; }
.order p { padding: 0px; margin: 0px; font-size: 14px; color: #a1a1a1; line-height: 18px; }
.order td { padding: 14px 0; }
.order1 td { padding: 24px 0; }
.order1 p { padding: 0px; margin: 0px; font-family: Arial, Helvetica, sans-serif; font-size: 16px; line-height: 20px; color: #a1a1a1; }
.order1 p strong { color: #000; }
.total { padding: 25px 0px 20px 0px; }
.total p strong { color: #000; }
.button { padding: 25px 0px 10px 0px; }
.footer { padding: 28px 0px 28px 0px; }
.footer p { padding: 0px; margin: 0px; font-family: Arial, Helvetica, sans-serif; font-size: 14px; line-height: 18px; color: #7d7d7d; text-align: center; }
.footer a { color: #7d7d7d; text-decoration: none; }
.intro_text { padding-bottom: 24px; }

@media only screen and (max-width: 480px){
.centering { width: 100% !important; }
.content { padding: 45px 20px 40px 20px !important; }
.wrapper { padding: 0px 0px 20px 0 !important; }
.content p { font-size: 14px !important; }
.content { padding: 45px 20px 40px 20px !important; }
.content .order1 p { font-size: 12px !important; }
}
</style>
</head>
<body leftmargin="0" marginwidth="0" topmargin="0" marginheight="0" offset="0">


etc..


Is there a solution to fix the responsiveness?





How to Route Elastic Beanstalk traffic through a Single IP for External API

I have an application deployed on Elastic Beanstalk and it needs to make calls to an external API server which can reply to only a single IP. Auto Scaling is in the picture, so I need to route all my outbound traffic through a single server so that the external API thinks the requests are coming from a single IP, maybe by using NAT, a proxy server, or a VPN.





Q: AWS Beanstalk default domain wildcard cname prefix

I would like to have wildcard domains for an auto-scaling application running on AWS Beanstalk.


I cannot use a custom domain with Route53 or alternatives, so I am forced to use the default generated domain format, eg: "environment-name.elasticbeanstalk.com".


I would like to have something like this "*.environment-name.elasticbeanstalk.com" configured at the load balancer level for my setup.


Does anyone know if there is a way to specify a wildcard domain prefix for the beanstalk applications running with the default domain?





lundi 29 décembre 2014

How to connect and get attributes for ec2 instance

How do I connect to a specific instance (i-123456) using boto and perform some operation on it? I know we can get all reservations and get the instances from those, but I would like to know if there is a way to do it for that particular instance. Any help is appreciated.
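
A hedged sketch with boto 2: you can ask for just the one instance ID instead of walking every reservation. The region and instance ID are placeholders:

import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# returns Instance objects directly, filtered to the given ID
instances = conn.get_only_instances(instance_ids=["i-123456"])
for instance in instances:
    print(instance.id, instance.state, instance.ip_address, instance.tags)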





How can we change the number of products per page when using Vacuum gem?

I'm using the Vacuum gem and want to display 12 products per page, but I always get 10 products.


I also tried to use batch code with 'item_search', but it didn't work.


Can anyone please help me?





AWS Kinesis putRecord example for iOS [on hold]

Can anyone give a good example of how to upload data to kinesis with putRecord on iOS?





How to upload image to AWS S3 in PHP from memory?

So I currently have an upload system working using AWS S3 to upload images.


Here's the code:



//Upload image to S3
$s3 = Aws\S3\S3Client::factory(array('key' => /*mykey*/, 'secret' => /*myskey*/,));

try {
    $s3->putObject(array(
        'Bucket' => "bucketname",
        'Key'    => $file_name,
        'Body'   => fopen(/*filelocation*/, 'r+')
    ));
} catch(Exception $e) {
    //Error
}


This image can be a jpeg or png, and I want to convert it to a png before uploading. To do this I use:



//This is simplified, please don't warn about transparency, etc.
$image = imagecreatetruecolor($width, $height);
imagecopyresampled($image, $source, 0, 0, 0, 0, etc.);


So I have this $image object in memory.


I want to upload this to S3 without having to save it locally, upload it and then delete it locally; this extra step seems pointless. But I can't work out how to upload this $image object directly.


Any ideas how this would be done? I'd assumed fopen() would create an object of a similar type to imagecreatetruecolor(), but I've tried passing the $image object in and it doesn't work - whereas it does if I open an image locally with fopen().





What is the best way to determine how much time for an EMR job is spent on Map vs Reduce Tasks?

I am running a custom jar hadoop job in Amazon's AWS EMR, and I want to gather data on how much time is spent running all Map tasks vs time spent running Reduce tasks. Is there a way in the framework to mine this data that I have not found? If not does anyone have any suggestions on the best way to generate this data?


Thank you,





Not getting a log from an except statement

I'm trying to catch an error in a log so I can see the data being passed. In the code below, the specific area I should be seeing the error come from is in the batch_put method where the placeholder self.logger.exception message is.


The self.logger.debug message in the adjoining try block works fine (as long as I set the __init__.py's level to DEBUG in my staging environment) and logs to the log as expected. But no matter what I've tried, I can't seem to get any logging to happen in that except block.


The bulk of the code follows with config info:


main.py



import logging
[...]
logger = logging.getLogger()

#Flask routes making calls to module


module/__init__.py



import logging
[...]
if module.ENVIRONMENT is "production":
    log_config(level=logging.WARNING)
else:
    log_config(level=logging.INFO)


module/baselib.py



import logging
import logstash_formatter
[...]
def log_config (level=logging.WARNING):
    """
    Logging facility.
    """

    logger = logging.getLogger()

    log_handler = logging.handlers.RotatingFileHandler("/var/log/module/main/main.log",
                                                       mode='a',
                                                       maxBytes=104857600,
                                                       backupCount=10)
    formatter = logstash_formatter.LogstashFormatterV1()

    log_handler.setFormatter(formatter)
    logger.setLevel(level)
    logger.addHandler(log_handler)


module/amazonlib.py



import logging
# Boto import stuff
[...]
class ddb_api(module.base):
    def __init__ (self, access_key=None, secret_key=None, region=None):
        """
        Initialize connection to DynamoDB v2 layer 1 (low-level API).
        """

        # if no region is specified, default to US East 1.
        if region is None:
            region = "us-east-1"

        self.handle = boto.dynamodb2.connect_to_region \
            (
                region,
                aws_access_key_id=access_key,
                aws_secret_access_key=secret_key
            )

        self.logger = logging.getLogger()


    def batch_put (self, table, *args):
        """
        Takes a list of dicts and puts their data into the given table.
        Ex: module.ddb.batch_put("test_table", *batch)

        @type table: str
        @param table: Name of the DDB table.
        @type args: list
        @param args: A list of dictionaries containing DDB data to be written.
        """

        attempts = 0
        table = self.get_table(table)

        while 1:
            try:
                with table.batch_write() as batch:
                    for count, item in enumerate(args):
                        for i in module.listify(item):
                            try:
                                self.logger.debug("AMAZONLIB BATCH_PUT DATA: %s" % str(i))
                                batch.put_item(data=i)
                            except:
                                self.logger.exception("AMAZONLIB BATCH_PUT FAILURE: %s" % str(i))

                                if module.sentry:
                                    module.sentry.captureException()

                return True

            except boto.dynamodb2.exceptions.ProvisionedThroughputExceededException:
                if attempts <= module.DDB_MAX_ATTEMPTS:
                    attempts += 1
                else:
                    if module.sentry:
                        module.sentry.captureException()
                    return False

            except:
                if module.sentry:
                    module.sentry.captureException()
                return False




How to convert public to private ip?

How can I convert the public IP of an Amazon instance to its private IP, given that I am on the same subnet as the host I am resolving the IP address for?





Windows Server 2012R2 and teamviewer

I would like to ask for some help on how to set up TeamViewer correctly to achieve the goal below. I have a server hosted in AWS and I would like to administer it via TeamViewer. RDP is not an option because I mainly spend my time under very strict network rules; only port 80 can go through the firewall.


I installed TeamViewer on the server, but when the server has started and is online, the client on my desktop shows the server as offline. Once I log in to the server via RDP it comes online.


I googled around but found no solution for this, or maybe I am trying to use the wrong tool to achieve my goal.


Is there any way to solve this case?


Thanks in advance for any help!





Is Fineuploader with File Chunking more expensive on Amazon S3?

Fineuploader http://fineuploader.com/ has the possibility to use File Chunking



File Chunking / Partitioning


Splitting a file into smaller pieces allows for a more efficient overall upload, and powers some Fine Uploader features such as pausing, and resuming uploads. Fine Uploader can also upload multiple chunks for the same file concurrently.



Is Fine Uploader with file chunking more expensive on Amazon S3, given that Amazon charges you for each request to S3? If Fine Uploader splits a file into smaller pieces, that becomes more requests to Amazon, and therefore more expensive. Is that correct?





SignatureDoesNotMatch error when using SES with wp ses plugin

I have a python program sending emails with success using my global access key and secret key.


I tried to use these same credentials in the settings interface of the WordPress SES plugin. Here are the interface errors:


[screenshot of the wp-ses settings errors]


In the Apache error log I get the error:



PHP Warning: SimpleEmailService::sendEmail(): Sender - SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\nRequest Id: b5fb7d11-8f84-11e4-a5cf-f71ae5f59c02\n in /var/www/html/wp-content/plugins/wp-ses/ses.class.0.8.4.php on line 383, referer: http://example.com/wp-admin/options-general.php?page=wp-ses/wp-ses.php






Migrating DNS Service for an Existing Domain to Amazon Route 53

I have read all the information about my question in this article (Amazon guide). I have a .net domain bought through Amazon working correctly, but I need to migrate a .ly domain so it routes my requests to my Elastic Beanstalk environment (like the .net domain does).


I have requested the zone file as the article says:



If you can get a zone file from your current DNS service provider, you can import your existing DNS configuration into your Amazon Route 53 hosted zone, which greatly simplifies the process of creating resource record sets. Try asking customer support for your current DNS service provider how to get a zone file or a records list.



This is my zone file (domain is not the real name, it is just an example):



; Zone file for domain.ly
$TTL 14400
domain.ly. 86400 IN SOA dns1.onlydomains.net. support.libyaonline.com. (
2014120106 ;Serial Number
86400 ;refresh
7200 ;retry
3600000 ;expire
86400 ;minimum
)
domain.ly. 86400 IN NS dns1.onlydomains.net.
domain.ly. 86400 IN NS dns3.onlydomains.net.
domain.ly. 14400 IN A 96.127.***.54
localhost 14400 IN A 127.0.0.1
www 14400 IN CNAME domain-env.elasticbeanstalk.com.


I have imported it and I have read the whole article, but I don't understand what I need to do in the next steps.


I don't know what to do with the NS and SOA records... Do I need to contact the .ly domain support again?


Some Important Details:



  • The IP 96.127.###.54 is not the IP of my Elastic Beanstalk environment; I have another IP (like this: 54.77.###.42)

  • The CNAME domain-env.elasticbeanstalk.com is not the correct one. I changed it to newelastic-domain.elasticbeanstalk.com (and it's working fine with the .net domain)


This is the last step, that I don't understand... (Updating Your Registrar's Name Servers)


Thank you so much for the help!!





Amazon Web Services - PHP SDK - S3 putBucketAcl() returns Bad Request

I've been trying for HOURS to make s3->putBucketAcl() work, without any success.


I'm having various exception errors, and I'm currently blocked on this one:


400 - Bad Request - This request does not support content


Here are params I'm using:



Array
(
[ACL] => private
[Owner] => Array
(
[Id] => 086caca9fb91331dbedb9abbed21c0db5a940138c1ac6f6297f042550ba553b5
)

[Grants] => Array
(
[0] => Array
(
[Grantee] => Array
(
[ID] => 476105dd3f400339485f36296bde3563692d134e4c9a507e9ce63f114fcb2e14fdedff50140d41993f45137d0d049352
[Type] => CanonicalUser
)

[Permission] => FULL_CONTROL
)

)

[Bucket] => kaemo
)


By the way, what is the difference between the 'ACL' => 'string' parameter that is asked for and the grantee permissions?


I just want to update my bucket ACLs (add new ACL access).


Thanks for your help... AWS API => nights of headache !!!





AWS giant data transfer

I have had a Linux and a Windows instance on Amazon EC2 for around 7-8 months. Every month my bill was $0.01-$0.06. But two weeks ago I received an abuse report. I looked at my billing and it was around $20! 98% of it was data transfer. I terminated my instance and changed the Elastic IP address. Everything seemed good until today, when my bill was $31.95! In the picture you can see a giant monthly data transfer out beyond the global free tier. It's around 3.5 TB. Billing details


Look down for additional info: My configuration


Was it a DDoS? Will changing the IP help? Can I set up data transfer limits?


P.S. Sorry for my english.





Pig filter matches not working with pig and EMR

I would like to filter out all the strings that contain "internal", but the data is not filtered. In my Pig script I have:



preload = load '$INPUT' as (textline:chararray);
filterdata = FILTER preload BY SIZE(textline) > 100;
filterInternal = FILTER filterdata by NOT(textline MATCHES '.*internal.*');


Using Pig 0.12.0 on AWS





install redis on aws micro instance

I need to install Redis in the Amazon cloud. I need it as part of my npm module kue (deployment). Can anyone link me to a step-by-step tutorial or explain how to do it, considering that my Linux and administration skills range from not good to bad?





How can I list all my AWS EC2 instances using NodeJS (in Lambda)?

I'm on AWS and using the NodeJS AWS SDK. I'm trying to build a Lambda function, and inside it I want to get a list of all my EC2 instances, but I just can't seem to get it working. Can anyone spot what I'm doing wrong?


Here is my lambda function code:



var AWS = require('aws-sdk');
AWS.config.region = 'us-west-1';

exports.handler = function(event, context) {
    console.log("\n\nLoading handler\n\n");
    var ec2 = new AWS.EC2();
    ec2.describeInstances(function(err, data) {
        console.log("\nIn describe instances:\n");
        if (err) console.log(err, err.stack); // an error occurred
        else console.log("\n\n" + data + "\n\n"); // successful response
    });
    context.done(null, 'Function Finished!');
};


And this is my policy (I think it's correct?)



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:*"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:*"
            ],
            "Resource": "arn:aws:ec2:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}


And if I do a console.log on 'ec2' I get:



{ config:
{ credentials:
{ expired: false,
expireTime: null,
accessKeyId: 'XXXXXXXXXXXXXXXXXX',
sessionToken: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
envPrefix: 'AWS' },
credentialProvider: { providers: [Object] },
region: 'us-west-1',
logger: null,
apiVersions: {},
apiVersion: null,
endpoint: 'ec2.us-west-1.amazonaws.com',
httpOptions: { timeout: 120000 },
maxRetries: undefined,
maxRedirects: 10,
paramValidation: true,
sslEnabled: true,
s3ForcePathStyle: false,
s3BucketEndpoint: false,
computeChecksums: true,
convertResponseTypes: true,
dynamoDbCrc32: true,
systemClockOffset: 0,
signatureVersion: 'v4' },
isGlobalEndpoint: false,
endpoint:
{ protocol: 'https:',
host: 'ec2.us-west-1.amazonaws.com',
port: 443,
hostname: 'ec2.us-west-1.amazonaws.com',
pathname: '/',
path: '/',
href: 'https://ec2.us-west-1.amazonaws.com/' } }




How to get the price value from the Amazon Product Advertising API in PHP

This is the first time I'm using the Amazon Product Advertising API to retrieve product price information. I got the response below:



["Item"]=>
object(stdClass)#15 (3) {
  ["ASIN"]=>
  string(10) "B0017TZY5Y"
  ["OfferSummary"]=>
  object(stdClass)#16 (6) {
    ["LowestUsedPrice"]=>
    object(stdClass)#17 (3) {
      ["Amount"]=>
      int(820)
      ["CurrencyCode"]=>
      string(3) "EUR"
      ["FormattedPrice"]=>
      string(8) "EUR 8,20"
    }
    ["LowestCollectiblePrice"]=>
    object(stdClass)#18 (3) {
      ["Amount"]=>
      int(3490)
      ["CurrencyCode"]=>
      string(3) "EUR"
      ["FormattedPrice"]=>
      string(9) "EUR 34,90"
    }
    ["TotalNew"]=>
    string(1) "0"
    ["TotalUsed"]=>
    string(1) "6"
    ["TotalCollectible"]=>
    string(1) "1"
    ["TotalRefurbished"]=>
    string(1) "0"
  }
  ["Offers"]=>
  object(stdClass)#19 (3) {
    ["TotalOffers"]=>
    int(0)
    ["TotalOfferPages"]=>
    int(0)
    ["MoreOffersUrl"]=>
    string(1) "0"
  }
}


I would like to know how I can retrieve the LowestUsedPrice and LowestCollectiblePrice values from the response using PHP.
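
Since the response is a tree of stdClass objects, the values can be read with plain object syntax. A small sketch, assuming a hypothetical variable $item that holds the ["Item"] object dumped above:


<?php
// $item is the stdClass shown in the var_dump above (hypothetical variable name)
$lowestUsed        = $item->OfferSummary->LowestUsedPrice;
$lowestCollectible = $item->OfferSummary->LowestCollectiblePrice;

echo $lowestUsed->FormattedPrice;                                    // "EUR 8,20"
echo ($lowestUsed->Amount / 100) . ' ' . $lowestUsed->CurrencyCode;  // "8.2 EUR" (Amount is in cents)
echo $lowestCollectible->FormattedPrice;                             // "EUR 34,90"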





AWS.S3 in some point doesn't execute callback

We use node.js and S3.


I have the following code to check if a file exists:



var s3 = new AWS.S3();

S3.prototype.search = function (baseDirectory, url, size, callBack) {
    console.log("we are in the correct function");
    s3.headObject({"Key" : url.substring(1, url.length), "Bucket" : S3_BUCKET}, function (err, data) {
        console.log("Callback is done");
    });
};


Everything works OK, but at some point I noticed that the callback does not return. Meaning, I only get:



console.log("we are in the correct function");


But "Callback is done" isn't printed.


What can be the reason for this?
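
A sketch of the same function with the error path handled and the callBack parameter actually invoked - if the object does not exist, headObject normally reports that through err (a 404/NotFound) rather than never calling back, so logging err usually makes the "missing" case visible. The names are the ones from the snippet above; the callBack signature is a guess:


S3.prototype.search = function (baseDirectory, url, size, callBack) {
    console.log("we are in the correct function");

    var params = {
        Bucket: S3_BUCKET,
        Key: url.substring(1, url.length)
    };

    s3.headObject(params, function (err, data) {
        console.log("Callback is done");
        if (err) {
            // 404 / NotFound simply means the key is not there
            console.log("headObject error:", err.code, err.statusCode);
            return callBack(err, false);   // assumed callback shape
        }
        callBack(null, true, data);
    });
};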





dimanche 28 décembre 2014

AWS was not able to validate the provided access credentials

I have been trying to create a Security Group using the AWS SDK, but somehow it fails to authenticate. I have given the specific Access Key and Secret Key administrative rights, yet it still fails to validate. On the other hand, I tried the same credentials with the AWS S3 example and it executed successfully.


I get the following error while creating the security group:



com.amazonaws.AmazonServiceException: AWS was not able to validate the provided access credentials (Service: AmazonEC2; Status Code: 401; Error Code: AuthFailure; Request ID: 1584a035-9a88-4dc7-b5e2-a8b7bde6f43c)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1077)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.ec2.AmazonEC2Client.invoke(AmazonEC2Client.java:9393)
at com.amazonaws.services.ec2.AmazonEC2Client.createSecurityGroup(AmazonEC2Client.java:1146)
at com.sunil.demo.ec2.SetupEC2.createSecurityGroup(SetupEC2.java:84)
at com.sunil.demo.ec2.SetupEC2.main(SetupEC2.java:25)


Here is the Java Code:



public class SetupEC2 {
AWSCredentials credentials = null;
AmazonEC2Client amazonEC2Client ;

public static void main(String[] args) {
SetupEC2 setupEC2Instance = new SetupEC2();
setupEC2Instance.init();
setupEC2Instance.createSecurityGroup();
}

public void init(){
// Intialize AWS Credentials
try {
credentials = new BasicAWSCredentials("XXXXXXXX", "XXXXXXXXX");
} catch (Exception e) {
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. " +
"Please make sure that your credentials file is at the correct " +
"location (/home/sunil/.aws/credentials), and is in valid format.",
e);
}

// Initialize EC2 instance
try {
amazonEC2Client = new AmazonEC2Client(credentials);
amazonEC2Client.setEndpoint("ec2.ap-southeast-1.amazonaws.com");
amazonEC2Client.setRegion(Region.getRegion(Regions.AP_SOUTHEAST_1));
} catch (Exception e) {
e.printStackTrace();
}
}

public boolean createSecurityGroup(){
boolean securityGroupCreated = false;
String groupName = "sgec2securitygroup";
String sshIpRange = "0.0.0.0/0";
String sshprotocol = "tcp";
int sshFromPort = 22;
int sshToPort =22;

String httpIpRange = "0.0.0.0/0";
String httpProtocol = "tcp";
int httpFromPort = 80;
int httpToPort = 80;

String httpsIpRange = "0.0.0.0/0";
String httpsProtocol = "tcp";
int httpsFromPort = 443;
int httpsToProtocol = 443;

try {
CreateSecurityGroupRequest createSecurityGroupRequest = new CreateSecurityGroupRequest();
createSecurityGroupRequest.withGroupName(groupName).withDescription("Created from AWS SDK Security Group");
createSecurityGroupRequest.setRequestCredentials(credentials);

CreateSecurityGroupResult csgr = amazonEC2Client.createSecurityGroup(createSecurityGroupRequest);

String groupid = csgr.getGroupId();
System.out.println("Security Group Id : " + groupid);

System.out.println("Create Security Group Permission");
Collection<IpPermission> ips = new ArrayList<IpPermission>();
// Permission for SSH only to your ip
IpPermission ipssh = new IpPermission();
ipssh.withIpRanges(sshIpRange).withIpProtocol(sshprotocol).withFromPort(sshFromPort).withToPort(sshToPort);
ips.add(ipssh);

// Permission for HTTP, any one can access
IpPermission iphttp = new IpPermission();
iphttp.withIpRanges(httpIpRange).withIpProtocol(httpProtocol).withFromPort(httpFromPort).withToPort(httpToPort);
ips.add(iphttp);

//Permission for HTTPS, any one can accesss
IpPermission iphttps = new IpPermission();
iphttps.withIpRanges(httpsIpRange).withIpProtocol(httpsProtocol).withFromPort(httpsFromPort).withToPort(httpsToProtocol);
ips.add(iphttps);

System.out.println("Attach Owner to security group");
// Register this security group with owner
AuthorizeSecurityGroupIngressRequest authorizeSecurityGroupIngressRequest = new AuthorizeSecurityGroupIngressRequest();
authorizeSecurityGroupIngressRequest.withGroupName(groupName).withIpPermissions(ips);
amazonEC2Client.authorizeSecurityGroupIngress(authorizeSecurityGroupIngressRequest);
securityGroupCreated = true;
} catch (Exception e) {
// TODO: handle exception
e.printStackTrace();
securityGroupCreated = false;
}
System.out.println("securityGroupCreated: " + securityGroupCreated);
return securityGroupCreated;
}
}
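
If the hard-coded keys themselves are suspect (stray whitespace, a stale key pair, and so on), one way to rule that out is to let the SDK read the same shared credentials file the working S3 example used - the file the error message points at. A rough fragment, not a full class, assuming the standard [default] profile in ~/.aws/credentials:


import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2Client;

// Reads the [default] profile from ~/.aws/credentials instead of hard-coded keys
AmazonEC2Client ec2 = new AmazonEC2Client(new ProfileCredentialsProvider());
ec2.setRegion(Region.getRegion(Regions.AP_SOUTHEAST_1));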




AWS Elastic Beanstalk + uWSGI

I am trying to deploy a Python 3.4.2, Django 1.7 application using a Docker container with AWS Elastic Beanstalk. I am running uWSGI in the container to serve my application's dynamic content. Static content should be served from Amazon S3.


Everything seems (to me, anyway) to be configured correctly, and my error logs don't report anything, but when I upload my application the Health Check returns RED, and when I navigate to my-app.elasticbeanstalk.com the page is entirely white. Running cURL on the URL returns nothing as well.


Elastic Beanstalk's /var/log/nginx/access.log contains the following lines over and over again



172.31.22.11 - - [29/Dec/2014:07:01:17 +0000] "GET //vitrufitness-staging.elasticbeanstalk.com HTTP/1.1" 404 3409 "-" "ELB-HealthChecker/1.0"
172.31.39.129 - - [29/Dec/2014:07:01:17 +0000] "GET //vitrufitness-staging.elasticbeanstalk.com HTTP/1.1" 404 3409 "-" "ELB-HealthChecker/1.0"
172.31.39.129 - - [29/Dec/2014:07:01:28 +0000] "GET //vitrufitness-staging.elasticbeanstalk.com HTTP/1.1" 404 3409 "-" "ELB-HealthChecker/1.0"
172.31.22.11 - - [29/Dec/2014:07:01:38 +0000] "GET //vitrufitness-staging.elasticbeanstalk.com HTTP/1.1" 404 3409 "-" "ELB-HealthChecker/1.0"


I wonder if the problem is due to the ports that I have exposed in my Dockerfile, or the way that I have uWSGI configured.


Here is my uwsgi.ini file (I am running the :docker section). I am not sure if I need to use http:, socket: or http-socket: to work with Elastic Beanstalk. I think Elastic Beanstalk uses an Nginx reverse proxy to serve the container.



[uwsgi]
ini = :base

[docker]
ini = :base
logto = /var/logs/uwsgi.log
http = :8000
socket = :8080
master = true
processes = 4

[base]
chdir = %dapp_dir/
pythonpath = %dapp_dir/
module=my_app.wsgi:application
die-on-term = true
# Set settings module.
env = DJANGO_SETTINGS_MODULE=my_app.settings
chmod-socket=664
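
For what it's worth, with the single-container Docker platform Elastic Beanstalk's nginx forwards plain HTTP to the one ContainerPort declared in Dockerrun.aws.json, so uWSGI has to speak HTTP on that port. A sketch of the [docker] section using http-socket (which serves HTTP directly, without the extra proxy process that http = spawns) on the 8000 port used below:


[docker]
ini = :base
logto = /var/logs/uwsgi.log
# Speak HTTP directly on the port Elastic Beanstalk forwards to (ContainerPort 8000)
http-socket = :8000
master = true
processes = 4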


Here is my Dockerfile



FROM ubuntu:14.04

# Get most recent apt-get
RUN apt-get -y update

# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip

RUN pip install uwsgi
RUN apt-get -y install libxml2-dev libxslt1-dev

RUN apt-get install -y python-software-properties uwsgi-plugin-python3

# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin

# Install node.js
RUN apt-get install -y nodejs npm

# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists

# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev

ADD . /home/docker/code

# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/

RUN pip install -r /home/docker/code/app_dir/requirements.txt

# Create directory for logs
RUN mkdir -p /var/logs
RUN mkdir /static
RUN touch /home/docker/code/app_dir/logfile

# Set environment
ENV PYTHONPATH $PYTHONPATH:/home/docker/code/app_dir
ENV DJANGO_SETTINGS_MODULE my_app.settings

EXPOSE 8000 8080

CMD ["/home/docker/code/start.sh"]


And since I am pulling my docker image from DockerHub, rather than having EB build from the Dockerfile itself, here is my Dockerrun.aws.json file that I need:



{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my_bucket",
    "Key": "docker/dockercfg"
  },
  "Image": {
    "Name": "me/my_image:0.0.1",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ],
  "Logging": "/var/eb_log"
}




how to delete archive files from amazon glacier by using .net

In which module of .NET do I have to put the code to delete archive files from a vault? The image showed what I have done till now; right now I want to know in which module the code should go. I have already read the support documentation provided.





using System;
using Amazon.Glacier;
using Amazon.Glacier.Transfer;
using Amazon.Runtime;

namespace glacier.amazon.com.docsamples
{
    class ArchiveDeleteHighLevel
    {
        static string vaultName = "examplevault";
        static string archiveId = "*** Provide archive ID ***";

        public static void Main(string[] args)
        {
            try
            {
                var manager = new ArchiveTransferManager(Amazon.RegionEndpoint.USEast1);
                manager.DeleteArchive(vaultName, archiveId);
                Console.ReadKey();
            }
            catch (AmazonGlacierException e) { Console.WriteLine(e.Message); }
            catch (AmazonServiceException e) { Console.WriteLine(e.Message); }
            catch (Exception e) { Console.WriteLine(e.Message); }
            Console.WriteLine("To continue, press Enter");
            Console.ReadKey();
        }
    }
}






Amazon RDS Staging vs Production DB

What is the best method for setting up distinct Production and Staging databases in Amazon RDS? Is it advisable to spin up one RDS instance for Production and another for Staging and keep them entirely separate, or does it work just as well to just use one RDS instance with a Production database and a Staging database?





acknowledge order report amazon mws

I am using the "GetReportList" API with the report type "_GET_ORDERS_DATA" to pull order reports from Amazon, but I want to pull only new orders. How can I use the "Acknowledged" field to make sure that I pull only new orders (ones that were not previously pulled)? I observed that the "Acknowledged" field is true by default. Please let me know if there is a way to pull new orders only (I am trying to avoid using a timestamp here).


Thanks
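
From the Reports API documentation, GetReportList accepts an Acknowledged parameter, and UpdateReportAcknowledgements lets you mark a report as processed after you pull it, so only unacknowledged (new) reports come back next time. A rough sketch of the two request parameter sets, values being placeholders:


# 1) Fetch only reports you have not yet acknowledged
GetReportList
    ReportTypeList.Type.1 = _GET_ORDERS_DATA
    Acknowledged          = false

# 2) After processing a report, mark it as acknowledged
UpdateReportAcknowledgements
    ReportIdList.Id.1 = 1234567890     (placeholder report id)
    Acknowledged      = true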





Django + Docker + Elastic Beanstalk: WARNING [django.request:143] Not Found:

I am trying to deploy my Django application in a Docker container to Elastic Beanstalk. The Docker container runs fine locally, but I am having problems with the upload.


I am serving my application with uWSGI for dynamic content and Amazon S3 for static files. uWSGI is running on port 8000, which I have exposed in my Dockerfile. I want Elastic Beanstalk to pull my Docker image from Docker Hub. Here is my Dockerrun.aws.json file:



{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": "docker/dockercfg"
  },
  "Image": {
    "Name": "me/my-repo",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ],
  "Logging": "/var/eb_log"
}


And when I check my uwsgi.log file inside the container I get an endless stream of:



[pid: 22|app: 0|req: 112/262] 172.17.42.1 () {32 vars in 455 bytes} [Sun Dec 28 23:08:00 2014] GET //my-app.elasticbeanstalk.com => generated 3397 bytes in 123 msecs (HTTP/1.1 404) 4 headers in 133 bytes (1 switches on core 0)
[28/Dec/2014 23:08:00] WARNING [django.request:143] Not Found: /my-app.elasticbeanstalk.com


I'm not really sure where these errors are coming from... anybody have any input?
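
Just an observation from the log lines above: the health checker is requesting the path //my-app.elasticbeanstalk.com, which looks as if the environment's "Application Healthcheck URL" option was set to the full hostname instead of a path, so Django quite reasonably returns 404. If that turns out to be the case, a sketch of an .ebextensions config file pointing the health check at the root path:


# .ebextensions/healthcheck.config (hypothetical file name)
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /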





Unable to create anything within a docker container on ec2

I currently have a Docker image deployed to Amazon's EC2 with Elastic Beanstalk, and it does not allow me to create any files within the Docker container.


When I run this image locally it works great, but on EC2 I am unable to persist anything inside the container. This prevents me from creating the Puma socket I need to run the server.


Am I missing anything?



[ec2-user@ip-0-0-0-0 ~]$ sudo docker run -t ericraio/app-web
03:52:29 web.1 | started with pid 10
03:52:29 nginx.1 | started with pid 11
03:52:30 web.1 | Puma starting in single mode...
03:52:30 web.1 | * Version 2.10.2 (ruby 2.1.5-p273), codename: Robots on Comets
03:52:30 web.1 | * Min threads: 0, max threads: 16
03:52:30 web.1 | * Environment: production
03:52:31 web.1 | * Listening on unix:///app/tmp/sockets/puma.sock
03:52:31 web.1 | /usr/local/bundle/gems/puma-2.10.2/lib/puma/binder.rb:276:in `initialize': No such file or directory - connect(2) for "/app/tmp/sockets/puma.sock" (Errno::ENOENT)
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/lib/puma/binder.rb:276:in `new'
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/lib/puma/binder.rb:276:in `add_unix_listener'
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/lib/puma/binder.rb:119:in `block in parse'
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/lib/puma/binder.rb:82:in `each'
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/lib/puma/binder.rb:82:in `parse'
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/lib/puma/runner.rb:119:in `load_and_bind'
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/lib/puma/single.rb:78:in `run'
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/lib/puma/cli.rb:507:in `run'
03:52:31 web.1 | from /usr/local/bundle/gems/puma-2.10.2/bin/puma:10:in `<top (required)>'
03:52:31 web.1 | from /usr/local/bundle/bin/puma:16:in `load'
03:52:31 web.1 | from /usr/local/bundle/bin/puma:16:in `<main>'
03:52:31 web.1 | exited with code 1
03:52:31 system | sending SIGTERM to all processes
03:52:31 nginx.1 | exited with code 0


Puma.rb



#!/usr/bin/env puma

# app do |env|
# puts env
#
# body = 'Hello, World!'
#
# [200, { 'Content-Type' => 'text/plain', 'Content-Length' => body.length.to_s }, [body]]
# end

environment 'production'
daemonize false

app_root = Shellwords.shellescape "#{File.expand_path('../..', __FILE__)}"
# app_root is "/Users/saba/rails projects/test"
#
pidfile "#{app_root}/tmp/pids/puma.pid"
state_path "#{app_root}tmp/pids/puma.state"

# stdout_redirect 'log/puma.log', 'log/puma_err.log'

# quiet
threads 0, 16
bind "http://unix#{app_root}/tmp/sockets/puma.sock"

# ssl_bind '127.0.0.1', '9292', { key: path_to_key, cert: path_to_cert }

# on_restart do
# puts 'On restart...'
# end

# restart_command '/u/app/lolcat/bin/restart_puma'
# === Cluster mode ===

# workers 2
# on_worker_boot do
# puts 'On worker boot...'
# end

# === Puma control rack application ===

activate_control_app "http://unix#{app_root}/tmp/sockets/pumactl.sock"


Dockerfile



FROM ruby:2.1.5

#################################
# native libs
#################################

RUN apt-get update -qq
RUN apt-get install -qq -y build-essential
RUN apt-get install -qq -y libpq-dev
RUN apt-get install -qq -y nodejs
RUN apt-get install -qq -y npm
RUN apt-get install -qq -y nginx

# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

#################################
# Install Nginx.
#################################

RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
RUN chown -R www-data:www-data /var/lib/nginx
ADD config/nginx.conf /etc/nginx/sites-enabled/default

EXPOSE 80

#################################
# Symlinking Nodejs for ubuntu
# -- http://stackoverflow.com/questions/26320901/cannot-install-nodejs-usr-bin-env-node-no-such-file-or-directory
#################################
RUN ln -s /usr/bin/nodejs /usr/bin/node

#################################
# NPM install globals
#################################

RUN npm install bower -g

#################################
# Rails
#################################

RUN mkdir /app
WORKDIR /app
ADD . /app

ENV RAILS_ENV production
ENV SECRET_KEY_BASE test123

RUN bundle install --without development test
RUN bundle exec rake bower:install
RUN bundle exec rake assets:precompile

CMD foreman start -f Procfile
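
One guess based on the ENOENT above: /app/tmp/sockets may simply not exist inside the image (empty directories are easy to lose between git and the build context), so Puma cannot create the unix socket there. A small addition to the Dockerfile, placed after ADD . /app, would rule that out - a sketch only:


# Make sure the directories Puma binds its socket and pidfile into actually exist
RUN mkdir -p /app/tmp/sockets /app/tmp/pids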




About AWS RDS pricing

I have a question.


I have an RDS instance: db.t2.micro with 20 GB of storage.


It's currently on the free tier. I want to know, for when the free period is over:


Is it priced per hour?


I found this page (http://aws.amazon.com/rds/previous-generation/?nc2=h_ls), which uses hours to count the price.


Will the storage (20 GB) influence my bill? That page didn't have info about db.t2.micro.


I want to know this to decide whether I have to delete this RDS instance and open a new one with just 5 GB.


Please guide me, thanks!!







Docker + Supervisord + ElasticBeanstalk Conf File Not Found

I am trying to deploy my django application in a docker container to Elastic Beanstalk. On my local machine I am successfully building and running the application out of the container. I would think that once I have the container running locally, deploying to EB would be a no-brainer.


I am uploading my project directory to EB in a .zip file using the AWS interface, but I am getting an error that the container quits unexpectedly because my supervisor configuration file is not found.


The supervisor-app.conf file is certainly included with the files that I uploaded, and should be added by the Dockerfile to my container at /home/docker/code/supervisor-app.conf... I'm not sure why EB can't find it. Is it looking on the host machine or something?


Dockerfile



FROM ubuntu:14.04

# Get most recent apt-get
RUN apt-get -y update

# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip

RUN pip install uwsgi
RUN apt-get -y install libxml2-dev libxslt1-dev

RUN apt-get install -y python-software-properties uwsgi-plugin-python3

# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin

# Install node.js
RUN apt-get install -y nodejs npm

# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists

# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev

ADD . /home/docker/code

# Setup config files
#RUN echo "daemon off;" >> /etc/nginx/nginx.conf
#RUN rm /etc/nginx/sites-enabled/default
#RUN ln -s /home/docker/code/nginx-app.conf /etc/nginx/sites-enabled/
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/

# Create virtualenv and run pip install

RUN pip install -r /home/docker/code/vitru/requirements.txt


# Create directory for logs
RUN mkdir -p /var/logs
RUN mkdir /static

# Set environment
ENV env staging
ENV PYTHONPATH $PYTHONPATH:/home/docker/code/vitru
ENV DJANGO_SETTINGS_MODULE vitru.settings

# Run django commands
# python3.4 is at /usr/bin/python3.4, but which works too
RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py syncdb --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py makemigrations --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py migrate --noinput


EXPOSE 8080 8000

CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]


Dockerrun.aws.json



{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Volumes": [
    {
      "ContainerDirectory": "/home/docker/code",
      "HostDirectory": "/home/docker/code"
    }
  ],
  "Logging": "/var/eb_log"
}
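
One detail worth checking, purely as a guess: the Volumes entry mounts the host's /home/docker/code over the container's /home/docker/code, and since that path does not exist on the Elastic Beanstalk host it shadows the files ADDed into the image - including supervisor-app.conf. A sketch of the same Dockerrun.aws.json with that mapping removed:


{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/eb_log"
}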


And here is what the logs from EB are saying



-------------------------------------
/var/log/eb-docker/containers/eb-current-app/unexpected-quit.log
-------------------------------------
Docker container quit unexpectedly on Sun Dec 28 23:15:27 UTC 2014:
Error: could not find config file /home/docker/code/supervisor-app.conf
For help, use /usr/bin/supervisord -h




cannot run eb push to send new version to elastic beanstalk

I have just set up a new project; the Elastic Beanstalk environment is running OK with the sample application. This was all set up with the EB CLI.


When I try to do eb push with my new application I get the following:



Traceback (most recent call last):
  File ".git/AWSDevTools/aws.elasticbeanstalk.push", line 57, in <module>
    dev_tools.push_changes(opts.get("env"), opts.get("commit"))
  File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 196, in push_changes
    self.create_application_version(env, commit, version_label)
  File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 184, in create_application_version
    self.upload_file(bucket_name, archived_file)
  File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 145, in upload_file
    key.set_contents_from_filename(archived_file)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 1315, in set_contents_from_filename
    encrypt_key=encrypt_key)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 1246, in set_contents_from_file
    chunked_transfer=chunked_transfer, size=size)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 725, in send_file
    chunked_transfer=chunked_transfer, size=size)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 914, in _send_file_internal
    query_args=query_args
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/connection.py", line 633, in make_request
    retry_handler=retry_handler
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/connection.py", line 1046, in make_request
    retry_handler=retry_handler)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/connection.py", line 919, in _mexe
    request.body, request.headers)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 815, in sender
    http_conn.send(chunk)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 805, in send
    self.sock.sendall(data)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 229, in sendall
    v = self.send(data[count:])
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 198, in send
    v = self._sslobj.write(data)
socket.error: [Errno 32] Broken pipe
Cannot run aws.push for local repository HEAD:



I have another Elastic Beanstalk app that is running, and when I run eb push in that directory it works fine, so I don't think it's anything to do with Ruby or other dependencies not being installed. I also made changes and made another commit with a very simple message to make sure that wasn't causing the problem, and still no joy.


The difference between the app that can be pushed and this one is the AWS account. The user credentials for this Elastic Beanstalk app that won't push are admin credentials.