Tuesday, June 30, 2015

Amazon Web Services error tracking service - similar to Airbrake

Does AWS have an integrated error tracking service similar to Airbrake that can be used with a Ruby on Rails application and Sidekiq background jobs?




How to upload an image to the AWS CDN quicker

I am working on a user profile where the user uploads an image. The AWS CDN takes approximately 15 minutes to update, and invalidation also takes around 12 minutes! Is there any way around this so the user doesn't have to wait so long to see the updated picture? Thanks in advance.
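A common workaround for this kind of setup is to give every upload a brand-new object key, so the CDN never has to be invalidated at all. A minimal Python sketch with boto3, assuming a hypothetical bucket name and CloudFront domain:

import hashlib
import boto3

# Sketch: upload each avatar under a unique, content-hashed key. CloudFront
# fetches the new object on first request, so no invalidation wait.
s3 = boto3.client("s3")

def upload_avatar(user_id, image_bytes):
    digest = hashlib.md5(image_bytes).hexdigest()
    key = "avatars/{0}/{1}.jpg".format(user_id, digest)   # new key per upload
    s3.put_object(
        Bucket="my-avatar-bucket",        # hypothetical bucket name
        Key=key,
        Body=image_bytes,
        ContentType="image/jpeg",
    )
    # Store this URL against the user and serve it directly.
    return "https://d1234example.cloudfront.net/" + key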




Best Practice: NAT vs ElasticIP

I have two basic setups for a web application that resides behind an ELB on Amazon Web Services.

Layout A:

        +-----+                                        
    +---+ ELB +----+                                   
    |   +-----+    |                                   
    |              |                                   
    |              |                                   
+---v-----+  +-----v---+           +---------------+   
| EC2/EIP |  | EC2/EIP +----+----> | HTTP RESPONSE |   
+---------+  +---------+    |      +---------------+   
                            |                          
                            |      +------------------+
                            +----> | EXTERNAL WEBSITE |
                            |      +------------------+
                            |                          
                            |      +-----+             
                            +----> | API |             
                                   +-----+             

Layout B:

       +-----+                                              
   +---+ ELB +----+                                         
   |   +-----+    |                                         
   |              |                                         
   |              |                                         
+--v--+        +--v--+  +-----+         +---------------+   
| EC2 |        | EC2 +--+ NAT +--+----> | HTTP RESPONSE |   
+-----+        +-----+  +-----+  |      +---------------+   
                                 |                          
                                 |      +------------------+
                                 +----> | EXTERNAL WEBSITE |
                                 |      +------------------+
                                 |                          
                                 |      +-----+             
                                 +----> | API |             
                                        +-----+             

I believe both architectures have pros and cons:

Layout A:

  • Does the web server send the HTTP response back through the ELB? If it went directly to the user, would that improve response performance?
  • If I limit outgoing traffic to the HTTP port only in the security group, is there still any security threat?

Layout B:

  • Does this design create another point of failure (the NAT)?
  • Will it work for OAuth communication?
  • Can it work with third-party CI and orchestration tools (Jenkins, Chef)?

Both designs work well, but which one is the best practice for infrastructure, considering performance and security?

thanks




Failure: DNS resolution failed: DNS response error code NXDOMAIN on AWS Route53

I have a site hosted on AWS and recently the site went down with an NXDOMAIN error. The site was working before, and the issue doesn't appear to be with the site itself, as the Elastic Beanstalk direct link (xxxx-prod.elasticbeanstalk.com) is working fine.

In my Route53 I have a CNAME linking to my (xxxx-prod.elasticbeanstalk.com) and a SOA and 4 NS records supplied by AWS. xxxx is a placeholder for the actual site name. Running dig...

    dig xxxx.com any

; <<>> DiG 9.8.3-P1 <<>> vizibyl.com any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 63003
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;xxxx.com.          IN  ANY

;; AUTHORITY SECTION:
com.            895 IN  SOA a.gtld-servers.net. nstld.verisign-grs.com. 1435723016 1800 900 604800 86400

;; Query time: 31 msec
;; SERVER: 64.71.255.204#53(64.71.255.204)
;; WHEN: Tue Jun 30 23:57:22 2015
;; MSG SIZE  rcvd: 102

It looks like my NS records might be the issue, but I am not sure. Can someone confirm?
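As a quick first check, one can compare the delegation the public DNS tree sees against the NS set in the Route 53 hosted zone. A sketch using the dnspython library, with "xxxx.com" standing in for the real domain; an NXDOMAIN from the .com servers usually means the registrar's delegation does not point at the four NS hosts Route 53 lists for the zone:

import dns.resolver  # pip install dnspython

domain = "xxxx.com"  # placeholder for the real domain
try:
    answer = dns.resolver.query(domain, "NS")
    print("Delegated name servers:")
    for record in answer:
        print(" ", record.target)
except dns.resolver.NXDOMAIN:
    # Nothing delegated: check the name servers registered with the
    # domain registrar against the NS records in the hosted zone.
    print("No delegation found for", domain)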




Error 405 when connecting a Facebook page hosted at Amazon S3

I would like to load a single image on a Facebook Custom Tab. The content is available online at this link: http://ift.tt/1Kpy7ZL

I hosted the files on Amazon S3, configured Route 53 to handle my DNS, and activated CloudFront to serve HTTPS for the targeted bucket.

I also changed the CORS S3 config to be like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://ift.tt/1f8lKAh">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

After all that I still get the 405 when accessing via Facebook.

This is the verbose output when accessing the page via SSL.

curl -I -v -ssl http://ift.tt/1Kpy7ZL
* Hostname was NOT found in DNS cache
*   Trying 54.230.194.31...
* Connected to d3p61garc8eqm1.cloudfront.net (54.230.194.31) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
* Server certificate: *.cloudfront.net
* Server certificate: VeriSign Class 3 Secure Server CA - G3
* Server certificate: VeriSign Class 3 Public Primary Certification Authority - G5
> HEAD /facebook.html HTTP/1.1
> User-Agent: curl/7.37.1
> Host: d3p61garc8eqm1.cloudfront.net
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 428
Content-Length: 428
< Connection: keep-alive
Connection: keep-alive
< Date: Wed, 01 Jul 2015 01:44:29 GMT
Date: Wed, 01 Jul 2015 01:44:29 GMT
< Last-Modified: Wed, 01 Jul 2015 01:44:06 GMT
Last-Modified: Wed, 01 Jul 2015 01:44:06 GMT
< ETag: "89fc2a26c87fbc6d16e6cb91a86853b9"
ETag: "89fc2a26c87fbc6d16e6cb91a86853b9"
* Server AmazonS3 is not blacklisted
< Server: AmazonS3
Server: AmazonS3
< Age: 3455
Age: 3455
< X-Cache: Hit from cloudfront
X-Cache: Hit from cloudfront
< Via: 1.1 4ee6cbd5f14ab44f81cd41b0f1148c25.cloudfront.net (CloudFront)
Via: 1.1 4ee6cbd5f14ab44f81cd41b0f1148c25.cloudfront.net (CloudFront)
< X-Amz-Cf-Id: 4C5RXGqHqaURXGk1KuYBRS28QxKTZ60RnS4Na72iX8jiGhv85HxqLw==
X-Amz-Cf-Id: 4C5RXGqHqaURXGk1KuYBRS28QxKTZ60RnS4Na72iX8jiGhv85HxqLw==

<
* Connection #0 to host d3p61garc8eqm1.cloudfront.net left intact

I would love some help on this. I've tried everything I found online and nothing worked.




How to configure Phusion Passenger X-frame options?

I'm trying to iframe a site I built (using Rails) and deployed on an ubuntu instance on AWS using Phusion Passenger.

I looked into it more and found that I need to change the X-Frame-Options HTTP header from 'SAMEORIGIN' to 'ALLOWALL'. I already added this line to my config/application.rb and my config/environments/production.rb:

config.action_dispatch.default_headers.merge!({'X-Frame-Options' => 'ALLOWALL'})

Even then, when I open my site, I still get these settings in my Network Headers:

Status:200 OK
Transfer-Encoding:chunked
X-Content-Type-Options:nosniff
X-Frame-Options:SAMEORIGIN
X-Powered-By:Phusion Passenger 5.0.11

This leads me to believe that there's a Phusion Passenger config file somewhere that I need to change the X-Frame-Options for. Any clues or help would be great, thanks!




Wipe out EB init config

Is there a way to wipe out a previous 'eb init' config? The previous config references resources that no longer exist, since they belonged to an earlier AWS account. I am using a new AWS account and want to initialize an existing Beanstalk environment.

Thanks..




API or put the logic inside the app?

I'm busy building an app for android. When it's properly received by Android users I would like to expand to iOS.

But, before we get there, I first want to make the right choice. So my question, what to do?:

  1. write all the logic inside the app and use Cognito (http://ift.tt/1R2Cu2t) to access the data from DynamoDB, or
  2. let my app connect to my own API which handles the validation rules, and which I then connect to the DynamoDB database (I don't know whether API -> Cognito -> DynamoDB is a better solution; I haven't really used it yet).

Now we all know about those issues where hackers built ways to bypass certain validation rules (as far as I read, most commonly by decompiling the app). I really want to avoid that!

So what do you experienced Android developers use? I know the answer seems obvious. But the reason I ask is that I would like to avoid running my own infrastructure, which I would need to keep updated. Yet to be able to register users without needing a third party that supports OpenID (like Twitter, Facebook, or Google), AND to secure my validation rules, it seems like I have no choice. Or do I?




Amazon RDS unable to execute SET GLOBAL command

I am using Amazon RDS for a MySQL DB. I want to run some SET commands, for example:

SET GLOBAL group_concat_max_len =18446744073709551615

But when I run this command I get this error

ERROR 1227 (42000): Access denied; you need (at least one of) the SUPER privilege(s) for this operation

When I try to add the privilege, it does not allow me to. Any help or input?
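For reference, server variables on RDS are normally changed through a DB parameter group rather than SET GLOBAL, since SUPER is not granted. A sketch with boto3, assuming a custom parameter group named "my-mysql-params" has already been created and attached to the instance (the default group is read-only, and the value shown is just an example):

import boto3

rds = boto3.client("rds")

rds.modify_db_parameter_group(
    DBParameterGroupName="my-mysql-params",   # placeholder group name
    Parameters=[
        {
            "ParameterName": "group_concat_max_len",
            "ParameterValue": "1048576",       # example value
            "ApplyMethod": "immediate",        # dynamic parameter, no reboot
        }
    ],
)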




Why can't I join my AWS EC2 instance to my Simple AD?

I'm unable to join an EC2 instance to my Directory Services Simple AD in Amazon Web Services manually, per Amazon's documentation.

  • I've verified that the IP I entered for DNS in the network config on the EC2 instance is the DNS IP for the Simple AD.
  • I'm entering the FQDN foo.bar.com.
  • I've verified that the Simple AD and the EC2 instance are in the same subnet.

This is the error message I'm receiving:

The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller (AD DC) for domain "aws.bar.com":

The error was: "This operation returned because the timeout period expired." (error code 0x000005B4 ERROR_TIMEOUT)

The query was for the SRV record for _ldap._tcp.dc._msdcs.aws.bar.com

The DNS servers used by this computer for name resolution are not responding. This computer is configured to use DNS servers with the following IP addresses:

10.0.1.34

Verify that this computer is connected to the network, that these are the correct DNS server IP addresses, and that at least one of the DNS servers is running.




Update CloudSearch document using Python boto

I am using the latest boto tools for Python to add and search documents on Amazon CloudSearch. I haven't been able to find any documentation regarding the updates of documents. There is documentation for the old API here: http://ift.tt/1U5dt5u. Here, when adding a document you give a version number, and to quote the docs:

If you wish to update a document, you must use a higher version ID.

However, I don't find this feature in the boto namespaces for the new API (the ones with cloudsearch2). The add function no longer takes a version.

Currently what I am doing to update a document is getting it by ID, then adding it again. The logic of updating the fields is on my side.

What would be nice is to add a document with the same ID and higher version number and only fill in the fields that you want overridden, and the document should be updated.

Is there still a way to use the version of a document in the new boto API?
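For reference, this is roughly what a document upload looks like through boto's cloudsearch2 layer; note there is no version argument, since re-adding the same ID is treated as a full replacement of that document. A sketch assuming a hypothetical document endpoint:

from boto.cloudsearch2.document import DocumentServiceConnection

# Placeholder endpoint for an existing 2013-API CloudSearch domain.
doc_service = DocumentServiceConnection(
    endpoint="doc-mydomain-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com"
)

# All fields must be supplied, not just the changed ones, because the add
# replaces the whole document with this ID.
doc_service.add("doc-123", {"title": "Updated title", "body": "Updated body"})
doc_service.commit()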




Struggling with AWS S3 Bucket Policy for Cloudfront distribution

I am trying to get my S3 content to display via Cloudfront. Unfortunately all that I see is a message stating that I do not have permission to access my files stored in S3. I have followed a few tutorials and really don't understand why it's not working.

Here is what I did:

Origin Domain Name: my_aws_bucket
Origin Path: /uploads      # This is the folder where my images are stored

I have told Cloudfront to restrict bucket access to my created identity, and to set up a new policy on my bucket:

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity **********"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my_aws_bucket/*"
        }
    ]
}

I cleared all other policies, including IAM user policies (just to be extra sure that nothing is blocking my newly created policy).

When I refresh my page, however, I still only see the image alt text. If I click on the image link in my page source, I am presented with the following:

<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>71C324761B2B3661</RequestId>
<HostId>
PUojsKhDRMcV1G2AItu8sBve5FdzJoq/ieecIrWVjFE5SpC2prxjz4PuI+nJLAHIgXcowtZY1M8=
</HostId>
</Error>

I have confirmed that the masked-out values above match those of my identity.

I am pulling my hair out, as there is no reason I can find why this shouldn't work, and it's kept me busy for a few days now.




AWS CLI Unknown component: credential_provider

I've been trying to set up an AWS Lambda function for a while now but keep running into this error. I've followed the basic tutorial and set up everything exactly as it is in the guide for the user I have. But no matter what, I keep getting the "Unknown component: credential_provider" error when trying to do anything via the CLI. Has anyone else run into this, or does anyone know of a more in-depth/better tutorial for setting this up?




Is there any technique/software to collect data information like product detail & price from online stores?

Is there any technique/software to collect data information like product detail & price from online stores like eBay, Amazon, Flipkart, Snapdeal, etc.?

I need to collect some product details like product price, product rating (customer review) & some more information.




Kinesis Client Library : multiple workers for a stream

I have a .war in which we have a Kinesis application that processes a stream containing a single shard. We deploy two instances of the war in production, so I end up with two workers working on a single stream with a single shard. What is the recommended way to deal with this? I tried deploying two wars on my dev machine locally, and it seems to be fine in the sense that each record is processed only once. I know that AWS recommends one instance per shard. From their docs:

Typically, when you use the KCL, you should ensure that the number of instances does not exceed the number of shards (except for failure standby purposes). Each shard is processed by exactly one KCL worker and has exactly one corresponding record processor, so you never need multiple instances to process one shard.

In general, I'm not clear on the multiplicity of the relationship between shards, workers, and record processors. From the documentation, it sounds like for n shards we need 1 worker and n record processors, which would imply that there is only ever 1 worker processing a stream.

Any help is appreciated.




Apache Tika and AWS Cloudsearch

I have an Apache Tika program which extracts metadata (JSON) and text given a Word file as input.

I want to upload the extracted metadata and the text to Amazon CloudSearch and make it searchable.

I have created a CloudSearch domain, and in the indexing options I have used the predefined configuration option (Microsoft Office files).

Let me know how to insert the metadata, text, and filename into Amazon CloudSearch using the AWS Java CloudSearch v2 APIs.

Thanks




S3 to RDS file management system

I'm new to AWS and have a feasibility question for a file management system I'm trying to build. I would like to set up a system where people use the Amazon S3 browser and drop either a CSV or Excel file into their specific bucket. I would then like to automate the process of taking that CSV/Excel file and inserting it into a table within RDS. This assumes the table has already been built, and the Excel/CSV files will always be formatted the same way and land in exactly the same place every time. Is it possible to automate this process, or at least get it to a point where very minimal human interference is needed? I'm new to AWS, so I'm not exactly sure of the limits of S3 to RDS. Thank you in advance.
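One way to sketch the "minimal human interference" pipeline is a small script that reads new objects from the bucket and inserts the rows into the RDS MySQL table. The bucket, prefix, endpoint, and table below are all placeholders; the script could run from cron on a small EC2 instance, or be triggered by S3 event notifications instead of polling:

import csv
import io

import boto3
import pymysql  # assumed MySQL driver; any DB-API driver would do

s3 = boto3.client("s3")
db = pymysql.connect(host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
                     user="app", password="secret", db="reports")

def load_new_files(bucket="customer-drops", prefix="incoming/"):
    # List the drop folder and load each CSV into a pre-built table.
    for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", []):
        body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
        rows = list(csv.reader(io.StringIO(body.decode("utf-8"))))
        with db.cursor() as cur:
            # rows[0] is assumed to be a header line
            cur.executemany("INSERT INTO uploads (col_a, col_b) VALUES (%s, %s)", rows[1:])
        db.commit()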




Port 9000 access refused on spark 1.4.0 EC2

With Spark 1.3.1, when I follow these instructions (Data access Spark EC2), everything works. But when I try to do the same thing with Spark 1.4.0 on EC2, I hit this error:

Exception in thread "main" java.net.ConnectException: Call From ******/****** to ec2-********.compute- 1.amazonaws.com:9000 failed on connection exception: java.net.ConnectException: Connexion refusée;

I opened all traffic on both the slave and master security groups, but it doesn't seem to work, so I don't know what to do...




Unattended MySQL Install on AWS Ubuntu 14.04

I'm trying to execute the following script to install MySQL unattended.

"export DEBIAN_FRONTEND=noninteractive" isn't working - I still have to press enter a few times to get past the prompts.

AWS Image: Ubuntu Server 14.04 LTS (PV), SSD Volume Type - ami-d85e75b0

Any suggestions?

#!/bin/sh

sudo apt-get install libaio1

export DEBIAN_FRONTEND=noninteractive

# Install script for mysql database

sudo groupadd mysql
sudo useradd -r -g mysql mysql
sudo tar xvf mysql-server_5.6.21-1ubuntu12.04_amd64.deb-bundle.tar
if [ $? != 0 ];then echo "Unable to extract tar file."; exit 100; fi

sudo dpkg -i mysql-common_5.6.21-1ubuntu12.04_amd64.deb
if [ $? != 0 ];then echo "Unable to install package mysql-common."; exit 100; fi

sudo dpkg -i mysql-community-server_5.6.21-1ubuntu12.04_amd64.deb
if [ $? != 0 ];then echo "Unable to install package mysql-community-server."; exit 100; fi

sudo dpkg -i mysql-community-client_5.6.21-1ubuntu12.04_amd64.deb
if [ $? != 0 ];then echo "Unable to install package mysql-community-client."; exit 100; fi

sudo mv /etc/mysql/my.cnf my.cnf.in
if [ $? != 0 ];then echo "Unable to move /etc/mysql/my.cnf."; exit 100; fi

sudo sed -e s/127.0.0.1/0.0.0.0/g my.cnf.in | sudo tee /etc/mysql/my.cnf
if [ $? != 0 ];then echo "Unable to configure my.cnf."; exit 100; fi

#sudo rm -f my.cnf.in

sudo /etc/init.d/mysql restart
if [ $? != 0 ];then echo "Unable to restart mysql server."; exit 100; fi

exit 0

# Leave the last line empty, otherwise it can cause problems running the script




AWS : Splitting software & data in different volumes

AWS recommends keeping data & OS on separate EBS volumes. I have a webserver running on EC2 with an EBS volume. On a bare VM, I install the following:

- webserver, wsgi, pip & related software/config (some in /etc some in /home/<user>)
- server code & static assets in /var/www/
- log files are written to /var/log/<respective-folder>
- maintenance scripts in /home/<user>/

The database server is separate. For a web server, which of the above items would benefit from higher IOPS, and for which does it not matter? My understanding is that the server code and log files should be moved to a separate EBS volume with higher IOPS. Or should I just move all of my stuff (except the software I installed in /etc, i.e. the web server) to a separate volume with better IOPS?




Using Login with Paypal and using OpenID with AWS Cognito

I am trying to use the OpenID framework supported by Paypal to tie the credentials in with the AWS Cognito service.

If I compare the configuration from Salesforce

http://ift.tt/1pL50UC

to the configuration at Paypal

http://ift.tt/1IqmQs3

the PayPal configuration is missing the jwks_uri element, which is a REQUIRED element of the OpenID Provider metadata per the OIDC specification, and AWS uses the keys at that URI to verify the ID tokens.

Is there a different URL I should be using for Login with PayPal to work with OpenID?

Is there any other way to get Login with PayPal to work with the AWS Cognito service, which works well with other OpenID providers?




Deployment methods for docker based micro services architecture on AWS

I am working on a project using a microservices architecture. Each service lives in its own docker container and has a separate git repository in order to ensure loose coupling.

It is my understanding that AWS recently announced support for Multi-Container Docker environments in ElasticBeanstalk. This is great for development because I can launch all services with a single command and test everything locally on my laptop. Just like Docker Compose.

However, it seems I only have the option to also deploy all services at once, which I am afraid defeats the initial purpose of having a microservices architecture.

I would like to be able to deploy/version each service independently to AWS. What would be the best way to achieve that while keeping infrastructure management to a minimum?




Run SQL script file with multiple complex queries using AMAZON datapipeline

I have just created an account on Amazon AWS and I am going to use Data Pipeline to schedule my queries. Is it possible to run multiple complex SQL queries from a .sql file using the SqlActivity of Data Pipeline?

My overall objective is to process the raw data from Redshift/S3 using SQL queries from Data Pipeline and save it to S3. Is this a feasible way to go?

Any help in this regard will be appreciated.




Signature calculated does not match the signature you provided Amazon

"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for detail"

Below is my SignatureHelper (based on Amazon class libraries).

public string SignRequest(Dictionary<string, string> parametersUrl, Dictionary<string, string> parametersSignture)
{
    var secret = Encoding.UTF8.GetBytes(parametersSignture["Secret"]);
    var signer = new HMACSHA256(secret);

    var stringToSign = CalculateStringToSign(parametersUrl, parametersSignture);
    var toSign = Encoding.UTF8.GetBytes(stringToSign);

    var sigBytes = signer.ComputeHash(toSign);
    var signature = Convert.ToBase64String(sigBytes);

    return signature;
}

private static string CalculateStringToSign(IDictionary<string, string> parameters, IDictionary<string, string> parametersSignture)
{
    var sorted = new SortedDictionary<string, string>(parameters, StringComparer.Ordinal);

    var data = new StringBuilder();
    data.Append(parametersSignture["RequestMethod"]);
    data.Append("\n");

    var endpoint = new Uri(parametersSignture["EndPoint"]);

    data.Append(endpoint.Host);
    if (endpoint.Port != 443 && endpoint.Port != 80)
    {
        data.Append(":")
            .Append(endpoint.Port);
    }

    data.Append("\n");
    var uri = endpoint.AbsolutePath;
    if (uri.Length == 0)
    {
        uri = "/";
    }

    data.Append(UrlEncode(uri, true));
    data.Append("\n");

    foreach (var pair in sorted.Where(pair => pair.Value != null))
    {
        data.Append(UrlEncode(pair.Key, false));
        data.Append("=");
        data.Append(UrlEncode(pair.Value, false));
        data.Append("&");
    }

    var result = data.ToString();

    return result.Remove(result.Length - 1);
}

private static string UrlEncode(string data, bool path)
{
    var encoded = new StringBuilder();
    var unreservedChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_.~" + (path ? "/" : string.Empty);

    foreach (char symbol in Encoding.UTF8.GetBytes(data))
    {
        if (unreservedChars.IndexOf(symbol) != -1)
        {
            encoded.Append(symbol);
        }
        else
        {
            encoded.Append("%" + string.Format("{0:X2}", (int)symbol));
        }
    }

    return encoded.ToString();
}

This is my data:

CalculateStringToSign:

POST
mws.amazonservices.com
/
AWSAccessKeyId=***&Action=SubmitFeed&FeedType=_POST_PRODUCT_DATA_&MWSAuthToken=****&Merchant=***&PurgeAndReplace=false&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2015-06-30T13%3A47%3A42Z&Version=2009-01-01

URL

"http://ift.tt/1LFLIxM"

This is what I receive back

Code: SignatureDoesNotMatch
Message: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

I think it's something within my helper (unsure what, as I've looked at many code samples and mine seems the same).

Thanks,

Clare




Unique Hostname for New Relic's nrsysmond on Elastic Beanstalk

I'm configuring nrsysmond to run on an Elastic Beanstalk container that hosts Generic Docker containers.

Is there any way to get the instance index so that I could combine that with a constant? Something like Production-1, Production-2, etc.

The configuration I'm using looks like this:

packages: 
  yum: 
    newrelic-sysmond: [] 
  rpm: 
    newrelic: http://ift.tt/1h9KgWn 
commands: 
  "01": 
    command: nrsysmond-config --set license_key=`/opt/elasticbeanstalk/bin/get-config environment | jq .NEW_RELIC_LICENSE_KEY | sed -e 's/"//g'`
  "02": 
    command: echo hostname=`/opt/elasticbeanstalk/bin/get-config environment | jq .RAILS_ENV | sed -e 's/"//g'` >> /etc/newrelic/nrsysmond.cfg 
  "03": 
    command: usermod -a -G docker newrelic
  "04": 
    command: /etc/init.d/newrelic-sysmond restart

This works great, but sets all the hostnames to the same thing. I don't want to use the Elastic Beanstalk hostname, as those change every time the instances scale. This clogs up New Relic with dead instances.

This is on 64bit Amazon Linux 2015.03 v1.4.3 running Docker 1.6.2
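There is no built-in "instance index" in an auto-scaled group, but the EC2 instance ID from the metadata service is at least stable and unique per instance, so one option is to combine it with the environment name. A hedged Python sketch of the idea (the same thing could be done with curl inside the commands block; the "Production-" prefix mirrors the RAILS_ENV usage above):

import urllib2  # Python 2, as on the 2015 Amazon Linux AMIs

# Build a hostname like "Production-i-0abc123..." from the metadata service.
instance_id = urllib2.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read()

hostname = "Production-" + instance_id
print(hostname)  # e.g. write this into /etc/newrelic/nrsysmond.cfg as hostname=...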




How do you specify VPC with Ansible ec2_lc module?

I'm trying to use Ansible to create a launch configuration. I'm using the ec2_lc module as detailed at http://ift.tt/1CVQz8j.

I'm creating the launch configuration and specifying some security groups that are not part of my default VPC. However, it will not let me do this. It appears to be defaulting to the default VPC, and I don't see a setting in the docs to change this. Is there something I'm overlooking? The output from my playbook is as follows:

TASK: [aws-lc | building new aws launch configuration] ************************ 
failed: [localhost] => {"failed": true}
msg: BotoServerError: 400 Bad Request
<ErrorResponse xmlns="http://ift.tt/1jqPXi1">
  <Error>
    <Type>Sender</Type>
    <Code>ValidationError</Code>
    <Message>The security group 'xyz-general-sg' does not exist in default VPC 'vpc-3Cef6a45'</Message>
  </Error>
  <RequestId>54121d19-1f30-11e5-1121-51263ee1684e</RequestId>
</ErrorResponse>




How do I bypass an Amazon load balancer to terminate https access on my auto-scaling group instances?

I have set up a TCP listener on the elastic load balancer (ELB) port 443 which then forwards to the auto-scaling group (ASG) via SSL on port 443.

It is my understanding that the certificate on the ASG servers will be presented.

I have enabled back end authentication. On the console, the port configuration reads: 443 (TCP) forwarding to 443 (SSL) Backend Authentication: Enabled, followed by my PublicKeyPolicyType name.

This is not working. No certificate is being presented.

Am I missing something? Do I need to upload the cert to the ELB? I am trying to avoid this.




Django S3 uploaded file urls show credentials

I am using django-storages and Amazon S3 for file storage. In my model I have: avatar = models.ImageField(_('Avatar'), upload_to='avatars/profiles/', blank=True, null=True)

The image is uploaded successfully on save, but the full URL with credentials is saved. In my retrieve requests (or when I read the URL from the DB via the console) I get something like: http://ift.tt/1LzLGXn

How can I prevent this? I could strip the URL before responding, but I do not need, and therefore do not want, to save them in this format, because all files can be accessed publicly and there is no need for credentials. P.S. I thought of using the post_save hook, but it seemed like a hack to me.
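For reference, django-storages' S3Boto backend has a setting that controls whether generated URLs carry query-string credentials. A minimal sketch of the relevant settings (the bucket name is a placeholder):

# settings.py (sketch)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_STORAGE_BUCKET_NAME = 'my-public-bucket'   # placeholder

# With query-string auth disabled, the field's .url returns a plain public
# URL instead of one signed with the access key, expiry, and signature.
AWS_QUERYSTRING_AUTH = False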




Tool to mock Amazon's SimpleDB locally

An app at the company I work for is using Amazon SimpleDB. I have been tasked with writing tests for that app. I am looking for a tool to mock SimpleDB locally and fill it with some dummy data, so that I don't always have to make real requests.

I found SimpleDB/dev, but it is using the old API, and the app I am supposed to test uses the new 2009 API. I also found out about boto, but that doesn't solve my issue either. Any suggestions?




Unable to access content from aws load balancer

I have created an AWS Elastic Load Balancer and associated my existing instance with it. The instance is passing the health check. I accessed my instance directly using ip:port and I am able to view the content. I have configured the same port in the AWS settings. When I try the DNS name in my browser, I do not get any response. What do you think is the issue?




Is it possible to get a time for state transition for an Amazon EC2 instance?

I'm accessing EC2 with the aws-sdk for Ruby. I have an array of instances from describe_instances().

This provides me with the state of the instances and even a state transition reason. But how can I get a time for the state transition?
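There is no dedicated "state transition time" field in the DescribeInstances response, but the state transition reason string sometimes embeds one, e.g. "User initiated (2015-06-29 18:35:12 GMT)", which can be parsed out. A sketch in Python/boto3 rather than the Ruby SDK (same underlying API):

import re
import boto3

ec2 = boto3.client("ec2")

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        reason = instance.get("StateTransitionReason", "")
        # Pull out whatever sits in parentheses, if anything.
        match = re.search(r"\((.+)\)", reason)
        print(instance["InstanceId"],
              instance["State"]["Name"],
              match.group(1) if match else "no timestamp in reason")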




Rsyslog, EC2 an Hostnames

We are automating our server farm using Amazon's EC2. Part of this is collecting our log files using a hosted log service, http://ift.tt/1vvvJ94 (like Loggly, etc.). Unfortunately, using rsyslog, we're not seeing the system names show up properly.

to reproduce:

  • we create an AMI of a well operating server with updated code, etc..
  • that server has the hostname ec2-123-123-123-13
  • we have it configured to launch and get up and running
  • as expected, every server gets its newest hostname
  • rsyslog initiates, and starts sending log data to papertrail
  • the server name passed in the rsyslog events is the original ec2-123-123-123-13 (for example, the two lines below, 5 minutes apart, show the original system name, with the new system name appearing in the rest of the log line):

" Jun 30 00:45:11 ec2-54-147-195-63 system: ec2-54-161-201-58.compute-1.amazonaws.com Jun 30 00:50:11 ec2-54-147-195-63 system: ec2-54-161-201-58.compute-1.amazonaws.com "

  • This is incredibly sticky. I've tried to add restarts within rc.local for both Apache and rsyslog.
  • I can go into the box directly and restart Apache and rsyslog, and it will tend to reset to the correct server name and start streaming.

Unfortunately, this means that all our logging shows up under the staging server that we use pre-production. It also makes it very hard to debug, since all the servers look the same.

Interesting observations:

  • When logging in (ssh) with username ubuntu, the prompt is still the OLD name.
  • When using sudo bash to log in as root, the prompt is the new name.
  • When logging in much later / a second time, the prompt is the new name.
  • We thought this might have something to do with EIPs and specific servers. Unfortunately, even when we created a 3rd-generation server, the initial (and sticky) IP address was that of the immediately preceding server.
  • I've tried to schedule a cron job @boot to reset rsyslog, or doing it in rc.local, but to no avail. It seems just to get stuck further.

rsyslogd 5.8.6 on Ubuntu.

Any suggestions or help? How can we reset the name and effectively use our remote logging?




list free elastic ips from AWS

I have a requirement where I need to filter out the free Elastic IP addresses from an AWS account; that is, list only those which are not bound to any instance.

There are several filters I have seen here: http://ift.tt/1FMKRmW

but I am not able to figure out how only the free Elastic IPs can be retrieved.

It would be very helpful if anyone can give pointers on this.

Thanks a lot ~Yash
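A sketch of one way to do it with boto3: describe_addresses returns every allocated address, and an address with neither an instance nor an association attached is "free":

import boto3

ec2 = boto3.client("ec2")

free_ips = [
    address["PublicIp"]
    for address in ec2.describe_addresses()["Addresses"]
    # An unassociated EIP has neither an InstanceId nor an AssociationId.
    if not address.get("InstanceId") and not address.get("AssociationId")
]
print(free_ips)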




Monday, June 29, 2015

Route 53 alias record not working?

I previously had a website working on AWS. It was created and registered with AWS. It was set up in the hosted zone and pointed to an EC2 instance. Everything was working fine.

I got "smart" and created a load balancer, which pointed to the EC2 instance, and then I deleted the previous hosted zone record (and associated recordset) and re-added the hosted zone record which would point to the load balancer.

After much googling I determined I needed to add an "A" record, make it an alias and point it to the load balancer. All good so far.

Then I went to access the website in the browser and I'm getting ERR_NAME_NOT_RESOLVED. I waited hours for the DNS servers to update and still no luck. Flushed the DNS cache and no luck.

I've changed multiple other things: tried www in front of the name in the record set, tried a PTR record which pointed to the load balancer DNS name, and even tried to sync the DNS server names between the domain record and the hosted zone record. Still no luck. Same error.

I've performed "nslookup -debug" and honestly don't know what I'm looking at.

C:\Users\sam>nslookup -debug abc.com

Got answer:
    HEADER:
        opcode = QUERY, id = 1, rcode = NOERROR
        header flags: response, auth. answer, want recursion, recursion avail.
        questions = 1, answers = 1, authority records = 0, additional = 0

    QUESTIONS:
        1.1.168.192.in-addr.arpa, type = PTR, class = IN
    ANSWERS:
    ->  1.1.168.192.in-addr.arpa
        name = xyz
        ttl = 0 (0 secs)

Server:  xyz
Address:  192.168.1.1

Got answer:
    HEADER:
        opcode = QUERY, id = 2, rcode = SERVFAIL
        header flags: response, want recursion, recursion avail.
        questions = 1, answers = 0, authority records = 0, additional = 0

    QUESTIONS:
        abc.com, type = A, class = IN

------------

Got answer:
    HEADER:
        opcode = QUERY, id = 3, rcode = SERVFAIL
        header flags: response, want recursion, recursion avail.
        questions = 1, answers = 0, authority records = 0, additional = 0

    QUESTIONS:
        abc.com, type = AAAA, class = IN

------------

Got answer:
    HEADER:
        opcode = QUERY, id = 4, rcode = SERVFAIL
        header flags: response, want recursion, recursion avail.
        questions = 1, answers = 0, authority records = 0, additional = 0

    QUESTIONS:
        abc.com, type = A, class = IN

------------

Got answer:
    HEADER:
        opcode = QUERY, id = 5, rcode = SERVFAIL
        header flags: response, want recursion, recursion avail.
        questions = 1, answers = 0, authority records = 0, additional = 0

    QUESTIONS:
        abc.com, type = AAAA, class = IN

*** xyz can't find abc.com: Server failed

I'm sure it's something dumb. But I've spent too much time on this and can't think anymore.

What did I do wrong?

Thanks for your help.




Connect mysql using workbench hosted on amazon ec2

I have created an EC2 instance with CentOS on Amazon EC2 and installed a MySQL server on it. I am able to connect to the EC2 instance with SSH, and MySQL also connects from that instance. Now I am trying to connect to that MySQL instance from my local (remote) PC using the Workbench tool, but I can't figure out what I should enter for 'hostname' and 'port'. Does anyone have an idea how to connect to the MySQL instance? Do I have a permission problem?




I am using EC2 Management console and am getting an error

I want to use the Amazon AWS EC2 Management Console in order to run a Java program. When I try to connect to my instance using a Java SSH client directly from my browser, I get an error and nothing really happens. Can someone help me run my Java code using this console?




Loading JSON data to AWS Redshift results in NULL values

I am trying to perform a load/copy operation to import data from JSON files in an S3 bucket directly into Redshift. The COPY operation succeeds, and after the COPY the table has the correct number of rows/records, but every record is NULL!

It takes the expected amount of time for the load, the COPY command returns OK, the Redshift console reports successful and no errors... but if I perform a simple query from the table, it returns only NULL values.

The JSON is very simple + flat, and formatted correctly (according to examples I found here: http://ift.tt/1U2hyaN)

Basically, it is one row per line, formatted like:

{ "col1": "val1", "col2": "val2", ... }
{ "col1": "val1", "col2": "val2", ... }
{ "col1": "val1", "col2": "val2", ... }

I have tried things like rewriting the schema based on values and data types found in the JSON objects and also copying from uncompressed files. I thought perhaps the JSON was not being parsed correctly upon load, but it should presumably raise an error if the objects cannot be parsed.

My COPY command looks like this:

copy events from 's3://mybucket/json/prefix' 
with credentials 'aws_access_key_id=xxx;aws_secret_access_key=xxx'
json 'auto' gzip;

Any guidance would be appreciated! Thanks.




Upload image from phone to Amazon S3 then return the image URL

I'm working on an android app and I'm having trouble with one part of it.

I am trying to take a photo, upload this photo to Amazon S3, then get the URL of the image and set it as the image of an ImageView.

Currently I am getting the image from the intent perfectly fine, but I am having a lot of trouble understanding how to upload the image properly using a putObjectRequest.

Any help would be greatly appreciated.




Exceptions in Dynamo Mapped Pojo Setters?

Is it possible to have set/get methods that are mapped to Dynamo attributes through the mapping annotations throw exceptions such as IllegalArgumentException if, say, the input is a string but not formatted correctly. More specifically is this possible for the Hash Key attribute?




Strange behavior when attaching volumes to Docker based on mounted AWS EBS drives

I seem to be running into odd issues accessing volumes that I mounted on Docker from my host (an EC2 instance) which are based on EBS drives that I mounted to it.

For clarification, this is how my physical host is set up:

ubuntu@ip-10-0-1-123:/usr/bin$ df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1      10178756 2226608   7433444  24% /
/dev/xvdb       10190136   23032   9626432   1% /zookeeper/data
/dev/xvdc       10190136   23032   9626432   1% /zookeeper/log

As you can see I have a drive for the root directory and 2 additional EBS drives that I mounted to the host at /zookeeper/data and /zookeeper/log.

When I run my container, I have my Docker volume mounts configured with docker-compose like so:

zookeeper1:
  image: lu4nm3/zookeeper:3.4.6
  hostname: zookeeper
  name: zookeeper
  restart: always
  privileged: true
  volumes:
    - /var/log/zookeeper:/var/log/zookeeper                  # this one is on the root drive
    - /opt/zookeeper/3.4.6/conf:/opt/zookeeper/3.4.6/conf    # this one is on the root drive
    - /zookeeper/data:/zookeeper/data                        # this one is on EBS drive 1
    - /zookeeper/log:/zookeeper/log                          # this one is on EBS drive 2

So far this seems pretty normal and you would expect it to work fine, but the setup of the host is where it gets strange. I've found that if I mount the EBS drives before installing Docker then everything works as expected. If I install Docker first and then I mount my drives, however, I run into strange issues related to my image that seem to be related to these new drives.

Has anyone ever run into similar issues when working with additional drives that are mounted to your physical host? The behavior I'm seeing based on the ordering of actions described above seems to indicate that Docker performs some sort of check when the daemon/client initializes that looks at all of the drives on the host. And if you mount additional drives after the fact, then Docker doesn't see them. Does this sound accurate?




Encrypt Amazon RDS

I want to create an RDS instance with MySQL on it, and I want it to be encrypted.

I am using the Ruby API, and I've looked into the RDS client API, and I saw that there are params that can be given:

tde_credential_arn
tde_credential_password

but both are related to Oracle DB (Encrypting Amazon RDS Resources). I've also tried to use the key storage_encryped and give it a true value, but the key wasn't a valid one (I also saw it here: CreateDBInstance).

So, how can I do it with a MySQL RDS?
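For comparison, in Python/boto3 the flag on create_db_instance is StorageEncrypted; the Ruby SDK exposes the same API parameter in snake_case. A sketch with placeholder identifiers and sizes (encryption at rest requires an instance class that supports it):

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="my-encrypted-mysql",   # placeholder
    DBInstanceClass="db.r3.large",               # a class that supports encryption
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",
    StorageEncrypted=True,   # uses the default RDS KMS key unless KmsKeyId is given
)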




Is there a similar command like 'eb stop' in EB CLI3?

I want: (1) AWS to stop charging, (2) to not remove the server or files, (3) it's okay to pause the server.

Is this possible with the current policies of Elastic Beanstalk?

Thanks




Why three different versions of PHP?

I have searched before asking this question - I am surprised it has not been asked more often.

Why have the PHP teams got three different stable versions of PHP on the go?

5.6, 5.5, 5.4

And they've just recently released version 7 alpha

Could someone enlighten me as to 1) why the PHP group decided that three different stable versions of PHP is a good idea? And might I assume that I'm best off just jumping straight into 5.7 and cleaning up my code?

I don't think my requirements are exotic - I don't crunch data, I just use PHP validated data to read/write to MySQL - no rocket science.

The issue?

My old WAMP Zend v6 Community Edition runs PHP 5.5.7 and my new AWS micro machine uses 5.3.29 (build date May 2015, but amazingly AWS has standardized on the prehistoric 5.3). I discovered a bug with json_encode. When I realised I have two different versions of PHP, I figured it's best I just upgrade both to similar versions. Hence I am thinking 5.7 is probably my best bet for future support. Comments welcome.




Register form android app using AWS cognito

I'm building a registration form for an Android app. I don't want to give users the ability to log in using Facebook or Google, but I do want to use Cognito to provide a secure connection to the DynamoDB database.

The main reason I don't want to use Facebook or Google is that I want to give children, who don't have a Facebook or Google account, the ability to register simply by entering a username and password.

enter image description here

Now, as far as I understand, I can use my own database of users to connect with Cognito:

enter image description here

But I do not want to configure and secure my own web server with a database where I store my users. The main reason I want to use Amazon in the first place is that it's secure and I don't have to build my own complete infrastructure.

So, does anyone know whether there is a better way to still use Cognito and DynamoDB while giving users the ability to register without any of the existing identity providers like Google or Facebook?




Pgbouncer+Stunnel mixing up connections to Amazon RDS Postgres

Here are my configs.

I have two databases on RDS Postgresql. I have Stunnel listening to a single port (8432) and connecting to RDS.

I have two databases defined in pgbouncer. I have a gunicorn + gevent + flask stack connecting to these two databases separately, meaning stack A connects to DB A and stack B connects to DB B.

Everything connects and works fine, but after some time (or after a redeploy), the database connections seem to be crossing over. For example, stack A seems to be getting results from stack B, etc.

I tried a whole bunch of things, including having separate pgbouncer instances for both databases, but this still happens. Is this something that Amazon RDS is doing?




Flask-SQLAlchemy: Can't reconnect until invalid transaction is rolled back

So I am using Amazon Web Services RDS to run a MySQL server and using Python's Flask framework to run the application server and Flask-SQLAlchemy to interface with the RDS.

My app config.py

SQLALCHEMY_DATABASE_URI = '<RDS Host>'
SQLALCHEMY_POOL_RECYCLE = 60

My __init__.py

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

application = Flask(__name__)
application.config.from_object('config')
db = SQLAlchemy(application)

I have my main application.py

from flask import Flask
from application import db
import flask.ext.restless
from application.models import Person

application = Flask(__name__)
application.debug=True
db.init_app(application)

@application.route('/')
def index():
    return "Hello, World!"

manager = flask.ext.restless.APIManager(application, flask_sqlalchemy_db=db)
manager.create_api(Person, methods=['GET','POST', 'DELETE'])

if __name__ == '__main__':
    application.run(host='0.0.0.0')

The models.py

class Person(db.Model):
    __bind_key__= 'people'
    id = db.Column(db.Integer, primary_key=True)
    firstName = db.Column(db.String(80))
    lastName = db.Column(db.String(80))
    email = db.Column(db.String(80))

    def __init__(self, firstName=None, lastName=None, email=None):
        self.firstName = firstName
        self.lastName = lastName
        self.email = email

I then have a script to populate the database for testing purposes after db creation and app start:

from application import db
from application.models import Person

person = Person('Bob', 'Jones', 'bob@website.net')
db.session.add(person)
db.session.commit()

Once I've reset the database with db.drop_all() and db.create_all() I start the application.py and then the script to populate the database.

The server will respond with correct JSON but if I come back and check it hours later, I get the error that I need to rollback or sometimes the 2006 error that the MySQL server has gone away.

People suggested that I change timeout settings on the MySQL server but that hasn't fixed anything. Here are my settings:

innodb_lock_wait_timeout = 3000
max_allowed_packet       = 65536
net_write_timeout        = 300
wait_timeout             = 300

Then when I look at the RDS monitor, it shows the MySQL server kept the connection open for quite a while until the timeout. Now correct me if I'm wrong but isn't the connection supposed to be closed after it's finished? It seems that the application server keeps making sure that the database connection exists and then when the MySQL server times out, Flask/Flask-SQLAlchemy throws an error and brings down the app server with it.

Any suggestions are appreciated, thanks!
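One hedged way to keep a single stale connection from poisoning later requests is to roll the session back whenever a request dies with an error, so the pooled connection can be reused or recycled instead of staying in an invalid transaction. A minimal sketch that would sit in application.py (the handler is an assumption to illustrate the idea, not a known fix for the timeout itself):

@application.teardown_request
def rollback_on_error(exception=None):
    # If the request ended with an exception mid-transaction, roll back so
    # the connection is not left in the "invalid transaction" state.
    if exception is not None:
        db.session.rollback()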




Ghost CMS Using AWS S3 GET Uploaded image

I am having an issue trying to display the right path for the image that is being uploaded. I see the POST method come through and can see that a directory was created within AWS S3 with the paths to the image, but then I see three GET requests. The first two are identical, loading an image found in the "Jun" directory, but the third request searches for the image in the "jun" directory. It appears that Ghost is using that third GET file path to load the image. Is there any reason why there are three GET requests, with the last being a lowercase path that breaks?

Terminal Output:

POST /ghost/api/v0.1/uploads/ 200 457.752 ms - 122
GET /ghost-blogpost-images.s3-website-us-east-1.amazonaws.com2015/Jun/Screen_Shot_2015_06_29_at_10_29_02_AM-1435598251158.png 301 2.768 ms - -
GET /ghost-blogpost-images.s3-website-us-east-1.amazonaws.com2015/Jun/Screen_Shot_2015_06_29_at_10_29_02_AM-1435598251158.png/ 301 5.303 ms - 156
GET /ghost-blogpost-images.s3-website-us-east-1.amazonaws.com2015/jun/screen_shot_2015_06_29_at_10_29_02_am-1435598251158.png/ 404 78.365 ms - -





How to query S3 public dataset using redshift

Amazon AWS documentation is just awful and totally unhelpful. Feels good to get that out; now we can get down to the actual issue.

I am using SQL Workbench to connect to my Redshift cluster. I am able to connect fine but can't run any commands...

How can I query the Common Crawl S3 dataset?




Use hardware in a aws cloud infrastructure

Hello, I created a complete application and it is hosted on AWS. Right now I have to use a hardware device with my application, and it needs to be connected to the server; the problem is that the server is an Amazon EC2 instance. How can I solve this problem? I thought of connecting my VPC to my company VPN and using a dedicated server there to connect my device, but I'm not sure if there is a better solution.




How to add nodes of a autoscaling group automatically to nginx or HAProxy?


In the architecture shown at http://ift.tt/1QZrN0r, the application server cluster belongs to an Auto Scaling group but is load balanced by a software load balancer (like nginx or HAProxy). My question is how the nodes in the Auto Scaling group register themselves automatically with the load balancer (as I understand it, Elastic Load Balancing has this capability built in, which may not be the case for nginx or HAProxy).
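One common pattern is a small script, run from cron or triggered by Auto Scaling notifications, that re-renders the load balancer's backend list from the group's current members and reloads it. A hedged boto3 sketch, with the group name, config path, port, and reload command all as placeholders:

import subprocess
import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

# Look up the current members of the (placeholder) group.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["app-asg"])["AutoScalingGroups"][0]
instance_ids = [i["InstanceId"] for i in group["Instances"]]

# Resolve each member to its private IP.
ips = []
for reservation in ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]:
    for instance in reservation["Instances"]:
        ips.append(instance["PrivateIpAddress"])

# Rewrite an HAProxy backend section and reload the proxy.
backend_lines = "".join(
    "    server app-{0} {1}:8080 check\n".format(n, ip) for n, ip in enumerate(ips))
with open("/etc/haproxy/backends.cfg", "w") as f:   # included from haproxy.cfg
    f.write("backend app\n" + backend_lines)
subprocess.call(["service", "haproxy", "reload"])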




Can I use loopback of version higher than 2.0 on AWS?

I am trying to develop a server side using LoopBack with a database connector. However, I am quite confused about installing LoopBack on AWS.

reference for installing loopback on AWS

That website mentions that only LoopBack version 2.0 can be installed. Yet when I browse the LoopBack website, http://ift.tt/1QZrN0m, it seems possible to install a LoopBack version higher than 2.0 on AWS. Since some features are only available after version 2.1x, it would be nice if AWS allowed installation of a LoopBack version higher than 2.0. Could anyone help me solve the problem? BTW, I am only using the AWS free tier and do not intend to pay at this moment.




ISO8601 format - Amazon Web service

I am receiving the following error: 'Timestamp 2015-06-29T15%3A08%3A27Z must be in ISO8601 format'. I have double-checked and I believe it is in ISO8601 format (then URL-encoded).

When I match the format against the one created within http://ift.tt/1Do0Laa, it seems to be the same.

Any idea?

  1. Mine: 2015-06-29T15%3A08%3A27Z
  2. Theirs: 2015-06-29T15%3A12%3A47Z

Thanks,

Clare
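For comparison, this is how the timestamp is usually built and then percent-encoded only once, at the point where the query string is assembled. One possible reading of the error quoting the %3A form is that the service received the already-encoded value, i.e. it was encoded before being put into the query string and then encoded again; a Python 2 sketch of the expected ordering:

import datetime
import urllib

# Build the plain ISO8601 value first...
timestamp = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
print(timestamp)                         # e.g. 2015-06-29T15:08:27Z  (plain)
# ...and percent-encode it exactly once, when assembling the query string.
print(urllib.quote(timestamp, safe=""))  # 2015-06-29T15%3A08%3A27Z (encoded once)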




How to get the AWS instance types to display

I am trying to publish my website to AWS from Visual Studio, and am running into some difficulty. I am supposed to select the instance type in the wizard, but for some reason the instance types are not populating. Is there something that I haven't done in the AWS console? I've already created a new t2.micro instance and generated a key pair.

Here is a screenshot of where I am stuck in the Visual Studio wizard: an empty instance type box...




Django-storages to serve different urls (HTTP or HTTPS) to different users?

Right now my site is widely used behind one corporate network. For whatever reason, they refuse to connect to the recent HTTPS version of my site and I can't work out why. I've settled on the idea that they have issues in their backend.

Until I figure out a workaround, I thought I would take them back to the non-SSL version of my site. I'm not sure what the best design pattern here is with django-storages. Right now in my settings file I have:

AWS_PRELOAD_METADATA = True
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATICFILES_STORAGE = 'tiingo.s3pipeline.S3PipelineStorage'
AWS_STORAGE_BUCKET_NAME = 'media.website.com'
AWS_S3_SECURE_URLS = True
AWS_S3_CUSTOM_DOMAIN =  "media.website.com"

I'm not sure of the best design pattern for detecting whether the user is coming over HTTP or HTTPS and then serving the secure URLs or not.

If I wasn't using pipeline and storages, I could directly choose to load different .css or .js files on the template side, but that would feel too hacky here.

Any ideas?
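One hedged option is protocol-relative URLs, so the browser matches the page's scheme without any per-user branching. A sketch of a custom storage subclass (the class name is made up):

from storages.backends.s3boto import S3BotoStorage


class SchemeRelativeS3Storage(S3BotoStorage):
    """Sketch: strip the scheme so media URLs become //media.website.com/...,
    letting the browser reuse whatever scheme the page was loaded over."""

    def url(self, name):
        url = super(SchemeRelativeS3Storage, self).url(name)
        if url.startswith("https:"):
            return url[len("https:"):]
        if url.startswith("http:"):
            return url[len("http:"):]
        return url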




See where function is defined

How can I see where this function: http://ift.tt/1HspGOd

copyObject(params = {}, callback) ⇒ AWS.Request

is defined?

What I tried:

var AWS = require('aws-sdk');
AWS.S3.prototype.copyObject
=> undefined

But that is undefined

I want to know because I want to stub this function with proxyquire:

 var aws_stub = {};
 var Mover =  proxyquire('../../callback/mover',
                         {'aws-sdk': aws_stub}
                         ).Mover;

 var fake_aws_copyObject = function(params, func){func(null, "succeed")};
     fake_aws_copyObject_stub = sinon.spy(fake_aws_copyObject);
     aws_stub.AWS.S3 ... ??   = fake_aws_copyObject_stub; 




Error in Mongodb replication on aws

I am trying to do replication in MongoDB on AWS. I have created two Ubuntu instances, SSHed into them, and initiated replication. However, when I try to add the second node using rs.add(), it gives the following error:

" Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded:"

I am following this link to do this: http://ift.tt/1hikds8

This link suggests that I have to create three instances first; however, I have just created 2 instances. Please tell me if that is the reason for the error.




AWS Elastic Beanstalk Docker main command

How can we specify the command that Elastic Beanstalk must use when running a Docker image?

We can specify a CMD in the Dockerfile, but let's imagine that the image is not ours and we don't want to change the Dockerfile...




What are the benefit of encrypting AWS RDS instance

If we have a MySQL RDS instance in AWS which can only be accessed from the EC2 instances in the private subnet, is there any benefit, from a security point of view, in encrypting it (using default RDS encryption)? The only way somebody can access the DB is by getting inside the private subnet of AWS, and in that case encrypting versus not encrypting doesn't help, since the attacker can access the data from the EC2 instances anyway. The only difference it would make is that with an encrypted RDS it would take more time to dump the data and copy it somewhere else. Otherwise, what are the other benefits of having a private RDS instance encrypted? Assume the only backups of the DB are in AWS itself using its default DB instance backup, so nobody can access the data directly from DB backups either.




How can I trigger multiple AWS CloudFormation tasks in Ansible?

I am trying to find a way to trigger multiple CloudFormation API calls via Ansible in parallel.

As the stack has grown a lot, triggering each task separately is eating up a lot of time. I looked at the async option with poll set to 0 (fire and forget), but this doesn't trigger the CloudFormation task at all.

Any suggestions ?




Can I get a detailed profile of a user with their total reviews from Amazon?

Can I get a detailed profile of a user with their total reviews from Amazon? I can get reviews for products, as the API returns me an iframe, but I need the total reviews made by the user.




Downloading public data from Amazon Web Services (AWS) into RStudio

I'd like to download public data from Amazon Web Services (AWS) directly into RStudio. At this point, any dataset will do, but let's just say I want to download the NASA NEX data:

http://ift.tt/1JjbyEu

In Amazon, you can use their wget utility, but I want to download from within RStudio. Additionally, AWS lists a handful of programming languages that speak seamlessly to their API, but R / RStudio aren't on the list.

http://ift.tt/14Cr4dG

Does anyone know an easy way to speak directly to AWS API from within RStudio?




Amazon Web service - signature

I've been receiving an error from Amazon Web Services: InvalidParameterValue - Either Action or Operation query parameter must be present.

I believe it is most likely due to the signature being incorrect, as the XML document and header match those of a test I did in their Scratchpad.

Does anything stand out as being incorrect?

Thanks,

Clare

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using System.Web;

public class SignatureHelper
{
    private static string ConstructCanonicalQueryString(SortedDictionary<string, string> sortedParameters)
    {
        var builder = new StringBuilder();

        if (sortedParameters.Count == 0)
        {
            builder.Append(string.Empty);
            return builder.ToString();
        }

        foreach (var kvp in sortedParameters)
        {
            builder.Append(PercentEncodeRfc3986(kvp.Key));
            builder.Append("=");
            builder.Append(PercentEncodeRfc3986(kvp.Value));
            builder.Append("&");
        }

        var canonicalString = builder.ToString();
        return canonicalString.Substring(0, canonicalString.Length - 1);
    }

    /// <summary>
    /// Percent-encode (URL Encode) according to RFC 3986 as required by Amazon.
    /// This is necessary because .NET's HttpUtility.UrlEncode does not encode
    /// * according to the above standard. Also, .NET returns lower-case encoding
    /// * by default and Amazon requires upper-case encoding.
    /// </summary>
    /// <param name="value"></param>
    /// <returns></returns>
    private static string PercentEncodeRfc3986(string value)
    {
        value = HttpUtility.UrlEncode(string.IsNullOrEmpty(value) ? string.Empty : value, Encoding.UTF8);

        if (string.IsNullOrEmpty(value))
        {
            return string.Empty;
        }

        value = value.Replace("'", "%27")
                .Replace("(", "%28")
                .Replace(")", "%29")
                .Replace("*", "%2A")
                .Replace("!", "%21")
                .Replace("%7e", "~")
                .Replace("+", "%20")
                .Replace(":", "%3A");

        var sbuilder = new StringBuilder(value);

        for (var i = 0; i < sbuilder.Length; i++)
        {
            if (sbuilder[i] != '%')
            {
                continue;
            }

            if (!char.IsLetter(sbuilder[i + 1]) && !char.IsLetter(sbuilder[i + 2]))
            {
                continue;
            }

            sbuilder[i + 1] = char.ToUpper(sbuilder[i + 1]);
            sbuilder[i + 2] = char.ToUpper(sbuilder[i + 2]);
        }

        return sbuilder.ToString();
    }

    public string SignRequest(Dictionary<string, string> parametersUrl, Dictionary<string, string> parametersSignture)
    {
        var secret = Encoding.UTF8.GetBytes(parametersSignture["Secret"]);
        var signer = new HMACSHA256(secret);

        var pc = new ParamComparer();
        var sortedParameters = new SortedDictionary<string, string>(parametersUrl, pc);
        var orderedParameters = ConstructCanonicalQueryString(sortedParameters);

        // Derive the bytes that need to be signed.
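        // Note: the Signature Version 2 string-to-sign separates the verb, host, path and
        // query with bare "\n" characters; the " \n" appended after the request method
        // below adds an extra space to the string that gets signed.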
        var builder = new StringBuilder();
        builder.Append(parametersSignture["RequestMethod"])
                .Append(" \n")
                .Append(parametersSignture["EndPoint"])
                .Append("\n")
                .Append("/\n")
                .Append(orderedParameters);

        var stringToSign = builder.ToString();
        var toSign = Encoding.UTF8.GetBytes(stringToSign);

        // Compute the signature and convert to Base64.
        var sigBytes = signer.ComputeHash(toSign);
        var signature = Convert.ToBase64String(sigBytes);

        return signature.Replace("=", "%3D").Replace("/", "%2F").Replace("+", "%2B");
    }

    public class ParamComparer : IComparer<string>
    {
        public int Compare(string p1, string p2)
        {
            return string.CompareOrdinal(p1, p2);
        }
    }
}




Hive support for UTF-16 files

We are using Hive 0.13.1 on AWS EMR, and everything was fine until we had to work with UTF-16 encoded files.

The symptoms are:

  1. There are some string entries with "special symbols" at the start; most probably these are the first entries in a file, with BOM characters included.

  2. For every line entry, there is an all-NULLs entry immediately after it. Most probably that is due to the LINES TERMINATED BY '\n' table creation directive interacting badly with UTF-16 encoded files.

  3. Whenever I try to find specific entries by specifying, say, "provider_name = 'provider1'", nothing comes out, even though I can see such entries exist in the table when doing selects like "select * from mytable limit 5". It seems the STRINGs in the table and the STRINGs typed into the Hive CLI don't match. They don't even match when I copy the value directly from the output of another query - nothing is found.

Aside from that, everything works fine: count, select distinct, select .. limit etc.

I tried to apply the fix suggested here, replacing "GBK" with "UTF-16", with no luck. Perhaps the problem is that this fix was applied to Hive 0.14.0, and AWS EMR only supports 0.13.1 at the moment.

(However, this solution seems to be rather an old one - check here. Still, it doesn't help a bit.)

Could someone suggest anything?
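
One workaround sketch, under the assumption that the files can be re-encoded before Hive reads them: convert them from UTF-16 to UTF-8 (which also drops the BOM) so that the default SerDe and the '\n' line handling behave normally. The file names below are placeholders:

import codecs

def utf16_to_utf8(src_path, dst_path):
    # The "utf-16" codec reads the BOM to pick the endianness and strips it.
    with codecs.open(src_path, "r", encoding="utf-16") as src, \
         codecs.open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line)

utf16_to_utf8("input_utf16.tsv", "output_utf8.tsv")  # hypothetical file names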




AWS CodeDeploy agent not able to install?

Hi, I am trying to install the CodeDeploy agent on my EC2 instance but am not able to succeed. I am following the steps below:

sudo apt-get update
sudo apt-get install awscli
sudo apt-get install ruby2.0
cd /home/ubuntu
sudo aws s3 cp s3://bucket-name/latest/install . --region region-name
sudo chmod +x ./install
sudo ./install auto 

but the ./install file is missing for me. I don't think it's a problem with the AMI, as I used the same steps with the same AMI on a different EC2 instance. Does anyone have any idea? Please help me.




AWS Pricing VS Google-Cloud-Platform Pricing

I want to host my website (PHP/MySQL) on a cloud platform. My website is new and I don't expect much traffic, so I tried to compare the lowest-configuration costs of cloud services between Google and AWS. The lowest configuration cost according to the Google Cloud Platform pricing calculator is as follows:

  • Google Compute Engine (f1-micro): $4.09
  • Google Cloud SQL (D0 Instance): $11.30
  • Datastore (1GB): $0.18
  • Total: $15.57 (For details, have a look at this link: https://goo.gl/wJZikT )

Meanwhile, the lowest configuration cost according to the AWS pricing calculator is:

  • Amazon EC2 (t1.micro): $14.64
  • Amazon RDS (db.t1.micro with 1GB of storage): $18.42
  • Amazon S3: $0.11
  • Total: $33.17

(For details, have a look at this link http://goo.gl/Pe7dFt )

My question is: how can there be such a big difference in the cost of cloud services between Google Cloud Platform and AWS? Is there anything wrong with my estimation? If so, please share with me a link to a comparable minimal configuration on AWS...

Thanks.




Amazon Kinesis Vs EC2

Sorry for the silly question, I am new to cloud development. I am trying to develop a realtime processing app in the cloud that can process data from a sensor in real time. The data stream has a very low data rate, <50 Kbps per sensor, and probably fewer than 10 sensors will be running at once.

I am confused about what Amazon Kinesis adds for this application. I can use EC2 directly to receive my stream and process it. Why do I need Kinesis?
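
For context, a minimal sketch of what the producer side looks like with Kinesis (boto3); the stream name, region and partition key are made up. What Kinesis mainly adds over a bare EC2 listener is a managed, durable buffer that several consumers can read independently, rather than anything a single instance could not handle at this data rate:

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # region is an assumption

def send_reading(sensor_id, value):
    # Records land on a shard chosen by the partition key; a single shard is
    # plenty for <50 Kbps per sensor and fewer than 10 sensors.
    kinesis.put_record(
        StreamName="sensor-stream",    # hypothetical stream name
        Data=json.dumps({"sensor": sensor_id, "value": value}).encode("utf-8"),
        PartitionKey=sensor_id,
    )

send_reading("sensor-1", 23.4)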




mysqld stops unexpectedly on t2.micro Amazon Linux instance

I'm running a t2.micro Amazon Linux instance on EC2. I installed LAMP and WordPress on it.

I have been experiencing many unexpected mysqld shutdowns whenever I leave my terminal connected to my instance via ssh before going to bed or going out. When I wake up or come back home, mysqld has always shut itself down. (I'm not sure if leaving the session open is related to the self-stopping issue or not.) Or is this a memory problem? (A t2.micro instance provides only 1 GB of memory.)

And every time mysqld shuts down, the file permissions I had configured are gone, and it's annoying to re-apply them every time.

I just started working with this kind of server setup and I'm still a newbie... Could someone tell me what to do to prevent future mysqld shutdowns even if I leave the terminal connection open, and how to configure things so that I don't have to re-apply file permissions after a shutdown?

Here is the log from mysqld:

150627 18:02:22 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150627 18:02:22 InnoDB: Compressed tables use zlib 1.2.7
150627 18:02:22 InnoDB: Using Linux native AIO
150627 18:02:22 InnoDB: Initializing buffer pool, size = 128.0M
150627 18:02:22 InnoDB: Completed initialization of buffer pool
150627 18:02:22 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 10566571
150627 18:02:22 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 10566581
150627 18:02:22 InnoDB: Waiting for the background threads to start
150627 18:02:23 InnoDB: 5.5.42 started; log sequence number 10566581
150627 18:02:23 Note Server hostname (bind-address): '0.0.0.0'; port: 3306
150627 18:02:23 Note - '0.0.0.0' resolves to '0.0.0.0';
150627 18:02:23 Note Server socket created on IP: '0.0.0.0'.
150627 18:02:23 Note Event Scheduler: Loaded 0 events
150627 18:02:23 Note /usr/libexec/mysqld: ready for connections.
Version: '5.5.42' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Commu$
mysqld_safe Number of processes running now:
150628 18:18:29 mysqld_safe mysqld restarted
/usr/bin/mysqld_safe: line 165: /usr/bin/nohup: Cannot allocate memory
150628 18:22:53 mysqld_safe Starting mysqld daemon with databases from /var/lib$
150628 18:22:53 Note Plugin 'FEDERATED' is disabled.
150628 18:22:53 InnoDB: The InnoDB memory heap is disabled
150628 18:22:53 InnoDB: Mutexes and rw_locks use GCC atomic builtins
150628 18:22:53 InnoDB: Compressed tables use zlib 1.2.7
150628 18:22:53 InnoDB: Using Linux native AIO
150628 18:22:53 InnoDB: Initializing buffer pool, size = 128.0M
150628 18:22:53 InnoDB: Completed initialization of buffer pool
150628 18:22:53 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
150628 18:22:53 InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
150628 18:22:54 InnoDB: Waiting for the background threads to start
150628 18:22:55 InnoDB: 5.5.42 started; log sequence number 11269379
150628 18:22:55 Note Server hostname (bind-address): '0.0.0.0'; port: 3306
150628 18:22:55 Note - '0.0.0.0' resolves to '0.0.0.0';
150628 18:22:55 Note Server socket created on IP: '0.0.0.0'.
150628 18:22:55 Note Event Scheduler: Loaded 0 events
150628 18:22:55 Note /usr/libexec/mysqld: ready for connections.
Version: '5.5.42' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Commu$
150628 18:28:10 mysqld_safe Number of processes running now: 0
150628 18:28:13 mysqld_safe mysqld restarted
/usr/libexec/mysqld: error while loading shared libraries: libkrb5.so.3: failed$




Sunday 28 June 2015

What are some similar free options to Amazon Web Services?

We are working on an Android app and we want a free substitute for AWS. Our major requirement is live video streaming, similar to Periscope.




AWS IAM policy for ec2 resource

My scenario:

I have root access to an AWS account and I want to create an IAM user with a policy that can only describe a single EC2 instance, but not the other instances in my account.
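
For what it's worth, a hedged sketch of attaching an instance-scoped inline policy with boto3; the user name, policy name, region, account ID and instance ID are placeholders. One caveat worth verifying against the IAM documentation: the ec2:Describe* actions generally do not support resource-level permissions, so scoping by instance ARN works for actions such as ec2:StartInstances or ec2:StopInstances but may not narrow what DescribeInstances returns:

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        # Placeholder region, account ID and instance ID.
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
    }]
}

iam.put_user_policy(
    UserName="limited-user",        # placeholder IAM user
    PolicyName="single-instance",   # placeholder policy name
    PolicyDocument=json.dumps(policy),
)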




Laravel 5.1 AWS S3 Storage, how to link images?

I am in the process of creating a "Content Management System" for a start-up company. I have a Post.php model in my project; the following code snippet is taken from the Create method:

        if(Request::file('display_image') != null){
        Storage::disk('s3')->put('/app/images/blog/'.$post->slug.'.jpg', file_get_contents(Request::file('display_image')));
        $bucket = Config::get('filesystems.disks.s3.bucket');
        $s3 = Storage::disk('s3');
        $command = $s3->getDriver()->getAdapter()->getClient()->getCommand('GetObject', [
            'Bucket'                     => Config::get('filesystems.disks.s3.bucket'),
            'Key'                        => '/app/images/blog/'.$post->slug.'.jpg',
            'ResponseContentDisposition' => 'attachment;'
        ]);

        $request = $s3->getDriver()->getAdapter()->getClient()->createPresignedRequest($command, '+5 minutes');

       $image_url = (string) $request->getUri();
    $post->display_image = $image_url;

The above code checks if there is a "display_image" file input in the request object.

If it finds a file, it uploads it directly to AWS S3 storage. I want to save the link to the file in the database, so I can use it later in my views.

Hence I use this piece of code:

            $request = $s3->getDriver()->getAdapter()->getClient()->createPresignedRequest($command, '+5 minutes');
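            // Note: this presigned URL is only valid for the '+5 minutes' window passed
            // above; once it expires, S3 answers requests for it with 403.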

       $image_url = (string) $request->getUri();
    $post->display_image = $image_url;

I get a URL; the only problem is that whenever I visit the $post->display_image URL I get a 403 permission denied. Obviously no authentication takes place when using the URL of the image.

How do I solve this? I need to be able to link all my images/files from Amazon S3 in the front-end interface of the website.




Laravel 5 session never persisted in AWS

I'm trying to deploy a Laravel app on AWS using OpsWorks and an Ubuntu machine. Whatever driver I choose, the session is never persisted, while in my local environment everything was working perfectly.

My initial setup used a custom ElastiCache driver to handle sessions (http://ift.tt/1TZnX6s), but even if I choose the file, cookie or database driver, it never stores my session.




Why am I getting a CORS error with Rails + AWS S3?

I've uploaded files to an AWS S3 bucket, and I want my app (which is running on a local server) to be able to download files from the bucket. I've spent the last 1.5 hours trying to find a solution, but the answers I found didn't work. I've set my CORS permissions on AWS to the following:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://ift.tt/1f8lKAh">
  <CORSRule>
    <AllowedOrigin>http://localhost:3000</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

Does anyone have any ideas? I've also tried setting the 'AllowedOrigin' to *, but this didn't fix the issue either. Thanks in advance.




Fast Secure way to upload data into Amazon S3

We want to upload data in a secure way from our datacentre, which is a bank, to Amazon S3.
Fast way - Tsunami UDP (it seems that UDP protocols are much faster than TCP-based protocols like scp, ftp, etc.). However, Tsunami UDP can't stream directly into S3, but I came across this post with http://ift.tt/1Bk1Azk. This plugin seems to be able to do so.
Are there any other recommendations here?

Secure - Amazon Direct Connect might have added costs, so we are not considering it for the initial phase. Amazon Import/Export might only be feasible for the first-time upload.
Any recommendations here on how to create a secure channel between AWS and the datacentre?
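
One more option worth noting, sketched below under the assumption that plain HTTPS throughput over the internet is acceptable: the S3 transfer manager in boto3 already gives TLS in transit, parallel multipart uploads for speed, and optional server-side encryption at rest. The file, bucket and key names are placeholders:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3", region_name="eu-west-1")  # region is an assumption

# Split large objects into 64 MB parts and upload up to 10 parts in parallel.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                        multipart_chunksize=64 * 1024 * 1024,
                        max_concurrency=10)

s3.upload_file(
    "export.dump",              # placeholder local file
    "bank-transfer-bucket",     # placeholder bucket
    "uploads/export.dump",      # placeholder key
    ExtraArgs={"ServerSideEncryption": "AES256"},
    Config=config,
)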




Amazon Web Services API- getting write access to read only files (Linux)

I am very new to API programming. The software I am developing uses a FUSE driver on a Linux virtual machine.

Under Linux, there is a package called 'space dock' that encapsulates a lot of the API commands (GET, etc.) required for accessing data.

What is happening is this:

When I run my application, I can view my files through the API; this API uses Amazon Web Services and Amazon's Elastic Compute Cloud. I can only view them, I cannot write to them (read-only access).

When I end my process, a folder called 'bfs_cache' gets saved onto my virtual machine. This cache directory is only accessible when the program ends, and it turns out that the files have write access under this cached directory.

The problem is that I am unable to figure out how to write the cached files into physical memory while the application is running.

I am reading the website below, and it shows that the private memory-mapped page caches that APIs use have read-only permissions. Yet this website does not explain how to switch the page cache to shared memory mapping (which has write access).

http://ift.tt/1kQSzs8

Using Linux x86, is there a command I can invoke that will search for a file in the cache and write it to physical memory?

How can I change the read-only permissions using the AWS REST API?

Note: this 'space dock' package automatically generates all the GET and query commands used in an OAuth workflow. This means I am not very good with the manual commands and would like to avoid a manual OAuth workflow.

Thank you very much, and if there is anything unclear please let me know.




How do I remove default ssh host from ssh configuration?

I used to connect to Amazon Web Services using the ssh command and an application.pem key. Now when I try to connect to other platforms such as GitHub, my ssh client looks for the same application.pem key and tries to connect to AWS. How do I connect to GitHub or change the default host and key configuration? I am using an Ubuntu 13.10 system, and the following is my ssh output.

pranav@pranav-SVF15318SNW:~/.ssh$ ssh Warning: Identity file application.pem not accessible: No such file or directory.




Paperclip with Amazon S3: how to stop it from making a request when asked for url

sc.image.url(:thumb)

What I see then in the logs:

[AWS S3 200 0.601484 0 retries] head_object(:bucket_name=>"****",:key=>"sc/images/000/000/526/original/file.jpg")

Why does Paperclip make this request? It seems it does this only for AWS.

Is there a way to stop it from doing the check so it simply returns the URL without making additional requests?




Jquery Ajax Post On AWS Instance Not Working

I've run into an incredibly weird problem that I'm not entirely sure how to debug. I wrote a POST function here that works perfectly fine locally and then breaks when deployed to AWS.

    $.ajax({
        url: upload_url,
        type: 'POST',
        data: {'blobs': JSON.stringify(Blobs)},
        success: function(data) {}
    });

Any ideas as to why the POST requests are not being made? I've added a console.log inside the AJAX call, and I see the log locally... but not on AWS. The code is identical.




Load ruby object from amazon s3 in rails

I want to be able to load a .rb object from Amazon S3. Right now in my project I store my reports in the app/reports directory. They are objects extending the Prawn::Document class (pdf_report.rb):

class PdfReport < Prawn::Document
  def initialize(default_prawn_options={})
    super(default_prawn_options)
  end
  def header(title=nil)
    if title
      text title, size: 14, style: :bold_italic, align: :center
    end
  end
  def footer
  end
end

My objective is to load these classes from an Amazon S3 bucket. How can I do this? Thank you.




Can I upgrade Elasticache Redis Engine Version without downtime?

I cannot find any information in the AWS documentation on whether modifying the Redis engine version will cause downtime. It does not explain how the upgrade occurs, other than that it's performed in the maintenance window.

Is it safe to upgrade a production ElastiCache Redis instance via the AWS console without loss of data or downtime?

Note: The client library we use is compatible with all versions of Redis so the application should not notice the upgrade.




Pointing DNS to IP with WordPress on AWS

I apologize if I am asking a dumb question but I am fairly new to both WordPress and AWS.

I am running WordPress on an AWS EC2 Linux instance, and I have just changed my host records on Namecheap to point to the Elastic IP. I just tested it out: when I go to my_name.com, it works properly. But when I click a new page within my website (for example: my_name.com/about-me), it goes back to the Elastic IP address (i.e. 70.50.70.100/about-me). Am I missing a step somewhere? I have been googling like a madman but to no avail.

Any help is greatly appreciated! Thanks!




Amazon S3 Bucket Policy: how to prevent public access to only certain files?

So far, my root bucket has all permissions enabled only for my AWS account. My bucket policy is the default one given in Amazon's tutorial:

{
    "Version":"2012-10-17",
    "Statement": [{
    "Sid": "Allow Public Access to All Objects",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example.com/*"
  }
 ]
}

Does this mean that public users can only read any file in my bucket, but not upload files or view/list/edit permissions? Lastly, how can I modify these permissions so that certain files redirect the user to my error page?
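
On the question in the title (keeping only some files public), one low-risk sketch, with the bucket name and prefix as assumptions: instead of allowing s3:GetObject on the whole bucket, scope the Allow to a dedicated prefix such as public/, so everything outside that prefix falls back to the default private behaviour. The redirect-to-an-error-page part is a separate S3 static-website feature (routing rules) and is not handled by the bucket policy itself:

import json
import boto3

s3 = boto3.client("s3")

# Only objects under public/ stay world-readable; other objects are reachable
# only by principals that have their own permissions on the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPublicReadOnPublicPrefixOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example.com/public/*"
    }]
}

s3.put_bucket_policy(Bucket="example.com", Policy=json.dumps(policy))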




How to attach instances to AWS autoscale when creating autoscale group?

Currently I have two instances running behind an ELB. If I create an Auto Scaling group with min size 2 using this ELB, it spawns two more instances and puts them behind the ELB.

But I only want instances to come up on a scale-out condition, not on creation of the group.
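
A hedged sketch of one way to end up with the group managing the two existing instances without launching replacements at creation time: create the group empty, then attach the running instances (attaching raises the desired capacity by one per instance). The group name, launch configuration, zones, ELB name and instance IDs are placeholders:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # region is an assumption

# Create the group with no instances of its own yet.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                 # placeholder
    LaunchConfigurationName="web-launch-config",    # placeholder, must already exist
    MinSize=0,
    MaxSize=4,
    DesiredCapacity=0,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    LoadBalancerNames=["my-elb"],                   # placeholder ELB name
)

# Attach the two instances that are already serving behind the ELB.
autoscaling.attach_instances(
    AutoScalingGroupName="web-asg",
    InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"],  # placeholders
)

# Once attached, MinSize can be raised so the group keeps maintaining them.
autoscaling.update_auto_scaling_group(AutoScalingGroupName="web-asg", MinSize=2)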




Elastic Beanstalk CLI, how do I create the environment with an RDS instance?

According to the AWS docs here, it appears that when I do an eb init on a project, I should be prompted to create an RDS instance. When I run this, instead I only see:

~$ eb init

Select an application to use
1) eb-demo-php-simple-app
2) aws-eb-deploy
3) sb-test1
4) [ Create new Application ]
(default is 1): 4

Enter Application Name
(default is "eb-demo-php-simple-app2"):
Application eb-demo-php-simple-app2 has been created.

It appears you are using Docker. Is this correct?
(y/n): y
Do you want to set up SSH for your instances?
(y/n): y

Select a keypair.
1) ####
2) ####
3) ####
4) ####
5) ####
6) [ Create new KeyPair ]
(default is 6): 5

~$

I'm using the PHP demo app from here that they provide for testing out the scripts. Following the docs, I load up the EB instance, but it fails since the RDS backend the app requires never gets set up.

I assume the documentation is out of date and the CLI no longer has this functionality. The old EB CLI has 'deprecated' written all over it, so I'm not using that.

How do I set up RDS with the EB CLI? Is it still possible?




bash/shell jq parse to variable from aws json

I am trying to parse the JSON result from the AWS CLI, but I get an error or null when I use $ip; when I use a specific IP it works. Something is wrong when I use the variable inside the jq command.

#!/bin/bash

aws ec2 describe-addresses --region eu-west-1 > 1.txt
ipList=( "52.16.121.238" "52.17.250.188" )

for ip in "${ipList[@]}";
do
    echo $ip
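    # Note: the single-quoted jq program below never sees the shell variable $ip;
    # jq only knows about variables that are passed in explicitly (for example
    # with --arg ip "$ip").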
    cat 1.txt | jq '.Addresses | .[] | select (.PublicIp==$ip) |    .InstanceId'
    #echo $result

done

Please advise, Thanks. Cfir.




How to point subdomain to another document root in aws

I am using AWS for the first time. I have created an EC2 instance, installed an Apache server, and set up domain mapping from GoDaddy.

Now I want to create subdomains and point them to other document roots.

Like this :

www.mydomain.com should have document root html\mydomain

http://ift.tt/1LwqfGO should have document root html\testsubdomain

www.*.mydomain.com should have document root html\subdomain
                           (* any subdomain other than test)

I tried to edit the vhost file but could not find it in the Apache installation. Generally, where and how do I achieve this?

Do I need to use Route53 for this?




Restrict AWS Tag Names

Is there a way to create an AWS policy that would restrict an AWS tag name?

For example, if I wanted to create a tag namespace like: admin.env.prod = true, user.system.profile = webserver

I want to prevent a user from creating a tag whose name starts with 'admin.' but allow 'user.' (or anything other than 'admin.*', really).




AWS S3 - com.amazonaws.AmazonServiceException: Request ARN is invalid

I'm trying to make my Android app download images from AWS S3. However, the following exception keeps coming up:

com.amazonaws.AmazonServiceException: Request ARN is invalid (Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError; Request ID: 3481bd5f-1db2-11e5-8442-cb6f713243b6)
            at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:710)
            at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:385)
            at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:196)
            at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:875)
            at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.assumeRoleWithWebIdentity(AWSSecurityTokenServiceClient.java:496)
            at com.amazonaws.auth.CognitoCredentialsProvider.populateCredentialsWithSts(CognitoCredentialsProvider.java:671)
            at com.amazonaws.auth.CognitoCredentialsProvider.startSession(CognitoCredentialsProvider.java:555)
            at com.amazonaws.auth.CognitoCredentialsProvider.refresh(CognitoCredentialsProvider.java:503)
            at com.application.app.utils.helper.S3Utils.getCredProvider(S3Utils.java:35)
            at com.application.app.utils.helper.S3Utils.getS3Client(S3Utils.java:45)
            at com.application.app.integration.volley.CustomImageRequest.parseNetworkError(CustomImageRequest.java:73)
            at com.android.volley.NetworkDispatcher.parseAndDeliverNetworkError(NetworkDispatcher.java:144)
            at com.android.volley.NetworkDispatcher.run(NetworkDispatcher.java:135)

I have a bucket and an identity pool. I have also created the required roles.

My Cognito_APPUnauth_Role has the following INLINE POLICY:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1435504517000",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket/*"
            ]
        }
    ]
}

I have a java class named S3Utils that has some helper methods.

public class S3Utils {
    private static AmazonS3Client sS3Client;

    private static CognitoCachingCredentialsProvider sCredProvider;

    public static CognitoCachingCredentialsProvider getCredProvider(Context context){
        if (sCredProvider == null) {
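            // Note: COGNITO_ROLE_UNAUTH below must be the unauthenticated role's full ARN
            // (arn:aws:iam::<account-id>:role/<name>); a malformed value there is one way
            // to get the "Request ARN is invalid" error from STS seen in the stack trace above.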
            sCredProvider = new CognitoCachingCredentialsProvider(
                    context,
                    Definitions.AWS_ACCOUNT_ID,
                    Definitions.COGNITO_POOL_ID,
                    Definitions.COGNITO_ROLE_UNAUTH,
                    null,
                    Regions.US_EAST_1
            );
        }

        sCredProvider.refresh();
        return sCredProvider;
    }

    public static String getPrefix(Context context) {
        return getCredProvider(context).getIdentityId() + "/";
    }

    public static AmazonS3Client getS3Client(Context context) {
        if (sS3Client == null) {
            sS3Client = new AmazonS3Client(getCredProvider(context));
        }
        return sS3Client;
    }

    public static String getFileName(String path) {
        return path.substring(path.lastIndexOf("/") + 1);
    }

    public static boolean doesBucketExist() {
        return sS3Client.doesBucketExist(Definitions.BUCKET_NAME.toLowerCase(Locale.US));
    }

    public static void createBucket() {
        sS3Client.createBucket(Definitions.BUCKET_NAME.toLowerCase(Locale.US));
    }

    public static void deleteBucket() {
        String name = Definitions.BUCKET_NAME.toLowerCase(Locale.US);
        List<S3ObjectSummary> objData = sS3Client.listObjects(name).getObjectSummaries();
        if (objData.size() > 0) {
            DeleteObjectsRequest emptyBucket = new DeleteObjectsRequest(name);
            List<DeleteObjectsRequest.KeyVersion> keyList = new ArrayList<DeleteObjectsRequest.KeyVersion>();
            for (S3ObjectSummary summary : objData) {
                keyList.add(new DeleteObjectsRequest.KeyVersion(summary.getKey()));
            }
            emptyBucket.withKeys(keyList);
            sS3Client.deleteObjects(emptyBucket);
        }
        sS3Client.deleteBucket(name);
    }
}

Part of the method where the exception occurs, in CustomImageRequest.java:

s3Client = S3Utils.getS3Client(context);
            ObjectListing objects = s3Client.listObjects(new ListObjectsRequest().withBucketName(Definitions.BUCKET_NAME).withPrefix(this.urlToRetrieve));
            List<S3ObjectSummary> objectSummaries = objects.getObjectSummaries();
            //This isn't just an id, it is a full picture name in S3 bucket.
            for (S3ObjectSummary summary : objectSummaries)
            {
                String key = summary.getKey();
                if (!key.equals(this.urlToRetrieve)) continue;
                S3ObjectInputStream content = s3Client.getObject(Definitions.BUCKET_NAME, key).getObjectContent();
                try {
                    this.s3Image = IOUtils.toByteArray(content);

                } catch (IOException e) {
                }

                return new Object();
            }

What am I doing wrong that causes this exception to be thrown every time? Thanks in advance.




Verify Amazon Login Access Token in AWS Lambda Call

I am trying to verify the access token returned from Login with Amazon in an AWS Lambda call.

According to the docs I need to:

To verify a token, make a secure HTTP call to http://ift.tt/1IlCm8D, passing the access token you wish to verify. You can specify the access token as a query parameter. For example:

http://ift.tt/1GFxKof!.....

Note Access tokens contain characters that are outside the allowed range for URLs. Therefore, you should URL encode access tokens to prevent errors.

In the Lambda function I have the following code:

var http = require('http');
var querystring = require('querystring');
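// Note: the http module speaks plain HTTP, so sending it to a TLS-only endpoint on
// port 443 is a common way to end up with the ECONNRESET shown below; the https
// module is the one that performs the TLS handshake.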

exports.handler = function(event, context) {

var postData = querystring.stringify({});

var options = {
    hostname: 'api.amazon.com',
    port: 443, // 443 for https // with 80 get ECONNREFUSED // with 443 ECONNRESET
    path: '/auto/O2/tokeinfo?access_token=' + event.token,
    method: 'POST',
    headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': postData.length
    }
};

var req = http.request(options, function(res) {
    console.log('STATUS: ' + res.statusCode);
    console.log('HEADERS: ' + JSON.stringify(res.headers));
    res.setEncoding('utf8');
    res.on('data', function (chunk) {
        console.log('BODY: ' + chunk);
    });
    res.on('end', function() {
        context.succeed("hello");  
    });
});

req.on('error', function(e) {
    console.log('problem with request: ' + e.message);
    context.fail('problem with request: ' + e.message); 
});

// write data to request body
req.write(postData);
req.end();

// context.fail('Something went wrong');
};

Unfortunately with this I am getting back

{
    "errorMessage": "problem with request: read ECONNRESET"
}

If I change port 443 to port 80 I get back ECONNREFUSED. If I don't send the req.write(postData); call, the Lambda function times out.

Is there another event on res that I need to be listening for?




AngularJS AWS S3 sdk putObject won't send multiple objects

Here is my controller code:

for(var i=0; i< $scope.files.length; i++){
  var bucket = new AWS.S3({ params: { Bucket: $scope.creds.bucket } });
  console.log($scope.files[i]);
  var file = $scope.files[i];
  var file_type = file.type.split("/")[1];
  var _uuid = uuid.new();
  var key = _uuid + "." + file_type;
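  // Note: 'var' is function-scoped, so this same 'key' binding is shared by every
  // putObject success callback created in this loop.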
  console.log("I: " + i);
  console.log("KEY: " + key);

  var params = { Key: key,
                 ContentType: file.type,
                 Body: file,
                 ServerSideEncryption: 'AES256' };

  bucket.putObject(params, function(err, data) {
    if(err) {
      // There Was An Error With Your S3 Config
      console.log(err);
      return false;
    }
    else {
      // Success!
      console.log("Temp Bucket Upload Done");
      $scope.uploaded.push(key);
      $scope.server_upl();
    }
  })
  .on('httpUploadProgress',function(progress) {
    // Log Progress Information
    //console.log(Math.round(progress.loaded / progress.total * 100) + '% done');
  });
}

Here, for every file, I'm creating a UUID. But if I browse for multiple (n) files, the for loop somehow runs n times and $scope.uploaded ends up with n entries, yet all the entries have the same UUID. What is the thing that I don't see here?




Amazon Alexa Skills Kit: How do you link with external app account / userId

In an Amazon Alexa skill request there is a userId, and I'm trying to understand what it is and whether there is a reference for it, because I want to link an Amazon Echo user account with an account in my own app, and to do this I would have to have some kind of static userId to work with.

Example request:

{
 "version": "1.0",
 "session": {
   "new": false,
   "application": {
   "applicationId": "amzn1.echo-sdk-ams.app.[unique-value-here]"
  },
  "sessionId": "session1234",
  "attributes": {},
  "user": {
    "userId": null //IS THERE A DETAILED REFERENCE OF THIS SOMEWHERE?
  }
},
"request": {
"type": "IntentRequest",
"requestId": "request5678",
"intent": {
  "name": "MyColorIsIntent",
  "slots": {
    "Color": {
      "name": "Color",
      "value": "blue"
    }
  }
}
}
}




Are there any cloud computing platforms that require no credit card for the free service?

I've tried Amazon Web Services, Google Cloud Platform and Microsoft Azure, and though these services all have a free plan, all of them require valid credit card info for registration, which I sadly cannot provide at the moment. I am a student, and I would like to use a VM (either Windows or Linux) to run longer calculations, but not immensely resource-heavy ones.

Are there any other services similar to the three mentioned above that can be used without a credit card? Or is there maybe some kind of workaround for these?

Thanks in advance.




EC2 server can't resolve hostnames

When trying to resolve a hostname (e.g. using dig), the server almost always fails, saying ;; connection timed out; no servers could be reached. Around one in ten attempts works, usually after a long wait.

The strange thing is that the same behavior also occurs if I query a different DNS server (Google's).

My default nameserver is Amazon's, at 172.31.0.2. I get it automatically when the server connects using DHCP.

Pinging the IPs (8.8.8.8 & 172.31.0.2) also usually fails.

I've checked the VPC settings and security group settings, but found nothing. The fact that it works every once in a while makes me even more confused.




ActiveRecord::Base.establish_connection with postgresql on AWS

I've got my database.yml configured like this:

default: &default
  adapter: postgresql
  encoding: utf8
  pool: 5
  timeout: 5000

production:
  <<: *default
  host: my_db_address
  port: 5432
  database: my_db_name
  username: my_db_user_name
  password: my_db_password

< test and development ommited >

When I establish the connection like this:

ActiveRecord::Base.establish_connection

it says ActiveRecord::AdapterNotSpecified - 'production'

It works, however, if I do it like this: ActiveRecord::Base.establish_connection({:adapter => 'postgresql', :database => 'my_db_name', :host => 'my_db_address', :port => '5432', :username => 'my_db_user_name', :password => 'my_db_password'})

I'd rather load the config from database.yml. How do I do this?

I'm on Rails 4.2.1 and Postgres 9.4




Socialengine 4.8.6 on AWS (EC2, RDS (MYSQL/INNODB), S3 and CLOUDFRONT) - White Screen on Startup/ possible INNODB issue

SocialEngine 4.8.6 shows a 'white screen' on startup; only http://ift.tt/1TWwvLy and sesystem.com/install can be accessed via the browser.

How the problem started: I needed a new database with a copy of the production data to test a new upgrade (SocialEngine 4.8.9). I used phpMyAdmin to copy the production DB (PROD) to a new development DB (COPYOFPROD); both databases reside on AWS RDS, on the same instance with the same user/password, and both connect via InnoDB.

I changed social/application/settings/database.php on my system from PROD to COPYOFPROD to test that the database was correct (yes - not very clever on a production system!). I got a white screen when trying to access the system via the browser with the new database.

I reverted the single config change in /application/settings/database.php to the original setting. The white screen remains, and the SocialEngine error logs report that it doesn't recognise the PROD database anymore. No other file in the SocialEngine install directories has changed.

Status: access to the RDS databases via phpMyAdmin is OK. Accessing SocialEngine at http://ift.tt/1TWwvLA and selecting 'requirements and redundancy check', the system reports the following:

MySQL 'OK'

MySQL 4.1 'Unable to check. No database adapter was provided.'

MySQL InnoDB Storage Engine 'Unable to check. No database adapter was provided.'

I therefore conclude that I have crashed the InnoDB service with my actions. I understand the service is sensitive and will crash if you change config entries.

I have read that the InnoDB log files need to be removed before the service will resume, so I tried removing the ib_logfiles and restarting MySQL. Result: mysql starts [ok] and the ib_logfiles are recreated, but SocialEngine still shows a 'white screen' and still reports 'No database adapter'.

My questions are :

1) How do I check that InnoDB is running correctly on AWS EC2/RDS MySQL? Note: I'm using Terminal on an OS X machine, connecting to EC2 via the standard EC2 user and a PEM key combination.

2) How do I access the MySQL monitor on AWS RDS with the appropriate permissions to check InnoDB status? Currently the system reports that you need 'PROCESS' rights when trying SHOW STATUS commands.

3) Which are the best logs in SocialEngine to see why the white screen is happening, and are there any tips? I am only assuming this is an InnoDB issue, and I need to confirm it.

I am a novice, so I'm not sure what my next steps are.

Many Thanks




Saturday 27 June 2015

Not able to resume a stopped cloud on AWS instances

I have been working on installing SolrCloud on AWS. On a fresh run everything works fine, and since I have used Hadoop as one of the dependencies it is understood that there is high availability. Taking this as the starting point, I tried to stop Cloudera Manager (essentially freezing Hadoop, Solr and the other components), then stop the instances and come back the next day to resume work. But this never works. Below are the step-by-step things I do before I switch off and resume:

  1. At 7:55, created folders for every datanode in the ~/recovery directory and also checked each node's health
  2. Copied the namenode current directory (nn + dn) from all 9 hosts with the help of the 2.*.sh script
  3. Stopped Cloudera Manager, ready to switch off the cluster
  4. At 8:04 the cluster is stopped in Cloudera Manager. Made sure there was enough time between events 2 and 3 above.
  5. Physically stopped the AWS instances at 8:05.
  6. Everything is stopped at 8:08.
  7. Started all the nodes again at 8:12.
  8. Everything started fine, but HDFS lost some blocks: some corrupted and some missing.
  9. SolrCloud failed totally, because most of the lost blocks are observed to belong to SolrCloud.

As you can see, I have taken all the precautions; I even redistributed the nn + dn data which I saved before switching off. But it did not work.

This has now failed for the 4th time, and restoring the cloud is a painful procedure. The reason I want to do this is to save the client some valuable money when we are not doing any testing.

I am still not sure why I can resume on my physical machine but not on AWS, and why only Solr loses data.




Laravel 5 Amazon AWS S3 Error: Client error: 403 RequestTimeTooSkewed

I'm trying to upload files to an S3 bucket via a Laravel app.

I get the following error:

S3Exception in WrappedHttpHandler.php line 152: Error executing "PutObject" on "http://ift.tt/1qvthlM

AWS HTTP error: Client error: 403 RequestTimeTooSkewed (client): The difference between the request time and the current time is too large

I've done some research, and many say that my machine's time is not synced. I'm afraid of messing with Homestead because I'm afraid of breaking something. Do I change my app timezone? I'm really not sure.

Please help, and thank you for taking the time.




Hosting webapp on an AWS EC2 server. Do I need to use the AWS "application services" for search and email?

I want to host my app on a VPS/VPC and am currently leaning towards an AWS EC2 server. I'm looking at the console right now and I see a bunch of services offered, like CloudSearch (a managed search service) and SES (an email sending service).

Considering that I have already written code to do these things (at least for the search) that works locally, should I still use these services? If so, why and how?




Installed MySQL on an Amazon EC2 instance, so when should I use an RDS instance?

I have a dynamic website and I need to migrate it to AWS. I am new to AWS and Linux, and I have a doubt while setting up the environment. I have installed MySQL and phpMyAdmin separately.

I have the following questions:

  1. How do I connect this installed MySQL with the installed phpMyAdmin? How do I access phpMyAdmin through the browser on AWS?
  2. Why would we need RDS then? Do I really need an RDS instance?

Please help me.