Thursday, October 15, 2015

AWS Dynamo DB Object Mapper Error

I am trying to query my AWS DynamoDB table "UserCreds" from Swift code. I first defined a class and mapped it to the DynamoDB table; then I call AWSDynamoDBQueryExpression() to search for a row in this table, but the program keeps crashing with the following error:

"ViewController11viewDidLoadFS0_FT_T_L_11DDBTableRow dynamoDBTableName]: unrecognized selector sent to class 0x102ec3d40]"

I have also ensured that the correct credentials are set up in the AppDelegate file. Any insight into why this could be failing would be much appreciated. Thanks!

Below is what the code looks like:

import UIKit

class ViewController: UIViewController {

override func viewDidLoad() {
    super.viewDidLoad()


    class DDBTableRow :AWSDynamoDBObjectModel {

        var UserIdentifier:String?
        var UserFullName:String?


        class func dynamoDBTableName() -> String! {
            return "UserCreds"
        }

        class func hashKeyAttribute() -> String! {
            return "UserIdentifier"
        }

        class func rangeKeyAttribute() -> String! {
            return "UserFullName"
        }

        //MARK: NSObjectProtocol hack
        override func isEqual(object: AnyObject?) -> Bool {
            return super.isEqual(object)
        }

        override func `self`() -> Self {
            return self
        }
    }


    //SEARCH FOR A ROW
    let dynamoDBObjectMapper = AWSDynamoDBObjectMapper.defaultDynamoDBObjectMapper()

    let queryExpression = AWSDynamoDBQueryExpression()
    queryExpression.indexName = "UserIdentifier-UserFullName-index"
    queryExpression.hashKeyAttribute = "UserIdentifier"
    queryExpression.hashKeyValues = "1234"
    queryExpression.scanIndexForward = true

    dynamoDBObjectMapper.query(DDBTableRow.self, expression: queryExpression).continueWithExecutor(AWSExecutor.mainThreadExecutor(), withBlock: { (task:AWSTask!) -> AnyObject! in
        if (task.error != nil) {
            print("Error: \(task.error)")

            let alertController = UIAlertController(title: "Failed to query a test table.", message: task.error.description, preferredStyle: UIAlertControllerStyle.Alert)
            let okAction = UIAlertAction(title: "OK", style: UIAlertActionStyle.Cancel, handler: { (action:UIAlertAction) -> Void in
            })
            alertController.addAction(okAction)
            self.presentViewController(alertController, animated: true, completion: nil)
        } else {
            if (task.result != nil) {
                print(task.result)
            }
            print("Performing Segue")
        }
        return nil
    })

}




Query hash/range key and local secondary index

Is it possible to query a DynamoDB table using both the hash & range key AND a local secondary index?

I have three attributes I want to compare against in my query. Two are the main hash and range keys and the third is the range key of the local secondary index.
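For reference, a minimal boto3 sketch of what such a query might look like, assuming a hypothetical table with hash key pk, range key sk, and a local secondary index lsi-created whose sort key is created_at. The third attribute can appear in the key condition only if it is the index's sort key; the table's own range key then has to go into a filter expression:

    import boto3
    from boto3.dynamodb.conditions import Attr, Key

    # Hypothetical table, index, and attribute names, purely for illustration.
    table = boto3.resource("dynamodb", region_name="us-east-1").Table("MyTable")

    response = table.query(
        IndexName="lsi-created",                                      # local secondary index
        KeyConditionExpression=Key("pk").eq("user-123")               # table hash key
            & Key("created_at").between("2015-01-01", "2015-12-31"),  # LSI sort key
        FilterExpression=Attr("sk").begins_with("order#"),            # table range key, filtered after the read
    )
    print(response["Items"])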




How to sign AWS API request in Java

How can I sign an AWS API request in Java? I was able to find how to do this in PHP but cannot seem to find anything in Java. I would like to sign a request to ItemSearch.

Is there maybe a library or something?
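For what it's worth, the signing procedure itself is language-agnostic. Below is a rough Python sketch (not Java) of the Signature Version 2-style signing that the Product Advertising API's ItemSearch uses, just to illustrate the steps: sort the parameters, build the canonical string, HMAC-SHA256 it with the secret key, and append the base64 signature. Treat the host, path, and parameter details as assumptions to verify against the official docs.

    import base64, hashlib, hmac, urllib.parse
    from datetime import datetime, timezone

    def sign_itemsearch(params, secret_key, host="webservices.amazon.com", path="/onca/xml"):
        params = dict(params)
        params["Timestamp"] = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
        # 1. Sort parameters and build the canonical query string
        canonical = "&".join(
            urllib.parse.quote(k, safe="") + "=" + urllib.parse.quote(str(v), safe="")
            for k, v in sorted(params.items())
        )
        # 2. HMAC-SHA256 over "GET\nhost\npath\nquery", then base64-encode
        to_sign = "\n".join(["GET", host, path, canonical])
        digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha256).digest()
        signature = base64.b64encode(digest).decode()
        # 3. Append the URL-encoded signature as the last query parameter
        return "https://" + host + path + "?" + canonical + "&Signature=" + urllib.parse.quote(signature, safe="")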




WSGIPath error when deploying Flask project in AWS

I have a simple blog built with Flask, as you can see here. However, when I try to deploy it on AWS using Elastic Beanstalk, I keep getting the error message:

"ERROR: Your WSGIPath refers to a file that does not exist."

In the log, the error message is: "Target WSGI script not found or unable to stat: /opt/python/current/app/application.py"

I have my application.py and .elasticbeanstalk in the root folder of the project.
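For context, Elastic Beanstalk's default WSGIPath points at application.py and expects that module to expose a WSGI callable named application. A minimal sketch of such a file (not the blog's actual code) looks like this:

    # application.py at the project root; the module-level name "application"
    # is what Elastic Beanstalk's default WSGIPath configuration looks for.
    from flask import Flask

    application = Flask(__name__)

    @application.route("/")
    def index():
        return "Hello from Elastic Beanstalk"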

Does anyone have an idea of why?

Thanks in advance.




How to make NodeJS Server deployed on AWS cloud go live?

I am new to NodeJS. I have the following code for hello world.

var http = require("http");
http.createServer(function(request, response) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello World!\n I am the AWS cloud');
}).listen(8081);
console.log('Server is running at http://127.0.0.1:8081');

This code is saved on my AWS instance, which runs Ubuntu 14.04. I have installed Node.js on Ubuntu. When I run the code it works fine and prints the console log message, but when I open a web browser on a different machine and go to http://Public-IP-Address:8081, it does not work.

Also, when I stop the script by pressing Ctrl+C and then execute it again, it displays the following error message:

Server is running at http://127.0.0.1:8081
events.js:85
      throw er; // Unhandled 'error' event
        ^
Error: listen EADDRINUSE
    at exports._errnoException (util.js:746:11)
    at Server._listen2 (net.js:1156:14)
    at listen (net.js:1182:10)
    at Server.listen (net.js:1267:5)
    at Object.<anonymous> (/home/ubuntu/NodeJSscripts/hw.js:17:22)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Function.Module.runMain (module.js:501:10)

When I go back, change the port number, and run the script again, it runs without throwing any error, but the web browser still doesn't connect to the server through its public IP.




journald to Cloudwatch Logs

I'm a newbie to CentOS and wanted to know the best way to ship journald logs to CloudWatch Logs.

My thoughts so far are:

  • Use a FIFO to parse the journal logs and ingest them into CloudWatch Logs. This could come with drawbacks: logs could be dropped if we hit buffering limits.

  • Forward journald logs to syslog and send the syslogs to CloudWatch Logs.

The idea is essentially to have everything logging to journald as JSON and then forward this across to CloudWatch Logs.
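To make that idea concrete, here is a very rough Python sketch (boto3; hypothetical group/stream names, no batching or error handling, and the sequence-token handling assumes a freshly created stream) of tailing journalctl -o json and pushing each entry to CloudWatch Logs:

    import subprocess, time
    import boto3

    logs = boto3.client("logs", region_name="us-east-1")
    GROUP, STREAM = "journald", "my-instance"   # hypothetical names, created beforehand

    proc = subprocess.Popen(["journalctl", "-f", "-o", "json"],
                            stdout=subprocess.PIPE, text=True)
    token = None
    for line in proc.stdout:
        # Each line is one journal entry as JSON; ship it verbatim as the message.
        event = {"timestamp": int(time.time() * 1000), "message": line.strip()}
        kwargs = dict(logGroupName=GROUP, logStreamName=STREAM, logEvents=[event])
        if token:
            kwargs["sequenceToken"] = token
        token = logs.put_log_events(**kwargs)["nextSequenceToken"]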

What is the best way to do this? How have others solved this problem?

Many Thanks!




Maven cannot find java.util.Objects when using mvn shade plugin?

I'm following AWS Lambda's "create a jar using mvn" guide:

http://ift.tt/1METsk0

and for some reason neither "mvn clean install" nor "mvn package" runs properly. The project is using Java 8.

Here's my pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://ift.tt/IH78KX"
         xmlns:xsi="http://ift.tt/ra1lAU"
         xsi:schemaLocation="http://ift.tt/IH78KX http://ift.tt/VE5zRx">
    <modelVersion>4.0.0</modelVersion>

    <groupId>lambda</groupId>
    <artifactId>test</artifactId>
    <packaging>jar</packaging>
    <version>0.0.0.1-SNAPSHOT</version>
    <name>lambda-test</name>

    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>1.1.0</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-events</artifactId>
            <version>1.1.0</version>
        </dependency>
        <dependency>
            <groupId>com.jcraft</groupId>
            <artifactId>jsch</artifactId>
            <version>0.1.53</version>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.3.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.3</version>
                <configuration>
                    <createDependencyReducedPom>false</createDependencyReducedPom>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

and my code:

package lambda.test;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.google.gson.Gson;
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

import java.util.Objects;
import java.util.Properties;
import java.util.Vector;

public class Test {

    private static final Gson gson = new Gson();

    public static String myHandler(Tester tester, Context context) {
        LambdaLogger logger = context.getLogger();
        logger.log("received: " + tester);

        try {
            JSch jSch = new JSch();
            Session session = jSch.getSession(username, sftpHost, sftpPort);
            session.setPassword(password);

            Properties config = new Properties();
            config.put("StrictHostKeyChecking", "no");

            session.setConfig(config);
            session.connect(15000);

            ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");
            channel.connect();

            channel.cd("Inbox");
            Vector<ChannelSftp.LsEntry> list = channel.ls(".");

            logger.log("the list ==> \n" + gson.toJson(list));
        } catch (Exception e) {
            e.printStackTrace();
            logger.log("the exception: " + e.getMessage());
        }

        return String.valueOf(tester);
    }

    public static class Tester {
        private final String name;
        private final Integer id;

        public Tester(String name, Integer id) {
            this.name = name;
            this.id = id;
        }

        public String getName() {
            return name;
        }

        public Integer getId() {
            return id;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            Tester tester = (Tester) o;
            return  Objects.equals(name, tester.name) &&
                    Objects.equals(id, tester.id);
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, id);
        }

        @Override
        public String toString() {
            return gson.toJson(this);
        }
    }
}




AWS S3 PHP download decrypted object

I am downloading an object using the PHP SDK and saving it locally, but when I open the file it contains raw/encrypted data. How can I download the file with its original content? The file is in CSV format. I need to fix it urgently.

$objects = $s3->getIterator('ListObjects', array('Bucket' => $bucket, "Prefix" => "my folder path"));

foreach ($objects as $object) {
    $objkey = $object['Key'];
    $filename = basename($objkey);
    $result = $s3->getObject(array(
        'Bucket' => $bucket,
        'Key' => $objkey,
        'SaveAs' => $uploadPath . "/" . $filename
    ));
}




How to get the instance id of the instance being created inside the user data of CFT?

I was successfully able to register another instance ID from inside PowerShell on an AWS EC2 Windows instance created using a CFT.

Can someone please help me figure out how to get the instance ID of the instance being created, as I need to use it inside the user data for ELB registration? In simpler words, I want to register the same instance that I am creating.

Below is my working command:

Register-ELBInstanceWithLoadBalancer -LoadBalancerName ire798ELB -Instances i-b90d8d06 -Region us-east-1

But here i-b90d8d06 is some other instance ID, not the one I am creating.
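For reference, an instance can discover its own ID through the instance metadata service at 169.254.169.254. The sketch below is Python, but the same endpoint can be queried from the PowerShell user data (for example with Invoke-RestMethod) before calling Register-ELBInstanceWithLoadBalancer:

    # Minimal sketch: read this instance's own ID from the instance metadata service.
    import urllib.request

    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).read().decode()
    print(instance_id)   # e.g. i-0123456789abcdef0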




How to get the CloudWatch Agent and Metric Filters to Report Dimensions

Setup

The CloudWatch Agent running on an EC2 instance reports audit logs to CloudWatch. A Metric Filter in CloudWatch creates metrics for successful logins, failed logins, etc. when logs are reported.

Problem

Metrics created through the Metric Filter do not get dimensions assigned, so I can't query CloudWatch for a set of metric statistics by InstanceId. This would be extremely useful because I want the audit metrics per machine, not per log group.

Comments

Attaching dimensions is pretty easy using the put-metric-data command: I am able to tag the metrics with the InstanceId dimension and then retrieve only those metrics using get-metric-statistics. Is this kind of functionality not possible with the Metric Filters + CloudWatch Agent setup? What would be a possible workaround?
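For concreteness, the put-metric-data workaround mentioned above looks roughly like this in boto3 (namespace, metric name, and instance ID are hypothetical):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    # Publish one data point tagged with the InstanceId dimension.
    cloudwatch.put_metric_data(
        Namespace="Custom/AuditLogs",
        MetricData=[{
            "MetricName": "FailedLogins",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-1234567890abcdef0"}],
            "Value": 1.0,
            "Unit": "Count",
        }],
    )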




Updating Rails server config files with Rubber

I've already deployed my Rails application and just added an SSL certificate, so I updated my server block in config/rubber/role/unicorn_nginx.conf. I ran cap deploy:migrations to update some other pieces of my app and expected the server's unicorn_nginx.conf file to be updated as well, but it wasn't.

What command should I use to update just the rubber .conf files?




Is there a way to configure Amazon Cloudfront to delay the time before my S3 object reaches clients by specifying a release date?

I would like to upload content to S3 but schedule a time at which CloudFront delivers it to clients, rather than having it vended to clients immediately upon processing. Is there a configuration option to accomplish this?
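Not a definitive answer, but one mechanism worth checking: CloudFront signed URLs with a custom policy accept a DateGreaterThan condition as well as DateLessThan, which can keep an object unavailable until a chosen release time. A sketch of such a policy document built in Python (the domain and path are placeholders, and the policy still has to be signed with a CloudFront key pair):

    import json
    from datetime import datetime, timezone

    release = int(datetime(2015, 11, 1, tzinfo=timezone.utc).timestamp())
    expiry = int(datetime(2015, 12, 1, tzinfo=timezone.utc).timestamp())

    # Custom policy for a CloudFront signed URL: the object is only served
    # between the two timestamps.
    policy = {
        "Statement": [{
            "Resource": "http://d111111abcdef8.cloudfront.net/video.mp4",   # placeholder
            "Condition": {
                "DateGreaterThan": {"AWS:EpochTime": release},
                "DateLessThan": {"AWS:EpochTime": expiry},
            },
        }]
    }
    print(json.dumps(policy))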




Stop traffic to unhealthy instances without replacing them by auto scaling

Using an ELB with the TCP protocol and AWS Auto Scaling, I run into the following problem when scaling out:

  • three EC2 instances each with 2,000 connections
  • scaling out because that is my specified threshold
  • a new instance gets added by Auto Scaling

How can I now stop traffic going to the three EC2 instances which have too many connections?

  1. Removing it from ELB will mean that it will get terminated after a maximum of 1h using connection draining. Bad: TCP connections will get closed.

  2. Marking the EC2 instance as unhealthy using CloudWatch. Bad: Auto Scaling will detect and replace unhealthy instances

  3. Detaching EC2 instance from Auto Scaling group manually via AWS CLI. Bad: Detaching it from Auto Scaling will also remove it from ELB, see 1.

The only possible solution I can see here, and I am not sure if it is feasible:

Using CloudWatch, mark the EC2 instance as unhealthy. The ELB will stop distributing traffic to it. At the same time, update the EC2 health for Auto Scaling manually:

aws autoscaling set-instance-health --instance-id i-123abc45d --health-status healthy

This should override the health in a way that the ELB continues to ignore the EC2 instance while AWS Auto Scaling does not try to replace it. Would that work, or is there a better solution?




How to create basic EC2 instance with Ansible

I am trying to learn Ansible along with all my AWS stuff. The first task I want to do is create a basic EC2 instance with all the settings I need and install Docker on it. I wrote the playbook according to the Ansible docs, but it doesn't really work. My playbook:

# The play operates on the local (Ansible control) machine.
- name: Create a basic EC2 instance v.1.1.0 2015-10-14
  hosts: localhost
  connection: local
  gather_facts: false

# Vars.
  vars:
      hostname: Test_By_Ansible
      keypair: MyKey
      instance_type: t2.micro
      security_group: my security group   
      image: ami-d05e75b8                 # Ubuntu Server 14.04 LTS (HVM)
      region: us-east-1                   # US East (N. Virginia)
      vpc_subnet_id: subnet-b387e763      
      sudo: True
      locale: ru_RU.UTF-8

# Launch instance. Register the output.
  tasks:
    - name: Launch instance
      ec2:
         key_name: "{{ keypair }}"
         group: "{{ security_group }}"
         instance_type: "{{ instance_type }}"
         image: "{{ image }}"
         region: "{{ region }}"
         vpc_subnet_id: "{{ vpc_subnet_id }}"
         assign_public_ip: yes
         wait: true
         wait_timeout: 500
         count: 1                         # number of instances to launch
         instance_tags:
            Name: "{{ hostname }}"
            os: Ubuntu
            type: WebService
      register: ec2

    # Create and attach a volumes.
    - name: Create and attach a volumes
      ec2_vol:
         instance: "{{ item.id }}"
         name: my_existing_volume_Name_tag
         volume_size: 1   # in GB
         volume_type: gp2
         device_name: /dev/sdf
         with_items: ec2.instances
      register: ec2_vol

    # Configure mount points.
    - name: Configure mount points - mount device by name
      mount: name=/system src=/dev/sda1 fstype=ext4 opts='defaults nofail 0 2' state=present
      mount: name=/data src=/dev/xvdf fstype=ext4 opts='defaults nofail 0 2' state=present
      #wait_timeout: 500

    # Do some stuff on instance.
    # locale

But this playbook crashes on the volume mount. I also don't know how to set the locale, because the line - locale_gen: name=de_CH.UTF-8 state=present doesn't help. Could somebody give me a hand, please?




Cassandra is randomly slow on two nodes

I'm using Cassandra in my .NET project to store some data. I have a performance issue when I deploy my app from my dev machine to the test environment on Amazon AWS.

Below is a sample log from my app. It shows only query times. On my dev machine these commands complete in a few milliseconds every time, but on the multi-node setup I'm getting these random slowdowns. It's the same codebase, and all the reads are simple queries by primary key (e.g. GetTableDataChangeCommand). The database is almost empty (only a few rows in each table).

My setup is two m3.xlarge machines with CentOS 7, and I'm querying Cassandra from within the same AWS datacenter.

But I'm getting the same problem on my customer's infrastructure (on premises, not AWS).

What could the issue be here? Any hints?

GetTableDataChangeCommand: 930ms
GetTableDataChangeCommand: 962ms
GetTableDataChangeCommand: 2ms
GetUserDirectRolesCommand: 960ms
GetTableDataChangeCommand: 930ms
GetAllGroupsCommand: 7ms
GetUserGroupsCommand: 9ms
GetAllOrganizationUnitsCommand: 10ms
GetAllRolesCommand: 8ms
GetUserRolesCommand: 4ms
GetUserDirectRolesCommand: 2ms
GetUserCommand: 2ms
GetTableDataChangeCommand: 2ms
GetTableDataChangeCommand: 1ms
GetTableDataChangeCommand: 5ms
GetAllGroupsCommand: 4ms
GetUserGroupsCommand: 1ms
GetTableDataChangeCommand: 1ms
GetAllRolesCommand: 4ms
GetAllOrganizationUnitsCommand: 4ms
GetUserRolesCommand: 3ms
GetUserDirectRolesCommand: 2ms
GetUserCommand: 3ms
GetTableDataChangeCommand: 3ms
GetTableDataChangeCommand: 5ms
GetUserGroupsCommand: 1ms
GetTableDataChangeCommand: 1ms
GetTableDataChangeCommand: 7ms
GetAllRolesCommand: 6ms
GetAllGroupsCommand: 8ms
GetAllOrganizationUnitsCommand: 7ms
GetUserRolesCommand: 3ms
GetUserDirectRolesCommand: 2ms
GetTableDataChangeCommand: 1ms
GetUserCommand: 3ms
GetTableDataChangeCommand: 1ms
GetAllOrganizationUnitsCommand: 5ms
GetTableDataChangeCommand: 8ms
GetTableDataChangeCommand: 9ms
GetUserGroupsCommand: 2ms
GetAllGroupsCommand: 10ms
GetAllRolesCommand: 4ms
GetUserRolesCommand: 3ms
GetTableDataChangeCommand: 1ms
GetUserCommand: 2ms
GetTableDataChangeCommand: 1ms
GetUserGroupsCommand: 1ms
GetUserDirectRolesCommand: 2ms
GetAllRolesCommand: 3ms
GetTableDataChangeCommand: 1ms
GetTableDataChangeCommand: 1ms
GetAllGroupsCommand: 4ms
GetAllOrganizationUnitsCommand: 5ms
GetUserRolesCommand: 3ms
GetUserCommand: 2ms
GetTableDataChangeCommand: 2ms
GetTableDataChangeCommand: 3ms
GetUserDirectRolesCommand: 5ms
GetTableDataChangeCommand: 3ms
GetTableDataChangeCommand: 4ms
GetUserGroupsCommand: 1ms
GetAllGroupsCommand: 4ms
GetAllOrganizationUnitsCommand: 7ms
GetAllRolesCommand: 6ms
GetUserRolesCommand: 3ms
GetTableDataChangeCommand: 1014ms
GetUserCommand: 1014ms
GetTableDataChangeCommand: 999ms
GetTableDataChangeCommand: 984ms
GetUserGroupsCommand: 984ms
GetUserDirectRolesCommand: 983ms
GetAllOrganizationUnitsCommand: 984ms
GetTableDataChangeCommand: 2ms
GetAllGroupsCommand: 4ms
GetAllRolesCommand: 2ms
GetUserRolesCommand: 3ms




AWS API Gateway: Cannot generate mapping template because model schema is missing a 'type' or '$ref' property

I am getting the following error when trying to define an integration response:

Cannot generate mapping template because model schema is missing a 'type' or '$ref' property

I have "type" defined.

My model schema is:

{
  "$schema": "http://ift.tt/1n3c9zE",
  "type": "object",
  "title": "Configuration",
  "properties": {
    "steps": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "ordinal": {
            "type": "integer"
          },
          "rules": {
            "type": "array",
            "items": {
              "properties": {
                "ordinal": {
                  "type": "integer"
                },
                "rId": {
                  "type": "integer"
                },
                "rMId": {
                  "type": "integer"
                },
                "rValue": {
                  "type": "string"
                }
              }
            }
          }
        }
      }
    }
  }
}




AWS EC2 Security Measures

I'm new to the server environment and I don't know much about servers, only a few basics.

I'm getting into this just now, and I found out Amazon has a 1-year trial, so I wanted to try it out.

I see that their dashboard has a lot of stuff, and it looks like they are already thinking about security measures. So this is what I did:

  • Created an instance of Ubuntu Server 14.04
  • Created a security group where I set HTTP and HTTPS open to everyone (0.0.0.0/0) and SSH open only to my IP
  • Set an alarm that restarts the server in case it goes down
  • Accessed it via SSH and installed Apache2, PHP, phpMyAdmin, and other stuff through the command line.
  • I already had a domain, so I connected the AWS EC2 instance to a subdomain of my own by changing a DNS record, and everything works fine.

Now, this is not production mode yet, it's still testing, but let's say I want to get it online for real. Are there other security measures I should take in order to make the virtual machine/server safe?

Or does Amazon already do all the work with that security group setup?

Perhaps mine may sound like a stupid question, but that's because I'm a newbie to this world. In that case, be kind.




Do I need Vagrant to install the edX platform on Ubuntu 12.04, in order to get through the AWS step?

I am trying to install the edX platform on Ubuntu 12.04 using this tutorial: http://ift.tt/1iW2CoW. I don't use Vagrant, but on the step [aws | update the ssh motd on Ubuntu] I get the error fatal: [localhost] => error while evaluating conditional: vagrant_home_dir.stat.exists == false. This is strange, because the tutorial is supposed to install edX without Vagrant. Why do I get errors about a Vagrant directory that doesn't exist?

This section, I mean aws, comes after certs. Take a look here: http://ift.tt/1Qxkqcd. I'm not sure what exactly this step installs, but it seems related to Amazon Web Services. After a little searching about AWS and edX, I found that edX can be installed on AWS. But I am installing edX on my own VPS, which confuses me even more. Maybe I don't need to go through this step; maybe I don't understand something yet.

Sorry if my English is not clear. Thanks.




Run Filezilla on private subnet at AWS

I have a VPC with one public subnet and one private subnet.
The public subnet has an OpenVPN server and a public IP.
The private subnet has a Windows 2012 R2 server with an Oracle 11g server.
My setup (screenshots omitted):
  1. VPC
  2. Public subnet: summary and route table for the public subnet
  3. Private subnet: summary for the private subnet
  4. Internet gateway, attached to my VPC
  5. Security groups: 5a for the OpenVPN server, 5b for the Windows server (with the Oracle server)

So, I've set up the OpenVPN server and I am able to connect to the database from my laptop. Everything is fine. Now I want to run FileZilla on my Windows server to download some files from FTP sites.
I added more security group rules to my Windows server, inbound and outbound, but it doesn't work. I think I have to add a route for my private subnet to reach the outside world, but I have no idea how to implement that.

Rules that I have tried (inbound and outbound): [screenshot omitted]

Route table that I tried: [screenshot omitted]

Any ideas?




How to install Firefox in the Amazon cloud (AWS)?

I have an AWS account with all privileges. I want to install Firefox in the environment, as my application will launch Firefox and run a few tests against the web application.

Could someone help me with how to install Firefox?

Thanks.




How can I "hide" the data in AWS from users?

I want to build an application using Amazon Web Services (AWS). The way the application should work is this: I make a program that lets the user import a large file in an external format and send it to AWS (S3?) in my own format. Then many users can access the data from web and desktop applications. I want to charge per user accessing the data. The problem is that the data on AWS must be in an unintelligible format, or the users may copy the data over to another AWS account where I cannot charge them. In other words, the users need to do some "decrypting" of the data before it can be used. On the web this must be done in JavaScript, which is plain text and would allow the users to figure out my unintelligible format. How can I fix this problem? Is there, for instance, a built-in encryption/decryption mechanism? Alternatively, is there some easy way in AWS to set up a server that decrypts the data using precompiled code that I upload to AWS?




Amazon product API - [Javascript] Show Shopping Deals

I hope someone can give me specifics on some information I want to obtain before I proceed.

1) I am new to Amazon AWS and would like to know if there is direct documentation on how to pull a list of shopping deals, just like when you visit the amazon.com marketplace and click 'Today's Shopping Deals'.

2) Is there a good place to start learning about AWS, specifically shopping deals?

3) Is it possible to pull a list of shopping deals from the Amazon.com marketplace without using their API?

Thank you so much for your help.




calculations imprecise, Redshift, postgresql

I am wondering why I am getting imprecise results even though I did type casting.

select (sum(sellerid)::decimal/count(*)::decimal) from winsales

This is the example from the Redshift documentation. http://ift.tt/1jD7ZB0

It should give something like 2.454545, but it truncates the value to 2.45. Why is that, and how can I increase the precision? I tried both float4 and float8 and get the same result.




Kibana won't connect to Elasticsearch on Amazon's Elasticsearch Service

After switching from hosting my own Elasticsearch cluster to Amazon's Elasticsearch Service, my Kibana dashboards (versions 4.0.2 and 4.1.2) won't load, and I'm receiving the following error in kibana.log:

{
  "name": "Kibana",
  "hostname": "logs.example.co",
  "pid": 8037,
  "level": 60,
  "err": {
    "message": "Not Found",
    "name": "Error",
    "stack": "Error: Not Found\n    at respond (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/transport.js:235:15)\n    at checkRespForFailure (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/transport.js:203:7)\n    at HttpConnector.<anonymous> (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/src\/lib\/connectors\/http.js:156:7)\n    at IncomingMessage.bound (\/srv\/kibana\/kibana-4.1.2-linux-x64\/src\/node_modules\/elasticsearch\/node_modules\/lodash-node\/modern\/internals\/baseBind.js:56:17)\n    at IncomingMessage.emit (events.js:117:20)\n    at _stream_readable.js:944:16\n    at process._tickCallback (node.js:442:13)"
  },
  "msg": "",
  "time": "2015-10-14T20:48:40.169Z",
  "v": 0
}

Unfortunately, this error is not very helpful. I assume it's a wrapped HTTP 404, but for what?

How can I connect a Kibana install to Amazon's Elasticsearch Service?




Difference between Data transfer cost and Bandwidth costs in AWS

What is the difference between data transfer costs and bandwidth costs in AWS?

When I view my AWS bills, I see there are two types of expenses in the Data Transfers tab: one for Data Transfer and one for Bandwidth. I have searched a lot but could not find a clear demarcation between the two.




AWS, WordPress and URL issue

We set up WordPress and the website works okay but when a visitor clicks on any link the URL doesn't change. As far as I can tell we're using DC. I've tried a few different themes and the issue remains. Any suggestions? In the meantime, I'll try other themes as well. Thanks




upload image in amazon S3 after re-sizing it in Yii2

I want to upload a RESIZED image to Amazon S3. I have uploaded the original image, but now I also want to create a thumbnail and upload it.

Here is my code for uploading the original image:

    <?php
    $bucket = 'abc';
    $bucket = NEST_BUCKET_ORIGINAL;    
    if($s3->putObject($_FILES['pic']['tmp_name'], $bucket, $newfilename, S3::ACL_PUBLIC_READ) )
    {
        $key = "thumb/{$newfilename}";
        $s3file = 'http://'.$bucket.'.PATH/'.$newfilename;
        $collection = \Yii::$app->mongodb->getCollection('nest');
        $set = array('pic'=>$newfilename);
        $where = array('_id'=>$i);
        $collection->update($where,$set);
        $thumbFile = $_FILES["pic"]["name"];
        $targetpath = 'http://'.$bucket.'.PATH/'.$newfilename;
        Image::thumbnail($_FILES['pic']['tmp_name'], $w, $h)->save($key, ['quality' => 80]);
    }
    ?>




Amazon aws s3 content header

I am trying to create a directory in Amazon AWS S3, and for that I am using the following code (I am using the v3 PHP SDK):

$bucketName = 'somebucketName';
$key = 'folderName';

$params = [
    'Bucket' => $bucketName,
    'Key' => $key . '/'
  ];
$s3->putObject($params);

$s3 is an instance of the Aws\S3\S3Client class ($s3 = new Aws\S3\S3Client(...)); I am able to get buckets and objects successfully with my current configuration.

It was working fine before, but now I am getting this error:

Fatal error: Uncaught exception 'Aws\S3\Exception\S3Exception' with message 'Error executing "PutObject" on "http://ift.tt/1hGgE4r";

AWS HTTP error: Client error: 411 MissingContentLength (client): You must provide the Content-Length HTTP header.
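As a point of comparison only (Python rather than PHP), an S3 "folder" is just a zero-byte object whose key ends in a slash, created with an explicit, possibly empty, body; a boto3 sketch using the names from the question:

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    # A "directory" in S3 is simply an empty object whose key ends with "/".
    s3.put_object(Bucket="somebucketName", Key="folderName/", Body=b"")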




check the lowest price being offered on amazon and then decrease our price base on increment value

Check the lowest price being offered on Amazon and then decrease our price based on an increment value we have stored in the osCommerce products table for that item. We will have an upper price ceiling and a lower price floor (the lowest and highest the item will sell for), and then the increment amount, for example 3 cents. The script will run through all the items we are selling on Amazon which are also on the osCommerce site and check for the lowest price. It will then change our price on Amazon to be $0.0X below that current price.

Some of the code is here:

<?php
require('includes/application_top.php');  
tep_set_time_limit(0);

//Amazon MWS setup  
include_once ('MarketplaceWebServiceOrders/.config.inc.php');  
require(DIR_WS_FUNCTIONS.'amazon.php');


$serviceUrl = "http://ift.tt/1nHjodS";

$config = array (
  'ServiceURL' => $serviceUrl,  
  'ProxyHost' => null,  
  'ProxyPort' => -1,  
  'ProxyUsername' => null,  
  'ProxyPassword' => null,  
  'MaxErrorRetry' => 3,  
);

 $service = new MarketplaceWebServiceProducts_Client(  
    AWS_ACCESS_KEY_ID,  
    AWS_SECRET_ACCESS_KEY,    
    APPLICATION_NAME,  
    APPLICATION_VERSION,  
    $config
);  


 $request = new marketplaceWebServiceProducts_Model_GetLowestOfferListingsForASINRequest();  
 $request->setSellerId(MERCHANT_ID);  
 $asin_list = new MarketplaceWebServiceProducts_Model_ASINListType();  
 $asin_list->setASIN(array('ASINcode'));  
 $request->setASINList($asin_list);  
 $request->setMarketplaceId(MARKETPLACE_ID);    
 // object or array of parameters
 invokeGetLowestOfferListingsForASIN($service, $request);


 function invokeGetLowestOfferListingsForASIN(MarketplaceWebServiceProducts_Interface $service, $request)
 {
     try {
       $response = $service->GetLowestOfferListingsForASIN($request);

       $dom = new DOMDocument();
       $dom->loadXML($response->toXML());
       $dom->preserveWhiteSpace = false;
       $dom->formatOutput = true;
       $xml_data = $dom->saveXML();
       $dom->loadXML($xml_data);

       $otherOfferXml = simplexml_load_string($xml_data);

       foreach($otherOfferXml as $offers)
       {
          // Skipping last RequestID section
          if(!isset($offers["status"]))
             continue;

          // Checking if the API returned any error then continue to next SKU
        if($offers["status"] != "Success")
            continue;

        $asin = (String) $offers->Product->Identifiers->MarketplaceASIN->ASIN;

        // Going through all ASIN's offers to get price
        $seller_counter = 0;
        $others_response_data[$asin] = "";
        foreach($offers->Product->LowestOfferListings->LowestOfferListing as $offers_list)
        {
            $others_response_data[$asin][$seller_counter]["LandedPrice"] = (String) $offers_list->Price->LandedPrice->Amount;
            $others_response_data[$asin][$seller_counter]["ListingPrice"] = (String) $offers_list->Price->ListingPrice->Amount;
            $others_response_data[$asin][$seller_counter]["Shipping"] = (String) $offers_list->Price->Shipping->Amount;
            $others_response_data[$asin][$seller_counter]["Fulfillment"] = $fulfillment_channel;
            $others_response_data[$asin][$seller_counter]["SKU"] = $asin_array[$asin]["sku"];
            $others_response_data[$asin][$seller_counter]["AZN_ASIN"] = $asin;
            $seller_counter++;  
        }
    }
 } catch (MarketplaceWebServiceProducts_Exception $ex) {
    echo("Caught Exception: " . $ex->getMessage() . "\n");
    echo("Response Status Code: " . $ex->getStatusCode() . "\n");
    echo("Error Code: " . $ex->getErrorCode() . "\n");
    echo("Error Type: " . $ex->getErrorType() . "\n");
    echo("Request ID: " . $ex->getRequestId() . "\n");
    echo("XML: " . $ex->getXML() . "\n");
    echo("ResponseHeaderMetadata: " . $ex->getResponseHeaderMetadata() . "\n");
   }
}

?>




Display retrieved aws instances to HTML page in Node.js

I have code which retrieves instances from AWS:

var aws = require('aws-sdk');

aws.config.update({
    accessKeyId: 'YOUR_ACCESS_KEY', 
    secretAccessKey: 'YOUR_SECRET_KEY', 
    region: 'us-west-2'
});

var ec2 = new aws.EC2();

function printStatuses() {
    ec2.describeInstances({}, function(err, data) {
        if(err) {
            console.error(err.toString());
        } else {
            var currentTime = new Date();
            console.log(currentTime.toString());

            for(var r=0,rlen=data.Reservations.length; r<rlen; r++) {
                var reservation = data.Reservations[r];
                for(var i=0,ilen=reservation.Instances.length; i<ilen; ++i) {
                    var instance = reservation.Instances[i];

                    var name = '';
                    for(var t=0,tlen=instance.Tags.length; t<tlen; ++t) {
                        if(instance.Tags[t].Key === 'Name') {
                            name = instance.Tags[t].Value;
                        }
                    }
                    console.log('\t'+name+'\t'+instance.InstanceId+'\t'+instance.PublicIpAddress+'\t'+instance.InstanceType+'\t'+instance.ImageId+'\t'+instance.State.Name);
                }
            }
        }
    });    
} 

The above code works fine: it retrieves the instances and displays them in the terminal. I want to display them on an HTML page instead. I've already developed the front-end page and want the instances to show up on that page (inside a table tag) instead of in the console.




how to push values from listObjects to an array

I have an array, for example named yourArray: var yourArray = [];. When I get object names with listObjects from S3, I want to push these object names to the array.

var params = {
    Bucket: 'exBucket',
    Prefix: 'somePrefix'
};
s3.listObjects(params, function(Err, Data){
    if(!Err){
        for (var i = 0; i < Data.Contents.length; i++){
            console.log('Listed: ', Data.Contents[i].Key);
            yourArray.push(Data.Contents[i].Key);
        }
    }
});

Here, console.log('Listed: ', Data.Contents[i].Key); gives me all the names, but yourArray.push(Data.Contents[i].Key); doesn't push the names and the array stays empty. Where is the mistake?




script to connect servers and run some commands on amazon

I want to write a fish script to run Locust on the Amazon servers. I wrote the code below; the problem is that when the shell connects to the first server it can't send the other commands there.

Any help or recommendations are appreciated.

set labs 'ubuntu@compute1.amazonaws.com' 'ubuntu@compute2.amazonaws.com' 'ubuntu@compute3.amazonaws.com' 'ubuntu@compute4.amazonaws.com' 
set key /Users/mesutgunes/Desktop/project-key.pem

for lab in $labs
    ssh -i $key $lab 
    cd /path/to/project/
    screen
    locust -f file.py --master
    exit
end
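For what it's worth, the same idea expressed as a Python/paramiko sketch (not fish) makes the underlying point visible: the remote commands have to be handed to the SSH session itself, otherwise they run locally after ssh returns. Hosts, user, and paths are taken from the question; screen -dm keeps the remote process detached.

    import paramiko

    labs = ["compute1.amazonaws.com", "compute2.amazonaws.com",
            "compute3.amazonaws.com", "compute4.amazonaws.com"]
    key = "/Users/mesutgunes/Desktop/project-key.pem"

    for lab in labs:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(lab, username="ubuntu", key_filename=key)
        # Run everything in a single remote shell, detached via screen.
        client.exec_command("cd /path/to/project/ && screen -dm locust -f file.py --master")
        client.close()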




Using google 2factor authentication with Private Key

I have an AWS Linux server, and I am planning to use two-factor authentication. I have the following in mind:

Factor 1) SSH key
Factor 2) Google Authenticator

But on blogs and such I read that two-factor authentication works with passwords only: when you authenticate with your key, PAM is bypassed and no verification code is asked for.

Is this possible using a private key + Google Authenticator?




Apple Push Notification is not working on my AWS server; it throws a "failed to connect 111 connection refused" PHP push notification error

I have created an iOS application but am unable to send push notifications through the AWS server. I was facing the same problem on a GoDaddy shared server, which is why I moved to AWS.




Port 22: Connection Refused after EC2 Instance reboot

I hosted my website on an AWS EC2 instance, with MySQL on an attached EBS volume. After rebooting the instance, I am not able to access the instance and get the following error:

ssh: connect to host 52.xx.xx.xxx port 22: Connection refused

How do I get my server back?




Getting "Errno::ENOENT: No such file or directory @ rb_sysopen" while reading the CSV file from the AWS S3

I have an application which is deployed to Heroku. I have added functionality for uploading users through a CSV. For this I have provided CSV upload functionality (using the Paperclip gem).

Here is my code for reading the file and creating a new user:

def import(file)
  CSV.foreach(file.path, headers: true) do |row|
      row_hash = row.to_hash.values
      data = row_hash[0].split("\t")
      .
      .
      .
end

On my local machine it is working fine, but on Heroku it is giving me the following error:

Errno::ENOENT: No such file or directory @ rb_sysopen - http://ift.tt/1Rcp67D

I referred to the following links: Errno::ENOENT (No such file or directory) in amazon-s3

File reading from Amazon server, ruby on rails, no match route

but didn't have any success. For more debugging, I tried the same URL from my local Rails console and it gives me the same error.

2.2.2 :008 > cp = "http://ift.tt/1Rcp67D"
2.2.2 :008 > f = File.open(cp, "r")
Errno::ENOENT: No such file or directory @ rb_sysopen - http://ift.tt/PsQMbo

I also tried open-uri: http://ift.tt/1knxXTX.

I can download the same file from the browser.

Can anyone let me know how to resolve this error? Is there a bucket permission issue? (I have already provided open access to the bucket.)




Is EC2Config Service is supposed to present in instance launched from custom created AMI?

I am trying to create AMIs with my application installed for Windows Server 2008 and Windows Server 2012. For this purpose, I have followed the steps below for both:

  1. Launch the required instance from the available instances, for example Windows 2008 Server base.
  2. Once the instance is up and running, I check the version of the EC2Config service. If a new update is available, I update it.
  3. I turn on 'Automatic Windows Update' and install the updates.
  4. Then I install my application and make the required changes.
  5. Now my machine is ready. As the last step, I start the EC2Config Service wizard.
  6. In the EC2Config wizard, on the 'Image' tab, I enable "Random" under "Administrator password".
  7. Then I click the button "Shutdown with Sysprep".
  8. With the Sysprep config done, the machine shuts down and I create an image of it. Let's say the image name is 'W2K8-Image'.

Now, my question is: when I create a new instance from the image 'W2K8-Image' and launch it, the EC2Config service is still present. Is it supposed to be present on this instance? If not, what settings need to be made to remove it while creating the AMI?




Using AWS IoT to offer a service to third parties

I would like to leverage AWS IoT to offer a service to my customers. Customers can be both "thing" owners and data consumers. The added value is the computation done on the platform. I would like to implement a REST API to let users register their own things and to maintain the association between customers and things, but I don't want the things to need my Amazon account in order to push data. Is such a scenario possible?




Amazon S3 File upload in Yii2

I am using Amazon S3 with the Yii2 framework.

Right now I am getting this error:

{
  "name": "PHP Notice",
  "message": "Undefined property: stdClass::$body",
  "code": 8,
  "type": "yii\base\ErrorException",
  "file": "/var/www/html/project_name/api/modules/v1/models/S3.php",
  "line": 880,
  "stack-trace": [
    "#0 /var/www/html/project_name/api/modules/v1/models/S3.php(880): yii\base\ErrorHandler->handleError(8, 'Undefined prope...', '/var/www/html/r...', 880, Array)",
    "#1 [internal function]: api\modules\v1\models\S3Request->__responseWriteCallback(Resource id #4, '<?xml version=\"...')",
    "#2 /var/www/html/project_name/api/modules/v1/models/S3.php(833): curl_exec(Resource id #4)",
    "#3 /var/www/html/project_name/api/modules/v1/models/S3.php(186): api\modules\v1\models\S3Request->getResponse()",
    "#4 /var/www/html/project_name/api/modules/v1/models/Webservice.php(105): api\modules\v1\models\S3->putBucket('nestimages', 'public-read')",
    "#5 /var/www/html/project_name/api/modules/v1/controllers/WsController.php(1066): api\modules\v1\models\Webservice->createNests3(Array)",
    "#6 [internal function]: api\modules\v1\controllers\WsController->actionCreatenests3()",
    "#7 /var/www/html/project_name/vendor/yiisoft/yii2/base/InlineAction.php(55): call_user_func_array(Array, Array)",
    "#8 /var/www/html/project_name/vendor/yiisoft/yii2/base/Controller.php(151): yii\base\InlineAction->runWithParams(Array)",
    "#9 /var/www/html/project_name/vendor/yiisoft/yii2/base/Module.php(455): yii\base\Controller->runAction('createnests3', Array)",
    "#10 /var/www/html/project_name/vendor/yiisoft/yii2/web/Application.php(84): yii\base\Module->runAction('v1/ws/createnes...', Array)",
    "#11 /var/www/html/project_name/vendor/yiisoft/yii2/base/Application.php(375): yii\web\Application->handleRequest(Object(yii\web\Request))",
    "#12 /var/www/html/project_name/api/web/index.php(18): yii\base\Application->run()",
    "#13 {main}"
  ]
}

I am using this demo: Demo

How can I resolve this error? Comment if any further information is required.




AWS Crash Issue and the website now does not work

We hosted our website on an AWS EC2 instance. The instance is a t2.micro and is in the running state.

I am able to SSH into the server. The Apache, FTP, MySQL, and Postfix servers are running and the daemons are listening. I've checked using these commands:

netstat -tnlp
ps -auxf

800 MB of memory is free out of 1024 MB of RAM (so plenty).

I just disabled SSL, OPcache, and Varnish and removed their packages, but not their dependencies. I restarted the Apache and MySQL services, and then a memory leak happened. I could not execute any command, so I had to force a reboot via the AWS Management Console. After the instance restarted I was able to SSH into it; I restarted all services and they are now up and running.

Unfortunately, when I visit the site, it throws an error saying the connection could not be established. I have rechecked the IPs both in the A record of GoDaddy's DNS and in the AWS console. Our instance uses an Elastic IP, so it is persistent across reboots.




Ansible: Create new RDS DB from last snapshot of another DB

The promote command does not work in Ansible, so I am trying to create a new DB as a replica of an existing one and, after making it the master, delete the source DB.

I was trying to do it like this:

  1. Make replica
  2. Promote replica
  3. Delete source db

But now I am thinking of this:

  1. Create new db from source db last snapshot [as master from the beginning]
  2. Delete the source db

How would that playbook go?
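For reference, here is a boto3 sketch (not Ansible) of the restore-from-latest-snapshot-then-delete flow described above; the identifiers are the ones from the playbook below, and the waiter and flags are assumptions to double-check before use:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")
    source, new = "stagingdb", "stagingdb2"

    # 1. Find the most recent snapshot of the source instance
    snaps = rds.describe_db_snapshots(DBInstanceIdentifier=source)["DBSnapshots"]
    latest = max(snaps, key=lambda s: s["SnapshotCreateTime"])

    # 2. Restore a brand-new (master) instance from it and wait until it is available
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=new,
        DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    )
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=new)

    # 3. Delete the source instance
    rds.delete_db_instance(DBInstanceIdentifier=source, SkipFinalSnapshot=True)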

My playbook:

 - hosts: localhost
   vars:
     source_db_name: "{{ SOURCE_DB }}" # stagingdb
     new_db_name: "{{ NEW_DB  }}" # stagingdb2
   tasks:
   - name: Make RDS replica
     local_action:
       module: rds
       region: us-east-1
       command: replicate
       instance_name  : "{{ new_db_name  }}"
       source_instance: "{{ source_db_name  }}"
       wait: yes
       wait_timeout: 900 # wait 15 minutes

# Notice - not working [Ansible bug]
   - name: Promote RDS replica
     local_action:
       module: rds
       region: us-east-1
       command: promote
       instance_name: "{{ new_db_name }}" # stagingdb2
       backup_retention: 0
       wait: yes
       wait_timeout: 300

   - name: Delete source db
     local_action:
       command: delete
       instance_name: "{{ source_db_name }}"
       region: us-east-1




Why is the decisionTask not receiving any task from the AWS SWF service?

I am using Node.js for the backend. I tried this npm package to create a simple workflow (Amazon SWF). The package has an example folder containing files which I put into my Node project so that I could understand how it works.

The problem is that the decider is not receiving any task from the SWF server, because of which my workflow never runs. Is there some configuration problem? Please point out what errors I have made.

Below is the code for quick reference. The only changes from the original code are the version number and the domain name; otherwise it is the same code you can find here.

Following is the decider code.

var swf = require('./index');

var myDecider = new swf.Decider({
   "domain": "test-domain",
   "taskList": {"name": "my-workflow-tasklist"},
   "identity": "Decider-01",
   "maximumPageSize": 100,
   "reverseOrder": false // IMPORTANT: must replay events in the right order, ie. from the start
});

myDecider.on('decisionTask', function (decisionTask) {

    console.log("Got a new decision task !");

    if(!decisionTask.eventList.scheduled('step1')) {
        decisionTask.response.schedule({
            name: 'step1',
            activity: 'simple-activity'
        });
    }
    else {
        decisionTask.response.stop({
          result: "some workflow output data"
        });
    }

    decisionTask.response.respondCompleted(decisionTask.response.decisions, function(err, result) {

      if(err) {
          console.log(err);
          return;
      }

      console.log("responded with some data !");
    });

});

myDecider.on('poll', function(d) {
    //console.log(_this.config.identity + ": polling for decision tasks...");
    console.log("polling for tasks...", d);
});

// Start polling
myDecider.start();



/**
 * It is not recommanded to stop the poller in the middle of a long-polling request,
 * because SWF might schedule an DecisionTask to this poller anyway, which will obviously timeout.
 *
 * The .stop() method will wait for the end of the current polling request, 
 * eventually wait for a last decision execution, then stop properly :
 */
process.on('SIGINT', function () {
   console.log('Got SIGINT ! Stopping decider poller after this request...please wait...');
   myDecider.stop();
});

Following is activity code:

/**
 * This simple worker example will respond to any incoming task
 * on the 'my-workflow-tasklist, by setting the input parameters as the results of the task
 */

var swf = require('./index');

var activityPoller = new swf.ActivityPoller({
    domain: 'test-domain-newspecies',
    taskList: { name: 'my-workflow-tasklist' },
    identity: 'simple-activity'
});

activityPoller.on('error',function() {
    console.log('error');
});

activityPoller.on('activityTask', function(task) {
    console.log("Received new activity task !");
    var output = task.input;

    task.respondCompleted(output, function (err) {

        if(err) {
            console.log(err);
            return;
        }

        console.log("responded with some data !");
    });
});


activityPoller.on('poll', function(d) {
    console.log("polling for activity tasks...", d);
});

activityPoller.on('error', function(error) {
    console.log(error);
});


// Start polling
activityPoller.start();


/**
 * It is not recommanded to stop the poller in the middle of a long-polling request,
 * because SWF might schedule an ActivityTask to this poller anyway, which will obviously timeout.
 *
 * The .stop() method will wait for the end of the current polling request, 
 * eventually wait for a last activity execution, then stop properly :
 */
process.on('SIGINT', function () {
   console.log('Got SIGINT ! Stopping activity poller after this request...please wait...');
   activityPoller.stop();
});

Following is the code which registers:

var awsswf = require('./index');
var swf = awsswf.createClient();
/**
 * Register the domain "test-domain"
 */
swf.registerDomain({
    name: "test-domain-newspecies",
    description: "this is a just a test domain",
    workflowExecutionRetentionPeriodInDays: "3"
}, function (err, results) {

    if (err && err.code != 'DomainAlreadyExistsFault') {
        console.log("Unable to register domain: ", err);
        return;
    }
    console.log("'test-domain-newspecies' registered !")


    /**
     * Register the WorkflowType "simple-workflow"
     */
    swf.registerWorkflowType({
        domain: "test-domain-newspecies",
        name: "simple-workflow",
        version: "2.0"
    }, function (err, results) {

        if (err && err.code != 'TypeAlreadyExistsFault') {
            console.log("Unable to register workflow: ", err);
            return;
        }
        console.log("'simple-workflow' registered !")

        /**
         * Register the ActivityType "simple-activity"
         */
        swf.registerActivityType({
            domain: "test-domain-newspecies",
            name: "simple-activity",
            version: "2.0"
        }, function (err, results) {

            if (err && err.code != 'TypeAlreadyExistsFault') {
                console.log("Unable to register activity type: ", err);
                return;
            }

            console.log("'simple-activity' registered !");
        });

    });

});

Following is the code which starts the workflow execution:

var swf = require('./index');

var workflow = new swf.Workflow({
   "domain": "test-domain-newspecies",
   "workflowType": {
      "name": "simple-workflow",
      "version": "2.0"
   },
   "taskList": { "name": "my-workflow-tasklist" },
   "executionStartToCloseTimeout": "1800",
   "taskStartToCloseTimeout": "1800",
   "tagList": ["example"],
   "childPolicy": "TERMINATE"
});


var workflowExecution = workflow.start({ input: "any data ..."}, function (err, runId) {

   if (err) { console.log("Cannot start workflow : ", err); return; }

   console.log("Workflow started, runId: " +runId);

});

Following is index.js file

var basePath = "../node_modules/aws-swf/lib/";
exports.AWS = require('aws-swf').AWS;
exports.AWS.config.loadFromPath(__dirname + '/../config/awsConfig.json');
exports.createClient = require(basePath+"swf").createClient;
exports.Workflow = require(basePath+"workflow").Workflow;
exports.WorkflowExecution = require(basePath+"workflow-execution").WorkflowExecution;
exports.ActivityPoller = require(basePath+"activity-poller").ActivityPoller;
exports.ActivityTask = require(basePath+"activity-task").ActivityTask;
exports.Decider =  require(basePath+"decider").Decider;
exports.DecisionTask =  require(basePath+"decision-task").DecisionTask;
exports.EventList = require(basePath+"event-list").EventList;
exports.DecisionResponse = require(basePath+"decision-response").DecisionResponse;
exports.Poller = require(basePath+"poller").Poller;




Boto rds.get_all_dbinstances returning empty list

I'm trying to obtain a list of all my AWS RDS instances using Python's boto module. I can obtain the instance metadata fine, and I use it to build the connection on which I call get_all_dbinstances(), but I keep getting [] returned.

from boto.utils import get_instance_metadata
import boto.rds

# This works fine    
m =  get_instance_metadata()

access_key = m[****]
secret_key = m[****]
security_token = m[****]

try:
# And this works fine
    region_object = boto.rds.connect_to_region("eu-central-1", aws_access_key_id=access_key, aws_secret_access_key=secret_key)

except ValueError, e:
    print "Error: {0}".format(e.message)

rds = boto.connect_rds(aws_access_key_id = access_key, aws_secret_access_key = secret_key, security_token = security_token, region=region_object.region)
instances = rds.get_all_dbinstances()
print instances

>> []
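For comparison, here is a minimal sketch that lists instances straight off the region connection, which takes the second boto.connect_rds call out of the picture (same credentials as above):

    import boto.rds

    # Query the eu-central-1 connection directly instead of opening a second
    # connection through boto.connect_rds.
    conn = boto.rds.connect_to_region(
        "eu-central-1",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        security_token=security_token,
    )
    print(conn.get_all_dbinstances())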




wurfl on elastic beanstalk

I have set up a PHP environment on Elastic Beanstalk and am uploading code through the eb deploy command. The EC2 instance is created by the auto-scaling group and I can connect to it using PuTTY. I want to install WURFL on EC2, and I want to know: 1) if I install WURFL on the currently running EC2 instance, will it get installed on other instances created by the auto-scaling group? 2) If not, how can I install WURFL on Beanstalk?




Getting started with AWS sdk in C#

I'm trying to start using AWS with Visual Studio 2015. Since the suggested way to use the SDK is to install packages with NuGet, I guess that way I will lose the AWS project templates. I don't know if there is some reason I really need these templates, or if the NuGet packages are enough. The reason I don't install the MSI: it seems to break my Visual Studio (after trying to install it, I need to repair the setup).




Timeout error for AWS SNSClient Publish request

Here is the piece of code :

                        //Publishing the topic
                        snsClient.Publish(new PublishRequest
                        {
                            Subject = Constants.SNSTopicMessage,
                            Message = snsMessageObj.ToString(),
                            TopicArn = Settings.TopicArn
                        });

I am getting the error below:

The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.

And here is a screenshot of the detailed error: [screenshot omitted]

But I am not able to work out how to solve this. Any hint or link would be helpful.




How do I plan for disaster recovery for my AWS account?

I was thinking: what if my AWS account gets deleted or becomes inaccessible one fine day? (It may sound weird.) Has anyone implemented a solution for this? Can we have a backup from one AWS account to another AWS account?




How to subscribe multiple device token at one time in AWS SNS

I am using AWS SNS for sending push notifications to Android and iOS devices. Can anyone tell me how to subscribe multiple device tokens at the same time (not one by one)?

I already have code for one-by-one subscription of a device token:

CreateTopicRequest createTopicRequest = new CreateTopicRequest("MyNewTopic");
    CreateTopicResult createTopicResult = snsClient.createTopic(createTopicRequest);

    //print TopicArn
    System.out.println(createTopicResult);
    //get request id for CreateTopicRequest from SNS metadata
    System.out.println("CreateTopicRequest - " +  snsClient.getCachedResponseMetadata(createTopicRequest));

    String topicArn = createTopicResult.getTopicArn()

    CreatePlatformApplicationResult platformApplicationResult
    String platformApplicationArn = ''

    long now = System.currentTimeMillis()
    println("----before-subscription-------" + now)

    platformTokens.each { String platformToken ->

        try {
            platformApplicationResult = createPlatformApplication(applicationName, platform, principal, credential);
        } catch (Exception e) {
            System.out.println("----------------" + e)
        }

        platformApplicationArn = platformApplicationResult.getPlatformApplicationArn();
        CreatePlatformEndpointResult platformEndpointResult = createPlatformEndpoint(platform, "CustomData - Useful to store endpoint specific data", platformToken, platformApplicationArn);

        String endpointArn = platformEndpointResult.getEndpointArn()
    // System.out.println("------------endpointArn--------------" + endpointArn);

      //subscribe to an SNS topic
        SubscribeRequest subRequest = new SubscribeRequest(topicArn, "application", endpointArn);
        snsClient.subscribe(subRequest);
    }

Subscribing one by one takes a lot of time for more than 100 device tokens, so please help me solve this.

Thanks




mercredi 14 octobre 2015

AWS RDS painfully slow when connecting from local machine

I have an AWS RDS instance up and running. When the DB is queried from my website (also on AWS, in the same region) it runs beautifully. But if I try to connect to the database from my local development machine, it takes AGES for any query to execute. Does anyone know why? I have opened up the security group to allow connections while I try to connect from the local machine.




SNS SQS fanout architecture

Looking at the documentation for this pattern it says

The Amazon SQS message contains the subject and message that were published to the topic along with metadata about the message in a JSON document

So when I publish to an SNS topic, the only properties that are forwarded are the subject of the notification and the default parameter? Does this mean that if I want to send JSON to my queues I have to stringify it and set it as the default parameter of the notification?
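
To make the question concrete, this is roughly how I publish today (a boto3 sketch with a made-up topic ARN and payload); the queue then receives the SNS JSON envelope, and my payload only appears inside its "Message" field as a string:

    # Sketch only: boto3, hypothetical topic ARN and payload.
    import json
    import boto3

    sns = boto3.client("sns")
    payload = {"order_id": 42, "status": "shipped"}

    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:orders",
        Subject="order-update",
        # The queue consumer has to json.loads() the envelope's "Message" field.
        Message=json.dumps(payload),
    )

(MessageStructure="json" with a "default" key seems to be the per-protocol variant of the same idea.)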




Processing 1000 messages/sec over web sockets

I need a suggestion on server setup. I am using Java WebSockets and the server will have to send 1000 messages/sec to its connected users. Basically, 100 users will be connected to the endpoint and each user will own a thread on the server. That means 100 threads, and each thread will send 10 messages/sec, i.e. 1000 messages/sec in total.

Can you help me which specification of the Amazon AWS server can handle it?

Thanks




intermittent cURL errors in PHP API

Using PHP API v2

I am getting intermittent cURL errors coming through

[07-Oct-2015 03:46:27 UTC] PHP Fatal error:  Uncaught exception 'Guzzle\Http\Exception\CurlException' with message '[curl] 77:  [url] http://ift.tt/1jBNgxh' in /var/app/current/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php:359

Stack trace:
#0 /var/app/current/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php(292): Guzzle\Http\Curl\CurlMulti->isCurlException(Object(Guzzle\Http\Message\EntityEnclosingRequest), Object(Guzzle\Http\Curl\CurlHandle), Array)

#1 /var/app/current/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php(257): Guzzle\Http\Curl\CurlMulti->processResponse(Object(Guzzle\Http\Message\EntityEnclosingRequest), Object(Guzzle\Http\Curl\CurlHandle), Array)

#2 /var/app/current/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php(240): Guzzle\Http\Curl\CurlMulti->processMessages()

#3 /var/app/current/vendor/guzzle/guzzle/src/Guzzle/Http/Curl/CurlMulti.php(224): Guzzl in /var/app/current/vendor/aws/aws-sdk-php/src/Aws/Common/Client/AbstractClient.php on line 256

According to some docs, cURL error 77 is an SSL certificate issue, but I am wondering why this would fail intermittently.

I wondered whether the EC2 instance was unable to access S3 due to some networking issue.

Is there something I need to do to prevent this from continuing to happen?




Domain name not showing up in DNS

It's been a couple of days since I transferred my domain name from one AWS account to another (dev environment to production). The problem is, the domain name isn't showing up in any DNS (Amazon or Google). I'm pretty sure I've configured the hosted zone correctly.

I'm also trying to verify SES, which is failing, and I also set MX records (Gmail) which don't work. The MX records and SES were set up a couple of days ago. Additionally, I created an A record pointing to an Elastic Load Balancer DNS name.

Any suggestions on what might be the problem? It's been a couple of days, and from past Stack Overflow posts as well as past experience, DNS propagation on Amazon's servers doesn't take more than 15 minutes.




Best practice for obtaining the credentials when executing a Redshift copy command

What's the best practice for obtaining the AWS credentials needed for executing a Redshift copy command from S3? I'm automating the ingestion process from S3 into Redshift by having a machine trigger the Copy command.

I know it's recommended to use IAM roles on EC2 hosts so that you do not need to store AWS credentials. How would that work, though, with the Redshift COPY command? I do not particularly want the credentials in the source code. Similarly, the hosts are provisioned by Chef, so if I wanted to set the credentials as environment variables they would be visible in the Chef scripts.
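
Would something like this be the sanctioned approach? A rough sketch (boto3 + psycopg2, with hypothetical table, bucket and cluster names) of pulling the temporary credentials that the instance's IAM role provides and handing them to COPY, so nothing lives in source code or Chef:

    # Sketch only: assumes the EC2 host has an instance role with S3 read access.
    import boto3
    import psycopg2

    # boto3 resolves the role's temporary credentials from instance metadata.
    creds = boto3.Session().get_credentials().get_frozen_credentials()

    copy_sql = (
        "COPY events FROM 's3://my-ingest-bucket/events/' "
        "CREDENTIALS 'aws_access_key_id={};aws_secret_access_key={};token={}';"
    ).format(creds.access_key, creds.secret_key, creds.token)

    conn = psycopg2.connect(host="my-cluster.xxxx.us-east-1.redshift.amazonaws.com",
                            port=5439, dbname="analytics", user="loader",
                            password="...")
    with conn:
        with conn.cursor() as cur:
            cur.execute(copy_sql)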




Usage of Identity Federation

There are some things that confuse me about identity federation (OpenID Connect). If I am going to integrate an identity provider, then there is no need to write login logic, no need to create a user table, and no need to manage the user info, is that right? The identity provider will provide those functions, and my resource API will be protected as a resource server on my side?

But I found that lots of apps offer third-party login mechanisms such as Google, Twitter and Facebook, and they just fetch an ID token from the IdP and create a new user in their own identity system. So, is that a misuse of the concept of identity federation?

By the way, it mostly seems that logging in through a third-party identity system is not well accepted when a company builds its own apps; the company wants an identity system of its own. So, is there any way to simply make my own IdP? Does Amazon have a similar service?

Thanks all!




Update SSL on AWS EC2 Ubuntu

I am trying to update the SSL certificate for a client on AWS using EC2. There are 3 instances, and after much mucking about and changing the private keys needed to access them, I was finally able to connect through PuTTY. They are all Ubuntu instances.

  • I've tried to follow these instructions but I cannot find where to add this info.
  • I've grepped for VirtualHost and nothing comes back except a readme file that doesn't help.
  • There are no /etc/apache2 directories on any of the instances.
  • All three instances have /usr/lib/ssl/certs and /etc/ssl/certs.

My questions:

  1. What webserver is actually being used?
  2. Where can I find the config file to update the location of the SSL certificates?
  3. Where would I store the certificate files on the server?
  4. How do I know which instance is actually running the website?



Cloudfront and EC2

How do you setup Cloudfront in front of an EC2 instance? I'm interested in having users hit the Cloudfront url rather than the EC2 origin.

So instead of hitting ec2-52-64-xxx-xxx.ap-southeast-2.compute.amazonaws.com users would hit d111111abcdef8.cloudfront.net.

My intention is to save money on hosting by reducing the traffic and CPU load on the EC2 instance, while providing overseas users with faster load times.

Would I just point my DNS to the Cloudfront url instead of the EC2 origin?




Catching AWS EC2 Instance IPs dynamically

How can I catch a few AWS EC2 instance IPs and put them into a script variable when they are generated randomly and automatically every time? I was trying to do it with echo "$(curl http://ift.tt/1jpjrkf) master" >> /etc/hosts, but that only gives the IP of one of them. I was also trying with aws ec2 describe-instances ..., but I don't know how to separate the plain IPs from the other information. Any suggestions with awk/sed?
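
For example, would something along these lines be more robust than awk/sed? A sketch (boto3, with a hypothetical tag name and value) that collects the public IPs of the matching instances:

    # Sketch only: filter instances by a tag and print their public IPs.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Role", "Values": ["master"]},
                 {"Name": "instance-state-name", "Values": ["running"]}])

    ips = [instance.get("PublicIpAddress")
           for reservation in resp["Reservations"]
           for instance in reservation["Instances"]
           if instance.get("PublicIpAddress")]
    print(" ".join(ips))

(The CLI can apparently do the same filtering itself with --query 'Reservations[].Instances[].PublicIpAddress' --output text, which would avoid awk/sed entirely.)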




Finding a simple set up to use Cloudfront with a node.js express app

I am trying to set up Cloudfront in my nodeJS Express app (using Jade as a rendering engine) with the following requirements:

  • Only use Cloudfront on production server (not on localhost or staging server)
  • Not using a conditional statement on all scripts and css in the jade templates
  • Trying to find a one or two liner in case I need to switch to a different provider

I came up with one possible solution: overriding how Jade renders links and scripts and adding the CloudFront URL only in production:

Jade.override("link", function() {
  if (prod) link.src = cloudfront_url + link.src;
});

However, Jade doesn't allow overriding any of its functions. Does anybody know an easy way to use CloudFront in a Node app?




Accessing AWS S3 folder name in unicode using AWS command line util

Due to my ignorance, I named a folder on S3 in Unicode a few years ago. I'm able to list objects without spaces easily, but I can't access any files/folders with spaces. I've tried escaping the space with a backslash, but it didn't work.

Example folder path:

http://s3my-folder/a-thing/إلى آخره

Command looks like:

aws s3 ls http://s3my-folder/a-thing/إلى\ آخره




awslogs.config forwarding some logs but not other

On my EC2 RHEL instance I have the following awslogs.config in my /var/awslogs/etc directory. I'll cut out the top part and get right to the logging section in this snippet.

[/opt/apache-tomcat-8.0.26/logs/PND.log]
datetime_format = %Y-%m-%d %H:%M:%S
file = /opt/apache-tomcat-8.0.26/logs/PND.log
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = /opt/apache-tomcat-8.0.26/logs/PND.log
[/var/log/secure]
datetime_format = %Y-%m-%d %H:%M:%S
file = /var/log/secure
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = /var/log/secure
[/var/log/messages]
datetime_format = %b %d %H:%M:%S
file = /var/log/messages
buffer_duration = 5000
log_stream_name = {instance_id}
initial_position = start_of_file
log_group_name = /var/log/messages

Logs from /var/log/messages and /var/log/secure are making it to the AWS log console for CloudWatch but logs from /opt/apache-tomcat-8.0.26/logs/PND.log are not. My REST service is running on Tomcat.

When I ssh to the server I can see log entries streaming into /opt/apache-tomcat-8.0.26/logs/PND.log but nothing is showing up on AWS, however, from that same instance I can see all the log entries from messages and secure.

I checked the awslogs.log file and there are no "No file is found with given path" errors for "/opt/apache-tomcat-8.0.26/logs/PND.log", which makes me think it can find it. If I grep the file, I get the following entries:

2015-10-14 16:33:12,585 - cwlogs.push.stream - INFO - 938 - Thread-1 - Starting reader for [xxxxxxxxxxx, /opt/apache-tomcat-8.0.26/logs/PND.log]

So if the file can be read, why am I not seeing log entries?

I was wondering whether, because the log group already existed for another instance, that somehow blocks the new entries from the new instance, but that doesn't make sense to me. Instances should be able to share groups, which is why we can view streams by instance ID.




Error using bees with machineguns

I have tried to use beeswithmachineguns for load testing a site, but with little success. There is a similar post (Bees with machine gun using Amazon free tier), but the error I receive is slightly different.

I use the following to start the bees.
bees up -s 1 -k Bees-West -g SWARM

The error I am getting is related to the groupId being empty but I am passing that value in with -g SWARM... or so I thought. My setup uses the us-west-2 region, which I have in my .boto file as noted in the linked post.

Connecting to the hive.
Attempting to call up 1 bees.
Traceback (most recent call last):
  File "/usr/local/bin/bees", line 5, in <module>
    main.main()
  File "/Library/Python/2.7/site-packages/beeswithmachineguns/main.py", line 127, in main
    parse_options()
  File "/Library/Python/2.7/site-packages/beeswithmachineguns/main.py", line 111, in parse_options
    bees.up(options.servers, options.group, options.zone, options.instance, options.login, options.key)
  File "/Library/Python/2.7/site-packages/beeswithmachineguns/bees.py", line 104, in up
    placement=zone)
  File "/Library/Python/2.7/site-packages/boto/ec2/connection.py", line 618, in run_instances
    return self.get_object('RunInstances', params, Reservation, verb='POST')
  File "/Library/Python/2.7/site-packages/boto/connection.py", line 699, in get_object
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
InvalidParameterValue: Value () for parameter groupId is invalid. The value cannot be empty (request ID bd1c6bcb-875f-4294-af64-84853b5b258a)




Why is Cloudfront serving me from datacenter far from me?

I'm located 1000 miles from Singapore. I use S3 in Singapore region with CloudFront to serve data.

When I download content, CloudFront serves me from a US (Washington) server, judging by the IP addresses.

Why doesn't it serve from Singapore instead?




AWS S3 website stopped working or being accessible, most probably DNS

I created a website on AWS S3 and configured everything. Then I added the domain to Cloudflare and added a CNAME DNS record for the subdomain docs.****.com. Everything was working fine. But my main domain is not used anywhere yet, and Cloudflare created a lot of DNS records. I deleted them all and only left the ones shown in the screenshot.

My other domains work with this minimum of DNS records. I believe the only record that is needed is the first CNAME that links to S3.

Why did it stop working?




Cron jobs and daylight savings with EBS Worker and SQS

I have various business-level counters that get reset daily, weekly and monthly at midnight in my local timezone (PST/PDT), so they need to allow for daylight saving time.

However, my EBS worker instance that loads the jobs into SQS from cron.yaml is executing them in UTC despite the instance being configured as America/Los_Angeles.




Database trouble deploying django app to amazon beanstalk

I tried to deploy my Django application to AWS beanstalk. For this I was following this tutorial: http://ift.tt/1QLXiYn

However, when I deploy the app using eb create, I get a MySQL error:

Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock

I am guessing this is because I haven't done any database configuration (the document above doesn't mention a configuration step even once). So, I tried to add an RDS database using this guide: http://ift.tt/1K9bmoC.

Now, I am getting the error that

RDS requires that you have a subnet selected in at least two Availability Zones.

When I tried to create these subnets, other issues involving VPNs etc. cropped up. Can someone please help me get a simple Django app up and running on AWS? Thanks!
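
For what it's worth, the settings block I've seen suggested once an RDS instance is attached to the Beanstalk environment looks roughly like this (a sketch; it assumes the environment exposes the RDS_* variables and that the engine is MySQL):

    # settings.py sketch: Elastic Beanstalk injects these RDS_* variables
    # when an RDS instance is attached to the environment (assumption: MySQL).
    import os

    if "RDS_HOSTNAME" in os.environ:
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.mysql",
                "NAME": os.environ["RDS_DB_NAME"],
                "USER": os.environ["RDS_USERNAME"],
                "PASSWORD": os.environ["RDS_PASSWORD"],
                "HOST": os.environ["RDS_HOSTNAME"],
                "PORT": os.environ["RDS_PORT"],
            }
        }

Is that enough, or does the subnet error above have to be fixed first regardless?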




Retrieve public IP of another EC2 instance in Saltstack formula

I'm setting up VPN nodes with CloudFormation and provisioning them with SaltStack. Let's call them the left and right node. When provisioning the left node I need to know the public IP of the right node, and vice versa. Is there a way to retrieve the IP of another EC2 instance in a SaltStack formula? Both instances have tags associated with them.

Or is there a different way to achieve this? I would just like to avoid any hardcoding.




How to set up Redis cache clusters with Zend Framework 2

How do I set up Redis cache clusters on Amazon AWS using Zend Framework 2? Can you help me?




Leverage browser caching aws s3 and cloud front

Google PageSpeed is complaining about my browser cache, but at the same time it's also showing me that the resources do seem to have an expiry time (60 minutes and 2 hours).

This is what Google PageSpeed shows me (first screenshot), and this is what I'm seeing in my S3 bucket (second screenshot).

Page Speed Link




Form submit issues with Squarespace site using Amazon CDN

I have a Squarespace site which I put behind the Amazon Cloudfront CDN for added security. However, the forms on the site don't submit anymore. The submit button simply blinks, and the form does not send. The following console errors are logged:

http://ift.tt/1k4PNRn Failed to load resource: the server responded with a status of 403 (Forbidden) http://ift.tt/1jzi55G Failed to load resource: the server responded with a status of 403 (Forbidden) http://ift.tt/1k4PNRn Failed to load resource: the server responded with a status of 403 (Forbidden)

Any idea what's going on and a work-around? Thanks!




Rails aws dynamo insert into database

I use this code to insert into DynamoDB:

require "aws"

AWS.config(
    access_key_id: 'xxxxxxxxxxxxxxxxxx',
    secret_access_key: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    region: 'eu-west-1'
)

dynamo_db = AWS::DynamoDB.new


table = dynamo_db.tables['mytable']
table.hash_key = [:string, :string]

# add an item
table.items.create(id: '12345', 'foo' => 'bar')

Everything is OK and the data is inserted, but I still get this error:

missing hash key value in put_item response

What did I miss? According to their documentation this seems to be fine.




Laravel Forge Server Network Not Connected?

Context

I've set up two servers in AWS via Forge. One is a web server, the other a database server.

I've allowed each server to connect to the other via each server's Network tab. The default firewall rules open up ports HTTP (80) and HTTPS (443).

Problem

When testing the app in the browser, the static pages work great, but any pages requiring database access hang for a bit, then nginx throws a 504. (I can connect to the database server via Sequel Pro, so my credentials are accurate.)




PHP hash_pbkdf2 takes orders of magnitude longer on AWS instances

We wanted to benchmark the hash_pbkdf2 function in PHP to select an appropriate number of iterations for our application to use.

When I ran my benchmark script on an m4.large AWS instance, it took five orders of magnitude longer to run than it does on my laptop.

This Gist shows the script I am using, and the results I get from an m4.large instance (under load), a t2.micro instance (with full CPU credits and no load) and three different speed Intel i7 laptops.

http://ift.tt/1X3bI9J

You can see the 100,000 iterations take <200ms on the i7 laptops, but just a single iteration takes that long on the AWS instances.

I've included the PHP versions, and an OpenSSL benchmark that shows sha256 taking a comparable amount of time on both the AWS instances and an i7 laptop (and a rudimentary benchmark of the PHP hash function also showed this) - suggesting it is specifically related to the PBKDF2 operation.

What causes this to happen, and how can I speed it up?




Using Redis behind an AWS load balancer

We're using Redis to collect events from our web application (using Redis pub/sub) behind an AWS ELB. We're looking for a solution that will give us scale-up and high availability across the different servers. We do not wish to have these two servers in a Redis cluster; our plan is to monitor them using CloudWatch and switch between them if necessary.

We tried a simple test: placing two Redis servers behind the ELB, telnetting to the ELB DNS name and watching what happens with 'redis-cli monitor', but we don't see anything. (When trying the same without the ELB it seems fine.)

Any suggestions?

Thanks




Why some services are called "AWS XXX" and the others "Amazon XXX"

I notice that some Amazon Web Services are called "AWS XXX" (for example AWS Lambda) and others are called "Amazon XXX" (for example Amazon Redshift). Why? Is there any difference?

Thank you.




Service discovery vs load balancing

I am trying to understand in which scenario I should pick a service registry over a load balancer.

From my understanding both solutions are covering the same functionality.

For instance if we consider consul.io as a feature list we have:

  • Service Discovery
  • Health Checking
  • Key/Value Store
  • Multi Datacenter

Where a load balancer like Amazon ELB for instance has:

  • configurable to accept traffic only from your load balancer
  • accept traffic using the following protocols: HTTP, HTTPS (secure HTTP), TCP, and SSL (secure TCP)
  • distribute requests to EC2 instances in multiple Availability Zones
  • The number of connections scales with the number of concurrent requests that the load balancer receives
  • configure the health checks that Elastic Load Balancing uses to monitor the health of the EC2 instances registered with the load balancer so that it can send requests only to the healthy instances
  • You can use end-to-end traffic encryption on those networks that use secure (HTTPS/SSL) connections
  • [EC2-VPC] You can create an Internet-facing load balancer, which takes requests from clients over the Internet and routes them to your EC2 instances, or an internal-facing load balancer, which takes requests from clients in your VPC and routes them to EC2 instances in your private subnets. Load balancers in EC2-Classic are always Internet-facing.
  • [EC2-Classic] Load balancers for EC2-Classic support both IPv4 and IPv6 addresses. Load balancers for a VPC do not support IPv6 addresses.
  • You can monitor your load balancer using CloudWatch metrics, access logs, and AWS CloudTrail.
  • You can associate your Internet-facing load balancer with your domain name.
  • etc.

So in this scenario I am failing to understand why I would pick something like consul.io or netflix eureka over Amazon ELB for service discovery.

I have a hunch that this might be due to implementing client side service discovery vs server side service discovery, but I am not quite sure.




Unable to connect to amazon EC2 instance via PuTTY

I created a new instance of Amazon EC2 in Amazon Web Services (AWS) by following this link. I even added an SSH rule like this:
Port: 22
Type: SSH
Source: <My IP address>/32

I downloaded the .pem file and converted it into a .ppk file using PuTTYgen. Then I added the host name in PuTTY like this:
ec2-user@<public_DNS>.
I selected the default settings, added that .ppk file to PuTTY, logged in, and I got the error shown in the attached screenshot. Even the troubleshooting link didn't help me.
How can I connect to my Amazon instance via PuTTY?




How to prevent brute force file downloading on S3?

I'm storing user images on S3 which are readable by default.

I need to access the images directly from the web as well.

However, I'd like to prevent hackers from brute forcing the URL and downloading my images.

For example, my S3 image url is at http://ift.tt/1jyOQAd

Can they brute-force the URLs and download all the contents?

I cannot set the items inside my buckets to private because I need to access them directly from the web.

Any idea how to prevent it?




mount /var to another disk partition + linux RHEL 6.5 (AWS Virtual box)

I tried to mount /var on a separate disk partition in RHEL 6.5 (an AWS virtual machine). Whenever I do this, SSH connectivity is lost with the error message "Network connection error".

I followed these steps to perform this activity:

SELINUX=disabled          # disable SELinux
mkfs.ext4 /dev/xvdb1      # format the drive
mkdir /mnt/var            # make the /var staging directory
mount /dev/sdb1 /mnt/var/
cp -axR /var/* /mnt/var/
mv /var/ /var.old
mkdir /var
umount /dev/sdb1
mount /dev/sdb1 /var

I made an fstab entry for the /var mount point: /dev/sdb1 /var ext3 defaults 0 0

After this change, if I open a new session or reboot the machine, SSH to the Linux server no longer works. Kindly provide your suggestions if I missed some steps.

Thanks in advance, Robin




Can't connect to S3 with PHP AWS SDK

Here is literally everything I have in my PHP script:

<?

require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Instantiate an Amazon S3 client.

$s3 = new S3Client([

    'version' => 'latest',

    'region'  => 'us-west-2'

]);

debug($s3);

?>

I don't know anything about $s3 because I can't get what the debug of $s3 prints... the page just keeps loading and never finishes. I've tried checking the docs and my S3 instance and didn't find anything about permissions except CORS permissions (which I have set), and I allowed all operations for my URL.

I know what I might be missing: code to specify the instance I'm connecting to. But again, I found absolutely nothing on how to do this using the AWS PHP SDK, and neither did I find API credentials or anything alike. What am I missing here exactly? Thank you very much for your help.




S3 giving someone permission to read and write

I've created an S3 bucket which contains a large number of images. I'm now trying to create a bucket policy which fits my needs. First of all, I want everybody to have read permission so they can see the images. However, I also want to give a specific website permission to upload and delete images; this website is not hosted on an Amazon server. How can I achieve this? So far I've created a bucket policy which enables everybody to see the images:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*"
        }
    ]
}
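
Would extending the policy with a second statement scoped to one principal be the right direction? A sketch (boto3; the bucket name and IAM user ARN are made up) where the external website would use that IAM user's access keys for uploads and deletes:

    # Sketch only: public read for everyone, write/delete for one IAM user.
    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicRead",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::examplebucket/*",
            },
            {
                "Sid": "SiteWrite",
                "Effect": "Allow",
                # Hypothetical IAM user whose keys the external website would use.
                "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-site"},
                "Action": ["s3:PutObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::examplebucket/*",
            },
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket="examplebucket",
                                         Policy=json.dumps(policy))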




insert data to AWS MySQL using Node.js code

I want to insert data from a Raspberry Pi 2 (which has a Node.js application running on it) into AWS MySQL. I am able to connect to the AWS MySQL instance using the command prompt and MySQL Workbench. Now I just want to push some data to AWS from a Node.js program.




boto3 aws api - Listing available instance types

Instance types: (t2.micro, t2.small, c4.large...) those listed here: http://ift.tt/1ek8FYG

I want to access a list of these through boto3, something like:

conn.get_all_instance_types()

or even

conn.describe_instance_types()['InstanceTypes'][0]['Name']

which is what everything seems to look like in this weird API.

I've looked through the docs for the client and ServiceResource, but I can't find anything that seems to come close. I haven't even found a hacky solution that lists something else that happens to represent all the instance types.

Anyone with more experience of boto3?
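
A sketch of the kind of call I'm hoping for (a paginated describe_instance_types; I'm assuming here that the EC2 API actually exposes it):

    # Sketch only: assumes the SDK/API provide DescribeInstanceTypes.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    names = []
    for page in ec2.get_paginator("describe_instance_types").paginate():
        names.extend(t["InstanceType"] for t in page["InstanceTypes"])
    print(sorted(names))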




AWS services that support Reserved capacity

What is the list of all AWS services that offer reserved capacity and, in turn, provide significant discounts on the service?




AWS Route 53 Subdomain works intermittently

I have some strange problem with AWS Route 53 subdomain.

My main domain points to an AWS EC2 instance. I created another EC2 instance to host my dev environment. I want to access my test instance from a subdomain.

To do that, I created A records for my EC2 test instance's Elastic IP with a 1-day TTL.

The really strange thing is that sometimes it works and sometimes it doesn't. Just check out the global propagation state of my subdomain and refresh the page (without cache) to see it change every second.

What am I missing?




AWS Java SDK - Method to copy the exact directory structure to local or HDFS

I want to write a java code to copy the exact directory structure (given a path) in a bucket including all files and sub-folders to some location in HDFS or the local file system.

I did some research, but all I could find was how to copy individual objects (files), not directory structures. HDFS has useful methods like hdfsFileSystem.copyToLocalFile to achieve this. Is something like this available in the S3 Java SDK?
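
Not the Java SDK, but to show the general approach I'm after, here is a sketch in Python/boto3 (hypothetical bucket, prefix and destination) that walks a prefix and recreates the relative directory structure locally; the same walk could presumably feed HDFS instead:

    # Sketch only: list every key under a prefix and mirror it to a local dir.
    import os
    import boto3

    s3 = boto3.client("s3")
    bucket, prefix, dest = "my-bucket", "data/2015/", "/tmp/data"

    paginator = s3.get_paginator("list_objects")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):
                continue  # skip zero-byte "folder" placeholder objects
            target = os.path.join(dest, os.path.relpath(key, prefix))
            target_dir = os.path.dirname(target)
            if target_dir and not os.path.isdir(target_dir):
                os.makedirs(target_dir)
            s3.download_file(bucket, key, target)

Is there an equivalent one-call helper in the Java SDK (something like TransferManager), or does it have to be hand-rolled like this?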




How to download orders from Amazon? Is there an API or are there files?

I got instructions for downloading the orders from Amazon for our products.

So now I am confused about how to do that; I searched a lot but got different answers from different websites. I just want to download orders from Amazon for our products.

And I don't have any user credentials for our store, so first I want to build a test application.




mardi 13 octobre 2015

AWS Tomcat server memory is too high

My application is running on Red Hat 6.0 with Tomcat 8. Sometimes memory usage hits 100% when my application is used by 30 users at a time.

Server configuration details:

Machine: EC2 m3.medium on AWS
RAM: 4 GB
Disk: 30 GB
Processors: Intel Xeon E5-2670 v2 (Ivy Bridge)

Is that normal? I am new to Java. What should I do to resolve it?




Can you send SNS push notification from lambda function in Amazon AWS?

I am developing an application with Amazon AWS, and what I am trying to achieve is to attach a Lambda function to a DynamoDB table so that the Lambda function is triggered after a new row is added. In that Lambda function I want to send a push notification with the Amazon SNS service, but I could not find any information in the documentation about whether that is possible and, if it is, what exactly needs to be done to get it working. What I did find in the documentation is that you can attach a Lambda function trigger to an SNS topic, which means that the function is called after a notification is pushed, but what I am interested in is sending a push notification directly from a Lambda function. I would appreciate it if someone could shed some light on this topic for me.
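
Concretely, is something like this supported inside the handler? A rough sketch (a Python-style handler with boto3; the ARNs are made up) of what I'd like the DynamoDB-triggered function to do:

    # Sketch only: publish to SNS from a Lambda handler fed by a DynamoDB stream.
    import boto3

    sns = boto3.client("sns")

    def handler(event, context):
        for record in event.get("Records", []):
            if record.get("eventName") != "INSERT":
                continue
            new_row = record["dynamodb"]["NewImage"]
            sns.publish(
                # TargetArn would be a platform endpoint ARN for one device;
                # TopicArn could be used instead to fan out to subscribers.
                TargetArn="arn:aws:sns:us-east-1:123456789012:endpoint/GCM/my-app/abc123",
                Message="New row added: {}".format(new_row),
            )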




How to associate Godaddy with AWS?

I have followed the tutorial and done everything.

  • I created an S3 bucket

  • I set up Route 53 and created the hosted zone

  • I bought a domain on GoDaddy and set the Elastic IP address

My questions:

  1. Forwarding: what should I type under Forwarding in the domain settings? I know that for the subdomain I should put the S3 bucket endpoint as the subdomain, but when I typed it, it said the length was exceeded. Or should I put it in FORWARD TO?

  2. Do I need to change the name servers? Some people say I should copy the 4 name servers from Route 53 and some people say I don't need to.

  3. ZONE FILE Host @: what should I put here? The domain name (www.example.com) or the S3 bucket endpoint?

  4. In AWS, my domain's hosted zone in Route 53 is empty. Is this right?

I've been working on this for 3 days and I would truly appreciate your help.




Logstash Unable to push logs to ES

I am collecting logs from an NXLog server and sending them to my Logstash server (an ELK stack on an AWS machine). It was able to send the logs to ES perfectly, but then it just stopped sending logs to ES, with the following errors:

{:timestamp=>"2015-10-13T06:41:25.526000+0000", :message=>"Got error to send bulk of actions: localhost:9200 failed to respond", :level=>:error}
{:timestamp=>"2015-10-13T06:41:25.531000+0000", :message=>"Failed to flush outgoing items", 
{:timestamp=>"2015-10-13T06:41:26.538000+0000", :message=>"Got error to send bulk of actions: Connection refused", :level=>:error}

Is this a security group issue, or something else?

Moreover, my Logstash output config looks like:

output {
  stdout { }
  elasticsearch { host => "localhost" protocol => "http" port => "9200" }
}




Amazon server suggestion: 100 users, 100 threads, 1000 pings/second

I need a suggestion about Amazon EC2. I have 100 simultaneous users; each user is connected to a Java WebSocket and owns a thread on the server (100 threads). 1000 pings/second are sent to the connected users, with each user's thread sending 10 pings. Can you please suggest which instance type and specs I need to purchase?

Thanks




Dynamodb gsi vs table

I am having trouble understanding the difference between a global secondary index and a table. Why would I use a global secondary index; why not just create another table? I have to specify read and write throughput for both. When a write occurs on a table with a GSI, I have to write to both the table and the index. My question then is: why not just create another table instead of a global secondary index? What benefit do I get by using a GSI?




Amazon AWS ec2 .htaccess does nothing [duplicate]

This question already has an answer here:

I've launched a new instance and installed Apache and PHP, then made a little .htaccess to redirect every request for index.php to test.php and to go to goback.php if a 404 error happens.

RewriteEngine on
RewriteRule ^index.php$    test.php [NC,L] 
ErrorDocument 404 /goback.php

However, it seems to be doing absolutely nothing! Is there a solution?




Can't close ElasticSearch index on AWS?

I've created a new AWS ElasticSearch domain, for testing. I use ES on a different host right now, and I'm looking to move to AWS.

One thing I need to do is set the mapping (analyzers) on my instance. In order to do this, I need to "close" the index, or else ES will just raise an exception.

Whenever I try to close the index, though, I get an exception from AWS:

Your request: '/_all/_close' is not allowed by CloudSearch.

The AWS ES documentation specifically says to do this in some cases:

 curl -XPOST 'http://ift.tt/1hDceep'

I haven't found any documentation that says why I wouldn't be able to close my indices on AWS ES, nor have I found anyone else who has this problem.

It's also a bit strange that I've got an ElasticSearch domain, but it's giving me a CloudSearch error message, since I thought those were different services, though I suppose one is implemented in terms of the other.

thanks!




How to create a "folder-like" (i.e. PREFIX) object on S3 using the AWS CLI?

It is usually explained that there are no folders/directories on S3, which is true, however there are PREFIX objects. In some cases - e.g. using riofs to "mount" a filesystem to S3 - it could actually be useful to know how to create one of these.

Does anyone know if there is a "correct" way to do this with the AWS CLI? It's likely possible using the low-level API aws s3api ...
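For instance, would a zero-byte object whose key ends in "/" count as the "correct" way? That seems to be what aws s3api put-object --bucket my-bucket --key my-folder/ produces. The same idea as a sketch in Python/boto3, with made-up names:

    # Sketch only: a zero-byte object with a trailing "/" in the key, which the
    # console and most tools then display as a folder.
    import boto3

    boto3.client("s3").put_object(Bucket="my-bucket", Key="my-folder/", Body=b"")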

Related SO posts:

amazon S3 boto - how to create folder?

Creating a folder via s3cmd (Amazon S3)

P.S.

I also want to point out that in the AWS console this action is actually named "Create Folder...", so it's not really fair to tell people that there is no "concept" of a folder on S3.

Many thanks!




Where to put oAuth app secrets

I'm developing a web app which authenticates against a third-party service using OAuth. The third party supplied me with an app secret and an app ID.

The app code is supposed to live on GitHub. Now, I don't want to push my app ID and secret to GitHub. The app itself is supposed to be deployed on either AWS or OpenShift.

What options do these (and other) cloud computing providers offer to store credentials like that? What other options are there?

I expected them to have something like a secret store, and an API to access that store from my app's code, but I wasn't able to find anything.




AWS S3 list files in specific folder

I'm trying to list the files and folders in a specific folder in an S3 bucket. I would like to return the subfolder names in that folder, but not the contents of those subfolders.

I'm not sure if it can be done with the delimiter, but this is what I have:

$objects = $s3->getListObjectsIterator(array(
    'Bucket' => 'BUCKET NAME',
    'Prefix' => 'SUBFOLDER NAME/',
    'Delimiter' => '/'
));

foreach ($objects as $object) {
    echo $object['Key'] . "<br/>";
}
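
If it matters, here is the same call sketched in Python/boto3 (not PHP) just to show where I expect the sub-folder names to come back: with a Delimiter they should arrive as CommonPrefixes rather than as Contents, if I understand the API correctly.

    # Sketch only: hypothetical bucket/prefix; sub-folders come back as CommonPrefixes.
    import boto3

    s3 = boto3.client("s3")
    resp = s3.list_objects(Bucket="BUCKET NAME", Prefix="SUBFOLDER NAME/",
                           Delimiter="/")

    for p in resp.get("CommonPrefixes", []):
        print(p["Prefix"])        # immediate sub-folders
    for obj in resp.get("Contents", []):
        print(obj["Key"])         # files directly under the prefix

Is the PHP iterator above supposed to surface CommonPrefixes the same way, or does it only yield Contents?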




Setting the aws_region in rails/amazon web services

I've installed the aws sdk with gem 'aws-sdk', '~> 2.1.29'. Upon entering the aws interactive console with aws.rb and trying to run commands I get the error:

Aws::Errors::MissingRegionError: missing region; use :region option or export region name to ENV['AWS_REGION']

I need help on how to set the AWS_REGION environment variable. At the console I have done:

heroku config:set S3_REGION='us-east-1'

and in config/initializers/carrier_wave.rb I have tried both

Aws.config.update({
  region: 'us-east-1'
})

and

S3Client = Aws::S3::Client.new(
  aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  aws_region: 'us-east-1'
)

but to no avail. (I have set the access key id and secret access key at the console also).




AWS EC2 update without downtime

Is there any way I can update my application on EC2 without any downtime? Or something like pausing but not restarting?




Amazon Product Advertising API - Showing prices and images?

I am currently investigating the Amazon Product Advertising API and hoping to produce an XML response that contains details including an item's price and images.

This is a typical example of the URL I am building, with the relevant parameters for searching for an item, i.e. ItemSearch:

        IDictionary<string, string> r1 = new Dictionary<string, String>();

        r1["Service"] = "ItemSearch";
        r1["ResponseGroup"] = "Images";
        r1["AssociateTag"] = "AssociateTag";
        r1["Operation"] = "ItemSearch";
        r1["Condition"] = "New";
        r1["Availability"] = "Available";
        r1["SearchIndex"] = "Apparel";
        r1["Keywords"] = itemToSearch;

        requestUrl = _signedRequestHandler.Sign(r1);

The above URL builder gives me a URL whose response contains images (as I'm using 'Images' as the 'ResponseGroup'). Setting this to 'Offers' would give me prices but no images. I want both; is this possible?




DynamoDB Batch Publish

I see the AWS Publish API for sending push notifications to devices: http://ift.tt/1jjqt1j

According to http://ift.tt/1zzBeGd, we can "Send messages directly to a specific device by calling the Publish function with the device's ARN. You can easily scale this to handle millions of users by storing the endpoint ARNs in Amazon DynamoDB and using multi-threaded code on the server."

If I want to send push notifications to 100K users (who haven't subscribed to a specific topic), is there a multi-publish (or batch-publish) API so that I don't need to call Publish for every single user?
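
If there is no batch call, is the intended pattern just the multi-threaded publishing that the quoted docs mention? A rough sketch of that (boto3, with a made-up endpoint ARN standing in for the ones stored in DynamoDB):

    # Sketch only: publish to many endpoint ARNs concurrently with a thread pool.
    import boto3
    from concurrent.futures import ThreadPoolExecutor

    sns = boto3.client("sns")

    def push(endpoint_arn):
        return sns.publish(TargetArn=endpoint_arn, Message="hello from SNS")

    # In practice these would be read from the DynamoDB table of endpoints.
    endpoint_arns = ["arn:aws:sns:us-east-1:123456789012:endpoint/GCM/my-app/abc123"]

    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(push, endpoint_arns))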




Upload to S3 via shell script without aws-cli, possible?

As the title says, is it possible to upload to S3 via shell script without aws-cli-tools?

If so, how?

What I'm trying to do is read from a txt file on S3 (which is public, so no authentication is required).

But I want to be able to overwrite whatever is in the file (which is just a number).

Thanks in advance,

Fadi




Best way to implement this while downloading from AWS3?

I'll try to explain this as best as I can. I'm using "AFAmazonS3Manager.h", which is a subclass that uses AFNetworking to call Amazon S3 functionality. I already wrote all my code to be encapsulated in its own class. This class downloads any items I need off of AWS S3 and saves them to the local file system. All fine and dandy. Now I'm trying to implement a progress indicator, and this is where my trouble is brewing. At the class level I added a dispatch_group to wait for all downloads to happen and then be notified when the downloads finish. Again, it works fine for me. Here's an example of what I'm doing:

- (void)getContentLengthOfMediaItem:(UFPMediaObject*)media_object contentLength:(float*)content_length{

dispatch_group_enter(_amazon_dispatch_group);
[_amazon_manager.requestSerializer setBucket:media_object.aws_media_bucket_name];
[_amazon_manager headObjectWithPath:[media_object shortpathForMediaFile]
                            success:^(NSHTTPURLResponse *response) {

                                NSString *lengthString = [[response allHeaderFields] objectForKey:@"Content-Length"];
                                *content_length = [lengthString longLongValue];
                                dispatch_group_leave(_amazon_dispatch_group);

                            } failure:^(NSError *error) {

                                NSLog(@"Error getting metadata for video %@",[media_object shortpathForMediaFile]);
                                NSLog(@"%@",error.localizedDescription);
                                dispatch_group_leave(_amazon_dispatch_group);

                            }];

}

I need to (1) be able to return the content_length to the calling class. I'm trying to pass it by reference here, but it keeps returning 0 instead of the number I'm getting from Content-Length. So that's one issue. I mean, I could just add a class-level float variable and assign it in the success block, but I don't want to do that; it doesn't seem like the right way to do it.

Here's something that I do have working; I thought I could write the content_length function the same way, but my content_length function isn't working.

- (void)checkIfFileIsOver20MB:(UFPMediaObject *)media_object isOverTwentyMegabytes:(BOOL*)overTwenty{

dispatch_group_enter(_amazon_dispatch_group);
[_amazon_manager.requestSerializer setBucket:media_object.aws_media_bucket_name];
[_amazon_manager headObjectWithPath:[media_object shortpathForMediaFile]
                            success:^(NSHTTPURLResponse *response) {

                                NSString *lengthString = [[response allHeaderFields] objectForKey:@"Content-Length"];
                                NSString *byte_count_string = [NSByteCountFormatter stringFromByteCount:[lengthString longLongValue] countStyle:NSByteCountFormatterCountStyleFile];
                                NSInteger megabytes = [byte_count_string integerValue];

                                if(megabytes > 20)
                                    *overTwenty = TRUE;
                                else
                                    *overTwenty = FALSE;
                                dispatch_group_leave(_amazon_dispatch_group);


                            } failure:^(NSError *error) {

                                NSLog(@"Error getting metadata for video %@",[media_object shortpathForMediaFile]);
                                NSLog(@"%@",error.localizedDescription);
                                *overTwenty = FALSE;
                                dispatch_group_leave(_amazon_dispatch_group);

                            }];

}

When this group leaves and I'm back in the calling class, the Boolean isOver20MB has the value set within the success block. So I thought I could follow the same methodology for a float, but it's not working.

Another problem for me is that I want to update the progress indicator while it's downloading. Now, there's a nice block callback that reports the progress, like so:

dispatch_group_enter(_amazon_dispatch_group);
[_amazon_manager.requestSerializer setBucket:media_object.aws_thumbnail_bucket];
[_amazon_manager getObjectWithPath:[media_object shortpathForThumbnailFile]
                          progress:^(NSUInteger bytesRead, long long totalBytesRead, long long totalBytesExpectedToRead) {


                          } success:^(id responseObject, NSData *responseData) {
                              @autoreleasepool {

                                  [UFPMediaLocalStorage saveDataToLocalStorageInFolder:kUFPLocalStorageFolderThumbnails
                                                                     subdirectory:media_object.aws_thumbnail_file_path
                                                                             data:responseData
                                                                         filename:media_object.aws_thumbnail_file_name];
                                  responseObject = nil;
                                  responseData = nil;
                                  dispatch_group_leave(_amazon_dispatch_group);

                              }
                          } failure:^(NSError *error) {

                              NSLog(@"Error Downloading Thumbnail for file %@",[media_object shortpathForThumbnailFile]);
                              NSLog(@"%@",[media_object shortpathForThumbnailFile]);
                              dispatch_group_leave(_amazon_dispatch_group);

                          }];

How can I get the progress indicator I need to update, which is in a subview of the calling view controller, to receive the values from the progress block? I could pass the actual progress view into the function and just update its progress within the progress block, but I don't want to do that, because then I can't one day remove this code and plug it into another project; the class I wrote for downloading Amazon data would be coupled with the progress indicator I'm creating. I know it's hard to understand, but if anyone can help me work around the implementation I described, that would be great.

It just seems like I'm mostly missing one part: being able to pass back the values I need from within the blocks.