Saturday, February 28, 2015

Rest API Message Signature - Questions

I've been trying to explore the use of HTTP REST APIs. I came across another SO post pointing me to how AWS signs its requests with its secret key, as documented here. The secret key is used to sign parts of the message (timestamp, request path, parameters, etc.) with the idea that none of these parts can be altered without producing a different hash.


Two Questions:


1) As part of the AWS standard, part of the "string-to-sign" formed by the client is the hash of the message body. The server receiving the message will need to compute the hash of that message body before it can compute the message signature. My question centers on how this would be implemented. In the case of servlets, where you put an authentication servlet filter in place to pre-process requests, the filter would need to download the entire body before it could compute the signature. Doesn't this mean a "hacker" could overwhelm a server by making large POST requests, because the entire body must first be downloaded?
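

For reference, my mental model of the client side is a rough sketch like the one below (plain Python, not the exact AWS algorithm, just the body-hash-plus-HMAC idea):


import hashlib
import hmac

def sign_request(secret_key, method, path, params, body, timestamp):
    # Hash the full request body -- the server has to recompute this,
    # which is what question 1 is about.
    body_hash = hashlib.sha256(body).hexdigest()

    # Build a deterministic "string to sign" from the request parts.
    canonical_params = "&".join("%s=%s" % (k, params[k]) for k in sorted(params))
    string_to_sign = "\n".join([method, path, canonical_params, timestamp, body_hash])

    # HMAC the string with the shared secret; changing any signed part
    # (including the body) changes the signature.
    return hmac.new(secret_key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()

sig = sign_request(b"my-shared-secret", "POST", "/orders",
                   {"id": "42"}, b'{"qty": 1}', "2015-02-28T12:00:00Z")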


2) I understand AWS can work without SSL/TLS because the messages are signed. How, though, does the secret key initially get sent to the user? Wouldn't that need to be over TLS?


Thanks!





How can I prevent OpsWorks Deployment from defaulting to deploying to my Custom Layer?

I currently have a relatively simple OpsWorks MEAN stack configuration, consisting of two layers.


One layer is the Node.js App Server layer, and the other layer is a Custom MongoDB layer. (As a side note, I hope one day Amazon will provide a Mongo store for OpsWorks, but until then, I had to create my own custom layer.)


I really like the way everything works, with the exception that when I deploy my Applications as shown above, the Deployment defaults to deploying to my Custom MongoDB layer as well:


Deploying App to OpsWorks Instances with Custom MongoDB Layer


Other than remembering to uncheck the boxes just before I click 'Deploy', I can't seem to find any way to specify, in the Deployment, Application, Layer, or Stack configuration, that I don't ever want my Application deployed to my Custom layer.


That's possibly not a huge deal for my MongoDB layer specifically, but it doesn't seem to make sense to have the application code over there in general, and I can most certainly envision application-specific custom chef configuration that I definitely don't want applied to my DB layer.


Can anyone point me at a configuration option or other mechanism for excluding deployment to a custom OpsWorks layer?
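

For now, the only workaround I can think of is to script the deployment myself and target only the Node.js layer's instances, roughly along these lines with boto (all IDs made up, untested):


from boto.opsworks.layer1 import OpsWorksConnection

# Hypothetical IDs -- these would be my real stack/app/layer IDs.
STACK_ID = "11111111-2222-3333-4444-555555555555"
APP_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
NODEJS_LAYER_ID = "99999999-8888-7777-6666-555555555555"

client = OpsWorksConnection()

# Only the instances in the Node.js App Server layer, not the MongoDB layer.
instances = client.describe_instances(layer_id=NODEJS_LAYER_ID)["Instances"]
instance_ids = [i["InstanceId"] for i in instances]

client.create_deployment(
    stack_id=STACK_ID,
    command={"Name": "deploy"},
    app_id=APP_ID,
    instance_ids=instance_ids,
    comment="Deploy app to Node.js layer only",
)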


Thanks!


-- Tim





How do I edit an AWS S3 Bucket Object?

I'd imagine this is a really dumb question, considering I cannot find a single thing about it.


I followed Amazon's instructions for hosting a static webpage: http://ift.tt/1hhcmMb


Now I want to update that static webpage, what is the easiest way to do that?



  • Is there a sync client?

  • Can I open the files on AWS somehow?

  • Do I need to download the file, edit it, and re-upload it? That seems really slow (see the sketch below)
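

If re-uploading really is the answer, I assume the boto version of it would look something like this (bucket name made up, untested):


from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection()  # reads AWS credentials from the environment/boto config

# "my-static-site" is a placeholder for the bucket that hosts the static page.
bucket = conn.get_bucket("my-static-site")

key = bucket.get_key("index.html") or Key(bucket, "index.html")
key.set_contents_from_filename("index.html")   # upload the edited local copy
key.set_acl("public-read")                      # keep it readable as a website page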





Can't create Amazon Web Services (AWS) credentials object

I'm trying to get started developing against the local Dynamo DB service. The first step is simply creating a client with their SDK:



var storedAWSCreds = new StoredProfileAWSCredentials();


This throws an exception:



App.config does not contain credentials information. Either add the AWSAccessKey and AWSSecretKey or AWSProfileName



My app.config has the needed properties:



<add key="AWSProfileName" value="justin"/>
<add key="AWSProfilesLocation" value="C:\code\dynamodb\credentials"/>


The credentials profile file:



[justin]
aws_access_key_id = REMOVED-FOR-POST
aws_secret_access_key = REMOVED-FOR-POST


At this point I thought I would try one of the other overloaded methods and explicitly tell the constructor what the parameters should be:



var storedAWSCreds = new StoredProfileAWSCredentials("justin", @"C:\code\dynamodb\credentials");


Again, the same exception.


Okay, the exception says I can provide the credentials directly in my config so I tried that:



<add key="AWSAccessKey" value="REMOVED"/>
<add key="AWSSecretKey" value="REMOVED"/>


Again, the same exception.


How can I get the StoredProfileAWSCredentials object created? I'm clearly missing something obvious or their exception messages are incorrect.


I will point out, I can create a BasicAWSCredentials object by specifying the access key and secret key in the constructor:



var basicAWSCreds = new BasicAWSCredentials("REMOVED-FOR-POST", "REMOVED-FOR-POST");


But, at some point I would prefer to not have it hard-coded in my application.





Stopping then starting EC2 from command line

I'm trying to stop and then immediately start (NOT REBOOT) my Amazon EC2 server from within my instance.


I have CLI (Command Line Interface Tools) and am running a Windows 2012 server.


Basically, I want to ec2-stop-instances from a batch, and then ec2-start-instances right after. But I want the start-instances to run after a minute or so.


This way, running the batch script will stop then start the instance.


Again, I can't use a reboot; for some reason, it does not meet my needs.
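

In case it clarifies what I'm after, here is the stop-wait-start sequence I'm trying to reproduce from the batch file, sketched with boto instead of the CLI tools (instance ID and region made up, untested):


import time

import boto.ec2

# Note: the batch runs on the instance itself; a script like this would die as
# soon as the stop happens, so it would really have to run from somewhere else.
conn = boto.ec2.connect_to_region("us-east-1")    # region assumed
instance_id = "i-12345678"                        # placeholder instance ID

conn.stop_instances(instance_ids=[instance_id])   # stop, not reboot

# Wait until the instance is fully stopped.
instance = conn.get_only_instances(instance_ids=[instance_id])[0]
while instance.state != "stopped":
    time.sleep(15)
    instance.update()

time.sleep(60)                                    # "after a minute or so"
conn.start_instances(instance_ids=[instance_id])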





Spark submit cluster mode - Amazon Web Services

I am getting an error when launching a standalone Spark driver in cluster mode. Per the documentation, cluster mode is supported in the Spark 1.2.1 release, but it is currently not working properly for me. Please help me fix the issue(s) that are preventing Spark from functioning properly.


I have a 3-node cluster and I am using the command below to launch the driver from the master node. The driver gets launched on a slave node and gives the error below.


Command:


spark-1.2.1-bin-hadoop2.4]# /usr/local/spark-1.2.1-bin-hadoop2.4/bin/spark-submit --class com.mashery.firststep.aggregator.FirstStepMessageProcessor --master spark://ec2-xx.xx.xx.compute-1.amazonaws.com:7077 --deploy-mode cluster --supervise file:///home/xyz/sparkstreaming-0.0.1-SNAPSHOT.jar /home/xyz/config.properties


Output:



Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/02/28 17:41:16 INFO SecurityManager: Changing view acls to: root
15/02/28 17:41:16 INFO SecurityManager: Changing modify acls to: root
15/02/28 17:41:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/02/28 17:41:16 INFO Slf4jLogger: Slf4jLogger started
15/02/28 17:41:16 INFO Utils: Successfully started service 'driverClient' on port 48740.
Sending launch command to spark://ec2-xx.xx.xx.compute-1.amazonaws.com:7077
Driver successfully submitted as driver-20150228174117-0003
... waiting before polling master for driver state
... polling master for driver state
State of driver-20150228174117-0003 is RUNNING
Driver running on ec2-yy.yy.yy.compute-1.amazonaws.com:36323 (worker-20150228171635-ec2-yy.yy.yy.compute-1.amazonaws.com-36323)


log at driver stderr:



SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/worker/driver-20150228174117-0003/sparkstreaming-0.0.1-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/spark-1.2.1-bin-hadoop2.4/lib/spark-assembly-1.2.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://ift.tt/1f12hSy for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.net.BindException: Failed to bind to: http://ift.tt/1G3HkG7: Service 'Driver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:391)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:388)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)




How to download data from Amazon's requester pay buckets?

I am not a computer genius and have no computer science background. I have been struggling for about a week to download arXiv articles as mentioned here: http://ift.tt/1E2ztWZ.


I have tried lots of things: s3Browser, s3cmd. I am able to log in to my buckets, but I am unable to download data from the arXiv bucket.


I tried:


1.


$ s3cmd get s3://arxiv/pdf/arXiv_pdf_1001_001.tar




s3://arxiv/pdf/arXiv_pdf_1001_001.tar -> ./arXiv_pdf_1001_001.tar [1 of 1]
s3://arxiv/pdf/arXiv_pdf_1001_001.tar -> ./arXiv_pdf_1001_001.tar [1 of 1]
ERROR: S3 error: Unknown error


2.


$ s3cmd get --add-header="x-amz-request-payer:requester" s3://arxiv/pdf/arXiv_pdf_manifest.xml



It gave me the same error again:



s3://arxiv/pdf/arXiv_pdf_manifest.xml -> ./arXiv_pdf_manifest.xml [1 of 1]
s3://arxiv/pdf/arXiv_pdf_manifest.xml -> ./arXiv_pdf_manifest.xml [1 of 1]
ERROR: S3 error: Unknown error


3.

I have tried copying files from that folder too.



$ aws s3 cp s3://arxiv/pdf/arXiv_pdf_1001_001.tar .




A client error (403) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining


This probably means that I made a mistake. The problem is I don't know what I need to add to convey my willingness to pay for the download.


I am unable to figure out what I should do to download data from S3. I have been reading a lot on the AWS sites, but nowhere can I find a pinpointed solution to my problem. Any help on this will be really appreciated.
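

From what I've read so far, every request against a requester-pays bucket needs the x-amz-request-payer header. In Python with boto I assume that would look roughly like this (untested):


from boto.s3.connection import S3Connection

# The extra header that signals "I agree to pay the transfer costs".
REQUESTER_PAYS = {"x-amz-request-payer": "requester"}

conn = S3Connection()
# validate=False avoids an initial bucket request that would lack the header.
bucket = conn.get_bucket("arxiv", validate=False)

key = bucket.get_key("pdf/arXiv_pdf_1001_001.tar", headers=REQUESTER_PAYS)
key.get_contents_to_filename("arXiv_pdf_1001_001.tar", headers=REQUESTER_PAYS)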


Thanks!





How to connect to AWS rds mysql through android application?

I have been working on an Android-MySQL database connection. Initially I was using a GoDaddy server to store my PHP files. From the Android application I would make a connection to the PHP files, which in turn queried the MySQL database.


I have shifted from GoDaddy to AWS and I have an RDS and an EC2 instance up and running. I know the credentials of the RDS instance. The only thing I don't understand is where to put the PHP file (on GoDaddy I used to store it in the File Manager and then retrieve it through "hostname/filename.php"). I have searched through lots of tutorials but could not find a satisfactory answer.





Rails - Best deployment setup with AWS Auto Scaling

I am researching ways to deploy a ruby application to an AWS Autoscaling Group and I'm having a hard time deciding which way is best and finding good content about it.


I have looked into CodeDeploy, Elastic Beanstalk, CloudFormation, Capistrano, Chef and some others, as well as combinations of them.


Personally, I didn't want to use Chef or anything that needs a lot of time to maintain. Currently I am using Dokku on EC2, but I need a more scalable and elastic solution for a new project.


What would be the best suggestion and study material?





AWS EMR validation error

I have a problem running a MapReduce Java application. I simplified my problem using the tutorial code from AWS, which runs a pre-defined step:



public class Main {

    public static void main(String[] args) {

        AWSCredentials credentials = getCredentials();
        AmazonElasticMapReduceClient emr = new AmazonElasticMapReduceClient(
                credentials);

        StepFactory stepFactory = new StepFactory();

        StepConfig enabledebugging = new StepConfig()
                .withName("Enable debugging")
                .withActionOnFailure("TERMINATE_JOB_FLOW")
                .withHadoopJarStep(stepFactory.newEnableDebuggingStep());

        StepConfig installHive = new StepConfig().withName("Install Hive")
                .withActionOnFailure("TERMINATE_JOB_FLOW")
                .withHadoopJarStep(stepFactory.newInstallHiveStep());

        RunJobFlowRequest request = new RunJobFlowRequest()
                .withName("Hive Interactive")
                .withAmiVersion("3.3.1")
                .withSteps(enabledebugging, installHive)
                .withLogUri("s3://tweets-hadoop/")
                .withServiceRole("service_role")
                .withJobFlowRole("jobflow_role")
                .withInstances(
                        new JobFlowInstancesConfig().withEc2KeyName("hadoop")
                                .withInstanceCount(5)
                                .withKeepJobFlowAliveWhenNoSteps(true)
                                .withMasterInstanceType("m3.xlarge")
                                .withSlaveInstanceType("m1.large"));

        RunJobFlowResult result = emr.runJobFlow(request);
        System.out.println(result);
    }

    private static AWSCredentials getCredentials() {
        AWSCredentials credentials = null;
        credentials = new BasicAWSCredentials("<KEY>", "<VALUE>");
        return credentials;
    }
}


where <KEY> and <VALUE> are my access key and secret key, and 'hadoop' is a keypair I created in the EC2 console.


After running it, I see the job trying to start in the EMR console; after about a minute it changes from 'Starting' to 'Terminated with errors: Validation error'.


No other information is given.


Any ideas what goes wrong?


Thanks!





Transfer the client.key from OpenVPN server to OpenVpn Client

I have two instances in Amazon Web Services, both running RHEL. I want to establish a VPN tunnel between these two instances; one of them acts as the VPN client and the other acts as the VPN server.


The problem for me is when I want to transfer the client.conf, ca.crt and additional files from the server to the client.


For this I have used the command below:



scp ca.crt client.crt client.key root@a.b.c.d:/etc/openvpn


But it gives the following error:



Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection


I have done the reading and I haven't figured out how to solve this error. Maybe it needs the private key of the server, but since I launched the instance in Amazon Web Services, the key was generated by Amazon and I don't know how to locate it on the server instance.


Any answer would be more than appreciated.





Friday, February 27, 2015

AWS Elastic IP Network vpc is not attached to any internet gateway

I have been given limited access to an AWS account.


I already created an EC2 instance, but when I try to associate an Elastic IP, I get the error below:



An error occurred while attempting to associate the address
Network vpc-(security id) is not attached to any internet gateway




aws cli ec2 instance isn't mounting block device

I am using the AWS CLI to launch an EC2 instance; my command is something like this:


aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type m3.xlarge --user-data file:///Users/xyz/daytoday/userdata --key-name "xyzxyz" --security-group-ids "sg-xxxxxxxx" --subnet-id subnet-xxxxxxxx --key XYZZZX --iam-instance-profile Name="FOOBAR" --associate-public-ip-address --block-device-mappings "[{\"DeviceName\": \"/dev/sdh\",\"Ebs\":{\"VolumeSize\":200}}]"


I am able to launch the instance with the default root volume, and the 200 GB EBS volume is attached to the instance. No issues. The only problem is that I need to mount the EBS volume manually.


[root@ip-10-2-0-34 ec2-user]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   20G  0 disk
└─xvda1 202:1    0   20G  0 part /
xvdh    202:112  0  200G  0 disk


Is there any way I can have the EBS volume mounted by default, instead of mounting it manually?





What's a good way to obtain STS credentials with a SAML federated login via Okta for use in local command line tools?

The Amazon Web Services API provides the AssumeRoleWithSAML endpoint to allow a user to exchange a SAML assertion for a set of temporary API credentials from the AWS Security Token Service.


A SAML provider, like Okta, will generate a SAML assertion after a user logs into their web UI and Okta authenticates the user on that user's enterprise backend (e.g. enterprise LDAP).


Typically this assertion is then relayed from the user's browser on to another web service that accepts SAML assertions (the relying party) in order to authenticate the user to that third party (for example, when using Okta federated login to enable a user to log into the AWS web console).


What is the best way to enable a federated user to authenticate with Okta, get an assertion, pass that assertion to STS, and get back a set of temporary AWS API credentials that the user could then use with either the AWS command line tools or a local Python boto script?



  • Launch a web browser from a python tool using the Python webbrowser module?

  • What's a fluid way to get an assertion from a web browser into a form usable by a command line tool?

  • Create a temporary ngrok tunnel to a locally running temporary webserver (e.g. an instance of flask or bottle) for Okta to redirect the user's web browser onto in order to deliver the assertion to some local code?

  • How does one typically bridge the world of an interactive web page and local command line tools?
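

For context, the STS half of this seems to be the straightforward part once the assertion is in hand; with boto I believe it looks roughly like the sketch below (ARNs made up, and capturing the assertion is exactly the part I'm asking about):


import boto.sts

sts = boto.sts.connect_to_region("us-east-1")

# The base64-encoded SAMLResponse that Okta posts to the browser --
# however I manage to capture it.
saml_assertion = open("assertion.b64").read()

token = sts.assume_role_with_saml(
    role_arn="arn:aws:iam::123456789012:role/OktaFederatedRole",      # made up
    principal_arn="arn:aws:iam::123456789012:saml-provider/Okta",     # made up
    saml_assertion=saml_assertion,
)

# Temporary credentials usable by the AWS CLI or a local boto script.
print token.credentials.access_key
print token.credentials.secret_key
print token.credentials.session_token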





result doesn't appear in browser of amazon dynamodb

I have written PHP code that uses the DynamoDB API to get some data, on Ubuntu 14.04 LTS. I have configured the PHP AWS SDK, but when I execute the code in the browser the result doesn't appear. However, when I execute it through the terminal, it works perfectly.


php api_dynamodb_code.php


I need this working in the browser. I tested it using Google Chrome and FireFox with the same result.


I added echo 'Test1';


at the beginning and when I execute it,


Test1 appears in the browser; however, when I added


echo 'Test2';


after using


Aws\DynamoDb\DynamoDbClient;
$client = DynamoDbClient::factory(array(
    'profile' => 'default',
    'region' => 'us-west-2'
));


it didn't show in the browser, but both appear in the terminal using the php command.


apache2, libapache2-mod-php5, mysql-server, php5-mysql, php5, and php5-curl are all installed.





Having trouble creating a basic AWS AMI with Packer.io. SSH Timeout

I'm trying to follow these instructions to build a basic AWS image using Packer.io. But it is not working for me.


Here is my Template file:



{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-146e2a7c",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}",

    # The following 2 lines don't appear in the tutorial.
    # But I had to add them because it said this source AMI
    # must be launched inside a VPC.
    "vpc_id": "vpc-98765432",
    "subnet_id": "subnet-12345678"
  }]
}


You will notice that I had to deviate from the instructions by adding the two lines at the bottom (for VPC and subnets). This is because I kept getting the following error:



==> amazon-ebs: Error launching source instance: The specified instance type
can only be used in a VPC. A subnet ID or network interface
ID is required to carry out the request.
(VPCResourceNotSpecified)


That VPC and subnet are temporary ones that I had to create manually. But why should I have to do that? Why doesn't Packer create and then delete them, the way I see it creates a temporary security group and key pair?


Furthermore, even after I add those two lines, it fails to create the AMI because it gets an SSH timeout. Why? I have no trouble manually SSHing to other instances in this VPC. The temporary Packer instance has InstanceState=Running, StatusChecks=2/2 and a security group that allows SSH from anywhere.


See the debug output of the packer command below:



$ packer build -debug -var 'aws_access_key=MY_ACCESS_KEY' -var 'aws_secret_key=MY_SECRET_KEY' packer_config_basic.json
Debug mode enabled. Builds will not be parallelized.
amazon-ebs output will be in this color.

==> amazon-ebs: Inspecting the source AMI...
==> amazon-ebs: Pausing after run of step 'StepSourceAMIInfo'. Press enter to continue.
==> amazon-ebs: Creating temporary keypair: packer 99999999-8888-7777-6666-555555555555
amazon-ebs: Saving key for debug purposes: ec2_amazon-ebs.pem
==> amazon-ebs: Pausing after run of step 'StepKeyPair'. Press enter to continue.
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing SSH access on the temporary security group...
==> amazon-ebs: Pausing after run of step 'StepSecurityGroup'. Press enter to continue.
==> amazon-ebs: Launching a source AWS instance...
amazon-ebs: Instance ID: i-12345678
==> amazon-ebs: Waiting for instance (i-12345678) to become ready...
amazon-ebs: Private IP: 10.0.2.204
==> amazon-ebs: Pausing after run of step 'StepRunSourceInstance'. Press enter to continue.
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Timeout waiting for SSH.
==> amazon-ebs: Pausing before cleanup of step 'StepRunSourceInstance'. Press enter to continue.
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Pausing before cleanup of step 'StepSecurityGroup'. Press enter to continue.
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Pausing before cleanup of step 'StepKeyPair'. Press enter to continue.
==> amazon-ebs: Deleting temporary keypair...
==> amazon-ebs: Pausing before cleanup of step 'StepSourceAMIInfo'. Press enter to continue.
Build 'amazon-ebs' errored: Timeout waiting for SSH.

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: Timeout waiting for SSH.

==> Builds finished but no artifacts were created.




Amazon Windows Services for VPS, please advise

I need VPS services for hosting my ASP.NET project. However, it's not just ASP.NET hosting: I also need SQL Server, RabbitMQ, and either my running console app or my Windows service. I read suggestions to use Amazon Web Services, as they provide the first year for free. However, when I registered I found that I don't have a clue where I am: I don't see the option of creating a virtual machine with Windows, I don't see the option of setting up SQL Server on such a machine, and so on. So I was wondering whether I'm in the right place? Please advise whether AWS can provide what I need, or whether I came to the wrong place.





Ansible Dynamic Inventory fails to get the latest ec2 information

I am using the ec2.py dynamic inventory for provisioning with Ansible. I have placed ec2.py in /etc/ansible/hosts and marked it executable. I also have the ec2.ini file in /etc/ansible/hosts.



[ec2]

regions = us-west-2
regions_exclude = us-gov-west-1,cn-north-1

destination_variable = public_dns_name

vpc_destination_variable = ip_address
route53 = False

all_instances = True
all_rds_instances = False

cache_path = ~/.ansible/tmp

cache_max_age = 0

nested_groups = False
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_ami_id = True
group_by_instance_type = True
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
group_by_tag_keys = True
group_by_tag_none = True
group_by_route53_names = True
group_by_rds_engine = True
group_by_rds_parameter_group = True



Above is my ec2.ini file



---
- hosts: localhost
  connection: local
  gather_facts: yes
  vars_files:
    - ../group_vars/dev_vpc
    - ../group_vars/dev_sg
    - ../hosts_vars/ec2_info
  vars:
    instance_type: t2.micro
  tasks:
    - name: Provisioning EC2 instance
      local_action:
        module: ec2
        region: "{{ region }}"
        key_name: "{{ key }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ami_id }}"
        wait: yes
        group_id: ["{{ sg_npm }}", "{{sg_ssh}}"]
        vpc_subnet_id: "{{ PublicSubnet }}"
        source_dest_check: false
        instance_tags: '{"Name": "EC2", "Environment": "Development"}'
      register: ec2
    - name: associate new EIP for the instance
      local_action:
        module: ec2_eip
        region: "{{ region }}"
        instance_id: "{{ item.id }}"
      with_items: ec2.instances
    - name: Waiting for NPM Server to come-up
      local_action:
        module: wait_for
        host: "{{ ec2 }}"
        state: started
        delay: 5
        timeout: 200
    - include: ec2-configure.yml


Now the configuring script is as follows



- name: Configure EC2 server
  hosts: tag_Name_EC2
  user: ec2-user
  sudo: True
  gather_facts: True
  tasks:
    - name: Install nodejs related packages
      yum: name={{ item }} enablerepo=epel state=present
      with_items:
        - nodejs
        - npm


However, when the configure script is called, the second play results in "no hosts found". If I execute ec2-configure.yml on its own while the EC2 server is up and running, it is able to find the instance and configure it.


I added the wait_for to make sure that the instance is in running state before the ec2-configure.yml is called.


I would appreciate it if anyone could point out my error. Thanks.





AWS Image creation with "no-reboot" option - what is meant by "file system integrity"?

In AWS, when creating an EBS image of an EC2 drive, there is an option called "no reboot", which allows you to create it without rebooting the machine. When using this option, there is a warning:



When enabled, Amazon EC2 does not shut down the instance
before creating the image. When this option is used, file
system integrity on the created image cannot be guaranteed.


What exactly is meant by this warning?


Is it:


1) All writes to the disk are unsafe (including writes to log files), because there is a chance that the filesystem itself in the created image is corrupted.


Or:


2) Writes to the disk are not guaranteed to be copied to the image in a consistent manner. For instance, if your instance writes to file A before file B during image creation, and File A is necessary for File B to be correct, you might not have file A in the created image.


Or is it something different entirely?





How to encrypt client-side with AWS KMS using the C# SDK

Is there already a C# library for encrypting and decrypting data using Amazon's Key Management Service (KMS) but without sending your sensitive data to Amazon (i.e. using "envelope encryption" as described in their developer guide)? Something that handles the nitty gritty details of choosing algorithm, mode, IV, etc.?


To be clear, I'm not asking how to do it... just trying to find out if I've wasted my time rolling my own.





Get all HITs with a certain status

The SearchHITs function seems almost useless for doing any actual searching. It merely pulls a listing of your HITs and doesn't allow for any filters. Is the only way to search to iterate through all the results? For example:



my_reviewable_hits = []
for page in range(5, 50):
    res = m.conn.search_hits(sort_direction='Descending', page_size=100, page_number=page)
    for hit in res:
        if hit.HITStatus == 'Reviewable':
            my_reviewable_hits.append(hit)




How to use SFTP on EC2 (Centos) and WinSCP to transfer files

$ ssh -i AWS_KEY.pem centos@ec2-52-11-51-217.us-west-2.compute.amazonaws.com
Last login: Fri Feb 27 19:14:25 2015


When I try to connect over SFTP, it throws the following error:


$ sftp centos@ec2-52-11-51-217.us-west-2.compute.amazonaws.com
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Couldn't read packet: Connection reset by peer





Passing messages from AWS to company site

I am looking for a way to pass log events from AWS application to my company site.


The thing is that the AWS application is 100% firewalled from everything except one IP address, because it's an encryption-related service.


I just don't know what service I should use to do this. There are so many services that I really have no idea which one fits.


I think I'd just use a simple message service; does this make sense? The thing is there are plenty of events (let's say 1M per day), so I don't want big extra costs for this.
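

To make the question concrete, what I'd be doing on the AWS side is roughly this with boto (queue name made up, untested):


import json

import boto.sqs
from boto.sqs.message import Message

conn = boto.sqs.connect_to_region("us-east-1")
queue = conn.get_queue("company-log-events")  # placeholder queue name

# One ~256-byte event; at ~1M/day this is what I'm worried about cost-wise.
event = {"ts": "2015-02-27T12:00:00Z", "level": "INFO", "msg": "encrypted ok"}

m = Message()
m.set_body(json.dumps(event))
queue.write(m)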


Sorry for the generic question, but I think it's quite concrete: "What is the most optimal way to pass event messages from AWS when the volume is approximately 1M per day, each 256 bytes on average?"


I'd like to connect to an AWS service rather than to any of the EC2 hosts...


On both sides I have tomcats with AWS-SDK.





MariaDB Slave Error - String is too long for MASTER_HOST

I'm attempting to replicate an AWS RDS instance with MariaDB. For those not familiar, RDS instances have extremely long DNS hostnames and cannot be accessed by their underlying IP address.


When it comes time to issue the "change master" command, I receive the following error:



String 'my rds dns name' is too long for MASTER_HOST (should be no longer than 60)



I can't figure out how to bypass this. Any ideas?


For the record, I have successfully done this before with non RDS machines. I'm not a complete noob :)





AWS Lambda S3 Bucket Notification via CloudFormation

I'm trying to create a Lambda notification via CloudFormation but getting an error about the ARN format being incorrect.


Either my CloudFormation is wrong or it doesn't support the Lambda preview yet.



{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "LambdaArn": {
      "Type": "String",
      "Default": "arn:aws:lambda:{some-region}:{some-account-id}:function:{some-fn-name}"
    }
  },
  "Resources": {
    "EventArchive": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "NotificationConfiguration": {
          "TopicConfigurations": [
            {
              "Event": "s3:ObjectCreated:Put",
              "Topic": {
                "Ref": "LambdaArn"
              }
            }
          ]
        }
      }
    }
  }
}


But when I push up this CloudFormation I get the message:



The ARN is not well formed


Does anyone have an idea as to what this means? I know the example above has been modified so as not to use my actual ARN, but in my actual code I copied the ARN directly from the GUI.


Also, interestingly, I was able to create the notification via the AWS console, so I assume that AWS CloudFormation doesn't yet support this feature (even though that isn't quite clear to me from reading the documentation).





AWS cloudwatch dynamically list metrics and their information

Right, so I am trying to get a list of metric names for a particular namespace (I'd rather it be for an object, but I'm working with what I've got) using the AWS Ruby SDK, and CloudWatch has the list_metrics function, awesome!


Except that list_metrics doesn't return what units and statistics a metric supports, which is a bit stupid as you need both to request data from a metric.


If you're trying to dynamically build a list of metrics per namespace (which I am), you won't know what units or statistics a particular metric might support without knowing about the metrics beforehand, which makes using list_metrics to dynamically get a list of metrics pointless.


How do I get around this so I can build a hash in the correct format containing the metrics for any namespace, without knowing anything about a metric beforehand except for the hash structure?


Also, why is there not a query for what metrics an object (DynamoDB, ELB, etc.) has?


It seems a logical thing to have, because a metric does not exist for an object unless the object has actually spat out data for that metric at least once (so I've been told); which means that even if you have a list of all the metrics a namespace supports, it doesn't mean an object within that namespace will have those metrics.





Expanding a property inline in PowerShell

I've been trying to construct a simple output from an AWS Route53 query in PowerShell that would have Name, Type and Values. However, the values are stored as ResourceRecords and I cannot manage to get them to show up properly, after hours of trying.


Here's a bit of code to show what I mean:



PS> $(Get-R53ResourceRecordSet -HostedZoneId "/hostedzone/xxx").ResourceRecordSets | Select Name,Type,ResourceRecords

Name Type ResourceRecords
---- ---- ---------------
nodepoint.ca. A {Amazon.Route53.Model.ResourceRecord}
nodepoint.ca. MX {Amazon.Route53.Model.ResourceRecord}
nodepoint.ca. NS {Amazon.Route53.Model.ResourceRecord...
nodepoint.ca. SOA {Amazon.Route53.Model.ResourceRecord}


As you can see the last column isn't expanded. This returns the last column:



PS> $cmd | Where {$_.Type -eq "NS"} | Select ResourceRecords


While this returns the proper records:



PS> $cmd | Where {$_.Type -eq "NS"} | Select -ExpandProperty ResourceRecords


I just can't manage to get that last column to display those values. I've tried:



PS> $(Get-R53ResourceRecordSet -HostedZoneId "/hostedzone/xxx").ResourceRecordSets | Select Name,Type,@{Name='Value';Expression={$_.ResourceRecords | Select -ExpandProperty ResourceRecord}}

PS> $(Get-R53ResourceRecordSet -HostedZoneId "/hostedzone/xxx").ResourceRecordSets | Select Name,Type,@{Name='Value';Expression={$_.ResourceRecords | Select -ExpandProperty ResourceRecord -Replace "`n"," "}}

PS> $(Get-R53ResourceRecordSet -HostedZoneId "/hostedzone/xxx").ResourceRecordSets | Select Name,Type,@{Name='Values';Expression={$_.ResourceRecords | Select -ExpandProperty ResourceRecords | Out-String}}


None of these work; they all show an empty third column. The only way I made it work is to manually write each value with a foreach:



PS> $(Get-R53ResourceRecordSet -HostedZoneId "/hostedzone/Z1W5966G1TGW7S").ResourceRecordSets | foreach { $_.Name; $_.Type; $_ | Select -ExpandProperty ResourceRecords}


But I want to keep it in columns, with the last column showing each record with a space in between. I don't know where to go from here.





How do i upload to amazon s3 using Heroku composer amazon aws sdk

I'm using Heroku and I'm following the tutorial here: http://ift.tt/1l0enwb


I have placed the composer require line in my composer.json file as shown below.


{ "require" : { "silex/silex": "~1.1", "monolog/monolog": "~1.7" }, "require-dev": { "heroku/heroku-buildpack-php": "*" }, "require" : { "aws/aws-sdk-php": "~2.6" } }


As you can see, I placed the Amazon one last. However, I'm receiving the following error message:


2015-02-27T16:26:05.499004+00:00 app[web.1]: [27-Feb-2015 16:26:05 UTC] PHP Warning: require(vendor/autoload.php): failed to open stream: No such file or directory in /app/web/fb/fileupload.php on line 4


Does anyone know if I have to do anything other than place that line in my composer.json file? Please help.





How to configure OSB to consume messages from Amazon SQS

I'm a newbie to AWS and trying to work with SQS for the first time. I have an Oracle Service Bus (OSB) in a non-cloud environment and would like to configure OSB to consume messages from Amazon SQS. The documentation mentions using the REST API and polling repeatedly for messages. I also read about the 'client library for JMS', so that OSB could treat SQS as a JMS provider. What is the best approach to achieve this? I appreciate your input.





Unable to open redis 6379 for inbound on AWS EC2

I have two servers on EC2, one hosting my PHP application and the other hosting my Redis server. I am managing my PHP sessions and data on the Redis server. On my PHP server I set the ip:port as the session save path and got the error: FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught exception 'RedisException' with message 'Connection closed'"


So I searched on the web and learned that I need to open port 6379 on my Redis instance for inbound traffic. I opened it by adding a custom TCP rule in the AWS security group, but the port still appears closed to the outside world, even though I am able to connect to the port on the Redis server itself. Am I missing anything in the process? Do I need to make any other change somewhere? Please guide me on this; I am very new to AWS management.
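

For what it's worth, this is roughly how I'm checking the port from outside the Redis instance (a plain Python socket test; the IP is made up):


import socket

# Private IP of the Redis instance (made up here).
REDIS_HOST = "10.0.1.25"
REDIS_PORT = 6379

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(3)
try:
    s.connect((REDIS_HOST, REDIS_PORT))
    print "port 6379 reachable"
except socket.error as e:
    print "port 6379 NOT reachable:", e
finally:
    s.close()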


On Instance 1: I am using php-fpm, nginx and phpredis

On Instance 2: Using Redis





Http 504 Gateway error: SchemaValidator ERROR could not get database metadata

[ http-bio-8080-exec-2] UserMapperImpl INFO getUserByEmailId: Robin@justice.com
[ http-bio-8080-exec-1] SchemaValidator ERROR could not get database metadata
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure


The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
    at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
    at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2214)
    at com.mysql.jdbc.ConnectionImpl.(ConnectionImpl.java:781)
    at com.mysql.jdbc.JDBC4Connection.(JDBC4Connection.java:46)





AWS- Assoction with VPC using python boto

How do I get an AWS VPC with all attached resources (route tables, instances, subnets, etc.) using Python boto, ideally as a tree structure? Please share any ideas you have. Below is the code I have so far; I'm having problems with the filters. Can anyone help with a solution?




from boto.vpc import VPCConnection
import boto.ec2
from boto.ec2 import EC2Connection

ec2_conn = EC2Connection()
ec2s = ec2_conn.get_all_instances()

region = boto.ec2.regions()
print region
print region[0]
vpc_conn = VPCConnection()

vpcs = vpc_conn.get_all_vpcs()
for vpc in vpcs:
    print " %35s " % vpc

    subnets = vpc_conn.get_all_subnets(filters={'vpc_id': vpc.id})
    for subnet in subnets:
        print " %45s " % subnet.id
        '''print " %45s " % subnet.availability_zone
        print " %45s " % subnet.cidr_block
        print '' '''

    route_tables = vpc_conn.get_all_route_tables(filters={'vpc_id': vpc.id})
    for route in route_tables:
        print " %55s " % route
    print ''

    acls = vpc_conn.get_all_network_acls(filters={'vpc_id': vpc.id})
    for acl in acls:
        print " %55s " % acl
    print ''

    enis = vpc_conn.get_all_network_interfaces(filters={'vpc_id': vpc.id})
    for eni in enis:
        print " %55s " % eni
    print ''

    instance = vpc_conn.get_all_instances(filters={'vpc_id': vpc.id})
    for inst in instance:
        print " %55s " % inst

    typ = vpc_conn.get_all_internet_gateways(filters={'attachment.vpc-id': vpc.id})
    for i in typ:
        print " %55s " % i

    typ3 = vpc_conn.get_all_security_groups(filters={'vpc_id': vpc.id})
    for t3 in typ3:
        print " %55s " % t3






create a datapipeline with tags usinf boto.datapipeline

I want to create an AWS Data Pipeline with tags. We are using the boto.datapipeline API to create the pipeline. These tags are used to grant read/write access to Data Pipeline users via IAM.


Please provide the code syntax to create a data pipeline with tags.





Exclude directories from elastic beanstalk deploy

I have some directories that I would like to be in my local git repository, but NOT in the remote repository when I deploy to my beanstalk environment.


I have googled a bit, and found a few years-old posts like this one:


http://ift.tt/1vFlLIk


which explain that this option exists somewhere, but I have looked everywhere and cannot find it. I think it must still be there and has possibly been moved around?


If that helps (though it probably doesn't make any difference), I've got an environment based on the sample node.js application. Where is this option?


Is it possible to do it in a config file in the .ebextensions folder instead?





OR-ing qualifications with MTurk

Is it possible to require a user to have one of multiple Qualifications in order to work on a HIT? For example:



qualifications = Qualifications()
qualifications.add(
    Requirement(comparator='EqualTo', integer_value=6, qualification_type_id=NewTest)
)
qualifications.add(
    Requirement(comparator='EqualTo', integer_value=6, qualification_type_id=OldTest)
)


The user would need to hold either the NewTest or the OldTest qualification. Is that possible?





Unable to connect to MySQL AWS RDS instance from local MySQL

I have created a MySQL RDS instance within a VPC. Now I am trying to connect to that RDS instance from my Ubuntu 12.04 machine using the MySQL client with the following command:



mysql -u uname -h test.c6tjb4nxvlri.us-west-2.rds.amazonaws.com -P 3306 -p


But I am getting this error:



ERROR 2003 (HY000): Can't connect to MySQL server on 'test.c6tjb4nxvlri.us-west-2.rds.amazonaws.com' (110)


I searched for this error and everywhere the suggested solution was:



  • Go to the Instances

  • Find the security group

  • Change the inbound rules of that security group by

  • Adding source of user machine public ip or

  • Set source ip as 0.0.0.0/16




I tried everything but the same error still occurs. Any explanations?





Failed to load resource: net::ERR_TOO_MANY_REDIRECTS-is url forwarding any issue?

I registered my domain name at domain.com and hosted my web application on Amazon EC2. I registered for an Elastic IP, and when I point my public IP in the URL forwarding of domain.com, the site loads properly in Chrome. But when I put in the public DNS name of my EC2 instance, a "Failed to load resource: net::ERR_TOO_MANY_REDIRECTS" error occurs. I have set up the Google API for authentication on my website, and it works properly only when I use the public DNS name in the URL forwarding. The Chrome version where I tested is 39.0.2171.95. The site works fine in IE 10. Please help me resolve the issue.





Elastic Beanstalk Invalid Parameter Value

I was getting a fairly ambiguous error from my app.config elastic beanstalk configuration file.


app.config



option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: production
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.26
packages:
  yum:
    GraphicsMagick: []
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 10M;


command line



> eb create --cfg med-env-sc
...
Printing Status:
INFO: createEnvironment is starting.
INFO: Using elasticbeanstalk-us-east-1-466215906046 as Amazon S3 storage bucket for environment data.
ERROR: InvalidParameterValue: Parameter values must be a string or a list of strings
ERROR: Failed to launch environment.
ERROR: Failed to launch environment.




Thursday, February 26, 2015

How to build an AWS ami using packer?

I have an existing AWS EC2 instance. I launched it with AMI ami-9a562df2. Then I installed some extra software on it. Now I want to create a new AMI of that instance (with the software I installed). I'm using packer to do it.


But I can't understand the error messages that it produces. Here is my JSON file (basically lifted from the packer website):



{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-9a562df2",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }]
}


Here is what happens when I run that JSON file:



$ ./packer build -var 'aws_access_key=MY_ACCESS_KEY' -var 'aws_secret_key=MY_SECRET_KEY' example.json
amazon-ebs output will be in this color.

==> amazon-ebs: Inspecting the source AMI...
==> amazon-ebs: Creating temporary keypair: packer 9999999-8888-7777-7777-666666666666
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing SSH access on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Error launching source instance: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request. (VPCResourceNotSpecified)
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' errored: Error launching source instance: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request. (VPCResourceNotSpecified)

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: Error launching source instance: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request. (VPCResourceNotSpecified)


How can I avoid getting the error Error launching source instance: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request. (VPCResourceNotSpecified)?


I'm sure this seems elementary to some. But I don't know why it is asking me for a VPC and Subnet ID. What am I missing about this operation?





Mysqldump direct pipe to RDS instance - error 32 on write

I'm following the directions from this article pretty much verbatim - trying to copy data from one RDS instance to another.


http://ift.tt/1LO0zC2


But I am getting the following error within about 2 seconds of starting:



mysqldump: Got errno 32 on write


I know this is a a broken pipe error, but I don't know why it's happening. Anyone have any ideas to troubleshoot?





limits on aws account iam users and policies

I am working on AWS to back up my data. I want to take the following approach to backing it up.



  1. Suppose I have n organizations, each organization has n departments, and each department has n users.

  2. I want to back up those organizations' data based on an organization > department > user structure.

  3. I have an IAM user account and a bucket in AWS S3.

  4. I create a folder structure based on the organization structure:

     a top-level folder for the organization > a subfolder for the department > a child subfolder for the user




e.g. organization name: ABC


department name: Developer


user name: testUser


so the bucket folder structure on AWS will be:


ABC/Developer/testUser/...


So that is how I am managing organizations' data on AWS S3.


But now the point is that I am allowing the organizations' users to put/get data to/from S3 on request.


After a user's request, I will generate federated user credentials for that user with a policy that allows the user to put/get data to/from only that user's folder.
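

To illustrate, the per-user credential generation I have in mind is roughly the following with boto (bucket name made up, policy trimmed down, untested):


import json

import boto.sts

sts = boto.sts.connect_to_region("us-east-1")   # region assumed

# Scope the temporary credentials to this user's folder only
# ("my-backup-bucket" is a placeholder for my real bucket name).
prefix = "ABC/Developer/testUser/"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-backup-bucket/" + prefix + "*",
    }],
}

token = sts.get_federation_token(name="testUser", policy=json.dumps(policy))
print token.credentials.access_key
print token.credentials.secret_key
print token.credentials.session_token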


But my questions are:


Q1. How many federated users can I generate? Is there a limit on federated users?


Q2. How many policies can an IAM user generate at a time?


Q3. With the approach mentioned above there can be n federated users, so can I generate n policies and apply them to the federated users?


Thanks & Regards


Amit Manchanda





Amazon Instances Network speed is too slow

We have two instances set up for our projects. One of them is Windows Server 2012 R2 with MySQL 5.6.11 and PHP 5.6.11 installed for our web-based products. We have done a lot of optimization to make the websites much faster, but after deep monitoring we found that the internet connection speed of our instances is too low, not to mention that the remote desktop connection is usually very slow. We need our instances' internet speed to be much faster; this is a big problem while connecting to our databases.





Easiest way to serve different content from the same link based of user agent

I am currently using an Amazon EC2 server to handle a GET on an incoming URL, read the ID in the URL (?id=XXXXX), and display an image based on the ID.


I then read the user agent: if it's mobile I display one image, and if it's desktop I display another. This could also be split as Android vs. iPhone, or by other parameters.


My question is, is there another way to do this other than an EC2 server? Is it possible to do this with a CDN that can pick the appropriate file?





Pulling data from power meters, is kinesis right for me?

I am trying to pull data from power meters to create an energy-efficiency app. I have to pull data every second from the power meter to display real-time data for users. Currently I am using my application server to do it and I am experiencing data loss. I talked to an Amazon solutions expert and he recommended Kinesis. Upon further research, I found that Kinesis requires the data to be pushed in, so I would need to add an extra layer (another app) to do this.
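

If I go the Kinesis route, my understanding is that the extra layer would just push each one-second reading into a stream, something like this with boto (stream name made up, untested):


import json
import time

import boto.kinesis

conn = boto.kinesis.connect_to_region("us-east-1")    # region assumed
stream_name = "power-meter-readings"                  # placeholder stream name

# One reading per second per meter, pushed in by the extra layer.
reading = {"meter_id": "meter-42", "watts": 1234, "ts": int(time.time())}
conn.put_record(stream_name, json.dumps(reading), partition_key=reading["meter_id"])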


Is kinesis right for me? Or should I be looking at other services?


Any help is much appreciated, and thank you very much in advance.





How to edit sandbox account on Amazon Flexible Payments Services

Does anyone know how to edit the sandbox account details on Amazon Flexible Payments service?


Amazon is discontinuing this service in June, and all of the links to the old service pages seem to be redirecting to the new service.


I still need to edit details in the old service first before migrating, but everything I turn to redirects to something else.


http://ift.tt/183EeDn


Anyone know how to do this?





How to execute a method once (multiple processes/instances) per minute utilizing AWS

I have a process that sends SQS messages every minute. It's important that the messages go out every minute so I'm planning on running the process on multiple instances so that it's more fault tolerant.


Even though it's running on multiple instances, I only want the SQS messages to go out once per minute. So if Machine A dispatches the messages I don't want Machine B to send them, and vice versa.


I want to avoid having a master/slave setup.


I thought of using a separate SQS queue to send a done message that could be received by one of the processes to start dispatching the messages, and send a done message when complete / after a minute; but if the done message doesn't get sent because of a failure or other issue, the cycle would end, and that's not acceptable.


I also thought of having the process that dispatches the messages place a timestamp in simpleDB or possibly another DB and have the processes check the timestamp on an interval. The first one that checks it and finds that it's older than a minute would update the timestamp and dispatch the messages.
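

To spell out the timestamp idea: the "first one to update the timestamp wins" step would have to be a conditional write so that only one process succeeds per minute. With boto and SimpleDB I imagine it would look roughly like this (domain and item names made up, untested):


import time

import boto.sdb
from boto.exception import SDBResponseError

def dispatch_sqs_messages():
    pass  # stand-in for whatever actually sends the SQS messages

conn = boto.sdb.connect_to_region("us-east-1")        # region assumed
domain = conn.get_domain("sqs-dispatch-lock")          # placeholder domain name

now = time.time()
item = domain.get_item("minute-lock", consistent_read=True)

if item is None:
    # First run ever: create the lock item and dispatch.
    conn.put_attributes(domain, "minute-lock", {"last_run": str(now)}, replace=True)
    dispatch_sqs_messages()
elif now - float(item["last_run"]) >= 60:
    try:
        # Conditional write: only succeeds if last_run still holds the value we
        # read, so only one of the competing instances wins this minute.
        conn.put_attributes(domain, "minute-lock", {"last_run": str(now)},
                            replace=True,
                            expected_value=["last_run", item["last_run"]])
        dispatch_sqs_messages()
    except SDBResponseError:
        pass  # another instance got there first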


I investigated SWF and found that it can run workers/activities on a timer, but SWF seems like overkill for this and I'd rather avoid getting it setup and running with access to my DB.


Does anyone have an elegant solution for problems like this?





Only allow touchscreen for Amazon Mechanical Turk

I'm trying to create an Amazon Mechanical Turk HIT with my own website where I want workers to only use a touchscreen device. What's the best way to ensure that my workers are only using a touchscreen and not a mouse? (If you're curious, this is for a research experiment where I need to record touch movements on the screen.)


Would the best way be to add event listeners to only touch events, such as "pointerdown" and "touchstart" and not "mousedown", etc.? Unfortunately, it seems that some browsers (e.g., IE on Windows 8 and chrome & firefox on Ubuntu) don't receive these touch events... I know that I won't be able to make this compatible with all browsers on all types of touch devices, but given that I want to make sure I am not getting mouse inputs, what's the best way to achieve the most compatibility?





Using AWS SWF to add a simple crontab to an EB app

I've been reading up on AWS' simple workflow service all day, and I'm still confused by it. I have an Elastic Beanstalk app with a file that needs to be run once every couple hours, which I could do (and was doing) via a simple crontab inside of the .ebextensions folder. The only problem is that if the auto-scaling app scales to a single instance, it may drop the leader instance and with it the task.


My question is: how do I go about running this task with SWF? It's a really simple task to repeat, but even with the AWS documentation and some examples, I don't really understand how to set it up and include it in my EB app.


Previously this is what I had:


In .ebextensions/01update_hipchat.config:


container_commands:
  01_cronjobs:
    command: "crontab .ebextensions/update_hipchat.txt"


In .ebextensions/update_hipchat.txt:


* * * * * root /usr/bin/python update_hipchat.py


(Honestly, I couldn't get this to work either, but I think I should switch over to SWF anyway.)


Any help or a point in the right direction would be appreciated!





Amazon Product Advertising API Signature

I am trying to produce a signature for the Amazon Product Advertising API. I've been at it a few hours and am still getting a 403. Could anyone have a quick look at the code and tell me if I am doing anything wrong, please?


This is the function I use to create the signature



def create_signature(service, operation, version, search_index, keywords, associate_tag, time_stamp, access_key):
    start_string = "GET\n" + \
                   "webservices.amazon.com\n" + \
                   "/onca/xml\n" + \
                   "AWSAccessKeyId=" + access_key + \
                   "&AssociateTag=" + associate_tag + \
                   "&Keywords=" + keywords + \
                   "&Operation=" + operation + \
                   "&SearchIndex=" + search_index + \
                   "&Service=" + service + \
                   "&Timestamp=" + time_stamp + \
                   "&Version=" + version

    dig = hmac.new("MYSECRETID", msg=start_string, digestmod=hashlib.sha256).digest()
    sig = urllib.quote_plus(base64.b64encode(dig).decode())

    return sig


And this is the function I use to return the string for the request



def ProcessRequest(request_item):
    start_string = "http://ift.tt/1f5d9Nk?" + \
                   "AWSAccessKeyId=" + request_item.access_key + \
                   "&AssociateTag=" + request_item.associate_tag + \
                   "&Keywords=" + request_item.keywords + \
                   "&Operation=" + request_item.operation + \
                   "&SearchIndex=" + request_item.search_index + \
                   "&Service=" + request_item.service + \
                   "&Timestamp=" + request_item.time_stamp + \
                   "&Version=" + request_item.version + \
                   "&Signature=" + request_item.signature
    return start_string


And this is the run code



_AWSAccessKeyID = "MY KEY"
_AWSSecretKey= "MY SECRET KEY"

def ProduceTimeStamp():
time = datetime.datetime.now().isoformat()
return time;

item = Class_Request.setup_request("AWSECommerceService", "ItemSearch", "2011-08-01", "Books", "harry%20potter", "PutYourAssociateTagHere", ProduceTimeStamp(), _AWSAccessKeyID)
item2 = Class_Request.ProcessRequest(item)


An example web request it spits out that produces a 403 is this:



http://ift.tt/1vDMqW7


There is also a holder class called Class_Request that just has a field for every request field.


The instructions I followed are here if anyone is interested: http://ift.tt/1joWDuw


I hope someone can help; I am new to Python and a bit lost.
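

From what I can tell from the docs, every parameter name and value has to be percent-encoded and the pairs sorted byte-wise before signing, which I don't think my code above does for the timestamp. This is the kind of canonical string I believe I'm supposed to build (a rough sketch based on my reading of the docs, not code I've verified against a live account):


# Rough sketch of building the canonical query string the way the docs describe
# (sorted, percent-encoded parameters); not verified against a live account.
import base64
import hashlib
import hmac
import urllib

def sign_request(params, secret_key):
    # Percent-encode names and values (RFC 3986 style), then sort byte-wise.
    pairs = sorted((urllib.quote(k, safe=''), urllib.quote(str(v), safe=''))
                   for k, v in params.items())
    canonical_query = '&'.join('{0}={1}'.format(k, v) for k, v in pairs)

    string_to_sign = 'GET\nwebservices.amazon.com\n/onca/xml\n' + canonical_query
    digest = hmac.new(secret_key, string_to_sign, hashlib.sha256).digest()
    signature = urllib.quote(base64.b64encode(digest), safe='')
    return canonical_query + '&Signature=' + signature

The full request URL would then just be the /onca/xml endpoint with the returned query string appended.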





Problems trying to ssh to AWS EC2 instance

I've been using an EC2 server with LAMP for a few months now, and all of a sudden I can't connect to it via ssh. I run the same command on Cygwin that I've been running since I started working with it, which is:



ssh -i ./Desktop/keys/teste.pem ubuntu@54.94.211.146 -v


At first I was getting this message on the debugger:



$ ssh -i Desktop/teste.pem -v ubuntu@54.94.211.146
OpenSSH_6.7p1, OpenSSL 1.0.1j 15 Oct 2014
debug1: Connecting to 54.94.211.146 [54.94.211.146] port 22.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file Desktop/teste.pem type -1
debug1: key_load_public: No such file or directory
debug1: identity file Desktop/teste.pem-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.7
ssh_exchange_identification: Connection closed by remote host


Then I tried rebooting the EC2 instance through the AWS dashboard. Now I get this:



$ ssh -i ./Desktop/keys/teste.pem ubuntu@54.94.211.146 -v
OpenSSH_6.7p1, OpenSSL 1.0.1j 15 Oct 2014
debug1: Connecting to 54.94.211.146 [54.94.211.146] port 22.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file ./Desktop/keys/teste.pem type -1
debug1: key_load_public: No such file or directory
debug1: identity file ./Desktop/keys/teste.pem-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.7
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 pat OpenSSH_6.6.1* compat 0x04000000
debug1: SSH2_MSG_KEXINIT sent
Connection closed by 54.94.211.146


I have absolutely no idea what happened. I don't remember doing anything unusual or messing with the identity file, even though the debug output suggests the problem is with identification. I've already tried to ssh from a different machine with a backup of 'teste.pem', and the errors are the same. I'm really at a loss.


Thanks in advance!





AWS PHP SDK ~ Why am I getting this error "The command must be prepared before retrieving the request"

I am using the latest PHP AWS SDK... I seem to be running into this problem in certain circumstances... for example:



<?php
require 'aws-autoloader.php';

use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'key'    => <skey>,
    'secret' => <secret>
));

$key = $_GET['key'];

try {

    $client->copyObject(array(
        'ACL'               => "private",
        'Bucket'            => "<bucket>",
        'Key'               => "{$key}-copy",
        'CopySource'        => "<bucket>/{$key}",
        'ContentType'       => "text/html",
        'CacheControl'      => "max-age=1, private",
        'MetadataDirective' => "REPLACE"
    ));

} catch (Exception $e) {
    echo json_encode($e->getMessage());
}

?>


Why am I getting this error, "The command must be prepared before retrieving the request?"





Do I have to start the ssh authentication agent every time I log in to EC2

I have noticed that when I log into my EC2 instance, I have to start the authentication agent and then add my private key to ssh before I can pull code into my instance. Is there a better way to do this than typing:


eval "$ssh-agent" ssh-add path-to-my-private-key


It seems like I am either doing this wrong or don't have the easiest way to do it.


Many thanks, Mark





PostgreSQL on Rails 3.2 not honoring sslmode

I have a Rails 3.2 app using an Amazon RDS PostgreSQL database. I want the app to connect to the database over SSL. My database.yml looks like this:



development:
  adapter: postgresql
  encoding: utf8
  database: xxx
  host: xxx.rds.amazonaws.com
  port: 1234
  sslmode: verify-full
  sslrootcert: <%= Rails.root %>/config/rds-combined-ca-bundle.pem
  username: xxx
  password: xxx


The sslrootcert is the CA certificate bundle downloaded from http://ift.tt/1CR7doI (see http://ift.tt/17AceqY).


The problem I am having is that sslmode verify-full does not seem to be working. I can change sslrootcert to /blah.pem and my database still connects and my Rails app functions. What am I missing?
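

To narrow down whether this is a Rails issue at all, I'm thinking of testing the same parameters outside of Rails with a quick Python/psycopg2 script (a sketch only; host and credentials are placeholders, and I'm assuming psycopg2 passes sslmode/sslrootcert straight through to libpq):


# Quick sanity check of certificate verification outside of Rails
# (host and credentials are placeholders).
import psycopg2

def try_connect(rootcert):
    try:
        psycopg2.connect(dbname='xxx', user='xxx', password='xxx',
                         host='xxx.rds.amazonaws.com', port=1234,
                         sslmode='verify-full', sslrootcert=rootcert)
        return 'connected'
    except psycopg2.OperationalError as e:
        return 'failed: {0}'.format(e)

print(try_connect('config/rds-combined-ca-bundle.pem'))  # should connect
print(try_connect('/blah.pem'))  # should fail if verify-full is really enforced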





How to change mount volume on AWS

I have two volumes in my AWS ubuntu instance:



NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvdb   202:16   0   40G  0 disk /mnt
xvda1  202:1    0  400G  0 disk /


I want the 400GB one to be the main one that the server uses. However, this is currently not the case:



$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       32G  3.5G   27G  12% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            3.7G  8.0K  3.7G   1% /dev
tmpfs           752M  208K  752M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.7G     0  3.7G   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/xvdb        40G   49M   38G   1% /mnt


How would I mount the big volume as the main one that the server uses?





AWS ruby sdk for Async non blocking calls

I want to publish custom application-level metrics to the AWS CloudWatch service (http://ift.tt/1wl6yXS), so that I can see all the metrics, both system and custom application-level, in the AWS dashboard. This way I don't have to use a third-party monitoring solution like Graphite.


The thing here is that I don't want my application to be slowed down by making blocking calls to push metrics. Is there a way I can make async calls (fire and forget) using the AWS Ruby SDK? I know there is an async client in the Java SDK, but I can't find anything for the Ruby SDK.
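

To illustrate the pattern I'm after (sketched in Python with boto only because that's what I know; it is obviously not the Ruby SDK), this is roughly what fire-and-forget publishing would look like, and I'd like the equivalent with the AWS Ruby SDK:


# Fire-and-forget metric publishing, sketched with Python/boto just to show the
# pattern (region, namespace and metric names are placeholders).
import threading

import boto.ec2.cloudwatch

cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')

def publish_async(name, value):
    def _put():
        cw.put_metric_data(namespace='MyApp', name=name, value=value, unit='Count')
    t = threading.Thread(target=_put)
    t.daemon = True  # don't block shutdown on a pending metric call
    t.start()

publish_async('orders_processed', 1)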





Random Celery tasks delaying fixed time

Some Celery tasks, without any pattern, are being delayed by 10, 20 or 30 seconds.


It seems to me there is a DNS resolution timeout of 10 seconds or something similar.
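

To test that theory, I plan to time name resolution from the worker host with something like this (a quick sketch; the hostname is a placeholder for whatever the tasks actually connect to):


# Quick check for slow DNS lookups from the worker host
# (the hostname is a placeholder for whatever the tasks actually talk to).
import socket
import time

def time_lookups(host, attempts=10):
    for _ in range(attempts):
        start = time.time()
        socket.getaddrinfo(host, 443)
        print('{0:.3f}s'.format(time.time() - start))

time_lookups('api.example.com')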


Attaching trace:



[2015-02-26 19:03:03,417: INFO/MainProcess] Task apps.tasks.metrics.update_mp_user[8d4c90da-ef16-4b1e-8f7d-7ca635f8101a] succeeded in 0.412914252001s: None
[2015-02-26 19:03:13,071: INFO/MainProcess] Task apps.tasks.metrics.update_mp_user[ecc4b268-0917-4172-bdbf-144da5e3d67a] succeeded in 10.112269582s: None
[2015-02-26 19:03:13,628: INFO/MainProcess] Task apps.tasks.views.remove_fake_attendees[e9cd7bbd-a393-46b2-84b7-652ee3e069cc] succeeded in 10.059868463s: None
[2015-02-26 19:03:23,970: INFO/MainProcess] Task apps.tasks.views.register_view_plan[fe2b1113-fff9-483e-993b-2a5fadee170b] succeeded in 20.997350218s:
[2015-02-26 19:03:24,797: INFO/MainProcess] Task apps.tasks.metrics.update_mp_user[03ba7c4d-6ffa-4991-baa7-f6d71561da8f] succeeded in 11.157462471s: None
[2015-02-26 19:03:24,798: INFO/MainProcess] Task apps.tasks.views.attend_free_plan[84a45343-9d79-4e2c-af5c-a5d7e17764a0] succeeded in 21.538693966s: True
[2015-02-26 19:03:34,082: INFO/MainProcess] Task apps.tasks.metrics.update_mp_user[878d982c-5975-4ede-bf5c-e046c516d9db] succeeded in 10.110408876s: None
[2015-02-26 19:03:34,208: INFO/MainProcess] Task apps.tasks.views.register_view_plan[ae6ff3da-b533-426c-812c-4872d7b085a8] succeeded in 0.125267753s:




How do I change the nameservers for my AWS Route53 Registered Domain using boto?

I'm unable to use the AWS boto API to change name servers for my AWS Route53 Registered Domain. In the following Python code, I get



boto.exception.JSONResponseError: JSONResponseError: 400 Bad Request
{u'Message': u'Expected null', u'__type': u'SerializationException'}


even though I'm using the API as documented, passing a list of strings such as



['ns-705.awsdns-21.net', 'ns-1401.awsdns-24.org', 'ns-1107.awsdns-11.co.uk', 'ns-242.awsdns-75.com']


as the second argument.


How can I change nameservers from Python?





def createhz(domain=None, verbose=False):
    """Create a Hosted Zone for the specified domain and update nameservers for Route53 Registered Domain"""
    r53 = boto.route53.connection.Route53Connection()
    if r53.get_hosted_zone_by_name(domain + '.'):
        print('WARNING: Hosted Zone for {} already exists.'.format(domain))
        hz = r53.get_zone(domain + '.')
    else:
        if verbose:
            print('Creating Hosted Zone for {}.'.format(domain))
        hz = r53.create_zone(domain + '.')

    nameservers = hz.get_nameservers()
    if verbose:
        print('Hosted Zone has nameservers:')
        for ns in nameservers:
            print(' {}'.format(ns))

    registered_domains = boto.route53.domains.layer1.Route53DomainsConnection()

    try:
        registered_domains.get_domain_detail(domain)
        if verbose:
            print('Updating nameservers for Route53 Registered Domain.'.format(domain))
        # THE FOLLOWING LINE FAILS
        registered_domains.update_domain_nameservers(domain, nameservers)
    except Exception as e:
        if e.message == 'Domain not found.':
            print('WARNING: No Route53 Registered Domain for {}.'.format(domain))
            print('Set the nameservers at your domain registrar to:.'.format(domain))
            for ns in nameservers:
                print(' {}'.format(ns))
        else:
            raise e

    return





Traceback (most recent call last):
  File "manage.py", line 362, in <module>
    manager.run()
  File "/usr/local/lib/python2.7/site-packages/flask_script/__init__.py", line 412, in run
    result = self.handle(sys.argv[0], sys.argv[1:])
  File "/usr/local/lib/python2.7/site-packages/flask_script/__init__.py", line 383, in handle
    res = handle(*args, **config)
  File "/usr/local/lib/python2.7/site-packages/flask_script/commands.py", line 216, in __call__
    return self.run(*args, **kwargs)
  File "manage.py", line 336, in createhz
    raise e
boto.exception.JSONResponseError: JSONResponseError: 400 Bad Request
{u'Message': u'Expected null', u'__type': u'SerializationException'}
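

Based on my reading of the UpdateDomainNameservers API reference, my current guess is that the call wants a list of Nameserver structures rather than bare strings, so this is what I plan to try next (not yet verified; the domain name is a placeholder):


# What I plan to try next: pass Nameserver structures instead of bare strings
# (based on my reading of the UpdateDomainNameservers API; not yet verified).
import boto.route53.domains.layer1

registered_domains = boto.route53.domains.layer1.Route53DomainsConnection()

names = ['ns-705.awsdns-21.net', 'ns-1401.awsdns-24.org',
         'ns-1107.awsdns-11.co.uk', 'ns-242.awsdns-75.com']

registered_domains.update_domain_nameservers(
    'example.com',
    [{'Name': name} for name in names])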




Why can't I find the Amazon S3 class in the latest version (2.3.5) of Zend Framework?

I have the direct download link for Zend Framework:


http://ift.tt/1AwDxOh


I've downloaded it, and across the internet people show tutorials on how to use the Amazon S3 class to save files directly to S3. However, I cannot find the Amazon S3 class for some reason. Was it recently removed from Zend Framework?





AWS EC2 - Permission denied

Here’s what I’m entering in Terminal on OS X:



chmod 400 ~/.ssh/adamcarter-key-pair.pem
ssh -i ~/.ssh/adamcarter-key-pair.pem ec2-user@52.11.35.5


And am getting this in return:



Permission denied (publickey).


I’ve tried chmod 600, putting the .pem file in my Downloads folder, using all of the alternatives to ec2-user in the docs. I have no idea what else I can try. I’ve also tried the longer address under ‘Public DNS’.


One thing I did notice is when I created my key pair, it was instantly downloaded as a .txt file. I’ve changed the extension to .pem though so that shouldn’t be a problem, should it?


Any ideas would be very welcome!





Why does my AWS EC2 instance automatically start after I manually stop it?

I have an AWS EC2 instance. To avoid extra billing, I manually select the instance and stop it. However, after some time it automatically starts back up, thereby adding to my bill. How do I keep it stopped permanently and start it manually only when I want to?





Nodejs image resizer with graphicmagick

I have the following Node.js code which, as it is, gets an image from AWS S3, resizes it into 4 different sizes and then saves it back into the AWS bucket in separate folders. However, I need to write it so that it can be run in the dev environment as well. How could I write this so that, depending on the input (a local file on a Vagrant machine, or on the AWS server), different functions are called (what should it listen to?)? It is worth noting that I am using AWS's new service, Lambda.



// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true });
var util = require('util');

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
    var srcBucket = event.Records[0].s3.bucket.name;
    var srcKey = event.Records[0].s3.object.key;

    var _800px = {
        width: 800,
        dstnKey: srcKey,
        destinationPath: "large"
    };

    var _500px = {
        width: 500,
        dstnKey: srcKey,
        destinationPath: "medium"
    };

    var _200px = {
        width: 200,
        dstnKey: srcKey,
        destinationPath: "small"
    };

    var _45px = {
        width: 45,
        dstnKey: srcKey,
        destinationPath: "thumbnail"
    };

    var _sizesArray = [_800px, _500px, _200px, _45px];

    var len = _sizesArray.length;

    console.log(len);
    console.log(srcBucket);
    console.log(srcKey);

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        console.error('unable to infer image type for key ' + srcKey);
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        console.log('skipping non-image ' + srcKey);
        return;
    }

    // Download the image from S3, transform, and upload to same S3 bucket but different folders.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            },
            next);
        },

        function transform(response, next) {
            for (var i = 0; i < len; i++) {
                // Transform the image buffer in memory.
                gm(response.Body, srcKey)
                    .resize(_sizesArray[i].width)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            }
        },

        function upload(contentType, data, next) {
            for (var i = 0; i < len; i++) {
                // Stream the transformed image to a different folder.
                s3.putObject({
                    Bucket: srcBucket,
                    Key: "dst/" + _sizesArray[i].destinationPath + "/" + _sizesArray[i].dstnKey,
                    Body: data,
                    ContentType: contentType
                },
                next);
            }
        }

    ], function (err) {
        if (err) {
            console.error(
                '---->Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + srcBucket + '/dst' +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                '---->Successfully resized ' + srcBucket +
                ' and uploaded to' + srcBucket + "/dst"
            );
        }

        context.done();
    });
};




Optimize MySQL setting for AWS EC2 t2.small

I have a web server with Apache and MySQL running on AWS EC2 t2.small with Windows 2012 Server. AWS EC2 t2.small characteristics:



  • RAM 2 GB (used 65%)

  • 1 CPU 2.50 GHz (used 1%)


Now the MySQL process (mysqld.exe) uses 400 MB of RAM (too much for me).


MySQL current settings are (my.ini):



key_buffer = 16M
max_allowed_packet = 16M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
tmp-table-size = 32M
max-heap-table-size = 32M
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
innodb-log-files-in-group = 2
innodb-log-file-size = 64M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 800M
innodb_buffer_pool_size = 128M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50


The database is made up of 20 InnoDB tables, each with 5-10 columns. The server has low traffic.


How can I optimize my settings to suit an EC2 t2.small (2 GB RAM)?





some regions can't resolve my domain by using ns of aws

Step 1: I've just registered a new domain through Route53; the default name servers look like this:


[screenshot: the registered domain's default name servers]


Step 2: then I also used Hosted Zones to manage my domain and added some A records, like this:


[screenshot: the hosted zone's record sets, including the A records]


I followed some tips to make these two places have the same NS. My questions are:




  1. I made the hosted zone's name servers identical to those of the registered domain (now all NS are those in the first picture), but it didn't work.


    The error (from an online DNS-resolution website) was something like:

    Response: Unknown (refused at ns-631.awsdns-14.net). Sorry, I could not continue.


    The nslookup command returned something like: "server can't find www.domain.com: NXDOMAIN"




  2. I made the registered domain's NS identical to those of the hosted zone (now all NS are those in the second picture); this time it seemed to work. See pictures:


    [screenshots: successful resolution after the change]




However, it brought up another question:


I can't access my domain from my own network. My personal computer's DNS server is 192.168.1.254 (from my router). If I change it to 8.8.8.8, it works. I'm not sure whether others can resolve my domain or not, and I also can't force them to change their DNS server.
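

To check this, I was going to query a few public resolvers directly with dnspython (a small sketch; 'www.mydomain.com' stands in for my actual domain):


# Query a few resolvers directly to see which ones can resolve the domain
# ('www.mydomain.com' stands in for my actual domain).
import dns.resolver  # pip install dnspython

def check(resolver_ip, name='www.mydomain.com'):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [resolver_ip]
    try:
        return [str(rr) for rr in resolver.query(name, 'A')]
    except Exception as e:
        return 'failed: {0}'.format(e)

for ip in ['8.8.8.8', '208.67.222.222', '192.168.1.254']:
    print(ip, check(ip))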


Any help will be appreciated.





postgres_fdw cannot connect to server on Amazon RDS

I have two Postgres 9.3.5 instances in RDS, both in one security group that allows all inbound traffic from within the security group and all outbound traffic. I'm trying to set up one database to be able to select from a few tables from the other via postgres_fdw.


I've created the server -



create server master
    foreign data wrapper postgres_fdw
    OPTIONS (dbname 'main',
             host 'myinstance.xxxxx.amazonaws.com');


as well as the requisite user mapping and foreign table -



create foreign table condition_fdw (
    cond_id integer,
    cond_name text
) server master options(table_name 'condition', schema_name 'data');


However, a simple select count(*) from condition_fdw gives me



ERROR: could not connect to server "master"
DETAIL: could not connect to server: Connection timed out
Is the server running on host "myinstance.xxxxxx.amazonaws.com" (xx.xx.xx.xx) and accepting
TCP/IP connections on port 5432?


I can connect to both databases via psql from an EC2 instance. I know until recently RDS didn't support postgres_fdw, but I'm running the newer versions that do.


In the create server statement, I have tried replacing "myinstance.xxxxxx.amazonaws.com" with the IP address it resolves to, no luck.


Any ideas?





How to get the content of a secured remote xml file to work with in Wordpress?

There is an XML file on a remote server. It is updated daily.

I now need to access this file to work with it.


How I access the file via browser:



  1. Go to http:// example. com /things.xml (had to include the spaces because I need a reputation of 10 to post two urls)

  2. Get a "authentication required"-window and put in username and password

  3. Get redirected to "http://ift.tt/1BAMpVA"

  4. See rendering of XML by browser


I tried file_get_contents('username:password@http://ift.tt/1AbxlGG') and got the following output:



E_WARNING : type 2 -- file_get_contents(username:password@http://ift.tt/1AbxlGG) [function.file-get-contents]: failed to open stream: No such file or directory



Even if I were able to access the file, there would still be the problem of getting its contents to work inside Wordpress.


My goal is to display a list of "things".


Thank you in advance

(very first post. Hope I haven't made a lot of mistakes)


Mightvision





GCM data not being passed by AWS SNS

I'm using Amazon SNS to send push notifications to an Android device. If I send the following JSON I can't read the parameters in the data element.


{ "default": "message here", "GCM": { "data": { "message": "This is the message" } } }


I can read the default element, but in my BroadcastReceiver I can't do this:


protected void onHandleIntent(Intent intent) {

    Bundle extras = intent.getExtras();

    Log.d("GCM", extras.getString("message"));
}


Trying to read the message element causes an error.


If I send directly through GCM, I can read all of the parameters that start with "data." using the above method with no problem at all.
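

For reference, a payload like the one above can be published from Python roughly like this (a minimal boto sketch, not my exact code; the topic ARN and region are placeholders, and I'm assuming publish accepts message_structure). The part I'm unsure about is that with message_structure='json' the value under "GCM" apparently has to be a JSON-encoded string itself:


# Minimal sketch of publishing a GCM payload through SNS
# (topic ARN and region are placeholders, not my real setup).
import json

import boto.sns

sns = boto.sns.connect_to_region('us-east-1')

gcm_payload = {'data': {'message': 'This is the message'}}

sns.publish(
    topic='arn:aws:sns:us-east-1:123456789012:my-topic',
    message=json.dumps({
        'default': 'message here',
        'GCM': json.dumps(gcm_payload),  # the GCM value must itself be a JSON string
    }),
    message_structure='json')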


What am I doing wrong?





Wordpress permalink not working on aws

I have spent 4-5 hours trying to sort this out but have not been able to solve it.


I have set up my WordPress website on AWS. Everything is working fine except the WordPress permalinks.


When permalinks are set to the default, pages/posts work, but they do not work with "%post-name%".


I have tried almost everything I found by searching Google, but with no success.


I saw many solutions, all related to the httpd.conf file, but on my server there is no httpd.conf file and no http directory either.


I changed the following code in the apache.conf file, but it is still not working:



<Directory />
    Options FollowSymLinks
    AllowOverride All
    Require all denied
</Directory>

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>


I have restarted Apache again and again, but no luck.


Please help me guys.


Thanks.





AWS: Convert t1.micro (PV) to t2.medium (HVM)

I have a t1.micro (PV) instance and I am trying to upgrade (resize) it to a t2.medium (HVM).


However, when I created an AMI from the t1.micro and tried to launch it as a t2.medium, that instance type did not appear in the list, so I could not attach the AMI to a t2.medium.


So may I know how I can attach the AMI to a new instance from the EC2 console? Also, how can I take care of SSH, SSL, cron and multiple domains while migrating?


Resource I referred to: http://ift.tt/1EujmCb


Thanks





AmazonS3FullAccess managed policy on a group doesn't give S3 permission?

I have an S3 bucket that has in its policy permission for my CloudFront origin access identity:



{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <mine>"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<my-bucket>/*"
        }
    ]
}


Additionally I've created a group and attached the AmazonS3FullAccess managed policy to it and added an IAM user to that group. The managed policy looks like this:



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}


However, when that user tries to add anything to the bucket, I get a 403 (access denied). I'm unsure whether any other operations work; I haven't written code to try them. It's only when I specifically allow that user's ARN access to the bucket directly in the bucket policy that they're allowed to add objects. What am I missing? It seems like the above group policy should allow members of that group access to all operations on all buckets, but it doesn't.
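

As a next step I plan to run a minimal upload test with the IAM user's keys, to see whether the 403 is specific to this bucket or happens everywhere (a boto sketch; the bucket name and key are placeholders):


# Minimal upload test with the IAM user's keys (bucket and key names are placeholders).
import boto
from boto.s3.key import Key

conn = boto.connect_s3('THE_USERS_ACCESS_KEY', 'THE_USERS_SECRET_KEY')
bucket = conn.get_bucket('my-bucket', validate=False)

key = Key(bucket)
key.key = 'permission-test.txt'
key.set_contents_from_string('hello')  # raises S3ResponseError (403) if denied

# For comparison, try the same against a second bucket the group policy should
# equally cover.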