Thursday 30 April 2015

Warnings when validating AWS Elastic Transcoder HLS file using Apple's Media Stream Validator

I am currently transcoding my videos using AWS Elastic Transcoder. I create a job, and add outputs for the following quality presets:

  • System Preset: HLS Audio - 64k
  • System Preset: HLS 400k
  • System Preset: HLS 1M
  • System Preset: HLS 2M

I then create a master playlist called index and add these outputs/presets to it.
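For context, a rough sketch of creating such a job with boto3 (the pipeline ID, keys, and preset IDs below are placeholders, not values from this post):

import boto3

# Sketch only: placeholder pipeline/preset IDs; look the real HLS system
# preset IDs up in the Elastic Transcoder console or the ListPresets API.
transcoder = boto3.client('elastictranscoder', region_name='us-east-1')

hls_outputs = [
    {'Key': 'hls_audio_64k', 'PresetId': 'HLS_AUDIO_64K_PRESET_ID', 'SegmentDuration': '10'},
    {'Key': 'hls_400k', 'PresetId': 'HLS_400K_PRESET_ID', 'SegmentDuration': '10'},
    {'Key': 'hls_1m', 'PresetId': 'HLS_1M_PRESET_ID', 'SegmentDuration': '10'},
    {'Key': 'hls_2m', 'PresetId': 'HLS_2M_PRESET_ID', 'SegmentDuration': '10'},
]

job = transcoder.create_job(
    PipelineId='PIPELINE_ID',
    Input={'Key': 'input/video.mov'},
    OutputKeyPrefix='output/video/',
    Outputs=hls_outputs,
    # The master playlist ("index") referencing all of the variant outputs.
    Playlists=[{
        'Name': 'index',
        'Format': 'HLSv3',
        'OutputKeys': [o['Key'] for o in hls_outputs],
    }],
)
print(job['Job']['Id'])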

When the video is done transcoding, I'm using Apple's Media Stream Validator tool via terminal to validate the index file.

Here are the warnings that I am getting:

1) PROGRAM-ID has been deprecated and is no longer a valid attribute for #EXT-X-STREAM-INF
2) #EXT-X-ALLOW-CACHE should only be in master playlist
3) Unable to read video timestamps in track 1; this may be due to not having a key frame in this segment
4) Unable to read decode timestamps in track 1; this may be due to not having a key frame in this segment

I am not using any custom presets, only the ones listed above. These warnings happen both with videos recorded on the user's iPhone and with YouTube videos that I have downloaded and converted to .mov files before running them through Elastic Transcoder.

I know I can't be the only one experiencing these issues since they are defaults provided by AWS Elastic Transcoder.

I am worried that these warnings will prevent my iOS app from being accepted into the App Store.




Keep s3 folder private except for web application access

I am trying to protect a folder in my S3 bucket structure and only allow access to the web app using an AWS access key:

company
  production
  development
  private

If I put a DENY access directly on the private folder, then that DENY statement overrides all other bucket policies or IAM policies set for the web app. Even if the web app has full admin access through an IAM policy, it will not be able to access the private folder if there is a DENY statement on that folder in the bucket policy.

I have tried the following bucket policy, but it still allows an anonymous user to look at files in the protected folder.

I thought all folders in a bucket are private unless stated in the bucket policy?

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::company/production/*"
        },
        {
            "Sid": "allow access to private files in the private folder",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::88888888888:user/web_app"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::company/*"
        }
    ]
}




AWS-SES: Handling Bounces for Invalid ISPs

I have created an emailing system using Amazon's Simple Email Service (SES) that handles bounces for invalid addresses with the Notification (SNS) and Queue (SQS) services. Sending emails to valid addresses works as expected, but I am running into a problem when trying to report bounces.
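For reference, a minimal sketch of the kind of SQS polling loop that picks up the SNS-delivered bounce notifications (the queue URL is a placeholder; this is an illustration, not the exact code in use here):

import json
import boto3

sqs = boto3.client('sqs', region_name='us-east-1')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/ses-bounces'  # placeholder

while True:
    # Long-poll the queue for SNS-wrapped SES notifications.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        envelope = json.loads(msg['Body'])               # SNS envelope
        notification = json.loads(envelope['Message'])   # SES notification payload
        if notification.get('notificationType') == 'Bounce':
            for recipient in notification['bounce']['bouncedRecipients']:
                print('bounced:', recipient['emailAddress'])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])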

There are 2 bounce situations: the first one works and the second one does not.

1) Emailing a fake address at an existing ISP (e.g. foo@gmail.com or foo2@yahoo.com) correctly bounces and sends a notification to my queue through SNS.

2) After emailing a fake address at a fake ISP (e.g. gideon@rosenthal.com), the queue never receives a bounce from SNS.

However, the bounce is recognized on some level by AWS, because it is added to the bounce statistics graph in the console.


I can't remove these addresses from my email list if I am never notified that the email has bounced.

After doing a lot of research, I initially thought that it was a problem with the AWS suppression list, but I don't think that's possible, since I have tried sending to email addresses that were very unlikely to have been used in the past 12 days.

My other thought is that this is a soft bounce, and the system will only be updated if it continues to bounce over the next 12 hours.

Any suggestions or advice would be appreciated.




web service address for Jenkins on AWS Windows installed as a Windows Service

I have just installed Jenkins (as a Windows Service) on AWS Windows (Win Server 2012R2) instance.

I'm able to launch it via browser using http://localhost:8080

What URL should I use to access this server from another client? I have tried http://:8080 and without the port number, but my client (over the internet) fails to connect. The public DNS name of the AWS instance is also not working.

I set up another Jenkins on an AWS Ubuntu server with Apache Tomcat, and it connects just fine using the public DNS name of that Ubuntu server.




Obtaining app private key for APNS .p12?

I am trying to renew my client's APNS certificates to continue sending push notifications using Amazon AWS, and I am following this guide: http://ift.tt/15DSlGH

I am having trouble with step 3. I imported the .cer file into Keychain Access, but as you can see in the screenshot there is no drop down/private key associated with the certificate. Does anyone have any experience with this?

http://ift.tt/1QPSZMt




File upload from swift to django

I am trying to upload photos and videos from a Swift iOS app to Amazon S3 using a Django backend. I found a Django app to connect Django to S3: http://ift.tt/1zhj4hW. The problem is I have no idea how to upload a file from Swift to Django.

I got django-s3direct working in the admin panel, and it provides a nice method to generate a form for uploading files, but I don't know how to upload files from Swift using this Django plugin.

Should I find a different way of connecting the Django backend to Amazon S3? Is there a tutorial out there somewhere for this? (I can't find any.)
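One possible direction (a sketch, not tied to django-s3direct): have the Swift app POST the file as multipart form data to a plain Django view and let the configured storage backend (for example django-storages pointed at S3) write it to the bucket. The view and field names below are assumptions:

# views.py (sketch): assumes DEFAULT_FILE_STORAGE is an S3 storage backend
from django.core.files.storage import default_storage
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # a mobile client has no CSRF token; use real authentication in practice
def upload(request):
    if request.method != 'POST' or 'file' not in request.FILES:
        return JsonResponse({'error': 'expected a multipart POST with a "file" field'}, status=400)
    uploaded = request.FILES['file']
    # default_storage writes to S3 when an S3 backend is configured in settings.
    saved_name = default_storage.save('uploads/' + uploaded.name, uploaded)
    return JsonResponse({'key': saved_name, 'url': default_storage.url(saved_name)})

The Swift side would then just be an ordinary multipart/form-data POST to this endpoint.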




How do I upload to Amazon S3 using .NET HttpClient WITHOUT using their SDK

How do I upload a file to a bucket on Amazon S3 just using .NET's HttpClient (or WebClient)? It has to be done using "PUT".

I can upload using Amazon's AWS SDK, but I would like to know how to do it without it. I've looked at their documentation for hours and still don't get it.

Any help would be greatly appreciated.

My code using their SDK is:

public static void TestPutObjectAsync() {
        AmazonS3Client client = new AmazonS3Client();
        client.AfterResponseEvent += new ResponseEventHandler(callback);

        PutObjectRequest request = new PutObjectRequest {
            BucketName = BUCKET_NAME,
            Key = "Item1",
            FilePath = FILE_PATH,
        };

        client.PutObjectAsync(request);
}

public static event EventHandler UploadComplete;

public static void callback(object sender, ResponseEventArgs e) {

        if (UploadComplete != null) {
            UploadComplete(sender, e);
        }
}




How to pass the output from logstash in local machine to Elasticsearch in AWS

I want to dump my Logstash output to Elasticsearch hosted in AWS. I couldn't find any proper documentation online for this. Please advise.

Here is my logstash message:

"message" => "{\"timestamp\":\"Wed Apr 29 18:54:16 PDT 2015\",\"vHost_name\":\"130.65.132.233\",\"cpuUsage\":4438,\"cpuUsagemhz\":2130,\"memUsage\":9235,\"memGranted\":1172063,\"memActive\":359418,\"memConsumed\":1936530,\"diskUsage\":43,\"diskRead\":21,\"diskWrite\":21,\"netUsage\":20603,\"netReceived\":17986,\"netTrasnmitted\":2617,\"sysUptime\":797067,\"sysResourcesCpuUsage\":79}\r"

I want to separate every parameter in this message and pass it to my AWS Elasticsearch instance. Please help. Thanks.




Bamboo npm.load() required error

I'm using Atlassian Bamboo and Amazon Web Services as a build server and attempting to set up a build project for a web application that uses npm packages.

I'm using a slightly modified version of ami-04ccf46c, the Windows Server 2012 R2 image on Bamboo utilizing Amazon Web Services.

In my build plan, I am running a simple npm install task using a task of type npm. When I try to run the build plan, however, I receive the following in the logs:

30-Apr-2015 09:11:05 C:\opt\node-v0.10.35\node_modules\npm\lib\npm.js:32
30-Apr-2015 09:11:05 throw new Error('npm.load() required')
30-Apr-2015 09:11:05
30-Apr-2015 09:11:05 Error: npm.load() required
30-Apr-2015 09:11:05    at Object.npm.config.get (C:\opt\node-v0.10.35\node_modules\npm\lib\npm.js:32:11)
30-Apr-2015 09:11:05    at exit (C:\opt\node-v0.10.35\node_modules\npm\lib\utils\error-handler.js:51:40)
30-Apr-2015 09:11:05    at process.errorHandler (C:\opt\node-v0.10.35\node_modules\npm\lib\utils\error-handler.js:342:3)
30-Apr-2015 09:11:05    at process.emit (events.js:95:17)
30-Apr-2015 09:11:05    at process._fatalException (node.js:295:26)

Why does npm crash? Is npm not set up properly? Do I need to set some system variable?

View any discussion on this same question posted to Atlassian Answers.

Thanks in advance.




Can't access file in home directory from PHP on a EC2 server

Hi everyone, I am wondering why this returns an empty string:

$settings = file_get_contents('/home/ec2-user/settings.json');

but this works:

$settings = file_get_contents('/var/www/settings.json');

I can't seem to access my JSON file from the home directory... Any ideas? Thanks.




SSH freely inside AWS VPC

How do I configure my EC2 machines inside a VPC so that I can SSH between them without a password or key?

What I'm trying to do is access one machine (which has a public IP) and, from this machine, access all the others freely.

Is it even possible?




Use AWS Kinesis as a data source for an EMR MapReduce job

I set up an AWS Kinesis stream that receives data from multiple sources. I'd like to process that data in multiple incremental batches using MapReduce in EMR.

How do I specify the input source in my job? Are there any specific libraries to handle a Kinesis record? Sample code would be much appreciated!




Missing required arguments: aws_access_key_id, aws_secret_access_key on rake test

I'm doing chapter 11 of Hartl's tutorial. When I ran heroku run rake db:migrate I got this error:

Missing required arguments: aws_access_key_id, aws_secret_access_key

I solved it with the answer in a linked question and migrated successfully, but now when I run

bundle exec rake test

It gives me:

rake aborted!
ArgumentError: Missing required arguments: aws_access_key_id, aws_secret_access_key

This is my carrierwave file:

CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider               => 'AWS',
    :aws_access_key_id      => ENV['S3_KEY'],
    :aws_secret_access_key  => ENV['S3_SECRET'],
    :region                 => ENV['S3_REGION'],
    :endpoint               => ENV['S3_ENDPOINT']
  }

  if Rails.env.test? || Rails.env.development?
    config.storage = :file
    config.enable_processing = false
    config.root = "#{Rails.root}/tmp/uploads/#{DateTime.now.to_f}.#{rand(999)}.#{rand(999)}"
  else
    config.storage = :fog
  end

  config.cache_dir = "#{Rails.root}/tmp/uploads/#{DateTime.now.to_f}.#{rand(999)}.#{rand(999)}"
  config.fog_directory  = ENV['S3_BUCKET_NAME']
  config.fog_public     = false
  config.fog_attributes = {}
end

I tested the answer in another linked question and it didn't work for me.




Storing files in a database vs. a file system with PHP/MySQL

First off, I'm new to both PHP and MySQL, so I apologize for anything obvious I haven't yet found in my research.

I'm developing a lesson plan database that lets users search for lesson plans by record and will eventually allow uploading/downloading of various file formats to and from that lesson (record). Newly created lesson plans will need to have a new profile/database to store their own files and probably a new web page to navigate to...I can imagine this will get out of hand very quickly.

For example, I create a lesson with the name "Easy_as_123", subject as "Math", and grade as "1". Creating this record should then allow navigation to this lesson to add a textarea description and upload associated files available for download. I'm guessing the file sizes will be under 1MB. I'm hearing both filesystem and database is a valid option. Is there a sound way to do this with PHP/MySQL?

My problem is, I don't know how to go about storing the files so that they are referenced by the records. Do I need to create a separate database for each record and link it to that record?

Any help is appreciated




EMR issues with reducer.py

I'm running AWS and trying to run a simulation using EMR. I know my mapper.py file is correct, but I can't seem to figure out why my reducer.py file isn't working correctly. The idea is to sort a movies.csv file that holds data from IMDB and find the worst 20 movies from a voting and rating perspective. I've been trying to figure out why my code isn't working and would love some help if possible. All logs show that my mapper.py file is running correctly, but not reducer.py. I have included the code for my reducer.py. Thank you for the help.

reducer.py

#! /usr/bin/env python

import sys
from operator import itemgetter

arraysize = 20
q = 0

for line in sys.stdin:
     line = line.strip()
     title,votes,rating = line.split("\t")

try: 
        results = (title, int(votes), rating)
        results_printed.append(results)
        results_printed = [('x', int(0), 'x')]
        for q in range (0,arraysize):
            print(results_printed[q])
            q = q + 1
except ValueError:  pass
sorted(results_printed, key=itemgetter('votes'))
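For comparison, a corrected sketch of what the reducer appears to be aiming for (assuming tab-separated title/votes/rating lines on stdin and that "worst" means the 20 titles with the fewest votes):

#!/usr/bin/env python
import sys
from operator import itemgetter

results = []

for line in sys.stdin:
    line = line.strip()
    try:
        title, votes, rating = line.split("\t")
        results.append((title, int(votes), float(rating)))
    except ValueError:
        # skip malformed lines instead of crashing the reducer
        continue

# sort by vote count (field 1 of each tuple), lowest first, and emit the worst 20
for record in sorted(results, key=itemgetter(1))[:20]:
    print("%s\t%d\t%.1f" % record)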




DNS configuration on Ubuntu EC2

I created an Ubuntu 14.04 server instance on Amazon's EC2 (using AMI - ami-b141a2f5 ) running as a spot instance.

As soon as the instance is launched I can connect from my Mac using SSH to the elastic ip address.

However, I can't ping google.com or any other URI. I can't run apt-get, wget, curl, etc. All commands get 'hostname unresolved'.

I have used an elastic IP (linked to the VPC), Route 53 with a registered domain name and two rulesets:

ns-1620.awsdns-10.co.uk ns-250.awsdns-31.com ns-1338.awsdns-39.org ns-898.awsdns-48.net

VPC with: DNS Resolution = Yes, DNS Host Names - Yes,

and an unrestricted security group.

/etc/resolv.conf is empty.

Any ideas?

Thanks in advance!




Does the data in mongodb provisioned in EC2 gets replicated while Autoscaling?

To deploy a server in Amazon EC2, I wish to have the MongoDB master database on an EC2 instance itself, and on average I would have around 5-6 EC2 instances running in parallel, scaled by an Amazon Auto Scaling group.

As the database is updated frequently and all instances are behind an Elastic Load Balancer, it is hard to predict which user's data is in which EC2 database. By following this approach, am I assured of data consistency while scaling in and out? If it is not a good approach, please suggest alternative ways of doing it.




Django Static Files on S3: S3ResponseError: 301 Moved Permanently

I'm trying to host my Django Static and Media files on Amazon S3 and I've been following every guide out there, but I still end up getting S3ResponseError: 301 Moved Permanently errors on deployment of my Elastic Beanstalk Application when it tries to run collectstatic.

My S3 is working and I can access other files on it. I also have it set to a custom domain so you can access the same file in the following ways:

  1. http://ift.tt/1JUHkH6
  2. http://ift.tt/1JUHkH8
  3. http://ift.tt/1I0ShcV

It is the third option that I want to use, but I've tried the other ones as well, both with and without https:// in the settings below.

My settings file looks like this:

#settings.py file
AWS_ACCESS_KEY_ID = 'XXX'
AWS_SECRET_ACCESS_KEY = 'XXX'
AWS_HEADERS = { 
    'Expires': 'Thu, 31 Dec 2099 20:00:00 GMT',
    'Cache-Control': 'max-age=94608000',
}
AWS_STORAGE_BUCKET_NAME = 's3.condopilot.com'
# I have also tried setting AWS_S3_CUSTOM_DOMAIN to the following:
# - "http://ift.tt/1JUHluv" % AWS_STORAGE_BUCKET_NAME
# - "http://ift.tt/1mwCfbS" % AWS_STORAGE_BUCKET_NAME
# - "s3.condopilot.com"
AWS_S3_CUSTOM_DOMAIN = "%s.s3-eu-west-1.amazonaws.com" % AWS_STORAGE_BUCKET_NAME
AWS_S3_CALLING_FORMAT = 'boto.s3.connection.OrdinaryCallingFormat'
AWS_S3_SECURE_URLS = False # Tried both True and False
AWS_S3_URL_PROTOCOL = 'http' # Tried with and without

STATICFILES_LOCATION = 'static'
STATIC_URL = "http://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, STATICFILES_LOCATION)
STATICFILES_STORAGE = 'custom_storages.StaticStorage'

MEDIAFILES_LOCATION = 'media'
MEDIA_URL = "http://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, MEDIAFILES_LOCATION)
DEFAULT_FILE_STORAGE = 'custom_storages.MediaStorage'

The reason I have AWS_S3_CALLING_FORMAT = 'boto.s3.connection.OrdinaryCallingFormat' is because without it I get the following error: ssl.CertificateError: hostname 's3.condopilot.com.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com'. All advice I find online regarding that error says that OrdinaryCallingFormat should be used when bucket name contains dots, example s3.condopilot.com.

My custom storages file looks like this:

#custom_storages.py
from django.conf import settings
from storages.backends.s3boto import S3BotoStorage

class StaticStorage(S3BotoStorage):
    location = settings.STATICFILES_LOCATION

class MediaStorage(S3BotoStorage):
    location = settings.MEDIAFILES_LOCATION

And yes, my S3 bucket is set up in eu-west-1.




How to search an Amazon S3 Bucket using Wildcards?

This Stack Overflow answer helped a lot. However, I want to search for all PDFs inside a given bucket.

  1. I click "None".
  2. Start typing.
  3. I type *.pdf
  4. Press Enter

Nothing happens. Is there a way to use wildcards or regular expressions to filter bucket search results via the online S3 GUI console?
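As far as I know the console search box only matches key prefixes, so a common workaround is to list the keys programmatically and filter client-side; a boto3 sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects')  # pages past the 1000-key limit

# Collect every key in the bucket that ends with ".pdf"
pdf_keys = []
for page in paginator.paginate(Bucket='my-bucket'):
    for obj in page.get('Contents', []):
        if obj['Key'].lower().endswith('.pdf'):
            pdf_keys.append(obj['Key'])

print('\n'.join(pdf_keys))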




MongoDump Command Not Found on AWS EC2 Instance

Inside an AWS EC2 instance, I am trying to import my MongoLab-hosted database using the mongodump command.

sudo mongodump -h dsXXXX.mongolab.com:xxxxx -d testDB -u XXXXX -p YYYYYYY -o dumpmongolabs

I get the following error: -bash: mongodump: command not found

I had installed MongoDB like this

echo "[10gen]
name=10gen Repository
baseurl=http://ift.tt/JvOSFo
gpgcheck=0" | sudo tee -a /etc/yum.repos.d/10gen.repo

sudo yum -y install mongo-10gen-server mongodb-org-shell

What are we missing exactly? Cheers and Thanks in Advance.




How to create ssl certificate chain?

I have a public key certificate and a private key, and I have uploaded both keys to the AWS CloudFront service.

I tried to configure CloudFront, selected a custom SSL certificate, and clicked the "Yes, Edit" button. I received the error message below:

com.amazonaws.services.cloudfront.model.InvalidViewerCertificateException: The specified SSL certificate doesn't exist in the IAM certificate store, isn't valid, or doesn't include a valid certificate chain. (Service: AmazonCloudFront; Status Code: 400; Error Code: InvalidViewerCertificate; Request ID: c169e804-ef21-11e4-a864-99c1866d5c97)

Please advise on the above error.
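For what it's worth, CloudFront expects the certificate to be uploaded to IAM together with its intermediate chain and under a /cloudfront/ path; a boto3 sketch, with placeholder file names:

import boto3

iam = boto3.client('iam')

with open('certificate.pem') as f:
    cert_body = f.read()
with open('private-key.pem') as f:
    private_key = f.read()
with open('certificate-chain.pem') as f:   # intermediate CA certificates from the issuer
    cert_chain = f.read()

iam.upload_server_certificate(
    ServerCertificateName='my-cloudfront-cert',
    CertificateBody=cert_body,
    PrivateKey=private_key,
    CertificateChain=cert_chain,
    Path='/cloudfront/',   # so CloudFront can see the certificate
)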




How to use Route53 SDK to determine if a domain is available

Using the Java AWS SDK, how can one determine if a subdomain is available? Let's say I have a hosted zone, xyz.com. Programmatically I would like to generate a custom URL, e.g. joe.xyz.com, but first determine whether 'joe' has already been taken.

I thought the Route 53 SDK would offer a straightforward way to see if a name is in use in a hosted zone, but all I've found is the CheckDNSAvailability method of the Beanstalk API.
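For illustration (in Python with boto3; the Java SDK's ListResourceRecordSets call works the same way), one way to check whether a record already exists in the hosted zone; the zone ID is a placeholder:

import boto3

route53 = boto3.client('route53')

def record_exists(zone_id, name, record_type='A'):
    # Record sets come back in name order starting at `name`, so one item is enough.
    resp = route53.list_resource_record_sets(
        HostedZoneId=zone_id,
        StartRecordName=name,
        StartRecordType=record_type,
        MaxItems='1',
    )
    sets = resp.get('ResourceRecordSets', [])
    return bool(sets) and sets[0]['Name'].rstrip('.') == name.rstrip('.')

print(record_exists('Z1ABCDEF123456', 'joe.xyz.com'))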




Vacancy Cloud systems / Devops engineer

I am currently searching for a Cloud Systems / DevOps engineer for my client in Amsterdam. In this role you will work on state-of-the-art solutions in an AWS cloud. My client is a dynamic company with a start-up culture.

If you are interested in this vacancy or know someone who is interested in this vacancy, contact Lefit consultants Ivo Niesen (i.niesen@lefit.nl, +31(0) 6-24 86 03 08) or Jasper Jansen (recruitment@lefit.nl, +31(0) 76 - 26 00 00 4).




Unable to Connect to an Amazon EC2 Instance

I'm using AWS (Amazon Web Services) and running commands from the Mac Terminal.

http://ift.tt/1ETUOoF

But I am getting this error :

Gateway Timeout: can't connect to remote host

I'm unable to access the site anymore because of this error. There is a security group applied to this instance, and port 22 for SSH is listed under this security group.




Amazon ElastiCache vs Ramfs in Linux

I am new to Amazon Web Services. I was reading about Amazon ElastiCache and wanted to clarify whether it is like (maybe more than) using a RAM filesystem in Linux, where we use a portion of system memory as a file system. The AWS documentation says ElastiCache is a web service. Is it like an EC2 instance with a few memory modules attached? I really want to understand how exactly it works.

Our company has decided to migrate our physical servers into the AWS cloud. We use the Apache web server and a MySQL database running on Linux. We provide a SaaS platform for e-mail marketing and event scheduling for our customers. There is usually high web traffic to our website during 9am-5pm on weekdays. I would like to understand, if we want to use the ElastiCache service, how it would be configured in AWS. We have planned two EC2 instances for our web servers and an RDS instance for the database.

Thanks.




Amazon S3 Chrome vs. Firefox Bucket POST enclosure-type multipart/form-data

I have a form that makes a POST request directly to an Amazon S3 bucket with a file attachment. In Firefox it works fine. In Chrome, I get a 412 error

<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>PreconditionFailed</Code>
    <Message>At least one of the pre-conditions you specified did not hold</Message>
    <Condition>Bucket POST must be of the enclosure-type multipart/form-data</Condition>
    <RequestId>randomcharacters123456</RequestId>
    <HostId>morerandomcharacters123456</HostId>

</Error>

Is there supposed to be a difference in the way these browsers handle and submit forms? The form has the attribute enctype=multipart/form-data set fine. The only real difference I saw in the POST requests was the request payload boundary.

Chrome shows something like ------WebKitFormBoundary27B45bsnJm6YBk66
Firefox shows something like -----------------------------185573058924207

But I assume these are trivial differences.

Do I have to cater enctypes to particular browsers? Can I force Chrome to handle the enctype properly?




Wednesday 29 April 2015

AWS Machine Learning Cannot fetch evaluation matrix (Confusion Matrix and RMSE)

I am trying to explore AWS Machine Learning and I got these errors on my binary classification and regression evaluation dashboards:

  • "Amazon ML cannot load the confusion matrix."
  • "Amazon ML cannot fetch the RMSE summary for this evaluation."

What is wrong with it? How can I fetch the confusion matrix and RMSE? Thanks.




About EC2 Instance Memory leak, JAVA vs EC2

Today I got an alert message because one of our EC2 instances exceeded the 70% limit of its memory usage.
Its server type is c3.large and it has 3.75 GB of memory.
The main application running on this instance is a Tomcat server, and the -Xmx2g option was given when it started.
At the point of failure, memory usage of the java process was 1.48 GB and there was no sign of a memory leak on our JVM monitoring system.


Let's look at the information for a normal instance first.

Result of landscape:

vdscm@:~$ landscape-sysinfo
  System load:          0.0
  Processes:            113
  Usage of /:           1.9% of 98.30GB
  Users logged in:      1
  Memory usage:         47%
  IP address for eth0:  XX.XX.XX.XX
  Swap usage:           0%

It uses 47% of memory.

Result of top (sorted by %MEM desc):

top - 04:06:30 up 155 days, 21:37,  1 user,  load average: 0.00, 0.01, 0.05
Tasks: 113 total,   1 running, 112 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.0 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   3854812 total,  3691880 used,   162932 free,   154504 buffers
KiB Swap:        0 total,        0 used,        0 free.  1702992 cached Mem

  PID USER    PR  NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
 3707 vdscm   20   0 3621092 1.204g 16140 S  0.0 32.7 123:55.73 java
 5197 syslog  20   0  256616  30612   892 S  0.0  0.8   1:33.79 rsyslogd
 8973 root    20   0 1065348   8204  2304 S  0.0  0.2   4:04.69 s3fs
22462 root    20   0  105628   4248  3268 S  0.0  0.1   0:00.01 sshd
22549 vdscm   20   0   21200   3660  1748 S  0.0  0.1   0:00.07 bash
 5276 root    20   0  172124   3144  2428 S  0.0  0.1   7:42.74 controller
  588 root    20   0   10224   2916   624 S  0.0  0.1   0:04.53 dhclient
    1 root    20   0   33496   2564  1212 S  0.0  0.1   0:15.51 init
  ...
    3 root    20   0       0      0     0 S  0.0  0.0   0:01.36 ksoftirqd/0
    5 root     0 -20       0      0     0 S  0.0  0.0   0:00.00 kworker/0:0H
    7 root    20   0       0      0     0 S  0.0  0.0  12:15.05 rcu_sched
    8 root    20   0       0      0     0 S  0.0  0.0  21:04.46 rcuos/0
    9 root    20   0       0      0     0 S  0.0  0.0  13:23.79 rcuos/1
   10 root    20   0       0      0     0 S  0.0  0.0   0:00.00 rcuos/2
   11 root    20   0       0      0     0 S  0.0  0.0   0:00.00 rcuos/3
The java process uses 32.7%, and the total memory usage of all processes is about 35%.
All processes use 35% and landscape says 47%, which is quite reasonable.
Next, let's look at the information for the problem instance.

Result of landscape:

vdscm@:~$ landscape-sysinfo
  System load:          0.0
  Processes:            117
  Usage of /:           1.7% of 98.30GB
  Users logged in:      1
  Memory usage:         72%
  IP address for eth0:  XX.XX.XX.XX
  Swap usage:           0%

It uses 72% of memory.

Result of top (sorted by %MEM desc):

top - 02:19:10 up 155 days, 19:11,  2 users,  load average: 0.02, 0.02, 0.05
Tasks: 117 total,   1 running, 116 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   3854812 total,  3689104 used,   165708 free,   155256 buffers
KiB Swap:        0 total,        0 used,        0 free.   727764 cached Mem

  PID USER    PR  NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
26472 vdscm   20   0 3618176 1.480g 14536 S  0.3 40.3 160:22.85 java
  983 syslog  20   0  260848  30372   452 S  0.0  0.8   1:41.61 rsyslogd
  367 root    20   0 1205188   6000  1616 S  0.0  0.2  93:47.09 s3fs
24356 root    20   0  105628   4248  3264 S  0.0  0.1   0:00.00 sshd
20759 root    20   0  105628   4228  3244 S  0.0  0.1   0:00.00 sshd
24412 vdscm   20   0   21232   3696  1748 S  0.0  0.1   0:00.04 bash
20857 vdscm   20   0   21224   3684  1748 S  0.0  0.1   0:00.07 bash
  590 root    20   0   10224   2908   620 S  0.0  0.1   0:04.45 dhclient
  ...
 1198 root    20   0   12788    152     0 S  0.0  0.0   0:00.00 getty
 2261 root    20   0   15252    152     0 S  0.0  0.0   2:09.86 nimbus
 1128 root    20   0    4368    148     0 S  0.0  0.0   0:00.00 acpid
    2 root    20   0       0      0     0 S  0.0  0.0   0:00.00 kthreadd
    3 root    20   0       0      0     0 S  0.0  0.0   0:35.27 ksoftirqd/0
    5 root     0 -20       0      0     0 S  0.0  0.0   0:00.00 kworker/0:0H
    7 root    20   0       0      0     0 S  0.0  0.0  36:53.03 rcu_sched
memory usage = used - (buffers + cached Mem)
2,806,084 kB (72.8%) = 3,689,104 - (155,256 + 727,764) ==> 72%
Memory usage does not include buffered and cached memory.
The java process uses 40.3%, rsyslogd uses 0.8%, and almost all other processes use 0.1% or less. The total is about 43%. That is only 8% more than the normal instance, but landscape says 72%. Where did the other 29% of memory go?
vdscm@:~$ cat /proc/meminfo
MemTotal:        3854812 kB
MemFree:          165828 kB
Buffers:          155256 kB
Cached:           727768 kB
SwapCached:            0 kB
Active:          2099668 kB
Inactive:         376808 kB
Active(anon):    1593532 kB
Inactive(anon):      304 kB
Active(file):     506136 kB
Inactive(file):   376504 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                76 kB
Writeback:             0 kB
AnonPages:       1593452 kB
Mapped:            22940 kB
Shmem:               384 kB
Slab:             282956 kB
SReclaimable:     164592 kB
SUnreclaim:       118364 kB
KernelStack:        1712 kB
PageTables:         6736 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1927404 kB
Committed_AS:    2718656 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        8288 kB
VmallocChunk:   34359713979 kB
HardwareCorrupted:     0 kB
AnonHugePages:   1546240 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       28672 kB
DirectMap2M:     3903488 kB
Slab only uses about 0.2 GB of memory.
We killed the java process and restarted it. That is the only thing we did. After that, the 72% memory usage went down to 30%. It seems that the java process had been occupying another 30% of system memory on top of the %MEM usage shown by the top command. How is that possible?
After restarting Java:

Result of landscape:

vdscm@:~$ landscape-sysinfo
  System load:          0.0
  Processes:            113
  Usage of /:           1.7% of 98.30GB
  Users logged in:      1
  Memory usage:         30%
  IP address for eth0:  XX.XX.XX.XX
  Swap usage:           0%

Result of top (sorted by %MEM desc):

top - 01:01:25 up 156 days, 17:53,  2 users,  load average: 0.00, 0.01, 0.05
Tasks: 116 total,   1 running, 115 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

  PID USER    PR  NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
31567 vdscm   20   0 3599936 999640 10872 S  0.0 25.9   3:07.55 java
  983 syslog  20   0  260848  30372   452 S  0.0  0.8   1:42.34 rsyslogd
  367 root    20   0 1204736   7420  1616 S  0.0  0.2  93:47.58 s3fs
31367 root    20   0  843872   4288  1124 S  0.0  0.1   8:07.27 controller
10820 root    20   0  105628   4244  3264 S  0.0  0.1   0:00.00 sshd
11280 root    20   0  105628   4224  3244 S  0.0  0.1   0:00.00 sshd
10875 vdscm   20   0   21220   3648  1748 S  0.0  0.1   0:00.11 bash
11367 vdscm   20   0   21220   3536  1668 S  0.0  0.1   0:00.03 bash
  590 root    20   0   10224   2908   620 S  0.0  0.1   0:04.48 dhclient
31453 root    20   0   12608   2476  2056 S  0.0  0.1  21:57.17 cdm
    1 root    20   0   33508   2040   696 S  0.0  0.1   0:15.01 init
10874 vdscm   20   0  105628   1888   904 S  0.0  0.0   0:00.09 sshd
11366 vdscm   20   0  105628   1720   752 S  0.0  0.0   0:00.00 sshd
11269 vdscm   20   0   23684   1620  1096 R  0.0  0.0   0:00.04 top
  996 root    20   0   43452   1184   808 S  0.0  0.0   0:00.85 systemd-logind
  ...



Amazon S3: How to get object size and lastmodified by key?

I have found only one way to get an object's size and last-modified date: the client.ListObjects() method. But it can return multiple files for a prefix. For example:

a.txt
a.txt.bak

What is the right way to retrieve an object's size by key?
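In boto3 terms (the other SDKs expose the same thing as a HEAD Object / GetObjectMetadata call), a sketch:

import boto3

s3 = boto3.client('s3')

# HEAD Object returns the metadata for exactly one key, without fetching the body.
head = s3.head_object(Bucket='my-bucket', Key='a.txt')
print(head['ContentLength'])   # size in bytes
print(head['LastModified'])    # last-modified datetime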




Using headObject to get x-amz-meta from S3 File

I have a file on S3 with some metadata, for example x-amz-meta-description="some description". This metadata was included when I uploaded the file to S3. If I use the Amazon console to check the metadata, it is there. Next, I added the following to the CORS configuration to have access to the headers:

<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>

From my web app, I'm trying to check the headers of my file. I'm using the following javascript code:

AWS.config.update({accessKeyId: 'XXX', secretAccessKey: 'YYY'})
var bucket = new AWS.S3({params: {Bucket: 'zzz'}});
var params = {Bucket: 'zzz',Key: 'content/myfile.doc'};
bucket.headObject(params, function (err, data) {
    if (err)
        console.log(err, err.stack);
    else
        console.log(data);
});

After running the code, data.Metadata is empty. Is there any other configuration needed to get the metadata associated with the file? What am I doing wrong?

Thanks for all

PS: I used the getObject function, but Metadata is still empty.




camel aws-sqs route stops consuming messages abruptly from aws queue

Hi, I am using Camel's out-of-the-box aws-sqs route to consume our messages from the Amazon SQS queue. The configuration we have is as below:

    <from
            uri="http://ift.tt/1GxQBlf}}" />
        <log message="Got from SQS Messaging Queue: ${body}" />

Here amazon.sqs.outbound.delay=2000 and maxMessagesPerPoll=10. What we have observed in our production environment is that consumption does start for some hours, but then the route stops consuming messages until we manually stop and start the SQS route. We also have a couple of routes downstream of this step which further process the consumed messages.

Have any of you faced such issues? If yes, please help. The load we receive on the queue averages around 1000+ messages a day.




Remote database connection in AWS database server

I am a new user of Amazon Web Services. I deployed a .NET application to Elastic Beanstalk, which created an RDS instance. I tried to log in using the RDS address and credentials in Microsoft SQL Server Management Studio, and it shows the following message:

Cannot connect to xxxxxxxxxxxxxxxxxxxx.us-west-2.rds.amazonaws.com.

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)

How can I access the Amazon database server from my SQL Server Management Studio?

Thanks all




Converting a JPG to WebP in AWS Lambda using GraphicsMagick/ImageMagick packages in Node.js

I am leveraging the AWS Lambda example script to resize a JPG image using Node.js and ImageMagick/GraphicsMagick libraries. I want to make a simple modification to convert the image from a JPG to a WebP format after the resize. (GraphicsMagick does not support WebP, but ImageMagick does, which is subclassed in the script). This should be possible with the following block of code, as per the example in the Buffers section here (which converts JPG to PNG).

gm('img.jpg')
.resize(100, 100)
.toBuffer('PNG',function (err, buffer) {
  if (err) return handle(err);
  console.log('done!');
})

When I run that block of code in my local Node.js installation (replacing PNG with WebP), it works.

When I modify the transform function (see below) of the AWS Lambda example script, however, and execute it on AWS, I receive the following error:

Unable to resize mybucket/104A0378.jpg and upload to mybucket_resized/resized-104A0378.jpg due to an error: Error: Stream yields empty buffer

Modified transform() function (see the line with 'webp'):

function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width  = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer('webp', function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        }

I realize that the response.ContentType is still equal to image/jpeg, but I don't think that is playing a role here. Also, I realize that I should probably convert to WebP before resizing but...baby steps!

Any ideas?




elasticbeanstalk checking the config file output

Many thanks in advance for your insight and guidance. I finally managed to set up my Python/Django project with Amazon's Elastic Beanstalk. Everything works except that the collectstatic command doesn't seem to be running, since I get 404s on all my static files. I have been checking logs and SSHing in to find out what is wrong, but no luck yet... This is my config file:

container_commands:
 00_echo:
  command: "echo Baaaaaaa!!!!"
 01_collectstatic:
  command: "source /opt/python/run/venv/bin/activate && python myapp/manage.py collectstatic --noinput"

option_settings:
 "aws:elasticbeanstalk:application:environment":
  DJANGO_SETTINGS_MODULE: "myapp.settings"
   "PYTHONPATH": "/opt/python/current/app/vitagina:$PYTHONPATH"
 "aws:elasticbeanstalk:container:python":
  WSGIPath: myapp/myapp/wsgi.py
  NumProcesses: 1
  NumThreads: 18
 "aws:elasticbeanstalk:container:python:staticfiles":
  "/static/": "www/static/"

Should I be able to see the "Baaaaa!" in the logs somewhere? How do I check to make sure my commands are actually running?

Many thanks




When we use S3 Glacier storage, are we using the Amazon Glacier service?

My doubt is the title of my question. I'm studying AWS and I don't understand whether, when we use the S3 Glacier storage class, we are using the Amazon Glacier service, or whether the Glacier storage class is just a property of Amazon S3. Do you know?




Push Spark Stdout on YARN on EMR into Job S3 Log Bucket

Similar questions about getting logs on YARN have been asked before, but I cannot seem to make this work the way I want on Amazon EMR in particular.

I have Spark jobs running on EMR, triggered via the Java AWS client. On earlier versions of Spark I used to run them directly on EMR, but with the later versions I switched to using YARN as the resource manager which should give some performance and cost benefits, as well as fit well with the EMR documentation.

The problem is of course the logs from my jobs disappear into YARN rather than being collected in the EMR console. In a perfect world, I want to pull these into the stdout or stderr files in the EMR web console step execution logs (currently stdout is empty and stderr contains just YARN noise).

The Amazon forums and the AWS EMR documentation on this page http://ift.tt/1CLt1jk would seem to suggest that the correct way is to just supply a log URI like I'm already doing, and everything will work (it doesn't).

I have tried the solution here: YARN log aggregation on AWS EMR - UnsupportedFileSystemException

I added the bootstrap action below, which actually works nicely to push logs to a remote S3 bucket.

ScriptBootstrapActionConfig scriptBootstrapAction = new ScriptBootstrapActionConfig()
    .withPath("s3://elasticmapreduce/bootstrap-actions/configure-hadoop")
    .withArgs("-y", "yarn.log-aggregation-enable=true",
            "-y", "yarn.log-aggregation.retain-seconds=-1",
            "-y", "yarn.log-aggregation.retain-check-interval-seconds=3600",
            "-y", "yarn.nodemanager.remote-app-log-dir=s3n://mybucket/logs");

The problem is that this creates a series of 'hadoop/application-xxxxxxxx/ip-xxxxxx' files and cannot place them in an area accessible by the AWS EMR web console logs, since that is a subdirectory based on the EMR job ID. Since bootstrap actions have to be supplied at job creation time, I don't know the job ID yet to pass it in the path.

Have I missed anything to get EMR logs working?




MongoDB Cloud Deployment - High TTFB

I am building a mobile only application in Node.js + MongoDB. I have deployed my server in the AWS AP-Southeast-1 Region.

Since I am new to MongoDB, I am leveraging cloud hosting services like MongoLab, Compose.IO, and MongoDirector (testing a few out). These cloud hosting platforms are deploying my database in either the AWS AP-Southeast-2 or the US-East-1 region, due to the unavailability of shared hosting in the AP-Southeast-1 region.

While testing my APIs, I saw an alarmingly high latency in the form of a TTFB (Time to First Byte) of ~1-1.5 seconds. Is this because the server and the database are hosted in different regions? Apart from this, my queries are taking relatively little time.

Awaiting a reply, as we're soon going to production.




Omnibus 7.10.0 Gitlab Redirect https to http

http://ift.tt/1xWoR9a --> AWS-ELB [ingress 443 --> egress 80] --> Omnibus GitLab

Now Omnibus redirects to the following and times out:

http://ift.tt/1DCLcYi

Any way to debug this issue?




ImageResizer and S3Reader2: The string was not recognized as a valid DateTime

I'm upgrading a website to a .NET MVC5 website using ImageResizer, with the images stored on AWS S3. The images stored on S3 are fine, have public read access, and load without a problem when calling the S3 URL.

When I use the ImageResizer plugin S3Reader2 I get the following error on most of my images: "The string was not recognized as a valid DateTime. There is an unknown word starting at index 26."

You can find the ImageResizer Diagnostics here: Diagnostics

You can find the stack trace here: Stacktrace

Any help would be highly appreciated!




How do I format a date field (@scheduledStartTime) in AWS?

I'm trying to grab the past hour based on the @scheduledStartTime field in AWS, but am having difficulty. The following works for grabbing just the current hour: hh=#{format(@scheduledStartTime,'hh')} which returns something along the lines of hh=06

I've tried this hh=(#{format(@scheduledStartTime,'hh')}-1) which returns hh=(06-1) when I actually want hh=05

How do I properly subtract the 1 from the field so that I get hh=05?




SSH to ec2 instance and execute

I have a Data Pipeline application that I need to respond to. When it finishes, I SSH to an EC2 instance and execute a script. What is the best way to do this? Use Lambda? It's not clear how I would store a .pem file to use for SSH.
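One way to script the "SSH in and run a script" part is with paramiko; a sketch, assuming the .pem file is available on whatever machine runs this (the host, key path, and script path are placeholders):

import paramiko

host = 'ec2-xx-xx-xx-xx.compute-1.amazonaws.com'  # placeholder

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(host, username='ec2-user', key_filename='/path/to/key.pem')

# Run the script and collect its output.
stdin, stdout, stderr = client.exec_command('bash /home/ec2-user/run_job.sh')
print(stdout.read().decode())
print(stderr.read().decode())
client.close()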




Hibernate Search on Amazon Beanstalk

I am trying to create a hibernate search project and deploy it on amazon beanstalk.

The project works fine locally, but I have the following doubts regarding Beanstalk:

  1. I am guessing Beanstalk might use multiple underlying EC2 instances. If this is correct, on which instance should I create the Lucene index directory?

  2. If it is not possible to use a local filesystem setup, is there a way I can use Amazon S3 as the index storage, which would be common to all instances?




Can I configure CloudFront so it splits traffic between two origins?

Can I configure CloudFront so it splits traffic between two origins, so that the incoming URL looks like it is all going to one server, but then, depending on the path, requests go to Elastic Beanstalk for some and S3 for others (dynamic versus static)?

Even if I can do this, is it a good idea, or should it be obvious from the original request URL that the static and dynamic pages are hosted in different locations?




With DEBUG=False django in EC2 in AWS, domain, stage.example.com is not reachable

I have set up a Django server to serve my stage environment at http://ift.tt/1J9wKf5. I entered a CNAME for 'stage' pointing to something like ec2-xxx-xxx-xxx.compute.amazonaws.com in my DNS setup. This worked fine until I flipped DEBUG = False in settings.

With DEBUG=False, stage.example.com is not reachable and I end up with a DNS lookup failure.

BTW, I added stage.example.com to ALLOWED_HOSTS. How do I make stage.example.com work under production mode in Django?
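For reference, the setting Django checks when DEBUG is False is ALLOWED_HOSTS (plural); a minimal sketch, with placeholder host names:

# settings.py
ALLOWED_HOSTS = [
    'stage.example.com',
    'ec2-xxx-xxx-xxx.compute.amazonaws.com',  # the instance's public DNS name
]

Note that an ALLOWED_HOSTS problem normally shows up as an HTTP 400 response rather than a DNS lookup failure, so a DNS failure usually points at the name resolution setup itself.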




Indexing notifications table in DynamoDB

I am going to implement a notification system, and I am trying to figure out a good way to store notifications within a database. I have a web application that uses a PostgreSQL database, but a relational database does not seem ideal for this use case; I want to support various types of notifications, each including different data, though a subset of the data is common for all types of notifications. Therefore I was thinking that a NoSQL database is probably better than trying to normalize a schema in a relational database, as this would be quite tricky.

My application is hosted in Amazon Web Services (AWS), and I have been looking a bit at DynamoDB for storing the notifications. This is because it is managed, so I do not have to deal with the operations of it. Ideally, I'd like to have used MongoDB, but I'd really prefer not having to deal with the operations of the database myself. I have been trying to come up with a way to do what I want in DynamoDB, but I have been struggling, and therefore I have a few questions.

Suppose that I want to store the following data for each notification:

  • An ID
  • User ID of the receiver of the notification
  • Notification type
  • Timestamp
  • Whether or not it has been read/seen
  • Meta data about the notification/event (no querying necessary for this)

Now, I would like to be able to query for the most recent X notifications for a given user. Also, in another query, I'd like to fetch the number of unread notifications for a particular user. I am trying to figure out a way that I can index my table to be able to do this efficiently.

I can rule out simply having a hash primary key, as I would not be doing lookups by simply a hash key. I don't know if a "hash and range primary key" would help me here, as I don't know which attribute to put as the range key. Could I have a unique notification ID as the hash key and the user ID as the range key? Would that allow me to do lookups only by the range key, i.e. without providing the hash key? Then perhaps a secondary index could help me to sort by the timestamp, if this is even possible.
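For what it's worth, a common layout for this access pattern is the reverse: the user ID as the hash key and the timestamp as the range key, so that "the most recent X notifications for a user" becomes a single Query. A boto3 sketch (table and attribute names are assumptions, not anything prescribed by DynamoDB):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')

# Assumed table: hash key = user_id (string), range key = created_at (number, epoch millis)
table = dynamodb.Table('notifications')

def latest_notifications(user_id, limit=20):
    # Newest first, limited to `limit` items; all attributes come back with the items.
    resp = table.query(
        KeyConditionExpression=Key('user_id').eq(user_id),
        ScanIndexForward=False,
        Limit=limit,
    )
    return resp['Items']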

I also looked at global secondary indexes, but the problem with these are that when querying the index, DynamoDB can only return attributes that are projected into the index - and since I would want all attributes to be returned, then I would effectively have to duplicate all of my data, which seems rather ridiculous.

How can I index my notifications table to support my use case? Is it even possible, or do you have any other recommendations?




SSH to a node in private subnet. Any other way except having a bastion? (for windows especially)

I created a VPC with private and public subnets on AWS. After trying quite a few times, the only way I could SSH onto a private machine was via a node in the public subnet (using ssh -A). Now, my doubt is: is there no other way of SSHing onto the private node? Isn't it accessible to the creators of the node?

I am unable to wrap my head around why even the people who created that node in the private subnet cannot log into it directly (unless I can and I just don't know how yet).

And if it's true that the only way to SSH into it is via the bastion node, then how do I RDP onto a Windows machine in the private subnet? Is the only way to have a Windows machine in the public subnet and use that to RDP onto the private one?

Thanks!




Issues Connecting to AWS through Vagrant

Hi I am new to Vagrant.

My objective is to connect to AWS without mentioning the credentials in the Vagrantfile. I have created a new file called credentials under my cloned repository with the required keys.

In the vagrant file I have given it as

config.vm.provision "aws-credentials", type: "file", source: "credentials", destination: "~/.aws/credentials"

When I try to access AWS using the command "vagrant up --provider=aws", I get an error saying:

  • An access key Id is required
  • A secret access key is required.

Why is the file not being read? Could someone help me?




Restrict s3_direct_upload to specific filetypes

I have a quick question. I'm using the s3_direct_upload gem to facilitate uploads to my s3 bucket. I'd like to restrict uploads to PDFs only. How can I go about this?

Thanks!




Fog issue using iam profile and fetching urls from aws

I am using Fog with AWS instance profiles, and after 3 days my S3 URLs are no longer working. I'm getting fresh URLs, but the error returned from AWS is "The provided token has expired." Restarting the application gets everything working again, and no errors other than the one from AWS are present.

I have read that switching to keys should fix my issue, but I was hoping to keep my IAM profile. Has anyone run into this?




Amazon EC2 - Quartz and Job not running at correct time

I have a java app deployed on an Amazon EC2 server. I use quartz for scheduling various jobs.

I tried scheduling a job to run at 9am and noticed it didn't execute until 10am. I then tried to execute a job at 9am GMT-5, which should have executed at 2pm GMT, but it actually executed at 3pm GMT.

On further analysis I noticed the time on my Amazon server was set to UTC and is currently an hour behind GMT.

I was just wondering: what part of my setup is not correct, since the jobs are not executing at the correct time?

Do I need to specify anything when setting the cron trigger? I am setting up the cron trigger in Quartz as follows, using CronScheduleBuilder:

CronExpression cronExpression = new CronExpression(cronValue);
TimeZone timeZone = TimeZone.getTimeZone("Etc/GMT-5");
cronExpression.setTimeZone(timeZone);

Trigger trigger = TriggerBuilder.newTrigger().withIdentity(triggerName).startNow()
        .withSchedule(CronScheduleBuilder.cronSchedule(cronExpression)).build();

JobDetail job = JobBuilder.newJob(MyCloudTasksServerTaskExecutor.class).withIdentity(taskId.toString())
        .storeDurably(true).build();

Any help is greatly appreciated




Estimate average monthly usage cost on EC2

I'm running a small app on EC2. I'm approaching the end of my free tier year. I'm interested in estimating my monthly costs to continue on with the service under the current workload. What's the best approach to this?




Cloudfront traffic cheaper than S3?

I was checking the pricing on the AWS page and noticed that for the us-east-1 region, outgoing traffic is $0.09/GB, and transferring to CloudFront is free. The pricing for delivering content from CloudFront to US/EU is $0.085/GB. Are there any other fees (apart from request fees) that I am missing, or is the transfer really cheaper?

http://ift.tt/GYAlz3

http://ift.tt/1cryXmO




Media Streaming Help AWS Cloudfront Etc

Would really appreciate some help with the following. I have been working with AWS for a while now to try and deliver a secure streaming experience, but I seem to hit a hurdle at every step. I am just going to list them all; some are really obvious.

Take the following:

  1. Progressive streaming, grabbing the public URLs via Amazon S3 and streaming. (Hurdle: the user can just right-click on most players and download, or open the web inspector, click Network, refresh the page and grab the URL.)

  2. Signing the URLs with one of the SDKs:

    $signedUrl = $client->getObjectUrl($bucket, 'video.mp4', '+10 minutes');

  (Hurdle: makes no real difference to the above; the user can still copy the URL from the web inspector Network tab and download.)

  3. Using CloudFront RTMP and Amazon S3. (Hurdle: works great for browser streaming relying on Flash, and there is no way to grab the URL through inspect element and download, but this needs a fallback for mobile, for which an MP4 would need to be provided; you can open the Safari web developer tools, set the user agent to iPhone, refresh and get the URL to download.)

  4. Using HLS streaming. (Hurdle: works great in the browser and on mobile, but I hear it is not supported on some Android devices. After playing around with this and creating a playlist, I need to make all my segments public, so someone could download each segment and then merge them together; I know I am being picky now.)

I know this is possible because other people manage it. I think the best way is to use the HLS method and sign the playlist .m3u8 file with a really short expiry, but if someone can access the .m3u8 they can paste it into a service like this http://ift.tt/1w7949B and then view the video.

I know there is DRM, which I am going to look into now. I just need to make my output files/segments encrypted so they cannot simply be grabbed from the web inspector; this should solve a lot of my problems, but I can't find any good step-by-step tutorials on how to do this.

Can anyone give advice based on their experiences?

Thanks




Can I use a S3 bucket object as a push notification (Polling object) without any issues?

Background : Due to quick development we have our servers in PHP and implementing services like Pusher and Socket.io is not an option.

So, we are planning on using AWS S3 bucket files and their data content, updating them and polling them to see whether there are new messages or not.

We would like to know how many requests per second an S3 file or an S3 bucket can handle.




What is the recommended AWS service for SAAS apps?

I'm looking at the various offerings from Amazon for managing a cloud-based app and, in short, am unsure whether it is best to attempt to accomplish what I need using simply EC2 & EBS, Beanstalk, OpsWorks, or even CloudFormation.

To elaborate, I want to offer customers of our web/Tomcat-based app a cloud-based trial version upon sign-up, running on a custom domain (via Route 53) with their data stored on an EBS volume. Certain directories on the file system will need to be created upon instance creation.

My intention is make appropriate calls using the Java AWS SDK upon successful sign up to provision a system for the user, who will then be notified of the URL to access their custom site.

I'm not looking at multi-scaling, as the number of concurrent users will never be high, but each customer's version should run in isolation.

So I'm looking at the Beanstalk API... but then saw the OpsWorks API, and then the CloudFormation templates.

If I were to use the Beanstalk API, I could launch an 'environment' for each customer, which means all customers would be part of the same Beanstalk 'application'. That means if I update the app version, everyone would receive the update, which sounds positive; however, it seems wrong doing an environment per customer, as typically environments are used for testing, production, etc.

So I'm back at the beginning, wondering what the typical deployment strategy would be for this type of AWS-based SaaS system.

For clarity, the app does not require RDS and would only require a single instance per customer.




Smack 4.1 failing in tcp handshaking with aws server

I am getting no response from the server while trying to connect to AWS, where we have deployed ejabberd.

W/System.err(13103): org.jivesoftware.smack.SmackException$NoResponseException: No response received within reply timeout. Timeout was 10000ms (~10s). Used filter: No filter used or filter was 'null'.
W/System.err(13103): at org.jivesoftware.smack.SynchronizationPoint.checkForResponse(SynchronizationPoint.java:192)
W/System.err(13103): at org.jivesoftware.smack.SynchronizationPoint.checkIfSuccessOrWait(SynchronizationPoint.java:114)
W/System.err(13103): at org.jivesoftware.smack.SynchronizationPoint.checkIfSuccessOrWaitOrThrow(SynchronizationPoint.java:97)
W/System.err(13103): at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:837)
W/System.err(13103): at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:360)
W/System.err(13103): at com.snapdeal.chat.service.XmppService$2.run(XmppService.java:228)




Cassandra on AWS

I'm new to AWS and also to Cassandra. I just read about the EBS and S3 storage available in AWS. I was trying to figure out: if we have Cassandra installed on EC2, which storage would it use, EBS or S3? Or is there other storage? I'm a little confused by this. Please help me understand.

Thanks Aravind




Amazon S3 drupal 7 module, bucket configuration issue

I am facing an issue while trying to set up the Amazon S3 module. When I try to save the default bucket name in "/admin/config/media/amazons3", I get this error:

"There was a problem using S3. The following exception was thrown: cURL resource: Resource id #95; cURL error: Failed connect to sayan100.s3.amazonaws.com:443; No error (cURL error code 7). See http://ift.tt/1mgwZgQ for an explanation of error codes"

I have created a new bucket in my amazon console, but still getting nothing.

Thanks in Advance:)




Upload an Object Using the AWS SDK for Java

Hi, I am using the following code from the AWS documentation:

http://ift.tt/1lRPa8S

And I have used the following jars (see the "Jars used" screenshot).

And I am getting the following error:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
    at com.amazonaws.AmazonWebServiceClient.(AmazonWebServiceClient.java:58)
    at UploadObject.main(UploadObject.java:17)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    ... 2 more

Any help would be appreciated. I have tried adding the commons-logging jars, but no luck.




Amazon SNS CreatePlatformApplication directly in iOs

I would like to use Amazon Web Services to send push notifications directly to a device.

Currently I have a PHP script that receives APNS (iOS) or GCM (Android) device tokens, but I have not yet found a way (no examples found on Google) to derive the ARN from the token with the AWS PHP SDK (it would be nice if someone published an example of this type ;)).

Is it possible to get the ARN directly from the iOS or Android application? If yes, can you post some sample code? Do you also think that is the best method, or is it better to use PHP?




How to configure EC2 autoscaling based on multiple limits on same metric?

My primary requirement is as follows:

When CPU consumption on an instance exceeds 50%, adjust the capacity of the Auto Scaling group to 5 instances; when CPU consumption exceeds 80%, adjust the capacity to 10 instances.

However, if I use CloudWatch alarms to set capacity, I can imagine the following race condition:

  • 5 instances exist
  • CPU consumption exceeds 80 %
  • Alarm is triggered
  • Capacity is changed to 10 instances
  • CPU consumption drops below 50 %
  • Eventually CPU consumption again exceeds 50% but now capacity will be changed to 5 instances (which is something I don't want to happen)

So what I would ideally like to happen is that, in response to alarm triggers, capacity is guaranteed to be at least the value corresponding to that threshold.

I am aware that this can be done by manually setting the capacity through the AWS SDK - which could be triggered in response to lifecycle events monitored by a supervisor - but is there a better approach, preferably one that does not require setting up additional supervisors or webhooks for alarms?
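
For what it's worth, the basic mechanics of tying each alarm to its own exact-capacity policy can be sketched as below (shown with boto3; the group name, policy names and thresholds are placeholders). This alone does not give the "at least" behaviour described above - the 50% policy can still shrink a group the 80% policy grew - so treat it only as a starting point; making the lower policy scale upward only, or keeping the SDK-driven approach, would still be needed:

import boto3  # assumes credentials and a default region are configured

autoscaling = boto3.client('autoscaling')
cloudwatch = boto3.client('cloudwatch')

def alarm_with_capacity(name, threshold, capacity, group='my-asg'):
    """Create an exact-capacity scaling policy plus a CPU alarm that triggers it."""
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=group,
        PolicyName=name,
        AdjustmentType='ExactCapacity',   # set the group to an absolute size
        ScalingAdjustment=capacity,
        Cooldown=300,
    )
    cloudwatch.put_metric_alarm(
        AlarmName=name,
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Statistic='Average',
        Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': group}],
        Period=300,
        EvaluationPeriods=1,
        Threshold=threshold,
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=[policy['PolicyARN']],   # fire the policy when the alarm goes off
    )

alarm_with_capacity('cpu-over-50', 50.0, 5)
alarm_with_capacity('cpu-over-80', 80.0, 10)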




mardi 28 avril 2015

EC2 instance creation and polling until state is 'running' using boto library

I want to create a new spot instance using boto library and run some commands on it using fabric. I'm using the below code

instance = reservations[0].instances[0]
status = instance.update()
tries = 40
i = 0
# Poll until the instance leaves the 'pending' state or we run out of tries.
while status == 'pending':
  log.info("Status of instance: [ " + job_instance_id + " ] is pending")
  time.sleep(10)
  status = instance.update()
  i += 1
  if i > tries:
    break
log.info("Status of instance [ " + job_instance_id + " ] is " + status)
if status == 'running':
  log.info("Adding tag")
  instance.add_tag("Name", "test_tag")
  public_dns_name = instance.public_dns_name
  log.info("Host Name : " + public_dns_name)
  init_instance(public_dns_name)

In init_instance I'm running some commands using Fabric. These commands sometimes fail with an "Unable to connect to host" error; other times everything works fine without any issues. Can you please let me know why it fails sometimes, and how I can handle this? I already keep polling until the state of the instance changes to 'running' and run the commands via ssh only after the instance state moves to 'running'.
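
A guess, not confirmed by the question: an instance can report 'running' before sshd is actually accepting connections, so Fabric connects too early. A minimal sketch of waiting for port 22 with only the standard library before calling init_instance:

import socket
import time

def wait_for_ssh(host, port=22, timeout=300, interval=10):
    """Return True once a TCP connection to host:port succeeds, or False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=5)
            sock.close()
            return True
        except (socket.error, socket.timeout):
            time.sleep(interval)
    return False

# After the instance reaches 'running':
# if wait_for_ssh(public_dns_name):
#     init_instance(public_dns_name)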




How to redirect http requests to https in amazon ec2 without any use of ELB Or proxy?

I have a simple Express (Node.js) application that listens on an 'https' server only, without any nginx proxy or ELB. I am able to get my site working over https, but my http requests are not being redirected.




cloning an amazon machine instance

I have two Amazon machine instances running. Both of them are m3.xlarge instances. One of them has the right software and configuration that I want to use. I want to create a snapshot of the EBS volume for that machine and use that as the EBS volume to boot the second machine from. Can I do that and expect it to work without shutting down the first machine?
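
A sketch of one way to do this with boto3 (instance ID and image name are placeholders). NoReboot=True keeps the source instance running, but the snapshot is then only crash-consistent - anything not yet flushed to disk at that moment won't be captured:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Create an AMI (which snapshots the attached EBS volumes) from the running instance.
image = ec2.create_image(
    InstanceId='i-0123456789abcdef0',   # placeholder
    Name='configured-machine-image',    # placeholder
    NoReboot=True,                      # do not stop/reboot the source machine
)
print(image['ImageId'])  # launch the second instance from this AMI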




install package on rstudio server ubuntu 12.04.1

I followed the instructions on randyzwitch's blog (http://ift.tt/1rmkVID) to install RStudio Server on an EC2 instance running Ubuntu 12.04.

When I start RStudio Server and try to install packages, this is what happens:

> install.packages("dplyr")
Warning in install.packages :
  package ‘dplyr’ is not available (for R version 2.14.1)
Installing package(s) into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
Warning in install.packages :
  'lib = "/usr/local/lib/R/site-library"' is not writable
Would you like to create a personal library
~/R/x86_64-pc-linux-gnu-library/2.14
to install packages into?  (y/n) y
Warning in install.packages :
  package ‘dplyr’ is not available (for R version 2.14.1)

I realized I need to update R so I checked out this post and updated it: http://ift.tt/1PVm3AQ

I am logged in as a sudo user. When I try to install a package now, I get this:

> install.packages("plyr")
Installing package into ‘/home/ubuntu/R/x86_64-pc-linux-gnu-library/3.2’
(as ‘lib’ is unspecified)
also installing the dependency ‘Rcpp’

trying URL 'http://ift.tt/1FwcAgk'
Content type 'application/x-gzip' length 2353791 bytes (2.2 MB)
==================================================
downloaded 2.2 MB

trying URL 'http://ift.tt/1PVm33L'
Content type 'application/x-gzip' length 392136 bytes (382 KB)
==================================================
downloaded 382 KB

Warning in install.packages :
  system call failed: Cannot allocate memory
Warning in install.packages :
  installation of package ‘Rcpp’ had non-zero exit status
Warning in install.packages :
  system call failed: Cannot allocate memory
Warning in install.packages :
  installation of package ‘plyr’ had non-zero exit status

The downloaded source packages are in
    ‘/tmp/Rtmp6Kgx5d/downloaded_packages’

I saw this post: lme4 package install failing on Ubuntu 12.04 and followed all the instructions, but it didn't solve the problem. Still the same result. I'm thoroughly frustrated with trying to run RStudio Server on AWS. Someone please help!




Is it possible to connect to an AWS server running mongo through mongoengine in Python with a .pem key?

I have been looking for good documentation but can't seem to find anything except using an SSL key to connect from mongo, not to a mongo instance running on a server. Would I have to connect to the EC2 instance from Python first and then to the mongo database running there? Much thanks.
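
One common pattern, sketched below under the assumption that the .pem file is the EC2 SSH key and mongod listens on localhost:27017 on the server (hostname, username and paths are placeholders): open an SSH tunnel with the key, then point mongoengine at the local end of the tunnel. The sshtunnel package is a third-party library, so check its current parameter names:

from sshtunnel import SSHTunnelForwarder  # third-party: pip install sshtunnel
import mongoengine

# Forward a local port through SSH to the mongod running on the EC2 host.
tunnel = SSHTunnelForwarder(
    ('ec2-xx-xx-xx-xx.compute-1.amazonaws.com', 22),   # placeholder hostname
    ssh_username='ubuntu',                             # placeholder user
    ssh_pkey='/path/to/key.pem',                       # the .pem key
    remote_bind_address=('127.0.0.1', 27017),
)
tunnel.start()

# mongoengine then connects to the local end of the tunnel as if mongod were local.
mongoengine.connect('mydatabase', host='127.0.0.1', port=tunnel.local_bind_port)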




AWS get all objects inside of folder in S3

I would like to get all the URLs of objects stored in a folder. I will only have one level of folders so I am not concerned with nested folders. I have read the PHP client API (http://ift.tt/1J7FIJK) for S3 but can't seem to find a way to accomplish this.

I found this code from StackOverflow to get the size of contents:

List<Bucket> buckets = s3.listBuckets();
long totalSize  = 0;
int  totalItems = 0;
for (Bucket bucket : buckets)
{
    ObjectListing objects = s3.listObjects(bucket.getName());
    do {
        for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
            totalSize += objectSummary.getSize();
            totalItems++;
        }
        objects = s3.listNextBatchOfObjects(objects);
    } while (objects.isTruncated());
    System.out.println("You have " + buckets.size() + " Amazon S3 bucket(s), " +
                    "containing " + totalItems + " objects with a total size of " + totalSize + " bytes.");
}

Which is close to what I want, except that I do not want all the items in a bucket; I want all the items in a certain folder of a bucket. My second question is how much I would need to spend to accomplish this, as there don't seem to be any GET/PUT commands being used?
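
A sketch of listing only the keys under one folder by passing the folder name as a Prefix, shown here with boto3 for brevity (the PHP and Java SDKs accept the same Prefix parameter); the bucket and folder names are placeholders. Each page of results is one LIST request, which S3 bills in the same class as PUT requests, i.e. fractions of a cent per thousand listings:

import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'      # placeholder
prefix = 'my-folder/'     # placeholder: an S3 "folder" is just a key prefix

urls = []
paginator = s3.get_paginator('list_objects')
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get('Contents', []):
        # Build the public-style URL for each object under the prefix.
        urls.append('https://%s.s3.amazonaws.com/%s' % (bucket, obj['Key']))

print(urls)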




Get the URL node value in Amazon advertising xml in Python

I query Amazon's product XML, looking for some books' covers to download. The string that I want to extract is located in a node which is a child of a parent node. How can I do the extraction?

Here is an example XML document (the XML markup was lost when posting, so only the element text survives): a request ID 269b7ade-516d-4516-8293-75dfa8221204, a processing time of 0.0207040000000000, the value True, the ISBN 9789004203655, the response groups Images/Books/All, the ASIN 9004203656, and then a series of image entries, each a URL placeholder ("A link to an image") followed by two dimensions such as 75 50, 160 107 and 500 333.
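
A minimal sketch of pulling every URL value out of such a response with the standard library, assuming the usual Product Advertising API layout in which each image element has URL, Height and Width children (the element names are an assumption here, since the tags did not survive in the post above):

import xml.etree.ElementTree as ET

xml_text = open('response.xml').read()   # the raw API response
root = ET.fromstring(xml_text)

# Amazon's responses are namespaced; recover the namespace from the root tag
# instead of hard-coding the URI.
ns = root.tag[root.tag.find('{') + 1:root.tag.find('}')] if root.tag.startswith('{') else ''
def q(tag):
    return '{%s}%s' % (ns, tag) if ns else tag

# Collect the text of every URL element, wherever it appears in the tree.
image_urls = [elem.text for elem in root.iter(q('URL'))]
print(image_urls)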




Trying to make an API request using PHP with AWS Route53

I need to make one API request to AWS Route53 to create a reusable delegation set. You can't do this through the console web interface, it has to be through the API.

Here is the documentation for making this API request: http://ift.tt/1DUMLB6

<?php
$baseurl = "http://ift.tt/1GG2CdU";
$body = '<?xml version="1.0" encoding="UTF-8"?>
    <CreateReusableDelegationSetRequest xmlns="http://ift.tt/1xMuWCf">
       <CallerReference>whitelabel DNS</CallerReference>
    </CreateReusableDelegationSetRequest>';

$ch = curl_init();              
 // Set query data here with the URL
curl_setopt($ch, CURLOPT_URL, $baseurl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST,           1 );
curl_setopt($ch, CURLOPT_POSTFIELDS,     $body ); 
curl_setopt($ch, CURLOPT_HTTPHEADER,     array('Host: route53.amazonaws.com','X-Amzn-Authorization: ')); 
curl_setopt($ch, CURLOPT_TIMEOUT, '3');
$rest = curl_exec($ch);

if ($rest === false)
{
// throw new Exception('Curl error: ' . curl_error($crl));
 print_r('Curl error: ' . curl_error($ch));
}
curl_close($ch);
print_r($rest);

?>

I know the request isn't signed/authenticated, but I'm not even able to connect to the server. I would at least like to get an error message that says I'm not authenticated before I continue. Instead all I get is "connection refused".

I'm sure I'm doing something completely wrong here. But Google has been of no use.




ERR_NAME_NOT_RESOLVED while trying to set up AWS to point to Heroku

When I head over to example.com in a web browser, I get ERR_NAME_NOT_RESOLVED.

$ host example.com
Host example.com not found: 3(NXDOMAIN)
$ host www.example.com
Host www.example.com not found: 3(NXDOMAIN)

Here's what I've done.

I've followed the instructions on Heroku's webpage. I'm trying to point example.com -> example.herokuapp.com.

Heroku seems to be set up properly.

$ heroku domains
=== example Domain Names
example.herokuapp.com
example.com
www.example.com

In Route 53, there's an A-record ALIAS pointing to s3-website-us-west-2.amazonaws.com., the S3 bucket. The S3 bucket, named example.com, has Properties > Static Website Hosting > Redirect all requests to another host name set to www.example.com. So the A record should redirect to www.example.com.

In Route 53, www.example.com has a CNAME -> example.herokuapp.com. So that should work, too.

The four nameservers listed as NS for example.com are:

ns-618.awsdns-13.net.
ns-1481.awsdns-57.org.
ns-1908.awsdns-46.co.uk.
ns-239.awsdns-29.com.

When I head over to Registered Domains, the nameservers there for the domain are the exact same.

At this point, I have no idea where to start troubleshooting. Have I missed something glaringly obvious? How can I try to figure out where the problem is?




AWS EBS ERROR: Source bundle is empty or exceeds maximum allowed size: 524288000

I am getting this error on the Terminal command line when I go to deploy my code to Amazon Web Services Elastic Beanstalk.

How do I fix this issue?




AWS: external image links, Paperclip works, assets/ doesn't

I've got an app that sends templates through to Mailchimp, and my images are hosted with AWS. I use Paperclip too. The problem is that images that I upload via Paperclip work fine like this:

<img src="<%= URI.join(site, document.photo.image(:email)) %>">

But images like this break:

<img src="<%= URI.join(site, 'user_footer548.jpg') %>">

I'm not sure how to find the correct link for the latter. The image is in assets, and site is defined like so:

def site
  Rails.env.staging? ? 'http://staging.app.com/' : 'http://app.com/'
end

Any help would be great. Thanks.




how to connect Django in EC2 to a Postgres database in RDS?

First time using AWS services with Django.

I was wondering how to configure a Django app running in an EC2 instance to use a Postgres database in RDS?

The EC2 instance is running Ubuntu 14.04.

Any special configuration required?
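
On the Django side there is nothing AWS-specific beyond pointing the database settings at the RDS endpoint (and making sure the RDS security group allows the EC2 instance on port 5432). A sketch of the relevant settings.py block, with placeholder names, credentials and endpoint:

# settings.py (sketch; all values are placeholders)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # requires psycopg2 to be installed
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        # The endpoint shown for the RDS instance in the AWS console
        'HOST': 'mydb-instance.abcdefghijkl.us-east-1.rds.amazonaws.com',
        'PORT': '5432',
    }
}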




AWS crc32 mismatch dynamo

I am trying to request some entries from AWS DynamoDB from an app in Android Studio. I am getting a CRC32 mismatch for a scan result. Does anyone know why this is happening? Attaching the snippet and stack trace below.

credentials = new CognitoCachingCredentialsProvider(
            MapValidate.getContext(), // Context
            "FILLED_MY_ID_HERE", // Identity Pool ID
            Regions.US_EAST_1 // Region
    );

AmazonDynamoDBClient dynamoDB = new AmazonDynamoDBClient(credentials);
Region usEast1 = Region.getRegion(Regions.US_EAST_1);
dynamoDB.setRegion(usEast1);

HashMap<String,Condition> scanFilter = new HashMap<String,Condition>();

Condition condition1lat = new Condition()
            .withComparisonOperator(ComparisonOperator.EQ.toString())
            .withAttributeValueList(new AttributeValue().withS(user_lat));
scanFilter.put("DegLat", condition1lat);

ScanRequest scanRequest = new ScanRequest()
            .withTableName("MY_TABLE_NAME")
            .withAttributesToGet("DegLat","DegLong","Latitude")
            .withScanFilter(scanFilter);

ScanResult result = dynamoDB.scan(scanRequest);

I am getting the following exception as below:

04-28 19:34:03.729    4744-4793/com.google.sample I/AmazonHttpClient﹕ 
Unable to execute HTTP request: 
Client calculated crc32 checksum didn't match that calculated by server side




email URL forwarding with office365/AWS

I have 2 domains names that I have acquired through namecheap. ABC.com and ILOVEMYABCS.com

I am using @ABC.com for my emails, but I want ABC.com (the webpage) to forward to ILOVEMYABCS.com just in case anyone ever types it into a web browser.

Any suggestions on how to do this? I have Office 365 and am currently setting up Amazon Web Services.




What is the Amazon API Signature

I am attempting to test the Amazon ItemSearch API, yet it keeps sending back the error:

<ItemSearchErrorResponse xmlns="http://ift.tt/1ldHUXV">
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
</Message>
</Error>
<RequestId>9a355851-98f3-42fc-8bc3-b795030b4510</RequestId>
</ItemSearchErrorResponse>

With the URL: http://ift.tt/1zb16xr(access key)&Operation=ItemSearch&Keywords=Rocket&SearchIndex=Toys&Timestamp=2015-04-28T23:20:38Z&Signature=[request signiture]

I have supplied the AWS Access Key from the security credentials page, yet I do not know where to find the proper Signature parameter. I had used the secret access key, but that did not work and simply threw the error seen above.

What signature are they referring to in order to get this request to work?
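
The Signature is not a value you look up anywhere: it is an HMAC-SHA256 that you compute over the request's sorted, percent-encoded query string using your secret access key, following the Product Advertising API request-signing rules. A rough sketch in Python (the parameter values and keys are placeholders):

import base64
import hashlib
import hmac
import urllib

# Placeholder request parameters; use your own access key and associate tag.
params = {
    'Service': 'AWSECommerceService',
    'AWSAccessKeyId': 'YOUR_ACCESS_KEY',
    'AssociateTag': 'yourtag-20',
    'Operation': 'ItemSearch',
    'Keywords': 'Rocket',
    'SearchIndex': 'Toys',
    'Timestamp': '2015-04-28T23:20:38Z',
}
secret_key = 'YOUR_SECRET_KEY'

# 1) Sort the parameters by name and percent-encode them into a canonical query string.
canonical = '&'.join(
    '%s=%s' % (k, urllib.quote(str(params[k]), safe='-_.~')) for k in sorted(params)
)
# 2) Build the string to sign: method, host, path, then the canonical query string.
string_to_sign = 'GET\nwebservices.amazon.com\n/onca/xml\n' + canonical
# 3) HMAC-SHA256 with the secret key, base64-encode, then URL-encode into the query.
signature = base64.b64encode(hmac.new(secret_key, string_to_sign, hashlib.sha256).digest())
signed_url = 'http://webservices.amazon.com/onca/xml?%s&Signature=%s' % (
    canonical, urllib.quote(signature, safe=''))
print(signed_url)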




Properly handling context.succeed()/context.fail() with AWS S3 service calls in AWS Lambda

I've already searched through posts here (i.e. How do you structure sequential AWS service calls within lambda given all the calls are asynchronous?) and elsewhere, and can't seem to find that one little bit of information that will help me get past this annoying issue. When you have a Lambda function that iterates through a loop, and within that loop makes a call to say s3.putObject(), it runs into a short-circuit issue when trying to properly deal with context.succeed()/context.fail() or the older context.done(null, 'msg') way of closing the Lambda process.

I.e., the iteration needs to call s3.putObject() with the current object to be uploaded, but still output to CloudWatch (or possibly SQS/SNS) the file that was successfully uploaded. However, all of my attempts at putting this type of closure into the function meet with random results: sometimes getting all the file names, other times only getting some of them, etc.

What is the best way to do it? I've attempted to use Q and async, but to be honest I'm still learning all of this stuff.

Below is a rough example of what i'm attempting to do:

function output(s3Object){
     s3.putObject(s3Object, function(err, data){
          if (err) {
               console.log('There was an issue with outputting the object.', err);
          } else {
               console.log('Successfully output object: ' + s3Object.Bucket + '/' + s3Object.Key);
               // how do you properly close this if you have x number of incoming calls??
               // context.done(null, 'success');
          }
     });
}


// and later in the code where it actually calls the output function
// and NOTE: it should output all of the file names that the invocation uploads!
for (var a = 0; a < myRecords.length; a++){
     output(myRecords[a]);
}

But, as I said previously, any attempts I've made so far, get mixed results.

Successfully output object: myBucket/prefix/part_000000123432345.dat
Successfully output object: myBucket/prefix/part_000000123432346.dat

But another test of the function outputs:

Successfully output object: myBucket/prefix/part_000000123432346.dat

Argh.




Mongodb replica set across AWS regions or subnets or data centers

I seem to have run into a blocking problem. I want a certain collection to be replicated across multiple AWS regions/subnets/data centers (e.g. a global lookup table or a global user credentials table), i.e. all data replicated across all regions, and updatable by the application layer of any region. In order to do that, do I need to provide public IP addresses and therefore put all my MongoDB EC2 nodes on a public network, so that replication can be done across the regions and, in the case of writes from the application, a primary node in a different region/subnet can be accessed by my application layer? I don't see how else replication will happen.

Thanks and Regards, Archanaa Panda




How to get the Cognito Identity id in AWS Lambda

How can I get the identity id of the user (logged in by AWS Cognito) that invoked an AWS Lambda function? Do I have to use the SDK on the Lambda function to get the identity id?
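
If the function is invoked with AWS credentials obtained through a Cognito identity pool (for example from the mobile SDKs), the identity is exposed on the invocation context, so no extra SDK call is needed. A sketch of a handler reading it, written for the Python runtime (treat the attribute names as something to verify against the Lambda context documentation):

def lambda_handler(event, context):
    # context.identity is populated when the caller invoked the function using
    # credentials issued by a Cognito identity pool (e.g. via the mobile SDKs).
    identity = context.identity
    return {
        'cognitoIdentityId': identity.cognito_identity_id if identity else None,
        'cognitoIdentityPoolId': identity.cognito_identity_pool_id if identity else None,
    }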




ListObjectsResponse is truncated but NextMarker is null

I'm following the example here: http://ift.tt/1pP87is

I'm getting an endless loop over the same 1000 keys here. Why is response.NextMarker null in the first iteration (and every iteration) of the while loop if response.Truncated = true?

var client = new AmazonS3Client("accessKey", "secretKey");

var request = new ListObjectsRequest().WithBucketName(bucket).WithPrefix(prefix);

do
{
    ListObjectsResponse response = client.ListObjects(request);

    foreach (S3Object entry in response.S3Objects)
    {
        Console.WriteLine("key = {0} size = {1}",
            entry.Key, entry.Size);
    }

    if (response.IsTruncated)
    {
        request.Marker = response.NextMarker;
    }
    else
    {
        request = null;
    }
} while (request != null);




Amazon SES with Codeigniter Email Library

I am trying to use the CI Email library to send emails using amazon ses. I am getting the following error:

220 email-smtp.amazonaws.com ESMTP SimpleEmailService-1040776345 KoByZloB2UXQfiVZcMG4
hello: 250-email-smtp.amazonaws.com 250-8BITMIME 250-SIZE 10485760 250-AUTH PLAIN LOGIN 250 Ok
starttls: 454 TLS not supported
The following SMTP error was encountered: 454 TLS not supported
hello: 250-email-smtp.amazonaws.com 250-8BITMIME 250-SIZE 10485760 250-AUTH PLAIN LOGIN 250 Ok
Failed to authenticate password. Error: 535 Authentication Credentials Invalid
from: 530 Authentication required
The following SMTP error was encountered: 530 Authentication required
to: 503 Error: need MAIL command
The following SMTP error was encountered: 503 Error: need MAIL command
to: 503 Error: need MAIL command
The following SMTP error was encountered: 503 Error: need MAIL command
data: 503 Error: need MAIL command
The following SMTP error was encountered: 503 Error: need MAIL command
500 Error: command not implemented
The following SMTP error was encountered: 500 Error: command not implemented
Unable to send email using PHP SMTP. Your server might not be configured to send mail using this method.

Any help will be appreciated.




Machine Learning - getting started

I have recently become interested in Machine Learning. I have heard of commercial offerings like Amazon ML and Azure ML. I have a custom dataset available as CSV or Excel files, on which I have some predictions to make.

  1. Which of the two commercial offerings is better?
  2. Is there any open-source software for ML? If so, which would be better?
  3. How do I use my dataset to make predictions?



WSO2 ESB - Configuring Amazon SQS as a Message Store

Is there a possibility to configure Amazon SQS as a Message Store in WSO2 ESB, perhaps using a Custom Message Store?

I don't simply want to configure an SQS Connector, but a fully integrated SQS Message Store that can be used by a Message Processor. If that's not possible, how can I consume my SQS store on the ESB? Would I have to write a Custom Message Processor which retrieves messages from the SQS queue using the SQS Connector?

Thanks in advance,




Ghost on AWS - how to customise my blog and change the instance url to my domain

I have just installed Ghost on Amazon Free tier following the recommended page.

I can see my personal instance of Ghost live no problem, by pasting the public DNS into a browser.

How do I customise the blog? Do I have to install an SSH client?

My personal website lives on a different host (GoDaddy).

So how do I change the current AWS instance url to a subdomain of my website?

I have looked at this page but it seems quite painful. Is there any other route?




How to store an uploaded file for later processing on Heroku?

I'm writing a PHP app that will process an uploaded file: a ZIP that contains a few CSVs, images, etc. The process requires user input if a warning/error arises while processing the file, and the same file should be available to be re-processed later. On a normal server, I'd use the filesystem to store the file, then keep the path in my DB.

However, in Heroku I can't do this. I'm using AWS S3 to store other file uploads. Should I store these ones too on S3 and each time I need them, download them to temp dir, process them, upload back and delete the local copy? Or is there a way to process the file while on AWS S3? Maybe mount the S3 bucket?
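
S3 can't run the processing in place, so the usual pattern is exactly the round trip described: download to a temp file, process, and upload the result back. A sketch of that flow, shown with boto3 for brevity (the AWS SDK for PHP has equivalent getObject/putObject calls); bucket and key names are placeholders:

import os
import tempfile

import boto3

s3 = boto3.client('s3')
bucket = 'my-uploads-bucket'        # placeholder
key = 'uploads/archive-123.zip'     # placeholder

# Download the stored upload to a temporary file on the dyno.
tmp_path = os.path.join(tempfile.mkdtemp(), 'archive.zip')
s3.download_file(bucket, key, tmp_path)

# ... unpack the ZIP, validate the CSVs, collect warnings/errors ...

# If processing produced a new artifact, put it back and clean up locally.
s3.upload_file(tmp_path, bucket, 'processed/archive-123.zip')
os.remove(tmp_path)

Mounting the bucket with a FUSE driver is sometimes done, but an explicit download/upload is usually simpler on Heroku's ephemeral filesystem.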




Amazon Elastic Transcoder: get target bucket

I uploaded a file to an Amazon bucket:

aws.uploadFile(sourcePath, bucketName, sourceStorageKey, callback);

Then I created a conversion job:

aws.createTranscodingJob(pipelineId, sourceStorageKey, resultStorageKey, callback);

The callback receives the job information as data object:

data:{
    Job:{
        Input: { AspectRatio ... },
        Output:{ Id, Key, PresetId ... }
    }
}

Unfortunately the Output object doesn't contain a target bucket to download the converted file from. How can I get the converted file when the transcoding job has finished?
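
In Elastic Transcoder the output bucket is a property of the pipeline rather than of the job, so one option is to look it up from the pipeline the job was submitted to. A sketch with boto3 (the pipeline ID is a placeholder; depending on how the pipeline was created the bucket is reported either as OutputBucket or under ContentConfig):

import boto3

transcoder = boto3.client('elastictranscoder', region_name='us-east-1')

pipeline = transcoder.read_pipeline(Id='1111111111111-abcde1')['Pipeline']  # placeholder ID

# The pipeline defines where outputs land; the job's Output.Key is the key inside it.
output_bucket = pipeline.get('OutputBucket') or pipeline.get('ContentConfig', {}).get('Bucket')
print(output_bucket)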




Connecting existing Amazon Database with Bluemix

I have more of a generic question here: is it possible to connect an existing Amazon database to the IBM Bluemix service using a PHP buildpack? I cannot really find anything specific about this. Could someone perhaps point me to the right resource on how to do this in the right manner?

Cheers




Reusing EBS snapshots on a different percona xtradb cluster node

I'm evaluating a Percona xtradb 5.6 cluster of 3 nodes in AWS environment. I'm using ec2-consistent-snapshot with --mysql to make an EBS snapshot of the data. However when a snapshot was made on node 1, and then node 2 is relaunched using that snapshot, the cluster would break.

Through trial-and-error I've found that this is caused by reusing auto.cnf and gvwstate.dat files in mysql datadir, which would contain ids of node 1, and the issues were (apparently) caused by another node trying to join with id of another node already in cluster. Removing the said files appears to have fixed the issue and now nodes go up and down as expected.

My question is: did I do the right thing? Do I need to remove auto.cnf and gvwstate.dat before using another server's datadir? Do I need to do anything else? What's the standard practice for this sort of thing?




Nodemailer not working on Amazon EC2 instance

I have an EC2 instance up and running, and I wanted to integrate Nodemailer into my application. Following http://ift.tt/1tGxIwu I was able to send email from my localhost. When I integrated the same code on the EC2 instance I get an Invalid login error. Sometimes Gmail blocks logins from other applications and sends a confirmation mail to the inbox, but I didn't get any such mail either. Do I need to enable some port on the EC2 instance, or can I not use Nodemailer at all on an EC2 instance? Please suggest.




How to get the list of folders under a URL using AWS rest API's?

We have an archive stored in Amazon S3. For one of our applications I need to get the list of folders under a given URL. Instead of parsing the output of a web query to the URL, is there a way to get the list of folder names in JSON or XML format using the AWS REST APIs?

If it's not possible with the REST APIs, is there any better alternative to parsing the web request's return?
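
The S3 LIST API (and the SDKs built on it) will return the "folders" directly if you ask for a Delimiter: the CommonPrefixes in the response are exactly the folder names under the given prefix, and the raw REST call (GET on the bucket with prefix and delimiter query parameters) returns them as XML. A sketch with boto3; the bucket and prefix are placeholders:

import boto3

s3 = boto3.client('s3')

response = s3.list_objects(
    Bucket='my-archive-bucket',   # placeholder
    Prefix='some/path/',          # placeholder: the path part of the URL inside the bucket
    Delimiter='/',                # group keys by the next '/' into CommonPrefixes
)

# Each CommonPrefix entry is one sub-folder under the prefix.
folders = [p['Prefix'] for p in response.get('CommonPrefixes', [])]
print(folders)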




Getting AWSiOSSDKv2 [Error] Frequently

Could anyone tell me what exactly the reason is for getting the following error?

AWSiOSSDKv2 [Error] AWSURLSessionManager.m line:254 | __41-[AWSURLSessionManager taskWithDelegate:]_block_invoke208 | Invalid AWSURLSessionTaskType.




Setting up a functional Node/Angular/Postgres project on AWS

First, please forgive my naive or ignorant sounding questions. The truth is that, while I spent plenty of time coding other people's projects, I have never done any configuration like this before.

A partner and I are building a Postgres/Node/Angular app (although I realize that Mongo is the natural choice, he is much more comfortable in Postgres and we decided to use that to make sure we get the DB right) on AWS (Amazon Web Services). At this point this is what we have:

1) An AWS instance with Node and Postgres installed.
2) An Angular application that is ~25% complete.

At this point the Angular app is not calling server code, instead we have hardcoded in JSON to simulate the server responses. The plan is to write a Node API that is totally decoupled from the Angular front-end.

It is time to merge it all together and we have run into some issues involving the configuration:

1) What tool can we use to debug the Node code (until now we have been using the Chrome developer tools to debug the Angular JS)?
2) How do we set up the project to allow for version control of all the code (are there any standard tools)?
3) If a developer has checked out a version and is working on it on his local machine, how can he access the database (we have no firewall and until now we were only planning on opening up the secure ports 443, 115, 22, etc.)? I mean, how can the server code that is running on his machine get access to the data?

The point is that I am looking for advice regarding a standard functional set up. I have never set up a project like this and I am kind of lost on where to begin.

Thanks in advance




Domain=kCFErrorDomainCFNetwork Code=303 What does this error code means?

Getting the below error frequently from server.

Error Domain=kCFErrorDomainCFNetwork Code=303 "The operation couldn’t be completed. (kCFErrorDomainCFNetwork error 303.)" UserInfo=0x18a41d20 {NSErrorFailingURLKey=http://ift.tt/1GDcEMW, NSErrorFailingURLStringKey=http://ift.tt/1GDcEMW}

Can anyone tell me what the cause of this issue is?

Regards, Chandrika




How to encode images to be used in an JSON array?

I want to do the following:

  1. Connect to Amazon S3 and get an image (.jpg) or sound (.m4a).
  2. Then I want to put them in an array of objects.
  3. Send them to a client

The file exists in S3 and can be reached with a browser.

Step one is already done with the following:

try{
    $result = $client->getObject(array(
        'Bucket' => $bucket,
        'Key'    => $filename
    ));
} catch (Exception $e) { //ERROR
    echo($e->getMessage());// FOR THE ERROR
}

I use the $result['Body'] to get the image.

Note: the server is an EC2 instance, so authentication is already handled with a role from IAM.

Step two:

image_array=[image1,sound1,sound2,image2];

echo json_encode(image_array);

This step is giving me an empty array. I understand that it is empty because of the encoding: the images are binary data objects and will not work well with JSON. But what is the right way? Should I do something like

image_array=[json_encode(image1),etc.];

Or should I do something like this

image_array=[utf8_encode(image1),etc];

Question: How am I supposed to return the image in the JSON so that it won't break and is readable?

Note: I give info back to the client with something similar to this:

total_array=[ [image1,property1,property2],
              [image2,property1,property2],
              [image3,property1,property2],
];
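
JSON can only carry text, so the usual approach is to base64-encode the binary body before putting it in the array and to decode it again on the client. A sketch of the idea in Python (the PHP equivalent would be base64_encode() on $result['Body']); the file names are placeholders standing in for the bytes fetched from S3:

import base64
import json

# Placeholder binary payloads standing in for the S3 object bodies.
image1 = open('cover.jpg', 'rb').read()
sound1 = open('clip.m4a', 'rb').read()

payload = [
    {'type': 'image', 'data': base64.b64encode(image1).decode('ascii')},
    {'type': 'sound', 'data': base64.b64encode(sound1).decode('ascii')},
]

# Safe to serialise now: every value is plain text.
body = json.dumps(payload)

# The client reverses the step: base64-decode each 'data' field to get the original bytes.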




Access SQL Server 2012 thesaurus files on Amazon RDS

Full-text search is enabled, but I can't find info about editing thesaurus files. The current instance is db.t2.micro.




OpsWorks Node Js Server not working as expected

I have a Node.js server in OpsWorks, and my code is deployed from S3. When I update the code and upload the new zip folder to S3, I redeploy the Node.js app on the server, but after running it the app still shows the result of the previous deployment rather than behaving as expected per the updated code.

Kindly provide a solution to resolve this issue; it's urgent.

thanks.




Conversion of datetime in python

Trying to use datetime to get the age of an EC2 instance by comparing launch_time to the current time. Everything works fine using the format below:

datetime.datetime.strptime(instance.launch_time, "%Y-%m-%dT%H:%M:%S.%fZ")

Unfortunately I've got one with 0 microseconds, so I'm getting an error about the format not matching (time data '2015-03-16T03:21:05Z' does not match format '%Y-%m-%dT%H:%M:%S.%fZ'):

2015-03-16T03:02:12.910Z
2015-03-16T03:21:05Z  - this one is problematic
2015-03-25T09:19:34.018Z

Any idea how to get around this? It looks like datetime is the easiest way to get this sorted, but if there is a quicker way of doing it, I'm happy to see other options. FYI, the comparison only has to be accurate to the hour, so I don't care about seconds ;)

Thanks, Andre
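
A small sketch of one way around it: try the fractional-seconds format first and fall back to the plain one (the sample launch times are the ones from the question):

import datetime

def parse_launch_time(value):
    """Parse an EC2 launch_time string with or without fractional seconds."""
    for fmt in ("%Y-%m-%dT%H:%M:%S.%fZ", "%Y-%m-%dT%H:%M:%SZ"):
        try:
            return datetime.datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError("Unrecognised launch_time format: %r" % value)

for sample in ("2015-03-16T03:02:12.910Z", "2015-03-16T03:21:05Z"):
    age = datetime.datetime.utcnow() - parse_launch_time(sample)
    print("%s -> age %s" % (sample, age))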




Dynamo DB Concepts

I know some of the questions I am going to ask will sound silly, but I am new to DynamoDB and I have a lot of confusion about it.

My questions are :

  1. After going through the concept of hash and range keys in this post, What is Hash and Range Primary Key?, I am wondering: is it possible to create a range key which is not part of the primary key? Suppose I want to define a table Orders {**Id**, Date, Name, ...} with Id as the hash key and Date as a range key, where Date is not part of the primary key.

  2. Is it possible to query a table whose primary key is a hash-and-range key using only the hash key or only the range key? For example, in a table Orders {**ID, Date**, Address, Quantity, ...} where I have defined the primary key as hash and range with ID as the hash key and Date as the range key, can we query the table using only ID or only Date, but not both? (See the sketch after this list.)

  3. What is the concept of projected attributes when creating a Local Secondary Index and a Global Secondary Index?
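
Regarding question 2, a hash-and-range table can be queried with just the hash key (the range key condition is optional and only narrows the result); querying by the range key alone is not possible without an index. A sketch with boto3; the table and attribute names are placeholders matching the example above:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('Orders')   # placeholder table name

# Query using only the hash key: returns every item with that ID,
# across all range-key (Date) values.
response = table.query(KeyConditionExpression=Key('ID').eq('order-123'))
for item in response['Items']:
    print(item)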




lundi 27 avril 2015

AWS Load Balancer upload private key from JKS file

I have an SSL cert issued from a CSR that was generated with the Java keytool.

I currently can't upload it to AWS Load Balancer Listeners.

I'm getting the error:

"Failed to create SSL Certificate: rapidssl. Private key was in an unrecognized format."

What do I put in as the private key? Should I convert my .jks or .crt file to an OpenSSL PEM file? How?




Does AWS S3 automatically abort multipart uploads after a timeout?

Using multipart uploads, Amazon S3 retains all the parts until the upload is either completed or aborted. In an anonymous drop situation, it would be good for abandoned uploads to be automatically aborted after a timeout to reclaim the space and avoid the cost of holding any parts that made it.

It would be possible to create some external monitor using ListMultipartUploads, but it would be better if S3 did it automatically.

If you initiate an upload and maybe upload some parts, but then do nothing further, will S3 eventually abort it for the bucket owner?




Unable to send data from rails app in aws to android app

I have built a Rails app which interacts with an Android application. Sending and receiving data from Android on my local system works flawlessly. But when I deployed to AWS EC2, I'm only able to send data from Android to the server, not the other way around. Please help me; I'm stuck with this.

I have used passenger gem to upload in aws.

My Rails code that accepts the request from the app and returns 200 OK:

class Android::OnetimeloginController < ApplicationController
  respond_to :json
# This handles one time login from the android app

  def create

    # Takes up the credentials from the android app and
    # sends the header token to be used for further pings
        credentials = permitted_credentials

    # Checks the credentials and renders the responses 
      if credentials.has_key?("name")
        if Branch.exists?(name: credentials["name"]) && credentials["password"] == "password"

          response_json =  {"response" => "Yes"}
            render :json => response_json
        else
          render :text => "NO"
        end

      end

  end

  private

  def permitted_credentials
    params.require("credentials").permit(:name, :password, :tablet_id)
  end
end

But I'm not getting anything on the Android side (although I do get a response on localhost).

I even got a response to my curl request. My curl command:

 curl -v -H "Content-type: application/json" -X POST -d ' {"credentials":{"name" : "banglore", "password": "password"}}'  http://IP/path_to_controller

I tried setting the "Content-type" and "Accept" headers on the Android request, but no use. Please tell me where I am going wrong. Is it an AWS problem?

Thanks so much




Cannot launch Android emulator on ec2 instance

Hi, I'm trying to launch an Android emulator on an Amazon EC2 instance (Ubuntu).

Steps I had followed:

1) Installed ubuntu-desktop on ec2 instance.

2) Installed and configured vncserver on the EC2 instance.

3) I am able to view the remote desktop of my instance in my local machine's browser using RealVNC.

But when I tried to launch emulator from command line like

./emulator -avd myemu

it throws the following error

libGL error: failed to load driver: swrast

I guess the problem is with some graphics drivers.

How can I solve this?




Flask routes returning 500 errors

So I'm attempting to deploy my Python Flask app to an Ubuntu AWS EC2 instance. I've set up mod_wsgi, configured the virtual host, set up a virtualenv, and created aliases to serve my static files. For some reason I can't get my custom URLs for my API routes to return the correct information. I've tried everything and searched everywhere; this is my last option.

#!/usr/bin/env python
import threading
import subprocess
import uuid
import json
from celery import Celery
from celery.task import Task
from celery.decorators import task
from celery.result import AsyncResult
from scripts.runTable import runTable
from scripts.getCities import getCities
from scripts.pullScript import createOperation
from flask import Flask, render_template, make_response, url_for, abort, jsonify, request, send_from_directory, Response
app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'amqp://guest:guest@localhost:5672//'
app.config['CELERY_RESULT_BACKEND'] = 'amqp'
app.config['CELERY_TASK_RESULT_EXPIRES'] = 18000
app.config['CELERY_ACCEPT_CONTENT'] = ['json']
app.config['CELERY_TASK_SERIALIZER'] ='json'
app.config['CELERY_RESULT_SERIALIZER'] = 'json'

operation = createOperation()
cities = getCities()
table = runTable()
value = ''
state = ''

celery = Celery(app.name, backend=app.config['CELERY_RESULT_BACKEND'], broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)

@task(bind=True)
def pull_async_data(self, data):
    global state
    state = pull_async_data.request.id
    operation.runSequence(data)


@app.route('/api/v1/getMapInfo', methods=['GET'])
def map():
    mp = operation.getMapData()
    resp = Response(mp, status=200, mimetype="application/json")
    return resp


@app.route('/api/v1/getTable', methods=['GET'])
def tables():
    tb = table.getTableInfo()
    resp = Response(tb, status=200, mimetype="application/json")
    return resp


##Get states from the DB
@app.route('/api/v1/getStates', methods=['GET'])
def states():
    st = cities.getStatesFromDB()
    resp = Response(st, status=200, mimetype="application/json")
    return resp


@app.route('/api/v1/getCities', methods=['POST'])
def city():
    data = request.get_json()
    # print data
    ct = cities.getCitiesFromDB(data)
    resp = Response(ct, status=200, mimetype="application/json")
    return resp


@app.route('/api/v1/getQueue', methods=['GET'])
def queue():
    queue = operation.getCurrentQueue()
    resp = Response(queue, status=200, mimetype="application/json")
    return resp



##Checking the status of api progress
@app.route('/api/v1/checkStatus', methods=['GET'])
def status():
    res = pull_async_data.AsyncResult(state).state
    js = json.dumps({'State': res})
    resp = Response(js, status=200, mimetype="application/json")
    return resp


##Perform the pull and start the script
@app.route('/api/v1/pull', methods=['POST'])
def generate():
    global value
    value = json.dumps(request.get_json())
    count = operation.getCurrentQueue(value)
    pull_async_data.apply_async(args=(value, ))
    js = json.dumps({"Operation": "Started", "totalQueue": count})
    resp = Response(js, status=200, mimetype="application/json")
    return resp


##Check main app
if __name__ == "__main__":
    app.run(debug=True)

Here is the WSGI file oakapp.wsgi

#!/usr/bin/python
import sys

activate_this = '/var/www/oakapp/venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

sys.path.append('/var/www/oakapp')

print sys.path.insert(0, '/var/www/oakapp/scripts')

from app import app as application

Here is the virtualhost environment

    <VirtualHost *:80>
            ServerName oakapp.com

            DocumentRoot /var/www/oakapp

            Alias /js /var/www/oakapp/js
            Alias /css /var/www/oakapp/css

            WSGIDaemonProcess oakapp user=apps group=ubuntu threads=5
            WSGIScriptAlias / /var/www/oakapp/oakapp.wsgi


            <Directory /var/www/oakapp/>
                    WSGIProcessGroup oakapp
                    WSGIApplicationGroup %{GLOBAL}
                    WSGIScriptReloading On
                    Order allow,deny
                    Allow from all
            </Directory>

            ErrorLog /var/www/oakapp/logs/oakapp_error.log
            LogLevel info
            CustomLog /var/www/oakapp/logs/oakapp_access.log combined
    </VirtualHost>

Here is my access log, it's creating the wsgi instance, so I have to be doing something right.

[Tue Apr 28 04:30:03.705360 2015] [:info] [pid 2611:tid 140512828155776] mod_wsgi (pid=2611): Attach interpreter ''.
[Tue Apr 28 04:30:11.704865 2015] [:info] [pid 2611:tid 140512695293696] [remote 72.219.180.235:10929] mod_wsgi (pid=2611, process='oakapp', application=''): Loading WSGI script '/var/www/oakapp/oakapp.wsgi'.
[Tue Apr 28 04:30:11.705804 2015] [:error] [pid 2611:tid 140512695293696] None

Here is a netstat -plunt output

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1062/sshd       
    tcp        0      0 127.0.0.1:3031          0.0.0.0:*               LISTEN      29277/uwsgi     
    tcp        0      0 0.0.0.0:46035           0.0.0.0:*               LISTEN      24222/beam      
    tcp6       0      0 :::22                   :::*                    LISTEN      1062/sshd       
    tcp6       0      0 :::5672                 :::*                    LISTEN      24222/beam      
    tcp6       0      0 :::80                   :::*                    LISTEN      2608/apache2    
    tcp6       0      0 :::4369                 :::*                    LISTEN      24197/epmd      
    udp        0      0 0.0.0.0:17372           0.0.0.0:*                           568/dhclient    
    udp        0      0 0.0.0.0:68              0.0.0.0:*                           568/dhclient    
    udp6       0      0 :::28264                :::*                                568/dhclient 

Here is the directory structure

    ├── app.py
    ├── app.pyc
    ├── css
    │   ├── fonts
    │   │   ├── untitled-font-1.eot
    │   │   ├── untitled-font-1.svg
    │   │   ├── untitled-font-1.ttf
    │   │   └── untitled-font-1.woff
    │   ├── leaflet.css
    │   └── master.css
    ├── js
    │   ├── images
    │   │   ├── layers-2x.png
    │   │   ├── layers.png
    │   │   ├── marker-icon-2x.png
    │   │   ├── marker-icon.png
    │   │   └── marker-shadow.png
    │   ├── leaflet.js
    │   └── main.js
    ├── json
    │   └── states.json
    ├── logs
    │   ├── oakapp_access.log
    │   └── oakapp_error.log
    ├── oakapp.wsgi
    ├── sass
    │   └── master.scss
    ├── scripts
    │   ├── database
    │   │   ├── cities_extended.sql
    │   │   ├── oak.db
    │   │   └── states.sql
    │   ├── getCities.py
    │   ├── getCities.pyc
    │   ├── __init__.py
    │   ├── __init__.pyc
    │   ├── pullScript.py
    │   ├── pullScript.pyc
    │   ├── runTable.py
    │   └── runTable.pyc
    ├── templates
        └── index.html

Any help is appreciated.

Here is a curl request run from my personal machine to the host machine

    djove:.ssh djowinz$ curl http://ift.tt/1A92wUr
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
    <title>500 Internal Server Error</title>
    <h1>Internal Server Error</h1>
    <p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>

Here is the curl request run from the host machine to itself, using localhost as requested.

    root@ip-172-31-24-66:/var/www/oakapp# curl http://localhost/api/v1/getTable
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
    <title>500 Internal Server Error</title>
    <h1>Internal Server Error</h1>
    <p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>
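
One way to see what is actually failing behind the generic 500 page (a sketch; it only assumes the Flask object is the app created in app.py) is to send the application's exceptions to stderr so they land in the mod_wsgi/Apache error log configured above:

# Near the top of app.py, after app = Flask(__name__)
import logging
import sys

# Route Flask's application errors to stderr, which mod_wsgi writes to
# /var/www/oakapp/logs/oakapp_error.log, so the real traceback becomes visible.
handler = logging.StreamHandler(sys.stderr)
handler.setLevel(logging.INFO)
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)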