Wednesday, September 30, 2015

Which AWS EC2 instance type is best suited for audio streaming?

I'm in the testing stage of launching an online radio station. I'm using an AWS CloudFormation stack with Adobe Media Server.

My existing instance type is m1.large and my Flash Media Live Encoder is streaming MP3 at 128 kbps, which I think is pretty normal, but it's producing a stream that isn't smooth or stable at all and seems to have a lot of breaks.

Should I pick an instance type with higher specs? I'm running my test directly off the LiveHLSManifest link, which opens in Safari on my iPhone and plays in the browser's built-in player, which doesn't set any buffering on the client side. Could this be the issue?




How to find a file in Amazon S3 bucket without knowing the containing folder

My Amazon S3 bucket has a folder structure that looks like the below.

  • bucket-name\00001\file1.txt
  • bucket-name\00001\file2.jpg
  • bucket-name\00002\file3.doc
  • bucket-name\00001\file4.ppt

If I only know the file name file3.doc and the bucket name bucket-name, how can I search for file3.doc in bucket-name? If I knew it was in folder 00002, I could simply go to that folder and start typing the file name, but I have no way of knowing which folder the file I am searching for is in.
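
For reference, the closest I've found so far is listing the whole bucket and filtering client-side, roughly like this with the AWS CLI (a sketch; is there a way to do this server-side?):

aws s3 ls s3://bucket-name --recursive | grep file3.doc

or, with the lower-level API and a JMESPath filter:

aws s3api list-objects --bucket bucket-name --query "Contents[?contains(Key, 'file3.doc')].Key"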




Using AWS S3 to Process Form Uploads on My Site

I know we are normally supposed to post code samples, but this is more of a question to clarify my understanding (and I am not sure where on SO it belongs).

I am a complete AWS/S3 newbie. (Read: < 5 hours experience)

After banging my head over configuring uploads from my site to my S3 bucket (and cursing the overabundance of AWS literature that leads people on a wild goose chase), I was successful. Great. But it's left me wondering whether I missed something. Here's the scenario I was trying to create:

A user visits the 'upload' area of my site. There they select a file to upload and enter in pertinent details about the resource (for argument's sake: Resource Title, Resource Type, Resource Size, etc.) They submit the form and the upload shoots off to S3 while the extraneous data is stored in my DB.

At least that was the intention when I started.

Now that I have S3 working, I've run into the following snags:

  • A user can only upload one file at a time to the service (I think)
  • The form can only process the upload data itself and nothing extraneous that the user might enter into the form (I think)

Have I missed something here? Can users only upload one resource at a time and can the form only handle parameter information related to the upload?

I put the S3 form into an iframe embedded into the larger form so that I am not redirecting the user to the 'success_action_redirect' page prematurely, but that doesn't fix my issue of wanting to send form data along with the upload in one fell swoop.

Since there are a billion other AWS services, maybe I've misunderstood and S3 isn't my best choice for a file upload/download site?

Thanks in advance for your insight!




AWS & PHP - Tokenising URLs

I'm trying to protect my URLs with a token system. I followed this tutorial https://www.youtube.com/watch?v=8s2vvKqybms but I don't know why my URLs don't change; I get the original URL instead of the signed URL (so access is denied).

My start.php:

<?php

use Aws\S3\S3Client;
use Aws\Credentials\CredentialProvider;

require 'vendor/autoload.php';

$config = require('config.php');

// S3

$s3 = S3Client::factory([
    'key' => $config['s3']['key'],
    'secret' => $config['s3']['secret'],
    'region'  => 'us-west-2',
    'version' => 'latest',
    'credentials' => CredentialProvider::ini('default', '/home/-/.aws/creditentials')
]);

Config.php :

<?php

return [
    's3' => [
        'key' => '-',
        'secret' => '-',
        'bucket' => '-',
    ]
];

And my token.php:

<?php


require 'app/start.php';

$object = 'uploads/movieinfo.tbz2';

$url = $s3->getObjectUrl($config['s3']['bucket'], $object, '5 minutes');

?>

<!DOCTYPE html>
<html>
<head>
    <title>Token</title>
</head>
<body>
    <a href="<?php echo $url; ?>">Download</a>
</body>
</html>
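
For comparison, this is the kind of presigned-URL code I expected to end up with if getObjectUrl isn't the right call in SDK v3 (just a sketch based on my reading of the docs, not verified):

<?php

require 'app/start.php';

// assuming $s3 is the SDK v3 client created in start.php
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => $config['s3']['bucket'],
    'Key'    => 'uploads/movieinfo.tbz2',
]);

$request = $s3->createPresignedRequest($cmd, '+5 minutes');
$url = (string) $request->getUri();   // signed URL, valid for 5 minutes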

I deleted all permissions on the file, so it shows the padlock icon. Do you know what's wrong?

Thank you




AWS Elastic Beanstalk update environment error

Hi guys, I'm stuck with a strange error. It was working fine before; this just started 2 days ago.

Command: aws elasticbeanstalk update-environment --environment-name my-env --version-label c4fc4991b8838933de0f498e2e0060b522078092

A client error (InvalidParameterValue) occurred when calling the UpdateEnvironment operation: The bucket name parameter must be specified when requesting an object

Also, I could not find any bucket-related configuration on this site: http://ift.tt/1O3QULZ




Running AWS CLI commands as ec2-user

I'm trying to use the AWS CLI for the first time, and I am doing it through PuTTY by SSHing to the EC2 instance.

I want to run a command like "aws ec2 authorize-security-group-ingress [options]"

But I get the following error: "A client error (UnauthorizedOperation) occurred when calling the AuthorizeSecurityGroupIngress operation: You are not authorized to perform this operation."

I believe that this is related to IAM user credentials. I have found out where to create IAM users; however, I still don't understand how this helps me execute this command when I'm logged into the server as ec2-user or root, or when I run the command through cron.

I have done a fair amount of reading regarding the access controls on AWS in their documentation, but I seem to be missing something.

How can I allow the command to be executed from within the AWS instance?
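
What I've gathered so far is that I would create an IAM user (or an instance role) with an EC2 policy and put its keys on the instance, something like this (a sketch; the key, security group ID, and CIDR below are placeholders):

# run as ec2-user (or whichever OS user runs the command / cron job)
aws configure
#   AWS Access Key ID:     AKIA................   (from the IAM user)
#   AWS Secret Access Key: ....................
#   Default region name:   eu-west-1
#   Default output format: json

aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
    --protocol tcp --port 22 --cidr 203.0.113.0/24

Is that the right direction, or is an instance profile/role the better way when running from within the instance?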




What are some inexpensive options to deploy hobby play apps

I have a total of 5 Play applications that I have been working on in my spare time. They are small projects, serious enough to be published to the world yet not serious enough to invest large sums of money in. I would anticipate a maximum of 10k visits per month, and they are read-only (information comes out of a database, nothing goes in).

What are some good inexpensive options to deploy these 5 websites? They will have 5 different domain names.

Thanks,




sed command not working in AWS template

I have a sed command I want to run on a file. The command adds a couple of lines in front of a string.

This is the sed command I want to run: sed -i "/</VirtualHost>/i\ SSLCertificateFile /etc/httpd/cert/aws.cer \nSSLCertificateKeyFile /etc/httpd/cert/aws.key" ssl.conf

The JSON entry in the CloudFormation template UserData is:

"sed -i '/<\/VirtualHost>/i \ \nSSLCertificateFile /etc/httpd/cert/aws.cer \nSSLCertificateKeyFile /etc/httpd/cert/aws.key' /etc/httpd/conf.d/ssl.conf \n",

But I'm getting the following error:

sed: -e expression #1, char 23: unknown command: `S'

Can someone please help me fix the JSON expression?

Thanks.




How do I configure OpsWorks to deploy a not-officially-supported version of Node.js?

I am trying to set up an OpsWorks stack with a Node.js layer that uses the latest version of Node (4.1.1). I am fairly new to Chef and I am not sure where in the cookbooks repo I would need to make changes to pull down and install Node 4.1.1 instead of the default, which is 0.12.7.

Any help is appreciated.




Wordpress on AWS with phpmyadmin for easy DB administration

I've been working on this for many hours.

I want to host WordPress sites within AWS but also have phpMyAdmin access to the databases for easy swapping of client sites/files/theming/updating, etc.

I've successfully set up WordPress blogs on micro instances running LAMP stacks and have used Amazon's RDS as the database. I can connect to phpMyAdmin but don't know how to configure it to display the database from the related installation.

I'm severely new at this; thank you.




How to install SSL cert on AWS EC2?

A few weeks back GoDaddy was hosting my website and there was no problem, but since I started using AWS I can't configure the SSL certificate. The certificate is from name.com.

I tried a couple of different ways and still nothing.

  1. Tried via Elastic Load Balancer (I have the DNS name but I don't know how to access https from there). Using the DNS name I can access my website, but only over 'http'.

https://www.youtube.com/watch?v=X09cT8n2KeE

  2. A YouTube video describing how to install SSL on an Ubuntu server... still nothing.

http://ift.tt/1JS6Vk7

  3. Two different articles... still nothing.

Am I missing something? Anyone experiencing the same/similar issues?




How to loop over Ansible variables

I have the below YAML file, which is working as expected:

---
- hosts: local
- name: Example of provisioning servers
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Modify security group
      local_action:
        module: ec2_group
        name: ansible_trail
        description: Modify SG Rules
        region: us-east-1
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 198.168.45.23
        purge_rules: true

I want to repeat the same action for all of my security groups; how can I do that in Ansible?

Instead of assigning name: optv5_ansible_trail directly, I want to get the value from a list or a file (see the sketch below).
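
Something like this is what I had in mind, if with_items works here (a sketch; the group names in the list are made up):

- name: Modify security groups
  local_action:
    module: ec2_group
    name: "{{ item }}"
    description: Modify SG Rules
    region: us-east-1
    rules:
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 198.168.45.23
    purge_rules: true
  with_items:
    - ansible_trail
    - optv5_ansible_trail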




ec2-upload-bundle: Signature version 4 authentication failed, trying different signature version, which fails too

# ec2-upload-bundle --access-key AKxxxx --secret-key xxxx --bucket something-i-own --manifest image1.img.manifest.xml
Digest::Digest is deprecated; use Digest
Digest::Digest is deprecated; use Digest
Digest::Digest is deprecated; use Digest
Digest::Digest is deprecated; use Digest
Digest::Digest is deprecated; use Digest
Signature version 4 authentication failed, trying different signature version
ERROR: Error talking to S3: Server.NotImplemented(501): A header you provided implies functionality that is not implemented

This is: ec2-ami-tools 1.5.7 on Arch Linux latest, running on an EC2 instance.

I understand the Digest-related messages are about an obsolete Ruby API that still works. The NotImplemented error is more concerning...

How can I upload my bundle?




Register .cn Domain Name on Amazon AWS

As far as I know, neither Amazon AWS nor Google domain registration supports .cn domain names.

If I register a .cn domain name with another provider, say GoDaddy.com, can the .cn domain still be hosted on Amazon AWS?

Thanks.




Ruby, can't download a file, error 500

I'm trying to download a zipped file from an Amazon data feed URL and then decompress it.

This is my code:

    require 'open-uri'   # needed for open() on a URL
    require 'zlib'       # needed for Zlib::GzipReader

    open('public/files/amazon_ce.xml', 'w') do |local_file|
      open('http://ift.tt/1OH7ym2', :http_basic_authentication=>[USERNAME, PASSWORD]) do |remote_file|
        local_file.write(Zlib::GzipReader.new(remote_file).read)
      end
    end

If I try with another file everything is ok, but not with this Amazon file: the error is:

OpenURI::HTTPError: 500 Internal Server Error

I logged the request when I download the same file using the browser...

GET /datafeed/getFeed?filename=it_amazon_ce.xml.gz HTTP/1.1
Host: assoc-datafeeds-eu.amazon.com:443
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: it-IT,it;q=0.8,en-US;q=0.6,en;q=0.4
Cookie: xxxxx
Referer: http://ift.tt/1GidfyT
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36

HTTP/1.1 302 Moved Temporarily
Cache-Control: no-cache
Content-Length: 0
Date: Wed, 30 Sep 2015 21:24:22 GMT
Expires: Thu, 01 Jan 1970 00:00:00 UTC
Location: http://ift.tt/1OH7wdX
Pragma: No-cache
Server: Apache-Coyote/1.1

Any idea?




AWS VPC Public Subnet with NAT server

I have a question on setting up my AWS VPC.

I currently have a public subnet where I have my webservers serving my application. I have a private subnet where my DB is hosted.

My application connects to many APIs which require me to whitelist the incoming IP address of my webserver. This hasn't been an issue since I have an elastic IP on my primary web server.

I'm starting to get a lot of traffic and have OpsWorks set up to scale, but my issue is that when OpsWorks starts new instances, those instances need to be whitelisted with my APIs for them to have access.

My question is. Can I just create another public subnet and route my webservers through a NAT server in that subnet?

I have tried to do it, and every time I change the route table of my web servers to point at the NAT server's subnet, my apps die.

Here is the setup I have:

Public Subnet: Web Servers
Private Subnet: DB Servers

Web servers connect to the internet via internet gateway.

Here is what I'm shooting for:

Public Subnet: Web Servers
Public Subnet: NAT Server
Private Subnet: DB Server

Web servers are routed to internet via NAT.

When I create a NAT instance, I can ping it from my web servers, but when I change the route table to route through the subnet with the NAT server, it stops working.

Things I have tried

I have made sure the source/dest check is disabled for the NAT server.

I have opened up all permissions on the ACL and Security Group for the NAT server and subnet.




amazon CloudWatchLogs putLogEvents

Hi guys, I'm trying to put logs onto Amazon CloudWatch Logs like this:

$response2 = $amzonLoger->putLogEvents([
    'logGroupName' => 'myGroup',
    'logStreamName' => 'myStream',
    'logEvents' => [
        [
            'timestamp' => time(),
            'message' => 'fuck this'
        ],
    ],
    'sequenceToken' => lastToken,
]);
var_dump($response2);

But I always get this response:

object(Guzzle\Service\Resource\Model)#289 (2) { ["structure":protected]=> NULL ["data":protected]=> array(2) { ["nextSequenceToken"]=> string(56) "495401145812734324234234236420825819917076850" ["rejectedLogEventsInfo"]=> array(1) { ["tooOldLogEventEndIndex"]=> int(1) } } }

Can you help me understand what ["rejectedLogEventsInfo"]=> array(1) { ["tooOldLogEventEndIndex"]=> int(1) } means? I will be very grateful for the help.




Amazon AWS '405 method not allowed' error when submitting form to Parse.com

I have a form that sends data to a database on Parse.com; the website itself is hosted on Amazon AWS S3. When attempting to submit the form data I always get a 405 Method Not Allowed.

I'm a bit of a noob so don't be too harsh, any ideas why?




MySQL connection with tomcat works, standalone app doesn't

In AWS I have a Tomcat application that successfully connects to our DB. I also have a cron job that I run from the command line on the same AWS instance. Both use the same code to connect to the DB and the same property files. However, when I run from the command line on that instance I get this error:

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
    at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
    at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:343)
    at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2132)
    ... 18 more
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at java.net.Socket.<init>(Socket.java:425)
    at java.net.Socket.<init>(Socket.java:241)
    at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:253)
    at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:292)
    ... 19 more
Exception in thread "main" java.lang.NullPointerException
    at com.api.spring.management.CheckStatus.getRequests(CheckStatus.java:135)
    at com.api.spring.management.CheckStatus.main(CheckStatus.java:120)

I put some debug in the application and the DB URL, name, username and password are the same in both. So, I'm not sure why running from command line doesn't work, but running it through tomcat does. Any ideas?




S3 fails to unzip uploaded file

I'm following this example

// Load the stream
var fs = require('fs'), zlib = require('zlib');
var body = fs.createReadStream('bigfile').pipe(zlib.createGzip());

// Upload the stream
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body}, function(err, data) {
  if (err) console.log("An error occurred", err);
  console.log("Uploaded the file at", data.Location);
})

And it "works" in that it does everything exactly as expected, EXCEPT that the file arrives on S3 compressed and stays that way.

As far as I can tell there's no automatic facility for it to be unzipped on S3, so, if your intention is to upload a publicly available image or video (or anything else that the end user is meant to simply consume), the solution appears to be to leave the uploaded file unzipped, like so...

// Load the stream
var fs = require('fs'), zlib = require('zlib');
var body = fs.createReadStream('bigfile');//.pipe(zlib.createGzip()); <-- removing the zipping part

// Upload the stream
var s3obj = new AWS.S3({params: {Bucket: 'myBucket', Key: 'myKey'}});
s3obj.upload({Body: body}, function(err, data) {
  if (err) console.log("An error occurred", err);
  console.log("Uploaded the file at", data.Location);
})

I'm curious whether I'm doing something wrong and whether there IS an automatic way to have S3 recognize that the file is arriving zipped and unzip it.




Can custom object caching TTLs be configured in CloudFront via CloudFormation?

Via the CloudFront UI, I have the option to select "Customize" for "Object Caching" and then specify values for Minimum, Maximum, and Default TTL.

However, I do not see support for anything other than MinimumTTL in the CloudFormation CacheBehavior property type.

Am I missing something or is this just not supported via CloudFormation?




Django gets 500 error or blank screen on Apache and AWS

I have a Django/Python system installed on an AWS server. Somebody else made it, and it used to work fine, but now it gets an HTTP 500 error every time. The exception is when I restart the httpd service: then I get a blank page once, and on the next request it goes back to the HTTP 500 error.

When I get the error, the httpd error_log gives me this:

[Wed Sep 30 14:34:27.611640 2015] [:error] [pid 7309] [remote 172.31.45.27:25376] mod_wsgi (pid=7309): Target WSGI script '/opt/python/current/app/lpclub/wsgi.py' cannot be loaded as Python module.
[Wed Sep 30 14:34:27.611690 2015] [:error] [pid 7309] [remote 172.31.45.27:25376] mod_wsgi (pid=7309): Exception occurred processing WSGI script '/opt/python/current/app/lpclub/wsgi.py'.
[Wed Sep 30 14:34:27.611710 2015] [:error] [pid 7309] [remote 172.31.45.27:25376] Traceback (most recent call last):
[Wed Sep 30 14:34:27.611728 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/current/app/lpclub/wsgi.py", line 14, in <module>
[Wed Sep 30 14:34:27.611780 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     application = get_wsgi_application()
[Wed Sep 30 14:34:27.611794 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
[Wed Sep 30 14:34:27.611829 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     django.setup()
[Wed Sep 30 14:34:27.611841 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/__init__.py", line 21, in setup
[Wed Sep 30 14:34:27.611875 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     apps.populate(settings.INSTALLED_APPS)
[Wed Sep 30 14:34:27.611887 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/apps/registry.py", line 85, in populate
[Wed Sep 30 14:34:27.611991 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     app_config = AppConfig.create(entry)
[Wed Sep 30 14:34:27.612004 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/apps/config.py", line 87, in create
[Wed Sep 30 14:34:27.612096 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     module = import_module(entry)
[Wed Sep 30 14:34:27.612109 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
[Wed Sep 30 14:34:27.612152 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     __import__(name)
[Wed Sep 30 14:34:27.612164 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/jsonate/__init__.py", line 2, in <module>
[Wed Sep 30 14:34:27.612195 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     from . import monkey_patches
[Wed Sep 30 14:34:27.612206 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/jsonate/monkey_patches.py", line 4, in <module>
[Wed Sep 30 14:34:27.612236 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     from django.contrib.auth.models import User
[Wed Sep 30 14:34:27.612248 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/contrib/auth/models.py", line 40, in <module>
[Wed Sep 30 14:34:27.612358 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     class Permission(models.Model):
[Wed Sep 30 14:34:27.612370 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/db/models/base.py", line 122, in __new__
[Wed Sep 30 14:34:27.612645 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     new_class.add_to_class('_meta', Options(meta, **kwargs))
[Wed Sep 30 14:34:27.612659 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/db/models/base.py", line 297, in add_to_class
[Wed Sep 30 14:34:27.612676 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     value.contribute_to_class(cls, name)
[Wed Sep 30 14:34:27.612685 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/db/models/options.py", line 166, in contribute_to_class
[Wed Sep 30 14:34:27.612825 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
[Wed Sep 30 14:34:27.612838 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/db/__init__.py", line 40, in __getattr__
[Wed Sep 30 14:34:27.612888 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     return getattr(connections[DEFAULT_DB_ALIAS], item)
[Wed Sep 30 14:34:27.612901 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/db/utils.py", line 242, in __getitem__
[Wed Sep 30 14:34:27.612987 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     backend = load_backend(db['ENGINE'])
[Wed Sep 30 14:34:27.612999 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/db/utils.py", line 108, in load_backend
[Wed Sep 30 14:34:27.613014 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     return import_module('%s.base' % backend_name)
[Wed Sep 30 14:34:27.613023 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
[Wed Sep 30 14:34:27.613036 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     __import__(name)
[Wed Sep 30 14:34:27.613045 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]   File "/opt/python/run/venv/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 27, in <module>
[Wed Sep 30 14:34:27.613113 2015] [:error] [pid 7309] [remote 172.31.45.27:25376]     raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
[Wed Sep 30 14:34:27.613136 2015] [:error] [pid 7309] [remote 172.31.45.27:25376] ImproperlyConfigured: Error loading psycopg2 module: No module named psycopg2

I've updated the psycopg module, ran "pip install -r requirements.txt --upgrade", and checked everything, but I can't find where the problem is.




MongoDB WiredTiger RAID EBS back ups

The MongoDB documentation says that for nodes backed by EBS volumes in a RAID array, the back up options are either:

  1. Lock writes to disk via db.fsyncLock(), take an EBS snapshot, and db.fsyncUnlock()
  2. Use LVM

However, it says that option (1) is only supported for the MMAPv1 storage engine. Why is that option not supported for other storage engines -- specifically WiredTiger?




Command cannot be used as root on Amazon Linux in EC2

I'm using AWS EC2 (Amazon Linux), and I'm in trouble. I installed git flow, and I can use the "git flow init" command as ec2-user, but I cannot use "git flow init" as the root user. I don't understand why.




Nginx redirection magic

I have a third-party that's forwarding traffic over to me on a subdomain - let's call it subdomain.thirdparty.com

I would like to forward this traffic over to http://ift.tt/1Ghl5J5 - this is where the app lives. The links in the app require the /subdomain part in the URL.

BUT I would like to maintain the third-party URL in the browser, something like subdomain.thirdparty.com or http://ift.tt/1L4auXG

I'm hosted on AWS so I have Route 53 available to me, and have the following Nginx setup:

server{
    server_name *.mysite.com;
    listen 80;

    location /subdomain/{
        proxy_set_header SCRIPT_NAME /subdomain;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:9014;
    }
}

I've tinkered around with Nginx settings but just can't seem to figure it out. Any guidance would be greatly appreciated.




Upload Jenkins build revision to S3 without triggering CodeDeploy

I have successfully built my artifacts and uploaded them with the help of the CodeDeploy plugin in Jenkins.

Now I have a scenario where I want to upload my revision but not trigger CodeDeploy automatically. I'll log in to the Amazon console, select the uploaded revision, and trigger the deployment myself.

Is there a plugin to upload only the artifact and not trigger deployment?

Thanks




Default Content-Type for Mapping Template

I'm working with a data provider for my project that unfortunately does not adhere to any standards, so no Content-Type is specified in the request header. Actually, it is specified, but with a different key than Content-Type.

The payload of the POST request is in XML format, so as far as I understand we need to use a mapping template to wrap the payload in a JSON object. All of this works really well when we specify Content-Type to be one of the types set up in the Integration Request section.
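
For reference, the kind of mapping template I mean looks roughly like this (a simplified sketch; the "body" property name is just what our Lambda happens to expect):

{
    "body": "$util.escapeJavaScript($input.body)"
}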

Now, to my understanding, if Content-Type is not specified in a request header then it should default to 'application/json' and execute the mapping template associated with that type. In our case it behaves as if it is ignoring the mapping template, which in turn results in the following error being returned:

{"Type":"User","message":"Could not parse request body into json."}

Just to mention, the request is sent to AWS Lambda for processing.

Any ideas how we can get this working?

Edit: I have confirmed that the default is 'application/json' if the Content-Type is not set in the header. In that case I'm assuming what I'm experiencing is a bug.




SSL setup on AWS Elasticbeanstalk single instance - No Load Balancer

I have a django/python based web application that I have been deploying to AWS for the past year. Now we need to get SSL setup so our users can sign up and make payments online.

I've integrated the Stripe Checkout JS and now I am trying to get a self-signed SSL certificate, FOR TESTING, to run on my DEV AWS EB instance.

I went through the AWS documentation here http://ift.tt/1KRCASt

AND, for the config file:

http://ift.tt/1qzyjrR

Now when I deploy to my AWS instance, I am getting the following errors:

2015-09-29 23:07:48 UTC-0400    ERROR   [Instance: *****] Command failed on instance. Return code: 1 Output: Error occurred during build: Command hooks failed .
2015-09-29 23:07:47 UTC-0400    ERROR   Script /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.py failed with returncode 1

I am not sure what is going on or how to go about debugging these errors.

Here is my config file:


Resources:
  sslSecurityGroupIngress:
    Properties:
      CidrIp: 0.0.0.0/0
      FromPort: 443
      GroupId:
        Ref: AWSEBSecurityGroup
      IpProtocol: tcp
      ToPort: 443
    Type: "AWS::EC2::SecurityGroupIngress"
files:
  /etc/httpd/conf.d/ssl.conf:
    content: |
      LoadModule wsgi_module modules/mod_wsgi.so
      WSGIPythonHome /opt/python/run/baselinenv
      WSGISocketPrefix run/wsgi
      WSGIRestrictEmbedded On
      Listen 443
      <VirtualHost *:80>
        ServerName myserver
        Redirect permanent / https://myserver
      </VirtualHost>

      <VirtualHost *:443>
        ServerName myserver


        SSLEngine on
        SSLCertificateFile "/etc/pki/tls/certs/server.crt"
        SSLCertificateKeyFile "/etc/pki/tls/certs/server.key"

        Alias /static/ /opt/python/current/app/static/
        <Directory /opt/python/current/app/static>
        Order allow,deny
        Allow from all
        </Directory>

        WSGIScriptAlias / /opt/python/current/app/application.py

        <Directory /opt/python/current/app>
        Require all granted
        </Directory>

        WSGIDaemonProcess wsgi-ssl processes=1 threads=15 display-name=%{GROUP} \
          python-path=/opt/python/current/app:/opt/python/run/venv/lib/python2.7/site-packages user=wsgi group=wsgi \
          home=/opt/python/current/app
        WSGIProcessGroup wsgi-ssl
      </VirtualHost>
    group: root
    mode: "000755"
    owner: root
  /etc/pki/tls/certs/server.crt:
    content: "-----BEGIN CERTIFICATE-----\n\
      ********=\n\
      -----END CERTIFICATE-----\n\
      \x20\n"
    group: root
    mode: "000400"
    owner: root
  /etc/pki/tls/certs/server.key:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      *******
      -----END RSA PRIVATE KEY-----
    group: root
    mode: "000400"
    owner: root
packages:
  yum:
    mod24_ssl: []

I created my private key, certificate key, and self-signed certificate using OpenSSL when I SSH'ed into the EC2 instance as ec2-user. In some cases, I've seen people put these files in /var/www/html/. I have the files in the home directory; would that cause a problem?

Any ideas of how to diagnose this or is there a better approach to setting up SSL on a single EB instance?

Thank you.




Does AWS ElastiCache support Pub/Sub on Redis Cluster?

Looking at the documentation of AWS ElastiCache, I can see they support Redis Cluster and talk about key/value data and Redis operations in general. However, it is not clear to me whether this will support replication of Redis pub/sub across the different servers.

We are building a chat server on node-xmpp. We will have many application servers handling chat connections, and we are relying on Redis pub/sub for handling the communication between chat threads. We require that, regardless of the actual Redis instance each chat server is communicating with, they can all share the same pub/sub channel.

The AWS ElastiCache white paper (page 7) indicates that you should use Redis if you want pub/sub. I understand from this that AWS ElastiCache will actually support pub/sub scalability, but I'm not convinced yet.




Amazon S3 upload error: An exception occurred while uploading parts to a multipart upload

I am trying to upload a 30 GB file to Amazon S3 using the AWS PHP SDK.

require('../vendor/autoload.php');

$client = new Aws\S3\S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1'
]);

$bucket_name = 'My-new-bucket';
$file_name   = 'S3_www_1443369605.zip';

try {
    $client->upload($bucket_name, $file_name, fopen($file_name, 'rb'), 'public-read');
    echo "File has been uploaded";
} catch (Exception $e) {
    echo "File upload error: $e";
}

It works for files up to 7GB. When uploading the 30 GB file I am getting the following error after the script has run for about 2 hours:


2015-09-28 23:48:22 - File upload error: exception 'Aws\Exception\MultipartUploadException' with message 'An exception occurred while uploading parts to a multipart upload. The following parts had errors: - Part 560: Error executing "UploadPart" on "http://ift.tt/1MWEJSA"; ...
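
What I'm planning to try next, in case it's relevant, is the SDK's explicit MultipartUploader with its resumable state, roughly like this (a sketch based on my reading of the docs, not yet tested against the 30 GB file):

use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

$uploader = new MultipartUploader($client, $file_name, [
    'bucket' => $bucket_name,
    'key'    => $file_name,
    'acl'    => 'public-read',
]);

do {
    try {
        $result = $uploader->upload();
        echo "File has been uploaded";
    } catch (MultipartUploadException $e) {
        // retry, resuming from the parts that already succeeded
        $uploader = new MultipartUploader($client, $file_name, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));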





WNS receive JSON data from AmazonSNS

I want to link this question to my previous question Can't receive any notification from AmazonSNS

That issue is actually resolved now, after enabling Toast in the .appxmanifest. I get notified when I publish a RAW message type, but not JSON, which is what I actually need. The code is provided there, but let me repost it here:

d("init AmazonSimpleNotificationServiceClient");
AmazonSimpleNotificationServiceClient sns = new AmazonSimpleNotificationServiceClient("secret", "secret", RegionEndpoint.EUWest1);

d("get notification channel uri");
string channel = string.Empty;
var channelOperation = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
channelOperation.PushNotificationReceived += ChannelOperation_PushNotificationReceived;

d("creating platform endpoint request");
CreatePlatformEndpointRequest epReq = new CreatePlatformEndpointRequest();
epReq.PlatformApplicationArn = "arn:aws:sns:eu-west-1:X413XXXX310X:app/WNS/Device";
d("token: " + channelOperation.Uri.ToString());
epReq.Token = channelOperation.Uri.ToString();

d("creat plateform endpoint");
CreatePlatformEndpointResponse epRes = await sns.CreatePlatformEndpointAsync(epReq);

d("endpoint arn: " + epRes.EndpointArn);

d("subscribe to topic");
SubscribeResponse subsResp = await sns.SubscribeAsync(new SubscribeRequest()
{
    TopicArn = "arn:aws:sns:eu-west-1:X413XXXX310X:Topic",
    Protocol = "application",
    Endpoint = epRes.EndpointArn
});

private void ChannelOperation_PushNotificationReceived(Windows.Networking.PushNotifications.PushNotificationChannel sender, Windows.Networking.PushNotifications.PushNotificationReceivedEventArgs args)
{
    Debug.WriteLine("receiving something");
}

I get notified when I publish a RAW message, but I need to get notified when I publish with the JSON message type. I am not sure why I don't get notified when I use that message type. What else am I missing?

Thanks




Unable to SSH to RHEL 6.5 after exporting to AWS

I have exported a VM from a VMware ESX server to AWS by converting it into OVF and using the ec2-import-instance command. I was able to SSH to this VM before exporting; the sshd service is running fine and iptables is updated to allow SSH. After launching the instance in AWS, I am getting a "Connection refused" error. Security groups in AWS are configured to allow SSH from any computer. I am not sure what I am missing here. Can anyone help?




Tomcat 8 Spring JMS/AWS SQS memory leak

I have the following Spring (Boot) configuration for AWS SQS:

    /**
     * AWS Credentials Bean
     */
    @Bean
    public AWSCredentials awsCredentials() {
        return new BasicAWSCredentials(accessKey, secretAccessKey);
    }

    /**
     * AWS Client Bean
     */
    @Bean(destroyMethod="shutdown")
    public AmazonSQSAsync amazonSQSAsyncClient() {
        AmazonSQSAsync sqsClient = new AmazonSQSAsyncClient(awsCredentials());
        sqsClient.setRegion(regionProvider().getRegion());
        return new AmazonSQSBufferedAsyncClient(sqsClient);
    }

    /**
     * AWS Connection Factory
     */
    @Bean
    public SQSConnectionFactory connectionFactory() {
        SQSConnectionFactory.Builder factoryBuilder = new SQSConnectionFactory.Builder(regionProvider().getRegion());
        factoryBuilder.setAwsCredentialsProvider(awsCredentialsProvider());
        return factoryBuilder.build();
    }

    @Bean
    public AWSCredentialsProvider awsCredentialsProvider() {
        return new StaticCredentialsProvider(awsCredentials());
    }

    @Bean
    public RegionProvider regionProvider() {
        return new StaticRegionProvider(regionName);
    }

    /**
     * Registering MyQueueListener
     */
    @Bean(destroyMethod="shutdown")
    public DefaultMessageListenerContainer defaultMessageListenerContainer() {
        DefaultMessageListenerContainer messageListenerContainer = new DefaultMessageListenerContainer();
        messageListenerContainer.setConnectionFactory(connectionFactory());
        messageListenerContainer.setDestinationName(queueName);
        messageListenerContainer.setMessageListener(new MessageListenerAdapter(new MyQueueListener(reportJobService)));
        messageListenerContainer.setErrorHandler(new QueueListenerErrorHandler());
        messageListenerContainer.setTaskExecutor(defaultMessageListenerContainerTaskExecutor());
        messageListenerContainer.setMaxConcurrentConsumers(taskExecutorMaxConcurrentConsumers);

        return messageListenerContainer;
    }

    @Bean(destroyMethod="shutdown")
    public Executor defaultMessageListenerContainerTaskExecutor() {
        return Executors.newFixedThreadPool(taskExecutorThreadsNumber);
    }

On Tomcat 8, during a Reload via the Tomcat Web Application Manager, this configuration leads to a memory leak:

The following web applications were stopped (reloaded, undeployed), but their
classes from previous runs are still loaded in memory, thus causing a memory
leak (use a profiler to confirm):
/domain-api
/domain-api

My Tomcat log does not contain messages about possible memory leaks.

It looks like right now I have 2 instances of my application (domain-api) up and running. How can I check this, and how can I fix it?




Amazon DynamoDb Local with Maven and Java8

I'm trying to start DynamoDB as an embedded service, but when I try to start it, I get the following error.

I created an example project at: http://ift.tt/1N0AhlK

If you run mvn spring-boot:run you get this exception:

Thanks a lot

Marcel

Initializing DynamoDB Local with the following configuration:
Port:   8000
InMemory:   true
DbPath: null
SharedDb:   false
shouldDelayTransientStatuses:   false
CorsParams: *

Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dynamoDbConfiguration': Invocation of init method failed; nested exception is com.amazonaws.AmazonServiceException: The request processing has failed because of an unknown error, exception or failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 9cced04b-da87-48b0-b9e1-66a2fdb1f94f)
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:136)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:408)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1566)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
    at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:755)
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:757)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:687)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:321)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:967)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:956)
    at example.Application.main(Application.java:18)
Caused by: com.amazonaws.AmazonServiceException: The request processing has failed because of an unknown error, exception or failure. (Service: AmazonDynamoDBv2; Status Code: 500; Error Code: InternalFailure; Request ID: 9cced04b-da87-48b0-b9e1-66a2fdb1f94f)
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1776)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.listTables(AmazonDynamoDBClient.java:1203)
    at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.listTables(AmazonDynamoDBClient.java:1216)
    at example.DynamoDbConfiguration.init(DynamoDbConfiguration.java:35)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleElement.invoke(InitDestroyAnnotationBeanPostProcessor.java:349)
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor$LifecycleMetadata.invokeInitMethods(InitDestroyAnnotationBeanPostProcessor.java:300)
    at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:133)
    ... 17 more





How to create an ec2 instance using boto3

Is it possible to create an EC2 instance using boto3 in Python? The boto3 documentation is not helping here, and I couldn't find any helpful documents online. Please provide some sample code/links.
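
For context, this is roughly what I expected to be able to write based on the resource API (a sketch; the AMI ID, key pair name, and security group below are placeholders):

import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')

# launch a single t2.micro instance (all identifiers are placeholders)
instances = ec2.create_instances(
    ImageId='ami-xxxxxxxx',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my-key-pair',
    SecurityGroupIds=['sg-xxxxxxxx'],
)
print(instances[0].id)

Is that the intended way, or is the low-level client preferred?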




Expected behavior for AWS Kinesis ShardIteratorType TRIM_HORIZON

Context: I'm not necessarily referring to a KCL-based application, just pure Kinesis API calls.

Does using the TRIM_HORIZON shard iterator type immediately give you the earliest published record in the stream (i.e., the earliest available within Kinesis' built-in 24-hour window), or simply an iterator/cursor for some point as much as 24 hours ago, which you must then use to advance along the stream until you hit the earliest published record?

Put another way, in case that's not quite clear....

When using the shard iterator type TRIM_HORIZON, is the expected behavior that it will begin by returning the records that were available 24 hours ago, BUT, if zero records were published exactly 24 hours ago and records were instead published only 3 hours ago, that your application will need to iteratively poll through the previous 21 hours before it reaches the records published 3 hours ago?

Timeline example:

  1. Sept 29 5:00 am - Create a stream "foo" with 1 shard
  2. Sept 29 5:02 am - Publish a single record, "Item=A", to the "foo" stream
  3. Sept 29 5:03 am - Issue a GetShardIterator call with TRIM_HORIZON as your shard iterator type, then issue a GetRecords call with that shard iterator and receive the record "Item=A"
  4. Sept 30 7:02 am - Publish a second record, "Item=B", to the "foo" stream
  5. Sept 30 7:03 am - Issue a GetShardIterator call with TRIM_HORIZON as your shard iterator type, then issue a GetRecords call with that shard iterator. What should be expected as the result from this call? (Note: we did not remember/re-use the shard iterator from step 3)

For Step 5 above, it's been more than 24 hours since the "Item=A" message was published on the stream and only a minute since "Item=B" was published. Will a fresh shard iterator with TRIM_HORIZON immediately give you the earliest available record, or do you need to need to keep iterating until you hit a time period when something has been published?

I'd been experimenting with Kinesis and everything was working fine yesterday or two days ago (ie. I was publishing AND consuming without any issues). I made some additional modifications to my code and began publishing again today. When I fired up my consumer, nothing was coming out at all even after letting it run for a few minutes. I tried publishing and consuming at exactly the same time, and still nothing. After manually playing with the AFTER_SEQUENCE_NUMBER iterator type, and using some sequence numbers from my consumer logs from a few days ago, I was able to reach my recently published messages. But then if I go back to using the TRIM_HORIZON type, I see no messages at all.

I've looked at the docs, but most of docs I found assume you are using the KCL (I actually was using KCL initially, but when it started failing I dropped down to raw API calls) and mention that you must have an application name and that DynamoDB tables are used for tracking state. Which as best I can tell is not true if you're using pure Kinesis API calls or the Kinesis CLI, both of which I eventually tried. I finally wrote a pure API script to start with TRIM_HORIZON and poll infinitely and eventually it hit new records (took ~600 iterations; started out 14hrs behind "now" and found records at about 5 hours behind "now"). If this is expected behavior, it seems like the wording in the docs is just a little confusing/misleading:

TRIM_HORIZON - Start reading at the last untrimmed record in the shard in the system, which is the oldest data record in the shard.

I assumed (now seemingly incorrectly) that the terms "oldest data record" meant record that I've published into the stream, not simply a time period in the stream.

It'd be great if someone can help confirm/explain the behavior I'm seeing.

Thanks!




Build failing with Amazon ProfileCredentialsProvider

   //   this.s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
        this.s3Client = new AmazonS3Client();

If I uncomment the first line, my project build fails, saying that it doesn't recognise the package (com.amazonaws.auth.profile) and that it encountered an error at ProfileCredentialsProvider().

But if I use the second line, everything is ok. Why does that happen?




Really Basic S3 Upload credentials

I'm giving Amazon Web Services a try for the first time and getting stuck on understanding the credentials process.

From a tutorial from awsblog.com, I gather that I can upload a file to one of my AWS "buckets" as follows:

s3 = Aws::S3::Resource.new

s3.bucket('bucket-name').object('key').upload_file('/source/file/path')

In the above circumstance, I'm assuming he's using the default credentials (as described here in the documentation), where he's using particular environment variables to store the access key and secret or something like that. (If that's not the right idea, feel free to set me straight.)

The thing I'm having a hard time understanding is the meaning behind the .object('key'). What is this? I've generated a bucket easily enough, but is it supposed to have a specific key? If so, how do I create it? If not, what is supposed to go into .object()?

I figure this MUST be out there somewhere but I haven't been able to get it (maybe I'm misreading the documentation). Thanks to anyone who gives me some direction here.
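
In case it clarifies what I'm asking, this is how I currently read that call (a sketch; 'videos/intro.mp4' is just an example key I made up):

require 'aws-sdk'

s3 = Aws::S3::Resource.new(region: 'us-east-1')

# my current understanding: the "key" is simply the name/path the object
# will have inside the bucket once the upload finishes
s3.bucket('bucket-name')
  .object('videos/intro.mp4')         # destination key in the bucket
  .upload_file('/source/file/path')   # local file to upload

Is that reading correct?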




generate a Cloudfront Signed URL using Clojure

Is there an easy way to generate a Cloudfront Signed URL in Clojure?

I'm using Amazonica for S3 (which works great):

(s3/set-s3client-options {:path-style-access true})
(aws/defcredential s3_accesskey s3_secretkey)
(s3/generate-presigned-url bucket key (-> 6 hours from-now))

Is there anything similar for Cloudfront?
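
The closest I've found so far is dropping down to Java interop with CloudFrontUrlSigner from the AWS Java SDK, roughly like this (an untested sketch; the domain, key file, object key, and key pair ID are all placeholders, and I'm not sure this is the idiomatic route):

(ns example.cf-sign
  (:require [clojure.java.io :as io])
  (:import [com.amazonaws.services.cloudfront CloudFrontUrlSigner]
           [com.amazonaws.services.cloudfront.util SignerUtils$Protocol]
           [java.util Date]))

(defn signed-url []
  (CloudFrontUrlSigner/getSignedURLWithCannedPolicy
    SignerUtils$Protocol/https
    "d1234567890.cloudfront.net"                      ; distribution domain
    (io/file "/path/to/cloudfront-private-key.pem")   ; CloudFront key pair private key
    "videos/movie.mp4"                                ; object key
    "APKAXXXXXXXXXXXXXXXX"                            ; CloudFront key pair ID
    (Date. (+ (System/currentTimeMillis) (* 6 60 60 1000)))))  ; expires in 6 hours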




Facebook new version integration with aws

Can anyone please tell me how to integrate the new version of the Facebook SDK, i.e. 4.6.0, into an Android app using AWS (Amazon Web Services)?




Tuesday, September 29, 2015

EC2 used space increases after resizing volume

I have followed the AWS guide to expand an 8GB volume to 16GB.

What I have done is:

  1. Take snapshot of 8GB volume
  2. Create new 16GB volume by the snapshot
  3. Detach 8GB volume from EC2 instance, then attach 16GB volume to that EC2 instance

After that, df -h:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       16G   11G  5.3G  67% /

Why did the Used size increase to 11G? It should be 8G.




AngularJS authentication fail in amazon AWS

Seriously, I need to know why my application fails on AWS Beanstalk. This is the URL:

AWS angular site

The console logs a token mismatch error, but the exact same copy works on my local workstation.

Thanks!!




System Integrity Fail

I have a server that I set up at Amazon AWS EC2.

Recently I have been receiving these...

Time:    Tue Sep 29 07:30:40 2015 -0400
PID:     11592 (Parent PID:11381)
Account: stymco
Uptime:  54888 seconds


Executable:

/usr/local/cpanel/3rdparty/perl/514/bin/perl


Command Line (often faked in exploits):

spamd child


Network connections by the process (if any):

tcp: 127.0.0.1:783 -> 0.0.0.0:0
tcp: 127.0.0.1:783 -> 127.0.0.1:60912
tcp: 10.0.0.15:48469 -> 208.83.137.115:2703
udp: 10.0.0.15:24448 -> 10.0.0.2:53


Files open by the process (if any):

/dev/null
/dev/null
/dev/null
/usr/local/cpanel/3rdparty/perl/514/bin/spamd
/home/stymco/.razor/razor-agent.log

And then today I received this...

Time:     Tue Sep 29 21:35:18 2015 -0400

The following list of files have FAILED the md5sum comparison test. This means that the file has been changed in some way. This could be a result of an OS update or application upgrade. If the change is unexpected it should be investigated:

/usr/bin/ldapadd: FAILED
/usr/bin/ldapcompare: FAILED
/usr/bin/ldapdelete: FAILED
/usr/bin/ldapexop: FAILED
/usr/bin/ldapmodify: FAILED
/usr/bin/ldapmodrdn: FAILED
/usr/bin/ldappasswd: FAILED
/usr/bin/ldapsearch: FAILED
/usr/bin/ldapurl: FAILED
/usr/bin/ldapwhoami: FAILED
/usr/sbin/slapacl: FAILED
/usr/sbin/slapadd: FAILED
/usr/sbin/slapauth: FAILED
/usr/sbin/slapcat: FAILED
/usr/sbin/slapd: FAILED
/usr/sbin/slapdn: FAILED
/usr/sbin/slapindex: FAILED
/usr/sbin/slappasswd: FAILED
/usr/sbin/slapschema: FAILED
/usr/sbin/slaptest: FAILED

Is this something I should be worried about? And is there a way I can look deeper into this?

I have searched the web about some of these notices, and all I can find is advice on disabling the warnings that are getting emailed to me.

Any advice is much appreciated.




How to deploy Java RMI server to Amazon AWS?

I am trying to learn about Java RMI and I figure it would be useful to deploy my program to a remote server.

I've signed up for a free Amazon AWS account.

Can someone point me (and future curious readers) in the right direction on how to deploy a simple Java RMI program on an Amazon server, with the rmiregistry, etc.?

I don't even know where to begin.




AWS Laravel DecryptException

I just uploaded a fresh, unmodified copy of a script that uses Laravel and AngularJS to AWS Beanstalk, but it encounters a DecryptException error whenever I try to log in using an API call to the server.


But it works fine on my workstation. These are some of the functions I use to create a user token.

protected function createToken($user)
    {
        $payload = [
            'sub' => $user->id,
            'iat' => time(),
            'exp' => time() + (2 * 7 * 24 * 60 * 60)
        ];
        return JWT::encode($payload, Config::get('app.token_secret'));
    }

Authenticate.php

public function handle($request, Closure $next)
    {
        if ($request->header('Authorization'))
        {
            $token = explode(' ', $request->header('Authorization'))[1];
            $payload = (array) JWT::decode($token, Config::get('app.token_secret'), array('HS256'));

            if ($payload['exp'] < time())
            {
                return response()->json(['message' => 'Token has expired']);
            }

            $request['user'] = $payload;

            return $next($request);
        }
        else
        {
            return response()->json(['message' => 'Please make sure your request has an Authorization header'], 401);
        }
    }

Thanks!!




Permission Denied while trying to run a Python package

I'm trying to use a Python package called csvkit on an AWS EC2 machine. I was able to install it after some hiccups, which might be related - running pip install csvkit threw an error at

with open(path, 'rb') as stream:
    IOERROR: [Errno 13] Permission denied: '/usr/local/lib/python2.7/site-packages/python_dateutil-http://ift.tt/1MZo5BR'

But I was able to install it with some other command.

Now, on to the original problem: when I try to run a simple function within the csvkit package like csvstat, this is the full error output:

[username ~]$ csvstat
Traceback (most recent call last):
  File "/usr/local/bin/csvstat", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in <module>
    working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 614, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 920, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 815, in resolve
    new_requirements = dist.requires(req.extras)[::-1]
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2552, in requires
    dm = self._dep_map
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2537, in _dep_map
    for extra, reqs in split_sections(self._get_metadata(name)):
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2978, in split_sections
    for line in yield_lines(s):
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2252, in yield_lines
    for ss in strs:
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2566, in _get_metadata
    for line in self.get_metadata_lines(name):
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1589, in get_metadata_lines
    return yield_lines(self.get_metadata(name))
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1581, in get_metadata
    return self._get(self._fn(self.egg_info, name))
  File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1692, in _get
    with open(path, 'rb') as stream:
IOError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/site-packages/python_dateutil-http://ift.tt/1MZo5BR'

I'm not sure what to even search for to figure out what the issue is. Is this related to python-dateutil? I'm fairly new to the Linux world, so editing configuration files and whatnot is a bit difficult for me.




How to launch ECS cluster in default VPC?

Is this possible? I would like to use ElastiCache, which it seems can only be created in my default VPC (the alternative question is: how can I launch ElastiCache in a custom VPC?), but I can't connect to it from a separate VPC. I don't know how to configure my clusters (or launch them outside of the "Getting Started" flow) so that I can launch them in an existing VPC of my choosing.
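
From what I've read so far, it looks like the trick might be a cache subnet group tied to the custom VPC's subnets, something like this with the CLI (a sketch; all IDs are placeholders, and I haven't confirmed it addresses the ECS side):

aws elasticache create-cache-subnet-group \
    --cache-subnet-group-name my-vpc-cache-subnets \
    --cache-subnet-group-description "Subnets in my custom VPC" \
    --subnet-ids subnet-aaaa1111 subnet-bbbb2222

aws elasticache create-cache-cluster \
    --cache-cluster-id my-redis \
    --engine redis \
    --cache-node-type cache.t2.micro \
    --num-cache-nodes 1 \
    --cache-subnet-group-name my-vpc-cache-subnets

Is that the right approach, and how do the ECS cluster instances then reach it?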




AWS Javascript Uploading Multiple Files with progress reported

Scenario...

I'm trying to upload a number of files to AWS through the Javascript API. While the number of files may be many (100 - 500), I am managing this by first sending the file information to my web server (via Ajax) to parse name, size, etc... and insert this information into my database. I then return the id of the newly created record back to my web page and send the file to S3 using the 'id' as the file name. This way, I believe I am creating unique records and file names.

Problem...

Since my Ajax routine is in a for loop, so is the file upload to S3. I cannot get this script to give me any valuable feedback on the upload progress (the upload itself works fine).

Any of the following would make me happy...

  1. A div with the total upload size (all the files) and the total progress reported.
  2. A list of all the files being uploaded with their totals and their amounts loaded.
  3. A div that reports "20 out of 520 files loaded"

My Code...

var aj = $.ajax({
            url: 'handler.php',
            type: 'POST',
            data: { packnameid: packnameid,
                    number: number,
                    name: name
                },
            success: function(result) {
                     // Initialize the Amazon Cognito credentials provider
                AWS.config.region = 'us-east-1'; // Region
                AWS.config.credentials = new AWS.CognitoIdentityCredentials({
                IdentityPoolId: '********************',
                });

                var bucket = new AWS.S3({params: {Bucket: 'traseindex'}});
                var params = {Key: result+".pdf", ContentType: file.type, Body: file};

                bucket.upload(params).on('httpUploadProgress', function(evt) {
                document.getElementById('total').innerHTML = evt.total;
                document.getElementById('status').innerHTML = i +" of " + files.length + " " + parseInt((evt.loaded * 100) / evt.total)+'%'
                console.log("Uploaded :: " + parseInt((evt.loaded * 100) / evt.total)+'%');

                }).send(function(err, data) {
                  document.getElementById('total').innerHTML = i +" of "+ files.length + " complete";
                });
            }
        });

No matter how I update the httpUploadProgress piece, I fail to achieve my goal. For instance, "i of files.length" immediately reports that all the files are done, yet the parseInt evaluation continues to show me the percentage complete of each file as the upload processes.

I've tried creating multiple divs with the id of the loop iteration "i", then appending each of them with the parseInt evaluation, but the percentage only shows up in the last div created.

If someone could point me in the right direction on how to approach this, I'd appreciate it.

thanks.




Elastic Transcoder: Duplicate output key error

Over the last day we started getting an interesting error when trying to push a transcoding job with the PHP SDK:

'Aws\ElasticTranscoder\Exception\ElasticTranscoderException' with message 'Error executing "CreateJob" on "http://ift.tt/1O0C09g"; AWS HTTP error: Client error: 400 ValidationException (client): Playlists '64k' is a duplicate of an output key. - {"message":"Playlists '64k' is a duplicate of an output key."}' in /var/www/html/app/1.0/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php:152

The settings we're pushing to Elastic Transcoder:

        'PipelineId'      => $this->config['pipeline_id'],
        'OutputKeyPrefix' => "$prefix/",
        'Input'           => [
            'Key' => "uploads/$input_filename.$input_extension",
        ],
        'Playlists'       => [
            'OutputKeys' => [$bitrate],
            'Name'       => $bitrate,
            'Format'     => 'HLSv4',
        ],
        'Outputs'         => [
            'PresetId'        => $preset_id,
            'Key'             => $bitrate,
            'SegmentDuration' => '9.0',
        ],

where $bitrate is '64k' with the (target) end result of the transcoding job creating the files: 64k.ts, 64k.m3u8, 64k_v4.m3u8.

My first thought was possibly an S3 key conflict due to the prefix existing already but even after clearing the output bucket the error remained. And as far as I'm aware 64k.ts and 64k.m3u8 are treated as distinct objects in S3.

Does the duplicate output key in this case refer to an S3 object or perhaps a conflict in the transcoding job?




AWS SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided

I am getting this response to a GET request to the Amazon Product Advertising API.

<?xml version="1.0"?>
<ItemSearchErrorResponse xmlns="http://ift.tt/1zDT4Ys">
  <Error>
    <Code>SignatureDoesNotMatch</Code>
    <Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.</Message>
  </Error>
  <RequestId>9aff7feb-7f9b-4efb-aece-b595b1b7b0e5</RequestId>
</ItemSearchErrorResponse>

In my JavaScript, I am generating the signature as:

  var string='GET\nwebservices.amazon.in\n/onca/xml\n';
  console.log(string+$.param(paramO));
  var hash = CryptoJS.HmacSHA256(string+$.param(paramO), SecretAccessKey);
  var signature = CryptoJS.enc.Base64.stringify(hash);
  var paramO = {
    Service:'AWSECommerceService',
    Operation:'ItemSearch',
    AWSAccessKeyId:AccessKeyId,
    AssociateTag:AssociateTag,
    Version:'2011-08-01',
    SearchIndex:'All',
    Keywords:name,
    ResponseGroup:'ItemAttributes,OfferSummary',
    Timestamp:timestamp,
    Signature:signature
  };
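
For comparison, the Product Advertising API signature is an HMAC-SHA256 over the sorted, percent-encoded query string, not over the parameters in arbitrary order. A rough Python sketch of the signing steps (the keys, associate tag, keywords, and timestamp are placeholders):

    import base64
    import hashlib
    import hmac

    try:
        from urllib import quote          # Python 2
    except ImportError:
        from urllib.parse import quote    # Python 3

    # Placeholder credentials and values -- substitute your own.
    ACCESS_KEY = 'AKIAEXAMPLE'
    SECRET_KEY = 'secret'
    ASSOCIATE_TAG = 'mytag-21'
    TIMESTAMP = '2015-09-30T12:00:00Z'

    params = {
        'Service': 'AWSECommerceService',
        'Operation': 'ItemSearch',
        'AWSAccessKeyId': ACCESS_KEY,
        'AssociateTag': ASSOCIATE_TAG,
        'Version': '2011-08-01',
        'SearchIndex': 'All',
        'Keywords': 'harry potter',
        'ResponseGroup': 'ItemAttributes,OfferSummary',
        'Timestamp': TIMESTAMP,
    }

    # The string to sign uses the parameters sorted by name and percent-encoded
    # (RFC 3986 style, so spaces become %20 rather than +).
    canonical_query = '&'.join(
        '%s=%s' % (k, quote(str(v), safe='-_.~')) for k, v in sorted(params.items())
    )
    string_to_sign = 'GET\nwebservices.amazon.in\n/onca/xml\n' + canonical_query

    digest = hmac.new(SECRET_KEY.encode('utf-8'),
                      string_to_sign.encode('utf-8'),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode('ascii')

    # The signature itself must also be URL-encoded when appended to the request.
    url = ('http://webservices.amazon.in/onca/xml?' + canonical_query +
           '&Signature=' + quote(signature, safe=''))
    print(url)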




Can't Connect to Amazon RDS MySQL Instance from anywhere Except Home

I am working on a school project in which we need to perform Statistical Analysis in R. For the sake of the project, I have created an Amazon Web Services RDS MySQL instance, that I would like to share with my colleagues.

I have already uploaded the data that we need for our project into the database, and I can connect to the instance via both the MySQL client and R from home. However, I cannot connect from either school or any local café, via either the MySQL client or R.

I have configured the Security Group so that anyone can access the database (both Inbound & Outbound). The Port that I use is 1433.

Does anybody have an idea how I can resolve the problem?




On Amazon EMR 4.0.0, setting /etc/spark/conf/spark-env.conf is ineffective

I'm launching my spark-based hiveserver2 on Amazon EMR, which has an extra classpath dependency. Due to this bug in Amazon EMR:

http://ift.tt/1iJodbh

my classpath cannot be submitted through the "--driver-class-path" option.

So I'm forced to modify /etc/spark/conf/spark-env.conf to add the extra classpath:

# Add Hadoop libraries to Spark classpath
SPARK_CLASSPATH="${SPARK_CLASSPATH}:${HADOOP_HOME}/*:${HADOOP_HOME}/../hadoop-hdfs/*:${HADOOP_HOME}/../hadoop-mapreduce/*:${HADOOP_HOME}/../hadoop-yarn/*:/home/hadoop/git/datapassport/*"

where "/home/hadoop/git/datapassport/*" is my classpath.

However, after launching the server successfully, the Spark environment parameters show that my change had no effect:

spark.driver.extraClassPath :/usr/lib/hadoop/*:/usr/lib/hadoop/../hadoop-hdfs/*:/usr/lib/hadoop/../hadoop-mapreduce/*:/usr/lib/hadoop/../hadoop-yarn/*:/etc/hive/conf:/usr/lib/hadoop/../hadoop-lzo/lib/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*

Is this configuration file obsolete? Where is the new file, and how do I fix this problem?




Move files to S3 then remove source files after completion

I am just familiarising myself with Amazon Web Services and the S3 AWS CLI.

Does anyone know if it's possible to remove files from the source once they exist in the destination bucket?

I was trying the below

aws s3 sync /home/recsout s3://myfakebucketname001 --delete

I was previously using rsync to a NAS drive with the --remove-source-files option, but want to use S3 and the AWS CLI instead.
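
For reference, a minimal boto3 sketch of the move-then-delete behaviour I'm after (the bucket name and local directory are the ones above; adjust as needed):

    import os
    import boto3

    s3 = boto3.client('s3')
    source_dir = '/home/recsout'            # local source directory
    bucket = 'myfakebucketname001'          # destination bucket

    for name in os.listdir(source_dir):
        path = os.path.join(source_dir, name)
        if not os.path.isfile(path):
            continue
        s3.upload_file(path, bucket, name)
        # Only delete the local copy once the upload call has returned successfully.
        os.remove(path)

(The CLI's own aws s3 mv /home/recsout s3://myfakebucketname001 --recursive is the closest built-in equivalent; sync --delete only removes destination objects that no longer exist at the source.)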

Many Thanks




Problems with SaveAsync task in DynamoDB for C#

I'm trying to save my administrator class object to DynamoDB using the Context.SaveAsync method:

// Save admin to DynamoDB.
context.SaveAsync(admin,(result)=>{
  if (result.Exception == null)
  { 
    Console.WriteLine("admin saved");
  }
});

but it keeps bothering me with following error:

cannot convert `lambda expression' to non-delegate type `system.threading.cancellationtoken'

How do I handle this issue? I'm using Xamarin Studio on OS X.




AWS API Gateway won't open up

I created a "hello world" Lambda function and then deployed it to an endpoint using AWS's API Gateway:

(screenshot of the config settings)

All very basic settings, but I made sure to set the security to "open", and while I was told that it could take up to 15 minutes for the domain to resolve, I found that even after 30 I was getting the following response from the "open" endpoint:

 {"message":"Missing Authentication Token"}

Am I missing something obvious? Shouldn't this have been available with what I did?




CloudWatch log role ARN

I am trying to set up a really basic API with the AWS API Gateway product, and it seems I cannot find any policies that will suffice for it to log (or, for that matter, even let me leave the first page of the settings screen). I am stuck here:

URL: http://ift.tt/1PM6Pxn

and my desperation has led to the following permissions being granted to the role:

(screenshot of the role's attached policies)

I've also added the following bespoke policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

All to no avail. Whenever I press the save button I get the following:

(screenshot of the error shown on save)

Any help would be greatly appreciated.
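
One thing worth checking is that the role's trust policy actually allows API Gateway to assume it; the permissions policy alone isn't enough. A boto3 sketch of creating such a role (the role and policy names are arbitrary placeholders, and the inline policy mirrors the one above):

    import json
    import boto3

    iam = boto3.client('iam')

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "apigateway.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }]
    }

    log_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }]
    }

    role = iam.create_role(
        RoleName='apiGatewayCloudWatchLogsRole',
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )
    iam.put_role_policy(
        RoleName='apiGatewayCloudWatchLogsRole',
        PolicyName='ApiGatewayLogsPolicy',
        PolicyDocument=json.dumps(log_policy),
    )
    # Paste this ARN into the "CloudWatch log role ARN" field.
    print(role['Role']['Arn'])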




Mailgun doesn't send mail after hosting Django app on AWS server

def send_email_using_mailgun(to_address, reset_link):
    return requests.post(
        "http://ift.tt/1jvuE2m",
        auth=("api", settings.MAILGUN_API_KEY),
        data={"from": "APP_NAME <" + settings.MAILGUN_FROM_ADDRESS + ">",
              "to": to_address,
              "subject": "ResetPassword",
              "text": "Click on the link to reset the password" + reset_link})


Error: Traceback (most recent call last):
File"/home/ubuntu/django_env/nithenv/local/lib/python2.7/sitepackages/django/core/handlers/base.py", line 112, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
return view_func(*args, **kwargs)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/rest_framework/views.py", line 403, in dispatch
response = self.handle_exception(exc)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/rest_framework/views.py", line 400, in dispatch
response = handler(request, *args, **kwargs)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/rest_framework/decorators.py", line 50, in handler
return func(*args, **kwargs)
File "/home/ubuntu/django_projects/styleinpocket/StyleInPocket/styleApp/views.py", line 499, in reset_password_api
stat = user.request_reset()
File "/home/ubuntu/django_projects/styleinpocket/StyleInPocket/styleApp/models.py", line 63, in request_reset
return send_email_using_mailgun(self.email_id, reset_link)
File "/home/ubuntu/django_projects/styleinpocket/StyleInPocket/styleApp/helpers.py", line 74, in send_email_using_mailgun
"Follow the link below to set a new password:<a href='" + reset_link + "'>" + reset_link + "</a>"
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/pip/_vendor/requests/api.py", line 109, in post
return request('post', url, data=data, json=json, **kwargs)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/pip/_vendor/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/home/ubuntu/django_env/nithenv/local/lib/python2.7/site-packages/pip/_vendor/requests/adapters.py", line 415, in send
raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', error(110, 'Connection timed out'))

It works fine when testing locally; I receive emails. It doesn't work after pushing the code to AWS EC2 with nginx and gunicorn: it gives ('Connection aborted.', error(110, 'Connection timed out')). Mailgun domain authentication is done and shows green ticks on the TXT, CNAME, and MX records.
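
Since the failure is a plain connection timeout rather than a Mailgun error, a quick sanity check from the EC2 host is to hit the Mailgun API with an explicit timeout and see whether outbound HTTPS traffic gets through at all. A small sketch (the domain and key are placeholders; note that the traceback above shows the pip-vendored copy of requests being imported rather than a standalone install):

    import requests

    # Placeholder domain and key -- substitute your own Mailgun values.
    MAILGUN_URL = "https://api.mailgun.net/v3/YOUR_DOMAIN/messages"
    MAILGUN_API_KEY = "key-..."

    try:
        resp = requests.post(
            MAILGUN_URL,
            auth=("api", MAILGUN_API_KEY),
            data={"from": "test <postmaster@YOUR_DOMAIN>",
                  "to": "you@example.com",
                  "subject": "connectivity test",
                  "text": "sent from the EC2 host"},
            timeout=10,            # fail fast instead of hanging for minutes
        )
        print(resp.status_code, resp.text)
    except requests.exceptions.RequestException as exc:
        print("outbound request failed:", exc)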




React Native AWS image upload

I am making an app in React Native and will be using Amazon Web Services for image upload. I was wondering if the AWS Node SDK can be used in my React Native app, because I have read multiple outdated blog posts that said people were having issues. Has anyone tried it out? (Node version: 4)

Thanks, Lohit




S3FS not reliably mounted

I have AWS instances running the Amazon Linux AMI. I've made custom disk images with S3FS installed, which are launched automatically by the load balancer. All the instances share images through the S3 bucket.

In /etc/fstab there is one line added:

s3fs#mybucket:/images /var/app/current/images fuse uid=500,gid=500,allow_other,use_cache=/tmp/cache 0 0

The problem is that whenever an EC2 instance is started from this custom AMI, there is a ~50% probability that the S3 bucket will not be mounted correctly and "Transport endpoint is not connected" is shown by df -h. How can I make mounting of the S3 bucket more reliable?




how to configure neo4j database on ec2

I set up a Neo4j database on EC2 and am not sure how to access it with my REST client. Firstly, how do I change the username and password once it's on EC2 and running? Also, what do I change localhost to so I can access the server?

This is the example statement I want to know how to configure:

from neo4jrestclient import client
db = client.GraphDatabase("http://localhost:7474", username="neo4j", password="neo4j")
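
A minimal sketch of what this might look like once the instance is reachable, assuming port 7474 is opened in the instance's security group and a Neo4j 2.2+ server (the public DNS name and new password are placeholders); the password change goes through the server's /user/neo4j/password REST endpoint:

    import requests
    from neo4jrestclient import client

    HOST = "ec2-xx-xx-xx-xx.compute-1.amazonaws.com"   # placeholder: the instance's public DNS

    # Change the default password over the REST API, authenticating with the
    # initial neo4j/neo4j credentials.
    requests.post(
        "http://%s:7474/user/neo4j/password" % HOST,
        json={"password": "my-new-password"},
        auth=("neo4j", "neo4j"),
    )

    # Then connect using the public address instead of localhost.
    db = client.GraphDatabase("http://%s:7474" % HOST,
                              username="neo4j", password="my-new-password")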




PostgreSQL Permissions in Amazon RDS using Flyway Migration

I'm trying to get Flyway up and running for database source control of Postgres and am running into permissions issues.

Scenario: New database on AWS RDS PostgreSQL instance.

  • RDS superuser configured is "postgres" and I login as that to create the db and setup roles.

  • I create a "flyway_user" account that I want the flyway tool to use and add it to the rds_superuser role.

  • I create a couple schemas that flyway will manage and GRANT ALL ON SCHEMA to the "flyway_user" account.

When I then try to have Flyway run a migration to create tables in these schemas, I get a permission denied error. What am I missing here? Thanks in advance for any advice.




Inspect HttpRequestMessage just before it is sent

I have been tasked with integrating with the Amazon Gift Codes On Demand (AGCOD) RESTful API. We are required to sign our requests using Signature Version 4, something that is performed by their AWS SDK for .NET for other services, but not AGCOD.

I am using the HttpClient class from the System.Net.Http namespace to communicate with AWS's API. This in turn is using the HttpClientHandler to create an HttpRequestMessage. In so doing extra headers like Host, Content-Length and Connection are added to the message.

My question is, how do I go about inspecting the message after I have called PostAsync and the headers have been added, but before it is sent to the server so I can compute and add the signature?

I could obviously simply just specify these headers myself. But that only helps for known headers. If a different HttpMessageHandler is used (e.g. the WebRequestHandler) then different headers may be added (for example Content-Encoding and Cache-Control). If I don't know about all the headers in the message I will not be able to compute the correct signature.




Can't receive any notification from AmazonSNS

I am not sure why I can't receive any notification from AmazonSNS. Am I missing something in my code? I am using the latest version of AWSSDK for Windows Store App by the way.

Here's my code so far.

d("init AmazonSimpleNotificationServiceClient");
AmazonSimpleNotificationServiceClient sns = new AmazonSimpleNotificationServiceClient("secret", "secret", RegionEndpoint.EUWest1);

d("get notification channel uri");
string channel = string.Empty;
var channelOperation = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
channelOperation.PushNotificationReceived += ChannelOperation_PushNotificationReceived;

d("creating platform endpoint request");
CreatePlatformEndpointRequest epReq = new CreatePlatformEndpointRequest();
epReq.PlatformApplicationArn = "arn:aws:sns:eu-west-1:X413XXXX310X:app/WNS/Device";
d("token: " + channelOperation.Uri.ToString());
epReq.Token = channelOperation.Uri.ToString();

d("creat plateform endpoint");
CreatePlatformEndpointResponse epRes = await sns.CreatePlatformEndpointAsync(epReq);

d("endpoint arn: " + epRes.EndpointArn);

d("subscribe to topic");
SubscribeResponse subsResp = await sns.SubscribeAsync(new SubscribeRequest()
{
    TopicArn = "arn:aws:sns:eu-west-1:X413XXXX310X:Topic",
    Protocol = "application",
    Endpoint = epRes.EndpointArn
});

private void ChannelOperation_PushNotificationReceived(Windows.Networking.PushNotifications.PushNotificationChannel sender, Windows.Networking.PushNotifications.PushNotificationReceivedEventArgs args)
{
    Debug.WriteLine("receiving something");
}




AWS device farm with Espresso and JUnit4

I want to test my app in AWS Device Farm, using:

androidTestCompile 'com.android.support.test:runner:0.4'
androidTestCompile 'com.android.support.test:rules:0.4'
androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.1'
androidTestCompile 'com.android.support.test.espresso:espresso-intents:2.2.1'
androidTestCompile('com.android.support.test.espresso:espresso-contrib:2.2.1') {
    exclude group: 'com.android.support', module: 'appcompat'
    exclude group: 'com.android.support', module: 'support-v4'
    exclude module: 'recyclerview-v7'
}
androidTestCompile 'junit:junit:4.12'
androidTestCompile 'com.squareup.retrofit:retrofit-mock:1.9.0'
androidTestCompile 'com.squareup.assertj:assertj-android:1.1.0'
androidTestCompile 'com.squareup.spoon:spoon-client:1.2.0'

Sample test:

My tests are annotated with @RunWith(AndroidJUnit4.class) and run with AndroidJUnitRunner; they start like:

@RunWith(AndroidJUnit4.class)
@LargeTest
public class EstimationActivityTests {

    @Rule
    public ActivityTestRule<LoginActivity> mActivityRule = new ActivityTestRule(LoginActivity.class);

    @Before
    public void setup() {
    }

    @Test
    public void showsRightDataOnCreate() {
        org.junit.Assert.assertEquals("asd", "asd");
    }
}

But it only runs the suite setup and teardown... it looks like it doesn't recognize my tests...

Another thing is that I'm creating the APK and test APK with gradlew:

#./gradlew assembleMockAndroidTest

and I upload the files app-mock-androidTest-unaligned.apk and app-mock-unaligned.apk.

What's wrong in my process?

Case: http://ift.tt/1FFtJ9q




Elastic Transcoder Job Status in PHP

How do you print the job status of an Elastic Transcoder job, with the following possible values, in PHP?

{Progressing, Completed, Warning, Error}

I have submitted the job and the video is transcoding. I'm retrieving the results through an S3 listing.
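
For illustration (in Python rather than PHP), the status comes back on the job object returned by the ReadJob operation, which the PHP SDK also exposes; the job ID below is a placeholder:

    import boto3

    transcoder = boto3.client('elastictranscoder', region_name='us-east-1')

    # Placeholder job ID -- use the Id returned when the job was created.
    job = transcoder.read_job(Id='1234567890123-abcdef')['Job']

    print(job['Status'])   # one of the job status strings, e.g. 'Progressing' or 'Error'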




IAM, apply policy only to tagged instances

I need to create an IAM policy to stop or terminate only instances that have a specific tag (multiple instances). I have written this:

{
            "Action": [
                "ec2:StopInstances",
                "ec2:TerminateInstances"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Name": "tag1",
                    "ec2:ResourceTag/Name": "tag2",
                    "ec2:ResourceTag/Name": "tag3"
                }
            }
        },

But the form says it is invalid; only one ec2:ResourceTag/Name string can be written. How can I allow Stop and Terminate on instances with different tag Names?
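
A condition key can take a list of values, which StringEquals evaluates as an OR, so the three tags can live under a single ec2:ResourceTag/Name entry. A small Python sketch that builds and prints such a statement (purely illustrative; paste the resulting JSON into the policy editor):

    import json

    statement = {
        "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
        "Effect": "Allow",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                # A list of values is evaluated as an OR for StringEquals.
                "ec2:ResourceTag/Name": ["tag1", "tag2", "tag3"]
            }
        },
    }

    print(json.dumps({"Version": "2012-10-17", "Statement": [statement]}, indent=2))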




Best setup for GCM based mobile push on AWS

I am creating a backend service running on AWS, and I will have mobile clients on Android and later on also on iOS. I need mobile push functionality in order to push events from the backend services to the devices.

Now, I am having several concerns on what the best setup is:

  • What is the benefit of using AWS SNS to implement mobile push? For GCM, it's easy enough for my backend service code to make a HTTP call to the GCM API to publish a message.

    I know that SNS can act as a wrapper for multiple push service implementations including GCM and APNS, and I was thinking that this would help hide the details about what kind of device is used etc from the backend services, but since you need to create one "SNS platform application" for each bearer anyway, I'm not so sure - will my backend code still need to know which kind of device it wants to talk to and go about publishing downstream messages differently for Android vs iOS? I can imagine it would be easier than having to write completely different code for pushing messages down GCM vs APNS, but will it be significantly simpler?

  • If using SNS to wrap GCM, what is the best procedure for registering device endpoints? I have struggled to find a decent tutorial for this, and the only one I have found so far does it like so:

    1. Android app running on device registers with GCM using Google Play Services API

    2. Android app running on device registers a SNS platform endpoint based on the GCM token, using e.g. AWS Cognito credentials to access the account

    3. Android app running on device stores the platform endpoint ARN in the backend user database etc, to be used when a push message needs to be sent to the device

    I guess it would also be possible to have the device perform only step 1) and then pass the GCM token to my backend service, which would create an SNS endpoint from it and store that in the database (see the sketch after this list)? This would prevent my devices from having to handle AWS credentials, etc. But then I would have to implement GCM/APNS-specific functionality on my server.

    Is the handling of expired/rotated GCM tokens a factor in how this procedure should be set up? Is it a good idea to store only the AWS SNS endpoint ARNs in my user database?

  • Are there other benefits of using SNS as opposed to directly calling GCM/APNS/... ?
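
A minimal boto3 sketch of the server-side variant described above, where the backend receives the GCM registration token from the device, creates the SNS platform endpoint itself, and stores only the endpoint ARN (the platform application ARN and account ID are placeholders):

    import boto3

    sns = boto3.client('sns', region_name='eu-west-1')

    def register_device(gcm_token):
        """Called by the backend when a device posts its GCM registration token."""
        resp = sns.create_platform_endpoint(
            PlatformApplicationArn='arn:aws:sns:eu-west-1:123456789012:app/GCM/my-app',
            Token=gcm_token,
        )
        endpoint_arn = resp['EndpointArn']
        # Persist endpoint_arn against the user record in the database.
        return endpoint_arn

    def push_to_device(endpoint_arn, message):
        """Send a downstream message without the backend speaking GCM directly."""
        sns.publish(TargetArn=endpoint_arn, Message=message)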




aws ec2 describe-addresses won't show some instances

I'm scripting some stuff with aws ec2 describe-addresses, but, for some reason, some instances won't be returned by it.

Example:

$ aws ec2 describe-addresses --filter=Name=instance-id,Values=i-xxxxx 
{
    "Addresses": []
}

The given instance ID is valid and the instance has addresses, but it just won't be shown by the AWS CLI.

However, for other instances it seems to work just fine:

$ aws ec2 describe-addresses --filter='Name=instance-id,Values=i-yyyyyy'                                           
{
    "Addresses": [
        {
            "PrivateIpAddress": "X.X.X.X",
            "InstanceId": "i-yyyyyy",
            "NetworkInterfaceOwnerId": "XXXXXXXXXX",
            "Domain": "vpc",
            "AllocationId": "eipalloc-xxxxxx",
            "PublicIp": "Y.Y.Y.Y",
            "NetworkInterfaceId": "eni-xxxxxx",
            "AssociationId": "eipassoc-xxxxx"
        }
    ]
}

The keys I'm using have EC2FullAccess policy, so, it doesn't seem to be related to security...

What am I doing wrong? Any tips? Are there any limitations of the AWS CLI that I'm not aware of?




Facebook integration in android using AWS

I don't know why, but after adding this to my code and running my app, it shows "your app has stopped". Can someone please help me with this?

Map logins = new HashMap();
logins.put("graph.facebook.com", AccessToken.getCurrentAccessToken().getToken());
credentialsProvider.setLogins(logins);




boto3 attach volume to next available device name on instance

Using boto3, what is the best way to attach a volume resource to the next available device name on an instance resource?

Example utilized instance block devices:

/dev/sda
/dev/sdb
/dev/sdd
/dev/sde
/dev/sdf

/dev/sdc is open, so I'd like to attach my volume there.
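
A minimal sketch of one way to do this with the boto3 resource API, under the assumption that the instance only uses /dev/sd[a-z] style device names (the instance and volume IDs are placeholders):

    import string
    import boto3

    ec2 = boto3.resource('ec2')
    instance = ec2.Instance('i-0123456789abcdef0')   # placeholder instance ID

    # Device names already attached to the instance, e.g. ['/dev/sda', '/dev/sdb', ...]
    used = {mapping['DeviceName'] for mapping in instance.block_device_mappings}

    # Pick the first free /dev/sdX name.
    device = next('/dev/sd%s' % letter for letter in string.ascii_lowercase
                  if '/dev/sd%s' % letter not in used)

    instance.attach_volume(VolumeId='vol-0123456789abcdef0',   # placeholder volume ID
                           Device=device)
    print('attached to', device)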




Signature does not match while trying to upload file to a temporary URL

I am currently trying to build a webservice to generate temporary URLs to Amazon S3 (so that I don't need to keep the credentials elsewhere). It works fine if I do not include the 'Content-MD5' key in the headers for the generate_url method, but once it is included, I always get the same error:

"The request signature we calculated does not match the signature you provided. Check your key and signing method."

The MD5 is generated and included in the following way:

md5checksum = key.compute_md5(open(filepath, "rb"))[0]
r = cli.session.post(serviceAddress + webService , data=json.dumps({"key": key, "size": os.path.getsize(filepath), "md5" : md5checksum}))

I have also tried generating the md5 with

md5checksum = hashlib.md5(open(filepath).read()).hexdigest()

On the webservice's side, the temporary URL is generated via

headers={'Content-Length': length, 'Content-MD5': md5}
return self.conn.generate_url(expire, 'PUT', self.name, key, headers=headers, force_http=True)

I have checked that the MD5 does not change between the generation of the URL and the file's upload. If I just remove 'Content-MD5': md5, it works fine.
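
For reference, a sketch of the pairing that has to line up, assuming the URL is signed with boto 2's generate_url: S3 expects the Content-MD5 value to be the base64-encoded binary digest (not the hex digest from hexdigest()), and the client performing the PUT must send exactly the same Content-MD5 header that was baked into the signature. The file path and presigned URL below are placeholders:

    import base64
    import hashlib
    import requests

    filepath = 'somefile.bin'                    # placeholder local file

    with open(filepath, 'rb') as f:
        body = f.read()

    # Base64 of the raw binary digest -- the form S3 expects in Content-MD5.
    content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode('ascii')

    # The same value must be used when generating the URL ...
    # headers = {'Content-Length': str(len(body)), 'Content-MD5': content_md5}
    # url = conn.generate_url(expire, 'PUT', bucket_name, key, headers=headers, force_http=True)

    # ... and sent verbatim with the PUT against the presigned URL.
    url = 'http://example.com/presigned-put-url'  # placeholder presigned URL
    resp = requests.put(url, data=body, headers={'Content-MD5': content_md5})
    print(resp.status_code, resp.text)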




JW Player - Amazon Web Services CDN and Advanced Javascript Debugging

I have a customized JW Player 7 Pro embedded on the following page: http://ift.tt/1h6QrvA.

The embed code is as follows:

<!--Course Video, Scripts and Style-->
<div id="visualSPPlayer">Loading the player...</div>
<script type="text/javascript">
var playerInstance = jwplayer("visualSPPlayer");
playerInstance.setup({
file: "http://ift.tt/1iYHeHv",
primary: "HTML5",
image: "http://ift.tt/1h6QrLO",
   width: "100%",
aspectratio: "16:9",
       tracks :[{
file: "http://ift.tt/1iYHciL", 
            label: "English",
            kind: "captions",
        },{
              file:'http://ift.tt/1h6QrLT',
               kind:'chapters'

},
{ 
            file: "http://ift.tt/1iYHeHx", 
            kind: "thumbnails"
        }],
skin: {
  name: "vapor",
active: "#E16933",
inactive: "#E16933",
background: "#333333"
}

});
</script>
<script type="application/javascript" src="http://ift.tt/1h6QrLV"></script>
<link rel="stylesheet" href="http://ift.tt/1iYHciN" type="text/css" media="screen" />

The player.js file contents:

jQuery(document).ready(function() {

jQuery(function($){

var playerInstance = jwplayer();

var chapters = [];
var captions = [];
var toc = [];
var caption = -1;
var matches = [];
var seekArr = [];
 var seekPos = [];
var seePos;
var query = "";
var cycle = -1;

var transcript = document.getElementById('courseTranscript');
var search = document.getElementById('courseSearch');
var match = document.getElementById('courseMatch');


var caption_file;
var chapter_file;


playerInstance.onReady(function(){
        
//Self-Hosted
caption_file = playerInstance.getPlaylist()[0].tracks[0].file;
chapter_file = playerInstance.getPlaylist()[0].tracks[1].file;

    if (playerInstance.getRenderingMode() == "flash") {
        return;
      }

      tag = document.querySelector('video');
      tag.defaultPlaybackRate = 1.0;
      tag.playbackRate = 1.0;

      playerInstance.addButton("http://ift.tt/1h6QtU5", "1.5x", function() {
        playerInstance.seek(playerInstance.getPosition());
        tag.playbackRate = 1.5;
      },"playerHighSpeed");

      playerInstance.addButton("http://ift.tt/1iYHciP", "1.0x", function() {
        playerInstance.seek(playerInstance.getPosition());
        tag.playbackRate = 1.0;
      },"playerNormalSpeed");

    playerInstance.addButton("http://ift.tt/1h6QtU8", "0.5x", function(){
        playerInstance.seek(playerInstance.getPosition());
        tag.playbackRate = 0.5;
      },"playerSlowSpeed");


     
 });

   

 //Adds Player Focus on Playing
playerInstance.on('play', function() {

         $('html, body').animate({
        scrollTop: $(".jwplayer").offset().top - 190
    }, 1000);

});


playerInstance.onReady(function(){

 $.get( caption_file , function( data ) {
                 data = data.trim();
                     var t = data.split("\n\r\n");
             
                      for(var i=0; i<t.length; i++) {
                        var c = parse(t[i]);
                        chapters.push(c);

                      }
                      loadCaptions();
          loadChapters();
          
                    });

        //

 });



// Load chapters / captions


  function loadCaptions(){
       
     $.get(caption_file, function( data ) {
  
        data = data.trim();

      var t = data.split("\n\r\n");
      t.pop();
      var h = "<p>";
      var s = 0;
      for(var i=0; i<t.length; i++) {
        var c = parse(t[i]);
        if(s < chapters.length && c.begin > chapters[s].begin) {
           s++;
        }
        h += "<span id='caption"+i+"'>"+c.text+"</span>";
        captions.push(c);
      }
      transcript.innerHTML = h + "</p>";


    });

};



function parse(d) {
    var a = d.split("\n");
   
    //console.log(a[1]);
    var i = a[1].indexOf(' --> ');

    var t = a[2]; //Caption text
 

    if (a[3]) {  t += " " + a[3]; }
    t = t.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
    return {
      begin: seconds(a[1].substr(0,i)),
      btext: a[1].substr(3,i-7),
      end: seconds(a[1].substr(i+5)),
      text: t
    }
};

function seconds(s) {
  var a = s.split(':');
 
  secs = a[2].substring(0, a[2].indexOf(','));

    var r = Number(secs) + Number(a[a.length-2]) * 60;


  if(a.length > 2) { r+= Number(a[a.length-3]) * 3600; }
  return r;

};

function toc_seconds(s) {
  var a = s.split(':');
 
secs = a[2].substring(0, a[2].indexOf('.'));
  
   var r = Number(secs) + Number(a[a.length-2]) * 60;

  if(a.length > 2) { r+= Number(a[a.length-3]) * 3600; }
  return r;

};

function toc_time(s) {
  var a = s.split(':');
   var ms = a[2].split(".");

   var h = a[0];
  
   if (h != "00") {
        var r = a[0] + ":"+ a[1] + ":" + ms[0]; 
   } else {
        var r = a[1] + ":" + ms[0];     
   }

  
   return r;

};



// Highlight current caption and chapter
playerInstance.onTime(function(e){
  var p = e.position;
  for(var j=0; j<captions.length; j++) {
    if(captions[j].begin < p && captions[j].end > p) {
      if(j != caption) {
        var c = document.getElementById('caption'+j);
        if(caption > -1) {
          document.getElementById('caption'+caption).className = "";
        }
        c.className = "current";
        if(query == "") {
          transcript.scrollTop = c.offsetTop - transcript.offsetTop - 40;
        }
        caption = j;
      }
      break; 
    }
  }
});



// Hook up interactivity
transcript.addEventListener("click",function(e) {
  if(e.target.id.indexOf("caption") == 0) { 
    var i = Number(e.target.id.replace("caption",""));
    playerInstance.seek(captions[i].begin);
  }
});

/**/

search.addEventListener('focus',function(e){
  setTimeout(function(){search.select();},100);
resetSearch();
  $("#prevMatchLink").hide();
    $("#nextMatchLink").hide();
});
search.addEventListener('keydown',function(e) {
  if(e.keyCode == 27) {
    resetSearch();
    $("#prevMatchLink").hide();
    $("#nextMatchLink").hide();
  } else if (e.keyCode == 13) {
    $("#prevMatchLink").show();
    $("#nextMatchLink").show();
    var q = this.value.toLowerCase();
    if(q.length > 0) {
      if (q == query) {
        if(cycle >= matches.length - 1) {
          cycleSearch(0);

          } else { 

          cycleSearch(cycle + 1);
        }
      } else {
        resetSearch();
        searchTranscript(q);
      }
    } else {
      resetSearch();
    }
  } else if (e.keyCode == 37) {
    cycleSearch(cycle - 1);
  }
  else if (e.keyCode == 39) {
    cycleSearch(cycle + 1);
  }
});

$("#prevMatchLink").click(function(e) {
e.preventDefault();
cycleSearch(cycle - 1);
});

$("#nextMatchLink").click(function(e) {
e.preventDefault();
cycleSearch(cycle + 1);
});



// Execute search
function searchTranscript(q) {
  matches = [];
  query = q;
  for(var i=0; i<captions.length; i++) {
    var m = captions[i].text.toLowerCase().indexOf(q);
    if( m > -1) {
      document.getElementById('caption'+i).innerHTML = 
        captions[i].text.substr(0,m) + "<em>" + 
        captions[i].text.substr(m,q.length) + "</em>" + 
        captions[i].text.substr(m+q.length);
      matches.push(i);
    }
  }
  if(matches.length) {
    cycleSearch(0);
  } else {
    resetSearch();
  }
};

function cycleSearch(i) {
  if(cycle > -1) {
    var o = document.getElementById('caption'+matches[cycle]);
    o.getElementsByTagName("em")[0].className = "";
  }
  var c = document.getElementById('caption'+matches[i]);
  c.getElementsByTagName("em")[0].className = "current";
  match.innerHTML = (i+1) + " of " + matches.length;
  transcript.scrollTop = c.offsetTop - transcript.offsetTop - 40;
  cycle = i;
};

function resetSearch() {
  if(matches.length) {
    for(var i=0; i<captions.length; i++) {
      document.getElementById('caption'+i).innerHTML = captions[i].text;
    }
  }
  query = "";
  matches = [];
  match.innerHTML = "0 of 0";
  cycle = -1;
  transcript.scrollTop = 0;
};


var videoTitle = $(".videoTitle").text();


        var hasPlayed = false;

        playerInstance.onBeforePlay(function(event) {
                if(hasPlayed == false){
                        ga('send', 'event', 'Video', 'Play', videoTitle);
                        hasPlayed = true;
                }
        });


    //Can be used to trigger the Course to Marked Completed so the user doesn't have to
    playerInstance.on('complete', function() {

    });


     function loadChapters(){
       
     $.get(chapter_file, function( data ) {
  
        data = data.trim();
               
      var c = data.split("\n\r\n");
      
      var d;

      for (var i = 0; i < c.length; i++) {
      
        d = c[i].split("\n");
        //pushes in Title for each chapter
        toc.push(d[0]);
        //pushes in the time intervals for each chapter
        seekArr.push(d[1]);
        
      };

       

      for (var a = 0; a < seekArr.length; a++) {
        //Splits the time interval and pushes the start interval for each chapter
        var tempPos = seekArr[a].split(" --> ");
        seekPos.push(tempPos[0]);
      };

      runTOC(seekPos);

      var toc_output = "";
      $.each(toc, function(i, v) {
       
        toc_output += "<li class=ch"+i+"><a href='#' onclick='jwplayer().seek("+toc_seconds(seekPos[i])+");'>"+v+"</a> ("+toc_time(seekPos[i])+")</li>"
      
      });

      if (toc.length < 7) {
      toc_output += " <li class='blank'> </li><li class='blank'> </li>";
      }

      $(".courseTitles ul").html(toc_output);
    
    });

         

};
     
      function runTOC(x) {
     
          playerInstance.onTime(function(event){
          
          for (var i = 0; i < x.length; i++) {
           
           if (event.position >  toc_seconds(x[i]) ) {
            $(".courseTitles ul li").removeClass("active");
            $(".courseTitles ul li.ch"+i).addClass('active');
          
          }


          };

         


        });

      
      }

     
});

});

We are hosting the video and Chapter/Captions VTT files using Amazon Web Services with Cloudfront.

We have included the interactive transcript from the captions as well as dynamic video chapters to be loaded once the video is ready to be played.

One thing I have noticed is that the chapters and the transcript do not always load and require the page to be refreshed several times, so I was thinking that maybe it was a caching issue on the AWS side of the equation.

I have used Google Chrome, and there are no errors in the developer console when the chapters and transcript fail to load.

It should be noted that this functionality was working flawlessly when we were using the JW Platform cloud-hosted solution, so it seems to be a factor of the AWS/CloudFront CDN.




npm install failing on AWS EC2

I'm installing Node on an AWS EC2 instance,

following instructions from here: http://ift.tt/xDwEm3

but it is not working; I'm getting the following error:

npm ERR! Linux 4.1.7-15.23.amzn1.x86_64
npm ERR! argv "node" "/home/ec2-user/npm/cli.js" "install" "marked-man" "--no-global"
npm ERR! node v0.6.8
npm ERR! npm v3.3.5

npm ERR! Object # has no method 'exists'
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR!     http://ift.tt/1wdyVck

npm ERR! Please include the following file with any support request:
npm ERR!     /home/ec2-user/npm/npm-debug.log
npm ERR! code 1
make[1]: *** [node_modules/.bin/marked-man] Error 1
make[1]: Leaving directory `/home/ec2-user/npm'
make: *** [man/man1/npm-edit.1] Error 2




Amazon EC2 boot time

Our web app performs a random number of tasks for a user initiated action. We have built a small system where a master server calculates the number of worker servers that are needed to complete the task, and the same number of EC2 instances are "Turned On" which pick up the tasks and perform the same.

"Turned On" because the time taken to span an instance from an AMI is extremely high. So the idea is have a pool of worker instances and start and stop them as per requirement.

Also, considering how Amazon charges when you start up an instance (you are billed for one hour every time you turn on an instance), the workers, once started, will stay active for an hour and will accept other tasks during this period.

We have managed to get this architecture up and running; however, the boot-up time still bothers us, as it fluctuates between 40 and 80 seconds. Is there some way we can reduce it?

Below is the stack information of the things running on the worker instance

  • Ubuntu AMI
  • Node JS (using forever-service for auto startup on boot)
  • Docker (the tasks are performed inside individual docker containers)