Friday, July 31, 2015

I want to create an AWS CloudFormation stack using OpenStack Heat

I want to create an AWS CloudFormation stack using OpenStack Heat, but I don't have any idea where to start.
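
For orientation: Heat can launch stacks written in the AWS CloudFormation template format, so one rough starting point is a sketch like the one below using python-heatclient. The endpoint, token, and template file name are placeholders, and the exact client arguments depend on your heatclient version and authentication setup.

from heatclient.client import Client

# Placeholders: your Heat API endpoint and a valid Keystone token.
HEAT_ENDPOINT = 'http://openstack.example.com:8004/v1/<tenant-id>'
AUTH_TOKEN = '<keystone-token>'

heat = Client('1', endpoint=HEAT_ENDPOINT, token=AUTH_TOKEN)

# Heat accepts AWS CloudFormation-format templates as well as native HOT templates.
with open('cfn-template.json') as f:
    template_body = f.read()

heat.stacks.create(stack_name='cfn-demo', template=template_body, parameters={})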




How to connect an AWS MySQL database to an Android app through JSP?

I created an instance on Amazon AWS, which gives me an IP to connect to. After connecting to that server from my terminal, I installed mysql-server on it and created a database and tables with some entries. Now I want to connect this MySQL database to my Android app through JSP. How can I achieve this?




Laravel 5.1 AWS SDK proper credentials setup: exception 'Aws\Exception\CredentialsException'

I am working on a Laravel 5.1 App Using this package:

"aws/aws-sdk-php-laravel": "~3.0"

I am trying to properly set up a local environment and a production environment. I keep getting this error when trying to send mail on the production server (my .env file is gitignored and only exists locally):

production.ERROR: exception 'Aws\Exception\CredentialsException' with message 'Error retrieving credentials from the instance profile metadata server. 

I ran:

php artisan vendor:publish

My .env file looks like this, only with the keys filled in:

APP_ENV=local
APP_DEBUG=true
APP_KEY=

DB_HOST=localhost
DB_DATABASE=
DB_USERNAME=
DB_PASSWORD=

CACHE_DRIVER=file
SESSION_DRIVER=database
QUEUE_DRIVER=sync

MAIL_DRIVER=ses
MAIL_HOST=email-smtp.us-west-2.amazonaws.com
MAIL_PORT=587
MAIL_USERNAME=
MAIL_PASSWORD=
MAIL_ENCRYPTION=null

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_REGION=us-west-2

My config/Mail.php looks like this, only with the keys and email address filled in:

<?php

return [

/*
|--------------------------------------------------------------------------
| Mail Driver
|--------------------------------------------------------------------------
|
| Laravel supports both SMTP and PHP's "mail" function as drivers for the
| sending of e-mail. You may specify which one you're using throughout
| your application here. By default, Laravel is setup for SMTP mail.
|
| Supported: "smtp", "mail", "sendmail", "mailgun", "mandrill", "ses", "log"
|
*/

'driver' => env('MAIL_DRIVER', 'ses'),

/*
|--------------------------------------------------------------------------
| SMTP Host Address
|--------------------------------------------------------------------------
|
| Here you may provide the host address of the SMTP server used by your
| applications. A default option is provided that is compatible with
| the Mailgun mail service which will provide reliable deliveries.
|
*/

'host' => env('MAIL_HOST', ''),

/*
|--------------------------------------------------------------------------
| SMTP Host Port
|--------------------------------------------------------------------------
|
| This is the SMTP port used by your application to deliver e-mails to
| users of the application. Like the host we have set this value to
| stay compatible with the Mailgun e-mail application by default.
|
*/

'port' => env('MAIL_PORT', 587),

/*
|--------------------------------------------------------------------------
| Global "From" Address
|--------------------------------------------------------------------------
|
| You may wish for all e-mails sent by your application to be sent from
| the same address. Here, you may specify a name and address that is
| used globally for all e-mails that are sent by your application.
|
*/

'from' => ['address' => '', 'name' => ''],

/*
|--------------------------------------------------------------------------
| E-Mail Encryption Protocol
|--------------------------------------------------------------------------
|
| Here you may specify the encryption protocol that should be used when
| the application send e-mail messages. A sensible default using the
| transport layer security protocol should provide great security.
|
*/

'encryption' => env('MAIL_ENCRYPTION', 'tls'),

/*
|--------------------------------------------------------------------------
| SMTP Server Username
|--------------------------------------------------------------------------
|
| If your SMTP server requires a username for authentication, you should
| set it here. This will get used to authenticate with your server on
| connection. You may also set the "password" value below this one.
|
*/

'username' => env('MAIL_USERNAME', ''),
/*
|--------------------------------------------------------------------------
| SMTP Server Password
|--------------------------------------------------------------------------
|
| Here you may set the password required by your SMTP server to send out
| messages from your application. This will be given to the server on
| connection so that the application will be able to send messages.
|
*/

'password' => env('MAIL_PASSWORD','' ),

/*
|--------------------------------------------------------------------------
| Sendmail System Path
|--------------------------------------------------------------------------
|
| When using the "sendmail" driver to send e-mails, we will need to know
| the path to where Sendmail lives on this server. A default path has
| been provided here, which will work well on most of your systems.
|
*/

'sendmail' => '/usr/sbin/sendmail -bs',

/*
|--------------------------------------------------------------------------
| Mail "Pretend"
|--------------------------------------------------------------------------
|
| When this option is enabled, e-mail will not actually be sent over the
| web and will instead be written to your application's logs files so
| you may inspect the message. This is great for local development.
|
*/

'pretend' => false,

];




AWS EC2 Public IP based in Sydney

I tried changing the region to Asia Pacific (Sydney) and created an instance with the default configuration, but my instance is assigned a US-based IP address.

I created another VPC with a subnet using an AWS IP range from the Sydney-based list and then created an instance. My private IP is assigned a Sydney-based address, but my public IP address is still assigned to some US-based range.

Is there any way I can get a Sydney-based IP address for my instance?




How to achieve EC2 high availability while preferring the instance launch into a specific Availability Zone

I am looking for how to specify the zone I want to deploy to in a single-instance deployment with autoscaling, while also having automatic failover to another zone. Do any options exist to achieve this?


More context

Due to how reserved instances are linked to a single availability zone (AZ), we find it to be a good strategy (from an "ease of management"/simplicity perspective), when buying reserved instances for our dev environment, to buy them all in a single zone and then launch all dev instances in that single zone. (In production, we buy across zones and run with autoscale groups that specify to deploy across all zones).

I am looking for how to:

  1. Specify the AZ that I want an instance to be deployed to, so that I can leverage the reserved instances that are tied to a single (and consistent) AZ.

while also having

  2. The ability to fail over to an alternate zone if the primary zone fails (yes, you will pay more money until you move the reserved instances, but presumably the failover is temporary, e.g. 8 hours, and you can fail back once the zone is back online).

The issue is that I can see how you can achieve 1 or 2, but not 1 and 2 at the same time.

To achieve 1, I would specify a single subnet (and therefore AZ) to deploy to, as part of the autoscale group config.

To achieve 2, I would specify more than one subnet in different AZs, while keeping the min/max/capacity setting at 1. If the AZ that the instance non-deterministically got deployed to fails, the autoscale group will spin up an instance in the other AZ.

One cannot do 1 and 2 together to achieve a preference for which zone an autoscale group of min/max/capacity of 1 gets deployed to while also having automatic failover if the zone I am in fails; they are competing solutions.




Sending mail to SES by using only an instance role

Is there a way to leverage instance roles to be able to send mail to SES from an Amazon Linux EC2 instance so that one does not have to have IAM access keys on the box?

I would prefer not to have any private keys, including IAM keys (even those with locked-down privileges), on our EC2 instances.
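
For what it's worth, the AWS SDKs fall back to instance-profile credentials automatically when no keys are configured, so a minimal sketch with boto would look like the following. This assumes the instance role allows ses:SendEmail and that the addresses (placeholders here) are verified in SES; the region is also an assumption.

import boto.ses

# No access keys passed: boto falls back to the EC2 instance profile credentials.
conn = boto.ses.connect_to_region('us-east-1')

conn.send_email(
    source='sender@example.com',           # must be a verified SES sender
    subject='Test from an instance role',
    body='Hello from EC2 without IAM keys on the box.',
    to_addresses=['recipient@example.com'],
)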




Remote Desktop from Windows to Debian on AWS?

I need to set up a Remote Desktop connection from Windows 8 to a Debian instance running on AWS. I've tried modifying the Ubuntu instructions from AWS, replacing ubuntu-desktop with task-desktop. When I connect, sesman tells me:

connecting to sesman ip 127.0.0.1 port 3350
sesman connect ok
sending login info to session manager, please wait ...
xrdp_mm_process_login_response: login successful for display ..
started connecting
connecting to 127.0.0.1 5910
error - problem connecting

There's nothing helpful in /var/log/auth.log or /var/log/xrdp-sesman.log.

What am I missing here?




AWS SDK 2.0 Ruby Presigned URL & transcoding the content after upload

My intention:

  1. Get the presigned URL for posting a resource
  2. Post resource to S3 bucket
  3. Transcode my resource on s3 (for video files)

I have figured out 1 & 2 from here after hitting the route, say /getPresignedURL. Has anyone done 3?

My backup plan is to create another route, say /fileUpload, which will return a 200 upon successful upload to the presigned URL. I will then manually run a job to transcode the video. Is there an easier way to do this? TIA.
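
For step 3, one option is for the /fileUpload handler to kick off an Elastic Transcoder job once the upload succeeds. A rough sketch in Python with boto3, purely for illustration (the pipeline ID, preset ID, and object keys are placeholders; the Ruby SDK has an equivalent Elastic Transcoder client):

import boto3

# Placeholders: an existing Elastic Transcoder pipeline and a preset of your choice.
PIPELINE_ID = '1111111111111-abcde1'
PRESET_ID = '1351620000001-000010'

transcoder = boto3.client('elastictranscoder', region_name='us-east-1')

# Input/output keys are relative to the buckets configured on the pipeline.
transcoder.create_job(
    PipelineId=PIPELINE_ID,
    Input={'Key': 'uploads/video.mp4'},
    Outputs=[{'Key': 'transcoded/video.mp4', 'PresetId': PRESET_ID}],
)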




Twilio returns 502 while pointing to AWS API Gateway hooked to Lambda

I have a Twilio number pointing to an AWS API Gateway which is hooked to an AWS Lambda function that returns XML. After multiple attempts I was finally able to map the response from Lambda to API Gateway, and now it is returning valid XML for Twilio. If I go to the URL or make a curl request to it, I get the expected result: some XML. However, when I try this from Twilio I get a 502 Bad Gateway. Any ideas why? I also tried this from multiple IP addresses, so I don't think it is a security issue, since I don't have any security or authentication enabled on the API Gateway. I need help...




Create basic AWS CloudFormation template for single server

I have no experience with AWS CloudFormation Templates so I apologize for the incredibly simple question which I can't find an answer to because I think it is so basic.

I am trying to create a cloudformation template for a single server in AWS Test Drive. Here is the criteria:

  1. Deploy an AMI
  2. Force m3.large
  3. Will be running in a single location
  4. Get a public IP
  5. Spit back the public DNS or public IP address

Everything I've looked up wants to be more complex than I need, and I can't figure out which pieces are needed and which ones can be taken out. What is the bare minimum to deploy a single AMI with no customization (all customization is performed inside the VM during bootup)?

Thanks for your help.
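
For reference, a bare-bones single-instance template can be as small as the sketch below (the AMI ID is a placeholder; in EC2-Classic or a default VPC the instance gets a public IP automatically). It is created here through boto just to keep everything in one runnable snippet.

import json
import boto.cloudformation

# Minimal template: one instance plus outputs for its public DNS name and IP.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "Server": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-xxxxxxxx",
                "InstanceType": "m3.large"
            }
        }
    },
    "Outputs": {
        "PublicDNS": {"Value": {"Fn::GetAtt": ["Server", "PublicDnsName"]}},
        "PublicIP": {"Value": {"Fn::GetAtt": ["Server", "PublicIp"]}}
    }
}

cfn = boto.cloudformation.connect_to_region('us-east-1')
cfn.create_stack('single-server-demo', template_body=json.dumps(template))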




QuickBooks Web Connector is not working with Apache load balancer JSESSIONID

I have configured an Apache load-balancing server and am trying to use it with the QuickBooks Web Connector.

Here is my Apache load balancer config:

#Cluster configuration with stickiness
ProxyRequests Off
<Proxy balancer://mycluster>
BalancerMember ajp://localhost:8010/dbsync2/   route=node1 loadfactor=4
BalancerMember ajp://10.0.0.187:8010/dbsync2/  route=node3 loadfactor=4
#ProxySet lbmethod=bybusyness
ProxySet stickysession=JSESSIONID timeout=300
</Proxy>
ProxyPass /dbsync2/ balancer://mycluster/
ProxyPassReverse /dbsync2/ balancer://mycluster/

The QuickBooks Web Connector makes its first request to the Apache server, and that gets connected to a back-end Tomcat, but QuickBooks sends four requests to complete its action, and those should all go to a single server.

In my case the requests are not sticking to a single Tomcat; they get scattered across the multiple Tomcats. The QuickBooks Web Connector works fine with AWS sticky sessions, which also use JSESSIONID.

Is there any way to keep these requests going to the same Tomcat instance using JSESSIONID?

Thanks in advance; please help me with this.




Deploy Docker environment on Elastic Beanstalk

I just "Dockerized" my infrastructure into containers. The environment is basically one nginx-php-fpm container, which contains nginx configured with php-fpm. This container connects to multiple data containers, which contain the application files for each specific component.

I've seen multiple talks on deploying a single container to Beanstalk, but I'm not sure how I would deploy an environment like this. Locally the environment works: my nginx-php-fpm container uses the --volumes-from flag to mount a data container.

How would I create the same environment on Beanstalk? I can't find an option to mount volumes from another container. Also, is there a good platform that handles Docker orchestration yet?




Multilib version problems found trying to install PhantomJS on AWS ec2

I'm trying to install PhantomJS on my Linux EC2 instance by following this tutorial: http://ift.tt/1I7YD8c

Unfortunately, I'm getting a multilib version problem.
Any tips on how to fix this problem? I've already tried running package-cleanup --cleandupes.

Error stack

---> Package nss-softokn-freebl.i686 0:3.16.2.3-9.36.amzn1 will be installed
--> Finished Dependency Resolution
Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:

         1. You have an upgrade for fontconfig which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of fontconfig of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude fontconfig.otherarch ... this should give you an error
            message showing the root cause of the problem.

         2. You have multiple architectures of fontconfig installed, but
            yum can only see an upgrade for one of those architectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.

         3. You have duplicate versions of fontconfig installed already.
            You can use "yum check" to get yum show these errors.

       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).

       Protected multilib versions: fontconfig-2.8.0-5.8.amzn1.i686 != fontconfig-2.10.95-7.1.ll1.x86_64




How to set networkaddress.cache.ttl in scala?

I need to set the DNS TTL in Scala for my Elastic Beanstalk application in AWS. What are the options for setting java.security properties in Scala? Can this be done through config files, or should it be done only at runtime?




Setting hive properties in Amazon EMR?

I'm trying to run a Hive query using Amazon EMR, and am trying to get Apache Tez to work with it too, which from what I understand requires setting the hive.execution.engine property to tez, according to the Hive site.

I understand that Hive properties can usually be set with set hive.{...}, or in hive-site.xml, but I don't know how either of those interacts with, or can be done in, Amazon EMR.

So: is there a way to set Hive Configuration Properties in Amazon EMR, and if so, how?

Thanks!




What are the possible use cases for Amazon SQS or any Queue Service?

So I have been trying to get my hands on Amazon's AWS, since my company's whole infrastructure is based on it.

One component I have never been able to understand properly is the Queue Service. I have searched Google quite a bit, but I haven't been able to get a satisfactory answer. I think a cron job and a queue service are somewhat similar; correct me if I am wrong.

So what exactly does SQS do? As far as I understand, it stores simple messages to be used by other components in AWS to do tasks, and you can send messages to trigger that.

In the question "Can someone explain to me what Amazon Web Services components are used in a normal web service?", the answer mentioned they used SQS to queue tasks they want performed asynchronously. Why not just give a message back to the user and do the processing later on? Why wait for SQS to do its stuff?

Also, let's say I have a web app which allows users to schedule some daily tasks; how would SQS fit into that?
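
To make the decoupling concrete, here is a minimal sketch with boto: the web request drops a message describing the work onto a queue and returns immediately, and a separate worker process polls the queue and does the slow part. The queue name and payload are made up.

import json
import boto.sqs
from boto.sqs.message import Message

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.create_queue('email-jobs')  # returns the existing queue if it already exists

# Producer side (e.g. inside a web request): enqueue the task and respond right away.
msg = Message()
msg.set_body(json.dumps({'action': 'send_welcome_email', 'user_id': 42}))
queue.write(msg)

# Consumer side (a separate worker process): poll and process.
for received in queue.get_messages(num_messages=10):
    task = json.loads(received.get_body())
    print(task)
    queue.delete_message(received)  # delete only after the work succeeded

A scheduled daily task would typically be a cron job (or other scheduled process) that enqueues messages like these, with SQS buffering the work for however many workers are available.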




Unable to serve static media from S3 with Django

I need to use Amazon S3 to serve my static and media files to my Django project.

However, I am facing a lot of issues with that. First, my code:

s3utils.py

from storages.backends.s3boto import S3BotoStorage

class FixedS3BotoStorage(S3BotoStorage):
    def url(self, name):
        url = super(FixedS3BotoStorage, self).url(name)
        if name.endswith('/') and not url.endswith('/'):
            url += '/'
        return url

StaticS3BotoStorage = lambda: FixedS3BotoStorage(location='static')
MediaS3BotoStorage = lambda: FixedS3BotoStorage(location='media')

In settings.py

DEFAULT_FILE_STORAGE = 'SpareGuru.s3utils.MediaS3BotoStorage'
STATICFILES_STORAGE = 'SpareGuru.s3utils.StaticS3BotoStorage'

AWS_HOST = "s3-ap-southeast-1.amazonaws.com"
AWS_ACCESS_KEY_ID = 'xx'
AWS_SECRET_ACCESS_KEY = 'yy'
AWS_STORAGE_BUCKET_NAME = 'zz'

S3_URL = 'http://%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
MEDIA_DIRECTORY = '/media/'
STATIC_DIRECTORY = '/static/'

STATIC_URL = "/static/"

MEDIA_URL = "/media/"

STATIC_ROOT = S3_URL + STATIC_DIRECTORY
COMPRESS_ROOT = STATIC_ROOT
MEDIA_ROOT = S3_URL + MEDIA_DIRECTORY

Here are the issues I face:

  1. When running ./manage.py collectstatic, it starts to upload the files to S3 and after a couple of files, I get Broken Pipe error.

  2. When trying to run the webpage, I get the error: 'http://ift.tt/1IOXqGg' isn't accessible via COMPRESS_URL ('/static/') and can't be compressed.

No idea what's going on here.

To be clearer, my bucket policy is:

{
"Version": "2008-10-17",
"Statement": [
    {
        "Sid": "AllowPublicRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": "*"
        },
        "Action": "s3:GetObject",
        "Resource": [
            "arn:aws:s3:::zz/*",
            "arn:aws:s3:::zz"
        ]
    }
]
}




AWS Kinesis Consumer Python 3.4 Boto

I am trying to build a Kinesis consumer script using Python 3.4; below is an example of my code. I want the records to be saved to a local file that I can later push to S3:

from boto import kinesis
import time
import json

# AWS Connection Credentials
aws_access_key = 'your_key'
aws_access_secret = 'your_secret key'

# Selected Kinesis Stream
stream = 'TwitterTesting'

# Aws Authentication
auth = {"aws_access_key_id": aws_access_key, "aws_secret_access_key": aws_access_secret}
conn = kinesis.connect_to_region('us-east-1',**auth)

# Targeted file to be pushed to S3 bucket
fileName = "KinesisDataTest2.txt"
file = open("C:\\Users\\csanders\\PycharmProjects\\untitled\\KinesisDataTest.txt", "a")

# Describe stream and get shards
tries = 0
while tries < 10:
    tries += 1
    time.sleep(1)
    response = conn.describe_stream(stream)
    if response['StreamDescription']['StreamStatus'] == 'ACTIVE':
        break
else:
    raise TimeoutError('Stream is still not active, aborting...')

# Get Shard Iterator and get records from stream
shard_ids = []
stream_name = None
if response and 'StreamDescription' in response:
    stream_name = response['StreamDescription']['StreamName']
    for shard_id in response['StreamDescription']['Shards']:
        shard_id = shard_id['ShardId']
        shard_iterator = conn.get_shard_iterator(stream,
        shard_id, 'TRIM_HORIZON')
        shard_ids.append({'shard_id': shard_id, 'shard_iterator': shard_iterator['ShardIterator']})
        tries = 0
        result = []
        while tries < 100:
            tries += 1
            response = json.load(conn.get_records(shard_ids, 100))
            shard_iterator = response['NextShardIterator']
            if len(response['Records'])> 0:
                for res in response['Records']:
                    result.append(res['Data'])
                    print(result, shard_iterator)

For some reason when I run this script I get the following error each time:

Traceback (most recent call last):
  File "C:/Users/csanders/PycharmProjects/untitled/Get_records_Kinesis.py",  line 57, in <module>
    response = json.load(conn.get_records(shard_ids, 100))
  File "C:\Python34\lib\site-packages\boto-2.38.0-py3.4.egg\boto\kinesis\layer1.py", line 327, in get_records
    body=json.dumps(params))
  File "C:\Python34\lib\site-packages\boto-2.38.0- py3.4.egg\boto\kinesis\layer1.py", line 874, in make_request
    body=json_body)
boto.exception.JSONResponseError: JSONResponseError: 400 Bad Request
{'Message': 'Start of list found where not expected', '__type':   'SerializationException'}

My end goal is to eventually kick this data into an S3 bucket. I just need to get these records to return and print first. Any suggestions and advice would be great; I am still new to Python and at a complete loss. The data going into the stream is JSON-dumped Twitter data sent using the "put_record" function. I can post that code too if needed.

Thanks!!
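
For comparison, a minimal sketch of how get_records is usually called: it takes a single shard-iterator string (not a list of dicts) and already returns a parsed dict, so no json.load is needed. The stream and region names match the question; the rest is illustrative.

import time
from boto import kinesis

conn = kinesis.connect_to_region('us-east-1')
shards = conn.describe_stream('TwitterTesting')['StreamDescription']['Shards']

for shard in shards:
    # One iterator per shard; get_records expects this plain string.
    iterator = conn.get_shard_iterator('TwitterTesting', shard['ShardId'],
                                       'TRIM_HORIZON')['ShardIterator']
    for _ in range(100):
        response = conn.get_records(iterator, limit=100)  # already a dict
        for record in response['Records']:
            print(record['Data'])
        iterator = response['NextShardIterator']
        time.sleep(0.5)  # stay under the per-shard read limits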




AWS pre-signed URL omitting x-amz-security-token when not used with an STS token

My calls to get a pre-signed URL for a putObject operation on S3 are returning without the x-amz-security-token if I use my default web identity credentials

var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myKey'};
var url = s3.getSignedUrl('getObject', params);
console.log("The URL is", url);

returns

http://ift.tt/1hb5pkQ

However, if I use a Cognito/STS token and set the AWS credentials to that, I do get the security token.

http://ift.tt/1hb5rJe

I've checked the default web role and identity, and they have put permissions on the bucket. What am I doing wrong? What should I be checking, either in the code or in the AWS configuration, to sort out why it's not returning a working S3 signed URL?




How to run que-rails in aws elastic-beanstalk in async with puma

I am using the que gem for sending emails asynchronously. I am using Elastic Beanstalk with Puma as the server. As per Puma's documentation, I found out that there are two ways to provide a custom config to Puma (both of which I have tried):

  1. putting config/puma.rb
  2. putting config/puma/production.rb

What I want to achieve is this. I did not find any way to override on_worker_boot apart from this. What else I have tried: writing a custom script (.ebextensions) to kill the existing que:work process and start it again after the environment loads:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_restart_que.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      set -xe
      # Loading environment data
      EB_SCRIPT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k script_dir)
      EB_SUPPORT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k support_dir)
      EB_APP_USER=$(/opt/elasticbeanstalk/bin/get-config container -k app_user)
      EB_APP_CURRENT_DIR=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_dir)

      . $EB_SUPPORT_DIR/envvars
      . $EB_SCRIPT_DIR/use-app-ruby.sh

      cd $EB_APP_CURRENT_DIR

      su -s /bin/bash -c "pkill -f que:work" $EB_APP_USER

      echo 'REAPED que:work processes.'

      su -s /bin/bash -c "bundle exec rake que:work" $EB_APP_USER

But no luck so far. Are there any other alternatives to set Que.mode = :async after Puma starts?




How can I package or install an entire program to run in an AWS Lambda function

If this is a case of using Lambda entirely the wrong way, please let me know.

I want to install Scrapy into a Lambda function and invoke the function to begin a crawl. My first problem is how to install it so that all of the paths are correct. I installed the program using the directory to be zipped as its root, so the zip contains all of the source files and the executable. I am basing my efforts on this article. In the line it says to include at the beginning of my function, where does the "process" variable come from? I have tried:

var process = require('child_process');
var exec = process.exec;
process.env['PATH'] = process.env['PATH'] + ':' + 
process.env['LAMBDA_TASK_ROOT']

but I get the error,

"errorMessage": "Cannot read property 'PATH' of undefined",
"errorType": "TypeError",

Do I need to include all of the library files, or just the executable from /usr/lib ? How do I include that one line of code the article says I need?

Edit: I tried moving the code into a child_process.exec, and received the error

"errorMessage": "Command failed: /bin/sh: process.env[PATH]: command not found\n/bin/sh: scrapy: command not found\n"

Here is my current, entire function

console.log("STARTING");
var process = require('child_process');
var exec = process.exec;

exports.handler = function(event, context) {    
    //Run a fixed Python command.
    exec("process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT']; scrapy crawl backpage2", function(error, stdout) {
        console.log('Scrapy returned: ' + stdout + '.');
        context.done(error, stdout);
    });

};




Only show IAM user the icons they need on management console landing page

Is it possible to only provide menu links / icons for the AWS services that the IAM user has access to?

At the moment, my user only has S3 access. However, when he logs in to the management console he still gets to see all the icons (such as EC2, Glacier, CloudFront, etc.), although when he clicks on EC2 it says he has no privileges to use or view details. It is just too cluttered for a user who only has access to S3.

Is it possible to do something about this? Thanks.




Circular dependency in aws stack because of private ip [AWS CloudFormation]

I am using a CloudFormation template to create my EC2 instance. In the UserData section I need to run a shell file that I have created in the Metadata section. To that shell file I am passing the private IP of the instance as a parameter. To get the private IP I am using this:

{
    "Fn::GetAtt" : [ "ConsoleServer", "PrivateIp" ]
},      

I ask the wait handler to wait while my user data gets executed, but the wait condition is dependent on the EC2 instance that I am trying to configure.

This is causing a circular dependency, but I cannot figure out how to get the private IP of the instance some other way.

Below are the parts that matter. Metadata:

 "Resources": {
        "ConsoleServer": {
            "Type": "AWS::EC2::Instance",
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "config": {
                        "files": {
                            "/usr/local/share/deployment-script.sh": {
                                "mode": "755",
                                "owner": "ec2-user",
                                "group": "ec2-user",
                                "content": {
                                    "Fn::Join": [
                                        "",
                                        [
                                            "#!/bin/bash\n",
                                            "sh master.sh ",
                                            {
                                                "Ref": "S3ConsoleZip"
                                            }, " ",
                                            {
                                                "Fn::GetAtt" : [ "ConsoleServer", "PrivateIp" ]
                                            },

And this is my UserData section, followed by the wait handler:

 "UserData": {
                    "Fn::Base64": {
                        "Fn::Join": [
                            "",
                            [
                                "#!/bin/bash -v\n",
                                "sudo su",
                                "\n",
                                "chmod -R 775 /usr/local/share\n",

                                "yum update -y aws-cfn-bootstrap\n",
                                "## Error reporting helper function\n",
                                "function error_exit\n",
                                "{\n",
                                "   /opt/aws/bin/cfn-signal -e 1 -r \"$1\" '",
                                {
                                    "Ref": "WaitHandleServer"
                                },
                                "'\n",
                                "   exit 1\n",
                                "}\n",
                                "## Initialize CloudFormation bits\n",
                                "/opt/aws/bin/cfn-init -v -s ",
                                {
                                    "Ref": "AWS::StackName"
                                },
                                " -r ConsoleServer",
                                "   --region ",
                                {
                                    "Ref": "AWS::Region"
                                },
                                " > /tmp/cfn-init.log 2>&1 || error_exit $(</tmp/cfn-init.log)\n",
                                "cd /usr/local/share\n",
                  *********              "sh deployment-script.sh >> /home/ec2-user/deployment-script.log\n",
                                "/opt/aws/bin/cfn-signal",
                                " -e 0",
                                " '",
                                {
                                    "Ref": "WaitHandleServer"
                                },
                                "'",
                                "\n",
                                "date > /home/ec2-user/stoptime"
                            ]
                        ]
                    }
                }
            }
        },
        "WaitHandleServer": {
            "Type": "AWS::CloudFormation::WaitConditionHandle"
        },
        "WaitConditionServer": {
            "Type": "AWS::CloudFormation::WaitCondition",
            "DependsOn": "ConsoleServer",
            "Properties": {
                "Handle": {
                    "Ref": "WaitHandleServer"
                },
                "Timeout": "1200"
            }
        }
    },

I have added ********* where the call is being made from the UserData section.
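
One common way to break the self-reference is to drop the Fn::GetAtt entirely and have the script look up its own private IP from the EC2 instance metadata service at run time; the same lookup can be done with curl inside deployment-script.sh. A minimal sketch of the idea in Python:

# Runs on the instance itself; 169.254.169.254 is the EC2 metadata service.
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

private_ip = urlopen(
    'http://169.254.169.254/latest/meta-data/local-ipv4', timeout=2
).read()
print(private_ip)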




React IE8 works locally, not in production

We started introducing React to our large Django project to handle frontend complexity. So far, so good, but we ran into a problem.

React does not work in production on IE8. Locally it works fine on IE8. I've included es5-shim and es5-sham and I do see them in dev tools in production (included in the header, before React and code that's using React). But still, I get this error, like there's no shim:

SCRIPT438: Object doesn't support property or method 'isArray' 

I also got:

SCRIPT438: Object doesn't support property or method 'bind'

after which I included script mentioned in this post:

How to handle lack of JavaScript Object.bind() method in IE 8

However, after that I get:

SCRIPT5023: Function does not have a valid prototype object 

And I'm still getting the old errors. Again, locally it's working fine in IE8 so I'm guessing it's not the code itself that is the problem. Our app uses AWS CloudFront (but I do see the static .js files included in the html).

Any ideas what might be happening here?




Xcode - Swift - AWS - List all objects in S3 bucket

I am trying to figure out how to list all the objects from an AWS S3 bucket in Swift (in Xcode). I can't seem to find the information anywhere on the internet, but maybe I didn't look hard enough. If anyone could refer me to the code that will allow me to do this that would be great.




AWS CLI filter OR logic

I am trying to retrieve a list of servers using the AWS CLI tools. I have two groups of servers: one group will have the string "mind" in the Name tag, and the other will have the string "intelligence" in the Name tag.

I can filter the output of DescribeInstances using wildcards, but can I return instances whose Name contains mind OR intelligence?

Currently I have to run the command twice, replacing the filter value.
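
For illustration, the same OR behaviour can be seen from Python with boto: giving a single filter key several values matches instances whose Name tag matches any of them, since the EC2 API ORs multiple values for one filter (region and patterns copied from the question). The CLI should behave the same way when several comma-separated entries are passed in Values for one filter.

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Multiple values for one filter key are ORed together by the EC2 API.
reservations = conn.get_all_reservations(
    filters={'tag:Name': ['*mind*', '*intelligence*']}
)
instances = [i for r in reservations for i in r.instances]
for instance in instances:
    print(instance.id, instance.tags.get('Name'))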




AWS Cognito - offline data availability

I am building a PhoneGap app and use AWS Cognito to store the user data. The description of Cognito says that the data is available offline. This does not work in my code:

function getCognitoData(){
 var params = {
  IdentityPoolId: COGNITO_IDENTITY_POOL_ID,
  Logins: {
   'graph.facebook.com': FACEBOOK_TOKEN
  }  
 };
 AWS.config.region = AWS_REGION;
 AWS.config.credentials = new AWS.CognitoIdentityCredentials(params);
 AWS.config.credentials.get(function(err) {
  if (err) {
   console.log("Error: "+err);
   return;
  }
  console.log("Cognito Identity Id: " + AWS.config.credentials.identityId);

  var syncClient = new AWS.CognitoSyncManager();

  syncClient.openOrCreateDataset('myDataset', function(err, dataset) {
   dataset.get('myKey', function(err, value) {
    console.log(value, err);
   }); 
  });
 });
}

The AWS credentials for the identity pool and the Facebook token are set beforehand and work in online mode, but I don't get the dataset data when offline.

Am I doing something wrong, or is it generally not possible to get the Cognito dataset data while offline? I read that the data is actually held in local storage.

I am using the current AWS SDK (release v2.1.42) and Amazon Cognito JS.




Upload objects into a folder on Amazon S3

I am trying to upload objects into a folder. I know there is no concept of a real folder in Amazon S3; I am doing this for file management purposes.

For that I am using POST uploads to Amazon S3 (http://ift.tt/1MBhkVN). It is working fine, but the problem is that it only uploads the file to the bucket root, like:

bucket-demo
    -file-one.jpg
    -file-tow.png

but what I am trying to do is:

bucket-demo
     /abc/                 <--folder
         file-one.png
         file tow.png

Now, I tried to put the directory name after the action link, but it doesn't work.
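
The "folder" is just a prefix on the object key, so whatever performs the upload only needs to include the prefix in the key name. A minimal sketch with boto (bucket and file names taken from the question):

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('bucket-demo')

# The slash in the key name is all that "creates" the folder.
key = bucket.new_key('abc/file-one.png')
key.set_contents_from_filename('file-one.png')

With a browser-based POST upload the same idea applies: the key field of the form carries the prefix, for example a key of abc/${filename}.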




Creating Azure or AWS instance by Chef APIs in .Net

I want to use the Chef APIs from .NET to create an instance in Azure or Amazon Web Services.




Service Discovery using chef recipe

I am working in a clustered environment, where I have multiple clusters and each cluster has multiple nodes. I have to announce when a node's service is up, and the master node has to discover the newly available node.

I am announcing a new node as soon as it becomes available and searching for the node using the Chef search resource. I am using the open-source, on-premises Chef server, and there seems to be an issue with it: the results are ambiguous and inconsistent.

What are the alternative ways to achieve this? Kindly help me out.

Thank you




AWS: use existing domain name for Elastic Beanstalk?

I already have a domain name, let's call it www.example.com, and I want my Elastic Beanstalk instance to use that domain name. How can I set it up?
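
A common approach is to point a DNS record at the environment's *.elasticbeanstalk.com CNAME. If the domain is hosted in Route 53, a rough sketch with boto looks like this (the zone and environment names are placeholders):

import boto

conn = boto.connect_route53()
zone = conn.get_zone('example.com.')

# Point www at the Elastic Beanstalk environment's CNAME.
zone.add_cname('www.example.com.', 'my-env.us-west-2.elasticbeanstalk.com.', ttl=300)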




Hosting a Laravel application in AWS Elastic Beanstalk with an RDS DB instance

I have successfully deployed my application on Elastic Beanstalk, but when I call the URL it shows the following (I have exported my tables to the RDS DB instance):

ErrorException in Connector.php line 47: SQLSTATE[HY000] [2002] Connection timed out (View: /var/app/current/local/resources/views/themes/default1/client/cart.blade.php)

My database.php configuration is:

'mysql' => [
        'driver'    => 'mysql',
        'host'      => 'rds.cvp31y7ebg1x.us-west-2.rds.amazonaws.com:3306',
        'database'  => 'rdsdb',
        'username'  =>'rdsuser',
        'password'  => '******',
        'charset'   => 'utf8',
        'collation' => 'utf8_unicode_ci',
        'prefix'    => '',
        'strict'    => false,
    ],

Please help me to figure out this problem.

Thanks.




Cloudwatch integration with xmatters

I want to integrate CloudWatch with xMatters. I know I can use the xMatters integration agent to integrate xMatters with other apps, but how do I access CloudWatch alarms? Is there any Java API which I can use to access CloudWatch alarms and then redirect them to xMatters?

Thanks




I am trying to create a database connector. The connector box is a Linux rhel511 on AWS. I am getting the error "unable to connect to the database"

I am trying to create a database connector. The connector box is a Linux rhel511 on AWS. I am getting the error "unable to connect to the database".

With similar information, I have already created a database connector for another connector box in our datacenter.




Thursday, July 30, 2015

When I try to deploy Rails files to EC2, "An error occurred while installing pg" is raised

I'd like to deploy my Rails project using Capistrano 3. Could you tell me how to deal with the error? Thank you for your kindness.

When I tried to deploy, I got the following error message.

Deploy

cap production deploy

Error code

bundle stdout: An error occurred while installing pg (0.18.2), and Bundler cannot continue.
Make sure that `gem install pg -v '0.18.2'` succeeds before bundling.

When I typed this:

   [ec2-user@ip-172-31-47-193 ~]$ gem install pg -v '0.18.2'

I got the following error.

Building native extensions.  This could take a while...
ERROR:  Error installing pg:
ERROR: Failed to build gem native extension.

/home/ec2-user/.rbenv/versions/2.2.2/bin/ruby -r ./siteconf20150731-20195-11x65kw.rb extconf.rb
checking for pg_config... no
No pg_config... trying anyway. If building fails, please try again with
 --with-pg-config=/path/to/pg_config
checking for libpq-fe.h... no
Can't find the 'libpq-fe.h header
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers.  Check the mkmf.log file for more details.  You may
need configuration options.

Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/home/ec2-user/.rbenv/versions/2.2.2/bin/$(RUBY_BASE_NAME)
--with-pg
--without-pg
--enable-windows-cross
--disable-windows-cross
--with-pg-config
--without-pg-config
--with-pg_config
--without-pg_config
--with-pg-dir
--without-pg-dir
--with-pg-include
--without-pg-include=${pg-dir}/include
--with-pg-lib
--without-pg-lib=${pg-dir}/lib

extconf failed, exit code 1

Gem files will remain installed in /home/ec2-user/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/pg-0.18.2 for inspection.
Results logged to /home/ec2-user/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/extensions/x86_64-linux/2.2.0-static/pg-0.18.2/gem_make.out




Sending compatible block pointer types with AWSContinuationBlock

I'm getting the following error when trying to run a block with AWSS3TransferManager:

incompatible block pointer types sending 'id (^)(void)' to parameter of type 'AWSContinuationBlock' (aka 'id (^)(AWSTask *__strong)')

I believe this is due to different block types where I am currently not returning any value while it expects an AWSTask, but I'm not sure how to return an AWSTask.

_uploadRequest = [AWSS3TransferManagerUploadRequest new];

AWSS3TransferManager *transferManager = [AWSS3TransferManager defaultS3TransferManager];
[[transferManager upload:_uploadRequest] continueWithExecutor:[BFExecutor mainThreadExecutor] withBlock:^id(BFTask *task){

    if (task.error){
        NSLog(@"%@",task.error);
    }
}];




AWS DynamoDB error "'blob' value should be a NSData type."

I can't solve this bug that I keep hitting in my program. What does this error mean?

Thanks.




400 Bad Request when using solrcloud

Can anybody help me? I want to create a SolrCloud cluster on AWS using this code: http://ift.tt/1GyTAQl. I try to build using the command [fab demo:demo1,n=1] and get the error below. I'm getting this while launching instances after connecting to the Amazon server.

ERROR: boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request

Appreciate your help

thanks in advance

root@adminuser-VirtualBox:/opt/febric/solr-scale-tk# fab demo:demo1,n=1
Going to launch 1 new EC2 m3.medium instances using AMI ami-8d52b9e6
Setup Instance store BlockDeviceMapping: /dev/sdb -> ephemeral0
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/fabric/main.py", line 743, in main
    *args, **kwargs
  File "/usr/local/lib/python2.7/dist-packages/fabric/tasks.py", line 427, in execute
    results['<local-only>'] = task.run(*args, **new_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/fabric/tasks.py", line 174, in run
    return self.wrapped(*args, **kwargs)
  File "/opt/febric/solr-scale-tk/fabfile.py", line 1701, in demo
    ec2hosts = new_ec2_instances(cluster=demoCluster, n=n, instance_type=instance_type)
  File "/opt/febric/solr-scale-tk/fabfile.py", line 1163, in new_ec2_instances
    placement_group=placement_group)
  File "/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 973, in run_instances
    verb='POST')
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1208, in get_object
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidParameterValue</Code><Message>Value () for parameter groupId is invalid. The value cannot be empty</Message></Error></Errors><RequestID>ca03b6d4-ce0e-46d3-99e3-ccad4a43c4ff</RequestID></Response>



run mrjob on Amazon EMR, t2.micro not supported

I tried to run an mrjob script on Amazon EMR. It worked well when I used a c1.medium instance; however, it gave an error when I changed the instance to t2.micro. The full error message is shown below.

C:\Users\Administrator\MyIpython>python word_count.py -r emr 111.txt
using configs in C:\Users\Administrator.mrjob.conf
creating new scratch bucket mrjob-875a948553aab9e8
using s3://mrjob-875a948553aab9e8/tmp/ as our scratch dir on S3
creating tmp directory c:\users\admini~1\appdata\local\temp\word_count.Administrator.20150731.013007.592000
writing master bootstrap script to c:\users\admini~1\appdata\local\temp\word_count.Administrator.20150731.013007.592000\b.py

PLEASE NOTE: Starting in mrjob v0.5.0, protocols will be strict by default. It's recommended you run your job with --strict-protocols or set up mrjob.conf as described at http://ift.tt/1IvHDtU

creating S3 bucket 'mrjob-875a948553aab9e8' to use as scratch space
Copying non-input files into s3://mrjob-875a948553aab9e8/tmp/word_count.Administrator.20150731.013007.592000/files/
Waiting 5.0s for S3 eventual consistency
Creating Elastic MapReduce job flow
Traceback (most recent call last):
  File "word_count.py", line 16, in <module>
    MRWordFrequencyCount.run()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\job.py", line 461, in run
    mr_job.execute()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\job.py", line 479, in execute
    super(MRJob, self).execute()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\launch.py", line 153, in execute
    self.run_job()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\launch.py", line 216, in run_job
    runner.run()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\runner.py", line 470, in run
    self._run()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\emr.py", line 881, in _run
    self._launch()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\emr.py", line 886, in _launch
    self._launch_emr_job()
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\emr.py", line 1593, in _launch_emr_job
    persistent=False)
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\emr.py", line 1327, in _create_job_flow
    self._job_name, self._opts['s3_log_uri'], **args)
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\retry.py", line 149, in call_and_maybe_retry
    return f(*args, **kwargs)
  File "F:\Program Files\Anaconda\lib\site-packages\mrjob\retry.py", line 71, in call_and_maybe_retry
    result = getattr(alternative, name)(*args, **kwargs)
  File "F:\Program Files\Anaconda\lib\site-packages\boto\emr\connection.py", line 581, in run_jobflow
    'RunJobFlow', params, RunJobFlowResponse, verb='POST')
  File "F:\Program Files\Anaconda\lib\site-packages\boto\connection.py", line 1208, in get_object
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.EmrResponseError: EmrResponseError: 400 Bad Request
Sender ValidationError: Instance type 't2.micro' is not supported (request id: c3ee1107-3723-11e5-8d8e-f1011298229d)

This is my config file:

runners:
  emr:
    aws_access_key_id: xxxxxxxxxxx
    aws_secret_access_key: xxxxxxxxxxxxx
    aws_region: us-east-1
    ec2_key_pair: EMR
    ec2_key_pair_file: C:\Users\Administrator\EMR.pem
    ssh_tunnel_to_job_tracker: false
    ec2_instance_type: t2.micro
    num_ec2_instances: 2




WARNING: psql major version 9.3, server major version 9.4

I'm trying to run psql commands to import data into Redshift as specified here: http://ift.tt/1mQm6Sw. Our application is hosted on Heroku. I had a script and it was working perfectly fine, uploading the data without an error.

Then we added a PostgreSQL database under the same app but for a different section of the code. It was version 9.4. Ever since then, I have not been able to import into our Redshift table using the exact same script. This is the error that I see:

Error: You must install at least one postgresql-client-<version> package.

Interestingly, I am able to connect to the psql client on Heroku by running heroku pg:psql -a myapp. I do, however, get this warning message when psql first starts:

---> Connecting to DATABASE_URL
psql (9.3.4, server 9.4.4)
WARNING: psql major version 9.3, server major version 9.4.
         Some psql features might not work.

Is this an error with the psql client or with Heroku? Do I need to upgrade psql to 9.4, and if so, how do I do that?

Any help is much appreciated!




Amazon EC2 Windows Ubuntu

I am new to AWS EC2, so I am making this post to ask some questions.

1) Right now, I am considering running some scripts on the server. I usually use two tools: one is software that can only be used on Windows, and the other is just Python. Should I open two instances, one for Windows and one for Ubuntu? Or just one Windows instance with Git Bash installed? I want to be cost and performance efficient.

2) I am not going to use the scripts very often (usually 2-3 hours per day or 10-12 hours per week). Therefore, is it easy to schedule those jobs automatically across the instances? I mean, can the instances automatically turn off and restart at the appropriate times?

3) Some of the scripts involve web scraping. I am also wondering if it is OK to switch the IP address every time I run the script. Mainly, this is for the Python script.

Thanks.




Totally screwed? EC2 instance and set PAM to 'no' now I can't SSH in. Any ideas?

Totally screwed? On my EC2 instance I set PAM to 'no', and now I can't SSH in. Any ideas?

I was having some issues with SFTP and that's the only thing I changed.

Thanks in advance!




Python Process Terminated due to "Low Swap" When Writing To stdout for Data Science

I'm new to python so I apologize for any misconceptions.

I have a python file that needs to read/write to stdin/stdout many many times (hundreds of thousands) for a large data science project. I know this is not ideal, but I don't have a choice in this case.

After about an hour of running (close to halfway completed), the process gets terminated on my mac due to "Low Swap" which I believe refers to lack of memory. Apart from the read/write, I'm hardly doing any computing and am really just trying to get this to run successfully before going any farther.

My question: does writing to stdin/stdout a few hundred thousand times use up that much memory? The file basically needs to loop through some large lists (15k ints) and do that a few thousand times. I've got 500 gigs of hard drive space and 12 gigs of RAM and am still getting the errors. I even spun up an EC2 instance on AWS and STILL had memory errors. Is it possible that I have some sort of memory leak in the script even though I'm hardly doing anything? Is there any way I can reduce the memory usage to run this successfully?

Appreciate any help.




Run EMR job with output results in another AWS account S3 bucket

I'm trying to run an EMR job in Java where the input files are in an S3 bucket for one account and the output results are written to an S3 bucket for a different account. I understand you can give read/write permissions from one account to the other, but how would I specify that the input and output are in different directories?

For example, you might specify the input path as "s3://{bucket-name}/{input-folder}/" and the output path as "s3://{bucket-name}/{output-folder}/". How would you specify a bucket in another account?




Can't access docker image from EC2 server

I've got a standard EC2 Ubuntu server running a basic LAMP stack. I've installed Docker and I'm trying to hook up a solr container (http://ift.tt/1rOg03m specifically).

I have the docker image running:

ce32c020e7da  makuk66/docker-solr  "/bin/bash -c '/opt/  10 minutes ago  Up 10 minutes  0.0.0.0:8983->8983/tcp  solr5

According to the readme of the solr image I should be able to access the solr admin panel on port 8983.

Using the public IP of the EC2 server on port 80, I land on a web page (expected, as Apache is running), but when trying to access it on port [..]:8983 I get a 504 Gateway Timeout.

I've allowed all incoming connections on port 8983 for the security group that the EC2 server is a part of, but still no luck...

There isn't anything further I should need to do here, is there?




HostGator to Amazon PHP send mail not working

After I transferred hosts from HostGator to AWS Ubuntu, none of my mail sending works. For example, when someone registers, it is supposed to send an automated email to that user. I am using X-Mailer PHP. I don't know what is causing the problem. Maybe I didn't install something on the server? Please help. Thanks.




Why use Developer identity or third party to authenticate via backend

In the iOS SDK S3TransferManager sample provided by Amazon Web Services, it looks as if I can access AWS resources like S3 without having to go through authentication providers like Facebook or Google. So what is the purpose of having my own developer identity, or of authenticating through the backend instead of on mobile, if I'm using Parse? For example, I believe Parse uses front-end authentication (on mobile) to authenticate users rather than using Cloud Code (http://ift.tt/1eDSI03):

"Cloud Code is easy to use because it's built on the same JavaScript 
SDK that powers thousands of apps. The only difference is that this  
code runs in the Parse Cloud rather than running on a mobile device."

Couldn't I just authenticate users with Parse on the front end and, when that succeeds, just copy and paste the code below into the success block?

// Authenticate with Parse and if authentication succeeded execute code below
AWSCognitoCredentialsProvider *credentialsProvider = [[AWSCognitoCredentialsProvider alloc]
                                                      initWithRegionType:AWSRegionUSEast1
                                                      identityPoolId:@"identity-pool"];

AWSServiceConfiguration *configuration = [[AWSServiceConfiguration alloc] initWithRegion:AWSRegionUSEast1 credentialsProvider:credentialsProvider];

[AWSServiceManager defaultServiceManager].defaultServiceConfiguration = configuration;

and still have access to my AWS resources. That way I don't need to use the AWSCredentialsProvider protocol, which needs the access key, secret key, and session key sent to my app from the backend. Plus, it seems like the iOS SDK manages allocating session tokens by itself (automatically) on mobile. Is my thinking correct, or am I missing something? I'm still new to this, so sorry if I sound ignorant.




Django haystack with Elasticsearch cannot find database when rebuilding index

I added Haystack to a Django project that was already successfully deployed to an AWS Elastic Beanstalk instance. Haystack is working locally, but in the AWS environment, when I run rebuild_index, I get this error:

Failed to clear Elasticsearch index: ConnectionError(('Connection aborted.', error(111, 'Connection refused'))) caused by: ProtocolError(('Connection aborted.', error(111, 'Connection refused')))
All documents removed.
ERROR:root:Error updating api using default 
Traceback (most recent call last):
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 188, in handle_label
    self.update_backend(label, using)
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 219, in update_backend
    total = qs.count()
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/models/query.py", line 318, in count
    return self.query.get_count(using=self.db)
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py", line 464, in get_count
    number = obj.get_aggregation(using, ['__count'])['__count']
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py", line 445, in get_aggregation
    result = compiler.execute_sql(SINGLE)
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 838, in execute_sql
    cursor = self.connection.cursor()
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 162, in cursor
    cursor = self.make_debug_cursor(self._cursor())
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 135, in _cursor
    self.ensure_connection()
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
    self.connect()
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/utils.py", line 97, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 130, in ensure_connection
    self.connect()
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/backends/base/base.py", line 119, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 176, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/opt/python/run/venv/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
OperationalError: could not connect to server: Connection refused
        Is the server running on host "localhost" (127.0.0.1) and accepting
        TCP/IP connections on port 5432?

It appears that Haystack is trying to connect to the database specified in my local settings, instead of the Postgres RDS I have specified specifically for my AWS ElasticBeanstalk environment even though the 'DATABASE' setting works on AWS for ./manage.py loaddata.

    if 'RDS_DB_NAME' in os.environ:
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': os.environ['RDS_DB_NAME'],
                'USER': os.environ['RDS_USERNAME'],
                'PASSWORD': os.environ['RDS_PASSWORD'],
                'HOST': os.environ['RDS_HOSTNAME'],
                'PORT': os.environ['RDS_PORT'],
            }
        }
    else:
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.postgresql_psycopg2',
                'NAME': 'hhwc',
                'HOST': 'localhost',
                'PORT': '5432',
            }
        }

Is there something wrong in this 'DATABASE' setting, or does Haystack look somewhere else to find the location of the database it should connect to for generating indexes?

Any help troubleshooting this is welcome. Thanks in advance.




Change loop output to one line per entry instead of a group

I am trying to get the desired output for the rest of my script to work. Currently, when I assign a variable called "st", I get the output below, but note that on one of the lines I get a CIDR block of "[2.2.2.2/32, 12.12.12.12/32, 13.13.13.13/32, 14.14.14.14/32, 15.15.15.15/32]". How can I break this down so I get the desired output (see the very end for this)?

>>> import boto.ec2
>>> fts = {'vpc-id': 'vpc-1895327d', 'group-name': 'Full blown SG test'}
>>> sgs = boto.ec2.connect_to_region("us-east-1", aws_access_key_id='XXXXXXXX', aws_secret_access_key='XXXXXX').get_all_security_groups(filters=fts)

>>> for sg in sgs:
   for rule in sg.rules:
       st = sg, sg.id, "inbound:", rule, " source:", rule.grants
       print st

(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [sg-c65a20a3-995635159130])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [sg-99c4befc-995635159130])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(110-110), ' source:', [9.9.9.9/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(0-443), ' source:', [4.4.4.4/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(443-443), ' source:', [0.0.0.0/0])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:icmp(-1--1), ' source:', [3.3.3.3/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(3306-3306), ' source:', [5.5.5.5/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [sg-35568d51-995635159130])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(0-65535), ' source:', [1.1.1.1/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(389-389), ' source:', [10.10.10.10/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [2.2.2.2/32, 12.12.12.12/32, 13.13.13.13/32, 14.14.14.14/32, 15.15.15.15/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:udp(53-53), ' source:', [7.7.7.7/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(25-25), ' source:', [11.11.11.11/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(53-53), ' source:', [8.8.8.8/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(5439-5439), ' source:', [6.6.6.6/32])
>>> 
>>> 
>>> 

I want the final output to be something like below. Note how the big CIDR block is broken down so that it is now on 5 lines instead of 1.

......
......
......
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(389-389), ' source:', [10.10.10.10/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [2.2.2.2/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [12.12.12.12/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [13.13.13.13/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [14.14.14.14/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [15.15.15.15/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:udp(53-53), ' source:', [7.7.7.7/32])
......
......
......

So I thought maybe I could use len() on rule.grants and, if it is greater than 1, build a different "st" variable.

>>> for sg in sgs:
   for rule in sg.rules:
       if len(rule.grants) > 1:
            st = sg, sg.id, "inbound:", rule, " source:", rule.grants[sg]
       else:
            st = sg, sg.id, "inbound:", rule, " source:", rule.grants
       print st

(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [sg-c65a20a3-995635159130])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [sg-99c4befc-995635159130])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(110-110), ' source:', [9.9.9.9/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(0-443), ' source:', [4.4.4.4/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(443-443), ' source:', [0.0.0.0/0])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:icmp(-1--1), ' source:', [3.3.3.3/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(3306-3306), ' source:', [5.5.5.5/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:-1(None-None), ' source:', [sg-35568d51-995635159130])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(0-65535), ' source:', [1.1.1.1/32])
(SecurityGroup:Full blown SG test, u'sg-3ff65858', 'inbound:', IPPermissions:tcp(389-389), ' source:', [10.10.10.10/32])

Traceback (most recent call last):
  File "<pyshell#206>", line 4, in <module>
    st = sg, sg.id, "inbound:", rule, " source:", rule.grants[sg]
TypeError: list indices must be integers, not SecurityGroup
>>> 

Any thoughts on how I can achieve this?
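
One way to get one line per source, sketched against the same boto session as above, is to loop over rule.grants itself rather than indexing into it; wrapping each grant in a single-element list keeps the output format identical to the original:

    for sg in sgs:
        for rule in sg.rules:
            # Print one line per source: iterate over the grants list itself.
            for grant in rule.grants:
                st = sg, sg.id, "inbound:", rule, " source:", [grant]
                print st

The TypeError in the attempt below comes from using the SecurityGroup object as a list index; iterating removes the need to index at all.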




Amazon AWS Elastic Beanstalk service is down

I am not able to access the Amazon AWS Elastic Beanstalk page. I am getting the error page below: Elastic Beanstalk is down

I am also not able to deploy any code to Elastic Beanstalk.




SignatureDoesNotMatch error while using Amazon Web Services SES through HTTP

I am stuck on a SignatureDoesNotMatch error while using AWS SES. I am creating the signature by signing the GMT date with my secret key using HMAC-SHA256 and then converting it to Base64.

Url: http://ift.tt/1DdsbSR

And the input headers are x-amz-date: Thu, 30 Jul 2015 18:15:51 +0000 and X-Amzn-Authorization: AWS3-HTTPS AWSAccessKeyId=,Algorithm=HmacSHA256,Signature=
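
For reference, a rough sketch of how the AWS3-HTTPS signature for SES is typically computed in Python: the string that gets signed must be byte-for-byte the same date value that is sent in the date header (the access key and secret key below are placeholders):

    import base64
    import hashlib
    import hmac
    from email.utils import formatdate

    ACCESS_KEY_ID = 'AKIA...'        # placeholder
    SECRET_KEY = 'my-secret-key'     # placeholder

    # The value that is signed must match the date header sent, exactly.
    date_value = formatdate(usegmt=True)   # e.g. 'Thu, 30 Jul 2015 18:15:51 GMT'

    signature = base64.b64encode(
        hmac.new(SECRET_KEY, date_value, hashlib.sha256).digest())

    headers = {
        'Date': date_value,
        'X-Amzn-Authorization': 'AWS3-HTTPS AWSAccessKeyId=%s, Algorithm=HmacSHA256, Signature=%s'
                                % (ACCESS_KEY_ID, signature),
    }

A mismatch between the date string that is signed and the one actually transmitted (for example a different timezone format, or a header the HTTP client rewrites) is a common cause of SignatureDoesNotMatch with this scheme.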




Rails POST and GET request methods take less time to run after running the first time

I have a Ruby on Rails Application that stores JSON data in a MySQL database through a POST request. I also have accompanying GET routes to access the data in the database. When I first send one of the requests, either through the POST or GET route, the request takes about 1 second. When I send either the same request, or the other request, it then takes about 0.1 seconds, which is significantly faster.

I first thought that the SQL queries were being cached by rails, and this is why the subsequent same request was so much faster, but in the web server console, the requests were being shown again with times next to them (ex. 0.3 ms) and without a "CACHE". In any case, even if it was caching the SQL queries, the queries for the other route are very different with only minimal similar queries, so query caching shouldn't have sped up the other query so much.

I read somewhere that MySQL might be caching the indexes, which would explain why the request is faster when repeated immediately; but if I wait an hour, it is back to the slower speed. Is the time MySQL spends reading the uncached indexes simply not shown in the SQL query log? If I add up the various times (0.1 ms) next to each query, they don't come close to the overall request time (even the total shown at the bottom of the log).

If all of this is correct, and I am seeing the effects of index caching plus a performance penalty for accessing an index that isn't cached, is there a way I can reduce the time for this first request? My request will be sent about every hour or so, and by then the cache will no longer hold my indexes. Anything to even look into to speed up my requests would be appreciated.




AWS ELB (Elastic Load Balancer) sometimes returns 504 (gateway timeout) right away

I am currently switching an application over to Amazon, but I'm noticing that sometimes the response I receive is a 504. Our system is set up so that we have an LB in front of our ELB, and from there requests go straight to Tomcat.

We are timing all our requests in our service, and in the servlet filter we log the response time; it is always less than 1s. We then look at the LB logs and see a 504, and it appears that somehow the ELB is timing out and returning the 504.

Does anyone know why this could happen? Thanks

EDIT: Not sure if it matters but currently we only have 1 instance and it can scale to 3 instances.
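
It may be worth comparing the ELB's idle timeout with the keep-alive settings on the backend; an immediate 504 can also simply mean the ELB had no healthy backend to hand the request to at that moment. If the timeout is the suspect, it can be adjusted through the API; a sketch with boto3 and a hypothetical load balancer name:

    import boto3

    elb = boto3.client('elb', region_name='us-east-1')

    # Raise the idle timeout (default 60s) so the ELB does not abandon the
    # backend connection before Tomcat has answered.
    elb.modify_load_balancer_attributes(
        LoadBalancerName='my-load-balancer',
        LoadBalancerAttributes={'ConnectionSettings': {'IdleTimeout': 120}},
    )

As a rule of thumb, the backend keep-alive timeout should be at least as long as the ELB idle timeout so the ELB never reuses a connection the backend has already closed.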




Tomcat deployment using maven plugin

I am trying to deploy my webapp to Tomcat 7 on an Amazon AWS EC2 instance using Maven from my local system, but I keep getting this error:

[ERROR] Failed to execute goal org.apache.tomcat.maven:tomcat7-maven-plugin:2.2:deploy (default-cli) on project EnrollItWeb: Cannot invoke Tomcat manager: Connection reset by peer: socket write error -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.tomcat.maven:tomcat7-maven-plugin:2.2:deploy (default-cli) on project EnrollItWeb: Cannot invoke Tomcat manager
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
        at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
        at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
        at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
        at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
        at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
        at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Cannot invoke Tomcat manager
        at org.apache.tomcat.maven.plugin.tomcat7.AbstractCatalinaMojo.execute(AbstractCatalinaMojo.java:141)
        at org.apache.tomcat.maven.plugin.tomcat7.AbstractWarCatalinaMojo.execute(AbstractWarCatalinaMojo.java:68)
        at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
        ... 20 more
Caused by: java.net.SocketException: Connection reset by peer: socket write error
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
        at org.apache.http.impl.io.AbstractSessionOutputBuffer.write(AbstractSessionOutputBuffer.java:181)
        at org.apache.http.impl.io.ContentLengthOutputStream.write(ContentLengthOutputStream.java:115)
        at org.apache.tomcat.maven.common.deployer.TomcatManager$RequestEntityImplementation.writeTo(TomcatManager.java:880)
        at org.apache.http.entity.HttpEntityWrapper.writeTo(HttpEntityWrapper.java:89)
        at org.apache.http.impl.client.EntityEnclosingRequestWrapper$EntityWrapper.writeTo(EntityEnclosingRequestWrapper.java:108)
        at org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:117)
        at org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:265)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:203)
        at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:236)
        at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:121)
        at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
        at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.tomcat.maven.common.deployer.TomcatManager.invoke(TomcatManager.java:742)
        at org.apache.tomcat.maven.common.deployer.TomcatManager.deployImpl(TomcatManager.java:705)
        at org.apache.tomcat.maven.common.deployer.TomcatManager.deploy(TomcatManager.java:388)
        at org.apache.tomcat.maven.plugin.tomcat7.deploy.AbstractDeployWarMojo.deployWar(AbstractDeployWarMojo.java:85)
        at org.apache.tomcat.maven.plugin.tomcat7.deploy.AbstractDeployMojo.invokeManager(AbstractDeployMojo.java:82)
        at org.apache.tomcat.maven.plugin.tomcat7.AbstractCatalinaMojo.execute(AbstractCatalinaMojo.java:132)
        ... 23 more
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.

I have added the plugin in pom.xml

          <plugin>
                <groupId>org.apache.tomcat.maven</groupId>
                <artifactId>tomcat7-maven-plugin</artifactId>
                <version>2.2</version>
                <configuration>
                    <url>http://{my-ip}:{my-port}/manager/html</url>
                    <server>{server-name}</server>
                    <path>/{context-path}</path>
                </configuration>
            </plugin>

and corresponding server setting in $MAVEN_HOME/conf/settings.xml

    <server>
      <id>{server-name}</id>
      <username>{username}</username>
      <password>{password}</password>
    </server>

I have an Ubuntu instance on AWS with Tomcat installed as root.




Chef aws driver tags don't work using Etc.getlogin

I am currently using Chef Solo on a Windows machine. I used the Fog driver before, which created tags for my instances on AWS. Recently, I moved to the AWS driver and noticed that it does not handle tagging, so I tried writing my own code to create the tags. One of the tags is "Owner", which tells me who created the instance. For this, I am using the following code:

  def get_admin_machine_options()
    case get_provisioner()
    when "cccis-environments-aws"

      general_machine_options = {ssh_username: "root",
        create_timeout: 7000,
        use_private_ip_for_ssh: true,
        aws_tags: {Owner: Etc.getlogin.to_s}
      }

      general_bootstrap_options = {
        key_name: KEY_NAME,
        image_id:  "AMI",
        instance_type: "m3.large",
        subnet_id: "subnet",
        security_group_ids: ["sg-"],
      }

      bootstrap_options = Chef::Mixin::DeepMerge.hash_only_merge(general_bootstrap_options,{})

      return Chef::Mixin::DeepMerge.hash_only_merge(general_machine_options, {bootstrap_options: bootstrap_options})

    else
      raise "Unknown provisioner #{get_setting('CHEF_PROFILE')}"
    end
  end

  machine admin_name do
        recipe "random.rb"
        machine_options get_admin_machine_options()
        ohai_hints ohai_hints
        action $provisioningAction
  end

Now, this works fine on my machine: the instance is created with the proper tags. But when I run the same code on someone else's machine, it doesn't create the tags at all. I find this very weird since it is the same code. Does anyone know what's happening?




Adding multiple SSL certificates in apache AWS

Here is the architecture: I have an EC2 instance, 2 web domains, and a load balancer.

The 2 domains point to the same load balancer, and the load balancer has the EC2 instance behind it.

I have c1 on the load balancer and in the Apache config as well.

Since both domains point to the same server, I want to use a different cert for each on that server, so that both abc.com and def.com are served over HTTPS with a valid certificate.




MongoDB connections from AWS Lambda

I'm looking to create a RESTful API using AWS Lambda/API Gateway connected to a MongoDB database. I've read that connections to MongoDB are relatively expensive so it's best practice to retain a connection for reuse once its been established rather than making new connections for every new query.

This is pretty straightforward for normal applications, since you can establish a connection during startup and reuse it for the application's lifetime. But since Lambda is designed to be stateless, retaining this connection seems less straightforward.

Therefore, I'm wondering what the best way to approach this database connection issue would be. Am I forced to make new connections every time a Lambda function is invoked, or is there a way to pool these connections for more efficient queries?

Thanks.
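
One common pattern (sketched here in Python with pymongo; the same idea applies to the Node.js runtime) is to open the connection in module scope rather than inside the handler. Lambda keeps the container around between invocations while it stays warm, so the connection is only re-established on cold starts. The URI, database, and collection names below are placeholders:

    import os
    from pymongo import MongoClient

    # Created once per container, outside the handler, and reused across
    # warm invocations.
    MONGO_URI = os.environ.get('MONGO_URI', 'mongodb://db.example.com:27017/mydb')
    client = MongoClient(MONGO_URI)
    collection = client['mydb']['items']

    def handler(event, context):
        doc = collection.find_one({'_id': event.get('id')})
        return {'found': doc is not None}

There is still no guarantee of reuse -- containers are recycled at AWS's discretion -- so the code has to tolerate getting a fresh connection on any given invocation.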




Passing in specific return order for AWS Cloudsearch query

Is there a way to tell a CloudSearch query that I want the matching results back in a specific order?

I have CloudSearch populated with Products. For each User, I have a predefined order in which I want those products to appear. The user can filter the Products by a number of fields, and this calls CloudSearch and returns the matches, 10 at a time (ajax-loaded paged results).

How can I tell Cloudsearch that once it has found those matches, I want them to be returned in the predefined order for this particular customer?




kibana 4.1 Gateway Timeout via AWS LoadBalancer

After upgrading to latest ES and Kibana, while trying to create a new dashboard, I keep getting this error:

Gateway Timeout

I am using an Amazon AWS load balancer in front of 2 Kibana servers, and this was not an issue with the previous version.

Has anyone come across this and found a workaround?

Thanks




Upload bamboo build logs to S3

I'm trying to figure out a way to upload the log from a Bamboo build to S3, whether the build fails or not. I'm having trouble figuring out how I would write the job to do this. I'd like to configure it globally for all jobs in Bamboo if possible.

Any ideas or a place to point me in the right direction? Googling around isn't giving me a lot of ideas.
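
In the absence of a built-in artifact-to-S3 step, one approach is a final script task (final tasks in Bamboo run even when earlier tasks fail) that pushes the log with the AWS SDK. A rough sketch with boto3; the bucket name is a placeholder, and the log path is passed in because its location depends on the Bamboo install:

    # Usage: python upload_log.py <build-key> <path-to-log-file>
    import os
    import sys

    import boto3

    build_key, log_path = sys.argv[1], sys.argv[2]
    s3 = boto3.client('s3')

    # Key layout is arbitrary: one folder per build key.
    s3.upload_file(log_path, 'my-build-logs-bucket',
                   '%s/%s' % (build_key, os.path.basename(log_path)))

Bamboo build variables (the build result key, for example) can be passed in as the script arguments; applying it to every plan still means adding the task per plan or via a plan template, since there is no global post-build hook.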




http://ift.tt/1DQ0jQg on AWS S3

I am hosting my website on AWS S3. The only file I have there is http://ift.tt/OLCuVP.

Is it possible to let users land on http://ift.tt/1DQ0jQg (where anything stands for any valid URI) so that, while the user still sees http://ift.tt/1DQ0jQg in the address bar, the http://ift.tt/1MzIQDd is served instead?

If this is not possible on S3, what technology would allow me to do this?

What is the name of what I want to do? Masking? Redirecting?

EDIT: I don't really need the anything URI as a parameter, as long as I can determine it from within index.html via JavaScript.
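
If the bucket is served through S3's static website hosting endpoint, one workaround is to point the error document at index.html so that any unknown path still loads the app (the original path stays readable from window.location in JavaScript). A sketch with boto3 and a placeholder bucket name:

    import boto3

    s3 = boto3.client('s3')
    s3.put_bucket_website(
        Bucket='my-site-bucket',
        WebsiteConfiguration={
            'IndexDocument': {'Suffix': 'index.html'},
            # Unknown keys fall through to index.html instead of an error page.
            'ErrorDocument': {'Key': 'index.html'},
        },
    )

The catch is that S3 still returns these responses with a 404 status code; if that matters, the usual next step is CloudFront in front of the bucket with a custom error response that rewrites the 404 to a 200, or a server that supports real rewrites.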




AWS lambda send response to API gateway

I have a lambda function that generates some text. This is for a simple Twilio app

<Say>Welcome to your conference room!</Say>
<Dial>
   <Conference beep="true">waitingRoom</Conference>
</Dial>

When I make a POST request using Postman, it outputs exactly that, but I have two problems:

  1. The Content-Type header comes back as application/json, and I need it as text/xml.
  2. When I make the POST request from Twilio, I get 502 Bad Gateway.

I know it has something to do with the incoming params mapping and also with mapping the response from Lambda back through API Gateway as text/xml, but I can't figure out how to do this.
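
For what it's worth, the Lambda side only needs to return the TwiML as a string; the content-type fix lives in API Gateway, where the integration response has to declare text/xml and pass the Lambda output through (for example with a mapping template that just emits $input.path('$') for the text/xml content type). A Python illustration of the function side, since the original Node handler is not shown in full:

    def handler(event, context):
        # Return the TwiML document as a plain string; API Gateway's
        # integration response is responsible for sending it as text/xml.
        return (
            '<?xml version="1.0" encoding="UTF-8"?>'
            '<Response>'
            '<Say>Welcome to your conference room!</Say>'
            '<Dial><Conference beep="true">waitingRoom</Conference></Dial>'
            '</Response>'
        )

The 502 from Twilio is consistent with the request body not matching what the method's request mapping expects, since Twilio posts form-encoded parameters rather than JSON.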





AWS Flask no longer accepting POST method from Flash actionscript

I have a Flask project deployed on AWS / Elastic Beanstalk

I have an embedded Adobe Flash SWF which attempts a POST request to my published URL. It was previously working, but now the request no longer gets through; instead, nothing happens.

The POST request does work on localhost, but not when deployed. I believe this may be a permissions issue involving crossdomain / CORS support, but I am not sure. How can I ensure that cross-domain requests are accepted in AWS EB?
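
If it is a cross-domain issue, two things usually need to be in place for a Flash client: CORS headers on the responses and a crossdomain.xml policy file at the site root. A minimal Flask sketch of both; the wide-open '*' values are for illustration only and should be narrowed to the domain that serves the SWF:

    from flask import Flask, Response

    app = Flask(__name__)

    @app.after_request
    def add_cors_headers(response):
        # Allow cross-origin requests; restrict the origin in production.
        response.headers['Access-Control-Allow-Origin'] = '*'
        response.headers['Access-Control-Allow-Methods'] = 'GET, POST, OPTIONS'
        response.headers['Access-Control-Allow-Headers'] = 'Content-Type'
        return response

    @app.route('/crossdomain.xml')
    def crossdomain():
        # Flash Player fetches this policy file before allowing the POST.
        policy = ('<?xml version="1.0"?>'
                  '<cross-domain-policy>'
                  '<allow-access-from domain="*" secure="false"/>'
                  '</cross-domain-policy>')
        return Response(policy, mimetype='text/x-cross-domain-policy')

Since it works on localhost (same origin) but not when deployed, checking the browser console or a proxy for a blocked policy-file or preflight request is a quick way to confirm this is the cause.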




How to remove external identity from cognito user id

I have linked a Facebook external identity with a Cognito identity ID using the code below:

credentialsProvider?.logins = ["graph.facebook.com" : FBSDKAccessToken.currentAccessToken().tokenString]
credentialsProvider?.refresh()

Reference: http://ift.tt/1IaHt7Q. Now, if the user logs out from Facebook, I also want to remove the Facebook identity from that Cognito ID; I want to keep the same Cognito ID but just remove that external identity. How do I do that?




Getting permission denied in Amazon AWS

I am trying to connect to Amazon S3 using the AWS credentials file. For that I have done the following:

  1. I have created a credentials.ini file at .aws\credentials. It has a valid AWSAccessKeyId and AWSSecretKey:

    [default]
    AWSAccessKeyId=somekey
    AWSSecretKey=somesecretkey
    
    
  2. I am doing the following to use the key and list all objects:


$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region'  => 'us-west-2'
]);


$result = $s3->listBuckets();
var_dump($result);

and I am getting this error:

Warning: parse_ini_file(C:\Users\user\.aws\credentials): failed to open stream: Permission denied in C:\xampp\htdocs\aws\vendor\aws\aws-sdk-php\src\Credentials\CredentialProvider.php on line 216

Fatal error: Uncaught exception 'Aws\Exception\CredentialsException' with message 'Error retrieving credentials from the instance profile metadata server. (cURL error 28: Connection timed out after 1000 milliseconds (see http://ift.tt/1mgwZgQ))' in C:\xampp\htdocs\aws\vendor\aws\aws-sdk-php\src\Credentials\InstanceProfileProvider.php:79 Stack trace: #0 C:\xampp\htdocs\aws\vendor\guzzlehttp\promises\src\Promise.php(199): Aws\Credentials\InstanceProfileProvider->Aws\Credentials\{closure}(Array) #1 C:\xampp\htdocs\aws\vendor\guzzlehttp\promises\src\Promise.php(152): GuzzleHttp\Promise\Promise::callHandler(2, Array, Array) #2 C:\xampp\htdocs\aws\vendor\guzzlehttp\promises\src\TaskQueue.php(60): GuzzleHttp\Promise\Promise::GuzzleHttp\Promise\{closure}() #3 C:\xampp\htdocs\aws\vendor\guzzlehttp\guzzle\src\Handler\CurlMultiHandler.php(96): GuzzleHttp\Promise\TaskQueue->run() #4 C:\xampp\htdocs\aws\vendor\guzzlehttp\guzzle\src\Handler\CurlMultiHandler.php(123): GuzzleHttp\Handler\CurlMultiHandler->tick in C:\xampp\htdocs\aws\vendor\aws\aws-sdk-php\src\Credentials\InstanceProfileProvider.php on line 79




Can't post to CloudMQTT

I'm not sure if this is the right place to post this, but their support email is no longer working. When I try to create a topic in the WebSocket UI and click Send, nothing comes up under Topic/Message. I have tried refreshing, different browsers, and deleting my history.

CloudMQTT

But from my app I can successfully subscribe to the topic and publish a message, and again it does not show up in the WebSocket UI.

Why could this be? Sorry again if this is not the right place to post.

Thanks




(python/boto sqs) UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 5: ordinal not in range(128)

I cannot send messages with accented characters to SQS in Python with the AWS SDK (boto).

Versions

Python: 2.7.6 boto: 2.20.1

CODE

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import boto.sqs
from boto.sqs.message import RawMessage

# print boto.Version
sqs_conn = boto.sqs.connect_to_region(
    'my_region',
    aws_access_key_id='my_kye',
    aws_secret_access_key='my_secret_ky')
queue = sqs_conn.get_queue('my_queue')
queue.set_message_class(RawMessage)

msg = RawMessage()

body = '1 café, 2 cafés, 3 cafés ...'
msg.set_body(body)
queue.write(msg)
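
One thing worth trying, sketched below, is handing boto a unicode object instead of a UTF-8 byte string; the UnicodeDecodeError typically comes from the byte string being implicitly decoded as ASCII somewhere while the request is built, and passing unicode (or decoding explicitly) sidesteps that:

    # -*- coding: utf-8 -*-
    body = u'1 café, 2 cafés, 3 cafés ...'   # unicode literal instead of a byte string
    # or, if the text arrives as UTF-8 encoded bytes:
    # body = raw_bytes.decode('utf-8')

    msg = RawMessage()
    msg.set_body(body)
    queue.write(msg)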




Does DB::raw behave differently when uploaded to the server?

I am currently using this code to fetch data from MySQL. It works on my localhost, but when I uploaded it to our AWS server it stopped sorting.

$stores = DB::table('stores')->select('storename','id','photo','address',DB::raw("( 3959 * acos( cos( radians('$lat') ) * cos( radians( '$lat' ) ) * cos( radians( longitude ) - radians('$lon') ) + sin( radians('$lat') ) * sin( radians( latitude ) ) ) ) AS distance"))->orderBy('distance')->where('domain',$domain->appEnv)->take(25)->get();

Is there something that gets affected when I upload this to AWS?

Note that our DB is on a different server (RDS).




Error when trying to implement amazon mturk SDK into android studio project

I am trying to implement Amazon's Mechanical Turk into my Android app. I have followed the instructions at http://ift.tt/1LSuZ9v, but when I specify the third-party .jar files, it gives me:

Error:Gradle: Execution failed for task ':app:dexDebug'.

com.android.ide.common.process.ProcessException: org.gradle.process.internal.ExecException: Process 'command '/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home/bin/java'' finished with non-zero exit value 1

I have tried taking out every combination of .jar files and found out that jaxrpc.jar is the file causing this error. Without this file, the error when running the project is:

Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/rpc/ServiceException at com.example.mturk.HomeworkRequest.(HomeworkRequest.java:20) at com.example.mturk.HomeworkRequest.main(HomeworkRequest.java:39) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at ...

where line 20 points to:

service = new RequesterService(new PropertiesClientConfig("../mturk.properties"));

and line 39 points to:

HomeworkRequest app = new HomeworkRequest();

I have tried enabling multiDex and looked around but couldn't find any solution. Any help would be greatly appreciated; I have been stuck on this problem for a few days now.

Thanks!




Wednesday, 29 July 2015

How to invoke an AWS Lambda function through an HTTP request without generating a signature and without the SDK

I can invoke an AWS Lambda function from Android without signing the request each time. I want to invoke my Lambda function via an HTTP request without generating a signature, because the signature generation process is complex and I need to invoke my Lambda function from any browser through an HTTP request.
If there is any way to do that, please describe the procedure.
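
The usual way to avoid client-side SigV4 is to put Amazon API Gateway in front of the function and leave the method's authorization set to NONE (or protect it with an API key), so any browser or HTTP client can call it. A sketch of the client side, with a hypothetical endpoint URL:

    import json

    import requests

    # Hypothetical API Gateway endpoint that invokes the Lambda function.
    url = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/my-function'

    resp = requests.post(url,
                         data=json.dumps({'name': 'value'}),
                         headers={'Content-Type': 'application/json'})
    print resp.status_code, resp.text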




Can an AWS Lambda function call another

I have 2 Lambda functions - one that produces a quote and one that turns a quote into an order. I'd like the Order lambda function to call the Quote function to regenerate the quote, rather than just receive it from an untrusted client.

I've looked everywhere I can think of - but can't see how I'd go about chaining or calling the functions...surely this exists!
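
A function can call another through the normal Invoke API, as long as its execution role allows lambda:InvokeFunction on the target. A sketch in Python with boto3; the function name and payload shape are made up for illustration, and the Node.js SDK exposes the equivalent call:

    import json

    import boto3

    lambda_client = boto3.client('lambda')

    def handler(event, context):
        # Re-generate the quote server-side instead of trusting the client's copy.
        resp = lambda_client.invoke(
            FunctionName='generate-quote',
            InvocationType='RequestResponse',   # synchronous; 'Event' is fire-and-forget
            Payload=json.dumps({'items': event.get('items', [])}),
        )
        quote = json.loads(resp['Payload'].read())
        return {'order': {'quote': quote}}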




AWS Cloudfront allow access only if user's browser allows caching

I'm trying to lower bandwidth charges. Is it possible to allow access to cloudfront only if a user's browser allows caching? I'm looking into a virtual private cloud but I don't see it working with cloudfront.




Amazon Transcoder Client issue with Pipelines Job on CURL

I am quite new to Amazon Web Services, and I am working on video encoding and thumbnail creation with the Amazon Elastic Transcoder service.

require 'lib/aws/aws-autoloader.php';

use Aws\ElasticTranscoder\ElasticTranscoderClient;
// Create a service locator using a configuration file
$client = ElasticTranscoderClient::factory(array(
        'key'    => 'xxxxx',
        'secret' => 'xxxxx',
        'region' => 'ap-southeast-2',
));

$result = $client->createJob(array(
        'PipelineId' => 'xxxxx',
        'Input' => array(
          'Key' => 'video.MOV',
          'FrameRate' => 'auto',
          'Resolution' => 'auto',
          'AspectRatio' => 'auto', 
          'Interlaced' => 'auto',
          'Container' => 'auto'
        ),
        'Output' => array(
          'Key' => 'output.mp4',
          'ThumbnailPattern' => 'thumb{count}.jpg',
          'Rotate' => 'auto',
          'PresetId' => '1351620000001-000010'
        ),
));

Somehow I keep getting the following error. Can anyone help? cURL is enabled, though.

( ! ) Fatal error: Uncaught exception 'Guzzle\Http\Exception\CurlException' with message '[curl] 6: Couldn't resolve host 'elastictranscoder.ap-southeast-2.amazonaws.com' [url] http://ift.tt/1Dc8FpI' in /Applications/MAMP/htdocs/projects/scavideo/server/lib/aws/Aws/Common/Client/AbstractClient.php on line 258
( ! ) Guzzle\Http\Exception\CurlException: [curl] 6: Couldn't resolve host 'elastictranscoder.ap-southeast-2.amazonaws.com' [url] http://ift.tt/1Dc8FpI in /Applications/MAMP/htdocs/projects/scavideo/server/lib/aws/Guzzle/Http/Curl/CurlMulti.php on line 359
Call Stack

Here is my cURL configuration:

cURL support    enabled
cURL Information    7.41.0
Age 3
Features
AsynchDNS   No
CharConv    No
Debug   No
GSS-Negotiate   No
IDN Yes
IPv6    Yes
krb4    No
Largefile   Yes
libz    Yes
NTLM    Yes
NTLMWB  Yes
SPNEGO  No
SSL Yes
SSPI    No
TLS-SRP No
Protocols   dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, ldaps, pop3, pop3s, rtsp, smb, smbs, smtp, smtps, telnet, tftp
Host    x86_64-apple-darwin10.8.0
SSL Version OpenSSL/0.9.8zd
ZLib Version    1.2.8

thanks heaps




HostGator to AWS - PHP/MySQL - INSERT/SELECT/UPDATE is not working

After I switched from HostGator to AWS (Ubuntu Linux), my PHP PDO code is not working. I can connect to the databases, but only some INSERT/SELECT/UPDATE/DELETE statements work. What's wrong?




AWS Autoscalling Rolling over Connections to new Instances

Is it possible to automatically 'roll over' a connection between auto scaled instances?

Given instances which provide a compute-intensive service, we would like to

  1. Autoscale a new instance after CPU reaches, say, 90%
  2. Have requests for the service handled by the new instance.

It does not appear that there is a way with the AWS Dashboard to set this up, or have I missed something?
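
There is no single switch for this in the dashboard, but the two pieces can be wired together: a scaling policy on the Auto Scaling group plus a CloudWatch alarm on CPU, with the group registered behind an ELB so new instances start receiving requests once they pass health checks. A boto3 sketch, with the group name and thresholds as placeholders:

    import boto3

    autoscaling = boto3.client('autoscaling')
    cloudwatch = boto3.client('cloudwatch')

    # Add one instance whenever the policy is triggered.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName='compute-asg',
        PolicyName='scale-out-on-cpu',
        AdjustmentType='ChangeInCapacity',
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Trigger the policy when average CPU across the group exceeds 90%.
    cloudwatch.put_metric_alarm(
        AlarmName='compute-asg-cpu-high',
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'compute-asg'}],
        Statistic='Average',
        Period=300,
        EvaluationPeriods=1,
        Threshold=90.0,
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=[policy['PolicyARN']],
    )

Note that this only routes new requests to the new instance; connections already in flight stay pinned to the instance that accepted them.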




Heroku and Amazon CloudFront Cross-Origin Resources Sharing and Image URL

I have 1 question and 1 issue:

Question: If using CloudFront, is the image URL supposed to have s3.amazonaws.com or randomblah.cloudfront.net?

http://ift.tt/1DbYPUS

or with the actual cloudfront.net url...

http://ift.tt/1KAWS4i

Right now, I have this in my production.rb

config.action_controller.asset_host = 'fdawfwe8200.cloudfront.net'

Issue: I'm getting Redirect at origin 'http://ift.tt/1DbYPUU' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header.... error

How do I fix this?

The only thing I see breaking on my site is that the Bootstrap icons render as boxes.




Redirect http to https AWS ELB without hosting ssl certificate on webserver

I have web servers running nginx behind a AWS ELB. I have setup the ELB to accept https connections, and send the requests over http to the webservers. I would also like to accept http connection to the ELB, and redirect them to https.

All solutions to this redirection problem involve handling https on the webserver and redirecting it to http.

Is there a way to do this without handling the redirect on the webserver? I would rather not have to have my ssl certificate on the ELB and all webservers.




Alternative to UDF in aws redshift

I am creating views on a Redshift table, but would like some sort of argument that I can pass to limit the data I get back from the view. The table covers a whole month and the joins take a lot of time. I looked into the Redshift documentation, but it says that Redshift does not support user-defined functions. Is there any alternative to choose besides views/UDFs?

To be specific, I have a query like: "with lookup as (Select DISTINCT * from Table where property_value = 'myproperty' AND time_stamp > '2015-07-##' AND time_stamp < '2015-07-##' order by sortkey) Select * from lookup where ...."

I want to be flexible in changing the time_stamp. Also, I would like the user to be able to pass arguments to the created view and grab data just for the specified timestamps.

Thanks
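
Since views cannot take parameters (and, at the time of writing, Redshift has no user-defined functions), one workaround is to keep the query client-side and bind the timestamps as parameters on each run. A psycopg2 sketch, with the connection details and table name as placeholders:

    import psycopg2

    conn = psycopg2.connect(
        host='my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com',
        port=5439, dbname='mydb', user='myuser', password='...')

    QUERY = """
    WITH lookup AS (
        SELECT DISTINCT *
        FROM my_table
        WHERE property_value = %(prop)s
          AND time_stamp > %(start_ts)s
          AND time_stamp < %(end_ts)s
    )
    SELECT * FROM lookup
    """

    cur = conn.cursor()
    cur.execute(QUERY, {'prop': 'myproperty',
                        'start_ts': '2015-07-01',
                        'end_ts': '2015-07-15'})
    rows = cur.fetchall()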




how to clean up docker overlay directory?

I'm running Docker via CoreOS and AWS's ECS. I had a failing image that got restarted many times, and the containers are still around; they filled my drive partition. Specifically, /var/lib/docker/overlay/ contains a large number of files/directories.

I know that docker-cleanup-volumes is a thing, but it cleans the /volumes directory, not the /overlay directory.

docker ps -a shows over 250 start attempts on my bad docker container. They aren't running, though.

Aside from rm -rf /var/lib/docker/overlay/*, how can I/should I clean this up?




Amazon S3 + CloudFront CDN AccessDenied after changing URL to test

CloudFront isn't working for me? Maybe I'm missing a step...

I created a new distribution and chose an Origin Domain Name from the drop-down of Amazon S3 buckets that I'm using for my application: bucketname.s3.amazonaws.com

I left everything else at the defaults... and then when I tried to change an image URL to test it, I get an XML error with AccessDenied.

I did:

http://ift.tt/1Jwj5li

http://ift.tt/1KAGikY




Process exited before completing request error when testing a function in AWS Lambda

After completing the AWS Lambda tutorial for creating thumbnails, I decided to try to tweak the code so that it checks whether a file is a jpg or csv file and, if it is, simply moves it to a new bucket. The only things I removed from the code were the comments and the function within the async.waterfall that would resize images. However, whenever I test or run this new code, I get "process exited before completing request" and the function does not transfer the files correctly. Here is the code:

var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm')
            .subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');






var s3 = new AWS.S3();

exports.handler = function(event, context) {

    console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
    var srcBucket = event.Records[0].s3.bucket.name;

    var srcKey    =
    decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));  
    var dstBucket = 'datacollectionbucket';
    var dstKey    = srcKey;


    if (srcBucket == dstBucket) {
        console.error("Destination bucket must not match source bucket.");
        return;
    }


    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        console.error('unable to infer file type for key ' + srcKey);
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "csv") {
        console.log('skipping unrecognized file ' + srcKey);
        return;
    }


    async.waterfall([
        function download(next) {
            s3.getObject({
                    Bucket: srcBucket,
                    Key: srcKey
                },
                next);

        function upload(contentType, data, next) {

            s3.putObject({
                    Bucket: dstBucket,
                    Key: dstKey,
                    Body: data,
                    ContentType: contentType
                },
                next);
            }
        ], function (err) {
            if (err) {
                console.error(
                    'Unable to resize ' + srcBucket + '/' + srcKey +
                    ' and upload to ' + dstBucket + '/' + dstKey +
                    ' due to an error: ' + err
                );
            } else {
                console.log(
                    'Successfully classified ' + srcBucket + '/' + srcKey +
                    ' and uploaded to ' + dstBucket + '/' + dstKey
                );
            }

            context.done();
        }
    );
};

Thanks guys
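
As pasted, the waterfall also appears to have lost the closing brace between download and upload when the resize step was removed, which on its own would stop the handler from loading. For comparison, here is how small the same "copy matching files to another bucket" logic can be; this is a Python illustration with boto3 rather than a fix to the Node code above, and it uses a server-side copy instead of a get/put pair:

    import urllib

    import boto3

    s3 = boto3.client('s3')
    DEST_BUCKET = 'datacollectionbucket'

    def handler(event, context):
        record = event['Records'][0]['s3']
        src_bucket = record['bucket']['name']
        key = urllib.unquote_plus(record['object']['key'].encode('utf8'))

        if not key.lower().endswith(('.jpg', '.csv')):
            print 'skipping unrecognized file ' + key
            return

        # Server-side copy: no need to download the body and re-upload it.
        s3.copy_object(Bucket=DEST_BUCKET, Key=key,
                       CopySource={'Bucket': src_bucket, 'Key': key})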




How to concatenate EC2 hostnames in config file in Ansible playbook?

I am trying to write an Ansible playbook to set up a MongoDB shard cluster in Amazon EC2. I create three EC2 instances using the ec2 Ansible module ( http://ift.tt/1DbCSoR ):

- name: Create EC2 instances for MongoConfigs
  ec2:
    key_name: mongo
    ..............
    wait: yes
  register: ec2_config

OK, the ec2_config variable now contains the list of created instances.

Then, on all mongos instances, I have to start mongos with the configDB param: http://ift.tt/1OCCT5U (you must specify either 1 or 3 configuration servers, in a comma-separated list).

For example, I have template:

systemLog:
  destination: file
  path: "/logs/mongodb.log"
  logAppend: true

sharding:
  configDB: {{ configDBHosts }}

How do I set the configDBHosts value to something like the following?

ip-10-0-103-87.us-west-2.compute.internal:27019,ip-10-0-103-88.us-west-2.compute.internal:27019,ip-10-0-103-89.us-west-2.compute.internal:27019




Setting aws sessionToken in AWS signature v4

Using the node.js library - I need to set the sessionToken on credentials for my S3 put command to work with proper permissions. This is how I would set the credentials directly.

  AWS.config.update({accessKeyId : credentials.AccessKeyId,
    secretAccessKey:credentials.SecretAccessKey, 
    sessionToken: credentials.SessionToken});

Trying to re-create the same S3 upload request using the REST API only, I've used the aws4 library to generate the AWS Signature v4, but I don't see any place to set the session token.

Making the call to my S3 put command without the sessionToken using the AWS.S3 object throws a permissions error, and the same happens with the REST API.

How can I set the session token into the v4 signature?




Non-Windows instances with a virtualization type of 'hvm' are currently not supported for this instance type : [AWS Cloudformation]

I am trying to create a t2.micro EC2 instance with Amazon Linux as the OS using CloudFormation. Following are the parts of the JSON template that matter.

    "FileName" :{
        "Type" : "String",
        "Default" : "cf-file.sh",
        "AllowedValues": [ "cf-file.sh"]
    },
    "InstanceType" : {
      "Description" : "WebServer EC2 instance type",
      "Type" : "String",
      "Default" : "t2.micro",
      "AllowedValues" : ["t2.micro"],
      "ConstraintDescription" : "must be a valid EC2 instance type."
    },

       "AMIID" :{
         "Type": "String",
        "Default":"ami-1ecae776",
        "AllowedValues":["ami-1ecae776"]
    }
  },
  "Resources" : {
    "EC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "UserData" : {
                "Fn::Base64" : {
                    "Fn::Join" : [ 
                            "", 
                            [
                                "#!/bin/bash\n",
                                "yes y | yum install dos2unix\n",
                                "touch ",{ "Ref" : "FileName" },"\n",
                                "chmod 777 ",{ "Ref" : "FileName" },"\n" 
                            ]
                    ]
                 } 
        },
          "KeyName" : { "Ref" : "KeyName" },
        "ImageId" : { "Ref" : "AMIID" }
      }
    },

When I run this template I get the following error:

Non-Windows instances with a virtualization type of 'hvm' are currently not supported for this instance type

I thought this error occurred when using a t1-family instance type, but I am using t2.micro. Please explain why this happens.




Ignoring file extensions on Amazon S3

I want to create a simple rewrite rule that ignores any extension that is added to a request. For example, both foo.bar and foo.don would point to the file foo. If this is not possible, can I add aliases to files? A third option is to upload an extra version of the file with the extension for every extension that the file should be available, but I don't think this is a good solution.




Heroku Application Error After Form Submittal, But No Error Logged?

I get an error when I try to create a new form with image uploading to Amazon S3.


I think it's because of my photo uploads, but I'm not too sure. When I do this in my development environment, there are no issues. I'm using Amazon S3 in development as well.

I checked the Heroku logs and see no error.

Once I create a new form, it is supposed to redirect me to the show.html.erb page, with the id in the URL (ie: http://ift.tt/1IKUiLH), but instead it sent me to http://ift.tt/1npEVfG and showed the error.

Oh, I'm also using the friendly_id gem.

def create
  @project = project.new(project_params)

  respond_to do |format|
    if @project.save

      if params[:photos]
        params[:photos].each { |image|
          @project.project_images.create(photo: image)
        }
      end
      format.html { redirect_to @project, notice: 'Trip was successfully created.' }
      format.json { render :show, status: :created, location: @project }
  else
      format.html { render :new }
      format.json { render json: @project.errors, status: :unprocessable_entity }
    end
  end
end

When I go back to http://ift.tt/1IKUiLH, it works. But I only get this error right after I upload or submit a form.