Sunday, May 31, 2015

Amazon AWS Windows Server

I have an Amazon AWS Windows Server 2012 R2 instance and I have installed XAMPP. Now I want to set up virtual hosts. I need that if somebody accesses the IP, like 52.10.221.200, it serves the blog at C:/xampp/htdocs/blog, and if someone accesses 52.10.221.200/new_blog, it serves C:/xampp/htdocs.

I have made virtual hosts like this:

<VirtualHost *:80>
    DocumentRoot "C:/xampp/htdocs"
    ServerName localhost  
</VirtualHost>


<VirtualHost 52.10.221.200>
  DocumentRoot "C:/xampp/htdocs/blog"
  ServerName blog
  # This should be omitted in the production environment
  SetEnv APPLICATION_ENV development

  #ErrorLog "logs/dummy-host2.example.com-error.log"
  #CustomLog "logs/dummy-host2.example.com-access.log" common
</VirtualHost>

If I access 52.10.221.200 it works fine and I am able to access the blog, but if I try to access 52.10.221.200/new_blog (or any other directory) it does not work; I am not able to access any other directory. Can you please help me solve this? Thanks




Frontend and communication for EC2 based cloud application

We are implementing a system that will take an image as input, do some processing on it, and return the results. We have to do the processing on an EC2 instance. I'm pretty new to cloud computing in general, and haven't worked with the web either, and I'm trying to decide the best way to create the frontend for this system. (The backend is simply C++ code running on Amazon EC2.) For the frontend, I have two options:

  • A desktop app that will somehow communicate with the EC2 instance. This one is simpler to build, as I've some experience with this, but I don't know how I will be able to talk to the backend. There's SSH, but I don't know how suitable it is.

  • A web server running on the EC2 instance itself. This sounds like a better idea, but I haven't done any web development, so it might end up taking more time.

I'm not looking to create any fancy UIs, just something functional that lets the end user upload the image, and view the results. Which option should I go for?




Ubuntu: How to use Shutter to store website thumbnails in files

I need help. I have a DB (updated constantly) of URLs, and for each of them I need to store a thumbnail. It runs on Ubuntu on AWS (cloud service) and it should run in the background (PHP). Since the thumbnail generation services "Pagepeeker", "thumbalizr", ... are not free and can't handle large volumes, I am trying to use Shutter:

ubuntu@:~$ shutter --web=http://www.google.com/ -o /var/www/html/thumbs/test.png -e

But Shutter requires opening a graphical interface on my end. How can I do that without a graphical interface?

Thanks in advance for your help.
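
A common workaround for GUI-only tools on a headless server is to run them under a virtual X display with xvfb-run; whether Shutter's --web capture behaves well under Xvfb is something to verify, but a minimal sketch of driving it from a script, assuming the xvfb and shutter packages are installed and reusing the URL, output path, and flags from the command above, looks like this:

    import subprocess

    # Sketch: run Shutter under a virtual X display (xvfb-run) so no real GUI
    # session is needed. URL, output path, and the -e flag are as in the question.
    url = "http://www.google.com/"
    out = "/var/www/html/thumbs/test.png"

    subprocess.check_call([
        "xvfb-run", "-a",             # -a picks a free virtual display number
        "shutter", "--web=" + url,    # same Shutter invocation as above
        "-o", out,
        "-e",
    ])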




Deploy python web server on AWS Elastic Beanstalk

I'm deploying a Python web server on AWS now and I have some questions about it. I'm using websockets to communicate between the back end and the front end.

  1. Do I have to use a framework like Django or Flask?
  2. If not, where should I put the index.html file? In other words, after deploying, how does AWS know the default page of my application?

Thanks in advance.
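
For what it's worth, on the standard (non-Docker) Elastic Beanstalk Python platform you don't strictly need Django or Flask: as I understand it, the platform serves a WSGI callable named application, looked up by default in application.py (the path is configurable via the WSGIPath option). A minimal sketch, leaving the websocket part aside:

    # application.py - minimal WSGI sketch for the Elastic Beanstalk Python platform.
    # Beanstalk's web server looks for a module-level callable named "application".
    def application(environ, start_response):
        # A real app would route on environ["PATH_INFO"]; this just returns one page.
        body = b"Hello from Elastic Beanstalk"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]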




SSLException when uploading an item to an Amazon service

I upload my file to an AWS service from Android. I configured it like this:

  AwsMetadata awsMetadata = resultData.getParcelable(Params.CommandMessage.EXTRA_MESSAGE);
        AWSCredentials awsCredentials = new BasicAWSCredentials(
                awsMetadata.getAccountId(),
                awsMetadata.getSecretKey()
        );
        // set up region
        TransferManager transferManager = new TransferManager(awsCredentials);
        Region region = Region.getRegion(Regions.fromName(awsMetadata.getRegionEndpoint()));
        transferManager.getAmazonS3Client().setRegion(region);


        final MediaItem mediaItem = datasource.get(0);
        Log.d(App.TAG, "File is exists: "
                + mediaItem.getContentUri() + " "
                + new File(mediaItem.getContentUri()).exists());

        // prepare file for upload
        PutObjectRequest putObjectRequest = new PutObjectRequest(
                awsMetadata.getBucketName(),
                awsMetadata.getSecretKey(),
                new File(mediaItem.getContentUri())
        );


        Log.d(App.TAG, "Total data: " + mediaItem.getSize());
        Upload upload = transferManager.upload(putObjectRequest, new S3ProgressListener() {

            private int totalTransfered = 0;

            @Override
            public void onPersistableTransfer(PersistableTransfer persistableTransfer) {
            }

            @Override
            public void progressChanged(ProgressEvent progressEvent) {

                Log.d(App.TAG, "Bytes are transferred: " + progressEvent.getBytesTransferred());
                totalTransfered += progressEvent.getBytesTransferred();
                long totalSize = mediaItem.getSize();
                Log.d(App.TAG, "Total transferred: " + ((totalTransfered / totalSize) * 100) + " percent");
            }
        });
    }

06-01 11:45:00.712    5182-5768/com.home I/AmazonHttpClient﹕ Unable to execute HTTP request: Write error: ssl=0xb4bb3600: I/O error during system call, Connection reset by peer
    javax.net.ssl.SSLException: Write error: ssl=0xb4bb3600: I/O error during system call, Connection reset by peer
            at com.android.org.conscrypt.NativeCrypto.SSL_write(Native Method)
            at com.android.org.conscrypt.OpenSSLSocketImpl$SSLOutputStream.write(OpenSSLSocketImpl.java:765)
            at com.android.okio.Okio$1.write(Okio.java:70)
            at com.android.okio.RealBufferedSink.emitCompleteSegments(RealBufferedSink.java:116)
            at com.android.okio.RealBufferedSink.write(RealBufferedSink.java:44)
            at com.android.okhttp.internal.http.HttpConnection$FixedLengthSink.write(HttpConnection.java:291)
            at com.android.okio.RealBufferedSink.emitCompleteSegments(RealBufferedSink.java:116)
            at com.android.okio.RealBufferedSink$1.write(RealBufferedSink.java:131)
            at com.amazonaws.http.UrlHttpClient.write(UrlHttpClient.java:155)
            at com.amazonaws.http.UrlHttpClient.createConnection(UrlHttpClient.java:143)
            at com.amazonaws.http.UrlHttpClient.execute(UrlHttpClient.java:60)
            at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:353)
            at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:196)
            at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4234)
            at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1644)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:134)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadCallable.call(UploadCallable.java:126)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.upload(UploadMonitor.java:182)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.call(UploadMonitor.java:140)
            at com.amazonaws.mobileconnectors.s3.transfermanager.internal.UploadMonitor.call(UploadMonitor.java:54)
            at java.util.concurrent.FutureTask.run(FutureTask.java:237)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
            at java.lang.Thread.run(Thread.java:818)

And I get an SSLException. The Amazon SDK uses its own HTTP client, which should be configured properly out of the box.

What is the reason for this behaviour?




AWS EC2 instance running Tomcat 8 gives error: getDispatcherType() is undefined for the type HttpServletRequest

getDispatcherType() is undefined for the type HttpServletRequest when deploying to an AWS EC2 instance running Tomcat 8 using Beanstalk. The same webapp works when deployed on localhost on my machine.

The project runs perfectly when deployed on localhost on my machine but gives this error when deployed on AWS using Beanstalk. I am using Eclipse with the AWS Java toolkit.

  1. I have used javax.servlet-api version 3.0.1 with scope provided.
  2. I have removed all maven dependencies on servlet-api 2.5.
  3. The webapp runs perfectly on localhost in my machine.
  4. I downloaded the zip of uploaded package from aws and it only contains javax.servlet-api 3.0.1 and no other version of it.

Here is the stacktrace.

org.apache.jasper.JasperException: Unable to compile class for JSP: 
An error occurred at line: [82] in the generated java file: [/usr/share/tomcat8/work/Catalina/localhost/ROOT/org/apache/jsp/WEB_002dINF/jsp/login_jsp.java]
The method getDispatcherType() is undefined for the type HttpServletRequest

Stacktrace:
org.apache.jasper.compiler.DefaultErrorHandler.javacError(DefaultErrorHandler.java:102)
org.apache.jasper.compiler.ErrorDispatcher.javacError(ErrorDispatcher.java:198)
org.apache.jasper.compiler.JDTCompiler.generateClass(JDTCompiler.java:450)
org.apache.jasper.compiler.Compiler.compile(Compiler.java:361)
org.apache.jasper.compiler.Compiler.compile(Compiler.java:336)
org.apache.jasper.compiler.Compiler.compile(Compiler.java:323)
org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:570)
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:356)
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:396)
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:340)
javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
org.springframework.web.servlet.view.InternalResourceView.renderMergedOutputModel(InternalResourceView.java:209)
org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:266)
org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1225)
org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1012)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
javax.servlet.http.HttpServlet.service(HttpServlet.java:618)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:316)
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:126)
org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:90)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:122)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:168)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:48)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:205)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:120)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.csrf.CsrfFilter.doFilterInternal(CsrfFilter.java:96)
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:64)
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:91)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:53)
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108)
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:213)
org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:176)
org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:344)
org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:261)

This is my pom.xml file with all dependencies:

<properties>
    <spring.version>4.0.1.RELEASE</spring.version>
    <spring.security.version>4.0.1.RELEASE</spring.security.version>     
</properties>

    <dependencies>
        <dependency>
            <groupId>com.microsoft.azure</groupId>
            <artifactId>azure-documentdb</artifactId>
            <version>1.0.0</version>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.3.1</version>
        </dependency>
        <dependency>
            <groupId>javax.mail</groupId>
            <artifactId>mail</artifactId>
            <version>1.4.7</version>
        </dependency>



        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>3.0.1</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>javax.servlet.jsp.jstl</groupId>
            <artifactId>jstl-api</artifactId>
            <version>1.2-rev-1</version>
            <exclusions>
                <exclusion>
                    <artifactId>servlet-api</artifactId>
                    <groupId>javax.servlet</groupId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>org.glassfish.web</groupId>
            <artifactId>jstl-impl</artifactId>
            <version>1.2</version>
            <exclusions>
                <exclusion>
                    <artifactId>servlet-api</artifactId>
                    <groupId>javax.servlet</groupId>
                </exclusion>
            </exclusions>
        </dependency>



        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.12</version>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.12</version>
        </dependency>

        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.16.4</version>
            <scope>provided</scope>
        </dependency>


        <!-- Spring dependencies -->
        <!-- Spring 3 dependencies -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
            <exclusions>
              <exclusion>
                <groupId>commons-logging</groupId>
                <artifactId>commons-logging</artifactId>
              </exclusion>
            </exclusions>
        </dependency>

        <!-- Spring MVC -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <!-- Spring + aspects -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-aspects</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-web</artifactId>
            <version>${spring.version}</version>
        </dependency>


        <!-- Spring Security -->
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-web</artifactId>
            <version>${spring.security.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-config</artifactId>
            <version>${spring.security.version}</version>
        </dependency>

        <!-- Spring Security JSP Taglib -->
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-taglibs</artifactId>
            <version>${spring.security.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>4.0.1.RELEASE</version>
        </dependency>


        <!-- AWS Dependecies excluding spring -->

        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
            <version>1.9.39</version>
        </dependency>

What is it that I am missing?




Ansible -- ec2_group and ec2_tag in the same role?

I am trying to write an Ansible role with the ec2_group definition and the ec2_tag in the same file, as I need to keep it pretty compact.

For the ec2_tag I need the sg_id. Is there any way of getting that value dynamically?

Is there any way of doing something like this?

roles/region-environment/tasks/env_sg_test.yml

- name: example ec2 group
  local_action:
    module: ec2_group
    name: my-security-group
    description: Access my-security-group
    vpc_id: "{{ vpc }}"
    region: "{{ region }}"
    rules:
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 0.0.0.0/0
      - proto: tcp
        from_port: 443
        to_port: 443
        cidr_ip: 0.0.0.0/0

- name: Tag the security group with a name
  local_action:
    module: ec2_tag
    resource: <----- Resource. SG_ID?
    region: "{{ region }}"
    state: present
    tags:
      Name: "My Security Group Name"
      env: "production"
      service: "web"

Thanks!!




permission denied (publickey) - AWS EC2

I am trying to get a Django app running on Amazon EC2. I currently have my .pem file saved in the root of my Django project.

When I try this

chmod 600 oby.pem
ssh -i oby.pem ubuntu@52.0.215.90

in my Mac terminal, I receive this error: Permission denied (publickey).

  • To begin, am I saving my oby.pem file in the right location? If not, where should it go?
  • Furthermore, what are the necessary steps to correctly set up the ssh key?

Thank you!




Amazon SNS and CloudWatch pricing

I'm studying AWS pricing and I have two doubts, about Amazon SNS and Amazon CloudWatch.

About CloudWatch: I'm using it to monitor SNS topics and a DynamoDB table. I'm reading about CloudWatch pricing and it says that the basic monitoring metrics for Amazon EC2, Amazon EBS, Elastic Load Balancing and Amazon RDS are free. So monitoring SNS topics and DynamoDB is not basic monitoring, and we need to pay $0.50 per month for each metric?

About SNS: it says that we pay based on the number of notifications we publish, the number of notifications we deliver, and any additional API calls for managing topics and subscriptions. I'm a bit confused about this pricing. We pay for API calls, for example creating a new topic, getting all topics, etc.; OK, that part is clear, but I'm not understanding the other two. For example, in my code I have:

message = "this is a test"
message_subject = "Message test"        
publication = sns.publish(topicArn, message, subject=message_subject)

In this case we have the cost of the publish, and for this sns.publish do we also pay for the API call? And we also pay when we publish a message, and on top of that there is the cost of delivering notifications? But isn't that the same thing, publishing a message and delivering a notification? Or are notifications the subscription confirmations that we receive by email when we subscribe an email address to the topic?

subscriptionEmail = sns.subscribe(topicArn, "email", email)




How to fix an "ImportError: No module named numpy" on AWS EC2 Server with AMI Linux but it installed?

For five days or so I could start a script with numpy without problems. I installed it as proposed here in the forum and it worked. When I tried to start another Python script with numpy today, I got the following error:

Traceback (most recent call last):
  File "neuralTensorNetworkAcc.py", line 8, in <module>
    import numpy as np
ImportError: No module named numpy

But numpy seems to be installed:

[ec2-user@ip-172-31-10-106 sk-learn]$ sudo yum install numpy
Loaded plugins: priorities, update-motd, upgrade-helper
amzn-main/latest                                         | 2.1 kB     00:00
amzn-updates/latest                                      | 2.3 kB     00:00
Package python26-numpy-1.7.2-8.16.amzn1.x86_64 already installed and latest version
Nothing to do

What is wrong? How do I fix that?

Thanks in advance!
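
A pattern that often explains this on Amazon Linux: yum installs numpy for the system python26, while the failing script may be launched by a different interpreter (a virtualenv, another Python version, or a different user/sudo environment). A small diagnostic sketch, with nothing AWS-specific assumed, to run with the exact interpreter that runs neuralTensorNetworkAcc.py:

    # check_numpy_env.py - print which interpreter and search path are in use,
    # and where numpy would be imported from (if it can be imported at all).
    from __future__ import print_function
    import sys

    print("interpreter:", sys.executable)
    print("version    :", sys.version)
    print("sys.path:")
    for p in sys.path:
        print("    ", p)

    try:
        import numpy
        print("numpy found at:", numpy.__file__)
    except ImportError as exc:
        print("numpy not importable from this interpreter:", exc)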




Amazon S3 redirect based on browser language

I have a multi-language website. I want to redirect English users to /en and Spanish users to /es.

Currently I'm doing it via JavaScript, but I feel there is a better way.

<html>
<body>
<script type="text/javascript">
    var language = window.navigator.userLanguage || window.navigator.language;
    if (["es","en"].indexOf(language) != -1){
        window.location.replace(language);    
    }
    else{
        // other languages go to en
        window.location.replace("/en");
    }
</script>
</body>
</html>




On an AWS instance, how do I replicate the '/home' configuration?

On an AWS instance, how do I replicate the '/home' configuration I use on my local server?

I created an AWS volume which I want to attach to an AWS instance. Many scripts I run are hard-coded to '/home/my-user-name/program-name/'.

If I mount an EBS volume I create as '/home' (where I will place my applications), my understanding is that it will mount over the /home created when the instance was created. I believe this will then 'smash' the /home/users directories and their ssh files (ssh log-in credentials) set up on the server.

I tried this already, and the result was that I couldn't log in. I fixed it by attaching the unreachable volume to another instance and copying over /home/users from the copy I made initially into /home-orig, before mounting the external volume as /home.




Installing Node.js and pm2 on AWS EC2 instance Ubuntu 14.04.2 LTS

I am trying to install Node.js on an Amazon Web Services instance. When I keep a node app alive as a daemon using nodejs directly, it works:

nodejs server.js

However, when I do it using pm2 it doesn't work:

pm2 start server.js

Thanks in advance




No module named simplejson in python UDF on EMR

I'm running an Amazon Elastic MapReduce (EMR) job using Pig. I'm having trouble importing the json or simplejson modules into my Python user defined function (UDF).

Here is my code:

#!/usr/bin/env python
import simplejson as json
@outputSchema('m:map[]')
def flattenJSON(text):
    j = json.loads(text)
    ...

When I try to register the function in Pig I get an error saying "No module named simplejson"

grunt> register 's3://chopperui-emr/code/flattenDict.py' using jython as flatten;
2015-05-31 16:53:43,041 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1121: Python Error. Traceback (most recent call last):
File "/tmp/pig6071834754384533869tmp/flattenDict.py", line 32, in <module>
import simplejson as json
ImportError: No module named simplejson

However, my Amazon AMI includes Python 2.6, which includes json as a standard package (using import json doesn't work either). Also, if I try to install simplejson using pip it says it's already installed (on both master and core nodes).

[hadoop@ip-172-31-46-71 ~]$ pip install simplejson
Requirement already satisfied (use --upgrade to upgrade): simplejson in /usr/local/lib64/python2.6/site-packages

Also, it works fine if I run python interactively from the command line on the master node

[hadoop@ip-172-31-46-71 ~]$ python
Python 2.6.9 (unknown, Apr  1 2015, 18:16:00) 
[GCC 4.8.2 20140120 (Red Hat 4.8.2-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> 

There must be something different about how EMR or Pig is setting up the Python environment, but what?
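
One difference worth ruling out: register ... using jython runs the UDF on Pig's embedded Jython interpreter, which has its own module search path and does not see the CPython site-packages that pip reports. A hedged diagnostic you could drop at the top of the UDF file to confirm which interpreter and path it actually runs under, and to fall back between simplejson and the stdlib json:

    #!/usr/bin/env python
    # Sketch only: report the interpreter/module path the UDF really runs under,
    # and use whichever of simplejson / stdlib json is available.
    import sys

    sys.stderr.write("UDF interpreter: %s\n" % sys.version)
    sys.stderr.write("UDF sys.path   : %s\n" % sys.path)

    try:
        import simplejson as json
    except ImportError:
        import json  # stdlib json exists on CPython 2.6+, but may be missing on older Jython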




AWS EC2 instance size of volume: Fedora versus Ubuntu

I don't understand how the size of the EC2 instance's volume upon creation (by Linux distro) works and its implications.

I see drive size layout differences between the two server distributions I created on AWS; I want to understand the design and intent. (I am experimenting with setting up AWS EC2 instances. My goal is to move processes I run on my local server to AWS.)

My local server runs Fedora; on AWS I created both Ubuntu and Fedora instances.

I think I created the Fedora instance with 10 GB. The root drive is 2 GB and four dirs under root are 2 GB each. The current available size of / is 1.2 GB (I think I did some updates via yum).

The Ubuntu instance was created with 12 GB. The root drive is 8 GB and two dirs under root are 2 GB each. The current available size of / is 5.7 GB (I think I did some updates with apt-get).

Below is the df -h output of each instance.

Is there a practical limitation of the Fedora root being 2 GB? Does it matter? Is there a way to increase it?

Fedora:

Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/atomicos-root  2.0G  796M  1.2G  40% /
devtmpfs                   2.0G     0  2.0G   0% /dev
tmpfs                      2.0G     0  2.0G   0% /dev/shm
tmpfs                      2.0G  260K  2.0G   1% /run
tmpfs                      2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/xvda1                 190M   35M  142M  20% /boot
tmpfs                      396M     0  396M   0% /run/user/1000

Ubuntu:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  1.7G  5.7G  23% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            1.9G   12K  1.9G   1% /dev
tmpfs           377M  332K  377M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            1.9G     0  1.9G   0% /run/shm
none            100M     0  100M   0% /run/user




rake db:migrate runs in development on AWS Beanstalk

I'm new to Beanstalk. I've created a Rails application and set the production database configuration to use the environment variables hopefully provided by AWS. I'm using MySQL (mysql2 gem), and want to use RDS and Passenger (I have no preference there).

In my development environment I can run the Rails application with my local MySQL (it is just a basic application I've created for experimentation). I have added the passenger gem to the Gemfile and bundled, but I'm still using WEBrick in development.

The only thing I did not do by the book is that I did not use 'eb' but rather tried from the console. My application/environment failed to run: during "rake db:migrate" it still thinks I want it to connect to the local MySQL (I guess from the logs that it is not aware of RAKE_ENV and hence uses 'development').

Any tips? I can of course try 'eb' next, yet I would prefer to work with the console.

Regards, Oren




S3 chunked uploads with blueimp.fileupload

I'm implementing AWS chunked uploads using the blueimp plugin, and I've run into a problem with the event ordering.

Instead of Content-Range (the presence of which causes Amazon to immediately throw a 403), S3 uses an upload ID and a part number as query parameters.

So before each chunk, I need to reach out to my signing service and change the URL of the next chunk.

It doesn't appear that options.chunksend blocks the event system like options.add does, so my next chunk is sent to the same URL as the first chunk (set in options.add), overwriting it.

How can I block the send of a chunk so I can change the URL?




Amazon SQS Get Message Attributes

I am trying to fetch some attributes from an Amazon SQS message.

I use the following snippet in Java (Eclipse with the Amazon SDK):

{
    ...
    while (timer < 2) {
        // Receive new messages from all input Queues
        ReceiveMessageRequest receiveMessageRequestCreditCardTerminal = new ReceiveMessageRequest(creditCardTerminalToShopURL);
        ReceiveMessageRequest receiveMessageRequestShipping = new ReceiveMessageRequest(shippingToShopURL);
        ReceiveMessageRequest receiveMessageRequestSuggestion = new ReceiveMessageRequest(suggestionServerToShopURL);
        requestList.add(receiveMessageRequestCreditCardTerminal);
        requestList.add(receiveMessageRequestShipping);
        requestList.add(receiveMessageRequestSuggestion);
        System.out.println("Reached run method and in while loop");

        for (ReceiveMessageRequest r : requestList) {
            System.out.println("Reached for loop with r");
            List<Message> messagesList = sqs.receiveMessage(r.withMessageAttributeNames("All")).getMessages();
            for (Message m : messagesList) {
                System.out.println("Reached run method and in for loop with message");
                System.out.println("Message: " + m.getMessageId());
                System.out.println("Attributes: " + m.getAttributes());
                ...
            }
        }
    }
}

But the attributes printout returns an empty array... What can I do? I'm really desperate -.-'




Weighted round robin DNS between 2 CloudFront distributions

We are trying to use AWS to do a gradual deployment test with our JavaScript code, but it seems to be failing us.

We created 2 S3 buckets with CloudFront distributions:

a.example.net -> aaa.cloudfront.net

b.example.net -> bbb.cloudfront.net

Then we created a weighted round robin DNS entry in Route 53:

test.example.net -> (cname) -> aaa.cloudfront.net (5)

test.example.net -> (cname) -> bbb.cloudfront.net (95)

In each S3 bucket we put a file with the corresponding CloudFront domain name for that bucket:

http://ift.tt/1crudSt

What I am expecting is to get bbb 95% of the time and d3nrwpaeicu4xy 5% of the time. What we actually get is aaa 100% of the time :(

I opened a ticket with the Route 53 team to check if this is a problem with the DNS configuration, but they have shown me, and I have seen it myself, that the DNS queries split between the 2 buckets.

Hope this is clear enough.




Amazon SQS Error: InvalidParameterValue

I've got a problem with Amazon SQS in combination with Eclipse. When I run this method:

public void askForPayment(int userId, double amount) {
    SendMessageRequest request = new SendMessageRequest(shopToCreditCardTerminalURL, "");
    request.addMessageAttributesEntry("type", new MessageAttributeValue().withStringValue("@AskForPayment"));
    request.addMessageAttributesEntry("userID", new MessageAttributeValue().withStringValue(Integer.toString(userId)));
    System.out.println(Double.toString(amount));
    request.addMessageAttributesEntry("amount", new MessageAttributeValue().withStringValue(Double.toString(amount)));
    sqs.sendMessage(request);
}

I get the following error:

Exception in thread "main" com.amazonaws.AmazonServiceException: The message attribute 'amount' must contain non-empty message attribute type. (Service: AmazonSQS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 69ee2891-1a47-5511-982f-6575ebc8a1ed)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1160)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:748)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:467)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:302)
at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:2422)
at com.amazonaws.services.sqs.AmazonSQSClient.sendMessage(AmazonSQSClient.java:1015)
at de.patrick.onlineshop.Onlineshop.test(Onlineshop.java:271)
at de.patrick.onlineshop.Onlineshop.main(Onlineshop.java:63)

I don't see what the problem is. Does anybody have any ideas?

:)




Amazon SQS getMessageID

Amazon SQS only guarantees at-least-once delivery, so I have to check whether a message has been delivered a second time. Is it possible to identify a duplicate of a unique message with the getMessageId() method, so that I can store the string in a data structure and match against it?
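
As far as I know the MessageId assigned when a message is sent stays the same across redeliveries, so keeping the IDs of already-processed messages in a data structure (a set for a single consumer, a shared table if there are several) and matching against it does work. A minimal single-consumer sketch with boto, where the region, queue name, and the print placeholder for real processing are assumptions:

    import boto.sqs

    # Sketch: de-duplicate at-least-once deliveries by remembering MessageIds.
    # Region and queue name below are placeholders.
    conn = boto.sqs.connect_to_region("eu-west-1")
    queue = conn.get_queue("my-queue")

    seen_ids = set()  # IDs of messages already processed by this consumer

    while True:
        for message in queue.get_messages(num_messages=10, wait_time_seconds=10):
            if message.id in seen_ids:
                queue.delete_message(message)   # duplicate redelivery, drop it
                continue
            seen_ids.add(message.id)
            print(message.get_body())           # placeholder for real processing
            queue.delete_message(message)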




Error deploying Play Framework on AWS Beanstalk Docker

I'm running a Play Framework app on AWS Beanstalk with Docker (64bit Amazon Linux 2015.03 v1.4.1 running Docker 1.6.0).

Dockerfile:

FROM relateiq/oracle-java8
MAINTAINER XXXX
EXPOSE 9000
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
RUN ["chmod", "+x", "bin/app"]
USER daemon
ENTRYPOINT ["bin/app"]
CMD []

Dockerrun.aws.json

{
   "AWSEBDockerrunVersion": "1",
   "Ports": [{
       "ContainerPort": "9000"
   }]
}

When the instance first starts I get about 1 minute where it is deployed as normal; then, after I browse a few pages, the error shows:

502 Bad Gateway

nginx/1.6.2

The error in the ElasticBeanstalk logs is:

Play server process ID is 1 This application is already running (Or delete /opt/docker/RUNNING_PID file).

I also get the following messages in /var/log/docker-events.log every 30 seconds:

2015-05-30T20:07:58.000000000Z d0425e47095e5e2637263a0fe9b49ed759f130f31c041368ea48ce3d99d1e947: (from aws_beanstalk/current-app:latest) start
2015-05-30T20:08:15.000000000Z d0425e47095e5e2637263a0fe9b49ed759f130f31c041368ea48ce3d99d1e947: (from aws_beanstalk/current-app:latest) die
2015-05-30T20:08:16.000000000Z d0425e47095e5e2637263a0fe9b49ed759f130f31c041368ea48ce3d99d1e947: (from aws_beanstalk/current-app:latest) start
2015-05-30T20:08:31.000000000Z d0425e47095e5e2637263a0fe9b49ed759f130f31c041368ea48ce3d99d1e947: (from aws_beanstalk/current-app:latest) die

Can anyone see my issue? Cheers.




Amazon EC2 instance local data sharing

Can EC2 local data be shared like S3?

For example, I have a text file in my EC2 instance home directory, e.g. home\mydata.txt. I want to share mydata.txt as a public URL. I know it can be done by copying it to S3, but I want to check this option as well.

something like : http://ift.tt/1dEstad
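
It can be done without S3 if you are happy to serve the file over HTTP from the instance itself and open the port in the instance's security group. A throwaway sketch with Python's built-in server, assuming the file lives in a dedicated ~/public directory so nothing else in the home directory gets exposed:

    # serve_public.py - share files from ~/public over plain HTTP (Python 2).
    # Assumes the security group allows inbound TCP 8000 from the internet.
    import os
    import SimpleHTTPServer
    import SocketServer

    os.chdir(os.path.expanduser("~/public"))   # only this directory is shared
    httpd = SocketServer.TCPServer(("0.0.0.0", 8000),
                                   SimpleHTTPServer.SimpleHTTPRequestHandler)
    httpd.serve_forever()
    # mydata.txt is then reachable at http://<instance-public-ip>:8000/mydata.txt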




Amazon S3 bucket returns only "top level" keys

I am facing a problem where the AWS SDK's ListObjects only returns keys of "top level folders" (I know there are no such things as folders in a bucket). In my bucket I store images in different resolutions. The "folder structure" in the bucket looks like:

year/month/resolution/file in this resolution (of the image when it's uploaded)

If an original image is updated, all images in smaller resolutions should be deleted. Therefore I want to get all occurrences of an image in the bucket. I use the following code snippet to do this, and for top level keys it works fine.

using (IAmazonS3 amazonS3Client =     Amazon.AWSClientFactory.CreateAmazonS3Client(AWSAccessKey, AWSSecretKey, Amazon.RegionEndpoint.EUWest1))
{
    ListObjectsRequest S3ListObjectRequest = new ListObjectsRequest();
    S3ListObjectRequest.BucketName = "my_bucket";
    S3ListObjectRequest.Delimiter = "LL.jpg";

    ListObjectsResponse listObjectRequest = amazonS3Client.ListObjects(S3ListObjectRequest);

    foreach (string S3BucketDir in listObjectRequest.CommonPrefixes)
    {
        //delete image
    }
}

For LL.jpg I get the following CommonPrefixes:

 - 2014/07/T120x120/LL.jpg
 - 2014/07/T160x160/LL.jpg
 - 2014/07/T320x320/LL.jpg
 - 2014/07/T640x640/LL.jpg
 - 2014/07/T76x76/LL.jpg
 - 2014/07/T80x80/LL.jpg

But for orp.jpg I should get:

 - 2015/05/T120x120/orp.jpg
 - 2015/05/T160x160/orp.jpg
 - 2015/05/T640x640/orp.jpg

However, it is empty every time.

If I set the prefix

S3ListObjectRequest.Prefix = "2015/05";

the 3 CommonPrefixes are returned. (If I only use '2015' as the prefix it's also empty, because ListObjects only searches in 2015/01.)

Thank you for your help.
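
For comparison, here is the same lookup done by listing on a prefix and filtering on the key suffix, instead of passing the file name as the Delimiter; note also that a single ListObjects response contains at most 1000 entries, so listing without a narrowing prefix needs pagination. The sketch is in Python with boto purely for illustration; the bucket name and file names are the ones from the question:

    import boto

    # Illustration only: find every stored resolution of one image by listing on a
    # prefix and filtering on the key suffix. boto's bucket.list() iterates through
    # the pages that raw ListObjects (max 1000 entries per call) would need.
    conn = boto.connect_s3()                  # credentials from the environment/.boto
    bucket = conn.get_bucket("my_bucket")

    def resolutions_of(image_name, prefix=""):
        # e.g. resolutions_of("orp.jpg", "2015/05/") -> ["2015/05/T120x120/orp.jpg", ...]
        return [key.name
                for key in bucket.list(prefix=prefix)
                if key.name.endswith("/" + image_name)]

    for key_name in resolutions_of("orp.jpg", "2015/05/"):
        bucket.delete_key(key_name)           # remove the smaller resolutions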




Saturday, May 30, 2015

AWS sdk ruby - get instance type specs

Is it possible to get the machine specs from the instance type?

get_spec("t1.small") => {CPU: 64, RAM: 8, ..., HVM: true}. Is there such a method?




How to send server-side notifications using Rails & AWS to a PhoneGap device

I'm searching for a solution for sending server-side notifications from Rails to a Cordova app, for a certain set of devices, at a particular point in time.

Let's say users place bids for an item. Each time a bid is placed on that item, every user who posted a bid needs to be notified. The notification needs to take the form of a JS callback.

Now I'm digging through examples of AWS SNS, but I fear it doesn't fit my purpose. The flow on AWS SNS is roughly this one:

Platform_application --> Platform_endpoint --> subscription for a topic



require 'rubygems'
require 'aws-sdk'


sns = Aws::SNS::Client.new(
  access_key_id: 'X',
  secret_access_key: 'X',
  region: 'X',
  ssl_ca_bundle: 'c:\tmp\ca-bundle.crt'
)

# create platform application
platform_app = sns.create_platform_application(
  # required
  name: "parking-space-web",
  # required
  platform: "GCM",
  # required
  attributes:
    { :PlatformCredential => "google_api_key" ,
      :PlatformPrincipal => "" }
)

puts platform_app['platform_application_arn']

#create endpoint
endpoint = sns.create_platform_endpoint(
  # required
  platform_application_arn: platform_app['platform_application_arn'],
  # required
  token: "app1"
)

# subscribe to topic
subscription = sns.subscribe(
  # required
  topic_arn: "arn:aws:topic:arn",
  # required
  #I can choose whatever protocol I want but the physical notification will    just be a call made via that specific protocol ( http/email ).
  protocol: "application", 
  endpoint: endpoint['endpoint_arn'],
)

How is that useful to me? I'm publishing a message via http/email which is plainly sent to multiple http/email subscribers. If I needed that I would simply make the http/email requests myself. What's the advantage of SNS?

I figure that the real deal with SNS is the 'application' protocol, which uses the vendor API keys (GCM, APNS, ADM, etc.) to send notifications to/from the specific platforms, but that doesn't help me much when using Cordova. I have to install a custom plugin to intercept those notifications. Not bad, but I figure there's a cleaner solution.

Given what I found it seems that AWS SQS is the best solution.

  1. Can AWS SQS deliver messages to multiple recipients (topic-like)?
  2. Do messages persist and get delivered when client comes back online?
  3. Is it feasible to create one queue for each item, and publish a message each time a bid is placed? This will result in a LOT of queues being created.



How to create an image file from dataURL to upload to S3?

I am using the JavaScript AWS SDK to upload an image to S3. Before uploading the image, I am resizing it using canvas (source: this question).

The resize works well; however, I need to create a new file object, like the one obtained from var file = files[0];, that contains the newly resized dataURL.

Files are uploaded to S3 like so:

var params = { Key: fileName, ContentType: file.type, Body: file, ServerSideEncryption: 'AES256' };

bucket.putObject(params, function(err, data) {...}

Replacing Body: file with Body: dataURL results in a corrupted image file that cannot be viewed.

My question is, how can I create a file with the new dataURL that can be uploaded to S3?

I've tried using dataURLtoBlob() to create a blob from a dataURL to be submitted instead of file. The uploaded image file was corrupted and not viewable.

function dataURLtoBlob(dataurl) {
    var arr = dataurl.split(','), mime = arr[0].match(/:(.*?);/)[1],
        bstr = atob(arr[1]), n = bstr.length, u8arr = new Uint8Array(n);
    while(n--){
        u8arr[n] = bstr.charCodeAt(n);
    }
    return new Blob([u8arr], {type:mime});
}

...

var newFile = dataURLtoBlob(dataURL);

// send newFile instead of file

Full code:

var files = element[0].files;
var file = files[0];

var img = document.createElement("img");

var reader = new FileReader();
reader.onloadend = function() {
  img.src = reader.result;
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext("2d");
  ctx.drawImage(img, 0, 0);

  var MAX_WIDTH = 800;
  var MAX_HEIGHT = 600;
  var width = img.width;
  var height = img.height;

  if (width > height) {
    if (width > MAX_WIDTH) {
      height *= MAX_WIDTH / width;
      width = MAX_WIDTH;
    }
  } else {
    if (height > MAX_HEIGHT) {
      width *= MAX_HEIGHT / height;
      height = MAX_HEIGHT;
    }
  }
  canvas.width = width;
  canvas.height = height;
  var ctx = canvas.getContext("2d");
  ctx.drawImage(img, 0, 0, width, height);

  var dataURL = canvas.toDataURL("image/png");

  // need to send a file object with this new dataURL as a function argument here

};
reader.readAsDataURL(file); 




How to deploy to an autoscaling group with only one active node without downtime

There are two questions about AWS autoscaling + deployment which I cannot clearly answer:

  1. I'm currently trying to figure out what the best strategy is to deploy to an EC2 instance behind an ELB which is the only member of an autoscaling group, without downtime.

By now the EC2 setup is done with Puppet, including the deployment of the application, triggered after a successful build by Jenkins.

The best solution I have found is to check per script how many instances are registered at the ELB. If a single one is registered, spawn a new one, which runs Puppet on startup (so the new node will be up to date), and kill the old node.

  2. How to deploy (autoscaling EC2 behind an ELB) without delivering two different versions of the application?

Possible solution: check per script how many EC2 instances are registered with the ELB, spawn the same number of instances, register all new instances and deregister all old ones.

My experience with AWS has taught me that AWS has a service for everything. So is there a service out there to accomplish my requirements, making my solutions unnecessary?




Deis: Failed Initializing SSH client

I am a Deis newbie trying to set it up on AWS. It seems to go well, but in the end it fails with SSH client errors.

It first failed with SSH_AUTH_SOCK not being set. Then I manually started ssh-agent with: eval ssh-agent $SHELL.

Next, it fails with the following error message:

Your Deis cluster has been successfully deployed to AWS CloudFormation and is started.
Please continue to follow the instructions in the documentation.
Enabling proxy protocol
Error: failed initializing SSH client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
#
# Enabling proxy protocol failed, please enable proxy protocol
# manually after finishing your deis cluster installation.
#
# deisctl config router set proxyProtocol=1
#

What am I missing? Is there any part of the setup that deals with this ssh client issue that I missed?

I am running the provisioning script from Ubuntu 14.04 LTS as root.




Sharing xlsm file without option to download, on something like aws

I have an xlsm file and need to share it with both read and write access, but without the option to download it. I have an Amazon Web Services account and I also tried Amazon WorkSpaces, but I was not able to move ahead. Is there a better way to do this that is freely available? Thanks.




DynamoDB: Store array of IDs and do a batchGet for each ID

In DynamoDB, I have a Groups table and a Users table. One or many Users can belong to a Group.

Using DynamoDB, is it possible to perform one query to get a single Group by ID, and also all of the Users in that Group by the User IDs in that Group record?

If not, what is the most efficient way to do this?
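
There is no join or single-request "follow the reference" call in DynamoDB, so as far as I know this takes two requests: a GetItem for the group and then one BatchGetItem (up to 100 keys per call) for all the user IDs it contains. A sketch with boto, where the table names, key names, and the user_ids attribute are assumptions:

    from boto.dynamodb2.table import Table

    # Sketch: Groups is assumed to be keyed by "id" and to store a "user_ids"
    # set/list attribute; Users is assumed to be keyed by "id".
    groups = Table("Groups")
    users = Table("Users")

    group = groups.get_item(id="group-123")                 # one GetItem for the group
    user_keys = [{"id": user_id} for user_id in group["user_ids"]]

    # One BatchGetItem for all members (BatchGetItem accepts up to 100 keys per call).
    for user in users.batch_get(keys=user_keys):
        print(user["id"], user["name"])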




Forbidden access to S3 using paperclip and fog

When using fog via paperclip with the following configuration:

config.paperclip_defaults = {
  :storage => :fog,
  :fog_credentials => {
    :provider => 'AWS',
    :aws_access_key_id => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    :region => 'eu-central-1'
  },
  :fog_directory => ENV['FOG_DIRECTORY']
}

Access to S3 fails with the following error:

Excon::Errors::Forbidden: Expected(200) <=> Actual(403 Forbidden)
SignatureDoesNotMatch - The request signature we calculated does not match the signature you provided. Check your key and signing method.

Logging directly with the awscli tools using the same credentials and setting the same region works. I double checked the keys. Also, aws s3api get-bucket-location --bucket mybucket returns eu-central-1.




Trouble setting up a Loopback production host on AWS EC2

I'm having trouble setting up a StrongLoop LoopBack production host on AWS EC2. I'm following these directions.

This is what I tried. I created an EC2 server that's a Ubuntu Server 14.04 LTS. Then I:

$ ssh -i ~/mykey.pem ubuntu@[ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install build-essential
$ curl -o- http://ift.tt/1HCBTdk | bash
$ nvm install v0.12.4
$ nvm alias default 0.12.4
$ npm install -g strong-pm

$ sudo sl-pm-install
sudo: sl-pm-install: command not found

$ sl-pm-install
Error adding user strong-pm:
useradd: Permission denied.
useradd: cannot lock /etc/passwd; try again later.
Error installing service 'undefined': Command failed: /usr/sbin/useradd --home /var/lib/strong-pm --shell /bin/false --skel /dev/null --create-home --user-group --system strong-pm
useradd: Permission denied.
useradd: cannot lock /etc/passwd; try again later.

As you can see, I cannot install the standalone StrongLoop Process Manager module as "ubuntu" or by using "sudo." This made me wonder if I should be installing everything as root, but I ran into trouble with this approach as well:

$ sudo su
# curl -o- http://ift.tt/1HCBTdk | bash
# nvm install v0.12.4
# nvm alias default 0.12.4

When trying to install the standalone StrongLoop Process Manager module as root, I got the following error:

# npm install -g strong-pm
> heapdump@0.3.5 install /root/.nvm/versions/node/v0.12.4/lib/node_modules/strong-pm/node_modules/strong-runner/node_modules/strong-supervisor/node_modules/heapdump
> node-gyp rebuild

sh: 1: node-gyp: Permission denied
npm WARN optional dep failed, continuing heapdump@0.3.5

> strong-fork-syslog@1.2.3 install /root/.nvm/versions/node/v0.12.4/lib/node_modules/strong-pm/node_modules/strong-runner/node_modules/strong-supervisor/node_modules/strong-fork-syslog
> node-gyp rebuild

sh: 1: node-gyp: Permission denied
npm WARN optional dep failed, continuing strong-fork-syslog@1.2.3

> strong-agent@1.5.1 install /root/.nvm/versions/node/v0.12.4/lib/node_modules/strong-pm/node_modules/strong-runner/node_modules/strong-supervisor/node_modules/strong-agent
> node-gyp rebuild || exit 0

sh: 1: node-gyp: Permission denied
/
> sqlite3@3.0.8 install /root/.nvm/versions/node/v0.12.4/lib/node_modules/strong-pm/node_modules/strong-mesh-models/node_modules/minkelite/node_modules/sqlite3
> node-pre-gyp install --fallback-to-build

sh: 1: node-pre-gyp: Permission denied
npm ERR! Linux 3.13.0-48-generic
npm ERR! argv "/root/.nvm/versions/node/v0.12.4/bin/node" "/root/.nvm/versions/node/v0.12.4/bin/npm" "install" "-g" "strong-pm"
npm ERR! node v0.12.4
npm ERR! npm  v2.10.1
npm ERR! file sh
npm ERR! code ELIFECYCLE
npm ERR! errno ENOENT
npm ERR! syscall spawn

npm ERR! sqlite3@3.0.8 install: `node-pre-gyp install --fallback-to-build`
npm ERR! spawn ENOENT
npm ERR! 
npm ERR! Failed at the sqlite3@3.0.8 install script 'node-pre-gyp install --fallback-to-build'.
npm ERR! This is most likely a problem with the sqlite3 package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node-pre-gyp install --fallback-to-build
npm ERR! You can get their info via:
npm ERR!     npm owner ls sqlite3
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /home/ubuntu/npm-debug.log

What's a proper way to set up a StrongLoop LoopBack production host on AWS EC2? How's it done?




Reverting DEIS in AWS

I am trying to set up Deis in AWS. I am in the process of learning, and it is expected that I will have to provision and set up many times before I master things.

Let's say I run provision-ec2-cluster and, for some reason, I want to revert everything that was done in AWS (delete the VPC, delete instances, scaling rules, security groups, etc.).

What is the easiest way to do that? Does Deis come with a script that can help me in this respect?




Amazon SNS Mobile Push - How to send/publish to multiple endpoints?

I want to send messages to multiple users/endpoint ARNs selected from a MySQL database, without using a topic, using the PHP SDK.
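
The call pattern itself is the same in every SDK: instead of publishing to a topic, you publish once per endpoint, passing the endpoint ARN as the TargetArn. The question asks about the PHP SDK; the sketch below uses Python's boto only to illustrate the loop, and the region and endpoint ARNs are placeholders assumed to have been read from the MySQL table already:

    import boto.sns

    # Sketch: publish the same message directly to a list of endpoint ARNs
    # (TargetArn) rather than to a topic. Region and ARNs are placeholders.
    sns = boto.sns.connect_to_region("us-east-1")

    endpoint_arns = [
        "arn:aws:sns:us-east-1:123456789012:endpoint/GCM/my-app/example-1",
        "arn:aws:sns:us-east-1:123456789012:endpoint/GCM/my-app/example-2",
    ]

    for arn in endpoint_arns:
        sns.publish(target_arn=arn, message="Hello from the server")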




Pyboto: remove an IP-based rule from a security group

I've been reading through the Pyboto documentation, and whilst I know how to add an IP-based rule to a security group, I have not been able to find a method to remove an IP-based rule. The remove_rule method on a security group object doesn't update the security group on the EC2 side, so I'm not sure that will help me either.

Has anyone accomplished this before?
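
In boto (which I assume is what Pyboto refers to), removing a rule is done server-side with the revoke call rather than by editing the local object; remove_rule only changes the in-memory copy. A minimal sketch, where the region, group name, and the tcp/22 rule are just examples:

    import boto.ec2

    # Sketch: revoke an ingress rule so the change is applied in EC2 itself.
    conn = boto.ec2.connect_to_region("us-east-1")
    group = conn.get_all_security_groups(groupnames=["my-security-group"])[0]

    group.revoke(ip_protocol="tcp", from_port=22, to_port=22, cidr_ip="0.0.0.0/0")

    # Equivalent call on the connection object:
    # conn.revoke_security_group(group_name="my-security-group", ip_protocol="tcp",
    #                            from_port=22, to_port=22, cidr_ip="0.0.0.0/0")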




Friday, May 29, 2015

Streaming Twitter to Kinesis: max size

I am going to be streaming live tweets as JSON to Amazon Kinesis, but Kinesis only accepts a max data size of 50Mb per file. What is the largest that a tweet as JSON with metadata can be, so I know whether I need to zip or not?




AWS CloudFormation: How to get subnet list from VPC?

In CloudFormation, I'm creating a VPC, two EC2 instances, and an ElastiCache cluster in front of them. In the template, I'm trying to add the ElastiCache cluster to the VPC. The problem is happening in creating the AWS::ElastiCache::SubnetGroup:

    "CacheSubnetGroup" : {
      "Type" : "AWS::ElastiCache::SubnetGroup",
      "Properties" : {
        "Description" : "Subnets available for the ElastiCache Cluster",
        "SubnetIds" : [ ... ]
      }
    },

I do not want to ask the user to input the subnet list as suggested here, because I'm assuming the user doesn't know what a subnet is. Is there any function similar to { "Fn::GetAtt" : ["myVpc", "SubnetList"] }?




Troubleshoot UnknownResourceException when following AWS tutorial

I'm attempting to follow this AWS tutorial, but I'm having trouble at "You can run GreeterWorker successfully at this point," as I'm getting an UnknownResourceException.

Exception in thread "main" com.amazonaws.services.simpleworkflow.model.UnknownResourceException: Unknown domain: helloWorldWalkthrough (Service: AmazonSimpleWorkflow; Status Code: 400; Error Code: UnknownResourceFault; Request ID: xxxxx)

Steps taken

  • Resolved the permission exception by attaching the SimpleWorkflowFullAccess IAM policy to my AWS user.
  • Verified that helloWorldWalkthrough is registered on the SWF dashboard.
  • Registered a new helloWorldWalkthrough2 domain; the same error occurred.

The tutorial didn't cover the step about attaching the SimpleWorkflowFullAccess policy to the AWS user, so I'm wondering if there is a similar undocumented step to allow my user to find this domain.

My code is copy/pasted from the GreeterWorker class in the tutorial.

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflow;
import com.amazonaws.services.simpleworkflow.AmazonSimpleWorkflowClient;
import com.amazonaws.services.simpleworkflow.flow.ActivityWorker;
import com.amazonaws.services.simpleworkflow.flow.WorkflowWorker;

public class GreeterWorker  {
   public static void main(String[] args) throws Exception {
     ClientConfiguration config = new ClientConfiguration().withSocketTimeout(70*1000);

     String swfAccessId = System.getenv("AWS_ACCESS_KEY_ID");
     String swfSecretKey = System.getenv("AWS_SECRET_KEY");
     AWSCredentials awsCredentials = new BasicAWSCredentials(swfAccessId, swfSecretKey);

     AmazonSimpleWorkflow service = new AmazonSimpleWorkflowClient(awsCredentials, config);
     service.setEndpoint("http://ift.tt/1KDc82N");

     String domain = "helloWorldWalkthrough";
     String taskListToPoll = "HelloWorldList";

     ActivityWorker aw = new ActivityWorker(service, domain, taskListToPoll);
     aw.addActivitiesImplementation(new GreeterActivitiesImpl());
     aw.start();

     WorkflowWorker wfw = new WorkflowWorker(service, domain, taskListToPoll);
     wfw.addWorkflowImplementationType(GreeterWorkflowImpl.class);
     wfw.start();
   }
}




Shutdown scripts to run upon AWS termination

I am trying to get some scripts to run upon an AWS termination action. I have created /etc/init.d/Script.sh and linked it symbolically to /etc/rc01.d/K01Script.sh.

However, terminating through the AWS console did not produce the output I was looking for. (It is a script that does a quick API call to a server over https and should take only a few seconds.)

Then I tried again but specifically changed a kernel parameter: 'sudo sysctl -w kernel.poweroff_cmd=/etc/rc0.d/K01Script.sh'

and again no output.

I get the message "The system is going down for power off NOW!" when terminating the server so I'm pretty sure the Ubuntu server is going into runlevel 0. Permissions are owned by root.

I know I could create a lifecycle hook to do something like this, but my team prefers the quick and dirty way.

any help very much appreciated!




multiple accounts linked with the same route53

I want to link one Route 53 setup (with multiple hosted zones) to two different accounts on Amazon AWS. Is it possible? How can I do it?




SailsJS is unstable on Amazon elastic beanstalk

I'm using SailsJS on an Elastic Beanstalk auto-scaling deployment, but things are misbehaving; it seems very unstable.

For example, (seemingly) out of the blue the following custom model method, which had been running fine for the last 3 months or so, stopped working:

var obj = this.toObject();
obj.permissions = obj.getPermissions();

Changing the code to

var obj = this.toObject();
obj.permissions = this.getPermissions();

fixed the problem, but only after bringing the site down for a couple of hours.

Another example

User.findOne({ id: 'someIDstring' }, function(err, user) { ... });

Suddenly started returning a user model with its associations populated with embedded objects... which when saved started throwing waterline errors due to the embedded records.

My guess is that the dependencies of Sails are being updated when Elastic Beanstalk spins up new servers, and some of those dependencies are changing the way that Sails runs.

Or I'm completely off the mark and something else is happening. Either way I'm getting very nervous that a rather busy site is going to fall over at any time.

Does anyone have any suggestions as to what's going on, or has had any similar experiences?

Many thanks.




Wavesurfer doesn't draw the wave, with a CORS error because of cookies

I use wavesurfer, and I get the following error:

XMLHttpRequest cannot load http://ift.tt/1KtTLdv. 
No 'Access-Control-Allow-Origin' header is present on the requested resource. 
Origin 'http://ift.tt/1KtU9sk' is therefore not allowed access. The response had HTTP status code 403.

The call is loaded, but the wave isn't drawn. I checked the network requests and found two requests for this call:

  1. 403 Forbidden
  2. 304 Not Modified

The code for loading the call is the following:

scope.wavesurfer.load(scope.url);

For the second request, I find there are cookies sent with it, as follows:

Cookie:__zlcmid=TAePb8mwejYLug; calltrk_referrer=https%3A//http://ift.tt/1AD957g; calltrk_landing=https%3A//http://ift.tt/1KtU6wO; calltrk_session_id_150722382=c16eaa33-386f-4ab3-ba8d-b3d0cff070ef; __utma=52313532.1896763581.1423186152.1427741816.1431536946.4; __utmz=52313532.1431536946.4.3.utmcsr=bigleap.com|utmccn=(referral)|utmcmd=referral|utmcct=/utahs-best-brightest/; _ga=GA1.2.1896763581.1423186152; CloudFront-Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9hdWRpb3RlbXAuZGVudGFsbWFya2V0aW5nLm5ldC8qIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNDMzMDE2ODQ5fX19XX0_; CloudFront-Signature=btJ4dYPe3Cv87mQZzb6dkYVOLRcKQbscJ3h-ZJgSWGikNi1nXLuYXCGIwsHJWbhdTRiP8Gjru0mIQyOJdCioOa4tP3sAOSGXl9Cy1T2bM1sahgWZZ3GSk6GMyi21TVy3YsxDEdTUoMipeE0b5CduzcpcquB3hjYtfOUwI6CIrsTXkhajrGAk1rg~6tItPqMtxgmwrRM1oM8th0UgxgPWwVD2pok1ecS5ylwOiXbnSETpQzgXqS0C37bT94KpvafCjaclqgQPNcXrZRqbK~HLh28Gd4IZ3pDzIr3GNe3lkDUVIBYbStDsGZtawnS53ASmGXl3rP~DrPKYlahYX~ajKg__; CloudFront-Key-Pair-Id=APKAJL5DFWOODOOKTH2A

I set these cookies using Node.js code as follows:

res.cookie('CloudFront-Policy',encodedCustomPolicy,{domain :cookieDomainName , path:'/', httpOnly:true,secure:true});
res.cookie('CloudFront-Signature',customPolicySignature,{domain :cookieDomainName , path:'/', httpOnly:true,secure:true});
res.cookie('CloudFront-Key-Pair-Id',cloudFrontKeyPairId,{domain :cookieDomainName , path:'/', httpOnly:true,secure:true});

So I need to send these three cookies with the first request in order to get the call and draw its wave.

  1. How can I send cookies with the first request?
  2. How can I set a header when I call wavesurfer's load function?



Can I use the default ubuntu user in EC2 to do git clone?

So I have spun up a simple Ubuntu EC2 instance with LAMP.
I installed everything with apt-get as root (sudo su).

Now I have to clone the repository of the website, i.e.:

git clone ...

inside a dir in /var/www/...

Can I do this as ubuntu, or should I create another user and do the git clone as that user?




PHP Fatal Error on 'php artisan migrate' on remote AWS EB instance: laravel.log: Permission denied

When I SSH into my AWS EB instance to run php artisan migrate, I get the following error message:

[Screenshot: PHP fatal error (laravel.log: Permission denied) when running php artisan migrate on the EB instance]

I am completely confused. First, I don't get this error on the local server. Second, what does a simple log file have to do with migrations anyway? They are ignored by git by default, so no log files are uploaded.

Sigh... Any ideas on how I can be allowed to run my php artisan migrate?




Access properties of AWS PowerShell cmdlet Get-EC2SecurityGroup so I can filter and assign?

  1. I am trying to get a list of security groups. (Successful - Using Get-EC2SecurityGroup)
  2. Get a list of the specific IPPermissions associated with each security group. ( Successful - Using (Get-EC2SecurityGroup).IpPermissions )
  3. Only return results where the FromPort = "xxx" ( Unsuccessful - Not sure how to access the FromPort property that is returned in the result list )

Ultimately what I am trying to accomplish is:

  1. Get a list of existing security groups, and loop through each group.

  2. While looping through each group, call the IpPermissions, and look for the specific FromPort "xxx".

  3. If the FromPort is a match, record the other properties: (FromPort, IpProtocol, IpRanges, ToPort, UserIdGroupPairs)

Problem I am having

  1. I am not sure how to do a loop using the amazon objects

  2. I can't seem to access the properties even though they appear to be named and have values.

  3. I have tried using -Filter with many different iterations, with no success.

  4. The documentation seems self-referencing, and the examples I have run across don't get down to this level of detail.

Results returned from (Get-EC2SecurityGroup).IpPermissions

FromPort         : 123
IpProtocol       : tcp
IpRanges         : {0.0.0.0/0}
ToPort           : 123
UserIdGroupPairs : {}



How to manage third party files in replicable EC2 instance?

I have a file from a library that is larger than 50 MB, so I cannot deploy it with Git to my instances. I include this file in some of my PHP scripts, so what should I do in order to keep my instance replicable and still include this file in my scripts?

I can store it in an S3 bucket, but I'm not sure if that's good practice (including external files).
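If the S3 route is acceptable, pulling the file once at provision time keeps the instance replicable without bloating the repository. A minimal Python (boto3) sketch, where the bucket, key and target path are placeholders:

    import boto3

    s3 = boto3.client('s3')

    # Fetch the large library once when the instance is provisioned,
    # so the Git repository never has to carry it.
    s3.download_file('my-deploy-assets',             # placeholder bucket
                     'vendor/big-library.phar',      # placeholder key
                     '/var/www/lib/big-library.phar')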




copy key pair to amazon

I have run into trouble

I have an EC2 instance. I connected to it via SSH. I wanted to set up a post hook for git, and accidentally removed authorized_keys from the /.ssh directory.

My question is: since I am still connected to my AWS instance, can I copy myKey.pem to the /.ssh directory?

I want to avoid the instance restore process.

Thank you in advance!




Private communications between AWS EC2 instances

I have three EC2 instances.
Each one contains a server.
Each server needs TCP ports 8181, 2181, 2888 and 3888 to be open on each machine.
Each server should be able to talk to the other servers via these ports.

I created a security group (for example, named sg-4d775c42) with the following rules:

Custom TCP Rule | TCP | 8181 | sg-4d775c42
Custom TCP Rule | TCP | 2181 | sg-4d775c42
Custom TCP Rule | TCP | 2888 | sg-4d775c42
Custom TCP Rule | TCP | 3888 | sg-4d775c42

I thought that these rules mean that each machine in the security group sg-4d775c42 can call the ports 8181, 2181, 2888 and 3888 of the other machines in the same group.

But it seems that this is not the case!

If I open the ports to the world, i.e. with the following rules:

Custom TCP Rule | TCP | 8181 | 0.0.0.0/0
Custom TCP Rule | TCP | 2181 | 0.0.0.0/0
Custom TCP Rule | TCP | 2888 | 0.0.0.0/0
Custom TCP Rule | TCP | 3888 | 0.0.0.0/0

then, of course, my servers can talk to each other.

Moreover, private communication between these 3 servers is only the first step. The second step will be to connect this ensemble of 3 servers, over another private network, to another instance.

So my question is the following:

How can I create (or simulate) a private network where these ports are open between my EC2 instances?
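For reference, a minimal Python (boto3) sketch of the self-referencing rules described above; the group ID and ports are taken from the question, and note that source-group rules only match traffic addressed to the instances' private IPs, not their public ones:

    import boto3

    ec2 = boto3.client('ec2', region_name='eu-west-1')  # region is a placeholder

    group_id = 'sg-4d775c42'
    for port in (8181, 2181, 2888, 3888):
        # Allow TCP on each port from members of the same security group
        ec2.authorize_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{
                'IpProtocol': 'tcp',
                'FromPort': port,
                'ToPort': port,
                'UserIdGroupPairs': [{'GroupId': group_id}],
            }])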




Data storage and retrieval framework for large organization

I work for a large company and we are finally getting around to integrating 'big data' into our practices. We receive terabytes of data from different retailers with both categorical and continuous data.

We currently are using a relational database service to handle our data, but I'm not satisfied with its connectivity capabilities. I'd like to store our data in a warehouse/database that is capable of connecting to R, SAS, SPSS, Python, and Tableau, at least. Basically I want the broadest connectivity possible.

Does anyone have any recs? AWS? NoSQL? Any help or pointers to somewhere that can help me would be great.




authentication for SSH into EC2 with new user failing

I am working with Chef on EC2 instances, and created a user data script to be passed in through the knife ec2 command, which creates a new user, copies the public key file from the default ec2-user and sets the correct ownership and permissions.

#!/bin/bash
CHEFUSER="$(date +%s | sha256sum | base64 | head -c 32)"
useradd $CHEFUSER
echo $CHEFUSER 'ALL=(ALL) NOPASSWD:ALL' | tee -a /etc/sudoers
cp -f /home/ec2-user/.ssh/authorized_keys /tmp/
chown $CHEFUSER /tmp/authorized_keys
runuser -l $CHEFUSER -c 'mkdir ~/.ssh/'
runuser -l $CHEFUSER -c 'mkdir ~/.aws/'
runuser -l $CHEFUSER -c 'chmod 700 ~/.ssh/'
runuser -l $CHEFUSER -c 'mv -f /tmp/authorized_keys ~/.ssh/'
runuser -l $CHEFUSER -c 'chmod 600 ~/.ssh/authorized_keys'

Checking ownership and permissions seems to return as expected after running the script:

# ls -l .ssh/authorized_keys
-rw-------. 1 NWYzMThiMDBmNzljOTgxZmU1NDE1ZmE0 root 396 May 29 11:28 .ssh/authorized_keys
# stat -c '%a %n' .ssh/
700 .ssh/
# stat -c '%a %n' .ssh/authorized_keys
600 .ssh/authorized_keys

If I SSH in with ec2-user and copy/paste the same commands as root (which is how the script runs according to Amazon), everything works fine and I can then SSH in with the new user.




AWS CloudFormation Template Builder Libraries

What libraries are available for constructing + validating AWS CloudFormation JSONs at a higher level of abstraction?

Googling has revealed troposphere; are there others (especially in languages other than Python)?




symfony amazon S3 database

I have a project in Symfony 2.6. I use the Amazon S3 service and my photos are uploaded to the Amazon server. I dump the photo URL like this: http://ift.tt/1ACvOAj, and my question is how do I write this URL to my database and read the photo back in my template?

Entity Photo

class Photo

{
    /**
     * @ORM\Id
     * @ORM\Column(type="integer")
     * @ORM\GeneratedValue(strategy="AUTO")
     */
    private $id;

/**
 * @ORM\Column(type="string")
 */
private $title;

/**
 * @Gedmo\Timestampable(on="create")
 * @ORM\Column(type="datetime", name="createdAt")
 */
protected $createdAt;

/**
 * @ORM\Column(name="photo_storage")
 * @Assert\File( maxSize="20M")
 */
private $photo;

and I have formtype for this entity

public function buildForm(FormBuilderInterface $builder, array $options)
{
    $builder
        ->add('title')
        ->add('photo', 'file', array(
            'label' => 'Photo',
        ))
        ->getForm();
}

public function setDefaultOptions(OptionsResolverInterface $resolver)
{
    $resolver->setDefaults(array(
        'data_class' => 'AppBundle\Entity\Photo',
    ));
}

public function getName()
{
    return 'photo';
}

and I my controller

    public function addAction(Request $request)
{
    $em = $this->getDoctrine()->getManager();

    $post = new Photo();

    $form = $this->createForm(new AddPhotoType(), array());


    if ($request->isMethod('POST')) {
        $form->bind($request);
        if ($form->isValid()) {
            $data = $form->getData();
            $url = sprintf(
                '%s/%s',
                $this->container->getParameter('acme_storage.amazon_s3.base_url'),
                $this->getPhotoUploader()->upload($data['photo'])
            );

            dump($url);
            $em->persist($post);
            $em->flush();

            return $this->redirect($this->get('router')->generate('homepage'));
        }
    }

    return array(
        "form" => $form->createView(),
    );
}

/**
 * @return \StorageBundle\Upload\PhotoUploader
 */
protected function getPhotoUploader()
{
    return $this->get('acme_storage.photo_uploader');
}

How do I upload photos to Amazon S3 (this works) and then save the URL to the local database?




Servlet not working: Error 500

I'm new to Tomcat, and when I try to access my project located at http://ift.tt/1JZ5d2C, I get the following error...

javax.servlet.ServletException: Servlet.init() for servlet EB-api threw exception
org.apache.catalina.authenticator.AuthenticatorBase.invoke (AuthenticatorBase.java:501)
org.apache.catalina.valves.ErrorReportValve.invoke (ErrorReportValve.java:98)
org.apache.catalina.valves.AccessLogValve.invoke (AccessLogValve.java:950)
org.apache.catalina.connector.CoyoteAdapter.service (CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process (AbstractHttp11Processor.java:1040)
org.apache.coyote.AbstractProtocol $ AbstractConnectionHandler.process (AbstractProtocol.java:607)
org.apache.tomcat.util.net.JIoEndpoint $ SocketProcessor.run (JIoEndpoint.java:313)
java.util.concurrent.ThreadPoolExecutor $ Worker.runTask (ThreadPoolExecutor.java:886)
java.util.concurrent.ThreadPoolExecutor $ Worker.run (ThreadPoolExecutor.java:908)
java.lang.Thread.run (Thread.java:619)

Root cause

java.lang.UnsupportedClassVersionError: telecom/sudparis/eu/paas/core/server/ressources/manager/application/ApplicationManagerRessource : Unsupported major.minor version 51.0 (unable to load class telecom.sudparis.eu.paas.core.server.ressources.manager.application.ApplicationManagerRessource)
org.apache.catalina.loader.WebappClassLoader.findClassInternal (WebappClassLoader.java:2948)
org.apache.catalina.loader.WebappClassLoader.findClass (WebappClassLoader.java:1208)
org.apache.catalina.loader.WebappClassLoader.loadClass (WebappClassLoader.java:1688)
org.apache.catalina.loader.WebappClassLoader.loadClass (WebappClassLoader.java:1569)
java.lang.Class.forName0 (Native Method)
java.lang.Class.forName (Class.java:247)
com.sun.jersey.core.reflection.ReflectionHelper.classForNameWithException (ReflectionHelper.java:238)
com.sun.jersey.spi.scanning.AnnotationScannerListener$AnnotatedClassVisitor.getClassForName(AnnotationScannerListener.java:214)
com.sun.jersey.spi.scanning.AnnotationScannerListener$AnnotatedClassVisitor.visitEnd(AnnotationScannerListener.java:183)
org.objectweb.asm.ClassReader.accept (Unknown Source)
org.objectweb.asm.ClassReader.accept (Unknown Source)
com.sun.jersey.spi.scanning.AnnotationScannerListener.onProcess (AnnotationScannerListener.java:133)
com.sun.jersey.core.spi.scanning.uri.FileSchemeScanner $ 1.f (FileSchemeScanner.java:86)
com.sun.jersey.core.util.Closing.f (Closing.java:71)
com.sun.jersey.core.spi.scanning.uri.FileSchemeScanner.scanDirectory (FileSchemeScanner.java:83)
com.sun.jersey.core.spi.scanning.uri.FileSchemeScanner.scanDirectory (FileSchemeScanner.java:80)
com.sun.jersey.core.spi.scanning.uri.FileSchemeScanner.scan (FileSchemeScanner.java:71)
com.sun.jersey.core.spi.scanning.PackageNamesScanner.scan (PackageNamesScanner.java:225)
com.sun.jersey.core.spi.scanning.PackageNamesScanner.scan (PackageNamesScanner.java:141)
com.sun.jersey.api.core.ScanningResourceConfig.init (ScanningResourceConfig.java:80)
com.sun.jersey.api.core.PackagesResourceConfig.init (PackagesResourceConfig.java:104)
com.sun.jersey.api.core.PackagesResourceConfig. <init> (PackagesResourceConfig.java:78)
com.sun.jersey.api.core.PackagesResourceConfig. <init> (PackagesResourceConfig.java:89)
com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig (WebComponent.java:700)
com.sun.jersey.spi.container.servlet.WebComponent.createResourceConfig (WebComponent.java:678)
com.sun.jersey.spi.container.servlet.WebComponent.init (WebComponent.java:203)
com.sun.jersey.spi.container.servlet.ServletContainer.init (ServletContainer.java:373)
com.sun.jersey.spi.container.servlet.ServletContainer.init (ServletContainer.java:556)
javax.servlet.GenericServlet.init (GenericServlet.java:158)
org.apache.catalina.authenticator.AuthenticatorBase.invoke (AuthenticatorBase.java:501)
org.apache.catalina.valves.ErrorReportValve.invoke (ErrorReportValve.java:98)
org.apache.catalina.valves.AccessLogValve.invoke (AccessLogValve.java:950)
org.apache.catalina.connector.CoyoteAdapter.service (CoyoteAdapter.java:408)
org.apache.coyote.http11.AbstractHttp11Processor.process (AbstractHttp11Processor.java:1040)
org.apache.coyote.AbstractProtocol $ AbstractConnectionHandler.process (AbstractProtocol.java:607)
org.apache.tomcat.util.net.JIoEndpoint $ SocketProcessor.run (JIoEndpoint.java:313)
java.util.concurrent.ThreadPoolExecutor $ Worker.runTask (ThreadPoolExecutor.java:886)
java.util.concurrent.ThreadPoolExecutor $ Worker.run (ThreadPoolExecutor.java:908)
java.lang.Thread.run (Thread.java:619)

I've got a web.xml file for this project...

web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://ift.tt/ra1lAU" xmlns="http://ift.tt/nSRXKP" xmlns:web="http://ift.tt/nSRXKP" xsi:schemaLocation="http://ift.tt/nSRXKP http://ift.tt/LU8AHS" id="WebApp_ID" version="2.5">
  <display-name>EB-PaaS REST API</display-name>
  <servlet>
    <servlet-name>EB-api</servlet-name>
    <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
    <init-param>
      <param-name>com.sun.jersey.config.property.packages</param-name>
      <param-value>telecom.sudparis.eu.paas.core.server.ressources.manager</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>EB-api</servlet-name>
    <url-pattern>/rest/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.jsp</welcome-file>
  </welcome-file-list>
</web-app>

Please can anybody help me as soon as possible? I really need it for my project.




Viewing console.log's of lambda function

Does anyone know how to get at the console.log output when running an AWS Lambda function?

It's fine when I run the function in its test environment, but I want to see the logs when I run it in a production environment.
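For what it's worth, console.log output from a Lambda function is written to CloudWatch Logs, in a log group named after the function. A minimal Python (boto3) sketch for pulling those entries, where the function name and region are placeholders:

    import boto3

    logs = boto3.client('logs', region_name='us-east-1')  # placeholder region

    # Lambda writes console.log output to /aws/lambda/<function-name>
    response = logs.filter_log_events(logGroupName='/aws/lambda/my-function')
    for event in response['events']:
        print('%s %s' % (event['timestamp'], event['message']))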




Using the public IP on AWS from java getCanonicalHostName method

I'm trying to use AWS as a scalable analytics tool. I'm using apache zeppelin as an interactive shell to a Spark cluster and trying to plot using wisp. This is causing a problem as the plotting approach in wisp is to start a web app based on what I think is a jetty server. This works well on my local machine but on AWS it does not work as it picks up the private IP address rather than the public one.

Within wisp, it uses java.net.InetAddress.getLocalHost.getCanonicalHostName to retrieve the IP address of the machine. This always returns the private FQDN. How can I make the Java call return the public IP address or FQDN that AWS provides, without hardcoding something in wisp and rebuilding every time I spin up a cluster?

I have tried changing /etc/hosts and /etc/hostname, but both have no effect. I don't really know where java.net.InetAddress.getLocalHost.getCanonicalHostName is getting its address from.
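For what it's worth, the public hostname AWS assigns to an instance is available from the EC2 instance metadata service. A minimal Python sketch of the lookup (the same HTTP call can be made from the JVM and the result fed to wisp instead of getCanonicalHostName):

    import urllib2  # Python 2; on Python 3 use urllib.request

    # The metadata service returns the public DNS name assigned to this instance
    METADATA_URL = 'http://169.254.169.254/latest/meta-data/public-hostname'
    public_hostname = urllib2.urlopen(METADATA_URL, timeout=2).read()
    print(public_hostname)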

Any help or advice greatly appreciated.

Dean




socket io server running on port 90 - AWS ec2 - security group port is open - yet can't connect

I have a running Node.js-based socket.io server on AWS EC2. This server is running on port 90, and I can run local tests against it on the same port.

netstat -a also shows port 90 as open for connections:

tcp        0      0 localhost:3001          *:*                     LISTEN     
tcp        0      0 *:90                    *:*                     LISTEN  

I can vouch for the fact that I have port 90 open in my security group settings, yet I cannot connect to my server on port 90. I am not doing anything as foolish as making my clients connect over localhost.

I have tried telnet to my server on port 90, but it doesn't work.

I have ports 22 and 80 open as well, and I can telnet to them just fine.




Querying Amazon product search API in Cloud Code

I would like to perform a product search on Amazon from Cloud Code. Looking around the web I have found only this snippet (on Reddit):

console.log('about to define AmazonHttpRequestPromise');

    var AmazonHttpRequestPromise = Parse.Cloud.httpRequest({
        url: url,
        params: {
          'AWSAccessKeyId': '*ACCESS KEY GOES HERE*',
          'AssociateTag': '*ASSOCIATE TAG GOES HERE*',
          'Keywords': 'harry+potter',
          'Operation': 'ItemSearch',
          'SearchIndex': 'Books',
          'Service': 'AWSECommerceService',
          'Version': '2013-08-01',
        }
      });

    console.log('AmazonHttpRequestPromise looks like this:' + AmazonHttpRequestPromise);

    // Attach the error handler to the promise instead of leaving a dangling function
    return AmazonHttpRequestPromise.then(function (httpResponse) {
      return httpResponse;
    }, function (err) {
      console.log('error!' + err);
      response.error('DAMN IT MAN');
    });

Questions:

1) Is this the best approach?

2) Do I need any special permission from Amazon for performing this operation and showing my user the results?




Azure website does not have web.config file

I deployed an empty web app on Azure. At this point I am only hosting a basic site, with only HTML, CSS and JS files. I uploaded and assigned an SSL certificate to my custom domain and would like to force HTTPS. The official documentation states to edit the web.config file to implement the rewrite rule; however, I do not have a web.config file, which the documentation says should be added by default.




How to change the installation path of the pip command? How to set the default Python directory for installing packages with pip

I have deployed my Django application on AWS EC2, and it has been running properly. I don't know what happened, but it's now giving me an error for django.core.management:

Traceback (most recent call last):
  File "/home/ec2-user/myproject/manage.py", line 8, in <module>
    from django.core.management import execute_from_command_line
ImportError: No module named django.core.management

I googled, looked through SO answers, and tried all the possible fixes. The issue I am facing is that I run cron jobs in my application, which I set up in settings.py; due to this error they do not get executed on time. What puzzles me is that the application itself is accessible and running properly; only the cron jobs hit this error.
One more issue I am facing is that the pip command is not working with the root user:

[ec2-user@ip-XXX-XXX-XXX-XXX ~]$ sudo pip install django
sudo: pip2.7: command not found

It runs with ec2-user, but due to lack of permissions that user cannot install or update modules.
I have two versions of Python, 2.6 and 2.7; whenever I try to install modules, they go into the python2.6 directory by default. I want to make Python 2.7 the default and have modules install into the 2.7 directory as well. Please suggest how to resolve this. There were some similar questions and I applied all the answers, but none worked. Thanks in advance.




Elastic Beanstalk Deployment Does not work for deploying changes

I deployed a Java application using Elastic Beanstalk. I had to make some changes to the application, but when I try to redeploy the changes using Elastic Beanstalk again, my changes aren't applied. If I deploy the same application to a new environment, I see the changes.

Am I missing something while redeploying? I am using the Eclipse AWS plugin to deploy.




Powershell S3 upload

Hoping you can help. The script below copies all files from a folder structure and then uploads them to an S3 bucket. However, I want it to be able to skip files that have not changed, to avoid duplicating the upload. Does anyone know how I can add an 'if file exists' check, or compare the last-modified time?

Import-Module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"
$bucket="bucketname"
$source="e:\dfs\*"
$outputpath="C:\temp\log.txt"
$AKey="xxxx"
$SKey="xxxx"

Set-AWSCredentials -AccessKey $AKey -SecretKey $SKey -StoreAs For_Move
Initialize-AWSDefaults -ProfileName For_Move -Region eu-west-1

Start-Transcript -path $outputpath -Force

foreach ($i in Get-ChildItem $source -include *.* -recurse)
{
    if ($i.CreationTime -lt ($(Get-Date).AddDays(-0)))
    {
        $fileName = (Get-ChildItem $i).Name
        $parentFolderName = Split-Path $i -Parent

        Write-S3Object -BucketName $bucket -Key dfs/$parentFolderName/$filename -File $i
    }
}




UpdatePolicy in Autoscaling group not working correctly for AWS CloudFormation update

I am using AWS CloudFormation to launch my server stack. I have created a LaunchConfig and then an Auto Scaling group that uses that launch config. I have set a CreationPolicy which waits for signals from my EC2 instances while creating the CF stack.

Also, I have set an UpdatePolicy on the Auto Scaling group to wait for signals from the new instances if I update the CF stack with a higher desired number of instances, as follows:

"UpdatePolicy" : {
        "AutoScalingRollingUpdate" : {
            "PauseTime" : "PT10M",
            "WaitOnResourceSignals" : "true"
        }
    }

According to the above, CF should wait for signals from newly launched instances (or time out) before setting the status of the CF stack to "UPDATE_COMPLETE".

But it is not working as explained above. The status of the CF stack immediately changes to "UPDATE_COMPLETE" without waiting for signals.

Please help.




OrderAcknowledgement not working (error 25)

I am trying to submit my OrderAcknowledgement in the AWS Scratchpad, but I get the following error:
"We are unable to process the XML feed because one or more items are invalid. Please re-submit the feed."
with error code 25. I might be overlooking something, but I can't figure out what, because as far as I can tell the XML is built the way the schema says it should be. The schema can be found here: Scheme. Can anyone see what is wrong with the XML?

<?xml version="1.0" encoding="UTF-8"?>
<AmazonEnvelope xmlns:xsi="http://ift.tt/ra1lAU" xsi:noNamespaceSchemaLocation="amzn-envelope.xsd">
    <Header>
        <DocumentVersion>1.01</DocumentVersion>
        <MerchantIdentifier>xxxxxxxxxxxxxxxxxx</MerchantIdentifier>
    </Header>
    <MessageType>OrderAcknowledgement</MessageType>
    <Message>
        <MessageID>1</MessageID>
        <OrderAcknowledgement>
            <AmazonOrderID>xxx-xxxxxxx-xxxxxxx</AmazonOrderID>
            <MerchantOrderID>xxxxxxx</MerchantOrderID>
            <StatusCode>Success</StatusCode>
            <Item>
                <AmazonOrderItemCode>xxxxxxxxxxxx</AmazonOrderItemCode>
                <MerchantOrderItemID>xxxxxx</MerchantOrderItemID>
            </Item>
        </OrderAcknowledgement>
    </Message>
</AmazonEnvelope>




Put multiple items into DynamoDB by Java code

I would like to use the batchWriteItem method of the Amazon SDK to put a lot of items into a table. I retrieve the items from Kinesis, and it has a lot of shards. I used this method for one item:

public static void addSingleRecord(Item thingRecord) {


    // Add an item
    try
    {

        DynamoDB dynamo = new DynamoDB(dynamoDB); 
        Table table = dynamo.getTable(dataTable);
        table.putItem(thingRecord);

    } catch (AmazonServiceException ase) {
        System.out.println("addThingsData request  "
                + "to AWS was rejected with an error response for some reason.");
        System.out.println("Error Message:    " + ase.getMessage());
        System.out.println("HTTP Status Code: " + ase.getStatusCode());
        System.out.println("AWS Error Code:   " + ase.getErrorCode());
        System.out.println("Error Type:       " + ase.getErrorType());
        System.out.println("Request ID:       " + ase.getRequestId());
    } catch (AmazonClientException ace) {
        System.out.println("addThingsData - Caught an AmazonClientException, which means the client encountered "
                + "a serious internal problem while trying to communicate with AWS, "
                + "such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
    }
}

public static void addThings(String thingDatum) {
    Item itemJ2;
    itemJ2 = Item.fromJSON(thingDatum);
    addSingleRecord(itemJ2);

}

The item is passed from:

private void processSingleRecord(Record record) {
    // TODO Add your own record processing logic here


    String data = null;
    try {


        // For this app, we interpret the payload as UTF-8 chars.
        data = decoder.decode(record.getData()).toString();
        System.out.println("**processSingleRecord - data " + data);
        AmazonDynamoDBSample.addThings(data);

    } catch (NumberFormatException e) {
        LOG.info("Record does not match sample record format. Ignoring record with data; " + data);
    } catch (CharacterCodingException e) {
        LOG.error("Malformed data: " + data, e);
    }
}

Now, if I want to put a lot of records, I will use:

// Add a new item to the data table
        TableWriteItems dataTableWriteItems = new TableWriteItems(dataTable)
            .withItemsToPut(thingRecord);
        System.out.println("Making the request.");
        // batchWriteItem lives on the document-API DynamoDB wrapper and takes the TableWriteItems built above
        DynamoDB dynamo = new DynamoDB(dynamoDB);
        BatchWriteItemOutcome outcome = dynamo.batchWriteItem(dataTableWriteItems);

but in the Amazon sample it used two tables with one item each, so this approach doesn't fit my case. How can I group the items before sending them? I have to be careful because I have a lot of shards and therefore a lot of threads.




How to route requests to specific instances in amazon?

I have a CNAME (abc.com) pointed to my Elastic IP and need to create three EC2 instances (e.g. Instance1, Instance2, Instance3) for three different applications.

Now I want to achieve the following: if a user hits "abc.com/App1", the request should be routed to Instance1; if a user hits "abc.com/App2", to Instance2; and if a user hits "abc.com/App3", to Instance3.

All these instances should work independently, and if any of them goes down, it should not impact the others. We can't use subdomains. I am trying to find a way to do this with ELB.




AWS RDS Parameter Group not changing MySQL encoding

I am running a MySQL database on RDS. I want to change all of my encodings to utf8mb4. I created a parameter group on RDS with all character_set_* parameters as utf8mb4, assigned it to my RDS instance, and then rebooted the instance. However, when I run SHOW VARIABLES LIKE '%char%' on my DB, there are still values of latin1, which I do not want:

character_set_client        latin1
character_set_connection    latin1
character_set_database      utf8mb4
character_set_filesystem    binary
character_set_results       latin1
character_set_server        utf8mb4
character_set_system        utf8
character_sets_dir          /rdsdbbin/mysql-5.6.22.R1/share/charsets/

Likewise, new columns that I create on the DB are latin1 encoded instead of utf8mb4 encoded. I can change the encoding values manually through the mysql command line, but this doesn't help since the values are also reset to latin1 when I push to production.
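One thing worth noting is that character_set_client, character_set_connection and character_set_results are negotiated per session, so they also depend on what the connecting client asks for. A minimal Python sketch (PyMySQL is an assumption; any connector with a charset option behaves similarly), with a placeholder endpoint and credentials:

    import pymysql  # assumption: PyMySQL is installed

    # Requesting utf8mb4 at connection time sets character_set_client,
    # character_set_connection and character_set_results for this session.
    conn = pymysql.connect(host='mydb.xxxxxxxx.us-east-1.rds.amazonaws.com',  # placeholder
                           user='admin', password='secret', db='mydb',
                           charset='utf8mb4')

    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'character_set_%'")
        for name, value in cur.fetchall():
            print('%s %s' % (name, value))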




jeudi 28 mai 2015

JAVA AWS Machine Learning API to enable Realtime prediction

Can someone help me with the name of the API that enables real-time prediction for a model? Please note that I am not asking for the RealtimeEndpointRequest object. I have gone through the entire documentation of the AWS Machine Learning SDK but haven't found anything.




php exec mysqldump to back up database in sql format

I'm trying to use exec() in PHP to run mysqldump to back up a database named projectdata in AWS (Amazon Web Services), but I can only create an empty SQL file.

I'm running the PHP file with XAMPP under Windows 7, where mysqldump is in C:\xampp\mysql\mysqldump.

Please help :

> exec('C:\xampp\mysql\mysqldump --user=user --password=password --host=cannotTellyou.amazonaws.com:3306 projectdata  > backup.sql');

Thanks for your attention.




How to connect via command line to a Hadoop cluster in AWS?

We just installed a CDH5 cluster in AWS using Cloudera Director, and everything is working now (I can use HUE and everything).

We went with the standard configuration (Master, Workers and Gateway). But now I want to use the cluster via the command line (against HDFS); what are the steps for doing that?

Thanks in advance




Ansible EC2 Dynamic inventory minimum IAM policies

Has someone figured out the minimum IAM policies required to run the EC2 dynamic inventory script (ec2.py) for Ansible via an IAM role?

So far, I haven't seen a concrete reference on this matter other than specifying credentials for the boto library in the official Ansible documentation. However, in production environments I rarely use key pairs for access to AWS services from EC2 instances; instead I have embraced IAM roles for that scenario.

I have tried policies allowing ec2:Describe* actions, but that doesn't seem to be enough for the script, as it always exits with an UnauthorizedOperation error.
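For reference, a hedged sketch of attaching an inline policy to the instance role with Python (boto3). Two common gotchas here are that EC2 Describe* actions only work with Resource set to "*", and that ec2.py will also call RDS/ElastiCache describe APIs if those lookups are enabled in ec2.ini; both points are assumptions about this particular setup, and the role and policy names are placeholders:

    import json
    import boto3

    iam = boto3.client('iam')

    # Describe* actions do not support resource-level permissions, so Resource must be "*".
    # The rds/elasticache statements are only needed if ec2.ini enables those lookups.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "rds:Describe*",
                "elasticache:Describe*"
            ],
            "Resource": "*"
        }]
    }

    iam.put_role_policy(
        RoleName='ansible-inventory-role',    # placeholder
        PolicyName='ec2-dynamic-inventory',   # placeholder
        PolicyDocument=json.dumps(policy))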

Could you help me out? Thank you.




Linux: huge files vs huge number of files

I am writing software in C, on Linux running on AWS, that has to handle 240 terabytes of data, in 72 million files.

The data will be spread across 24 or more nodes, so there will only be 10 terabytes on each node, and 3 million files per node.

Because I have to append data to each of these three million files every 60 seconds, the easiest and fastest thing to do would be to keep all of these files open at the same time.

I can't store the data in a database, because the performance in reading/writing the data will be too slow. I need to be able to read the data back very quickly.

My questions:

1) is it even possible to keep 3 million files open (see the sketch after this list for checking the per-process descriptor limit)

2) if it is possible, how much memory would it consume

3) if it is possible, would performance be terrible

4) if it is not possible, I will need to combine all of the individual files into a couple of dozen large files. Is there a maximum file size in Linux?

5) if it is not possible, what technique should I use to append data every 60 seconds, and keep track of it?
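A minimal Python sketch for checking question 1 on a given node; it only inspects the per-process and system-wide descriptor limits, which are the first hard constraints on holding millions of files open (the numbers printed are whatever the box is configured with, not recommendations):

    import resource

    # Per-process limit on open file descriptors (soft, hard)
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print('per-process RLIMIT_NOFILE: soft=%d hard=%d' % (soft, hard))

    # System-wide ceiling on open files
    with open('/proc/sys/fs/file-max') as f:
        print('system-wide fs.file-max: ' + f.read().strip())

    # The soft limit can be raised up to the hard limit without root;
    # anything beyond that needs ulimit/limits.conf changes as root.
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))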




AWS The request signature we calculated does not match the signature you provided. Check your key and signing method

I have searched on the web for over two days now, and probably have looked through most of the online documented scenarios and workarounds, but nothing worked for me so far.

I am on AWS SDK for PHP V2.8.7 running on PHP 5.3. I am trying to connect to my S3 bucket with the following code:

// Create a `Aws` object using a configuration file

        $aws = Aws::factory('config.php');

        // Get the client from the service locator by namespace
        $s3Client = $aws->get('s3');

        $bucket = "xxx";
        $keyname = "xxx";

        try {
            $result = $s3Client->putObject(array(
                'Bucket'        =>      $bucket,
                'Key'           =>      $keyname,
                'Body'          =>      'Hello World!'
            ));
            $file_error = false;
        } catch (Exception $e) {
            $file_error = true;
            echo $e->getMessage();
            die();
        }
        //  

My config.php file is as follows:

<?php

return array(
    // Bootstrap the configuration file with AWS specific features
    'includes' => array('_aws'),
    'services' => array(
        // All AWS clients extend from 'default_settings'. Here we are
        // overriding 'default_settings' with our default credentials and
        // providing a default region setting.
        'default_settings' => array(
            'params' => array(
                'credentials' => array(
                    'key'    => 'key',
                    'secret' => 'secret'
                )
            )
        )
    )
);

It is producing the following error:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

I've already checked my access key and secret at least 20 times, generated new ones, used different methods to pass in the information (i.e. profile and including credentials in code) but nothing is working at the moment.

Would appreciate any input from the community here - cheers!




aws sqs error may not be available in the eu-central-1

After heavy message processing, SQS suddenly stops working and fires this error:

[Screenshot of the SQS error stating the service may not be available in eu-central-1]

I don't understand; I'm 100% sure this service is available in this region, and I have the right access to the queues.

Can someone explain to me how this could happen?




Unable to provision an AWS SQL Server RDS instance in Multi AZ using boto

I'm trying to provision an SQL Server Standard Edition AWS RDS instance which is mirrored across two AZs using boto's rds2.

Whenever I call the create_db_instance method in boto.rds2.layer1.RDSConnection with the appropriate arguments, I keep getting the following error:

boto.exception.JSONResponseError: JSONResponseError: 400 Bad Request
{'RequestId': 'fdc54b48-0586-11e5-951d-c3153310155b', 'Error': {'Message': 'To configure Multi-AZ for SQL Server DB Instances please apply or remove the "Mirroring" option using Option Groups.', 'Code': 'InvalidParameterCombination', 'Type': 'Sender'}}

I've verified that I'm setting the option multi_az = True and the option_group_name is set to an option group which has mirroring enabled. Here's my call to create_db_instance. Are there any other settings which need to be set before I can provision this RDS instance which is mirrored?

conn.create_db_instance(db_instance_identifier=new_db_name,
                                    allocated_storage=allocated_storage,
                                    db_instance_class=rds_instance_class,
                                    master_username=master_username,
                                    master_user_password=master_password,
                                    port=port,
                                    engine=rds_engine,
                                    multi_az=rds_multi_az,
                                    auto_minor_version_upgrade=auto_minor_version_upgrade,
                                    db_subnet_group_name=rds_subnet_group,
                                    license_model=license_model,
                                    iops=iops,
                                    vpc_security_group_ids=rds_vpc_security_group,
                                    option_group_name=option_group_name
                                )

I'm also seeing another issue where I can either provision with IOPS or provision with Magnetic disks when I remove the iops option. But, I haven't figured out a way to provision with just General Purpose SSDs.
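Based on the error text above, one hedged thing to try is to drop multi_az entirely and let the "Mirroring" option in the option group drive the multi-AZ behaviour; for General Purpose SSDs, newer boto releases expose a storage_type argument, though whether this particular boto version supports it is an assumption worth checking. A rough sketch:

    # Sketch only: relies on the option group carrying the "Mirroring" option,
    # and assumes this boto release accepts storage_type.
    conn.create_db_instance(db_instance_identifier=new_db_name,
                            allocated_storage=allocated_storage,
                            db_instance_class=rds_instance_class,
                            engine=rds_engine,
                            master_username=master_username,
                            master_user_password=master_password,
                            port=port,
                            # multi_az omitted: SQL Server mirroring is requested
                            # via the option group rather than the MultiAZ flag
                            auto_minor_version_upgrade=auto_minor_version_upgrade,
                            db_subnet_group_name=rds_subnet_group,
                            license_model=license_model,
                            vpc_security_group_ids=rds_vpc_security_group,
                            option_group_name=option_group_name,
                            storage_type='gp2')  # assumption: supported by this boto version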




Gitlab vs S3 for configs/certs/encrypted keys and passwords/dockerfiles/docker images

I am currently torn between using S3 and our private GitLab for storing the aforementioned items, which will be used in building out a production-ready private Docker registry (2.0), using CloudFormation for provisioning and Chef to bootstrap the server.

I really want to move the config files out of S3, as well as any Dockerfiles I have, and start versioning them - obviously Git is perfect for that. But then I am stuck with all my certs, private keys and passwords (these are all encrypted with AWS Key Management Service, so I can really store them anywhere I want).

I would really prefer to have everything in one place. So my question is: is it a big no-no to store (even if encrypted) private keys and passwords in a private on-premise Git repo? Does it even make sense, and/or is it bad practice, to store things that don't really version, like passwords or tarred Docker images?




X.509 versus White Listing Authentication

My company is transitioning to cloud-based application servers. Key applications will continue to run in-house, but selected new applications will run on cloud-based application servers. Many of the in-house application servers provide REST endpoints to client applications. Right now the company uses white listing for client authentication. This is OK for single-instance cloud services. We use AWS, so an Elastic IP (EIP) works perfectly for a single instance or a few instances. However, I believe it is problematic for cloud server applications that scale instances up and down depending on demand to follow our company policy of white-listed IPs. Anything beyond a few EIPs becomes difficult, at least in my opinion.

I am thinking of using X.509 certificate name validation. In other words, once the certificate is validated and session keys are exchanged, I verify the name on the certificate against a list of valid names. If the name matches, I proceed with the session. Otherwise, if the names don't match, the session is shut down with a 403 error code. This is done on both the client and the server, so both authenticate each other. Is it possible to do this name checking in Tomcat as part of the config.xml, or something else that is automatic? In other words, an automatic way so I don't have to modify the endpoint HTTPS code. Or do I have to modify the HTTPS code to include a check for the certificate name? Does this make sense, or is there a better way?

Best Regards, Steve Mansfield