EzDevInfo.com

amazon-ecs


Docker on AWS - what is a difference between Elastic Beanstalk and ECS?

I want to migrate from Heroku to Amazon AWS, and I would like to deploy my app as Docker images. The app consists of:

  • Web server (node.js -> docker image)
  • Worker (node.js -> docker image)
  • Postgres database (Amazon RDS)
  • Redis instance (Amazon ElastiCache?)

With my app (web + worker) I need to be able to:

  • scale both web and worker instances, manually or automatically
  • update to a new image with zero downtime
  • see realtime/historical metrics
  • see realtime/historical logs

Now, studying the Amazon docs, I found "Elastic Beanstalk" and "Amazon EC2 Container Service (ECS)". I was wondering which one I should use to manage my application (Docker images)? What is the difference between them?
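For context, either way the containers end up described by a task-definition-style document; a minimal sketch of the container definitions for the web and worker containers (image names, ports, and resource sizes are placeholder assumptions):

```json
[
  {
    "name": "web",
    "image": "myrepo/web:latest",
    "cpu": 256,
    "memory": 512,
    "essential": true,
    "portMappings": [
      { "containerPort": 3000, "hostPort": 80 }
    ]
  },
  {
    "name": "worker",
    "image": "myrepo/worker:latest",
    "cpu": 256,
    "memory": 512,
    "essential": true
  }
]
```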


Source: (StackOverflow)

ECS Service - Automating deploy with new Docker image

I want to automate the deployment of my application by having my ECS service launch with the latest Docker image. From what I've read, the way to deploy a new image version is as follows:

  1. Create a new task revision (after updating the image on your Docker repository).
  2. Update the service and specify the new revision.

This seems to work, but I want to do this all through CLI so I can script it. #2 seems easy enough to do through the AWS CLI with update-service, but I don't see a way to do #1 without specifying the entire Task JSON all over again as with register-task-definition (my JSON will include credentials in environment variables, so I want to have that in as few places as possible).

Is this how I should be automating deployment of my ECS Service updates? And if so, is there a "good" way to have the Task Definition launch a new revision (i.e. without duplicating everything)?
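For reference, the two steps above can be scripted with the AWS CLI plus jq; a minimal sketch, assuming placeholder family/service/cluster names, and copying the existing definition rather than re-typing it (so credentials stay in one place):

```shell
#!/bin/sh
# Sketch: roll an ECS service to a new task definition revision without
# re-writing the whole task JSON. FAMILY, SERVICE, CLUSTER are placeholders.
FAMILY=my-task-family
SERVICE=my-service
CLUSTER=my-cluster

# Step 1: fetch the current revision, strip the read-only fields, and
# register the result as a new revision. (A jq filter could also patch
# the image tag here.)
aws ecs describe-task-definition --task-definition "$FAMILY" \
    --query taskDefinition \
  | jq 'del(.taskDefinitionArn, .revision, .status, .requiresAttributes)' \
  > /tmp/taskdef.json
aws ecs register-task-definition --cli-input-json file:///tmp/taskdef.json

# Step 2: point the service at the family; with no revision number given,
# the latest ACTIVE revision is used.
aws ecs update-service --cluster "$CLUSTER" --service "$SERVICE" \
    --task-definition "$FAMILY"
```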


Source: (StackOverflow)


Deploying on AWS ECS via task definitions without Docker Hub

Currently I'm using task definitions that refer to custom images on Docker Hub to deploy my webapp on ECS (Amazon EC2 Container Service). Is there a way to do this without going through Docker Hub, i.e. build/deploy the Dockerfile locally across cluster nodes?

At the moment, I can only think of sending shell commands over SSH or using a tool like Ansible.

Perhaps I'm missing something totally obvious here...
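One registry-free approach (a sketch, not something ECS does for you): build locally, then stream the image to each node over SSH; node addresses and the image name are placeholders.

```shell
# Build once locally, then load the image onto every cluster node.
docker build -t myapp:latest .
for NODE in 10.0.0.11 10.0.0.12; do
  docker save myapp:latest | ssh "ec2-user@$NODE" 'docker load'
done
```

The task definition can then reference myapp:latest directly, since the image already exists on each instance.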


Source: (StackOverflow)

AWS ECS container instance

I have tried to launch an ECS container instance using Ansible EC2 module.

My playbook is as follows.

- name: Launch ECS Container Instance
  ec2:
    key_name: "{{ ec2_keyname }}"
    instance_type: t2.micro
    image: ami-ca01d8ca
    wait: yes
    group: "{{ ec2_security_group }}"
    region: ap-northeast-1
    exact_count: 1
    vpc_subnet_id: "{{ ec2_subnet_id }}"
    count_tag:
      docker-registry: 1
    instance_profile_name: ecsInstanceRole
    instance_tags:
      Name: ECS_docker-registry
      docker-registry: 1
    assign_public_ip: yes

As a result, two instances launched: one of them is configured as I intended, but the other has the following tags, which I did not set:

  • aws:autoscaling:groupName
  • aws:cloudformation:logical-id
  • aws:cloudformation:stack-id
  • aws:cloudformation:stack-name

In addition, I can see these two instances on the ECS dashboard.


But they are only visible in the cluster "default", and invisible in other clusters.


What I really want to do is:

  • Launch an ECS container instance
  • Register the container instance to a cluster

It would be better if I could do the above with the aws-cli, but first I need to understand this strange behaviour of container instances and do it manually.
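A sketch of what I believe the fix looks like (an assumption, not tested here): the cluster is chosen via user data that the ECS agent reads at boot, not via a separate registration call, so the playbook would gain a user_data block:

```yaml
- name: Launch ECS container instance into cluster "my-cluster"
  ec2:
    key_name: "{{ ec2_keyname }}"
    instance_type: t2.micro
    image: ami-ca01d8ca              # ECS-optimized AMI
    instance_profile_name: ecsInstanceRole
    vpc_subnet_id: "{{ ec2_subnet_id }}"
    user_data: |
      #!/bin/bash
      echo ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config
```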


Source: (StackOverflow)

Docker container exits because of "stdin is not a tty"

We are starting a container and running a task using the AWS ECS service. The image is pulled successfully according to the task definition, but when the container tries to run the task it exits because of "stdin is not a tty". We reproduced that error manually by running docker run {image_name}, but didn't figure out a way to fix it. Here's the output:

Initializing built-in extension Generic Event Extension
Initializing built-in extension SHAPE
Initializing built-in extension MIT-SHM
Initializing built-in extension XInputExtension
Initializing built-in extension XTEST
Initializing built-in extension BIG-REQUESTS
Initializing built-in extension SYNC
Initializing built-in extension XKEYBOARD
Initializing built-in extension XC-MISC
Initializing built-in extension SECURITY
Initializing built-in extension XINERAMA
Initializing built-in extension XFIXES
Initializing built-in extension RENDER
Initializing built-in extension RANDR
Initializing built-in extension COMPOSITE
Initializing built-in extension DAMAGE
Initializing built-in extension MIT-SCREEN-SAVER
Initializing built-in extension DOUBLE-BUFFER
Initializing built-in extension RECORD
Initializing built-in extension DPMS
Initializing built-in extension Present
Initializing built-in extension DRI3
Initializing built-in extension X-Resource
Initializing built-in extension XVideo
Initializing built-in extension XVideo-MotionCompensation
Initializing built-in extension SELinux
Initializing built-in extension GLX
stdin: is not a tty

We are using Xvfb to run tests; here are the related dependencies in our Dockerfile:

# install virtual display 
RUN apt-get -qy install xvfb
RUN apt-get -qy install x11-xkb-utils
RUN apt-get -qy install xserver-xorg-core
RUN apt-get -qy install xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic

Does anyone happen to know how to fix this? Thanks a lot.
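One workaround we are considering (a sketch; run_tests.sh is a placeholder for our actual entrypoint) is to wrap the test command in xvfb-run, so it gets a virtual display and never assumes an interactive terminal:

```dockerfile
# Start tests under a virtual X server; no tty is required.
CMD ["xvfb-run", "--auto-servernum", "--server-args=-screen 0 1280x1024x24", "./run_tests.sh"]
```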


Source: (StackOverflow)

ECS not an option in AWS CLI?

I'm just getting started with Amazon EC2 Container Service and I'm trying to follow this guide:

http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_AWSCLI.html

I'm on PuTTY (Ubuntu), and I signed in and got the AWS CLI with:

sudo apt-get install -y awscli

(Note: I am a Mac user, new to all of this.) But now when I try to run the first command in the dev guide, I get an error:

$ aws ecs create-cluster --cluster-name MyCluster
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument command: Invalid choice, valid choices are:

autoscaling                              | cloudformation                                  
cloudfront                               | cloudsearch                                     
cloudtrail                               | cloudwatch                                      
datapipeline                             | directconnect                                   
dynamodb                                 | ec2                                             
elasticache                              | elasticbeanstalk                                
elastictranscoder                        | elb                                             
emr                                      | iam                                             
importexport                             | kinesis                                         
opsworks                                 | rds                                             
redshift                                 | route53                                         
ses                                      | sns                                             
sqs                                      | storagegateway                                  
sts                                      | support                                         
swf                                      | s3api                                           
s3                                       | configure                                       
help
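A likely cause (an assumption worth checking): the Ubuntu-packaged awscli predates ECS, and `ecs` is missing from the valid choices above, so upgrading to a current release should add the subcommand. A sketch:

```shell
# Replace the distro package with an up-to-date awscli from pip.
sudo apt-get remove -y awscli
sudo pip install --upgrade awscli

aws --version      # confirm a recent version is now installed
aws ecs help       # the ecs subcommand should now exist
```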

Source: (StackOverflow)

How to register ECS Cluster with Opsworks Stack in CloudFormation?

I can't figure out how to set up an OpsWorks layer using an ECS cluster in CloudFormation. My layer creation fails with the error below, but there doesn't seem to be a clear way to register the cluster with the stack in the template. I tried adding EcsClusterArn to both the Stack and the Layer, but that did not work. The API has a command for it, but I'd like to contain everything in my template.
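The API command I mean is a one-off registration step outside the template; a sketch (the ARN and stack ID are placeholders):

```shell
# Register an existing ECS cluster with an existing OpsWorks stack.
aws opsworks register-ecs-cluster \
    --ecs-cluster-arn arn:aws:ecs:us-east-1:123456789012:cluster/ecsCluster \
    --stack-id 00000000-0000-0000-0000-000000000000
```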

Error:

Attributes - EcsClusterArn: XXX must be registered to the layer's stack first.

Snippet of template:

"ecsCluster" : {
  "Type" : "AWS::ECS::Cluster"
},
...
"opsworksStack" : {
  "Type" : "AWS::OpsWorks::Stack",
  "Properties" : {
    "Name" : "my-stack",
    "ServiceRoleArn" : {
      "Fn::Join" : [ "", [ "arn:aws:iam::", {
        "Ref" : "AWS::AccountId"
      }, ":role/", {
        "Ref" : "ServiceRole"
      } ] ]
    },
    "DefaultInstanceProfileArn" : {
      "Fn::Join" : [ "", [ "arn:aws:iam::", {
        "Ref" : "AWS::AccountId"
      }, ":instance-profile/", {
        "Ref" : "InstanceRole"
      } ] ]
    },
    "UseOpsworksSecurityGroups" : "false",
    "ChefConfiguration" : {
      "BerkshelfVersion" : "3.3.0",
      "ManageBerkshelf" : "true"
    },
    "ConfigurationManager" : {
      "Name" : "Chef",
      "Version" : "11.10"
    }
  }
},
"opsworksLayer" : {
  "Type" : "AWS::OpsWorks::Layer",
  "DependsOn" : "ecsCluster",
  "Properties" : {
    "StackId" : {
      "Ref" : "opsworksStack"
    },
    "Type" : "ecs-cluster",
    "Name" : "my-layer",
    "Shortname" : "my-layer",
    "Attributes" : {
      "EcsClusterArn" : {
        "Fn::Join" : [ "", [ "arn:aws:ecs:", {
          "Ref" : "AWS::Region"
        }, ":", {
          "Ref" : "AWS::AccountId"
        }, ":cluster/", {
          "Ref" : "ecsCluster"
        } ] ]
      }
    },
    "CustomSecurityGroupIds" : [ {
      "Ref" : "ec2DefaultSecurityGroup"
    } ],
    "EnableAutoHealing" : "true",
    "AutoAssignElasticIps" : "false",
    "AutoAssignPublicIps" : "false",
    "InstallUpdatesOnBoot" : "true"
  }
}

Thanks, Thien


Source: (StackOverflow)

Why can't my ECS service register available EC2 instances with my ELB?

I've got an EC2 launch configuration that builds the ECS optimized AMI. I've got an auto scaling group that ensures that I've got at least two available instances at all times. Finally, I've got a load balancer.

I'm trying to create an ECS service that distributes my tasks across the instances in the load balancer.

After reading the documentation for ECS load balancing, it's my understanding that my ASG should not automatically register my EC2 instances with the ELB, because ECS takes care of that. So, my ASG does not specify an ELB. Likewise, my ELB does not have any registered EC2 instances.

When I create my ECS service, I choose the ELB and also select the ecsServiceRole. After creating the service, I never see any instances available in the ECS Instances tab. The service also fails to start any tasks, with a very generic error of ...

service was unable to place a task because the resources could not be found.

I've been at this for about two days now and can't seem to figure out what configuration settings are not properly configured. Does anybody have any ideas as to what might be causing this to not work?

Update @ 06/25/2015:

I think this may have something to do with the ECS_CLUSTER user data setting.

In my EC2 auto scaling launch configuration, if I leave the user data input completely empty, the instances are created with an ECS_CLUSTER value of "default". When this happens, I see an automatically-created cluster, named "default". In this default cluster, I see the instances and can register tasks with the ELB like expected. My ELB health check (HTTP) passes once the tasks are registered with the ELB and all is good in the world.

But if I change that ECS_CLUSTER setting to something custom, I never see a cluster created with that name. If I manually create a cluster with that name, the instances never become visible within it, and I can't ever register tasks with the ELB in this scenario.
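For reference, the custom user data I am setting is just the following (the cluster name is a placeholder); the ECS agent reads it from /etc/ecs/ecs.config at boot:

```shell
#!/bin/bash
# Join this instance to a specific ECS cluster instead of "default".
echo ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config
```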

Any ideas?


Source: (StackOverflow)

How to understand Amazon ECS cluster

I recently tried to deploy docker containers using task definition by AWS. Along the way, I came across the following questions.

  1. How do I add an instance to a cluster? When creating a new cluster using the Amazon ECS console, how do I add a new EC2 instance to it? In other words, when launching a new EC2 instance, what configuration is needed to allocate it to a user-created cluster under Amazon ECS?

  2. How many ECS instances are needed in a cluster, and what factors determine that?

  3. Say I have two instances (ins1, ins2) in a cluster, and my webapp and db containers are running on ins1. After I update the running service (following http://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html), I can see the newly created service running on ins2 before the old one on ins1 is drained. My question: after my webapp container moves to another instance, the access IP address becomes that instance's IP. How do I keep a stable address for reaching the webapp? And beyond the IP, what happens to the data after moving to a new instance?
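On question 3, one common approach (a sketch; all names are placeholders) is to front the service with an ELB, so clients use a stable address no matter which instance the task lands on:

```shell
# Create an ECS service behind a classic ELB; clients hit the ELB's DNS
# name, so task moves between instances don't change the access address.
aws ecs create-service \
    --cluster my-cluster \
    --service-name webapp \
    --task-definition webapp-task \
    --desired-count 2 \
    --role ecsServiceRole \
    --load-balancers loadBalancerName=my-elb,containerName=web,containerPort=80
```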


Source: (StackOverflow)

Does Amazon AWS EC2 Container Service support volumes?

The documentation suggests not. There is more information on their task definition page, including a MySQL example, where a data volume would usually be a reasonably good idea, e.g.:

{
  "image": "mysql",
  "name": "db",
  "cpu": 10,
  "memory": 500,
  "essential": true,
  "entryPoint": [
    "/entrypoint.sh"
  ],
  "environment": [
    {
      "name": "MYSQL_ROOT_PASSWORD",
      "value": "pass"
    }
  ],
  "portMappings": []
}
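That said, the task definition syntax does appear to include top-level volumes with per-container mountPoints; a sketch extending the example above (the host path and container path are placeholder assumptions):

```json
{
  "family": "db",
  "volumes": [
    { "name": "mysql-data", "host": { "sourcePath": "/data/mysql" } }
  ],
  "containerDefinitions": [
    {
      "image": "mysql",
      "name": "db",
      "cpu": 10,
      "memory": 500,
      "essential": true,
      "mountPoints": [
        { "sourceVolume": "mysql-data", "containerPath": "/var/lib/mysql" }
      ]
    }
  ]
}
```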

Source: (StackOverflow)

How to configure Amazon container service without docker hub integration

I am trying to set up a new Spring Boot + Docker (microservices) based project. The deployment is targeted at AWS. Every service has a Dockerfile associated with it. I am thinking of using Amazon Container Service for deployment, but as far as I can see, it only pulls images from Docker Hub. I don't want ECS to pull from Docker Hub; rather, I want it to build the images from the Dockerfiles and then take over deploying those containers. Is this possible? If yes, how?
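One option I am considering (a sketch; the registry host and image names are placeholders) is a self-hosted registry instead of Docker Hub, which the task definition can then reference by host:port:

```shell
# Run a private registry, then tag and push the locally built image to it.
docker run -d -p 5000:5000 --name registry registry:2
docker build -t myservice:latest .
docker tag myservice:latest my-registry-host:5000/myservice:latest
docker push my-registry-host:5000/myservice:latest
# The ECS task definition's "image" field would then be
# "my-registry-host:5000/myservice:latest".
```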


Source: (StackOverflow)

How to use cloudformation to create an ecs cluster?

I would like to use a CloudFormation template to create my ECS cluster instead of spinning it up by hand, but I have yet to find a way. Is this simply not implemented yet, so that you can't create an ECS cluster as a resource in your CloudFormation template? It seems a bit odd that they did not include that.
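For what it's worth, a bare cluster resource can apparently be declared (the OpsWorks question elsewhere on this page uses the same resource type); a minimal sketch in template JSON:

```json
"myCluster" : {
  "Type" : "AWS::ECS::Cluster"
}
```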


Source: (StackOverflow)

ECS or EB for a single-container application with Docker?

I deployed a single-container SailsJS application with Docker (the image size is around 597.4 MB) and have hooked it up to Elastic Beanstalk.

However, since ECS was built for Docker, might it be better to use that over EB?


Source: (StackOverflow)

Could not generate persistent MAC address for vethXXXXX: No such file or directory

I'm starting to use CoreOS (on AWS ECS). After having it launch my first container, I see this in journalctl:

Could not generate persistent MAC address for vethXXXX: No such file or directory

Here's more context. I've removed the time and instance information, but this is all from within the same second. Note there are two distinct veth entries; I don't know if that means anything.

systemd[1]: Started docker container 1234
systemd[1]: Starting docker container 1234
dockerd[595]: time="2015-07-23T23:30:52Z" level=info msg="GET /v1.17/containers/1234/json"
dockerd[595]: time="2015-07-23T23:30:52Z" level=info msg="+job container_inspect(1234)"
systemd-timesyncd[473]: Network configuration changed, trying to establish connection.
systemd-udevd[7501]: Could not generate persistent MAC address for vethYYYY: No such file or directory
kernel: device vethXXXX entered promiscuous mode
kernel: IPv6: ADDRCONF(NETDEV_UP): vethXXXX: link is not ready
systemd-udevd[7508]: Could not generate persistent MAC address for vethXXXX: No such file or directory
systemd-networkd[497]: vethXXXX: Configured
kernel: eth0: renamed from vethYYYY
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethXXXX: link becomes ready
kernel: docker0: port 2(vethXXXX) entered forwarding state
kernel: docker0: port 2(vethXXXX) entered forwarding state
systemd-networkd[497]: vethXXXX: Gained carrier

I found a discussion of this error on Ubuntu, and it comes down to removing a udev rule, which doesn't seem to exist on CoreOS. There's a discussion about iptables with OpenVPN, which again doesn't seem to apply. There's also a bridge rule for LXC on Ubuntu; again, I don't see how to apply that.

I haven't done anything with the networkd or flannel configuration. If the problems are in that area, I need specific steps on how to fix it for use in AWS ECS.


Source: (StackOverflow)

Akka Cluster with bind-port and bind-hostname

After configuring bind-hostname and bind-port in application.conf, as specified by the Akka FAQ, and bringing up the cluster, I'm receiving an error:

[ERROR] [07/09/2015 19:54:24.132] [default-akka.remote.default-remote-dispatcher-20] 
[akka.tcp://default@54.175.105.30:2552/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fdefault%4054.175.105.30%3A2552-757/endpointWriter]
dropping message [class akka.actor.ActorSelectionMessage] 
for non-local recipient[Actor[akka.tcp://default@54.175.105.30:32810/]] 
arriving at [akka.tcp://default@54.175.105.30:32810] 
inbound addresses are [akka.tcp://default@54.175.105.30:2552]

What this seems to say is that the actor has received a message destined for port 32810 (the external port), but it's dropping it because the internal port (2552) doesn't match.

The relevant portions of the file are:

  hostname = 54.175.105.30
  port = 32810

  bind-hostname = 172.17.0.44
  bind-port = 2552

I've tried this on 2.4-M1, 2.4-M2, and 2.4-SNAPSHOT, all with the same effect.

Has anyone else encountered this before? Any suggestions?

edit: This actor system is running in ECS in Docker containers. The Docker container configuration is set to forward from the ephemeral range to 2552 on the container's private IP. ECS is successfully mapping hostname:port to bind-hostname:bind-port. The actor system is running and binding to the local bind-hostname and bind-port successfully, but it is dropping messages and emitting the error described above.


Source: (StackOverflow)