EzDevInfo.com

autoscaling interview questions

Top autoscaling frequently asked interview questions

How to create a variable number of EC2 instance resources in a CloudFormation template?

How can I create a variable number of EC2 instance resources in a CloudFormation template, according to a template parameter?

The EC2 API and management tools allow launching multiple instances of the same AMI, but I can't find a way to do this using CloudFormation.
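
One common way to get a variable instance count from a single template is to bind an Auto Scaling group's size to a template parameter instead of declaring individual AWS::EC2::Instance resources. Below is a minimal sketch in Python with boto3 that builds and submits such a template; the stack name, AMI ID, and instance type are placeholder assumptions, not values from the question.

import json
import boto3  # assumes boto3 is installed and AWS credentials are configured

# Minimal CloudFormation template: the instance count is a parameter that
# drives an Auto Scaling group instead of N separate AWS::EC2::Instance resources.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceCount": {"Type": "Number", "Default": "2"}
    },
    "Resources": {
        "LaunchConfig": {
            "Type": "AWS::AutoScaling::LaunchConfiguration",
            "Properties": {
                "ImageId": "ami-xxxxxxxx",      # placeholder AMI ID
                "InstanceType": "m1.small"
            }
        },
        "Group": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "LaunchConfigurationName": {"Ref": "LaunchConfig"},
                "AvailabilityZones": {"Fn::GetAZs": ""},
                "MinSize": {"Ref": "InstanceCount"},
                "MaxSize": {"Ref": "InstanceCount"},
                "DesiredCapacity": {"Ref": "InstanceCount"}
            }
        }
    }
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="variable-instances-demo",        # hypothetical stack name
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "InstanceCount", "ParameterValue": "4"}],
)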


Source: (StackOverflow)

EC2 autoscaling does not put name (tag) on servers

I've created an autoscaling group on EC2 and it's working just fine. Servers scale up and down depending on load. I'd like to have a little more info on the management side and am wondering if there's a way to get the autoscaling group to dynamically add names to the instances that it boots up. I'm referring to adding a Tag with key=Name and value=autogeneratedid.

For example, if I had an autoscaling group called test-group, servers would boot up with the following names:

test-group-1
test-group-2
test-group-3
...

I'd like to find and enumerate them in the EC2 Management Console, but right now they're just showing up with "empty" names (the Tag key=Name isn't explicitly set on the instances).

Any ideas?
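
One approach worth sketching, not necessarily what the asker settled on: tag the Auto Scaling group itself with a Name tag and propagate-at-launch enabled, so every instance the group boots picks the tag up automatically; a unique per-instance suffix such as test-group-1 would still need something like a user-data script tagging the instance after boot. A minimal boto3 sketch, assuming the group is called test-group:

import boto3  # assumes boto3 is installed and AWS credentials are configured

autoscaling = boto3.client("autoscaling")

# Attach a Name tag to the group; PropagateAtLaunch copies it onto every
# instance the group launches from now on (existing instances keep their tags).
autoscaling.create_or_update_tags(
    Tags=[{
        "ResourceId": "test-group",            # the Auto Scaling group name
        "ResourceType": "auto-scaling-group",
        "Key": "Name",
        "Value": "test-group",                 # all new instances share this name
        "PropagateAtLaunch": True,
    }]
)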


Source: (StackOverflow)


Amazon EC2 AutoScaling CPUUtilization Alarm- INSUFFICIENT DATA

So I've been using Boto in Python to try and configure autoscaling based on CPUUtilization, more or less exactly as specified in this example: http://boto.readthedocs.org/en/latest/autoscale_tut.html

However both alarms in CloudWatch just report:

State Details: State changed to 'INSUFFICIENT_DATA' at 2012/11/12 16:30 UTC. Reason: Unchecked: Initial alarm creation

Auto scaling is working fine but the alarms aren't picking up any CPUUtilization data at all. Any ideas for things I can try?

Edit: The instance itself reports CPU utilisation data, just not when I try to create an alarm in CloudWatch, whether programmatically in Python or in the interface. Detailed monitoring is also enabled, just in case...

Thanks!
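
A way to narrow this down rather than a definitive answer: INSUFFICIENT_DATA usually means the alarm's namespace and dimensions match no metric that is actually receiving datapoints, so it helps to query exactly the metric the alarm is supposed to watch. A boto3 sketch, where the group name is a placeholder:

from datetime import datetime, timedelta

import boto3  # assumes boto3 is installed and AWS credentials are configured

cloudwatch = boto3.client("cloudwatch")

# Query the aggregated CPUUtilization metric for the Auto Scaling group.
# If this returns no datapoints, an alarm on the same namespace/dimensions
# will sit in INSUFFICIENT_DATA as well.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-group"}],  # hypothetical group
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])

If this prints an empty list even though per-instance graphs show data, a dimension mismatch (for example AutoScalingGroupName versus InstanceId) is one possible cause.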


Source: (StackOverflow)

How does Amazon EC2 Auto Scaling work?

I am trying to understand how Amazon implements the auto scaling feature. I can understand how it is triggered, but I don't know what exactly happens during auto scaling. How does it expand? For instance:

If I set the triggering condition as cpu > 90, then once the VM's CPU usage increases above 90:

  1. Does it have a template image which will be copied to the new machine and started?
  2. How long will it take to start servicing the new requests?
  3. Will the old VM have any downtime?

I understand that it can provide load balancing between the VMs, but I cannot find any links or papers that explain how Amazon auto scaling works. It would be great if you could provide me with some information on this. Thank you.
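
On point 1, one thing that can be verified directly: new instances are launched from the AMI named in the group's launch configuration rather than copied from a running VM. A small boto3 sketch, with a placeholder group name:

import boto3  # assumes boto3 is installed and AWS credentials are configured

autoscaling = boto3.client("autoscaling")

# Every new instance the group launches is created from the AMI referenced
# by the group's launch configuration; nothing is copied from running VMs.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["my-group"]          # hypothetical group name
)["AutoScalingGroups"][0]

launch_config = autoscaling.describe_launch_configurations(
    LaunchConfigurationNames=[group["LaunchConfigurationName"]]
)["LaunchConfigurations"][0]

print(launch_config["ImageId"], launch_config["InstanceType"])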


Source: (StackOverflow)

Auto Scaling Group launch config changes

I wonder if there is a simple way, or a best practice, to ensure that all instances within an Auto Scaling group have been launched with that group's current launch configuration.

To give an example, imagine an auto-scaling group called www-asg with 4 desired instances running webservers behind an ELB. I want to change the AMI or the userdata used to start instances of this auto-scaling group. So I create a new launch configuration www-cfg-v2 and update www-asg to use that.

# create new launch config
as-create-launch-config www-cfg-v2 \
    --image-id 'ami-xxxxxxxx' --instance-type m1.small \
    --group web,asg-www --user-data "..."

# update my asg to use new config
as-update-auto-scaling-group www-asg --launch-configuration www-cfg-v2

At this point all 4 running instances still use the old launch configuration. I wonder if there is a simple way of replacing all running instances with new instances to enforce the new configuration, while always ensuring that the minimum number of instances is kept running.

My current way of achieving this is as follows:

  1. save list of current running instances for given autoscaling group
  2. temporarily increase the number of desired instances +1
  3. wait for the new instance to be available
  4. terminate one instance from the list via

    as-terminate-instance-in-auto-scaling-group i-XXXX \
        --no-decrement-desired-capacity --force
    
  5. wait for the replacement instance to be available

  6. if more than 1 instance is left in the list, repeat from step 4.
  7. terminate last instance from the list via

    as-terminate-instance-in-auto-scaling-group i-XXXX \
        --decrement-desired-capacity --force
    
  8. done, all instances should now be running with the same launch config

I have mostly automated this procedure, but I feel there must be a better way of achieving the same goal. Does anyone know a better, more efficient way?

mathias

Also posted this question in the official AWS EC2 Forum.
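
The manual loop above maps fairly directly onto the API. Below is a rough sketch of the same bump-then-terminate cycle using boto3, with no error handling and the group name from the example; it illustrates the procedure described here, not an officially supported rolling-update feature:

import time

import boto3  # assumes boto3 is installed and AWS credentials are configured

autoscaling = boto3.client("autoscaling")
GROUP = "www-asg"  # the Auto Scaling group from the example above


def describe_group():
    return autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[GROUP])["AutoScalingGroups"][0]


def wait_until_replaced(terminated_id, count):
    # Poll until `terminated_id` (if given) has left the group and the
    # group again reports at least `count` InService instances.
    while True:
        instances = describe_group()["Instances"]
        ids = [i["InstanceId"] for i in instances]
        in_service = [i for i in instances if i["LifecycleState"] == "InService"]
        if terminated_id not in ids and len(in_service) >= count:
            return
        time.sleep(15)


group = describe_group()
old_ids = [i["InstanceId"] for i in group["Instances"]]
desired = group["DesiredCapacity"]

# Steps 2-3: temporarily run one extra instance; it comes up with the new
# launch configuration because that is now attached to the group.
autoscaling.set_desired_capacity(AutoScalingGroupName=GROUP,
                                 DesiredCapacity=desired + 1)
wait_until_replaced(None, desired + 1)

# Steps 4-7: retire the old instances one at a time; only the last
# termination decrements the desired capacity back to its original value.
for idx, instance_id in enumerate(old_ids):
    last = (idx == len(old_ids) - 1)
    autoscaling.terminate_instance_in_auto_scaling_group(
        InstanceId=instance_id,
        ShouldDecrementDesiredCapacity=last)
    wait_until_replaced(instance_id, desired if last else desired + 1)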


Source: (StackOverflow)

AWS AutoScaling, downscale - wait for processes termination

I want to use AWS Auto Scaling to scale down a group of instances when the SQS queue is short. These instances do some heavy work that sometimes takes 5-10 minutes to complete, and I want this work to be completed before the instance is terminated.

I imagine a lot of people have faced the same problem. Is it possible on EC2 to handle the AWS termination request and complete all my running processes before the instance is actually terminated? What is the best approach to this?
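
One pattern that fits this requirement, sketched here as an assumption rather than the asker's solution: each worker marks itself as protected from scale-in while it is busy and clears the flag when idle, so the group only removes idle instances when it scales down. A boto3 sketch with a placeholder group name and a stand-in for the real job:

import time
import urllib.request

import boto3  # assumes boto3 is installed and AWS credentials are configured

# Read this instance's own ID from the EC2 instance metadata service
# (IMDSv1 shown for brevity; IMDSv2 would need a session token first).
INSTANCE_ID = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

GROUP = "worker-group"          # hypothetical Auto Scaling group name
autoscaling = boto3.client("autoscaling")


def set_busy(busy):
    # While ProtectedFromScaleIn is True, the group will not choose this
    # instance when it scales in; clear it once the long job is done.
    autoscaling.set_instance_protection(
        InstanceIds=[INSTANCE_ID],
        AutoScalingGroupName=GROUP,
        ProtectedFromScaleIn=busy,
    )


def process_message():
    time.sleep(600)             # stand-in for the real 5-10 minute job


set_busy(True)
try:
    process_message()
finally:
    set_busy(False)

Lifecycle hooks are another option: the group moves the instance into a Terminating:Wait state and the worker signals completion once its cleanup is done.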


Source: (StackOverflow)

Windows Azure and dynamic elasticity

Is there a way to do dynamic elasticity in Windows Azure? If my workers begin to get overloaded, or queues start to get too full, or too many workers have no work to do, is there a way to dynamically add or remove workers through code, or is that just done manually (requiring human intervention) right now? Does anyone know of any plans to add that, if it's not currently available?


Source: (StackOverflow)

How do you distribute your app across multiple servers using EC2?

For the first time, I am developing an app that requires quite a bit of scaling; I have never had an application that needed to run on multiple instances before.

How is this normally achieved? Do I cluster the SQL servers, then mirror the application code across all servers and use load balancing?

Or do I separate out the functionality to run some on one server some on another?

Also, how do I push out code to all my EC2 Windows instances?


Source: (StackOverflow)

Gnuplot minimum and maximum boundaries for autoscaling

How can I limit gnuplot's autoscaling so that, for example, the y-axis maximum is at least a certain value but only autoscales up to a fixed limit?

From looking at the documentation, I only see how to fix the min or max end of the axis while the other end is scaled automatically.

Autoscaling is described on page 93 of the PDF documentation.


Source: (StackOverflow)

Autoscaling azure roles based on Service Bus queue length [closed]

The Autoscaling Application Block from the Microsoft Enterprise Library (WASABi) is a nice piece of software, and it allows you to change the number of instances of your role based on the length of a Storage Account queue.

Can I use a Service Bus queue instead of a Storage Account queue? How do I define the rules XML in this case?


Source: (StackOverflow)

Azure ConstraintRules not working

I am deploying a very simple Azure cloud service.

I'm trying to get autoscaling working so I can schedule scaling up/down depending on the time of day.

I have everything installed and configured, and it deploys to Azure without any issues; however, my rules don't seem to be adhered to.

Currently I have the following, which I would expect to keep the service running at a minimum of 2 instances, but it always stays at 1.

<rules xmlns="http://schemas.microsoft.com/practices/2011/entlib/autoscaling/rules" enabled="true">
  <constraintRules>
    <rule name="Default" description="Default rules" enabled="true" rank="1">
      <actions>
        <range min="2" max="8" target="MyRoleName"/>
      </actions>
    </rule>
  </constraintRules>
</rules>

I feel like I'm missing something really simple, but I'm unsure what.

Thank you


Source: (StackOverflow)

AWS Alarm stuck in INSUFFICIENT_DATA while creating

I tried to create an AWS alarm to watch SQS. If the queue has more than 1 message for 2 minutes, I want the alarm to trigger a policy. I used the command below to create the alarm:

aws cloudwatch put-metric-alarm --alarm-name alarmName \
    --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
    --statistic Average --period 60 --evaluation-periods 2 --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --dimensions "Name=QueueName,Value=QueueName" "Name=AutoScalingGroupName,Value=asg-name" \
    --alarm-actions "<arn:batch-upscale-policy>" --actions-enabled

I can see the alarm in the AWS console, but it is stuck in the INSUFFICIENT_DATA state. Please give me some suggestions on how to solve this.

Here I'm trying to listen to queues in another AWS account. Is that possible?
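
One thing worth checking, offered as a hypothesis rather than a diagnosis: AWS/SQS metrics are published with only a QueueName dimension, so an alarm that also specifies AutoScalingGroupName may be watching a dimension combination for which no data ever arrives. A boto3 sketch that lists the dimension sets CloudWatch actually has for this metric:

import boto3  # assumes boto3 is installed and AWS credentials are configured

cloudwatch = boto3.client("cloudwatch")

# List every dimension combination CloudWatch actually has for this metric;
# an alarm must use one of these exactly or it will stay in INSUFFICIENT_DATA.
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="AWS/SQS",
                               MetricName="ApproximateNumberOfMessagesVisible"):
    for metric in page["Metrics"]:
        print(metric["Dimensions"])

As for the cross-account part: a CloudWatch alarm generally evaluates metrics only in the account and region it was created in, so watching another account's queue would mean creating the alarm there or republishing the metric into your own account.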


Source: (StackOverflow)

What's the correct Cloudwatch/Autoscale settings for extremely short traffic spikes on Amazon Web Services?

I have a site running on amazon elastic beanstalk with the following traffic pattern:

  • ~50 concurrent users normally.
  • ~2000 concurrent users for 1-2 minutes when a post is made to the Facebook page.

Amazon Web Services claims to be able to scale rapidly to meet challenges like this, but the "Greater than x for more than 1 minute" setup in CloudWatch doesn't appear to be fast enough for this traffic pattern.

Usually within seconds all the EC2 instances crash, killing all CloudWatch metrics, and the whole site is down for 4-6 minutes. So far I've yet to find a configuration that works for this scenario.

Here is the graph of a smaller event that also killed the site (graph image not included).
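
One knob to experiment with, sketched under assumptions rather than as a tested fix: a scale-up policy that adds several instances at once, triggered by an alarm with a 60-second period and a single evaluation period, so the group reacts after one bad minute instead of several. A boto3 sketch with placeholder names and thresholds:

import boto3  # assumes boto3 is installed and AWS credentials are configured

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# A simple scaling policy that adds several instances per trigger
# instead of one at a time.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="spike-asg",        # hypothetical group name
    PolicyName="spike-scale-up",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=4,                     # add 4 instances per trigger
    Cooldown=120,
)

# Fire after a single 60-second breach rather than several minutes.
cloudwatch.put_metric_alarm(
    AlarmName="spike-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "spike-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=60.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)

Even then, new instances take minutes to boot, so for spikes tied to a known event such as a Facebook post, scheduled scaling ahead of the post may matter more than alarm tuning.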


Source: (StackOverflow)

What "desired instances" is needed for? AWS Amazon Webservices AutoScaling group

I am facing a strange situation on Amazon AWS. I don't understand what the desired instances number is used for. I have an auto-scaling group that has scale-up and scale-down actions configured.

I have a custom PHP script that runs the scale-up and scale-down actions depending on some external factors. I want to know which number I have to set as the desired instances so that it does not interfere with my autoscaling actions.

For example:

  1. I set desired to 2
  2. I have 2 instances running
  3. I run the scale-down action
  4. The instance count is now 1
  5. The autoscaling group automatically starts another instance, so my scale-down is not useful because I end up with 2 instances running again

What can I do?

Many thanks!
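
For context, a hedged sketch rather than a prescription for this setup: the desired capacity is the instance count the group continuously tries to maintain, which is why terminating an instance directly just gets it replaced. A custom scaler normally changes the desired capacity itself, for example with boto3:

import boto3  # assumes boto3 is installed and AWS credentials are configured

autoscaling = boto3.client("autoscaling")
GROUP = "my-asg"  # hypothetical Auto Scaling group name

# "Scale down" by lowering the capacity the group tries to maintain; the
# group then terminates one instance itself and does not relaunch it.
# The new value must stay within the group's MinSize/MaxSize bounds.
current = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[GROUP])["AutoScalingGroups"][0]["DesiredCapacity"]

autoscaling.set_desired_capacity(
    AutoScalingGroupName=GROUP,
    DesiredCapacity=max(current - 1, 0),
    HonorCooldown=True,
)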


Source: (StackOverflow)

CloudBees and App Cells, Instances and Pools

I am trying to understand the difference between:

  • An app-cell
  • An app instance; and
  • An app pool

For instance, how do I know when it is appropriate to add more app-cells for my app? Or to add more instances? Or to configure a certain subset of them into a pool? Thanks in advance!


Source: (StackOverflow)