Amazon VPC interview questions
Top frequently asked Amazon VPC interview questions
I've found that in some regions (such as us-east-1), only some availability zones are available for creating subnets (and therefore VPC instances). In my case, the zones are us-east-1c, -1d, and -1e, but these vary by account.
I'm building a script that generates subnets and VPC instances, so it would be useful to find out programmatically which zones are VPC-capable, especially since I see no reason why the set of zones couldn't change (or at least grow) over time.
This post was asking basically the same question, but the accepted answer doesn't actually provide the information that asker and I were looking for (unless ec2-describe-availability-zones has some VPC-specific parameter I'm not aware of): Amazon VPC Availability
I have figured out one possible workaround, which is to try to create a subnet with a garbage vpc-id and availability zone (ec2-create-subnet -c garbage -i 10.0.0.0/24 -z garbage). The error message for this call includes a list of the AZs that are able to host subnets, and I can parse that output for the info I'm looking for. However, this feels like a hack, and I don't like relying on error behavior and the specific format of error messages for this kind of thing if I don't have to. Is there a better way?
UPDATE: Adding a bit more detail based on comments...
Calls I make to ec2-describe-availability-zones ALWAYS return five values: us-east-1a through us-east-1e, but we can only create VPC subnets in 1c, 1d, and 1e. We have instances running in all zones except 1b, in which I was unable to launch even a regular instance (it appears to be getting phased out). This account has existed since before the release of the VPC feature, so it's somewhat of a "legacy" account, I suppose. That might have something to do with the discrepancy between where I'm allowed to create subnets and VPC instances and what ec2-describe-availability-zones is returning. I'm going to post a question to AWS support and will report any findings here.
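If the error-message workaround stays, the fragile part (the parsing) can at least be isolated behind one small function. A minimal sketch in Python; the sample message below is hypothetical, modeled on the behavior described above, and the real message format should be checked before relying on this:

```python
import re

def parse_vpc_azs(error_message):
    """Extract availability-zone names (e.g. 'us-east-1c') from the
    ec2-create-subnet error text. The exact message wording is an
    assumption; adjust the pattern to what your tools actually return."""
    # Zone names look like region name plus a trailing letter.
    return sorted(set(re.findall(r'\b[a-z]+-[a-z]+-\d[a-z]\b', error_message)))

# Hypothetical error text, modeled on the behavior described above.
msg = ("Value (garbage) for parameter availabilityZone is invalid. "
       "Subnets can currently only be created in the following "
       "availability zones: us-east-1c, us-east-1d, us-east-1e.")
```

Keeping the parsing in one place means that if AWS changes the message format, only the regular expression needs to change.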
Source: (StackOverflow)
I created a new Ubuntu instance in AWS, and I can SSH to it successfully.
However, when I try to install packages using this command, it won't work:
sudo apt-get install apache2
...
...
0% [Connecting to ap-southeast-2.ec2.archive.ubuntu.com (91.189.91.23)]^Cubuntu@ip-10-1-0-99:/etc$
This never moves forward!
I tried ping google.com.au; also no response.
Here is the VPC config of AWS:
Network ACL :
Outbound:
Rule # Type Protocol Port Range Destination Allow / Deny
100 ALL Traffic ALL ALL 0.0.0.0/0 ALLOW
* ALL Traffic ALL ALL 0.0.0.0/0 DENY
Inbound :
Rule # Type Protocol Port Range Source Allow / Deny
10 HTTP (80) TCP (6) 80 0.0.0.0/0 ALLOW
120 HTTPS (443) TCP (6) 443 0.0.0.0/0 ALLOW
140 SSH (22) TCP (6) 22 0.0.0.0/0 ALLOW
* ALL Traffic ALL ALL 0.0.0.0/0 DENY
Security Group outbound settings:
Type Protocol Port Range Destination
ALL Traffic ALL ALL 0.0.0.0/0
Routing table setting:
Destination Target Status Propagated
10.1.0.0/24 local Active No
0.0.0.0/0 igw-cfe30caa Active No
What could be wrong here?
EDIT: The nslookup and dig commands work fine!
Thanks!
Source: (StackOverflow)
I'm using Amazon EC2, and I want to put an internet-facing ELB (load balancer) to 2 instances on a private subnet. I am using VPC with public and private subnets.
- If I just add the private subnet to the ELB, it will not get any connections.
- If I attach both subnets to the ELB, then it can access the instances, but it often gets time-outs. (See Screenshot 1.)
- If I attach only the public subnet, then the instances attached to the ELB show OutOfService, because I do not have any instances in the public subnet; the instance count shows 0. (See Screenshot 2.)
Screenshot 1: Both subnets attached
Screenshot 2: Only public subnet attached
My question is actually an extension to this question. After following all 6 steps mentioned in the accepted answer, I am still stuck: the instances attached to the ELB show OutOfService. I have even tried allowing ports in the security groups for the EC2 instances and the ELB, but it did not help.
Please help; I am banging my head against this.
Source: (StackOverflow)
In AWS I have a VPC set up with a bastion host. The bastion host is a single EC2 instance with a public address, through which you can SSH to any other server in the VPC.
I have created an RDS MySQL instance within the VPC, and I would like to connect to it using MySQL Workbench. I have followed the steps detailed here; however, in "Step 6: Setting up remote SSH Configuration", it asks me to "Provide the Public DNS of the Amazon EC2 instance" (i.e. the bastion host).
MySQL Workbench then checks for certain MySQL resources on that server. However, this does not seem right, as I have provided the bastion host's address, which does not have MySQL installed. As a result, the last two checks, "Check location of start/stop commands" and "Check MySQL configuration file", fail.
I then tried using the endpoint address of the RDS MySQL instance, but with no success (as it is in the private subnet, it is not publicly addressable).
It seems that many people have this up and running, but what am I doing wrong here?
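For what it's worth, the setup MySQL Workbench is being asked to model is equivalent to a plain SSH tunnel, which can be set up by hand first to verify the pieces. A command-line sketch; both hostnames below are placeholders for your actual bastion and RDS endpoint:

```shell
# Forward local port 3306 through the bastion to the RDS endpoint.
# Both hostnames are placeholders; substitute your own values.
ssh -N -L 3306:mydb.example.us-east-1.rds.amazonaws.com:3306 ubuntu@bastion.example.com
```

With the tunnel up, a MySQL client pointed at 127.0.0.1:3306 reaches the RDS instance through the bastion, which is the same topology Workbench's "remote SSH" option is meant to produce.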
Source: (StackOverflow)
Although I've written a fair amount of Chef, I'm fairly new to both AWS/VPC and administering network traffic (especially a bastion host).
Using the knife ec2 plugin, I would like the capability to dynamically create and bootstrap a VM from my developer workstation. The VM should be able to exist in either a public or private subnet of my VPC. I would like to do all of this without use of an elastic IP. I would also like for my bastion host to be hands off (i.e. I would like to avoid having to create explicit per-VM listening tunnels on my bastion host)
I have successfully used the knife ec2 plugin to create a VM in the legacy EC2 model (i.e. outside of my VPC). I am now trying to create an instance in my VPC. On the knife command line, I'm specifying a gateway, security groups, subnet, etc. The VM gets created, but knife fails to ssh to it afterward.
Here's my knife command line:
knife ec2 server create \
--flavor t1.micro \
--identity-file <ssh_private_key> \
--image ami-3fec7956 \
--security-group-ids sg-9721e1f8 \
--subnet subnet-e4764d88 \
--ssh-user ubuntu \
--server-connect-attribute private_ip_address \
--ssh-port 22 \
--ssh-gateway <gateway_public_dns_hostname (route 53)> \
--tags isVPC=true,os=ubuntu-12.04,subnet_type=public-build-1c \
--node-name <VM_NAME>
I suspect that my problem has to do with the configuration of my bastion host. After a day of googling, I wasn't able to find a configuration that works. I'm able to ssh to the bastion host, and from there I can ssh to the newly created VM. I cannot get knife to successfully duplicate this using the gateway argument.
I've played around with /etc/ssh/ssh_config. Here is how it exists today:
ForwardAgent yes
#ForwardX11 no
#ForwardX11Trusted yes
#RhostsRSAAuthentication no
#RSAAuthentication yes
#PasswordAuthentication no
#HostbasedAuthentication yes
#GSSAPIAuthentication no
#GSSAPIDelegateCredentials no
#GSSAPIKeyExchange no
#GSSAPITrustDNS no
#BatchMode no
CheckHostIP no
#AddressFamily any
#ConnectTimeout 0
StrictHostKeyChecking no
IdentityFile ~/.ssh/identity
#IdentityFile ~/.ssh/id_rsa
#IdentityFile ~/.ssh/id_dsa
#Port 22
#Protocol 2,1
#Cipher 3des
#Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
#MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160
#EscapeChar ~
Tunnel yes
#TunnelDevice any:any
#PermitLocalCommand no
#VisualHostKey no
#ProxyCommand ssh -q -W %h:%p gateway.example.com
SendEnv LANG LC_*
HashKnownHosts yes
GSSAPIAuthentication yes
GSSAPIDelegateCredentials no
GatewayPorts yes
I have also set /home/ubuntu/.ssh/identity to the matching private key of my new instance.
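As a side note, the per-connection gateway behavior knife is driving can also be pinned down in ~/.ssh/config with a ProxyCommand stanza (the same mechanism as the commented ProxyCommand line in the config above), which is easier to debug by hand than the global /etc/ssh/ssh_config. This is only a sketch; all hostnames and key paths are placeholders:

```
# Placeholders throughout; adjust to your bastion and VPC addressing.
Host bastion
    HostName gateway.example.com
    User ubuntu
    IdentityFile ~/.ssh/bastion_key

# Reach private 10.0.x.x instances by hopping through the bastion.
Host 10.0.*
    User ubuntu
    IdentityFile ~/.ssh/instance_key
    ProxyCommand ssh -q -W %h:%p bastion
```

If a plain ssh to a private address works with this config, then the SSH layer is sound and the problem can be narrowed down to knife's --ssh-gateway handling.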
UPDATE:
I notice the following in the bastion host's /var/log/auth.log:
May 9 12:15:47 ip-10-0-224-93 sshd[8455]: Invalid user from <WORKSTATION_IP>
May 9 12:15:47 ip-10-0-224-93 sshd[8455]: input_userauth_request: invalid user [preauth]
Source: (StackOverflow)
I am running a Node.js TCP app on my AWS Linux EC2 instance. The basic code is given below:
var net = require('net');

net.createServer(function (socket) {
    socket.write('hello\n');
    socket.on('data', function (data) {
        socket.write(data.toString().toUpperCase());
    });
}).listen(8080);
and it runs like a charm, but I wanted to run this same app on AWS Elastic Beanstalk (just to get the benefit of auto scaling). I am not an AWS ninja. By the way, to get a public IP on Beanstalk I use AWS VPC.
- Beanstalk app connected to the VPC = checked.
- VPC port 8080 open = checked.
- Hard-coded port 8080 changed to process.env.PORT = checked.
But if I connect to port 8080, the application does not return 'hello'. What am I missing?
Source: (StackOverflow)
Does anyone know how to use Openswan to create an IPSec tunnel to a Cisco router on EC2?
I keep reading conflicting reports about whether people can set up IPSec tunnels on Amazon's cloud. Is it possible or not?
If so, can someone point me to a tutorial where it was successful?
Source: (StackOverflow)
I am manually setting up an Amazon VPC network and need to create a NAT instance. Amazon has VPC-specialized AMIs that come in various sizes. Due to budget considerations, I am leaning towards using a micro instance of ami-vpc-nat.
I am concerned that with only 613 MB of memory, a micro instance may struggle as more instances are put behind the NAT instance. Can anyone who has deployed this micro-instance ami-vpc-nat (especially in production) share their thoughts on its performance and throughput?
Source: (StackOverflow)
Currently moving to Amazon EC2 from another VPS provider. We have your typical web server / database server needs. Web servers in front of our database servers. Database servers are not directly accessible from the Internet.
I am wondering if there is any reason to put these servers into an AWS Virtual Private Cloud (VPC) instead of just creating the instances and using security groups to firewall them off.
We are not doing anything fancy just a typical web app.
Any reasons for or against using a VPC?
Thanks.
Source: (StackOverflow)
We're using Amazon EC2, and we want to put an ELB (load balancer) to 2 instances on a private subnet. If we just add the private subnet to the ELB, it will not get any connections, if we attach both subnets to the ELB then it can access the instances, but it often will get time-outs. Has anyone successfully implemented an ELB within the private subnet of their VPC? If so, could you perhaps explain the procedure to me?
Thanks
Source: (StackOverflow)
I have an Amazon VPC set up through the wizard as a "public only network", so all my instances are in a public subnet.
Instances within the VPC that have an Elastic IP assigned connect to the Internet without any trouble.
But instances without an Elastic IP can't connect anywhere.
An Internet gateway is present. The route table in the AWS console looks like:
Destination Target
10.0.0.0/16 local
0.0.0.0/0 igw-nnnnn
and the routing table from inside the instance shows:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.0.0 * 255.255.255.0 U 0 0 0 eth0
default 10.0.0.1 0.0.0.0 UG 100 0 0 eth0
I tried opening ALL inbound and outbound traffic to 0.0.0.0/0 in the security group that the instance belongs to. Still no success.
~$ ping google.com
PING google.com (74.125.224.36) 56(84) bytes of data.
^C
--- google.com ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 5017ms
What else can I do?
Source: (StackOverflow)
A guy I work with gave me the EC2 credentials to log onto his EC2 console. I was not the one who set it up. Some of the instances show a public dns name and others have a blank public DNS. I want to be able to connect to the instances that have a blank public DNS. I have not been able to figure out why these show up as blank.
Source: (StackOverflow)
There are 4 scenarios in the AWS VPC configuration wizard, but let's look at these two:
- Scenario 1: 1 public subnet.
- Scenario 2: 1 public subnet and 1 private subnet.
Since an instance launched in a public subnet does not have an EIP (unless one is assigned), it is already not addressable from the Internet. Then:
- Why is there a need for private subnet?
- What exactly are the differences between private and public subnets?
Source: (StackOverflow)
I'm creating a VPC and security groups with boto. If I just create and tag elements in a script, I keep getting errors because the elements aren't ready yet. I can just put in a manual wait, but I would prefer to poll them to see if they are actually ready. For VPCs or subnets I can use something like:
import boto.vpc

v = boto.vpc.VPCConnection(
    region=primary_region,
    aws_access_key_id=aws_access_key,
    aws_secret_access_key=aws_secret_key)

vpcs = v.get_all_vpcs()
print vpcs[0].state
with some more logic and a while loop to check if the state is 'available', 'running', or whatever. This works fine for most VPC/AWS elements, but some elements, like security groups, don't have a state attribute when returned by get_all_security_groups or its equivalent.
How do people check if these elements are ready to be used?
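One pattern that sidesteps the missing state attribute is a generic poll-until helper: wrap whatever readiness check you can express as a function, and retry it against a deadline. A minimal sketch; the security-group check mentioned in the comment is an assumption about what "ready" means for your case:

```python
import time

def wait_until(predicate, timeout=60.0, interval=2.0):
    """Poll predicate() until it returns True or the timeout elapses.
    Returns True on success, False if the deadline passed."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# For a security group without a .state attribute, one workable check
# is whether it shows up in a fresh listing, e.g. (hypothetical usage):
#   wait_until(lambda: any(g.id == sg.id
#                          for g in v.get_all_security_groups()))
```

The advantage is that the same helper covers both the elements that expose a state attribute and the ones that don't; only the predicate changes.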
Source: (StackOverflow)
We are adapting our application's CloudFormation template to make use of VPC. Within this template we need to programmatically generate the CIDR blocks used for our VPC subnets, in order to ensure they do not conflict between CloudFormation stacks.
My initial plan had been to generate the CIDRs by concatenating strings together, for example:
"ProxyLoadBalancerSubnetA" : {
"Type" : "AWS::EC2::Subnet",
"Properties" : {
"VpcId" : { "Ref" : "Vpc" },
"AvailabilityZone" : "eu-west-1a",
"CidrBlock" : { "Fn::Join" : [ ".", [ { "Ref" : "VpcCidrPrefix" }, "0.0/24" ] ] }
}
},
Upon further consideration however, we need to use a single VPC rather than having a VPC for each of our stacks.
AWS restricts VPCs to a maximum of a /16 CIDR block (we have asked for this limit to be raised, but it is apparently not possible). This means it is no longer possible for us to use this concatenation method, as each of our stacks requires subnets that span more than 255 addresses in total.
I'd like to generate the CIDR blocks on the fly rather than having to define them as parameters to the CloudFormation template.
One idea I had was each stack having a "base integer" and adding to that for each subnet's CIDR block.
For example:
"CidrBlock" : { "Fn::Join" : [ ".", [ { "Ref" : "VpcCidrPrefix" }, { "Fn::Sum" : [ { "Ref" : "VpcCidrStart" }, 3 ] }, "0/24" ] ] }
Where VpcCidrStart is an integer that sets the value that the third CIDR octet should start from within the script, and 3 is the subnet number.
Obviously the Fn::Sum intrinsic function doesn't exist, though, so I wanted to know if anyone has a solution for adding integers in CloudFormation (it seems like something that shouldn't be possible, as CloudFormation is string-oriented), or a better solution to this conundrum in general.
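Whatever the CloudFormation-side answer turns out to be, the arithmetic itself is trivial to do in whatever script generates or feeds the template. A sketch in Python, with parameter names mirroring the hypothetical VpcCidrPrefix and VpcCidrStart above:

```python
def subnet_cidr(vpc_cidr_prefix, vpc_cidr_start, subnet_number, size=24):
    """Build a subnet CIDR like '10.0.3.0/24' from a two-octet VPC
    prefix (e.g. '10.0'), a base value for the third octet, and a
    subnet index. Names mirror the hypothetical template parameters."""
    third_octet = vpc_cidr_start + subnet_number
    if not 0 <= third_octet <= 255:
        raise ValueError("third octet out of range: %d" % third_octet)
    return "%s.%d.0/%d" % (vpc_cidr_prefix, third_octet, size)
```

Computed values like these can then be passed into the stack as ordinary string parameters, which keeps the template itself free of arithmetic.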
Source: (StackOverflow)