virtual-machines interview questions
Top virtual-machines frequently asked interview questions
I'm getting started with virtualization so bear with me.
In a virtualized environment, applications run inside virtual machines on top of a hypervisor layer. So a single physical machine can host many virtual machines running multiple applications.
So far so good?
So what happens when a physical machine fails? Wouldn't that cause many applications to go down at once, all because of a single machine?
I'm looking into building a private cloud with OpenStack, but I want to fully understand virtualization first.
Source: (StackOverflow)
Is there any way to create a virtual machine that you can use in VirtualBox from a physical installation that you have? For instance, suppose I have Windows XP installed on a physical computer and want a virtual version of that machine on a different computer. This would save a ton of time by not having to reinstall and reconfigure the whole OS.
I would think there would be issues with Microsoft's licensing. But even if it's not possible with Windows, would it be possible to take a physical Linux machine and create a VirtualBox version of it? Does any other desktop virtualization software provide this feature?
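For illustration, one rough physical-to-virtual sketch for a Linux machine is to image the physical disk and convert the image into a VirtualBox disk. The device and file paths below are placeholders, and the source disk should not be in use while it is imaged (e.g. boot from a live CD):

# Image the physical disk (assumes /dev/sda is the disk to virtualize)
dd if=/dev/sda of=/tmp/physical.img bs=4M

# Convert the raw image into a VDI that a new VirtualBox VM can boot from
VBoxManage convertfromraw /tmp/physical.img /tmp/physical.vdi --format VDI

For Windows guests, dedicated P2V tools (e.g. Disk2vhd or VMware vCenter Converter) are the more common route, and the licensing/activation question still applies.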
Source: (StackOverflow)
I want to assign MAC addresses to my virtual machines so that I can configure DHCP reservations for them and they always get the same IP address, regardless of which host hypervisor they are running on or which operating system they are running.
What I need to know is what range of MAC addresses can I use without fear that one day some device may be connected to our network with that MAC?
I have read the Wikipedia article on MAC addresses and this section seems to indicate that if I create an address with the form 02-XX-XX-XX-XX-XX then it is considered a locally administered address.
I would assume this means that no hardware manufacturer would ever use an address starting with 02 so I should be safe to use anything that starts with 02 for my virtual machines?
Thanks for the help.
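As an illustrative sketch (the specific values are made up): a locally administered address can be generated on the host with the bash snippet below and then pinned in the VM's configuration, for example in a hypothetical libvirt interface element using a bridge named br0:

# Generate a random MAC in the locally administered 02:xx:xx:xx:xx:xx range
printf '02:%02x:%02x:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))

<interface type='bridge'>
  <mac address='02:00:00:00:00:01'/>
  <source bridge='br0'/>
</interface>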
Source: (StackOverflow)
Can anyone share their experiences (for example, "this was great!" or "this failed miserably!") with the Hyper-V, ESXi, and XenServer virtualization platforms? Cost? Management? Features? Handling load, backups, and recovery?
And also minimum server requirements?
I thought Xen was a free virtualization platform for Linux. Is there a Xen and a separate XenServer platform?
Opinions and observations would be appreciated for a test rollout for our organization.
Source: (StackOverflow)
I have a number of Xen virtual machines running on a number of Linux servers. These VMs store their disk images in Linux LVM volumes with device names along the lines of /dev/xenVG/SERVER001OS and so on. I'd like to take regular backups of those disk images so I can restore the VMs in case we need to (the LVM devices are already mirrored with DRBD between two physical machines each, I'm just being extra paranoid here).
How do I go about this? Obviously the first step is to snapshot the LVM device, but how do I then transfer the data to a backup server in the most efficient manner possible? I could simply copy the whole device, something along the lines of:
dd if=/dev/xenVG/SERVER001OS | ssh administrator@backupserver "dd of=/mnt/largeDisk/SERVER001OS.img"
...but that would take a lot of bandwidth. Is there an rsync-like tool for syncing the contents of whole block devices between remote servers? Something like:
rsync /dev/xenVG/SERVER001OS backupServer:/mnt/largeDisk/SERVER001OS.img
If I understand rsync's man page correctly, the above command won't actually work (will it?), but it shows what I'm aiming for. I understand that rsync's --devices option copies the device nodes themselves, not the contents of those devices. Making a local copy of the VM image before syncing it with the remote server isn't an option, as there isn't the disk space.
Is there a handy utility that can sync between a block device and a backup file on a remote server? I can write one if I have to, but an existing solution would be better. Have I missed an rsync option that does this for me?
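As a rough sketch of the snapshot-then-copy approach (the sizes are illustrative, and the snapshot must be big enough to absorb writes made during the copy):

# Create a copy-on-write snapshot of the VM's logical volume
lvcreate --snapshot --size 5G --name SERVER001OS_snap /dev/xenVG/SERVER001OS

# Stream the snapshot to the backup server, compressing on the wire to save bandwidth
dd if=/dev/xenVG/SERVER001OS_snap bs=4M | gzip -c | ssh administrator@backupserver "gunzip -c > /mnt/largeDisk/SERVER001OS.img"

# Drop the snapshot once the copy has finished
lvremove -f /dev/xenVG/SERVER001OS_snap

This still transfers the full image each time; for true block-level deltas against an existing remote copy, tools such as bdsync or lvmsync are often suggested, though I haven't verified them here.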
Source: (StackOverflow)
We have an ESXi 4.1 server with 48 GB RAM.
For each VM, we are allocating 4GB of memory. Since the server will have 13 virtual machines, my manager thinks this is wrong.
I am going to explain to them that ESXi will actually manage memory itself, but they asked me how much memory I allocated for the ESXi server itself.
I did not allocate any (I have not even heard of an option for allocating memory for the ESXi server itself).
How is memory allocated for the ESXi server itself? How does it over-allocate/distribute RAM among the virtual machines without issues?
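For context, the numbers above already imply memory overcommitment: 13 VMs × 4 GB = 52 GB of configured guest memory against 48 GB of physical RAM, some of which the hypervisor reserves for its own use. Such a configuration only works to the extent that ESXi's reclamation techniques (transparent page sharing, ballooning, memory compression and, as a last resort, hypervisor swapping) make up the difference.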
Source: (StackOverflow)
I've read conflicting advice on this issue so thought I'd ask here.
Should I be running a scheduled defrag within my VM?
Source: (StackOverflow)
Possible Duplicate:
Can a single Virtual Core on a VM use more than 1 physical core?
I'm a co-owner of a Minecraft server that is growing larger every day, but as we get bigger we're running into the limits of Minecraft and the way it's coded. The game isn't written to use multiple cores; it only uses one. Talking with a friend, he suggested finding out whether it is possible for a virtual machine with only one virtual core to use three of the four cores on the host machine. I've done some research and can't seem to find any answers. It doesn't matter whether the host operating system is Windows or Linux; I'm just curious whether it can be done.
If it can be done, or is done automatically, can you provide links so that I can read up on this and learn more? I am new to virtual machines, so go easy.
Source: (StackOverflow)
I am trying to hot-add a file-based disk to a running KVM virtual server. I've created a new disk from scratch using the command
dd of=/home/cloud/vps_59/test.img bs=1 seek=5G count=0
and I was hoping to get it hot-added to the guest by doing this in the virsh shell:
virsh # attach-disk vps_59 /home/cloud/vps_59/test.img \
vdd --driver=file --subdriver=raw
The XML definition of the domain then becomes:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/home/cloud/vps_59/root.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='disk'>
  <driver name='file' type='raw'/>
  <source file='/home/cloud/vps_59/test.img'/>
  <target dev='vdd' bus='virtio'/>
</disk>
As you can see, the driver name comes out wrong: it should be driver name='qemu', as on the existing vda disk. I have tried --driver=qemu, but it states that it is unsupported.
Secondly, I only "see" the newly added drive once I reboot the virtual machine running Ubuntu 10.04.4 LTS. How can I make the drive "hotplug"? I want the virtual machine to "see" the new drive immediately without a reboot.
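For illustration only (the paths and the vdd target come from the question; the file name test-disk.xml and the approach itself are assumptions on my part), a commonly suggested workaround is to describe the disk in a small XML fragment that names the qemu driver explicitly and attach it with attach-device instead of attach-disk:

<!-- /home/cloud/vps_59/test-disk.xml: hypothetical definition using the qemu raw driver -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/home/cloud/vps_59/test.img'/>
  <target dev='vdd' bus='virtio'/>
</disk>

virsh # attach-device vps_59 /home/cloud/vps_59/test-disk.xml

For the guest to notice the new virtio disk without a reboot, it may also need PCI hotplug support loaded inside the VM (the acpiphp module is the one usually mentioned for older guests such as Ubuntu 10.04).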
Source: (StackOverflow)
I read in one of the VMware KB articles that snapshots will directly affect VM performance.
But my team keeps asking me how snapshots can affect performance.
I would like to give them a solid reason behind the statement that snapshots are performance killers.
Can anyone explain a little of the theory behind how snapshots actually affect performance? Is it just because the disk I/O rate would be slower?
Source: (StackOverflow)
I've been using VMWare for many years, running dozens of production servers with very few issues. But I never tried hosting more than 20 VMs on a single physical host.
Here is the idea:
- A stripped-down Windows XP installation can live with 512 MB of RAM and 4 GB of disk space.
- $5,000 gets me an 8-core server-class machine with 64 GB of RAM and four SAS mirrors.
- Since 100 of the above-mentioned VMs fit on this server, my hardware cost is only $50 per VM, which is super nice (cheaper than renting VMs from GoDaddy or any other hosting shop).
I'd like to see if anybody has been able to achieve this kind of scalability with VMWare. I've done a few tests and bumped into a weird issue: VM performance starts degrading dramatically once about 20 VMs are running. At the same time, the host server does not show any resource bottlenecks (the disks are 99% idle, CPU utilization is under 15%, and there is plenty of free RAM).
I'll appreciate if you can share your success stories around scaling VMWare or any other virtualization technology!
Source: (StackOverflow)
What are some pitfalls or lessons learned after converting existing hardware to a virtualized environment? Is there anything you tried to virtualize but will never do again?
Source: (StackOverflow)
The title pretty much says it all: is there any advantage to giving a VM 2048 MB of memory instead of rounding to base 10 and using 2000 MB?
Source: (StackOverflow)
I've done a search and have not found anything addressing patching and system updates. I have guidelines that say servers need to have the necessary patches. If I have a VM host, is that an extra layer to patch and update, even with bare-metal hypervisors, as opposed to a bare-metal server? (i.e. more work, testing, and documentation as per my guidelines.)
How often do type 1 / bare-metal hypervisors get updated? Does that matter? Does the fact that it is an extra software layer introduce more complexity and risk (security and reliability)? (e.g. 99% bug-free software × 99% bug-free software ≈ 98% bug-free system)
(My practical experience is with VMWare Workstation and Server, and VirtualBox.)
Source: (StackOverflow)
Our IT department created a VM with 2 CPUs allocated rather than the 4 I requested. Their reason is that, according to them, the VM performs better with 2 CPUs than with 4. The rationale is that the hypervisor (VMWare in this case) waits for all of the virtual CPUs to be available before scheduling any of them, so it takes longer to wait for 4 CPUs than for 2.
Does this statement make sense?
Source: (StackOverflow)