EzDevInfo.com

virtualization interview questions

Top frequently asked virtualization interview questions

How to [politely?] tell a software vendor they don't know what they're talking about

Not a technical question, but a valid one nonetheless. Scenario:

HP ProLiant DL380 Gen 8 with 2 x 8-core Xeon E5-2667 CPUs and 256GB RAM running ESXi 5.5. Eight VMs for a given vendor's system. Four VMs for test, four VMs for production. The four servers in each environment perform different functions, e.g.: web server, main app server, OLAP DB server and SQL DB server.

CPU shares are configured to stop the test environment from impacting production. All storage is on the SAN.

We've had some queries regarding performance, and the vendor insists that we need to give the production system more memory and vCPUs. However, we can clearly see from vCenter that the existing allocations aren't being touched, e.g.: a monthly view of CPU utilization on the main application server hovers around 8%, with the odd spike up to 30%. The spikes tend to coincide with the backup software kicking in.

Similar story on RAM - the highest utilization figure across the servers is ~35%.

So, we've been doing some digging, using Process Monitor (Microsoft Sysinternals) and Wireshark, and our recommendation to the vendor is that they do some TNS tuning in the first instance. However, this is beside the point.

My question is: how do we get them to acknowledge that the VMware statistics that we've sent them are evidence enough that more RAM/vCPU won't help?

--- UPDATE 12/07/2014 ---

Interesting week. Our IT management have said that we should make the change to the VM allocations, and we're now waiting for some downtime from the business users. Strangely, the business users are the ones saying that certain aspects of the app are running slowly (compared to what, I don't know), but they're going to "let us know" when we can take the system down (grumble, grumble!).

As an aside, the "slow" aspect of the system is apparently not the HTTP(S) element, i.e.: the "thin app" used by most of the users. It sounds like it's the "fat client" installs, used by the main finance bods, that are apparently "slow". This means that we're now considering the client and the client-server interaction in our investigations.

As the initial purpose of the question was to seek assistance as to whether to go down the "poke it" route, or just make the change, and we're now making the change, I'll close it using longneck's answer.

Thank you all for your input; as usual, Server Fault has been more than just a forum - it's kind of like a psychologist's couch as well :-)


Source: (StackOverflow)

Difference between KVM and QEMU

I have been reading about KVM and QEMU for some time. As of now I have a clear understanding of what they do.

KVM uses hardware virtualization to provide near-native performance to the guest operating systems. QEMU, on the other hand, emulates the target machine in software.

What I am confused about is to what level these two coordinate. For example:

  1. Who manages the sharing of RAM and/or memory?
  2. Who schedules I/O operations?
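
As I understand it so far, QEMU provides the machine/device emulation and handles guest I/O, while KVM accelerates CPU and memory virtualization inside the host kernel. A minimal sketch of how the two are typically combined on the command line (guest.img is a hypothetical disk image):

# Pure emulation: QEMU translates guest instructions in software (TCG)
qemu-system-x86_64 -m 2048 -hda guest.img

# Hardware-assisted: same QEMU device model and I/O handling, but the guest
# CPU and memory run through the kernel's KVM module
qemu-system-x86_64 -enable-kvm -m 2048 -hda guest.img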

Source: (StackOverflow)


What range of MAC addresses can I safely use for my virtual machines?

I want to assign MAC addresses to my virtual machines so that I can configure DHCP reservations for them, and they always get the same IP address regardless of which hypervisor host they are running on or which operating system they run.

What I need to know is: what range of MAC addresses can I use without fear that some device with the same MAC may one day be connected to our network?

I have read the Wikipedia article on MAC addresses and this section seems to indicate that if I create an address with the form 02-XX-XX-XX-XX-XX then it is considered a locally administered address.

I would assume this means that no hardware manufacturer would ever use an address starting with 02, so I should be safe to use anything that starts with 02 for my virtual machines?
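
For what it's worth, here is a rough bash sketch of how I could generate such addresses (02 as the first octet sets the locally administered bit and keeps the address unicast):

# Generate a random locally administered, unicast MAC: 02:xx:xx:xx:xx:xx
printf '02:%02x:%02x:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256))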

Thanks for the help.


Source: (StackOverflow)

Linux on VMware - why use partitioning?

When installing Linux VMs in a virtualized environment (ESXi in my case), are there any compelling reasons to partition the disks (when using ext4) rather than just adding separate disks for each mount point?

The only one I can see is that it makes it somewhat easier to see if there's data present on a disk with e.g. fdisk.

On the other hand, I can see some good reasons for not using partitions (for other than /boot, obviously).

  • Much easier to extend disks: just increase the disk size for the VM (typically in vCenter), rescan the device in the VM, and resize the file system online (see the sketch after this list).
  • No more issues with aligning partitions with underlying LUNs.
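
To make the first point concrete, here is a minimal sketch of the no-partition workflow after growing the virtual disk in vCenter (run as root; /dev/sdb is a placeholder for a data disk with ext4 created directly on the device):

echo 1 > /sys/class/block/sdb/device/rescan   # make the kernel pick up the new disk size
resize2fs /dev/sdb                            # grow the ext4 file system online to fill the device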

I have not found much written on this topic. Have I missed something important?


Source: (StackOverflow)

Replace VMware vSphere infrastructure with open source alternatives?

We are planning a slow migration from VMware (and third party apps) to open source alternatives (free would be great).

Basically, we want to start with a small cluster lab, then migrate the production environment (35+ ESX hosts, 1500 VMs) in the future (X years away; there is no hurry... yet).

Our bet is CentOS/Scientific Linux as the operating system of choice and KVM as the hypervisor.

The vCenter alternative we are thinking about is ConVirt, but we don't know whether it will supply all the features we use in VMware (HA, DRS, clustering, ...), or whether we should try some other alternatives (any ideas?).

The monitoring is being replaced by Nagios and the backup/replication will be replaced by some scripting magic.

So, is there anyone who can give us some advice, or who is in a similar situation?

P.S. This is my first question on Server Fault, and my English is not very good, but I hope the question is understandable.

P.P.S. I forgot to mention that we also provide VDIs, and the alternative we have been considering is SPICE.


Source: (StackOverflow)

Virtualization for Linux (VMware vs VirtualBox vs KVM vs ...)? [closed]

I'm trying to decide which of these to use; the ones I know about are those in the title (VMware, VirtualBox, KVM, ...).

Now ideally I'd like the following features:

  • Ideally to be able to boot a real partition rather than a file representing a virtual hard disk (so it's readable and writable by the host OS);
  • Have good networking support (for example, setting up virtual interfaces for KVM such that they can use DHCP to get a "real" IP address was painful; see the bridged-networking sketch after this list);
  • Has good performance, using the VT hardware support where available;
  • Supports 64-bit guests;
  • Has a good graphical administrator tool; and
  • Has good support for scripting guest creation.
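
For reference on the networking point, this is roughly the bridged setup I was after with KVM (a sketch only; guest.img and br0 are placeholders, the bridge must already exist on the host and enslave the physical NIC, and it must be allowed in /etc/qemu/bridge.conf):

# Attach the guest NIC to the host bridge so it gets a "real" DHCP lease from the LAN
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=guest.img,if=virtio \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0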

Source: (StackOverflow)

Consumer (or prosumer) SSDs vs. fast HDDs in a server environment

What are the pros and cons of consumer SSDs vs. fast 10-15k spinning drives in a server environment? We cannot use enterprise SSDs in our case as they are prohibitively expensive. Here are some notes about our particular use case:

  • Hypervisor with 5-10 VMs max. No individual VM will be crazily I/O-intensive.
  • Internal RAID 10, no SAN/NAS...

I know that enterprise SSDs:

  1. are rated for longer lifespans
  2. and perform more consistently over long periods

than consumer SSDs... but does that mean consumer SSDs are completely unsuitable for a server environment, or will they still perform better than fast spinning drives?

Since we're protected via RAID/backup, I'm more concerned about performance over lifespan (as long as lifespan isn't expected to be crazy low).


Source: (StackOverflow)

Are VMware ESXi 5 patches cumulative?

This seems basic, but I'm confused about the patching strategy involved with manually updating standalone VMware ESXi hosts. The VMware vSphere blog attempts to explain this, but the actual process is still not clear to me.

From the blog:
Say Patch01 includes updates for the following VIBs: "esxi-base", "driver10" and "driver44". Then later Patch02 comes out with updates to "esxi-base", "driver20" and "driver44". Patch02 is cumulative in that the "esxi-base" and "driver44" VIBs will include the updates in Patch01. However, it's important to note that Patch02 does not include the "driver10" VIB, as that module was not updated.

This VMware Communities post gives a different answer, which contradicts the blog.

Many of the ESXi installations I encounter are standalone and do not utilize Update Manager. It is possible to update an individual host using the patches made available through the VMware patch download portal. The process is quite simple, so that part makes sense.
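
For reference, the manual workflow I use on a standalone host looks roughly like this (a sketch; the datastore path and bundle name are placeholders):

# Enter maintenance mode, apply the offline bundle, then reboot the host
esxcli system maintenanceMode set --enable true
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi-patch-bundle.zip
reboot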

The bigger issue is determining what exactly to actually download and install. In my case, I have a good number of HP-specific ESXi builds that incorporate sensors and management for HP ProLiant hardware.

  • Let's say that those servers start with an ESXi build #474610 from 9/2011.
  • Looking at the patch portal screenshot below, there is a patch for ESXi update01, build #623860. There are also patches for builds #653509 and #702118.
  • Coming from an old version of ESXi (e.g. a vendor-specific build), what is the proper approach to bring the system fully up to date? Which patches are cumulative and which need to be applied sequentially? Is installing the newest build the right approach, or do I need to step back and patch incrementally?
  • Another consideration is the large size of the patch downloads. At sites with limited bandwidth, downloading multiple ~300MB patches is difficult.

[Screenshot of the VMware patch download portal referenced above]


Source: (StackOverflow)

VMware Linux Server -- how can you tell if you are on a VM or real hardware?

An interesting question. I have logged into a Linux (most likely SuSE) host. Is there some way that I can programmatically tell whether I am on a VM or not?

Also assume that VMware Tools is not installed.
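
To illustrate the kind of check I'm after (a sketch; the exact strings vary by hypervisor, and some of these need root or dmidecode installed):

dmidecode -s system-product-name            # a VMware guest typically reports "VMware Virtual Platform"
cat /sys/class/dmi/id/product_name          # the same DMI field without the dmidecode tool
grep -c '^flags.*hypervisor' /proc/cpuinfo  # non-zero when the CPU reports the hypervisor bit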


Source: (StackOverflow)

How do I know if I'm working on a Virtual Machine or not?

Is there a way to know whether the Windows machine I'm working on is virtual or physical? (I'm connecting to the machine with RDP. If it is a virtual machine, it is running on and managed by VMware.)
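
To illustrate the kind of check I mean, something like this from a command prompt over the RDP session (a sketch; the exact manufacturer/model strings depend on the hypervisor):

rem A VMware guest typically reports "VMware, Inc." / "VMware Virtual Platform"
wmic computersystem get manufacturer,model

rem The same information also shows up in the systeminfo output
systeminfo | findstr /i "Manufacturer Model"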


Source: (StackOverflow)

When deploying a single server on new hardware, do you virtualize it or not?

There are a few questions that I've found on ServerFault that hint around this topic, and while it may be somewhat opinion-based, I think it can fall into that "good subjective" category based on the below:

Constructive subjective questions:

* tend to have long, not short, answers
* have a constructive, fair, and impartial tone
* invite sharing experiences over opinions
* insist that opinion be backed up with facts and references
* are more than just mindless social fun

So, with that out of the way.


I'm helping out a fellow sysadmin who is replacing an older physical server running Windows 2003, and he's looking not only to replace the hardware but to "upgrade" to 2012 R2 in the process.

In our discussions about his replacement hardware, we discussed the possibility of him installing ESXi and then making the 2012 "server" a VM and migrating the old apps/files/roles from the 2003 server to the VM instead of to a non-VM install on the new hardware.

He doesn't foresee any need in the next few years to move anything else to a VM or create additional VMs, so in the end this will either be new hardware running a normal install or new hardware running a single VM on ESXi.

My own experience would still lean towards a VM, although there isn't a truly compelling reason to do so other than the possibility that additional VMs may be needed later. On the other hand, there is now the additional overhead and management aspect of the hypervisor, albeit I have experienced better management and reporting capabilities with a VM.

So, with the premise of hoping this can stay in the "good subjective" category to help others in the future, what experiences/facts/references/constructive answers do you have to support either outcome (virtualizing a single "server" or not)?


Source: (StackOverflow)

Is there a reason to give a VM a round base-2 amount (2048MB, 4096MB, etc) of memory?

The title pretty much says it all: is there any advantage to giving a VM 2048MB of memory instead of rounding to base 10 and using 2000MB?


Source: (StackOverflow)

Vagrant set default share permissions

When running a Vagrant instance, the project folder is mounted on /vagrant automatically. However, it is mounted with the following permissions:

# ll -d /vagrant
drwx------ 1 vagrant vagrant 612 Jun 13 14:41 /vagrant/

I need it to be mounted with (at least) 0770, but I can't find out how. If I run the mount command, I see this output:

# mount
v-root on /vagrant type vboxsf (uid=1000,gid=100,rw)

I've tried both chmod and chown/chgrp, but they won't work on that mounted folder, so my Apache user can't access it. I read in the Vagrant manual that I can change the owner and group, but it doesn't mention anything about permissions.

How can I do that?

Another option could be to switch to NFS, but then it won't work on Windows hosts, it needs the local /etc/exports file to be edited, it requires root privileges, and it's also pretty annoying, so I'd prefer not to make this change.
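
One workaround I am considering is remounting the share by hand with explicit mode options (a sketch; the uid/gid are taken from the mount output above, and the same mount options could presumably also be set on the synced folder in the Vagrantfile):

# Remount the VirtualBox shared folder with group access on directories and files
sudo umount /vagrant
sudo mount -t vboxsf -o uid=1000,gid=100,dmode=0770,fmode=0660 v-root /vagrant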


Source: (StackOverflow)

How to zero fill a virtual disk's free space on Windows for better compression?


I would like a simple open source (or at least free) tool for that. It should probably write a file full of zeros that is as big as possible and erase it afterwards. Only one pass is needed (this is not for security reasons but for compression; we are backing up virtual machines).

It should run from inside Windows, not from a boot disk.

On Linux I do it like this (as a user):

cd
mkdir wipe
# -f: fast (insecure) mode, -l -l: cut down to a single pass,
# -z: write zeros on that final pass instead of random data
sudo sfill -f -l -l -z ./wipe/

Edit 1: I decided to use sdelete from the accepted answer. I had a look at sdelete's help:

C:\WINDOWS\system32>sdelete /?

SDelete - Secure Delete v1.51
Copyright (C) 1999-2005 Mark Russinovich
Sysinternals - www.sysinternals.com

usage: sdelete [-p passes] [-s] [-q] <file or directory>
       sdelete [-p passes] [-z|-c] [drive letter]
   -c         Zero free space (good for virtual disk optimization)
   -p passes  Specifies number of overwrite passes (default is 1)
   -q         Don't print errors (Quiet)
   -s         Recurse subdirectories
   -z         Clean free space

This is an old version. I used the -c switch from the second usage line, and this was quite fast (syntax only valid for older versions before v1.6):

c:\>sdelete -c c: (OUTDATED!)

Edit 2: As scottbb pointed out in his answer below, there was a September 2011 change to the tool (version 1.6): the -c and -z options have changed meanings. The correct usage from 1.6 onwards is:

c:\>sdelete -z c:

I have the impression this does what I want. The sdelete tool is easy to use and easy to get.


Source: (StackOverflow)

What is the difference between PV and HVM virtualization types in ec2?

AWS EC2 offers two virtualization types for Ubuntu Linux EC2 machines - PV and HVM.

[Screenshots illustrating the PV and HVM options]

What is the difference between these types?


Source: (StackOverflow)