EzDevInfo.com

linux-kvm interview questions

Top linux-kvm frequently asked interview questions

Is it possible to run a Windows partition as a VM?

My laptop is set up to dual-boot between Windows 7 (64-bit) and Ubuntu Linux (64-bit). Because I spend most of my work time in Linux, I need a Windows 7 VM to be able to use Microsoft Office tools, etc. But my laptop only has a 256 GB SSD, so having both a Windows 7 bootable partition and a VM takes up quite a lot of space.

Is there any way of running the Windows 7 partition as a VM from Linux without converting it to an .IMG file, ideally with KVM? If not, are there any other options that could help me?
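For reference, QEMU/KVM can use a raw block device directly as a guest disk, so converting to an .IMG file is not strictly required. A minimal sketch follows; the device path and memory size are assumptions (check yours with sudo fdisk -l), the partition must not be mounted by the host while the guest runs, and Windows activation may object to the changed virtual hardware:

```shell
# Sketch: boot an existing physical disk in KVM without converting it.
# /dev/sda is an assumed device path -- substitute your own.
DISK=/dev/sda
QEMU_CMD="qemu-system-x86_64 -enable-kvm -m 4096 -drive file=$DISK,format=raw -boot c"
echo "$QEMU_CMD"    # run the printed command with sudo (or as a user with access to $DISK)
```

The command is printed rather than executed so nothing happens until you have checked the device path.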


Source: (StackOverflow)

Implementing PCI-Passthrough with Linux-KVM on Debian

I am attempting to use PCI-Passthrough to attach an old video card (Radeon 4770) to a virtual machine. I am using Linux-KVM to run my virtual machines on a Debian Linux (Wheezy, 3.2.0-4-amd64) host.

Question

To clarify, I am not sure what the correct 'path' is for implementing PCI-Passthrough with Linux KVM. At this stage I suspect the correct action is to add CONFIG_DMAR, CONFIG_DMAR_DEFAULT_ON, and CONFIG_PCI_STUB to the "Bus options (PCI etc.)" section of the kernel source and recompile.

But I am not sure whether this is an exhaustive list of necessary additions before recompiling, or whether recompiling the kernel is necessary at all--perhaps there is an easier method?

Of the guides I have referenced, only linux-kvm.org explicitly mentions compiling is necessary. Linux-KVM is already installed and functioning as a hypervisor.

Research

At this point I think my issue is related to my kernel. My primary resource has been the guide at linux-kvm.org (http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM). However, I have found other resources which indicate slightly different methods that are (seemingly) distribution specific:

Fedora--https://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/chap-Virtualization-PCI_passthrough.html

SUSE--"openSUSE: Virtualization with KVM" (Link omitted due to low-relevancy and 2-link limit)

The Fedora guide works until referencing setsebool which appears to be RedHat-specific. The SUSE guide indicates graphics-card assignment is not supported by SUSE, however I am referencing it as well because it indicated I should find a CONFIG_DMAR_DEFAULT_ON string within /boot/config-`uname -r`. The linux-kvm.org site also references CONFIG_DMAR_DEFAULT_ON, so this appears to be a common and necessary component.

Note: I have not found restrictions for graphics cards in guides for Fedora or Debian. The referenced SUSE document is dated 2006-2013.

I cannot find CONFIG_DMAR_DEFAULT_ON in /boot/config-`uname -r` on my system. Further research suggests that CONFIG_DMAR, CONFIG_DMAR_DEFAULT_ON, and CONFIG_PCI_STUB are Linux kernel configuration items that are relevant to the instructions on linux-kvm.org. As such I believe that I need to recompile my host's kernel with these 3 (at least) kernel config items. Booting with intel_iommu=on as a kernel parameter to my host-OS appears to be insufficient.

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
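One possible explanation, offered as an assumption to verify rather than a confirmed fact: around kernel 3.1 the CONFIG_DMAR options were renamed to CONFIG_INTEL_IOMMU, so a 3.2 kernel may already provide the feature under the new name without any recompile. A quick way to check the running kernel's config (the block below greps an inline sample for illustration; the real command is in the first comment):

```shell
# On the real system:
#   grep -E 'CONFIG_(DMAR|INTEL_IOMMU|PCI_STUB)' /boot/config-$(uname -r)
# Inline sample config for illustration (assumed values, not from a real host):
SAMPLE_CONFIG='CONFIG_INTEL_IOMMU=y
CONFIG_INTEL_IOMMU_DEFAULT_ON=y
CONFIG_PCI_STUB=m'
MATCHES=$(printf '%s\n' "$SAMPLE_CONFIG" | grep -cE 'CONFIG_(DMAR|INTEL_IOMMU|PCI_STUB)')
echo "$MATCHES matching options"
```

If the INTEL_IOMMU options show up, booting with intel_iommu=on should enable the IOMMU without touching the kernel source.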

VT-d/IOMMU/KVM Support Confirmation

My research indicates that PCI-Passthrough requires both CPU and Motherboard support for VT-d.

VT-d

I have confirmed that my processor, a non-K Intel i7-3770 (per ark.intel.com/products/65719), supports VT-d:

Intel® Virtualization Technology for Directed I/O (VT-d) ‡ Yes

My Asrock Z77 Extreme4 motherboard also supports VT-d (per page 62 of the User Manual):

VT-d Use this to enable or disable Intel ® VT-d technology (Intel ® Virtualization Technology for Directed I/O). The default value of this feature is [Disabled].

IOMMU

I verified that my system has IOMMU support:

dmesg | grep -e DMAR -e IOMMU | grep -e "DRHD base" -e "enabled"
[    0.000000] Intel-IOMMU: enabled

KVM

KVM is installed and functional, aside from PCI-Passthrough support:

lsmod | grep kvm
kvm_intel             121968  0 
kvm                   287749  1 kvm_intel

I have ensured that VT-d is enabled via my motherboard's BIOS. As such, I do not suspect hardware/BIOS issues that would prevent the use of VT-d. Regardless, I am unable to successfully detach my video card from my host and reassign it to a virtual machine.

Closing Thoughts

Finally I would like to mention that I also tried testing:

echo "8086 10b9" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:01:00.0" > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo "0000:01:00.0" > /sys/bus/pci/drivers/pci-stub/bind
echo "8086 10b9" > /sys/bus/pci/drivers/pci-stub/remove_id
kvm -m 512 -boot c -net none -hda debian-7.1.0-amd64-netinst.iso -device pci-assign,host=01:00.0

and got the following error after trying to create the target VM:

Failed to assign device "(null)" : Device or resource busy
*** The driver 'pci-stub' is occupying your device 0000:01:00.0.
***
*** You can try the following commands to free it:
***
*** $ echo "8086 10b9" > /sys/bus/pci/drivers/pci-stub/new_id
*** $ echo "0000:01:00.0" > /sys/bus/pci/drivers/pci-stub/unbind
*** $ echo "0000:01:00.0" > /sys/bus/pci/drivers/pci-stub/bind
*** $ echo "8086 10b9" > /sys/bus/pci/drivers/pci-stub/remove_id
***
kvm: -device pci-assign,host=01:00.0: Device 'pci-assign' could not be initialized

I am guessing this is because the host still will not relinquish control of the video card and is likely due to the kernel not being compiled with the appropriate configuration items.

This is new territory for me so please forgive my inexperience. I would greatly appreciate any feedback whatsoever, even if it is simply confirmation that I am on the right track. Please let me know if I have made a glaring oversight or am over-thinking. Constructive criticism of my question is welcome as well. Let me know if I have not provided enough information to "help you help me" (or if I've included too much!). I would be more than happy to help make my question clearer or easier to answer.

Thank you in advance,


Source: (StackOverflow)


How to enable Virtualization Technology in Samsung Chromebox

I need to enable Virtualization Technology (VT) in my Samsung Chromebox (Series 5) from Google IO 2012 so that I can use the Intel Android x86 emulator. I am on the Developer BIOS, but I have no clue how to modify its settings. Any help or idea is appreciated.


Source: (StackOverflow)

Mirror Port via iptables

I have a dedicated Linux (Debian 7.5) root server, with a number of guests set up. The guests are KVM instances, and get network access via bridge-utils (NAT, internal IPs, use the host as a gateway).

E.g. one KVM is my WebServer guest, and it gets accessible via the host IP this way:

    iptables -t nat -I PREROUTING -p tcp -d 148.251.Y.Z --dport 80 -j DNAT --to-destination 192.168.100.X:80

I do the same with other services, keeping them self-contained, NATed and isolated.

But one guest is supposed to be a network monitor, and shall perform network traffic inspection (like an IDS). In a non-virtual setup I would usually use VACLs or SPAN ports to mirror the traffic. Of course, inside this one host I cannot do this easily (I don't want to use complex virtual switching approaches).

  1. Can I get a port mirror using iptables, and redirect all ingress and egress traffic to one KVM guest? All guests have a dedicated interface, like vnet1.
  2. Is it possible to selectively forward traffic, based on the protocol (like a VACL forward rule, which only grabs HTTP)?
  3. Do the guests need a specific interface setup, when I need to keep vnet1 as a management interface (with an IP)?
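On points 1 and 2, one direction worth checking (a sketch, not verified on this exact setup): the iptables TEE target in the mangle table copies matching packets to another local IP, which can serve as a poor man's SPAN port. The monitor guest's address 192.168.100.50 below is a placeholder; the commands are printed rather than executed:

```shell
# TEE duplicates matching packets to the given gateway IP (the monitor guest).
# Requires the xt_TEE module, which should be present with iptables 1.4.14 on
# a 3.2 kernel.
MONITOR_IP=192.168.100.50
MIRROR_ALL_IN="iptables -t mangle -A PREROUTING -j TEE --gateway $MONITOR_IP"
MIRROR_ALL_OUT="iptables -t mangle -A POSTROUTING -j TEE --gateway $MONITOR_IP"
# Selective mirroring (question 2): add a protocol/port match, e.g. HTTP only:
MIRROR_HTTP="iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TEE --gateway $MONITOR_IP"
echo "$MIRROR_HTTP"
```

The monitor guest only needs to answer ARP for its IP on the bridge; the copies arrive on its normal interface, so no special interface setup beyond promiscuous capture should be needed.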

I would be happy for a pointer in the right direction. My versions:

iptables         1.4.14-3.1
linux            3.2.55
bridge-utils     1.5-6

Thanks a lot :)


Source: (StackOverflow)

Virtualized Screen Resolution

I have a 64-bit Ubuntu 9.10 workstation with two virtualized guest OSes using KVM/QEMU, both also 64-bit: one is Fedora 12, the other a beta of Ubuntu 10.04.

The problem is that I would like to use a larger display size than is configured by default. Both guest OSes have a maximum screen resolution of 1024x768. I would like to increase this to something like 1280x900 or 1440x900. The resolution of the host system is 1920x1080.

This configuration appears to be a result of the installer detecting the resolution reported by the virtual screen during installation.

The only information I have found on the subject suggests modifying the xorg.conf file in the /etc/X11 directory. Neither guest system has this file.

I tried creating one by hand in the Fedora system and managed to render it completely unusable. Not a big deal as this is recently installed and can be reinstalled easily.

Is what I want to do possible? If so, how do I accomplish it?
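For what it's worth, a hand-written xorg.conf only needs a Screen section to raise the advertised modes; the following is a sketch, not a drop-in file (the mode list is an assumption, and the guest's emulated video RAM caps what will actually work):

```
# Minimal /etc/X11/xorg.conf sketch for a KVM/QEMU guest
Section "Screen"
    Identifier "Default Screen"
    SubSection "Display"
        Virtual 1440 900
        Modes "1440x900" "1280x900" "1024x768"
    EndSubSection
EndSection
```

If X fails to start with the new file, removing or renaming it returns the guest to the auto-detected configuration.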


Source: (StackOverflow)

Where can I config service startup options in Ubuntu?

I'm not used to using Ubuntu or Debian as a server. I'm more accustomed to Red Hat/Fedora ways and even Gentoo (yikes).

Under Red Hat installs, you can often configure most services that start from init using config files in /etc/sysconfig named after the service. Is there an equivalent under Ubuntu?

Specifically I'm trying to control how the libvirtd and kvm processes are started as far as command line options go. I need to add the --listen option somewhere.
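On Debian/Ubuntu the closest equivalent of /etc/sysconfig is /etc/default; for libvirt the file is typically /etc/default/libvirt-bin. The option string below is a hedged example of how --listen is usually enabled (-d daemonizes, -l is the short form of --listen), not a verified value for every release:

```shell
# /etc/default/libvirt-bin
# Options passed to libvirtd at startup; -l adds --listen:
libvirtd_opts="-d -l"
```

Listening over TCP also requires the listen_* settings in /etc/libvirt/libvirtd.conf, so check that file as well.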


Source: (StackOverflow)

Virtualbox, VMware, KVM or other for Ubuntu virtualization?

Is any one of these better than the others? What are the differences in practical terms for home use?


Source: (StackOverflow)

Prevent Radeon driver from attaching to specific PCI devices?

I have two Radeon cards in this machine, a Radeon HD 6570 and a Radeon HD 6950:

lspci | grep VGA

01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Turks [Radeon HD 6570]
02:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Cayman PRO [Radeon HD 6950]

I'm trying to get VGA passthrough to work with KVM on Debian 7 (Wheezy), passing through the 6950 as a secondary video card to a Windows 7 guest. This works fine if I blacklist the radeon kernel module via /etc/modprobe.d/.

If I remove the blacklist to run X11 (or even just a KMS console) on the 6570 the radeon module seems to attach to both cards:

dmesg | egrep "01:00.0|02:00.0|radeon"

pci 0000:01:00.0: [1002:6759] type 0 class 0x000300
pci 0000:01:00.0: reg 10: [mem 0xe0000000-0xefffffff 64bit pref]
pci 0000:01:00.0: reg 18: [mem 0xf7e20000-0xf7e3ffff 64bit]
pci 0000:01:00.0: reg 20: [io 0xe000-0xe0ff]
pci 0000:01:00.0: reg 30: [mem 0xf7e00000-0xf7e1ffff pref]
pci 0000:01:00.0: supports D1 D2
pci 0000:02:00.0: [1002:6719] type 0 class 0x000300
pci 0000:02:00.0: reg 10: [mem 0xd0000000-0xdfffffff 64bit pref]
pci 0000:02:00.0: reg 18: [mem 0xf7d20000-0xf7d3ffff 64bit]
pci 0000:02:00.0: reg 20: [io 0xd000-0xd0ff]
pci 0000:02:00.0: reg 30: [mem 0xf7d00000-0xf7d1ffff pref]
pci 0000:02:00.0: supports D1 D2
vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
vgaarb: device added: PCI:0000:02:00.0,decodes=io+mem,owns=none,locks=none
vgaarb: bridge control possible 0000:02:00.0
vgaarb: bridge control possible 0000:01:00.0
pci 0000:01:00.0: Boot video device
[drm] radeon kernel modesetting enabled.
radeon 0000:01:00.0: setting latency timer to 64
radeon 0000:01:00.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
radeon 0000:01:00.0: GTT: 512M 0x0000000040000000 - 0x000000005FFFFFFF
[drm] radeon: 1024M of VRAM memory ready
[drm] radeon: 512M of GTT memory ready.
radeon 0000:01:00.0: irq 46 for MSI/MSI-X
radeon 0000:01:00.0: radeon: using MSI.
[drm] radeon: irq initialized.
radeon 0000:01:00.0: WB enabled
[drm] radeon: ib pool ready.
[drm] radeon: power management initialized
fbcon: radeondrmfb (fb0) is primary device
fb0: radeondrmfb frame buffer device
[drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0
radeon 0000:02:00.0: enabling device (0000 -> 0003)
radeon 0000:02:00.0: setting latency timer to 64
radeon 0000:02:00.0: VRAM: 2048M 0x0000000000000000 - 0x000000007FFFFFFF (2048M used)
radeon 0000:02:00.0: GTT: 512M 0x0000000080000000 - 0x000000009FFFFFFF
[drm] radeon: 2048M of VRAM memory ready
[drm] radeon: 512M of GTT memory ready.
radeon 0000:02:00.0: irq 49 for MSI/MSI-X
radeon 0000:02:00.0: radeon: using MSI.
[drm] radeon: irq initialized.
radeon 0000:02:00.0: WB enabled
[drm] radeon: ib pool ready.
[drm] radeon: power management initialized
fb1: radeondrmfb frame buffer device
[drm] Initialized radeon 2.12.0 20080528 for 0000:02:00.0 on minor 1
[drm] radeon: finishing device.
radeon 0000:02:00.0: ffff88041a941800 unpin not necessary
[drm] radeon: ttm finalized
pci-stub 0000:02:00.0: claimed by stub
pci-stub 0000:02:00.0: irq 49 for MSI/MSI-X

This causes the Windows 7 VM to bluescreen on boot.

How can I configure things so that module radeon only attaches to the 6570 and not the 6950?
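One approach that avoids blacklisting radeon entirely, sketched from the device IDs in the dmesg output above (the 6950 shows as [1002:6719], the 6570 as [1002:6759]): have pci-stub claim the 6950's ID at boot, before the radeon module loads. With pci-stub built into the kernel this can go on the kernel command line; if pci-stub is a module, it also needs to load before radeon:

```shell
# /etc/default/grub -- pci-stub claims the 6950 (1002:6719) before the radeon
# driver can; the 6570 (1002:6759) is left alone. Run update-grub afterwards.
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci-stub.ids=1002:6719"
```

Since the two cards have different device IDs, matching by ID is enough here; identical cards would need a different mechanism.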


Source: (StackOverflow)

How to exit a "virsh console" connection?

Are there any special characters involved? I want to be able to open a console connection in my application and exit upon completion of a task.
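For reference, the escape sequence virsh prints on connect is ^] (Ctrl+]), i.e. the single byte 0x1d, the same GS character telnet uses. From a program, writing that byte to the console's stdin should disconnect; deriving it:

```shell
# The ^] escape is ASCII 0x1d (GS). Derive the byte and show its hex value:
ESCAPE_BYTE=$(printf '\035' | od -An -tx1 | tr -d ' \n')
echo "escape byte: 0x$ESCAPE_BYTE"
```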


Source: (StackOverflow)

libvirt/9p/kvm mount in fstab fails to mount at boot time

I am trying to mount a shared folder using qemu-kvm/9p, and it fails if I add it to the fstab file. I get an error at boot that the device cannot be mounted, yet if I run "mount -a" after startup the device mounts fine.

fstab line:

src_mnt /src 9p trans=virtio 0 0

From dmesg I can see:

[    7.606258] 9p: Could not find request transport: virtio

And a few lines later I see the "virtio-pci" entries. I'm not clear on how I would defer mounting until that device is available however.
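A hedged guess consistent with that dmesg line: the mount runs before 9pnet_virtio has registered the virtio transport. Forcing the modules to load early, e.g. via /etc/modules (or the initramfs), is one way to sequence it:

```
# /etc/modules -- load the 9p transport before local filesystems are mounted
9p
9pnet
9pnet_virtio
```

If the distribution supports it, adding _netdev or nofail to the fstab options is another way to keep a failed 9p mount from interrupting boot while debugging.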


Source: (StackOverflow)

Change CD-ROM via virsh

I have a KVM virtual machine that is managed via libvirt. Now I want to use a different ISO image inside the VM.

How do I change the DVD in the virtual drive using virsh?
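A sketch of the likely command, with placeholder names (find the actual target device with virsh domblklist, and note that change-media requires a reasonably recent virsh):

```shell
# Placeholders: the domain name, target device and ISO path are assumptions.
DOMAIN=myguest
TARGET=hdc
ISO=/var/lib/libvirt/images/other.iso
CMD="virsh change-media $DOMAIN $TARGET $ISO --update"
echo "$CMD"    # run this against a live domain; add --live/--config as needed
```

On older libvirt without change-media, editing the <disk device='cdrom'> source in virsh edit and re-attaching is the fallback.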


Source: (StackOverflow)

Linux windowed GPU passthrough

I've read that GPU passthrough on linux (ubuntu/mint) is possible with the correct types of hardware. I'm looking for a specific use case of passthrough and I'm wondering whether technology has advanced enough to allow for it to happen.

I have a Linux Mint host, and want a Windows 8/10 guest. My CPU/motherboard support VT-d (i7-5820k, Asus X99-A), and the GPUs are a pair of GTX 970s. I want to:

  1. Set up the guest so that it runs within a window on the host, thus allowing me to use something like a unity mode
  2. Pass one of the GPUs through to the guest
  3. When I shut down the guest VM, have the passed-through GPU return to the host so I can use the pair of GPUs for compute/CUDA-heavy tasks

There are times where I'd like to game (hence the passthrough), but when I'm actually doing work I often need access to the CUDA cores on both GPUs. A lot of the old threads I've read about this suggest that one card completely disappears from the host; is there a way to bring it back into action without a reboot?

Normally you'd need 2 monitors for this type of thing, plugging each into a separate GPU. But is it possible to use the second GPU to render a windowed VM within the host, instead of to a 2nd monitor?

Regarding windowed mode, I did see this on the virtualbox site, but I'm not sure if the VM is still windowed in this case: https://www.virtualbox.org/manual/ch09.html#pcipassthrough

I've searched for this and have come up short, but having said that, most of the search results are quite a few years old so it doesn't speak to any advancements in technology since then. The only thing I've found is a video on youtube that suggests it might be possible as it looks like a passed through GPU on a VM running in windowed mode: https://www.youtube.com/watch?v=XY1zDgCxARw


Source: (StackOverflow)

ubuntu libvirt serial console requires ttyS0 restart to connect?

I'm attempting to configure serial access from my libvirt host to one of its guests.

I've configured the device on the guest and started it:

jsharpe@sel-app1:~$ cat /etc/init/ttyS0.conf 
# ttyS0 - getty
#
# This service maintains a getty on ttyS0 from the point the system is
# started until it is shut down again.

start on stopped rc or RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -8 9600 ttyS0

jsharpe@sel-app1:~$ sudo restart ttyS0
ttyS0 start/running, process 767

jsharpe@sel-app1:~$ ps aux|grep ttyS0
root       767  0.2  0.0   6080   632 ttyS0    Ss+  17:20   0:00 /sbin/getty -8 9600 ttyS0
jsharpe    769  0.0  0.0   7624   904 pts/0    S+   17:20   0:00 grep --color=auto ttyS0

On the Host, I try to connect with virsh:

jsharpe@twoface:~ $ virsh console sel-app1
Connected to domain sel-app1
Escape character is ^]

... at this point, the host just hangs. I can kill it with ^], but other keystrokes don't show up in the terminal.

Now, back over to guest, let's restart ttyS0:

jsharpe@sel-app1:~$ sudo restart ttyS0
ttyS0 start/running, process 772
jsharpe@sel-app1:~$ ps aux|grep ttyS0
root       772  1.0  0.0   6076   560 ttyS0    Ss+  17:23   0:00 /sbin/getty -8 9600 ttyS0
jsharpe    774  0.0  0.0   7624   904 pts/0    S+   17:23   0:00 grep --color=auto ttyS0

Great, back to the host:

jsharpe@twoface:~ $ virsh console sel-app1
Connected to domain sel-app1
Escape character is ^]

Ubuntu 10.04.3 LTS sel-app1 ttyS0

sel-app1 login: 

A login prompt? So I have to restart ttyS0 after a connection has been attempted? wtf. Note that this isn't a timeout issue: the virsh console command will hang indefinitely. It isn't until restarting ttyS0 that the connection happens.
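For comparison, a guest normally needs both a serial device and a console wired to it in its libvirt XML (virsh edit sel-app1); this is the stock fragment, shown as a sketch in case the domain definition only has one of the two:

```
<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
</console>
```

With only a console element and no serial element, getty and virsh can end up on different ptys, which would match the "works only after restarting getty" symptom.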


Source: (StackOverflow)

QEMU: How to find bottleneck in my virtual machine?

I am running an Ubuntu virtual machine on a 12.04 Ubuntu server. Unfortunately the machine is extremely slow (htop alone consumes 15% CPU), even though it has 8 cores, etc.

  • Do you know a way to find the virtual machine's (or host machine's) bottleneck? Memory, network bandwidth, disk reads, etc.?

  • Do you see any mistake in my configuration?

I use qemu version 1.0.

My configuration file:

<domain type='qemu'>
<name>vm</name>
<memory>10485760</memory>
<currentMemory>8388608</currentMemory>
<vcpu>8</vcpu>
<os>
  <type arch='x86_64' machine='pc-1.0'>hvm</type>
  <boot dev='hd'/>
  <bootmenu enable='no'/>
</os>
  <features>
    <acpi/>
    <apic/>
  </features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/vm//drives/root.qcow2'/>
    <target dev='vda' bus='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
  </disk>

  <controller type='ide' index='0'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
  </controller>

<interface type='bridge'>
  <mac address='a0:a0:a0:a0'/>
  <source bridge='virbr0'/>
  <model type='rtl8139'/>
  <bandwidth>
  </bandwidth>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
  <input type='tablet' bus='usb'/>
  <input type='mouse' bus='ps2'/>
  <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
  <video>
    <model type='cirrus' vram='9216' heads='1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
  </video>
  <memballoon model='virtio'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </memballoon>
  </devices>
</domain>
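One detail in the XML above that may matter more than any tuning, offered as a strong suspicion rather than a certainty: type='qemu' on the domain element selects pure software emulation (TCG), while hardware-accelerated virtualization needs type='kvm'. A guest that feels sluggish with high CPU use on trivial workloads is the classic symptom. The change is a one-liner in virsh edit:

```
<!-- first line of the domain XML: 'qemu' means pure emulation (TCG);
     'kvm' enables hardware acceleration -->
<domain type='kvm'>
  ...
</domain>
```

Separately, the rtl8139 NIC model is emulated; switching the interface model to virtio (as the disk already is) usually helps network throughput.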

Source: (StackOverflow)

Cannot create a virtual machine on Fedora: Activation of org.freedesktop.machine1 timed out

I am trying to create a new VM within Fedora using virt-manager. However, I get the following error:

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/create.py", line 1854, in do_install
    guest.start_install(meter=meter)
  File "/usr/share/virt-manager/virtinst/guest.py", line 411, in start_install
    noboot)
  File "/usr/share/virt-manager/virtinst/guest.py", line 475, in _create_guest
    dom = self.conn.createLinux(start_xml or final_xml, 0)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3440, in createLinux
    if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
libvirtError: error from service: CreateMachine: Activation of org.freedesktop.machine1 timed out

How do I fix this so that I can create virtual machines?


Source: (StackOverflow)