Sunday, May 8, 2016


  • Updated 25.07.2018: Qemu, libvirtd, etc. built from source; works on Ubuntu 16.04 and 18.04 and should work on later releases
  • Updated 02.03.2018 with Ubuntu 18.04 compatibility
  • Fixed Microsoft Hv in QEMU > 2.5. Thanks to @http_error_418 for pointing this out

KVM - [Ubuntu man] & Virtualization With KVM On Ubuntu (16|18).04 LTS

Before you install!
  • If your CPU is Intel, you need to activate VT-x in the BIOS
    • (the last letters can vary; you can activate TxT and other features too, but VT-* is essential)

sudo apt-get install build-essential gcc pkg-config glib-2.0 libglib2.0-dev libsdl1.2-dev libaio-dev libcap-dev libattr1-dev libpixman-1-dev
sudo apt-get build-dep qemu
sudo apt-get install lvm2 ubuntu-virt-server python-vm-builder qemu-kvm qemu-system libvirt-bin ubuntu-vm-builder kvm-ipxe bridge-utils
sudo apt-get install virtinst python-libvirt virt-viewer virt-manager # Virtual Machine Manager

sudo kvm-ok

Correct output:

INFO: /dev/kvm exists
KVM acceleration can be used
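kvm-ok is a small wrapper script; the core of what it checks can be sketched by hand (this sketch only tests for the device node, while kvm-ok also inspects CPU flags and BIOS settings):

```shell
# Minimal version of the main kvm-ok check: does the KVM device node exist?
if [ -e /dev/kvm ]; then
    echo "INFO: /dev/kvm exists"
else
    echo "INFO: /dev/kvm does not exist"
fi
```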

sudo reboot

Installing KVM And vmbuilder

First check if your CPU supports hardware virtualization - if this is the case, the command

  • egrep '(vmx|svm)' --color=always /proc/cpuinfo

  • should display something, e.g. like this:

root@server1:~# egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good nopl extd_apicid
pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch lbrv
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow rep_good nopl extd_apicid
pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch lbrv
  • If nothing is displayed, then your processor doesn’t support hardware virtualization, and you must stop here.
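The check above can also be scripted, e.g. for provisioning scripts; a minimal sketch (vmx = Intel VT-x, svm = AMD-V):

```shell
# Report whether the CPU advertises hardware virtualization support.
if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
    echo "hardware virtualization supported"
else
    echo "hardware virtualization NOT supported"
fi
```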

  • We must add the user we’re currently logged in as (root) to the libvirtd and kvm groups:

    • adduser $(id -un) libvirtd
    • adduser $(id -un) kvm
  • You need to log out and log back in for the new group memberships to take effect.
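`id -un` expands to the current username, so the same commands work for any login; after logging back in, you can verify the memberships, e.g.:

```shell
# Print the group memberships of the currently logged-in user.
# After the adduser commands above (and a re-login), libvirtd and kvm should be listed.
groups "$(id -un)"
```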

  • To check if KVM has successfully been installed, run

    • virsh -c qemu:///system list
    • virsh list
  • It should display something like this:

    root@server1:~# virsh -c qemu:///system list
     Id Name                 State
  • Before we start our first virtual machine, I recommend rebooting the system:

    • reboot
  • If you don’t do this, you might get an error like open /dev/kvm: Permission denied in the virtual machine logs in the /var/log/libvirt/qemu/ directory.

Creating An Image-Based VM
  • We can now create our first VM - an image-based VM (if you expect lots of traffic and many read- and write operations for that VM, use an LVM-based VM instead as shown in chapter 6 - image-based VMs are heavy on hard disk IO).

  • I want to create my virtual machines in the directory /var/lib/libvirt/images/ (they cannot be created in the /root directory because the libvirt-qemu user doesn’t have read permissions in that directory).

  • We will create a new directory for each VM that we want to create, e.g. /var/lib/libvirt/images/vm1, /var/lib/libvirt/images/vm2, /var/lib/libvirt/images/vm3, and so on, because each VM will have a subdirectory called ubuntu-kvm, and obviously there can be just one such directory in /var/lib/libvirt/images/vm1, for example. If you try to create a second VM in /var/lib/libvirt/images/vm1, for example, you will get an error message saying ubuntu-kvm already exists (unless you run vmbuilder with the –dest=DESTDIR argument):

root@server1:/var/lib/libvirt/images/vm1# vmbuilder kvm ubuntu -c vm2.cfg
2009-05-07 16:32:44,185 INFO     Cleaning up
ubuntu-kvm already exists
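The per-VM directory layout can be sketched as follows (using a scratch directory as a stand-in for /var/lib/libvirt/images, which needs root):

```shell
# One directory per VM, so each vmbuilder run gets its own ubuntu-kvm/ subdirectory.
base=$(mktemp -d)    # stand-in for /var/lib/libvirt/images
for vm in vm1 vm2 vm3; do
    mkdir -p "$base/$vm"
done
ls "$base"
```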
  • We will use the vmbuilder tool to create VMs. (You can learn more about vmbuilder here.) vmbuilder uses a template to create virtual machines - this template is located in the /etc/vmbuilder/libvirt/ directory. First we create a copy:
mkdir -p /var/lib/libvirt/images/vm1/mytemplates/libvirt
cp /etc/vmbuilder/libvirt/* /var/lib/libvirt/images/vm1/mytemplates/libvirt/
  • Now we come to the partitioning of our VM. We create a file called vmbuilder.partition…

    • vim /var/lib/libvirt/images/vm1/vmbuilder.partition

… and define the desired partitions as follows:

    root 8000
    swap 4000
    ---
    /var 20000
  • This defines:

    • a root partition (/) with a size of 8000MB,
    • a swap partition of 4000MB,
    • a /var partition of 20000MB.
  • The --- line causes the following partition (/var in this example) to be placed on a separate disk image (i.e., this would create two disk images, one for root and swap and one for /var). Of course, you are free to define whatever partitions you like (as long as you also define root and swap), and of course they can be in just one disk image - this is just an example.

I want to install openssh-server in the VM. To make sure that each VM gets a unique OpenSSH key, we cannot install openssh-server when we create the VM. Therefore we create a firstboot script that will be executed when the VM boots for the first time. It will install openssh-server (with a unique key) and also force the user (I will use the default username administrator for my VMs together with the default password howtoforge) to change the password on first login:

  • vim /var/lib/libvirt/images/vm1/
# This script will run the first time the virtual machine boots
# It is run as root.

# Expire the user account
passwd -e administrator

# Install openssh-server
apt-get update
apt-get install -qqy --force-yes openssh-server

Make sure you replace the username administrator with your default login name.

Now take a look at the available options:

  • vmbuilder kvm ubuntu --help

To create our first VM, vm1, we go to the VM directory…

  • cd /var/lib/libvirt/images/vm1/

… and run vmbuilder, e.g. as follows:

vmbuilder kvm ubuntu --suite=precise --flavour=virtual --arch=amd64 --mirror= -o --libvirt=qemu:///system --ip= --gw= --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=howtoforge --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/var/lib/libvirt/images/vm1/ --mem=256 --hostname=vm1 --bridge=br0

sudo virt-install --connect qemu:///system -n PruebaVM -r 256 --os-type linux --os-variant ubuntukarmic --hvm --cdrom /u/isos/ubuntu-9.10-server-amd64.iso --network network:default --disk path=/u/vms/PruebaVM.img,size=5 --vnc --noautoconsole

  • Most of the options are self-explanatory. --part specifies the file with the partitioning details, relative to our working directory (that’s why we had to change to our VM directory before running vmbuilder), --templates specifies the directory that holds the template file (again relative to our working directory), and --firstboot specifies the firstboot script. --libvirt=qemu:///system tells KVM to add this VM to the list of available virtual machines. --addpkg allows you to specify Ubuntu packages that you want installed during the VM creation (see above why you shouldn’t add openssh-server to that list and should use the firstboot script instead). --bridge sets up a bridged network; as we created the bridge br0 in chapter 2, we specify that bridge here.

  • In --mirror you can specify an official Ubuntu repository. If you leave out --mirror, the default Ubuntu repository will be used.

  • If you specify an IP address with the --ip switch, make sure that you also specify the correct gateway IP with the --gw switch (otherwise vmbuilder will assume it is the first valid address in the network, which might not be correct). Usually the gateway IP is the same one you use in /etc/network/interfaces
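One way to look up that gateway is to grep it out of /etc/network/interfaces. The sketch below runs against a sample file; the bridge name and address values are made-up assumptions:

```shell
# Write a sample Debian-style interfaces file (stand-in for /etc/network/interfaces).
cat > /tmp/interfaces.sample <<'EOF'
auto br0
iface br0 inet static
    gateway
EOF

# Extract the gateway value - this is what vmbuilder's --gw should be set to.
awk '/gateway/ {print $2}' /tmp/interfaces.sample
```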

The build process can take a few minutes.

Afterwards, you can find an XML configuration file for the VM in /etc/libvirt/qemu/ (=> /etc/libvirt/qemu/vm1.xml):

ls -l /etc/libvirt/qemu/

root@server1:/var/lib/libvirt/images/vm1# ls -l /etc/libvirt/qemu/
total 8
drwxr-xr-x 3 root root 4096 May 21 13:00 networks
-rw------- 1 root root 2082 May 21 13:15 vm1.xml

The disk images are located in the ubuntu-kvm/ subdirectory of our VM directory:

ls -l /var/lib/libvirt/images/vm1/ubuntu-kvm/

root@server1:/var/lib/libvirt/images/vm1# ls -l /var/lib/libvirt/images/vm1/ubuntu-kvm/
total 604312
-rw-r--r-- 1 root root 324337664 May 21 13:14 tmpE4IiRv.qcow2
-rw-r--r-- 1 root root 294715392 May 21 13:15 tmpxvSVOT.qcow2

Creating A Second VM

If you want to create a second VM (vm2), here’s a short summary of the commands:

mkdir -p /var/lib/libvirt/images/vm2/mytemplates/libvirt
cp /etc/vmbuilder/libvirt/* /var/lib/libvirt/images/vm2/mytemplates/libvirt/

vi /var/lib/libvirt/images/vm2/vmbuilder.partition

vi /var/lib/libvirt/images/vm2/

cd /var/lib/libvirt/images/vm2/
vmbuilder kvm ubuntu --suite=precise --flavour=virtual --arch=amd64 --mirror= -o --libvirt=qemu:///system --ip= --gw= --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=howtoforge --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/var/lib/libvirt/images/vm2/ --mem=256 --hostname=vm2 --bridge=br0

  • (Please note that you don’t have to create a new directory for the VM (/var/lib/libvirt/images/vm2) if you pass the -d DESTDIR argument to the vmbuilder command - it allows you to create a VM in a directory where you’ve already created another VM. In that case you don’t have to create a new vmbuilder.partition file and firstboot script and don’t have to modify the template; you can simply use the existing files:)
cd /var/lib/libvirt/images/vm1/
vmbuilder kvm ubuntu --suite=precise --flavour=virtual --arch=amd64 --mirror= -o --libvirt=qemu:///system --ip= --gw= --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=howtoforge --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/var/lib/libvirt/images/vm1/ --mem=256 --hostname=vm2 --bridge=br0 -d vm2-kvm
Managing A VM
  • VMs can be managed through virsh, the “virtual shell”. To connect to the virtual shell, run

    • virsh --connect qemu:///system
virsh # help
Grouped commands:

 Domain Management (help keyword 'domain'):
    attach-device                  attach device from an XML file
    attach-disk                    attach disk device
    attach-interface               attach network interface
    autostart                      autostart a domain
    blkdeviotune                   Set or query a block device I/O tuning parameters.
    blkiotune                      Get or set blkio parameters
    blockpull                      Populate a disk from its backing image.
    blockjob                       Manage active block operations.
    blockresize                    Resize block device of domain.
    console                        connect to the guest console
    cpu-baseline                   compute baseline CPU
    cpu-compare                    compare host CPU with a CPU described by an XML file
    create                         create a domain from an XML file
    define                         define (but don't start) a domain from an XML file
    destroy                        destroy (stop) a domain
    detach-device                  detach device from an XML file
    detach-disk                    detach disk device
    detach-interface               detach network interface
    domid                          convert a domain name or UUID to domain id
    domif-setlink                  set link state of a virtual interface
    domjobabort                    abort active domain job
    domjobinfo                     domain job information
    domname                        convert a domain id or UUID to domain name
    domuuid                        convert a domain name or id to domain UUID
    domxml-from-native             Convert native config to domain XML
    domxml-to-native               Convert domain XML to native config
    dump                           dump the core of a domain to a file for analysis
    dumpxml                        domain information in XML
    edit                           edit XML configuration for a domain
    inject-nmi                     Inject NMI to the guest
    send-key                       Send keycodes to the guest
    managedsave                    managed save of a domain state
    managedsave-remove             Remove managed save of a domain
    maxvcpus                       connection vcpu maximum
    memtune                        Get or set memory parameters
    migrate                        migrate domain to another host
    migrate-setmaxdowntime         set maximum tolerable downtime
    migrate-setspeed               Set the maximum migration bandwidth
    migrate-getspeed               Get the maximum migration bandwidth
    reboot                         reboot a domain
    reset                          reset a domain
    restore                        restore a domain from a saved state in a file
    resume                         resume a domain
    save                           save a domain state to a file
    save-image-define              redefine the XML for a domain's saved state file
    save-image-dumpxml             saved state domain information in XML
    save-image-edit                edit XML for a domain's saved state file
    schedinfo                      show/set scheduler parameters
    screenshot                     take a screenshot of a current domain console and store it into a file
    setmaxmem                      change maximum memory limit
    setmem                         change memory allocation
    setvcpus                       change number of virtual CPUs
    shutdown                       gracefully shutdown a domain
    start                          start a (previously defined) inactive domain
    suspend                        suspend a domain
    ttyconsole                     tty console
    undefine                       undefine a domain
    update-device                  update device from an XML file
    vcpucount                      domain vcpu counts
    vcpuinfo                       detailed domain vcpu information
    vcpupin                        control or query domain vcpu affinity
    version                        show version
    vncdisplay                     vnc display

 Domain Monitoring (help keyword 'monitor'):
    domblkinfo                     domain block device size information
    domblklist                     list all domain blocks
    domblkstat                     get device block stats for a domain
    domcontrol                     domain control interface state
    domif-getlink                  get link state of a virtual interface
    domifstat                      get network interface stats for a domain
    dominfo                        domain information
    dommemstat                     get memory statistics for a domain
    domstate                       domain state
    list                           list domains

 Host and Hypervisor (help keyword 'host'):
    capabilities                   capabilities
    connect                        (re)connect to hypervisor
    freecell                       NUMA free memory
    hostname                       print the hypervisor hostname
    nodecpustats                   Prints cpu stats of the node.
    nodeinfo                       node information
    nodememstats                   Prints memory stats of the node.
    nodesuspend                    suspend the host node for a given time duration
    qemu-attach                    QEMU Attach
    qemu-monitor-command           QEMU Monitor Command
    sysinfo                        print the hypervisor sysinfo
    uri                            print the hypervisor canonical URI

 Interface (help keyword 'interface'):
    iface-begin                    create a snapshot of current interfaces settings, which can be later commited (iface-commit) or restored (iface-rollback)
    iface-bridge                   create a bridge device and attach an existing network device to it
    iface-commit                   commit changes made since iface-begin and free restore point
    iface-define                   define (but don't start) a physical host interface from an XML file
    iface-destroy                  destroy a physical host interface (disable it / "if-down")
    iface-dumpxml                  interface information in XML
    iface-edit                     edit XML configuration for a physical host interface
    iface-list                     list physical host interfaces
    iface-mac                      convert an interface name to interface MAC address
    iface-name                     convert an interface MAC address to interface name
    iface-rollback                 rollback to previous saved configuration created via iface-begin
    iface-start                    start a physical host interface (enable it / "if-up")
    iface-unbridge                 undefine a bridge device after detaching its slave device
    iface-undefine                 undefine a physical host interface (remove it from configuration)

 Network Filter (help keyword 'filter'):
    nwfilter-define                define or update a network filter from an XML file
    nwfilter-dumpxml               network filter information in XML
    nwfilter-edit                  edit XML configuration for a network filter
    nwfilter-list                  list network filters
    nwfilter-undefine              undefine a network filter

 Networking (help keyword 'network'):
    net-autostart                  autostart a network
    net-create                     create a network from an XML file
    net-define                     define (but don't start) a network from an XML file
    net-destroy                    destroy (stop) a network
    net-dumpxml                    network information in XML
    net-edit                       edit XML configuration for a network
    net-info                       network information
    net-list                       list networks
    net-name                       convert a network UUID to network name
    net-start                      start a (previously defined) inactive network
    net-undefine                   undefine an inactive network
    net-uuid                       convert a network name to network UUID

 Node Device (help keyword 'nodedev'):
    nodedev-create                 create a device defined by an XML file on the node
    nodedev-destroy                destroy (stop) a device on the node
    nodedev-dettach                dettach node device from its device driver
    nodedev-dumpxml                node device details in XML
    nodedev-list                   enumerate devices on this host
    nodedev-reattach               reattach node device to its device driver
    nodedev-reset                  reset node device

 Secret (help keyword 'secret'):
    secret-define                  define or modify a secret from an XML file
    secret-dumpxml                 secret attributes in XML
    secret-get-value               Output a secret value
    secret-list                    list secrets
    secret-set-value               set a secret value
    secret-undefine                undefine a secret

 Snapshot (help keyword 'snapshot'):
    snapshot-create                Create a snapshot from XML
    snapshot-create-as             Create a snapshot from a set of args
    snapshot-current               Get or set the current snapshot
    snapshot-delete                Delete a domain snapshot
    snapshot-dumpxml               Dump XML for a domain snapshot
    snapshot-edit                  edit XML for a snapshot
    snapshot-list                  List snapshots for a domain
    snapshot-parent                Get the name of the parent of a snapshot
    snapshot-revert                Revert a domain to a snapshot

 Storage Pool (help keyword 'pool'):
    find-storage-pool-sources-as   find potential storage pool sources
    find-storage-pool-sources      discover potential storage pool sources
    pool-autostart                 autostart a pool
    pool-build                     build a pool
    pool-create-as                 create a pool from a set of args
    pool-create                    create a pool from an XML file
    pool-define-as                 define a pool from a set of args
    pool-define                    define (but don't start) a pool from an XML file
    pool-delete                    delete a pool
    pool-destroy                   destroy (stop) a pool
    pool-dumpxml                   pool information in XML
    pool-edit                      edit XML configuration for a storage pool
    pool-info                      storage pool information
    pool-list                      list pools
    pool-name                      convert a pool UUID to pool name
    pool-refresh                   refresh a pool
    pool-start                     start a (previously defined) inactive pool
    pool-undefine                  undefine an inactive pool
    pool-uuid                      convert a pool name to pool UUID

 Storage Volume (help keyword 'volume'):
    vol-clone                      clone a volume.
    vol-create-as                  create a volume from a set of args
    vol-create                     create a vol from an XML file
    vol-create-from                create a vol, using another volume as input
    vol-delete                     delete a vol
    vol-download                   Download a volume to a file
    vol-dumpxml                    vol information in XML
    vol-info                       storage vol information
    vol-key                        returns the volume key for a given volume name or path
    vol-list                       list vols
    vol-name                       returns the volume name for a given volume key or path
    vol-path                       returns the volume path for a given volume name or key
    vol-pool                       returns the storage pool for a given volume key or path
    vol-upload                     upload a file into a volume
    vol-wipe                       wipe a vol

 Virsh itself (help keyword 'virsh'):
    cd                             change the current directory
    echo                           echo arguments
    exit                           quit this interactive terminal
    help                           print help
    pwd                            print the current directory
    quit                           quit this interactive terminal

  • The list --all command shows all VMs, running and inactive:
virsh # list --all
 Id Name                 State
  - vm1                  shut off
  - vm2                  shut off
  • Before you start a new VM for the first time, you must define it from its xml file (located in the /etc/libvirt/qemu/ directory):

    • define /etc/libvirt/qemu/vm1.xml
  • Please note that whenever you modify the VM’s xml file in /etc/libvirt/qemu/, you must run the define command again!

  • Now you can start the VM:

    • start vm1
  • After a few moments, you should be able to connect to the VM with an SSH client such as PuTTY; log in with the default username and password. After the first login you will be prompted to change the password.

  • The list command should now show the VM as running:

virsh # list
 Id Name                 State
  1 vm1                  running
  • To stop a VM, run

    • shutdown vm1
  • To immediately stop it (i.e., pull the power plug), run

    • destroy vm1
  • Suspend a VM:

    • suspend vm1
  • Resume a VM:

    • resume vm1
  • To leave the virtual shell:

    • quit
Creating An LVM-Based VM
  • LVM-based VMs have some advantages over image-based VMs. They are not as heavy on hard disk IO, and they are easier to back up (using LVM snapshots).

  • To use LVM-based VMs, you need a volume group that has some free space that is not allocated to any logical volume. In this example, I use the volume group /dev/vg0 with a size of approx. 465GB…

  • vgdisplay

root@server1:~# vgdisplay
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               465.29 GiB
  PE Size               4.00 MiB
  Total PE              119115
  Alloc PE / Size       24079 / 94.06 GiB
  Free  PE / Size       95036 / 371.23 GiB
  VG UUID               PRenhH-0MvN-wXCL-nl4i-IfsQ-J6fc-2raYLD

… that contains the logical volumes /dev/vg0/root with a size of approx. 100GB and /dev/vg0/swap_1 with a size of 1GB - the rest is not allocated and can be used for VMs:

  • lvdisplay
root@server1:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg0/root
  VG Name                vg0
  LV UUID                dwnORf-yG3U-x1ZC-Bet1-TOoc-q1Dd-KZnbtw
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                93.13 GiB
  Current LE             23841
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Name                /dev/vg0/swap_1
  VG Name                vg0
  LV UUID                ZdPKO6-sZrr-tIRb-PPcl-aWBj-QAUU-fnYUuP
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                952.00 MiB
  Current LE             238
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
  • I will now create the virtual machine vm5 as an LVM-based VM. We can use the vmbuilder command again. vmbuilder knows the --raw option, which allows writing the VM directly to a block device (e.g. /dev/vg0/vm5) - I’ve tried this, and it gave back no errors; however, I was not able to boot the VM (start vm5 didn’t show any errors either, but I’ve never been able to access the VM). Therefore, I will create vm5 as an image-based VM first and then convert it into an LVM-based VM.
mkdir -p /var/lib/libvirt/images/vm5/mytemplates/libvirt
cp /etc/vmbuilder/libvirt/* /var/lib/libvirt/images/vm5/mytemplates/libvirt/
  • Make sure that you create all partitions in just one image file, so don’t use --- in the vmbuilder.partition file:

  • vim /var/lib/libvirt/images/vm5/vmbuilder.partition

root 8000
swap 2000
/var 10000
  • vim /var/lib/libvirt/images/vm5/
# This script will run the first time the virtual machine boots
# It is run as root.

# Expire the user account
passwd -e administrator

# Install openssh-server
apt-get update
apt-get install -qqy --force-yes openssh-server
cd /var/lib/libvirt/images/vm5/
vmbuilder kvm ubuntu --suite=precise --flavour=virtual --arch=amd64 --mirror= -o --libvirt=qemu:///system --ip= --gw= --part=vmbuilder.partition --templates=mytemplates --user=administrator --name=Administrator --pass=howtoforge --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid --firstboot=/var/lib/libvirt/images/vm5/ --mem=256 --hostname=vm5 --bridge=br0
  • As you can see from the vmbuilder.partition file, the VM will use a maximum of 20GB, so we create a logical volume called /dev/vg0/vm5 with a size of 20GB now:

  • lvcreate -L20G -n vm5 vg0

Don’t create a file system in the new logical volume!

  • We will use the qemu-img command to convert the image to an LVM-based VM.

  • Now we go to the VM’s ubuntu-kvm/ directory…

    • cd /var/lib/libvirt/images/vm5/ubuntu-kvm/

… and find out how our image is named:

  • ls -l
root@server1:/var/lib/libvirt/images/vm5/ubuntu-kvm# ls -l
total 592140
-rw-r--r-- 1 root root 606470144 May 21 14:06 tmpesHsUI.qcow2
  • Now that we know the name of our image (tmpesHsUI.qcow2), we can convert it as follows:

    • qemu-img convert tmpesHsUI.qcow2 -O raw /dev/vg0/vm5
  • Afterwards you can delete the disk image:

    • rm -f tmpesHsUI.qcow2
  • Now we must modify the VM’s configuration…

    • virsh edit vm5
  • … and change the following section…

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/vm5/ubuntu-kvm/tmpesHsUI.qcow2'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
  • … so that it looks as follows:
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/dev/vg0/vm5'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
  • You can now use virsh to manage the VM:

    • virsh --connect qemu:///system
  • Because we have modified the VM’s XML file, we must run the define command first…

    • define /etc/libvirt/qemu/vm5.xml
  • … before we start the VM:

    • start vm5
KVM-QEMU, QCOW2, QEMU-IMG and Snapshots: Source1 | Source2

There are several different types of snapshots possible. Some idea on that:

  • Internal snapshot: a type of snapshot where a single QCOW2 file holds both the ‘saved state’ and the ‘delta’ since that saved point. Internal snapshots are very handy because all the snapshot info is captured in a single file, which is easy to copy/move between machines.

  • External snapshot: here, the original qcow2 file is put into a read-only saved state, and a new qcow2 file (generated once the snapshot is created) holds the delta for the changes. So all changes are now written to this delta file. External snapshots are useful for performing backups. Also, an external snapshot creates a qcow2 file with the original file as its backing image, and the backing file can be read in parallel with the running qemu.

  • VM state: this saves the guest/domain state to a file. So if you take a snapshot including VM state, you can then shut off that guest and use the freed-up memory for other purposes on the host or for other guests. Internally this calls the qemu monitor’s ‘savevm’ command. Note that this only takes care of the VM state (not a disk snapshot).

  • Before creating a snapshot, we need to create a snapshot xml file with two simple elements (name and description) if you want a sensible name for the snapshot. Note that only these two fields are user-settable; the rest of the info will be filled in by Libvirt.

$  cat /var/tmp/snap1-f15guest.xml
    <domainsnapshot>
        <name>snapshot_name</name>
        <description>snapshot description</description>
    </domainsnapshot>

  • List of all available snapshots

    • virsh snapshot-list NAME_HERE
  • To revert to a particular snapshot

    • virsh snapshot-revert name_vm snapshotname
  • Taking a snapshot while the ‘guest’ is running live

    • virsh snapshot-create name_vm /var/tmp/snap1-f15guest.xml
  • To take a snapshot of the disk image too

    • qemu-img create -f qcow2 -b centos-cleaninstall.img snapshot.img
  • Dump some basic information about the image file. Includes the size on disk of the file, the name of any backing file referenced and a list of the snapshots available.

    • qemu-img info <imagename>
  • Create a simple QCOW2 image file. The disk space used for this file will be relatively small, but the maximum storage capacity will be max-storage.
    qemu-img create -f qcow2 <imagename> <max-storage>

  • Create a QCOW2 image file named imagename2 which is based on a backing image, imagename1. The new image file will reference the backing file. Any clusters written by the VM will be written to imagename2 so that the backing file remains unchanged. The backing file can be referenced by many future images but must not be changed by any of them. Warning: If any process modifies the backing file the image file(s) will be corrupted.
    qemu-img create -f qcow2 -b <imagename1> <imagename2>

  • List all the snapshots in the specified imagename file.

    • qemu-img snapshot -l <imagename>
  • Create a snapshot and name it snapshot-name. This snapshot is a simple picture of the VM image state at the time the snapshot is created.

    • qemu-img snapshot -c <snapshot-name> <imagename>
  • Apply a snapshot named snapshot-name. This function simply restores the clusters that were saved when the snapshot, snapshot-name, was created. It has the effect of returning the VM image to the state it was in at that time.

    • qemu-img snapshot -a <snapshot-name> <imagename>
  • Delete the snapshot named snapshot-name from the specified image file, imagename. Snapshots can gobble-up a significant amount of disk space. The delete command does not actually release any disk space allocated to the image file but it does release the associated clusters - effectively making them available to the VM for future storage.

    • qemu-img snapshot -d <snapshot-name> <imagename>
  • The convert command, when converting to and from the same QCOW2 format, acts as a copy command that only copies the current state of the VM image to the output file. The -p option displays progress information during the copy operation - which is often quite time consuming. The output file, imagename2, will contain all the clusters of a backing file that were referenced in the original image, imagename1. It will not copy any snapshot information. This has the effect of creating a standalone image with no references to any backing image.

    • qemu-img convert -p -f qcow2 <imagename1> -O qcow2 <imagename2>

The QEMU image snapshot create command is, as you might expect, also simple:

  • qemu-img snapshot -c <snapshot-name> <imagename>

    • The snapshot option tells qemu-img that we want to work with snapshots
    • The -c option tells qemu-img that we want to create a snapshot of the current VM image state.
    • The snapshot-name is the ID that we want to assign to this state of the VM image.
    • The imagename is the disk file name for the VM image with which we are working.
  • Putting it all together: Open a shell and change to the directory in which you have your image files. In the following example I am working on the same VM image that I created above and now I’m saving the image state using the name base-win7pro-winupdates-ie9.

  • As you can see there is no output from the snapshot create operation so I follow that up with an info request. Take a look (though it’s slightly edited.)

  • qemu-img snapshot -c base-win7pro-winupdates-ie9 win7demo-kvm.qcow2

  • qemu-img info win7demo-kvm.qcow2
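A small sketch for pulling the snapshot tags out of that info output; the inline sample below stands in for real `qemu-img info` output, so the parsing can be tried without qemu-img installed:

```shell
# Extract the TAG column from the snapshot table of (captured) qemu-img info output.
info='Snapshot list:
ID        TAG                          VM SIZE                DATE       VM CLOCK
1         base-win7pro-winupdates-ie9        0 2016-05-08 12:00:00   00:00:00.000'
tags=$(printf '%s\n' "$info" | awk 'NR > 2 { print $2 }')
echo "$tags"
```

The same pipeline works on `qemu-img info <imagename>` output directly.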

Applying different Snapshots:

  • qemu-img snapshot -a <snapshot-name> <imagename>

    • The snapshot option tells qemu-img that we want to work with snapshots
    • The -a option tells qemu-img that we want to apply a previously created snapshot of the VM image state - ie: we want to apply that snapshot so that it becomes the current image state.
    • The snapshot-name is the ID of the previously created snapshot.
    • The imagename is the disk file name for the VM image with which we are working.
Converting image formats
  • This example will convert a raw image file named centos7.img to a qcow2 image file.

    • qemu-img convert -f raw -O qcow2 centos7.img centos7.qcow2
  • Run the following command to convert a VMDK image file to a raw image file.

    • qemu-img convert -f vmdk -O raw centos7.vmdk centos7.img
  • Run the following command to convert a VMDK image file to a qcow2 image file.

    • qemu-img convert -f vmdk -O qcow2 centos7.vmdk centos7.qcow2
  • VBoxManage: VDI (VirtualBox) to raw

    • VBoxManage clonehd ~/VirtualBox\ VMs/fedora21.vdi fedora21.img --format raw
  • .OVA to QCOW2

    • tar xvf MyAppliance.ova
    • qemu-img convert -O qcow2 MyAppliance-disk1.vmdk MyAppliance.qcow2
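A batch wrapper sketch over these conversions; the function name, directory layout, and the DRY_RUN flag are my own assumptions (with DRY_RUN=1 it only prints the qemu-img commands, so nothing is touched):

```shell
# Convert every .vmdk in a directory to qcow2; DRY_RUN=1 just prints the commands.
convert_all() {
    dir=$1
    for f in "$dir"/*.vmdk; do
        [ -e "$f" ] || continue            # no matches: the glob stays literal
        out="${f%.vmdk}.qcow2"
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo qemu-img convert -f vmdk -O qcow2 "$f" "$out"
        else
            qemu-img convert -f vmdk -O qcow2 "$f" "$out"
        fi
    done
}
DRY_RUN=1 convert_all /var/lib/libvirt/images
```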
Change default image storage
virsh pool-dumpxml default > pool.xml
edit pool.xml # with new name and path
virsh pool-create pool.xml
virsh pool-refresh name
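The "edit pool.xml" step can be scripted with sed; the XML below is a minimal stand-in for `virsh pool-dumpxml default` output, and the new name vmstore and path /data/vmstore are example values:

```shell
# Rewrite the pool name and storage path in a dumped pool definition.
pool=$(mktemp)
cat > "$pool" <<'EOF'
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
EOF
sed -i -e 's|<name>default</name>|<name>vmstore</name>|' \
       -e 's|/var/lib/libvirt/images|/data/vmstore|' "$pool"
grep -E '<name>|<path>' "$pool"
```

The edited file is then fed to `virsh pool-create` as above.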
Enabling clipboard transfer (Copy + Paste)
  • To enable copy and paste between virtual machines and the host in RedHat KVM, do the following:
    • Shut down your machine if it is turned on.
    • Click on “Show virtual hardware details“.
    • Choose the “Display VNC” hardware and set the type to “Spice”.
    • Click Apply and choose YES to the “You are switching graphics type to spice, would you like to add Spice agent channels?” question.
    • Now, select Video and set the Model to “qxl” and click apply.
    • Turn on your machine and login.
    • Run the following as super user:
sudo apt-get install spice-vdagent python-spice-client-gtk spice-client spice-client-gtk
  • Then install the guest software on all clients
  • Reboot the guest, and voila it works.

File dragging/sharing

  • ToDo

Modifying KVM (qemu-kvm) settings for malware analysis

Some parts of this block are from:

  • I have found that using libvirt and virsh edit is a simple way to change the settings for the guest OS.

  • General guidelines (DON’T RUN THE VM BEFORE ALL PATCHING IS DONE):
    0. Run the QEMU and SeaBIOS installer script correctly
    1. Change the MAC address
    2. Add CPUID-bit VM-detect patching
    3. Add the SeaBIOS info
    4. Speed up snapshots if snapshot generation is slow

To edit VM conf -> virsh edit vm_name

0. Use this script to install QEMU and SeaBIOS

Script: sudo ./


1. Change mac address

 <interface type='network'>
     <mac address='xx:xx:xx:xx:xx:xx'/>
     <source network='default'/>
     <model type='rtl8139'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
 </interface>
  • What this script does:

    • Download source of QEMU
    • Patch the VM detect patterns
    • Compile and Install it

    • SeaBIOS -> download, patch and install it, replacing the original unpatched file
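When picking a MAC address by hand, a small sketch (my own addition, not from the installer script) that stays inside the 52:54:00 prefix QEMU/KVM uses by default, so libvirt tooling will accept it:

```shell
# Generate a random MAC address in the QEMU/KVM 52:54:00 prefix.
genmac() {
    od -An -N3 -tx1 /dev/urandom | awk '{ printf "52:54:00:%s:%s:%s\n", $1, $2, $3 }'
}
mac=$(genmac)
echo "$mac"
```

The result can be pasted into the `<mac address='…'/>` element above.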

2. Add CPUID-bit VM-detect patching

  • KVM by default will pass through a feature flag, viewable as bit 31 of ECX after executing the CPUID instruction with EAX set to 1. Some malware will use this unprivileged instruction to detect its execution in a VM. One way to avoid this is to modify your VM definition as follows: find the following line:
<domain type='kvm'>

Change it to:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

Then within the domain element, add the following (wrapped in a <qemu:commandline> block):

  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='<REPLACEME>,-hypervisor,kvm=off'/>
  </qemu:commandline>
  • Instead of using “host”, you can also choose a number of other CPU models from the list displayed by the “qemu-system-i386 -cpu help” command (SandyBridge, Haswell, etc.). So replace REPLACEME with the CPU model you selected.

  • Now test your virtual machine. If everything works, prepare it for snapshotting while running Cuckoo’s agent. This means the virtual machine needs to be running while you are taking the snapshot; afterwards you can shut it down. You can finally take the snapshot with the virsh snapshot-create command shown below.

3. Change BIOS information

  • Start by retrieving the dmidecode information for your host.
          <smbios mode='sysinfo'/>   <!-- this line goes inside the <os> element -->
      <sysinfo type='smbios'>
          <bios>
             <entry name='vendor'>XXXX</entry>
             <entry name='version'>XXXXXX</entry>
             <entry name='date'>XXXXX</entry>
             <entry name='release'>XXXXX</entry>
          </bios>
          <system>
             <entry name='manufacturer'>XXXXX</entry>
             <entry name='product'>XXXXX</entry>
             <entry name='version'>XXXXX</entry>
             <entry name='serial'>XXXXX</entry>
             <entry name='uuid'>XXXXXXXX</entry> <!-- this value has to match the <uuid> element found elsewhere in the xml file -->
             <entry name='sku'>XXXXXX</entry>
             <entry name='family'>XXXXXX</entry>
          </system>
      </sysinfo>


    <smbios mode='sysinfo'/>   <!-- inside the <os> element -->
  <sysinfo type='smbios'>
      <bios>
         <entry name='vendor'>American Megatrends Inc.</entry>
         <entry name='version'>1101</entry>
         <entry name='date'>02/04/2013</entry>
         <entry name='release'>1.71</entry>
      </bios>
      <system>
         <entry name='manufacturer'>System manufacturer</entry>
         <entry name='product'>System manufacturer</entry>
         <entry name='version'>System Version</entry>
         <entry name='serial'>System Serial Number</entry>
         <entry name='uuid'>21804FA0-D7DA-11DD-B21C-08606E6814BB</entry>
         <entry name='sku'>SKU</entry>
         <entry name='family'>To be filled by O.E.M</entry>
      </system>
  </sysinfo>
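A tiny helper for emitting those <entry> lines from name/value pairs; the helper name and the sample values are my own illustration (on a real host you would feed it values from dmidecode):

```shell
# Emit a libvirt SMBIOS <entry> line for a given name/value pair.
smbios_entry() {
    printf "  <entry name='%s'>%s</entry>\n" "$1" "$2"
}
smbios_entry vendor 'American Megatrends Inc.'
smbios_entry version 1101
```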

Create snapshot

$ virsh snapshot-create "<Name of VM>"

  • Having multiple snapshots can cause errors.
    ERROR: No snapshot found for virtual machine VM-Name

  • VM snapshots can be managed using the following commands.

$ virsh snapshot-list "VM-Name"
$ virsh snapshot-delete "VM-Name" 1234567890

Clone patched vm correctly

  • Found on Stack Overflow, but I don’t remember the post; sorry to the author

if [ "$1" = "-h" ]; then
    echo "Usage: $0 vm_name_base <start_from_number> <#vm_to_create> path_where_to_store"
    echo "    $0 Win7x64 0 5 /var/lib/libvirt/images/"
    exit 0
fi

if [ "$#" -ne 4 ]; then
    echo '[-] You must provide name of vm, number of vms to create and path to base image'
    exit 1
fi

for i in $(seq "$2" "$(($2 + $3 - 1))"); do
    # a bad mac address can be generated, so retry until virt-clone succeeds
    worked=1
    while [ $worked -eq 1 ]; do
        macaddr=$(dd if=/dev/urandom bs=1024 count=1 2>/dev/null|md5sum|sed 's/^\(..\)\(..\)\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4:\5:\6/')
        virt-clone -n ${1}_${i} -o $1 -m $macaddr -f $4/${1}_${i}.qcow2
        if [ $? -eq 0 ]; then
            worked=0
        fi
    done
    echo '[+] Check vm ->' ${1}_${i}
done


WebInterface - webvirtmgr

  • install:
sudo apt-get install git python-pip python-libvirt python-libxml2 novnc supervisor nginx
git clone git://
cd webvirtmgr
sudo pip install -r requirements.txt # or python-pip (RedHat, Fedora, CentOS, OpenSuse)
./ syncdb
./ collectstatic

Setup Nginx

Warning: Usually WebVirtMgr is only available from localhost on port 8000. This step will make WebVirtMgr available to everybody on port 80. The web interface is also unprotected (no HTTPS), which means that everybody between you and the server (people on the same wifi, your local router, your provider, the server’s provider, backbones, etc.) can see your login credentials in clear text!

Instead you can also skip this step completely (and uninstall nginx) by simply redirecting port 8000 to your local machine via SSH. This is much safer, because WebVirtMgr is not available to the public any more and you can only access it over an encrypted connection.

  • Example:

ssh -p port user@server -L 8000:localhost:8000 -L 6080:localhost:6080

You should be able to access WebVirtMgr by typing localhost:8000 in your browser after completing the install. Port 6080 is forwarded to make noVNC work.

If you really know what you are doing, feel free to ignore the warning and continue setting up the redirect with nginx:

cd ..
sudo mv webvirtmgr /var/www/         ( CentOS, RedHat, Fedora, Ubuntu )
sudo mv webvirtmgr /srv/www/         ( OpenSuSe )

Add file webvirtmgr.conf in /etc/nginx/conf.d:

server {
    listen 80 default_server;

    server_name $hostname;
    #access_log /var/log/nginx/webvirtmgr_access_log;

    location /static/ {
        root /var/www/webvirtmgr/webvirtmgr; # or /srv instead of /var
        expires max;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 600;
        proxy_read_timeout 600;
        proxy_send_timeout 600;
        client_max_body_size 1024M; # Set higher depending on your needs
    }
}
Open nginx.conf out of /etc/nginx/nginx.conf:

sudo vim /etc/nginx/nginx.conf

Comment the Server Section as it is shown in the example:

#    server {
#        listen       80 default_server;
#        server_name  localhost;
#        root         /usr/share/nginx/html;
#        #charset koi8-r;
#        #access_log  /var/log/nginx/host.access.log  main;
#        # Load configuration files for the default server block.
#        include /etc/nginx/default.d/*.conf;
#        location / {
#        }
#        # redirect server error pages to the static page /40x.html
#        #
#        error_page  404              /404.html;
#        location = /40x.html {
#        }
#        # redirect server error pages to the static page /50x.html
#        #
#        error_page   500 502 503 504  /50x.html;
#        location = /50x.html {
#        }
#    }
  • Restart nginx service:

sudo service nginx restart

  • Update SELinux policy

/usr/sbin/setsebool httpd_can_network_connect true

  • Make the supervisord service permanent: ( OpenSusE, CentOS, RedHat, Fedora )
    sudo chkconfig supervisord on

Setup Supervisor

  • Run:
$ sudo service novnc stop
$ sudo insserv -r novnc
$ sudo vi /etc/insserv/overrides/novnc
# Provides:          nova-novncproxy
# Required-Start:    $network $local_fs $remote_fs $syslog
# Required-Stop:     $remote_fs
# Default-Start:
# Default-Stop:
# Short-Description: Nova NoVNC proxy
# Description:       Nova NoVNC proxy

sudo chown -R www-data:www-data /var/www/webvirtmgr

  • Add file webvirtmgr.conf in /etc/supervisor/conf.d:
[program:webvirtmgr]
command=/usr/bin/python /var/www/webvirtmgr/ run_gunicorn -c /var/www/webvirtmgr/conf/

[program:webvirtmgr-console]
command=/usr/bin/python /var/www/webvirtmgr/console/webvirtmgr-console
  • Restart supervisor daemon
sudo service supervisor stop
sudo service supervisor start

Update - CentOS, RedHat, Fedora

  • Re-check your settings (maybe something has changed) and then:
cd /var/www/webvirtmgr
sudo git pull
sudo ./ collectstatic
sudo service supervisord restart


  • If you get errors or the panel doesn’t run (only for DEBUG or DEVELOPMENT):

./ runserver 0:8000

Enter in your browser:

http://x.x.x.x:8000 (x.x.x.x - your server IP address )

QEMU/KVM and Memory Forensic (Volatility):

How to get ram memory dump:

Executing a virsh dump command sends a request to dump the core of a guest virtual machine to a file so errors in the virtual machine can be diagnosed. Running this command may require you to manually ensure proper permissions on the file and path given by the corefilepath argument. The virsh dump command is similar to a coredump (or the crash utility). To create the virsh dump file, run:

#virsh dump <domain> <corefilepath> [--bypass-cache] { [--live] | [--crash] | [--reset] } [--verbose] [--memory-only]

  • While the domain (guest virtual machine domain name) and corefilepath (location of the newly created core dump file) are mandatory, the following arguments are optional:

    • --live creates a dump file on a running machine and doesn’t pause it.
    • --crash stops the guest virtual machine and generates the dump file; the guest will be listed as Shut off, with the reason given as Crashed. Note that in virt-manager the status may be listed as Paused.
    • --reset will reset the guest virtual machine following a successful dump. Note that these three switches (--live, --crash, --reset) are mutually exclusive.
    • --bypass-cache uses O_DIRECT to bypass the file system cache.
    • --memory-only saves the dump file as an ELF file, including only the domain’s memory and common CPU register values. This option is very useful if the domain uses host devices directly.
    • --verbose displays the progress of the dump
  • The entire dump process may be monitored using virsh domjobinfo command and can be canceled by running virsh domjobabort.
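Since --memory-only writes an ELF core, a quick sanity check on the resulting dump before feeding it to Volatility; the helper name and the stand-in file are my own sketch:

```shell
# Check the 4-byte ELF magic (0x7f 'E' 'L' 'F') at the start of a dump file.
is_elf() {
    [ "$(head -c4 "$1" | od -An -tx1 | tr -d ' \n')" = "7f454c46" ]
}
printf '\177ELF' > /tmp/fake-dump.elf      # stand-in for a real virsh dump file
is_elf /tmp/fake-dump.elf && echo "looks like an ELF core"
```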

Assign static guest IP addresses using DHCP on the virtual machine

View the current dnsmasq DHCP configuration

  • Type the following command to list networks

    • # virsh net-list
  • Sample outputs:

 Name                 State      Autostart     Persistent
 default              active     yes           yes
  • To see the default network information, enter:

# virsh net-dumpxml default

  • Sample outputs:
<network connections='2'>
  <forward mode='nat'>
      <port start='1024' end='65535'/>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:12:fe:35'/>
  <ip address='' netmask=''>
      <dhcp>
          <range start='' end=''/>
      </dhcp>
  </ip>
</network>
  • The DHCP range is between and

How to configure static guest IP addresses on the VM host

  • First find out your guest VM’s MAC addresses, enter:
# virsh dumpxml {VM-NAME-HERE} | grep -i '<mac'
# virsh dumpxml xenial | grep -i '<mac'
  • Sample outputs:

    • <mac address='52:54:00:4c:40:1c'/>
  • Please note down the MAC addresses of the xenial VM that you want to assign static IP addresses.
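The grep above can be scripted; the inline XML fragment below stands in for real `virsh dumpxml xenial` output, so the parsing can be tried without libvirt:

```shell
# Extract the MAC address value from a domain XML fragment.
xml="<interface type='network'>
  <mac address='52:54:00:4c:40:1c'/>
  <source network='default'/>
</interface>"
mac=$(printf '%s\n' "$xml" | grep -o "mac address='[^']*'" | cut -d"'" -f2)
echo "$mac"
```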

Edit the default network

  • Type the following command:

    • # virsh net-edit default
  • Find the following section:

      <range start='' end=''/>
  • Append the static IP assignment as follows, after the range line (inside the <dhcp> element):

    • <host mac='52:54:00:4c:40:1c' name='xenial' ip=''/>
  • Where,

    mac='52:54:00:4c:40:1c' – the VM’s MAC address
    name='xenial' – the VM’s name.
    ip='' – the VM’s static IP.
  • Here is my complete file with three static DHCP entries for three VMs:
<network>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:12:fe:35'/>
  <ip address='' netmask=''>
      <dhcp>
          <range start='' end=''/>
          <host mac='52:54:00:a0:cc:19' name='centos7' ip=''/>
          <host mac='52:54:00:f7:a1:c8' name='puffy' ip=''/>
          <host mac='52:54:00:4c:40:1c' name='xenial' ip=''/>
      </dhcp>
  </ip>
</network>
  • Restart DHCP service:
# virsh net-destroy default
# virsh net-start default
  • Sample outputs:
    Network default destroyed
    Network default started
  • If you are running the guest/VM called xenial shutdown it:
# virsh shutdown xenial
# /etc/init.d/libvirt-bin restart
# virsh start xenial
# ping -a
  • Sample outputs:
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.518 ms
64 bytes from icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from icmp_seq=3 ttl=64 time=0.327 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.202/0.349/0.518/0.129 ms
  • Each time the guest/VM called xenial comes online (or is rebooted, e.g. for a kernel update) it will get its static IP address from the dnsmasq DHCP server.

virt-sparsify - Make a virtual machine disk sparse

  • sudo apt install libguestfs-tools


  • Typical usage is:

    • virt-sparsify indisk outdisk
    • which copies indisk to outdisk, making the output sparse. outdisk is created, or overwritten if it already exists. The format of the input disk is detected (eg. qcow2) and the same format is used for the output disk.
  • To convert between formats, use the --convert option:

    • virt-sparsify disk.raw --convert qcow2 disk.qcow2
    • Virt-sparsify tries to zero and sparsify free space on every filesystem it can find within the source disk image. You can get it to ignore (i.e. not zero free space on) certain filesystems by doing:
  • To sparse space of disk image and ignore filesystem

    • virt-sparsify --ignore /dev/sda1 indisk outdisk

Virt-manager for mac

brew tap jeffreywildman/homebrew-virt-manager
brew cask install xquartz
brew install virt-manager virt-viewer

virt-manager -c qemu+ssh://user@libvirthost/system?socket=/var/run/libvirt/libvirt-sock
virt-viewer -c qemu+ssh://user@libvirthost/system?socket=/var/run/libvirt/libvirt-sock

I still can’t connect to a remote URI, why?

  • This formula for virt-manager does not include the openssh-askpass dependency and does not prompt for passwords in a popup window. Here are two workarounds:

  • Run virt-manager with either the --debug or --no-fork option to get a password prompt via the CLI.

  • Set up SSH keys between your local and remote system to avoid the prompt.

Why can’t I connect to a local URI (e.g., qemu:///system)?

  • I’ve not yet tested virt-manager against any local URIs/hypervisors. If you get virt-manager working with a local hypervisor and needed to take any special steps, feel free to share the details.

Everything was working yesterday, but it’s not working today, can you help?

  • If virt-manager or its dependencies have been upgraded recently (brew upgrade), it’s possible that a reinstall may fix the issue (see #39).

Xen/Kvm manual from OpenSuse

Some compilation problem

  • For the first time in a few years I ran into a compilation problem where everything had previously worked fine. I had all libz-related packages installed and still hit this:
    • no dependency information found for /usr/local/lib/
  • SOLUTION: I dislike this solution, but it was the only way I found to get it working:
    • sudo sed -i 's/my $ignore_missing_info = 0;/my $ignore_missing_info = 1;/g' /usr/bin/dpkg-shlibdeps
    • other options from here did not work for me