Virt Tools Blog Planet



May 22, 2016

Cole Robinson

spice OpenGL/virgl acceleration on Fedora 24

New in Fedora 24 virt is 3D accelerated SPICE graphics, via Virgl. This is kinda-sorta OpenGL passthrough from the VM up to the host machine. Much of the initial support has been around since qemu 2.5, but it's more generally accessible now that SPICE is in the mix, since that's the default display type used by virt-manager and gnome-boxes.

I'll explain below how you can test things on Fedora 24, but first let's cover the hurdles and caveats. This is far from being something that can be turned on by default, and there are still serious integration issues to iron out. All of this is regarding usage with libvirt tools.

Caveats and known issues

Because of these issues, we haven't exposed this as a UI knob in any of the tools yet, to save us some redundant bug reports from users who are just clicking a cool-sounding check box :) Instead you'll need to explicitly opt in via the command line.

Testing it out

You'll need the following packages (or later) to test this:

Once you install a Fedora 24 VM through the standard methods, you can enable spice GL for your VM with these two commands:

$ virt-xml --connect $URI $VM_NAME --confirm --edit --video clearxml=yes,model=virtio,accel3d=yes
$ virt-xml --connect $URI $VM_NAME --confirm --edit --graphics clearxml=yes,type=spice,gl=on,listen=none

The first command switches the video device to 'virtio' and enables the 3D acceleration setting. The second command sets up spice to listen locally only, and enables GL. Make sure to fully power off the VM afterwards for the settings to take effect. If you want to make the changes manually with 'virsh edit', the XML specifics are described in the spice GL documentation.
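For reference, after those two commands the relevant parts of the domain XML should look roughly like this (a sketch based on the libvirt spice GL documentation; exact attributes can vary by libvirt version):

```xml
<video>
  <model type='virtio'>
    <acceleration accel3d='yes'/>
  </model>
</video>
<graphics type='spice'>
  <listen type='none'/>
  <gl enable='yes'/>
</graphics>
```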

Once your VM has started up, you can verify that everything is working correctly by checking glxinfo output in the VM; 'virgl' should appear in the renderer string:

$ glxinfo | grep virgl
Device: virgl (0x1010)
OpenGL renderer string: Gallium 0.4 on virgl

And of course the more fun test of giving supertuxkart a spin :)

Credit to Dave Airlie, Gerd Hoffman, and Marc-André Lureau for all the great work that got us to this point!

by Cole Robinson at May 22, 2016 11:56 AM

May 12, 2016

Richard Jones

Libguestfs appliance boot in under 600ms

$ ./run ./utils/boot-benchmark/boot-benchmark
Warming up the libguestfs cache ...
Running the tests ...

test version: libguestfs 1.33.28
 test passes: 10
host version: Linux 4.4.4-301.fc23.x86_64 #1 SMP Fri Mar 4 17:42:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    host CPU: Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
     backend: direct               [to change set $LIBGUESTFS_BACKEND]
        qemu: /home/rjones/d/qemu/x86_64-softmmu/qemu-system-x86_64 [to change set $LIBGUESTFS_HV]
qemu version: QEMU emulator version 2.5.94, Copyright (c) 2003-2008 Fabrice Bellard
         smp: 1                    [to change use --smp option]
     memsize: 500                  [to change use --memsize option]
      append:                      [to change use --append option]

Result: 575.9ms ±5.3ms

There are various tricks here:

  1. I’m using the (still!) not upstream qemu DMA patches.
  2. I’ve compiled my own very minimal guest Linux kernel.
  3. I’m using my nearly upstream "crypto: Add a flag allowing the self-tests to be disabled at runtime." patch.
  4. I’ve got two sets of non-upstream libguestfs patches (1, 2).
  5. I am not using libvirt, but if you do want to use libvirt, make sure you use the very latest version since it contains an important performance patch.


by rich at May 12, 2016 10:07 PM

Daniel Berrange

Analysis of techniques for ensuring migration completion with KVM

Live migration is a long standing feature in QEMU/KVM (and other competing virtualization platforms), however, by default it does not cope very well with guests whose workloads are very memory-write intensive. It is very easy to create a guest workload that will ensure a migration never completes in its default configuration. For example, a guest which continually writes to each byte in a 1 GB region of RAM will never successfully migrate over a 1Gb/sec NIC. Even with a 10Gb/s NIC, a slightly larger guest can dirty memory fast enough to prevent completion without an unacceptably large downtime at switchover. Thus over the years, a number of optional features have been developed for QEMU with the aim of helping migration complete.
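To see why, compare the rate at which the guest can re-dirty pages with the rate at which the link can re-send them; a back-of-envelope check (the 5 GB/sec dirty rate is purely illustrative):

```python
# Migration converges only if the link can copy dirty pages
# faster than the guest re-dirties them.
def converges(link_gbit: float, dirty_bytes_per_sec: float) -> bool:
    link_bytes_per_sec = link_gbit * 1e9 / 8  # Gb/sec -> bytes/sec
    return link_bytes_per_sec > dirty_bytes_per_sec

dirty_rate = 5e9  # ~5 GB/sec: modest for a tight memory-write loop
print(converges(1.0, dirty_rate))   # 1 Gb/sec NIC: False
print(converges(10.0, dirty_rate))  # 10 Gb/sec NIC: still False
```

With numbers like these the pre-copy phase can never catch up, which is exactly the behaviour the optional features discussed here try to address.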

If you don’t want to read the background information on migration features and the testing harness, skip right to the end where there are a set of data tables showing charts of the results, followed by analysis of what this all means.

The techniques available

Measuring impact of the techniques

Understanding what the various techniques do in order to maximise chances of a successful migration is useful, but it is hard to predict how well they will perform in the real world when faced with varying workloads. In particular, are they actually capable of ensuring completion under worst case workloads, and what level of performance impact do they have on the guest workload? This is a problem that the OpenStack Nova project is currently struggling to get a clear answer on, with a view to improving Nova’s management of libvirt migration. In order to try and provide some guidance in this area, I’ve spent a couple of weeks working on a framework for benchmarking QEMU guest performance when subjected to the various different migration techniques outlined above.

In OpenStack the goal is for migration to be a totally “hands off” operation for cloud administrators. They should be able to request a migration and then forget about it until it completes, without having to baby sit it to apply tuning parameters. The other goal is that the Nova API should not have to expose any hypervisor specific concepts such as post-copy, auto-converge, compression, etc. Essentially Nova itself has to decide which QEMU migration features to use and just “do the right thing” to ensure completion. Whatever approach is chosen needs to be able to cope with any type of guest workload, since the cloud admins will not have any visibility into what applications are actually running inside the guest.

With this in mind, when it came to performance testing the QEMU migration features, it was decided to look at their behaviour when faced with the worst case scenario. Thus a stress program was written which would allocate many GB of RAM, and then spawn a thread on each vCPU that would loop forever xor’ing every byte of RAM against an array of bytes taken from /dev/random. This ensures that the guest is both heavy on reads and writes to memory, as well as creating RAM pages which are very hostile towards compression. This stress program was statically linked and built into a ramdisk as the /init program, so that Linux would boot and immediately run this stress workload in a fraction of a second.

In order to measure performance of the guest, each time 1 GB of RAM has been touched, the program will print out details of how long it took to update this GB and an absolute timestamp. These records are captured over the serial console from the guest, to be later correlated with what is taking place on the host side wrt migration.
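A scaled-down sketch of that stress workload's core logic (the real tool is a statically linked C program run as /init; this Python version is purely illustrative, using a tiny region instead of many GB and a bounded number of passes):

```python
import os
import threading
import time

PAD = bytearray(os.urandom(4096))  # stand-in for the /dev/random byte array

def stress(region: bytearray, pad: bytearray, passes: int) -> None:
    """Xor every byte of the region against the pad, reporting how long
    each pass took plus an absolute timestamp (the real tool logs per GB,
    over the serial console)."""
    for _ in range(passes):
        start = time.time()
        for i in range(len(region)):        # heavy on both reads and writes
            region[i] ^= pad[i % len(pad)]  # random pad defeats compression
        print(f"pass took {time.time() - start:.3f}s at {time.time():.3f}")

# One worker per CPU, each with its own region (the real tool spawns a
# thread per vCPU and loops forever; this sketch does a single pass).
threads = [threading.Thread(target=stress,
                            args=(bytearray(256 * 1024), PAD, 1))
           for _ in range(os.cpu_count() or 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```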

Next up it was time to create a tool to control QEMU from the host and manage the migration process, activating the desired features. A test scenario was defined which encodes details of what migration features are under test and their settings (number of iterations before activating post-copy, bandwidth limits, max downtime values, number of compression threads, etc). A hardware configuration was also defined which expressed the hardware characteristics of the virtual machine running the test (number of vCPUs, size of RAM, host NUMA memory & CPU binding, usage of huge pages, memory locking, etc). The tests/migration/ tool provides the mechanism to invoke the test in any of the possible configurations. For example, to test post-copy migration, switching to post-copy after 3 iterations, allowing 1Gbs bandwidth on a guest with 4 vCPUs and 8 GB of RAM, one might run:

$ tests/migration/ --cpus 4 --mem 8 --post-copy --post-copy-iters 3 --bandwidth 125 --dst-host myotherhost --transport tcp --output postcopy.json

The postcopy.json file contains the full report of the test results. This includes all details of the test scenario and hardware configuration, migration status recorded at the start of each iteration over RAM, the host CPU usage recorded once a second, and the guest stress test output. The accompanying tests/migration/ tool can consume this data file and produce interactive HTML charts illustrating the results.

$ tests/migration/ --split-guest-cpu --qemu-cpu --vcpu-cpu --migration-iters --output postcopy.html postcopy.json

To assist in making comparisons between runs, however, a set of standardized test scenarios is also defined, which can be run via a tests/migration/ tool; in which case it is merely required to provide the desired hardware configuration:

$ tests/migration/ --cpus 4 --mem 8 --dst-host myotherhost --transport tcp --output myotherhost-4cpu-8gb

This will run all the standard defined test scenarios and save many data files in the myotherhost-4cpu-8gb directory. The same tool can be used to create charts combining multiple data sets at once to allow easy comparison.

Performance results for QEMU 2.6

With the tools written, I went about running some tests against the QEMU GIT master codebase, which was effectively the same as the QEMU 2.6 code just released. The pair of hosts used were Dell PowerEdge R420 servers with 8 CPUs and 24 GB of RAM, spread across 2 NUMA nodes. The primary NICs were Broadcom Gigabit, but the hosts had been augmented with Mellanox 10-Gig-E RDMA capable NICs, which were picked for transfer of the migration traffic. For the tests I decided to collect data for two distinct hardware configurations, a small uniprocessor guest (1 vCPU and 1 GB of RAM) and a moderately sized multi-processor guest (4 vCPUs and 8 GB of RAM). Memory and CPU binding was specified such that the guests were confined to a single NUMA node to avoid performance measurements being skewed by cross-NUMA node memory accesses. The hosts and guests were all running the RHEL-7 3.10.0-0369.el7.x86_64 kernel.

To understand the impact of different network transports & their latency characteristics, the two hardware configurations were combinatorially expanded against 4 different network configurations – a local UNIX transport, a localhost TCP transport, a remote 10Gbs TCP transport and a remote 10Gbs RDMA transport.

The full set of results are linked from the tables that follow. The first link in each row gives a guest CPU performance comparison for each scenario in that row. The other cells in the row give the full host & guest performance details for that particular scenario.

UNIX socket, 1 vCPU, 1 GB RAM

Using UNIX socket migration to local host, guest configured with 1 vCPU and 1 GB of RAM

Scenario Tunable
Pause unlimited BW 0 iters 1 iters 5 iters 20 iters
Pause 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Post-copy unlimited BW 0 iters 1 iters 5 iters 20 iters
Post-copy 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Auto-converge unlimited BW 5% CPU step 10% CPU step 20% CPU step
Auto-converge 10% CPU step 100 mbs 300 mbs 1 gbs 10 gbs unlimited
MT compression unlimited BW 1 thread 2 threads 4 threads
XBZRLE compression unlimited BW 5% cache 10% cache 20% cache 50% cache

UNIX socket, 4 vCPU, 8 GB RAM

Using UNIX socket migration to local host, guest configured with 4 vCPU and 8 GB of RAM

Scenario Tunable
Pause unlimited BW 0 iters 1 iters 5 iters 20 iters
Pause 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Post-copy unlimited BW 0 iters 1 iters 5 iters 20 iters
Post-copy 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Auto-converge unlimited BW 5% CPU step 10% CPU step 20% CPU step
Auto-converge 10% CPU step 100 mbs 300 mbs 1 gbs 10 gbs unlimited
MT compression unlimited BW 1 thread 2 threads 4 threads
XBZRLE compression unlimited BW 5% cache 10% cache 20% cache 50% cache

TCP socket local, 1 vCPU, 1 GB RAM

Using TCP socket migration to local host, guest configured with 1 vCPU and 1 GB of RAM

Scenario Tunable
Pause unlimited BW 0 iters 1 iters 5 iters 20 iters
Pause 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Post-copy unlimited BW 0 iters 1 iters 5 iters 20 iters
Post-copy 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Auto-converge unlimited BW 5% CPU step 10% CPU step 20% CPU step
Auto-converge 10% CPU step 100 mbs 300 mbs 1 gbs 10 gbs unlimited
MT compression unlimited BW 1 thread 2 threads 4 threads
XBZRLE compression unlimited BW 5% cache 10% cache 20% cache 50% cache

TCP socket local, 4 vCPU, 8 GB RAM

Using TCP socket migration to local host, guest configured with 4 vCPU and 8 GB of RAM

Scenario Tunable
Pause unlimited BW 0 iters 1 iters 5 iters 20 iters
Pause 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Post-copy unlimited BW 0 iters 1 iters 5 iters 20 iters
Post-copy 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Auto-converge unlimited BW 5% CPU step 10% CPU step 20% CPU step
Auto-converge 10% CPU step 100 mbs 300 mbs 1 gbs 10 gbs unlimited
MT compression unlimited BW 1 thread 2 threads 4 threads
XBZRLE compression unlimited BW 5% cache 10% cache 20% cache 50% cache

TCP socket remote, 1 vCPU, 1 GB RAM

Using TCP socket migration to remote host, guest configured with 1 vCPU and 1 GB of RAM

Scenario Tunable
Pause unlimited BW 0 iters 1 iters 5 iters 20 iters
Pause 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Post-copy unlimited BW 0 iters 1 iters 5 iters 20 iters
Post-copy 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Auto-converge unlimited BW 5% CPU step 10% CPU step 20% CPU step
Auto-converge 10% CPU step 100 mbs 300 mbs 1 gbs 10 gbs unlimited
MT compression unlimited BW 1 thread 2 threads 4 threads
XBZRLE compression unlimited BW 5% cache 10% cache 20% cache 50% cache

TCP socket remote, 4 vCPU, 8 GB RAM

Using TCP socket migration to remote host, guest configured with 4 vCPU and 8 GB of RAM

Scenario Tunable
Pause unlimited BW 0 iters 1 iters 5 iters 20 iters
Pause 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Post-copy unlimited BW 0 iters 1 iters 5 iters 20 iters
Post-copy 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Auto-converge unlimited BW 5% CPU step 10% CPU step 20% CPU step
Auto-converge 10% CPU step 100 mbs 300 mbs 1 gbs 10 gbs unlimited
MT compression unlimited BW 1 thread 2 threads 4 threads
XBZRLE compression unlimited BW 5% cache 10% cache 20% cache 50% cache

RDMA socket, 1 vCPU, 1 GB RAM

Using RDMA socket migration to remote host, guest configured with 1 vCPU and 1 GB of RAM

Scenario Tunable
Pause unlimited BW 0 iters 1 iters 5 iters 20 iters
Pause 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Post-copy unlimited BW 0 iters 1 iters 5 iters 20 iters
Post-copy 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Auto-converge unlimited BW 5% CPU step 10% CPU step 20% CPU step
Auto-converge 10% CPU step 100 mbs 300 mbs 1 gbs 10 gbs unlimited
MT compression unlimited BW 1 thread 2 threads 4 threads
XBZRLE compression unlimited BW 5% cache 10% cache 20% cache 50% cache

RDMA socket, 4 vCPU, 8 GB RAM

Using RDMA socket migration to remote host, guest configured with 4 vCPU and 8 GB of RAM

Scenario Tunable
Pause unlimited BW 0 iters 1 iters 5 iters 20 iters
Pause 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Post-copy unlimited BW 0 iters 1 iters 5 iters 20 iters
Post-copy 5 iters 100 mbs 300 mbs 1 gbs 10 gbs unlimited
Auto-converge unlimited BW 5% CPU step 10% CPU step 20% CPU step
Auto-converge 10% CPU step 100 mbs 300 mbs 1 gbs 10 gbs unlimited
MT compression unlimited BW 1 thread 2 threads 4 threads
XBZRLE compression unlimited BW 5% cache 10% cache 20% cache 50% cache

Analysis of results

The charts above provide the full set of raw results, from which you are welcome to draw your own conclusions. The test harness is also posted on the qemu-devel mailing list and will hopefully be merged into GIT at some point, so anyone can repeat the tests or run tests to compare other scenarios. What follows now is my interpretation of the results and the interesting points they show.

Considering all the different features tested, post-copy is the clear winner. It was able to guarantee completion of migration every single time, regardless of guest RAM size, with minimal long-lasting impact on guest performance. While it did have a notable spike impacting guest performance at the time of the switch from the pre-copy to the post-copy phase, this impact was short lived, only a few seconds. The next best result was seen with auto-converge, which again managed to complete migration in the majority of cases. By comparison with post-copy, the worst case impact seen to the guest CPU performance was the same order of magnitude, but it lasted for a very, very long time, many minutes. In addition, in more bandwidth limited scenarios, auto-converge was unable to throttle guest CPUs quickly enough to avoid hitting the overall 5 minute timeout, whereas post-copy would always succeed except in the most limited bandwidth scenarios (100Mbs – where no strategy can ever work). The other benefit of post-copy is that only the guest OS thread responsible for the page fault is delayed – other threads in the guest OS will continue running at normal speed if their RAM is already on the host. With auto-converge, all guest CPUs and threads are throttled regardless of whether they are responsible for dirtying memory. IOW post-copy has a targeted performance hit, whereas auto-converge is indiscriminate. Finally, as noted earlier, post-copy does have a failure scenario which can result in losing the VM if the network to the source host is lost for long enough to time out the TCP connection while in post-copy mode. This risk can be mitigated with redundancy at the network layer, and the VM is only at risk for the short period of time it is running in post-copy mode, which is mere seconds with a 10Gbs link.

It was expected that the compression features would fare badly given the guest workload, but the impact was far worse than expected, particularly for MT compression. Given the major requirements compression has in terms of host CPU time (MT compression) or host RAM (XBZRLE compression), they do not appear to be viable as general purpose features. They should only be used if the workloads are known to be compression friendly, the host has the CPU and/or RAM resources to spare, and neither post-copy nor auto-converge is possible to use. To make these features more practical to use in an automated general purpose manner, QEMU would have to be enhanced to give the mgmt application direct control over turning them on and off during migration. This would allow the app to try using compression, monitor its effectiveness, and then turn compression off if it is being harmful, rather than having to abort the migration entirely and restart it.

There is scope for further testing with RDMA, since the hardware used for testing was limited to 10Gbs. Newer RDMA hardware is supposed to be capable of reaching higher speeds, 40Gbs, even 100Gbs, which would have a correspondingly positive impact on the ability to migrate. At least for any speeds of 10Gbs or less though, it does not appear worthwhile to use RDMA; apps would be better off using TCP in combination with post-copy.

In terms of network I/O, no matter what the guest workload, QEMU is generally capable of saturating whatever link is used for migration for as long as it takes to complete. It is very easy to create workloads that will never complete, and decreasing the available bandwidth just increases the chances that migration will never complete. It might be tempting to think that if you have 2 guests, it would take the same total time whether you migrate them one after the other, or migrate them in parallel. This is not necessarily the case though, as with a parallel migration the bandwidth will be shared between them, which increases the chances that neither guest will ever be able to complete. So as a general rule it appears wise to serialize all migration operations on a given host, unless there are multiple NICs available.
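The bandwidth-sharing point can be made concrete with a simple convergence model (the dirty rate is illustrative): each of n parallel migrations only converges if its 1/n share of the link still outruns that guest's dirty rate.

```python
# Serial vs parallel migration: n parallel migrations each get ~1/n of the
# link, so a guest that converges alone may never converge in parallel.
def can_converge(link_gbit: float, dirty_bytes_per_sec: float,
                 parallel: int = 1) -> bool:
    share = link_gbit * 1e9 / 8 / parallel  # bytes/sec available per guest
    return share > dirty_bytes_per_sec

dirty = 0.8e9  # ~0.8 GB/sec dirty rate, illustrative
print(can_converge(10.0, dirty, parallel=1))  # alone on 10Gbs: True
print(can_converge(10.0, dirty, parallel=2))  # two in parallel: False
```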

In summary, use post-copy if it is available, otherwise use auto-converge. Don’t bother with compression unless the workload is known to be very compression friendly. Don’t bother with RDMA unless it supports more than 10 Gbs, otherwise stick with plain TCP.
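That decision rule can be written down directly; a sketch (the feature names are illustrative, this is not a Nova or libvirt API):

```python
# The strategy choice distilled from the analysis above.
def pick_strategy(post_copy: bool, auto_converge: bool,
                  compression_friendly: bool = False,
                  rdma_gbit: float = 0.0) -> list:
    choices = []
    if post_copy:
        choices.append("post-copy")      # clear winner: reliably completes
    elif auto_converge:
        choices.append("auto-converge")  # next best, but throttles all vCPUs
    if compression_friendly:
        choices.append("compression")    # only for known-friendly workloads
    # RDMA is only worthwhile above 10 Gbs; otherwise plain TCP.
    choices.append("rdma" if rdma_gbit > 10 else "tcp")
    return choices

print(pick_strategy(post_copy=True, auto_converge=True))  # ['post-copy', 'tcp']
```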

by Daniel Berrange at May 12, 2016 03:17 PM

April 29, 2016

Cole Robinson

Using CPU host-passthrough with virt-manager

I described virt-manager's CPU model default in this post. In that post I explained the difficulties of using either of the libvirt options for mirroring the host CPU: mode=host-model still has operational issues, and mode=host-passthrough isn't recommended for use with libvirt over supportability concerns.

Unfortunately since writing that post the situation hasn't improved any, and since host-passthrough is the only reliable way to expose the full capabilities of the host CPU to the VM, users regularly want to enable it. This is particularly apparent if trying to do nested virt, which often doesn't work on Intel CPUs unless host-passthrough is used.

However we don't explicitly expose this option in virt-manager since it's not generally recommended for libvirt usage. You can however still enable it in virt-manager:

by Cole Robinson at April 29, 2016 04:48 PM

UEFI support in virt-install and virt-manager

One of the new features in virt-manager 1.2.0 (from back in May) is user friendly support for enabling UEFI.

First a bit about terminology: When UEFI is packaged up to run in an x86 VM, it's often called OVMF. When UEFI is packaged up to run in an AArch64 VM, it's often called AAVMF. But I'll just refer to all of it as UEFI.

Using UEFI with virt-install and virt-manager

The first step to enable this for VMs is to install the binaries. UEFI still has some licensing issues that make it incompatible with Fedora's policies, so the bits are hosted in an external repo. Details for installing the repo and UEFI bits are over here.

Once the bits are installed (and you're on Fedora 22 or later), virt-manager and virt-install provide simple options to enable UEFI when creating VMs.

Marcin has a great post with screenshots describing this for virt-manager (for aarch64, but the steps are identical for x86 VMs).

For virt-install it's as simple as doing:

$ sudo virt-install --boot uefi ...

virt-install will get the binary paths from libvirt and set everything up with the optimal config. If virt-install can't figure out the correct parameters, like if no UEFI binaries are installed, you'll see an error like: ERROR    Error: --boot uefi: Don't know how to setup UEFI for arch 'x86'

See 'virt-install --boot help' if you need to tweak the parameters individually.
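If you do need to tweak the parameters individually, the manual equivalent of --boot uefi looks something like the following (the paths are the x86_64 ones from the repo mentioned above, and the option names may vary with your virt-install version, so treat this as a sketch):

```
$ sudo virt-install \
    --boot loader=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd,loader_ro=yes,loader_type=pflash,nvram_template=/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd \
    ...
```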

Implementing support in applications

Libvirt needs to know about UEFI<->NVRAM config file mapping, so it can advertise it to tools like virt-manager/virt-install. Libvirt looks at a hardcoded list of known host paths to see if any firmware is installed, and if so, lists those paths in domain capabilities output (virsh domcapabilities). Libvirt in Fedora 22+ knows to look for the paths provided by the repo mentioned above, so just installing the firmware is sufficient to make libvirt advertise UEFI support.

The domain capabilities output only lists the firmware path and the associated variable store path. Notably lacking is any indication of what architecture the binaries are meant for. So tools need to determine this mapping themselves.

virt-manager/virt-install and libguestfs use a similar path matching heuristic. The libguestfs code is a good reference:

match guest_arch with
| "i386" | "i486" | "i586" | "i686" ->
    [ "/usr/share/edk2.git/ovmf-ia32/OVMF_CODE-pure-efi.fd",
      "/usr/share/edk2.git/ovmf-ia32/OVMF_VARS-pure-efi.fd" ]
| "x86_64" ->
    [ "/usr/share/OVMF/OVMF_CODE.fd",
      "/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd" ]
| "aarch64" ->
    [ "/usr/share/AAVMF/AAVMF_CODE.fd",
      "/usr/share/edk2.git/aarch64/vars-template-pflash.raw" ]
| arch ->
    error (f_"don't know how to convert UEFI guests for architecture %s")
      guest_arch in

Having to track this in every app is quite crappy, but it's the only good solution at the moment. Hopefully long term libvirt will grow some solution that makes this easier for applications.

by Cole Robinson at April 29, 2016 04:35 PM

Polkit password-less access for the 'libvirt' group

Many users, who admin their own machines, want to be able to use tools like virt-manager without having to enter a root password. Just google 'virt-manager without password' and see all the hits. I've seen many blogs and articles over the years describing various ways to work around it.

The password prompting is via libvirt's polkit integration. The idea is that we want the applications to run as a regular unprivileged user (running GUI apps as root is considered a no-no), and only use the root authentication for talking to system libvirt instance. Most workarounds suggest installing a polkit rule to allow your user, or a particular user group, to access libvirt without needing to enter the root password.

In libvirt v1.2.16 we finally added official support for this (and backported it to Fedora 22+). The group is predictably called 'libvirt'. This matches polkit rules that Debian and SUSE were already shipping too.

So just add your user to the 'libvirt' group and enjoy passwordless virt-manager usage:

$ usermod --append --groups libvirt `whoami`

by Cole Robinson at April 29, 2016 04:32 PM

qemu:///system vs qemu:///session

If you've spent time using libvirt apps like virt-manager, you've likely seen references to libvirt URIs. The URI is how users or apps tell libvirt what hypervisor (qemu, xen, lxc, etc) to connect to, what host it's on, what authentication method to use, and a few other bits. 

For QEMU/KVM (and a few other hypervisors), there's a concept of system URI vs session URI:

That describes the 'what', but the 'why' of it is a bigger story. The privilege level of the daemon and the VMs has pros and cons depending on your use case. The easiest way to understand the benefit of one over the other is to list the problems with each setup.

qemu:///system runs libvirtd as root, and access is mediated by polkit. This means if you are connecting to it as a regular user (like when launching virt-manager), you need to enter the host root password, which is annoying and not generally desktop usecase friendly. There are ways to work around it but it requires explicit admin configuration.

Desktop use cases also suffer since VMs are running as the 'qemu' user, but the app (like virt-manager) is running as your local user. For example, say you download an ISO to $HOME and want to attach it to a VM. The VM is running as unprivileged user=qemu and can't access your $HOME, so libvirt has to change the ISO file owner to qemu:qemu and virt-manager has to give search access to $HOME for user=qemu. It's a pain for apps to handle, and it's confusing for users, but after dealing with it for a while in virt-manager we've made it generally work. (Though try giving a VM access to a file on a fat32 USB drive that was automounted by your desktop session...)

qemu:///session runs libvirtd and VMs as your unprivileged user. This integrates better with desktop use cases since permissions aren't an issue, no root password is required, and each user has their own separate pool of VMs.

However because nothing in the chain is privileged, any VM setup tasks that need host admin privileges aren't an option. Unfortunately this includes most general purpose networking options.

The default qemu mode in this case is usermode networking (or SLIRP). This is an IP stack implemented in userspace. It has many drawbacks: the VM can not easily be accessed by the outside world, the VM can talk to the outside world but only over a limited number of networking protocols, and it's very slow.

There is an option for qemu:///session VMs to use a privileged networking setup, via the setuid qemu-bridge-helper. Basically the host admin sets up a bridge, adds it to a whitelist at /etc/qemu/bridge.conf, then it's available for unprivileged qemu instances. By default on Fedora this contains 'virbr0' which is the default virtual network bridge provided by the system libvirtd instance, and what qemu:///system VMs typically use.
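Concretely, the whitelist is one 'allow' line per bridge (this example shows the Fedora default mentioned above):

```
# /etc/qemu/bridge.conf
allow virbr0
```

An unprivileged qemu can then be pointed at it with something like -netdev bridge,br=virbr0,id=net0 -device virtio-net-pci,netdev=net0, which invokes the setuid qemu-bridge-helper behind the scenes.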

gnome-boxes originally used usermode networking, but switched around the Fedora 21 timeframe to use virbr0 via qemu-bridge-helper. But that's dependent on virbr0 being set up correctly by the host admin, or via package install (the libvirt-daemon-config-network package on Fedora).

qemu:///session also misses some less common features that require host admin privileges, like host PCI device assignment. Also VM autostart doesn't work as expected because the session daemon itself isn't autostarted.

Apps have to decide for themselves which libvirtd mode to use, depending on their use case.

qemu:///system is completely fine for big apps like oVirt and OpenStack that require admin access to the virt hosts anyway.

virt-manager largely defaults to qemu:///system because that's what it has always done, and that default long precedes qemu-bridge-helper. We could switch but it would just trade one set of issues for another. virt-manager can be used with qemu:///session though (or any URI for that matter).

libguestfs uses qemu:///session since it avoids all the permission issues and the VM appliance doesn't really care about networking.

gnome-boxes prioritized desktop integration from day 1, so qemu:///session was the natural choice. But they've struggled with the networking issues in various forms.

Other apps are in a pickle: they would like to use qemu:///session to avoid the permission issues, but they also need to tweak the network setup. This is the case vagrant-libvirt currently finds itself in.

by Cole Robinson at April 29, 2016 04:11 PM

github 'hub' command line tool

I don't often need to contribute patches to code hosted on github; most of the projects I contribute to are either old school and don't use github for anything but mirroring their main git repo, or are small projects I entirely maintain so I don't submit pull-requests.

But when I do need to submit patches, github's hub tool makes my life a lot simpler: it allows forking repositories and submitting pull-requests very easily from the command line.

The 'hub' tool wants to be installed as an alias for 'git'. I originally tried that, but it made my bash prompt insanely slow since I show the current git branch and dirty state in my bash prompt. When I first encountered this, I filed a bug against the hub tool (with a bogus workaround), and nowadays it seems they have a disclaimer in their README.

Their recommended fix is to s/git/command git/g in the git prompt script, which doesn't work too well if you use the linked fedora suggestion of pointing at the package-installed file in /usr/share, so I avoid the alias. You can run 'hub' standalone, but instead I like to do:

$ sudo dnf install hub
$ sudo ln -s /usr/bin/hub /usr/libexec/git-core/git-hub

Then I can git hub fork and git hub pull-request all I want :)

by Cole Robinson at April 29, 2016 04:04 PM

April 22, 2016

Gerd Hoffmann

linux evdev input support in qemu 2.6

Another nice goodie coming in the qemu 2.6 release is support for linux evdev devices. qemu can now pick up input events directly from the devices, which is quite useful in case you don’t use gtk/sdl/vnc/spice for display output. The most common use case for this is vga pass-through. It’s used this way: qemu -object input-linux,id=$name,evdev=/dev/input/event$nr […]

by kraxel at April 22, 2016 12:01 PM

Fedora on Raspberry PI updates

Some updates for the Running Fedora on the Raspberry PI 2 article. There has been great progress on getting the Raspberry PI changes merged upstream in the last half year. u-boot has decent support meanwhile for the whole family, including 64bit support for the Raspberry PI 3. Raspberry PI 2 support has been merged upstream […]

by kraxel at April 22, 2016 05:57 AM

April 21, 2016

Gerd Hoffmann

fbida 2.12 released.

New fbida release is out. Big user-visible item is the new fbpdf application, a console pdf viewer using the poppler library. Can be used instead of the fbgs wrapper script. Download the bits here.

by kraxel at April 21, 2016 07:48 PM

latest updates from virtio-gpu department

It’s been a while since the last virtio-gpu status report. qemu 2.6 release is just around the corner, time for a writeup about what is coming. virgl support was added to qemu 2.5 already. It was supported only by the SDL2 and gtk UIs. Which is a nice start, but has the drawback that qemu […]

by kraxel at April 21, 2016 04:15 PM

April 05, 2016

Daniel Berrange

Improving QEMU security part 5: TLS support for NBD server & client

This blog is part 5 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

For many years now QEMU has had code to support the NBD protocol, either as a client or as a server. The qemu-nbd command line tool can be used to export a disk image over NBD to a remote machine, or connect it directly to the local kernel’s NBD block device driver. The QEMU system emulators also have a block driver that acts as an NBD client, allowing VMs to be run from NBD volumes. More recently the QEMU system emulators gained the ability to export the disks from a running VM as named NBD volumes. The latter is particularly interesting because it is the foundation of live migration with block device replication, allowing VMs to be migrated even if you don’t have shared storage between the two hosts.

In common with most network block device protocols, NBD has never offered any kind of data security capability. Administrators are recommended to run NBD over a private LAN/vLAN, use network layer security like IPSec, or tunnel it over some other kind of secure channel. While all these options are capable of working, none are very convenient to use because they require extra setup steps outside of the basic operation of the NBD server/clients.

Libvirt has long had the ability to tunnel the QEMU migration channel over its own secure connection to the target host, but this has not been extended to cover the NBD channel(s) opened when doing block migration. While it could theoretically be extended to cover NBD, it would not be ideal from a performance POV because the libvirtd architecture means that the TLS encryption/decryption for multiple separate network connections would be handled by a single thread. For fast networks (10-GigE), libvirt will quickly become the bottleneck on performance even if the CPU has native support for AES.

Thus it was decided that the QEMU NBD client & server would need to be extended to support TLS encryption of the data channel natively. Initially the thought was to just add a flag to the client/server code to indicate that TLS was desired and run the TLS handshake before even starting the NBD protocol. After some discussion with the NBD maintainers though, it was decided to explicitly define a way to support TLS in the NBD protocol negotiation phase. The primary benefit of doing this is to allow clearer error reporting to the user if the client connects to a server requiring use of TLS and the client itself does not support TLS, or vice-versa – i.e. instead of just seeing what appears to be a mangled NBD handshake and not knowing what it means, the client can clearly report “This NBD server requires use of TLS encryption”.

The extension to the NBD protocol was fairly straightforward. After the initial NBD greeting (where the client & server agree the NBD protocol variant to be used) the client is able to request a number of protocol options. A new option was defined to allow the client to request TLS support. If the server agrees to use TLS, then they perform a standard TLS handshake and the rest of the NBD protocol carries on as normal. To prevent downgrade attacks, if the NBD server requires TLS and the client does not request the TLS option, then it will respond with an error and drop the client. In addition if the server requires TLS, then TLS must be the first option that the client requests – other options are only permitted once the TLS session is active & the server will again drop the client if it tries to request non-TLS options first.

The QEMU NBD implementation was originally using plain POSIX sockets APIs for all its I/O. So the first step in enabling TLS was to update the NBD code so that it used the new general purpose QEMU I/O channel  APIs instead. With that done it was simply a matter of instantiating a new QIOChannelTLS object at the correct part of the protocol handshake and adding various command line options to the QEMU system emulator and qemu-nbd program to allow the user to turn on TLS and configure x509 certificates.

Running a NBD server using TLS can be done as follows:

$ qemu-nbd --object tls-creds-x509,id=tls0,endpoint=server,dir=/home/berrange/qemutls \
           --tls-creds tls0 /path/to/disk/image.qcow2
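For reference, the directory given via dir= must contain the credentials under fixed filenames — shown below with empty placeholder files just to illustrate the expected layout (real PEM contents come from your CA tooling):

```shell
# A tls-creds-x509 server endpoint loads these fixed filenames from dir=
mkdir -p qemutls
touch qemutls/ca-cert.pem qemutls/server-cert.pem qemutls/server-key.pem
ls qemutls
# lists: ca-cert.pem, server-cert.pem, server-key.pem
```

A client endpoint instead looks for ca-cert.pem, client-cert.pem and client-key.pem in its directory.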

On the client host, a QEMU guest can then be launched, connecting to this NBD server:

$ qemu-system-x86_64 -object tls-creds-x509,id=tls0,endpoint=client,dir=/home/berrange/qemutls \
                     -drive driver=nbd,host=theotherhost,port=10809,tls-creds=tls0 \
                     ...other QEMU options...

Finally, to enable support for live migration with block device replication, the QEMU system monitor APIs gained support for a new parameter when starting the internal NBD server. All of this code was merged in time for the forthcoming QEMU 2.6 release. Work has not yet started to enable TLS with NBD in libvirt, as there is little point securing the NBD protocol streams until the primary live migration stream is using TLS. More on live migration in a future blog post, as that’s going to be QEMU 2.7 material now.

In this blog series:

by Daniel Berrange at April 05, 2016 09:35 AM

April 04, 2016

Daniel Berrange

Improving QEMU security part 4: generic I/O channel framework to simplify TLS

This blog is part 4 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

Part 2 of this series described the creation of a general purpose API for simplifying TLS session handling inside QEMU, particularly with a view to hiding the complexity of the handshake and x509 certificate validation. The VNC server was converted to use this API, which was a big benefit, but there was still a need to add extra code to support TLS in the I/O paths. Specifically, anywhere the VNC server would read/write on the network socket had to be made TLS aware, so that it would use the plain POSIX send/recv functions vs the TLS wrapped send/recv functions as appropriate. For the VNC server it is actually even more complex, because it also supports websockets, so each I/O point had to choose between plain, TLS, websockets and websockets plus TLS. As TLS support extends to other areas of QEMU this pattern would continue to complicate I/O paths in each backend.

Clearly there was a need for some form of I/O channel abstraction that would allow TLS to be enabled in each QEMU network backend without having to add conditional logic at every I/O send/recv call. Looking around at the QEMU subsystems that would ultimately need TLS support showed a variety of approaches currently in use.

The GIOChannel APIs used by the character device backend theoretically provide an extensible framework for I/O and there is even a TLS implementation of the GIOChannel API. The two limitations of GIOChannel for QEMU though are that it does not support scatter / gather / vectored I/O APIs and that it does not support file descriptor passing over UNIX sockets. The latter is not a show stopper, since you can still access the socket handle directly to send/recv file descriptors. The lack of vectored I/O though would be a significant issue for migration and NBD servers where performance is very important. While we could potentially extend GIOChannel to add support for new callbacks to do vectored I/O, by the time you’ve done that most of the original GIOChannel code isn’t going to be used, limiting the benefit of starting from GIOChannel as a base.

It is also clear that GIOChannel is really not something that is going to get any further development from the GLib maintainers, since their focus is on the new and much better GIO library. This supports file descriptor passing and TLS encryption, but again lacks support for vectored I/O. The bigger show stopper though is that to get access to the TLS support requires depending on a version of GLib that is much newer than what QEMU is willing to use.

The existing QEMUFile APIs could form the basis of a general purpose I/O channel system if they were untangled & extracted from the migration codebase. One limitation is that QEMUFile only concerns itself with I/O, not the initial channel establishment which is left to the migration core code to deal with, so it did not actually provide very much of a foundation on which to build.

After looking through the various approaches in use in QEMU, and potentially available from GLib, it was decided that QEMU would be best served by creating a new general purpose I/O channel API. Thus a new QEMU subsystem was added in the io/ and include/io/ directories to provide a set of classes for I/O over a variety of different data channels. The core design aims were to use the QEMU object model (QOM) framework to provide a standard pattern for extending / subclassing, use the QEMU Error object for all error reporting, file  descriptor passing, main loop watch integration and coroutine integration. Overall the new design took many elements of its design from GIOChannel and the GIO library, and blended them with QEMU’s own codebase design. The initial goal was to provide enough functionality to convert the VNC server as a proof of concept. To this end the following classes were created

To avoid making this blog posting even larger, I won’t go into details of these (the code is available in QEMU git for anyone who is really interested), but instead illustrate it with a comparison of the VNC code before & after. First consider the original code in the VNC server for dealing with writing a buffer of data over a plain socket or websocket, with or without TLS enabled. The following functions existed in the VNC server code to handle all the combinations:

ssize_t vnc_tls_push(const char *buf, size_t len, void *opaque)
{
    VncState *vs = opaque;
    ssize_t ret;

 retry:
    ret = send(vs->csock, buf, len, 0);
    if (ret < 0) {
        if (errno == EINTR) {
            goto retry;
        }
        return -1;
    }
    return ret;
}

ssize_t vnc_client_write_buf(VncState *vs, const uint8_t *data, size_t datalen)
{
    ssize_t ret;
    int err = 0;

    if (vs->tls) {
        ret = qcrypto_tls_session_write(vs->tls, (const char *)data, datalen);
        if (ret < 0) {
            err = errno;
        }
    } else {
        ret = send(vs->csock, (const void *)data, datalen, 0);
        if (ret < 0) {
            err = socket_error();
        }
    }
    return vnc_client_io_error(vs, ret, err);
}

long vnc_client_write_ws(VncState *vs)
{
    vncws_encode_frame(&vs->ws_output, vs->output.buffer, vs->output.offset);
    return vnc_client_write_buf(vs, vs->ws_output.buffer, vs->ws_output.offset);
}

static void vnc_client_write_locked(void *opaque)
{
    VncState *vs = opaque;

    if (vs->encode_ws) {
        vnc_client_write_ws(vs);
    } else {
        vnc_client_write_plain(vs);
    }
}

After conversion to use the new QIOChannel classes for sockets, websockets and TLS, all of the VNC server code above turned into

ssize_t vnc_client_write_buf(VncState *vs, const uint8_t *data, size_t datalen)
{
    Error *err = NULL;
    ssize_t ret;

    ret = qio_channel_write(vs->ioc, (const char *)data, datalen, &err);
    return vnc_client_io_error(vs, ret, &err);
}

It is clearly a major win for maintainability of the VNC server code to have all the TLS and websockets I/O support handled by the QIOChannel APIs. There is no impact to supporting TLS and websockets anywhere in the VNC server I/O paths now. The only place where there is new code is the point where the TLS or websockets session is initiated and this now only requires instantiation of a suitable QIOChannel subclass and registering a callback to be run when the session handshake completes (or fails).

tls = qio_channel_tls_new_server(vs->ioc, vs->vd->tlscreds, vs->vd->tlsaclname, &err);
if (!tls) {
    return 0;
}

vs->ioc = QIO_CHANNEL(tls);

qio_channel_tls_handshake(tls, vnc_tls_handshake_done, vs, NULL);

Notice that the code is simply replacing the current QIOChannel handle ‘vs->ioc’ with an instance of the QIOChannelTLS class. The vnc_tls_handshake_done method is invoked when the TLS handshake is complete or failed and lets the VNC server continue with the next part of its authentication protocol, or drop the client connection as appropriate. So adding TLS session support to the VNC server comes in at about 10 lines of code now.

In this blog series:

by Daniel Berrange at April 04, 2016 11:26 AM

April 01, 2016

Daniel Berrange

Improving QEMU security part 3: securely passing in credentials

This blog is part 3 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

When configuring a virtual machine, there are a number of places where QEMU requires some form of security sensitive credentials, typically passwords or encryption keys. Historically QEMU has had no standard approach for getting these credentials from the user, so things have grown in an ad hoc manner with predictably awful results.

If the VNC server is configured to use basic VNC authentication, then it requires a password to be set. When I first wrote patches to add password auth to QEMU’s VNC server it was clearly not desirable to expose the password on the command line, so when configuring VNC you just request password authentication be enabled using -vnc,password and then have to use the monitor interface to set the actual password value via “change vnc password“. Until a password has been set via the monitor, the VNC server should reject all clients, except that we’ve accidentally broken this in the past, allowing clients when no server password is set :-(

The qcow & qcow2 disk image formats support use of AES for encryption (remember this is horribly broken) and so there needs to be a way to provide the decryption password for this. Originally you had to wait for QEMU to prompt for the disk password on the interactive console. This clearly doesn’t work very nicely when QEMU is being managed by libvirt, so we added another monitor command which allows apps to provide the disk password upfront, avoiding the need to prompt.

Fast forward a few years and QEMU’s block device layer gained support for various network protocols including iSCSI, RBD, FTP and HTTP(S). All of these potentially require authentication and thus a password needs to be provided to QEMU. The CURL driver for ftp, http(s) simply skipped support for authentication since there was no easy way to provide the passwords securely. Sadly, the iSCSI and RBD drivers simply decided to allow the password to be provided on the command line. Hence the passwords for RBD and iSCSI are visible in plain text in the process listing and in libvirt’s QEMU log files, which often get attached to bug reports, which has resulted in a CVE being filed against libvirt.

I had an intention to add support for the LUKS format in the QEMU block layer, which will also require passwords to be provided securely to QEMU, and it would be desirable if the x509 keys provided to QEMU could be encrypted too.

Looking at this mess and the likely future requirements, it was clear that QEMU was in desperate need of a standard mechanism for securely receiving credentials from the user / management app (libvirt). There are a variety of channels via which credentials can be theoretically passed to QEMU:

As mentioned previously, using command line arguments or environment variables is not secure if the credential is passed in plain text, because they are visible in the process listing and log files.

It would be possible to create a plain file on disk, write each password to it, and use file permissions to ensure only QEMU can read it. Using files is not too bad as long as your host filesystem is on encrypted storage. It has a minor complexity of having to dynamically create files on the fly each time you want to hotplug a new device using a password. Most of these problems can be avoided by using an anonymous pipe, but this is more complicated for end users, because for hotplugging devices it would require passing file descriptors over a UNIX socket.

Finally the monitor provides a decent secure channel which users / mgmt apps will typically already have open via a UNIX socket. There is a chicken & egg problem with it though, because the credentials are often required at initial QEMU startup when parsing the command line arguments, and the monitor is not available that early.

After considering all the options, it was decided that using plain files and/or anonymous pipes to pass credentials would be the most desirable approach. The qemu_open() method has a convenient feature whereby there is a special path prefix that allows mgmt apps to pass a file descriptor across instead of a regular filename. To enable reuse of the existing -object command line argument and object_add monitor commands for defining credentials, the QEMU object model framework (QOM) was used to define a ‘secret’ object class. The ‘secret‘ class has a ‘file‘ property naming the file containing the credential. For example it could be used as follows:

 # echo "letmein" > passwd.txt
 # $QEMU -object secret,id=sec0,file=passwd.txt
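Alternatively, a sketch of the fd-passing variant using qemu_open()'s special path prefix — the fd number and fdset ID here are illustrative — so the secret file never needs a name visible to other processes:

```shell
# Open the secret on an inherited fd, hand the fd to QEMU via -add-fd,
# then reference it with the /dev/fdset/NN syntax that qemu_open()
# understands (fd 10 and set 1 are arbitrary choices).
exec 10< passwd.txt
$QEMU -add-fd fd=10,set=1 \
      -object secret,id=sec0,file=/dev/fdset/1
```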

Having written this, I realized that it would be possible to allow passwords to be provided directly via the command line if we allowed secret data to be encrypted with a master key. The idea would be that when a QEMU process is first started, it gets given a new unique AES key via a file. The credentials for individual disks / servers would be encrypted with the master key and then passed directly on the command line. The benefit of this is that the mgmt app only needs to deal with a single file on disk with a well defined lifetime.

First a master key is generated and saved to a file in base64 format

 # openssl rand -base64 32 > master-key.b64

Let's say we have two passwords we need to give to QEMU. We will thus need two initialization vectors:

 # openssl rand -base64 16 > sec0-iv.b64
 # openssl rand -base64 16 > sec1-iv.b64

Each password is now encrypted using the master key and its respective initialization vector

 # SEC0=$(printf "letmein" |
          openssl enc -aes-256-cbc -a \
             -K $(base64 -d master-key.b64 | hexdump -v -e '/1 "%02X"') \
             -iv $(base64 -d sec0-iv.b64 | hexdump -v -e '/1 "%02X"'))
 # SEC1=$(printf "1234567" |
          openssl enc -aes-256-cbc -a \
             -K $(base64 -d master-key.b64 | hexdump -v -e '/1 "%02X"') \
             -iv $(base64 -d sec1-iv.b64 | hexdump -v -e '/1 "%02X"'))

Finally when QEMU is launched, three secrets are defined, the first gives the master key via a file, and the others provide the two encrypted user passwords

 # $QEMU \
      -object secret,id=secmaster,format=base64,file=master-key.b64 \
      -object secret,id=sec0,keyid=secmaster,format=base64,\
              data=$SEC0,iv=$(<sec0-iv.b64) \
      -object secret,id=sec1,keyid=secmaster,format=base64,\
              data=$SEC1,iv=$(<sec1-iv.b64) \
      ...other args using the secrets...
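Before launching QEMU it is worth sanity-checking the ciphertext: decrypting $SEC0 with the same master key and IV must give back the original password. A sketch reusing the files created in the steps above:

```shell
# Decrypt the base64 ciphertext the same way QEMU does internally:
# AES-256-CBC with the raw master key and the per-secret IV.
printf "%s\n" "$SEC0" |
    openssl enc -d -aes-256-cbc -a \
       -K $(base64 -d master-key.b64 | hexdump -v -e '/1 "%02X"') \
       -iv $(base64 -d sec0-iv.b64 | hexdump -v -e '/1 "%02X"')
# prints: letmein
```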

Now that we have a way to securely get credentials into QEMU, there just remains the task of associating the secrets with the things in QEMU that need to use them. The TLS credentials object added previously required the x509 server key to be provided in an unencrypted PEM file. The tls-creds-x509 object gained a new property “passwordid” which provides the ID of a secret object that defines the password to use for decrypting the x509 key.

 # $QEMU \
      -object secret,id=secmaster,format=base64,file=master-key.b64 \
      -object secret,id=sec0,keyid=secmaster,format=base64,\
              data=$SEC0,iv=$(<sec0-iv.b64) \
      -object tls-creds-x509,id=tls0,dir=/home/berrange/qemutls,endpoint=server,passwordid=sec0 \
      ...other QEMU options...
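The encrypted server key itself can be produced with standard tooling. A hedged sketch using openssl (certtool works too; the filenames are illustrative, and "letmein" matches the password carried by the sec0 secret in the example above):

```shell
# Generate a throwaway RSA key, then re-write it encrypted with the
# same password that the sec0 secret object supplies to QEMU.
openssl genrsa -out server-key.pem 2048
openssl rsa -aes256 -in server-key.pem \
        -out server-key-enc.pem -passout pass:letmein
grep ENCRYPTED server-key-enc.pem   # a match confirms the key is encrypted
```

The encrypted PEM then replaces the plain server-key.pem in the credentials directory.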

Aside from adding support for encrypted x509 certificates, the RBD, iSCSI and CURL block drivers in QEMU have all been updated to allow authentication passwords to be provided using the ‘secret‘ object type. Libvirt will shortly be gaining support to use this facility which will address the long standing problem of RBD/ISCSI passwords being visible in clear text in the QEMU process command line arguments. All the enhancements described in this posting have been merged for the forthcoming QEMU 2.6.0 release so will soon be available to users. The corresponding enhancements to libvirt to make use of these features are under active development.
In this blog series:

by Daniel Berrange at April 01, 2016 03:42 PM

Improving QEMU security part 2: generic TLS support

This blog is part 2 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

After the initial consolidation of cryptographic APIs into one area of the QEMU codebase, it was time to move onto step two, which is where things start to directly benefit the users.

With the original patches to support TLS in the VNC server, configuration of the TLS credentials was done as part of the -vnc command line argument. The downside of such an approach is that as we add support for TLS to other arguments like -chardev, the user would have to supply the same TLS information in multiple places. So it was necessary to isolate the TLS credential configuration from the TLS session handling code, enabling a single set of TLS credentials to be associated with multiple network servers (they can of course each have a unique set of credentials if desired).

To achieve this, the new code made use of QEMU’s object model framework (QOM) to define a general TLS credentials interface and then provide implementations for anonymous credentials (totally insecure but needed for back compat with existing QEMU features) and for x509 certificates (the preferred & secure option). There are now two QOM types tls-creds-anon and tls-creds-x509 that can be created on the command line via QEMU’s -object argument, or in the monitor using the ‘object_add’ command. The VNC server was converted to use the new TLS credential objects for its configuration, so whereas in QEMU 2.4 VNC with TLS would be configured using something like

-vnc 0.0.0.0:0,tls,x509verify=/path/to/certificates/directory


As of QEMU 2.5 the preferred approach is to use the new credential objects

-object tls-creds-x509,id=tls0,endpoint=server,dir=/path/to/certificates/directory

The old CLI syntax is still supported, but gets translated internally to create the right TLS credential objects. By default the x509 credentials will require that the client provide a certificate, which is equivalent to the traditional ‘x509verify‘ option for VNC. To remove the requirement for client certs, the ‘verify-peer=no‘ option can be given when creating the x509 credentials object.

Generating correct x509 certificates is something that users often struggle with and when getting it wrong the failures are pretty hard to debug – usually just resulting in an unhelpful “handshake failed” type error message. To help troubleshoot problems, the new x509 credentials code in QEMU will sanity check all certificates it loads prior to using them. For example, it will check that the CA certificate has basic constraints set to indicate usage as a CA, catching problems where people give a server/client cert instead of a CA cert. Likewise it will check that the server certificate has basic constraints set to indicate usage in a server. It’ll check that the server certificate is actually signed by the CA that is provided and that none of the certs have expired already. These are all things that the client would check when it connects, so we’re not adding / removing security here, just helping administrators to detect misconfiguration of their TLS certificates as early as possible. These same checks have been done in libvirt for several years now and have been very beneficial in reducing the bugs reports we get related to misconfiguration of TLS.
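The same constraints can be inspected by hand with openssl, which is handy when debugging a handshake failure. A sketch using a throwaway self-signed CA (the subject and lifetime are illustrative; real deployments use certtool or a proper CA):

```shell
# Create a self-signed test CA; with a default config, "openssl req -x509"
# marks the certificate with CA:TRUE basic constraints.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
        -keyout ca-key.pem -out ca-cert.pem -subj "/CN=Test CA"

# Approximations of the sanity checks described above:
openssl x509 -in ca-cert.pem -noout -text | grep "CA:TRUE"
openssl x509 -in ca-cert.pem -noout -checkend 0 && echo "not yet expired"
```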

With the generic TLS credential objects created, the second step was to create a general purpose API for handling the TLS protocol inside QEMU, especially simplifying the handshake, which requires a non-negligible amount of code. The TLS session APIs were designed such that they are independent of the underlying data transport, since while the VNC server always runs over TCP/UNIX sockets, other QEMU backends may wish to run TLS over non-socket based transports. Overall the API for dealing with TLS session establishment in QEMU can be used as follows:

  static ssize_t mysock_send(const char *buf, size_t len,
                             void *opaque)
  {
      int fd = GPOINTER_TO_INT(opaque);
      return write(fd, buf, len);
  }

  static ssize_t mysock_recv(char *buf, size_t len,
                             void *opaque)
  {
      int fd = GPOINTER_TO_INT(opaque);
      return read(fd, buf, len);
  }

  static int mysock_run_tls(int sockfd,
                            QCryptoTLSCreds *creds,
                            Error **errp)
  {
      QCryptoTLSSession *sess;

      sess = qcrypto_tls_session_new(creds, NULL, NULL,
                                     QCRYPTO_TLS_CREDS_ENDPOINT_SERVER, errp);
      if (sess == NULL) {
          return -1;
      }

      qcrypto_tls_session_set_callbacks(sess, mysock_send, mysock_recv,
                                        GINT_TO_POINTER(sockfd));

      while (1) {
          if (qcrypto_tls_session_handshake(sess, errp) < 0) {
              return -1;
          }
          switch (qcrypto_tls_session_get_handshake_status(sess)) {
          case QCRYPTO_TLS_HANDSHAKE_COMPLETE:
              if (qcrypto_tls_session_check_credentials(sess, errp) < 0) {
                  return -1;
              }
              goto done;
          case QCRYPTO_TLS_HANDSHAKE_RECVING:
              ...wait for GIO_IN event on fd...
              break;
          case QCRYPTO_TLS_HANDSHAKE_SENDING:
              ...wait for GIO_OUT event on fd...
              break;
          }
      }

   done:
      ....send/recv payload data on sess...
  }

The particularly important thing to note with this example is how the network service (eg VNC, NBD, chardev) that is enabling TLS no longer has to have any knowledge of x509 certificates. They are loaded automatically when the user provides the ‘-object tls-creds-x509‘ argument to QEMU, and they are validated automatically by the call to qcrypto_tls_session_handshake(). This makes it easy to add TLS support to other network backends in QEMU with minimal overhead, significantly lowering the risk of screwing up the security. Since it already had TLS support, the VNC server was converted to use this new TLS session API instead of using the gnutls APIs directly. Once again this had a very positive impact on maintainability of the VNC code, since it allowed countless #ifdef CONFIG_GNUTLS conditionals to be removed, which clarified the code flow significantly. This work on TLS and the VNC server was all merged for the 2.5 release of QEMU, so is already available to benefit users. There is corresponding libvirt work still to be done to convert over to the new command line syntax for configuring TLS with QEMU.

In this blog series:

by Daniel Berrange at April 01, 2016 08:51 AM

March 31, 2016

Daniel Berrange

Improving QEMU security part 1: crypto code consolidation

This blog is part 1 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security related features.

Many years ago I wrote patches for QEMU to enable use of TLS with the VNC server via the VeNCrypt protocol extension. In those patches I modified the VNC server code to directly call out to gnutls in various places to perform the TLS handshake, validate certificates and encrypt/decrypt data. Fast-forward 8 years and I’m once again looking at QEMU with a view to adding TLS encryption support to many other QEMU network services, in particular character device backends, migration and NBD. The TLS certificate handling code is complex enough that I really didn’t fancy repeating it in multiple different areas of the QEMU codebase, so I started thinking about extracting the TLS code from the VNC server for purposes of easier reuse.

Aside from VNC with TLS, QEMU uses cryptographic routines in a number of other areas: AES for qcow2 native encryption (which is horribly broken btw), single DES (yes, really single DES) in the VNC server for the awful VNC password authentication, SHA256 hashing in the quorum block driver and SHA1 hashing in the VNC websockets handshake, and AES in many of its CPU emulation backends for the various architecture specific AES acceleration instructions. QEMU actually has its own built-in impls of AES and DES that it uses, rather than calling out to a 3rd party crypto library, since the emulated CPU instructions need to run distinct internal steps of the AES algorithm, not merely consume the final output.

Looking to the future, as well as the expanded use of TLS, it was clear that use of cryptography will only ever increase in QEMU. For example, support of a LUKS encryption driver in the block layer will need access to countless encryption ciphers and hashes. It would be possible to get access to ciphers and hashes via the gnutls APIs, but sadly it doesn’t expose all the possible algorithms supported by the underlying libraries it uses. For added fun gnutls can be using either libgcrypt or nettle depending on what version of gnutls you have. So if QEMU wanted to get access to algorithms not exposed by gnutls, it would ideally have to support use of two different libraries. It was clear that QEMU would benefit from a consolidated internal API for dealing with anything related to encryption, to isolate the main bulk of the code from needing to directly deal with whatever 3rd party crypto libraries QEMU linked to. Thus I created a new top level directory in the QEMU codebase crypto/ and associated headers include/crypto/ which will contain all the code for interfacing with gnutls, libgcrypt, nettle, and whatever other cryptographic libraries we might need in the future. First of all the existing AES and DES implementations were moved into this directory. Then I created APIs for dealing with hash and cipher algorithms.

The cipher APIs are written to preferentially use either nettle or libgcrypt depending on which one gnutls links to, though this can be overridden via arguments to configure to force a particular choice. For those who really want to build without these 3rd party libraries, the APIs can be built to use the internal AES or DES impls as a fallback. A short example of encrypting data using AES-128 in CBC mode would look like this:

  QCryptoCipher *cipher;
  uint8_t *key = ....;
  size_t keylen = 16;
  uint8_t *iv = ....;

  if (!qcrypto_cipher_supports(QCRYPTO_CIPHER_ALG_AES_128)) {
     error_setg(errp, "Feature <blah> requires AES cipher support");
     return -1;
  }

  cipher = qcrypto_cipher_new(QCRYPTO_CIPHER_ALG_AES_128,
                              QCRYPTO_CIPHER_MODE_CBC,
                              key, keylen,
                              errp);
  if (!cipher) {
     return -1;
  }

  if (qcrypto_cipher_set_iv(cipher, iv, keylen, errp) < 0) {
     return -1;
  }

  if (qcrypto_cipher_encrypt(cipher, rawdata, encdata, datalen, errp) < 0) {
     return -1;
  }

The hash algorithms still use the gnutls APIs, though that will change in the 2.7 series to directly use libgcrypt or nettle. The hash APIs are slightly simpler, since QEMU doesn’t (currently at least) need the ability to incrementally hash data, so the current APIs just support one-shot hashing of buffers.

  char *digest = NULL;

  if (!qcrypto_hash_supports(QCRYPTO_HASH_ALG_SHA256)) {
     error_setg(errp, "Feature <blah> requires sha256 hash support");
     return -1;
  }

  if (qcrypto_hash_digest(QCRYPTO_HASH_ALG_SHA256,
                          buf, len, &digest,
                          errp) < 0) {
     return -1;
  }

The qcrypto_hash_digest() method outputs the digest as printable hex characters. There is also qcrypto_hash_bytes() which returns the raw bytes, and qcrypto_hash_base64() which base64 encodes the result. As well as passing a single buffer, it is possible to provide a list of buffers in a ‘struct iovec’.
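The different output forms map onto familiar tooling; a sketch illustrating them with coreutils/openssl rather than QEMU’s internal API:

```shell
# Hex form (the format qcrypto_hash_digest produces):
printf "hello" | sha256sum | cut -d' ' -f1
# -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

# Base64 form (the format qcrypto_hash_base64 produces):
printf "hello" | openssl dgst -sha256 -binary | base64
# -> LPJNul+wow4m6DsqxbninhsWHlwfp0JecwQzYpOLmCQ=
```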

The calls to qcrypto_cipher_supports() and qcrypto_hash_supports() are entirely optional, since errors will be raised by other methods if needed, but they offer the opportunity to emit friendly error messages. For example, the VNC server can explicitly say which feature it can't support due to missing DES support. Just converting the existing QEMU code to use these new cipher/hash APIs already had significant benefit, because it allowed many #ifdef CONFIG_GNUTLS statements to be removed from across the codebase, particularly the VNC server. The other benefit is that the internal AES and DES implementations are no longer used by any QEMU code except the CPU instruction emulation, which is not even used when running with KVM. So modern KVM-accelerated guests will be using well supported, audited & certified cipher & hash implementations, which is often important to enterprise distribution vendors. This first stage of consolidation was completed and merged for the QEMU 2.4 release series, but it has been invisible to users, mostly just benefiting the QEMU & distro maintainers.


by Daniel Berrange at March 31, 2016 06:06 PM

March 25, 2016

Richard Jones

libguestfs appliance boot in under 1s

$ time LIBGUESTFS_BACKEND=direct LIBGUESTFS_HV=~/d/qemu/x86_64-softmmu/qemu-system-x86_64 guestfish -a /dev/null run

real	0m0.966s
user	0m0.623s
sys	0m0.281s

However I had to patch qemu to enable DMA loading of the kernel and initrd.

by rich at March 25, 2016 03:50 PM

March 24, 2016

Gerd Hoffmann

fbida 2.11 released

New fbida release is out. Needed something lightweight for drm testing. So I’ve decided to give the fbi display code an overhaul: Untangled the console switching code from the fbdev code. Added support for drm, which is also the new default. Missing console switching support is not a fatal error any more. The last item […]

by kraxel at March 24, 2016 03:13 PM

March 21, 2016

Richard Jones

Getting the libguestfs appliance boot time down to 1.2s

libguestfs can securely mount any disk image, but to do this it requires a small appliance to be run. The appliance is a very cut down Linux distro, but it still takes time to boot. For a while that time has floated around 3-5 seconds. This excludes libguestfs from some important use cases — one being the ability to monitor 1000s of VMs every few minutes (simple maths: 1000×3 > 5×60, so you cannot monitor 1000 VMs every 5 minutes without using a lot of parallel appliances).

Last year Intel announced Clear Containers. You may be forgiven for being confused (unclear?) by what Clear Containers actually is, but Intel’s demo is quite neat. (You can run these commands as non-root, and at time of writing they won’t damage your machine.)

$ wget
$ mv clear-containers-demo.tar.xz clear-containers-demo.tar.bz2
$ bunzip clear-containers-demo.tar.bz2
$ cd containers
$ bash ./

It’s a complete Linux guest that boots in a fraction of a second. I take that as a challenge!

The first step is to have a good idea what all the parts are doing and what is taking the time. Booting an appliance involves several actors — qemu, BIOS, the guest kernel — and without being able to measure how much time each one spends doing things, it’s rather hard to say what needs work or if we’re making improvements. This was why I spent last week unsuccessfully looking at QEMU tracing. I have now settled on a simpler approach which is to time boot messages. The new boot analysis program produces quite clear output:

Warming up the libguestfs cache ...                                                                              
Running the tests in 5 passes ...                                                                                
    pass 1: 798 events collected in 1347184178 ns                                                                
    pass 2: 798 events collected in 1324153548 ns                                                                
    pass 3: 798 events collected in 1342126721 ns                                                                
    pass 4: 798 events collected in 1279500931 ns                                                                
    pass 5: 798 events collected in 1317457653 ns                                                                
Analyzing the results ...                                                                                        
0.000000s: ▲ run mean:1.321973s ±24.0ms (100.0%)                                                              
0.000065s: │ ▲ supermin:build mean:0.010523s ±0.1ms (0.8%)                                                  
           │ │                                                                                               
0.010588s: │ ▼                                                                                               
0.010612s: │ ▲ qemu:feature-detect mean:0.149075s ±4.2ms (11.3%)                                            
           │ │                                                                                               
0.159687s: │ ▼                                                                                               
0.161412s: │ ▲ ▲ qemu mean:1.160562s ±22.6ms (87.8%) qemu:overhead mean:0.123142s ±4.5ms (9.3%)          
           │ │ │                                                                                           
0.263153s: │ │ │ ▲ seabios mean:0.241488s ±2.8ms (18.3%)                                                
           │ │ │ │                                                                                       
0.284554s: │ │ ▼ │                                                                                       
0.284554s: │ │   │ ▲ bios:overhead mean:0.220087s ±2.8ms (16.6%)                                        
           │ │   │ │                                                                                     
0.504641s: │ │   ▼ ▼                                                                                     
0.504641s: │ │ ▲ ▲ kernel mean:0.817332s ±21.4ms (61.8%) kernel:overhead mean:0.374896s ±5.2ms (28.4%) 
           │ │ │ │                                                                                       
0.879537s: │ │ │ ▼                                                                                       
0.879537s: │ │ │ ▲ supermin:mini-initrd mean:0.086014s ±7.9ms (6.5%)                                    
           │ │ │ │                                                                                       
0.881863s: │ │ │ │ ▲ supermin: internal insmod crc32-pclmul.ko mean:0.001399s ±0.1ms (0.1%)           
           │ │ │ │ │                                                                                   
0.883262s: │ │ │ │ ▼                                                                                   
0.883262s: │ │ │ │ ▲ supermin: internal insmod crc32c-intel.ko mean:0.000226s ±0.5ms (0.0%)           
0.883488s: │ │ │ │ ▼                                                                                   
0.883488s: │ │ │ │ ▲ supermin: internal insmod crct10dif-pclmul.ko mean:0.000882s ±0.4ms (0.1%)       
0.884370s: │ │ │ │ ▼                                                                                   
0.884370s: │ │ │ │ ▲ supermin: internal insmod crc32.ko mean:0.001121s ±0.0ms (0.1%)                  
           │ │ │ │ │                                                                                   
0.885490s: │ │ │ │ ▼                                                                                   
0.885490s: │ │ │ │ ▲ supermin: internal insmod virtio.ko mean:0.001634s ±0.5ms (0.1%)                 
           │ │ │ │ │                                                                                   
0.887124s: │ │ │ │ ▼                                                                                   
0.887124s: │ │ │ │ ▲ supermin: internal insmod virtio_ring.ko mean:0.000581s ±0.7ms (0.0%)            
0.887706s: │ │ │ │ ▼                                                                                   
0.887706s: │ │ │ │ ▲ supermin: internal insmod virtio_blk.ko mean:0.001115s ±0.0ms (0.1%)             
           │ │ │ │ │                                                                                   
0.888821s: │ │ │ │ ▼                                                                                   
0.888821s: │ │ │ │ ▲ supermin: internal insmod virtio-rng.ko mean:0.000884s ±0.4ms (0.1%)             
0.889705s: │ │ │ │ ▼                                                                                   
0.889705s: │ │ │ │ ▲ supermin: internal insmod virtio_console.ko mean:0.001923s ±0.4ms (0.1%)         
           │ │ │ │ │                                                                                   
0.891627s: │ │ │ │ ▼                                                                                   
0.891627s: │ │ │ │ ▲ supermin: internal insmod virtio_net.ko mean:0.001483s ±0.4ms (0.1%)             
           │ │ │ │ │                                                                                   
0.893111s: │ │ │ │ ▼                                                                                   
0.893111s: │ │ │ │ ▲ supermin: internal insmod virtio_scsi.ko mean:0.000686s ±0.6ms (0.1%)            
0.893797s: │ │ │ │ ▼                                                                                   
0.893797s: │ │ │ │ ▲ supermin: internal insmod virtio_balloon.ko mean:0.000663s ±0.5ms (0.1%)         
0.894460s: │ │ │ │ ▼                                                                                   
0.894460s: │ │ │ │ ▲ supermin: internal insmod virtio_input.ko mean:0.000875s ±0.4ms (0.1%)           
0.895336s: │ │ │ │ ▼                                                                                   
0.895336s: │ │ │ │ ▲ supermin: internal insmod virtio_mmio.ko mean:0.001097s ±0.0ms (0.1%)            
           │ │ │ │ │                                                                                   
0.896433s: │ │ │ │ ▼                                                                                   
0.896433s: │ │ │ │ ▲ supermin: internal insmod virtio_pci.ko mean:0.050700s ±7.8ms (3.8%)             
           │ │ │ │ │                                                                                   
0.947133s: │ │ │ │ ▼                                                                                   
0.947133s: │ │ │ │ ▲ supermin: internal insmod crc-ccitt.ko mean:0.001144s ±0.6ms (0.1%)              
           │ │ │ │ │                                                                                   
0.948277s: │ │ │ │ ▼                                                                                   
0.948277s: │ │ │ │ ▲ supermin: internal insmod crc-itu-t.ko mean:0.000001s ±0.0ms (0.0%)              
0.948278s: │ │ │ │ ▼                                                                                   
0.948278s: │ │ │ │ ▲ supermin: internal insmod crc8.ko mean:0.001368s ±0.3ms (0.1%)                   
           │ │ │ │ │                                                                                   
0.949646s: │ │ │ │ ▼                                                                                   
0.949646s: │ │ │ │ ▲ supermin: internal insmod libcrc32c.ko mean:0.001043s ±0.9ms (0.1%)              
           │ │ │ │ │                                                                                   
0.950689s: │ │ │ │ ▼                                                                                   
           │ │ │ │                                                                                       
0.965551s: │ │ │ ▼                                                                                       
0.965551s: │ │ │ ▲ ▲ /init mean:0.318045s ±18.0ms (24.1%) bash:overhead mean:0.015855s ±3.1ms (1.2%) 
           │ │ │ │ │                                                                                   
0.981407s: │ │ │ │ ▼                                                                                   
           │ │ │ │                                                                                       
1.283597s: │ │ │ ▼                                                                                       
1.283597s: │ │ │ ▲ guestfsd mean:0.019151s ±1.9ms (1.4%)                                                
           │ │ │ │                                                                                       
1.294818s: │ │ │ │ ▲ shutdown mean:0.027156s ±4.1ms (2.1%)                                            
           │ │ │ │ │                                                                                   
1.302747s: │ │ │ ▼ │                                                                                   
           │ │ │   │                                                                                     
1.321973s: │ │ ▼   │                                                                                     
1.321973s: ▼ ▼     ▼                                                                                       

Armed with this analysis I made a good start on reducing the boot time. It’s now down to 1.2s (on my laptop) and there is scope for sub-second boots.

Some of the things I’ve changed to get to 1.2s:

Some of the things that may reduce boot times further:

By the way: even if you never use libguestfs but you do use virtualized Linux, this benefits you too.

by rich at March 21, 2016 06:46 PM

March 18, 2016

Amit Shah

FOSSASIA 2016 talk: Virtualization and Containers

I did a talk earlier today at the wonderful venue of the Science Centre Singapore at FOSSASIA 2016, titled ‘Virtualization and Containers.’ Over the last few years, several “cool new” and “next big thing” technologies have been introduced to the world, and these buzzwords leave people all dazed and confused.

One of my aims for this talk was to introduce people to the concepts behind virtualization and containers, explain that these aren’t really new technologies, and why there’s so much interest in them of late.

I also think there’s a lot of misinformation spread around these topics, so this was also an attempt to set some facts straight.

The slides are here, and I will post an update with the link to the video.

by Amit Shah at March 18, 2016 03:42 PM

March 07, 2016

Richard Jones

New in libguestfs: Filesystem forensics support

Thanks to patches supplied by Matteo Cafasso, libguestfs, the library for accessing and modifying disk images, is gradually getting support for filesystem forensics.

Initially I have added a Fedora libguestfs-forensics subpackage, which pulls The Sleuth Kit (TSK) into virt-rescue.

Parts of TSK will also be made available as libguestfs APIs so they are callable from other programs.

by rich at March 07, 2016 08:06 PM

February 11, 2016

Kashyap Chamarthy

QEMU command-line: behavior of ‘-serial stdio’ vs. ‘-serial mon:stdio’

[Thanks: To Paolo Bonzini for writing the original patch; and to Alexander Graf for mentioning this on IRC some months ago, I had to go and look up the logs.]

A thing I keep forgetting: to avoid QEMU being terminated on SIGINT (Ctrl+c), supply the special parameter mon:stdio to the -serial option (which redirects the virtual serial port to a host character device), instead of just stdio. The subtle difference between them:

In summary, a working example command-line:

$ qemu-system-x86_64               \
   -display none                   \
   -nodefconfig                    \
   -nodefaults                     \
   -m 2048                         \
   -device virtio-scsi-pci,id=scsi \
   -device virtio-serial-pci       \
   -serial mon:stdio  \
   -drive file=./fedora23.qcow2,format=qcow2,if=virtio

To toggle between the QEMU monitor prompt and the serial console, press Ctrl+a followed by 'c'. To terminate QEMU while at the monitor prompt, (qemu), type 'quit'.

by kashyapc at February 11, 2016 05:27 PM

Richard Jones

Tip: FUSE-mount a disk image with Windows drive letters

guestmount is the libguestfs tool for taking a disk image and mounting it under the host filesystem. This works great for Linux disk images:

$ virt-builder centos-7.2
$ mkdir /tmp/mnt
$ guestmount -a centos-7.2.img -i /tmp/mnt
$ ls /tmp/mnt
bin   dev  home  lib64       media  opt   root  sbin  sys  usr
boot  etc  lib   lost+found  mnt    proc  run   srv   tmp  var
$ guestunmount /tmp/mnt

Those files under /tmp/mnt are inside the centos-7.2.img disk image file, and you can read and write them.

guestmount is fine for Windows disk images too, except when Windows has multiple drives, C:, D:, etc., because in that case you’ll only “see” the contents of the C: drive.

But guestmount is nowadays just a wrapper around the “mount-local” API in libguestfs, and you can use that API directly if you want to do anything a bit more complicated … such as exposing Windows drive letters.

Here is a Perl script which uses the mount-local API directly to do this:

#!/usr/bin/perl -w
use strict;
use Sys::Guestfs;
$| = 1;
die "usage: $0 mountpoint disk.img" if @ARGV < 2;
my $mp = shift @ARGV;
my $g = new Sys::Guestfs;
$g->add_drive_opts ($_) foreach @ARGV;
my @roots = $g->inspect_os;
die "$0: no operating system found" if @roots != 1;
my $root = $roots[0];
die "$0: not Windows" if $g->inspect_get_type ($root) ne "windows";
my %map = $g->inspect_get_drive_mappings ($root);
foreach (keys %map) {
    $g->mkmountpoint ("/$_");
    eval { $g->mount ($map{$_}, "/$_") };
    warn "$@ (ignored)\n" if $@;
}
$g->mount_local ($mp);
print "filesystem ready on $mp\n";
$g->mount_local_run ();

You can use it like this:

$ mkdir /tmp/mnt
$ ./ /tmp/mnt windows7.img
filesystem ready on /tmp/mnt

in another window:

$ cd /tmp/mnt
$ ls
C  D
$ cd C
$ ls
Documents and Settings
Program Files
$ cd ../..
$ guestunmount /tmp/mnt

(Thanks to Pino Toscano for working out the details)

by rich at February 11, 2016 02:16 PM

February 10, 2016

Amit Shah

Video From My FOSDEM 2016 Talk

The FOSDEM folks have processed and released the video on my “Live Migration of Virtual Machines From The Bottom Up” talk.  Available at this location.

by Amit Shah at February 10, 2016 02:32 PM

February 05, 2016

Zeeshan Ali Khattak

FOSDEM & Dev-x Hackfest 2016

Last week I travelled to Brussels to attend and present at FOSDEM. Since I was going there anyway, I decided to also join 2.5 days of GNOME Developer Experience Hackfest.

Travelling to Brussels is usually pretty easy and fast, thanks to Eurostar, but I turned it into a bit of a nightmare this time. I had completely forgotten how London public transport is a total disaster at peak hours and hence ended up arriving too late at the station. Not a big deal, they put me on the next train for free. I decided to go through security already and that's when I realized that I had forgotten my laptop at home. :( Fortunately my nephew (who is also my flatmate) was still at home and was going to travel to the city centre anyway, so I asked him to bring it with him. After two hours of anxious waiting, he managed to arrive just in time for the train staff to let in the very last late-arriving passenger. Phew!

While I didn't have a particular agenda for the hackfest, I had a discussion with Alexander Larsson about sandboxing in xdg-app and how we will implement per-app authorization to location information from Geoclue. The main problem has always been that we have no means of reliably identifying apps, and it turns out that xdg-app already solved that problem. Each xdg-app has its ID (i.e. the name of its desktop file without the .desktop suffix) in its /proc/PID/cgroup file, and the app cannot change that.

So I sat down and started working on this. I was able to finish the Geoclue part of the solution before the hackfest ended and am now working on the gnome-shell (currently the only Geoclue app-authorizing agent) part. Once that is done I'll add settings in gnome-control-center so users can change their minds about whether or not they want an app to be able to access their location. Other than that, I helped test a few xdg-app bundles.

It's important to keep in mind that this solution will still involve trusting system (non-xdg-app) applications, as there is no way to reliably identify those. That is, if you download a random script from the internet and run it, we cannot possibly guarantee that it won't access your location without your consent. Let's hope that xdg-app becomes ubiquitous and a de-facto standard for distributing Linux apps in the near future.

FOSDEM was a fun weekend as usual. I didn't attend a lot of talks but met many interesting people, and we chatted about various different open source technologies. I was glad to hear that a project I started off as a simple proof-of-concept for GUPnP is nowadays used in automobiles.

My own talk about Geospatial technologies in GNOME went fine, except for the fact that I ran out of time towards the end, and my Raspberry Pi demo didn't work because I forgot to plug in the WiFi adaptor. :( Still, I was able to cover most of the topics, and the Maps demo worked pretty smoothly (there was a weird libchamplain bug I hit, but it wasn't very critical at all).

While I came back home pumped with a lot of motivation, unfortunately I managed to catch the infamous FOSDEM flu. I've been resting most of the week and today I started to feel better so I'm writing this late blog post as the first thing, before I completely forget what happened last week.

Oh, and last but not least, many thanks to the GNOME foundation for sponsoring my train tickets.

by Zeeshan Ali Khattak (zeenix) at February 05, 2016 05:19 PM

January 31, 2016

Amit Shah

FOSDEM 2016 Talk: Live Migration of Virtual Machines From The Bottom Up

I just did a talk titled ‘Live Migration of Virtual Machines From The Bottom Up‘ at the FOSDEM conference in Brussels, Belgium.  The slides are available at this location.

The talk introduced the KVM stack (Linux, KVM, QEMU, libvirt) and live migration; covered the ways the higher layers (especially oVirt and OpenStack) use KVM and migration; and discussed the challenges the KVM team faces in supporting varying use-cases and in adding new features to make migration work, and work faster.

There was a video recording, I will post the link to it in a separate post.

Update: video recording available at this location.

by Amit Shah at January 31, 2016 03:00 PM

January 27, 2016

Amit Shah

Interviews On My FOSDEM 2016 Talk

I was interviewed by the FOSDEM folks on my upcoming talk on KVM Live Migration.

I was also interviewed by Brian Proffitt, who has written an article that serves as a preview for the talk.

Looking forward to the talk this Sunday!

by Amit Shah at January 27, 2016 10:09 PM

January 25, 2016

Amit Shah

SCaLE14x Talk: KVM Weather Report

I did a talk titled ‘KVM Weather Report‘ at the SCaLE conference in Pasadena, California yesterday.  The slides are available at this location.

The talk introduced the KVM stack (Linux, KVM, QEMU, libvirt); briefly went over some features and the communities around the projects, and discussed some of the new features added to the KVM stack in the last year.

Next up is my talk on live migration of VMs at FOSDEM in Belgium.

by Amit Shah at January 25, 2016 01:18 PM

January 20, 2016

Cole Robinson

Tips for querying git tags

With package maintenance, bug triage, and email support, I often need to look at a project's git tags to know about the latest releases, when they were released, and what releases contain certain features. Here are a couple of workflow tips that make my life easier.

Better git tag listing

Based on Peter Hutterer's 'git bi' alias for improved branch listing (which is great and highly recommended), I made one for improved tags output that I mapped as 'git tags'. Output looks like:

The alias code is:
tags = "!sh -c ' \
git for-each-ref --format=\"%(refname:short)\" refs/tags | \
while read tag; do \
git --no-pager log -1 --format=format:\"$tag %at\" $tag; echo; \
done | \
sort -k 2 | cut -f 1 --delimiter=\" \" | \
while read tag; do \
fmt=\"%Cred$tag:%Cblue %s %Cgreen%h%Creset (%ai)\"; \
git --no-pager log -1 --format=format:\"$fmt\" $tag; echo; \
done'"
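The pipeline inside the alias can also be exercised directly as a plain script. This is a sketch using an invented throwaway repo, with the color codes dropped, to show the shape of the output (sorted by tag date, with subject, short hash, and date per tag):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com

# Two hypothetical releases to list
git commit -q --allow-empty -m "first release"
git tag v0.1.0
git commit -q --allow-empty -m "second release"
git tag v0.2.0

# List tags, sort by tagged-commit timestamp, then pretty-print each
git for-each-ref --format="%(refname:short)" refs/tags |
while read tag; do
    git --no-pager log -1 --format=format:"$tag %at" "$tag"; echo
done |
sort -k 2 | cut -f 1 --delimiter=" " |
while read tag; do
    fmt="$tag: %s %h (%ai)"
    git --no-pager log -1 --format=format:"$fmt" "$tag"; echo
done
```

This prints one line per tag, oldest first, e.g. "v0.1.0: first release <hash> (<date>)".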

Find the first tag that contains a commit

This seems to come up quite a bit for me. An example is here; a user was asking about a virt-install feature, and I wanted to tell them what version it appeared in. I grepped git log, found the commit, then ran:

 $ git describe --contains 87a611b5470d9b86bf57a71ce111fa1d41d8e2cd  

That shows me that v1.0.0 was the first release with the feature they wanted: just take whatever is to the left of the tilde.

This often comes in handy with backporting as well: a developer will point me at a bug fix commit ID, I run git describe to see what upstream version it was released in, so I know what fedora package versions are lacking the fix.

Another tip here is to use the --match option to only consider tags matching a particular glob. I've used this to exclude tags from a maintenance or bugfix release branch, when I only wanted to search major version releases.
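The whole workflow can be sketched in a throwaway repo (the commits and the v1.0.0 tag below are invented for illustration):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name demo
git config user.email demo@example.com

git commit -q --allow-empty -m "add the feature"   # the commit we care about
feature=$(git rev-parse HEAD)
git commit -q --allow-empty -m "more work"
git tag v1.0.0                                     # first release containing it

# First tag containing the commit: tag name, then distance after the tilde
git describe --contains "$feature"                 # prints v1.0.0~1

# Strip the ~/^ suffix to get just the release tag
git describe --contains "$feature" | sed 's/[~^].*//'   # prints v1.0.0
```

The same sed trick is handy in backporting scripts, where only the release tag matters.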

Don't pull tags from certain remotes

For certain repos like qemu.git, I add a lot of git remotes pointing to individual developers' trees for occasional patch testing. However, if those trees have lots of non-upstream tags, e.g. for use with pull requests, they can interfere with my workflow for backporting patches. Use the --no-tags option for this:

 $ git remote add --no-tags $name $url
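A quick way to see the effect, using two invented throwaway repos (the "upstream" repo and its pull-request-42 tag are hypothetical):

```shell
set -e
work=$(mktemp -d)

# An "upstream" repo carrying a non-upstream-style tag
git init -q "$work/upstream"
git -C "$work/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
git -C "$work/upstream" tag pull-request-42

# A local repo that adds upstream with --no-tags, then fetches
git init -q "$work/local"
git -C "$work/local" remote add --no-tags dev "$work/upstream"
git -C "$work/local" fetch -q dev

# The fetch brought over the branches but not the tag
git -C "$work/local" tag      # prints nothing
```

Behind the scenes, --no-tags just sets remote.dev.tagOpt to --no-tags in the repo config, so tag auto-following is disabled for that remote only.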

by Cole Robinson at January 20, 2016 10:00 AM

Powered by Planet!
Last updated: May 25, 2016 05:01 AM