Diffstat (limited to '')
-rw-r--r--  results/classifier/108/other/176        16
-rw-r--r--  results/classifier/108/other/1761027    51
-rw-r--r--  results/classifier/108/other/1761401    34
-rw-r--r--  results/classifier/108/other/1761535    70
-rw-r--r--  results/classifier/108/other/1761798   523
-rw-r--r--  results/classifier/108/other/1762       98
-rw-r--r--  results/classifier/108/other/1762558    78
-rw-r--r--  results/classifier/108/other/1762707    53
-rw-r--r--  results/classifier/108/other/1763       27
-rw-r--r--  results/classifier/108/other/1763536   167
-rw-r--r--  results/classifier/108/other/1764       16
-rw-r--r--  results/classifier/108/other/1765      110
-rw-r--r--  results/classifier/108/other/1766       18
-rw-r--r--  results/classifier/108/other/1766841    96
-rw-r--r--  results/classifier/108/other/1766896   195
-rw-r--r--  results/classifier/108/other/1766904    62
-rw-r--r--  results/classifier/108/other/1767126    30
-rw-r--r--  results/classifier/108/other/1767176    66
-rw-r--r--  results/classifier/108/other/1767200    36
-rw-r--r--  results/classifier/108/other/1768295    75
-rw-r--r--  results/classifier/108/other/1769067    27
-rw-r--r--  results/classifier/108/other/1769189   144
22 files changed, 1992 insertions, 0 deletions
diff --git a/results/classifier/108/other/176 b/results/classifier/108/other/176
new file mode 100644
index 000000000..886db87e3
--- /dev/null
+++ b/results/classifier/108/other/176
@@ -0,0 +1,16 @@
+device: 0.866
+performance: 0.730
+vnc: 0.489
+PID: 0.452
+permissions: 0.371
+graphic: 0.365
+KVM: 0.294
+other: 0.273
+files: 0.273
+debug: 0.265
+semantic: 0.264
+boot: 0.161
+network: 0.140
+socket: 0.107
+
+virtual machine CPU soft lockup when QEMU attaches a disk
diff --git a/results/classifier/108/other/1761027 b/results/classifier/108/other/1761027
new file mode 100644
index 000000000..c24ce1e92
--- /dev/null
+++ b/results/classifier/108/other/1761027
@@ -0,0 +1,51 @@
+semantic: 0.835
+device: 0.738
+files: 0.677
+performance: 0.650
+PID: 0.647
+socket: 0.617
+graphic: 0.610
+permissions: 0.595
+vnc: 0.572
+boot: 0.533
+network: 0.529
+other: 0.433
+debug: 0.383
+KVM: 0.283
+
+Unexpected error: "AioContext polling is not implemented on Windows"
+
+When running it, this error happens:
+Unexpected error in aio_context_set_poll_params() at /home/stefan/src/qemu/repo.or.cz/qemu/ar7/util/aio-win32.c:413:
+C:\Program Files\qemu\qemu-system-x86_64.exe: AioContext polling is not implemented on Windows
+
+This application has requested the Runtime to terminate it in an unusual way.
+Please contact the application's support team for more information.
+
+
+
+System:
+Windows 10 x64
+
+Which version of QEMU are you using? And which parameters are you using when you  start it?
+
+I have that message too with this version:
+
+c:\Tools\QEMU>qemu-system-aarch64.exe -version
+QEMU emulator version 2.11.90 (v2.12.0-rc0-11704-g30195e9d53-dirty)
+
+My launch params are:
+C:\TOOLS\QEMU\qemu-system-aarch64.exe -M raspi3 -kernel D:\QEMU-img\2017-12-04-pcudev01l.img
+
+My system is Windows 7 64bit
+The qemu package downloaded is the 64bit version.
+
+
+Fixed in qemu.git/master and due to be released in QEMU 2.12:
+
+commit 90c558beca0c0ef26db1ed77d1eb8f24a5ea02a1
+Author: Peter Xu <email address hidden>
+Date:   Thu Mar 22 16:56:30 2018 +0800
+
+    iothread: fix breakage on windows
+
diff --git a/results/classifier/108/other/1761401 b/results/classifier/108/other/1761401
new file mode 100644
index 000000000..f83be0e03
--- /dev/null
+++ b/results/classifier/108/other/1761401
@@ -0,0 +1,34 @@
+graphic: 0.881
+vnc: 0.879
+performance: 0.678
+device: 0.639
+semantic: 0.611
+socket: 0.441
+boot: 0.416
+PID: 0.407
+permissions: 0.392
+network: 0.348
+files: 0.310
+other: 0.308
+debug: 0.256
+KVM: 0.077
+
+ARM/Neon: vcvt rounding error
+
+Hello,
+
+While using QEMU commit 47d3b60858d90ac8a0cc3a72af7f95c96781125a (March 28, 2018), I've noticed failures in one of the GCC ARM/Neon tests. The test passes on hardware, and with QEMU-2.11.0, so it looks like a recent regression.
+
+The test builds a vector of 4 float32 with "125.9" as value, then converts them to 4 uint32_t.
+The expected result is 125, but we get 126 instead.
+
+Maybe it's just a matter of default rounding mode?
+
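+Since the failing GCC test itself is not quoted here, a minimal standalone reproducer in the same spirit (file name and build flags are my assumptions) looks like this; the Advanced SIMD VCVT float-to-unsigned conversion behind vcvtq_u32_f32 is defined to round towards zero, hence the expected 125:
+
+```c
+/* vcvt-repro.c: build e.g. with
+ *   arm-linux-gnueabihf-gcc -O2 -mfpu=neon vcvt-repro.c -o vcvt-repro
+ * and run under qemu-arm (or build natively on ARM/AArch64 hardware). */
+#include <arm_neon.h>
+#include <stdio.h>
+
+int main(void)
+{
+    float32x4_t in    = vdupq_n_f32(125.9f);    /* four lanes of 125.9      */
+    uint32x4_t  out   = vcvtq_u32_f32(in);      /* round-towards-zero VCVT  */
+    uint32_t    lane0 = vgetq_lane_u32(out, 0);
+
+    printf("lane0 = %u (expected 125)\n", lane0);
+    return lane0 == 125 ? 0 : 1;                /* non-zero exit on the bug */
+}
+```
+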
+Hi Christophe -- we think that commit bd49e6027cbc207c, now in master, should have fixed this bug. Could you retry your testcase with a QEMU build including that fix?
+
+
+I updated my QEMU git tree such that it includes commit bd49e6027cbc207c, and the test now passes.
+
+Thanks for the prompt fix!
+
+
diff --git a/results/classifier/108/other/1761535 b/results/classifier/108/other/1761535
new file mode 100644
index 000000000..c5381306b
--- /dev/null
+++ b/results/classifier/108/other/1761535
@@ -0,0 +1,70 @@
+debug: 0.893
+device: 0.871
+performance: 0.804
+files: 0.795
+permissions: 0.763
+PID: 0.740
+socket: 0.739
+vnc: 0.727
+graphic: 0.694
+other: 0.687
+network: 0.636
+semantic: 0.630
+boot: 0.560
+KVM: 0.433
+
+qemu-aarch64-static docker arm64v8/openjdk coredump
+
+I am using qemu-aarch64-static to run the arm64v8/openjdk official image on my x86 machine. Using QEMU master, I immediately hit a bug which hangs the container. With Ubuntu's default qemu-aarch64 version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.24) and with qemu-aarch64 version 2.11.1 (v2.11.1-dirty), the hang does not take place.
+
+To reproduce (and get to the core dump):
+
+$ /tmp/tmptgyg3nvh/qemu-aarch64-static/qemu-aarch64-static -version
+qemu-aarch64 version 2.11.91 (v2.12.0-rc1-5-g47d3b60-dirty)
+Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
+
+$ docker run -it -v /tmp/tmptgyg3nvh/qemu-aarch64-static:/usr/bin/qemu-aarch64-static arm64v8/openjdk /bin/bash
+root@bf75cf45d311:/# javac
+Usage: javac <options> <source files>
+where possible options include:
+  -g                         Generate all debugging info
+<...snip...>
+  @<filename>                Read options and filenames from file
+
+qemu: uncaught target signal 11 (Segmentation fault) - core dumped
+...TERMINAL HANGS...
+
+
+To get the core dump, in a separate terminal:
+
+# snapshot the file system of the hung image
+$ docker commit $(docker ps -aqf "name=latest_qemu") qemu_coredump
+
+# connect with known working qemu
+$ docker run -t -v /usr/bin/qemu-aarch64-static:/usr/bin/qemu-aarch64-static  -i qemu_coredump /bin/bash
+
+$$ ls -lat
+total 10608
+<snip>
+-rw-r--r--   1 root root 10792960 Mar 29 18:02 qemu_bash_20180329-180251_1.core
+drwxrwxrwt   5 root root     4096 Mar 29 18:02 tmp
+<snip>
+
+Could you provide a binary that we can use to reproduce, please? (preferably a setup that doesn't require me to figure out how to install and use docker...)
+
+
+I realized I had a javac lying around from last time somebody wanted me to debug a java problem, and I'm also seeing SEGVs with simpler programs like ls (!), so I'll have a look at those and hopefully that will be the same cause as what you're seeing.
+
+
+I think this should be fixed by https://patchwork.ozlabs.org/patch/896295/
+
+(incidentally the segfault is in the guest /bin/sh, not in javac or ls.)
+
+
+Now fixed in master, commit 7f0f4208b3a96, and will be in 2.12.0.
+
+
+Many thanks!
+
+I've just compiled master, and docker/aarch64/openjdk image now works as expected on my x86 machine.
+
diff --git a/results/classifier/108/other/1761798 b/results/classifier/108/other/1761798
new file mode 100644
index 000000000..08b7e3470
--- /dev/null
+++ b/results/classifier/108/other/1761798
@@ -0,0 +1,523 @@
+other: 0.860
+permissions: 0.846
+semantic: 0.837
+debug: 0.823
+device: 0.819
+PID: 0.804
+performance: 0.800
+graphic: 0.786
+KVM: 0.782
+files: 0.758
+vnc: 0.751
+socket: 0.747
+boot: 0.732
+network: 0.728
+
+live migration intermittently fails in CI with "VQ 0 size 0x80 Guest index 0x12c inconsistent with Host index 0x134: delta 0xfff8"
+
+Seen here:
+
+http://logs.openstack.org/37/522537/20/check/legacy-tempest-dsvm-multinode-live-migration/8de6e74/logs/subnode-2/libvirt/qemu/instance-00000002.txt.gz
+
+2018-04-05T21:48:38.205752Z qemu-system-x86_64: -chardev pty,id=charserial0,logfile=/dev/fdset/1,logappend=on: char device redirected to /dev/pts/0 (label charserial0)
+warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
+2018-04-05T21:48:43.153268Z qemu-system-x86_64: VQ 0 size 0x80 Guest index 0x12c inconsistent with Host index 0x134: delta 0xfff8
+2018-04-05T21:48:43.153288Z qemu-system-x86_64: Failed to load virtio-blk:virtio
+2018-04-05T21:48:43.153292Z qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:04.0/virtio-blk'
+2018-04-05T21:48:43.153347Z qemu-system-x86_64: load of migration failed: Operation not permitted
+2018-04-05 21:48:43.198+0000: shutting down, reason=crashed
+
+And in the n-cpu logs on the other host:
+
+http://logs.openstack.org/37/522537/20/check/legacy-tempest-dsvm-multinode-live-migration/8de6e74/logs/screen-n-cpu.txt.gz#_Apr_05_21_48_43_257541
+
+There is a related Red Hat bug:
+
+https://bugzilla.redhat.com/show_bug.cgi?id=1450524
+
+The CI job failures are at present using the Pike UCA:
+
+ii  libvirt-bin                         3.6.0-1ubuntu6.2~cloud0
+
+ii  qemu-system-x86                     1:2.10+dfsg-0ubuntu3.5~cloud0
+
+Maybe when we move to use the Queens UCA we'll have better luck with this:
+
+https://review.openstack.org/#/c/554314/
+
+That has these package versions:
+
+ii  libvirt-bin                         4.0.0-1ubuntu4~cloud0
+
+ii  qemu-system-x86                     1:2.11+dfsg-1ubuntu2~cloud0
+
+The offending instance QEMU from the log that crashed:
+
+http://logs.openstack.org/37/522537/20/check/legacy-tempest-dsvm-multinode-live-migration/8de6e74/logs/subnode-2/libvirt/qemu/instance-00000002.txt.gz
+
+----------------------------------------------------------------------------
+2018-04-05 21:48:38.136+0000: starting up libvirt version: 3.6.0, package: 1ubuntu6.2~cloud0 (Openstack Ubuntu Testing Bot <email address hidden> Wed, 07 Feb 2018 20:05:24 +0000), qemu version: 2.10.1(Debian 1:2.10+dfsg-0ubuntu3.5~cloud0), hostname: ubuntu-xenial-ovh-bhs1-0003361707
+LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-x86_64 -name guest=instance-00000002,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-instance-00000002/master-key.aes -machine pc-i440fx-artful,accel=tcg,usb=off,dump-guest-core=off -m 64 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid e70df34f-395e-4482-9c24-101f12cf635d -smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.0,serial=97e1e00c-8afb-4356-b55e-92e997c5a1a7,uuid=e70df34f-395e-4482-9c24-101f12cf635d,family=Virtual Machine' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-instance-00000002/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/e70df34f-395e-4482-9c24-101f12cf635d/disk,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=29,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:15:ea:59,bus=pci.0,addr=0x3 -add-fd set=1,fd=32 -chardev pty,id=charserial0,logfile=/dev/fdset/1,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
+2018-04-05T21:48:38.205752Z qemu-system-x86_64: -chardev pty,id=charserial0,logfile=/dev/fdset/1,logappend=on: char device redirected to /dev/pts/0 (label charserial0)
+warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
+2018-04-05T21:48:43.153268Z qemu-system-x86_64: VQ 0 size 0x80 Guest index 0x12c inconsistent with Host index 0x134: delta 0xfff8
+2018-04-05T21:48:43.153288Z qemu-system-x86_64: Failed to load virtio-blk:virtio
+2018-04-05T21:48:43.153292Z qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:04.0/virtio-blk'
+2018-04-05T21:48:43.153347Z qemu-system-x86_64: load of migration failed: Operation not permitted
+2018-04-05 21:48:43.198+0000: shutting down, reason=crashed
+----------------------------------------------------------------------------
+
+
+
+I'm not sure that's the same as the bz 1450524 - in this case it's virtio-blk, where I *think* that bz was tracked down to a virtio-balloon and a virtio-net issue.
+
+Hm, looks like the e-r query [1] for this bug doesn't find it if the screen-n-cpu.txt is on the subnode-2:
+
+http://logs.openstack.org/25/566425/4/gate/nova-live-migration/27afb39/logs/subnode-2/screen-n-cpu.txt.gz?level=TRACE#_May_15_00_54_26_234161
+
+[1] https://github.com/openstack-infra/elastic-recheck/blob/master/queries/1761798.yaml
+
+Ran into a similar issue: when snapshotting an instance, it won't start afterwards.
+Root cause is an inconsistency between the state of the virtio-scsi device and the saved VM RAM state in /var/lib/libvirt/qemu/save/
+Removing the instance-0000****.save file from this folder allows starting the VM again but may cause data corruption.
+Specifics of my case: using the virtio-scsi driver with discard enabled, backend is Ceph. Impacted 2 Windows VMs so far.
+
+I also think this is a qemu bug; I will do some tests and try to reproduce.
+
+We hit a very similar issue https://zuul.opendev.org/t/openstack/build/d50877ae15db4022b82f4bb1d1d52cea/log/logs/subnode-2/screen-n-cpu.txt?severity=0#13482
+
+Source node 
+https://zuul.opendev.org/t/openstack/build/d50877ae15db4022b82f4bb1d1d52cea/log/logs/subnode-2/libvirt/qemu/instance-0000001a.txt
+
+2020-11-20 14:25:24.887+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.4~cloud0 (Openstack Ubuntu Testing Bot <email address hidden> Tue, 15 Sep 2020 20:36:28 +0000), qemu version: 4.2.1Debian 1:4.2-3ubuntu6.7~cloud0, kernel: 4.15.0-124-generic, hostname: ubuntu-bionic-ovh-bhs1-0021872195
+
+LC_ALL=C \
+
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
+
+HOME=/var/lib/libvirt/qemu/domain-19-instance-0000001a \
+
+XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-19-instance-0000001a/.local/share \
+
+XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-19-instance-0000001a/.cache \
+
+XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-19-instance-0000001a/.config \
+
+QEMU_AUDIO_DRV=none \
+
+/usr/bin/qemu-system-x86_64 \
+
+-name guest=instance-0000001a,debug-threads=on \
+
+-S \
+
+-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-19-instance-0000001a/master-key.aes \
+
+-machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
+
+-cpu qemu64,hypervisor=on,lahf-lm=on \
+
+-m 128 \
+
+-overcommit mem-lock=off \
+
+-smp 1,sockets=1,cores=1,threads=1 \
+
+-uuid 2c468d92-4b19-426a-8c25-16b4624c21a4 \
+
+-smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=22.1.0,serial=2c468d92-4b19-426a-8c25-16b4624c21a4,uuid=2c468d92-4b19-426a-8c25-16b4624c21a4,family=Virtual Machine' \
+
+-no-user-config \
+
+-nodefaults \
+
+-chardev socket,id=charmonitor,fd=35,server,nowait \
+
+-mon chardev=charmonitor,id=monitor,mode=control \
+
+-rtc base=utc \
+
+-no-shutdown \
+
+-boot strict=on \
+
+-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
+
+-blockdev '{"driver":"file","filename":"/opt/stack/data/nova/instances/_base/61bd5e531ab4c82456aa5300ede7266b3610be79","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
+
+-blockdev '{"node-name":"libvirt-2-format","read-only":true,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
+
+-blockdev '{"driver":"file","filename":"/opt/stack/data/nova/instances/2c468d92-4b19-426a-8c25-16b4624c21a4/disk","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
+
+-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-2-format"}' \
+
+-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-1-format,id=virtio-disk0,bootindex=1,write-cache=on \
+
+-netdev tap,fd=37,id=hostnet0 \
+
+-device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:43:11:f4,bus=pci.0,addr=0x3 \
+
+-add-fd set=2,fd=39 \
+
+-chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
+
+-device isa-serial,chardev=charserial0,id=serial0 \
+
+-vnc 127.0.0.1:0 \
+
+-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
+
+-incoming defer \
+
+-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
+
+-object rng-random,id=objrng0,filename=/dev/urandom \
+
+-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
+
+-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+
+-msg timestamp=on
+
+char device redirected to /dev/pts/2 (label charserial0)
+
+virtio: bogus descriptor or out of resources
+
+2020-11-20 14:25:39.911+0000: initiating migration
+
+2020-11-20T14:26:21.409517Z qemu-system-x86_64: terminating on signal 15 from pid 17395 (/usr/sbin/libvirtd)
+
+2020-11-20 14:26:21.610+0000: shutting down, reason=destroyed
+
+
+
+Target node
+https://zuul.opendev.org/t/openstack/build/d50877ae15db4022b82f4bb1d1d52cea/log/logs/libvir
+t/qemu/instance-0000001a.txt
+
+2020-11-20 14:25:11.589+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.4~cloud0 (Openstack Ubuntu Testing Bot <email address hidden> Tue, 15 Sep 2020 20:36:28 +0000), qemu version: 4.2.1Debian 1:4.2-3ubuntu6.7~cloud0, kernel: 4.15.0-124-generic, hostname: ubuntu-bionic-ovh-bhs1-0021872194
+
+LC_ALL=C \
+
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
+
+HOME=/var/lib/libvirt/qemu/domain-10-instance-0000001a \
+
+XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-10-instance-0000001a/.local/share \
+
+XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-10-instance-0000001a/.cache \
+
+XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-10-instance-0000001a/.config \
+
+QEMU_AUDIO_DRV=none \
+
+/usr/bin/qemu-system-x86_64 \
+
+-name guest=instance-0000001a,debug-threads=on \
+
+-S \
+
+-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-10-instance-0000001a/master-key.aes \
+
+-machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
+
+-cpu qemu64 \
+
+-m 128 \
+
+-overcommit mem-lock=off \
+
+-smp 1,sockets=1,cores=1,threads=1 \
+
+-uuid 2c468d92-4b19-426a-8c25-16b4624c21a4 \
+
+-smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=22.1.0,serial=2c468d92-4b19-426a-8c25-16b4624c21a4,uuid=2c468d92-4b19-426a-8c25-16b4624c21a4,family=Virtual Machine' \
+
+-no-user-config \
+
+-nodefaults \
+
+-chardev socket,id=charmonitor,fd=32,server,nowait \
+
+-mon chardev=charmonitor,id=monitor,mode=control \
+
+-rtc base=utc \
+
+-no-shutdown \
+
+-boot strict=on \
+
+-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
+
+-blockdev '{"driver":"file","filename":"/opt/stack/data/nova/instances/_base/61bd5e531ab4c82456aa5300ede7266b3610be79","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
+
+-blockdev '{"node-name":"libvirt-2-format","read-only":true,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
+
+-blockdev '{"driver":"file","filename":"/opt/stack/data/nova/instances/2c468d92-4b19-426a-8c25-16b4624c21a4/disk","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
+
+-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-2-format"}' \
+
+-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-1-format,id=virtio-disk0,bootindex=1,write-cache=on \
+
+-netdev tap,fd=34,id=hostnet0 \
+
+-device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:43:11:f4,bus=pci.0,addr=0x3 \
+
+-add-fd set=2,fd=36 \
+
+-chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
+
+-device isa-serial,chardev=charserial0,id=serial0 \
+
+-vnc 0.0.0.0:0 \
+
+-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
+
+-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
+
+-object rng-random,id=objrng0,filename=/dev/urandom \
+
+-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
+
+-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+
+-msg timestamp=on
+
+char device redirected to /dev/pts/0 (label charserial0)
+
+2020-11-20 14:25:25.637+0000: initiating migration
+
+2020-11-20 14:25:26.776+0000: shutting down, reason=migrated
+
+2020-11-20T14:25:26.777394Z qemu-system-x86_64: terminating on signal 15 from pid 31113 (/usr/sbin/libvirtd)
+
+2020-11-20 14:25:38.909+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8.4~cloud0 (Openstack Ubuntu Testing Bot <email address hidden> Tue, 15 Sep 2020 20:36:28 +0000), qemu version: 4.2.1Debian 1:4.2-3ubuntu6.7~cloud0, kernel: 4.15.0-124-generic, hostname: ubuntu-bionic-ovh-bhs1-0021872194
+
+LC_ALL=C \
+
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
+
+HOME=/var/lib/libvirt/qemu/domain-13-instance-0000001a \
+
+XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-13-instance-0000001a/.local/share \
+
+XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-13-instance-0000001a/.cache \
+
+XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-13-instance-0000001a/.config \
+
+QEMU_AUDIO_DRV=none \
+
+/usr/bin/qemu-system-x86_64 \
+
+-name guest=instance-0000001a,debug-threads=on \
+
+-S \
+
+-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-instance-0000001a/master-key.aes \
+
+-machine pc-i440fx-4.2,accel=tcg,usb=off,dump-guest-core=off \
+
+-cpu qemu64,hypervisor=on,lahf-lm=on \
+
+-m 128 \
+
+-overcommit mem-lock=off \
+
+-smp 1,sockets=1,cores=1,threads=1 \
+
+-uuid 2c468d92-4b19-426a-8c25-16b4624c21a4 \
+
+-smbios 'type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=22.1.0,serial=2c468d92-4b19-426a-8c25-16b4624c21a4,uuid=2c468d92-4b19-426a-8c25-16b4624c21a4,family=Virtual Machine' \
+
+-no-user-config \
+
+-nodefaults \
+
+-chardev socket,id=charmonitor,fd=34,server,nowait \
+
+-mon chardev=charmonitor,id=monitor,mode=control \
+
+-rtc base=utc \
+
+-no-shutdown \
+
+-boot strict=on \
+
+-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
+
+-blockdev '{"driver":"file","filename":"/opt/stack/data/nova/instances/_base/61bd5e531ab4c82456aa5300ede7266b3610be79","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
+
+-blockdev '{"node-name":"libvirt-2-format","read-only":true,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
+
+-blockdev '{"driver":"file","filename":"/opt/stack/data/nova/instances/2c468d92-4b19-426a-8c25-16b4624c21a4/disk","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
+
+-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-2-format"}' \
+
+-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-1-format,id=virtio-disk0,bootindex=1,write-cache=on \
+
+-netdev tap,fd=36,id=hostnet0 \
+
+-device virtio-net-pci,host_mtu=1400,netdev=hostnet0,id=net0,mac=fa:16:3e:43:11:f4,bus=pci.0,addr=0x3 \
+
+-add-fd set=2,fd=38 \
+
+-chardev pty,id=charserial0,logfile=/dev/fdset/2,logappend=on \
+
+-device isa-serial,chardev=charserial0,id=serial0 \
+
+-vnc 0.0.0.0:0 \
+
+-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
+
+-incoming defer \
+
+-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
+
+-object rng-random,id=objrng0,filename=/dev/urandom \
+
+-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 \
+
+-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+
+-msg timestamp=on
+
+char device redirected to /dev/pts/1 (label charserial0)
+
+2020-11-20T14:25:40.720757Z qemu-system-x86_64: VQ 0 size 0x80 Guest index 0xb8 inconsistent with Host index 0xe0: delta 0xffd8
+
+2020-11-20T14:25:40.720785Z qemu-system-x86_64: Failed to load virtio-blk:virtio
+
+2020-11-20T14:25:40.720790Z qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:04.0/virtio-blk'
+
+2020-11-20T14:25:40.720824Z qemu-system-x86_64: load of migration failed: Operation not permitted
+
+2020-11-20 14:25:40.778+0000: shutting down, reason=failed
+
+
+
+
+
+In the new case we have the same qemu/libvirt version on both nodes,
+
+4.2.1Debian 1:4.2-3ubuntu6.7~cloud0
+
+and in this case it's failing for the virtio-blk device, not the memory balloon.
+
+before the migration starts we see
+
+-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
+-msg timestamp=on
+char device redirected to /dev/pts/2 (label charserial0)
+virtio: bogus descriptor or out of resources
+2020-11-20 14:25:39.911+0000: initiating migration
+
+on the destination
+https://zuul.opendev.org/t/openstack/build/d50877ae15db4022b82f4bb1d1d52cea/log/logs/subnode-2/libvirt/qemu/instance-0000001a.txt
+
+then  on the source we see 
+
+char device redirected to /dev/pts/1 (label charserial0)
+2020-11-20T14:25:40.720757Z qemu-system-x86_64: VQ 0 size 0x80 Guest index 0xb8 inconsistent with Host index 0xe0: delta 0xffd8
+2020-11-20T14:25:40.720785Z qemu-system-x86_64: Failed to load virtio-blk:virtio
+2020-11-20T14:25:40.720790Z qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:04.0/virtio-blk'
+2020-11-20T14:25:40.720824Z qemu-system-x86_64: load of migration failed: Operation not permitted
+
+when it fails
+https://zuul.opendev.org/t/openstack/build/d50877ae15db4022b82f4bb1d1d52cea/log/logs/libvirt/qemu/instance-0000001a.txt
+
+
+
+[Copy/pasting my comment from here: https://bugs.launchpad.net/nova/+bug/1737625/comments/4]
+
+I just talked to Dave Gilbert from upstream QEMU. Overall, as I implied in comment #2, this gnarly issue requires specialized debugging, digging deep into the bowels of QEMU, 'virtio-blk' and 'virtio'.
+
+That said, Dave notes that we get this "guest index inconsistent" error when the migrated RAM is inconsistent with the migrated 'virtio' device state. A common case is where a 'virtio' device does an operation after the vCPU is stopped and after RAM has been transmitted.
+
+Dave sketches a potential scenario where this can occur:
+
+  - Guest is running
+  - ... live migration starts
+  - ... a "block read" request gets submitted
+  - ... live migration stops the vCPUs, finishes transmitting RAM
+  - ... the "block read" completes, 'virtio-blk' updates pointers
+  - ... live migration "serializes" the 'virtio-blk' state
+
+So the "guest index inconsistent" state would only happen if you got unlucky with the timing of that read.
+
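+As a standalone illustration (values taken from the log above; this is not QEMU source, just the same 16-bit ring arithmetic), the sanity check behind the "Guest index ... inconsistent with Host index" message compares the avail index found in the migrated guest RAM with the last_avail_idx carried in the migrated device state; if their 16-bit difference exceeds the queue size, the two snapshots cannot describe the same moment in time:
+
+```c
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+    uint16_t vring_num       = 0x80;    /* VQ size from the log              */
+    uint16_t guest_avail_idx = 0x12c;   /* avail->idx read from migrated RAM */
+    uint16_t host_last_avail = 0x134;   /* last_avail_idx from device state  */
+
+    /* 16-bit ring counters wrap, so "device state ahead of RAM" shows up as
+     * a huge positive delta: 0x12c - 0x134 == 0xfff8. */
+    uint16_t nheads = (uint16_t)(guest_avail_idx - host_last_avail);
+
+    if (nheads > vring_num) {
+        printf("VQ 0 size 0x%x Guest index 0x%x inconsistent with "
+               "Host index 0x%x: delta 0x%x\n",
+               vring_num, guest_avail_idx, host_last_avail, nheads);
+        return 1;   /* this is the point where the migration load is refused */
+    }
+    return 0;
+}
+```
+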
+Another possibility, Dave points out, is that the guest has screwed up the device state somehow; the migration code in 'virtio' checks the state a lot. We have ruled this possibility out because the guest is just a garden-variety CirrOS instance idling; nothing special about it.
+
+
+Hang on, I've just noticed this:
+
+ char device redirected to /dev/pts/2 (label charserial0)
+virtio: bogus descriptor or out of resources
+2020-11-20 14:25:39.911+0000: initiating migration
+
+I've never seen that 'bogus descriptor or out of resources' one before; the fact that it's happening before the migration starts suggests that something had already gone wrong before the migration started.
+Is that warning present in all these failures?
+
+I see two recent hits that we still have logs for.
+
+The one described in comment #10
+
+And another one, but there the error message is a bit different:
+
+on the migration source host:
+https://6f0be18d925d64906a23-689ad0b9b6f06bc0c51bfb99bf86ea04.ssl.cf5.rackcdn.com/698706/4/check/nova-grenade-multinode/ee2dbea/logs/libvirt/qemu/instance-0000001b.txt
+
+char device redirected to /dev/pts/0 (label charserial0)
+virtio: zero sized buffers are not allowed
+2020-11-23 22:20:54.297+0000: initiating migration
+
+on the migration destination
+https://6f0be18d925d64906a23-689ad0b9b6f06bc0c51bfb99bf86ea04.ssl.cf5.rackcdn.com/698706/4/check/nova-grenade-multinode/ee2dbea/logs/subnode-2/libvirt/qemu/instance-0000001b.txt
+
+char device redirected to /dev/pts/0 (label charserial0)
+2020-11-23T22:20:55.129189Z qemu-system-x86_64: VQ 0 size 0x80 Guest index 0x62 inconsistent with Host index 0xa1: delta 0xffc1
+2020-11-23T22:20:55.129230Z qemu-system-x86_64: Failed to load virtio-blk:virtio
+2020-11-23T22:20:55.129241Z qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:03.0/virtio-blk'
+2020-11-23T22:20:55.129259Z qemu-system-x86_64: load of migration failed: Operation not permitted
+
+
+
+
+
+
+
+OK, but that still says in both cases here we've got a virtio error telling us that the queues are broken before migration even starts;  so we should try and figure out why that's happening first.
+
+Is this still an issue with the latest release of QEMU (v6.0)?
+
+I got the same issue on CentOS 7 Stein.
+
+
+Hello, some news... I wonder if it can help:
+I am testing with some virtual machines again.
+If I follow these steps it works (but I lose the network connection):
+
+1) Detach network interface from instance
+2) Attach network interface to instance
+3) Migrate instance
+4) Log in to the instance using the console and restart networking
+
+while if I restart networking before the live migration, it does not work.
+So, when someone mentioned
+
+########################
+we get this "guest index inconsistent" error when the migrated RAM is inconsistent with the migrated 'virtio' device state. And a common case is where a 'virtio' device does an operation after the vCPU is stopped and after RAM has been transmitted.
+#############################
+could the network traffic be the problem?
+Ignazio
+
+Be careful, it might not be the same bug.
+
+Yes, it *shouldn't* be a problem, but if the virtio code in qemu is broken then it will keep accepting incoming packets even when the guest is stopped in the final part of the migration, and you get the contents of the RAM taken before the reception of the packet but the virtio state in the migration stream taken after the reception of the packet, and it's inconsistent.
+
+But the case the other reporter mentioned is on a virtio-blk device; the same thing can happen if the storage device stalls or is slow during the migration - i.e. a block read takes ages to complete and happens to complete after the point it should have stopped for migration.
+
+Is this still happening with the latest release?
+
+[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/1762 b/results/classifier/108/other/1762
new file mode 100644
index 000000000..5962fc86b
--- /dev/null
+++ b/results/classifier/108/other/1762
@@ -0,0 +1,98 @@
+graphic: 0.798
+other: 0.773
+permissions: 0.706
+semantic: 0.704
+KVM: 0.701
+performance: 0.672
+socket: 0.645
+files: 0.630
+PID: 0.625
+device: 0.612
+debug: 0.600
+vnc: 0.574
+network: 0.552
+boot: 0.507
+
+Linux RTC issues possibly with RTC_UIE_ON, RTC_UIE_OFF
+Description of problem:
+Running:
+
+```
+hwclock --hctosys
+```
+
+as root, under the running VM using a UEFI bios image, I get:
+
+```
+hwclock: select() to /dev/rtc0 to wait for clock tick timed out
+```
+
+When running the same command on the same disk image but without UEFI,
+that is, just using the SeaBIOS bios, everything works fine.
+
+Running
+
+```
+hwclock --hctosys --directisa
+```
+
+works fine, too.
+
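+The hwclock failure is in the RTC update-interrupt path (RTC_UIE_ON followed by a select() on /dev/rtc0). A minimal sketch of that pattern, with the device node and the timeout value as assumptions, is:
+
+```c
+/* rtc-uie.c: arm the update interrupt and wait for the next clock tick.
+ * On the failing UEFI guest the select() times out, matching the
+ * "select() to /dev/rtc0 to wait for clock tick timed out" message. */
+#include <fcntl.h>
+#include <linux/rtc.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <sys/select.h>
+#include <unistd.h>
+
+int main(void)
+{
+    int fd = open("/dev/rtc0", O_RDONLY);
+    if (fd < 0) { perror("open /dev/rtc0"); return 1; }
+    if (ioctl(fd, RTC_UIE_ON, 0) < 0) { perror("RTC_UIE_ON"); return 1; }
+
+    fd_set rfds;
+    FD_ZERO(&rfds);
+    FD_SET(fd, &rfds);
+    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 }; /* hwclock-style timeout */
+
+    int rc = select(fd + 1, &rfds, NULL, NULL, &tv);
+    if (rc == 0) {
+        fprintf(stderr, "timed out waiting for the RTC update interrupt\n");
+    } else if (rc > 0) {
+        unsigned long data;
+        if (read(fd, &data, sizeof(data)) > 0)         /* consume the interrupt */
+            printf("got update interrupt, data=0x%lx\n", data);
+    }
+
+    ioctl(fd, RTC_UIE_OFF, 0);
+    close(fd);
+    return rc > 0 ? 0 : 1;
+}
+```
+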
+Running the (compiled) kernel test utility:
+
+
+```
+/usr/src/linux/tools/testing/selftests/rtc/rtctest.c
+
+```
+
+
+```
+TAP version 13
+1..8
+# Starting 8 tests from 2 test cases.
+#  RUN           rtc.date_read ...
+# rtctest.c:49:date_read:Current RTC date/time is 10/07/2023 14:02:11.
+#            OK  rtc.date_read
+ok 1 rtc.date_read
+#  RUN           rtc.date_read_loop ...
+# rtctest.c:88:date_read_loop:Continuously reading RTC time for 30s (with 11ms breaks after every read).
+# rtctest.c:115:date_read_loop:Performed 2752 RTC time reads.
+#            OK  rtc.date_read_loop
+ok 2 rtc.date_read_loop
+#  RUN           rtc.uie_read ...
+# uie_read: Test terminated by timeout
+#          FAIL  rtc.uie_read
+not ok 3 rtc.uie_read
+#  RUN           rtc.uie_select ...
+# rtctest.c:164:uie_select:Expected 0 (0) != rc (0)
+# uie_select: Test terminated by assertion
+#          FAIL  rtc.uie_select
+not ok 4 rtc.uie_select
+#  RUN           rtc.alarm_alm_set ...
+# rtctest.c:202:alarm_alm_set:Alarm time now set to 14:02:52.
+# rtctest.c:214:alarm_alm_set:Expected 0 (0) != rc (0)
+# alarm_alm_set: Test terminated by assertion
+#          FAIL  rtc.alarm_alm_set
+not ok 5 rtc.alarm_alm_set
+#  RUN           rtc.alarm_wkalm_set ...
+# rtctest.c:258:alarm_wkalm_set:Alarm time now set to 10/07/2023 14:02:57.
+# rtctest.c:268:alarm_wkalm_set:Expected 0 (0) != rc (0)
+# alarm_wkalm_set: Test terminated by assertion
+#          FAIL  rtc.alarm_wkalm_set
+not ok 6 rtc.alarm_wkalm_set
+#  RUN           rtc.alarm_alm_set_minute ...
+# rtctest.c:304:alarm_alm_set_minute:Alarm time now set to 14:03:00.
+# rtctest.c:316:alarm_alm_set_minute:Expected 0 (0) != rc (0)
+# alarm_alm_set_minute: Test terminated by assertion
+#          FAIL  rtc.alarm_alm_set_minute
+not ok 7 rtc.alarm_alm_set_minute
+#  RUN           rtc.alarm_wkalm_set_minute ...
+# rtctest.c:360:alarm_wkalm_set_minute:Alarm time now set to 10/07/2023 14:05:00.
+# rtctest.c:370:alarm_wkalm_set_minute:Expected 0 (0) != rc (0)
+# alarm_wkalm_set_minute: Test terminated by assertion
+#          FAIL  rtc.alarm_wkalm_set_minute
+not ok 8 rtc.alarm_wkalm_set_minute
+# FAILED: 2 / 8 tests passed.
+# Totals: pass:2 fail:6 xfail:0 xpass:0 skip:0 error:0
+#
diff --git a/results/classifier/108/other/1762558 b/results/classifier/108/other/1762558
new file mode 100644
index 000000000..75a40a111
--- /dev/null
+++ b/results/classifier/108/other/1762558
@@ -0,0 +1,78 @@
+graphic: 0.710
+KVM: 0.640
+other: 0.629
+debug: 0.529
+boot: 0.470
+device: 0.449
+permissions: 0.438
+performance: 0.426
+PID: 0.388
+semantic: 0.384
+vnc: 0.375
+files: 0.365
+network: 0.356
+socket: 0.314
+
+Many crashes with "memslot_get_virt: slot_id 170 too big"-type errors in 2.12.0 rc2
+
+Since qemu 2.12.0 rc2 - qemu-2.12.0-0.6.rc2.fc29 - landed in Fedora Rawhide, just about all of our openQA-automated tests of Rawhide guests which run with qxl / SPICE graphics in the guest have died partway in, always shortly after the test switches from the installer (an X environment) to a console on a tty. qemu is, I think, hanging. There are always some errors like this right around the time of the hang:
+
+[2018-04-09T20:13:42.0736 UTC] [debug] QEMU: id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
+[2018-04-09T20:13:42.0736 UTC] [debug] QEMU: id 1, group 1, virt start 7f42dbc00000, virt end 7f42dfbfe000, generation 0, delta 7f42dbc00000
+[2018-04-09T20:13:42.0736 UTC] [debug] QEMU: id 2, group 1, virt start 7f42d7a00000, virt end 7f42dba00000, generation 0, delta 7f42d7a00000
+[2018-04-09T20:13:42.0736 UTC] [debug] QEMU: 
+[2018-04-09T20:13:42.0736 UTC] [debug] QEMU: (process:45812): Spice-CRITICAL **: memslot.c:111:memslot_get_virt: slot_id 218 too big, addr=da8e21fbda8e21fb
+
+or occasionally like this:
+
+[2018-04-09T20:13:58.0717 UTC] [debug] QEMU: id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
+[2018-04-09T20:13:58.0720 UTC] [debug] QEMU: id 1, group 1, virt start 7ff093c00000, virt end 7ff097bfe000, generation 0, delta 7ff093c00000
+[2018-04-09T20:13:58.0720 UTC] [debug] QEMU: id 2, group 1, virt start 7ff08fa00000, virt end 7ff093a00000, generation 0, delta 7ff08fa00000
+[2018-04-09T20:13:58.0720 UTC] [debug] QEMU: 
+[2018-04-09T20:13:58.0720 UTC] [debug] QEMU: (process:25622): Spice-WARNING **: memslot.c:68:memslot_validate_virt: virtual address out of range
+[2018-04-09T20:13:58.0720 UTC] [debug] QEMU:     virt=0x0+0x18 slot_id=0 group_id=1
+[2018-04-09T20:13:58.0721 UTC] [debug] QEMU:     slot=0x0-0x0 delta=0x0
+[2018-04-09T20:13:58.0721 UTC] [debug] QEMU: 
+[2018-04-09T20:13:58.0721 UTC] [debug] QEMU: (process:25622): Spice-WARNING **: display-channel.c:2426:display_channel_validate_surface: invalid surface_id 1048576
+[2018-04-09T20:14:14.0728 UTC] [debug] QEMU: id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
+[2018-04-09T20:14:14.0728 UTC] [debug] QEMU: id 1, group 1, virt start 7ff093c00000, virt end 7ff097bfe000, generation 0, delta 7ff093c00000
+[2018-04-09T20:14:14.0728 UTC] [debug] QEMU: id 2, group 1, virt start 7ff08fa00000, virt end 7ff093a00000, generation 0, delta 7ff08fa00000
+[2018-04-09T20:14:14.0728 UTC] [debug] QEMU: 
+[2018-04-09T20:14:14.0728 UTC] [debug] QEMU: (process:25622): Spice-CRITICAL **: memslot.c:122:memslot_get_virt: address generation is not valid, group_id 1, slot_id 0, gen 110, slot_gen 0
+
+The same tests running on Fedora 28 guests on the same hosts are not hanging, and the same tests were not hanging right before the qemu package got updated, so this seems very strongly tied to the new qemu.
+
+These error messages ("memslot_get_virt") do not come from QEMU, but from spice, so please report this problem to the Spice project first (see https://www.spice-space.org/support.html for how to file a bug there).
+
+Nothing about SPICE changed in the affected time frame. This started happening between 2018-04-02 and 2018-04-07. The last time SPICE was changed in Rawhide was on 2018-02-09. However, qemu was bumped from rc1 to rc2 on 2018-04-05.
+
+It's possible that https://bugzilla.redhat.com/show_bug.cgi?id=1564210 is involved; the offending mesa for that bug was built on 2018-04-03, so it also fits the time frame. However, the bug is not happening in Fedora 28 tests, and that same mesa change was sent to Fedora 28, so this seems less likely. The only related thing I've yet found that changed in Rawhide but not Fedora 28 during the time in which this bug started happening on Rawhide but not Fedora 28 is qemu.
+
+...on the other hand, I was clearly not thinking straight in associating this with the qemu version bump in Rawhide, because we don't *run* that qemu. We use the qemu from the worker host, not from the image under test, and the worker hosts are not running Rawhide, and their qemu hasn't changed during the time this problem appeared.
+
+It still seems like the change that triggered this problem is something that changed in Rawhide between 2018-04-02 and 2018-04-07, but whatever that is, it almost certainly isn't qemu. You can close this issue for now, then. Sorry for the thinko.
+
+IMHO it's best to keep this open until we find out what's going on;  it's not impossible it's something that's changed in qemu, and even if it isn't qemu's fault then you won't be the only person who ends up reporting it here, so it'll be good to get the answer.
+
+
+Up to you, of course. Just realized I didn't mention here that I also reported this downstream, and since it turns out to be not triggered by a qemu change I've been doing most of the investigation there:
+
+https://bugzilla.redhat.com/show_bug.cgi?id=1565354
+
+So far it's looking like the change that triggered this is going from kernel 4.16rc7 to kernel 4.17rc4.
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+This got resolved along the way and wasn't really a qemu bug anyway.
+
diff --git a/results/classifier/108/other/1762707 b/results/classifier/108/other/1762707
new file mode 100644
index 000000000..b6b32a978
--- /dev/null
+++ b/results/classifier/108/other/1762707
@@ -0,0 +1,53 @@
+permissions: 0.887
+graphic: 0.853
+device: 0.846
+other: 0.823
+debug: 0.770
+socket: 0.753
+semantic: 0.752
+files: 0.744
+performance: 0.742
+network: 0.740
+PID: 0.705
+KVM: 0.608
+vnc: 0.594
+boot: 0.524
+
+VFIO device gets DMA failures when virtio-balloon leak from highmem to lowmem
+
+Is there any known conflict between VFIO passthrough device and virtio-balloon?
+
+The VM has:
+1. 4GB system memory
+2. one VFIO passthrough device which supports high address memory DMA and uses GFP_HIGHUSER pages.
+3. Memory balloon device with 4GB target.
+
+When setting the memory balloon target to 1GB and 4GB in a loop at runtime (I used the command "virsh qemu-monitor-command debian --hmp --cmd balloon 1024"), the VFIO device DMA randomly fails.
+
+More clues:
+1. configure 2GB system memory (no highmem) VM, no issue with similar operations
+2. setting the memory balloon to higher like 8GB, no issue with similar operations
+
+I'm also trying to narrow down this issue. It would be appreciated if you could share some thoughts.
+
+Ballooning is currently incompatible with device assignment.  When the balloon is inflated (memory removed from the VM), the pages are zapped from the process without actually removing them from the vfio DMA mapping.  The pages are still pinned from the previous mapping, making the balloon inflation ineffective (pages are not available for re-use).  When the balloon is deflated, new (different) pages are faulted in for the previously zapped pages, but these are again not DMA mapped for the IOMMU, so now the physical memory backing a given address in the VM are different for processor and assigned device access and DMA will fail.  In order to support this, QEMU would need to do more than simply zap pages from the process address space, they'd need to be unmapped from the IOMMU, but we can only do that using the original mapping size.  Effectively, memory hotplug is a better solution when device assignment is involved.
+
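+To make the mapping-granularity point concrete, here is a hedged, self-contained sketch of the VFIO type1 calls involved (not QEMU code, and without a device-backed IOMMU group the ioctls simply fail): the whole RAM block is pinned by one VFIO_IOMMU_MAP_DMA, and VFIO_IOMMU_UNMAP_DMA can only tear down that original iova/size region, so pages handed back by the balloon cannot be individually dropped from the IOMMU.
+
+```c
+#include <fcntl.h>
+#include <linux/vfio.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+
+int main(void)
+{
+    int container = open("/dev/vfio/vfio", O_RDWR);
+    if (container < 0) { perror("open /dev/vfio/vfio"); return 1; }
+    if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION) return 1;
+    /* A real setup would now bind a device, add its IOMMU group with
+     * VFIO_GROUP_SET_CONTAINER and select VFIO_TYPE1_IOMMU; omitted here. */
+
+    size_t ram_size = 1UL << 30;                 /* stand-in for guest RAM */
+    void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
+                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
+    if (ram == MAP_FAILED) { perror("mmap"); return 1; }
+
+    struct vfio_iommu_type1_dma_map map = {
+        .argsz = sizeof(map),
+        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
+        .vaddr = (unsigned long)ram,
+        .iova  = 0,
+        .size  = ram_size,           /* pins every page of the block */
+    };
+    if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map) < 0) perror("MAP_DMA");
+
+    struct vfio_iommu_type1_dma_unmap unmap = {
+        .argsz = sizeof(unmap),
+        .iova  = 0,
+        .size  = ram_size,           /* only the original region can be removed */
+    };
+    if (ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap) < 0) perror("UNMAP_DMA");
+    return 0;
+}
+```
+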
+Hi Alex, thanks for confirming. Changing the status to invalid.
+
+I think we can raise this issue to libvirt. When using virsh or virt-manager, the memory balloon is still enabled by default even if there's a device assignment. 
+
+Alex, I see this issue is closed but I have a question: do you know if the problem only occurs when the balloon is resized to return memory to the host? I ask because we have a situation where we start a VM with the balloon enabled, and later a device may be assigned via hot-plug. So I would like to avoid this issue by doing the following:
+
+if a vfio device is assigned;
+   resize the balloon to the maximal guest memory
+end
+
+Then, because we know we added a vfio device, never resize the balloon to return memory again.
+
+More information about what we want to do: https://github.com/kata-containers/runtime/pull/793
+
+Regards,
+Carlos
+
+There are two scenarios here, if we have a regular, directly assigned physical device (including VFs), vfio's page pinning will populate the full memory footprint of the guest regardless of the balloon.  The balloon is effectively fully deflated, but the balloon driver in the guest hasn't released the pages back for guest kernel use.  In that case marking the balloon as deflated at least allows those pages to be used since they're allocated.  However, if the assigned device is an mdev device, then the pages might only be pinned on usage, depending on the vendor driver, and pages acquired by the guest balloon driver are unlikely to be used by the in-guest driver for the device.  It's always possible that the mdev vendor driver could pin them anyway, but there is a chance that those pages are actually still freed to the host until that point.  Latest QEMU will of course enable the  balloon inhibitor for either case so further balloon inflation will no longer zap pages.
+
diff --git a/results/classifier/108/other/1763 b/results/classifier/108/other/1763
new file mode 100644
index 000000000..4392e862b
--- /dev/null
+++ b/results/classifier/108/other/1763
@@ -0,0 +1,27 @@
+other: 0.901
+device: 0.892
+graphic: 0.702
+PID: 0.488
+network: 0.439
+semantic: 0.419
+debug: 0.349
+performance: 0.341
+permissions: 0.273
+files: 0.246
+boot: 0.197
+socket: 0.163
+vnc: 0.134
+KVM: 0.009
+
+ldd fails with qemu-aarch64
+Description of problem:
+see the original issue for full details https://github.com/multiarch/qemu-user-static/issues/172
+Steps to reproduce:
+1. docker run --rm -it arm64v8/ubuntu:16.04 ldd /bin/ls
+
+Also possible on other newer OSs (eg: Ubuntu:18.04) with different compiled binaries.
+Additional information:
+```
+WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
+ldd: exited with unknown exit code (139)
+```
diff --git a/results/classifier/108/other/1763536 b/results/classifier/108/other/1763536
new file mode 100644
index 000000000..6c647f0f7
--- /dev/null
+++ b/results/classifier/108/other/1763536
@@ -0,0 +1,167 @@
+other: 0.852
+vnc: 0.824
+performance: 0.817
+PID: 0.772
+device: 0.751
+KVM: 0.741
+files: 0.738
+graphic: 0.731
+socket: 0.731
+debug: 0.726
+semantic: 0.724
+permissions: 0.720
+network: 0.639
+boot: 0.636
+
+go build fails under qemu-ppc64le-static (qemu-user)
+
+I am using qemu-user (built static) in a docker container environment.  When running multi-threaded go commands in the container (go build for example) the process may hang, report segfaults or other errors.  I built qemu-ppc64le from the upstream git (master).
+
+I see the problem running on a multi core system with Intel i7 processors.
+# cat /proc/cpuinfo | grep "model name"
+model name	: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
+model name	: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
+model name	: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
+model name	: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
+model name	: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
+model name	: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
+model name	: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
+model name	: Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
+
+Steps to reproduce:
+1) Build qemu-ppc64le as static and copy into docker build directory named it qemu-ppc64le-static.
+
+2) Add hello.go to docker build dir.
+
+package main
+import "fmt"
+func main() {
+	fmt.Println("hello world")
+}
+
+3) Create the Dockerfile from below:
+
+FROM ppc64le/golang:1.10.1-alpine3.
+COPY qemu-ppc64le-static /usr/bin/
+COPY hello.go /go
+
+4) Build container
+$ docker build -t qemutest -f Dockerfile ./go 
+
+5) Run test
+$ docker run -it qemutest
+
+/go # /usr/bin/qemu-ppc64le-static --version
+qemu-ppc64le version 2.11.93 (v2.12.0-rc3-dirty)
+Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
+
+/go # go version
+go version go1.10.1 linux/ppc64le
+
+/go # go build hello.go
+fatal error: fatal error: stopm holding locksunexpected signal during runtime execution
+
+panic during panic
+[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1003528c]
+
+runtime stack:
+runtime: unexpected return pc for syscall.Syscall6 called from 0xc42007f500
+stack: frame={sp:0xc4203be840, fp:0xc4203be860} stack=[0x4000b7ecf0,0x4000b928f0)
+
+syscall.Syscall6(0x100744e8, 0x3d, 0xc42050c140, 0x20, 0x18, 0x10422b80, 0xc4203be968[signal , 0x10012d88SIGSEGV: segmentation violation, 0xc420594000 code=, 0x00x1 addr=0x0 pc=0x1003528c)
+]
+
+runtime stack:
+	/usr/local/go/src/syscall/asm_linux_ppc64x.s:61runtime.throw(0x10472d19, 0x13)
+ +	/usr/local/go/src/runtime/panic.go:0x6c616 +0x68
+
+
+runtime.stopm()
+	/usr/local/go/src/runtime/proc.go:1939goroutine  +10x158
+ [runtime.exitsyscall0semacquire(0xc42007f500)
+	/usr/local/go/src/runtime/proc.go:3129 +]:
+0x130
+runtime.mcall(0xc42007f500)
+	/usr/local/go/src/runtime/asm_ppc64x.s:183 +0x58sync.runtime_Semacquire
+(0xc4201fab1c)
+	/usr/local/go/src/runtime/sema.go:56 +0x38
+
+----
+Note the results may differ between attempts; hangs and other faults sometimes happen.
+----
+If I run "go" single-threaded I don't see the problem, for example:
+
+/go # GOMAXPROCS=1 go build -p 1 hello.go 
+/go # ./hello
+hello world
+
+I see the same issue with arm64. I don't think this is a go issue, but I don't have real evidence to prove that. This problem looks similar to other problems I have seen reported against qemu running multi-threaded applications.
+
+I missed a step for reproduction.  Step 1 should be:
+docker run --rm --privileged multiarch/qemu-user-static:register
+This modprobes binfmt and registers qemu-ppc64le-static as the interpreter for ppc64le executables.
+
+FYI: to work around this issue you can limit the docker container to a single cpu like this:
+
+docker run --cpuset-cpus 0 -it cross-test3 go build hello.go
+
+This works for docker build as well.
+
+docker build --cpuset-cpus 0 .....
+
+Do you have a simpler repro case (i.e. one that doesn't require docker)?
+
+
+I will attempt to find a way to re-create this without docker. The key is we need a way to create a ppc64le (or arm64) fakeroot with go that we can chroot into. That is easy to do with docker. BTW: the use case of docker plus qemu-user-static is becoming a fairly common way to cross-build container images.
+
+I care more about the arm64 case, so if you're going to do one then that would be my preference.
+
+
+Using QEMU from tag v2.12.0-rc4 on Ubuntu Xenial ppc64el, it works.
+
+muriloo@jaspion1:~/go-docker$ sudo docker run --rm -it qemutest
+/go # /usr/bin/qemu-ppc64le-static --version
+qemu-ppc64 version 2.11.94 (v2.12.0-rc4-dirty)
+Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
+/go # go version
+go version go1.10.1 linux/ppc64le
+/go # go build hello.go
+/go # ./hello
+hello world
+/go #
+
+Here is how I configured QEMU:
+
+muriloo@jaspion1:~/sources/qemu$ ./configure --target-list=ppc64-linux-user --disable-system --disable-tools --static
+
+muriloo@jaspion1:~$ uname -a
+Linux jaspion1 4.4.0-119-generic #143-Ubuntu SMP Mon Apr 2 16:08:02 UTC 2018 ppc64le ppc64le ppc64le GNU/Linux
+
+With QEMU from tag v2.12.0-rc4 on Fedora 27 x86_64, it works too.
+
+muriloo@laptop$ docker run --rm -it qemutest
+/go # qemu-ppc64le-static --version
+qemu-ppc64le version 2.11.94 (v2.12.0-rc4)
+Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
+/go # go version
+go version go1.10.1 linux/ppc64le
+/go # go build hello.go
+/go # ./hello
+hello world
+/go #
+
+muriloo@laptop$ uname -a
+Linux laptop 4.15.17-300.fc27.x86_64 #1 SMP Thu Apr 12 18:19:17 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
+
+muriloo@laptop$ rpm -q docker
+docker-1.13.1-51.git4032bd5.fc27.x86_64
+
+Thanks for the update, I will test that version and report back (it may be a few days).
+
+We recently fixed bug #1696773 which was a cause of various crashes or other problems when trying to run go binaries under linux-user, including "go build hello.go". So I strongly suspect this is a duplicate of that bug. Could you test with the QEMU v4.1.0 rc3 or later, please?
+
+
+Have you ever tried with a newer version of QEMU?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/1764 b/results/classifier/108/other/1764
new file mode 100644
index 000000000..a8e1db5df
--- /dev/null
+++ b/results/classifier/108/other/1764
@@ -0,0 +1,16 @@
+device: 0.861
+performance: 0.433
+debug: 0.424
+graphic: 0.366
+network: 0.322
+semantic: 0.230
+PID: 0.177
+socket: 0.154
+other: 0.104
+boot: 0.078
+permissions: 0.056
+vnc: 0.048
+KVM: 0.045
+files: 0.028
+
+lsusb fails with qemu-system-x86_64 command (qemu-system-x86 package)
diff --git a/results/classifier/108/other/1765 b/results/classifier/108/other/1765
new file mode 100644
index 000000000..5aa4d2905
--- /dev/null
+++ b/results/classifier/108/other/1765
@@ -0,0 +1,110 @@
+other: 0.883
+graphic: 0.849
+semantic: 0.819
+debug: 0.807
+performance: 0.788
+network: 0.784
+permissions: 0.778
+device: 0.772
+vnc: 0.771
+socket: 0.766
+KVM: 0.765
+PID: 0.763
+boot: 0.760
+files: 0.753
+
+Linux kernel fails to boot on powernv machines with nvme device on s390x hosts
+Description of problem:
+When running a powernv guest with nvme device on a s390x host, the guest linux kernel fails to boot with the following panic:
+
+```
+nvme nvme0: pci function 0002:01:00.0
+nvme 0002:01:00.0: enabling device (0100 -> 0102)
+nvme nvme0: 1/0/0 default/read/poll queues
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+nvme nvme0: invalid id 0 completed on queue 0
+BUG: Kernel NULL pointer dereference on read at 0x00000008
+Faulting instruction address: 0xc0000000000c02ec
+Oops: Kernel access of bad area, sig: 11 [#1]
+LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA PowerNV
+Modules linked in: nvme powernv_flash(+) rtc_opal ibmpowernv mtd nvme_core
+CPU: 0 PID: 100 Comm: pb-console Not tainted 5.10.50-openpower1 #2
+NIP:  c0000000000c02ec LR: c00000000050d5dc CTR: c00000000024a2d0
+REGS: c00000003ffdfa00 TRAP: 0300   Not tainted  (5.10.50-openpower1)
+MSR:  9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE>  CR: 24000402  XER: 20000000
+CFAR: c00000000000f3c0 DAR: 0000000000000008 DSISR: 40000000 IRQMASK: 1 
+GPR00: c0000000000ba058 c00000003ffdfc90 c00000000180db00 0000000000000008 
+GPR04: 000000000000000a 0000000000000000 c000000002740210 c00000000274e218 
+GPR08: c00000000183be00 0000000080000000 0000000000000000 c0080000003ba798 
+GPR12: c00000000024a2d0 c000000001a30000 0000000000000000 0000000000000100 
+GPR16: 0000000000000004 0000000000000020 0000000000000100 c00000000bbe8080 
+GPR20: 0000000000000028 c000000001830100 0000000000000001 0000000000000000 
+GPR24: c000000001831a00 c000000001410c00 00000000ffff9097 0000000000400040 
+GPR28: 000000000000000a 000000000000000a 0000000000000008 0000000000000000 
+NIP [c0000000000c02ec] __arch_spin_trylock+0x4/0x24
+LR [c00000000050d5dc] _raw_spin_lock_irqsave+0x2c/0x78
+Call Trace:
+[c00000003ffdfc90] [c00000003ffdfcc0] 0xc00000003ffdfcc0 (unreliable)
+[c00000003ffdfcd0] [c0000000000ba058] complete+0x24/0x64
+[c00000003ffdfd10] [c00000000024a2f8] blk_end_sync_rq+0x28/0x3c
+[c00000003ffdfd30] [c00000000024f44c] __blk_mq_end_request+0x134/0x160
+[c00000003ffdfd70] [c0080000003b481c] nvme_complete_rq+0xcc/0x13c [nvme_core]
+[c00000003ffdfda0] [c0080000000a1078] nvme_pci_complete_rq+0x78/0x108 [nvme]
+[c00000003ffdfdd0] [c00000000024de38] blk_done_softirq+0xc0/0xd0
+[c00000003ffdfe30] [c00000000050da20] __do_softirq+0x238/0x28c
+[c00000003ffdff20] [c0000000000875d4] __irq_exit_rcu+0x80/0xc8
+[c00000003ffdff50] [c000000000087844] irq_exit+0x18/0x30
+[c00000003ffdff70] [c000000000011c4c] __do_irq+0x80/0xa0
+[c00000003ffdff90] [c00000000001d7a4] call_do_irq+0x14/0x24
+[c00000000bff3960] [c000000000011d20] do_IRQ+0xb4/0xbc
+[c00000000bff39f0] [c000000000008fac] hardware_interrupt_common_virt+0x1ac/0x1b0
+--- interrupt: 500 at arch_local_irq_restore+0xac/0xe8
+    LR = __raw_spin_unlock_irq+0x34/0x40
+[c00000000bff3cf0] [0000000000000000] 0x0 (unreliable)
+[c00000000bff3d20] [c0000000000a8344] __raw_spin_unlock_irq+0x34/0x40
+[c00000000bff3d50] [c0000000000a84b0] finish_task_switch+0x160/0x228
+[c00000000bff3df0] [c0000000000aa3d0] schedule_tail+0x20/0x8c
+[c00000000bff3e20] [c00000000000cb50] ret_from_fork+0x4/0x54
+Instruction dump:
+a14d0b7a 7da96b78 2f8a0000 419e0010 39400000 b14d0b7a 7c0004ac a1490b78 
+394affff b1490b78 4e800020 812d0000 <7d401829> 2c0a0000 40c20010 7d20192d 
+---[ end trace 6b7a11c45e4fc465 ]---
+
+Kernel panic - not syncing: Fatal exception
+Rebooting in 30 seconds..
+```
+
+The issue was noticed while running the avocado tests on an s390x host:
+
+```
+make check-venv
+./tests/venv/bin/avocado run tests/avocado/boot_linux_console.py:BootLinuxConsole.test_ppc_powernv8
+```
+
+But the issue can also be reproduced manually:
+Steps to reproduce:
+1. wget https://github.com/open-power/op-build/releases/download/v2.7/zImage.epapr
+2. ./qemu-system-ppc64 -nographic -M powernv8 -kernel zImage.epapr -append "console=tty0 console=hvc0" -device pcie-pci-bridge,id=bridge1,bus=pcie.1,addr=0x0 -device nvme,bus=pcie.2,addr=0x0,serial=1234
diff --git a/results/classifier/108/other/1766 b/results/classifier/108/other/1766
new file mode 100644
index 000000000..f0f03e21f
--- /dev/null
+++ b/results/classifier/108/other/1766
@@ -0,0 +1,18 @@
+device: 0.663
+performance: 0.492
+graphic: 0.446
+network: 0.404
+debug: 0.403
+semantic: 0.374
+KVM: 0.348
+vnc: 0.302
+PID: 0.256
+other: 0.254
+boot: 0.247
+socket: 0.117
+permissions: 0.090
+files: 0.018
+
+-strace should print the target program counter when a SIGSEGV occurs
+Additional information:
+
diff --git a/results/classifier/108/other/1766841 b/results/classifier/108/other/1766841
new file mode 100644
index 000000000..65e43f9fb
--- /dev/null
+++ b/results/classifier/108/other/1766841
@@ -0,0 +1,96 @@
+graphic: 0.823
+semantic: 0.788
+permissions: 0.765
+other: 0.757
+KVM: 0.754
+performance: 0.745
+debug: 0.731
+vnc: 0.714
+device: 0.713
+socket: 0.692
+network: 0.666
+PID: 0.638
+boot: 0.603
+files: 0.548
+
+QEMU 2.12 Running Problem in Windows 7 Installation
+
+QEMU Version: 2.12 (binary installer qemu-w64-setup-20180424.exe from Stefan Weil's website, so I am not sure whether I should report this to Weil by email or through this bug report system.)
+Host System: Windows 7 64bit
+Guest System: 9front 6350 (Codename “CONTENTS, MAINTAINED, STABLE”, Release 2018/02/02)
+
+QEMU Command:
+qemu-system-x86_64 -usb -device usb-mouse -hda plan9.qcow2.img -cdrom 9front-6350.iso -boot d
+
+QEMU warning: 
+(qemu-system-x86_64.exe:8844): GdkPixbuf-WARNING **: Cannot open pixbuf loader module file 'D:\qemu\lib\gdk-pixbuf-2.0\2.10.0\loaders.cache': No such file or directory
+
+This likely means that your installation is broken.
+Try running the command
+  gdk-pixbuf-query-loaders > D:\qemu\lib\gdk-pixbuf-2.0\2.10.0\loaders.cache
+to make things work again for the time being.
+
+(qemu-system-x86_64.exe:8844): Gtk-WARNING **: Could not find the icon 'window-minimize-symbolic-ltr'. The 'hicolor' theme was not found either, perhaps you need to install it.
+You can get a copy from:
+        http://icon-theme.freedesktop.org/releases
+
+On Wed, Apr 25, 2018 at 10:23:07AM -0000, Justin wrote:
+> [...]
+
+CCing Stefan Weil in case he hasn't seen this yet.
+
+Stefan
+
+
+Both messages are warnings – QEMU will work nevertheless.
+
+The first warning can be fixed as the message says (that requires an additional installation of Cygwin for gdk-pixbuf-query-loaders). It is also suppressed if an empty loaders.cache file exists. Newer installers (for example those from today) create such an empty file automatically.
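+
+For reference, an empty loaders.cache can be created from a Windows command prompt with something like the following (the path is assumed from the warning above; adjust it to the actual install directory):
+
+    type NUL > "D:\qemu\lib\gdk-pixbuf-2.0\2.10.0\loaders.cache"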
+
+I don't get the second warning.
+
+Rechecked on my side: the new installer (20180430) solves the first problem, but it introduces another one: when uninstalling QEMU, the loaders.cache file is not deleted automatically.
+
+For the second warning, I checked again: when I chose the full installation, the icons are displayed correctly; but when I installed only the x64 & x64w simulator, the icons are missing and the second warning is shown. It seems some dependency is not installed when performing a selective installation.
+
+I discovered that the following directory is not installed when the "Desktop icons" component is unchecked during installation:
+
+    qemu\share\icons
+
+That directory contains two subdirectories, "Adwaita" and "hicolor"; when they are present, the bug does not occur.
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting older bugs to "Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has been solved with a newer version of QEMU already. Thank you and sorry for the inconvenience.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/1766896 b/results/classifier/108/other/1766896
new file mode 100644
index 000000000..154c1ea99
--- /dev/null
+++ b/results/classifier/108/other/1766896
@@ -0,0 +1,195 @@
+graphic: 0.723
+other: 0.711
+KVM: 0.705
+vnc: 0.675
+permissions: 0.672
+debug: 0.662
+semantic: 0.635
+device: 0.627
+PID: 0.592
+performance: 0.568
+socket: 0.499
+boot: 0.448
+files: 0.448
+network: 0.315
+
+qemu-system-arm segfault in arm_v7m_mmu_idx_for_secstate 
+
+Attempting to emulate some baremetal ARM cortex-M* firmware with gdb causes a segfault every time.
+
+qemu invocation:
+qemu-system-arm -machine none -cpu cortex-m3 -nographic -monitor null -serial null -s -S -device loader,file=firmware.elf
+
+qemu seems to start up fine with that command. The segfault happens as soon as I connect from another console with
+
+arm-none-eabi-gdb firmware.elf
+> target remote localhost:1234
+# qemu segfaults, and kills arm-none-eabi-gdb along with it
+
+Here's a backtrace from qemu-system-arm:
+
+*********************************
+#0  armv7m_nvic_neg_prio_requested (opaque=0x0, secure=false)
+    at /home/sac/qemu/src/qemu/hw/intc/armv7m_nvic.c:383
+        s = 0x0
+#1  0x006e4806 in arm_v7m_mmu_idx_for_secstate (secstate=<optimized out>, env=0xb620263c)
+    at /home/sac/qemu/src/qemu/target/arm/cpu.h:2345
+        el = <optimized out>
+        mmu_idx = ARMMMUIdx_MPriv
+        el = <optimized out>
+        mmu_idx = <optimized out>
+#2  cpu_mmu_index (ifetch=false, env=0xb620263c) at /home/sac/qemu/src/qemu/target/arm/cpu.h:2358
+        mmu_idx = <optimized out>
+        el = <optimized out>
+        ifetch = <optimized out>
+        env = 0xb620263c
+        el = <optimized out>
+        mmu_idx = <optimized out>
+        el = <optimized out>
+        el = <optimized out>
+        mmu_idx = <optimized out>
+#3  arm_cpu_get_phys_page_attrs_debug (cs=0xb61fe480, addr=0, attrs=0xbfffc668)
+    at /home/sac/qemu/src/qemu/target/arm/helper.c:9858
+        cpu = 0xb61fe480
+        __func__ = "arm_cpu_get_phys_page_attrs_debug"
+        env = 0xb620263c
+        phys_addr = 6402535376434480864
+        page_size = 5
+        prot = -1239242724
+        ret = <optimized out>
+        fsr = 4294967041
+        fi = {s2addr = 0, stage2 = false, s1ptw = false, ea = false}
+        mmu_idx = <optimized out>
+#4  0x005729d1 in cpu_get_phys_page_attrs_debug (attrs=<optimized out>, addr=<optimized out>, 
+    cpu=<optimized out>) at /home/sac/qemu/src/qemu/include/qom/cpu.h:580
+        cc = <optimized out>
+        cc = <optimized out>
+#5  cpu_memory_rw_debug (cpu=0xb61fe480, addr=0, buf=0xbfffd6dc "", len=4, is_write=0)
+    at /home/sac/qemu/src/qemu/exec.c:3524
+        asidx = <optimized out>
+        attrs = {unspecified = 0, secure = 0, user = 0, requester_id = 15525}
+        l = <optimized out>
+        phys_addr = <optimized out>
+        page = 0
+        __PRETTY_FUNCTION__ = "cpu_memory_rw_debug"
+#6  0x005b4c5e in target_memory_rw_debug (is_write=false, len=4, buf=<optimized out>, addr=0, 
+    cpu=0xb61fe480) at /home/sac/qemu/src/qemu/gdbstub.c:56
+        cc = <optimized out>
+        cc = <optimized out>
+#7  gdb_handle_packet (s=s@entry=0xb6229800, line_buf=line_buf@entry=0xb6229810 "m0,4")
+    at /home/sac/qemu/src/qemu/gdbstub.c:1109
+        cpu = <optimized out>
+        cc = <optimized out>
+        p = 0xb6229813 "4"
+        thread = <optimized out>
+        ch = <optimized out>
+        reg_size = <optimized out>
+        type = <optimized out>
+        res = <optimized out>
+        buf = "m1\000", '\060' <repeats 109 times>, "ffffffff00000000d3010040\000t modification,\n     are permitted in any medium without royalt"...
+        mem_buf = '\000' <repeats 56 times>, "\377\377\377\377\000\000\000\000\323\001\000@", '\000' <repeats 716 times>...
+        registers = <optimized out>
+        addr = 0
+        len = 4
+        __func__ = "gdb_handle_packet"
+#8  0x005b55b3 in gdb_read_byte (ch=100, s=0xb6229800) at /home/sac/qemu/src/qemu/gdbstub.c:1664
+        reply = 43 '+'
+        reply = <optimized out>
+        repeat = <optimized out>
+#9  gdb_chr_receive (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>)
+    at /home/sac/qemu/src/qemu/gdbstub.c:1868
+        i = <optimized out>
+#10 0x00980319 in tcp_chr_read (chan=0xb6c86200, cond=G_IO_IN, opaque=0xb63fc6e0)
+    at chardev/char-socket.c:440
+        chr = <optimized out>
+        __func__ = "tcp_chr_read"
+        s = 0xb63fc6e0
+        buf = "$m0,4#fddInfo#c8read:arm-core.xml:0,ffb#08+;qRelocInsn+;fork-events+;vfork-events+;exec-events+;vContSupported+;QThreadEvents+;no-resumed+#df\363\377\377\000\000\000\000\274\354\377\277", '\000' <repeats 16 times>, "\272\356\377 \274\354\377\277", '\000' <repeats 16 times>, "\373\377\377\377\005\000\000\000"...
+        len = <optimized out>
+        size = <optimized out>
+#11 0xb7808c44 in g_main_context_dispatch () from /usr/lib/libglib-2.0.so.0
+No symbol table info available.
+#12 0x009e14d2 in glib_pollfds_poll () at util/main-loop.c:214
+        context = 0xb645f740
+        pfds = <optimized out>
+        context = <optimized out>
+        pfds = <optimized out>
+#13 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:261
+        context = 0xb645f740
+        ret = 1
+        spin_counter = 0
+        context = <optimized out>
+        ret = <optimized out>
+        spin_counter = 0
+        notified = false
+#14 main_loop_wait (nonblocking=0) at util/main-loop.c:515
+        ret = <optimized out>
+        timeout = 1000
+        timeout_ns = <optimized out>
+#15 0x00561781 in main_loop () at vl.c:1995
+No locals.
+#16 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4911
+        i = <optimized out>
+        snapshot = <optimized out>
+        linux_boot = <optimized out>
+        initrd_filename = <optimized out>
+        kernel_filename = <optimized out>
+        kernel_cmdline = <optimized out>
+        boot_order = <optimized out>
+        boot_once = <optimized out>
+        ds = <optimized out>
+        cyls = <optimized out>
+        heads = <optimized out>
+        secs = <optimized out>
+        translation = <optimized out>
+        opts = <optimized out>
+        machine_opts = <optimized out>
+        hda_opts = <optimized out>
+        icount_opts = <optimized out>
+        accel_opts = <optimized out>
+        olist = <optimized out>
+        optind = 14
+        optarg = 0xbffffcf6 "loader,file=firmware.elf"
+        loadvm = <optimized out>
+        machine_class = <optimized out>
+        cpu_model = <optimized out>
+        vga_model = <optimized out>
+        qtest_chrdev = <optimized out>
+        qtest_log = <optimized out>
+        pid_file = <optimized out>
+        incoming = <optimized out>
+        userconfig = <optimized out>
+        nographic = <optimized out>
+        display_type = <optimized out>
+        display_remote = <optimized out>
+        log_mask = <optimized out>
+        log_file = <optimized out>
+        trace_file = <optimized out>
+        maxram_size = <optimized out>
+        ram_slots = <optimized out>
+        vmstate_dump_file = <optimized out>
+        main_loop_err = 0x0
+        err = 0x0
+        list_data_dirs = <optimized out>
+        dirs = <optimized out>
+        bdo_queue = {sqh_first = 0x0, sqh_last = 0xbffff918}
+        __func__ = "main"
+
+Follow-up to IRC discussions with stsquad and danpb: the problem is "-machine none", which prevents all the data structures from being initialized properly.
+
+You also did not specify any amount of RAM with the -m parameter, and the "none" machine does not have any RAM by default.
+
+Yes; cortex-m3 will only work on machine types that are expecting it (i.e. ones which instantiate the M-profile NVIC interrupt controller, which is really an integral part of the CPU).
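+
+As a sketch of an invocation that avoids the crash, any machine type that models an M-profile core should do; for example, with the Cortex-M3 based mps2-an385 board (assuming the firmware is built for that board's memory map):
+
+    qemu-system-arm -machine mps2-an385 -cpu cortex-m3 -nographic -monitor null -serial null -s -S -device loader,file=firmware.elf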
+
+We should catch this case and make QEMU exit with a more helpful message.
+
+
+https://patchwork.ozlabs.org/patch/924145/ is a patch which improves our error checking for this case. The command that previously segfaulted should now exit with the error message:
+qemu-system-arm: This board cannot be used with Cortex-M CPUs
+
+
+The patch referred to in comment #4 has now been committed, so from QEMU 3.0 this will fail with a useful error message to tell the user their choice of machine and CPU aren't compatible.
+
+
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=95f875654ae8b433b5
+
diff --git a/results/classifier/108/other/1766904 b/results/classifier/108/other/1766904
new file mode 100644
index 000000000..43bf45a5a
--- /dev/null
+++ b/results/classifier/108/other/1766904
@@ -0,0 +1,62 @@
+vnc: 0.785
+KVM: 0.783
+other: 0.761
+permissions: 0.732
+performance: 0.719
+network: 0.703
+graphic: 0.695
+debug: 0.691
+semantic: 0.688
+boot: 0.683
+device: 0.683
+socket: 0.682
+PID: 0.681
+files: 0.673
+
+Creating high hdd load (with constant fsyncs) on a SATA disk leads to freezes and errors in guest dmesg
+
+After upgrading from qemu 2.10.0+dfsg-2 to 2.12~rc3+dfsg-2 (on a Debian sid host), a CentOS 7 guest started to show freezes and ATA errors in dmesg during disk workloads that write many small files and issue repeated fsyncs.
+
+Host kernel 4.15.0-3-amd64.
+Guest kernel 3.10.0-693.21.1.el7.x86_64 (slightly older guest kernel was tested too with same result).
+
+Script that reproduces the bug (the first run usually goes smoothly; the second and later runs result in dmesg errors and freezes):
+
+http://paste.debian.net/hidden/472fb220/
+
+Sample of error messages in guest dmesg:
+
+http://paste.debian.net/hidden/8219e234/
+
+A workaround that I am using right now: I have detached this SATA storage and reattached the same .qcow2 file as SCSI - this has fixed the issue for me.
+
+Copying command line into bug so we don't lose it:
+
+LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=spice /usr/bin/kvm -name guest=myvm.local,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-myvm.local/master-key.aes -machine pc-i440fx-2.8,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu IvyBridge -m 2048 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid b10ea3d4-410c-4dc3-b9b0-818d43314323 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-myvm.local/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device ahci,id=sata0,bus=pci.0,addr=0x7 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/home/user/data/work/virt-images/myvm.local.qcow2,format=qcow2,if=none,id=drive-sata0-0-0 -device ide-hd,bus=sata0.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=29 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:39:66:3c,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-3-myvm.local/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel1,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on
+
+and ccing in jsnow
+
+Relevant bits appear to be:
+
+-M pc-i440fx-2.8
+-cpu IvyBridge
+-m 2048 
+-realtime mlock=off 
+-smp 2,sockets=2,cores=1,threads=1
+-device ahci,id=sata0,bus=pci.0,addr=0x7 
+-drive file=/home/user/data/work/virt-images/myvm.local.qcow2,format=qcow2,if=none,id=drive-sata0-0-0 
+-device ide-hd,bus=sata0.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=1 
+
+So this is a 2.8 PC machine that we've configured to use AHCI instead. I see some blips about CHS being zero, but that's expected in response to a (successful) flush (0xE7) command, so it looks like it's stalling out. I'll have to try to reproduce and see if I can trigger the hang.
+
+
+I am getting the exact same issue. The freeze occurred when I tried to install Ubuntu 18.04 with qemu-2.12. However, it seems to be working just fine with qemu-2.11.1. So it seems that something in between 2.11.1 and 2.12 is the culprit.
+
+It's still possible to reproduce this issue with qemu-master (a3ac12fba028df90f7b3dbec924995c126c41022).
+
+Jake, can you try the fix I posted in https://bugs.launchpad.net/qemu/+bug/1769189 ? I'm not actually confident it's the same bug, but it might be worth a shot. It fixes a bug that was made more prominent inbetween 2.11 and 2.12, so it fits the timeline presented here.
+
+@John Snow Thanks! After applying that patch, I couldn't reproduce this anymore. At least for me it seems that these two issues refer to the same bug.
+
+Great, thank you so much for helping!
+
diff --git a/results/classifier/108/other/1767126 b/results/classifier/108/other/1767126
new file mode 100644
index 000000000..a79aa1485
--- /dev/null
+++ b/results/classifier/108/other/1767126
@@ -0,0 +1,30 @@
+graphic: 0.618
+semantic: 0.476
+other: 0.473
+device: 0.433
+performance: 0.369
+debug: 0.191
+network: 0.182
+permissions: 0.158
+boot: 0.115
+PID: 0.098
+files: 0.068
+vnc: 0.042
+socket: 0.042
+KVM: 0.026
+
+Man page documents qemu -drive if=scsi but it no longer works
+
+The qemu man page section documenting the -drive option contains
+
+           if=interface
+               This option defines on which type on interface the drive is
+               connected.  Available types are: ide, scsi, sd, mtd, floppy,
+               pflash, virtio, none.
+
+but if=scsi no longer works as of version 2.12.0.
+
+If you really have to make backwards incompatible changes, it would be helpful if you could at least document them.
+
+The -drive if=scsi still works on the machines that have a SCSI bus by default. And the change for x86 has been documented in the ChangeLog: https://wiki.qemu.org/ChangeLog/2.12
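+
+For machine types where if=scsi was removed (such as the x86 PC machines), a rough equivalent is to create a SCSI controller explicitly and attach the disk to it, e.g. (disk.qcow2 is a placeholder; other controllers such as lsi53c895a work as well):
+
+    qemu-system-x86_64 -device virtio-scsi-pci,id=scsi0 -drive if=none,id=hd0,file=disk.qcow2 -device scsi-hd,drive=hd0,bus=scsi0.0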
+
diff --git a/results/classifier/108/other/1767176 b/results/classifier/108/other/1767176
new file mode 100644
index 000000000..e15d28664
--- /dev/null
+++ b/results/classifier/108/other/1767176
@@ -0,0 +1,66 @@
+debug: 0.887
+other: 0.863
+performance: 0.855
+device: 0.849
+files: 0.847
+semantic: 0.843
+network: 0.838
+graphic: 0.822
+boot: 0.816
+vnc: 0.812
+socket: 0.810
+PID: 0.787
+permissions: 0.775
+KVM: 0.749
+
+GTK build fails with qemu 2.12.0
+
+With the 2.12.0 release of QEMU, passing `--enable-gtk` to configure causes the build to fail. I'm running macOS 10.13.5 with Xcode 9.3, FWIW.
+
+I'm building against GTK 2.24.32, which I appreciate is no longer the preferred version here, but I don't think the error is related to that aspect. I'll try and find the time later to attempt a GTK3 build to check that though.
+
+```
+ui/gtk.c:1147:16: error: use of undeclared identifier 'qemu_input_map_osx_to_qcode'; did you mean 'qemu_input_map_usb_to_qcode'?
+        return qemu_input_map_osx_to_qcode;
+               ^~~~~~~~~~~~~~~~~~~~~~~~~~~
+               qemu_input_map_usb_to_qcode
+/private/tmp/qemu-20180426-60786-1av6pq8/qemu-2.12.0/include/ui/input.h:99:22: note: 'qemu_input_map_usb_to_qcode' declared here
+extern const guint16 qemu_input_map_usb_to_qcode[];
+                     ^
+```
+
+I tried poking around locally by applying the following diff, based on a very brief glance over the code involved, but that simply causes the build to error out in a different way at a later point, so I assume I'm doing something stupid.
+
+
+```
+diff --git a/Makefile b/Makefile
+index d71dd5b..e857c3c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -313,6 +313,7 @@ KEYCODEMAP_FILES = \
+ 		 ui/input-keymap-qnum-to-qcode.c \
+ 		 ui/input-keymap-usb-to-qcode.c \
+ 		 ui/input-keymap-win32-to-qcode.c \
++		 ui/input-keymap-osx-to-qcode.c \
+ 		 ui/input-keymap-x11-to-qcode.c \
+ 		 ui/input-keymap-xorgevdev-to-qcode.c \
+ 		 ui/input-keymap-xorgkbd-to-qcode.c \
+diff --git a/include/ui/input.h b/include/ui/input.h
+index 16395ab..8183840 100644
+--- a/include/ui/input.h
++++ b/include/ui/input.h
+@@ -101,6 +101,9 @@ extern const guint16 qemu_input_map_usb_to_qcode[];
+ extern const guint qemu_input_map_win32_to_qcode_len;
+ extern const guint16 qemu_input_map_win32_to_qcode[];
+ 
++extern const guint qemu_input_map_osx_to_qcode_len;
++extern const guint16 qemu_input_map_osx_to_qcode[];
++
+ extern const guint qemu_input_map_x11_to_qcode_len;
+ extern const guint16 qemu_input_map_x11_to_qcode[];
+```
+
+Reproduces with GTK+3 rather than GTK+ as expected, FWIW.
+
+Resolved via https://git.qemu.org/?p=qemu.git;a=patch;h=656282d245b49b84d4a1a47d7b7ede482d541776, FYI.
+
diff --git a/results/classifier/108/other/1767200 b/results/classifier/108/other/1767200
new file mode 100644
index 000000000..bb1d49972
--- /dev/null
+++ b/results/classifier/108/other/1767200
@@ -0,0 +1,36 @@
+graphic: 0.822
+device: 0.659
+semantic: 0.544
+network: 0.426
+performance: 0.331
+files: 0.286
+other: 0.284
+permissions: 0.250
+socket: 0.234
+vnc: 0.216
+boot: 0.212
+PID: 0.178
+debug: 0.163
+KVM: 0.009
+
+Kernel Panic Unable to mount root fs on unknown-block(31,3)
+
+Using the latest qemu:
+qemu-system-arm.exe -kernel C:\Users\a\Downloads\kernel-qemu-4.4.34-jessie -cpu arm1176 -m 256 -machine versatilepb -cdrom C:\Users\a\Downloads\picore-9.0.3.img
+
+Gives error:
+Kernel Panic Unable to mount root fs on unknown-block(31,3)
+
+I have tried different ARMv6 and ARMv7 images/kernels with the same result.
+
+
+
+Did it work with a previous version of QEMU? If yes, which version? And since you're using -kernel ... don't you maybe have to use -initrd here, too?
+
+That's a guest error message, meaning it couldn't mount the root filesystem. This is almost certainly because you're not telling the guest kernel the right argument for where to find its rootfs (which you've provided with -cdrom). Googling suggests that you're getting this kernel from https://github.com/dhruvvyas90/qemu-rpi-kernel -- which has a readme file which tells you what command line options you need to use. Specifically:
+ * you need to have 'root=/dev/sda2' in your -append argument
+ * you want to use -hda rather than -cdrom
+
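+Putting those suggestions together, the invocation would look roughly like this (a sketch only; the exact root= value depends on the partition layout of the image):
+
+    qemu-system-arm -kernel kernel-qemu-4.4.34-jessie -cpu arm1176 -m 256 -machine versatilepb -hda picore-9.0.3.img -append "root=/dev/sda2"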
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/1768295 b/results/classifier/108/other/1768295
new file mode 100644
index 000000000..f4b481ecb
--- /dev/null
+++ b/results/classifier/108/other/1768295
@@ -0,0 +1,75 @@
+graphic: 0.681
+permissions: 0.620
+other: 0.606
+semantic: 0.517
+debug: 0.483
+device: 0.474
+vnc: 0.457
+PID: 0.440
+performance: 0.420
+boot: 0.379
+network: 0.370
+KVM: 0.365
+socket: 0.291
+files: 0.284
+
+VLLDM/VLSTM trigger UsageFault in the Secure Mode
+
+The VLLDM/VLSTM instructions trigger UsageFault when they are supposed to behave as NOP.
+
+Version: 
+$ qemu-system-arm --version
+QEMU emulator version 2.11.93
+
+VLLDM and VLSTM are instructions newly added to ARMv8-M Mainline Profile. Although they are FP instructions and the FP support of the M profile is not implemented by QEMU, the Armv8-M Architecture Reference Manual specifies that they should behave as NOP even in this case:
+
+C2.4.268 VLLDM:
+
+> If the Floating-point Extension is not implemented, this instruction is available in Secure state, but behaves as a NOP.
+
+C2.4.269 VLSTM:
+
+> If the Floating-point Extension is not implemented, this instruction is available in Secure state, but behaves as a NOP.
+
+VLLDM and VLSTM are generated automatically by the compiler to save and restore the floating point registers (in a lazy manner) during a Non-Secure function call. An example is shown below:
+
+10000064 <__gnu_cmse_nonsecure_call>:
+10000064:       e92d 4fe0       stmdb   sp!, {r5, r6, r7, r8, r9, sl, fp, lr}
+10000068:       4627            mov     r7, r4
+1000006a:       46a0            mov     r8, r4
+1000006c:       46a1            mov     r9, r4
+1000006e:       46a2            mov     sl, r4
+10000070:       46a3            mov     fp, r4
+10000072:       46a4            mov     ip, r4
+10000074:       b0a2            sub     sp, #136        ; 0x88
+10000076:       ec2d 0a00       vlstm   sp
+1000007a:       f384 8800       msr     CPSR_f, r4
+1000007e:       4625            mov     r5, r4
+10000080:       4626            mov     r6, r4
+10000082:       47a4            blxns   r4
+10000084:       ec3d 0a00       vlldm   sp
+10000088:       b022            add     sp, #136        ; 0x88
+1000008a:       e8bd 8fe0       ldmia.w sp!, {r5, r6, r7, r8, r9, sl, fp, pc}
+
+Yes, you're right -- I hadn't noticed this wrinkle of the architecture. I'll put this on my todo list -- it should be straightforward.
+
+Do you have a convenient test binary that I could use as a test case?
+
+
+I attached a ZIP file containing a set of binary images that reproduces the problem.
+
+Secure.elf and NonSecure.elf contain the code that runs in the Secure and Non-Secure mode, respectively. They must be loaded simultaneously by using the generic loader:
+
+    $ qemu-system-arm -device loader,file=Secure.elf -kernel NonSecure.elf -machine mps2-an505 -nographic -s -cpu cortex-m33
+
+The problematic instructions are located at 0x10000064 <__gnu_cmse_nonsecure_call> in Secure.elf. The program runs successfully and outputs a message via UART0 if they are replaced with NOPs, as shown below:
+
+    $ qemu-system-arm -device loader,file=Secure-patched.elf -kernel NonSecure.elf -machine mps2-an505 -nographic -s -cpu cortex-m33
+    I'm running in the Non-Secure mode.
+
+
+Submitted this patch for review which should fix this bug:
+https://patchwork.ozlabs.org/patch/907959/
+
+
+Fix now in master as commit b1e5336a9899016c53d59 (and cc stable), so should be in 3.0 and 2.12.1.
+
diff --git a/results/classifier/108/other/1769067 b/results/classifier/108/other/1769067
new file mode 100644
index 000000000..89714f34b
--- /dev/null
+++ b/results/classifier/108/other/1769067
@@ -0,0 +1,27 @@
+device: 0.849
+network: 0.748
+socket: 0.561
+other: 0.521
+graphic: 0.514
+vnc: 0.466
+PID: 0.425
+performance: 0.411
+boot: 0.384
+files: 0.328
+semantic: 0.258
+permissions: 0.241
+KVM: 0.193
+debug: 0.091
+
+virtio-net ignores the absence of the VIRTIO_NET_F_CTRL_VQ feature bit
+
+When negotiating virtio-net feature bits, the device ignores the absence of the VIRTIO_NET_F_CTRL_VQ bit in the driver feature bits and provides three virtqueues, including the control virtqueue, even though the driver is expecting only the receive and transmit virtqueues. Looking into the source code, it appears it always assumes the presence of the control virtqueue, thus violating the VIRTIO 1.0 specification.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/151
+
+
diff --git a/results/classifier/108/other/1769189 b/results/classifier/108/other/1769189
new file mode 100644
index 000000000..42e55a10c
--- /dev/null
+++ b/results/classifier/108/other/1769189
@@ -0,0 +1,144 @@
+KVM: 0.892
+permissions: 0.882
+other: 0.877
+performance: 0.875
+debug: 0.871
+device: 0.867
+boot: 0.861
+vnc: 0.859
+files: 0.853
+PID: 0.838
+semantic: 0.832
+network: 0.792
+graphic: 0.791
+socket: 0.700
+
+Issue with qemu 2.12.0 + SATA
+
+(first reported here: https://bugzilla.tianocore.org/show_bug.cgi?id=949 )
+
+I had a Windows 10 VM running perfectly fine with OVMF UEFI. Since I upgraded to qemu 2.12, the guest hangs for a couple of minutes, works for a few seconds, hangs again, and so on. By "hang" I mean it doesn't freeze, but it looks like it's waiting on I/O or something: I can move the mouse, but everything needing disk access is unresponsive.
+
+What doesn't work: qemu 2.12 with OVMF
+What works: using BIOS or downgrading qemu to 2.11.1.
+
+Platform is Arch Linux 4.16.7 on Skylake; I have attached the VM XML file.
+
+
+
+I'm seeing the same Win10 I/O stall problem with qemu 2.12, but with a VM using BIOS, not UEFI.
+The only solution for me for now is downgrading.
+
+I have the same issue. When I open the task manager in the virtualized Windows 10 VM, I see the HDD active time is at 100% but the data transfer rate is actually 0 B/s.
+I've tried every combination of the options below, and the issue was always reproducible with qemu-2.12.0-1 and never with qemu-2.11.1-2.
+
+Linux kernels:
+- 4.14.5-1
+- 4.16.7
+
+Windows 10 Version (for the VM)
+- 1709
+- 1803
+
+Boot HDD for VM
+- Actual SSD (/dev/sda)
+- QCOW2 Image
+
+QEMU 
+- qemu-2.11.1-2
+- qemu-2.12.0-1
+
+
+I also use Arch Linux and have also downgraded to a pre-2.12 QEMU.
+
+
+I have done some further tests, and the problem seems to be SATA, not UEFI; I have updated the bug description to reflect this.
+
+François: Would it be possible for you to try a bisect build to try and figure out which change in qemu caused the problem?
+
+
+For me it hangs with SATA, but IDE is fine.
+
+Windows 7 Version (for the VM)
+- SP1
+
+Boot HDD for VM
+- Actual HDD (/dev/sda)
+- QCOW2 Image
+
+QEMU
+- qemu-2.12.0-1 
+
+I've tried bisecting a few times, but since my reproducer wasn't reliable enough, I didn't identify the issue. (I see a bisect reported on qemu ML which seems like a bogus result, similar to mine).
+
+In my case, after the "hang", Windows 10 resets the AHCI device after 2 minutes and it continues on until another hang happens. It seems fairly random. Increasing the number of vCPUs assigned seemed to increase the likelihood of the hang.
+
+I also went so far as to instrument an injection of the ahci interrupt via the monitor (total kludge, I know), and the guest did get out of the hung condition right away when I did that.
+
+I tried bisecting as well, and I wound up at:
+
+1a423896 -- five out of five boot attempts succeeded.
+d759c951 -- five out of five boot attempts failed.
+
+
+
+d759c951f3287fad04210a52f2dc93f94cf58c7f is the first bad commit
+commit d759c951f3287fad04210a52f2dc93f94cf58c7f
+Author: Alex Bennée <email address hidden>
+Date:   Tue Feb 27 12:52:48 2018 +0300
+
+    replay: push replay_mutex_lock up the call tree
+
+
+
+
+My methodology was to boot QEMU like this:
+
+./x86_64-softmmu/qemu-system-x86_64 -m 4096 -cpu host -M q35 -enable-kvm -smp 4 -drive id=sda,if=none,file=/home/bos/jhuston/windows_10.qcow -device ide-hd,drive=sda -qmp tcp::4444,server,nowait
+
+and run it three times with -snapshot and see if it hung during boot; if it did, I marked the commit bad. If it did not, I booted and attempted to log in and run CrystalDiskMark. If it froze before I even launched CDM, I marked it bad.
+
+Interestingly enough, on a subsequent (presumably bad) commit (6dc0f529) which hangs fairly reliably on bootup (66%) I can occasionally get into Windows 10 and run CDM -- and that unfortunately does not seem to trigger the error again, so CDM doesn't look like a reliable way to trigger the hangs.
+
+
+
+
+Anyway, d759c951 definitely appears to change the odds of AHCI locking up during boot for me, and I suppose it might have something to do with how it is changing the BQL acquisition/release in main-loop.c, but I am not sure why/what yet.
+
+Before this patch, we only unlocked the iothread (and re-locked it) if there was a timeout; after this patch we *always* unlock and re-lock the iothread. This is probably just exposing some latent bug in the AHCI emulator that has always existed, but now the odds of seeing it are much higher.
+
+I'll have to dig as to what the race is -- I'm not sure just yet.
+
+
+If those of you who are seeing this bug too could confirm for me that d759c951 appears to be the guilty party, that probably wouldn't hurt.
+
+Thanks!
+--js
+
+I can confirm that for me commit d759c951 does cause / expose the issue.
+
+There might be multiple issues present and I'm having difficulty reliably doing any kind of regression testing here, but I think this patch helps fix at least one of the issues I was seeing that occurs specifically during early boot. It may fix other hangs.
+
+
+
+Oughtta be fixed in current master, will be fixed in 2.12.1 and 3.0.
+
+Hi, Where can I find the fix patch at present?
+
+5694c7eacce6b263ad7497cc1bb76aad746cfd4e ahci: fix PxCI register race
+
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=5694c7eacce6b263ad7497cc1bb76aad746cfd4e
+
+Could this affect virtio-scsi? I'm not so sure, since it's not perfectly reliable to reproduce, but v2.12.0 was hanging for me for a few minutes at a time with virtio-scsi cache=writeback showing 100% disk utilization. I never had issues booting up, and didn't try SATA. v2.11.1 was fine.
+
+My first attempt to bisect didn't turn out right... had some false positives I guess. The 2nd attempt (telling git the bads from first try) got to 89e46eb477113550485bc24264d249de9fd1260a as latest good (which is 4 commits *after* the bisect by John Snow) and 7273db9d28d5e1b7a6c202a5054861c1f0bcc446 as bad.
+
+But testing with this patch, it seems to work (or false positive still... after a bunch of usage). And so I didn't finish the bisect.
+
+The fix posted exclusively changes the behavior of AHCI devices; however the locking changes that jostled the AHCI bug loose could in theory jostle loose some bugs in other devices, too.
+
+I don't think it is possible that the fix for AHCI would have any impact on virtio-scsi devices.
+
+If you're seeing issues in virtio-scsi, I'd make a new writeup in a new LP.
+--js
+