author     Christian Krinitsin <mail@krinitsin.com>  2025-07-03 19:39:53 +0200
committer  Christian Krinitsin <mail@krinitsin.com>  2025-07-03 19:39:53 +0200
commit     dee4dcba78baf712cab403d47d9db319ab7f95d6 (patch)
tree       418478faf06786701a56268672f73d6b0b4eb239 /results/classifier/zero-shot/108/performance
parent     4d9e26c0333abd39bdbd039dcdb30ed429c475ba (diff)
restructure results
Diffstat (limited to 'results/classifier/zero-shot/108/performance')
-rw-r--r--results/classifier/zero-shot/108/performance/1005192
-rw-r--r--results/classifier/zero-shot/108/performance/101838
-rw-r--r--results/classifier/zero-shot/108/performance/103157
-rw-r--r--results/classifier/zero-shot/108/performance/103231
-rw-r--r--results/classifier/zero-shot/108/performance/103698764
-rw-r--r--results/classifier/zero-shot/108/performance/1042388375
-rw-r--r--results/classifier/zero-shot/108/performance/105616
-rw-r--r--results/classifier/zero-shot/108/performance/107859
-rw-r--r--results/classifier/zero-shot/108/performance/111986141
-rw-r--r--results/classifier/zero-shot/108/performance/112995768
-rw-r--r--results/classifier/zero-shot/108/performance/113556753
-rw-r--r--results/classifier/zero-shot/108/performance/113993
-rw-r--r--results/classifier/zero-shot/108/performance/117349043
-rw-r--r--results/classifier/zero-shot/108/performance/122326
-rw-r--r--results/classifier/zero-shot/108/performance/122828581
-rw-r--r--results/classifier/zero-shot/108/performance/125356357
-rw-r--r--results/classifier/zero-shot/108/performance/131226
-rw-r--r--results/classifier/zero-shot/108/performance/132123
-rw-r--r--results/classifier/zero-shot/108/performance/132146468
-rw-r--r--results/classifier/zero-shot/108/performance/13416
-rw-r--r--results/classifier/zero-shot/108/performance/139993930
-rw-r--r--results/classifier/zero-shot/108/performance/144216
-rw-r--r--results/classifier/zero-shot/108/performance/147345134
-rw-r--r--results/classifier/zero-shot/108/performance/1477306
-rw-r--r--results/classifier/zero-shot/108/performance/152255
-rw-r--r--results/classifier/zero-shot/108/performance/152917355
-rw-r--r--results/classifier/zero-shot/108/performance/154644563
-rw-r--r--results/classifier/zero-shot/108/performance/156949127
-rw-r--r--results/classifier/zero-shot/108/performance/158619466
-rw-r--r--results/classifier/zero-shot/108/performance/158925736
-rw-r--r--results/classifier/zero-shot/108/performance/159033654
-rw-r--r--results/classifier/zero-shot/108/performance/159524077
-rw-r--r--results/classifier/zero-shot/108/performance/165482643
-rw-r--r--results/classifier/zero-shot/108/performance/167238338
-rw-r--r--results/classifier/zero-shot/108/performance/167728
-rw-r--r--results/classifier/zero-shot/108/performance/1714750132
-rw-r--r--results/classifier/zero-shot/108/performance/171624
-rw-r--r--results/classifier/zero-shot/108/performance/171862
-rw-r--r--results/classifier/zero-shot/108/performance/172118752
-rw-r--r--results/classifier/zero-shot/108/performance/172398461
-rw-r--r--results/classifier/zero-shot/108/performance/172570799
-rw-r--r--results/classifier/zero-shot/108/performance/172811674
-rw-r--r--results/classifier/zero-shot/108/performance/173127
-rw-r--r--results/classifier/zero-shot/108/performance/173481062
-rw-r--r--results/classifier/zero-shot/108/performance/173557647
-rw-r--r--results/classifier/zero-shot/108/performance/173764
-rw-r--r--results/classifier/zero-shot/108/performance/174331
-rw-r--r--results/classifier/zero-shot/108/performance/176847
-rw-r--r--results/classifier/zero-shot/108/performance/178428
-rw-r--r--results/classifier/zero-shot/108/performance/178932
-rw-r--r--results/classifier/zero-shot/108/performance/1815889929
-rw-r--r--results/classifier/zero-shot/108/performance/1818207157
-rw-r--r--results/classifier/zero-shot/108/performance/182025
-rw-r--r--results/classifier/zero-shot/108/performance/183449668
-rw-r--r--results/classifier/zero-shot/108/performance/184986
-rw-r--r--results/classifier/zero-shot/108/performance/185312364
-rw-r--r--results/classifier/zero-shot/108/performance/185767
-rw-r--r--results/classifier/zero-shot/108/performance/1859021284
-rw-r--r--results/classifier/zero-shot/108/performance/185908157
-rw-r--r--results/classifier/zero-shot/108/performance/187334135
-rw-r--r--results/classifier/zero-shot/108/performance/187576263
-rw-r--r--results/classifier/zero-shot/108/performance/188145075
-rw-r--r--results/classifier/zero-shot/108/performance/188340062
-rw-r--r--results/classifier/zero-shot/108/performance/188425
-rw-r--r--results/classifier/zero-shot/108/performance/188630629
-rw-r--r--results/classifier/zero-shot/108/performance/189208150
-rw-r--r--results/classifier/zero-shot/108/performance/189570370
-rw-r--r--results/classifier/zero-shot/108/performance/189669
-rw-r--r--results/classifier/zero-shot/108/performance/189675455
-rw-r--r--results/classifier/zero-shot/108/performance/190189288
-rw-r--r--results/classifier/zero-shot/108/performance/190198191
-rw-r--r--results/classifier/zero-shot/108/performance/192617463
-rw-r--r--results/classifier/zero-shot/108/performance/194035
-rw-r--r--results/classifier/zero-shot/108/performance/201468
-rw-r--r--results/classifier/zero-shot/108/performance/201624
-rw-r--r--results/classifier/zero-shot/108/performance/206830
-rw-r--r--results/classifier/zero-shot/108/performance/218335
-rw-r--r--results/classifier/zero-shot/108/performance/218716
-rw-r--r--results/classifier/zero-shot/108/performance/219345
-rw-r--r--results/classifier/zero-shot/108/performance/221618
-rw-r--r--results/classifier/zero-shot/108/performance/231932
-rw-r--r--results/classifier/zero-shot/108/performance/232526
-rw-r--r--results/classifier/zero-shot/108/performance/236523
-rw-r--r--results/classifier/zero-shot/108/performance/239335
-rw-r--r--results/classifier/zero-shot/108/performance/2410107
-rw-r--r--results/classifier/zero-shot/108/performance/246023
-rw-r--r--results/classifier/zero-shot/108/performance/255128
-rw-r--r--results/classifier/zero-shot/108/performance/256528
-rw-r--r--results/classifier/zero-shot/108/performance/257245
-rw-r--r--results/classifier/zero-shot/108/performance/268256
-rw-r--r--results/classifier/zero-shot/108/performance/284828
-rw-r--r--results/classifier/zero-shot/108/performance/28516
-rw-r--r--results/classifier/zero-shot/108/performance/28616
-rw-r--r--results/classifier/zero-shot/108/performance/290628
-rw-r--r--results/classifier/zero-shot/108/performance/34316
-rw-r--r--results/classifier/zero-shot/108/performance/40416
-rw-r--r--results/classifier/zero-shot/108/performance/43016
-rw-r--r--results/classifier/zero-shot/108/performance/44516
-rw-r--r--results/classifier/zero-shot/108/performance/49048478
-rw-r--r--results/classifier/zero-shot/108/performance/49852362
-rw-r--r--results/classifier/zero-shot/108/performance/524447338
-rw-r--r--results/classifier/zero-shot/108/performance/58923129
-rw-r--r--results/classifier/zero-shot/108/performance/59526
-rw-r--r--results/classifier/zero-shot/108/performance/59734
-rw-r--r--results/classifier/zero-shot/108/performance/59735152
-rw-r--r--results/classifier/zero-shot/108/performance/64219
-rw-r--r--results/classifier/zero-shot/108/performance/71934
-rw-r--r--results/classifier/zero-shot/108/performance/75391649
-rw-r--r--results/classifier/zero-shot/108/performance/76097644
-rw-r--r--results/classifier/zero-shot/108/performance/79834768419
-rw-r--r--results/classifier/zero-shot/108/performance/8016
-rw-r--r--results/classifier/zero-shot/108/performance/84937
-rw-r--r--results/classifier/zero-shot/108/performance/86116
-rw-r--r--results/classifier/zero-shot/108/performance/91920
-rw-r--r--results/classifier/zero-shot/108/performance/98574
-rw-r--r--results/classifier/zero-shot/108/performance/99206748
116 files changed, 7930 insertions, 0 deletions
diff --git a/results/classifier/zero-shot/108/performance/1005 b/results/classifier/zero-shot/108/performance/1005
new file mode 100644
index 000000000..0be3dee35
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1005
@@ -0,0 +1,192 @@
+performance: 0.923
+debug: 0.920
+permissions: 0.915
+graphic: 0.904
+device: 0.890
+other: 0.882
+PID: 0.858
+semantic: 0.846
+boot: 0.835
+socket: 0.831
+files: 0.821
+vnc: 0.790
+network: 0.754
+KVM: 0.742
+
+blockdev-del doesn't work after blockdev-backup with incremental sync, which uses a dirty bitmap
+Description of problem:
+After an incremental backup with a bitmap, blockdev-del doesn't work on the target node.  
+Because of this, the incremental backup cannot be rebased onto the base node.  
+I referred to this: https://qemu-project.gitlab.io/qemu/interop/bitmaps.html#example-incremental-push-backups-without-backing-files
+Steps to reproduce:
+1. `blockdev-add` incremental backup node
+```
+echo '{"execute":"qmp_capabilities"}{"execute":"blockdev-add","arguments":{"driver":"qcow2","node-name":"incre0","file":{"driver":"file","filename":"/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2"}}}' | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock -N
+
+{
+    "return": {
+    }
+}
+```
+2. `blockdev-backup` with `vda` to target `incre0` node
+```
+echo '{"execute":"qmp_capabilities"}{"execute":"blockdev-backup", "arguments": {"device": "vda", "bitmap":"bitmap0", "target": "incre0", "sync": "incremental", "job-id": "incre0-job", "speed": 536870912}}' | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock -N
+
+{
+    "timestamp": {
+        "seconds": 1651050066,
+        "microseconds": 848370
+    },
+    "event": "JOB_STATUS_CHANGE",
+    "data": {
+        "status": "created",
+        "id": "incre0-job"
+    }
+}
+{
+    "timestamp": {
+        "seconds": 1651050066,
+        "microseconds": 848431
+    },
+    "event": "JOB_STATUS_CHANGE",
+    "data": {
+        "status": "running",
+        "id": "incre0-job"
+    }
+}
+{
+    "timestamp": {
+        "seconds": 1651050066,
+        "microseconds": 848464
+    },
+    "event": "JOB_STATUS_CHANGE",
+    "data": {
+        "status": "paused",
+        "id": "incre0-job"
+    }
+}
+{
+    "timestamp": {
+        "seconds": 1651050066,
+        "microseconds": 848485
+    },
+    "event": "JOB_STATUS_CHANGE",
+    "data": {
+        "status": "running",
+        "id": "incre0-job"
+    }
+}
+{
+    "return": {
+    }
+}
+
+```
+3. `query-block-jobs` check `incre0-job` is done
+```
+echo '{"execute":"qmp_capabilities"}{"execute":"query-block-jobs"}' | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock -N
+
+{
+    "return": {
+    }
+}
+{
+    "return": [
+    ]
+}
+```
+4. To release the write lock (needed in order to rebase incre0.qcow2), run `blockdev-del`
+```
+echo '{"execute":"qmp_capabilities"}{"execute":"blockdev-del","arguments":{"node-name":"incre0"}' | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock -N
+
+{
+    "return": {
+    }
+}
+```
+5. `qemu-img rebase`
+```
+qemu-img rebase -b base.qcow2 -u incre0.qcow2
+
+qemu-img: Could not open 'incre0.qcow2': Failed to get "write" lock
+Is another process using the image [incre0.qcow2]?
+```
+
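+(While the lock is still held, the image can at least be inspected read-only using qemu-img's `--force-share`/`-U` flag; a diagnostic aid, not a fix:)
+```
+qemu-img info -U incre0.qcow2
+```
+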
+6. Check `query-named-block-nodes` after `blockdev-del`
+```
+{
+    "return": [
+        {
+            "iops_rd": 0,
+            "detect_zeroes": "off",
+            "image": {
+                "virtual-size": 53687091200,
+                "filename": "/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2",
+                "cluster-size": 65536,
+                "format": "qcow2",
+                "actual-size": 241340416,
+                "format-specific": {
+                    "type": "qcow2",
+                    "data": {
+                        "compat": "1.1",
+                        "compression-type": "zlib",
+                        "lazy-refcounts": false,
+                        "refcount-bits": 16,
+                        "corrupt": false,
+                        "extended-l2": false
+                    }
+                },
+                "dirty-flag": false
+            },
+            "iops_wr": 0,
+            "ro": false,
+            "node-name": "incre0",
+            "backing_file_depth": 0,
+            "drv": "qcow2",
+            "iops": 0,
+            "bps_wr": 0,
+            "write_threshold": 0,
+            "encrypted": false,
+            "bps": 0,
+            "bps_rd": 0,
+            "cache": {
+                "no-flush": false,
+                "direct": false,
+                "writeback": true
+            },
+            "file": "/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2"
+        },
+        {
+            "iops_rd": 0,
+            "detect_zeroes": "off",
+            "image": {
+                "virtual-size": 240451584,
+                "filename": "/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2",
+                "format": "file",
+                "actual-size": 241340416,
+                "dirty-flag": false
+            },
+            "iops_wr": 0,
+            "ro": false,
+            "node-name": "#block412",
+            "backing_file_depth": 0,
+            "drv": "file",
+            "iops": 0,
+            "bps_wr": 0,
+            "write_threshold": 0,
+            "encrypted": false,
+            "bps": 0,
+            "bps_rd": 0,
+            "cache": {
+                "no-flush": false,
+                "direct": false,
+                "writeback": true
+            },
+            "file": "/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2"
+        },
+        ......
+    ]
+}
+```
+Additional information:
+
diff --git a/results/classifier/zero-shot/108/performance/1018 b/results/classifier/zero-shot/108/performance/1018
new file mode 100644
index 000000000..46ef2cb95
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1018
@@ -0,0 +1,38 @@
+performance: 0.960
+boot: 0.934
+device: 0.929
+graphic: 0.852
+PID: 0.791
+semantic: 0.755
+vnc: 0.722
+permissions: 0.695
+debug: 0.606
+socket: 0.554
+files: 0.484
+other: 0.436
+KVM: 0.411
+network: 0.375
+
+virtio-scsi-pci with iothread results in 100% CPU in qemu 7.0.0
+Description of problem:
+Top reports constant 100% host CPU usage by `qemu-system-x86`. I have narrowed the issue down to the following section of the config:
+```
+        -object iothread,id=t0 \
+        -device virtio-scsi-pci,iothread=t0,num_queues=4 \
+```
+If this is replaced by
+```
+        -device virtio-scsi-pci \
+```
+then CPU usage is normal (near 0%).
+
+This problem doesn't appear with qemu 6.2.0, where CPU usage is near 0% even with the iothread options.
+Steps to reproduce:
+1. Download Kubuntu 22.04 LTS ISO (https://cdimage.ubuntu.com/kubuntu/releases/22.04/release/kubuntu-22.04-desktop-amd64.iso),
+2. Create a root virtual drive for the guest with 'qemu-img create -f qcow2 -o cluster_size=4k kubuntu.img 256G',
+3. Start the guest with the config given above,
+4. Connect to the guest (using spicy for example, password 'p'), select "try kubuntu" in grub menu AND later in the GUI, let it boot to plasma desktop, monitor host CPU usage using 'top'.
+
+(there could be a faster way to reproduce it)
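+
+(Putting the fragments above together, a minimal invocation that should exhibit the issue; the disk and ISO names follow the steps above, while the remaining flags are assumptions:)
+```
+qemu-system-x86_64 -enable-kvm -cpu host -m 8G \
+        -object iothread,id=t0 \
+        -device virtio-scsi-pci,iothread=t0,num_queues=4 \
+        -device scsi-hd,drive=d0 \
+        -drive if=none,id=d0,file=kubuntu.img,format=qcow2 \
+        -cdrom kubuntu-22.04-desktop-amd64.iso -boot d
+```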
+Additional information:
+
diff --git a/results/classifier/zero-shot/108/performance/1031 b/results/classifier/zero-shot/108/performance/1031
new file mode 100644
index 000000000..642989157
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1031
@@ -0,0 +1,57 @@
+performance: 0.955
+socket: 0.864
+boot: 0.819
+device: 0.790
+PID: 0.740
+graphic: 0.666
+semantic: 0.601
+debug: 0.480
+other: 0.463
+vnc: 0.394
+files: 0.362
+permissions: 0.301
+network: 0.274
+KVM: 0.026
+
+Intel 12th Gen CPU not working with QEMU Hyper-V nested virtualization
+Description of problem:
+When booting with Hyper-V + host-passthrough, the guest gets stuck at TianoCore and does not change until I reboot, which then loops into Windows diagnostics, which leads nowhere. Done using Windows 10; tried the newest Windows version and 1909.
+
+Specs: Manjaro Gnome 5.15 LTS, i5-12600k, z690 gigabyte aorus elite ddr4, rtx 3070ti.
+
+I've spent days trying to figure out what was interfering, and it turned out I could boot when changing my CPU topology: for some reason my 12th gen + Hyper-V + host-passthrough only works with sockets. Setting cores or threads above 1 causes boot problems, apart from disabling vme, which boots but leaves the hypervisor unable to load.
+
+This fails (normal host-passthrough):
+```
+  <cpu mode="host-passthrough" check="none" migratable="on">
+    <topology sockets="1" dies="1" cores="6" threads="2"/>
+  </cpu>
+```
+
+This boots (I can only change sockets):
+```
+  <cpu mode="host-passthrough" check="none" migratable="on">
+    <topology sockets="12" dies="1" cores="1" threads="1"/>
+  </cpu>
+```
+
+This boots (no hypervisor):
+```
+<cpu mode="host-passthrough" check="partial" migratable="off">
+    <topology sockets="1" dies="1" cores="6" threads="2"/>
+    <feature policy="disable" name="vme"/>
+  </cpu>
+```
+
+No matter what adjustment I make, I cannot change the cores or threads without causing a boot failure. host-model does not work either: once I boot the machine, the host model changes to Cooperlake.
+
+My current way of bypassing this: I downloaded the QEMU source code, went through cpu.c, and modified the default Skylake-Client CPU model to match my CPU, then added most of my i5-12600k flags manually. This seems to work, with a 35-45% performance drop in CPU and in RAM. Without Hyper-V enabled and using the normal host-passthrough I get near bare-metal performance.
+
+Tried with multiple versions of QEMU, EDK2, and loads of kernel versions (to add to this, my i5-12600k does not work on kernel version 5.13 and below); I even went ahead and tried Ubuntu and had the same problem. My other (i7-9700k) PC works fine with Hyper-V. Also disabled my E-cores through the BIOS, resulting in the same issue. CPU-pinning the P-cores to the guest does not seem to help.
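+
+(For reference, the failing libvirt topology corresponds roughly to this bare QEMU option; a sketch only, since libvirt generates many additional flags:)
+```
+qemu-system-x86_64 -enable-kvm -cpu host \
+    -smp 12,sockets=1,dies=1,cores=6,threads=2 ...
+```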
+Steps to reproduce:
+1. Enable hyper-v in windows features
+2. Restart guest
+3. Boot failure
+Additional information:
+Hyper-V host-passthrough XML:
+https://pst.klgrth.io/paste/yc5wk
diff --git a/results/classifier/zero-shot/108/performance/1032 b/results/classifier/zero-shot/108/performance/1032
new file mode 100644
index 000000000..def2c99de
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1032
@@ -0,0 +1,31 @@
+performance: 0.978
+device: 0.876
+graphic: 0.863
+files: 0.744
+network: 0.736
+socket: 0.729
+boot: 0.717
+PID: 0.695
+permissions: 0.674
+other: 0.541
+semantic: 0.541
+debug: 0.506
+KVM: 0.498
+vnc: 0.497
+
+Slow random performance of virtio-blk
+Steps to reproduce:
+1. Download Virtualbox Windows 11 image from https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/
+2. Download virtio-win-iso: `wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.215-2/virtio-win-0.1.215.iso`
+3. Extract WinDev*.zip with `unzip WinDev2204Eval.VirtualBox.zip` and import the extracted OVA in VirtualBox (import WinDev with the option "conversion to vdi" checked)
+4. `qemu-img convert -f vdi -O raw <YourVirtualBoxVMFolder>/WinDev2204Eval-disk001.vdi <YourQemuImgFolder>/WinDev2204Eval-disk001.img`
+5. Start Windows 11 in Qemu: 
+``` 
+qemu-system-x86_64 -enable-kvm -cpu host -device virtio-blk-pci,scsi=off,drive=WinDevDrive,id=virtio-disk0,bootindex=0  -drive file=<YourQemuImgFolder>/WinDev2204Eval-disk001.img,if=none,id=WinDevDrive,format=raw -net nic -net user,hostname=windowsvm -m 8G -monitor stdio -name "Windows" -usbdevice tablet -device virtio-serial -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -cdrom <YourDownloadFolder>/virtio-win-0.1.215.iso
+```
+6. Win 11 won't boot and will go into recovery mode (even the safeboot trick doesn't work here); please follow this [answer](https://superuser.com/questions/1057959/windows-10-in-kvm-change-boot-disk-to-virtio#answer-1200899) to load the viostor driver from the recovery cmd
+7. Reboot the VM and it should start
+8. Install CrystalDiskMark
+9. Execute the CrystalDiskMark benchmark
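+
+(Random-I/O numbers can also be gathered with fio inside the guest, assuming fio is installed there; the flags below are an illustrative guess, not taken from the report:)
+```
+fio --name=randread --filename=test.bin --size=1G --rw=randread \
+    --bs=4k --iodepth=32 --ioengine=windowsaio --direct=1 \
+    --runtime=30 --time_based
+```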
+Additional information:
+#
diff --git a/results/classifier/zero-shot/108/performance/1036987 b/results/classifier/zero-shot/108/performance/1036987
new file mode 100644
index 000000000..b0a73d432
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1036987
@@ -0,0 +1,64 @@
+performance: 0.996
+graphic: 0.824
+PID: 0.812
+other: 0.782
+permissions: 0.777
+network: 0.769
+socket: 0.751
+semantic: 0.712
+files: 0.696
+device: 0.692
+KVM: 0.646
+vnc: 0.618
+debug: 0.588
+boot: 0.514
+
+compilation error due to bug in savevm.c
+
+Since 
+
+302dfbeb21fc5154c24ca50d296e865a3778c7da
+
+Add xbzrle_encode_buffer and xbzrle_decode_buffer functions
+    
+    For performance we are encoding long word at a time.
+    For nzrun we use long-word-at-a-time NULL-detection tricks from strcmp():
+    using ((lword - 0x0101010101010101) & (~lword) & 0x8080808080808080) test
+    to find out if any byte in the long word is zero.
+    
+    Signed-off-by: Benoit Hudzia <email address hidden>
+    Signed-off-by: Petter Svard <email address hidden>
+    Signed-off-by: Aidan Shribman <email address hidden>
+    Signed-off-by: Orit Wasserman <email address hidden>
+    Signed-off-by: Eric Blake <email address hidden>
+    
+    Reviewed-by: Luiz Capitulino <email address hidden>
+    Reviewed-by: Eric Blake <email address hidden>
+
+ commit arrived in the master branch, I can't compile qemu at all:
+
+savevm.c:2476:13: error: overflow in implicit constant conversion [-Werror=overflow]
+
+Patch is available at http://patchwork.ozlabs.org/patch/177217/
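+
+(For context, the long-word NULL-detection trick quoted in the commit message can be demonstrated standalone; a sketch, not the actual QEMU code:)
+```c
+#include <stdint.h>
+#include <stdio.h>
+
+/* Nonzero iff some byte in the 64-bit word is zero: subtracting 0x01
+ * from every byte borrows through bit 7 only for bytes that were zero,
+ * and "& ~lword" filters out bytes whose top bit was already set. */
+static uint64_t has_zero_byte(uint64_t lword)
+{
+    return (lword - 0x0101010101010101ULL) & ~lword & 0x8080808080808080ULL;
+}
+
+int main(void)
+{
+    printf("%d\n", has_zero_byte(0x1122330044556677ULL) != 0); /* prints 1 */
+    printf("%d\n", has_zero_byte(0x1122334455667788ULL) != 0); /* prints 0 */
+    return 0;
+}
+```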
+
+On 15 August 2012 08:44, Evgeny Voevodin <email address hidden> wrote:
+> Since
+>
+> 302dfbeb21fc5154c24ca50d296e865a3778c7da
+>
+> Add xbzrle_encode_buffer and xbzrle_decode_buffer functions
+>  commit arrived in the master branch, I can't compile qemu at all:
+>
+> savevm.c:2476:13: error: overflow in implicit constant conversion
+> [-Werror=overflow]
+
+Fixed by this patch by Alex yesterday:
+ http://patchwork.ozlabs.org/patch/177217/
+
+(not yet in master)
+
+-- PMM
+
+
+http://git.qemu.org/?p=qemu.git;a=commitdiff;h=a5b71725c7067f6805eb30
+
diff --git a/results/classifier/zero-shot/108/performance/1042388 b/results/classifier/zero-shot/108/performance/1042388
new file mode 100644
index 000000000..771e72e27
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1042388
@@ -0,0 +1,375 @@
+semantic: 0.947
+performance: 0.932
+permissions: 0.924
+other: 0.921
+graphic: 0.903
+debug: 0.901
+device: 0.890
+vnc: 0.889
+boot: 0.878
+PID: 0.871
+network: 0.868
+KVM: 0.852
+socket: 0.851
+files: 0.838
+
+qemu: Unsupported syscall: 257 (timer_create)
+
+Running qemu-arm-static built from git HEAD. When I try to install ghc from Debian into my arm chroot I get:
+
+Setting up ghc (7.4.1-4) ...
+qemu: Unsupported syscall: 257
+ghc: timer_create: Function not implemented
+qemu: Unsupported syscall: 257
+ghc-pkg: timer_create: Function not implemented
+dpkg: error processing ghc (--configure):
+ subprocess installed post-installation script returned error exit status 1
+Errors were encountered while processing:
+ ghc
+E: Sub-process /usr/bin/dpkg returned an error code (1)
+
+Yes, qemu's linux-user emulation layer doesn't currently support any of the posix timer syscalls.
+
+
+Peter Maydell wrote:
+
+> Yes, qemu's linux-user emulation layer doesn't currently support any of
+> the posix timer syscalls.
+
+Any idea how much work is involved to implement this?
+
+
+On 27 August 2012 22:33, Erik de Castro Lopo <email address hidden> wrote:
+> Peter Maydell wrote:
+>> Yes, qemu's linux-user emulation layer doesn't currently support any of
+>> the posix timer syscalls.
+>
+> Any idea how much work is involved to implement this?
+
+A couple of days for somebody who knows what they're doing and has
+a convenient test case.
+
+-- PMM
+
+
+Implementing timer_create alone is probably not enough; one would have to implement the rest of the related syscalls:
+
+       *  timer_create(): Create a timer.
+       *  timer_settime(2): Arm (start) or disarm (stop) a timer.
+       *  timer_gettime(2): Fetch the time remaining until the next expiration of a timer, along with the interval setting of the timer.
+       *  timer_getoverrun(2): Return the overrun count for the last timer expiration.
+       *  timer_delete(2): Disarm and delete a timer.
+
+Convenient test cases for the timer* syscalls apparently exist in the LTP suite.
+
+I have a fix for this. I can now successfully install ghc and compile programs with it.
+
+In the process of cleaning up the patch and working on a test for the test suite.
+
+
+Erik,
+
+Is this patch available for public consumption? It doesn't seem to be upstream.
+
+Thanks,
+#matt
+
+Matt Robinson wrote:
+
+> Is this patch available for public consumption? It doesn't seem to be
+> upstream.
+
+Unfortunately not yet. I'm working on getting permission to release it.
+
+Cheers,
+Erik
+-- 
+----------------------------------------------------------------------
+Erik de Castro Lopo
+http://www.mega-nerd.com/
+
+
+Any news on this?
+
+@Eric any news on your patch? Could you please link it here?
+
+Still waiting on approval from my employer's lawyers to release it. Have no idea how long this is going to take.
+
+
+Until a proper patch is available I'm using the attached temporary workaround.
+
+After some testing, GHC and the executables it produces appear to work correctly in the foreign-arch chroot.
+
+I'm sure there will be issues but I only need compilation to work in foreign arch chroot because I will deploy produced executables to Raspberry Pi anyway. 
+
+cabal-installing works too but I had to comment out anything related to Template Haskell from (to be installed) packages. 
+
+versions:
+
+qemu 1.4.1 (static build)
+host: Linux proton 3.9.2-1-ARCH #1 SMP PREEMPT Sat May 11 20:31:08 CEST 2013 x86_64 GNU/Linux
+foreign: Linux proton 3.9.2-1-ARCH #1 SMP PREEMPT Sat May 11 20:31:08 CEST 2013 armv6l GNU/Linux
+chroot created from 2013-02-09-wheezy-raspbian.img
+ghc 7.4.1-4+rpi1 armhf
+
+
+The two patches have been sent to the qemu-devel mailing list and I will also attach them here.
+
+
+Latest version of my patch. Also submitted to the qemu-devel mailing list.
+
+
+Bah, the patch in #13 segfaults in some circumstances, the previous one doesn't.
+
+
+This has been fixed in Git in the following commits:
+
+    commit f4f1e10a58cb5ec7806d47d20671e668a52c3e70
+    Author: Erik de Castro Lopo <email address hidden>
+    Date:   Fri Nov 29 18:39:23 2013 +1100
+
+        linux-user: Implement handling of 5 POSIX timer syscalls.
+    
+        Implement timer_create, timer_settime, timer_gettime, timer_getoverrun
+        and timer_delete.
+    
+        Signed-off-by: Erik de Castro Lopo <email address hidden>
+        Signed-off-by: Riku Voipio <email address hidden>
+
+    commit 905bba13ca292cb8c83fe5ccdf8a95bd04168bb1
+    Author: Erik de Castro Lopo <email address hidden>
+    Date:   Fri Nov 29 18:39:22 2013 +1100
+
+        linux-user: Add target struct defs needed for POSIX timer syscalls.
+    
+        Signed-off-by: Erik de Castro Lopo <email address hidden>
+        Signed-off-by: Riku Voipio <email address hidden>
+
+This bug can be closed as resolved.
+
+
+It will be solved in the next qemu upload, right? How long will it take to have it on the Launchpad builders?
+
+It's currently in git HEAD. It will be in the next full release, which I think is 2.0.
+ 
+
+If someone wants to fix what's currently in Ubuntu they should make a package which includes those two patches.
+
+
+mmm I don't know, I built it in my ppa, with your patch.
+Upgraded the system
+https://code.launchpad.net/~costamagnagianfranco/+archive/firefox/+packages
+Preparing to replace qemu-user 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-user_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-user ...
+Preparing to replace qemu-keymaps 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-keymaps_1.7.0+dfsg-2ubuntu4~saucy1_all.deb) ...
+Unpacking replacement qemu-keymaps ...
+Preparing to replace qemu-system-ppc 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-system-ppc_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-system-ppc ...
+Preparing to replace qemu-system-sparc 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-system-sparc_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-system-sparc ...
+Preparing to replace qemu-system-x86 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-system-x86_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-system-x86 ...
+Preparing to replace qemu-system-arm 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-system-arm_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-system-arm ...
+Preparing to replace qemu-system-misc 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-system-misc_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-system-misc ...
+Preparing to replace qemu-system-mips 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-system-mips_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-system-mips ...
+Preparing to replace qemu-system 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-system_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-system ...
+Preparing to replace qemu-utils 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-utils_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-utils ...
+Preparing to replace qemu 1.5.0+dfsg-3ubuntu5.2 (using .../qemu_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu ...
+Preparing to replace qemu-user-static 1.5.0+dfsg-3ubuntu5.2 (using .../qemu-user-static_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb) ...
+Unpacking replacement qemu-user-static ...
+
+
+pbuilder-dist sid armhf login
+apt-get install ghc
+Setting up ghc (7.6.3-6) ...
+qemu: Unsupported syscall: 257
+ghc: timer_create: Function not implemented
+update-alternatives: using /usr/bin/ghc to provide /usr/bin/haskell-compiler (haskell-compiler) in auto mode
+qemu: Unsupported syscall: 257
+ghc-pkg: timer_create: Function not implemented
+dpkg: error processing package ghc (--configure):
+ subprocess installed post-installation script returned error exit status 1
+
+
+I just tried it here on my system using:
+
+    - QEMU compiled from git HEAD.
+    - ghc 7.6.3-6 from Debian
+
+and I was able to start compiling GHC from git. I didn't let it run to completion because I only have my laptop available at the moment.
+
+I suggest you try debugging some more and maybe try building something smaller than GHC.
+
+
+
+but I just tried to install ghc, not to build it, can you try my ppa?
+
+I don't have a machine running Ubuntu. I only lodged a bug here because this is the official bug tracker for QEMU.
+
+
+This my Debian system:
+
+    $ uname -a
+    Linux rolly 3.11-2-amd64 #1 SMP Debian 3.11.10-1 (2013-12-04) x86_64 GNU/Linux
+
+I normally run my qemu chroot using schroot as follows:
+
+    schroot -c armhf
+
+If I need to install packages I schroot as root: 
+
+    schroot -c armhf -u root
+
+In the chroot, I get:
+
+    Linux rolly 3.11-2-amd64 #1 SMP Debian 3.11.10-1 (2013-12-04) armv7l GNU/Linux
+
+and as root I have successfully removed and installed ghc from the Debian repositories.
+
+
+$:~/branches/ettercap (master) $ apt-cache show qemu
+Package: qemu
+Priority: optional
+Section: otherosfs
+Installed-Size: 556
+Maintainer: Ubuntu Developers <email address hidden>
+Architecture: amd64
+Version: 1.7.0+dfsg-2ubuntu4~saucy1
+Suggests: qemu-user-static
+Depends: qemu-system (>= 1.7.0+dfsg-2ubuntu4~saucy1), qemu-user (>= 1.7.0+dfsg-2ubuntu4~saucy1), qemu-utils (>= 1.7.0+dfsg-2ubuntu4~saucy1)
+Filename: pool/main/q/qemu/qemu_1.7.0+dfsg-2ubuntu4~saucy1_amd64.deb
+Size: 230798
+
+--------------------------------
+
+$ pbuilder-dist sid armhf login
+[sudo] password for locutus: 
+W: /home/locutus/.pbuilderrc does not exist
+I: Building the build Environment
+I: extracting base tarball [/home/locutus/pbuilder/sid-armhf-base.tgz]
+
+this command runs qemu
+
+ps ax |grep qemu
+ 1860 pts/8    S      0:00 sudo HOME=/home/locutus ARCHITECTURE=armhf DISTRIBUTION=sid ARCH=armhf DIST=sid DEB_BUILD_OPTIONS= pbuilder --login --distribution sid --buildresult /home/locutus/pbuilder/sid-armhf_result/ --basetgz /home/locutus/pbuilder/sid-armhf-base.tgz --mirror http://ftp.debian.org/debian --debootstrapopts --keyring=/usr/share/keyrings/debian-archive-keyring.gpg --components main contrib non-free --debootstrapopts --arch=armhf --debootstrap qemu-debootstrap
+ 1861 pts/8    S      0:00 /bin/bash /usr/sbin/pbuilder --login --distribution sid --buildresult /home/locutus/pbuilder/sid-armhf_result/ --basetgz /home/locutus/pbuilder/sid-armhf-base.tgz --mirror http://ftp.debian.org/debian --debootstrapopts --keyring=/usr/share/keyrings/debian-archive-keyring.gpg --components main contrib non-free --debootstrapopts --arch=armhf --debootstrap qemu-debootstrap
+ 2616 pts/8    S+     0:00 /usr/bin/qemu-arm-static /bin/bash
+
+
+I don't see any difference between your and my method, both of them seems to be calling qemu-debootstrap
+
+
+Fixed upstream, thanks Eric! Marking as affecting Ubuntu, as even trusty's qemu does not have that fix yet. For the record, lp:platform-api uses posix timers for the sensor emulation, so running its tests will reproduce this qemu problem (and verify its fix).
+
+The attachment "temp workaround to enable compilation and execution of GHC  and produced executables in foreign arch chroot" seems to be a patch.  If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.
+
+[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]
+
+Unfortunately it is still not working with these two patches. The "Unsupported syscall: 257" is gone, but now it fails on EINVAL. I attach a little test C file which uses  a timer. It works fine on x86 and a real arm machine, but under QEMU I get:
+
+$ gcc -o timer_test -Wall  timer_test.c  -lrt
+$ ./timer_test
+Failed to create timer: Invalid argument
+qemu: uncaught target signal 6 (Aborted) - core dumped
+Aborted (core dumped)
+
+So timer_create() does not actually seem to work? I tried some variations like 50 ms, or using CLOCK_REALTIME instead of CLOCK_MONOTONIC, all with the same result.
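+
+(The attached test file is not reproduced here; a minimal program along the following lines, an assumption rather than the original attachment, exercises the same SIGEV_THREAD path, builds with the gcc line above, and fails the same way under qemu linux-user:)
+```c
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+
+static void tick(union sigval sv)
+{
+    (void)sv;   /* callback body is irrelevant; creation already fails */
+}
+
+int main(void)
+{
+    struct sigevent sev;
+    timer_t timerid;
+
+    memset(&sev, 0, sizeof(sev));
+    sev.sigev_notify = SIGEV_THREAD;   /* glibc realizes this via SIGEV_THREAD_ID */
+    sev.sigev_notify_function = tick;
+
+    if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) != 0) {
+        perror("Failed to create timer");
+        abort();   /* shows up as "uncaught target signal 6" under qemu */
+    }
+    puts("timer created");
+    return 0;
+}
+```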
+
+Thanks  for the test case Martin. Problem confirmed.
+
+The issue is that timer_create allows a number of different callback mechanisms and I had only implemented the one I need.
+
+ Working on it now.
+
+
+
+@erikd,
+
+this is marked Fix Released in QEMU project, but comment #28 suggests that commit f4f1e10a58cb5ec7806d47d20671e668a52c3e70 does not in fact solve this bug.  If there is a set of patches upstream that does fix the bug, please let me know and I'll pull them into trusty.  Thanks much!
+
+The fix that was committed to the QEMU git tree fixed the original test case I had. @pittit then found another test case that fails, and I intend to fix that when I find a good chunk of free time. The problem is I only work on QEMU sporadically and it takes me quite a bit of time to get back up to speed when I return to it.
+
+Have you had any more time to look into this?  Should the QEMU (project) task also be re-marked open?
+
+I've been looking at it over the last week or so, and I submitted a patch to the qemu-devel mailing list sometime in the last week to fix another timer_create() problem.
+
+Unfortunately the test case @pittit submitted is far harder to support than the original test case. In this case the timer_create() syscall gets passed pointers to functions and data in the target's address space and I have not figured out how to handle that yet.
+
+
+On 9 August 2014 07:15, Erik de Castro Lopo <email address hidden> wrote:
+> Unfortunately the test case @pittit submitted is far harder to support
+> than the original test case. In this case the timer_create() syscall
+> gets passed pointers to functions and data in the target's address space
+> and I have not figured out how to handle that yet.
+
+Didn't we discuss this on the list a while back? You're confusing
+the libc API with the kernel syscall API here -- the kernel definitely
+does not take a pointer to a function to call here. (The timer_create
+manpage explicitly says that the SIGEV_THREAD functionality
+is implemented in the C library, not the kernel.) You can see
+this if you strace it:
+
+clone(child_stack=0xb76e5494,
+flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID,
+parent_tidptr=0xb76e5bd8, {entry_number:6, base_addr:0xb76e5b70,
+limit:1048575, seg_32bit:1, contents:0, read_exec_only:0,
+limit_in_pages:1, seg_not_present:0, useable:1},
+child_tidptr=0xb76e5bd8) = 12666
+rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
+futex(0xb76d324c, FUTEX_WAKE_PRIVATE, 2147483647) = 0
+timer_create(CLOCK_REALTIME, {0x984b098, 32, SIGEV_THREAD_ID,
+{12666}}, {0x1}) = 0
+timer_settime(0x1, 0, {it_interval={0, 0}, it_value={0, 50000000}}, NULL) = 0
+
+Under the hood libc is creating a new thread with clone, and
+what the timer_create() syscall gets passed is a struct including
+the thread ID to be sent a signal when the timer expires (here
+that's 12666).
+
+So all you need to do is support SIGEV_THREAD_ID,
+which I think doesn't require much more than copying
+across the thread ID struct field.
+
+(On the other hand that does mean that all programs which
+use SIGEV_THREAD are by definition multithreaded, which
+puts them into "this isn't supported" territory because of our
+well known and longstanding threading issues.)
+
+-- PMM
+
+
+Patch which seems to at least make the test case work (tested with i386-on-i386 linux-user): http://patchwork.ozlabs.org/patch/378769/
+
+
+Unfortunately it doesn't work with armhf on amd64 linux-user.
+
+Use the test program from comment #27 I get:
+
+    > schroot -c armhf -- ./timer_test_armhf 
+    About to call host's timer_create (0, 0x7fff6ee80720, 0x625b1f40)
+    Host's timer_create returns -22
+    Failed to create timer: Invalid argument
+    qemu: uncaught target signal 6 (Aborted) - core dumped
+    E: Child terminated by signal ‘Aborted’
+
+(Yes I made very certain the schroot was using my freshly compiled version of qemu-arm-static).
+
+
+@erikd,
+
+can you check whether this has been fixed in wily?
+
+I finally got round to looking into why the test case from comment #27 worked on x86-64 guests and i386-guest-on-i386-host but not on arm-on-x86-64. This turns out to be a wrong structure definition which meant we weren't handling the 32-bit-guest-on-64-bit-host combinations correctly. I've sent a patch:
+
+http://patchwork.ozlabs.org/patch/665274/
+
+I think this should tie up the last loose end in this bug report so once it gets into master we can close it.
+
+
diff --git a/results/classifier/zero-shot/108/performance/1056 b/results/classifier/zero-shot/108/performance/1056
new file mode 100644
index 000000000..c1cd1a289
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1056
@@ -0,0 +1,16 @@
+performance: 0.992
+device: 0.928
+graphic: 0.756
+debug: 0.590
+network: 0.450
+boot: 0.339
+other: 0.308
+PID: 0.288
+semantic: 0.281
+permissions: 0.277
+vnc: 0.230
+files: 0.219
+socket: 0.216
+KVM: 0.003
+
+Bad Performance of Windows 11 ARM64 VM on Windows 11 Qemu 7.0 Host System
diff --git a/results/classifier/zero-shot/108/performance/1078 b/results/classifier/zero-shot/108/performance/1078
new file mode 100644
index 000000000..ebc4a6561
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1078
@@ -0,0 +1,59 @@
+performance: 0.956
+device: 0.922
+graphic: 0.910
+debug: 0.852
+boot: 0.829
+semantic: 0.793
+other: 0.771
+vnc: 0.756
+PID: 0.754
+permissions: 0.745
+files: 0.722
+socket: 0.674
+KVM: 0.629
+network: 0.509
+
+qemu-system-arm: unable to use LPAE
+Description of problem:
+Failed to run qemu: qemu-system-arm: Addressing limited to 32 bits,
+but memory exceeds it by 1073741824 bytes
+Steps to reproduce:
+1. ./configure --target-list=arm-softmmu
+2. make
+3.
+./qemu-system-arm \
+-machine virt,highmem=on \
+-cpu cortex-a15 -smp 4 \
+-m 4096 \
+-kernel ./zImage \
+-drive id=disk0,file=./rootfs.ext4,if=none,format=raw \
+-object rng-random,filename=/dev/urandom,id=rng0 \
+-device virtio-rng-pci,rng=rng0 \
+-device virtio-blk-device,drive=disk0 \
+-device virtio-gpu-pci \
+-serial mon:stdio -serial null \
+-nographic \
+-append 'root=/dev/vda rw mem=4096M ip=dhcp console=ttyAMA0 console=hvc0'
+Additional information:
+We set physical address bits to 40 if ARM_FEATURE_LPAE is enabled. But ARM_FEATURE_V7VE also implies ARM_FEATURE_LPAE as set later in arm_cpu_realizefn.
+
+We should add a condition for ARM_FEATURE_V7VE; otherwise we would not be able to use highmem larger than 3GB even though we have enabled highmem, since we would fail and return right from machvirt_init.
+
+I have already made a patch to fix this issue.
+https://gitlab.com/realhezhe/qemu/-/commit/4dad8167c1c1a7695af88d8929e8d7f6399177de
+`hw/arm/virt.c`
+```diff
+         if (object_property_get_bool(cpuobj, "aarch64", NULL)) {
+             pa_bits = arm_pamax(armcpu);
+-        } else if (arm_feature(&armcpu->env, ARM_FEATURE_LPAE)) {
++        } else if (arm_feature(&armcpu->env, ARM_FEATURE_LPAE)
++                || arm_feature(&armcpu->env, ARM_FEATURE_V7VE)) {
+             /* v7 with LPAE */
+             pa_bits = 40;
+         } else {
+```
+
+After applying the patch, I can confirm that pa_bits is now set to 40, but qemu hangs later. By bisecting I found that if the following commit is reverted, qemu can boot up successfully.
+39a1fd2528 ("target/arm: Fix handling of LPAE block descriptors")
+
+I can't quickly determine what's going on here on my side. Maybe the author can give some hints. Thanks.
diff --git a/results/classifier/zero-shot/108/performance/1119861 b/results/classifier/zero-shot/108/performance/1119861
new file mode 100644
index 000000000..450fb39d4
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1119861
@@ -0,0 +1,41 @@
+performance: 0.982
+graphic: 0.919
+other: 0.665
+device: 0.633
+semantic: 0.456
+permissions: 0.346
+socket: 0.325
+vnc: 0.308
+boot: 0.258
+PID: 0.237
+network: 0.210
+debug: 0.149
+files: 0.137
+KVM: 0.033
+
+Poor console performance in Windows 7
+
+As part of its conformance test suite, Wine tests the behavior of the Windows console API. Part of this test involves opening a test console and scrolling things around. The test probably does not need to perform that many scroll operations to achieve its goal. However as is it illustrates a significant performance issue in QEmu. Unfortunately it does so by timing out (the tests must run in less than 2 minutes). Here are the run times on a few configurations:
+
+ 10s - QEmu 1.4 + Q9450@2.6GHz + Windows XP + QXL + QXL driver
+  8s - QEmu 1.12 + Opteron 6128 + Windows XP + QXL + QXL driver
+127s - QEmu 1.12 + Opteron 6128 + Windows 7 + cirrus + vga driver
+127s - QEmu 1.12 + Opteron 6128 + Windows 7 + QXL + QXL driver
+147s - QEmu 1.12 + Opteron 6128 + Windows 7 + vmvga + vga driver
+145s - QEmu 1.12 + Opteron 6128 + Windows 7 + vmvga + vmware driver (xpdm, no better with all graphics effects disabled)
+
+ 10s - Metal + Atom N270 + Windows XP + GMA 950 + Intel driver
+  6s - Metal + i5-3317U + Windows 8 + HD4000 + Intel driver
+  3s - VMware + Q9450@2.6GHz + Windows XP + vmvga + vmware driver
+ 65s - VMware + Q9450@2.6GHz + Windows 7 + vmvga + vmware driver
+
+So when running on bare metal all versions of Windows are about as fast. However in QEmu Windows 7 is almost 16 times slower than Windows XP! VMware is impacted too, but it still maintains a good lead in performance.
+
+Disabling all graphics effects did not help, so it's not clear that the fault lies with Windows 7's compositing desktop window manager. Maybe it has to do with the lack of a proper WDDM driver?
+
+
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+It does seem to be ok now. The test did get simplified to remove parts that were mostly redundant so it runs faster now. But still it now takes the same time, 7 seconds, on the VMware and QEMU Windows 7 VMs. So as far as I'm concerned this can be closed.
+
diff --git a/results/classifier/zero-shot/108/performance/1129957 b/results/classifier/zero-shot/108/performance/1129957
new file mode 100644
index 000000000..96998ef54
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1129957
@@ -0,0 +1,68 @@
+performance: 0.995
+boot: 0.818
+graphic: 0.755
+device: 0.678
+permissions: 0.673
+vnc: 0.658
+files: 0.492
+debug: 0.469
+semantic: 0.449
+PID: 0.416
+socket: 0.404
+network: 0.381
+other: 0.330
+KVM: 0.130
+
+Performance issue running guest image on qemu compiled for Win32 platform
+
+I'm seeing performance issues when booting a guest image on qemu 1.4.0 compiled for the Win32 platform.
+The same image boots a lot faster on the same computer running qemu/linux on Fedora via VMware, and even running the Win32 executable via Wine performs better than running qemu natively on Win32.
+
+Although I'm not the author of the image, it is located here:
+http://people.freebsd.org/~wpaul/qemu/vxworks.img
+
+All testing has been done on QEMU 1.4.0.
+
+I'm also attaching a couple of gprof logs. For these I have disabled ssp in qemu by removing "-fstack-protector-all" and "-D_FORTIFY_SOURCE=2" from the qemu configure script.
+
+qemu-perf-linux.txt
+================
+Machine - Windows XP - VmWare - Fedora - QEMU
+
+qemu-perf-win32.txt
+=================
+Machine - Windows XP - QEMU
+
+qemu-perf-wine.txt
+================
+Machine - Windows XP - VmWare - Fedora - Wine - QEMU
+
+
+
+
+
+
+
+For linux, the build is done by the native Fedora 18 gcc, 4.7.2
+For Win32, the build is done by Fedora 18's mingw compiler, 4.7.2
+
+Configuration for Win32 (from config.log):
+# Configured with: './configure' '--disable-guest-agent' '--disable-vnc' '--disable-werror' '--extra-cflags=-pg' '--extra-ldflags=-pg' '--target-list=i386-softmmu' '--cross-prefix=i686-w64-mingw32-'
+
+NOTE: debug is not enabled, since it breaks current QEMU build (undefined references to 'ffs')
+
+Configuration for Linux (from config.log):
+# Configured with: './configure' '--disable-guest-agent' '--disable-vnc' '--disable-werror' '--extra-cflags=-pg' '--extra-ldflags=-pg' '--target-list=i386-softmmu' '--enable-debug' '--enable-kvm'
+
+NOTE: although I pass --enable-kvm to configure, I haven't passed it to qemu when running the executables
+
+Commandline for running on Win32 (started from a Cygwin terminal) and also with Fedora+Wine:
+./qemu/i386-softmmu/qemu-system-i386w.exe -L qemu/pc-bios/ vxworks.img
+
+Commandline for running on Fedora:
+./qemu/i386-softmmu/qemu-system-i386 -L qemu/pc-bios/ vxworks.img
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU and the latest version of MinGW? Do you also see the problem with the builds from https://qemu.weilnetz.de/ ? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1135567 b/results/classifier/zero-shot/108/performance/1135567
new file mode 100644
index 000000000..aa9ebcae9
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1135567
@@ -0,0 +1,53 @@
+performance: 0.944
+KVM: 0.938
+graphic: 0.930
+semantic: 0.888
+network: 0.881
+device: 0.877
+other: 0.876
+PID: 0.839
+permissions: 0.784
+boot: 0.781
+debug: 0.761
+socket: 0.700
+files: 0.697
+vnc: 0.666
+
+QXL crashes a Windows 7 guest if host goes into screensaver
+
+Note: if further information is required, I'll be glad to supply it.
+
+I am using on the host
+- HP z800 with 72GB RAM and 2x x5680
+- Gentoo 64-bit host (3.7.9 kernel, FGLRX RADEON driver 13.1)
+- LIBVIRT 1.0.2 with QEMU(-KVM) 1.4.0
+
+The guest:
+- Windows 7 32-bit
+- 2GB allocated
+- 2 CPU
+- using virtio for everything (disk,net,memballoon)
+- Display = SPICE with spice channel
+- Video driver is qxl (ram says 64MB)
+- Spice-guest-tools 0.52 installed
+
+When I use QXL and have the guest open in Virt-Manager/Virt-Viewer and let the host go into screensaver mode, the Win7 guest crashes hard.
+
+When I change the video to VGA, it survives the screensaver with no problem at all, smooth sailing.
+
+regards
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+Hello, 
+
+Obviously my hardware configuration and versions etc. have changed. 
+
+Also, I have taken to not using screensavers in the virtual machines anymore i.e. disabled.
+
+BUT: I have set it up as similarly as possible and the crash/freeze/hang up with current versions of the drivers seems to be gone.
+
+So I guess the ticket can be closed.
+
+Thanks for checking! ... so I'm closing this ticket now.
+
diff --git a/results/classifier/zero-shot/108/performance/1139 b/results/classifier/zero-shot/108/performance/1139
new file mode 100644
index 000000000..9be98a83f
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1139
@@ -0,0 +1,93 @@
+performance: 0.959
+network: 0.927
+device: 0.909
+other: 0.857
+files: 0.853
+graphic: 0.852
+socket: 0.845
+PID: 0.784
+vnc: 0.777
+boot: 0.774
+semantic: 0.765
+debug: 0.745
+permissions: 0.734
+KVM: 0.630
+
+block/nbd.c and drive backup to a remote nbd server
+Description of problem:
+Good afternoon!
+
+I'm trying to copy an attached drive's content to a remote NBD server via the drive-backup QMP method. I've tested two very similar ways, but with very different performance. The first is backing up to an NBD export on another server. The second is backing up to the same server, but connecting through a local /dev/nbd* device.
+
+Exporting qcow2 via nbd:
+```
+(nbd) ~ # qemu-nbd -p 12345 -x backup --cache=none --aio=native --persistent -f qcow2 backup.qcow2
+
+(qemu) ~ # qemu-img info nbd://10.0.0.1:12345/backup
+image: nbd://10.0.0.1:12345/backup
+file format: raw
+virtual size: 10 GiB (10737418240 bytes)
+disk size: unavailable
+```
+
+Starting drive backuping via QMP:
+
+```
+{
+	"execute": "drive-backup",
+	"arguments": {
+		"device": "disk",
+		"sync": "full",
+		"target": "nbd://10.0.0.1:12345/backup",
+		"mode": "existing"
+	}
+}
+```
+
+With process starting qemu notifying about warning:
+
+> warning: The target block device doesn't provide information about the block size and it doesn't have a backing file. The default block size of 65536 bytes is used. If the actual block size of the target exceeds this default, the backup may be unusable
+
+And the backup process is limited to a speed of around 30 MBps, as observed with iotop
+
+
+Second way to creating backup
+
+Exporting qcow2 via nbd:
+```
+(nbd) ~ # qemu-nbd -p 12345 -x backup --cache=none --aio=native --persistent -f qcow2 backup.qcow2
+```
+
+```
+(qemu) ~ # qemu-img info nbd://10.0.0.1:12345/backup
+image: nbd://10.0.0.1:12345/backup
+file format: raw
+virtual size: 10 GiB (10737418240 bytes)
+disk size: unavailable
+(qemu) ~ # qemu-nbd -c /dev/nbd0 nbd://10.0.0.1:12345/backup
+(qemu) ~ # qemu-img info /dev/nbd0
+image: /dev/nbd0
+file format: raw
+virtual size: 10 GiB (10737418240 bytes)
+disk size: 0 B
+```
+
+Starting drive backuping via QMP to local nbd device:
+
+```
+{
+	"execute": "drive-backup",
+	"arguments": {
+		"device": "disk",
+		"sync": "full",
+		"target": "/dev/nbd0",
+		"mode": "existing"
+	}
+}
+```
+
+The backup process started without the previous warning, and the speed is limited to around 100 MBps (the network limit)
+
+So I have a question: how can I get the same performance without connecting the network export to a local nbd block device on the qemu host?
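+
+(One thing worth trying, sketched under the assumption that the NBD server above is reachable at 10.0.0.1:12345, is to attach the export as a named node first and then use blockdev-backup; whether this avoids the slow path has not been verified here:)
+```
+{"execute":"blockdev-add","arguments":{"node-name":"tgt0","driver":"raw","file":{"driver":"nbd","export":"backup","server":{"type":"inet","host":"10.0.0.1","port":"12345"}}}}
+{"execute":"blockdev-backup","arguments":{"device":"disk","target":"tgt0","sync":"full","job-id":"bk0"}}
+```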
+
+Kind regards
diff --git a/results/classifier/zero-shot/108/performance/1173490 b/results/classifier/zero-shot/108/performance/1173490
new file mode 100644
index 000000000..0a1e8d72c
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1173490
@@ -0,0 +1,43 @@
+performance: 0.935
+KVM: 0.795
+graphic: 0.750
+network: 0.677
+device: 0.589
+socket: 0.478
+semantic: 0.454
+files: 0.407
+permissions: 0.386
+PID: 0.351
+other: 0.315
+debug: 0.311
+boot: 0.267
+vnc: 0.228
+
+virtio net adapter driver with kvm slow on winxp
+
+# lsb_release -a
+No LSB modules are available.
+Distributor ID: Ubuntu
+Description:    Ubuntu 12.04.1 LTS
+Release:        12.04
+Codename:       precise
+
+#virsh version 
+Compiled against library: libvirt 1.0.4
+Using library: libvirt 1.0.4
+Using API: QEMU 1.0.4
+Running hypervisor: QEMU 1.2.0
+
+windows xp clean install with spice-guest-tools-0.52.exe from
+  http://spice-space.org/download/windows/spice-guest-tools/spice-guest-tools-0.52.exe
+
+It runs very slowly, and the Interrupts process shows very high CPU usage (above 60%).
+When I switch the net adapter from virtio to the default (rtl8139), it works well.
+
+spice-guest-tools-0.3 works well.
+With spice-guest-tools-0.52 and 0.59, svchost.exe uses 50% CPU.
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1223 b/results/classifier/zero-shot/108/performance/1223
new file mode 100644
index 000000000..62d2ebced
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1223
@@ -0,0 +1,26 @@
+performance: 0.955
+graphic: 0.931
+other: 0.895
+device: 0.833
+debug: 0.826
+vnc: 0.796
+network: 0.655
+semantic: 0.570
+KVM: 0.499
+PID: 0.490
+files: 0.473
+socket: 0.423
+permissions: 0.370
+boot: 0.367
+
+When the disk is offline, why does the migration not time out, leaving the virtual machine hanging?
+Description of problem:
+I want the migration to end automatically after the disk goes offline
+Steps to reproduce:
+1. Migrate to another host
+
+2. Manually take the disk offline while migrating
+
+3. The VM hangs and the migration waits for the disk to recover; I need it to time out and report the failed migration
+rather than hang. What should I do?
+![image](/uploads/c1ec6e1f59524888ea8e5c1df131037e/image.png)
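+
+(For what it's worth, QEMU itself never times a migration out; a management layer normally polls the job and aborts after its own deadline. A sketch over QMP, with the deadline policy left to the caller:)
+```
+{"execute":"query-migrate"}
+{"execute":"migrate_cancel"}
+```
+The first command is polled periodically; the second is issued once your own timeout expires.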
diff --git a/results/classifier/zero-shot/108/performance/1228285 b/results/classifier/zero-shot/108/performance/1228285
new file mode 100644
index 000000000..4e5037f5b
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1228285
@@ -0,0 +1,81 @@
+performance: 0.981
+socket: 0.946
+network: 0.904
+device: 0.814
+vnc: 0.797
+semantic: 0.786
+PID: 0.709
+other: 0.675
+permissions: 0.638
+graphic: 0.597
+debug: 0.572
+files: 0.533
+boot: 0.432
+KVM: 0.394
+
+e1000 NIC TCP performance
+
+Hi,
+
+Here is the context :
+
+$ qemu -name A -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,listen=127.0.0.1:7000
+$ qemu -name B -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,connect=127.0.0.1:7000
+
+The bandwidth is really tiny :
+
+    . Iperf3 reports about 30 Mb/sec
+    . NetPerf reports about 50 Mb/sec
+
+
+With UDP sockets, there is no problem at all :
+
+    . Iperf3 reports about 1 Gb/sec
+    . NetPerf reports about 950 Mb/sec
+
+
+I've noticed this fact only with the e1000 NIC, not with others (rtl8139,virtio, etc.)
+I've used the main GIT version of QEMU.
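+
+(For reference, numbers like those above would typically be gathered between the two guests roughly as follows; the exact flags are an assumption, not from the report:)
+```
+guest-B$ iperf3 -s
+guest-A$ iperf3 -c <guest-B-ip> -t 30       # TCP bandwidth
+guest-A$ iperf3 -c <guest-B-ip> -u -b 1G    # UDP bandwidth
+```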
+
+
+Thanks in advance.
+
+See you,
+Vince
+
+On Fri, Sep 20, 2013 at 05:21:23PM -0000, Vincent Autefage wrote:
+> Here is the context :
+> 
+> $ qemu -name A -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,listen=127.0.0.1:7000
+> $ qemu -name B -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,connect=127.0.0.1:7000
+> 
+> The bandwidth is really tiny :
+> 
+>     . Iperf3 reports about 30 Mb/sec
+>     . NetPerf reports about 50 Mb/sec
+> 
+> 
+> With UDP sockets, there is no problem at all :
+> 
+>     . Iperf3 reports about 1 Gb/sec
+>     . NetPerf reports about 950 Mb/sec
+> 
+> 
+> I've noticed this fact only with the e1000 NIC, not with others (rtl8139,virtio, etc.)
+> I've used the main GIT version of QEMU.
+
+It's interesting that you see good performance over -netdev socket TCP
+with the other NIC models.
+
+I don't know what the issue would be, you'll probably need to dig
+further to discover the problem.  Using wireshark might be a good start.
+Try to figure out where the delay is incurred and then instrument that
+code to find out the cause.
+
+Stefan
+
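+For example, a capture taken inside guest A while iperf3 runs could show whether retransmissions or delayed ACKs dominate. This is a hedged sketch: the interface name and the iperf3 default port are assumptions.
+
+    # inside the guest: capture the iperf3 stream, then inspect it in wireshark
+    tcpdump -i eth0 -w /tmp/e1000-tcp.pcap tcp port 5201
+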
+
+Looking through old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1253563 b/results/classifier/zero-shot/108/performance/1253563
new file mode 100644
index 000000000..b7852693c
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1253563
@@ -0,0 +1,57 @@
+performance: 0.924
+socket: 0.666
+device: 0.645
+graphic: 0.634
+semantic: 0.564
+network: 0.504
+PID: 0.457
+other: 0.446
+debug: 0.401
+permissions: 0.372
+vnc: 0.246
+files: 0.209
+KVM: 0.197
+boot: 0.190
+
+bad performance with rng-egd backend
+
+
+1. create listen socket
+# cat /dev/random | nc -l localhost 1024
+
+2. start vm with rng-egd backend
+
+./x86_64-softmmu/qemu-system-x86_64 --enable-kvm -mon chardev=qmp,mode=control,pretty=on -chardev socket,id=qmp,host=localhost,port=1234,server,nowait -m 2000 -device virtio-net-pci,netdev=h1,id=vnet0 -netdev tap,id=h1 -vnc :0 -drive file=/images/RHEL-64-virtio.qcow2 \
+-chardev socket,host=localhost,port=1024,id=chr0 \
+-object rng-egd,chardev=chr0,id=rng0 \
+-device virtio-rng-pci,rng=rng0,max-bytes=1024000,period=1000
+
+(guest) # dd if=/dev/hwrng of=/dev/null
+
+Note: cancelling the dd process with Ctrl+C makes it report the read speed.
+
+Problem:   the speed is around 1k/s
+
+===================
+
+If I use the rng-random backend (filename=/dev/random), the speed is about 350k/s.
+
+It seems that when a request entry is added to the list, we don't read the data from the queue immediately.
+chr_read() is delayed, virtio_notify() is delayed, and the next request will also be delayed. This affects the speed.
+
+I tried changing rng_egd_chr_can_read() to always return 1, and the speed improved to about 400k/s.
+
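+A minimal sketch of that experiment, assuming the can_read handler in backends/rng-egd.c (this is a debugging hack to confirm the polling theory, not a proposed fix):
+
+    /* Always report one readable byte so the chardev layer keeps polling
+     * the EGD socket instead of waiting for a queued virtio-rng request. */
+    static int rng_egd_chr_can_read(void *opaque)
+    {
+        return 1;
+    }
+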
+Problem: we currently can't poll the content in time.
+
+
+Any thoughts?
+
+Thanks, Amos
+
+Looking through old bug tickets... is this still an issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+
+Let's close this bug, it's been 6 years.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1312 b/results/classifier/zero-shot/108/performance/1312
new file mode 100644
index 000000000..06c1119e3
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1312
@@ -0,0 +1,26 @@
+performance: 0.984
+debug: 0.904
+device: 0.788
+files: 0.742
+network: 0.691
+permissions: 0.690
+vnc: 0.690
+PID: 0.677
+graphic: 0.672
+socket: 0.571
+semantic: 0.508
+boot: 0.310
+other: 0.260
+KVM: 0.115
+
+TCP performance problems - GSO/TSO, MSS, 8139 related (Ignores lower MTU from PMTUD/MSS)
+Description of problem:
+MTU handling on guests using an RTL8139 virtualized NIC is broken; hw/net/rtl8139.c works with a static MTU of 1500 bytes for TCP offloading, leading to low throughput when clients connect from sub-1500-MTU networks. PMTUD is ignored, and locking the guest OS to a lower MTU mitigates the issue.
+Steps to reproduce:
+1. Create a guest with an RTL8139 nic
+2. Try to retrieve a file from a client behind a sub 1500 MTU link
+3. Observe low bandwidth due to retransmits
+Additional information:
+I just debugged this issue for an NGO which, for whatever reason, had an RTL8139 NIC in their guest. After I finally traced this to the RTL8139, I found this qemu-devel/netdev thread from six years ago, which apparently already debugged this issue and proposed a patch: https://lore.kernel.org/all/20161114162505.GD26664@stefanha-x1.localdomain/
+
+I did not test the patch proposed there, but note that `hw/net/rtl8139.c` still looks as discussed in that qemu-devel/netdev thread. As I haven't found a bug report in the archives, I figured you might want to know.
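+
+For reference, the guest-side mitigation mentioned above amounts to something like this (hedged; the interface name and MTU value are assumptions about the affected setup):
+
+```
+# clamp the guest NIC below the smallest path MTU so the broken offload path is avoided
+ip link set dev eth0 mtu 1400
+```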
diff --git a/results/classifier/zero-shot/108/performance/1321 b/results/classifier/zero-shot/108/performance/1321
new file mode 100644
index 000000000..6b5543f64
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1321
@@ -0,0 +1,23 @@
+performance: 0.995
+graphic: 0.872
+device: 0.793
+other: 0.462
+socket: 0.450
+boot: 0.368
+semantic: 0.323
+network: 0.321
+PID: 0.231
+debug: 0.214
+permissions: 0.169
+vnc: 0.155
+files: 0.018
+KVM: 0.001
+
+qemu-system-i386 runs slowly after upgrading a legacy project from QEMU 2.9.0 to 7.1.0
+Description of problem:
+We use several custom serial and IRQ devices, including timers.
+The same code (after some customisation in order to compile with the new 7.1.0 API and the Meson build system) runs about 50% slower.
+We had to remove the "-icount 4" switch, which worked fine with 2.9.0, just to get to this point.
+Even running with multi-threaded TCG did not help.
+We don't use the new ptimer API but rather the old QEMUTimer.
+Any suggestions as to why we encounter this vast performance degradation?
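+
+For reference, the devices drive their timers through the plain QEMUTimer API along these lines (simplified sketch; the device state type, field names and period are hypothetical):
+
+```
+static void irq_tick(void *opaque)
+{
+    MyDeviceState *s = opaque;
+
+    qemu_irq_pulse(s->irq);   /* fire the custom IRQ */
+    /* re-arm for the next period */
+    timer_mod(s->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + s->period_ns);
+}
+
+/* at device init time */
+s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, irq_tick, s);
+```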
diff --git a/results/classifier/zero-shot/108/performance/1321464 b/results/classifier/zero-shot/108/performance/1321464
new file mode 100644
index 000000000..2411efbc8
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1321464
@@ -0,0 +1,68 @@
+performance: 0.986
+other: 0.778
+semantic: 0.775
+graphic: 0.774
+PID: 0.755
+device: 0.641
+socket: 0.641
+network: 0.566
+permissions: 0.523
+debug: 0.515
+boot: 0.508
+files: 0.499
+vnc: 0.372
+KVM: 0.306
+
+qemu/block/qcow2.c:1942: possible performance problem ?
+
+I just ran static analyser cppcheck over today (20140520) qemu source code.
+
+It said many things, including
+
+[qemu/block/qcow2.c:1942] -> [qemu/block/qcow2.c:1943]: (performance) Buffer 'pad_buf' is being written before its old content has been used.
+
+Source code is
+
+            memset(pad_buf, 0, s->cluster_size);
+            memcpy(pad_buf, buf, nb_sectors * BDRV_SECTOR_SIZE);
+
+Worth tuning ?
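+
+If one did want to avoid the redundant zeroing, a reordered variant would only clear the tail that the memcpy does not overwrite (hypothetical micro-optimisation, untested):
+
+            memcpy(pad_buf, buf, nb_sectors * BDRV_SECTOR_SIZE);
+            memset(pad_buf + nb_sectors * BDRV_SECTOR_SIZE, 0,
+                   s->cluster_size - nb_sectors * BDRV_SECTOR_SIZE);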
+
+Similar problem here
+
+[qemu/block/qcow.c:815] -> [qemu/block/qcow.c:816]: (performance) Buffer 'pad_buf' is being written before its old content has been used.
+
+and
+
+[qemu/hw/i386/acpi-build.c:1265] -> [qemu/hw/i386/acpi-build.c:1267]: (performance) Buffer 'dsdt' is being written before its old content has been used.
+
+I can only speak for qcow2 and qcow, but for those places, I don't think it is worth fixing. First of all, both are image formats, so the bottleneck is generally the disk on which the images are stored and not main memory, so an overeager memset should not cause any problems.
+
+For both, the relevant piece of code is in qcow2/qcow_write_compressed(), which are rarely used anyway (as far as I know), and even if used, they have additional overhead due to having to compress data first, so “fixing” the memset() won't make them noticeably faster.
+
+I don't know about the ACPI thing, but to me it seems that it's copying data to a temporary buffer and then overwriting its beginning with zeroes. From my very limited ACPI knowledge I'd guess this function is called at some point during qemu startup, so it doesn't seem worth optimizing either.
+
+On Tue, May 20, 2014 at 11:21:05PM -0000, Max Reitz wrote:
+> I can only speak for qcow2 and qcow, but for those places, I don't think
+> it is worth fixing. First of all, both are image formats, so the
+> bottleneck is generally the disk on which the images are stored and not
+> main memory, so an overeager memset should not cause any problems.
+> 
+> For both, the relevant piece of code is in qcow2/qcow_write_compressed()
+> which are rarely used anyway (as far as I know) and even if used, they
+> have additional overhead due to having to compress data first, so
+> “fixing” the memset() won't make them noticeably faster.
+
+I agree.  It won't make a noticeable difference and the compressed writes
+are only done in qemu-img convert, not for running guests.
+
+But patches to change this wouldn't hurt either.
+
+Stefan
+
+
+Thanks for reporting this, but it looks like the related code has been removed a while ago (there is no more "pad_buf" in qcow.c or qcow2.c), so closing this ticket. If you still can reproduce the (same or similar) problem with the latest version of QEMU, please open a new ticket instead.
+
diff --git a/results/classifier/zero-shot/108/performance/134 b/results/classifier/zero-shot/108/performance/134
new file mode 100644
index 000000000..fffdcc23a
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/134
@@ -0,0 +1,16 @@
+performance: 0.985
+device: 0.830
+network: 0.679
+boot: 0.464
+vnc: 0.424
+graphic: 0.287
+socket: 0.255
+PID: 0.247
+files: 0.166
+permissions: 0.150
+debug: 0.143
+semantic: 0.122
+other: 0.076
+KVM: 0.020
+
+Performance improvement when using "QEMU_FLATTEN" with softfloat type conversions
diff --git a/results/classifier/zero-shot/108/performance/1399939 b/results/classifier/zero-shot/108/performance/1399939
new file mode 100644
index 000000000..47e28ea22
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1399939
@@ -0,0 +1,30 @@
+performance: 0.974
+graphic: 0.876
+device: 0.823
+vnc: 0.760
+other: 0.757
+files: 0.743
+permissions: 0.729
+semantic: 0.722
+PID: 0.698
+network: 0.677
+socket: 0.664
+debug: 0.607
+boot: 0.601
+KVM: 0.371
+
+Qemu build with -faltivec and -maltivec support
+
+If possible, please add build support for QEMU to have -faltivec and -maltivec in CPPFLAGS, to make the emulation faster on AltiVec-equipped PPC machines.
+Thank you
+
+We assume that your C compiler generates decently optimised code that uses the features of your host CPU with just the standard -O2 optimisation flag. If this isn't the case, you can use configure's --extra-cflags argument (eg "--extra-cflags=-faltivec -maltivec") to get the build process to pass arbitrary flags to the compiler. Is that not sufficient here?
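+
+For example (assuming the host compiler actually accepts these flags):
+
+    ./configure --extra-cflags="-faltivec -maltivec"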
+
+
+Will check it. I had made my personal build by modding the Makefile with AltiVec flags in CPPFLAGS.
+I don't know if it was a placebo effect, but it looks like everything is faster.
+
+
+
+Closing this ticket since adding CPPFLAGS to configure is possible.
+
diff --git a/results/classifier/zero-shot/108/performance/1442 b/results/classifier/zero-shot/108/performance/1442
new file mode 100644
index 000000000..0ba2534ae
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1442
@@ -0,0 +1,16 @@
+performance: 0.950
+device: 0.848
+graphic: 0.470
+semantic: 0.365
+debug: 0.250
+permissions: 0.140
+boot: 0.097
+vnc: 0.080
+other: 0.033
+PID: 0.027
+socket: 0.025
+network: 0.025
+KVM: 0.018
+files: 0.006
+
+RISC-V qemu, get cpu tick
diff --git a/results/classifier/zero-shot/108/performance/1473451 b/results/classifier/zero-shot/108/performance/1473451
new file mode 100644
index 000000000..cf35964dd
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1473451
@@ -0,0 +1,34 @@
+performance: 0.936
+semantic: 0.764
+other: 0.750
+files: 0.715
+graphic: 0.692
+device: 0.682
+network: 0.584
+permissions: 0.544
+vnc: 0.463
+boot: 0.433
+debug: 0.399
+socket: 0.394
+PID: 0.362
+KVM: 0.199
+
+Please support the native BIOS format for DEC Alpha
+
+Currently qemu-system-alpha -bios parameter takes an ELF image.
+However HP maintains firmware updates for those systems.
+
+Some example rom files can be found here ftp://ftp.hp.com/pub/alphaserver/firmware/current_platforms/v7.3_release/DS20_DS20e/
+
+It might allow things like using the SRM firmware.
+The ARC (NT) firmware would allow building and testing Windows applications for that platform without having the relevant hardware.
+
+QEMU does not really implement a "true" ev67.
+
+We cheat and implement something that is significantly faster to emulate.
+E.g. doing all TLB refill within qemu, rather than in the PALcode.
+
+So, no, there's no chance of running true SRM or ARC firmware.
+
+But in that case it’s impossible to emulate or even compile Windows for DEC Alpha.
+
diff --git a/results/classifier/zero-shot/108/performance/1477 b/results/classifier/zero-shot/108/performance/1477
new file mode 100644
index 000000000..a275f0f06
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1477
@@ -0,0 +1,306 @@
+performance: 0.938
+graphic: 0.919
+permissions: 0.908
+KVM: 0.908
+debug: 0.906
+PID: 0.906
+other: 0.903
+semantic: 0.901
+device: 0.900
+socket: 0.894
+boot: 0.883
+vnc: 0.874
+network: 0.846
+files: 0.799
+
+Hot-plugged interfaces are not working after live migration
+Description of problem:
+After a live migration is performed for a VM, the hot-plugged PCI interface doesn't show up; instead, a SCSI storage controller is found to have been created. I checked that libvirt did send the QMP command to QEMU: `[pid 320011] 1673945683.378537 write(42, "{"execute":"device_add","arguments":{"driver":"virtio-net-pci","netdev":"hostua-test","id":"ua-test","mac":"00:e0:4c:6a:3b:51","bus":"pci.7","addr":"0x0"},"id":"libvirt-200"}\r\n", 176) = 176`
+Steps to reproduce:
+1. Perform a live migration by issuing the command `virsh migrate --live --persistent --verbose --unsafe --p2p demo-vm qemu+tls://node8/system?pkipath=/etc/pki/libvirt/private/`
+2. Then on the destination node that vm moved, create a bridge deivce `ip link add br-test1 type bridge`
+3. Create a tap.xml file with following code
+   ```
+   <interface type='bridge'>
+     <mac address='00:e0:4c:6a:3b:51'/>
+     <source bridge='br-test1'/>
+     <model type="virtio"/>
+     <alias name='ua-test'/>
+   </interface>
+   ```
+4. Save the original PCI information
+```
+$ virsh console demo-vm
+# Save origin pci information 
+[root@demo-vm ~]# lshw > before
+```
+5. Hot-plug an interface `virsh attach-device demo-vm tap.xml-backup --live --config`
+6. Dumpxml of demo-vm
+```
+<domain type='kvm' id='226'>
+  <name>demo-vm</name>
+  <uuid>cc74b867-3fb4-5e4f-bbce-33df21a89416</uuid>
+  <metadata>
+    <kubevirt xmlns="http://kubevirt.io">
+      <uid>79db3d82-ce8f-44e8-96a5-940cc37c0064</uid>
+      <graceperiod>
+        <deletionGracePeriodSeconds>30</deletionGracePeriodSeconds>
+      </graceperiod>
+    </kubevirt>
+  </metadata>
+  <maxMemory slots='16' unit='KiB'>134217728</maxMemory>
+  <memory unit='KiB'>1048576</memory>
+  <currentMemory unit='KiB'>1048576</currentMemory>
+  <vcpu placement='static' current='1'>128</vcpu>
+  <iothreads>1</iothreads>
+  <resource>
+    <partition>/machine</partition>
+  </resource>
+  <sysinfo type='smbios'>
+    <system>
+      <entry name='uuid'>cc74b867-3fb4-5e4f-bbce-33df21a89416</entry>
+    </system>
+  </sysinfo>
+  <os>
+    <type arch='x86_64' machine='pc-q35-rhel8.6.0'>hvm</type>
+    <smbios mode='sysinfo'/>
+  </os>
+  <features>
+    <acpi/>
+  </features>
+  <cpu mode='custom' match='exact' check='full'>
+    <model fallback='forbid'>Skylake-Server-IBRS</model>
+    <vendor>Intel</vendor>
+    <topology sockets='128' dies='1' cores='1' threads='1'/>
+    <feature policy='require' name='ss'/>
+    <feature policy='require' name='vmx'/>
+    <feature policy='require' name='pdcm'/>
+    <feature policy='require' name='hypervisor'/>
+    <feature policy='require' name='tsc_adjust'/>
+    <feature policy='require' name='clflushopt'/>
+    <feature policy='require' name='umip'/>
+    <feature policy='require' name='pku'/>
+    <feature policy='require' name='md-clear'/>
+    <feature policy='require' name='stibp'/>
+    <feature policy='require' name='arch-capabilities'/>
+    <feature policy='require' name='ssbd'/>
+    <feature policy='require' name='xsaves'/>
+    <feature policy='require' name='ibpb'/>
+    <feature policy='require' name='ibrs'/>
+    <feature policy='require' name='amd-stibp'/>
+    <feature policy='require' name='amd-ssbd'/>
+    <feature policy='require' name='skip-l1dfl-vmentry'/>
+    <feature policy='require' name='pschange-mc-no'/>
+    <feature policy='disable' name='mpx'/>
+    <numa>
+      <cell id='0' cpus='0-127' memory='1048576' unit='KiB'/>
+    </numa>
+  </cpu>
+  <clock offset='utc'/>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <devices>
+    <emulator>/usr/libexec/qemu-kvm</emulator>
+    <disk type='network' device='disk' model='virtio-non-transitional'>
+      <driver name='qemu' type='raw' error_policy='stop' discard='unmap'/>
+      <auth username='rbd-provisioner'>
+        <secret type='ceph' uuid='8fedf300-282c-4531-a66d-ca2691aaa88b'/>
+      </auth>
+      <source protocol='rbd' name='demo-pool/vol-5e83bed9-a2a3-11ed-bee4-3cfdfee07278' index='2'>
+        <host name='xx.xx.xx.xx' port='6789'/>
+        <host name='xx.xx.xx.xx' port='6789'/>
+        <host name='xx.xx.xx.xx' port='6789'/>
+      </source>
+      <target dev='vda' bus='virtio'/>
+      <boot order='1'/>
+      <alias name='ua-bootdisk'/>
+      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
+    </disk>
+    <disk type='file' device='disk' model='virtio-non-transitional'>
+      <driver name='qemu' type='raw' cache='writethrough' error_policy='stop' discard='unmap'/>
+      <source file='/var/run/kubevirt-ephemeral-disks/cloud-init-data/demo-vm/configdrive.iso' index='1'/>
+      <backingStore/>
+      <target dev='vdb' bus='virtio'/>
+      <alias name='ua-cloudinitdisk'/>
+      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
+    </disk>
+    <controller type='usb' index='0' model='none'>
+      <alias name='usb'/>
+    </controller>
+    <controller type='scsi' index='0' model='virtio-non-transitional'>
+      <alias name='scsi0'/>
+      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
+    </controller>
+    <controller type='virtio-serial' index='0' model='virtio-non-transitional'>
+      <alias name='virtio-serial0'/>
+      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
+    </controller>
+    <controller type='pci' index='0' model='pcie-root'>
+      <alias name='pcie.0'/>
+    </controller>
+    <controller type='pci' index='1' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='1' port='0x10'/>
+      <alias name='pci.1'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='pci' index='2' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='2' port='0x11'/>
+      <alias name='pci.2'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
+    </controller>
+    <controller type='pci' index='3' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='3' port='0x12'/>
+      <alias name='pci.3'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
+    </controller>
+    <controller type='pci' index='4' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='4' port='0x13'/>
+      <alias name='pci.4'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
+    </controller>
+    <controller type='pci' index='5' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='5' port='0x14'/>
+      <alias name='pci.5'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
+    </controller>
+    <controller type='pci' index='6' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='6' port='0x15'/>
+      <alias name='pci.6'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
+    </controller>
+    <controller type='pci' index='7' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='7' port='0x16'/>
+      <alias name='pci.7'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
+    </controller>
+    <controller type='pci' index='8' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='8' port='0x18'/>
+      <alias name='pci.8'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='pci' index='9' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='9' port='0x19'/>
+      <alias name='pci.9'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
+    </controller>
+    <controller type='pci' index='10' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='10' port='0x1a'/>
+      <alias name='pci.10'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
+    </controller>
+    <controller type='pci' index='11' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='11' port='0x1b'/>
+      <alias name='pci.11'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
+    </controller>
+    <controller type='pci' index='12' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='12' port='0x1c'/>
+      <alias name='pci.12'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
+    </controller>
+    <controller type='sata' index='0'>
+      <alias name='ide'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
+    </controller>
+    <interface type='ethernet'>
+      <mac address='00:00:00:6a:d3:bc'/>
+      <target dev='e6250550b78a43a' managed='yes'/>
+      <model type='virtio'/>
+      <mtu size='1500'/>
+      <alias name='ua-attachnet1'/>
+      <rom enabled='no'/>
+      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
+    </interface>
+    <interface type='bridge'>
+      <mac address='00:e0:4c:6a:3b:51'/>
+      <source bridge='br-test1'/>
+      <target dev='vnet5'/>
+      <model type='virtio'/>
+      <alias name='ua-test'/>
+      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
+    </interface>
+    <serial type='pty'>
+      <source path='/dev/pts/31'/>
+      <log file='/var/log/vm/79db3d82-ce8f-44e8-96a5-940cc37c0064/console.log' append='off'/>
+      <target type='isa-serial' port='0'>
+        <model name='isa-serial'/>
+      </target>
+      <alias name='serial0'/>
+    </serial>
+    <console type='pty' tty='/dev/pts/31'>
+      <source path='/dev/pts/31'/>
+      <log file='/var/log/vm/79db3d82-ce8f-44e8-96a5-940cc37c0064/console.log' append='off'/>
+      <target type='serial' port='0'/>
+      <alias name='serial0'/>
+    </console>
+    <channel type='unix'>
+      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-226-demo-vm/org.qemu.guest_agent.0'/>
+      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
+      <alias name='channel0'/>
+      <address type='virtio-serial' controller='0' bus='0' port='1'/>
+    </channel>
+    <input type='mouse' bus='ps2'>
+      <alias name='input0'/>
+    </input>
+    <input type='keyboard' bus='ps2'>
+      <alias name='input1'/>
+    </input>
+    <graphics type='vnc' port='5920' autoport='yes' listen='0.0.0.0'>
+      <listen type='address' address='0.0.0.0'/>
+    </graphics>
+    <audio id='1' type='none'/>
+    <video>
+      <model type='vga' vram='16384' heads='1' primary='yes'/>
+      <alias name='video0'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
+    </video>
+    <memballoon model='virtio-non-transitional'>
+      <alias name='balloon0'/>
+      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
+    </memballoon>
+  </devices>
+  <seclabel type='dynamic' model='dac' relabel='yes'>
+    <label>+107:+107</label>
+    <imagelabel>+107:+107</imagelabel>
+  </seclabel>
+</domain>
+``` 
+7. Console into the VM and check the PCI devices
+```
+$ virsh console demo-vm
+# no additional nic found in `ip a` list
+[root@demo-vm ~]# ip a
+# Compare pci
+[root@demo-vm ~]# lshw > after
+# instead of a virtio network PCI device, a virtio SCSI controller shows up
+[root@demo-vm ~]# diff before after
+# output
+  *-scsi                    
+       description: SCSI storage controller
+       product: Virtio SCSI
+       vendor: Red Hat, Inc.
+       physical id: 0
+       bus info: pci@0000:02:00.0
+       version: 01
+       width: 64 bits
+       clock: 33MHz
+       capabilities: scsi msix pm pciexpress bus_master cap_list
+       configuration: driver=virtio-pci latency=0
+       resources: irq:22 memory:fe600000-fe600fff memory:fc400000-fc403fff
+```
+Additional information:
+
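+A way to cross-check the guest-visible PCI topology from the host side after the hot-plug (hedged; the query-pci output format varies across QEMU/libvirt versions):
+
+```
+virsh qemu-monitor-command demo-vm --pretty '{"execute":"query-pci"}'
+```
+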
diff --git a/results/classifier/zero-shot/108/performance/1522 b/results/classifier/zero-shot/108/performance/1522
new file mode 100644
index 000000000..ad6528e61
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1522
@@ -0,0 +1,55 @@
+performance: 0.941
+boot: 0.906
+graphic: 0.882
+device: 0.856
+vnc: 0.782
+semantic: 0.770
+PID: 0.704
+socket: 0.686
+files: 0.670
+debug: 0.597
+other: 0.595
+permissions: 0.465
+network: 0.322
+KVM: 0.054
+
+Floppy controller returns the wrong thing for multitrack reads which span tracks
+Description of problem:
+I've just discovered that the Minix 1 and 2 operating systems no longer boot on qemu.
+
+Investigation reveals the following:
+
+- when Minix reads a 1024-byte block from disk, it issues a two-sector multitrack read to the FDC.
+- if the FDC runs out of sectors when it's on head 0, it automatically switches to head 1 (this is correct).
+- if the FDC runs out of sectors when it's on head 1, it stops the transfer (which is what is supposed to happen).
+
+What qemu does for the latter case is that it will automatically seek to the next track and switch to head 0. It then sets the SEEK COMPLETE bit in the status register. Minix sees this but isn't expecting it, because this shouldn't be emitted for reads and writes, and fails thinking it's an error.
+
+For example, here's the logging for such a transfer:
+
+```
+FLOPPY: Start transfer at 0 1 4f 11 (2878)
+FLOPPY: direction=1 (1024 - 10240)
+FLOPPY: copy 512 bytes (1024 0 10240) 0 pos 1 4f (17-0x00000b3e 0x00167c00)
+FLOPPY: seek to next sector (1 4f 11 => 2878)     <--- reads the last sector of head 1 track 0x4f
+FLOPPY: copy 512 bytes (1024 512 10240) 0 pos 1 4f (18-0x00000b3f 0x00167e00)
+FLOPPY: seek to next sector (1 4f 12 => 2879)     <--- attempt to move to the next sector, which fails
+FLOPPY: seek to next track (0 50 01 => 2879)      <--- moved to next track, which shouldn't happen
+FLOPPY: end transfer 1024 1024 10240
+FLOPPY: transfer status: 00 00 00 (20)            <--- status report
+```
+
+Transfer status 20 is the SEEK COMPLETE bit. For a normal head switch, that should be 04 (with the NOW ON HEAD 1 bit set).
+
+For reference, see page 5-13 of the uPD765 datasheet here: https://www.cpcwiki.eu/imgs/f/f3/UPD765_Datasheet_OCRed.pdf It says:
+
+> IF MT is high, a multitrack operation is performed.
+> If MT = 1 after finishing read/write operation on side 0,
+> FDC will automatically start command searching for sector
+> 1 on side 1
+Steps to reproduce:
+1. `qemu-system-i386 --fda images/minix-2.0-root-720kB.img`
+2. Press = to boot.
+3. Observe the 'Unrecoverable Read` errors as the ramdisk is loaded. (The system will still boot, but will then crash if you try to do anything due to a corrupt ramdisk.)
+
+[minix-2.0-root-720kB.img.bz2](/uploads/77d34db96f353d92cdb2d01928b8fc01/minix-2.0-root-720kB.img.bz2)
diff --git a/results/classifier/zero-shot/108/performance/1529173 b/results/classifier/zero-shot/108/performance/1529173
new file mode 100644
index 000000000..61ccaba03
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1529173
@@ -0,0 +1,55 @@
+performance: 0.961
+graphic: 0.856
+vnc: 0.835
+device: 0.828
+boot: 0.755
+files: 0.739
+PID: 0.678
+permissions: 0.628
+socket: 0.627
+other: 0.553
+debug: 0.453
+semantic: 0.450
+KVM: 0.348
+network: 0.285
+
+Absolutely slow Windows XP SP3 installation
+
+Host: Linux 4.3.3 vanilla x86-64/Qemu 2.5 i686 (mixed env)
+Guest: Windows XP Professional SP3 (i686)
+
+This is my launch string:
+
+$ qemu-system-i386 \
+-name "Windows XP Professional SP3" \
+-vga std \
+-net nic,model=pcnet \
+-cpu core2duo \
+-smp cores=2 \
+-cdrom /tmp/en_winxp_pro_with_sp3_vl.iso \
+-hda Windows_XP.qcow \
+-boot d \
+-net nic \
+-net user \
+-m 1536 \
+-localtime
+
+Console output:
+
+warning: TCG doesn't support requested feature: CPUID.01H:EDX.vme [bit 1]
+warning: TCG doesn't support requested feature: CPUID.80000001H:EDX.syscall [bit 11]
+warning: TCG doesn't support requested feature: CPUID.80000001H:EDX.lm|i64 [bit 29]
+warning: TCG doesn't support requested feature: CPUID.01H:EDX.vme [bit 1]
+warning: TCG doesn't support requested feature: CPUID.80000001H:EDX.syscall [bit 11]
+warning: TCG doesn't support requested feature: CPUID.80000001H:EDX.lm|i64 [bit 29]
+
+After hitting 35%, the installation more or less stalls (it doesn't stall completely, but it progresses at about 1% a minute, which is totally unacceptable).
+
+That was without KVM acceleration, so perhaps it's how it's meant to be.
+
+With KVM everything is fast and smooth.
+
+For integer workloads such as installing an OS you should expect TCG to be about 12x slower than KVM on average. That is on current master; note that TCG has gotten faster in the last couple of years. See a performance comparison from v2.7.0 to v2.11.0 for SPEC06 here: https://imgur.com/a/5P5zj
+
+I've therefore marked the report as invalid, as I don't think the aforementioned speedups will change your experience dramatically.
+
diff --git a/results/classifier/zero-shot/108/performance/1546445 b/results/classifier/zero-shot/108/performance/1546445
new file mode 100644
index 000000000..43d8a2017
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1546445
@@ -0,0 +1,63 @@
+performance: 0.950
+graphic: 0.862
+network: 0.862
+device: 0.845
+PID: 0.804
+permissions: 0.801
+semantic: 0.800
+other: 0.789
+socket: 0.770
+debug: 0.695
+KVM: 0.688
+boot: 0.679
+files: 0.602
+vnc: 0.416
+
+support vhost user without specifying vhostforce
+
+[Impact]
+
+ * Without the vhostforce option, vhost-user falls back to emulated virtio-net, which causes a performance loss. Forcing vhost should be the default behavior for vhost-user, since guests using PMDs don't support MSI-X.
+
+[Test Case]
+
+  create a vhost-user virtio backend without specifying the vhostforce option, i.e. -netdev type=vhost-user,id=mynet1,chardev=<char_dev_for_the_control_channel>
+  start the VM
+  vhost-user is not enabled
+
+[Regression Potential]
+
+ * none
+
+A vhost-user NIC doesn't support non-MSI guests (e.g. during the PXE stage) by default.
+A vhost-user NIC can't fall back to QEMU like a normal vhost-net NIC does, so we should
+enable it for non-MSI guests.
+
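+For reference, a command line that forces vhost even for non-MSI-X guests might look like this (hedged sketch; the socket path and IDs are assumptions):
+
+  -chardev socket,id=char0,path=/tmp/vhost-user.sock \
+  -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce=on \
+  -device virtio-net-pci,netdev=mynet1
+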
+The problem has been fixed in QEMU upstream - http://git.qemu.org/?p=qemu.git;a=commitdiff;h=24f938a682d934b133863eb421aac33592f7a09e. The patch needs to be backported to 1:2.2+dfsg-5expubuntu9.8.
+
+
+
+The attachment "debian patch for qemu 1:2.2+dfsg" seems to be a debdiff.  The ubuntu-sponsors team has been subscribed to the bug report so that they can review and hopefully sponsor the debdiff.  If the attachment isn't a patch, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are member of the ~ubuntu-sponsors, unsubscribe the team.
+
+[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issue please contact him.]
+
+Hello Liang, or anyone else affected,
+
+Accepted qemu into kilo-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.
+
+Please help us by testing this new package. To enable the -proposed repository:
+
+  sudo add-apt-repository cloud-archive:kilo-proposed
+  sudo apt-get update
+
+Your feedback will aid us getting this update out to other Ubuntu users.
+
+If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-kilo-needed to verification-kilo-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-kilo-failed. In either case, details of your testing will help us make a better decision.
+
+Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!
+
+Tested with 1:2.2+dfsg-5expubuntu9.7~cloud2, and the fix works for me.
+
+FYI, following additional regression tests, today we promoted qemu 2.2+dfsg-5expubuntu9.7~cloud2 from kilo-proposed to kilo-updates in the Ubuntu Cloud Archive.
+
+
diff --git a/results/classifier/zero-shot/108/performance/1569491 b/results/classifier/zero-shot/108/performance/1569491
new file mode 100644
index 000000000..09971aac9
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1569491
@@ -0,0 +1,27 @@
+performance: 0.975
+graphic: 0.872
+device: 0.793
+other: 0.657
+network: 0.626
+semantic: 0.565
+vnc: 0.415
+socket: 0.409
+boot: 0.409
+permissions: 0.408
+files: 0.288
+PID: 0.272
+debug: 0.225
+KVM: 0.029
+
+qemu-system-i386: poor performance on e5500 core
+
+I tested both with a generic build and with -mtune=e5500, but I get the same result: performance
+is extremely low compared with other classes of PowerPC CPU.
+The strange thing is that in every other emulator I have tested, the 5020 at 2 GHz is comparable in speed and benchmarks with a 970MP at 2.7 GHz, yet with i386-softmmu I am seeing half the performance of a 2.5 GHz 970MP.
+
+I'm triaging old bugs: Can you provide command lines, versions, and steps to test and measure the relative performance?
+
+At the very least, please try to confirm on the latest version of QEMU (5.2.0-rc0, if possible) to update this report.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1586194 b/results/classifier/zero-shot/108/performance/1586194
new file mode 100644
index 000000000..b6344d032
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1586194
@@ -0,0 +1,66 @@
+performance: 0.925
+vnc: 0.923
+other: 0.918
+device: 0.917
+network: 0.899
+PID: 0.898
+graphic: 0.891
+semantic: 0.890
+socket: 0.887
+boot: 0.881
+permissions: 0.865
+debug: 0.859
+KVM: 0.848
+files: 0.845
+
+VNC reverse broken in qemu 2.6.0
+
+Hi all,
+
+I recently tried to upgrade from Qemu 2.4.1 to 2.6.0, but found some problems with VNC reverse connections.
+
+1) In "-vnc 172.16.1.3:5902,reverse" used to mean "connect to port 5902"
+   That seems to have changed changed since 2.4.1, the thing after the IP address is now interpreted
+   as a display number. If that change was intentional, the man-page needs to be fixed.
+
+2) After subtracting 5900 from that port number (-vnc 172.16.1.3:2,reverse), I ran into a segfault.
+
+---8<---   
+Program received signal SIGSEGV, Segmentation fault.
+qio_channel_socket_get_local_address (ioc=0x0, errp=errp@entry=0x7fffffffe118) at io/channel-socket.c:33
+33          return socket_sockaddr_to_address(&ioc->localAddr,
+(gdb) bt
+#0  qio_channel_socket_get_local_address (ioc=0x0, errp=errp@entry=0x7fffffffe118) at io/channel-socket.c:33
+#1  0x000055555594c0f5 in vnc_init_basic_info_from_server_addr (errp=0x7fffffffe118, info=0x555558f35990, 
+    ioc=<optimized out>) at ui/vnc.c:146
+#2  vnc_server_info_get (vd=0x7fffecc4b010) at ui/vnc.c:223
+#3  0x000055555595192a in vnc_qmp_event (vs=0x555558f41f30, vs=0x555558f41f30, event=QAPI_EVENT_VNC_CONNECTED)
+    at ui/vnc.c:279
+#4  vnc_connect (vd=vd@entry=0x7fffecc4b010, sioc=sioc@entry=0x555558f34c00, skipauth=skipauth@entry=false, 
+    websocket=websocket@entry=false) at ui/vnc.c:2994
+#5  0x00005555559520d8 in vnc_display_open (id=id@entry=0x555556437650 "default", errp=errp@entry=0x7fffffffe228)
+    at ui/vnc.c:3773
+#6  0x0000555555952fd3 in vnc_init_func (opaque=<optimized out>, opts=<optimized out>, errp=<optimized out>)
+    at ui/vnc.c:3868
+#7  0x0000555555a011da in qemu_opts_foreach (list=<optimized out>, func=0x555555952fa0 <vnc_init_func>, opaque=0x0, 
+    errp=0x0) at util/qemu-option.c:1116
+#8  0x00005555556dcfbe in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4592
+--->8---
+
+A git bisect shows that this happens since
+
+---8<---
+commit 98481bfcd661daa3c160cc87a297b0e60a307788
+Author: Eric Blake <email address hidden>
+Date:   Mon Oct 26 16:34:45 2015 -0600
+
+    vnc: Hoist allocation of VncBasicInfo to callers
+--->8--- 
+
+TIA
+  Andi
+
+I think this has been fixed in QEMU 2.7, likely with the following commit:
+http://git.qemu.org/?p=qemu.git;a=commitdiff;h=3e7f136d8b4383d99f
+
+
diff --git a/results/classifier/zero-shot/108/performance/1589257 b/results/classifier/zero-shot/108/performance/1589257
new file mode 100644
index 000000000..8ab0bc742
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1589257
@@ -0,0 +1,36 @@
+performance: 0.963
+boot: 0.913
+network: 0.876
+device: 0.826
+graphic: 0.652
+vnc: 0.546
+other: 0.488
+semantic: 0.473
+debug: 0.417
+socket: 0.411
+permissions: 0.390
+PID: 0.301
+files: 0.208
+KVM: 0.127
+
+Boot with OVMF extremely slow to bootloader
+
+I have used Arch Linux in the past with the same version (2.5.0), the exact same OVMF code and vars, and the exact same VM settings with no issues. Now with Ubuntu, I am having an issue where booting up to Windows takes about 10x longer. Every CPU thread/core allocated gets used 100% while this is happening. After that, everything operates as normal. There are no abnormal logs produced by QEMU, or else I don't know how to debug this.
+
+Here are my settings:
+
+Host:
+Ubuntu 16.04
+Qemu 2.5.0
+Relevant configs attached
+
+Guest:
+Windows 10
+VirtIO raw disk image
+VirtIO network
+Typical VGA passthrough setup, everything operating normally
+
+
+
+I've solved the problem by using the ovmf package in apt instead of the firmware I've had before. Apparently, the older firmware was only compatible with an older kernel, and a newer kernel with the older firmware would cause the issue.
+
diff --git a/results/classifier/zero-shot/108/performance/1590336 b/results/classifier/zero-shot/108/performance/1590336
new file mode 100644
index 000000000..227ed9d80
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1590336
@@ -0,0 +1,54 @@
+performance: 0.935
+graphic: 0.916
+other: 0.912
+device: 0.834
+semantic: 0.752
+socket: 0.742
+files: 0.731
+PID: 0.648
+permissions: 0.645
+network: 0.640
+debug: 0.617
+vnc: 0.597
+boot: 0.460
+KVM: 0.385
+
+qemu-arm does not reject vrintz on non-v8 cpu
+
+Hello,
+
+It seems that qemu-arm does not reject some v8-only instructions as it should, but executes them "correctly".
+
+For instance, while compiling/running some of the GCC ARM intrinsics tests, we noticed that
+vrintz should be rejected on a cortex-a9, while it is executed as if the instruction were supported.
+
+objdump says:
+   1074c:       f3fa05a0        vrintz.f32      d16, d16
+and qemu -d in_asm says:
+0x0001074c:  f3fa05a0      vabal.u<illegal width 64>    q8, d26, d16
+
+The problem is still present in qemu-2.6.0
+
+Should be fixed by http://patchwork.ozlabs.org/patch/633105/
+
+
+I confirm your patch does fix the problem.
+
+You may still want to fix the disassembler such that it dumps the right instruction, but that would be a separate fix.
+
+Thanks for your quick support.
+
+
+On 9 June 2016 at 20:14, Christophe Lyon <email address hidden> wrote:
+> You may still want to fix the disassembler such that it dumps the right
+> instruction, but that would be a separate fix.
+
+Unfortunately the disassembler is the pre-GPLv3 binutils one,
+so we can't just update it (and I'm not particularly inclined
+to independently re-implement all the 32-bit instruction set
+changes post that change).
+
+thanks
+-- PMM
+
+
diff --git a/results/classifier/zero-shot/108/performance/1595240 b/results/classifier/zero-shot/108/performance/1595240
new file mode 100644
index 000000000..788001ea6
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1595240
@@ -0,0 +1,77 @@
+performance: 0.959
+files: 0.939
+other: 0.913
+PID: 0.908
+graphic: 0.901
+device: 0.880
+permissions: 0.875
+semantic: 0.855
+debug: 0.854
+socket: 0.841
+network: 0.838
+KVM: 0.748
+vnc: 0.739
+boot: 0.731
+
+Error by clone github.com qemu repository
+
+Hi.
+
+C:\Java\sources\kvm> git clone https://github.com/qemu/qemu.git
+Cloning into 'qemu'...
+remote: Counting objects: 279563, done.
+remote: Total 279563 (delta 0), reused 0 (delta 0), pack-reused 279563R
+Receiving objects: 100% (279563/279563), 122.45 MiB | 3.52 MiB/s, done.
+Resolving deltas: 100% (221942/221942), done.
+Checking connectivity... done.
+error: unable to create file hw/misc/aux.c (No such file or directory)
+error: unable to create file include/hw/misc/aux.h (No such file or directory)
+Checking out files: 100% (4795/4795), done.
+fatal: unable to checkout working tree
+warning: Clone succeeded, but checkout failed.
+You can inspect what was checked out with 'git status'
+and retry the checkout with 'git checkout -f HEAD'
+
+
+
+Windows has problems with any file named 'aux.*'.  The solution would be
+for qemu to rename it to something else, for the sake of Windows.
+
+On 06/22/2016 10:06 AM, Алексей Курган wrote:
+> Public bug reported:
+> 
+> Hi.
+> 
+> C:\Java\sources\kvm> git clone https://github.com/qemu/qemu.git
+> Cloning into 'qemu'...
+> remote: Counting objects: 279563, done.
+> remote: Total 279563 (delta 0), reused 0 (delta 0), pack-reused 279563R
+> Receiving objects: 100% (279563/279563), 122.45 MiB | 3.52 MiB/s, done.
+> Resolving deltas: 100% (221942/221942), done.
+> Checking connectivity... done.
+> error: unable to create file hw/misc/aux.c (No such file or directory)
+> error: unable to create file include/hw/misc/aux.h (No such file or directory)
+> Checking out files: 100% (4795/4795), done.
+> fatal: unable to checkout working tree
+> warning: Clone succeeded, but checkout failed.
+> You can inspect what was checked out with 'git status'
+> and retry the checkout with 'git checkout -f HEAD'
+> 
+> ** Affects: qemu
+>      Importance: Undecided
+>          Status: New
+> 
+> ** Attachment added: "2016-06-22_19-08-06.png"
+>    https://bugs.launchpad.net/bugs/1595240/+attachment/4688593/+files/2016-06-22_19-08-06.png
+> 
+
+-- 
+Eric Blake   eblake redhat com    +1-919-301-3266
+Libvirt virtualization library http://libvirt.org
+
+
+
+Patch has been included in QEMU v2.7.0:
+http://git.qemu.org/?p=qemu.git;a=commitdiff;h=e0dadc1e9ef1f35208e5d2a
+
+
diff --git a/results/classifier/zero-shot/108/performance/1654826 b/results/classifier/zero-shot/108/performance/1654826
new file mode 100644
index 000000000..c8418862c
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1654826
@@ -0,0 +1,43 @@
+performance: 0.920
+graphic: 0.918
+device: 0.900
+KVM: 0.885
+other: 0.860
+semantic: 0.825
+vnc: 0.810
+debug: 0.676
+permissions: 0.670
+boot: 0.669
+PID: 0.668
+network: 0.668
+files: 0.647
+socket: 0.644
+
+Holding key down using input-linux freezes guest
+
+Qemu release version 2.8.0
+KVM, kernel 4.9.1
+
+When using the -object input-linux capability in qemu for passthrough of input/evdev devices, I found that when a key is held for a few seconds or more (such as ctrl key), the guest system freezes until the key is released. In some cases, mouse control is also lost following one of these "freezes". I also noticed that one of the four cpu cores I have the guest pinned to ramps to 100% during these freezes.
+
+Thought I might add:
+
+The qemu command line option equivalents for mouse and keyboard:
+
+-object input-linux,id=kbd,evdev=/dev/input/by-path/platform-i8042-serio-0-event-kbd,repeat=on,grab_all=on \
+-object input-linux,id=ms1,evdev=/dev/input/by-id/usb-ROCCAT_ROCCAT_Kone_Pure-event-mouse
+
+quick workaround: drop "repeat=on".
+some guests seem to have problems with that, not debugged yet why.
+
+I have tried without "repeat=on" option, and with 2.8.0 I still seem to be getting weird behavior with mouse dropping out at points, and with keys seemingly being continued to be pressed (ie still running around in an fps game after releasing the key). I also experienced at one point l-ctrl+r-ctrl not passing keyboard control to guest, and needed to VNC in to shutdown/restart guest (this was after plugging in a usb xbox360 controller, not sure if related).
+
+I am getting the same issue where I can even hear the sound glitch out. I'm on Arch Linux.
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting all older bugs to
+"Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1672383 b/results/classifier/zero-shot/108/performance/1672383
new file mode 100644
index 000000000..a38b03a4a
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1672383
@@ -0,0 +1,38 @@
+performance: 0.933
+device: 0.519
+PID: 0.517
+boot: 0.462
+vnc: 0.421
+graphic: 0.395
+socket: 0.387
+files: 0.385
+network: 0.375
+permissions: 0.306
+semantic: 0.281
+other: 0.241
+debug: 0.196
+KVM: 0.009
+
+Slow Windows XP load after commit a9353fe897ca2687e5b3385ed39e3db3927a90e0
+
+I've recently discovered that in QEMU 2.8+ my Windows XP loading time has significantly worsened. In 2.7 it took 30-40 seconds to boot, but in 2.8 it became 2-2.5 minutes.
+
+I've used Git bisect, and found out that the change happened after commit a9353fe897ca2687e5b3385ed39e3db3927a90e0, which, as far as I can tell from the commit message, handled race condition when invalidating breakpoint.
+
+I've set a breakpoint in static void breakpoint_invalidate(CPUState *cpu, target_ulong pc), and here's a backtrace:
+#0  cpu_breakpoint_insert (cpu=cpu@entry=0x555556a73be0, pc=144, 
+    flags=flags@entry=32, breakpoint=breakpoint@entry=0x555556a7c670)
+    at /media/sdd2/qemu-work/exec.c:830
+#1  0x00005555558746ac in hw_breakpoint_insert (env=env@entry=0x555556a7be60, 
+    index=index@entry=0) at /media/sdd2/qemu-work/target-i386/bpt_helper.c:64
+#2  0x00005555558748ed in cpu_x86_update_dr7 (env=0x555556a7be60, 
+    new_dr7=<optimised out>)
+    at /media/sdd2/qemu-work/target-i386/bpt_helper.c:160
+#3  0x00007fffa17421f6 in code_gen_buffer ()
+#4  0x000055555577fcb4 in cpu_tb_exec (itb=<optimised out>, 
+    itb=<optimised out>, cpu=0x7fff8b7763b0)
+    at /media/sdd2/qemu-work/cpu-exec.c:164
+It seems that XP sets some breakpoints during its boot, which leads to frequent TB flushes and slow execution.
+
+Supposedly fixed by commit 406bc339b0505fcfc2ffcbca1f05a3756e338a65
+
diff --git a/results/classifier/zero-shot/108/performance/1677 b/results/classifier/zero-shot/108/performance/1677
new file mode 100644
index 000000000..43cf5bac2
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1677
@@ -0,0 +1,28 @@
+performance: 0.972
+graphic: 0.892
+boot: 0.868
+device: 0.839
+semantic: 0.743
+socket: 0.706
+files: 0.678
+vnc: 0.658
+PID: 0.639
+debug: 0.603
+permissions: 0.406
+network: 0.309
+other: 0.284
+KVM: 0.193
+
+qemu-system-x86_64 cannot run on Windows when -smp is specified with a value higher than `1`, an important argument for any expectation of VM performance
+Description of problem:
+qemu-system-x86_64 seems to crash on Windows the moment you try to use -smp to define more vCPUs; even the basic usage of `-smp 4` will cause qemu to segfault after the guest's boot option is selected.
+Steps to reproduce:
+1. `qemu-system-x86_64 -smp 4 -cdrom rhel-9.2-x86_64-dvd.iso -drive if=pflash,format=raw,unit=0,readonly=on,file=edk2-x64/OVMF_CODE.fd -m 6G -nodefaults -serial mon:stdio`
+2. Select the boot option to begin your installation
+3. qemu hangs for 10 or so seconds then throws a Segmentation Fault.
+Additional information:
+1. This does not happen if -smp arguments are omitted, but running VMs with a single vcpu thread is slow and painful.
+2. This still happens even without OVMF (traditional BIOS booting)
+3. This still happens even without -nodefaults and without a serial device
+
+Only output from qemu at death is `Segmentation fault`
diff --git a/results/classifier/zero-shot/108/performance/1714750 b/results/classifier/zero-shot/108/performance/1714750
new file mode 100644
index 000000000..0a228b2f8
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1714750
@@ -0,0 +1,132 @@
+performance: 0.958
+device: 0.954
+permissions: 0.948
+other: 0.944
+debug: 0.943
+semantic: 0.942
+graphic: 0.941
+socket: 0.934
+files: 0.931
+PID: 0.924
+network: 0.922
+vnc: 0.912
+KVM: 0.907
+boot: 0.851
+
+2.10.0 cannot be installed on case-insensitive file system
+
+The https://download.qemu.org/qemu-2.10.0.tar.bz2 tarball cannot be unpacked on a case-insensitive file system because it has a file qemu-2.10.0/roms/u-boot/scripts/Kconfig and a directory qemu-2.10.0/roms/u-boot/scripts/kconfig. This prevents installation on most macOS systems since by default the file system is case insensitive. The 2.10.0 upgrade is blocked in Homebrew due to this issue. See https://github.com/Homebrew/homebrew-core/pull/17467. This is a regression from 2.9.0, which didn't have this problem.
+
+That's apparently a problem with U-Boot. Could you please report this issue to the U-Boot project instead (see https://github.com/u-boot/u-boot)? We only include the u-boot sources in the QEMU tarballs, but we do not maintain them in the QEMU project, so we can not fix this issue here for you, sorry.
+
+The offending commit is https://github.com/u-boot/u-boot/commit/61304dbec36dc445bbe7d2c19b4da0695861e0a8 so it should be possible to downgrade u-boot until it gets fixed upstream, no?
+
+I don't think it would be wise to downgrade u-boot. You can always just skip unpacking the u-boot sources -- we don't actually build them, we just ship them for license compliance reasons.
+
+
+Hmm I'll try some magic tar invocations.
+
+>we don't actually build them, we just ship them for license compliance reasons.
+
+Would you be in compliance with the license if the u-boot sources were themselves in a tarball inside your qemu tarball?
+
+Yes, but that's not how we ship them today. (We're actually considering having the ROM blob sources be in an entirely separate tarball from the QEMU sources, for unrelated reasons).
+
+We should fix this bug by:
+ (1) getting u-boot to fix it upstream
+ (2) moving to a fixed u-boot
+
+
+I agree. But it's not really tenable in the interim for the 2.10.0 tarball to not even be unpackable on macOS.
+
+There is a simple workaround: use
+  tar xf qemu-2.10.0.tar.xz --exclude qemu-2.10.0/roms/u-boot/scripts/Kconfig
+
+
+Right. The issue is that that solution is O(n), not O(1).
+
+Eh? That command line is not particularly slow, especially compared to the time it takes to download the tarball in the first place.
+
+
+I mean in that every user is going to have to figure this out individually until it's fixed.
+
+In any case it will not be a problem for our Homebrew users, as I will do this:
+```
+diff --git a/Formula/qemu.rb b/Formula/qemu.rb
+index 16a54af167..db0e68d103 100644
+--- a/Formula/qemu.rb
++++ b/Formula/qemu.rb
+@@ -1,10 +1,20 @@
++# Fix extraction on case-insentive file systems.
++# Reported 4 Sep 2017 https://bugs.launchpad.net/qemu/+bug/1714750
++# This is actually an issue with u-boot and may take some time to sort out.
++class QemuDownloadStrategy < CurlDownloadStrategy
++  def stage
++    exclude = "#{name}-#{version}/roms/u-boot/scripts/Kconfig"
++    safe_system "tar", "xjf", cached_location, "--exclude", exclude
++    chdir
++  end
++end
++
+ class Qemu < Formula
+   desc "x86 and PowerPC Emulator"
+   homepage "https://www.qemu.org/"
+-  url "https://download.qemu.org/qemu-2.9.0.tar.bz2"
+-  sha256 "00bfb217b1bb03c7a6c3261b819cfccbfb5a58e3e2ceff546327d271773c6c14"
+-  revision 2
+-
++  url "https://download.qemu.org/qemu-2.10.0.tar.bz2",
++      :using => QemuDownloadStrategy
++  sha256 "7e9f39e1306e6dcc595494e91c1464d4b03f55ddd2053183e0e1b69f7f776d48"
+   head "https://git.qemu.org/git/qemu.git"
+ 
+   bottle do
+```
+https://github.com/Homebrew/homebrew-core/pull/17467
+
+Yes, it's awkward for users who are on OSX (or Windows, I assume). But the 2.10.0 release is already out and we can't change it -- if this bug had been reported for one of the 2.10.x release candidates it would maybe have been a release blocker. As it is we have to wait for a 2.10.1 release (and we need to actually fix the problem, preferably by getting u-boot upstream to do so).
+
+Thanks for putting the workaround into the homebrew packaging in the meantime.
+
+
+>Thanks for putting the workaround into the homebrew packaging in the meantime.
+
+You're welcome. I have now shipped the 2.10.0 binaries.
+
+>if this bug had been reported for one of the 2.10.x release candidates it would maybe have been a release blocker
+
+We should add a `devel` spec to the qemu formula for the next release candidates. I will try to remember.
+
+> As it is we have to wait for a 2.10.1 release
+
+Yeah that is unfortunate.
+
+>(and we need to actually fix the problem, preferably by getting u-boot upstream to do so).
+
+Agreed. If only their mailing list would confirm my subscription ... lol
+
+
+See https://lists.denx.de/pipermail/u-boot/2017-September/304728.html.
+
+Thanks for taking care of that @ubuntu-weilnetz
+
+Update: Sam Protsenko has kindly written and submitted a u-boot patch which resolves the filename clash:
+https://lists.denx.de/pipermail/u-boot/2017-September/307910.html
+
+
+Hurray!
+
+Somebody re-reported this which reminded me that we forgot to tidy up the loose ends here.
+Current status:
+ * this is fixed in upstream u-boot with their commit 610eec7f0593574 (committed October 2017, and in u-boot release v2017.11 and later)
+ * in QEMU's release process we put in a workaround in our commit d0dead3b6df7f6cd97 which puts the u-boot sources in their own tarball rather than extracted; this went into QEMU release 2.11.0
+ * we are still shipping the same old version of u-boot we were in 2.10
+
+So ideally we'd finish fixing this bug report by:
+ * updating our u-boot to some version v2017.11 or later
+ * removing the d0dead3b6df7f6cd97 workaround
+
+
+We updated our u-boot sources to v2019.01 in QEMU commit f2a3b549e357041f86d7e, and we removed the scripts/make-release workaround in commit 082c0543baa6f23770, so all the loose ends I mentioned in comment #18 are now fixed and will be in QEMU 4.0.
+
+
diff --git a/results/classifier/zero-shot/108/performance/1716 b/results/classifier/zero-shot/108/performance/1716
new file mode 100644
index 000000000..7d38db8f3
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1716
@@ -0,0 +1,24 @@
+performance: 0.976
+graphic: 0.967
+PID: 0.910
+semantic: 0.907
+files: 0.856
+debug: 0.844
+permissions: 0.828
+vnc: 0.822
+network: 0.812
+device: 0.798
+socket: 0.704
+other: 0.686
+KVM: 0.628
+boot: 0.586
+
+Cannot raise low memory using max-ram-below-4g on current i440fx
+Description of problem:
+We have a use case with a virtual machine that has at least 8 GB of RAM and at least 3.5 GB of it in low memory. However, I could not achieve this so far with QEMU, except on the deprecated i440fx 1.7 machine type. The size of lowmem is never greater than 3 GB, unless I assign the VM between 3 GB and 3.5 GB of memory. If I go even slightly above 3.5 GB, it falls back to 3 GB.
+
+I did some research and I found the source file hw/i386/pc_piix.c. There is a piece of code at the beginning of pc_init1() which is responsible for setting the low memory. It seems that the problem lies in the `gigabyte_align` property of all i440fx machine types newer than 1.7. The comment which explains this piece of code does not mention at all that raising lowmem does not work on newer pc machine types. According to the comments, setting the size of lowmem based on the `max-ram-below-4g` option should happen before the gigabyte alignment, not after it. As it stands it does not make sense, because with the default being 3 GB, gigabyte alignment always means 3 GB, so raising is not possible at all. The last example in the comment clearly states that raising should be possible using the newest `pc` machine type: `qemu -M pc,max-ram-below-4g=4G -m 3968M  -> 3968M low (=4G-128M)`. However, according to the code below the comment, this is not the way it works, because the gigabyte alignment happens afterwards.
+
+To solve the problem there are two possibilities: if this is a bug, then the solution is obvious, and the gigabyte alignment should happen before applying the `max-ram-below-4g` option. If this is not a bug but the expected behavior, then there could be an option to override the `gigabyte_align` attribute from the command line.
+
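+For illustration, a minimal C sketch of the ordering problem described above; the helper and its structure are hypothetical, not the exact code in pc_init1():
+
+```
+#include <stdint.h>
+
+#define GiB (1024ULL * 1024 * 1024)
+
+static uint64_t compute_lowmem(uint64_t ram_size,
+                               uint64_t max_ram_below_4g,
+                               int gigabyte_align)
+{
+    uint64_t lowmem = max_ram_below_4g;   /* user-requested ceiling */
+
+    if (ram_size >= lowmem) {
+        if (gigabyte_align) {
+            /* Aligning down *after* applying max-ram-below-4g clamps
+             * any ceiling above 3 GiB back down: 3.5G becomes 3G. */
+            lowmem &= ~(GiB - 1);
+        }
+    } else {
+        lowmem = ram_size;   /* all RAM fits below 4G anyway */
+    }
+    return lowmem;
+}
+```
+
+Applying the gigabyte alignment first and only then honouring `max-ram-below-4g` would let the comment's 4G example behave as documented.
+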
+What do you think?
diff --git a/results/classifier/zero-shot/108/performance/1718 b/results/classifier/zero-shot/108/performance/1718
new file mode 100644
index 000000000..f4bb65308
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1718
@@ -0,0 +1,62 @@
+performance: 0.952
+device: 0.899
+graphic: 0.860
+semantic: 0.740
+boot: 0.609
+vnc: 0.577
+permissions: 0.577
+PID: 0.557
+network: 0.556
+KVM: 0.508
+files: 0.476
+debug: 0.476
+other: 0.444
+socket: 0.410
+
+Strange throttle-group test results
+Description of problem:
+I have a question about throttle-group test results.
+
+I did a test to limit I/O by applying a throttle-group, and the result is not what I expected.
+
+The setup is as follows: a throttle-group with x-iops-total=500 and x-bps-total=524288000 throttling vdb, benchmarked with the following fio command:
+
+```
+# mount -t xfs /dev/vdb1 /mnt/disk
+
+# fio --direct=1 --bs=1M --iodepth=128 --rw=read --size=1G --numjobs=1 --runtime=600 --time_based --name=/mnt/disk/fio-file --ioengine=libaio --output=/mnt/disk/read-1M
+```
+
+When I test with a --bs value of 1M, I get 500 MiB/s throughput.
+![iops_500-1M](/uploads/f63ecbfdb13adc87bd4524f5298a224c/iops_500-1M.png)
+
+
+When I test with a --bs value of 2M, I don't get 500 MiB/s but only 332 MiB/s throughput.
+```
+fio --direct=1 --bs=2M --iodepth=128 --rw=read --size=1G --numjobs=1 --runtime=600 --time_based --name=/mnt/disk/fio-file --ioengine=libaio --output=/mnt/disk/read-2M
+```
+![iops_500-2M](/uploads/0a384fd9f026943e5e40af1c4b5d6dcd/iops_500-2M.png)
+
+
+If I set the QEMU x-iops-total value to 1500 and test again with a fio --bs value of 2M, I get 500 MiB/s throughput.
+
+![iops_1500-2M](/uploads/f31eb8213d034d612e915e355b52a324/iops_1500-2M.png)
+
+
+To summarize, here is the Test result.
+
+| fio bs | qemu x-iops-total | qemu x-bps-total | Result iops | Result throughput (MiB/s) |
+| ------ | ------ | ------ | ------ | ------ |
+| 2M     | 1500   | 524288000 | 250 | 500 |
+| **2M** | **500** | **524288000** | **166** | **332** |
+| 1M     | 1500   | 524288000 | 500 | 500 |
+| 1M     | 500    | 524288000 | 500 | 500 |
+
+
+When the --bs value is 2M and the x-iops-total value is 500, the throughput should be 500 MiB/s (limited by x-bps-total), but it is not, so I don't know what the problem is.
+
+If there is anything I missed, please let me know.
+Steps to reproduce:
+1. Apply throttle-group to vdb and start the VM
+2. mount vdb1
+3. test fio
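+
+One hedged observation on the numbers above: the measured 166 iops is almost exactly 500/3, which suggests each 2 MiB request may be accounted as roughly three I/O operations somewhere between fio and QEMU's throttle layer, so the iops limit rather than the bps limit is what caps throughput. This is speculation based only on the table; a small C check of the arithmetic:
+
+```
+/* Worked arithmetic for the table above; the split factor of 3 is an
+ * assumption inferred from the numbers, not confirmed behaviour. */
+#include <stdio.h>
+
+int main(void)
+{
+    const double iops_limit = 500.0;  /* x-iops-total */
+    const double req_mib    = 2.0;    /* fio --bs=2M */
+    const double split      = 3.0;    /* hypothetical per-request split */
+
+    double fio_iops = iops_limit / split;   /* ~166.7, measured: 166 */
+    double mib_s    = fio_iops * req_mib;   /* ~333 MiB/s, measured: 332 */
+
+    printf("fio iops ~= %.1f, throughput ~= %.1f MiB/s\n", fio_iops, mib_s);
+    return 0;
+}
+```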
diff --git a/results/classifier/zero-shot/108/performance/1721187 b/results/classifier/zero-shot/108/performance/1721187
new file mode 100644
index 000000000..426c49a74
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1721187
@@ -0,0 +1,52 @@
+performance: 0.940
+graphic: 0.916
+device: 0.671
+boot: 0.486
+other: 0.484
+semantic: 0.388
+PID: 0.325
+files: 0.242
+socket: 0.221
+permissions: 0.142
+debug: 0.112
+vnc: 0.112
+network: 0.108
+KVM: 0.044
+
+install Centos7 or fedora27 on qemu on windows8.1
+
+Hello,
+I have tried to install CentOS 7 or Fedora 27 on my Windows 8 machine using QEMU. I work on a notebook with 4 GB of RAM.
+Unfortunately, neither my touchpad nor my USB mouse is recognised during the graphical installation of CentOS and Fedora, so I cannot install them.
+Here are the commands I use for installation:
+
+qemu-img create -f qcow2 fedora27b2_hd.qcow2 80G
+
+qemu-system-x86_64 -k fr -hda fedora27b2_hd.qcow2 -cdrom Fedora-Workstation-Live-x86_64-27_Beta-1.5.iso -m 512 -boot d
+
+I have tried to add the option -device usb-mouse, but I got the error message that no 'usb-bus' was found for the usb-mouse device.
+
+What is wrong? QEMU or my installation command?
+
+Thank, BRgds,
+Laurent
+
+Which version of QEMU are you using? Did you compile QEMU on your own or are you using a pre-built binary?
+Anyway, to be able to use USB devices, you've got to specify the "-usb" parameter when starting QEMU.
+
+I use qemu-w64-setup-20170830.exe on Windows 8 64-bit.
+I tried the following command, but it is very, very slow:
+
+qemu-img create centos7_hd.img 80G
+
+qemu-system-x86_64 -k fr -cpu core2duo -m 1024 -usb -device usb-mouse -hda centos7_hd.img --drive media=cdrom,file=CentOS-7-x86_64-Everything-1708.iso,readonly
+
+BRgds,
+Laurent
+
+
+So I assume the mouse is working now? I think we then can close this ticket.
+Concerning the speed: QEMU is emulating the CPU by default, so this is of course slower than running everything natively. You've got to use an accelerator to get more speed - for Windows, you can use HAXM: https://www.qemu.org/2017/11/22/haxm-usage-windows/
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1723984 b/results/classifier/zero-shot/108/performance/1723984
new file mode 100644
index 000000000..491cf37b5
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1723984
@@ -0,0 +1,61 @@
+performance: 0.950
+device: 0.934
+files: 0.927
+semantic: 0.915
+permissions: 0.891
+socket: 0.886
+other: 0.881
+graphic: 0.881
+network: 0.873
+debug: 0.870
+PID: 0.766
+boot: 0.737
+vnc: 0.715
+KVM: 0.669
+
+ID_MMFR0 has an invalid value on aarch64 cpu (A57, A53)
+
+The ID_MMFR0 register, accessed from AArch64 state, has an invalid value:
+- the ARM ARMv8 documentation (D7.2, General system control registers) describes bits AuxReg[23:20] as:
+  "In ARMv8-A the only permitted value is 0010"
+- the Cortex-A53 and Cortex-A57 TRMs describe the value as 0x10201105, so AuxReg[23:20] is 0010 too
+- in QEMU target/arm/cpu64.c, the relevant value is
+  cpu->id_mmfr0 = 0x10101105;
+
+The 1 should be changed to 2.
+
+Spotted & Tested on the following qemu revision:
+
+commit 48ae1f60d8c9a770e6da64407984d84e25253c69
+Merge: 78b62d3 b867eaa
+Author: Peter Maydell <email address hidden>
+Date:   Mon Oct 16 14:28:13 2017 +0100
+
+QEMU's behaviour in this case is matching the hardware. We claim to model an r1p0 (based on the MIDR value we report), and for the r1p0 the A53 and A57 reported the ID_MMFR0 as 0x10101105 -- this is documented in the TRMs for that rev of the CPUs. r1p3 reports the 0x10201105 you describe, but this isn't the rev of the CPU we claim to be.
+
+In theory we could bump the rXpX but I'm not sure there's much point unless it's causing a real problem (we'd need to check what else might have changed between the two revisions).
+
+
+Oh I see. I didn't check older TRM since the ARM ARM was quite strict on the value, sorry.
+I'll read the MIDR to have a more robust code then. Thank you.
+
+You shouldn't need to read the MIDR at all.
+
+There are two sensible strategies for software I think:
+
+ (1) trust the architectural statement that v8 implies that the AIFSR and ADFSR both exist -- AIUI both QEMU and the hardware implementations that report 0001 in this MMFR0 field do actually implement those registers, so this is safe.
+
+ (2) read and pay attention to the AuxReg field, by handling 0001 as "only the Auxiliary Control Register is supported; AIFSR and ADFSR are not supported". This will work fine too -- on implementations that report 0001 you may not be using the AIFSR/ADFSR, but that's OK, because on those implementations they are RAZ/WI anyhow, so you couldn't do anything interesting with them anyway.
+
+If your code is genuinely v8 only then (1) is easiest. If you also need to support ARMv7 then (2) is best, because 0001 is a permitted value in ID_MMFR0 for an ARMv7 implementation, so you need to handle it regardless of the A53/A57 behaviour.
+
+Neither approach requires detecting and special casing A53/A57 revisions via the MIDR.
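+
+For illustration, a minimal sketch of strategy (2), assuming an AArch64 EL1 context where ID_MMFR0_EL1 is readable; the helper names are made up:
+
+```
+#include <stdbool.h>
+#include <stdint.h>
+
+static inline uint64_t read_id_mmfr0(void)
+{
+    uint64_t v;
+    __asm__ volatile("mrs %0, ID_MMFR0_EL1" : "=r"(v));
+    return v;
+}
+
+static bool have_aifsr_adfsr(void)
+{
+    unsigned auxreg = (read_id_mmfr0() >> 20) & 0xf;
+    /* 0b0001: only the Auxiliary Control Register;
+     * 0b0010: ACTLR plus AIFSR/ADFSR. */
+    return auxreg >= 2;
+}
+```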
+
+
+I see your point, thank you for the advice. I'm doing some low-level checks to be sure I'm on a known platform, so this MIDR-based code is very localized. For the "core" of the kernel, I'm mostly using (1), as accesses to MMU registers are localized in armv7/armv8 specialized sub-directories.
+
+
+
+Thanks for the update -- I'm going to close this bug. (Incidentally, my experience with checks of the "insist we're on a known platform with ID register values we recognize" kind is that they're more trouble than they're worth, especially if you plan running the software in an emulator.)
+
+
diff --git a/results/classifier/zero-shot/108/performance/1725707 b/results/classifier/zero-shot/108/performance/1725707
new file mode 100644
index 000000000..2d0843036
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1725707
@@ -0,0 +1,99 @@
+performance: 0.925
+debug: 0.918
+semantic: 0.909
+graphic: 0.894
+network: 0.889
+other: 0.879
+device: 0.872
+vnc: 0.858
+permissions: 0.818
+PID: 0.817
+boot: 0.810
+KVM: 0.788
+files: 0.780
+socket: 0.745
+
+QEMU sends excess VNC data to websockify even when network is poor
+
+Description of problem
+-------------------------
+In my latest topic, I reported a bug related to QEMU's websocket support:
+https://bugs.launchpad.net/qemu/+bug/1718964
+
+It has been fixed, but someone mentioned that they met the same problem when using QEMU with a standalone websocket proxy.
+That confused me, because in that scenario QEMU gets a "RAW" VNC connection.
+So I did a test and found that there are indeed problems. The problem is:
+
+When the client's network is poor (on a low-speed WAN), QEMU still sends a lot of data to the websocket proxy, and the client gets stuck. It seems that only QEMU has this problem; other VNC servers work fine.
+
+Environment
+-------------------------
+All of the following versions have been tested:
+
+QEMU: 2.8.1.1 / 2.9.1 / 2.10.1 / master (Up to date)
+Host OS: Ubuntu 16.04 Server LTS / CentOS 7 x86_64_1611
+Websocket Proxy: websockify 0.6.0 / 0.7.0 / 0.8.0 / master
+VNC Web Client: noVNC 0.5.1 / 0.61 / 0.62 / master
+Other VNC Servers: TigerVNC 1.8 / x11vnc 0.9.13 / TightVNC 2.8.8
+
+Steps to reproduce:
+-------------------------
+100% reproducible.
+
+1. Launch a QEMU instance (no websocket option needed):
+qemu-system-x86_64 -enable-kvm -m 6G ./win_x64.qcow2 -vnc :0
+
+2. Launch websockify on a separate host and connect to QEMU's VNC port
+
+3. Open VNC Web Client (noVNC/vnc.html) in browser and connect to websockify
+
+4. Play a video (e.g. Watch YouTube) on VM (To produce a lot of frame buffer update)
+
+5. Limit the client's inbound bandwidth to 300 KB/s (e.g. using NetLimiter) to simulate a low-speed WAN
+
+6. The client's output then gets stuck (less than 1 fps); the cursor is almost impossible to move
+
+7. Monitor network traffic on the proxy server
+
+Current result:
+-------------------------
+Monitor Downlink/Uplink network traffic on the proxy server
+(Refer to the attachments for more details).
+
+1. Used with QEMU
+- D: 5.9 MB/s U: 5.7 MB/s (Client on LAN)
+- D: 4.3 MB/s U: 334 KB/s (Client on WAN)
+
+2. Used with other VNC servers
+- D: 5.9 MB/s U: 5.6 MB/s (Client on LAN)
+- D: 369 KB/s U: 328 KB/s (Client on WAN)
+
+When the client's network is poor, all the other VNC servers (tigervnc/x11vnc/tightvnc)
+reduce the VNC data sent to the websocket proxy (uplink and downlink are symmetric), but QEMU never drops any frames and still sends a lot of data to websockify. The client has no capacity to accept so much data, more and more data accumulates in websockify, and then it crashes.
+
+Expected results:
+-------------------------
+When the client's network is poor (WAN), QEMU should reduce the VNC data sent to the websocket proxy.
+
+
+
+
+
+
+
+This is nothing specific to websockets AFAIK. Even using regular VNC, QEMU doesn't try to dynamically throttle data / quality settings.
+
+NB, if websockify crashes, then that is a serious flaw in websockify - it shouldn't read an unbounded amount of data from QEMU if it is unable to send it on to the client. If websockify stopped reading data from QEMU, then QEMU would in turn stop sending once the TCP buffer was full.
+
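+To make the backpressure point concrete, here is a minimal C sketch of a relay that stops reading from the server when its client-side buffer is full, so TCP flow control throttles QEMU; this is illustrative only, not websockify's actual code:
+
+```
+#include <stddef.h>
+#include <string.h>
+#include <unistd.h>
+
+#define BUF_MAX (64 * 1024)
+
+struct relay {
+    char   buf[BUF_MAX];
+    size_t len;              /* bytes buffered for the client */
+};
+
+/* Called from an event loop; both fds are non-blocking. */
+static void pump(struct relay *r, int server_fd, int client_fd)
+{
+    /* Only read more when there is room; otherwise the data stays in
+     * the kernel socket buffer and the server (QEMU) eventually
+     * blocks on its full TCP send buffer. */
+    if (r->len < BUF_MAX) {
+        ssize_t n = read(server_fd, r->buf + r->len, BUF_MAX - r->len);
+        if (n > 0) {
+            r->len += (size_t)n;
+        }
+    }
+    if (r->len > 0) {
+        ssize_t n = write(client_fd, r->buf, r->len);
+        if (n > 0) {
+            memmove(r->buf, r->buf + n, r->len - (size_t)n);
+            r->len -= (size_t)n;
+        }
+    }
+}
+```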
+
+Reference:
+https://github.com/novnc/noVNC/issues/431#issuecomment-71883085
+
+QEMU uses many more (30x) operations with much smaller amounts of data than other VNC servers; perhaps this leads to the different result.
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting older bugs to "Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has been solved with a newer version of QEMU already. Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1728116 b/results/classifier/zero-shot/108/performance/1728116
new file mode 100644
index 000000000..6b15eccba
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1728116
@@ -0,0 +1,74 @@
+performance: 0.981
+graphic: 0.976
+other: 0.944
+PID: 0.935
+device: 0.921
+semantic: 0.911
+socket: 0.908
+files: 0.880
+network: 0.871
+debug: 0.860
+KVM: 0.855
+permissions: 0.843
+vnc: 0.833
+boot: 0.793
+
+Empty /proc/self/auxv (linux-user)
+
+The userspace Linux API virtualization used to fake access to /proc/self/auxv, to provide meaningful data for the guest process.
+
+For newer qemu versions, this fails: the openat() is intercepted, but there's no content; /proc/self/auxv has length zero (i.e. reading from it returns 0 bytes).
+
+Good:
+
+$ x86_64-linux-user/qemu-x86_64 /usr/bin/cat /proc/self/auxv | wc -c
+256 /proc/self/auxv
+
+Bad:
+
+$ x86_64-linux-user/qemu-x86_64 /usr/bin/cat /proc/self/auxv | wc -c
+0 /proc/self/auxv
+
+This worked in 2.7.1, and fails in 2.10.1.
+
+This causes e.g. any procps-ng-based tool to segfault while reading from /proc/self/auxv in an endless loop (probably worth another bug report...)
+
+Doing a "git bisect" shows that this commit: https://github.com/qemu/qemu/commit/7c4ee5bcc introduced the problem.
+
+It might be a simple logic error (subtraction in the wrong direction?) or a signedness error. Adding some logging (to v2.10.1):
+
+diff --git a/linux-user/syscall.c b/linux-user/syscall.c
+index 9b6364a..49285f9 100644
+--- a/linux-user/syscall.c
++++ b/linux-user/syscall.c
+@@ -7469,6 +7469,9 @@ static int open_self_auxv(void *cpu_env, int fd)
+     abi_ulong len = ts->info->auxv_len;
+     char *ptr;
+ 
++    gemu_log(TARGET_ABI_FMT_lu"\n", len);
++    gemu_log(TARGET_ABI_FMT_ld"\n", len);
++
+     /*
+      * Auxiliary vector is stored in target process stack.
+      * read in whole auxv vector and copy it to file
+
+shows this output:
+
+$  x86_64-linux-user/qemu-x86_64 /usr/bin/cat /proc/self/auxv | wc -c
+18446744073709551264
+-352
+0
+
+And 352 could be the expected length.
+
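+That value fits the theory exactly: 18446744073709551264 is 2^64 - 352, i.e. the expected length negated. A self-contained toy illustration of that kind of reversed subtraction (the addresses are made up):
+
+```
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+    /* Hypothetical start/end of an auxv block on the target stack. */
+    uint64_t start = 0x7ffffff00000ULL;
+    uint64_t end   = start + 352;
+
+    uint64_t wrong = start - end;   /* wraps to 18446744073709551264 */
+    uint64_t right = end - start;   /* 352 */
+
+    printf("wrong=%llu right=%llu\n",
+           (unsigned long long)wrong, (unsigned long long)right);
+    return 0;
+}
+```
+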
+Oops, yes, commit 7c4ee5bcc82e643 broke this -- it switched the order in which we fill in the AUXV info, but forgot to adjust the calculation of the length, which as you've guessed we now get backwards.
+
+
+I've just sent this patch which fixes this bug:
+https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg01199.html
+(it turns out it wasn't quite as simple as getting the sign wrong, we were subtracting two things that were totally wrong).
+
+
+Fix has been released with QEMU 2.11:
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=f516511ea84d8bb3395d6e
+
diff --git a/results/classifier/zero-shot/108/performance/1731 b/results/classifier/zero-shot/108/performance/1731
new file mode 100644
index 000000000..65d5129b1
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1731
@@ -0,0 +1,27 @@
+performance: 0.971
+graphic: 0.967
+boot: 0.948
+device: 0.935
+semantic: 0.685
+PID: 0.639
+socket: 0.573
+vnc: 0.492
+permissions: 0.479
+debug: 0.325
+other: 0.249
+files: 0.245
+network: 0.054
+KVM: 0.017
+
+i440fx ide cdrom pathological slow on early win10 install  screen
+Description of problem:
+If you choose i440fx virtual hardware (the default in Proxmox) for Windows 10 instead of q35, getting from power-on to the Windows boot logo is 10 times slower. On my hardware you need to wait more than 1m45s until the blinking cursor in the upper left goes away and the blue Windows boot logo appears. That leads to the false assumption that your setup hangs.
+
+What's causing this slowness?
+
+Is the implementation really that bad?
+
+I compared the read performance of IDE, SATA and SCSI CD-ROMs in a Linux VM and could not observe such a big difference.
+
+see
+https://forum.proxmox.com/threads/win10-installation-pathological-slowness-with-i440fx-ide-cdrom.129351/
diff --git a/results/classifier/zero-shot/108/performance/1734810 b/results/classifier/zero-shot/108/performance/1734810
new file mode 100644
index 000000000..75730c00f
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1734810
@@ -0,0 +1,62 @@
+performance: 0.972
+graphic: 0.942
+other: 0.930
+KVM: 0.927
+device: 0.792
+permissions: 0.741
+boot: 0.740
+network: 0.696
+semantic: 0.695
+socket: 0.667
+files: 0.644
+PID: 0.640
+vnc: 0.606
+debug: 0.514
+
+Windows guest virtual PC running abnormally slow
+
+Guest systems running Windows 10 in a virtualized environment run unacceptably slowly, with no option in Boxes to offer the virtual machine more (or fewer) cores from my physical CPU.
+
+ProblemType: Bug
+DistroRelease: Ubuntu 17.10
+Package: gnome-boxes 3.26.1-1
+ProcVersionSignature: Ubuntu 4.13.0-17.20-lowlatency 4.13.8
+Uname: Linux 4.13.0-17-lowlatency x86_64
+ApportVersion: 2.20.7-0ubuntu3.5
+Architecture: amd64
+CurrentDesktop: ubuntu:GNOME
+Date: Tue Nov 28 00:37:11 2017
+ProcEnviron:
+ TERM=xterm-256color
+ PATH=(custom, no user)
+ XDG_RUNTIME_DIR=<set>
+ LANG=en_US.UTF-8
+ SHELL=/bin/bash
+SourcePackage: gnome-boxes
+UpgradeStatus: No upgrade log present (probably fresh install)
+
+
+
+Any news or fixes?
+
+Which command line parameters are passed to QEMU? Is your system able to use KVM (e.g. did you enable virtualization support in your BIOS)?
+
+I am constantly running Windows 10 and Windows Server 2016 and I don't experience specific slowdowns.
+
+QEMU command line is needed to understand the specific setup that might be problematic.
+
+If you don't provide the CLI parameters, there's no way we can help here, sorry. So marking this as "invalid" for the QEMU project.
+
+Windows installs are still acting abnormally slow on the latest Gnome Boxes flatpaks in Ubuntu 18.10.
+I'll try to get my CLI parameters and add them to the bug.
+
+Sorry if this sounds dumb, but where do I find the CLI parameters for my Windows VM?
+
+Jeb, if you open a bug against QEMU here, we expect some information about how QEMU is run. If you only interact with Gnome Boxes, then please open a bug against Boxes instead - best in their bug tracker here: https://bugzilla.gnome.org/ ... I guess nobody from the Boxes project is checking Launchpad, so reporting Boxes bugs here in Launchpad does not make much sense.
+
+At least please try to answer my questions in comment #3: Is virtualization enabled in your BIOS? Is KVM enabled on your system (i.e. are the kvm.ko and kvm_intel.ko or kvm_amd.ko modules loaded)?
+
+And for the CLI parameters, you could run this in a console window for example, after starting your guest:
+
+ps aux | grep qemu
+
diff --git a/results/classifier/zero-shot/108/performance/1735576 b/results/classifier/zero-shot/108/performance/1735576
new file mode 100644
index 000000000..0c31ffca0
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1735576
@@ -0,0 +1,47 @@
+performance: 0.949
+device: 0.895
+PID: 0.815
+graphic: 0.784
+other: 0.742
+boot: 0.731
+files: 0.723
+network: 0.686
+permissions: 0.676
+socket: 0.654
+semantic: 0.652
+vnc: 0.577
+KVM: 0.469
+debug: 0.369
+
+Support more than 4G memory for guest with Intel HAXM acceleration
+
+setup:
+
+host: windows 7 professional 64bit
+guest: centos 7
+qemu 2.10.92
+haxm 6.2.1
+
+Issue: when assigning 4096M or more memory to the guest, I get the following error message:
+E:\qemuvm\vm-svr>qemu-system-x86_64 -accel hax -hda centos-1.vdi -m 4096
+HAX is working and emulator runs in fast virt mode.
+Failed to allocate 0 memory
+hax_transaction_commit: Failed mapping @0x0000000000000000+0xc0000000 flags 00
+hax_transaction_commit: Failed mapping @0x0000000100000000+0x40000000 flags 00
+VCPU shutdown request
+VCPU shutdown request
+If I change the memory to 4095M, the guest VM boots up without issue:
+
+E:\qemuvm\vm-svr>qemu-system-x86_64 -accel hax -hda centos-1.vdi -m 4095
+HAX is working and emulator runs in fast virt mode.
+
+
+This is a known limitation. I already raised a request on the HAXM GitHub site to fix this: https://github.com/intel/haxm/issues/13; it was accepted and will be fixed in the next HAXM release. However, according to a HAXM dev there is also QEMU-side work needed, so I am raising this bug for the QEMU-side fix.
+
+Update:
+According to the HAXM devs, they will submit a patch for the QEMU side of the work.
+
+
+Fix has been included here:
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=7a5235c9e679c58be4
+
diff --git a/results/classifier/zero-shot/108/performance/1737 b/results/classifier/zero-shot/108/performance/1737
new file mode 100644
index 000000000..7be27bdd6
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1737
@@ -0,0 +1,64 @@
+performance: 0.958
+graphic: 0.919
+files: 0.862
+device: 0.848
+socket: 0.826
+permissions: 0.825
+PID: 0.799
+debug: 0.786
+vnc: 0.774
+network: 0.701
+semantic: 0.687
+boot: 0.686
+other: 0.581
+KVM: 0.346
+
+qemu-aarch64: Incorrect result for ssra instruction when using vector lengths of 1024-bit or higher.
+Description of problem:
+```
+#include <arm_sve.h>
+#include <stdio.h>
+
+#define SZ 32
+
+int main(int argc, char* argv[]) {
+  svbool_t pg = svptrue_b64();
+  uint64_t VL = svcntd();
+
+  fprintf(stderr, "One SVE vector can hold %li uint64_ts\n", VL);
+
+  int64_t sr[SZ], sx[SZ], sy[SZ];
+  uint64_t ur[SZ], ux[SZ], uy[SZ];
+
+  for (uint64_t i = 0; i < SZ; ++i) {
+    sx[i] = ux[i] = 0;
+    sy[i] = uy[i] = 1024;
+  }
+
+  for (uint64_t i = 0; i < SZ; i+=VL) {
+    fprintf(stderr, "Processing elements %li - %li\n", i, i + VL - 1);
+
+    svint64_t SX = svld1(pg, sx + i);
+    svint64_t SY = svld1(pg, sy + i);
+    svint64_t SR = svsra(SX, SY, 4);
+    svst1(pg, sr + i, SR);
+
+    svuint64_t UX = svld1(pg, ux + i);
+    svuint64_t UY = svld1(pg, uy + i);
+    svuint64_t UR = svsra(UX, UY, 4);
+    svst1(pg, ur + i, UR);
+  }
+
+  for (uint64_t i = 0; i < SZ; ++i) {
+    fprintf(stderr, "sr[%li]=%li, ur[%li]=%lu\n", i, sr[i], i, ur[i]);
+  }
+
+  return 0;
+}
+```
+Steps to reproduce:
+1. Build the above C source using "gcc -march=armv9-a -O1 ssra.c", can also use clang.
+2. Run with "qemu-aarch64 -cpu max,sve-default-vector-length=64 ./a.out" and you'll see the expected result of 64 (signed and unsigned)
+3. Run with "qemu-aarch64 -cpu max,sve-default-vector-length=128 ./a.out" and you'll see the expected result of 64 for unsigned, but the signed result is 0. This suggests the emulation of the SVE2 ssra instruction is incorrect for this and bigger vector lengths.
+Additional information:
+
diff --git a/results/classifier/zero-shot/108/performance/1743 b/results/classifier/zero-shot/108/performance/1743
new file mode 100644
index 000000000..7a98463d0
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1743
@@ -0,0 +1,31 @@
+performance: 0.942
+device: 0.923
+graphic: 0.913
+PID: 0.809
+socket: 0.726
+files: 0.620
+vnc: 0.588
+network: 0.587
+permissions: 0.508
+boot: 0.507
+semantic: 0.451
+debug: 0.436
+other: 0.379
+KVM: 0.031
+
+QEMU+Android emulator crashes on x86 host (but not Mac M1)
+Description of problem:
+The QEMU-based Android emulator crashes when using tflite on x86 hosts (but not M1 Macs).
+Steps to reproduce:
+1. Install the Android toolchain, including the emulator (sdkmanager, adb, avdmanager, etc.)
+2. Start the Android emulator on an x86 host
+3. Follow the instructions to download and run the tflite benchmarking tool [here](https://www.tensorflow.org/lite/performance/measurement)
+4. It crashes with the following error:
+
+```
+06-27 17:38:28.093  8355  8355 F ndk_translation: vendor/unbundled_google/libs/ndk_translation/intrinsics/intrinsics_impl_x86_64.cc:86: CHECK failed: 524288 == 0
+```
+
+We have tried with many different models and the result is always the same. The same models run fine when the emulator runs on a mac M1 host.
+Additional information:
+
diff --git a/results/classifier/zero-shot/108/performance/1768 b/results/classifier/zero-shot/108/performance/1768
new file mode 100644
index 000000000..96a9da01e
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1768
@@ -0,0 +1,47 @@
+performance: 0.967
+graphic: 0.794
+device: 0.756
+debug: 0.538
+vnc: 0.511
+semantic: 0.438
+network: 0.338
+socket: 0.336
+PID: 0.321
+other: 0.318
+boot: 0.307
+files: 0.288
+permissions: 0.165
+KVM: 0.084
+
+Could not allocate more than ~2GB with qemu-user
+Description of problem:
+Under qemu-user, allocating more than about 2 GB fails on 32-bit platforms that support up to 4 GB (arm, ppc, etc.).
+Steps to reproduce:
+1. Try to allocate more than 2 GB [e.g. for(i=0;i<64;i++) if(malloc(64*1024*1024)==NULL) perror("Failed to allocate 64MB");]
+2. Only one 64 MB chunk is allocated in the upper 2 GB of the memory space
+3. Allocation fails after about 2 GB.
+Additional information:
+The problem is in the third parameter of the **pageflags_find** and **pageflags_next** functions (found in _accel/tcg/user-exec.c_), which should be **target_ulong** instead of the incorrect _target_long_ (the parameter gets sign-extended when converted to uint64_t).
+The testing program is the following:
+```
+#include <stdio.h>
+#include <stdlib.h>
+
+int main(int argc,char *argv[]) {
+  unsigned int a;
+  unsigned int i;
+  char *al;
+  unsigned int sss=1U*1024*1024*64;
+  for(a=0;a<128;a++) {
+    al=malloc(sss);
+    if(al!=NULL) {
+      printf("ALLOC OK %u (%08lX)!\n",sss*(a+1),al);
+    }
+    else {
+      printf("Cannot alloc %d\n",(a+1)*sss);
+      perror("Cannot alloc");
+      exit(1);
+    }
+  }
+}
+```
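+
+To illustrate the sign-extension issue, here is a small self-contained example; the typedefs mimic a 32-bit guest and are illustrative, not QEMU's actual definitions:
+
+```
+#include <stdint.h>
+#include <stdio.h>
+
+typedef int32_t  target_long;    /* signed, as in the buggy prototype */
+typedef uint32_t target_ulong;   /* unsigned, as the fix requires */
+
+int main(void)
+{
+    uint32_t last = 0xbfffffffu;  /* a guest address above 2 GiB */
+
+    /* Widening through the signed type sign-extends the address. */
+    uint64_t as_signed   = (uint64_t)(int64_t)(target_long)last;
+    uint64_t as_unsigned = (uint64_t)(target_ulong)last;
+
+    printf("signed:   %016llx\n", (unsigned long long)as_signed);
+    printf("unsigned: %016llx\n", (unsigned long long)as_unsigned);
+    return 0;
+}
+```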
diff --git a/results/classifier/zero-shot/108/performance/1784 b/results/classifier/zero-shot/108/performance/1784
new file mode 100644
index 000000000..587cea97e
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1784
@@ -0,0 +1,28 @@
+performance: 0.955
+device: 0.921
+graphic: 0.911
+other: 0.895
+permissions: 0.808
+files: 0.771
+semantic: 0.757
+network: 0.695
+boot: 0.679
+debug: 0.637
+vnc: 0.633
+socket: 0.606
+PID: 0.577
+KVM: 0.206
+
+Mac M1 Max / Debian guest / LUKS password / Switching to the graphical login manager (lightdm/gdm) hangs in 75% of cases
+Description of problem:
+In approximately 70% of the cases where I start QEMU with a Debian guest that was installed with full disk encryption, QEMU 'hangs' (does not respond) after I unlock the encrypted guest and the guest tries to start the graphical login manager (gdm or lightdm).
+
+I need to force quit QEMU, restart it multiple times until the start of the graphical login manager works.
+Steps to reproduce:
+1. Install Debian with (guided) full disk encryption and either the Gnome or the XFCE desktop environment
+2. To be able to unlock the hard disk after the installation finished, the Linux boot parameter 'console=tty1' needs to be added within grub to the Linux command line
+3. Try to restart/reboot QEMU several times and QEMU will become unresponsive multiple times in this process.
+Additional information:
+I encounter this problem for several months now, with different versions of QEMU, macOS and Debian.
+
+There is one observation which might help: I installed [DropBear](https://packages.debian.org/buster/dropbear-initramfs) to experiment with remote unlocking of LUKS-encrypted Linux boxes. It seems that QEMU does not go into the unresponsive state when I unlock the hard disk via SSH and do not focus the QEMU window until after the graphical login manager has started. (I have only tried remote unlocking a few times, so it is too early to confirm whether this works 100% of the time.)
diff --git a/results/classifier/zero-shot/108/performance/1789 b/results/classifier/zero-shot/108/performance/1789
new file mode 100644
index 000000000..1f34abf45
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1789
@@ -0,0 +1,32 @@
+performance: 0.956
+device: 0.940
+graphic: 0.899
+PID: 0.892
+debug: 0.857
+vnc: 0.830
+semantic: 0.812
+boot: 0.725
+network: 0.719
+KVM: 0.690
+socket: 0.675
+permissions: 0.672
+other: 0.504
+files: 0.233
+
+First connection to spice hangs after 1 min
+Description of problem:
+After starting a VM, the first connection to spice logs these errors:
+
+```
+2023-07-25T16:00:47.497042Z qemu-system-x86_64: warning: Spice: main:0 (0x7f1a3fca5b90): invalid net test stage, ping id 0 test id 0 stage 4
+2023-07-25T16:00:47.497170Z qemu-system-x86_64: warning: Spice: main:0 (0x7f1a3fca5b90): invalid net test stage, ping id 0 test id 0 stage 0
+```
+
+And after 60 seconds the spice viewer is closed with this error:
+```
+2023-07-25T16:01:47.384207Z qemu-system-x86_64: warning: Spice: main:0 (0x7f1a3fca5b90): rcc 0x7f1a1968cb60 has been unresponsive for more than 30000 ms, disconnecting
+```
+Steps to reproduce:
+1. Start vm with spice
+2. Connect to spice
+3. Wait for at least 60 seconds and the viewer will close
diff --git a/results/classifier/zero-shot/108/performance/1815889 b/results/classifier/zero-shot/108/performance/1815889
new file mode 100644
index 000000000..fc7a8830d
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1815889
@@ -0,0 +1,929 @@
+performance: 0.957
+device: 0.957
+permissions: 0.950
+debug: 0.950
+semantic: 0.949
+other: 0.940
+graphic: 0.939
+PID: 0.923
+socket: 0.921
+files: 0.918
+boot: 0.908
+network: 0.903
+vnc: 0.887
+KVM: 0.858
+
+qemu-system-x86_64 crashed with signal 31 in __pthread_setaffinity_new()
+
+Unable to launch Default Fedora 29 images in gnome-boxes
+
+ProblemType: Crash
+DistroRelease: Ubuntu 19.04
+Package: qemu-system-x86 1:3.1+dfsg-2ubuntu1
+ProcVersionSignature: Ubuntu 4.19.0-12.13-generic 4.19.18
+Uname: Linux 4.19.0-12-generic x86_64
+ApportVersion: 2.20.10-0ubuntu20
+Architecture: amd64
+Date: Thu Feb 14 11:00:45 2019
+ExecutablePath: /usr/bin/qemu-system-x86_64
+KvmCmdLine: COMMAND         STAT  EUID  RUID   PID  PPID %CPU COMMAND
+MachineType: Dell Inc. Precision T3610
+ProcEnviron: PATH=(custom, user)
+ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.19.0-12-generic root=UUID=939b509b-d627-4642-a655-979b44972d17 ro splash quiet vt.handoff=1
+Signal: 31
+SourcePackage: qemu
+StacktraceTop:
+ __pthread_setaffinity_new (th=<optimized out>, cpusetsize=128, cpuset=0x7f5771fbf680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+ () at /usr/lib/x86_64-linux-gnu/dri/radeonsi_dri.so
+ () at /usr/lib/x86_64-linux-gnu/dri/radeonsi_dri.so
+ start_thread (arg=<optimized out>) at pthread_create.c:486
+ clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
+Title: qemu-system-x86_64 crashed with signal 31 in __pthread_setaffinity_new()
+UpgradeStatus: Upgraded to disco on 2018-11-14 (91 days ago)
+UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo video
+dmi.bios.date: 11/14/2018
+dmi.bios.vendor: Dell Inc.
+dmi.bios.version: A18
+dmi.board.name: 09M8Y8
+dmi.board.vendor: Dell Inc.
+dmi.board.version: A01
+dmi.chassis.type: 7
+dmi.chassis.vendor: Dell Inc.
+dmi.modalias: dmi:bvnDellInc.:bvrA18:bd11/14/2018:svnDellInc.:pnPrecisionT3610:pvr00:rvnDellInc.:rn09M8Y8:rvrA01:cvnDellInc.:ct7:cvr:
+dmi.product.name: Precision T3610
+dmi.product.sku: 05D2
+dmi.product.version: 00
+dmi.sys.vendor: Dell Inc.
+
+
+
+StacktraceTop:
+ __pthread_setaffinity_new (th=<optimized out>, cpusetsize=128, cpuset=0x7f5771fbf680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+ ?? () from /tmp/apport_sandbox_8_pwkx51/usr/lib/x86_64-linux-gnu/dri/radeonsi_dri.so
+ ?? ()
+ ?? ()
+ ?? ()
+
+
+
+
+
+
+
+
+I can confirm the reported issue
+
+Trace looks similar:
+--- stack trace ---
+#0  0x00007f1570fec0bf in __pthread_setaffinity_new (th=<optimized out>, cpusetsize=128, cpuset=0x7f156d4e3680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+        __arg2 = 128
+        _a3 = 139730004883072
+        _a1 = 22587
+        resultvar = <optimized out>
+        __arg3 = 139730004883072
+        __arg1 = 22587
+        _a2 = 128
+        pd = <optimized out>
+        res = <optimized out>
+#1  0x00007f156dc8dc73 in ?? () from /usr/lib/x86_64-linux-gnu/dri/i965_dri.so
+No symbol table info available.
+#2  0x00007f156dc8d5d7 in ?? () from /usr/lib/x86_64-linux-gnu/dri/i965_dri.so
+No symbol table info available.
+#3  0x00007f1570fe1164 in start_thread (arg=<optimized out>) at pthread_create.c:486
+        ret = <optimized out>
+        pd = <optimized out>
+        now = <optimized out>
+        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {139730004887296, -2085932122569588158, 140733496626446, 140733496626447, 0, 139730004883520, 2100820740254843458, 2100830499542516290}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
+        not_first_call = <optimized out>
+#4  0x00007f1570f09def in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
+No locals.
+--- source code stack trace ---
+#0  0x00007f1570fec0bf in __pthread_setaffinity_new (th=<optimized out>, cpusetsize=128, cpuset=0x7f156d4e3680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+  [Error: pthread_setaffinity.c was not found in source tree]
+#1  0x00007f156dc8dc73 in ?? () from /usr/lib/x86_64-linux-gnu/dri/i965_dri.so
+#2  0x00007f156dc8d5d7 in ?? () from /usr/lib/x86_64-linux-gnu/dri/i965_dri.so
+#3  0x00007f1570fe1164 in start_thread (arg=<optimized out>) at pthread_create.c:486
+  [Error: pthread_create.c was not found in source tree]
+#4  0x00007f1570f09def in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
+  [Error: clone.S was not found in source tree]
+
+
+libvirt XML that was generated:
+<domain type="kvm">
+  <name>fedora29-wor</name>
+  <uuid>2f4e83f7-18ed-45e2-bbf7-eef9f1c6c6c0</uuid>
+  <title>Fedora 29 Workstation</title>
+  <metadata>
+    <boxes:gnome-boxes xmlns:boxes="https://wiki.gnome.org/Apps/Boxes">
+      <os-state>live</os-state>
+      <media-id>http://fedoraproject.org/fedora/29:0</media-id>
+      <media>/home/paelzer/Fedora-Workstation-Live-x86_64-29-1.2.iso</media>
+    </boxes:gnome-boxes>
+    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
+      <libosinfo:os id="http://fedoraproject.org/fedora/29"/>
+    </libosinfo:libosinfo>
+  </metadata>
+  <memory unit="KiB">2097152</memory>
+  <currentMemory unit="KiB">2097152</currentMemory>
+  <vcpu placement="static">2</vcpu>
+  <os>
+    <type arch="x86_64" machine="pc-q35-3.1">hvm</type>
+    <boot dev="cdrom"/>
+    <boot dev="hd"/>
+  </os>
+  <features>
+    <acpi/>
+    <apic/>
+  </features>
+  <cpu mode="host-passthrough" check="none">
+    <topology sockets="1" cores="2" threads="1"/>
+  </cpu>
+  <clock offset="utc">
+    <timer name="rtc" tickpolicy="catchup"/>
+    <timer name="pit" tickpolicy="delay"/>
+    <timer name="hpet" present="no"/>
+  </clock>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>destroy</on_reboot>
+  <on_crash>destroy</on_crash>
+  <pm>
+    <suspend-to-mem enabled="no"/>
+    <suspend-to-disk enabled="no"/>
+  </pm>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <disk type="file" device="disk">
+      <driver name="qemu" type="qcow2" cache="writeback"/>
+      <source file="/home/paelzer/.local/share/gnome-boxes/images/fedora29-wor"/>
+      <target dev="vda" bus="virtio"/>
+      <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
+    </disk>
+    <disk type="file" device="cdrom">
+      <driver name="qemu" type="raw"/>
+      <source file="/home/paelzer/Fedora-Workstation-Live-x86_64-29-1.2.iso" startupPolicy="mandatory"/>
+      <target dev="hdc" bus="sata"/>
+      <readonly/>
+      <address type="drive" controller="0" bus="0" target="0" unit="2"/>
+    </disk>
+    <controller type="usb" index="0" model="ich9-ehci1">
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x1d" function="0x7"/>
+    </controller>
+    <controller type="usb" index="0" model="ich9-uhci1">
+      <master startport="0"/>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x1d" function="0x0" multifunction="on"/>
+    </controller>
+    <controller type="usb" index="0" model="ich9-uhci2">
+      <master startport="2"/>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x1d" function="0x1"/>
+    </controller>
+    <controller type="usb" index="0" model="ich9-uhci3">
+      <master startport="4"/>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x1d" function="0x2"/>
+    </controller>
+    <controller type="sata" index="0">
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
+    </controller>
+    <controller type="pci" index="0" model="pcie-root"/>
+    <controller type="pci" index="1" model="pcie-root-port">
+      <model name="pcie-root-port"/>
+      <target chassis="1" port="0x10"/>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
+    </controller>
+    <controller type="pci" index="2" model="pcie-root-port">
+      <model name="pcie-root-port"/>
+      <target chassis="2" port="0x11"/>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
+    </controller>
+    <controller type="pci" index="3" model="pcie-root-port">
+      <model name="pcie-root-port"/>
+      <target chassis="3" port="0x12"/>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
+    </controller>
+    <controller type="pci" index="4" model="pcie-root-port">
+      <model name="pcie-root-port"/>
+      <target chassis="4" port="0x13"/>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
+    </controller>
+    <controller type="pci" index="5" model="pcie-root-port">
+      <model name="pcie-root-port"/>
+      <target chassis="5" port="0x14"/>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
+    </controller>
+    <controller type="virtio-serial" index="0">
+      <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
+    </controller>
+    <controller type="ccid" index="0">
+      <address type="usb" bus="0" port="1"/>
+    </controller>
+    <interface type="user">
+      <mac address="52:54:00:ee:17:af"/>
+      <model type="virtio"/>
+      <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
+    </interface>
+    <smartcard mode="passthrough" type="spicevmc">
+      <address type="ccid" controller="0" slot="0"/>
+    </smartcard>
+    <serial type="pty">
+      <target type="isa-serial" port="0">
+        <model name="isa-serial"/>
+      </target>
+    </serial>
+    <console type="pty">
+      <target type="serial" port="0"/>
+    </console>
+    <channel type="spicevmc">
+      <target type="virtio" name="com.redhat.spice.0"/>
+      <address type="virtio-serial" controller="0" bus="0" port="1"/>
+    </channel>
+    <channel type="spiceport">
+      <source channel="org.spice-space.webdav.0"/>
+      <target type="virtio" name="org.spice-space.webdav.0"/>
+      <address type="virtio-serial" controller="0" bus="0" port="2"/>
+    </channel>
+    <input type="tablet" bus="usb">
+      <address type="usb" bus="0" port="2"/>
+    </input>
+    <input type="mouse" bus="ps2"/>
+    <input type="keyboard" bus="ps2"/>
+    <graphics type="spice">
+      <listen type="none"/>
+      <image compression="off"/>
+      <gl enable="yes"/>
+    </graphics>
+    <sound model="ich9">
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x1b" function="0x0"/>
+    </sound>
+    <video>
+      <model type="virtio" heads="1" primary="yes">
+        <acceleration accel3d="yes"/>
+      </model>
+      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
+    </video>
+    <redirdev bus="usb" type="spicevmc">
+      <address type="usb" bus="0" port="3"/>
+    </redirdev>
+    <redirdev bus="usb" type="spicevmc">
+      <address type="usb" bus="0" port="4"/>
+    </redirdev>
+    <redirdev bus="usb" type="spicevmc">
+      <address type="usb" bus="0" port="5"/>
+    </redirdev>
+    <redirdev bus="usb" type="spicevmc">
+      <address type="usb" bus="0" port="6"/>
+    </redirdev>
+    <memballoon model="virtio">
+      <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
+    </memballoon>
+  </devices>
+</domain>
+
+Interestingly, the Ubuntu 18.10 image works.
+So is it really an attribute of the guest that breaks it?
+
+
+BTW - Arr, why does it spawn its own libvirtd ?!
+Dear gnome boxes what are you doing?
+0  1000 21610     1  20   0 85807204 68912 poll_s SLl pts/2     0:00 /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0/WebKitWebProcess 2 15
+0  1000 21612     1  20   0 85772584 34132 poll_s SLl pts/2     0:00 /usr/lib/x86_64-linux-gnu/webkit2gtk-4.0/WebKitNetworkProcess 3 15
+0  1000 21649     1  20   0 1391464 39144 poll_s Sl  ?          0:00 /usr/sbin/libvirtd --timeout=30
+
+Thanks to "lsof +fg -p" some important paths:
+
+The guest log is in /home/paelzer/.cache/libvirt/qemu/log/ubuntu18.10.log
+Control sockets are at
+/run/user/1000/libvirt/libvirt-sock
+/run/user/1000/libvirt/libvirt-admin-sock
+
+Now let's try to poke at it without that UI around it ...
+
+
+The following gets me to non boxy libvirt:
+$ virsh -c qemu+unix:///session?socket=/run/user/1000/libvirt/libvirt-sock list --all
+
+For now I'll assume that it does NOT depend on the guest, but let's modify the working Ubuntu guest step by step to become more like the F29 guest and we will see.
+
+1. different disks/iso's/MAC (obviously)
+2. F29 has gl enabled on the spice graphics
+3. video F29: virtio Ubuntu: qxl
+4. video has <acceleration accel3d='yes'/> set
+
+That is all the difference, so it seems 3d'ish to me.
+
+First change
+<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
+to
+<model type='virtio' heads='1' primary='yes'>
+=> still working
+
+Second change enable gl
+<gl enable='no'/>
+to
+<gl enable='yes'/>
+
+=> Broken
+
+Lets take back the First change but keep only the second.
+=> still broken.
+
+So it is the enablement of gl, which I have been working on recently anyway (some apparmor changes to make it work in my former setup).
+
+Thanks for sharing this bug; I need to analyze in more depth what is wrong here, but that might take a while.
+
+Note: Since your guest crashed on start the crash has no private data - marking the bug public ...
+
+
+For the time being as a workaround:
+ virsh -c qemu+unix:///session?socket=/run/user/1000/libvirt/libvirt-sock edit fedora29-wor
+(assuming that is your guest name as well)
+and switch off the gl enablement.
+That gives me a perfectly working guest; hope that helps you for now until a real fix is found.
+
+FTR: this guest XML (not out of gnome-boxes) works on the very same Host system.
+This runs qxl + gl=yes as well and does not fail.
+We need to find what the difference between those is as well.
+
+<domain type='kvm'>
+  <name>ubuntu18.04</name>
+  <uuid>2f6bde7c-1d3d-498a-b96c-8920f165fa4c</uuid>
+  <metadata>
+    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
+      <libosinfo:os id="http://ubuntu.com/ubuntu/18.04"/>
+    </libosinfo:libosinfo>
+  </metadata>
+  <memory unit='KiB'>2097152</memory>
+  <currentMemory unit='KiB'>2097152</currentMemory>
+  <vcpu placement='static'>2</vcpu>
+  <os>
+    <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <features>
+    <acpi/>
+    <apic/>
+    <vmport state='off'/>
+  </features>
+  <cpu mode='host-model' check='partial'>
+    <model fallback='allow'/>
+  </cpu>
+  <clock offset='utc'>
+    <timer name='rtc' tickpolicy='catchup'/>
+    <timer name='pit' tickpolicy='delay'/>
+    <timer name='hpet' present='no'/>
+  </clock>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <pm>
+    <suspend-to-mem enabled='no'/>
+    <suspend-to-disk enabled='no'/>
+  </pm>
+  <devices>
+    <emulator>/usr/bin/qemu-system-x86_64</emulator>
+    <disk type='file' device='disk'>
+      <driver name='qemu' type='qcow2'/>
+      <source file='/var/lib/libvirt/images/ubuntu18.04.qcow2'/>
+      <target dev='vda' bus='virtio'/>
+      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
+    </disk>
+    <disk type='file' device='cdrom'>
+      <driver name='qemu' type='raw'/>
+      <target dev='sda' bus='sata'/>
+      <readonly/>
+      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+    </disk>
+    <controller type='usb' index='0' model='ich9-ehci1'>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x7'/>
+    </controller>
+    <controller type='usb' index='0' model='ich9-uhci1'>
+      <master startport='0'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='usb' index='0' model='ich9-uhci2'>
+      <master startport='2'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x1'/>
+    </controller>
+    <controller type='usb' index='0' model='ich9-uhci3'>
+      <master startport='4'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x2'/>
+    </controller>
+    <controller type='sata' index='0'>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
+    </controller>
+    <controller type='pci' index='0' model='pcie-root'/>
+    <controller type='pci' index='1' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='1' port='0x10'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
+    </controller>
+    <controller type='pci' index='2' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='2' port='0x11'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
+    </controller>
+    <controller type='pci' index='3' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='3' port='0x12'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
+    </controller>
+    <controller type='pci' index='4' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='4' port='0x13'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
+    </controller>
+    <controller type='pci' index='5' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='5' port='0x14'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
+    </controller>
+    <controller type='pci' index='6' model='pcie-root-port'>
+      <model name='pcie-root-port'/>
+      <target chassis='6' port='0x15'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
+    </controller>
+    <controller type='virtio-serial' index='0'>
+      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
+    </controller>
+    <interface type='network'>
+      <mac address='52:54:00:8c:31:fc'/>
+      <source network='default'/>
+      <model type='virtio'/>
+      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
+    </interface>
+    <serial type='pty'>
+      <target type='isa-serial' port='0'>
+        <model name='isa-serial'/>
+      </target>
+    </serial>
+    <console type='pty'>
+      <target type='serial' port='0'/>
+    </console>
+    <channel type='unix'>
+      <target type='virtio' name='org.qemu.guest_agent.0'/>
+      <address type='virtio-serial' controller='0' bus='0' port='1'/>
+    </channel>
+    <channel type='spicevmc'>
+      <target type='virtio' name='com.redhat.spice.0'/>
+      <address type='virtio-serial' controller='0' bus='0' port='2'/>
+    </channel>
+    <input type='tablet' bus='usb'>
+      <address type='usb' bus='0' port='1'/>
+    </input>
+    <input type='mouse' bus='ps2'/>
+    <input type='keyboard' bus='ps2'/>
+    <graphics type='spice'>
+      <listen type='none'/>
+      <image compression='off'/>
+      <gl enable='yes'/>
+    </graphics>
+    <sound model='ich9'>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
+    </sound>
+    <video>
+      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
+      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
+    </video>
+    <redirdev bus='usb' type='spicevmc'>
+      <address type='usb' bus='0' port='2'/>
+    </redirdev>
+    <redirdev bus='usb' type='spicevmc'>
+      <address type='usb' bus='0' port='3'/>
+    </redirdev>
+    <memballoon model='virtio'>
+      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
+    </memballoon>
+    <rng model='virtio'>
+      <backend model='random'>/dev/urandom</backend>
+      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
+    </rng>
+  </devices>
+</domain>
+
+P.S. I'm on a trip next week so further response might take a while, sorry
+
+Since my domain ran gl fine I was eliminating more differences one by one, keeping <gl enable='yes'/> to check if there is a second ingredient needed.
+
+- do not set acceleration on virtio video dev
+- machine type q35 -> i440fx (and all pcie->pci that comes with that)
+- 1 instead of 4 vcpus
+- no host passthrough
+- no boot from CD
+- add pae feature
+- remove rtc/pit/hpet clock attributes
+- usb ich9-[eu]hci1 -> piix3-uhci
+- no smartcard entry
+- no usb tablet
+- use cirrus video card
+- virtio channel
+- no PM config
+- console virtio serial
+- no soundcard
+- reduce memory
+
+None of it makes it work, but the files are nearly identical now
+
+That left only the actual disk+iso (Fedora vs. the Ubuntu cloudimg-based qcow) and the fact that the Boxes VM used userspace networking. Still the issue remained.
+
+But I realized there is one more difference: the Boxes VM runs in user context, while mine is a system-level VM (qemu:///system) running GL essentially headless until one connects to the local spice port.
+The gnome-boxes VM, by contrast, has the UI up immediately, connecting to it as soon as it is available.
+
+So I defined the XML of the gnome-boxes VM in my qemu:///system libvirt context (I copied the files to /var/lib/libvirt/images and adapted the paths).
+As expected, this makes it work, which is at least some lead to follow.
+
+I can make the viewers (virt-viewer / virt-manager) crash when attaching to it semi-remotely - but that might be a broken setup for a local only spice definition.
+
+When attaching viewers locally it works just fine.
+
+In none of those cases does qemu crash, so it clearly isn't the same. Both fail with some glib errors, which makes sense since I try to use local-only features remotely (through ssh).
+
+
+So to summarize:
+- crash with gl enabled
+- only triggers if run in user context
+- gl works in system context (local viewers can attach and it works)
+
+I'm out of obvious "change the config to check what it is" options.
+But since it is at least reproducible I'll focus on the qemu backtrace itself next ...
+
+Stack trace with slightly more info, as all debug symbols and sources are installed here.
+
+--- stack trace ---
+#0  0x00007f2325ae00bf in __pthread_setaffinity_new (th=<optimized out>, cpusetsize=cpusetsize@entry=128, cpuset=cpuset@entry=0x7f2321fe5680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+        __arg2 = 128
+        _a3 = 139788870899328
+        _a1 = 17325
+        resultvar = <optimized out>
+        __arg3 = 139788870899328
+        __arg1 = 17325
+        _a2 = 128
+        pd = <optimized out>
+        res = <optimized out>
+#1  0x00007f23227abd83 in util_queue_thread_func (input=input@entry=0x55a59a695bd0) at ../src/util/u_queue.c:252
+        cpuset = {__bits = {18446744073709551615 <repeats 16 times>}}
+        queue = 0x55a59a8952d0
+        thread_index = 0
+        __PRETTY_FUNCTION__ = "util_queue_thread_func"
+#2  0x00007f23227ab6e7 in impl_thrd_routine (p=<optimized out>) at ../src/../include/c11/threads_posix.h:87
+        pack = {func = 0x7f23227aba70 <util_queue_thread_func>, arg = 0x55a59a695bd0}
+#3  0x00007f2325ad5164 in start_thread (arg=<optimized out>) at pthread_create.c:486
+        ret = <optimized out>
+        pd = <optimized out>
+        now = <optimized out>
+        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {139788870903552, 9195723382052266688, 140723610455422, 140723610455423, 0, 139788870899776, -9089523756422225216, -9089514281776799040}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
+        not_first_call = <optimized out>
+#4  0x00007f23259fddef in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
+No locals.
+--- source code stack trace ---
+#0  0x00007f2325ae00bf in __pthread_setaffinity_new (th=<optimized out>, cpusetsize=cpusetsize@entry=128, cpuset=cpuset@entry=0x7f2321fe5680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+  [Error: pthread_setaffinity.c was not found in source tree]
+#1  0x00007f23227abd83 in util_queue_thread_func (input=input@entry=0x55a59a695bd0) at ../src/util/u_queue.c:252
+  [Error: u_queue.c was not found in source tree]
+#2  0x00007f23227ab6e7 in impl_thrd_routine (p=<optimized out>) at ../src/../include/c11/threads_posix.h:87
+  [Error: threads_posix.h was not found in source tree]
+#3  0x00007f2325ad5164 in start_thread (arg=<optimized out>) at pthread_create.c:486
+  [Error: pthread_create.c was not found in source tree]
+#4  0x00007f23259fddef in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
+  [Error: clone.S was not found in source tree]
+
+Eventually it is a "Program terminated with signal SIGSYS, Bad system call".
+So we need to find what is bad about it.
+
+
+
+(gdb) info threads
+  Id   Target Id                         Frame 
+* 1    Thread 0x7f2321fe6700 (LWP 17325) 0x00007f2325ae00bf in __pthread_setaffinity_new (th=<optimized out>, cpusetsize=cpusetsize@entry=128, cpuset=cpus
+    et@entry=0x7f2321fe5680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+  2    Thread 0x7f2323ad3500 (LWP 17322) 0x00007f2326fe0fb7 in dri_bind_extensions (dri=dri@entry=0x55a59a7583e0, matches=matches@entry=0x7f2326fec34
+    0 <dri_core_extensions>, extensions=<optimized out>) at ../src/gbm/backends/dri/gbm_dri.c:286
+  3    Thread 0x7f2323acf700 (LWP 17323) syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
+
+A discussion with the kernel team pointed to seccomp at first:
+...
+<apw> grep it appears that seccomp is the only thing which triggers that signal
+
+The stack in the breaking cases uses this by default
+-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
+
+resourcecontrol is defined as:
+"Disable process affinity and schedular priority"
+
+Interestingly, that is the global default; the qemu:///system qemu also runs with the same.
+I'd assume that:
+  libgl1-mesa-dri:amd64: /usr/lib/x86_64-linux-gnu/dri/i965_dri.so
+behaves differently depending if it is on a local UI session or not.
+And it gets punished as soon as it tries to set-affinity which it might only do in that case.
+
+Implemented by
+- https://git.qemu.org/?p=qemu.git;a=commit;h=24f8cdc5722476e12d8e39d71f66311b4fa971c1
+Similar issue being fixed last year
+- https://git.qemu.org/?p=qemu.git;a=commit;h=056de1e894155fbb99e7b43c1c4382d4920cf437
+
+Libvirt has no means of fine-grained control over it (yet), only switching the whole sandboxing feature on/off.
+
+That matches what we see - it fails on init when spawning threads - most likely that is where it sets the affinity.
+
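+For illustration, a minimal reproduction sketch of this failure mode, assuming libseccomp is available (compile with -lseccomp -pthread). It installs the same class of rule that -sandbox ...,resourcecontrol=deny installs, then triggers it the way Mesa does:
+
+```
+#define _GNU_SOURCE
+#include <pthread.h>
+#include <sched.h>
+#include <seccomp.h>
+#include <stdio.h>
+
+int main(void)
+{
+    /* Kill the process on sched_setaffinity, allow everything else. */
+    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
+    seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(sched_setaffinity), 0);
+    seccomp_load(ctx);
+
+    cpu_set_t set;
+    CPU_ZERO(&set);
+    CPU_SET(0, &set);
+
+    /* The filter delivers SIGSYS ("Bad system call") right here. */
+    int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
+    printf("not reached: rc=%d\n", rc);
+    return 0;
+}
+```
+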
+From Ubuntu's POV this is rather new as the code in Mesa came in with the fresh 18.3.0_rc4-1
+It is possible that no one else saw it so far ...
+It is in mesa upstream since
+  https://github.com/mesa3d/mesa/commit/d877451b48a59ab0f9a4210fc736f51da5851c9a
+
+But opinions might differ ...
+I'll subscribe upstream qemu to this bug and then post a summary here.
+This will mirror the bug updates to the mailing list; if there is no harsh feedback I'll propose a patch to remove sched_setaffinity from the list of blocked calls.
+
+Summary:
+- qemu crash when using GL
+- "sched_setaffinity" is the syscall that is seccomp blocked and kills qemu
+- the mesa i915 drivers (and your radeon as well) will do that call
+- it is blocked by the current qemu -sandbox on,...,resourcecontrol=deny which is libvirt's default
+- Implemented by qemu 24f8cdc572
+- Similar issue being fixed last year qemu 056de1e894
+- new code in mesa 18.3 since mesa d877451b48
+
+I think we just need to allow sched_setaffinity with these new mesa drivers in the wild.
+The alternative of detecting GL usage in libvirt and only then allowing resourcecontrol IMHO seems over-engineered (it needs internals to actually pass around which seccomp subsets to switch) and not better (more syscalls will then be unblocked, as the seccomp interface isn't fine-grained).
+
+OTOH the man page literally says "... Disable process affinity ...", so I'm not sure we can just remove it. Maybe split resourcecontrol in two, put *affinity* in the new one and make the default not blocked - so that upper layers like libvirt will work until one explicitly states ... -sandbox on,affinity=on, which no one wanting to use GL would do. That again seems too much.
+Well, the discussion will happen either here on ML/bug or later when submitting an RFC for it.
+
+IMHO that mesa change is not valid. It is setting its threads' affinity to run on all CPUs, which is definitely *NOT* something we want to be allowed. Management applications want to control which CPUs QEMU runs on, and as such Mesa should honour the CPU placement that the QEMU process has.
+
+This is a great example of why QEMU wants to use seccomp to block affinity changes to prevent something silently trying to use more CPUs than are assigned to this QEMU.
+
+(I reported that issue a few days ago too: https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg06066.html)
+
+Perhaps we can teach mesa to not change CPU affinity (some option, or environment variable, or seccomp check).
+
+Daniel, when virgl/mesa will be running in a separate process (thanks to vhost-user-gpu), I suppose the rendering process will be free to change the CPU affinity. Does it make a difference, in this case, whether the mesa thread is in qemu or in a separate process?
+
+As & when libvirt & QEMU supports the external vhost processes for this I expect it will still restrict the CPU affinity and apply seccomp filters that likely to be as strict as they are today at minimum.
+
+I did wonder if we could set the action for some syscalls to be "errno" instead of "kill process", but I worry that could then result in silent mis-behaviour as processes fail to check the return value, blindly assuming the call cannot fail.
+
+We should probably talk with mesa developers about providing a config option to prevent this affinity change. An env variable is workable if there's no other mechanism they can expose.
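+
+Something along these lines would do (a sketch; the variable name MESA_NO_THREAD_AFFINITY is hypothetical, not an existing mesa option):
+
+  cpu_set_t cpuset;   /* filled with the desired CPUs elsewhere */
+  if (!getenv("MESA_NO_THREAD_AFFINITY")) {   /* hypothetical knob */
+     pthread_setaffinity_np(thread, sizeof(cpuset), &cpuset);
+  }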
+
+See also mesa bug:
+https://bugs.freedesktop.org/show_bug.cgi?id=109695
+
+Thanks Daniel and MarcAndre for chiming in here.
+After thinking more about it I agree with Daniel that mesa should actually honor and stick with its assigned affinity.
+
+For documentation purpose: the solution proposed on the ML is at https://lists.freedesktop.org/archives/mesa-dev/2019-February/215926.html
+I also added a bug watch to the freedesktop bug as a task.
+
+@Ubuntu-Desktop Team (now subscribed) - is there a chance we can revert [1] in mesa for now, before it is released with Disco? That would be needed until an accepted solution throughout the libvirt/qemu/mesa stack is found.
+Otherwise using GL-backed qemu graphics will fail as outlined in the bug.
+
+Once such a cross-package solution to the problem is found we can (if needed at all) SRU back the set of changes to all components required.
+
+[1]: https://github.com/mesa3d/mesa/commit/d877451b48a59ab0f9a4210fc736f51da5851c9a
+
+Adding Timo, who maintains mesa.
+
+
+Since upgrading Mesa from 18.2 to 18.3, launching a QEMU virtual machine with Spice OpenGL enabled (for virgl) causes QEMU to crash with SIGSYS inside the radeonsi driver. The reason for this is that the QEMU sandbox option 'resourcecontrol=deny' disables the sched_setaffinity syscall called in pthread_setaffinity_np, which is now used by the radeonsi driver.
+
+A simple way to reproduce this problem is:
+$ gdb --batch --ex run --ex bt --args qemu-system-x86_64 -spice gl=on -sandbox on,resourcecontrol=deny
+[Thread debugging using libthread_db enabled]
+Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
+[New Thread 0x7ffff45aa700 (LWP 23432)]
+[New Thread 0x7ffff08e5700 (LWP 23433)]
+[New Thread 0x7fffe3fff700 (LWP 23434)]
+[New Thread 0x7fffe37fe700 (LWP 23435)]
+
+Thread 4 "qemu-system-x86" received signal SIGSYS, Bad system call.
+[Switching to Thread 0x7fffe3fff700 (LWP 23434)]
+0x00007ffff68cc9cf in __pthread_setaffinity_new (th=<optimized out>, cpusetsize=cpusetsize@entry=128, cpuset=cpuset@entry=0x7fffe3ffe680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+34	../sysdeps/unix/sysv/linux/pthread_setaffinity.c: No such file or directory.
+#0  0x00007ffff68cc9cf in __pthread_setaffinity_new (th=<optimized out>, cpusetsize=cpusetsize@entry=128, cpuset=cpuset@entry=0x7fffe3ffe680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34
+#1  0x00007ffff12ba2b3 in util_queue_thread_func (input=input@entry=0x55555640b1f0) at ../src/util/u_queue.c:252
+#2  0x00007ffff12b9c17 in impl_thrd_routine (p=<optimized out>) at ../src/../include/c11/threads_posix.h:87
+#3  0x00007ffff68c1fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
+#4  0x00007ffff67f280f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
+
+
+The problematic code at src/util/u_queue.c:252 was added in the following commit:
+commit d877451b48a59ab0f9a4210fc736f51da5851c9a
+Author: Marek Olšák <email address hidden>
+Date:   Mon Oct 1 15:51:06 2018 -0400
+
+    util/u_queue: add UTIL_QUEUE_INIT_SET_FULL_THREAD_AFFINITY
+    
+    Initial version discussed with Rob Clark under a different patch name.
+    This approach leaves his driver unaffected.
+
+
+Since setting the thread affinity seems non-essential here, the failing syscall should be handled gracefully, for example by setting a signal handler to ignore the SIGSYS signal.
+
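+A sketch of that suggestion (with a caveat: a handler can only run if the seccomp filter uses the trap action; QEMU's filter at the time used the kill action, which terminates the process before any handler is invoked):
+
+  #include <signal.h>
+
+  static void sigsys_noop(int sig) { (void)sig; }
+
+  /* during driver init */
+  struct sigaction sa = { .sa_handler = sigsys_noop };
+  sigaction(SIGSYS, &sa, NULL);
+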
+Mesa needs a way to query that it can't set thread affinity.
+
+To check for the availability of the syscall, one can try it in a child process and see if the child is terminated by a signal, e.g. like this:
+
+#include <stdbool.h>
+#include <unistd.h>
+#include <sys/resource.h>
+#include <sys/syscall.h>
+#include <sys/wait.h>
+
+static bool
+can_set_affinity()
+{
+   pid_t pid = fork();
+   int status = 0;
+   if (!pid) {
+      /* Disable coredumps, because a SIGSYS crash is expected. */
+      struct rlimit limit = { 0 };
+      limit.rlim_cur = 1;
+      limit.rlim_max = 1;
+      setrlimit(RLIMIT_CORE, &limit);
+      /* Test the syscall in the child process. */
+      syscall(SYS_sched_setaffinity, 0, 0, 0);
+      _exit(0);
+   } else if (pid < 0) {
+      return false;
+   }
+   if (waitpid(pid, &status, 0) < 0) {
+      return false;
+   }
+   if (WIFSIGNALED(status)) {
+      /* The child process was terminated by a signal,
+       * thus the syscall cannot be used.
+       */
+      return false;
+   }
+   return true;
+}
+
+(In reply to Ahzo from comment #2)
+> To check for the availability of the syscall, one can try it in a child
+> process and see if the child is terminated by a signal, e.g. like this:
+
+Afraid not, QEMU's seccomp filter blocks use of fork() too :-)
+
+(In reply to Ahzo from comment #0)
+> The problematic code at src/util/u_queue.c:252 was added in the following
+> commit:
+> commit d877451b48a59ab0f9a4210fc736f51da5851c9a
+> Author: Marek Olšák <email address hidden>
+> Date:   Mon Oct 1 15:51:06 2018 -0400
+> 
+>     util/u_queue: add UTIL_QUEUE_INIT_SET_FULL_THREAD_AFFINITY
+>     
+>     Initial version discussed with Rob Clark under a different patch name.
+>     This approach leaves his driver unaffected.
+> 
+> 
+> Since setting the thread affinity seems non-essential here, the failing
+> syscall should be handled gracefully, for example by setting a signal
+> handler to ignore the SIGSYS signal.
+
+I'm curious what motivated this change to start with ?  Even if QEMU was not enforcing seccomp filters, I think I'd consider it a bug for mesa to be setting its process affinity in this way.  The mgmt application or sysadmin has decided that the process must have a certain affinity, based on how it/they want the host CPUs utilized. Why is mesa wanting to override this administrative policy decision to restrict CPU usage ?
+
+(In reply to Daniel P. Berrange from comment #4)
+> 
+> I'm curious what motivated this change to start with ?  Even if QEMU was not
+> enforcing seccomp filters, I think I'd consider it a bug for mesa to be
+> setting its process affinity in this way.  The mgmt application or sysadmin
+> has decided that the process must have a certain affinity, based on how
+> it/they want the host CPUs utilized. Why is mesa wanting to override this
+> administrative policy decision to restrict CPU usage ?
+
+To improve performance on modern multi-core NUMA architectures.
+
+Sent a quick RFC for an env variable workaround on the ML "[PATCH] RFC: Workaround for pthread_setaffinity_np() seccomp filtering".
+
+(In reply to Daniel P. Berrange from comment #4)
+> I'm curious what motivated this change to start with ?  Even if QEMU was not
+> enforcing seccomp filters, I think I'd consider it a bug for mesa to be
+> setting its process affinity in this way.  The mgmt application or sysadmin
+> has decided that the process must have a certain affinity, based on how
+> it/they want the host CPUs utilized. Why is mesa wanting to override this
+> administrative policy decision to restrict CPU usage ?
+
+The correct solution is to fix pthread_setaffinity such that it returns an error code instead of crashing.
+
+An even better solution would be to have a virtual thread affinity that only the application can see and change, which should be silently masked by administrative policies not visible to the application.
+
+(In reply to Marek Olšák from comment #7)
+> An even better solution would be to have a virtual thread affinity that only
+> the application can see and change, which should be silently masked by
+> administrative policies not visible to the application.
+
+Mesa doesn't really need explicit thread affinity at all. All it wants is that certain sets of threads run on the same CPU module; it doesn't care which particular CPU module that is. What's really needed is an API to express this affinity between threads, instead of to specific CPU cores.
+
+(In reply to Daniel P. Berrange from comment #3)
+> (In reply to Ahzo from comment #2)
+> > To check for the availability of the syscall, one can try it in a child
+> > process and see if the child is terminated by a signal, e.g. like this:
+> 
+> Afraid not, QEMU's seccomp filter blocks use of fork() too :-)
+
+Maybe it should, at least when using the spawn=deny option, but currently it doesn't. That option only blocks the fork, vfork and execve syscalls, but glibc's fork() function uses the clone syscall, and thus continues to work.
+However, that behavior might be different when using other C library implementations, so it wouldn't be correct to rely on this.
+One could use clone() instead of fork(), but future versions of qemu might block the clone syscall, as well.
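+
+For illustration, a fork()-equivalent via the raw clone syscall (a sketch; the argument order shown is the x86_64 one and is architecture-specific):
+
+  #include <signal.h>
+  #include <sys/syscall.h>
+  #include <unistd.h>
+
+  /* flags = SIGCHLD and a NULL child stack make clone behave like fork() */
+  pid_t pid = syscall(SYS_clone, SIGCHLD, NULL, NULL, NULL, NULL);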
+
+Unfortunately, I'm not aware of a proper solution for this bug short of adding a new API to the kernel.
+
+You can test 19.0~rc6 with this reverted on a ppa:
+
+ppa:canonical-x/x-staging
+
+should be built in 30min
+
+Hi Timo,
+I tried to test with the mesa from ppa:canonical-x/x-staging
+But there is a dependency issue in that PPA - I can't install all packages from there.
+It seems most of the X* packages will need a transition for the new mesa and those are not in this ppa right now.
+
+Installing all that I can from the PPA doesn't resolve the issue; is there something more you need to upload to the PPA - or are there other things I'd need to do to install all of mesa?
+
+This is the current mix of rc5/6 it gave me :-/
+libegl-mesa0:amd64         19.0.0~rc5-1ubuntu0.1
+libegl1-mesa:amd64         19.0.0~rc6-1ubuntu0.1
+libgl1-mesa-dri:amd64      19.0.0~rc5-1ubuntu0.1
+libgl1-mesa-glx:amd64      19.0.0~rc6-1ubuntu0.1
+libglapi-mesa:amd64        19.0.0~rc5-1ubuntu0.1
+libglx-mesa0:amd64         19.0.0~rc5-1ubuntu0.1
+libwayland-egl1-mesa:amd64 19.0.0~rc6-1ubuntu0.1
+mesa-va-drivers:amd64      19.0.0~rc5-1ubuntu0.1
+mesa-vdpau-drivers:amd64   19.0.0~rc5-1ubuntu0.1
+
+I don't have that issue in a chroot, so you should at least tell me why it would refuse to upgrade them all... apt should show an error
+
+The PPA was built against -proposed so I had to enable that to install all libs.
+That done, the 19.0.0~rc6-1ubuntu0.1 with the set-affinity change reverted works quite nicely.
+
+It would be great to get that into Ubuntu 19.04 until the involved upstreams have agreed how to proceed with it; we can then sort out what to do in which package. That, after all, might be after cutoff and land in 19.10 then.
+
+Thanks Timo, let me know if you need another verification on this at any point to drive it into 19.04.
+
+We're getting down to just a few bugs blocking 19.0, so I'm pinging those bugs to see what the progress is.
+
+I'm removing this from the 19.0 blocking tracker. Generally we don't add bugs to block a release if they were present in the previous release; additionally, there doesn't seem to be any consensus on a solution at this moment. If there is a fix implemented I'd be happy to pull that into a later 19.0 release.
+
+This bug was fixed in the package mesa - 19.0.0-1ubuntu1
+
+---------------
+mesa (19.0.0-1ubuntu1) disco; urgency=medium
+
+  * Merge from Debian. (LP: #1818516)
+  * revert-set-full-thread-affinity.diff: Fix qemu crash. (LP: #1815889)
+
+ -- Timo Aaltonen <email address hidden>  Thu, 14 Mar 2019 18:48:18 +0200
+
+(In reply to Michel Dänzer from comment #8)
+> Mesa doesn't really need explicit thread affinity at all. All it wants is
+> that certain sets of threads run on the same CPU module; it doesn't care
+> which particular CPU module that is. What's really needed is an API to
+> express this affinity between threads, instead of to specific CPU cores.
+
+I think the thread affinity API is a correct way to optimize for CPU cache topologies. pthread is a basic user API. Security policies shouldn't disallow pthread functions.
+
+FYI the QEMU change merged in the following pull request changed to return an EPERM errno for the thread affinity syscalls:
+
+commit 12f067cc14b90aef60b2b7d03e1df74cc50a0459
+Merge: 84bdc58c06 035121d23a
+Author: Peter Maydell <email address hidden>
+Date:   Thu Mar 28 12:04:52 2019 +0000
+
+    Merge remote-tracking branch 'remotes/otubo/tags/pull-seccomp-20190327' into staging
+    
+    pull-seccomp-20190327
+    
+    # gpg: Signature made Wed 27 Mar 2019 12:12:39 GMT
+    # gpg:                using RSA key DF32E7C0F0FFF9A2
+    # gpg: Good signature from "Eduardo Otubo (Senior Software Engineer) <email address hidden>" [full]
+    # Primary key fingerprint: D67E 1B50 9374 86B4 0723  DBAB DF32 E7C0 F0FF F9A2
+    
+    * remotes/otubo/tags/pull-seccomp-20190327:
+      seccomp: report more useful errors from seccomp
+      seccomp: don't kill process for resource control syscalls
+    
+    Signed-off-by: Peter Maydell <email address hidden>
+
+IOW, mesa's usage of these syscalls will still be blocked, but it will no longer kill the process.
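+
+In libseccomp terms the change amounts to something like this sketch (not the literal QEMU patch):
+
+  /* before: SCMP_ACT_KILL - the process dies with SIGSYS  */
+  /* after:  the syscall merely fails with errno = EPERM   */
+  seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(sched_setaffinity), 0);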
+
+Thank you Daniel,
+we will most likely keep Disco as-is for now and merge this in 19.10, where mesa can then drop the revert. I tagged it for 19.10 to be revisited.
+
+This problem was solved by qemu [1], so this mesa bug can be closed.
+
+[1] https://git.qemu.org/git/qemu.git/?a=commitdiff;h=9a1565a03b79d80b236bc7cc2dbce52a2ef3a1b8
+
+Reopening/assigning to Timo for eoan since there is a patch which can be dropped once qemu is fixed.
+
+I believe this was fixed by qemu 4.0 in eoan.
+
+This bug was fixed in the package mesa - 19.2.4-1ubuntu1
+
+---------------
+mesa (19.2.4-1ubuntu1) focal; urgency=medium
+
+  * Merge from Debian.
+  * revert-set-full-thread-affinity.diff: Dropped, qemu is fixed now in
+    eoan and up. (LP: #1815889)
+
+ -- Timo Aaltonen <email address hidden>  Wed, 20 Nov 2019 20:17:00 +0200
+
diff --git a/results/classifier/zero-shot/108/performance/1818207 b/results/classifier/zero-shot/108/performance/1818207
new file mode 100644
index 000000000..b2aac2a24
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1818207
@@ -0,0 +1,157 @@
+performance: 0.970
+PID: 0.968
+other: 0.965
+device: 0.963
+KVM: 0.960
+debug: 0.956
+files: 0.937
+semantic: 0.930
+graphic: 0.919
+permissions: 0.913
+vnc: 0.875
+socket: 0.811
+boot: 0.752
+network: 0.733
+
+[aarch64] VM status remains "running" after it's suspended
+
+The issue is observed on aarch64 (I didn't check x86) with latest upstream QEMU bits. 
+
+Steps to reproduce:
+
+1) start guest
+
+2) suspend guest with this command:
+
+# echo mem > /sys/power/state
+
+  Check console messages, which should indicate that the guest has been suspended.
+
+3) check guest status through HMP command "info status":
+
+  (qemu) info status
+   info status
+   VM status: running
+
+Note it's "running", which is incorrect. 
+
+QEMU version:
+
+# qemu-system-aarch64 --version
+QEMU emulator version 3.1.50 (v3.1.0-2203-g9403bcc)
+Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
+
+The issue prevents the user from resuming a suspended guest through the "system_wakeup" HMP command, because QEMU thinks the guest is in the running state and does nothing.
+
+I think the issue occurs because qemu_system_wakeup_request() doesn't get called. It seems the root cause is in ACPI-related code.
+
+In order for the guest kernel to expose ACPI S3 suspend to a privileged
+guest user, the guest kernel first checks if the platform (hardware and
+firmware) support ACPI S3. This in turn depends on whether the DSDT
+offers a package called _S3.
+
+For example, on the "pc" machine type, S3 can be disabled on the QEMU
+command line with "-global PIIX4_PM.disable_s3=1". On the "q35" machine
+type, the same is achievable with "-global ICH9-LPC.disable_s3=1". One
+thing both of these switches do is that they hide the _S3 package in the
+DSDT from the guest OS (they prevent the generation of the _S3 package
+in the DSDT).
+
+On the "virt" machine type, the "_S3" package is not generated at all,
+in the DSDT. For this reason, ACPI S3 is never valid for the guest
+kernel to expose.
+
+Now let's look at the kernel docs:
+
+https://www.kernel.org/doc/html/v4.18/admin-guide/pm/sleep-states.html#suspend-to-ram
+
+> On ACPI-based systems [Suspend-to-RAM] is mapped to the S3 system
+> state defined by ACPI.
+
+However, when you write "mem" to "/sys/power/state", the guest kernel is
+not required to Suspend-to-RAM (and hence enter ACPI S3):
+
+https://www.kernel.org/doc/html/v4.18/admin-guide/pm/sleep-states.html#basic-sysfs-interfaces-for-system-suspend-and-hibernation
+
+> The string "mem" is interpreted in accordance with the contents of the
+> mem_sleep file described below
+
+> The strings that may be present in [mem_sleep] are "s2idle", "shallow"
+> and "deep". The string "s2idle" always represents suspend-to-idle and,
+> by convention, "shallow" and "deep" represent standby and
+> suspend-to-RAM, respectively.
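+
+So on a platform that actually offers "deep", Suspend-to-RAM could be requested explicitly (not possible here, since only s2idle is listed below):
+
+# echo deep > /sys/power/mem_sleep
+# echo mem > /sys/power/state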
+
+This is what I get:
+
+> # uname -r
+> 4.14.0-115.2.2.el7a.aarch64
+>
+> # cat /sys/power/state
+> freeze mem disk
+>
+> # cat /sys/power/mem_sleep
+> [s2idle]
+
+Therefore, when you write "mem" to "/sys/power/state", the guest kernel
+picks "s2idle" (suspend-to-idle,
+<https://www.kernel.org/doc/html/v4.18/admin-guide/pm/sleep-states.html#s2idle>):
+
+> This is a generic, pure software, light-weight variant of system
+> suspend (also referred to as S2I or S2Idle). [...] by freezing user
+> space, suspending the timekeeping and putting all I/O devices into
+> low-power states [...] This state can be used on platforms without
+> support for standby or suspend-to-RAM [...]
+
+And that's why the "info status" HMP command reports "VM status:
+running".
+
+(Side comment: the ArmVirtQemu firmware from edk2 doesn't support ACPI
+S3 either.)
+
+
+Hi Laszlo, 
+
+Thanks much for your detailed explanation. It has been very helpful.
+
+> In order for the guest kernel to expose ACPI S3 suspend to a privileged
+> guest user, the guest kernel first checks if the platform (hardware and
+> firmware) support ACPI S3. This in turn depends on whether the DSDT
+> offers a package called _S3.
+>
+> ...
+> 
+> On the "virt" machine type, the "_S3" package is not generated at all,
+> in the DSDT. For this reason, ACPI S3 is never valid for the guest
+> kernel to expose.
+
+Now that you said that, I googled about this and found ARM has PSCI,
+which is an ARM-specific PM interface standard and can be used together
+with ACPI or FDT:
+
+http://infocenter.arm.com/help/topic/com.arm.doc.den0022d/Power_State_Coordination_Interface_PDD_v1_1_DEN0022D.pdf
+
+The spec defines SYSTEM_SUSPEND in section 5.19. The command was
+introduced in PSCI 1.0 for a similar purpose as ACPI S3. It's optional.
+
+Then I find the following: 
+
+1) My VM's firmware supports PSCI v1.0. The following were from dmesg:
+
+[    0.000000] psci: probing for conduit method from ACPI.
+[    0.000000] psci: PSCIv1.0 detected in firmware.
+[    0.000000] psci: Using standard PSCI v0.2 function IDs
+[    0.000000] psci: Trusted OS migration not required
+
+2) The PSCI code in QEMU supports version 0.2 only. See target/arm/psci.c.
+
+I don't completely understand how they work together (for example, I think
+PSCI requests should be handled by firmware, so how come QEMU gets a chance
+to handle them), but I guess the issue might be QEMU not supporting PSCI 1.0.
+
+> # cat /sys/power/mem_sleep
+> [s2idle] 
+
+Yes, I observed the same on both my VM and host (my host's firmware supports
+PSCI 1.0 also; not sure why the kernel thinks it doesn't support suspend-to-RAM).
+
+
diff --git a/results/classifier/zero-shot/108/performance/1820 b/results/classifier/zero-shot/108/performance/1820
new file mode 100644
index 000000000..5f5b48fd7
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1820
@@ -0,0 +1,25 @@
+performance: 0.992
+device: 0.972
+graphic: 0.940
+semantic: 0.636
+network: 0.507
+other: 0.483
+permissions: 0.430
+debug: 0.417
+boot: 0.366
+PID: 0.345
+vnc: 0.331
+KVM: 0.303
+socket: 0.129
+files: 0.111
+
+whpx is slower than tcg
+Description of problem:
+I find whpx much slower than tcg, which is rather odd.
+Steps to reproduce:
+1. Enable Hyper-V
+2. run qemu with **-accel whpx,kernel-irqchip=off**
+Additional information:
+my cpu: intel i7 6500u
+memory: 8 GB
+my gpu: intel graphics 520 hd
diff --git a/results/classifier/zero-shot/108/performance/1834496 b/results/classifier/zero-shot/108/performance/1834496
new file mode 100644
index 000000000..cb383bb17
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1834496
@@ -0,0 +1,68 @@
+performance: 0.927
+PID: 0.831
+debug: 0.789
+permissions: 0.776
+device: 0.757
+graphic: 0.755
+network: 0.722
+semantic: 0.696
+other: 0.687
+files: 0.655
+vnc: 0.621
+KVM: 0.602
+socket: 0.580
+boot: 0.558
+
+Regressions on arm target with some GCC tests
+
+Hi,
+
+After trying qemu master:
+commit 474f3938d79ab36b9231c9ad3b5a9314c2aeacde
+Merge: 68d7ff0 14f5d87
+Author: Peter Maydell <email address hidden>
+Date:   Fri Jun 21 15:40:50 2019 +0100
+
+I found several regressions compared to qemu-3.1 when running the GCC testsuite.
+I'm attaching a tarball containing several GCC tests (binaries), needed shared libs, and a short script to run all the tests.
+
+All tests used to pass w/o error (one of them is verbose), but with a recent qemu, all of them make qemu crash:
+
+qemu: uncaught target signal 6 (Aborted) - core dumped
+
+This was noticed with GCC master configured with
+--target arm-none-linux-gnueabi
+--with-mode arm
+--with-cpu cortex-a9
+
+and calling qemu with --cpu cortex-a9 (the script uses "any", this makes no difference).
+
+I have noticed other failures with arm-v8 code, but this is probably the same root cause. Since it's a bit tedious to manually rebuild & extract the testcases, I'd prefer to start with this subset, and I can extract more if needed later.
+
+Thanks
+
+
+
+I bisected a chunk of the errors to:
+
+  commit c6fb8c0cf704c4a1a48c3e99e995ad4c58150dab (refs/bisect/bad)
+  Author: Richard Henderson <email address hidden>
+  Date:   Mon Feb 25 11:42:35 2019 -0800
+
+      tcg/i386: Support INDEX_op_extract2_{i32,i64}
+
+      Signed-off-by: Richard Henderson <email address hidden>
+
+Specifically I think when tcg_gen_deposit_i32 handles the if (ofs + len == 32) case.
+
+
+Fixed by:
+
+Subject: [PATCH for-4.1] tcg: Fix constant folding of INDEX_op_extract2_i32
+Date: Tue,  9 Jul 2019 14:19:00 +0200
+Message-Id: <email address hidden>
+
+
+I confirm this patch fixes the problem I reported. Thanks!
+
+
diff --git a/results/classifier/zero-shot/108/performance/1849 b/results/classifier/zero-shot/108/performance/1849
new file mode 100644
index 000000000..5824cda07
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1849
@@ -0,0 +1,86 @@
+performance: 0.929
+device: 0.928
+graphic: 0.886
+boot: 0.844
+files: 0.784
+PID: 0.758
+permissions: 0.731
+semantic: 0.694
+network: 0.614
+debug: 0.463
+other: 0.441
+socket: 0.329
+vnc: 0.235
+KVM: 0.106
+
+Problems with building riscv Linux using qemu on wsl2
+Description of problem:
+execute:
+
+`qemu-system-riscv64 -M virt -m 256M -nographic -kernel /home/ysc/test/linux-6.1.46/arch/riscv/boot/Image -drive file=rootfs.img,format=raw,id=hd0 -device virtio-blk-device,drive=hd0 -append "root=/dev/vda rw console=ttyS0"`
+
+**appear:**
+
+OpenSBI
+
+   [OpenSBI ASCII-art banner]
+
+Platform Name             : riscv-virtio,qemu
+Platform Features         : medeleg
+Platform HART Count       : 1
+Platform IPI Device       : aclint-mswi
+Platform Timer Device     : aclint-mtimer @ 10000000Hz
+Platform Console Device   : uart8250
+Platform HSM Device       : ---
+Platform Reboot Device    : sifive_test
+Platform Shutdown Device  : sifive_test
+Firmware Base             : 0x80000000
+Firmware Size             : 252 KB
+Runtime SBI Version       : 0.3
+
+Domain0 Name              : root
+Domain0 Boot HART         : 0
+Domain0 HARTs             : 0*
+Domain0 Region00          : 0x0000000002000000-0x000000000200ffff (I)
+Domain0 Region01          : 0x0000000080000000-0x000000008003ffff ()
+Domain0 Region02          : 0x0000000000000000-0xffffffffffffffff (R,W,X)
+Domain0 Next Address      : 0x0000000080200000
+Domain0 Next Arg1         : 0x000000008f000000
+Domain0 Next Mode         : S-mode
+Domain0 SysReset          : yes
+
+Boot HART ID              : 0
+Boot HART Domain          : root
+Boot HART ISA             : rv64imafdcsuh
+Boot HART Features        : scounteren,mcounteren,time
+Boot HART PMP Count       : 16
+Boot HART PMP Granularity : 4
+Boot HART PMP Address Bits: 54
+Boot HART MHPM Count      : 0
+Boot HART MIDELEG         : 0x0000000000001666
+Boot HART MEDELEG         : 0x0000000000f0b509
+
+When I run qemu, it gets stuck here.
+Steps to reproduce:
+1. Build the kernel file using Linux-6.1.46
+2. Use busybox to build the rootfs
+3. Run qemu
diff --git a/results/classifier/zero-shot/108/performance/1853123 b/results/classifier/zero-shot/108/performance/1853123
new file mode 100644
index 000000000..1abec56ce
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1853123
@@ -0,0 +1,64 @@
+performance: 0.931
+KVM: 0.893
+device: 0.877
+network: 0.813
+graphic: 0.785
+files: 0.664
+semantic: 0.654
+debug: 0.632
+PID: 0.605
+other: 0.570
+permissions: 0.467
+boot: 0.388
+socket: 0.364
+vnc: 0.364
+
+Memory synchronization error between kvm and target, e1000(dpdk)
+
+Hi folks.
+
+I use linux with dpdk drivers on the target system, and an emulated e1000 device with a tap interface on the host. I use kvm for acceleration.
+Version qemu 4.0.94 and master (Nov 12 10:14:33 2019)
+Version dpdk stable-17.11.4
+Version linux host 4.15.0-66-generic (ubuntu 18.04)
+
+I type command "ping <target ip> -f" and wait about 1-2 minutes. Network subsystem freezes.
+
+To receive an Ethernet packet from the host system (tap interface) into the target system, the e1000 uses a ring buffer.
+
+The e1000 writes the body of the Ethernet packet, sets the E1000_RXD_STAT_DD flag and moves RDH (Ring Device Head).
+(file hw/net/e1000.c function e1000_receive_iov() )
+
+The dpdk driver polls the E1000_RXD_STAT_DD flags (ignoring RDH); if the flag is set, it reads the buffer, unsets the E1000_RXD_STAT_DD flag and moves RDT (Ring Device Tail).
+(source drivers/net/e1000/em_rxtx.c function eth_em_recv_scattered_pkts() )
+
+I see that the driver unsets E1000_RXD_STAT_DD (rxdp->status = 0;), but sometimes rxdp->status remains equal to 7. On the next cycle this buffer is read again, RDT is moved too far, RDH becomes equal to RDT and the network freezes.
+
+If I insert some delay after unsetting E1000_RXD_STAT_DD and repeatedly unset E1000_RXD_STAT_DD (if rxdp->status == 7), then everything works fine.
+If I check E1000_RXD_STAT_DD without the delay, rxdp->status is always valid.
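+
+A condensed sketch of the handshake described above (schematic pseudo-C of my own; names simplified, not the actual qemu/dpdk sources):
+
+  /* device side (qemu, producer) - cf. hw/net/e1000.c e1000_receive_iov() */
+  copy_packet(desc->buffer, pkt);
+  desc->status |= E1000_RXD_STAT_DD;   /* mark descriptor done */
+  RDH = (RDH + 1) % ring_size;         /* advance head */
+
+  /* driver side (dpdk, consumer) - cf. em_rxtx.c eth_em_recv_scattered_pkts() */
+  while (desc->status & E1000_RXD_STAT_DD) {
+      consume(desc->buffer);
+      desc->status = 0;                /* hand the descriptor back */
+      RDT = (RDT + 1) % ring_size;     /* advance tail */
+  }
+  /* the reported bug: under kvm the driver's "desc->status = 0" store
+   * sometimes appears lost against the device's next status write, so
+   * the stale DD bit makes the driver consume the same descriptor again */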
+
+This only appears on kvm. If I use tcg all works fine.
+
+I tried setting a memory watchpoint in qemu (for tcg) and saw that for each packet the set/unset cycle of STAT_DD happens exactly once.
+
+I tried setting a memory watchpoint in qemu (for kvm) and saw that rxdp->status changes to 0 (unset) only once, but it changes immediately before the flag is set.
+
+
+Please help me with advice on how to catch and fix this error.
+Theoretically, it would help me to trace the memory accesses that write E1000_RXD_STAT_DD, RDH and RDT, both from the target and the host system. But I have no idea how this can be done.
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1857 b/results/classifier/zero-shot/108/performance/1857
new file mode 100644
index 000000000..ffa6f04e0
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1857
@@ -0,0 +1,67 @@
+performance: 0.996
+device: 0.938
+socket: 0.916
+graphic: 0.901
+PID: 0.895
+debug: 0.861
+semantic: 0.842
+permissions: 0.841
+vnc: 0.817
+network: 0.817
+files: 0.779
+KVM: 0.713
+boot: 0.639
+other: 0.489
+
+Major qemu-aarch64 performance slowdown since commit 59b6b42cd3
+Description of problem:
+I have observed a major performance slowdown between qemu 8.0.0 and 8.1.0:
+
+
+qemu 8.0.0: 0.8s
+
+qemu 8.1.0: 6.8s
+
+
+After bisecting the commits between 8.0.0 and 8.1.0, the offending commit is 59b6b42cd3:
+
+
+commit 59b6b42cd3446862567637f3a7ab31d69c9bef51
+Author: Richard Henderson <richard.henderson@linaro.org>
+Date:   Tue Jun 6 10:19:39 2023 +0100
+
+    target/arm: Enable FEAT_LSE2 for -cpu max
+
+    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
+    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
+    Message-id: 20230530191438.411344-21-richard.henderson@linaro.org
+    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
+
+
+Reverting the commit in latest master fixes the problem:
+
+qemu 8.0.0: 0.8s
+
+qemu 8.1.0: 6.8s
+
+qemu master + revert 59b6b42cd3: 0.8s
+
+Alternatively, specify `-cpu cortex-a35` to disable LSE2:
+
+`time ./qemu-aarch64 -cpu cortex-a35`: 0.8s
+
+`time ./qemu-aarch64`: 6.77s
+
+The slowdown is also observed when running qemu-aarch64 on aarch64 machine:
+
+`time ./qemu-aarch64 /usr/bin/node -e 1`: 2.91s
+
+`time ./qemu-aarch64 -cpu cortex-a35 /usr/bin/node -e 1`: 1.77s
+
+The slowdown on x86_64 machine is small: 362ms -> 378ms.
+Steps to reproduce:
+1. Run `time ./qemu-aarch64 node-aarch64 -e 1` (node-aarch64 is NodeJS v16 built for AArch64)
+2. Using qemu master, the output says `6.77s`
+3. Using qemu master with commit 59b6b42cd3 reverted, the output says `0.8s`
+Additional information:
+
diff --git a/results/classifier/zero-shot/108/performance/1859021 b/results/classifier/zero-shot/108/performance/1859021
new file mode 100644
index 000000000..fa453b521
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1859021
@@ -0,0 +1,284 @@
+performance: 0.966
+debug: 0.965
+graphic: 0.964
+permissions: 0.962
+device: 0.960
+semantic: 0.953
+PID: 0.951
+network: 0.950
+other: 0.948
+socket: 0.947
+boot: 0.944
+files: 0.926
+vnc: 0.921
+KVM: 0.874
+
+qemu-system-aarch64 (tcg):  cval + voff overflow not handled, causes qemu to hang
+
+The Armv8 architecture reference manual states that for any timer set (e.g. CNTP* and CNTV*), the condition for such timer to generate an interrupt (if enabled & unmasked) is:
+
+CVAL <= CNT(P/V)CT
+
+Although this is arguably sloppy coding, I have seen code that therefore assumes it can set CVAL to a very high value (e.g. UINT64_MAX), leave the interrupt enabled in CTL, and never get the interrupt.
+
+On latest master commit as the time of writing, there is an integer overflow in target/arm/helper.c gt_recalc_timer affecting the virtual timer when the interrupt is enabled in CTL:
+
+    /* Next transition is when we hit cval */
+    nexttick = gt->cval + offset;
+
+When this overflow happens, I notice that qemu is no longer responsive and that I have to SIGKILL the process:
+    - qemu takes nearly all the cpu time of the cores it is running on (e.g. 50% cpu usage if running on half the cores) and is completely unresponsive
+    - no guest interrupt (reported via -d int) is generated
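+
+To illustrate the wraparound with concrete numbers (mine, not from a trace): with gt->cval = UINT64_MAX and offset = 1, nexttick = cval + offset wraps to 0, a time that is always in the past, so the timer fires immediately, gt_recalc_timer reschedules it in the past again, and qemu spins.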
+
+Here is the minimal code example to reproduce the issue:
+
+    mov     x0, #1
+    msr     cntvoff_el2, x0
+    mov     x0, #-1
+    msr     cntv_cval_el0, x0
+    mov     x0, #1
+    msr     cntv_ctl_el0, x0 // interrupt generation enabled, not masked; qemu will start to hang here
+
+Options used:
+-nographic -machine virt,virtualization=on,gic-version=2,accel=tcg -cpu cortex-a57
+-smp 4 -m 1024 -kernel whatever.elf -d unimp,guest_errors,int -semihosting-config enable,target=native
+-serial mon:stdio
+
+Version used: 4.2
+
+Bug: https://bugs.launchpad.net/bugs/1859021
+
+Signed-off-by: Alex Bennée <email address hidden>
+---
+ tests/tcg/aarch64/system/vtimer.c         | 48 +++++++++++++++++++++++
+ tests/tcg/aarch64/Makefile.softmmu-target |  4 ++
+ 2 files changed, 52 insertions(+)
+ create mode 100644 tests/tcg/aarch64/system/vtimer.c
+
+diff --git a/tests/tcg/aarch64/system/vtimer.c b/tests/tcg/aarch64/system/vtimer.c
+new file mode 100644
+index 00000000000..42f2f7796c7
+--- /dev/null
++++ b/tests/tcg/aarch64/system/vtimer.c
+@@ -0,0 +1,48 @@
++/*
++ * Simple Virtual Timer Test
++ *
++ * Copyright (c) 2020 Linaro Ltd
++ *
++ * SPDX-License-Identifier: GPL-2.0-or-later
++ */
++
++#include <inttypes.h>
++#include <minilib.h>
++
++/* grabbed from Linux */
++#define __stringify_1(x...) #x
++#define __stringify(x...)   __stringify_1(x)
++
++#define read_sysreg(r) ({                                           \
++            uint64_t __val;                                         \
++            asm volatile("mrs %0, " __stringify(r) : "=r" (__val)); \
++            __val;                                                  \
++})
++
++#define write_sysreg(r, v) do {                     \
++        uint64_t __val = (uint64_t)(v);             \
++        asm volatile("msr " __stringify(r) ", %x0"  \
++                 : : "rZ" (__val));                 \
++} while (0)
++
++int main(void)
++{
++    int i;
++
++    ml_printf("VTimer Test\n");
++
++    write_sysreg(cntvoff_el2, 1);
++    write_sysreg(cntv_cval_el0, -1);
++    write_sysreg(cntv_ctl_el0, 1);
++
++    ml_printf("cntvoff_el2=%lx\n", read_sysreg(cntvoff_el2));
++    ml_printf("cntv_cval_el0=%lx\n", read_sysreg(cntv_cval_el0));
++    ml_printf("cntv_ctl_el0=%lx\n", read_sysreg(cntv_ctl_el0));
++
++    /* Now read cval a few times */
++    for (i = 0; i < 10; i++) {
++        ml_printf("%d: cntv_cval_el0=%lx\n", i, read_sysreg(cntv_cval_el0));
++    }
++
++    return 0;
++}
+diff --git a/tests/tcg/aarch64/Makefile.softmmu-target b/tests/tcg/aarch64/Makefile.softmmu-target
+index 7b4eede3f07..62cdddbb215 100644
+--- a/tests/tcg/aarch64/Makefile.softmmu-target
++++ b/tests/tcg/aarch64/Makefile.softmmu-target
+@@ -62,3 +62,7 @@ run-memory-replay: memory-replay run-memory-record
+ 	  "$< on $(TARGET_NAME)")
+ 
+ EXTRA_TESTS+=memory-record memory-replay
++
++# vtimer test
++QEMU_EL2_MACHINE=-machine virt,virtualization=on,gic-version=2 -cpu cortex-a57 -smp 4
++run-vtimer: QEMU_OPTS=$(QEMU_EL2_MACHINE) $(QEMU_SEMIHOST)  -kernel
+-- 
+2.20.1
+
+
+
+If we don't detect this we will be stuck in a busy loop as we schedule
+a timer for before now which will continually trigger gt_recalc_timer
+even though we haven't reached the state required to trigger the IRQ.
+
+Bug: https://bugs.launchpad.net/bugs/1859021
+Cc: <email address hidden>
+Signed-off-by: Alex Bennée <email address hidden>
+---
+ target/arm/helper.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/target/arm/helper.c b/target/arm/helper.c
+index 19a57a17da5..eb17106f7bd 100644
+--- a/target/arm/helper.c
++++ b/target/arm/helper.c
+@@ -2481,6 +2481,9 @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
+         } else {
+             /* Next transition is when we hit cval */
+             nexttick = gt->cval + offset;
++            if (nexttick < gt->cval) {
++                nexttick = UINT64_MAX;
++            }
+         }
+         /* Note that the desired next expiry time might be beyond the
+          * signed-64-bit range of a QEMUTimer -- in this case we just
+-- 
+2.20.1
+
+
+
+On Fri, 10 Jan 2020 at 16:16, Alex Bennée <email address hidden> wrote:
+>
+> If we don't detect this we will be stuck in a busy loop as we schedule
+> a timer for before now which will continually trigger gt_recalc_timer
+> even though we haven't reached the state required to trigger the IRQ.
+>
+> Bug: https://bugs.launchpad.net/bugs/1859021
+> Cc: <email address hidden>
+> Signed-off-by: Alex Bennée <email address hidden>
+> ---
+>  target/arm/helper.c | 3 +++
+>  1 file changed, 3 insertions(+)
+>
+> diff --git a/target/arm/helper.c b/target/arm/helper.c
+> index 19a57a17da5..eb17106f7bd 100644
+> --- a/target/arm/helper.c
+> +++ b/target/arm/helper.c
+> @@ -2481,6 +2481,9 @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
+>          } else {
+>              /* Next transition is when we hit cval */
+>              nexttick = gt->cval + offset;
+> +            if (nexttick < gt->cval) {
+> +                nexttick = UINT64_MAX;
+> +            }
+>          }
+
+There's something odd going on with this code. Adding a bit of context:
+
+        uint64_t offset = timeridx == GTIMER_VIRT ?
+                                      cpu->env.cp15.cntvoff_el2 : 0;
+        uint64_t count = gt_get_countervalue(&cpu->env);
+        /* Note that this must be unsigned 64 bit arithmetic: */
+        int istatus = count - offset >= gt->cval;
+        [...]
+        if (istatus) {
+            /* Next transition is when count rolls back over to zero */
+            nexttick = UINT64_MAX;
+        } else {
+            /* Next transition is when we hit cval */
+            nexttick = gt->cval + offset;
+        }
+
+I think this patch is correct, in that the 'nexttick' values
+are all absolute and this cval/offset combination implies
+that the next timer interrupt is going to be in a future
+so distant we can't even fit the duration in a uint64_t.
+
+But the other half of the 'if' also looks wrong: that's
+for the case of "timer has fired, how long until the
+wraparound causes the interrupt line to go low again?".
+UINT64_MAX is right for the EL1 case where offset is 0,
+but the offset might actually be set such that the wrap
+around happens fairly soon. We want to calculate the
+tick when (count - offset) hits 0, saturated to
+UINT64_MAX. It's getting late here and I couldn't figure
+out what that expression should be with 15 minutes of
+fiddling around with pen and paper diagrams. I'll have another
+go tomorrow if nobody else gets there first...
+
+thanks
+-- PMM
+
+
+On Thu, 16 Jan 2020 at 18:45, Peter Maydell <email address hidden> wrote:
+> There's something odd going on with this code. Adding a bit of context:
+>
+>         uint64_t offset = timeridx == GTIMER_VIRT ?
+>                                       cpu->env.cp15.cntvoff_el2 : 0;
+>         uint64_t count = gt_get_countervalue(&cpu->env);
+>         /* Note that this must be unsigned 64 bit arithmetic: */
+>         int istatus = count - offset >= gt->cval;
+>         [...]
+>         if (istatus) {
+>             /* Next transition is when count rolls back over to zero */
+>             nexttick = UINT64_MAX;
+>         } else {
+>             /* Next transition is when we hit cval */
+>             nexttick = gt->cval + offset;
+>         }
+>
+> I think this patch is correct, in that the 'nexttick' values
+> are all absolute and this cval/offset combination implies
+> that the next timer interrupt is going to be in a future
+> so distant we can't even fit the duration in a uint64_t.
+>
+> But the other half of the 'if' also looks wrong: that's
+> for the case of "timer has fired, how long until the
+> wraparound causes the interrupt line to go low again?".
+> UINT64_MAX is right for the EL1 case where offset is 0,
+> but the offset might actually be set such that the wrap
+> around happens fairly soon. We want to calculate the
+> tick when (count - offset) hits 0, saturated to
+> UINT64_MAX. It's getting late here and I couldn't figure
+> out what that expression should be with 15 minutes of
+> fiddling around with pen and paper diagrams. I'll have another
+> go tomorrow if nobody else gets there first...
+
+With a fresher brain:
+
+For the if (istatus) branch we want the absolute tick
+when (count - offset) wraps round to 0, saturated to UINT64_MAX.
+I think this is:
+    if (offset <= count) {
+        nexttick = UINT64_MAX;
+    } else {
+        nexttick = offset;
+    }
+
+Should we consider this a separate bugfix to go in its own patch?
+
+thanks
+-- PMM
+
+
+A different approach was posted that basically elides the overflow case by not scheduling timers for IRQ events which have already happened:
+
+  https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg07915.html
+
+
+This is an automated cleanup. This bug report has been moved
+to QEMU's new bug tracker on gitlab.com and thus gets marked
+as 'expired' now. Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/60
+
+
diff --git a/results/classifier/zero-shot/108/performance/1859081 b/results/classifier/zero-shot/108/performance/1859081
new file mode 100644
index 000000000..fec08f501
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1859081
@@ -0,0 +1,57 @@
+performance: 0.927
+graphic: 0.812
+other: 0.794
+semantic: 0.760
+device: 0.742
+network: 0.659
+boot: 0.654
+vnc: 0.632
+files: 0.605
+socket: 0.559
+permissions: 0.512
+PID: 0.480
+debug: 0.404
+KVM: 0.362
+
+Mouse way too fast when Qemu is on a Windows VM with an OS 9 Guest
+
+On a server, I have a Windows 10 VM with Qemu 4.1.0 (latest) from https://qemu.weilnetz.de/w64/ installed.
+There I have a Mac OS 9.2.2 machine.
+Now if I connect to the Windows VM with VNC or RDP or even VMWare console, the Mouse in the Mac OS Guest inside Qemu is waaaay too fast. Even when lowering the mouse speed in the Mac OS mouse setting, one pixel in the Host (Windows 10 VM) still moves the mouse by 10 pixels inside the Qemu machine.
+I tried different resolutions but that does not help.
+Is there any way to fix this or any way how I can provide more information?
+Thanks
+
+What is the QEMU command-line you use?
+Does this problem exist with the usb mouse (-device usb-mouse)?
+Could you try upgrading to the latest version of QEMU and see if the issue is resolved please?
+
+The command line I currently use is:
+
+".\qemu-4.2.0-win64\qemu-system-ppc.exe" -L pc-bios -boot c -M mac99,via=pmu -m 512 ^
+-prom-env "auto-boot?=true" -prom-env "boot-args=-v" -prom-env "vga-ndrv?=true" ^
+-drive file=c:\qemu\MacOS9.2.img,format=raw,media=disk ^
+-drive file=c:\qemu\MacOS9.2.2_Universal_Install.iso,format=raw,media=cdrom ^
+-sdl ^
+-netdev user,id=network01 -device sungem,netdev=network01 ^
+-device VGA,edid=on
+
+I also tried by adding "-device usb-mouse" but it does not make any difference.
+I now tried with 4.2.0 from omledom (yesterday with 4.1.0 from weilnetz).
+There is no difference in 4.1.0 and 4.2.0 with or without the usb-mouse.
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1873341 b/results/classifier/zero-shot/108/performance/1873341
new file mode 100644
index 000000000..12ab0a23d
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1873341
@@ -0,0 +1,35 @@
+performance: 0.923
+device: 0.877
+graphic: 0.862
+KVM: 0.768
+vnc: 0.663
+socket: 0.557
+other: 0.528
+boot: 0.470
+PID: 0.429
+semantic: 0.423
+files: 0.397
+debug: 0.334
+permissions: 0.303
+network: 0.252
+
+Qemu Win98 VM with KVM videocard passthrough: DOS mode video is not working for most games.
+
+Hello,
+I'm using a Win98 machine with KVM videocard passthrough, which is working fine, but when I try Windows 98's DOS mode, there is something wrong with all the videocards I tried, PCI-E/PCI - Nvidia, 3Dfx, Matrox.
+
+ Often the framerate is very slow, like a slideshow:
+Doom 2, Blood, even starting Fdisk - I can see how it slowly renders individual lines - or it's not working at all, freeze / black screen only - Warcraft 2 demo (vesa 640x480).
+
+ There is something wrong with it.
+
+ Qemu 2.11 + 4.2, Linux Mint 19.3. Gigabyte Z170 MB.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/253
+
+
diff --git a/results/classifier/zero-shot/108/performance/1875762 b/results/classifier/zero-shot/108/performance/1875762
new file mode 100644
index 000000000..a2b74f530
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1875762
@@ -0,0 +1,63 @@
+performance: 0.952
+graphic: 0.895
+device: 0.851
+semantic: 0.817
+files: 0.816
+permissions: 0.774
+other: 0.756
+KVM: 0.742
+vnc: 0.676
+PID: 0.666
+socket: 0.631
+debug: 0.631
+network: 0.566
+boot: 0.565
+
+Poor disk performance on sparse VMDKs
+
+Found in QEMU 4.1, and reproduced on master.
+
+QEMU appears to suffer from remarkably poor disk performance when writing to sparse-extent VMDKs. Of course it's to be expected that allocation takes time and sparse VMDKs perform worse than allocated VMDKs, but surely not by the orders of magnitude I'm observing. On my system, the fully allocated write speeds are approximately 1.5GB/s, while the fully sparse write speeds can be as low as 10MB/s. I've noticed that adding "cache unsafe" reduces the issue dramatically, bringing speeds up to around 750MB/s. I don't know if this is still slow or if this perhaps reveals a problem with the default caching method.
+
+To reproduce the issue I've attached two 4GiB VMDKs. Both are completely empty and both are technically sparse-extent VMDKs, but one is 100% pre-allocated and the other is 100% unallocated. If you attach these VMDKs as second and third disks to an Ubuntu VM running on QEMU (with KVM) and measure their write performance (using dd to write to /dev/sdb and /dev/sdc for example) the difference in write speeds is clear.
+
+For what it's worth, the flags I'm using that relate to the VMDK are as follows:
+
+`-drive if=none,file=sparse.vmdk,id=hd0,format=vmdk -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd0`
+
+
+
+On Tue, Apr 28, 2020 at 10:45:07PM -0000, Alan Murtagh wrote:
+> QEMU appears to suffer from remarkably poor disk performance when
+> writing to sparse-extent VMDKs. Of course it's to be expected that
+> allocation takes time and sparse VMDKs peform worse than allocated
+> VMDKs, but surely not on the orders of magnitude I'm observing.
+
+Hi Alan,
+This is expected behavior. The VMDK block driver is not intended for
+running VMs. It is primarily there for qemu-img convert support.
+
+You can get good performance by converting the image file to qcow2 or
+raw instead.
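+
+For example, with standard qemu-img usage:
+
+  $ qemu-img convert -f vmdk -O qcow2 sparse.vmdk disk.qcow2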
+
+The effort required to develop a high-performance image format driver
+for non-trivial file formats like VMDK is quite high. Therefore only
+qcow2 goes through the lengths required to deliver good performance
+(request parallelism, metadata caching, optimizing metadata update
+dependencies, etc).
+
+The non-native image format drivers are simple and basically only work
+well for sequential I/O with no parallel requests. That's all qemu-img
+convert needs!
+
+If someone volunteers to optimize VMDK then I'm sure the patches could
+be merged. In the meantime I suggest using QEMU's native image formats:
+qcow2 or raw.
+
+Stefan
+
+
+Thanks Stefan.
+
+Ok, I'm closing this now, since this is the expected behavior according to Stefan's description.
+
diff --git a/results/classifier/zero-shot/108/performance/1881450 b/results/classifier/zero-shot/108/performance/1881450
new file mode 100644
index 000000000..043768f86
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1881450
@@ -0,0 +1,75 @@
+performance: 0.952
+files: 0.938
+other: 0.864
+graphic: 0.802
+device: 0.789
+debug: 0.775
+PID: 0.769
+permissions: 0.766
+semantic: 0.763
+boot: 0.751
+socket: 0.708
+vnc: 0.622
+network: 0.605
+KVM: 0.473
+
+Emulation of a math function fails for m68k Linux user mode
+
+Please check the attached math-example.c file.
+When running the m68k executable under QEMU, it results in an "Illegal instruction" error.
+Other targets don't produce this error.
+
+Steps to reproduce the bug:
+
+1. Download the math-example.c attached file.
+2. Compile it by running:
+        m68k-linux-gnu-gcc -O2 -static math-example.c -o math-example-m68k -lm
+3. Run the executable with QEMU:
+        /build/qemu-5.0.0/build-gcc/m68k-linux-user/qemu-m68k math-example-m68k 
+
+The output of execution is:
+        Profiling function expm1f():
+        qemu: uncaught target signal 4 (Illegal instruction) - core dumped
+        Illegal instruction (core dumped)
+
+Expected output:
+        Profiling function expm1f():
+          Elapsed time: 47 ms
+          Control result: 71804.953125
+
+
+
+
+
+Tracing gives me:
+
+IN: expm1f
+0x800005cc:  fetoxm1x %fp2,%fp0
+Disassembler disagrees with translator over instruction decoding
+Please report this to <email address hidden>
+
+(gdb) x/2hx 0x800005cc
+0x800005cc:	0xf200	0x0808
+
+The instruction is not implemented in qemu. I'll fix that.
+
+
+
+Fix available.
+
+Execution doesn't fail anymore:
+
+  Profiling function expm1f():
+    Elapsed time: 41 ms
+    Control result: 71805.108342
+
+Control result matches real hardware one:
+
+  Profiling function expm1f():
+    Elapsed time: 2152 ms
+    Control result: 71805.108342
+
+
+Fixed here:
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=250b1da35d579f423
+
diff --git a/results/classifier/zero-shot/108/performance/1883400 b/results/classifier/zero-shot/108/performance/1883400
new file mode 100644
index 000000000..40b5fba85
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1883400
@@ -0,0 +1,62 @@
+performance: 0.964
+semantic: 0.834
+graphic: 0.820
+device: 0.788
+other: 0.746
+files: 0.741
+PID: 0.726
+permissions: 0.694
+network: 0.685
+socket: 0.684
+vnc: 0.675
+KVM: 0.533
+boot: 0.498
+debug: 0.320
+
+Windows 10 extremely slow and unresponsive
+
+Hi,
+
+Fedora 32, x64
+qemu-5.0.0-2.fc32.x86_64
+
+https://www.microsoft.com/en-us/software-download/windows10ISO
+Win10_2004_English_x64.iso
+
+Windows 10 is excruciatingly slow since upgrading to 5.0.0-2.fc32.  Disabling your repo and downgrading to 2:4.2.0-7.fc32 (the package in the Fedora repo) corrects the issue.
+
+You can duplicate this off of the Windows 10 ISO (see above) and do not even have to install Windows 10 itself.
+
+Please fix,
+
+Many thanks,
+-T
+
+On Sun, Jun 14, 2020 at 01:30:07AM -0000, Toddandmargo-n wrote:
+> Public bug reported:
+> 
+> Hi,
+> 
+> Fedora 32, x64
+> qemu-5.0.0-2.fc32.x86_64
+> 
+> https://www.microsoft.com/en-us/software-download/windows10ISO
+> Win10_2004_English_x64.iso
+> 
+> Windows 10 is excruciatingly slow since upgrading to 5.0.0-2.fc32.
+> Disabling your repo and downgrading to 2:4.2.0-7.fc32 and corrects the
+> issue (the package in the Fedora repo).
+> 
+> You can duplicate this off of the Windows 10 ISO (see above) and do not
+> even have to install Windows 10 itself.
+
+Could this be a duplicate of
+https://bugs.launchpad.net/qemu/+bug/1877716?
+
+Stefan
+
+
+1877716 sounds exactly like what I experienced.
+
+ok, closing this as a duplicate
+
diff --git a/results/classifier/zero-shot/108/performance/1884 b/results/classifier/zero-shot/108/performance/1884
new file mode 100644
index 000000000..fc196a207
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1884
@@ -0,0 +1,25 @@
+performance: 0.922
+graphic: 0.790
+device: 0.729
+vnc: 0.688
+PID: 0.594
+socket: 0.522
+network: 0.519
+permissions: 0.479
+semantic: 0.458
+debug: 0.441
+boot: 0.390
+files: 0.386
+other: 0.249
+KVM: 0.141
+
+avocado-system-* CI jobs are unreliable
+Description of problem:
+The avocado-system-* CI jobs fail randomly:
+https://gitlab.com/qemu-project/qemu/-/jobs/5058610614  
+https://gitlab.com/qemu-project/qemu/-/jobs/5058610654  
+https://gitlab.com/qemu-project/qemu/-/jobs/5030428571  
+
+I don't know how to interpret the test output. Until these CI jobs pass reliably it won't be possible for me to identify when a subtest that is actually healthy/reliable breaks.
+
+Please take a look at the logs and fix or remove unreliable test cases.
diff --git a/results/classifier/zero-shot/108/performance/1886306 b/results/classifier/zero-shot/108/performance/1886306
new file mode 100644
index 000000000..883edad92
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1886306
@@ -0,0 +1,29 @@
+performance: 0.941
+graphic: 0.795
+device: 0.668
+semantic: 0.584
+other: 0.445
+files: 0.420
+network: 0.415
+socket: 0.345
+vnc: 0.291
+PID: 0.263
+permissions: 0.247
+debug: 0.188
+boot: 0.161
+KVM: 0.083
+
+qemu running slow when the window is in background
+
+Reported by <jedinix> on IRC:
+
+QEMU almost freezes when running with `GDK_BACKEND=x11` set and the parameter `gl=on` added to the `-display` option.
+
+GDK_BACKEND=x11 qemu-system-x86_64 -nodefaults -no-user-config -enable-kvm -machine q35 -cpu host -m 4G -display gtk,gl=on -vga std -usb -device usb-kbd -drive file=/tmp/Win10.qcow2,media=disk,format=qcow2 -drive file=~/Downloads/Win10_2004_EnglishInternational_x64.iso,media=cdrom
+
+Leaving out `GDK_BACKEND=x11` or `gl=on` fixes the issue.
+
+I think there is quite a bit of information missing here? Which host OS / distribution are we talking about here? Which parameters were used for "configure"? Which QEMU version has been used? Is it still reproducible with the latest version? ... thus I wonder whether this should get closed, or whether it's worth the effort to move this to the new tracker at Gitlab?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1892081 b/results/classifier/zero-shot/108/performance/1892081
new file mode 100644
index 000000000..73f5ff2ef
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1892081
@@ -0,0 +1,50 @@
+performance: 0.959
+graphic: 0.587
+device: 0.569
+vnc: 0.526
+network: 0.423
+semantic: 0.395
+socket: 0.363
+boot: 0.320
+files: 0.300
+permissions: 0.261
+KVM: 0.249
+PID: 0.218
+other: 0.162
+debug: 0.145
+
+Performance improvement when using "QEMU_FLATTEN" with softfloat type conversions
+
+Attached below is a matrix multiplication program for double data
+types. The program performs the casting operation "(double)rand()"
+when generating random numbers.
+
+This operation calls the integer-to-float softfloat conversion
+function "int32_to_float64".
+
+Adding the "QEMU_FLATTEN" attribute to the function definition
+decreases the instructions per call of the function by about 63%.
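+
+A minimal illustration of the mechanism (a sketch, not the actual QEMU
+source; QEMU defines QEMU_FLATTEN in include/qemu/compiler.h as
+__attribute__((flatten)), which asks GCC/Clang to inline the entire call
+tree into the annotated function's body):
+
+```c
+#include <stdint.h>
+
+#define QEMU_FLATTEN __attribute__((flatten))
+
+static double half(int32_t a) { return (double)a * 0.5; }
+
+/* With "flatten", calls inside this function (like half()) are inlined,
+ * removing per-call overhead; the report measures the same effect on
+ * int32_to_float64(). */
+QEMU_FLATTEN double int32_to_double_demo(int32_t a)
+{
+    return half(a) + half(-a);
+}
+```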
+
+Attached are before and after performance screenshots from
+KCachegrind.
+
+Confirmed, although the "65% decrease" is on 0.44% of the total
+execution time for this test case, so the decrease isn't actually
+noticeable.
+
+Nevertheless, it's a simple enough change.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/134
+
+
diff --git a/results/classifier/zero-shot/108/performance/1895703 b/results/classifier/zero-shot/108/performance/1895703
new file mode 100644
index 000000000..8a91a471c
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1895703
@@ -0,0 +1,70 @@
+performance: 0.981
+permissions: 0.891
+graphic: 0.714
+device: 0.548
+PID: 0.342
+semantic: 0.335
+KVM: 0.215
+boot: 0.177
+network: 0.160
+debug: 0.146
+vnc: 0.136
+files: 0.108
+other: 0.105
+socket: 0.104
+
+performance degradation in tcg since Meson switch
+
+The buildsys conversion to Meson (1d806cef0e3..7fd51e68c34)
+introduced a degradation in performance in some TCG targets:
+
+--------------------------------------------------------
+Test Program: matmult_double
+--------------------------------------------------------
+Target              Instructions     Previous    Latest
+                                     1d806cef   7fd51e68
+----------  --------------------  ----------  ----------
+alpha              3 233 957 639       -----     +7.472%
+m68k               3 919 110 506       -----    +18.433%
+--------------------------------------------------------
+
+Original report from Ahmed Karaman with further testing done
+by Aleksandar Markovic:
+https://<email address hidden>/msg740279.html
+
+Can I get a sample statically linked m68k binary that exhibits this effect?
+
+
+
+I get
+
+$ qemu-m68k ./matmult_double-m68k
+Error while loading /home/pbonzini/matmult_double-m68k: Permission denied
+
+
+Paolo: what are the permissions on matmult_double-m68k on your local fs? (needs to be readable/executable by you)
+
+
+Uff, of course...
+
+This patch should fix the regression:
+
+diff --git a/configure b/configure
+index 0004c46525..0786144043 100755
+--- a/configure
++++ b/configure
+@@ -7414,6 +7414,7 @@ NINJA=${ninja:-$PWD/ninjatool} $meson setup \
+         -Dwerror=$(if test "$werror" = yes; then echo true; else echo false; fi) \
+         -Dstrip=$(if test "$strip_opt" = yes; then echo true; else echo false; fi) \
+         -Db_pie=$(if test "$pie" = yes; then echo true; else echo false; fi) \
++        -Db_staticpic=$(if test "$pie" = yes; then echo true; else echo false; fi) \
+         -Db_coverage=$(if test "$gcov" = yes; then echo true; else echo false; fi) \
+ 	-Dmalloc=$malloc -Dmalloc_trim=$malloc_trim -Dsparse=$sparse \
+ 	-Dkvm=$kvm -Dhax=$hax -Dwhpx=$whpx -Dhvf=$hvf \
+
+
+This was fixed initially by commit 0c3dd50eaecbfe2, which is the change suggested in Paolo's comment #6, and then refined by commit a5cb7c5afe717d4.
+
+
+Released with QEMU v5.2.0.
+
diff --git a/results/classifier/zero-shot/108/performance/1896 b/results/classifier/zero-shot/108/performance/1896
new file mode 100644
index 000000000..213c55f82
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1896
@@ -0,0 +1,69 @@
+performance: 0.922
+semantic: 0.893
+other: 0.855
+graphic: 0.807
+debug: 0.804
+device: 0.789
+network: 0.783
+socket: 0.749
+PID: 0.748
+permissions: 0.643
+files: 0.612
+vnc: 0.539
+boot: 0.380
+KVM: 0.363
+
+Use `qemu_exit()` function instead of `exit()`
+Additional information:
+I just saw a similar refactoring for the GDB part of QEMU and thought it might be useful in the more general case too: https://lore.kernel.org/qemu-devel/20230907112640.292104-1-chigot@adacore.com/T/#m540552946cfa960b34c4d76d2302324f5de8627f
+
+```
+$ rg "exit\(0" -t c -l
+gdbstub/gdbstub.c
+qemu-edid.c
+subprojects/libvhost-user/libvhost-user.c
+semihosting/arm-compat-semi.c
+softmmu/async-teardown.c
+softmmu/device_tree.c
+softmmu/vl.c
+softmmu/runstate.c
+os-posix.c
+dtc/util.c
+dtc/dtc.c
+dtc/tests/dumptrees.c
+qemu-keymap.c
+qemu-io.c
+contrib/ivshmem-server/main.c
+contrib/rdmacm-mux/main.c
+tests/qtest/vhost-user-blk-test.c
+tests/qtest/fuzz/fuzz.c
+tests/qtest/fuzz/generic_fuzz.c
+tests/unit/test-seccomp.c
+tests/unit/test-rcu-list.c
+tests/unit/rcutorture.c
+tests/bench/qht-bench.c
+tests/bench/atomic64-bench.c
+tests/bench/atomic_add-bench.c
+tests/unit/test-iov.c
+tests/tcg/multiarch/linux/linux-test.c
+tests/tcg/aarch64/mte-3.c
+tests/tcg/aarch64/pauth-2.c
+tests/tcg/aarch64/mte-5.c
+tests/tcg/aarch64/mte-6.c
+tests/tcg/aarch64/mte-2.c
+tests/tcg/cris/libc/check_glibc_kernelversion.c
+tests/tcg/cris/libc/check_lz.c
+tests/tcg/s390x/signals-s390x.c
+tests/tcg/i386/hello-i386.c
+tests/tcg/cris/bare/sys.c
+tests/tcg/ppc64/mtfsf.c
+qemu-nbd.c
+net/net.c
+hw/nvram/eeprom93xx.c
+hw/arm/allwinner-r40.c
+hw/rdma/rdma_backend.c
+hw/watchdog/watchdog.c
+trace/control.c
+hw/pci/pci.c
+hw/misc/sifive_test.c
+```
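+
+A minimal sketch of the idea, assuming a hypothetical `qemu_exit()`
+wrapper (the function does not exist yet; the name comes from this
+report's title and the hooks are illustrative):
+
+```c
+#include <stdlib.h>
+
+/* Hypothetical central exit wrapper: one place to run teardown hooks
+ * (gdbstub notification, log flushing, and so on) before terminating. */
+static void qemu_exit(int code)
+{
+    /* Teardown hooks would run here before the real exit. */
+    exit(code);
+}
+```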
diff --git a/results/classifier/zero-shot/108/performance/1896754 b/results/classifier/zero-shot/108/performance/1896754
new file mode 100644
index 000000000..ba0f14c7e
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1896754
@@ -0,0 +1,55 @@
+performance: 0.936
+boot: 0.745
+other: 0.709
+graphic: 0.611
+semantic: 0.609
+device: 0.579
+network: 0.543
+PID: 0.539
+debug: 0.508
+socket: 0.504
+permissions: 0.497
+files: 0.447
+vnc: 0.400
+KVM: 0.281
+
+Performance degradation for WinXP boot time after b55f54bc
+
+Qemu 5.1 loads Windows XP in TCG mode 5-6 times slower (~2 minutes) than 4.2 (25 seconds), I git bisected it, and it appears that commit b55f54bc965607c45b5010a107a792ba333ba654 causes this issue. Probably similar to an older fixed bug https://bugs.launchpad.net/qemu/+bug/1672383
+
+Command line is trivial: qemu-system-x86_64 -nodefaults -vga std -m 4096M -hda WinXP.qcow2 -monitor stdio -snapshot
+
+The QEMU project is currently moving its bug tracking to another system.
+For this we need to know which bugs are still valid and which could be
+closed already. Thus we are setting the bug state to "Incomplete" now.
+
+If the bug has already been fixed in the latest upstream version of QEMU,
+then please close this ticket as "Fix released".
+
+If it is not fixed yet and you think that this bug report here is still
+valid, then you have two options:
+
+1) If you already have an account on gitlab.com, please open a new ticket
+for this problem in our new tracker here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues
+
+and then close this ticket here on Launchpad (or let it expire auto-
+matically after 60 days). Please mention the URL of this bug ticket on
+Launchpad in the new ticket on GitLab.
+
+2) If you don't have an account on gitlab.com and don't intend to get
+one, but still would like to keep this ticket opened, then please switch
+the state back to "New" or "Confirmed" within the next 60 days (other-
+wise it will get closed as "Expired"). We will then eventually migrate
+the ticket automatically to the new system (but you won't be the reporter
+of the bug in the new system and thus you won't get notified on changes
+anymore).
+
+Thank you and sorry for the inconvenience.
+
+
+Ticket has been moved here (thanks, Maksim!):
+https://gitlab.com/qemu-project/qemu/-/issues/286
+Thus closing this one at Launchpad now.
+
diff --git a/results/classifier/zero-shot/108/performance/1901892 b/results/classifier/zero-shot/108/performance/1901892
new file mode 100644
index 000000000..665061c79
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1901892
@@ -0,0 +1,88 @@
+performance: 0.929
+graphic: 0.925
+permissions: 0.906
+semantic: 0.901
+device: 0.893
+debug: 0.893
+KVM: 0.892
+PID: 0.891
+other: 0.887
+files: 0.874
+vnc: 0.861
+boot: 0.817
+socket: 0.811
+network: 0.759
+
+qemu-img create corrupts the qcow2 if the file already exists
+
+When creating a disk using the qemu-img create command, if the destination path of the qcow2 file already exists, it shows an error saying that it cannot get a lock and exits with status 1, but it corrupts the qcow2 file anyway.
+
+Steps to reproduce:
+1. Have a guest running with a root (vda) and a second device (vdc).
+In my case it is a clean Ubuntu 16.04 image with kernel 4.4.0-190-generic x86_64.
+The vdc disk is called testadddisk-3.qcow2
+2. vdc is an xfs over lvm.
+pvcreate /dev/vdc
+vgcreate myVg /dev/vdc
+lvcreate -l+100%FREE -n myLv myVg
+mkfs.xfs /dev/mapper/myVg-myLv
+mount /dev/mapper/myVg-myLv /mnt
+3. Create disk IO on that device in the guest.
+while true ; do dd if=/dev/zero of=/mnt/testfile bs=1024 count=1000 ; sleep 1; done
+4. Execute the command to create a new device but use the same name of the device attached:
+sudo qemu-img create -f qcow2 testadddisk-3.qcow2 20G
+The output of the command is this:
+Formatting 'testadddisk-3.qcow2', fmt=qcow2 size=21474836480 cluster_size=65536 lazy_refcounts=off refcount_bits=16
+qemu-img: testadddisk-3.qcow2: Failed to get "write" lock
+Is another process using the image?
+
+The write continues in the guest, but after it is shut down and powered on again you get this:
+error: Failed to start domain testadddisk
+error: internal error: process exited while connecting to monitor: 2020-10-27T22:00:51.628374Z qemu-system-x86_64: -drive file=/var/lib/vmImages/testadddisk-3.qcow2,format=qcow2,if=none,id=drive-virtio-disk2: Image is not in qcow2 format
+
+I ran the qemu-img create command under strace, and I believe it first opens the file in write mode, then truncates it, and only after that reports that it cannot get a lock. The output is in the attached file, as well as the guest xml, just in case.
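+
+A hedged reconstruction of that ordering (an assumption based on this
+report's strace, not the actual qemu-img source; the helper name is made
+up for illustration):
+
+```c
+#include <fcntl.h>
+#include <unistd.h>
+
+static int create_image_sketch(const char *path)
+{
+    int fd = open(path, O_RDWR | O_CREAT, 0644); /* opens the existing file */
+    if (fd < 0) {
+        return -1;
+    }
+    ftruncate(fd, 0); /* the old qcow2 content is already destroyed here */
+    /* Only now is the write lock requested; it fails because the running
+     * guest holds it, but by then the image is corrupted. */
+    close(fd);
+    return -1;
+}
+```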
+
+The host: 
+Ubuntu 18.04.5 LTS
+4.15.0-112-generic x86_64
+qemu packages installed:
+ii  qemu-block-extra:amd64                 1:2.11+dfsg-1ubuntu7.32                         amd64        extra block backend modules for qemu-system and qemu-utils
+ii  qemu-kvm                               1:2.11+dfsg-1ubuntu7.31                         amd64        QEMU Full virtualization on x86 hardware
+ii  qemu-system-common                     1:2.11+dfsg-1ubuntu7.32                         amd64        QEMU full system emulation binaries (common files)
+ii  qemu-system-x86                        1:2.11+dfsg-1ubuntu7.31                         amd64        QEMU full system emulation binaries (x86)
+ii  qemu-utils                             1:2.11+dfsg-1ubuntu7.32                         amd64        QEMU utilities
+
+
+
+The QEMU project is currently moving its bug tracking to another system.
+For this we need to know which bugs are still valid and which could be
+closed already. Thus we are setting the bug state to "Incomplete" now.
+
+If the bug has already been fixed in the latest upstream version of QEMU,
+then please close this ticket as "Fix released".
+
+If it is not fixed yet and you think that this bug report here is still
+valid, then you have two options:
+
+1) If you already have an account on gitlab.com, please open a new ticket
+for this problem in our new tracker here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues
+
+and then close this ticket here on Launchpad (or let it expire auto-
+matically after 60 days). Please mention the URL of this bug ticket on
+Launchpad in the new ticket on GitLab.
+
+2) If you don't have an account on gitlab.com and don't intend to get
+one, but still would like to keep this ticket opened, then please switch
+the state back to "New" or "Confirmed" within the next 60 days (other-
+wise it will get closed as "Expired"). We will then eventually migrate
+the ticket automatically to the new system (but you won't be the reporter
+of the bug in the new system and thus you won't get notified on changes
+anymore).
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1901981 b/results/classifier/zero-shot/108/performance/1901981
new file mode 100644
index 000000000..9d9f40596
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1901981
@@ -0,0 +1,91 @@
+performance: 0.948
+permissions: 0.938
+debug: 0.915
+files: 0.899
+network: 0.893
+device: 0.890
+semantic: 0.880
+PID: 0.877
+graphic: 0.870
+socket: 0.868
+other: 0.820
+vnc: 0.820
+boot: 0.796
+KVM: 0.788
+
+assert issue locates in hw/usb/dev-storage.c:248: usb_msd_send_status
+
+Hello,
+
+I found an assertion failure through hw/usb/dev-storage.c.
+
+This was found in latest version 5.1.0.
+
+--------
+
+qemu-system-x86_64: hw/usb/dev-storage.c:248: usb_msd_send_status: Assertion `s->csw.sig == cpu_to_le32(0x53425355)' failed.
+[1]    29544 abort      sudo  -enable-kvm -boot c -m 2G -drive format=qcow2,file=./ubuntu.img -nic
+
+To reproduce the assertion failure, please run the QEMU with following command line.
+
+
+$ qemu-system-x86_64 -enable-kvm -boot c -m 2G -drive format=qcow2,file=./ubuntu.img -nic user,model=rtl8139,hostfwd=tcp:0.0.0.0:5555-:22 -device piix4-usb-uhci,id=uhci -device usb-storage,drive=mydrive -drive id=mydrive,file=null-co://,size=2M,format=raw,if=none
+
+The poc is attached.
+
+
+
+The PoC doesn't run on Fedora:
+uhci: common.c:59: gva_to_gpa: Assertion `gfn != -1' failed.
+
+Can you build qemu with DEBUG_MSD enabled (see hw/usb/dev-storage.c),
+then attach both stderr log and stacktrace?
+
+thanks.
+
+Sorry, my reproduction environment is as follows:
+    Host: ubuntu 18.04
+    Guest: ubuntu 18.04
+
+Stderr log is as follows:
+usb-msd: Reset
+usb-msd: Command on LUN 0
+usb-msd: Command tag 0x0 flags 00000000 len 0 data 0
+[scsi.0 id=0] INQUIRY 0x00 0x00 0x00 0x01 0x00 - from-dev len=1
+usb-msd: Deferring packet 0x6110002d2d40 [wait status]
+usb-msd: Command status 0 tag 0x0, len 256
+qemu-system-x86_64: hw/usb/dev-storage.c:248: usb_msd_send_status: Assertion `s->csw.sig == cpu_to_le32(0x53425355)' failed.
+[1]    643 abort      sudo  -enable-kvm -boot c -m 4G -drive format=qcow2,file=./ubuntu.img -nic
+
+
+Backtrace is as follows:
+#0  0x00007f8b36a63f47 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
+#1  0x00007f8b36a658b1 in __GI_abort () at abort.c:79
+#2  0x00007f8b36a5542a in __assert_fail_base (fmt=0x7f8b36bdca38 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x55aef41e7440 "s->csw.sig == cpu_to_le32(0x53425355)", file=file@entry=0x55aef41e7180 "hw/usb/dev-storage.c", line=line@entry=248, function=function@entry=0x55aef41e7980 <__PRETTY_FUNCTION__.29124> "usb_msd_send_status") at assert.c:92
+#3  0x00007f8b36a554a2 in __GI___assert_fail (assertion=assertion@entry=0x55aef41e7440 "s->csw.sig == cpu_to_le32(0x53425355)", file=file@entry=0x55aef41e7180 "hw/usb/dev-storage.c", line=line@entry=248, function=function@entry=0x55aef41e7980 <__PRETTY_FUNCTION__.29124> "usb_msd_send_status") at assert.c:101
+#4  0x000055aef32226d5 in usb_msd_send_status (s=0x623000001d00, p=0x6110002e3500) at hw/usb/dev-storage.c:248
+#5  0x000055aef322804e in usb_msd_handle_data (dev=0x623000001d00, p=0x6110002e3500) at hw/usb/dev-storage.c:525
+#6  0x000055aef30bc46a in usb_device_handle_data (dev=dev@entry=0x623000001d00, p=p@entry=0x6110002e3500) at hw/usb/bus.c:179
+#7  0x000055aef30a0ab4 in usb_process_one (p=p@entry=0x6110002e3500) at hw/usb/core.c:387
+#8  0x000055aef30a9db0 in usb_handle_packet (dev=0x623000001d00, p=p@entry=0x6110002e3500) at hw/usb/core.c:419
+#9  0x000055aef30fe890 in uhci_handle_td (s=s@entry=0x61f000002a80, q=0x6060000c9200, q@entry=0x0, qh_addr=qh_addr@entry=0, td=td@entry=0x7ffd88f90620, td_addr=<optimized out>, int_mask=int_mask@entry=0x7ffd88f905a0) at hw/usb/hcd-uhci.c:899
+#10 0x000055aef3104c6f in uhci_process_frame (s=s@entry=0x61f000002a80) at hw/usb/hcd-uhci.c:1075
+#11 0x000055aef31098e0 in uhci_frame_timer (opaque=0x61f000002a80) at hw/usb/hcd-uhci.c:1174
+#12 0x000055aef3ae5f95 in timerlist_run_timers (timer_list=0x60b000051be0) at util/qemu-timer.c:572
+#13 0x000055aef3ae619b in qemu_clock_run_timers (type=QEMU_CLOCK_VIRTUAL) at util/qemu-timer.c:586
+#14 0x000055aef3ae6922 in qemu_clock_run_all_timers () at util/qemu-timer.c:672
+#15 0x000055aef3aca63d in main_loop_wait (nonblocking=0) at util/main-loop.c:523
+#16 0x000055aef1f320f5 in qemu_main_loop () at /home/zjusvn/new-hyper/qemu-5.1.0/softmmu/vl.c:1676
+#17 0x000055aef397475c in main (argc=18, argv=0x7ffd88f90e98, envp=0x7ffd88f90f30) at /home/zjusvn/new-hyper/qemu-5.1.0/softmmu/main.c:49
+#18 0x00007f8b36a46b97 in __libc_start_main (main=0x55aef397471d <main>, argc=18, argv=0x7ffd88f90e98, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffd88f90e88) at ../csu/libc-start.c:310
+#19 0x000055aef1a3481a in _start ()
+
+thanks.
+
+https://git.kraxel.org/cgit/qemu/log/?h=sirius/usb-asserts
+can you try that branch?
+
+OK, it seems to be fixed now.
+
+Released with QEMU v5.2.0.
+
diff --git a/results/classifier/zero-shot/108/performance/1926174 b/results/classifier/zero-shot/108/performance/1926174
new file mode 100644
index 000000000..2797623d9
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1926174
@@ -0,0 +1,63 @@
+performance: 0.924
+graphic: 0.828
+other: 0.815
+device: 0.807
+files: 0.703
+permissions: 0.663
+semantic: 0.622
+PID: 0.619
+network: 0.591
+debug: 0.590
+vnc: 0.577
+socket: 0.555
+KVM: 0.458
+boot: 0.447
+
+Laggy and/or displaced mouse input on CloudReady (Chrome OS) VM
+
+This weekend I tried to get a CloudReady (Chrome OS) VM running on qemu 5.2. This seems to work quite well; in fact, performance seems to be great. The only problem is mouse input.
+
+Using the SDL display, there is no visible mouse unless I set "show-cursor=on". After that, the mouse pointer flickers a bit and is displaced most of the time, so I need to click below a button in order to hit it. After switching to fullscreen and back using ctrl-alt-f, this effect seems to be fixed for a while, but the mouse pointer no longer reaches all parts of the emulated screen.
+
+Using SPICE instead, the mouse pointer is drawn, but it is *very* laggy. In fact, it is only redrawn every few seconds, so it is unusable, but placement seems to be correct. Text input is instant, so general emulation speed is not an issue here.
+
+To reproduce, download the free image from https://www.neverware.com/freedownload#home-edition-install
+
+Then run one of the following commands:
+
+qemu-system-x86_64 -drive driver=raw,file=cloudready-free-89.3.3-64bit.bin -machine pc,accel=kvm -m 2048 -device virtio-vga,virgl=on -display sdl,gl=on,show-cursor=on -usb -device usb-mouse -device intel-hda -device hda-duplex
+
+qemu-system-x86_64 -drive driver=raw,file=cloudready-free-89.3.3-64bit.bin -machine pc,accel=kvm -m 2048 -device virtio-vga,virgl=on -display spice-app,gl=on -usb -device usb-mouse -device intel-hda -device hda-duplex
+
+The QEMU project is currently moving its bug tracking to another system.
+For this we need to know which bugs are still valid and which could be
+closed already. Thus we are setting the bug state to "Incomplete" now.
+
+If the bug has already been fixed in the latest upstream version of QEMU,
+then please close this ticket as "Fix released".
+
+If it is not fixed yet and you think that this bug report here is still
+valid, then you have two options:
+
+1) If you already have an account on gitlab.com, please open a new ticket
+for this problem in our new tracker here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues
+
+and then close this ticket here on Launchpad (or let it expire auto-
+matically after 60 days). Please mention the URL of this bug ticket on
+Launchpad in the new ticket on GitLab.
+
+2) If you don't have an account on gitlab.com and don't intend to get
+one, but still would like to keep this ticket opened, then please switch
+the state back to "New" or "Confirmed" within the next 60 days (other-
+wise it will get closed as "Expired"). We will then eventually migrate
+the ticket automatically to the new system (but you won't be the reporter
+of the bug in the new system and thus you won't get notified on changes
+anymore).
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/1940 b/results/classifier/zero-shot/108/performance/1940
new file mode 100644
index 000000000..a2cebf4bf
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/1940
@@ -0,0 +1,35 @@
+performance: 0.923
+graphic: 0.914
+device: 0.871
+other: 0.861
+socket: 0.803
+boot: 0.803
+semantic: 0.726
+debug: 0.710
+PID: 0.703
+permissions: 0.667
+network: 0.521
+files: 0.323
+KVM: 0.268
+vnc: 0.208
+
+Saving vm with shared folder results in Error: State blocked by non-migratable device  '000.../vhost-user-fs'
+Description of problem:
+Saving a vm with savevm in the QEMU Monitor with a shared folder causes the following error message:
+`Error: State blocked by non-migratable device '0000:00:05.0/vhost-user-fs'`
+Steps to reproduce:
+1. Get a qcow2 image that can boot (not sure if a working qcow2 image is actually needed)
+2. Start virtiofsd like this: /usr/libexec/virtiofsd --socket-path=/tmp/virtiofs_socket -o source=/path/to/share
+3. Run qemu-system-x86_64 -m 4G -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on -numa node,memdev=mem  -smp 2 -hda image.qcow2 -vga qxl -virtfs local,path=/path/to/share,mount_tag=share,security_model=passthrough,id=virtiofs -chardev socket,id=char0,path=/tmp/virtiofs_socket -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=share
+4. Let the image boot and/or go into the QEMU monitor.
+5. type savevm testvm
+6. See error.
+Additional information:
+This happens with both the legacy virtio-fs and the rust version.
+
+According to the first reply to https://gitlab.com/virtio-fs/virtiofsd/-/issues/81 there needs to be "a lot of changes not only in virtiofsd but also in the rust-vmm crates and qemu (and maybe in the vhost-user protocol)" so I'm reporting this here in the hopes it will speed something up.
+
+I followed the following to get virtiofsd working with command line QEMU:
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+This is blocking our migration from VirtualBox, which doesn't have problems like this. The least I need is a workaround or an alternative shared filesystem. We are trying to avoid networked shares.
diff --git a/results/classifier/zero-shot/108/performance/2014 b/results/classifier/zero-shot/108/performance/2014
new file mode 100644
index 000000000..1ce28fdf6
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2014
@@ -0,0 +1,68 @@
+performance: 0.929
+device: 0.905
+graphic: 0.877
+debug: 0.770
+socket: 0.731
+PID: 0.641
+network: 0.614
+semantic: 0.611
+vnc: 0.507
+other: 0.499
+boot: 0.406
+KVM: 0.377
+permissions: 0.328
+files: 0.319
+
+virtio: bounce.in_use==true in virtqueue_map_desc()
+Description of problem:
+
+Steps to reproduce:
+1. Build EDK II (edk2-stable202311) for riscv64
+2. Build UEFI SCT (commit 81dfa8d53d4290) for riscv64
+3. Run the UEFI SCT
+4. Observe the message "qemu: virtio: bogus descriptor or out of resources" after which the execution stalls.
+
+The full procedure is described in https://github.com/xypron/sct_release_test
+
+To save time you can call `sct -u` and select only test 'MediaAccessTest\\BlockIOProtocolTest'. Run it with `F9`.
+Additional information:
+virtqueue_map_desc() may be called with a large buffer size `sz`. It will then call dma_memory_map() multiple times in a loop. In address_space_map(), `bounce.in_use` is set to `true` on the first call. Each subsequent call is bound to fail.
+
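+A toy model of that failure mode (illustrative only, not actual QEMU
+code):
+
+```c
+#include <stdatomic.h>
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
+
+static atomic_bool bounce_in_use;
+static uint8_t bounce_buffer[4096];
+
+static void *map_indirect(uint64_t *plen)
+{
+    /* The first mapping claims the buffer. Every later mapping, such as
+     * the next dma_memory_map() call from the virtqueue_map_desc() loop,
+     * fails here until the first one is unmapped. */
+    if (atomic_exchange(&bounce_in_use, true)) {
+        *plen = 0;
+        return NULL;
+    }
+    return bounce_buffer;
+}
+```
+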
+To verify this is the cause I applied the following diff:
+
+```plaintext
+diff --git a/system/physmem.c b/system/physmem.c
+index a63853a7bc..12b3c2f828 100644
+--- a/system/physmem.c
++++ b/system/physmem.c
+@@ -3151,12 +3151,16 @@ void *address_space_map(AddressSpace *as,
+ 
+     if (!memory_access_is_direct(mr, is_write)) {
+         if (qatomic_xchg(&bounce.in_use, true)) {
++           fprintf(stderr, "bounce.in_use in address_space_map\n");
++
+             *plen = 0;
+             return NULL;
+         }
+         /* Avoid unbounded allocations */
+         l = MIN(l, TARGET_PAGE_SIZE);
+         bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
++       if (!bounce.buffer)
++           fprintf(stderr, "Out of memory in address_space_map\n");
+         bounce.addr = addr;
+         bounce.len = l;
+```
+
+and saw this output:
+
+```plaintext
+Logfile: "\sct\Log\MediaAccessTest\BlockIOProtocolTest0\ReadBlocks_Conf_0_0_8261
+59D3-04A5-4CCE-8431-344707A8B57A.log"
+Test Started: 12/02/23  08:43a
+------------------------------------------------------------
+Current Device: Acpi(PNP0A03,0)/Pci(3|0)
+Bounce.in_use in address_space_map
+qemu: virtio: bogus descriptor or out of resources
+```
+
+See related bug #850.
diff --git a/results/classifier/zero-shot/108/performance/2016 b/results/classifier/zero-shot/108/performance/2016
new file mode 100644
index 000000000..041c96f15
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2016
@@ -0,0 +1,24 @@
+performance: 0.970
+graphic: 0.911
+device: 0.886
+PID: 0.669
+debug: 0.635
+semantic: 0.434
+socket: 0.407
+files: 0.384
+boot: 0.296
+vnc: 0.262
+other: 0.179
+network: 0.170
+permissions: 0.157
+KVM: 0.023
+
+-virtfs not working on windows
+Description of problem:
+Running the command in the steps below returns:
+qemu-system-aarch64.exe: -virtfs abc: There is no option group 'virtfs'
+qemu-system-aarch64.exe: -virtfs abc: virtfs support is disabled
+Steps to reproduce:
+1.qemu-system-aarch64.exe -virtfs abc
+Additional information:
+
diff --git a/results/classifier/zero-shot/108/performance/2068 b/results/classifier/zero-shot/108/performance/2068
new file mode 100644
index 000000000..818358738
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2068
@@ -0,0 +1,30 @@
+performance: 0.984
+graphic: 0.947
+device: 0.756
+boot: 0.655
+semantic: 0.581
+other: 0.535
+permissions: 0.496
+debug: 0.440
+socket: 0.418
+PID: 0.398
+vnc: 0.275
+files: 0.194
+KVM: 0.181
+network: 0.144
+
+Regression: 8.1.3 -> 8.2.0 breaks virtio vga driver
+Description of problem:
+I have a number of emulated Arch Linux guests using the same x11/kde configuration. After updating from 8.1.3 to 8.2.0, they all broke in the following way:
+- screen tearing/artifacts seen from bios up until sddm
+- sddm is possibly affected
+- kde/x11 has so many artifacts that it's unusable. If I attempt to write in a console window, I can only see parts of what I've written if I gently resize the bottom of the window. Clicking a menu item will only render the menu 1/6 of the time, and only partly. However, if I click where I remember the shutdown button to be, the system shuts down immediately, so this seems to be purely a graphics issue.
+- starting with -vga qxl fixes all issues.
+Steps to reproduce:
+1. make new qemu, install arch/kde
+2. boot said qemu with -vga virtio option
+3. observe issue from the moment it boots
+Additional information:
+Using nVidia card and drivers on host.
+
+Removing x86-video-vesa on the guest system seemed to significantly improve performance. There are still many artifacts, but it's almost usable with this driver removed.
diff --git a/results/classifier/zero-shot/108/performance/2183 b/results/classifier/zero-shot/108/performance/2183
new file mode 100644
index 000000000..34cefbea1
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2183
@@ -0,0 +1,35 @@
+performance: 0.945
+boot: 0.916
+graphic: 0.830
+device: 0.789
+semantic: 0.758
+PID: 0.506
+other: 0.505
+vnc: 0.499
+socket: 0.431
+debug: 0.426
+network: 0.298
+permissions: 0.293
+KVM: 0.251
+files: 0.198
+
+aarch64 emulation much slower since release 8.1.5 (issue also present on 8.2.1)
+Description of problem:
+Since QEMU 8.1.5, our aarch64-based emulation got much slower. We use a Linux 5.4 kernel which we cross-compile with the ARM toolchain. Things that are noticeable:
+- Boot time got a lot longer
+- All memory accesses seem to take 3x longer (can be verified by e.g. executing below script, address does not matter):
+```
+date
+for i in $(seq 0 1000); do
+    devmem 0x200000000 2>/dev/null
+done
+date
+```
+Steps to reproduce:
+Just boot an ARM based kernel on the virt machine and execute above script.
+Additional information:
+I've tried reproducing the issue on the master branch. There the issue is not present. It only seems to be present on releases 8.1.5 and 8.2.1. 
+
+I've narrowed the problem down to the following commit on the 8.2 branch (@bonzini): ef74024b76bf285e247add8538c11cb3c7399a1a accel/tcg: Revert mapping of PCREL translation block to multiple virtual addresses.
+
+Let me know if any other information / tests are required.
diff --git a/results/classifier/zero-shot/108/performance/2187 b/results/classifier/zero-shot/108/performance/2187
new file mode 100644
index 000000000..9cec4b52d
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2187
@@ -0,0 +1,16 @@
+performance: 0.937
+graphic: 0.644
+debug: 0.539
+semantic: 0.274
+vnc: 0.202
+KVM: 0.174
+device: 0.143
+boot: 0.052
+PID: 0.016
+permissions: 0.008
+other: 0.007
+network: 0.007
+socket: 0.006
+files: 0.001
+
+system/cpu: deadlock in pause_all_vcpus()
diff --git a/results/classifier/zero-shot/108/performance/2193 b/results/classifier/zero-shot/108/performance/2193
new file mode 100644
index 000000000..cc348b9a6
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2193
@@ -0,0 +1,45 @@
+performance: 0.989
+boot: 0.937
+graphic: 0.929
+other: 0.891
+files: 0.869
+device: 0.818
+PID: 0.641
+network: 0.614
+semantic: 0.610
+vnc: 0.580
+debug: 0.529
+KVM: 0.489
+permissions: 0.479
+socket: 0.431
+
+qemu-system-mips64el 70 times slower than qemu -ppc64, -riscv64, -s390x
+Description of problem:
+I installed Debian 12 inside a `qemu-system-mips64el` virtual machine. The performance is awfully slow, roughly 70 times slower than other qemu targets on the same host, namely ppc64, riscv64, s390x.
+
+The idea is to recompile and test an open source project on various platforms.
+
+Using a command such as `time make path/to/bin/file.o`, I compiled one single source file on the host and within qemu for various targets. The same source file, inside the same project, is used in all cases.
+
+The results are shown below (the "x" number between parentheses is the time factor compared to the compilation on the host).
+
+- Host (native): 0m1.316s
+- qemu-system-ppc64: 0m31.622s (x24)
+- qemu-system-riscv64: 0m40.691s (x31)
+- qemu-system-s390x: 0m43.459s (x33)
+- qemu-system-mips64el: 48m33.587s (x2214)
+
+The compilation of the same source is 24 to 33 times slower on the first three emulated targets, compared to the same compilation on the host, which is understandable. However, the same compilation on the mips64el target is 2214 times slower than on the host, roughly 70 times slower than the other emulated targets.
+
+Why do we have such a tremendous difference between qemu mips64el and other targets?
+Additional information:
+For reference, here are the qemu commands used to boot the other targets. The guest OS is Debian 12 or Ubuntu 22.
+```
+qemu-system-ppc64 -smp 8 -m 8192 -nographic ...
+qemu-system-riscv64 -machine virt -smp 8 -m 8192 -nographic ...
+qemu-system-s390x -machine s390-ccw-virtio -cpu max,zpci=on -smp 8 -m 8192 -nographic ...
+```
+
+The other targets use `-smp 8` while qemu-system-mips64el does not support smp. However, the test compiles one single source file and does not (or only marginally) use more than one CPU.
+
+Arguably, each compilation addresses a different target, uses a different backend, and the compilation time is not necessarily identical. OK, but 70 times slower seems way too much for this.
diff --git a/results/classifier/zero-shot/108/performance/2216 b/results/classifier/zero-shot/108/performance/2216
new file mode 100644
index 000000000..6917c000f
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2216
@@ -0,0 +1,18 @@
+performance: 0.956
+graphic: 0.919
+semantic: 0.567
+device: 0.429
+vnc: 0.187
+boot: 0.164
+network: 0.162
+KVM: 0.155
+debug: 0.112
+other: 0.065
+PID: 0.063
+socket: 0.057
+permissions: 0.030
+files: 0.016
+
+Increased artifact generation speed with parallelized process
+Additional information:
+`parallel-jobs` was referenced `main`
diff --git a/results/classifier/zero-shot/108/performance/2319 b/results/classifier/zero-shot/108/performance/2319
new file mode 100644
index 000000000..7439dff0d
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2319
@@ -0,0 +1,32 @@
+performance: 0.922
+graphic: 0.911
+semantic: 0.851
+device: 0.824
+other: 0.760
+socket: 0.746
+PID: 0.736
+vnc: 0.726
+network: 0.692
+debug: 0.663
+permissions: 0.616
+boot: 0.553
+files: 0.549
+KVM: 0.417
+
+SPARC32-bit SDIV of negative divisor gives wrong result
+Description of problem:
+SDIV of negative divisor gives wrong result because of typo in helper_sdiv(). This is true for QEMU 9.0.0 and earlier.
+
+Place -1 in the Y register and -128 in another reg, then -120 in another register, and do SDIV into a result register; instead of the proper value of 1 for the result, the incorrect value of 0 is produced.
+
+There is a typo in target/sparc/helper.c that causes the divisor to be considered unsigned; this patch fixes it:
+
+```plaintext
+*** helper.c.ori	Tue Apr 23 16:23:45 2024
+--- helper.c	Mon Apr 29 20:14:07 2024
+***************
+*** 121,127 ****
+          return (uint32_t)(b32 < 0 ? INT32_MAX : INT32_MIN) | (-1ull << 32);
+      }
+
+!     a64 /= b;
+      r = a64;
+      if (unlikely(r != a64)) {
+          return (uint32_t)(a64 < 0 ? INT32_MIN : INT32_MAX) | (-1ull << 32);
+--- 121,127 ----
+          return (uint32_t)(b32 < 0 ? INT32_MAX : INT32_MIN) | (-1ull << 32);
+      }
+
+!     a64 /= b32;
+      r = a64;
+      if (unlikely(r != a64)) {
+          return (uint32_t)(a64 < 0 ? INT32_MIN : INT32_MAX) | (-1ull << 32);
+```
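+
+A small host-side illustration of why the typo matters (a sketch of the
+C arithmetic only, not the emulator code): dividing the signed 64-bit
+dividend by the unsigned divisor promotes -120 to a huge positive value,
+so the quotient truncates to 0.
+
+```c
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+    int64_t a64 = -128;          /* dividend, as in the example above */
+    uint32_t b = (uint32_t)-120; /* divisor read as unsigned (the typo) */
+    int32_t b32 = -120;          /* divisor read as signed (the fix) */
+
+    printf("a64 / b   = %lld\n", (long long)(a64 / b));   /* 0 (wrong)   */
+    printf("a64 / b32 = %lld\n", (long long)(a64 / b32)); /* 1 (correct) */
+    return 0;
+}
+```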
diff --git a/results/classifier/zero-shot/108/performance/2325 b/results/classifier/zero-shot/108/performance/2325
new file mode 100644
index 000000000..6f52535ec
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2325
@@ -0,0 +1,26 @@
+performance: 0.980
+KVM: 0.964
+device: 0.837
+graphic: 0.830
+debug: 0.735
+semantic: 0.625
+permissions: 0.533
+other: 0.470
+vnc: 0.397
+socket: 0.397
+PID: 0.265
+boot: 0.255
+files: 0.213
+network: 0.168
+
+[Performance Regression] Constant freezes on Alder lake and Raptor lake CPUs.
+Description of problem:
+Strangely, no logs are recorded. The guest just freezes. It can however be rescued by a simple pause and unpause.
+
+This issue only happens when using the KVM hypervisor. Other hypervisors are fine.
+
+This issue does NOT happen when I tested my Intel Core i7 8700K.
+Steps to reproduce:
+1. Create a basic virtual machine for Windows 11 (Or 10).
+2. Run it for about 5 - 30 minutes (Sometimes it happens in 20 seconds or even less).
+3. The problem should occur.
diff --git a/results/classifier/zero-shot/108/performance/2365 b/results/classifier/zero-shot/108/performance/2365
new file mode 100644
index 000000000..29d936ee4
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2365
@@ -0,0 +1,23 @@
+performance: 0.963
+graphic: 0.946
+boot: 0.873
+device: 0.804
+debug: 0.683
+vnc: 0.642
+semantic: 0.612
+PID: 0.567
+other: 0.549
+socket: 0.380
+KVM: 0.338
+network: 0.278
+files: 0.249
+permissions: 0.100
+
+[Regression v8.2/v9.0+] stuck at SeaBIOS for >30s with 100% CPU (1T)
+Description of problem:
+Starting our Linux direct-kernel-boot VMs with the same args on different hosts/hardware gets stuck at SeaBIOS for 30-60s with 100% single-thread CPU load, starting with v8.2 and also in v9.0 (v9.0.0 and v8.2.3); v8.1.5 is OK. To be clear, everything seems to be fine after that, though I did not do any benchmarks to compare performance. It just delays (re)booting by almost 1 minute, which is a shame, because before that update/regression it was instant and our VMs only take 4s to boot, which is now more like 60s.
+Downgrading to v8.1 instantly fixes it, upgrading to v8.2/v9.0 instantly breaks it.
+Steps to reproduce:
+1. start VM with same args on different versions
+
+somehow if I save this bug with `/label ~"kind::Bug"` it disappears, so I'm unable to add/keep the label
diff --git a/results/classifier/zero-shot/108/performance/2393 b/results/classifier/zero-shot/108/performance/2393
new file mode 100644
index 000000000..af1eaf72d
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2393
@@ -0,0 +1,35 @@
+performance: 0.932
+semantic: 0.874
+boot: 0.872
+graphic: 0.849
+other: 0.817
+device: 0.771
+vnc: 0.700
+PID: 0.654
+socket: 0.543
+KVM: 0.535
+network: 0.512
+permissions: 0.500
+files: 0.442
+debug: 0.320
+
+qemu: seabios hangs for 10~15 sec at boot with `-machine q35`
+Description of problem:
+Whenever I start a virtual machine, SeaBIOS (or at least that's what I see) hangs for about 10~15 seconds. During that time one of the CPU cores runs at 100%.
+This issue isn't new, actually. I've had it for quite some time, I think for at least the last 2 major versions. I haven't looked into it since it isn't a big issue, just annoying.
+Today I looked into it, and as far as I can see, this issue is always present with the flag `-machine q35`, which is the default for my VMs. If I set it to `-machine pc`, booting works as expected. However, I also found a "workaround" that makes the VMs start immediately (with `-machine q35` enabled): simply adding an ISO image to the command line (via -cdrom), even though it's not used.
+
+This means:
+- 15 sec delay: qemu-system-x86_64 -machine q35
+- works immediately: qemu-system-x86_64 -machine q35 -cdrom /mnt/data/vm/isos/openSUSE-Tumbleweed-DVD-x86_64-Snapshot20230303-Media.iso
+
+Please note that most of my VMs usually boot from a kernel image directly (-kernel /mnt/data/vm/kernel/gentoo-latest -initrd /mnt/data/vm/kernel/initrd-v5.cpio.gz), but even in that case setting a cdrom (image) fixes the issue.
+Also, the image needs to be a valid one; if I set an empty file or /dev/null, the issue remains.
+Furthermore, I have the same issue on a second computer. That one also runs Gentoo Linux and is also an AMD Ryzen (in case this is relevant).
+Steps to reproduce:
+1. qemu-system-x86_64 -machine q35
+2. wait about 10-15sec before boot continues
+Additional information:
+I was thinking to add an Screenshot of the hanging boot process, but the only text written there is:
+SeaBIOS (version 1.16.0-20220807_005459-localhost)
+with a blinking cursor below
diff --git a/results/classifier/zero-shot/108/performance/2410 b/results/classifier/zero-shot/108/performance/2410
new file mode 100644
index 000000000..3bfea4304
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2410
@@ -0,0 +1,107 @@
+performance: 0.930
+graphic: 0.926
+device: 0.898
+permissions: 0.885
+network: 0.875
+debug: 0.868
+files: 0.864
+other: 0.858
+vnc: 0.855
+PID: 0.847
+boot: 0.833
+semantic: 0.828
+socket: 0.823
+KVM: 0.814
+
+linux-user: `Setsockopt` with IP_OPTIONS returns "Protocol not available" error
+Description of problem:
+It seems that a call to `setsockopt(sd, SOL_IP, IP_OPTIONS,_)` behaves differently on RISC-V QEMU than on x64 Linux.
+On Linux the syscall returns 0, but on QEMU it fails with `Protocol not available`.
+According to the [man page](https://man7.org/linux/man-pages/man7/ip.7.html), `IP_OPTIONS` on a `SOCK_STREAM` socket "should work".
+Steps to reproduce:
+1. Use the toy program `setsockopt.c` below and compile it without optimizations, like:
+```
+    gcc -Wall -W -Wextra -std=gnu17 -pedantic setsockopt.c -o setsockopt
+```
+
+```
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <arpa/inet.h>
+#include <netinet/in.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+int main() {
+    {
+        int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+        if(sd < 0) {
+            perror("Opening stream socket error");
+            exit(1);
+        }
+        else
+            printf("Opening stream socket....OK.\n");
+
+        struct sockaddr_in local_address = {AF_INET, htons(1234), {inet_addr("255.255.255.255")}, {0}};
+        int err = connect(sd, (struct sockaddr*)&local_address, (socklen_t)16);
+
+        if (err < 0) {
+            perror("Connect error");
+            close(sd);
+        }
+        else
+            printf("Connect...OK.\n");
+    }
+    {
+        int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+        if(sd < 0) {
+            perror("Opening stream socket error");
+            exit(1);
+        }
+        else
+            printf("Opening stream socket....OK.\n");
+
+        char option[4] = {0};
+        if(setsockopt(sd, SOL_IP, IP_OPTIONS, (char *)option, sizeof(option)) < 0) {
+            perror("setsockopt error");
+            close(sd);
+            exit(1);
+        }
+        else
+            printf("setsockopt...OK.\n");
+
+        struct sockaddr_in local_address = {AF_INET, htons(1234), {inet_addr("255.255.255.255")}, {0}};
+        int err = connect(sd, (struct sockaddr*)&local_address, (socklen_t)16);
+
+        if (err < 0) {
+            perror("Connect error");
+            close(sd);
+        }
+        else
+            printf("Connect...OK.\n");
+    }
+    return 0;
+}
+```
+
+
+2. Run the program on QEMU and compare its output with the output from the x64 build. In my case it looks like:
+```
+root@AMDC4705:~/runtime/connect$ ./setsockopt-x64
+Opening stream socket....OK.
+Connect error: Network is unreachable
+Opening stream socket....OK.
+setsockopt...OK.
+Connect error: Network is unreachable
+
+root@AMDC4705:/runtime/connect# ./setsockopt-riscv
+Opening stream socket....OK.
+Connect error: Network is unreachable
+Opening stream socket....OK.
+setsockopt error: Protocol not available
+```
+Additional information:
+In the above demo the option `value` is quite artificial. However, I tried passing many different `option` arguments (with the same `SOL_IP` + `IP_OPTIONS` combination) but always ended up with a `setsockopt` failure.
+On the other hand, on x64 it worked fine. Then I realized that the corresponding path in QEMU is unimplemented: https://github.com/qemu/qemu/blob/master/linux-user/syscall.c#L2141
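+
+For reference, a minimal sketch of what forwarding the option could look
+like (hypothetical: it mirrors the shape of a possible fix, not QEMU's
+actual code; copying the buffer from guest memory and converting errno
+back to the target are omitted):
+
+```c
+#include <errno.h>
+#include <netinet/in.h>
+#include <sys/socket.h>
+
+/* Hypothetical helper: pass the IP_OPTIONS bytes (already copied from
+ * guest memory) to the host kernel unchanged. */
+static long do_setsockopt_ip_options(int sockfd, const void *optval,
+                                     socklen_t optlen)
+{
+    if (setsockopt(sockfd, SOL_IP, IP_OPTIONS, optval, optlen) < 0) {
+        return -errno;
+    }
+    return 0;
+}
+```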
diff --git a/results/classifier/zero-shot/108/performance/2460 b/results/classifier/zero-shot/108/performance/2460
new file mode 100644
index 000000000..b5ebecdf4
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2460
@@ -0,0 +1,23 @@
+performance: 0.998
+semantic: 0.903
+graphic: 0.880
+device: 0.695
+other: 0.671
+network: 0.551
+vnc: 0.543
+socket: 0.528
+files: 0.449
+PID: 0.387
+permissions: 0.375
+boot: 0.359
+debug: 0.351
+KVM: 0.082
+
+Significant performance degradation of qemu-x86_64 starting from version 3 on aarch64
+Description of problem:
+When I ran CoreMark with different QEMU user-mode versions (guest x86-64 -> host arm64), I found that performance was highest with the QEMU 2.x versions, and there was a significant performance degradation starting from QEMU version 3. What is the reason?
+
+| qemu version   | 2.5.1       | 2.8.0       | 2.9.0       | 2.9.1       | 3.0.0       | 4.0.0       | 5.2.0      | 6.2.0       | 7.2.13      | 8.2.6       | 9.0.1       |
+|----------------|-------------|-------------|-------------|-------------|-------------|-------------|------------|-------------|-------------|-------------|-------------|
+| coremark score | 3905.995703 | 4465.947153 | 4534.119247 | 4538.577912 | 1167.337886 | 1163.399453 | 928.348384 | 1327.051954 | 1301.659616 | 1034.714677 | 1085.304971 |
diff --git a/results/classifier/zero-shot/108/performance/2551 b/results/classifier/zero-shot/108/performance/2551
new file mode 100644
index 000000000..234b5834f
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2551
@@ -0,0 +1,28 @@
+performance: 0.993
+graphic: 0.974
+device: 0.854
+boot: 0.784
+semantic: 0.704
+debug: 0.586
+vnc: 0.586
+network: 0.492
+PID: 0.465
+KVM: 0.291
+other: 0.234
+socket: 0.211
+files: 0.163
+permissions: 0.144
+
+RTC time could run slow 3s than host time when clock=vm & base=UTC
+Description of problem:
+When starting qemu with `-rtc base=utc,clock=vm`, the guest time can sometimes run 3s slower than the host. There's normally no problem (and it goes unnoticed) because we often start an NTP service, which adjusts the system time. But let's talk about the case where the NTP service hasn't been enabled (for example, the system has just booted).
+
+After inspecting the code, I found that there are two problems we should think about:
+#
+Steps to reproduce:
+1. start vm with `-rtc base=utc,clock=vm`
+2. disable NTP (OS specific)`systemctl disable --now ntpd;systemctl disable --now ntpdate`
+3. reboot in the guest
+4. after guest started, compare guest time with host time(at the same time) `date +'%F %T.%3N'`
+Additional information:
+
diff --git a/results/classifier/zero-shot/108/performance/2565 b/results/classifier/zero-shot/108/performance/2565
new file mode 100644
index 000000000..df98cb5de
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2565
@@ -0,0 +1,28 @@
+performance: 0.995
+graphic: 0.965
+device: 0.793
+debug: 0.768
+other: 0.704
+semantic: 0.700
+PID: 0.595
+KVM: 0.588
+permissions: 0.471
+vnc: 0.242
+socket: 0.238
+boot: 0.175
+files: 0.116
+network: 0.088
+
+Bisected: 176e3783f2ab14 results in a heavy performance regression with the SDL interface
+Description of problem:
+With the patch 176e3783f2ab14, a significant 3D performance regression was introduced when using the SDL GUI and VirGL. Before the patch, glxgears runs at about 4000 FPS on my machine; with the patch, this drops to about 150 FPS, and if one moves the mouse, the reported frame rate drops even more.
+Steps to reproduce:
+1. Run the qemu like given above with a current Debian-SID guest
+2. Start glxgears from a terminal 
+3. Move the mouse continuously to see the extra drop in frame rate
+Additional information:
+* (Guest) OpenGL Renderer string: virgl (AMD Radeon RX 6700 XT (radeonsi, navi22, LLVM 18.1.8 ...)
+* Reverting the commit 176e3783f2ab14 fixes the problem on SDL 
+* I don't think the host kernel version is an issue here (namely the KVM patches that are required to run Venus on discrete graphics cards) 
+* I've seen a similar issue when using GTK, but unlike with SDL it's already present in version 7.2.11 (the one I used as a "good" base when I was bisecting this regression), so I was not able to bisect it yet.
+* I've looked around in the code and I'm aware that the commit *shouldn't* have the impact it seems to have. I can only assume that there is some unexpected side effect when creating the otherwise unused renderer.
diff --git a/results/classifier/zero-shot/108/performance/2572 b/results/classifier/zero-shot/108/performance/2572
new file mode 100644
index 000000000..bddb5d1c1
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2572
@@ -0,0 +1,45 @@
+performance: 0.939
+device: 0.930
+graphic: 0.839
+semantic: 0.727
+PID: 0.716
+vnc: 0.711
+debug: 0.699
+other: 0.684
+network: 0.648
+files: 0.629
+permissions: 0.610
+socket: 0.606
+boot: 0.579
+KVM: 0.545
+
+Guest OS = Windows, qemu: shutdown very slow, memory allocation issue.
+Description of problem:
+Simplifying, the libvirt config:
+```
+<memory unit='KiB'>33554432</memory>
+  <currentMemory unit='KiB'>131072</currentMemory>
+```
+When `<currentMemory>` is set to less than `<memory>`, at/after shutdown of the guest OS the CPU hangs at 100%, and this lasts long: approximately 3-5 minutes.
+If I change it to
+```
+<memory unit='KiB'>33554432</memory>
+  <currentMemory unit='KiB'>33554432</currentMemory>
+```
+then shutdown takes only a few seconds.
+
+The problem does not occur (VM shutdown takes a few seconds) in cases where the balloon device is not used:
+1. `<currentMemory>` equal to `<memory>`
+2. memballoon driver disabled in Windows
+3. memballoon disabled in libvirt with "model=none" (and therefore not passed to the qemu command line)
+Additional information:
+On the guest:
+ * used drivers from virtio-win-0.1.262.iso - memballoon ver 100.95.104.26200
+ * possibly a combination of all or some components
+
+I monitored the following:
+With `virsh dommemstat VMName`, at shutdown time "rss" grows up to MaxMem, but very slowly.
+Also, on `virsh setmem VMName --live --size 32G`,
+rss grows slowly, but takes half the time it takes at plain shutdown (so at shutdown, memory allocation and deallocation seem to happen at the same time).
+
+So something in some or all of the libvirt/qemu/balloon parts is not quite right.
diff --git a/results/classifier/zero-shot/108/performance/2682 b/results/classifier/zero-shot/108/performance/2682
new file mode 100644
index 000000000..20203374c
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2682
@@ -0,0 +1,56 @@
+performance: 0.942
+device: 0.891
+graphic: 0.850
+socket: 0.830
+files: 0.810
+network: 0.803
+KVM: 0.789
+PID: 0.749
+boot: 0.736
+vnc: 0.708
+semantic: 0.707
+permissions: 0.687
+debug: 0.664
+other: 0.483
+
+QEMU throws errors at the beginning of building
+Description of problem:
+QEMU throws errors at the beginning of building:
+```
+ninja: no work to do.
+/tmp/qemu-8.1.5/build/pyvenv/bin/meson introspect --targets --tests --benchmarks | /tmp/qemu-8.1.5/build/pyvenv/bin/python3 -B scripts/mtest2make.py > Makefile.mtest
+pc-bios/optionrom: -fcf-protection=none detected
+pc-bios/optionrom: -fno-pie detected
+pc-bios/optionrom: -no-pie detected
+pc-bios/optionrom: -fno-stack-protector detected
+pc-bios/optionrom: -Wno-array-bounds detected
+pc-bios/optionrom: Assembling multiboot.o
+pc-bios/optionrom: Assembling linuxboot.o
+pc-bios/optionrom: Assembling multiboot_dma.o
+pc-bios/optionrom: Compiling linuxboot_dma.o
+pc-bios/optionrom: Assembling pvh.o
+pc-bios/optionrom: Assembling kvmvapic.o
+pc-bios/optionrom: Compiling pvh_main.o
+pc-bios/optionrom: Linking multiboot.img
+pc-bios/optionrom: Linking linuxboot.img
+pc-bios/optionrom: Linking kvmvapic.img
+pc-bios/optionrom: Extracting raw object multiboot.raw
+/bin/sh: 1: -O: not found
+make[1]: *** [Makefile:53: multiboot.raw] Error 127
+make[1]: *** Waiting for unfinished jobs....
+pc-bios/optionrom: Linking multiboot_dma.img
+pc-bios/optionrom: Extracting raw object linuxboot.raw
+/bin/sh: 1: -O: not found
+make[1]: *** [Makefile:53: linuxboot.raw] Error 127
+make: *** [Makefile:190: pc-bios/optionrom/all] Error 2
+make: *** Waiting for unfinished jobs....
+[1/10003] Generating trace/trace-hw_i2c.h with a custom command
+
+...
+```
+Then the build proceeds. Whether it fails at the end is not reliably reproducible, as it fails one time and builds successfully the next. However, I don't know if these errors will cause runtime problems in the case of a successful build.
+Steps to reproduce:
+1. `../configure --enable-strip --audio-drv-list=alsa --enable-tools --enable-modules`
+2. `make -j16`
+Additional information:
+Configuration log is available here: http://oscomp.hu/depot/qemu-8.1.5-configure.log
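+
+A plausible reading of the `/bin/sh: 1: -O: not found` failure (an
+assumption, not confirmed from the logs): the optionrom rule invokes
+objcopy through a variable, and if that variable expands to nothing the
+shell tries to execute the `-O` flag itself:
+
+```make
+# Hypothetical reconstruction of the failing rule's shape, not the
+# actual pc-bios/optionrom/Makefile contents:
+%.raw: %.img
+	$(OBJCOPY) -O binary $< $@
+# If $(OBJCOPY) expands to an empty string, the shell sees "-O binary ..."
+# as the command line and reports "-O: not found" (exit code 127).
+```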
diff --git a/results/classifier/zero-shot/108/performance/2848 b/results/classifier/zero-shot/108/performance/2848
new file mode 100644
index 000000000..27c89cc86
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2848
@@ -0,0 +1,28 @@
+performance: 0.965
+graphic: 0.758
+device: 0.691
+semantic: 0.657
+vnc: 0.558
+socket: 0.461
+other: 0.405
+PID: 0.317
+network: 0.305
+boot: 0.303
+KVM: 0.214
+files: 0.168
+permissions: 0.166
+debug: 0.132
+
+i386 max_cpus off by one
+Description of problem:
+X86 VMs are currently limited to 255 vCPUs (`mc->max_cpus = 255;` in `pc.c`).
+The first occurrence I can find of this limit is in d3e9db933f416c9f1c04df4834d36e2315952e42 from 2005, where both `MAX_APICS` and `MAX_CPUS` were set to 255. This is becoming relevant for some people as servers with 256 cores become more available.
+
+**Can we increase the limit to 256 vCPUs?** 
+I think so. 
+
+Today, the APIC id limit (see `apic_id_limit` in `x86-common.c`) is based on the CPU id limit. 
+According to a comment for `typedef uint32_t apic_id_t;` (see `topology.h`), we can have 256 APICs, but more APICs require x2APIC support.
+APIC seems to be no hindrance to increasing max_cpus to 256.
+
+**Can we increase the limit to 512?** Maybe not? We need x2APIC support, about which I have no clue. Also, there is always a performance risk of exceeding the size at which current data structures work efficiently.
diff --git a/results/classifier/zero-shot/108/performance/285 b/results/classifier/zero-shot/108/performance/285
new file mode 100644
index 000000000..ec6dabe7a
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/285
@@ -0,0 +1,16 @@
+performance: 0.921
+device: 0.696
+debug: 0.564
+network: 0.496
+graphic: 0.442
+files: 0.271
+PID: 0.217
+other: 0.175
+boot: 0.165
+semantic: 0.141
+permissions: 0.078
+vnc: 0.051
+socket: 0.041
+KVM: 0.008
+
+qemu-user child process hangs when forking due to glib allocation
diff --git a/results/classifier/zero-shot/108/performance/286 b/results/classifier/zero-shot/108/performance/286
new file mode 100644
index 000000000..90f08770a
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/286
@@ -0,0 +1,16 @@
+performance: 0.997
+device: 0.879
+graphic: 0.872
+boot: 0.857
+network: 0.370
+debug: 0.278
+socket: 0.222
+files: 0.211
+permissions: 0.206
+semantic: 0.175
+other: 0.091
+vnc: 0.046
+PID: 0.035
+KVM: 0.001
+
+Performance degradation for WinXP boot time after b55f54bc
diff --git a/results/classifier/zero-shot/108/performance/2906 b/results/classifier/zero-shot/108/performance/2906
new file mode 100644
index 000000000..c90ce92d7
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/2906
@@ -0,0 +1,28 @@
+performance: 0.982
+graphic: 0.802
+device: 0.801
+semantic: 0.617
+other: 0.399
+permissions: 0.351
+files: 0.332
+debug: 0.323
+boot: 0.307
+network: 0.289
+PID: 0.278
+socket: 0.272
+vnc: 0.236
+KVM: 0.065
+
+x86 (32-bit) multicore very slow, but x86-64 is fast (on macOS arm64 host)
+Description of problem:
+Adding more cores doesn't slow down an x86-32 guest on an x86-64 host, nor does it slow down an x86-64 guest on an arm64 host. However, adding extra cores massively slows down an x86-32 guest on an arm64 host.
+Steps to reproduce:
+1. Run 32-bit guest or 32-bit installer
+2.
+3.
+
+I have replicated this over several OSes using homebrew qemu, source-built qemu and UTM. This is not to be confused with a different bug in UTM that caused its version of QEMU to be slow.
+
+This also seems to apply to 32-bit processes in an x86-64 guest.
+Additional information:
+https://github.com/utmapp/UTM/issues/5468
diff --git a/results/classifier/zero-shot/108/performance/343 b/results/classifier/zero-shot/108/performance/343
new file mode 100644
index 000000000..86ca395d1
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/343
@@ -0,0 +1,16 @@
+performance: 0.928
+device: 0.905
+semantic: 0.819
+graphic: 0.464
+network: 0.462
+other: 0.328
+permissions: 0.164
+vnc: 0.141
+boot: 0.139
+files: 0.133
+debug: 0.087
+socket: 0.050
+PID: 0.025
+KVM: 0.009
+
+madvise reports success, but doesn't implement WIPEONFORK.
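+
+A minimal check of the expected MADV_WIPEONFORK semantics (Linux 4.14+;
+a sketch written for this report, not taken from it): after fork, the
+child must see zeroed pages in the marked region.
+
+```c
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+int main(void)
+{
+    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
+                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+    if (p == MAP_FAILED) {
+        return 1;
+    }
+    memset(p, 0xAA, 4096);
+    if (madvise(p, 4096, MADV_WIPEONFORK) != 0) {
+        perror("madvise");
+        return 1;
+    }
+    if (fork() == 0) {
+        /* Under the reported bug, madvise() above returns 0 in qemu-user,
+         * yet the child still reads 0xAA here instead of 0x00. */
+        printf("child sees 0x%02x (expected 0x00)\n", (unsigned char)p[0]);
+        _exit(0);
+    }
+    wait(NULL);
+    return 0;
+}
+```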
diff --git a/results/classifier/zero-shot/108/performance/404 b/results/classifier/zero-shot/108/performance/404
new file mode 100644
index 000000000..16a581037
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/404
@@ -0,0 +1,16 @@
+performance: 0.958
+device: 0.822
+graphic: 0.654
+boot: 0.611
+network: 0.559
+permissions: 0.499
+semantic: 0.489
+other: 0.437
+socket: 0.390
+files: 0.363
+vnc: 0.241
+debug: 0.155
+PID: 0.154
+KVM: 0.036
+
+Windows XP takes much longer to boot in TCG mode since 5.0
diff --git a/results/classifier/zero-shot/108/performance/430 b/results/classifier/zero-shot/108/performance/430
new file mode 100644
index 000000000..23bccf26a
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/430
@@ -0,0 +1,16 @@
+performance: 0.927
+device: 0.921
+debug: 0.513
+graphic: 0.466
+semantic: 0.317
+network: 0.287
+vnc: 0.228
+boot: 0.178
+other: 0.152
+permissions: 0.124
+files: 0.111
+PID: 0.069
+socket: 0.056
+KVM: 0.000
+
+Microsoft Hyper-V acceleration not working
diff --git a/results/classifier/zero-shot/108/performance/445 b/results/classifier/zero-shot/108/performance/445
new file mode 100644
index 000000000..8192e58e1
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/445
@@ -0,0 +1,16 @@
+performance: 0.955
+device: 0.909
+graphic: 0.482
+boot: 0.256
+semantic: 0.253
+other: 0.183
+debug: 0.180
+permissions: 0.114
+vnc: 0.025
+network: 0.023
+files: 0.016
+PID: 0.011
+socket: 0.009
+KVM: 0.001
+
+QEMU + DOS keyboard behavior
diff --git a/results/classifier/zero-shot/108/performance/490484 b/results/classifier/zero-shot/108/performance/490484
new file mode 100644
index 000000000..90d5d6850
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/490484
@@ -0,0 +1,78 @@
+performance: 0.936
+semantic: 0.914
+graphic: 0.910
+device: 0.886
+other: 0.886
+PID: 0.884
+KVM: 0.868
+boot: 0.859
+debug: 0.846
+vnc: 0.833
+permissions: 0.831
+network: 0.789
+socket: 0.715
+files: 0.706
+
+running 64bit client in 64bit host with intel crashes
+
+Binary package hint: qemu-kvm
+
+running windows 7 VM halts on early boot with
+
+kvm: unhandled exit 80000021
+kvm_run returned -22
+
+ProblemType: Bug
+Architecture: amd64
+Date: Mon Nov 30 21:28:54 2009
+DistroRelease: Ubuntu 9.10
+KvmCmdLine: Error: command ['ps', '-C', 'kvm', '-F'] failed with exit code 1: UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
+MachineType: System manufacturer P5Q-PRO
+NonfreeKernelModules: fglrx
+Package: kvm (not installed)
+ProcCmdLine: BOOT_IMAGE=/vmlinuz-2.6.31-14-generic root=UUID=17a8e181-fac7-461e-8cad-8aea97be2536 ro quiet splash
+ProcEnviron:
+ LANGUAGE=en_US:en
+ PATH=(custom, user)
+ LANG=en_US.UTF-8
+ SHELL=/bin/bash
+ProcVersionSignature: Ubuntu 2.6.31-14.48-generic
+SourcePackage: qemu-kvm
+Uname: Linux 2.6.31-14-generic x86_64
+dmi.bios.date: 07/10/2008
+dmi.bios.vendor: American Megatrends Inc.
+dmi.bios.version: 1004
+dmi.board.asset.tag: To Be Filled By O.E.M.
+dmi.board.name: P5Q-PRO
+dmi.board.vendor: ASUSTeK Computer INC.
+dmi.board.version: Rev 1.xx
+dmi.chassis.asset.tag: Asset-1234567890
+dmi.chassis.type: 3
+dmi.chassis.vendor: Chassis Manufacture
+dmi.chassis.version: Chassis Version
+dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvr1004:bd07/10/2008:svnSystemmanufacturer:pnP5Q-PRO:pvrSystemVersion:rvnASUSTeKComputerINC.:rnP5Q-PRO:rvrRev1.xx:cvnChassisManufacture:ct3:cvrChassisVersion:
+dmi.product.name: P5Q-PRO
+dmi.product.version: System Version
+dmi.sys.vendor: System manufacturer
+
+
+
+Thanks for the information.
+
+regards
+chuck
+
+Hey Chuck-
+
+You marked this confirmed... Are you able to reproduce this?
+
+Hi Sarunas-
+
+Were you able to install windows7 and just the reboot failed?  Or are you using a windows7 image that was installed elsewhere (or otherwise)?
+
+Anthony, any idea of the state of 64bit Windows7 on a 64bit QEMU host?
+
+I was able to install windows7 and just the reboot failed. It all works in VirtualBox OSE though.
+
+Looks like the install failed to succeed and there was not an MBR written.
+
diff --git a/results/classifier/zero-shot/108/performance/498523 b/results/classifier/zero-shot/108/performance/498523
new file mode 100644
index 000000000..85fc141b1
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/498523
@@ -0,0 +1,62 @@
+performance: 0.984
+device: 0.926
+other: 0.905
+files: 0.894
+permissions: 0.883
+graphic: 0.877
+semantic: 0.866
+PID: 0.861
+socket: 0.807
+network: 0.805
+vnc: 0.784
+debug: 0.665
+KVM: 0.619
+boot: 0.611
+
+Add on-line write compression support to qcow2
+
+This is a wishlist item.  Launchpad really needs a way for the submitter to indicate this.
+
+It would be really cool if qemu were to support disk compression on-line for writes.
+
+I know this wouldn't be really easy.  Although most OSes use blocks, you can really only count on being able to compress 512-byte sectors, which doesn't give much room for a good compression ratio.  Moreover, the index indicating where in the image file each sector is located would be complex to manage, since the compressed blocks would be variable sized, and you'd want to do some kind of best-fit allocation of space in the image file.  (If you were to make the image file's compressed-block size granularity, say, 64 bytes, you could probably do this best fit in O(1).)  If you were to buffer enough writes, you could group arbitrary sequences of written sectors into blocks to compress (which with writeback could be sent to a helper thread on another CPU, so the throughput would be good).
+
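+A hypothetical sketch of that O(1) best-fit idea (not QEMU code; all names here are made up for illustration): with a 64-byte granularity, every free extent's size is a small multiple of 64 bytes, so free extents can be kept in one list per size class, and best fit becomes an array lookup plus a bounded upward scan.
+
+#include <stddef.h>
+
+#define GRAIN      64                /* allocation granularity in bytes */
+#define MAX_GRAINS 2048              /* largest extent tracked: 128 KiB  */
+
+typedef struct FreeExtent {
+    unsigned long long offset;       /* location in the image file */
+    struct FreeExtent *next;
+} FreeExtent;
+
+/* buckets[n] holds free extents of exactly n*GRAIN bytes */
+static FreeExtent *buckets[MAX_GRAINS + 1];
+
+static void free_extent(FreeExtent *e, size_t grains)
+{
+    e->next = buckets[grains];       /* push onto its size class: O(1) */
+    buckets[grains] = e;
+}
+
+/* Best fit is the smallest class >= the request.  The scan is bounded by
+ * MAX_GRAINS, and with a summary bitmap of non-empty classes it would be
+ * a single find-first-set instruction, i.e. O(1). */
+static FreeExtent *alloc_extent(size_t bytes)
+{
+    size_t n = (bytes + GRAIN - 1) / GRAIN;
+    for (size_t i = n; i <= MAX_GRAINS; i++) {
+        if (buckets[i]) {
+            FreeExtent *e = buckets[i];
+            buckets[i] = e->next;
+            /* A real allocator would split off and re-free the i-n tail. */
+            return e;
+        }
+    }
+    return NULL;                     /* no fit: grow the image file instead */
+}
+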
++1 vote for this feature.
+
+As far as I know, QEMU v5.1 now has support for compression filters, e.g. by creating a qcow2 image with:
+
+ qemu-img create -f qcow2 -o compression_type=zlib image.qcow2 1G
+
+... so I think we can finally mark this ticket here as done.
+
+On 11/19/20 3:39 AM, Thomas Huth wrote:
+> As far as I know, QEMU v5.1 now has support for compression filters,
+> e.g. by creating a qcow2 image with:
+> 
+>  qemu-img create -f qcow2 -o compression_type=zlib image.qcow2 1G
+> 
+> ... so I think we can finally mark this ticket here as done.
+
+That says what compression type to use when writing the entire disk in
+one pass, but not online write compression. I think we may be a bit
+premature in calling this 'fix released', although I'm not certain we
+will ever try to add the feature requested.
+
+> 
+> ** Changed in: qemu
+>        Status: Confirmed => Fix Released
+> 
+
+-- 
+Eric Blake, Principal Software Engineer
+Red Hat, Inc.           +1-919-301-3226
+Virtualization:  qemu.org | libvirt.org
+
+
+
+Ok, sorry, it seems like I misunderstood that new compression_type feature. If the requested feature will likely never be implemented, should we move this to WontFix instead?
+
+The compression filter can be used e.g. with -drive driver=compress,file.driver=qcow2,file.file.filename=foo.qcow2.  However, it shouldn’t be used lightly, as it will only do the right thing in very specific circumstances, namely that every cluster being written to is not already allocated.  So writing to the same cluster twice will not work.  (Which is why I was hesitant to merge this filter, but in the end I was content with the fact that it’s at least difficult enough to use that unsuspecting users hopefully won’t accidentally enable it.)
+
+(It should be noted that this is not a limitation of the compression filter, though, but of qcow2’s implementation (VMDK isn’t any better).  So technically qemu has the feature now, but qcow2 is still missing it.)
+
diff --git a/results/classifier/zero-shot/108/performance/524447 b/results/classifier/zero-shot/108/performance/524447
new file mode 100644
index 000000000..4c0b27ecb
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/524447
@@ -0,0 +1,338 @@
+performance: 0.928
+network: 0.899
+device: 0.896
+other: 0.893
+permissions: 0.887
+PID: 0.879
+vnc: 0.871
+debug: 0.865
+socket: 0.858
+graphic: 0.841
+semantic: 0.839
+KVM: 0.832
+files: 0.811
+boot: 0.795
+
+virsh save is very slow
+
+As reported here: http://www.redhat.com/archives/libvir-list/2009-December/msg00203.html
+
+"virsh save" is very slow - it writes the image at around 1MB/sec on my test system.
+
+(I think I saw a bug report for this issue on Fedora's bugzilla, but I can't find it now...)
+
+Confirmed under Karmic.
+
+Anthony-
+
+Any upstream update here?  Looks like this issue has been reported on the qemu-devel@mailing list.  Curious if or when we might expect a fix for this?
+
+Thanks!
+
+I'm marking this confirmed, since there are external references to this situation happening.
+
+This stops me migrating away from VMware Server to KVM/libvirt. I use suspend (to disk) for backup and when shutting down the host. It would take nearly an hour to save all domains on our server. Why are there so few responses here and on the qemu mailing list? Does nobody care?
+
+I can reproduce with:
+
+x86_64-softmmu/qemu-system-x86_64 -hda ~/images/linux.img -snapshot -m 4G -monitor stdio -enable-kvm
+QEMU 0.12.50 monitor - type 'help' for more information
+(qemu) migrate_set_speed 1G
+(qemu) migrate -d "exec:dd of=foo.img"
+
+On:
+
+commit d9b73e47a3d596c5b33802597ec5bd91ef3348e2
+Author: Corentin Chary <email address hidden>
+Date:   Tue Jun 1 23:05:44 2010 +0200
+
+    vnc: add missing target for vnc-encodings-*.o
+
+
+Even though the rate limit is set at 1G, we're not getting more than 1-2MB/s of migration traffic.
+
+This actually turns out to be related to dd's default block size.  By default, dd uses a block size of 512.  The effect of this is that qemu fills the pipe buffer very quickly, because dd is submitting very small requests (each requiring a read-modify-write).
+
+If you set an explicit block size with dd (via bs=1M), you'll notice a significant improvement in throughput.
+
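+For example, the reproducer above would presumably behave much better as the same monitor session with only an explicit block size added on the dd side:
+
+(qemu) migrate -d "exec:dd of=foo.img bs=1M"
+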
+So I think this turns out to be a libvirt issue, not a qemu issue.
+
+I filed an upstream libvirt bug for the dd block size issue:
+
+https://bugzilla.redhat.com/show_bug.cgi?id=599091
+
+Changing to libvirt, as commentary here and on the upstream bug report by Cole indicates a fix has been committed that improves this performance.
+
+Re-introducing qemu-kvm, as commentary on the qemu-devel mailing list suggests there could be a timing concern causing poor performance.  Leaving libvirt on this report, as upstream libvirt has quoted improved performance from adjusting the block size for dd.  However, the QEMU side feels that the real issue is in the qemu/kvm code.
+
+Thanks.
+
+Do you have a link for the qemu-devel thread? I had a look at http://lists.gnu.org/archive/html/qemu-devel/ but couldn't see anything.
+
+Just a note that the 0.8.1 release available in maverick gives me about
+a 50-second save for a 512M memory image (producing 100M outfile).  The
+patch listed above and suspected of speeding the saves is not in 0.8.1.
+When I hand-apply just that patch, saves take about 8 seconds, but
+restore fails.  Presumably taking the whole of latest git (or 0.8.2
+whenever it is released) will result in both working and fast
+save/restore.
+
+
+You may want to try the patch to qemu that avi just posted to the qemu-devel mailing list. I think this would probably fix your issue.
+
+Iggy, which patch exactly? I don't seem to be able to find it.
+
+Frederic, this patch:
+http://<email address hidden>/msg37743.html
+
+Frederick,
+
+please let me know if you can confirm that this patch fixes it for you.  If you need
+me to set up a ppa with that patch, please let me know.
+
+@earl,
+
+thanks for finding the specific patch!
+
+Will a fix for this go into maverick?
+
+This is quite critical for using kvm/libvirt for virtual server hosting on maverick.
+
+The patch is in 0.13.0, so changing the status.
+
+How should I interpret "Fix Released"?
+
+qemu in maverick is still 0.12.5 and 0.12.3 in lucid.
+
+Will this not be fixed in current stable LTS and non-LTS releases?
+
+Michael Tokarev <email address hidden> writes:
+
+> 03.01.2011 16:23, EsbenHaabendal wrote:
+>> How should I interpret "Fix Released"?
+>> 
+>> qemu in maverick is still 0.12.5 and 0.12.3 in lucid.
+>
+> Not all the world is ubuntu.  In qemu (and qemu-kvm) the
+> issue is fixed in 0.13, which were released quite some
+> time ago.
+>
+>> Will this not be fixed in current stable LTS and non-LTS releases?
+>
+> There's no "stable LTS" and "non-LTS" releases in qemu,
+> there are plain releases.
+
+Ok.  I see.
+
+And the current importance for libvirt (Ubuntu) and qemu-kvm (Ubuntu) is
+marked as "Wishlist".
+
+So my question goes to these two components.  When can we expect to see
+this fixed in current Ubuntu releases, of which I currently count at
+least maverick and lucid.
+
+Hi,
+
+please test the qemu-kvm packages in ppa:serge-hallyn/virt for lucid (0.12.3+noroms-0ubuntu10slowsave2) and maverick (0.12.5+noroms-0ubuntu7slowsave2), which have the proposed patch from upstream.  If they succeed, then I will proceed with the SRU.
+
+
+
+
+
+
+In order to proceed with SRU, we need someone to confirm that the debs in comment #21 or #22 work for them.
+
+Using ubuntu natty narwhal installed today (2011-03-24) I tried to do a snapshot with the help of libvirt. Here are the results using the natty versions of qemu-kvm and libvirt, and using the proposed 'slowsave' packages.
+
+root@koberec:~# time virsh snapshot-create 1
+Domain snapshot 1300968929 created
+
+real    4m39.594s
+user    0m0.000s
+sys     0m0.020s
+root@koberec:~# cd /storage/slowsave/
+root@koberec:/storage/slowsave# dpkg -l | grep -E 'libvirt|qemu'                                                                                                                                                 
+ii  libvirt-bin                     0.8.8-1ubuntu5                           the programs for the libvirt library
+ii  libvirt0                        0.8.8-1ubuntu5                           library for interfacing with different virtualization systems
+ii  qemu-common                     0.14.0+noroms-0ubuntu3                   qemu common functionality (bios, documentation, etc)
+ii  qemu-kvm                        0.14.0+noroms-0ubuntu3                   Full virtualization on i386 and amd64 hardware
+root@koberec:/storage/slowsave# dpkg -r qemu-common qemu-kvm                                                                                                                                                     
+root@koberec:/storage/slowsave# dpkg -i qemu-common_0.12.5+noroms-0ubuntu7.2_all.deb qemu-kvm_0.12.5+noroms-0ubuntu7.2_amd64.deb 
+root@koberec:/storage/slowsave# pkill kvm; sleep 5; service libvirt-bin restart
+root@koberec:/storage/slowsave# time virsh snapshot-create 1
+Domain snapshot 1300969754 created
+
+real    2m22.055s
+user    0m0.000s
+sys     0m0.010s
+root@koberec:/storage/slowsave# qemu-img snapshot -l /storage/debian.qcow2 | tail -n 1
+8         1300969754              57M 2011-03-24 08:29:14   00:03:37.652
+root@koberec:/storage/slowsave# virsh console 1
+Connected to domain vm
+Escape character is ^]
+
+Debian GNU/Linux 5.0 debian ttyS0
+
+debian login: root
+Password: 
+Last login: Thu Mar 24 08:15:18 EDT 2011 on ttyS0
+Linux debian 2.6.26-2-amd64 #1 SMP Thu Sep 16 15:56:38 UTC 2010 x86_64
+
+The programs included with the Debian GNU/Linux system are free software;
+the exact distribution terms for each program are described in the
+individual files in /usr/share/doc/*/copyright.
+
+Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
+permitted by applicable law.
+debian:~# free -m
+             total       used       free     shared    buffers     cached
+Mem:           561         39        521          0          4         16
+-/+ buffers/cache:         19        542
+Swap:          478          0        478
+debian:~# 
+root@koberec:/storage/slowsave# dd if=/dev/urandom of=/storage/emptyfile bs=1M count=40
+40+0 records in
+40+0 records out
+41943040 bytes (42 MB) copied, 5.4184 s, 7.7 MB/s
+
+
+I am not sure if my measurements are relevant to anything in here, but I hope so.
+
+Thanks for that info.  That is unexpected.  Could you send the xml description of the domain you were snapshotting, as well as the format of the backing file (i.e. qemu-img info filename.img) and what filesystem it is stored on (or whether it is LVM)?  I'd like to try to reproduce it.
+
+Since you are seeing this in natty, it seems certain that while your symptom is the same as that in the original bug report, the cause is different.  So it may be best to open a new bug report to track down the new issue in natty.
+
+
+To be clear, please re-install the stock natty packages, do a virsh snapshot-create, and then do 'ubuntu-bug libvirt-bin' to file a new bug.  Then please give the info I asked for in comment 25 in that bug.
+
+Thanks!
+
+In reply to question #26 https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/741887
+
+In reply to question #25: everything is included in #27. Is it enough?
+
+Yes, thanks.
+
+I'd like to help get this fixed, particularly in Lucid.  What can I do?  Does #21 and #22 still need testing?
+
+@Jeff,
+
+they do still need testing.  However at this point new ones need to be generated.  There is a bit of a backlog on libvirt updates  to push.  Depending on how those go, I could get packages into -proposed either next week or in 2-3 weeks.
+
+I'll make a note to queue this, and ping here when I've pushed a fix to -proposed, asking you to test.
+
+Thanks!
+
+Oops, this is for qemu-kvm, not libvirt.  That can go immediately.
+
+(setting importance to medium because it has a moderate impact on a core application, and especially because it has no workaround)
+
+Ok, great!  Thanks for the quick response.  I did just now get finished testing the packages you attached in #21 using my lucid box.  Saves of a 256Mb guest went from ~50 seconds to ~3.  So it does seem to fix the issue.  I can set up a Maverick box if you need it tested there as well.
+
+I checked for basic functionality with the packages you provided and the basics seem to work, (virsh start, stop, restore, save. Guest disk, net, vnc) but I didn't do much more.  How deep should I go testing for regressions?
+
+Actually maverick is waiting for a fix for bug 790145 to be verified, but lucid is free.  I've uploaded the proposed fix to lucid-proposed, it's waiting for an SRU admin to approve it.  I will also post the amd64 lucid .debs at http://people.canonical.com/~serge/qemu-slow-save/.
+
+Hello Chris, or anyone else affected,
+
+Accepted qemu-kvm into lucid-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!
+
+Tested 0.12.3+noroms-0ubuntu9.14 on Lucid amd64 with all available updates.  Save speed is now approx 3 seconds for a 256Mb guest.  Tested virsh with start, stop, save, restore, suspend, resume, shutdown, destroy.  Tested guest with smp, virtio disk, virtio net, vnc display.  Everything worked as expected.
+
+If there's more I can do, let me know.  Thanks!
+
+Jeff thanks for the testing!
+
+I'll take that as verification.. marking verification-done. It just needs to wait 7 days to clear -proposed in case it causes any unintended regressions.
+
+I'm sorry, the lucid qemu-kvm update has been superseded by a security update in 0.12.3+noroms-0ubuntu9.15.
+
+I'm sorry - per the rules listed in https://wiki.ubuntu.com/StableReleaseUpdates, only bugs which are >= high priority are eligible for SRU. If you feel this bug should be high priority, please say so (with rationale) here.
+
+An updated package for lucid through natty will be placed in the ubuntu-virt ppa (https://launchpad.net/~ubuntu-virt/+archive/ppa) as an alternative way to get this fix.
+
+
+The page you referenced doesn't include anything that I can find about the ticket priority level.  It states that "Stable release updates will, in general, only be issued in order to fix high-impact bugs" and provides several examples.  Among them is "Bugs which do not fit under above categories, [security vulnerabilities, severe regressions, or loss of user data] but (1) have an obviously safe patch and (2) affect an application rather than critical infrastructure packages (like X.org or the kernel)."
+
+I submit that this is an "obviously safe patch."  The change is small, simple, isolated, has been tested to work, and doesn't change any interfaces.  Is qemu-kvm considered a "critical infrastructure package" or not?  I don't know the answer to that, but I can see valid arguments both for and against.
+
+I also have an example of a potential data loss situation, though it is admittedly somewhat weak.  I'll spare everyone the narrative but I'll share it if it would be helpful.
+
+Serge - why do you think this can't be SRU'd?  It's already been accepted into lucid-proposed once, then verified, and the only reason it's not in lucid-updates is that it got superseded by a security upload before the 7-day testing period had elapsed.
+
+If you made a new upload to lucid-proposed based on the new security upload I see no reason why it couldn't be accepted and then copied to -updates after it's been verified and the 7-day testing period has ended.
+
+I see activity around this bug is going on and on, but I don't understand -- is the talk about this patch --
+http://anonscm.debian.org/gitweb/?p=collab-maint/qemu-kvm.git;a=commit;h=7e32b4ca0ea280a2e8f4d9ace1a15d5e633d9a95
+
+?
+
+Michael: Yes, that is the correct patch. 
+
+I just wanted to point out that we've had this patch in Debian for ages, and it's been included upstream for a long time too.  Added a debbug reference for this as well.
+
+@Serge @Chris - So it sounds like this _could_ make it into Lucid? Anyone I can bribe to make that happen?
+
+As an aside, I have been running LTS versions for 8 years and I must say it seems we need a different priority scale for LTS. This bug renders the use of kvm in 10.04 very painful, and the plan would be to let that stand for 5 years? It feels like a lot of key improvements are overlooked because they don't make your machine explode, but from a sysadmin's perspective the risks of running the latest versions so that bug fixes trickle in outweigh the missing bits in an unmaintained LTS :/
+
+This issue + this one (https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/555981) makes for a sad day.
+
+Hello Chris, or anyone else affected,
+
+Accepted qemu-kvm into lucid-proposed. The package will build now and be available in a few hours. Please test and give feedback here. See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you in advance!
+
+Lucid 10.04.4 amd64 host.  2.6.32-38-server.  All packages up to date. 
+
+Guest:
+	Win 7 64bit
+	1Gb RAM (all in use in guest)
+	2 vproc
+	VirtIO disk (virtio-win-0.1-22)
+	VirtIO network
+	2 IDE cdroms
+	VNC display
+
+virsh save:
+	0.12.3+noroms-0ubuntu9.17		503.8 seconds
+	0.12.3+noroms-0ubuntu9.18		26.4 seconds
+
+Tested virsh start, stop, save, restore, suspend, resume, shutdown, destroy.  All works as expected.  Guest functionality unchanged.
+
+Thanks Jeff! Barring any regressions being reported, this should arrive in lucid-updates around the 24th.
+
+Is something holding up the release to lucid-updates?
+
+Ben, yes, sorry I missed the fact that there was already another bug that needs verification in lucid-proposed. Bug #592010 needs to be verified, or reverted, before this one can proceed to lucid-updates. Verification is tricky, since one needs to do a hardy -> lucid upgrade to verify it.
+
+I'll go stand up some vms to test that one out.
+
+I tested that other bug.  As far as I can tell it is not fixed.  I haven't gotten any sort of response on it for a week.  So... now what?
+
+This bug was fixed in the package qemu-kvm - 0.12.3+noroms-0ubuntu9.18
+
+---------------
+qemu-kvm (0.12.3+noroms-0ubuntu9.18) lucid-proposed; urgency=low
+
+  [ Michael Tokarev ]
+  * QEMUFileBuffered:-indicate-that-were-ready-when-the-underlying-file-is-ready.diff
+   (patch from upstream to speed up migration dramatically)
+   (closes: #597517) (LP: #524447)
+
+  [ Serge Hallyn ]
+  * debian/control: make qemu-common replace qemu (<< 0.12.3+noroms-0ubuntu9.17)
+    (LP: #592010)
+ -- Serge Hallyn <email address hidden>   Mon, 13 Feb 2012 11:24:18 -0600
+
+Just wanted to say thanks to everyone who got this fix out. Works great!
+
+See https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/524447/comments/5
+
+Apparently increasing the dd block size greatly increases the speed of the domain save operation.
+
+commit 20206a4bc9f1293c69eca79290a55a5fa19976d5 in libvirt git changes the dd blocksize to 1M. This decreased the time required for a save of a suspended 512MB guest from 3min47sec to 56sec.
+
+An additional patch avoids the overhead of seeking to a 1M alignment:
+https://www.redhat.com/archives/libvir-list/2010-June/msg00239.html
+
+This is included in 0.8.2. In addition, upstream QEMU has identified & fixed a flaw that had a significant speed impact.
+
diff --git a/results/classifier/zero-shot/108/performance/589231 b/results/classifier/zero-shot/108/performance/589231
new file mode 100644
index 000000000..8cade384a
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/589231
@@ -0,0 +1,29 @@
+performance: 0.963
+KVM: 0.859
+device: 0.837
+graphic: 0.830
+other: 0.735
+network: 0.606
+semantic: 0.550
+PID: 0.494
+socket: 0.422
+debug: 0.416
+boot: 0.392
+files: 0.296
+vnc: 0.273
+permissions: 0.261
+
+cirrus vga is very slow in qemu-kvm-0.12
+
+As has been reported multiple times (*), there was a regression in qemu-kvm from 0.11 to 0.12 which causes a significant slowdown in cirrus vga emulation.  For Windows guests, where the "standard VGA" driver works reasonably well, -vga std is a good workaround. But for e.g. Linux guests, where the vesa driver is painfully slow on its own, that's not a solution.
+
+(*)
+ debian qemu-kvm bug report #574988: http://bugs.debian.org/574988#17
+ debian qemu bugreport (might be related): http://bugs.debian.org/575720
+ kvm mailinglist thread: http://<email address hidden>/msg33459.html
+ another kvm ml thread: http://<email address hidden>/msg32744.html
+
+QEMU 0.12 is pretty much outdated - has this been fixed in a newer version of QEMU? I.e. do you think we can close this bug nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/595 b/results/classifier/zero-shot/108/performance/595
new file mode 100644
index 000000000..be8c9e775
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/595
@@ -0,0 +1,26 @@
+performance: 0.926
+graphic: 0.905
+device: 0.885
+vnc: 0.823
+other: 0.742
+semantic: 0.729
+network: 0.687
+PID: 0.633
+permissions: 0.581
+socket: 0.474
+debug: 0.425
+boot: 0.357
+files: 0.335
+KVM: 0.018
+
+QEMU VNC mouse doesn't move in tablet mode os9
+Description of problem:
+What I am trying to do is have a headless os9 running in QEMU on ubuntu and use the native vnc support in QEMU to access the screen. That is set up and works as expected, but the mouse only works in ps/2 mode, which is clearly very undesirable (the mouse is never lined up). I set it up in tablet mode, and when I am in the QEMU window on the host the mouse works perfectly (I added tablet mode to os9 with: https://github.com/kanjitalk755/macos9-usb-tablet). That same tablet mode results in the mouse not moving at all over vnc; if I ctrl+alt 2 and switch the mouse type away from tablet mode it starts working again, but, as expected, not lined up at all, and I can't get to any buttons on the edges. Has anyone here run into this? Am I the only one using QEMU VNC?
+
+I've thought about running a vnc application on the vm itself, but performance was meh at best. Any tips would be worth a lot to me; it's a sin to say, but I am trying to adapt this into a production environment...
+
+Upon further investigation this seems to be an issue on Linux. I am testing QEMU on Windows and it's working as expected over VNC. That is to say, if QEMU is running on a Windows host, it just works over vnc with tablet mode. So what could be causing the Linux version to not work? I did compile it from source; are there any configure flags I am missing? I am trying to run it on Ubuntu server 21.04.
+Steps to reproduce:
+1.add vnc option to parameters
+2.enable tablet mode and install driver in os9
+3.connect to vnc and mouse doesn't move
diff --git a/results/classifier/zero-shot/108/performance/597 b/results/classifier/zero-shot/108/performance/597
new file mode 100644
index 000000000..6ccc8dff8
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/597
@@ -0,0 +1,34 @@
+performance: 0.931
+device: 0.815
+graphic: 0.769
+other: 0.697
+network: 0.682
+boot: 0.550
+semantic: 0.540
+PID: 0.513
+socket: 0.471
+vnc: 0.388
+debug: 0.348
+permissions: 0.302
+KVM: 0.232
+files: 0.183
+
+sunhme sometimes causes the VM to hang forever
+Description of problem:
+When using sunhme, sometimes on receiving traffic (and doing disk IO?) it will get slower and slower until it becomes entirely unresponsive, which does not happen on the real hardware I have sitting next to me (a Sun Netra T1 running the same OS+kernel, though not the same image).
+
+virtio-net-pci does not, so far, demonstrate the problem, and neither does just sending a lot of traffic out over the sunhme interface, so it appears to require receiving or some more complex interaction.
+
+It doesn't always happen immediately, and it sometimes takes a couple of tries with the command, but when it does happen, the VM is gone.
+
+Output logged to console below.
+Steps to reproduce:
+1. Log into VM (rich/omgqemu)
+2. sudo apt clean;sudo apt update;
+3. If it doesn't lock up the VM, repeat step 2 a few times.
+Additional information:
+Disk image can be found [here](https://www.dropbox.com/s/0oosyf7xej44v9n/sunhme_repro_disk.tgz?dl=0) (tarred in the hope that it does something reasonable with sparseness)
+ 
+Console output can be found [here](https://www.dropbox.com/s/t1wxx41vzv8p3l6/sunhme%20sadness.txt?dl=0)
+
+Ah yes, [the initrd and vmlinux](https://www.dropbox.com/s/t7i4gs7poqaeanz/oops_boot.tgz?dl=0) would help, wouldn't they, though I imagine the ones in the VM itself would boot...
diff --git a/results/classifier/zero-shot/108/performance/597351 b/results/classifier/zero-shot/108/performance/597351
new file mode 100644
index 000000000..2cf131599
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/597351
@@ -0,0 +1,52 @@
+performance: 0.968
+graphic: 0.682
+device: 0.648
+semantic: 0.548
+other: 0.447
+network: 0.415
+socket: 0.313
+PID: 0.182
+permissions: 0.171
+debug: 0.136
+KVM: 0.111
+boot: 0.082
+vnc: 0.043
+files: 0.036
+
+Slow UDP performance with virtio device
+
+I'm working on an app that is very sensitive to round-trip latency
+between the guest and host, and qemu/kvm seems to be significantly
+slower than it needs to be.
+
+The attached program is a ping/pong over UDP.  Call it with a single
+argument to start a listener/echo server on that port.  With three
+arguments it becomes a counted "pinger" that will exit after a
+specified number of round trips for performance measurements.  For
+example:
+
+  $ gcc -o udp-pong udp-pong.c
+  $ ./udp-pong 12345 &                       # start a listener on port 12345
+  $ time ./udp-pong 127.0.0.1 12345 1000000  # time a million round trips
+
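+The attached program itself isn't included here; a minimal sketch of what such a udp-pong.c could look like (reconstructed from the description above, so the details are assumptions) is:
+
+/* udp-pong.c: one arg = echo server on that port;
+ * three args (host port count) = pinger timing `count` round trips. */
+#include <stdio.h>
+#include <stdlib.h>
+#include <arpa/inet.h>
+#include <sys/socket.h>
+
+int main(int argc, char **argv)
+{
+    char buf[64] = "ping";
+    struct sockaddr_in addr = { .sin_family = AF_INET };
+    socklen_t alen = sizeof(addr);
+    int s = socket(AF_INET, SOCK_DGRAM, 0);
+
+    if (argc == 2) {                       /* listener / echo server */
+        addr.sin_port = htons(atoi(argv[1]));
+        addr.sin_addr.s_addr = INADDR_ANY;
+        bind(s, (struct sockaddr *)&addr, sizeof(addr));
+        for (;;) {                         /* echo every datagram back */
+            ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
+                                 (struct sockaddr *)&addr, &alen);
+            sendto(s, buf, n, 0, (struct sockaddr *)&addr, alen);
+        }
+    } else if (argc == 4) {                /* counted pinger */
+        long i, count = atol(argv[3]);
+        addr.sin_port = htons(atoi(argv[2]));
+        inet_pton(AF_INET, argv[1], &addr.sin_addr);
+        for (i = 0; i < count; i++) {      /* one send + one recv = round trip */
+            sendto(s, buf, 4, 0, (struct sockaddr *)&addr, sizeof(addr));
+            recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
+        }
+    } else {
+        fprintf(stderr, "usage: %s port | %s host port count\n",
+                argv[0], argv[0]);
+        return 1;
+    }
+    return 0;
+}
+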
+When run on the loopback device on a single machine (true on the host
+or within a guest), I get about 100k/s.
+
+When run across a port forward using "user" networking on qemu (or
+kvm, the performance is the same) and the default rtl8139 driver (both
+the host and guest are Ubuntu Lucid), I get about 10k/s.  This seems
+very slow, but perhaps unavoidably so?
+
+When run in the same configuration using the "virtio" driver, I get
+only 2k/s.  This is almost certainly a bug in the virtio driver, given
+that it's a paravirtualized device that is 5x slower than the "slow"
+hardware emulation.
+
+I get no meaningful change in performance between kvm/qemu.
+
+
+
+Triaging old bug tickets ... can you still reproduce this issue with the latest version of QEMU? Have you already tried vhost?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/642 b/results/classifier/zero-shot/108/performance/642
new file mode 100644
index 000000000..3aee8c93e
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/642
@@ -0,0 +1,19 @@
+performance: 0.993
+device: 0.887
+graphic: 0.817
+semantic: 0.793
+other: 0.690
+boot: 0.391
+files: 0.356
+network: 0.277
+PID: 0.219
+vnc: 0.206
+debug: 0.127
+socket: 0.056
+permissions: 0.030
+KVM: 0.001
+
+Slow QEMU I/O on macOS host
+Description of problem:
+QEMU on a macOS host gives very low I/O speed. Tested with the fio tool, compared to a Linux host.
+Tested on QEMU v6.1.0 and the recent master.
diff --git a/results/classifier/zero-shot/108/performance/719 b/results/classifier/zero-shot/108/performance/719
new file mode 100644
index 000000000..2096c5c9b
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/719
@@ -0,0 +1,34 @@
+performance: 0.996
+graphic: 0.977
+network: 0.956
+device: 0.890
+PID: 0.780
+semantic: 0.756
+debug: 0.737
+socket: 0.731
+vnc: 0.705
+permissions: 0.609
+KVM: 0.555
+boot: 0.419
+other: 0.348
+files: 0.168
+
+live migration's performance with compression enabled is much worse than compression disabled
+Description of problem:
+
+Steps to reproduce:
+1. Run the guests with a 1Gbps network on the source host and destination host with the QEMU command line
+2. Run some memory work loads on Guest, for example, ./memtester 1G 1
+3. Set migration parameters in QEMU monitor. On source and destination, 
+   execute: #migrate_set_capability compress on
+   Other compression parameters are all default. 
+4. Run migrate command, # migrate -d tcp:10.156.208.154:4000
+5. The results: 
+   - without compression:  total time:  197366 ms   throughput:   937.81 mbps  transferred Ram: 22593703 kbytes 
+   - with compression: total time:  281711 ms   throughput:  90.24 mbps    transferred Ram: 3102898 kbytes  
+
+When compression is enabled, the amount of RAM transferred is reduced a lot, but the throughput drops badly.
+The total time of live migration with compression is longer than without compression.
+I tried with 100G network bandwidth, and it has the same problem.
+Additional information:
+
diff --git a/results/classifier/zero-shot/108/performance/753916 b/results/classifier/zero-shot/108/performance/753916
new file mode 100644
index 000000000..1bceb5098
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/753916
@@ -0,0 +1,49 @@
+performance: 0.983
+graphic: 0.863
+debug: 0.851
+files: 0.845
+boot: 0.836
+other: 0.832
+semantic: 0.800
+network: 0.795
+device: 0.784
+PID: 0.783
+permissions: 0.778
+vnc: 0.741
+socket: 0.688
+KVM: 0.441
+
+performance bug with SeaBios 0.6.x
+
+In my tests SeaBIOS 0.5.1 has the best performance (100% faster).
+I run the qemu port on Windows XP (Phenom II X4 945, 4 GB DDR3 RAM) and Windows XP (Pentium 4, 1 GB DDR RAM).
+
+Hi. Thanks for reporting this issue.
+
+Can you tell us a bit more about the problem?
+I'm not sure what the cause could be, but perhaps we can understand it better with some of the following information (plus anything else you can think of that could be related):
+ - What version of QEMU are you running on each machine?
+ - Did you build it yourself? If so, can you describe how? If not, can you provide a pointer to where you got it?
+ - What are you running as the guest environment(s)?
+ - I'm assuming that Windows XP is the host environment (two different host machines from your description). Which version / service packs do you have installed?
+ - How did you do the tests? For example, what is the benchmarking tool or load that you are using? How are you using those tools / loads? Can you provide the numbers for each host?
+
+I use QEMU to test PEs (preinstalled environments) on pendrives with a .bat script
+
+#
+SET SDL_VIDEODRIVER=directx
+qemu.exe -m 512 -localtime -M pc -hda \\.\physicaldrive1
+#
+
+My workstation runs Windows XP SP3 with all hotfixes, and I use QEMU 0.14.0 (this port: http://www.megaupload.com/?d=8LUG85F9)
+
+I run the syslinux loader for a Linux PLD rescue .iso file
+
+I recorded a test with CamStudio: http://www.megaupload.com/?d=37LDTOS3
+
+OK, from your test.swf file, I assume that the way you're testing is the boot-up of a Linux ISO, and that "100%" is an estimate of boot speed.
+
+I'm really not sure what the problem is. I can only suggest that you try various SeaBIOS versions and try to isolate which version is the problem. It also might be worth seeing if the problem affects other Linux distro boot-up.
+
+SeaBIOS 0.x is pretty outdated nowadays, so I think we should close this bug ... anyway, if you still have problems with SeaBIOS, you should likely report them on the SeaBIOS mailing list (https://www.seabios.org/Mailinglist) instead of using the QEMU bug tracker.
+
diff --git a/results/classifier/zero-shot/108/performance/760976 b/results/classifier/zero-shot/108/performance/760976
new file mode 100644
index 000000000..13a254a54
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/760976
@@ -0,0 +1,44 @@
+performance: 0.988
+boot: 0.721
+device: 0.694
+graphic: 0.632
+semantic: 0.582
+other: 0.506
+PID: 0.495
+network: 0.472
+socket: 0.469
+debug: 0.450
+files: 0.401
+vnc: 0.338
+permissions: 0.305
+KVM: 0.102
+
+Nexenta 3.0.1 fails to install
+
+The latest git version of qemu (commit 420b6c317de87890e06225de6e2f8af7bf714df0) fails to boot Nexenta 3.0.1. I don't know if this is a bug in Nexenta, in QEMU, or both.
+
+You can obtain a bootable image of Nexenta from http://www.nexenta.org/releases/nexenta-core-platform_3.0.1-b134_x86.iso.zip
+
+Host: Linux/x86_64 gcc4.5 ./configure --enable-linux-aio --enable-io-thread --enable-kvm
+
+qemu-img create nexenta3.0.1 3G
+qemu -hda nexenta3.0.1 -cdrom nexenta-core-platform_3.0.1-b134_x86.iso -boot d -k en-us -m 256
+
+Boots to grub OK, but when you hit install you get panic[cpu0]/thread=fec226c0: vmem_hash_delete(d4404690, d445abc0, 0): bad free.
+
+You get the same error with or without -enable-kvm
+
+
+
+I have found that I can get to the installer if I give the -no-acpi argument.
+
+As others have noted, Nexenta is very slow.  To get any sort of speed I used the 32-bit version and disabled QEMU disk caching, thus:
+
+qemu -drive file=nexenta3.0.1,index=0,media=disk,cache=unsafe -cdrom nexenta-core-platform_3.0.1-b134_x86.iso -boot d -k en-us -m 512 -enable-kvm -no-acpi
+
+Even then, performance is painful.
+
+Triaging old bug tickets ... Is this issue still reproducible with the latest version of QEMU?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/zero-shot/108/performance/79834768 b/results/classifier/zero-shot/108/performance/79834768
new file mode 100644
index 000000000..95c9f99ec
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/79834768
@@ -0,0 +1,419 @@
+performance: 0.952
+debug: 0.943
+other: 0.943
+permissions: 0.939
+graphic: 0.933
+semantic: 0.920
+PID: 0.916
+device: 0.915
+socket: 0.912
+files: 0.904
+vnc: 0.885
+boot: 0.880
+KVM: 0.840
+network: 0.830
+
+[Qemu-devel] [BUG] Windows 7 got stuck easily while run PCMark10 application
+
+Hi,
+
+We hit a bug in our testing while running PCMark 10 in a Windows 7 VM:
+the VM got stuck and its wallclock hung after several minutes of running
+PCMark 10. It is quite easy to reproduce the bug with upstream KVM and QEMU.
+
+We found that KVM cannot inject any RTC irq into the VM after it hangs; the
+injection fails in ioapic_set_irq() because an RTC irq is still pending in
+ioapic->irr.
+
+static int ioapic_set_irq(struct kvm_ioapic *ioapic, unsigned int irq,
+                  int irq_level, bool line_status)
+{
+… …
+         if (!irq_level) {
+                  ioapic->irr &= ~mask;
+                  ret = 1;
+                  goto out;
+         }
+… …
+         if ((edge && old_irr == ioapic->irr) ||
+             (!edge && entry.fields.remote_irr)) {
+                  ret = 0;
+                  goto out;
+         }
+
+According to the RTC spec, after the RTC injects a high-level irq, the OS
+reads CMOS register C to clear the irq flag and pull down the irq line.
+
+QEMU emulates the read operation in cmos_ioport_read(), but the guest OS
+first issues a write to say which register the following read will target;
+we use s->cmos_index to record the register to be read next.
+
+In our test, however, we found a possible situation in which a vcpu fails to
+read RTC_REG_C to clear the irq. This can happen when two vcpus are writing
+and reading registers at the same time: for example, vcpu0 wants to read
+RTC_REG_C, so it first writes RTC_REG_C, setting s->cmos_index to RTC_REG_C;
+but before vcpu0 reads register C, vcpu1 starts to read RTC_YEAR and changes
+s->cmos_index to RTC_YEAR with its own write.
+vcpu0's next read will therefore return RTC_YEAR, and in that case we miss
+calling qemu_irq_lower(s->irq) to clear the irq. After this, KVM never
+injects another RTC irq, and the Windows VM hangs.
+static void cmos_ioport_write(void *opaque, hwaddr addr,
+                              uint64_t data, unsigned size)
+{
+    RTCState *s = opaque;
+
+    if ((addr & 1) == 0) {
+        s->cmos_index = data & 0x7f;
+    }
+……
+static uint64_t cmos_ioport_read(void *opaque, hwaddr addr,
+                                 unsigned size)
+{
+    RTCState *s = opaque;
+    int ret;
+    if ((addr & 1) == 0) {
+        return 0xff;
+    } else {
+        switch(s->cmos_index) {
+
+According to the CMOS spec, "any write to PORT 0070h should be followed by an
+action to PORT 0071h or the RTC will be left in an unknown state", but it
+seems that we cannot ensure this sequence in qemu/kvm.
+
+Any ideas ?
+
+Thanks,
+Hailiang
+
+Pls see the trace of kvm_pio:
+
+       CPU 1/KVM-15567 [003] .... 209311.762579: kvm_pio: pio_read at 0x70 size 1 count 1 val 0xff
+       CPU 1/KVM-15567 [003] .... 209311.762582: kvm_pio: pio_write at 0x70 size 1 count 1 val 0x89
+       CPU 1/KVM-15567 [003] .... 209311.762590: kvm_pio: pio_read at 0x71 size 1 count 1 val 0x17
+       CPU 0/KVM-15566 [005] .... 209311.762611: kvm_pio: pio_write at 0x70 size 1 count 1 val 0xc
+       CPU 1/KVM-15567 [003] .... 209311.762615: kvm_pio: pio_read at 0x70 size 1 count 1 val 0xff
+       CPU 1/KVM-15567 [003] .... 209311.762619: kvm_pio: pio_write at 0x70 size 1 count 1 val 0x88
+       CPU 1/KVM-15567 [003] .... 209311.762627: kvm_pio: pio_read at 0x71 size 1 count 1 val 0x12
+       CPU 0/KVM-15566 [005] .... 209311.762632: kvm_pio: pio_read at 0x71 size 1 count 1 val 0x12
+       CPU 1/KVM-15567 [003] .... 209311.762633: kvm_pio: pio_read at 0x70 size 1 count 1 val 0xff
+       CPU 0/KVM-15566 [005] .... 209311.762634: kvm_pio: pio_write at 0x70 size 1 count 1 val 0xc    <--- first write to 0x70: cmos_index = 0xc & 0x7f = 0xc
+       CPU 1/KVM-15567 [003] .... 209311.762636: kvm_pio: pio_write at 0x70 size 1 count 1 val 0x86   <--- second write to 0x70: cmos_index = 0x86 & 0x7f = 0x6, overwriting the cmos_index set by the first write
+       CPU 0/KVM-15566 [005] .... 209311.762641: kvm_pio: pio_read at 0x71 size 1 count 1 val 0x6     <--- vcpu0 reads 0x6 because cmos_index is now 0x6
+       CPU 1/KVM-15567 [003] .... 209311.762644: kvm_pio: pio_read at 0x71 size 1 count 1 val 0x6     <--- vcpu1 reads 0x6
+       CPU 1/KVM-15567 [003] .... 209311.762649: kvm_pio: pio_read at 0x70 size 1 count 1 val 0xff
+       CPU 1/KVM-15567 [003] .... 209311.762669: kvm_pio: pio_write at 0x70 size 1 count 1 val 0x87
+       CPU 1/KVM-15567 [003] .... 209311.762678: kvm_pio: pio_read at 0x71 size 1 count 1 val 0x1
+       CPU 1/KVM-15567 [003] .... 209311.762683: kvm_pio: pio_read at 0x70 size 1 count 1 val 0xff
+       CPU 1/KVM-15567 [003] .... 209311.762686: kvm_pio: pio_write at 0x70 size 1 count 1 val 0x84
+       CPU 1/KVM-15567 [003] .... 209311.762693: kvm_pio: pio_read at 0x71 size 1 count 1 val 0x10
+       CPU 1/KVM-15567 [003] .... 209311.762699: kvm_pio: pio_read at 0x70 size 1 count 1 val 0xff
+       CPU 1/KVM-15567 [003] .... 209311.762702: kvm_pio: pio_write at 0x70 size 1 count 1 val 0x82
+       CPU 1/KVM-15567 [003] .... 209311.762709: kvm_pio: pio_read at 0x71 size 1 count 1 val 0x25
+       CPU 1/KVM-15567 [003] .... 209311.762714: kvm_pio: pio_read at 0x70 size 1 count 1 val 0xff
+       CPU 1/KVM-15567 [003] .... 209311.762717: kvm_pio: pio_write at 0x70 size 1 count 1 val 0x80
+
+
+Regards,
+-Gonglei
+
+
+On 01/12/2017 08:08, Gonglei (Arei) wrote:
+> First write to 0x70, cmos_index = 0xc & 0x7f = 0xc
+>        CPU 0/KVM-15566 kvm_pio: pio_write at 0x70 size 1 count 1 val 0xc
+> Second write to 0x70, cmos_index = 0x86 & 0x7f = 0x6
+>        CPU 1/KVM-15567 kvm_pio: pio_write at 0x70 size 1 count 1 val 0x86
+> vcpu0 read 0x6 because cmos_index is 0x6 now:
+>        CPU 0/KVM-15566 kvm_pio: pio_read at 0x71 size 1 count 1 val 0x6
+> vcpu1 read 0x6:
+>        CPU 1/KVM-15567 kvm_pio: pio_read at 0x71 size 1 count 1 val 0x6
+
+This seems to be a Windows bug.  The easiest workaround that I
+can think of is to clear the interrupts already when 0xc is written,
+without waiting for the read (because REG_C can only be read).
+
+What do you think?
+
+Thanks,
+
+Paolo
+
+I also think it's a Windows bug; the problem is that it doesn't occur on the
+Xen platform. And there is some other work that needs to be done while
+reading REG_C, so I wrote that patch.
+
+Thanks,
+Gonglei
+
+On 01/12/2017 18:45, Gonglei (Arei) wrote:
+> I also think it's windows bug, the problem is that it doesn't occur on
+> xen platform.
+It's a race, it may just be that RTC PIO is faster in Xen because it's
+implemented in the hypervisor.
+
+I will try reporting it to Microsoft.
+
+Thanks,
+
+Paolo
+
+
+On 2017/12/2 2:37, Paolo Bonzini wrote:
+> On 01/12/2017 18:45, Gonglei (Arei) wrote:
+>> I also think it's windows bug, the problem is that it doesn't occur on
+>> xen platform.
+> It's a race, it may just be that RTC PIO is faster in Xen because it's
+> implemented in the hypervisor.
+No. In Xen it does not have such a problem because Xen injects the RTC irq
+without checking whether the previous irq has been cleared, a check which we
+do have in KVM:
+
+static int ioapic_set_irq(struct kvm_ioapic *ioapic, unsigned int irq,
+        int irq_level, bool line_status)
+{
+   ... ...
+    if (!irq_level) {
+        ioapic->irr &= ~mask;  /* clear the RTC irq in irr, or we cannot
+                                  inject the RTC irq again */
+        ret = 1;
+        goto out;
+    }
+
+I agree that we should move the operation of clearing the RTC irq from
+cmos_ioport_read() to cmos_ioport_write() to ensure the action gets done.
+
+Thanks,
+Hailiang
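+
+As a minimal sketch of the idea agreed on above (a hypothetical patch against
+QEMU's mc146818rtc.c; names like RTC_REG_C, s->cmos_data and s->irq follow
+that file, but the exact integration is simplified here): clear the pending
+irq as soon as the guest selects REG_C at port 0x70, instead of waiting for
+the read of port 0x71 that a racing vcpu may redirect.
+
+static void cmos_ioport_write(void *opaque, hwaddr addr,
+                              uint64_t data, unsigned size)
+{
+    RTCState *s = opaque;
+
+    if ((addr & 1) == 0) {
+        s->cmos_index = data & 0x7f;
+        /* REG_C is read-only, so selecting it can only mean the guest is
+         * about to read it to acknowledge the irq.  Acking here avoids
+         * losing the ack when another vcpu overwrites s->cmos_index
+         * before this vcpu's read of port 0x71 lands. */
+        if (s->cmos_index == RTC_REG_C) {
+            s->cmos_data[RTC_REG_C] = 0x00;
+            qemu_irq_lower(s->irq);
+        }
+    }
+    /* ... rest of the write handling unchanged ... */
+}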
+
diff --git a/results/classifier/zero-shot/108/performance/80 b/results/classifier/zero-shot/108/performance/80
new file mode 100644
index 000000000..4d5e798b2
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/80
@@ -0,0 +1,16 @@
+performance: 0.947
+device: 0.812
+graphic: 0.500
+network: 0.344
+semantic: 0.339
+other: 0.284
+boot: 0.236
+PID: 0.167
+permissions: 0.154
+debug: 0.108
+vnc: 0.105
+files: 0.058
+KVM: 0.013
+socket: 0.010
+
+[Feature request] qemu-img multi-threaded compressed image conversion
diff --git a/results/classifier/zero-shot/108/performance/849 b/results/classifier/zero-shot/108/performance/849
new file mode 100644
index 000000000..a458d5500
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/849
@@ -0,0 +1,37 @@
+performance: 0.929
+graphic: 0.885
+other: 0.713
+device: 0.676
+semantic: 0.637
+boot: 0.296
+permissions: 0.269
+debug: 0.229
+vnc: 0.217
+PID: 0.205
+KVM: 0.131
+socket: 0.100
+network: 0.060
+files: 0.053
+
+High mouse polling rate stutters some applications
+Description of problem:
+There are a couple of instances where moving the mouse slows down some applications, especially games
+
+https://www.reddit.com/r/VFIO/comments/ect3sd/having_an_issue_with_my_vm_where_games_stutter/
+
+https://www.reddit.com/r/VFIO/comments/n9hwtg/game_fps_drop_on_mouse_input/
+
+https://www.reddit.com/r/VFIO/comments/ln1uwb/evdev_mouse_passthrough_with_1000hz_mouse_causes/
+
+https://www.reddit.com/r/VFIO/comments/se92rq/looking_for_advice_on_poor_gpu_passthrough/
+
+I myself am impacted by this mysterious issue. I'm not quite sure whether this is related to VFIO or QEMU or both, but I'm definitely sure this is a kind of regression somewhere in between, since I had no such issue before.
+Steps to reproduce:
+1. Do a GPU passthrough
+2. Get a mouse capable of a high polling rate like 1000Hz; these are usually categorized as gaming mice
+3. Start any 3D applications, including stuff like Unreal Engine 4 Editor or any games
+4. See mysterious stuttering
+Additional information:
+I'm using an AMD Ryzen 7 3700X CPU as the host, and I have made scripts that pin CPUs to the VM, speculatively hoping for better performance by putting the threads on the same CCX to minimize memory latency as much as possible. This alleviated some terrible lag, but not by much (like 11 FPS to 20 FPS when you move the mouse, which is still crappy compared to 90+ FPS when static).
+
+I suspect there is something wrong with the USB subsystem.
diff --git a/results/classifier/zero-shot/108/performance/861 b/results/classifier/zero-shot/108/performance/861
new file mode 100644
index 000000000..7add5d8e0
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/861
@@ -0,0 +1,16 @@
+performance: 0.982
+device: 0.830
+KVM: 0.468
+network: 0.416
+PID: 0.382
+other: 0.342
+permissions: 0.320
+debug: 0.302
+graphic: 0.295
+semantic: 0.288
+boot: 0.284
+socket: 0.222
+files: 0.199
+vnc: 0.126
+
+Using qemu+kvm is slower than using qemu in rv6 (an xv6 Rust port)
diff --git a/results/classifier/zero-shot/108/performance/919 b/results/classifier/zero-shot/108/performance/919
new file mode 100644
index 000000000..14832f3fe
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/919
@@ -0,0 +1,20 @@
+performance: 0.942
+graphic: 0.872
+device: 0.680
+semantic: 0.600
+debug: 0.547
+other: 0.320
+boot: 0.284
+network: 0.272
+socket: 0.195
+vnc: 0.163
+files: 0.127
+permissions: 0.112
+PID: 0.108
+KVM: 0.006
+
+Slow in Windows
+Description of problem:
+E.g. Win8.1 in QEMU on Windows is very slow, and other OSes are also very slow
+Steps to reproduce:
+Just run a qemu instance
diff --git a/results/classifier/zero-shot/108/performance/985 b/results/classifier/zero-shot/108/performance/985
new file mode 100644
index 000000000..16c2a052a
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/985
@@ -0,0 +1,74 @@
+performance: 0.952
+boot: 0.719
+network: 0.700
+other: 0.600
+device: 0.594
+PID: 0.578
+graphic: 0.573
+debug: 0.442
+semantic: 0.404
+permissions: 0.298
+files: 0.235
+socket: 0.199
+vnc: 0.083
+KVM: 0.058
+
+pkg_add is working very slow on NetBSD
+Description of problem:
+pkg_add is working very slowly; it installs one package in ~30 minutes although the network speed is normal.
+Steps to reproduce:
+1. `wget https://cdn.netbsd.org/pub/NetBSD/NetBSD-9.2/images/NetBSD-9.2-amd64.iso`
+2. `qemu-img create -f qcow2 disk.qcow2 15G`
+3. Install
+```
+qemu-system-x86_64 -m 2048 -enable-kvm \
+  -drive if=virtio,file=disk.qcow2,format=qcow2 \
+  -netdev user,id=mynet0,hostfwd=tcp::7722-:22 \
+  -device e1000,netdev=mynet0 \
+  -cdrom NetBSD-9.2-amd64.iso
+```
+       # Installation steps
+       - 1) Boot Normally
+       - a) Installation messages in English
+       - a) unchanged
+       - a) Install NetBSD to hard disk
+       - b) Yes
+       - a) 15G
+       - a) GPT
+       - a) This is the correct geometry
+       - b) Use default partition sizes
+       - x) Partition sizes are ok
+       - b) Yes
+       - a) Use BIOS console
+       - b) Installation without X11
+       - a) CD-ROM / DVD / install image media
+       - Hit enter to continue
+       - a) configure network (Select defaults here, perform autoconf)
+       - x) Finished configuring
+       - Hit enter to continue
+       - x) Exit Install System
+       - Close QEMU
+4. Run
+```
+ qemu-system-x86_64 -m 2048 \
+  -drive if=virtio,file=disk.qcow2,format=qcow2 \
+  -enable-kvm  \
+  -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:7722-:22 \
+  -device e1000,netdev=mynet0
+```
+5. Login as root
+6. In NetBSD
+```
+export PKG_PATH="http://cdn.NetBSD.org/pub/pkgsrc/packages/NetBSD/$(uname -p)/$(uname -r)/All/" && \
+pkg_add pkgin
+```
+You should see that each package's installation takes ~30 minutes.
+Additional information:
+NetBSD 9.2 was also tested on Debian 11 with QEMU 6.2.0 and showed the same slowness.
+
+NetBSD 7.1 and 8.1 were tested on openSUSE Tumbleweed and showed the same slowness.
+
+OpenBSD's pkg_add works correctly.
+
+I am not sure if it helps, but VirtualBox (at least 6.1) works correctly.
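+As a diagnostic sketch (assuming the guest kernel has virtio-net support), the emulated e1000 NIC in the commands above can be swapped for the paravirtual virtio-net device to check whether the NIC model is implicated:
+```
+qemu-system-x86_64 -m 2048 -enable-kvm \
+  -drive if=virtio,file=disk.qcow2,format=qcow2 \
+  -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:7722-:22 \
+  -device virtio-net-pci,netdev=mynet0
+```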
diff --git a/results/classifier/zero-shot/108/performance/992067 b/results/classifier/zero-shot/108/performance/992067
new file mode 100644
index 000000000..b55634563
--- /dev/null
+++ b/results/classifier/zero-shot/108/performance/992067
@@ -0,0 +1,48 @@
+performance: 0.975
+KVM: 0.938
+graphic: 0.916
+boot: 0.869
+other: 0.847
+vnc: 0.801
+debug: 0.792
+device: 0.786
+permissions: 0.784
+files: 0.772
+semantic: 0.717
+socket: 0.606
+PID: 0.581
+network: 0.579
+
+Windows 2008R2 very slow cold boot when >4GB memory
+
+I've been having a consistent problem booting 2008R2 guests with 4096 MB of RAM or greater. On the initial boot the KVM process starts out with a ~200 MB memory allocation and will use 100% of all CPUs allocated to it. The RES memory of the KVM process slowly rises by around 200 MB every few minutes until it reaches its full memory allocation (several hours in some cases). Whilst this is happening the guest will usually blue screen with the message:
+
+A clock interrupt was not received on a secondary processor within the allocated time interval
+
+If I let the KVM process continue to run, it will eventually allocate the required memory and the guest will run at full speed, usually restarting after the blue screen and booting into startup repair. From there you can restart it and it will boot perfectly. Once booted, the guest has no performance issues at all.
+
+I've tried everything I could think of: removing PAE, playing with huge pages, different kernels, different userspaces, different systems, different backing file systems, different processor feature sets, with or without virtio, etc. My best theory is that the problem is caused by Windows 2008 zeroing out all of its memory on boot, and something is causing this to be held up or slowed to a crawl. The hosts always have enough free memory to boot the guest and are not using swap at all.
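+For reference, the huge-pages experiment looked roughly like this sketch (the page count, mount point, and disk image are assumptions):
+```
+# Reserve 2 MB huge pages (2200 * 2 MB ~= 4.3 GB, enough for a 4 GB guest)
+echo 2200 > /proc/sys/vm/nr_hugepages
+mount -t hugetlbfs none /dev/hugepages
+# Back guest RAM with the hugetlbfs mount
+qemu-kvm -m 4096 -mem-path /dev/hugepages -drive file=win2008r2.qcow2
+```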
+
+Nothing so far has solved the issue. A few observations I've made about it:
+- Large-memory 2008R2 guests seem to boot fine (or with only a small delay) when they are the first to boot on the host after a reboot
+- Sometimes dropping the disk cache (echo 1 > /proc/sys/vm/drop_caches) will cause them to boot faster
+
+
+The hosts I've tried are:
+- All Nehalem-based (5540, 5620 and 5660)
+- Host RAM of 48GB, 96GB and 192GB
+- Storage on NFS, Gluster and local (ext4, xfs and zfs)
+- QED, QCOW and RAW formats
+- Scientific Linux 6.1 with the standard kernel 2.6.32, 2.6.38 and 3.3.1
+- KVM userspaces 0.12, 0.14 and (currently) 0.15.1
+
+This should be resolved by using Hyper-V relaxed timers, which are in the latest development version of QEMU. You would need to add -cpu host,+hv_relaxed to the command line to verify this.
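+A minimal sketch of such an invocation (the disk image, memory size, and vCPU count are assumptions; the -cpu flag is the one named above):
+```
+qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
+  -cpu host,+hv_relaxed \
+  -drive file=win2008r2.qcow2,format=qcow2
+```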
+
+Thanks for the quick reply,
+
+I pulled the latest version from Git, and on the first attempt it said the hv_relaxed feature was not present. I checked the source: the 'hv_relaxed' feature was not included in the 'feature_name' array, so the flag was being discarded before it could be enabled.
+
+Once it was added to the 'feature_name' array the flag was accepted, but the VM crashes on boot with a blue screen and the error message "Phase0_exception", followed by a reboot.
+
+Triaging old bug tickets... QEMU 0.12/0.14/0.15 is pretty outdated nowadays. Can you still reproduce this behavior with the latest version of QEMU? If not, I think we should close this bug...
+