Diffstat (limited to 'results/classifier/118/performance')
-rw-r--r-- results/classifier/118/performance/1001     31
-rw-r--r-- results/classifier/118/performance/1005     207
-rw-r--r-- results/classifier/118/performance/1018     53
-rw-r--r-- results/classifier/118/performance/1031     72
-rw-r--r-- results/classifier/118/performance/1032     46
-rw-r--r-- results/classifier/118/performance/1036987  79
-rw-r--r-- results/classifier/118/performance/105      31
-rw-r--r-- results/classifier/118/performance/1056     31
-rw-r--r-- results/classifier/118/performance/1076     42
-rw-r--r-- results/classifier/118/performance/1114     31
-rw-r--r-- results/classifier/118/performance/1119861  56
-rw-r--r-- results/classifier/118/performance/1126369  53
-rw-r--r-- results/classifier/118/performance/1129957  83
-rw-r--r-- results/classifier/118/performance/1139     108
-rw-r--r-- results/classifier/118/performance/1173490  58
-rw-r--r-- results/classifier/118/performance/1203     75
-rw-r--r-- results/classifier/118/performance/1223     41
-rw-r--r-- results/classifier/118/performance/1228285  96
-rw-r--r-- results/classifier/118/performance/123      31
-rw-r--r-- results/classifier/118/performance/1253563  72
-rw-r--r-- results/classifier/118/performance/1258     31
-rw-r--r-- results/classifier/118/performance/1277     31
-rw-r--r-- results/classifier/118/performance/1312     41
-rw-r--r--results/classifier/118/performance/132138
-rw-r--r--results/classifier/118/performance/132146483
-rw-r--r--results/classifier/118/performance/133133440
-rw-r--r--results/classifier/118/performance/13431
-rw-r--r--results/classifier/118/performance/135452979
-rw-r--r--results/classifier/118/performance/139491
-rw-r--r--results/classifier/118/performance/139993945
-rw-r--r--results/classifier/118/performance/142659373
-rw-r--r--results/classifier/118/performance/144231
-rw-r--r--results/classifier/118/performance/147208397
-rw-r--r--results/classifier/118/performance/147345149
-rw-r--r--results/classifier/118/performance/1494962
-rw-r--r--results/classifier/118/performance/152079
-rw-r--r--results/classifier/118/performance/152270
-rw-r--r--results/classifier/118/performance/152917370
-rw-r--r--results/classifier/118/performance/154644578
-rw-r--r--results/classifier/118/performance/156957
-rw-r--r--results/classifier/118/performance/156949142
-rw-r--r--results/classifier/118/performance/158661155
-rw-r--r--results/classifier/118/performance/158925751
-rw-r--r--results/classifier/118/performance/159033669
-rw-r--r--results/classifier/118/performance/159524092
-rw-r--r--results/classifier/118/performance/16231
-rw-r--r--results/classifier/118/performance/163567
-rw-r--r--results/classifier/118/performance/1661758157
-rw-r--r--results/classifier/118/performance/167238353
-rw-r--r--results/classifier/118/performance/167743
-rw-r--r--results/classifier/118/performance/169359
-rw-r--r--results/classifier/118/performance/171877
-rw-r--r--results/classifier/118/performance/172096942
-rw-r--r--results/classifier/118/performance/172118767
-rw-r--r--results/classifier/118/performance/172398476
-rw-r--r--results/classifier/118/performance/1725707114
-rw-r--r--results/classifier/118/performance/172811689
-rw-r--r--results/classifier/118/performance/173009944
-rw-r--r--results/classifier/118/performance/173142
-rw-r--r--results/classifier/118/performance/173481077
-rw-r--r--results/classifier/118/performance/173557662
-rw-r--r--results/classifier/118/performance/173779
-rw-r--r--results/classifier/118/performance/174346
-rw-r--r--results/classifier/118/performance/1750229770
-rw-r--r--results/classifier/118/performance/176862
-rw-r--r--results/classifier/118/performance/178443
-rw-r--r--results/classifier/118/performance/17931
-rw-r--r--results/classifier/118/performance/179046066
-rw-r--r--results/classifier/118/performance/179805772
-rw-r--r--results/classifier/118/performance/180882449
-rw-r--r--results/classifier/118/performance/181172042
-rw-r--r--results/classifier/118/performance/182040
-rw-r--r--results/classifier/118/performance/183366881
-rw-r--r--results/classifier/118/performance/183449683
-rw-r--r--results/classifier/118/performance/184086574
-rw-r--r--results/classifier/118/performance/1841990416
-rw-r--r--results/classifier/118/performance/185312379
-rw-r--r--results/classifier/118/performance/185782
-rw-r--r--results/classifier/118/performance/185908172
-rw-r--r--results/classifier/118/performance/187334150
-rw-r--r--results/classifier/118/performance/187576278
-rw-r--r--results/classifier/118/performance/187741861
-rw-r--r--results/classifier/118/performance/188072275
-rw-r--r--results/classifier/118/performance/188145090
-rw-r--r--results/classifier/118/performance/188249766
-rw-r--r--results/classifier/118/performance/188340077
-rw-r--r--results/classifier/118/performance/188359384
-rw-r--r--results/classifier/118/performance/188440
-rw-r--r--results/classifier/118/performance/188630644
-rw-r--r--results/classifier/118/performance/1886602160
-rw-r--r--results/classifier/118/performance/189182985
-rw-r--r--results/classifier/118/performance/189208165
-rw-r--r--results/classifier/118/performance/189570385
-rw-r--r--results/classifier/118/performance/189684
-rw-r--r--results/classifier/118/performance/189675470
-rw-r--r--results/classifier/118/performance/1901981106
-rw-r--r--results/classifier/118/performance/191334157
-rw-r--r--results/classifier/118/performance/19231
-rw-r--r--results/classifier/118/performance/192617478
-rw-r--r--results/classifier/118/performance/194050
-rw-r--r--results/classifier/118/performance/201483
-rw-r--r--results/classifier/118/performance/201639
-rw-r--r--results/classifier/118/performance/202331
-rw-r--r--results/classifier/118/performance/206845
-rw-r--r--results/classifier/118/performance/212831
-rw-r--r--results/classifier/118/performance/214941
-rw-r--r--results/classifier/118/performance/215331
-rw-r--r--results/classifier/118/performance/21831
-rw-r--r--results/classifier/118/performance/218350
-rw-r--r--results/classifier/118/performance/218731
-rw-r--r--results/classifier/118/performance/219360
-rw-r--r--results/classifier/118/performance/221633
-rw-r--r--results/classifier/118/performance/224131
-rw-r--r--results/classifier/118/performance/22631
-rw-r--r--results/classifier/118/performance/231947
-rw-r--r--results/classifier/118/performance/232541
-rw-r--r--results/classifier/118/performance/234475
-rw-r--r--results/classifier/118/performance/23631
-rw-r--r--results/classifier/118/performance/236538
-rw-r--r--results/classifier/118/performance/2410122
-rw-r--r--results/classifier/118/performance/241735
-rw-r--r--results/classifier/118/performance/246038
-rw-r--r--results/classifier/118/performance/247231
-rw-r--r--results/classifier/118/performance/248350
-rw-r--r--results/classifier/118/performance/251931
-rw-r--r--results/classifier/118/performance/255143
-rw-r--r--results/classifier/118/performance/256282
-rw-r--r--results/classifier/118/performance/256543
-rw-r--r--results/classifier/118/performance/257260
-rw-r--r--results/classifier/118/performance/268271
-rw-r--r--results/classifier/118/performance/268678
-rw-r--r--results/classifier/118/performance/268931
-rw-r--r--results/classifier/118/performance/27731
-rw-r--r--results/classifier/118/performance/281781
-rw-r--r--results/classifier/118/performance/282153
-rw-r--r--results/classifier/118/performance/284843
-rw-r--r--results/classifier/118/performance/28531
-rw-r--r--results/classifier/118/performance/28631
-rw-r--r--results/classifier/118/performance/288331
-rw-r--r--results/classifier/118/performance/290041
-rw-r--r--results/classifier/118/performance/290643
-rw-r--r--results/classifier/118/performance/30131
-rw-r--r--results/classifier/118/performance/32131
-rw-r--r--results/classifier/118/performance/34331
-rw-r--r--results/classifier/118/performance/40431
-rw-r--r--results/classifier/118/performance/40931
-rw-r--r--results/classifier/118/performance/44531
-rw-r--r--results/classifier/118/performance/47231
-rw-r--r--results/classifier/118/performance/48531
-rw-r--r--results/classifier/118/performance/49048493
-rw-r--r--results/classifier/118/performance/49852377
-rw-r--r--results/classifier/118/performance/53431
-rw-r--r--results/classifier/118/performance/54831
-rw-r--r--results/classifier/118/performance/54931
-rw-r--r--results/classifier/118/performance/58873568
-rw-r--r--results/classifier/118/performance/58923144
-rw-r--r--results/classifier/118/performance/59749
-rw-r--r--results/classifier/118/performance/59735167
-rw-r--r--results/classifier/118/performance/601946107
-rw-r--r--results/classifier/118/performance/6231
-rw-r--r--results/classifier/118/performance/64234
-rw-r--r--results/classifier/118/performance/67233
-rw-r--r--results/classifier/118/performance/68031
-rw-r--r--results/classifier/118/performance/71949
-rw-r--r--results/classifier/118/performance/72179361
-rw-r--r--results/classifier/118/performance/72431
-rw-r--r--results/classifier/118/performance/75391664
-rw-r--r--results/classifier/118/performance/75631
-rw-r--r--results/classifier/118/performance/76097659
-rw-r--r--results/classifier/118/performance/7831
-rw-r--r--results/classifier/118/performance/78131
-rw-r--r--results/classifier/118/performance/8031
-rw-r--r--results/classifier/118/performance/8131
-rw-r--r--results/classifier/118/performance/81531
-rw-r--r--results/classifier/118/performance/82131
-rw-r--r--results/classifier/118/performance/84952
-rw-r--r--results/classifier/118/performance/86131
-rw-r--r--results/classifier/118/performance/86445
-rw-r--r--results/classifier/118/performance/87431
-rw-r--r--results/classifier/118/performance/91935
-rw-r--r--results/classifier/118/performance/92867672
-rw-r--r--results/classifier/118/performance/93356
-rw-r--r--results/classifier/118/performance/98589
-rw-r--r--results/classifier/118/performance/99206763
-rw-r--r--results/classifier/118/performance/99763152
185 files changed, 12386 insertions, 0 deletions
diff --git a/results/classifier/118/performance/1001 b/results/classifier/118/performance/1001
new file mode 100644
index 00000000..19a46a4e
--- /dev/null
+++ b/results/classifier/118/performance/1001
@@ -0,0 +1,31 @@
+performance: 0.824
+device: 0.797
+peripherals: 0.521
+graphic: 0.463
+i386: 0.373
+vnc: 0.371
+x86: 0.355
+risc-v: 0.355
+architecture: 0.307
+network: 0.296
+ppc: 0.293
+debug: 0.282
+arm: 0.258
+boot: 0.210
+user-level: 0.175
+hypervisor: 0.165
+virtual: 0.145
+semantic: 0.137
+files: 0.119
+TCG: 0.089
+PID: 0.082
+socket: 0.074
+assembly: 0.064
+kernel: 0.059
+register: 0.056
+permissions: 0.054
+mistranslation: 0.040
+VMM: 0.005
+KVM: 0.004
+
+query the current cursor position with QMP
diff --git a/results/classifier/118/performance/1005 b/results/classifier/118/performance/1005
new file mode 100644
index 00000000..bf55ad11
--- /dev/null
+++ b/results/classifier/118/performance/1005
@@ -0,0 +1,207 @@
+performance: 0.923
+user-level: 0.923
+register: 0.921
+debug: 0.920
+permissions: 0.915
+graphic: 0.904
+architecture: 0.903
+device: 0.890
+assembly: 0.881
+arm: 0.870
+virtual: 0.870
+PID: 0.858
+semantic: 0.846
+boot: 0.835
+socket: 0.831
+files: 0.821
+ppc: 0.806
+vnc: 0.790
+hypervisor: 0.789
+kernel: 0.781
+TCG: 0.773
+mistranslation: 0.772
+peripherals: 0.768
+network: 0.754
+risc-v: 0.754
+KVM: 0.742
+x86: 0.627
+VMM: 0.620
+i386: 0.550
+
+blockdev-del doesn't work after blockdev-backup with incremental sync, which uses a dirty bitmap
+Description of problem:
+After an incremental backup with a bitmap, blockdev-del doesn't work on the target node.  
+Because of this, the incremental backup image cannot be rebased onto its base node.  
+I referred to this: https://qemu-project.gitlab.io/qemu/interop/bitmaps.html#example-incremental-push-backups-without-backing-files
+Steps to reproduce:
+1. `blockdev-add` incremental backup node
+```
+echo '{"execute":"qmp_capabilities"}{"execute":"blockdev-add","arguments":{"driver":"qcow2","node-name":"incre0","file":{"driver":"file","filename":"/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2"}}}' | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock -N
+
+{
+    "return": {
+    }
+}
+```
+2. `blockdev-backup` with `vda` to target `incre0` node
+```
+echo '{"execute":"qmp_capabilities"}{"execute":"blockdev-backup", "arguments": {"device": "vda", "bitmap":"bitmap0", "target": "incre0", "sync": "incremental", "job-id": "incre0-job", "speed": 536870912}}' | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock -N
+
+{
+    "timestamp": {
+        "seconds": 1651050066,
+        "microseconds": 848370
+    },
+    "event": "JOB_STATUS_CHANGE",
+    "data": {
+        "status": "created",
+        "id": "incre0-job"
+    }
+}
+{
+    "timestamp": {
+        "seconds": 1651050066,
+        "microseconds": 848431
+    },
+    "event": "JOB_STATUS_CHANGE",
+    "data": {
+        "status": "running",
+        "id": "incre0-job"
+    }
+}
+{
+    "timestamp": {
+        "seconds": 1651050066,
+        "microseconds": 848464
+    },
+    "event": "JOB_STATUS_CHANGE",
+    "data": {
+        "status": "paused",
+        "id": "incre0-job"
+    }
+}
+{
+    "timestamp": {
+        "seconds": 1651050066,
+        "microseconds": 848485
+    },
+    "event": "JOB_STATUS_CHANGE",
+    "data": {
+        "status": "running",
+        "id": "incre0-job"
+    }
+}
+{
+    "return": {
+    }
+}
+
+```
+3. `query-block-jobs` to check that `incre0-job` is done
+```
+echo '{"execute":"qmp_capabilities"}{"execute":"query-block-jobs"}' | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock -N
+
+{
+    "return": {
+    }
+}
+{
+    "return": [
+    ]
+}
+```
+4. To release the write lock (needed to rebase incre0.qcow2), `blockdev-del` the node (see the sketch after these steps)
+```
+echo '{"execute":"qmp_capabilities"}{"execute":"blockdev-del","arguments":{"node-name":"incre0"}}' | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock -N
+
+{
+    "return": {
+    }
+}
+```
+5. `qemu-img rebase`
+```
+qemu-img rebase -b base.qcow2 -u incre0.qcow2
+
+qemu-img: Could not open 'incre0.qcow2': Failed to get "write" lock
+Is another process using the image [incre0.qcow2]?
+```
+
+6. check `query-named-block-nodes` after `blockdev-del`
+```
+{
+    "return": [
+        {
+            "iops_rd": 0,
+            "detect_zeroes": "off",
+            "image": {
+                "virtual-size": 53687091200,
+                "filename": "/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2",
+                "cluster-size": 65536,
+                "format": "qcow2",
+                "actual-size": 241340416,
+                "format-specific": {
+                    "type": "qcow2",
+                    "data": {
+                        "compat": "1.1",
+                        "compression-type": "zlib",
+                        "lazy-refcounts": false,
+                        "refcount-bits": 16,
+                        "corrupt": false,
+                        "extended-l2": false
+                    }
+                },
+                "dirty-flag": false
+            },
+            "iops_wr": 0,
+            "ro": false,
+            "node-name": "incre0",
+            "backing_file_depth": 0,
+            "drv": "qcow2",
+            "iops": 0,
+            "bps_wr": 0,
+            "write_threshold": 0,
+            "encrypted": false,
+            "bps": 0,
+            "bps_rd": 0,
+            "cache": {
+                "no-flush": false,
+                "direct": false,
+                "writeback": true
+            },
+            "file": "/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2"
+        },
+        {
+            "iops_rd": 0,
+            "detect_zeroes": "off",
+            "image": {
+                "virtual-size": 240451584,
+                "filename": "/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2",
+                "format": "file",
+                "actual-size": 241340416,
+                "dirty-flag": false
+            },
+            "iops_wr": 0,
+            "ro": false,
+            "node-name": "#block412",
+            "backing_file_depth": 0,
+            "drv": "file",
+            "iops": 0,
+            "bps_wr": 0,
+            "write_threshold": 0,
+            "encrypted": false,
+            "bps": 0,
+            "bps_rd": 0,
+            "cache": {
+                "no-flush": false,
+                "direct": false,
+                "writeback": true
+            },
+            "file": "/mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/incre0.qcow2"
+        },
+        ......
+    ]
+}
+```
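+
+(Note: each `echo ... | nc` above pipes two commands into one connection, yet the transcripts show only a single `return` object, so the reply that belongs to `blockdev-del` itself is easy to miss. A sketch that keeps the connection open long enough to read every reply, assuming the same socket path:
+```
+(echo '{"execute":"qmp_capabilities"}'; \
+ echo '{"execute":"blockdev-del","arguments":{"node-name":"incre0"}}'; \
+ sleep 1) | nc -U /mnt/7b12fe9c-fa0f-4f2a-82b1-3a6cd4e15ae8/temp/qmp.sock
+```
+If the node is still referenced, `blockdev-del` answers with an `"error"` object instead of `{"return": {}}`, which would explain the surviving write lock seen in step 5.)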
+Additional information:
+
diff --git a/results/classifier/118/performance/1018 b/results/classifier/118/performance/1018
new file mode 100644
index 00000000..ef05f36c
--- /dev/null
+++ b/results/classifier/118/performance/1018
@@ -0,0 +1,53 @@
+x86: 0.990
+performance: 0.960
+virtual: 0.959
+boot: 0.934
+device: 0.929
+architecture: 0.900
+hypervisor: 0.882
+graphic: 0.852
+PID: 0.791
+register: 0.781
+user-level: 0.766
+semantic: 0.755
+assembly: 0.742
+kernel: 0.729
+vnc: 0.722
+permissions: 0.695
+ppc: 0.694
+i386: 0.623
+peripherals: 0.606
+debug: 0.606
+socket: 0.554
+risc-v: 0.544
+VMM: 0.527
+TCG: 0.522
+files: 0.484
+arm: 0.484
+mistranslation: 0.462
+KVM: 0.411
+network: 0.375
+
+virtio-scsi-pci with iothread results in 100% CPU in qemu 7.0.0
+Description of problem:
+Top reports constant 100% host CPU usage by `qemu-system-x86`. I have narrowed the issue down to the following section of the config:
+```
+        -object iothread,id=t0 \
+        -device virtio-scsi-pci,iothread=t0,num_queues=4 \
+```
+If this is replaced by
+```
+        -device virtio-scsi-pci \
+```
+then CPU usage is normal (near 0%).
+
+This problem doesn't appear with qemu 6.2.0 where CPU usage is near 0% even with iothread in the qemu options.
+Steps to reproduce:
+1. Download Kubuntu 22.04 LTS ISO (https://cdimage.ubuntu.com/kubuntu/releases/22.04/release/kubuntu-22.04-desktop-amd64.iso),
+2. Create a root virtual drive for the guest with 'qemu-img create -f qcow2 -o cluster_size=4k kubuntu.img 256G',
+3. Start the guest with the config given above,
+4. Connect to the guest (using spicy for example, password 'p'), select "try kubuntu" in grub menu AND later in the GUI, let it boot to plasma desktop, monitor host CPU usage using 'top'.
+
+(there could be a faster way to reproduce it)
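+
+For reference, a minimal self-contained command line that combines the fragments above (a sketch; memory size, display choice and paths are assumptions, only the iothread/virtio-scsi part is taken from the report):
+```
+qemu-system-x86_64 -enable-kvm -cpu host -m 8G \
+        -object iothread,id=t0 \
+        -device virtio-scsi-pci,iothread=t0,num_queues=4 \
+        -device scsi-hd,drive=root \
+        -drive if=none,id=root,file=kubuntu.img,format=qcow2 \
+        -cdrom kubuntu-22.04-desktop-amd64.iso \
+        -spice port=5930,password=p
+```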
+Additional information:
+
diff --git a/results/classifier/118/performance/1031 b/results/classifier/118/performance/1031
new file mode 100644
index 00000000..81f70300
--- /dev/null
+++ b/results/classifier/118/performance/1031
@@ -0,0 +1,72 @@
+performance: 0.955
+hypervisor: 0.877
+socket: 0.864
+architecture: 0.856
+boot: 0.819
+device: 0.790
+kernel: 0.750
+ppc: 0.749
+PID: 0.740
+graphic: 0.666
+user-level: 0.640
+arm: 0.629
+x86: 0.616
+virtual: 0.613
+semantic: 0.601
+register: 0.567
+debug: 0.480
+TCG: 0.411
+VMM: 0.404
+vnc: 0.394
+files: 0.362
+permissions: 0.301
+risc-v: 0.276
+network: 0.274
+mistranslation: 0.203
+peripherals: 0.170
+assembly: 0.152
+i386: 0.036
+KVM: 0.026
+
+Intel 12th Gen CPU not working with QEMU Hyper-V nested virtualization
+Description of problem:
+When booting with Hyper-V + host-passthrough it gets stuck at the TianoCore splash screen and does not change until I reboot, which then loops into Windows diagnostics, which leads nowhere. Done using Windows 10; tried the newest Windows version as well as 1909.
+
+Specs: Manjaro Gnome 5.15 LTS, i5-12600k, z690 gigabyte aorus elite ddr4, rtx 3070ti.
+
+I’ve spent days trying to figure out what was messing with it, and it turned out I could boot when messing with my CPU topology: for some reason my 12th gen + Hyper-V + host-passthrough only works with sockets. Setting cores or threads above 1 causes boot problems, apart from disabling vme, which boots, but then the hypervisor does not load.
+
+This fails (normal host-passthrough):
+```
+  <cpu mode="host-passthrough" check="none" migratable="on">
+    <topology sockets="1" dies="1" cores="6" threads="2"/>
+  </cpu>
+```
+
+This boots (can only change sockets):
+```
+  <cpu mode="host-passthrough" check="none" migratable="on">
+    <topology sockets="12" dies="1" cores="1" threads="1"/>
+  </cpu>
+```
+
+This boots (no hypervisor):
+```
+<cpu mode="host-passthrough" check="partial" migratable="off">
+    <topology sockets="1" dies="1" cores="6" threads="2"/>
+    <feature policy="disable" name="vme"/>
+  </cpu>
+```
+
+No matter what adjustment I make, I cannot change the cores or threads without a boot failure. host-model just does not work: once I boot the machine, the host model changes to Cooperlake.
+
+My current way of bypassing this: I’ve downloaded the QEMU source code, gone through cpu.c and modified the default Skylake-Client CPU model to match my CPU, then added most of my i5-12600k flags manually. This seems to work, with a 35-45% performance drop in CPU and in RAM. Without Hyper-V enabled and using the normal host-passthrough I get near bare metal performance.
+
+Tried with multiple versions of QEMU, EDK2, and loads of kernel versions (to add to this, my i5-12600k does not work on kernel version 5.13 and below), and even went ahead to try Ubuntu with the same problem; my other (i7-9700k) PC works fine with Hyper-V. Also disabled my E-cores through the BIOS, resulting in the same issue. CPU-pinning the P-cores to the guest does not seem to help.
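+
+(For anyone reproducing this without libvirt: as far as I can tell, the failing topology above corresponds to plain QEMU options along the lines of `-cpu host,migratable=on -smp 12,sockets=1,dies=1,cores=6,threads=2`, and the sockets-only variant to `-smp 12,sockets=12,dies=1,cores=1,threads=1`.)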
+Steps to reproduce:
+1. Enable hyper-v in windows features
+2. Restart guest
+3. Boot failure
+Additional information:
+Hyper-V host-passthrough XML:
+https://pst.klgrth.io/paste/yc5wk
diff --git a/results/classifier/118/performance/1032 b/results/classifier/118/performance/1032
new file mode 100644
index 00000000..82d947ee
--- /dev/null
+++ b/results/classifier/118/performance/1032
@@ -0,0 +1,46 @@
+performance: 0.978
+peripherals: 0.929
+architecture: 0.893
+device: 0.876
+virtual: 0.867
+graphic: 0.863
+x86: 0.854
+user-level: 0.813
+hypervisor: 0.793
+files: 0.744
+network: 0.736
+socket: 0.729
+assembly: 0.726
+boot: 0.717
+mistranslation: 0.700
+PID: 0.695
+permissions: 0.674
+TCG: 0.671
+kernel: 0.655
+VMM: 0.614
+risc-v: 0.613
+register: 0.554
+semantic: 0.541
+debug: 0.506
+arm: 0.499
+KVM: 0.498
+vnc: 0.497
+ppc: 0.383
+i386: 0.311
+
+Slow random I/O performance of virtio-blk
+Steps to reproduce:
+1. Download Virtualbox Windows 11 image from https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/
+2. Download virtio-win-iso: `wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.215-2/virtio-win-0.1.215.iso`
+3. Extract WinDev*.zip with `unzip WinDev2204Eval.VirtualBox.zip` and import the extracted Ova in VirtualBox (import WinDev with the option "conversion to vdi" clicked)
+4. `qemu-img convert -f vdi -O raw <YourVirtualBoxVMFolder>/WinDev2204Eval-disk001.vdi <YourQemuImgFolder>/WinDev2204Eval-disk001.img`
+5. Start Windows 11 in Qemu: 
+``` 
+qemu-system-x86_64 -enable-kvm -cpu host -device virtio-blk-pci,scsi=off,drive=WinDevDrive,id=virtio-disk0,bootindex=0  -drive file=<YourQemuImgFolder>/WinDev2204Eval-disk001.img,if=none,id=WinDevDrive,format=raw -net nic -net user,hostname=windowsvm -m 8G -monitor stdio -name "Windows" -usbdevice tablet -device virtio-serial -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -cdrom <YourDownloadFolder>/virtio-win-0.1.215.iso
+```
+6. Win 11 won't boot and will go into recovery mode (even the safe-boot trick doesn't work here); please follow this [answer](https://superuser.com/questions/1057959/windows-10-in-kvm-change-boot-disk-to-virtio#answer-1200899) to load the viostor driver over the recovery cmd
+7. Reboot the VM and it should start
+8. Install CrystalDiskMark
+9. Execute the CrystalDiskMark benchmark
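+
+A host-side baseline (a sketch, assuming fio is available and the raw image path from step 4) can rule out the backing storage itself:
+```
+fio --name=randread-baseline --filename=WinDev2204Eval-disk001.img \
+    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
+    --readonly --runtime=30 --time_based
+```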
+Additional information:
+
diff --git a/results/classifier/118/performance/1036987 b/results/classifier/118/performance/1036987
new file mode 100644
index 00000000..6a325df2
--- /dev/null
+++ b/results/classifier/118/performance/1036987
@@ -0,0 +1,79 @@
+performance: 0.996
+mistranslation: 0.885
+graphic: 0.824
+PID: 0.812
+ppc: 0.783
+permissions: 0.777
+network: 0.769
+user-level: 0.766
+register: 0.754
+socket: 0.751
+architecture: 0.733
+semantic: 0.712
+TCG: 0.700
+files: 0.696
+device: 0.692
+VMM: 0.685
+kernel: 0.659
+hypervisor: 0.654
+KVM: 0.646
+vnc: 0.618
+debug: 0.588
+arm: 0.584
+virtual: 0.568
+peripherals: 0.565
+x86: 0.559
+risc-v: 0.543
+i386: 0.525
+boot: 0.514
+assembly: 0.504
+
+compilation error due to bug in savevm.c
+
+Since 
+
+302dfbeb21fc5154c24ca50d296e865a3778c7da
+
+Add xbzrle_encode_buffer and xbzrle_decode_buffer functions
+    
+    For performance we are encoding long word at a time.
+    For nzrun we use long-word-at-a-time NULL-detection tricks from strcmp():
+    using ((lword - 0x0101010101010101) & (~lword) & 0x8080808080808080) test
+    to find out if any byte in the long word is zero.
+    
+    Signed-off-by: Benoit Hudzia <email address hidden>
+    Signed-off-by: Petter Svard <email address hidden>
+    Signed-off-by: Aidan Shribman <email address hidden>
+    Signed-off-by: Orit Wasserman <email address hidden>
+    Signed-off-by: Eric Blake <email address hidden>
+    
+    Reviewed-by: Luiz Capitulino <email address hidden>
+    Reviewed-by: Eric Blake <email address hidden>
+
+ commit arrived into the master branch, I can't compile qemu at all:
+
+savevm.c:2476:13: error: overflow in implicit constant conversion [-Werror=overflow]
+
+Patch is available at http://patchwork.ozlabs.org/patch/177217/
+
+On 15 August 2012 08:44, Evgeny Voevodin <email address hidden> wrote:
+> Since
+>
+> 302dfbeb21fc5154c24ca50d296e865a3778c7da
+>
+> Add xbzrle_encode_buffer and xbzrle_decode_buffer functions
+>  commit arrived into the master branch, I can't compile qemu at all:
+>
+> savevm.c:2476:13: error: overflow in implicit constant conversion
+> [-Werror=overflow]
+
+Fixed by this patch by Alex yesterday:
+ http://patchwork.ozlabs.org/patch/177217/
+
+(not yet in master)
+
+-- PMM
+
+
+http://git.qemu.org/?p=qemu.git;a=commitdiff;h=a5b71725c7067f6805eb30
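+
+For illustration, the zero-byte test quoted in the commit message works like this
+(a standalone sketch, not the QEMU code itself):
+
+    #include <stdint.h>
+    #include <stdio.h>
+
+    /* Nonzero iff some byte of x is 0x00: subtracting 0x01 from every byte
+     * borrows through exactly the bytes that were zero, and masking with ~x
+     * discards bytes whose high bit was already set before the subtraction. */
+    static int has_zero_byte(uint64_t x)
+    {
+        return ((x - 0x0101010101010101ULL) & ~x & 0x8080808080808080ULL) != 0;
+    }
+
+    int main(void)
+    {
+        printf("%d\n", has_zero_byte(0x1122330044556677ULL)); /* prints 1 */
+        printf("%d\n", has_zero_byte(0x1122334455667788ULL)); /* prints 0 */
+        return 0;
+    }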
+
diff --git a/results/classifier/118/performance/105 b/results/classifier/118/performance/105
new file mode 100644
index 00000000..52360af1
--- /dev/null
+++ b/results/classifier/118/performance/105
@@ -0,0 +1,31 @@
+performance: 0.879
+device: 0.874
+arm: 0.606
+debug: 0.522
+graphic: 0.483
+network: 0.454
+assembly: 0.389
+kernel: 0.388
+risc-v: 0.384
+architecture: 0.377
+PID: 0.323
+socket: 0.312
+mistranslation: 0.305
+register: 0.288
+i386: 0.278
+VMM: 0.278
+ppc: 0.274
+permissions: 0.238
+vnc: 0.234
+semantic: 0.209
+peripherals: 0.205
+virtual: 0.197
+x86: 0.193
+boot: 0.186
+user-level: 0.175
+hypervisor: 0.133
+files: 0.132
+TCG: 0.024
+KVM: 0.011
+
+Gdb hangs when trying to single-step after an invalid instruction
diff --git a/results/classifier/118/performance/1056 b/results/classifier/118/performance/1056
new file mode 100644
index 00000000..69eb4170
--- /dev/null
+++ b/results/classifier/118/performance/1056
@@ -0,0 +1,31 @@
+performance: 0.992
+virtual: 0.946
+architecture: 0.933
+device: 0.928
+arm: 0.866
+graphic: 0.756
+debug: 0.590
+network: 0.450
+hypervisor: 0.413
+ppc: 0.347
+boot: 0.339
+register: 0.326
+PID: 0.288
+semantic: 0.281
+permissions: 0.277
+vnc: 0.230
+files: 0.219
+socket: 0.216
+mistranslation: 0.192
+risc-v: 0.159
+VMM: 0.158
+peripherals: 0.153
+user-level: 0.135
+TCG: 0.123
+kernel: 0.063
+assembly: 0.058
+KVM: 0.003
+i386: 0.002
+x86: 0.001
+
+Bad Performance of Windows 11 ARM64 VM on Windows 11 Qemu 7.0 Host System
diff --git a/results/classifier/118/performance/1076 b/results/classifier/118/performance/1076
new file mode 100644
index 00000000..40a2892e
--- /dev/null
+++ b/results/classifier/118/performance/1076
@@ -0,0 +1,42 @@
+performance: 0.891
+device: 0.882
+graphic: 0.863
+files: 0.841
+peripherals: 0.827
+socket: 0.689
+architecture: 0.657
+boot: 0.627
+semantic: 0.598
+ppc: 0.562
+permissions: 0.536
+network: 0.534
+user-level: 0.527
+register: 0.505
+PID: 0.493
+arm: 0.490
+risc-v: 0.476
+vnc: 0.476
+i386: 0.449
+x86: 0.439
+hypervisor: 0.432
+VMM: 0.339
+TCG: 0.318
+mistranslation: 0.305
+assembly: 0.279
+debug: 0.260
+virtual: 0.258
+kernel: 0.164
+KVM: 0.139
+
+AC97+DirectSound only polls for audio every 10ms with no way to change it
+Description of problem:
+The AC97 device emulation, at least in combination with the DirectSound backend, only polls for audio every 10ms, meaning that DMA interrupts are received at a maximum frequency of 100Hz. This applies regardless of how large the buffers in the AC97's buffer list are, meaning that if one buffer takes less than 10ms to play, glitches can be heard with no possible mitigations on the host system.
+
+I came across this when fiddling with Serenity's own latencies in the AC97 driver and userland mixer. As soon as buffers of less than 512 samples are used, audio becomes glitchy. Based on timing tests, kernel and userland processing of audio combined takes less than 200μs for one buffer, while the lowest average interval between DMA interrupts is almost exactly 10ms.
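+
+(For scale: at a 44.1 kHz sample rate a 512-sample buffer lasts 512 / 44100 ≈ 11.6 ms, just above the observed 10 ms polling interval, while a 256-sample buffer lasts about 5.8 ms and drains before the next poll arrives.)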
+
+No changes to the dsound latency option, as listed [here](https://www.qemu.org/docs/master/system/invocation.html?highlight=dsound), made any difference; I tried as low as 2ms: `-audiodev dsound,id=snd0,latency=2000`. As far as I can tell there are no IRQ- or latency-related options for the AC97 emulation.
+Steps to reproduce:
+1. Use SerenityOS as of the above commit.
+2. Before building, include an audio file in Base/home/anon; most ordinary FLAC, WAV and MP3 files created without options with ffmpeg should work.
+3. Boot Serenity in QEMU on Windows without any special run configuration.
+4. Play the audio file with `aplay <filename>`, hear glitches.
diff --git a/results/classifier/118/performance/1114 b/results/classifier/118/performance/1114
new file mode 100644
index 00000000..dec4d6f7
--- /dev/null
+++ b/results/classifier/118/performance/1114
@@ -0,0 +1,31 @@
+performance: 0.910
+user-level: 0.763
+device: 0.703
+network: 0.663
+debug: 0.636
+graphic: 0.492
+vnc: 0.442
+socket: 0.438
+ppc: 0.322
+architecture: 0.250
+arm: 0.249
+boot: 0.245
+PID: 0.239
+semantic: 0.195
+VMM: 0.137
+TCG: 0.127
+register: 0.102
+i386: 0.086
+virtual: 0.059
+mistranslation: 0.045
+peripherals: 0.045
+risc-v: 0.037
+permissions: 0.030
+files: 0.025
+assembly: 0.015
+x86: 0.014
+KVM: 0.007
+hypervisor: 0.006
+kernel: 0.004
+
+Non-deterministic hang in libvfio-user:functional/test-client-server test causing timeout in CentOS 8 CI job
diff --git a/results/classifier/118/performance/1119861 b/results/classifier/118/performance/1119861
new file mode 100644
index 00000000..89de6952
--- /dev/null
+++ b/results/classifier/118/performance/1119861
@@ -0,0 +1,56 @@
+performance: 0.982
+graphic: 0.919
+device: 0.633
+arm: 0.480
+semantic: 0.456
+ppc: 0.405
+mistranslation: 0.391
+register: 0.354
+permissions: 0.346
+i386: 0.327
+socket: 0.325
+architecture: 0.322
+vnc: 0.308
+boot: 0.258
+user-level: 0.248
+PID: 0.237
+risc-v: 0.227
+network: 0.210
+x86: 0.205
+VMM: 0.174
+kernel: 0.174
+virtual: 0.157
+debug: 0.149
+peripherals: 0.145
+files: 0.137
+assembly: 0.123
+hypervisor: 0.117
+TCG: 0.111
+KVM: 0.033
+
+Poor console performance in Windows 7
+
+As part of its conformance test suite, Wine tests the behavior of the Windows console API. Part of this test involves opening a test console and scrolling things around. The test probably does not need to perform that many scroll operations to achieve its goal. However, as is, it illustrates a significant performance issue in QEmu. Unfortunately it does so by timing out (the tests must run in less than 2 minutes). Here are the run times on a few configurations:
+
+ 10s - QEmu 1.4 + Q9450@2.6GHz + Windows XP + QXL + QXL driver
+  8s - QEmu 1.12 + Opteron 6128 + Windows XP + QXL + QXL driver
+127s - QEmu 1.12 + Opteron 6128 + Windows 7 + cirrus + vga driver
+127s - QEmu 1.12 + Opteron 6128 + Windows 7 + QXL + QXL driver
+147s - QEmu 1.12 + Opteron 6128 + Windows 7 + vmvga + vga driver
+145s - QEmu 1.12 + Opteron 6128 + Windows 7 + vmvga + vmware driver (xpdm, no better with all graphics effects disabled)
+
+ 10s - Metal + Atom N270 + Windows XP + GMA 950 + Intel driver
+  6s - Metal + i5-3317U + Windows 8 + HD4000 + Intel driver
+  3s - VMware + Q9450@2.6GHz + Windows XP + vmvga + vmware driver
+ 65s - VMware + Q9450@2.6GHz + Windows 7 + vmvga + vmware driver
+
+So when running on the bare metal all versions of Windows are about as fast. However in QEmu Windows 7 is almost 16 times slower than Windows XP! VMware is impacted too, but it still maintains a good lead in performance.
+
+Disabling all graphics effects did not help, so it's not clear that the fault lies with Windows 7's compositing desktop window manager. Maybe it has to do with the lack of a proper WDDM driver?
+
+
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+It does seem to be ok now. The test did get simplified to remove parts that were mostly redundant so it runs faster now. But still it now takes the same time, 7 seconds, on the VMware and QEMU Windows 7 VMs. So as far as I'm concerned this can be closed.
+
diff --git a/results/classifier/118/performance/1126369 b/results/classifier/118/performance/1126369
new file mode 100644
index 00000000..dfeba58a
--- /dev/null
+++ b/results/classifier/118/performance/1126369
@@ -0,0 +1,53 @@
+performance: 0.850
+graphic: 0.658
+device: 0.643
+semantic: 0.573
+architecture: 0.452
+kernel: 0.440
+boot: 0.419
+hypervisor: 0.413
+network: 0.411
+ppc: 0.394
+mistranslation: 0.392
+socket: 0.384
+vnc: 0.358
+risc-v: 0.355
+PID: 0.332
+files: 0.326
+permissions: 0.325
+x86: 0.325
+register: 0.315
+debug: 0.309
+arm: 0.302
+virtual: 0.297
+i386: 0.285
+VMM: 0.274
+peripherals: 0.258
+KVM: 0.258
+user-level: 0.257
+TCG: 0.238
+assembly: 0.198
+
+qemu-img snapshot -c is unreasonably slow
+
+Something fishy is going on with qcow2 internal snapshot creation times.  I don't know if this is a regression because I haven't used internal snapshots in the past.
+
+QEMU 1.4-rc2:
+$ qemu-img create -f qcow2 test.qcow2 -o size=50G,preallocation=metadata
+$ time qemu-img snapshot -c new test.qcow2
+real	3m39.147s
+user	0m10.748s
+sys	0m26.165s
+
+(This is on an SSD)
+
+I expect snapshot creation to take under 1 second.
+
+Skipping the bdrv_flush() in update_cluster_refcount() gives a huge speed-up from over 3 minutes down to <1 second.  I think Kevin already discovered this in the past.
+
+Now we need to figure out how to safely perform the updates without flushing after each L2 table refcount increment.
+
+Looking through old bug tickets... has this issue been fixed, i.e. could we close this ticket nowadays? Or is there still something left to do?
+
+Code inspection shows that qcow2_update_cluster_refcount() no longer calls bdrv_flush().  This issue has been fixed.
+
diff --git a/results/classifier/118/performance/1129957 b/results/classifier/118/performance/1129957
new file mode 100644
index 00000000..2af41b74
--- /dev/null
+++ b/results/classifier/118/performance/1129957
@@ -0,0 +1,83 @@
+performance: 0.995
+boot: 0.818
+architecture: 0.798
+graphic: 0.755
+peripherals: 0.735
+device: 0.678
+permissions: 0.673
+vnc: 0.658
+kernel: 0.558
+VMM: 0.535
+files: 0.492
+ppc: 0.483
+debug: 0.469
+hypervisor: 0.458
+i386: 0.452
+semantic: 0.449
+user-level: 0.435
+PID: 0.416
+socket: 0.404
+network: 0.381
+TCG: 0.329
+virtual: 0.275
+register: 0.255
+mistranslation: 0.253
+assembly: 0.212
+risc-v: 0.185
+arm: 0.181
+x86: 0.155
+KVM: 0.130
+
+Performance issue running guest image on qemu compiled for Win32 platform
+
+I'm seeing performance issues when booting a guest image on qemu 1.4.0 compiled for the Win32 platform.
+The same image boots a lot faster on the same computer running qemu/linux on Fedora via VMware, and even running the Win32 executable via Wine performs better than running qemu natively on Win32.
+
+Although I'm not the author of the image, it is located here:
+http://people.freebsd.org/~wpaul/qemu/vxworks.img
+
+All testing has been done on QEMU 1.4.0.
+
+I'm also attaching a couple of gprof logs. For these I have disabled ssp in qemu by removing "-fstack-protector-all" and "-D_FORTIFY_SOURCE=2" from the qemu configure script.
+
+qemu-perf-linux.txt
+================
+Machine - Windows XP - VmWare - Fedora - QEMU
+
+qemu-perf-win32.txt
+=================
+Machine - Windows XP - QEMU
+
+qemu-perf-wine.txt
+================
+Machine - Windows XP - VmWare - Fedora - Wine - QEMU
+
+
+
+
+
+
+
+For linux, the build is done by the native Fedora 18 gcc, 4.7.2
+For Win32, the build is done by Fedora 18's mingw compiler, 4.7.2
+
+Configuration for Win32 (from config.log):
+# Configured with: './configure' '--disable-guest-agent' '--disable-vnc' '--disable-werror' '--extra-cflags=-pg' '--extra-ldflags=-pg' '--target-list=i386-softmmu' '--cross-prefix=i686-w64-mingw32-'
+
+NOTE: debug is not enabled, since it breaks current QEMU build (undefined references to 'ffs')
+
+Configuration for Linux (from config.log):
+# Configured with: './configure' '--disable-guest-agent' '--disable-vnc' '--disable-werror' '--extra-cflags=-pg' '--extra-ldflags=-pg' '--target-list=i386-softmmu' '--enable-debug' '--enable-kvm'
+
+NOTE: although I pass --enable-kvm to configure, I haven't passed it to qemu when running the executables
+
+Commandline for running on Win32 (started from a Cygwin terminal) and also with Fedora+Wine:
+./qemu/i386-softmmu/qemu-system-i386w.exe -L qemu/pc-bios/ vxworks.img
+
+Commandline for running on Fedora:
+./qemu/i386-softmmu/qemu-system-i386 -L qemu/pc-bios/ vxworks.img
+
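+For reference, the profiles attached above were presumably produced the standard gprof way implied by the '-pg' configure flags (a sketch): run the workload, then post-process the gmon.out file that the instrumented binary writes on exit:
+
+gprof ./qemu/i386-softmmu/qemu-system-i386 gmon.out > qemu-perf-linux.txt
+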
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU and the latest version of MinGW? Do you also see the problem with the builds from https://qemu.weilnetz.de/ ? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1139 b/results/classifier/118/performance/1139
new file mode 100644
index 00000000..93e9f514
--- /dev/null
+++ b/results/classifier/118/performance/1139
@@ -0,0 +1,108 @@
+performance: 0.959
+network: 0.927
+device: 0.909
+ppc: 0.884
+files: 0.853
+graphic: 0.852
+socket: 0.845
+mistranslation: 0.842
+peripherals: 0.813
+hypervisor: 0.810
+virtual: 0.807
+architecture: 0.804
+kernel: 0.797
+register: 0.793
+PID: 0.784
+VMM: 0.783
+vnc: 0.777
+boot: 0.774
+assembly: 0.770
+semantic: 0.765
+x86: 0.749
+debug: 0.745
+permissions: 0.734
+user-level: 0.715
+arm: 0.678
+risc-v: 0.676
+i386: 0.661
+KVM: 0.630
+TCG: 0.613
+
+block/nbd.c and drive backup to a remote nbd server
+Description of problem:
+Good afternoon!
+
+I am trying to copy an attached drive's content to a remote NBD server via the drive-backup QMP method. I've tested two very similar ways, but with very different performance. The first backs up directly to an NBD export on another server. The second backs up to the same server, but by connecting through a local /dev/nbd* device.
+
+Exporting qcow2 via nbd:
+```
+(nbd) ~ # qemu-nbd -p 12345 -x backup --cache=none --aio=native --persistent -f qcow2 backup.qcow2
+
+(qemu) ~ # qemu-img info nbd://10.0.0.1:12345/backup
+image: nbd://10.0.0.1:12345/backup
+file format: raw
+virtual size: 10 GiB (10737418240 bytes)
+disk size: unavailable
+```
+
+Starting the drive backup via QMP:
+
+```
+{
+	"execute": "drive-backup",
+	"arguments": {
+		"device": "disk",
+		"sync": "full",
+		"target": "nbd://10.0.0.1:12345/backup",
+		"mode": "existing"
+	}
+}
+```
+
+With process starting qemu notifying about warning:
+
+> warning: The target block device doesn't provide information about the block size and it doesn't have a backing file. The default block size of 65536 bytes is used. If the actual block size of the target exceeds this default, the backup may be unusable
+
+And the backup process is limited to a speed of around 30 MB/s, as watched with iotop.
+
+
+The second way of creating the backup:
+
+Exporting qcow2 via nbd:
+```
+(nbd) ~ # qemu-nbd -p 12345 -x backup --cache=none --aio=native --persistent -f qcow2 backup.qcow2
+```
+
+```
+(qemu) ~ # qemu-img info nbd://10.0.0.1:12345/backup
+image: nbd://10.0.0.1:12345/backup
+file format: raw
+virtual size: 10 GiB (10737418240 bytes)
+disk size: unavailable
+(qemu) ~ # qemu-nbd -c /dev/nbd0 nbd://10.0.0.1:12345/backup
+(qemu) ~ # qemu-img info /dev/nbd0
+image: /dev/nbd0
+file format: raw
+virtual size: 10 GiB (10737418240 bytes)
+disk size: 0 B
+```
+
+Starting the drive backup via QMP to the local nbd device:
+
+```
+{
+	"execute": "drive-backup",
+	"arguments": {
+		"device": "disk",
+		"sync": "full",
+		"target": "/dev/nbd0",
+		"mode": "existing"
+	}
+}
+```
+
+The backup process starts without the previous warning, and the speed is limited to around 100 MB/s (the network limit).
+
+So I have a question: how can I get the same performance without connecting the network target to a local nbd block device on the qemu host?
+
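+One direction that might avoid the guessed block size (a sketch, untested here): expose the NBD target to QEMU as a named block node first, then point the backup job at that node instead of at a `target` filename:
+
+```
+{"execute":"blockdev-add","arguments":{"node-name":"tgt0","driver":"raw","file":{"driver":"nbd","server":{"type":"inet","host":"10.0.0.1","port":"12345"},"export":"backup"}}}
+{"execute":"blockdev-backup","arguments":{"job-id":"bk0","device":"disk","target":"tgt0","sync":"full"}}
+```
+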
+Kind regards
diff --git a/results/classifier/118/performance/1173490 b/results/classifier/118/performance/1173490
new file mode 100644
index 00000000..975d0aee
--- /dev/null
+++ b/results/classifier/118/performance/1173490
@@ -0,0 +1,58 @@
+performance: 0.935
+hypervisor: 0.838
+KVM: 0.795
+graphic: 0.750
+virtual: 0.706
+peripherals: 0.684
+network: 0.677
+architecture: 0.630
+device: 0.589
+user-level: 0.504
+x86: 0.491
+socket: 0.478
+semantic: 0.454
+mistranslation: 0.440
+kernel: 0.431
+files: 0.407
+VMM: 0.400
+permissions: 0.386
+ppc: 0.372
+PID: 0.351
+debug: 0.311
+i386: 0.284
+boot: 0.267
+register: 0.249
+risc-v: 0.243
+vnc: 0.228
+arm: 0.216
+TCG: 0.197
+assembly: 0.130
+
+virtio net adapter driver with kvm slow on winxp
+
+# lsb_release -a
+No LSB modules are available.
+Distributor ID: Ubuntu
+Description:    Ubuntu 12.04.1 LTS
+Release:        12.04
+Codename:       precise
+
+#virsh version 
+Compiled against library: libvirt 1.0.4
+Using library: libvirt 1.0.4
+Using API: QEMU 1.0.4
+Running hypervisor: QEMU 1.2.0
+
+windows xp clean install with spice-guest-tools-0.52.exe from
+  http://spice-space.org/download/windows/spice-guest-tools/spice-guest-tools-0.52.exe
+
+It runs very slowly, and the Interrupts process gets very high CPU usage (above 60%).
+When I switch the net adapter from virtio to the default (rtl8139), it works well.
+
+spice-guest-tools-0.3 works well.
+With spice-guest-tools-0.52 and 0.59, svchost.exe uses 50% CPU.
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1203 b/results/classifier/118/performance/1203
new file mode 100644
index 00000000..0b08adee
--- /dev/null
+++ b/results/classifier/118/performance/1203
@@ -0,0 +1,75 @@
+performance: 0.821
+KVM: 0.819
+virtual: 0.815
+permissions: 0.815
+user-level: 0.803
+register: 0.802
+graphic: 0.797
+hypervisor: 0.777
+peripherals: 0.774
+device: 0.769
+semantic: 0.767
+debug: 0.760
+risc-v: 0.759
+mistranslation: 0.746
+boot: 0.746
+assembly: 0.742
+files: 0.737
+TCG: 0.734
+network: 0.722
+arm: 0.720
+VMM: 0.714
+socket: 0.712
+vnc: 0.698
+PID: 0.691
+architecture: 0.686
+ppc: 0.675
+kernel: 0.664
+i386: 0.644
+x86: 0.574
+
+migration with block-dirty-bitmap (when the disk is big enough) never finishes
+Description of problem:
+When the disk is big enough (4T in this case; it depends on the migration bandwidth) and the VM is migrated with a block dirty bitmap,
+the migration will never finish!
+Steps to reproduce:
+1. **Start up the source VM using the command:**
+
+/usr/libexec/qemu-kvm -name guest=i-00001C,debug-threads=on  -machine pc,accel=kvm,usb=off,dump-guest-core=off -cpu qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 4096 -smp 4,sockets=1,cores=4,threads=1   -uuid 991c2994-e1c9-48c0-9554-6b23e43900eb -smbios type=1,manufacturer=data,serial=7C1A9ABA-02DD-4E7D-993C-E1CDAB47A19B,family="Virtual Machine" -no-user-config -nodefaults -device sga  -rtc base=2022-09-09T02:54:38,clock=host,driftfix=slew -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=on,splash-time=0,strict=on -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x6 -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0xa -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0xb -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0xc -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0xd -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0xe -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-1,readonly=on -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1,bootindex=2 -drive if=none,id=drive-fdc0-0-0,readonly=on  -drive file=/datastore/e88e2b29-cd39-4b21-9629-5ef2458f7ddd/c08fee8e-caf4-4217-ab4d-351a021c2c3d,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,num-queues=1,bus=pci.1,addr=0x1,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -device usb-tablet,id=input0,bus=usb.0,port=1     -device intel-hda,id=sound0,bus=pci.0,addr=0x3 -device hda-micro,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -sandbox off -device pvpanic,ioport=1285 -msg timestamp=on -qmp tcp:127.0.0.1:4444,server,nowait 
+
+**Start the destination VM using the command:**
+
+/usr/libexec/qemu-kvm -name guest=i-00001C,debug-threads=on  -machine pc,accel=kvm,usb=off,dump-guest-core=off -cpu qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 4096 -smp 4,sockets=1,cores=4,threads=1   -uuid 991c2994-e1c9-48c0-9554-6b23e43900eb -smbios type=1,manufacturer=data,serial=7C1A9ABA-02DD-4E7D-993C-E1CDAB47A19B,family="Virtual Machine" -no-user-config -nodefaults -device sga  -rtc base=2022-09-09T02:54:38,clock=host,driftfix=slew -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=on,splash-time=0,strict=on -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x6 -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0xa -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0xb -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0xc -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0xd -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0xe -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-1,readonly=on -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1,bootindex=2 -drive if=none,id=drive-fdc0-0-0,readonly=on  -drive file=/datastore/e88e2b29-cd39-4b21-9629-5ef2458f7ddd/c08fee8e-caf4-4217-ab4d-351a021c2c3d,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,num-queues=1,bus=pci.1,addr=0x1,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -device usb-tablet,id=input0,bus=usb.0,port=1     -device intel-hda,id=sound0,bus=pci.0,addr=0x3 -device hda-micro,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -sandbox off -device pvpanic,ioport=1285 -msg timestamp=on -qmp tcp:127.0.0.1:4444,server,nowait -incoming tcp:0:3333
+
+2. **image info as:**
+
+image: /datastore/e88e2b29-cd39-4b21-9629-5ef2458f7ddd/c08fee8e-caf4-4217-ab4d-351a021c2c3d
+
+file format: qcow2
+virtual size: 4.0T (4380866641920 bytes)
+disk size: 1.0M
+cluster_size: 65536
+
+Format specific information:
+    compat: 1.1
+    lazy refcounts: false
+    refcount bits: 16
+    corrupt: false
+
+3. **Add the bitmap:** {"execute":"block-dirty-bitmap-add","arguments":{"node":"drive-virtio-disk0", "name":"bitmap-2022-09-09-16-10-23"}}
+4. **Set the dirty-bitmaps capability:** { "execute": "migrate-set-capabilities" , "arguments":{"capabilities":[ {"capability":"dirty-bitmaps","state": true }]}}
+5. **Start the migration:** { "execute": "migrate", "arguments": { "uri": "tcp:10.49.35.23:3333" } }
+6. **Query the migration parameters:** {"execute":"query-migrate-parameters"} returns:
+{"return": {"cpu-throttle-tailslow": false, "xbzrle-cache-size": 67108864, "cpu-throttle-initial": 20, "announce-max": 550, "decompress-threads": 2, "compress-threads": 8, "compress-level": 1, "multifd-channels": 2, "multifd-zstd-level": 1, "announce-initial": 50, "block-incremental": false, "compress-wait-thread": true, "downtime-limit": 300, "tls-authz": "", "multifd-compression": "none", "announce-rounds": 5, "announce-step": 100, "tls-creds": "", "multifd-zlib-level": 1, "max-cpu-throttle": 99, "max-postcopy-bandwidth": 0, "tls-hostname": "", "throttle-trigger-threshold": 50, "max-bandwidth": 134217728, "x-checkpoint-delay": 20000, "cpu-throttle-increment": 10}}
+
+7. **Query the migration capabilities:**
+{"execute":"query-migrate-capabilities"} returns:
+{"return": [{"state": false, "capability": "xbzrle"}, {"state": false, "capability": "rdma-pin-all"}, {"state": false, "capability": "auto-converge"}, {"state": false, "capability": "zero-blocks"}, {"state": false, "capability": "compress"}, {"state": false, "capability": "events"}, {"state": false, "capability": "postcopy-ram"}, {"state": false, "capability": "x-colo"}, {"state": false, "capability": "release-ram"}, {"state": false, "capability": "return-path"}, {"state": false, "capability": "pause-before-switchover"}, {"state": false, "capability": "multifd"}, {"state": true, "capability": "dirty-bitmaps"}, {"state": false, "capability": "postcopy-blocktime"}, {"state": false, "capability": "late-block-activate"}, {"state": false, "capability": "x-ignore-shared"}, {"state": false, "capability": "validate-uuid"}, {"state": false, "capability": "background-snapshot"}]}
+
+8. **Query the migration status** using the command {"execute":"query-migrate"}:
+{"return": {"expected-downtime": 0, "status": "active", "setup-time": 64, "total-time": 1320361, "ram": {"total": 4295499776, "postcopy-requests": 0, "dirty-sync-count": 7909410, "multifd-bytes": 0, "pages-per-second": 80, "page-size": 4096, "remaining": 0, "mbps": 3.5006399999999998, "transferred": 430971236, "duplicate": 1048569, "dirty-pages-rate": 66, "skipped": 0, "normal-bytes": 357560320, "normal": 87295}}}
+
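+(Reading the numbers above: total-time 1320361 ms is roughly 22 minutes, and dirty-sync-count has reached 7909410, i.e. the sync loop has iterated millions of times without converging.)
+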
+**The migration state is always active, no matter how long it takes.**
+The bug is: a migration with a big block dirty bitmap can never finish.
+Additional information:
+
diff --git a/results/classifier/118/performance/1223 b/results/classifier/118/performance/1223
new file mode 100644
index 00000000..90e09536
--- /dev/null
+++ b/results/classifier/118/performance/1223
@@ -0,0 +1,41 @@
+performance: 0.955
+graphic: 0.931
+virtual: 0.914
+device: 0.833
+debug: 0.826
+VMM: 0.811
+vnc: 0.796
+hypervisor: 0.699
+architecture: 0.658
+network: 0.655
+user-level: 0.610
+risc-v: 0.607
+kernel: 0.582
+semantic: 0.570
+mistranslation: 0.568
+assembly: 0.503
+KVM: 0.499
+PID: 0.490
+files: 0.473
+socket: 0.423
+i386: 0.422
+x86: 0.384
+register: 0.383
+TCG: 0.372
+permissions: 0.370
+boot: 0.367
+ppc: 0.367
+peripherals: 0.341
+arm: 0.213
+
+When the disk goes offline, why does the migration not time out instead of the virtual machine hanging forever?
+Description of problem:
+I want the migration to end automatically after the disk goes offline
+Steps to reproduce:
+1. Migrate to another host
+
+2. Manually force the disk offline while migrating
+
+3. The VM hangs, and the migration waits for the disk to recover. I need it to time out and report a failed migration
+rather than hang. What should I do?
+![image](/uploads/c1ec6e1f59524888ea8e5c1df131037e/image.png)
diff --git a/results/classifier/118/performance/1228285 b/results/classifier/118/performance/1228285
new file mode 100644
index 00000000..dde5c9b5
--- /dev/null
+++ b/results/classifier/118/performance/1228285
@@ -0,0 +1,96 @@
+performance: 0.981
+socket: 0.946
+peripherals: 0.929
+network: 0.904
+architecture: 0.888
+ppc: 0.842
+device: 0.814
+vnc: 0.797
+semantic: 0.786
+user-level: 0.758
+PID: 0.709
+kernel: 0.687
+register: 0.672
+hypervisor: 0.655
+permissions: 0.638
+mistranslation: 0.632
+assembly: 0.608
+graphic: 0.597
+debug: 0.572
+TCG: 0.553
+files: 0.533
+risc-v: 0.532
+VMM: 0.530
+x86: 0.517
+arm: 0.474
+virtual: 0.465
+i386: 0.444
+boot: 0.432
+KVM: 0.394
+
+e1000 NIC TCP performance
+
+Hi,
+
+Here is the context :
+
+$ qemu -name A -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,listen=127.0.0.1:7000
+$ qemu -name B -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,connect=127.0.0.1:7000
+
+The bandwidth is really tiny :
+
+    . Iperf3 reports about 30 Mb/sec
+    . NetPerf reports about 50 Mb/sec
+
+
+With UDP sockets, there is no problem at all :
+
+    . Iperf3 reports about 1 Gb/sec
+    . NetPerf reports about 950 Mb/sec
+
+
+I've noticed this fact only with the e1000 NIC, not with others (rtl8139, virtio, etc.).
+I've used the main GIT version of QEMU.
+
+
+Thanks in advance.
+
+See you,
+VInce
+
+On Fri, Sep 20, 2013 at 05:21:23PM -0000, Vincent Autefage wrote:
+> Here is the context :
+> 
+> $ qemu -name A -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,listen=127.0.0.1:7000
+> $ qemu -name B -m 1024 -net nic,vlan=0,model=e1000 -net socket,vlan=0,connect=127.0.0.1:7000
+> 
+> The bandwidth is really tiny :
+> 
+>     . Iperf3 reports about 30 Mb/sec
+>     . NetPerf reports about 50 Mb/sec
+> 
+> 
+> With UDP sockets, there is no problem at all :
+> 
+>     . Iperf3 reports about 1 Gb/sec
+>     . NetPerf reports about 950 Mb/sec
+> 
+> 
+> I've noticed this fact only with the e1000 NIC, not with others (rtl8139,virtio, etc.)
+> I've used the main GIT version of QEMU.
+
+It's interesting that you see good performance over -netdev socket TCP
+with the other NIC models.
+
+I don't know what the issue would be, you'll probably need to dig
+further to discover the problem.  Using wireshark might be a good start.
+Try to figure out where the delay is incurred and then instrument that
+code to find out the cause.
+
+Stefan
+
+
+Looking through old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/123 b/results/classifier/118/performance/123
new file mode 100644
index 00000000..7a14db52
--- /dev/null
+++ b/results/classifier/118/performance/123
@@ -0,0 +1,31 @@
+performance: 0.867
+device: 0.838
+user-level: 0.668
+arm: 0.566
+debug: 0.555
+graphic: 0.545
+network: 0.519
+architecture: 0.462
+boot: 0.410
+permissions: 0.347
+socket: 0.346
+PID: 0.334
+ppc: 0.329
+semantic: 0.301
+i386: 0.253
+kernel: 0.243
+register: 0.241
+VMM: 0.206
+x86: 0.197
+TCG: 0.193
+vnc: 0.150
+virtual: 0.142
+files: 0.126
+risc-v: 0.125
+mistranslation: 0.112
+assembly: 0.081
+peripherals: 0.076
+hypervisor: 0.031
+KVM: 0.003
+
+qemu-cris segfaults upon loading userspace binary
diff --git a/results/classifier/118/performance/1253563 b/results/classifier/118/performance/1253563
new file mode 100644
index 00000000..7fab27a3
--- /dev/null
+++ b/results/classifier/118/performance/1253563
@@ -0,0 +1,72 @@
+performance: 0.924
+socket: 0.666
+device: 0.645
+graphic: 0.634
+x86: 0.603
+kernel: 0.585
+semantic: 0.564
+architecture: 0.526
+register: 0.513
+network: 0.504
+peripherals: 0.475
+PID: 0.457
+mistranslation: 0.439
+assembly: 0.425
+debug: 0.401
+user-level: 0.394
+permissions: 0.372
+ppc: 0.366
+virtual: 0.361
+hypervisor: 0.348
+vnc: 0.246
+files: 0.209
+arm: 0.206
+KVM: 0.197
+VMM: 0.191
+boot: 0.190
+i386: 0.180
+risc-v: 0.153
+TCG: 0.082
+
+bad performance with rng-egd backend
+
+
+1. create listen socket
+# cat /dev/random | nc -l localhost 1024
+
+2. start vm with rng-egd backend
+
+./x86_64-softmmu/qemu-system-x86_64 --enable-kvm -mon chardev=qmp,mode=control,pretty=on -chardev socket,id=qmp,host=localhost,port=1234,server,nowait -m 2000 -device virtio-net-pci,netdev=h1,id=vnet0 -netdev tap,id=h1 -vnc :0 -drive file=/images/RHEL-64-virtio.qcow2 \
+-chardev socket,host=localhost,port=1024,id=chr0 \
+-object rng-egd,chardev=chr0,id=rng0 \
+-device virtio-rng-pci,rng=rng0,max-bytes=1024000,period=1000
+
+(guest) # dd if=/dev/hwrng of=/dev/null
+
+note: cancelling the dd process with Ctrl+C makes it report the read speed.
+
+Problem:   the speed is around 1k/s
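+(With max-bytes=1024000 per period=1000 ms, the configured ceiling is about 1000 kB/s, so the observed ~1 kB/s is three orders of magnitude below the rate limit.)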
+
+===================
+
+If I use the rng-random backend (filename=/dev/random), the speed is about 350 kB/s.
+
+It seems that when a request entry is added to the list, we don't read the data from the queue immediately.
+chr_read() is delayed, virtio_notify() is delayed, and so the next request is also delayed. This affects the speed.
+
+I tried changing rng_egd_chr_can_read() to always return 1, and the speed improved (to about 400 kB/s)
+
+Problem: we can't poll the content in time currently
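+
+For reference, the experiment described above amounts to making the
+chardev can_read callback claim it is always willing to read (a sketch
+following QEMU's chardev callback convention; the stock implementation
+instead reports how many bytes the queued requests still want):
+
+    static int rng_egd_chr_can_read(void *opaque)
+    {
+        return 1;   /* experiment: always poll the EGD socket */
+    }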
+
+
+Any thoughts?
+
+Thanks, Amos
+
+Looking through old bug tickets... is this still an issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+
+Let's close this bug; more than 6 years have passed.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1258 b/results/classifier/118/performance/1258
new file mode 100644
index 00000000..0e91ff6c
--- /dev/null
+++ b/results/classifier/118/performance/1258
@@ -0,0 +1,31 @@
+performance: 0.879
+device: 0.847
+network: 0.838
+register: 0.584
+arm: 0.559
+PID: 0.492
+files: 0.484
+boot: 0.468
+risc-v: 0.446
+VMM: 0.446
+permissions: 0.445
+semantic: 0.413
+architecture: 0.383
+socket: 0.380
+debug: 0.372
+hypervisor: 0.320
+graphic: 0.305
+kernel: 0.291
+vnc: 0.285
+peripherals: 0.281
+TCG: 0.262
+virtual: 0.216
+ppc: 0.194
+user-level: 0.100
+KVM: 0.096
+x86: 0.083
+assembly: 0.074
+mistranslation: 0.068
+i386: 0.050
+
+nios2 system tests are exiting early and not showing an error
diff --git a/results/classifier/118/performance/1277 b/results/classifier/118/performance/1277
new file mode 100644
index 00000000..fb7eb0a9
--- /dev/null
+++ b/results/classifier/118/performance/1277
@@ -0,0 +1,31 @@
+performance: 0.856
+graphic: 0.792
+device: 0.782
+debug: 0.620
+PID: 0.612
+VMM: 0.570
+TCG: 0.556
+assembly: 0.546
+KVM: 0.541
+vnc: 0.529
+kernel: 0.500
+x86: 0.489
+risc-v: 0.483
+ppc: 0.474
+i386: 0.464
+hypervisor: 0.463
+arm: 0.460
+peripherals: 0.459
+socket: 0.454
+architecture: 0.449
+register: 0.447
+network: 0.440
+boot: 0.424
+semantic: 0.371
+permissions: 0.353
+mistranslation: 0.257
+virtual: 0.256
+files: 0.193
+user-level: 0.160
+
+two instructions have executed twice
diff --git a/results/classifier/118/performance/1312 b/results/classifier/118/performance/1312
new file mode 100644
index 00000000..c7a1810c
--- /dev/null
+++ b/results/classifier/118/performance/1312
@@ -0,0 +1,41 @@
+performance: 0.984
+virtual: 0.922
+debug: 0.904
+device: 0.788
+kernel: 0.773
+files: 0.742
+network: 0.691
+ppc: 0.690
+permissions: 0.690
+vnc: 0.690
+PID: 0.677
+graphic: 0.672
+TCG: 0.627
+socket: 0.571
+hypervisor: 0.525
+semantic: 0.508
+risc-v: 0.497
+peripherals: 0.468
+arm: 0.464
+architecture: 0.455
+VMM: 0.417
+register: 0.414
+x86: 0.379
+boot: 0.310
+i386: 0.309
+mistranslation: 0.282
+user-level: 0.238
+assembly: 0.154
+KVM: 0.115
+
+TCP performance problems - GSO/TSO, MSS, 8139 related (Ignores lower MTU from PMTUD/MSS)
+Description of problem:
+MTU handling on guests using an RTL8139 virtualized NIC is broken; hw/net/rtl8139.c works with a static MTU of 1500 bytes for TCP offloading, leading to low throughput when clients connect from sub-1500-MTU networks. PMTUD is ignored, and locking the OS to a lower MTU mitigates the issue (see the sketch at the end of this report).
+Steps to reproduce:
+1. Create a guest with an RTL8139 NIC
+2. Try to retrieve a file from a client behind a sub 1500 MTU link
+3. Observe low bandwidth due to retransmits
+Additional information:
+I just debugged this issue for an NGO which, for whatever reason, had an RTL8139 NIC in their guest. After I finally traced this to the RTL8139, I found this qemu-devel/netdev thread from six years ago, which apparently already debugged this issue and proposed a patch: https://lore.kernel.org/all/20161114162505.GD26664@stefanha-x1.localdomain/
+
+I did not test the patch proposed there, but note that `hw/net/rtl8139.c` still looks as discussed in that qemu-devel/netdev thread. As I haven't found a bug report in the archives, I figured you might want to know.
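+
+As a concrete form of the "lock to a lower MTU in the OS" mitigation, a
+minimal in-guest sketch (the interface name and MTU value are
+illustrative; `ip link set dev eth0 mtu 1400` does the same thing):
+```c
+#include <net/if.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/socket.h>
+#include <unistd.h>
+
+int main(void)
+{
+    struct ifreq ifr;
+    int fd = socket(AF_INET, SOCK_DGRAM, 0);
+    if (fd < 0) { perror("socket"); return 1; }
+
+    memset(&ifr, 0, sizeof(ifr));
+    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* illustrative NIC name */
+    ifr.ifr_mtu = 1400;                          /* illustrative lower MTU */
+
+    if (ioctl(fd, SIOCSIFMTU, &ifr) < 0) {       /* pin the lower MTU */
+        perror("SIOCSIFMTU");
+        close(fd);
+        return 1;
+    }
+    close(fd);
+    return 0;
+}
+```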
diff --git a/results/classifier/118/performance/1321 b/results/classifier/118/performance/1321
new file mode 100644
index 00000000..0bdf9709
--- /dev/null
+++ b/results/classifier/118/performance/1321
@@ -0,0 +1,38 @@
+i386: 0.997
+performance: 0.995
+peripherals: 0.906
+graphic: 0.872
+device: 0.793
+x86: 0.696
+TCG: 0.648
+architecture: 0.606
+socket: 0.450
+boot: 0.368
+semantic: 0.323
+network: 0.321
+ppc: 0.283
+register: 0.264
+user-level: 0.253
+PID: 0.231
+debug: 0.214
+permissions: 0.169
+mistranslation: 0.169
+arm: 0.158
+vnc: 0.155
+assembly: 0.054
+virtual: 0.050
+VMM: 0.018
+files: 0.018
+kernel: 0.013
+risc-v: 0.008
+hypervisor: 0.004
+KVM: 0.001
+
+qemu-system-i386 runs slow after upgrading legacy project from qemu 2.9.0 to 7.1.0
+Description of problem:
+We use several custom serial and IRQ devices, including timers.
+The same code (after some customisation to compile with the new 7.1.0 API and the Meson build system) runs about 50% slower.
+We had to remove the "-icount 4" switch, which worked fine with 2.9.0, just to get to this point.
+Even running with multi-threaded TCG did not help.
+We don't use the new ptimer API but rather the old QEMUTimer.
+Any suggestions as to why we encounter this vast performance degradation?
diff --git a/results/classifier/118/performance/1321464 b/results/classifier/118/performance/1321464
new file mode 100644
index 00000000..6e993af4
--- /dev/null
+++ b/results/classifier/118/performance/1321464
@@ -0,0 +1,83 @@
+performance: 0.986
+i386: 0.830
+user-level: 0.803
+semantic: 0.775
+graphic: 0.774
+PID: 0.755
+ppc: 0.699
+kernel: 0.674
+device: 0.641
+socket: 0.641
+peripherals: 0.608
+network: 0.566
+architecture: 0.559
+register: 0.541
+x86: 0.539
+permissions: 0.523
+debug: 0.515
+mistranslation: 0.510
+boot: 0.508
+files: 0.499
+arm: 0.499
+hypervisor: 0.480
+VMM: 0.395
+risc-v: 0.392
+vnc: 0.372
+TCG: 0.363
+assembly: 0.339
+virtual: 0.331
+KVM: 0.306
+
+qemu/block/qcow2.c:1942: possible performance problem?
+
+I just ran the static analyser cppcheck over today's (20140520) QEMU source code.
+
+It reported many things, including:
+
+[qemu/block/qcow2.c:1942] -> [qemu/block/qcow2.c:1943]: (performance) Buffer 'pad_buf' is being written before its old content has been used.
+
+The source code is:
+
+            memset(pad_buf, 0, s->cluster_size);
+            memcpy(pad_buf, buf, nb_sectors * BDRV_SECTOR_SIZE);
+
+Worth tuning?
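+
+For the record, the rewrite cppcheck is hinting at copies the payload
+first and zeroes only the tail, e.g. (assuming nb_sectors *
+BDRV_SECTOR_SIZE <= s->cluster_size, as the original code implies):
+
+            size_t len = nb_sectors * BDRV_SECTOR_SIZE;
+            memcpy(pad_buf, buf, len);
+            memset(pad_buf + len, 0, s->cluster_size - len);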
+
+A similar problem is here:
+
+[qemu/block/qcow.c:815] -> [qemu/block/qcow.c:816]: (performance) Buffer 'pad_buf' is being written before its old content has been used.
+
+and
+
+[qemu/hw/i386/acpi-build.c:1265] -> [qemu/hw/i386/acpi-build.c:1267]: (performance) Buffer 'dsdt' is being written before its old content has been used.
+
+I can only speak for qcow2 and qcow, but for those places, I don't think it is worth fixing. First of all, both are image formats, so the bottleneck is generally the disk on which the images are stored and not main memory, so an overeager memset should not cause any problems.
+
+For both, the relevant piece of code is in qcow2/qcow_write_compressed(), which are rarely used anyway (as far as I know), and even if used, they have additional overhead due to having to compress data first, so “fixing” the memset() won't make them noticeably faster.
+
+I don't know about the ACPI thing, but to me it seems that it's copying data to a temporary buffer and then overwriting its beginning with zeroes. From my very limited ACPI knowledge I'd guess this function is called at some point during qemu startup, so it doesn't seem worth optimizing either.
+
+On Tue, May 20, 2014 at 11:21:05PM -0000, Max Reitz wrote:
+> I can only speak for qcow2 and qcow, but for those places, I don't think
+> it is worth fixing. First of all, both are image formats, so the
+> bottleneck is generally the disk on which the images are stored and not
+> main memory, so an overeager memset should not cause any problems.
+> 
+> For both, the relevant piece of code is in qcow2/qcow_write_compressed()
+> which are rarely used anyway (as far as I know) and even if used, they
+> have additional overhead due to having to compress data first, so
+> “fixing” the memset() won't make them noticeably faster.
+
+I agree.  It won't make a noticeable difference and the compressed writes
+are only done in qemu-img convert, not for running guests.
+
+But patches to change this wouldn't hurt either.
+
+Stefan
+
+
+Thanks for reporting this, but it looks like the related code was removed a while ago (there is no more "pad_buf" in qcow.c or qcow2.c), so I'm closing this ticket. If you can still reproduce the (same or similar) problem with the latest version of QEMU, please open a new ticket instead.
+
diff --git a/results/classifier/118/performance/1331334 b/results/classifier/118/performance/1331334
new file mode 100644
index 00000000..b8cce6ac
--- /dev/null
+++ b/results/classifier/118/performance/1331334
@@ -0,0 +1,40 @@
+performance: 0.871
+device: 0.747
+network: 0.636
+vnc: 0.560
+register: 0.501
+semantic: 0.486
+ppc: 0.484
+risc-v: 0.476
+architecture: 0.448
+PID: 0.415
+boot: 0.402
+VMM: 0.401
+socket: 0.397
+debug: 0.378
+TCG: 0.374
+graphic: 0.319
+files: 0.311
+permissions: 0.303
+mistranslation: 0.296
+hypervisor: 0.294
+user-level: 0.246
+x86: 0.241
+peripherals: 0.238
+arm: 0.222
+kernel: 0.202
+i386: 0.147
+assembly: 0.125
+virtual: 0.117
+KVM: 0.015
+
+driftfix=none plus migration on a Win7 guest causes time to run 10 times as fast
+
+With -rtc base=localtime,clock=host,driftfix=none on a Win7 guest, stopping it with migration and then starting it again about 1 hour later makes the guest time go 10 times as fast as real time until Windows is rebooted.  I have tried with qemu-2.0.0 and the problem still exists there.
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+I am unable to reproduce this with qemu 2.11.0
+
+OK, thanks for checking again! So I'm closing this ticket now.
+
diff --git a/results/classifier/118/performance/134 b/results/classifier/118/performance/134
new file mode 100644
index 00000000..55b042f2
--- /dev/null
+++ b/results/classifier/118/performance/134
@@ -0,0 +1,31 @@
+performance: 0.985
+device: 0.830
+network: 0.679
+arm: 0.547
+kernel: 0.480
+boot: 0.464
+architecture: 0.439
+vnc: 0.424
+ppc: 0.291
+graphic: 0.287
+VMM: 0.285
+risc-v: 0.273
+i386: 0.262
+socket: 0.255
+PID: 0.247
+x86: 0.203
+TCG: 0.197
+hypervisor: 0.192
+peripherals: 0.188
+files: 0.166
+permissions: 0.150
+register: 0.148
+debug: 0.143
+semantic: 0.122
+virtual: 0.094
+user-level: 0.072
+assembly: 0.040
+mistranslation: 0.022
+KVM: 0.020
+
+Performance improvement when using "QEMU_FLATTEN" with softfloat type conversions
diff --git a/results/classifier/118/performance/1354529 b/results/classifier/118/performance/1354529
new file mode 100644
index 00000000..cfa2adde
--- /dev/null
+++ b/results/classifier/118/performance/1354529
@@ -0,0 +1,79 @@
+performance: 0.832
+graphic: 0.780
+device: 0.740
+socket: 0.738
+files: 0.736
+kernel: 0.718
+semantic: 0.705
+network: 0.681
+arm: 0.647
+vnc: 0.644
+ppc: 0.640
+architecture: 0.637
+risc-v: 0.605
+register: 0.597
+PID: 0.587
+boot: 0.560
+peripherals: 0.535
+permissions: 0.528
+hypervisor: 0.470
+TCG: 0.455
+i386: 0.454
+x86: 0.438
+mistranslation: 0.414
+user-level: 0.410
+VMM: 0.406
+KVM: 0.393
+virtual: 0.353
+assembly: 0.306
+debug: 0.295
+
+qemu-io: Assertion failure on a fuzzed qcow2 image
+
+'qemu-io -c write' failed on the fuzzed image with missing refcount tables:
+
+Sequence:
+ 1. Unpack the attached archive, make a copy of test.img
+ 2. Put copy.img and backing_img.cow in the same directory
+ 3. Execute
+   qemu-io copy.img -c 'write 2856960 208896'
+
+Result: qemu-io was killed by SIGIOT with the reason:
+
+qemu-io: block/qcow2-cluster.c:910: handle_copied: Assertion `*host_offset == 0 || offset_into_cluster(s, guest_offset) == offset_into_cluster(s, *host_offset)' failed.
+
+qemu.git HEAD 2d591ce2aeebf
+
+
+
+Hi,
+
+The problem here is that an L2 table contains an offset which is not aligned to cluster boundaries. To turn the failed assertion into EIO (and we probably also want to mark the image corrupt), we'd have to verify every single L2 entry when it is read.
+
+We can (and should) most certainly do that, but as it doesn't seem too urgent, it may take some time.
+
+Max
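+
+A hedged sketch of that per-entry check (helper names follow qcow2's
+existing ones; the exact integration point is omitted):
+
+    /* l2_entry: the 64-bit L2 entry just read (illustrative local) */
+    uint64_t host_offset = l2_entry & L2E_OFFSET_MASK;
+    if (offset_into_cluster(s, host_offset) != 0) {
+        qcow2_signal_corruption(bs, true, -1, -1,
+                                "unaligned L2 entry %#" PRIx64,
+                                host_offset);
+        return -EIO;
+    }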
+
+Hi,
+
+This issue has been fixed in master (5f77ef69a195098baddfdc6d189f1b4a94587378):
+
+$ ./qemu-io copy.img -c 'write 2856960 208896'
+qcow2_free_clusters failed: Invalid argument
+qcow2_free_clusters failed: Invalid argument
+qcow2_free_clusters failed: Invalid argument
+qcow2_free_clusters failed: Invalid argument
+qcow2_free_clusters failed: Invalid argument
+qcow2_free_clusters failed: File too large
+qcow2_free_clusters failed: Invalid argument
+qcow2: Image is corrupt: Cannot free unaligned cluster 0xfffffffffffe00; further non-fatal corruption events will be suppressed
+qcow2_free_clusters failed: Invalid argument
+qcow2: Marking image as corrupt: Data cluster offset 0xfffffe00 unaligned (guest offset: 0x2e1000); further corruption events will be suppressed
+write failed: Input/output error
+
+Thanks for your report (and your fuzzer),
+
+Max
+
diff --git a/results/classifier/118/performance/1394 b/results/classifier/118/performance/1394
new file mode 100644
index 00000000..6220972f
--- /dev/null
+++ b/results/classifier/118/performance/1394
@@ -0,0 +1,91 @@
+performance: 0.845
+user-level: 0.814
+architecture: 0.792
+graphic: 0.782
+device: 0.776
+ppc: 0.775
+peripherals: 0.666
+network: 0.632
+semantic: 0.631
+PID: 0.571
+socket: 0.555
+arm: 0.528
+mistranslation: 0.522
+vnc: 0.518
+permissions: 0.492
+boot: 0.465
+assembly: 0.452
+virtual: 0.451
+TCG: 0.417
+debug: 0.413
+files: 0.394
+register: 0.384
+hypervisor: 0.354
+VMM: 0.354
+x86: 0.347
+risc-v: 0.309
+i386: 0.226
+KVM: 0.118
+kernel: 0.076
+
+Byte-swapping issue in getresuid on qemu-user-sparc64
+Description of problem:
+When calling getresuid() in the big-endian sparc64 guest, the uid_t values are written with their 16-bit halves reversed.
+
+For example, the UID 0x000003e8 (1000) becomes 0x03e80000.
+Steps to reproduce:
+A minimal test case looks like this:
+```c
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <sys/types.h>
+#include <pwd.h>
+#include <unistd.h>
+
+int main(int argc, char **argv) {
+	struct passwd *pw = getpwuid(geteuid());
+	if (pw == NULL) {
+		perror("getpwuid failure");
+		return 1;
+	}
+	printf("getpwuid()->pw_uid=0x%08x\n", pw->pw_uid);
+
+	uid_t ruid = 0, euid = 0, suid = 0;
+	if (getresuid(&ruid, &euid, &suid) != 0) {
+		perror("getresuid failure");
+		return 1;
+	}
+	printf("getresuid()->suid=0x%08x\n", suid);
+
+	return 0;
+}
+```
+
+Compile and run with:
+```
+$ sparc64-linux-gnu-gcc -Wall -O0 -g -o sid-sparc64/test test.c
+$ sudo chroot sid-sparc64
+[chroot] $ qemu-sparc64-static ./test
+```
+
+Alternatively, static compilation without a chroot is also possible (despite a warning about `getpwuid()`):
+```
+$ sparc64-linux-gnu-gcc -static -Wall -O0 -g -o test test.c
+$ qemu-sparc64-static ./test
+```
+
+Expected output:
+```
+$ ./test 
+getpwuid()->pw_uid=0x000003e8
+getresuid()->suid=0x000003e8
+```
+
+Actual output:
+```
+$ ./test 
+getpwuid()->pw_uid=0x000003e8
+getresuid()->suid=0x03e80000
+```
+Additional information:
+I'm not sure if this is a glibc, qemu or kernel issue, but it doesn't occur outside qemu.
diff --git a/results/classifier/118/performance/1399939 b/results/classifier/118/performance/1399939
new file mode 100644
index 00000000..9fcebde4
--- /dev/null
+++ b/results/classifier/118/performance/1399939
@@ -0,0 +1,45 @@
+performance: 0.974
+ppc: 0.912
+graphic: 0.876
+device: 0.823
+register: 0.789
+mistranslation: 0.777
+architecture: 0.773
+vnc: 0.760
+files: 0.743
+permissions: 0.729
+semantic: 0.722
+PID: 0.698
+user-level: 0.693
+TCG: 0.693
+risc-v: 0.688
+network: 0.677
+socket: 0.664
+VMM: 0.629
+debug: 0.607
+boot: 0.601
+kernel: 0.593
+arm: 0.590
+peripherals: 0.573
+hypervisor: 0.520
+i386: 0.442
+virtual: 0.421
+assembly: 0.375
+KVM: 0.371
+x86: 0.311
+
+QEMU build with -faltivec and -maltivec support
+
+If possible, please add build support to QEMU for putting -faltivec -maltivec in CPPFLAGS, to make the emulation faster on Altivec-equipped PPC machines.
+Thank you
+
+We assume that your C compiler generates decently optimised code that uses the features of your host CPU with just the standard -O2 optimisation flag. If this isn't the case, you can use configure's --extra-cflags argument (eg "--extra-cflags=-faltivec -maltivec") to get the build process to pass arbitrary flags to the compiler. Is that not sufficient here?
+
+
+Will check it. I had made my personal build by modifying the Makefile with Altivec flags in CPPFLAGS.
+I don't know if it was a placebo effect, but it looks like everything is faster.
+
+
+
+Closing this ticket since adding CPPFLAGS to configure is possible.
+
diff --git a/results/classifier/118/performance/1426593 b/results/classifier/118/performance/1426593
new file mode 100644
index 00000000..6892ce10
--- /dev/null
+++ b/results/classifier/118/performance/1426593
@@ -0,0 +1,73 @@
+performance: 0.862
+architecture: 0.861
+user-level: 0.853
+hypervisor: 0.844
+graphic: 0.844
+semantic: 0.798
+permissions: 0.780
+mistranslation: 0.780
+kernel: 0.747
+device: 0.731
+ppc: 0.728
+arm: 0.715
+PID: 0.686
+register: 0.643
+debug: 0.640
+peripherals: 0.604
+risc-v: 0.572
+files: 0.568
+VMM: 0.565
+network: 0.561
+assembly: 0.558
+vnc: 0.555
+socket: 0.551
+x86: 0.526
+TCG: 0.525
+i386: 0.520
+boot: 0.509
+KVM: 0.463
+virtual: 0.413
+
+linux-user: doesn't handle guest setting its memory ulimit very small
+
+Using the latest build from git (hash 041ccc922ee474693a2869d4e3b59e920c739bc0) and all older versions I have tested.
+I am using an amd64 host with an ARM chroot, using "qemu-user arm cortex-a8" CPU emulation to run it.
+
+Building coreutils hangs on "checking whether printf survives out-of-memory conditions".
+
+I have not had time to dig into the build system to isolate the test yet; there were old reports of this bug but I can no longer find them on Google.
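+
+A minimal sketch of what that configure check does (illustrative, not
+the actual coreutils test): shrink the address-space limit, then call
+printf and see whether the process survives:
+
+    #include <stdio.h>
+    #include <sys/resource.h>
+
+    int main(void)
+    {
+        struct rlimit rl = { 1 << 20, 1 << 20 };  /* ~1 MiB, illustrative */
+        if (setrlimit(RLIMIT_AS, &rl) != 0) {
+            perror("setrlimit");
+            return 1;
+        }
+        return printf("%d\n", 42) < 0;
+    }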
+
+On 28 February 2015 at 09:01, aaron <email address hidden> wrote:
+> Public bug reported:
+>
+> using the latest build from git (hash 041ccc922ee474693a2869d4e3b59e920c739bc0 ) and all older versions i have tested.
+> i am using an amd64 host with an arm chroot using "qemu-user arm cortex-a8" cpu emulation to run it
+>
+> building coreutils hangs on "checking whether printf survives out-of-
+> memory conditions"
+>
+> i have not had time to dig into the build system to isolate the test
+> yet, there were old reports of this bug but i can no longer find them on
+> google.
+
+Yes, I seem to recall looking at this one before. QEMU's linux-user
+code doesn't try to isolate the guest's memory allocations from
+its own allocations. So if the guest sets the memory limit to
+something very small then the chances are good that this will
+result in one of QEMU's internal allocations failing, and then
+QEMU will probably exit with an error or possibly crash or hang
+(some of our error handling on these allocations is not good).
+
+For this kind of test to work correctly we would need to fake
+the memory limit syscalls rather than just passing them through
+to the host, and then also do all the accounting to track how
+much memory the guest has allocated. That's a fair amount of
+work so it's unlikely this bug will be fixed unless somebody
+who cares about it submits patches, I'm afraid.
+
+-- PMM
+
+
+I've just noticed that this is a duplicate of bug 1163034 (gnutls uses a very similar configure test), so I'm going to mark this one as a dup of that one, since it's older.
+
+
diff --git a/results/classifier/118/performance/1442 b/results/classifier/118/performance/1442
new file mode 100644
index 00000000..ac29f6f0
--- /dev/null
+++ b/results/classifier/118/performance/1442
@@ -0,0 +1,31 @@
+risc-v: 0.988
+architecture: 0.986
+performance: 0.950
+mistranslation: 0.879
+device: 0.848
+virtual: 0.476
+graphic: 0.470
+semantic: 0.365
+debug: 0.250
+register: 0.161
+assembly: 0.161
+permissions: 0.140
+hypervisor: 0.110
+boot: 0.097
+vnc: 0.080
+kernel: 0.071
+ppc: 0.061
+peripherals: 0.045
+x86: 0.041
+arm: 0.040
+VMM: 0.029
+PID: 0.027
+socket: 0.025
+network: 0.025
+user-level: 0.021
+KVM: 0.018
+i386: 0.007
+files: 0.006
+TCG: 0.005
+
+RISC-V qemu, get cpu tick
diff --git a/results/classifier/118/performance/1472083 b/results/classifier/118/performance/1472083
new file mode 100644
index 00000000..ec984669
--- /dev/null
+++ b/results/classifier/118/performance/1472083
@@ -0,0 +1,97 @@
+performance: 0.808
+permissions: 0.785
+architecture: 0.744
+graphic: 0.725
+register: 0.718
+files: 0.695
+arm: 0.691
+KVM: 0.674
+peripherals: 0.662
+virtual: 0.655
+hypervisor: 0.655
+x86: 0.652
+device: 0.642
+assembly: 0.628
+TCG: 0.626
+user-level: 0.624
+PID: 0.621
+semantic: 0.610
+vnc: 0.606
+debug: 0.606
+VMM: 0.605
+network: 0.604
+boot: 0.602
+ppc: 0.593
+mistranslation: 0.589
+socket: 0.578
+kernel: 0.564
+i386: 0.562
+risc-v: 0.499
+
+QEMU 2.1.2 hangs on the stop command
+
+QEMU 2.1.2, Linux kernel 3.13.6; this is the stack:
+
+#0  in ppoll () from /lib/x86_64-linux-gnu/libc.so.6
+#1  in qemu_poll_ns (fds=0x7fa82a8de380, nfds=1, timeout=-1) at qemu-timer.c:314
+#2  in aio_poll (ctx=0x7fa82a8b5000, blocking=true) at aio-posix.c:250
+#3  in bdrv_drain_all () at block.c:1924
+#4  in do_vm_stop (state=RUN_STATE_PAUSED) at /qemu-2.1.2/cpus.c:544
+#5  in vm_stop (state=RUN_STATE_PAUSED) at /qemu-2.1.2/cpus.c:1227
+#6  in qmp_stop (errp=0x7ffffb6dcaf8) at qmp.c:98
+#7  in qmp_marshal_input_stop (mon=0x7fa82a8e0970, qdict=0x7fa830295020, ret=0x7ffffb6dcb48) at qmp-marshal.c:2806
+#8  in qmp_call_cmd (mon=0x7fa82a8e0970, cmd=0x7fa8290558a0, params=0x7fa830295020)  at /qemu-2.1.2/monitor.c:5038
+#9  in handle_qmp_command (parser=0x7fa82a8e0a28, tokens=0x7fa82a8d9b50) at /qemu-2.1.2/monitor.c:5104
+#10 in json_message_process_token (lexer=0x7fa82a8e0a30, token=0x7fa830122b60, type=JSON_OPERATOR, x=39, y=17865) at qobject/json-streamer.c:87
+#11 in json_lexer_feed_char (lexer=0x7fa82a8e0a30, ch=125 '}', flush=false) at qobject/json-lexer.c:303
+#12 in json_lexer_feed (lexer=0x7fa82a8e0a30, buffer=0x7ffffb6dcdb0 "}\315m\373\377\177", size=1) at qobject/json-lexer.c:356
+#13 in json_message_parser_feed (parser=0x7fa82a8e0a28, buffer=0x7ffffb6dcdb0 "}\315m\373\377\177", size=1) at qobject/json-streamer.c:111
+#14 in monitor_control_read (opaque=0x7fa82a8e0970, buf=0x7ffffb6dcdb0 "}\315m\373\377\177", size=1) at /qemu-2.1.2/monitor.c:5125
+#15 in qemu_chr_be_write (s=0x7fa82a8c2020, buf=0x7ffffb6dcdb0 "}\315m\373\377\177", len=1) at qemu-char.c:213
+#16 in tcp_chr_read (chan=0x7fa82a8c4ba0, cond=G_IO_IN, opaque=0x7fa82a8c2020) at qemu-char.c:2729
+#17 in g_main_context_dispatch () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
+#18 in glib_pollfds_poll () at main-loop.c:190
+#19 in os_host_main_loop_wait (timeout=24000000) at main-loop.c:235
+#20 in main_loop_wait (nonblocking=0) at main-loop.c:484
+#21 in main_loop () at vl.c:2034
+#22 in main (argc=55, argv=0x7ffffb6de338, envp=0x7ffffb6de4f8) at vl.c:4583
+
+On Tue, Jul 07, 2015 at 05:36:38AM -0000, changlimin wrote:
+> Qemu 2.1.2, Linux kernel 3.13.6, this is the stack.
+
+If you are running a distro packaged QEMU, please report this to the
+distro.
+
+If you have built QEMU from source, please try the latest stable release
+(QEMU 2.3).
+
+> #0  in ppoll () from /lib/x86_64-linux-gnu/libc.so.6
+> #1  in qemu_poll_ns (fds=0x7fa82a8de380, nfds=1, timeout=-1) at qemu-timer.c:314
+> #2  in aio_poll (ctx=0x7fa82a8b5000, blocking=true) at aio-posix.c:250
+> #3  in bdrv_drain_all () at block.c:1924
+> #4  in do_vm_stop (state=RUN_STATE_PAUSED) at /qemu-2.1.2/cpus.c:544
+> #5  in vm_stop (state=RUN_STATE_PAUSED) at /qemu-2.1.2/cpus.c:1227
+> #6  in qmp_stop (errp=0x7ffffb6dcaf8) at qmp.c:98
+> #7  in qmp_marshal_input_stop (mon=0x7fa82a8e0970, qdict=0x7fa830295020, ret=0x7ffffb6dcb48) at qmp-marshal.c:2806
+> #8  in qmp_call_cmd (mon=0x7fa82a8e0970, cmd=0x7fa8290558a0, params=0x7fa830295020)  at /qemu-2.1.2/monitor.c:5038
+> #9  in handle_qmp_command (parser=0x7fa82a8e0a28, tokens=0x7fa82a8d9b50) at /qemu-2.1.2/monitor.c:5104
+> #10 in json_message_process_token (lexer=0x7fa82a8e0a30, token=0x7fa830122b60, type=JSON_OPERATOR, x=39, y=17865) at qobject/json-streamer.c:87
+> #11 in json_lexer_feed_char (lexer=0x7fa82a8e0a30, ch=125 '}', flush=false) at qobject/json-lexer.c:303
+> #12 in json_lexer_feed (lexer=0x7fa82a8e0a30, buffer=0x7ffffb6dcdb0 "}\315m\373\377\177", size=1) at qobject/json-lexer.c:356
+> #13 in json_message_parser_feed (parser=0x7fa82a8e0a28, buffer=0x7ffffb6dcdb0 "}\315m\373\377\177", size=1) at qobject/json-streamer.c:111
+> #14 in monitor_control_read (opaque=0x7fa82a8e0970, buf=0x7ffffb6dcdb0 "}\315m\373\377\177", size=1) at /qemu-2.1.2/monitor.c:5125
+> #15 in qemu_chr_be_write (s=0x7fa82a8c2020, buf=0x7ffffb6dcdb0 "}\315m\373\377\177", len=1) at qemu-char.c:213
+> #16 in tcp_chr_read (chan=0x7fa82a8c4ba0, cond=G_IO_IN, opaque=0x7fa82a8c2020) at qemu-char.c:2729
+> #17 in g_main_context_dispatch () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
+> #18 in glib_pollfds_poll () at main-loop.c:190
+> #19 in os_host_main_loop_wait (timeout=24000000) at main-loop.c:235
+> #20 in main_loop_wait (nonblocking=0) at main-loop.c:484
+> #21 in main_loop () at vl.c:2034
+> #22 in main (argc=55, argv=0x7ffffb6de338, envp=0x7ffffb6de4f8) at vl.c:4583
+
+There is not enough information here to determine the cause of the hang.
+Please post the QEMU command-line so we know the guest configuration.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1473451 b/results/classifier/118/performance/1473451
new file mode 100644
index 00000000..a275469c
--- /dev/null
+++ b/results/classifier/118/performance/1473451
@@ -0,0 +1,49 @@
+performance: 0.936
+mistranslation: 0.906
+semantic: 0.764
+virtual: 0.739
+files: 0.715
+graphic: 0.692
+user-level: 0.684
+device: 0.682
+architecture: 0.677
+network: 0.584
+ppc: 0.545
+permissions: 0.544
+kernel: 0.519
+vnc: 0.463
+arm: 0.462
+boot: 0.433
+hypervisor: 0.419
+peripherals: 0.414
+register: 0.408
+i386: 0.401
+debug: 0.399
+socket: 0.394
+risc-v: 0.392
+x86: 0.366
+PID: 0.362
+VMM: 0.315
+assembly: 0.300
+TCG: 0.217
+KVM: 0.199
+
+Please support the native BIOS format for DEC Alpha
+
+Currently the qemu-system-alpha -bios parameter takes an ELF image.
+However, HP maintains firmware updates for those systems.
+
+Some example ROM files can be found here: ftp://ftp.hp.com/pub/alphaserver/firmware/current_platforms/v7.3_release/DS20_DS20e/
+
+It might allow things like using the SRM firmware.
+The ARC (NT) firmware would allow building and testing Windows applications for that platform without having the relevant hardware.
+
+QEMU does not really implement a "true" ev67.
+
+We cheat and implement something that is significantly faster to emulate.
+E.g. doing all TLB refills within QEMU, rather than in the PALcode.
+
+So, no, there's no chance of running true SRM or ARC firmware.
+
+But in that case it's impossible to emulate or even compile Windows for DEC Alpha.
+
diff --git a/results/classifier/118/performance/1494 b/results/classifier/118/performance/1494
new file mode 100644
index 00000000..4095f7a2
--- /dev/null
+++ b/results/classifier/118/performance/1494
@@ -0,0 +1,962 @@
+performance: 0.818
+architecture: 0.778
+register: 0.759
+risc-v: 0.757
+vnc: 0.752
+virtual: 0.732
+ppc: 0.730
+permissions: 0.729
+graphic: 0.729
+x86: 0.726
+device: 0.725
+semantic: 0.723
+user-level: 0.720
+debug: 0.701
+peripherals: 0.700
+VMM: 0.690
+files: 0.682
+arm: 0.681
+TCG: 0.665
+mistranslation: 0.662
+hypervisor: 0.658
+assembly: 0.655
+PID: 0.648
+network: 0.644
+KVM: 0.628
+socket: 0.623
+kernel: 0.587
+boot: 0.586
+i386: 0.454
+
+[regression] [bisected] [coredump] qemu-x86_64 segfaults on ppc64le (4K page size) when downloading go dependencies: unexpected fault address 0x0
+Description of problem:
+qemu-x86_64 segfaults when trying to compile yay inside an Arch Linux x86_64 chroot from a Gentoo Linux ppc64le (4K page size) host. Hardware is a Raptor CS Talos 2 Power 9.
+
+It works with qemu-7.2, so this is a regression in git master (or, less likely, with the patch).
+
+```
+[niko@talos2 yay]$ makepkg -s
+==> Making package: yay 11.3.2-1 (Wed 15 Feb 2023 05:03:01 PM CET)
+==> Checking runtime dependencies...
+==> Checking buildtime dependencies...
+==> Retrieving sources...
+  -> Found yay-11.3.2.tar.gz
+==> Validating source files with sha256sums...
+    yay-11.3.2.tar.gz ... Passed
+==> Extracting sources...
+  -> Extracting yay-11.3.2.tar.gz with bsdtar
+==> Removing existing $pkgdir/ directory...
+==> Starting build()...
+go build -trimpath -mod=readonly -modcacherw  -ldflags '-X "main.yayVersion=11.3.2" -X "main.localePath=/usr/share/locale/" -linkmode=external' -buildmode=pie -o yay
+go: downloading github.com/Jguer/votar v1.0.0
+go: downloading github.com/Jguer/aur v1.0.1
+go: downloading github.com/Jguer/go-alpm/v2 v2.1.2
+go: downloading github.com/Morganamilo/go-pacmanconf v0.0.0-20210502114700-cff030e927a5
+go: downloading golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab
+go: downloading github.com/Morganamilo/go-srcinfo v1.0.0
+go: downloading golang.org/x/term v0.0.0-20220722155259-a9ba230a4035
+go: downloading github.com/leonelquinteros/gotext v1.5.0
+go: downloading github.com/adrg/strutil v0.3.0
+go: downloading golang.org/x/text v0.3.7
+make: *** [Makefile:113: yay] Illegal instruction (core dumped)
+```
+
+```
+[niko@talos2 yay]$ makepkg -s
+==> Making package: yay 11.3.2-1 (Wed 15 Feb 2023 05:10:01 PM CET)
+==> Checking runtime dependencies...
+==> Checking buildtime dependencies...
+==> Retrieving sources...
+  -> Found yay-11.3.2.tar.gz
+==> Validating source files with sha256sums...
+    yay-11.3.2.tar.gz ... Passed
+==> Extracting sources...
+  -> Extracting yay-11.3.2.tar.gz with bsdtar
+==> Starting build()...
+go build -trimpath -mod=readonly -modcacherw  -ldflags '-X "main.yayVersion=11.3.2" -X "main.localePath=/usr/share/locale/" -linkmode=external' -buildmode=pie -o yay
+go: downloading github.com/Jguer/votar v1.0.0
+go: downloading github.com/Jguer/go-alpm/v2 v2.1.2
+go: downloading github.com/Morganamilo/go-srcinfo v1.0.0
+go: downloading github.com/Jguer/aur v1.0.1
+go: downloading github.com/leonelquinteros/gotext v1.5.0
+go: downloading github.com/Morganamilo/go-pacmanconf v0.0.0-20210502114700-cff030e927a5
+go: downloading golang.org/x/term v0.0.0-20220722155259-a9ba230a4035
+go: downloading github.com/adrg/strutil v0.3.0
+go: downloading golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab
+go: downloading golang.org/x/text v0.3.7
+# math/bits
+unexpected fault address 0x0
+fatal error: fault
+[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0xabb70a]
+
+goroutine 4 [running]:
+runtime.throw({0xdbf491?, 0x1?})
+	/usr/lib/go/src/runtime/panic.go:1047 +0x5d fp=0xc0001d7750 sp=0xc0001d7720 pc=0x4389fd
+runtime.sigpanic()
+	/usr/lib/go/src/runtime/signal_unix.go:851 +0x28a fp=0xc0001d77b0 sp=0xc0001d7750 pc=0x44f4ea
+cmd/compile/internal/ssa.ValHeap.Less({{0xc0001ae1c0, 0x4, 0x8}, {0xc0001de700, 0x28, 0x100}}, 0x8?, 0xc0001de700?)
+	/usr/lib/go/src/cmd/compile/internal/ssa/schedule.go:59 +0xaa fp=0xc0001d77e0 sp=0xc0001d77b0 pc=0xabb70a
+cmd/compile/internal/ssa.(*ValHeap).Less(0x4?, 0x8?, 0xc0001de700?)
+	<autogenerated>:1 +0x77 fp=0xc0001d7860 sp=0xc0001d77e0 pc=0xad7677
+container/heap.up({0xf24240, 0xc00019eb40}, 0xc00081b370?)
+	/usr/lib/go/src/container/heap/heap.go:92 +0x7e fp=0xc0001d7898 sp=0xc0001d7860 pc=0x7024de
+container/heap.Push({0xf24240, 0xc00019eb40}, {0xdb1f80?, 0xc00081b370?})
+	/usr/lib/go/src/container/heap/heap.go:53 +0x5a fp=0xc0001d78c0 sp=0xc0001d7898 pc=0x7022da
+cmd/compile/internal/ssa.schedule(0xc0004ea000)
+	/usr/lib/go/src/cmd/compile/internal/ssa/schedule.go:349 +0x151c fp=0xc0001d7eb0 sp=0xc0001d78c0 pc=0xabcd9c
+cmd/compile/internal/ssa.Compile(0xc0004ea000)
+	/usr/lib/go/src/cmd/compile/internal/ssa/compile.go:97 +0x963 fp=0xc0001dbb68 sp=0xc0001d7eb0 pc=0x76bc43
+cmd/compile/internal/ssagen.buildssa(0xc00071b540, 0x2)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:572 +0x2027 fp=0xc0001dbea8 sp=0xc0001dbb68 pc=0xaf0527
+cmd/compile/internal/ssagen.Compile(0xc00071b540, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:185 +0x4c fp=0xc0001dbf70 sp=0xc0001dbea8 pc=0xae796c
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0001dbfb0 sp=0xc0001dbf70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0001dbfe0 sp=0xc0001dbfb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0001dbfe8 sp=0xc0001dbfe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 1 [semacquire]:
+runtime.gopark(0xc0000062e8?, 0xc00019a050?, 0x0?, 0xa0?, 0x4027dba128?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc0005ad968 sp=0xc0005ad948 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.semacquire1(0xc0008c4768, 0x20?, 0x1, 0x0, 0x0?)
+	/usr/lib/go/src/runtime/sema.go:160 +0x20f fp=0xc0005ad9d0 sp=0xc0005ad968 pc=0x44c9af
+sync.runtime_Semacquire(0xc0008b8000?)
+	/usr/lib/go/src/runtime/sema.go:62 +0x27 fp=0xc0005ada08 sp=0xc0005ad9d0 pc=0x46a6e7
+sync.(*WaitGroup).Wait(0xc000659800?)
+	/usr/lib/go/src/sync/waitgroup.go:116 +0x4b fp=0xc0005ada30 sp=0xc0005ada08 pc=0x487deb
+cmd/compile/internal/gc.compileFunctions()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:183 +0x235 fp=0xc0005ada88 sp=0xc0005ada30 pc=0xcc6675
+cmd/compile/internal/gc.Main(0xdf8e28)
+	/usr/lib/go/src/cmd/compile/internal/gc/main.go:332 +0x13aa fp=0xc0005adf20 sp=0xc0005ada88 pc=0xcc86aa
+main.main()
+	/usr/lib/go/src/cmd/compile/main.go:57 +0xdd fp=0xc0005adf80 sp=0xc0005adf20 pc=0xcf00bd
+runtime.main()
+	/usr/lib/go/src/runtime/proc.go:250 +0x207 fp=0xc0005adfe0 sp=0xc0005adf80 pc=0x43b2e7
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0005adfe8 sp=0xc0005adfe0 pc=0x46e201
+
+goroutine 2 [force gc (idle)]:
+runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009afb0 sp=0xc00009af90 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.forcegchelper()
+	/usr/lib/go/src/runtime/proc.go:305 +0xb0 fp=0xc00009afe0 sp=0xc00009afb0 pc=0x43b550
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009afe8 sp=0xc00009afe0 pc=0x46e201
+created by runtime.init.6
+	/usr/lib/go/src/runtime/proc.go:293 +0x25
+
+goroutine 17 [GC sweep wait]:
+runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc000096780 sp=0xc000096760 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.bgsweep(0x0?)
+	/usr/lib/go/src/runtime/mgcsweep.go:278 +0x8e fp=0xc0000967c8 sp=0xc000096780 pc=0x425cce
+runtime.gcenable.func1()
+	/usr/lib/go/src/runtime/mgc.go:178 +0x26 fp=0xc0000967e0 sp=0xc0000967c8 pc=0x41aee6
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000967e8 sp=0xc0000967e0 pc=0x46e201
+created by runtime.gcenable
+	/usr/lib/go/src/runtime/mgc.go:178 +0x6b
+
+goroutine 18 [GC scavenge wait]:
+runtime.gopark(0xc000194000?, 0xf1ad58?, 0x1?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc000096f70 sp=0xc000096f50 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.(*scavengerState).park(0x1487ce0)
+	/usr/lib/go/src/runtime/mgcscavenge.go:400 +0x53 fp=0xc000096fa0 sp=0xc000096f70 pc=0x423c13
+runtime.bgscavenge(0x0?)
+	/usr/lib/go/src/runtime/mgcscavenge.go:628 +0x45 fp=0xc000096fc8 sp=0xc000096fa0 pc=0x4241e5
+runtime.gcenable.func2()
+	/usr/lib/go/src/runtime/mgc.go:179 +0x26 fp=0xc000096fe0 sp=0xc000096fc8 pc=0x41ae86
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000096fe8 sp=0xc000096fe0 pc=0x46e201
+created by runtime.gcenable
+	/usr/lib/go/src/runtime/mgc.go:179 +0xaa
+
+goroutine 33 [finalizer wait]:
+runtime.gopark(0x43ba92?, 0x4027cf9f48?, 0x0?, 0x0?, 0xc00009a770?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009a628 sp=0xc00009a608 pc=0x43b716
+runtime.runfinq()
+	/usr/lib/go/src/runtime/mfinal.go:193 +0x107 fp=0xc00009a7e0 sp=0xc00009a628 pc=0x419f27
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009a7e8 sp=0xc00009a7e0 pc=0x46e201
+created by runtime.createfing
+	/usr/lib/go/src/runtime/mfinal.go:163 +0x45
+
+goroutine 49 [select]:
+runtime.gopark(0xc0004e6fb0?, 0x2?, 0xf0?, 0x6d?, 0xc0004e6f6c?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc0004e6da0 sp=0xc0004e6d80 pc=0x43b716
+runtime.selectgo(0xc0004e6fb0, 0xc0004e6f68, 0xc000504000?, 0x0, 0xd26040?, 0x1)
+	/usr/lib/go/src/runtime/select.go:327 +0x7be fp=0xc0004e6ee0 sp=0xc0004e6da0 pc=0x44b8be
+cmd/compile/internal/gc.compileFunctions.func3()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:141 +0x132 fp=0xc0004e6fe0 sp=0xc0004e6ee0 pc=0xcc6a12
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0004e6fe8 sp=0xc0004e6fe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:134 +0xf8
+
+goroutine 3 [runnable]:
+runtime.asyncPreempt2()
+	/usr/lib/go/src/runtime/preempt.go:307 +0x3f fp=0xc0004f7858 sp=0xc0004f7838 pc=0x439e1f
+runtime.asyncPreempt()
+	/usr/lib/go/src/runtime/preempt_amd64.s:53 +0xdb fp=0xc0004f79e0 sp=0xc0004f7858 pc=0x46f85b
+cmd/compile/internal/ssa.is64BitInt(0xc000141480)
+	/usr/lib/go/src/cmd/compile/internal/ssa/rewrite.go:218 +0xa fp=0xc0004f79e8 sp=0xc0004f79e0 pc=0x7e2e8a
+cmd/compile/internal/ssa.rewriteValueAMD64_OpLoad(0xc00086a458)
+	/usr/lib/go/src/cmd/compile/internal/ssa/rewriteAMD64.go:29312 +0x51 fp=0xc0004f7a28 sp=0xc0004f79e8 pc=0x884911
+cmd/compile/internal/ssa.rewriteValueAMD64(0xc00089f678?)
+	/usr/lib/go/src/cmd/compile/internal/ssa/rewriteAMD64.go:838 +0x31be fp=0xc0004f7a48 sp=0xc0004f7a28 pc=0x819bbe
+cmd/compile/internal/ssa.applyRewrite(0xc0001b2000, 0xdf92a8, 0xdf9348, 0x1)
+	/usr/lib/go/src/cmd/compile/internal/ssa/rewrite.go:133 +0x1016 fp=0xc0004f7e80 sp=0xc0004f7a48 pc=0x7e27d6
+cmd/compile/internal/ssa.lower(0xc0001b2000?)
+	/usr/lib/go/src/cmd/compile/internal/ssa/lower.go:10 +0x2f fp=0xc0004f7eb0 sp=0xc0004f7e80 pc=0x7b4c4f
+cmd/compile/internal/ssa.Compile(0xc0001b2000)
+	/usr/lib/go/src/cmd/compile/internal/ssa/compile.go:97 +0x963 fp=0xc0004fbb68 sp=0xc0004f7eb0 pc=0x76bc43
+cmd/compile/internal/ssagen.buildssa(0xc00071b900, 0x3)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:572 +0x2027 fp=0xc0004fbea8 sp=0xc0004fbb68 pc=0xaf0527
+cmd/compile/internal/ssagen.Compile(0xc00071b900, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:185 +0x4c fp=0xc0004fbf70 sp=0xc0004fbea8 pc=0xae796c
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0004fbfb0 sp=0xc0004fbf70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0004fbfe0 sp=0xc0004fbfb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0004fbfe8 sp=0xc0004fbfe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 50 [runnable]:
+cmd/compile/internal/ssa.(*Value).clobbersFlags(0xc0007ce6d8?)
+	/usr/lib/go/src/cmd/compile/internal/ssa/flagalloc.go:243 +0xd5 fp=0xc0000dbbf0 sp=0xc0000dbbe8 pc=0x79c375
+cmd/compile/internal/ssa.flagalloc(0xc0000ca000)
+	/usr/lib/go/src/cmd/compile/internal/ssa/flagalloc.go:39 +0x172a fp=0xc0000dbeb0 sp=0xc0000dbbf0 pc=0x79c0ca
+cmd/compile/internal/ssa.Compile(0xc0000ca000)
+	/usr/lib/go/src/cmd/compile/internal/ssa/compile.go:97 +0x963 fp=0xc0000dfb68 sp=0xc0000dbeb0 pc=0x76bc43
+cmd/compile/internal/ssagen.buildssa(0xc00071ba40, 0x1)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:572 +0x2027 fp=0xc0000dfea8 sp=0xc0000dfb68 pc=0xaf0527
+cmd/compile/internal/ssagen.Compile(0xc00071ba40, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:185 +0x4c fp=0xc0000dff70 sp=0xc0000dfea8 pc=0xae796c
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0000dffb0 sp=0xc0000dff70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0000dffe0 sp=0xc0000dffb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000dffe8 sp=0xc0000dffe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 51 [runnable]:
+cmd/compile/internal/ssa.(*Value).clobbersFlags(0xc000780d90?)
+	/usr/lib/go/src/cmd/compile/internal/ssa/flagalloc.go:243 +0xd5 fp=0xc00091bbf0 sp=0xc00091bbe8 pc=0x79c375
+cmd/compile/internal/ssa.flagalloc(0xc000774540)
+	/usr/lib/go/src/cmd/compile/internal/ssa/flagalloc.go:39 +0x172a fp=0xc00091beb0 sp=0xc00091bbf0 pc=0x79c0ca
+cmd/compile/internal/ssa.Compile(0xc000774540)
+	/usr/lib/go/src/cmd/compile/internal/ssa/compile.go:97 +0x963 fp=0xc00091fb68 sp=0xc00091beb0 pc=0x76bc43
+cmd/compile/internal/ssagen.buildssa(0xc000703900, 0x0)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:572 +0x2027 fp=0xc00091fea8 sp=0xc00091fb68 pc=0xaf0527
+cmd/compile/internal/ssagen.Compile(0xc000703900, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:185 +0x4c fp=0xc00091ff70 sp=0xc00091fea8 pc=0xae796c
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc00091ffb0 sp=0xc00091ff70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc00091ffe0 sp=0xc00091ffb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00091ffe8 sp=0xc00091ffe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+# unicode/utf8
+unexpected fault address 0x0
+fatal error: fault
+[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x411410]
+
+goroutine 19 [running]:
+runtime.throw({0xdbf491?, 0x4000804d28?})
+	/usr/lib/go/src/runtime/panic.go:1047 +0x5d fp=0xc0004f1830 sp=0xc0004f1800 pc=0x4389fd
+runtime.sigpanic()
+	/usr/lib/go/src/runtime/signal_unix.go:851 +0x28a fp=0xc0004f1890 sp=0xc0004f1830 pc=0x44f4ea
+runtime.mapaccess2_fast32(0xc0009b3c00, 0xc000562000, 0x6be729)
+	/usr/lib/go/src/runtime/map_fast32.go:53 +0x170 fp=0xc0004f1898 sp=0xc0004f1890 pc=0x411410
+cmd/compile/internal/ssagen.genssa(0xc000562000, 0xc000988ee0)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:6964 +0x965 fp=0xc0004f1ea8 sp=0xc0004f1898 pc=0xb27345
+cmd/compile/internal/ssagen.Compile(0xc0000fcb40, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:195 +0x26f fp=0xc0004f1f70 sp=0xc0004f1ea8 pc=0xae7b8f
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0004f1fb0 sp=0xc0004f1f70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0004f1fe0 sp=0xc0004f1fb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0004f1fe8 sp=0xc0004f1fe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 1 [semacquire]:
+runtime.gopark(0x20?, 0xd7ca20?, 0x0?, 0x60?, 0xc00003a600?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc0005a9968 sp=0xc0005a9948 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.semacquire1(0xc0006d5a88, 0x20?, 0x1, 0x0, 0x0?)
+	/usr/lib/go/src/runtime/sema.go:160 +0x20f fp=0xc0005a99d0 sp=0xc0005a9968 pc=0x44c9af
+sync.runtime_Semacquire(0xc0000fdb80?)
+	/usr/lib/go/src/runtime/sema.go:62 +0x27 fp=0xc0005a9a08 sp=0xc0005a99d0 pc=0x46a6e7
+sync.(*WaitGroup).Wait(0xc0008ca400?)
+	/usr/lib/go/src/sync/waitgroup.go:116 +0x4b fp=0xc0005a9a30 sp=0xc0005a9a08 pc=0x487deb
+cmd/compile/internal/gc.compileFunctions()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:183 +0x235 fp=0xc0005a9a88 sp=0xc0005a9a30 pc=0xcc6675
+cmd/compile/internal/gc.Main(0xdf8e28)
+	/usr/lib/go/src/cmd/compile/internal/gc/main.go:332 +0x13aa fp=0xc0005a9f20 sp=0xc0005a9a88 pc=0xcc86aa
+main.main()
+	/usr/lib/go/src/cmd/compile/main.go:57 +0xdd fp=0xc0005a9f80 sp=0xc0005a9f20 pc=0xcf00bd
+runtime.main()
+	/usr/lib/go/src/runtime/proc.go:250 +0x207 fp=0xc0005a9fe0 sp=0xc0005a9f80 pc=0x43b2e7
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0005a9fe8 sp=0xc0005a9fe0 pc=0x46e201
+
+goroutine 2 [force gc (idle)]:
+runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009cfb0 sp=0xc00009cf90 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.forcegchelper()
+	/usr/lib/go/src/runtime/proc.go:305 +0xb0 fp=0xc00009cfe0 sp=0xc00009cfb0 pc=0x43b550
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009cfe8 sp=0xc00009cfe0 pc=0x46e201
+created by runtime.init.6
+	/usr/lib/go/src/runtime/proc.go:293 +0x25
+
+goroutine 3 [GC sweep wait]:
+runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009d780 sp=0xc00009d760 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.bgsweep(0x0?)
+	/usr/lib/go/src/runtime/mgcsweep.go:278 +0x8e fp=0xc00009d7c8 sp=0xc00009d780 pc=0x425cce
+runtime.gcenable.func1()
+	/usr/lib/go/src/runtime/mgc.go:178 +0x26 fp=0xc00009d7e0 sp=0xc00009d7c8 pc=0x41aee6
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009d7e8 sp=0xc00009d7e0 pc=0x46e201
+created by runtime.gcenable
+	/usr/lib/go/src/runtime/mgc.go:178 +0x6b
+
+goroutine 4 [GC scavenge wait]:
+runtime.gopark(0xc0000380e0?, 0xf1ad58?, 0x1?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009df70 sp=0xc00009df50 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.(*scavengerState).park(0x1487ce0)
+	/usr/lib/go/src/runtime/mgcscavenge.go:400 +0x53 fp=0xc00009dfa0 sp=0xc00009df70 pc=0x423c13
+runtime.bgscavenge(0x0?)
+	/usr/lib/go/src/runtime/mgcscavenge.go:628 +0x45 fp=0xc00009dfc8 sp=0xc00009dfa0 pc=0x4241e5
+runtime.gcenable.func2()
+	/usr/lib/go/src/runtime/mgc.go:179 +0x26 fp=0xc00009dfe0 sp=0xc00009dfc8 pc=0x41ae86
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009dfe8 sp=0xc00009dfe0 pc=0x46e201
+created by runtime.gcenable
+	/usr/lib/go/src/runtime/mgc.go:179 +0xaa
+
+goroutine 17 [finalizer wait]:
+runtime.gopark(0x43ba92?, 0x4027cf9fe8?, 0x0?, 0x0?, 0xc00009c770?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009c628 sp=0xc00009c608 pc=0x43b716
+runtime.runfinq()
+	/usr/lib/go/src/runtime/mfinal.go:193 +0x107 fp=0xc00009c7e0 sp=0xc00009c628 pc=0x419f27
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009c7e8 sp=0xc00009c7e0 pc=0x46e201
+created by runtime.createfing
+	/usr/lib/go/src/runtime/mfinal.go:163 +0x45
+
+goroutine 49 [runnable]:
+runtime.asyncPreempt2()
+	/usr/lib/go/src/runtime/preempt.go:307 +0x3f fp=0xc0009c1308 sp=0xc0009c12e8 pc=0x439e1f
+runtime.asyncPreempt()
+	/usr/lib/go/src/runtime/preempt_amd64.s:53 +0xdb fp=0xc0009c1490 sp=0xc0009c1308 pc=0x46f85b
+cmd/compile/internal/ssa.PopulateABIInRegArgOps(0xc000754700)
+	/usr/lib/go/src/cmd/compile/internal/ssa/debug.go:436 +0x57 fp=0xc0009c16f0 sp=0xc0009c1490 pc=0x779bb7
+cmd/compile/internal/ssa.BuildFuncDebug(0xc00041e5a0?, 0xc000754700, 0xc000000049?, 0xa?, 0xc00098c000)
+	/usr/lib/go/src/cmd/compile/internal/ssa/debug.go:578 +0x1d6 fp=0xc0009c1898 sp=0xc0009c16f0 pc=0x77adb6
+cmd/compile/internal/ssagen.genssa(0xc000754700, 0xc0004bfce0)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:7157 +0xf2b fp=0xc0009c1ea8 sp=0xc0009c1898 pc=0xb2790b
+cmd/compile/internal/ssagen.Compile(0xc0000fc8c0, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:195 +0x26f fp=0xc0009c1f70 sp=0xc0009c1ea8 pc=0xae7b8f
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0009c1fb0 sp=0xc0009c1f70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0009c1fe0 sp=0xc0009c1fb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0009c1fe8 sp=0xc0009c1fe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 5 [select]:
+runtime.gopark(0xc00009e7b0?, 0x2?, 0xf0?, 0xe5?, 0xc00009e76c?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009e5a0 sp=0xc00009e580 pc=0x43b716
+runtime.selectgo(0xc00009e7b0, 0xc00009e768, 0x0?, 0x0, 0xd26040?, 0x1)
+	/usr/lib/go/src/runtime/select.go:327 +0x7be fp=0xc00009e6e0 sp=0xc00009e5a0 pc=0x44b8be
+cmd/compile/internal/gc.compileFunctions.func3()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:141 +0x132 fp=0xc00009e7e0 sp=0xc00009e6e0 pc=0xcc6a12
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009e7e8 sp=0xc00009e7e0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:134 +0xf8
+
+goroutine 50 [runnable]:
+runtime.asyncPreempt2()
+	/usr/lib/go/src/runtime/preempt.go:307 +0x3f fp=0xc0001cd308 sp=0xc0001cd2e8 pc=0x439e1f
+runtime.asyncPreempt()
+	/usr/lib/go/src/runtime/preempt_amd64.s:53 +0xdb fp=0xc0001cd490 sp=0xc0001cd308 pc=0x46f85b
+cmd/compile/internal/ssa.PopulateABIInRegArgOps(0xc000192000)
+	/usr/lib/go/src/cmd/compile/internal/ssa/debug.go:436 +0x57 fp=0xc0001cd6f0 sp=0xc0001cd490 pc=0x779bb7
+cmd/compile/internal/ssa.BuildFuncDebug(0xc0001a65a0?, 0xc000192000, 0xc000000049?, 0x12?, 0xc0001a4000)
+	/usr/lib/go/src/cmd/compile/internal/ssa/debug.go:578 +0x1d6 fp=0xc0001cd898 sp=0xc0001cd6f0 pc=0x77adb6
+cmd/compile/internal/ssagen.genssa(0xc000192000, 0xc00019f260)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:7157 +0xf2b fp=0xc0001cdea8 sp=0xc0001cd898 pc=0xb2790b
+cmd/compile/internal/ssagen.Compile(0xc0000fca00, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:195 +0x26f fp=0xc0001cdf70 sp=0xc0001cdea8 pc=0xae7b8f
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0001cdfb0 sp=0xc0001cdf70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0001cdfe0 sp=0xc0001cdfb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0001cdfe8 sp=0xc0001cdfe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 20 [runnable]:
+runtime.asyncPreempt2()
+	/usr/lib/go/src/runtime/preempt.go:307 +0x3f fp=0xc0008e10f8 sp=0xc0008e10d8 pc=0x439e1f
+runtime.asyncPreempt()
+	/usr/lib/go/src/runtime/preempt_amd64.s:53 +0xdb fp=0xc0008e1280 sp=0xc0008e10f8 pc=0x46f85b
+cmd/compile/internal/ssagen.AddrAuto(0xc000201ed0, 0x81308171a15?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:7649 +0x94 fp=0xc0008e12a8 sp=0xc0008e1280 pc=0xb2d9d4
+cmd/compile/internal/amd64.ssaGenValue(0xc0008bec60, 0xc000781ab0)
+	/usr/lib/go/src/cmd/compile/internal/amd64/ssa.go:1037 +0x13dc fp=0xc0008e1898 sp=0xc0008e12a8 pc=0xb3d6bc
+cmd/compile/internal/ssagen.genssa(0xc0004e4000, 0xc0008e8070)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:7024 +0x3ff8 fp=0xc0008e1ea8 sp=0xc0008e1898 pc=0xb2a9d8
+cmd/compile/internal/ssagen.Compile(0xc0000fcc80, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:195 +0x26f fp=0xc0008e1f70 sp=0xc0008e1ea8 pc=0xae7b8f
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0008e1fb0 sp=0xc0008e1f70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0008e1fe0 sp=0xc0008e1fb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0008e1fe8 sp=0xc0008e1fe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+# internal/reflectlite
+unexpected fault address 0x0
+fatal error: fault
+[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x66b360]
+
+goroutine 6 [running]:
+runtime.throw({0xdbf491?, 0x8?})
+	/usr/lib/go/src/runtime/panic.go:1047 +0x5d fp=0xc0000dc720 sp=0xc0000dc6f0 pc=0x4389fd
+runtime.sigpanic()
+	/usr/lib/go/src/runtime/signal_unix.go:851 +0x28a fp=0xc0000dc780 sp=0xc0000dc720 pc=0x44f4ea
+cmd/compile/internal/abi.(*assignState).regAllocate(0xc0000dc7a0, 0x41b2b1, {0x14cdd40, 0xc0000dc808}, 0xa8)
+	/usr/lib/go/src/cmd/compile/internal/abi/abiutils.go:607 fp=0xc0000dc788 sp=0xc0000dc780 pc=0x66b360
+cmd/compile/internal/abi.(*assignState).assignParamOrReturn(0xc0000dc8a8, 0xc0008977a0, {0xf23198, 0xc000a0b080}, 0x20?)
+	/usr/lib/go/src/cmd/compile/internal/abi/abiutils.go:777 +0x165 fp=0xc0000dc840 sp=0xc0000dc788 pc=0x66bae5
+cmd/compile/internal/abi.(*ABIConfig).ABIAnalyzeFuncType(0xc0000bca60, 0xc00089b710)
+	/usr/lib/go/src/cmd/compile/internal/abi/abiutils.go:404 +0x3d7 fp=0xc0000dc9f8 sp=0xc0000dc840 pc=0x669a17
+cmd/compile/internal/abi.(*ABIConfig).ABIAnalyze(0xd41a80?, 0xc0000d0600?, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/abi/abiutils.go:432 +0x5d fp=0xc0000dcb68 sp=0xc0000dc9f8 pc=0x669e7d
+cmd/compile/internal/ssagen.buildssa(0xc0008cc500, 0x1)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:455 +0x1209 fp=0xc0000dcea8 sp=0xc0000dcb68 pc=0xaef709
+cmd/compile/internal/ssagen.Compile(0xc0008cc500, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:185 +0x4c fp=0xc0000dcf70 sp=0xc0000dcea8 pc=0xae796c
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0000dcfb0 sp=0xc0000dcf70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0000dcfe0 sp=0xc0000dcfb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000dcfe8 sp=0xc0000dcfe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 1 [runnable]:
+runtime.gopark(0xc0000be000?, 0xc0004edec0?, 0xb0?, 0x99?, 0xc000739978?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc000739910 sp=0xc0007398f0 pc=0x43b716
+runtime.chansend(0xc000d540c0, 0xc0007399e8, 0x1, 0xc0007399d8?)
+	/usr/lib/go/src/runtime/chan.go:259 +0x42e fp=0xc000739998 sp=0xc000739910 pc=0x40602e
+runtime.chansend1(0xd7ca20?, 0x4000803501?)
+	/usr/lib/go/src/runtime/chan.go:145 +0x1d fp=0xc0007399c8 sp=0xc000739998 pc=0x405bdd
+cmd/compile/internal/gc.compileFunctions.func4(0xc00180cc40)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:160 +0x27 fp=0xc0007399e8 sp=0xc0007399c8 pc=0xcc68a7
+cmd/compile/internal/gc.compileFunctions.func5({0xc001099500, 0x222, 0x350?})
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:170 +0x53 fp=0xc000739a30 sp=0xc0007399e8 pc=0xcc6713
+cmd/compile/internal/gc.compileFunctions()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:181 +0x1ff fp=0xc000739a88 sp=0xc000739a30 pc=0xcc663f
+cmd/compile/internal/gc.Main(0xdf8e28)
+	/usr/lib/go/src/cmd/compile/internal/gc/main.go:332 +0x13aa fp=0xc000739f20 sp=0xc000739a88 pc=0xcc86aa
+main.main()
+	/usr/lib/go/src/cmd/compile/main.go:57 +0xdd fp=0xc000739f80 sp=0xc000739f20 pc=0xcf00bd
+runtime.main()
+	/usr/lib/go/src/runtime/proc.go:250 +0x207 fp=0xc000739fe0 sp=0xc000739f80 pc=0x43b2e7
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000739fe8 sp=0xc000739fe0 pc=0x46e201
+
+goroutine 2 [force gc (idle)]:
+runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009cfb0 sp=0xc00009cf90 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.forcegchelper()
+	/usr/lib/go/src/runtime/proc.go:305 +0xb0 fp=0xc00009cfe0 sp=0xc00009cfb0 pc=0x43b550
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009cfe8 sp=0xc00009cfe0 pc=0x46e201
+created by runtime.init.6
+	/usr/lib/go/src/runtime/proc.go:293 +0x25
+
+goroutine 3 [GC sweep wait]:
+runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009d780 sp=0xc00009d760 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.bgsweep(0x0?)
+	/usr/lib/go/src/runtime/mgcsweep.go:278 +0x8e fp=0xc00009d7c8 sp=0xc00009d780 pc=0x425cce
+runtime.gcenable.func1()
+	/usr/lib/go/src/runtime/mgc.go:178 +0x26 fp=0xc00009d7e0 sp=0xc00009d7c8 pc=0x41aee6
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009d7e8 sp=0xc00009d7e0 pc=0x46e201
+created by runtime.gcenable
+	/usr/lib/go/src/runtime/mgc.go:178 +0x6b
+
+goroutine 4 [GC scavenge wait]:
+runtime.gopark(0xc0000380e0?, 0xf1ad58?, 0x1?, 0x0?, 0x0?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009df70 sp=0xc00009df50 pc=0x43b716
+runtime.goparkunlock(...)
+	/usr/lib/go/src/runtime/proc.go:387
+runtime.(*scavengerState).park(0x1487ce0)
+	/usr/lib/go/src/runtime/mgcscavenge.go:400 +0x53 fp=0xc00009dfa0 sp=0xc00009df70 pc=0x423c13
+runtime.bgscavenge(0x0?)
+	/usr/lib/go/src/runtime/mgcscavenge.go:628 +0x45 fp=0xc00009dfc8 sp=0xc00009dfa0 pc=0x4241e5
+runtime.gcenable.func2()
+	/usr/lib/go/src/runtime/mgc.go:179 +0x26 fp=0xc00009dfe0 sp=0xc00009dfc8 pc=0x41ae86
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009dfe8 sp=0xc00009dfe0 pc=0x46e201
+created by runtime.gcenable
+	/usr/lib/go/src/runtime/mgc.go:179 +0xaa
+
+goroutine 17 [finalizer wait]:
+runtime.gopark(0x43ba92?, 0x4027d38828?, 0x0?, 0x0?, 0xc00009c770?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009c628 sp=0xc00009c608 pc=0x43b716
+runtime.runfinq()
+	/usr/lib/go/src/runtime/mfinal.go:193 +0x107 fp=0xc00009c7e0 sp=0xc00009c628 pc=0x419f27
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009c7e8 sp=0xc00009c7e0 pc=0x46e201
+created by runtime.createfing
+	/usr/lib/go/src/runtime/mfinal.go:163 +0x45
+
+goroutine 5 [runnable]:
+runtime.asyncPreempt2()
+	/usr/lib/go/src/runtime/preempt.go:307 +0x3f fp=0xc000194658 sp=0xc000194638 pc=0x439e1f
+runtime.asyncPreempt()
+	/usr/lib/go/src/runtime/preempt_amd64.s:53 +0xdb fp=0xc0001947e0 sp=0xc000194658 pc=0x46f85b
+cmd/compile/internal/ssagen.(*state).stmt(0xc0001fe500, {0xf2a400, 0xc000a990a0?})
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1432 +0xbb fp=0xc000194b68 sp=0xc0001947e0 pc=0xaf509b
+cmd/compile/internal/ssagen.(*state).stmtList(...)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1421
+cmd/compile/internal/ssagen.buildssa(0xc0005d9540, 0x2)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:552 +0x1ee6 fp=0xc000194ea8 sp=0xc000194b68 pc=0xaf03e6
+cmd/compile/internal/ssagen.Compile(0xc0005d9540, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:185 +0x4c fp=0xc000194f70 sp=0xc000194ea8 pc=0xae796c
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc000194fb0 sp=0xc000194f70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc000194fe0 sp=0xc000194fb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000194fe8 sp=0xc000194fe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 33 [select]:
+runtime.gopark(0xc0000997b0?, 0x2?, 0xf0?, 0x95?, 0xc00009976c?)
+	/usr/lib/go/src/runtime/proc.go:381 +0xd6 fp=0xc0000995a0 sp=0xc000099580 pc=0x43b716
+runtime.selectgo(0xc0000997b0, 0xc000099768, 0xc000110000?, 0x0, 0xd26040?, 0x1)
+	/usr/lib/go/src/runtime/select.go:327 +0x7be fp=0xc0000996e0 sp=0xc0000995a0 pc=0x44b8be
+cmd/compile/internal/gc.compileFunctions.func3()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:141 +0x132 fp=0xc0000997e0 sp=0xc0000996e0 pc=0xcc6a12
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000997e8 sp=0xc0000997e0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:134 +0xf8
+
+goroutine 22 [runnable]:
+runtime.asyncPreempt2()
+	/usr/lib/go/src/runtime/preempt.go:307 +0x3f fp=0xc0000ad2d0 sp=0xc0000ad2b0 pc=0x439e1f
+runtime.asyncPreempt()
+	/usr/lib/go/src/runtime/preempt_amd64.s:53 +0xdb fp=0xc0000ad458 sp=0xc0000ad2d0 pc=0x46f85b
+cmd/compile/internal/ssagen.(*state).stmt(0xc0016ada00, {0xf295c0, 0xc00134d450?})
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1455 +0x19a fp=0xc0000ad7e0 sp=0xc0000ad458 pc=0xaf517a
+cmd/compile/internal/ssagen.(*state).stmtList(...)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1421
+cmd/compile/internal/ssagen.(*state).stmt(0xc0016ada00, {0xf29800, 0xc0013eebc0?})
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1441 +0x45e5 fp=0xc0000adb68 sp=0xc0000ad7e0 pc=0xaf95c5
+cmd/compile/internal/ssagen.(*state).stmtList(...)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1421
+cmd/compile/internal/ssagen.buildssa(0xc0005d8b40, 0x3)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:552 +0x1ee6 fp=0xc0000adea8 sp=0xc0000adb68 pc=0xaf03e6
+cmd/compile/internal/ssagen.Compile(0xc0005d8b40, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:185 +0x4c fp=0xc0000adf70 sp=0xc0000adea8 pc=0xae796c
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc0000adfb0 sp=0xc0000adf70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc0000adfe0 sp=0xc0000adfb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000adfe8 sp=0xc0000adfe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+
+goroutine 49 [runnable]:
+runtime.asyncPreempt2()
+	/usr/lib/go/src/runtime/preempt.go:307 +0x3f fp=0xc0017932d0 sp=0xc0017932b0 pc=0x439e1f
+runtime.asyncPreempt()
+	/usr/lib/go/src/runtime/preempt_amd64.s:53 +0xdb fp=0xc001793458 sp=0xc0017932d0 pc=0x46f85b
+cmd/compile/internal/ssagen.(*state).stmt(0xc001794000, {0xf295c0, 0xc00140a960?})
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1455 +0x19a fp=0xc0017937e0 sp=0xc001793458 pc=0xaf517a
+cmd/compile/internal/ssagen.(*state).stmtList(...)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1421
+cmd/compile/internal/ssagen.(*state).stmt(0xc001794000, {0xf2a400, 0xc000a92a10?})
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1436 +0x150 fp=0xc001793b68 sp=0xc0017937e0 pc=0xaf5130
+cmd/compile/internal/ssagen.(*state).stmtList(...)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:1421
+cmd/compile/internal/ssagen.buildssa(0xc0005d9680, 0x0)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/ssa.go:552 +0x1ee6 fp=0xc001793ea8 sp=0xc001793b68 pc=0xaf03e6
+cmd/compile/internal/ssagen.Compile(0xc0005d9680, 0x0?)
+	/usr/lib/go/src/cmd/compile/internal/ssagen/pgen.go:185 +0x4c fp=0xc001793f70 sp=0xc001793ea8 pc=0xae796c
+cmd/compile/internal/gc.compileFunctions.func5.1(0x0?)
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:171 +0x3a fp=0xc001793fb0 sp=0xc001793f70 pc=0xcc681a
+cmd/compile/internal/gc.compileFunctions.func3.1()
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:153 +0x32 fp=0xc001793fe0 sp=0xc001793fb0 pc=0xcc6c52
+runtime.goexit()
+	/usr/lib/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc001793fe8 sp=0xc001793fe0 pc=0x46e201
+created by cmd/compile/internal/gc.compileFunctions.func3
+	/usr/lib/go/src/cmd/compile/internal/gc/compile.go:152 +0x245
+```
+Steps to reproduce:
+1. Create an Arch Linux chroot from a bootstrap tarball: https://wiki.archlinux.org/title/Install_Arch_Linux_from_existing_Linux#Method_A:_Using_the_bootstrap_tarball_(recommended)
+2. Chroot into it using the following script:
+```
+#!/bin/bash
+
+basedir="/home/niko/chroots/arch-x86_64"
+cp /etc/resolv.conf ${basedir}/etc/
+cp /usr/bin/qemu-x86_64 ${basedir}/usr/bin/
+sed -i 's!#Server = http://archlinux.mirror.garr.it/archlinux/$repo/os/$arch!Server = http://archlinux.mirror.garr.it/archlinux/$repo/os/$a>
+mount --make-slave --bind  ${basedir} ${basedir}
+mount -t proc none ${basedir}/proc
+mount -t sysfs none ${basedir}/sys/
+mount --make-rslave --rbind /dev ${basedir}/dev
+mount --make-rslave --rbind /run ${basedir}/run
+chroot ${basedir} /bin/bash
+sleep 3
+umount -R ${basedir}/run
+umount -R ${basedir}/dev
+umount ${basedir}/sys
+umount ${basedir}/proc
+umount ${basedir}
+mount | grep chroots | grep arch-x86_64 | grep -v snap
+```
+3. Initialize the pacman keyring and update the system:
+```
+# pacman-key --init
+# pacman-key --populate
+# pacman -Syu
+```
+4. Download the yay `PKGBUILD` from the AUR (https://aur.archlinux.org/cgit/aur.git/plain/PKGBUILD?h=yay) and run `makepkg -s`
+5. Wait for the crash.
+Additional information:
+```
+Wed 2023-02-15 17:03:22 CET   495600 1000 1000 SIGILL  present  /home/niko/chroots/arch-x86_64/usr/bin/qemu-x86_64                         >
+Wed 2023-02-15 17:11:25 CET   509058 1000 1000 SIGSEGV present  /home/niko/chroots/arch-x86_64/usr/bin/qemu-x86_64                         >
+talos2 ~ # coredumpctl gdb 495600
+           PID: 495600 (go)
+           UID: 1000 (niko)
+           GID: 1000 (niko)
+        Signal: 4 (ILL)
+     Timestamp: Wed 2023-02-15 17:03:21 CET (13min ago)
+  Command Line: /usr/bin/qemu-x86_64 /usr/bin/go build -trimpath -mod=readonly -modcacherw -ldflags $'-X "main.yayVersion=11.3.2" -X "main.localePath=/usr/share/locale/" -linkmode=external' -buildmode=pie -o yay
+    Executable: /home/niko/chroots/arch-x86_64/usr/bin/qemu-x86_64
+ Control Group: /user.slice/user-1000.slice/user@1000.service/session.slice/vte-spawn-a3a4897b-7df3-4b3e-a8fc-91898d4e7b51.scope
+          Unit: user@1000.service
+     User Unit: vte-spawn-a3a4897b-7df3-4b3e-a8fc-91898d4e7b51.scope
+         Slice: user-1000.slice
+     Owner UID: 1000 (niko)
+       Boot ID: 33cad872d21043ebbe3dd6581bdd28c6
+    Machine ID: b3e834569b8ff461391f5ac061feb773
+      Hostname: talos2
+       Storage: /var/lib/systemd/coredump/core.go.1000.33cad872d21043ebbe3dd6581bdd28c6.495600.1676477001000000.zst (present)
+  Size on Disk: 7.4M
+       Message: Process 495600 (go) of user 1000 dumped core.
+
+GNU gdb (Gentoo 12.1 vanilla) 12.1
+Copyright (C) 2022 Free Software Foundation, Inc.
+License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
+This is free software: you are free to change and redistribute it.
+There is NO WARRANTY, to the extent permitted by law.
+Type "show copying" and "show warranty" for details.
+This GDB was configured as "powerpc64le-unknown-linux-gnu".
+Type "show configuration" for configuration details.
+For bug reporting instructions, please see:
+<https://bugs.gentoo.org/>.
+Find the GDB manual and other documentation resources online at:
+    <http://www.gnu.org/software/gdb/documentation/>.
+
+For help, type "help".
+Type "apropos word" to search for commands related to "word"...
+Reading symbols from /home/niko/chroots/arch-x86_64/usr/bin/qemu-x86_64...
+BFD: warning: /var/tmp/coredump-8CHpqc: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000002
+BFD: warning: /var/tmp/coredump-8CHpqc: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001
+BFD: warning: /var/tmp/coredump-8CHpqc: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002
+
+warning: Can't open file /usr/lib/ld-linux-x86-64.so.2 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libresolv.so.2 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libc.so.6 during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/f2/f2133743f1bb49d82c152c57fea6c71755a865029a19ff845dd27e420f5fa0be-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/89/89e115246dee235465b64002c5ab8a7df3a3f1b776d78dab9cd937c4892860a0-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/79/791d1887c70eed91dc52577c14310590649cb307ef557d46d8cc10df4704a957-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/86/86d5a3a0121a98ed0805aa57bc14d0bd85178c04054816d99495336d86c5bf5f-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/31/31d19f3051c8985f29f85ea43d9445e4b848c58a17a79d4e726280a9bced5743-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/79/79d75d9215f18cbef6b6a66468f78efd92edc13f7093f600b1552032732410aa-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/bc/bcdca8f344789eb190a1124fe919aa975d08f07250bfc6d780b0ae0cc28092fe-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/86/86e73e7b053ab6e1e1d149b5d1bbba621bfc3d0bbc66ec6310c072c82a7221e7-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/b1/b12eb8399331175352cb92b971280ba5c0c501c6ffa5c330921c3c0667c5f199-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/32/3264d3f95a5e2e731c952060b0cd4cb3bc37ff884513397336d40c935d098e5b-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/19/1977592d2d60e1dd1025609428d04f6cc17283759aae0c97bd8f35e8a341679b-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/a9/a9b261a012c19401c1fd78154a20f58bb7778e731e17f903eb3fe8ed3a5ddd59-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/96/9697f94a563c1bd04db2a82ac1770821f97548911f616dd1149bb87d0f48d65b-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/56/56e54c1dc0b6bee517915ce0bdf694a3b94f4de88b2cabb987b645e1255594fb-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/7e/7e9d9d14f25fe76951999c17680e09a181c5f14c9c4f30fe6bff8d0d669590c3-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/f4/f431652315a861a2a85b47ae12cfc99734b3db4754aa35c9158254e4ba3507c0-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/1c/1c51052ad1af6b1a1575f9bbc3f4616ed673578a285ae9a29f5548eed68c05dd-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/be/be3037525f021f1d7e2e8499d3dcc0f44be39adf70eb91979c96db3e5645bcf1-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/55/557fa6c4030abc2d7b6407ce3093ae5772939f1a2595be09dd670ecd1c5ec8f2-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/69/69a73f1b9f395cf4a1666dde8d7971a0bd9313fbfa55a5109eb02e59b301be09-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/ab/abc0750a5bd45b2346aa5bc87092f67207e9436307e3e32cb470952f87d13fb6-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/da/dadc71547f56ab68eccefd0d571599f99739a3d75acc444d97829d6ab62a6922-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/91/915619420aacc3b5964e7537365661258ab52ec44fb7ab29790258822c793de5-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/68/6834d594cb4ffe53979a0c4611bb5200e6e0c580acb42f4383ed2fb6a93d758d-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/c6/c6ccbc76ef432925fc1a5ea22ab750ac591f3e8983d2f33f54b01c799e3a274d-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/89/893c62418d079bf692b5ff17db226ae3d0fefdc87cbd0c64f30c437677a288ec-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/d8/d8666c5d7807c5a08b30f2278a1efd691c188804b3090704bd0b3f8ef06c40d9-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/d4/d401ca16783c19ff776f35305023173b63e2610e313b9a793734af80afda4e83-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/d0/d0593989dbf79e26b8bf6705325c6b44044e560a22c3ab81d320c67dcd97f1eb-d during file-backed mapping note processing
+
+warning: Can't open file /home/niko/.cache/go-build/57/572953ae015634b922f88b0191804a949206100adb6bd2454db615e2774dbe30-d during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libnss_mymachines.so.2 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libcap.so.2.67 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libgcc_s.so.1 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libnss_resolve.so.2 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libm.so.6 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libnss_myhostname.so.2 during file-backed mapping note processing
+
+warning: core file may not match specified executable file.
+[New LWP 495627]
+[New LWP 495604]
+[New LWP 495603]
+[New LWP 495614]
+[New LWP 495602]
+[New LWP 495610]
+[New LWP 495618]
+[New LWP 495606]
+[New LWP 495621]
+[New LWP 495608]
+[New LWP 495612]
+[New LWP 495629]
+[New LWP 495615]
+[New LWP 495622]
+[New LWP 495600]
+[New LWP 495605]
+[New LWP 495623]
+[New LWP 495630]
+[New LWP 495616]
+[New LWP 495633]
+[New LWP 495634]
+[New LWP 495635]
+[New LWP 495636]
+[New LWP 495637]
+[New LWP 495632]
+[New LWP 495609]
+[New LWP 495620]
+[New LWP 495617]
+[New LWP 495624]
+[New LWP 495628]
+[New LWP 495625]
+[New LWP 495607]
+[New LWP 495613]
+[New LWP 495626]
+[New LWP 495619]
+[New LWP 495611]
+[New LWP 495631]
+[Thread debugging using libthread_db enabled]
+Using host libthread_db library "/usr/lib64/libthread_db.so.1".
+Core was generated by `/usr/bin/qemu-x86_64 /usr/bin/go build -trimpath -mod=readonly -modcacherw -ldf'.
+Program terminated with signal SIGILL, Illegal instruction.
+#0  0x00003fff9d5d7284 in code_gen_buffer ()
+[Current thread is 1 (Thread 0x3fff4bf3c380 (LWP 495627))]
+(gdb) info threads
+  Id   Target Id                          Frame 
+* 1    Thread 0x3fff4bf3c380 (LWP 495627) 0x00003fff9d5d7284 in code_gen_buffer ()
+  2    Thread 0x3fffa85ec380 (LWP 495604) 0x000000001029ba2c in __lll_lock_wait ()
+  3    Thread 0x3fffa862d380 (LWP 495603) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  4    Thread 0x3fffa8362380 (LWP 495614) 0x00000000100ef210 in tb_jmp_cache_get_tb (hash=3271, jc=0x3fff6c00c5c0)
+    at ../accel/tcg/tb-jmp-cache.h:38
+  5    Thread 0x3fffa8eaf380 (LWP 495602) 0x00000000102cf6ec in syscall ()
+  6    Thread 0x3fffa8466380 (LWP 495610) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  7    Thread 0x3fffa815d380 (LWP 495618) 0x00000000101e07c8 in g_hash_table_lookup ()
+  8    Thread 0x3fffa856a380 (LWP 495606) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  9    Thread 0x3fffa809a380 (LWP 495621) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  10   Thread 0x3fffa84e8380 (LWP 495608) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  11   Thread 0x3fffa83e4380 (LWP 495612) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  12   Thread 0x3fff4beba380 (LWP 495629) 0x00003fff9c1c84b8 in code_gen_buffer ()
+  13   Thread 0x3fffa8321380 (LWP 495615) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  14   Thread 0x3fffa86ae380 (LWP 495622) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  15   Thread 0x200f4000 (LWP 495600)     safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  16   Thread 0x3fffa85ab380 (LWP 495605) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  17   Thread 0x3fffa8059380 (LWP 495623) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  18   Thread 0x3fff4be79380 (LWP 495630) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  19   Thread 0x3fffa82e0380 (LWP 495616) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  20   Thread 0x3fff4bdb6380 (LWP 495633) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  21   Thread 0x3fff4bd75380 (LWP 495634) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  22   Thread 0x3fff4bd34380 (LWP 495635) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  23   Thread 0x3fff4bcf3380 (LWP 495636) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  24   Thread 0x3fff4bcb2380 (LWP 495637) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  25   Thread 0x3fff4bdf7380 (LWP 495632) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  26   Thread 0x3fffa84a7380 (LWP 495609) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  27   Thread 0x3fffa80db380 (LWP 495620) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  28   Thread 0x3fffa829f380 (LWP 495617) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  29   Thread 0x3fff4bfff380 (LWP 495624) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  30   Thread 0x3fff4befb380 (LWP 495628) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  31   Thread 0x3fff4bfbe380 (LWP 495625) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  32   Thread 0x3fffa8529380 (LWP 495607) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  33   Thread 0x3fffa83a3380 (LWP 495613) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  34   Thread 0x3fff4bf7d380 (LWP 495626) 0x00003fff9d5d7284 in code_gen_buffer ()
+  35   Thread 0x3fffa811c380 (LWP 495619) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  36   Thread 0x3fffa8425380 (LWP 495611) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+  37   Thread 0x3fff4be38380 (LWP 495631) safe_syscall_base () at ../common-user/host/ppc64/safe-syscall.inc.S:75
+(gdb) quit
+talos2 ~ # coredumpctl gdb 509058
+           PID: 509058 (make)
+           UID: 1000 (niko)
+           GID: 1000 (niko)
+        Signal: 11 (SEGV)
+     Timestamp: Wed 2023-02-15 17:11:24 CET (6min ago)
+  Command Line: /usr/bin/qemu-x86_64 /usr/bin/make VERSION=11.3.2 DESTDIR=/home/niko/devel/yay/pkg/yay PREFIX=/usr build
+    Executable: /home/niko/chroots/arch-x86_64/usr/bin/qemu-x86_64
+ Control Group: /user.slice/user-1000.slice/user@1000.service/session.slice/vte-spawn-a3a4897b-7df3-4b3e-a8fc-91898d4e7b51.scope
+          Unit: user@1000.service
+     User Unit: vte-spawn-a3a4897b-7df3-4b3e-a8fc-91898d4e7b51.scope
+         Slice: user-1000.slice
+     Owner UID: 1000 (niko)
+       Boot ID: 33cad872d21043ebbe3dd6581bdd28c6
+    Machine ID: b3e834569b8ff461391f5ac061feb773
+      Hostname: talos2
+       Storage: /var/lib/systemd/coredump/core.make.1000.33cad872d21043ebbe3dd6581bdd28c6.509058.1676477484000000.zst (present)
+  Size on Disk: 1.0M
+       Message: Process 509058 (make) of user 1000 dumped core.
+
+GNU gdb (Gentoo 12.1 vanilla) 12.1
+Copyright (C) 2022 Free Software Foundation, Inc.
+License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
+This is free software: you are free to change and redistribute it.
+There is NO WARRANTY, to the extent permitted by law.
+Type "show copying" and "show warranty" for details.
+This GDB was configured as "powerpc64le-unknown-linux-gnu".
+Type "show configuration" for configuration details.
+For bug reporting instructions, please see:
+<https://bugs.gentoo.org/>.
+Find the GDB manual and other documentation resources online at:
+    <http://www.gnu.org/software/gdb/documentation/>.
+
+For help, type "help".
+Type "apropos word" to search for commands related to "word"...
+Reading symbols from /home/niko/chroots/arch-x86_64/usr/bin/qemu-x86_64...
+BFD: warning: /var/tmp/coredump-jyYs2x: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000002
+BFD: warning: /var/tmp/coredump-jyYs2x: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0008002
+BFD: warning: /var/tmp/coredump-jyYs2x: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010001
+BFD: warning: /var/tmp/coredump-jyYs2x: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0010002
+
+warning: Can't open file /usr/lib/ld-linux-x86-64.so.2 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libguile-3.0.so.1.6.0 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libc.so.6 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libgc.so.1.5.1 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libffi.so.8.1.2 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libunistring.so.5.0.0 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libgmp.so.10.4.1 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libcrypt.so.2.0.0 during file-backed mapping note processing
+
+warning: Can't open file /usr/lib/libm.so.6 during file-backed mapping note processing
+
+warning: core file may not match specified executable file.
+[New LWP 509058]
+[New LWP 509060]
+[Thread debugging using libthread_db enabled]
+Using host libthread_db library "/usr/lib64/libthread_db.so.1".
+Core was generated by `/usr/bin/qemu-x86_64 /usr/bin/make VERSION=11.3.2 DESTDIR=/home/niko/devel/yay/'.
+Program terminated with signal SIGSEGV, Segmentation fault.
+#0  0x0000000010278798 in sigsuspend ()
+[Current thread is 1 (Thread 0x1bde9000 (LWP 509058))]
+(gdb) info threads
+  Id   Target Id                          Frame 
+* 1    Thread 0x1bde9000 (LWP 509058)     0x0000000010278798 in sigsuspend ()
+  2    Thread 0x3fffae0ef380 (LWP 509060) 0x00000000102cf6ec in syscall ()
+(gdb) quit
+```
+
+Download coredumps:
+
+https://drive.google.com/file/d/1GosaobKvmuRg-olaA1-aZcp7zAZcWmcF/view?usp=share_link
+
+https://drive.google.com/file/d/1N0BmlBIYY-qT5lHqlrKXvPL_FdYl3GfI/view?usp=share_link
diff --git a/results/classifier/118/performance/1520 b/results/classifier/118/performance/1520
new file mode 100644
index 00000000..aaf49745
--- /dev/null
+++ b/results/classifier/118/performance/1520
@@ -0,0 +1,79 @@
+performance: 0.916
+peripherals: 0.911
+architecture: 0.909
+TCG: 0.893
+assembly: 0.885
+graphic: 0.876
+device: 0.871
+boot: 0.865
+permissions: 0.861
+mistranslation: 0.852
+x86: 0.852
+semantic: 0.838
+hypervisor: 0.836
+register: 0.827
+PID: 0.819
+virtual: 0.818
+debug: 0.808
+kernel: 0.805
+network: 0.793
+risc-v: 0.781
+user-level: 0.778
+files: 0.774
+arm: 0.765
+socket: 0.761
+ppc: 0.741
+VMM: 0.735
+KVM: 0.699
+vnc: 0.678
+i386: 0.662
+
+x86 TCG acceleration running on s390x with -smp > host cpus slowed down by x10
+Description of problem:
+This boots a trivial guest using OVMF; when the conditions below are met, it runs ~10x slower.
+
+I found this breaking our tests of qemu 7.2 [(which is affected because Debian added the offending change as a backport)](https://salsa.debian.org/qemu-team/qemu/-/blob/master/debian/patches/master/acpi-cpuhp-fix-guest-visible-maximum-access-size-to-.patch) by running an order of magnitude slower.
+
+
+I was tracing it down (insert a long strange trip here) and found that it occurs:
+- only with patch dab30fb "acpi: cpuhp: fix guest-visible maximum access size to the legacy reg block" applied
+  - latest master is still affected
+- only with s390x running emulation of x86
+  - emulating x86 on ppc64 didn't show the same behavior
+- only with -smp > host cpus
+  - smp 2 with 1 host cpu => slow
+  - smp 4 with 2 host cpu => slow
+  - any case where host cpu >= smp => fast
+
+On a 2964 s390x machine the good cases take ~5-6 seconds on average.
+The bad case is close to 60s, which is the timeout of the automated tests.
+
+We all know -smp shouldn't be >host-cpus, and I totally admit that this is the definition of an edge case.
+But I do not know what else might be affected, and this just happened to be what the test does by default - a 10x slowdown seems too much even for an edge case to be simply ignored.
+And while we could just bump up the timeout (and probably will as an interim workaround) I wanted to file it here for your awareness.
+Steps to reproduce:
+You can recreate this by using the command line above and timing things on your own.
+
+Or you can use the [autopkgtest of edk2 in Ubuntu](https://git.launchpad.net/ubuntu/+source/edk2/tree/debian/tests/shell.py#n214) which have [shown this](https://autopkgtest.ubuntu.com/results/autopkgtest-lunar/lunar/s390x/e/edk2/20230224_094012_c95f4@/log.gz) first.
+Additional information:
+Only signed OVMF cases are affected, while aavmf and the other OVMF variants run at more or less the same speed.
+
+```
+1 CPU / 1GB Memory
+7.0     7.2
+6.54s   58.32s test_ovmf_ms
+6.72s   56.96s test_ovmf_4m_ms
+7.54s   55.47s test_ovmf_4m_secboot
+7.56s   49.88s test_ovmf_secboot
+7.01s   39.79s test_ovmf32_4m_secboot
+7.38s    7.43s  test_aavmf32
+7.27s    7.30s  test_aavmf
+7.26s    7.26s  test_aavmf_snakeoil
+5.83s    5.95s  test_ovmf_4m
+5.61s    5.81s  test_ovmf_q35
+5.51s    5.64s  test_ovmf_pc
+5.26s    5.42s  test_ovmf_snakeoil
+```
+
+Highlighting @cborntra since it is somewhat s390x-related, and @mjt0k since the patch is applied as a backport in Debian.
+I didn't find the handle of Laszlo (the author) to highlight him as well.
diff --git a/results/classifier/118/performance/1522 b/results/classifier/118/performance/1522
new file mode 100644
index 00000000..d45d1ba1
--- /dev/null
+++ b/results/classifier/118/performance/1522
@@ -0,0 +1,70 @@
+i386: 0.970
+register: 0.946
+performance: 0.941
+boot: 0.906
+ppc: 0.884
+graphic: 0.882
+device: 0.856
+x86: 0.808
+architecture: 0.792
+mistranslation: 0.782
+vnc: 0.782
+semantic: 0.770
+PID: 0.704
+socket: 0.686
+peripherals: 0.679
+files: 0.670
+kernel: 0.665
+hypervisor: 0.652
+debug: 0.597
+user-level: 0.494
+permissions: 0.465
+arm: 0.446
+assembly: 0.370
+network: 0.322
+risc-v: 0.270
+virtual: 0.237
+TCG: 0.236
+VMM: 0.139
+KVM: 0.054
+
+Floppy controller returns the wrong thing for multitrack reads which span tracks
+Description of problem:
+I've just discovered that the Minix 1 and 2 operating systems no longer boot on qemu.
+
+Investigation reveals the following:
+
+- when Minix reads a 1024-byte block from disk, it issues a two-sector multitrack read to the FDC.
+- if the FDC runs out of sectors when it's on head 0, it automatically switches to head 1 (this is correct).
+- if the FDC runs out of sectors when it's on head 1, it stops the transfer (which is what is supposed to happen).
+
+In the latter case, qemu instead automatically seeks to the next track and switches to head 0. It then sets the SEEK COMPLETE bit in the status register. Minix isn't expecting this, because it shouldn't be emitted for reads and writes, and fails, treating it as an error.
+
+For example, here's the logging for such a transfer:
+
+```
+FLOPPY: Start transfer at 0 1 4f 11 (2878)
+FLOPPY: direction=1 (1024 - 10240)
+FLOPPY: copy 512 bytes (1024 0 10240) 0 pos 1 4f (17-0x00000b3e 0x00167c00)
+FLOPPY: seek to next sector (1 4f 11 => 2878)     <--- reads the last sector of head 1 track 0x4f
+FLOPPY: copy 512 bytes (1024 512 10240) 0 pos 1 4f (18-0x00000b3f 0x00167e00)
+FLOPPY: seek to next sector (1 4f 12 => 2879)     <--- attempt to move to the next sector, which fails
+FLOPPY: seek to next track (0 50 01 => 2879)      <--- moved to next track, which shouldn't happen
+FLOPPY: end transfer 1024 1024 10240
+FLOPPY: transfer status: 00 00 00 (20)            <--- status report
+```
+
+Transfer status 20 is the SEEK COMPLETE bit. For a normal head switch, that should be 04 (with the NOW ON HEAD 1 bit set).
+
+For reference, see page 5-13 of the uPD765 datasheet here: https://www.cpcwiki.eu/imgs/f/f3/UPD765_Datasheet_OCRed.pdf It says:
+
+> IF MT is high, a multitrack operation is performed.
+> If MT = 1 after finishing read/write operation on side 0,
+> FDC will automatically start command searching for sector
+> 1 on side 1
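+
+A minimal model of that end-of-sector rule, as a hedged sketch (field and constant names are illustrative, not QEMU's actual code):
+
+```c
+#include <stdio.h>
+
+/* Sketch of the uPD765 end-of-sector decision for a read with MT=1. */
+enum { ST0_HEAD1 = 0x04 };   /* "now on head 1"; 0x20 would be SEEK COMPLETE */
+
+static int next_sector(int mt, int *head, int *sector, int eot, int *st0)
+{
+    if (*sector < eot) {
+        (*sector)++;              /* advance within the current track */
+        return 1;                 /* transfer continues */
+    }
+    if (mt && *head == 0) {
+        *head = 1;                /* MT=1: continue with sector 1 on side 1 */
+        *sector = 1;
+        return 1;
+    }
+    /* End of side 1 (or MT=0): terminate; report the head bit only,
+     * never SEEK COMPLETE, and do not seek to the next track. */
+    *st0 = *head ? ST0_HEAD1 : 0;
+    return 0;
+}
+
+int main(void)
+{
+    int head = 1, sector = 18, st0 = 0;   /* last sector of side 1, as in the log */
+    int cont = next_sector(1, &head, &sector, 18, &st0);
+    printf("continue=%d st0=%02x\n", cont, st0);  /* expect: continue=0 st0=04 */
+    return 0;
+}
+```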
+Steps to reproduce:
+1. `qemu-system-i386 --fda images/minix-2.0-root-720kB.img`
+2. Press = to boot.
+3. Observe the 'Unrecoverable Read` errors as the ramdisk is loaded. (The system will still boot, but will then crash if you try to do anything due to a corrupt ramdisk.)
+
+[minix-2.0-root-720kB.img.bz2](/uploads/77d34db96f353d92cdb2d01928b8fc01/minix-2.0-root-720kB.img.bz2)
diff --git a/results/classifier/118/performance/1529173 b/results/classifier/118/performance/1529173
new file mode 100644
index 00000000..28733aae
--- /dev/null
+++ b/results/classifier/118/performance/1529173
@@ -0,0 +1,70 @@
+i386: 0.983
+performance: 0.961
+x86: 0.950
+TCG: 0.913
+architecture: 0.895
+graphic: 0.856
+vnc: 0.835
+device: 0.828
+kernel: 0.773
+ppc: 0.761
+boot: 0.755
+files: 0.739
+VMM: 0.736
+register: 0.728
+risc-v: 0.705
+PID: 0.678
+permissions: 0.628
+socket: 0.627
+arm: 0.626
+mistranslation: 0.582
+user-level: 0.483
+debug: 0.453
+semantic: 0.450
+virtual: 0.378
+KVM: 0.348
+hypervisor: 0.300
+network: 0.285
+assembly: 0.206
+peripherals: 0.174
+
+Absolutely slow Windows XP SP3 installation
+
+Host: Linux 4.3.3 vanilla x86-64/Qemu 2.5 i686 (mixed env)
+Guest: Windows XP Professional SP3 (i686)
+
+This is my launch string:
+
+$ qemu-system-i386 \
+-name "Windows XP Professional SP3" \
+-vga std \
+-net nic,model=pcnet \
+-cpu core2duo \
+-smp cores=2 \
+-cdrom /tmp/en_winxp_pro_with_sp3_vl.iso \
+-hda Windows_XP.qcow \
+-boot d \
+-net nic \
+-net user \
+-m 1536 \
+-localtime
+
+Console output:
+
+warning: TCG doesn't support requested feature: CPUID.01H:EDX.vme [bit 1]
+warning: TCG doesn't support requested feature: CPUID.80000001H:EDX.syscall [bit 11]
+warning: TCG doesn't support requested feature: CPUID.80000001H:EDX.lm|i64 [bit 29]
+warning: TCG doesn't support requested feature: CPUID.01H:EDX.vme [bit 1]
+warning: TCG doesn't support requested feature: CPUID.80000001H:EDX.syscall [bit 11]
+warning: TCG doesn't support requested feature: CPUID.80000001H:EDX.lm|i64 [bit 29]
+
+After hitting 35%, the installation more or less stalls (it doesn't actually stop, but it progresses at 1% a minute, which is totally unacceptable).
+
+That was without KVM acceleration, so perhaps it's how it's meant to be.
+
+With KVM everything is fast and smooth.
+
+For integer workloads such as installing an OS you should expect TCG to be about 12x slower than KVM on average. That is on current master; note that TCG has gotten faster in the last couple of years. See a performance comparison from v2.7.0 to v2.11.0 for SPEC06 here: https://imgur.com/a/5P5zj
+
+I've therefore marked the report as invalid, as I don't think the aforementioned speedups will change your experience dramatically.
+
diff --git a/results/classifier/118/performance/1546445 b/results/classifier/118/performance/1546445
new file mode 100644
index 00000000..b5546f19
--- /dev/null
+++ b/results/classifier/118/performance/1546445
@@ -0,0 +1,78 @@
+performance: 0.950
+user-level: 0.875
+graphic: 0.862
+network: 0.862
+virtual: 0.859
+device: 0.845
+PID: 0.804
+permissions: 0.801
+semantic: 0.800
+architecture: 0.785
+socket: 0.770
+assembly: 0.753
+kernel: 0.711
+register: 0.700
+ppc: 0.697
+debug: 0.695
+hypervisor: 0.688
+KVM: 0.688
+boot: 0.679
+mistranslation: 0.661
+VMM: 0.643
+TCG: 0.628
+peripherals: 0.615
+files: 0.602
+risc-v: 0.596
+arm: 0.573
+x86: 0.491
+i386: 0.421
+vnc: 0.416
+
+support vhost user without specifying vhostforce
+
+[Impact]
+
+ * Without the vhostforce option, vhost-user falls back to virtio-net, which causes a performance loss. Forcing vhost should be the default behavior for vhost-user, since guests using PMD don't support MSI-X.
+
+[Test Case]
+
+  create a vhost-user virtio backend without specifying the vhostforce option, i.e. -netdev type=vhost-user,id=mynet1,chardev=<char_dev_for_the_control_channel>
+  start the VM
+  vhost-user is not enabled (see the command-line sketch below)
+
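+  As a rough illustration (not from the original report; the socket path and ids are placeholders, and the shared-memory setup vhost-user also needs is omitted), the two variants would look like:
+
+```
+# Bug: without vhostforce, non-MSI-X guests (e.g. PMD/PXE stage) lose vhost
+qemu-system-x86_64 \
+  -chardev socket,id=char0,path=/tmp/vhost-user.sock \
+  -netdev type=vhost-user,id=mynet1,chardev=char0 \
+  -device virtio-net-pci,netdev=mynet1
+
+# Workaround until the fix: force vhost explicitly
+qemu-system-x86_64 \
+  -chardev socket,id=char0,path=/tmp/vhost-user.sock \
+  -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce=on \
+  -device virtio-net-pci,netdev=mynet1
+```
+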
+[Regression Potential]
+
+ * none
+
+By default, a vhost-user NIC doesn't support non-MSI guests (such as the PXE stage).
+A vhost-user NIC can't fall back to qemu the way a normal vhost-net NIC does, so we should
+enable it for non-MSI guests.
+
+The problem has been fixed in qemu upstream - http://git.qemu.org/?p=qemu.git;a=commitdiff;h=24f938a682d934b133863eb421aac33592f7a09e. The patch needs to be backported to 1:2.2+dfsg-5expubuntu9.8.
+
+
+
+The attachment "debian patch for qemu 1:2.2+dfsg" seems to be a debdiff.  The ubuntu-sponsors team has been subscribed to the bug report so that they can review and hopefully sponsor the debdiff.  If the attachment isn't a patch, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are member of the ~ubuntu-sponsors, unsubscribe the team.
+
+[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issue please contact him.]
+
+Hello Liang, or anyone else affected,
+
+Accepted qemu into kilo-proposed. The package will build now and be available in the Ubuntu Cloud Archive in a few hours, and then in the -proposed repository.
+
+Please help us by testing this new package. To enable the -proposed repository:
+
+  sudo add-apt-repository cloud-archive:kilo-proposed
+  sudo apt-get update
+
+Your feedback will aid us getting this update out to other Ubuntu users.
+
+If this package fixes the bug for you, please add a comment to this bug, mentioning the version of the package you tested, and change the tag from verification-kilo-needed to verification-kilo-done. If it does not fix the bug for you, please add a comment stating that, and change the tag to verification-kilo-failed. In either case, details of your testing will help us make a better decision.
+
+Further information regarding the verification process can be found at https://wiki.ubuntu.com/QATeam/PerformingSRUVerification . Thank you in advance!
+
+Tested with 1:2.2+dfsg-5expubuntu9.7~cloud2, and the fix works for me.
+
+FYI, following additional regression tests, today we promoted qemu 2.2+dfsg-5expubuntu9.7~cloud2 from kilo-proposed to kilo-updates in the Ubuntu Cloud Archive.
+
+
diff --git a/results/classifier/118/performance/1569 b/results/classifier/118/performance/1569
new file mode 100644
index 00000000..5b0a472c
--- /dev/null
+++ b/results/classifier/118/performance/1569
@@ -0,0 +1,57 @@
+performance: 0.883
+debug: 0.873
+semantic: 0.856
+vnc: 0.850
+boot: 0.830
+graphic: 0.828
+virtual: 0.766
+device: 0.759
+architecture: 0.751
+PID: 0.740
+ppc: 0.704
+VMM: 0.695
+permissions: 0.679
+peripherals: 0.661
+kernel: 0.660
+register: 0.660
+hypervisor: 0.650
+files: 0.646
+network: 0.617
+socket: 0.599
+assembly: 0.583
+user-level: 0.548
+risc-v: 0.512
+x86: 0.472
+arm: 0.408
+i386: 0.405
+TCG: 0.392
+KVM: 0.213
+mistranslation: 0.200
+
+NVMe FS operations hang after suspending and resuming both guest and host
+Description of problem:
+Hello and thank you for your work on QEMU!
+
+Using the NVMe driver with my Seagate FireCuda 530 2TB M.2 works fine until I encounter this problem, which is reliably reproducible for me.
+
+When I suspend the guest and then suspend (s2idle) my host, all is well until I resume the guest (manually with `virsh dompmwakeup $VMNAME`, after the host has resumed). Although the guest resumes and is interactive, anything involving filesystem operations hangs forever and does not return.
+
+Suspending and resuming the Linux guest seems to work perfectly if I don't suspend/resume the host.
+
+Ultimately what I'm wanting to do is share the drive between VMs with qemu-storage-daemon. I can reproduce the problem in that scenario in much the same way. Using PCI passthrough with the same VM and device works fine and doesn't exhibit this problem.
+
+Hopefully that's clear enough - let me know if there's anything else I can provide.
+Steps to reproduce:
+1. Create a VM with a dedicated NVMe disk.
+2. Boot an ISO and install to the disk.
+3. Verify that suspend and resume works when not suspending the host.
+4. Suspend the guest.
+5. Suspend the host.
+6. Wake the host.
+7. Wake the guest.
+8. Try just about anything that isn't likely already cached somewhere: `du -s /etc`.
+Additional information:
+I've attached the libvirt domain XML[1] and libvirtd debug logs for QEMU[2] ("1:qemu") that covers suspending the guest & host, resuming host & guest and doing something to cause a hang. I tried to leave enough time afterwards for any timeout to occur.
+
+1. [nvme-voidlinux.xml](/uploads/1dea47af096ce58175f7aa526eca455e/nvme-voidlinux.xml)
+2. [nvme-qemu-debug.log](/uploads/42d3bed456a795069023a61d38fa5ccd/nvme-qemu-debug.log)
diff --git a/results/classifier/118/performance/1569491 b/results/classifier/118/performance/1569491
new file mode 100644
index 00000000..96d86128
--- /dev/null
+++ b/results/classifier/118/performance/1569491
@@ -0,0 +1,42 @@
+i386: 0.985
+performance: 0.975
+architecture: 0.896
+graphic: 0.872
+device: 0.793
+mistranslation: 0.694
+ppc: 0.667
+network: 0.626
+semantic: 0.565
+register: 0.447
+vnc: 0.415
+socket: 0.409
+boot: 0.409
+permissions: 0.408
+user-level: 0.389
+risc-v: 0.320
+files: 0.288
+arm: 0.284
+PID: 0.272
+debug: 0.225
+x86: 0.220
+TCG: 0.210
+peripherals: 0.210
+virtual: 0.202
+VMM: 0.179
+assembly: 0.134
+hypervisor: 0.132
+kernel: 0.052
+KVM: 0.029
+
+qemu system i386 poor performance on e5500 core
+
+I have tested building with a generic core and with -mtune=e5500, but I get the same result: performance
+is extremely low compared with other classes of PowerPC CPU.
+The strange thing is that in every other emulator I have tested, the 5020 at 2 GHz is comparable with a 970MP at 2.7 GHz in speed and benchmarks, but in i386-softmmu I am seeing half the performance compared with a 2.5 GHz 970MP.
+
+I'm triaging old bugs: Can you provide command lines, versions, and steps to test and measure the relative performance?
+
+At the very least, please try to confirm on the latest version of QEMU (5.2.0-rc0, if possible) to update this report.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1586611 b/results/classifier/118/performance/1586611
new file mode 100644
index 00000000..5236eaca
--- /dev/null
+++ b/results/classifier/118/performance/1586611
@@ -0,0 +1,55 @@
+performance: 0.879
+virtual: 0.873
+graphic: 0.843
+hypervisor: 0.843
+device: 0.778
+KVM: 0.762
+PID: 0.759
+peripherals: 0.709
+semantic: 0.688
+ppc: 0.688
+network: 0.679
+permissions: 0.676
+vnc: 0.632
+mistranslation: 0.608
+files: 0.603
+kernel: 0.597
+i386: 0.584
+VMM: 0.572
+architecture: 0.567
+socket: 0.559
+register: 0.552
+boot: 0.511
+x86: 0.506
+debug: 0.459
+user-level: 0.418
+risc-v: 0.400
+arm: 0.388
+TCG: 0.337
+assembly: 0.271
+
+usb-hub can not be detached when detach usb  device from VM
+
+I passed a host USB device through to a guest using the "virsh attach-device" command. In the guest OS, "lsusb" shows that two devices were added: the USB device itself and a usb-hub (0409:55aa NEC Corp. Hub).
+When I detach the USB device with "virsh detach-device", the usb-hub still exists in the guest OS.
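+
+For reference, the attach uses libvirt hostdev XML of roughly this shape (a sketch; the vendor/product IDs here are illustrative, not taken from the report):
+
+```xml
+<hostdev mode='subsystem' type='usb' managed='yes'>
+  <source>
+    <vendor id='0x0409'/>
+    <product id='0x55aa'/>
+  </source>
+</hostdev>
+```
+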
+This causes problems when operating the VM; for example, when suspending and resuming the VM, qemu reports:
+
+2016-05-24T12:03:54.434369Z qemu-kvm: Unknown savevm section or instance '0000:00:01.2/2/usb-hub' 0
+
+2016-05-24T12:03:54.434742Z qemu-kvm: load of migration failed: Invalid argument  
+
+From qemu's code, it is clear that the usb-hub is generated by qemu, and the detach of the usb-hub is indeed attempted, but it fails. With added debug output, the errors are as follows:
+libusbx: error [do_close] Device handle closed while transfer was still being processed, but the device is still connected as far as we know
+libusbx: warning [do_close] A cancellation for an in-flight transfer hasn't completed but closing the device handle
+
+I found that when I attached a USB device to the VM, the VM would add a usb-hub automatically if there was none.
+After adding the usb-hub, the VM assigned a port to the actual USB device. When detaching the USB device, qemu only detaches the port, without detaching the usb-hub. So actions like migrating or suspending/resuming the VM will fail.
+
+Try detaching the usb-hub device with "virsh detach-device usb-hub.xml"?
+
+Of course, using a virtual USB controller works fine; the problem occurs when using passthrough USB devices.
+
+The usb-hub device should be deleted when the USB device is detached. When will this bug be fixed?
+
+Use a newer libvirt version which manages usb addressing and assigns usb devices to usb ports.  This is required to make sure the physical device tree is the same after vmsave/vmload or live migration.
+
diff --git a/results/classifier/118/performance/1589257 b/results/classifier/118/performance/1589257
new file mode 100644
index 00000000..09a381eb
--- /dev/null
+++ b/results/classifier/118/performance/1589257
@@ -0,0 +1,51 @@
+performance: 0.963
+boot: 0.913
+network: 0.876
+peripherals: 0.839
+device: 0.826
+architecture: 0.776
+virtual: 0.680
+graphic: 0.652
+kernel: 0.586
+mistranslation: 0.585
+vnc: 0.546
+arm: 0.474
+semantic: 0.473
+user-level: 0.449
+debug: 0.417
+socket: 0.411
+permissions: 0.390
+register: 0.359
+risc-v: 0.351
+hypervisor: 0.329
+VMM: 0.316
+PID: 0.301
+i386: 0.297
+ppc: 0.283
+x86: 0.277
+assembly: 0.221
+files: 0.208
+TCG: 0.156
+KVM: 0.127
+
+Boot with OVMF extremely slow to bootloader
+
+I have used Arch Linux in the past with the same version (2.5.0), the exact same OVMF code and vars, and the exact same VM settings with no issues. Now with Ubuntu, booting up to Windows takes about 10x longer. Every allocated CPU thread/core is used at 100% while this is happening. After that, everything operates as normal. There are no abnormal logs produced by qemu, or I don't know how to debug.
+
+Here are my settings:
+
+Host:
+Ubuntu 16.04
+Qemu 2.5.0
+Relevant configs attached
+
+Guest:
+Windows 10
+VirtIO raw disk image
+VirtIO network
+Typical VGA passthrough setup, everything operating normally
+
+
+
+I've solved the problem by using the ovmf package from apt instead of the firmware I had before. Apparently, the older firmware was only compatible with an older kernel, and a newer kernel with the older firmware would cause the issue.
+
diff --git a/results/classifier/118/performance/1590336 b/results/classifier/118/performance/1590336
new file mode 100644
index 00000000..fdec00db
--- /dev/null
+++ b/results/classifier/118/performance/1590336
@@ -0,0 +1,69 @@
+performance: 0.935
+graphic: 0.916
+architecture: 0.904
+assembly: 0.846
+device: 0.834
+arm: 0.833
+user-level: 0.778
+semantic: 0.752
+register: 0.750
+socket: 0.742
+files: 0.731
+ppc: 0.726
+PID: 0.648
+permissions: 0.645
+mistranslation: 0.642
+network: 0.640
+debug: 0.617
+TCG: 0.601
+vnc: 0.597
+kernel: 0.586
+peripherals: 0.585
+risc-v: 0.583
+hypervisor: 0.556
+virtual: 0.502
+VMM: 0.487
+boot: 0.460
+x86: 0.430
+i386: 0.394
+KVM: 0.385
+
+qemu-arm does not reject vrintz on non-v8 cpu
+
+Hello,
+
+It seems that qemu-arm does not reject some v8-only instructions as it should, but executes them "correctly".
+
+For instance, while compiling/running some of the GCC ARM intrinsics tests, we noticed that
+vrintz should be rejected on cortex-a9, but it is executed as if the instruction were supported.
+
+objdump says:
+   1074c:       f3fa05a0        vrintz.f32      d16, d16
+and qemu -d in_asm says:
+0x0001074c:  f3fa05a0      vabal.u<illegal width 64>    q8, d26, d16
+
+The problem is still present in qemu-2.6.0
+
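+A minimal reproducer sketch (my own, not from the report; the compiler flags are indicative):
+
+```c
+/* Build: arm-linux-gnueabihf-gcc -mfpu=neon-fp-armv8 -o vrintz vrintz.c
+ * Run:   qemu-arm -cpu cortex-a9 ./vrintz
+ * Expected: SIGILL, since vrintz is ARMv8-only; buggy qemu runs it happily. */
+int main(void)
+{
+    __asm__ volatile ("vrintz.f32 d16, d16");
+    return 0;
+}
+```
+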
+Should be fixed by http://patchwork.ozlabs.org/patch/633105/
+
+
+I confirm your patch does fix the problem.
+
+You may still want to fix the disassembler such that it dumps the right instruction, but that would be a separate fix.
+
+Thanks for your quick support.
+
+
+On 9 June 2016 at 20:14, Christophe Lyon <email address hidden> wrote:
+> You may still want to fix the disassembler such that it dumps the right
+> instruction, but that would be a separate fix.
+
+Unfortunately the disassembler is the pre-GPLv3 binutils one,
+so we can't just update it (and I'm not particularly inclined
+to independently re-implement all the 32-bit instruction set
+changes post that change).
+
+thanks
+-- PMM
+
+
diff --git a/results/classifier/118/performance/1595240 b/results/classifier/118/performance/1595240
new file mode 100644
index 00000000..4d36b38b
--- /dev/null
+++ b/results/classifier/118/performance/1595240
@@ -0,0 +1,92 @@
+performance: 0.959
+files: 0.939
+virtual: 0.910
+PID: 0.908
+graphic: 0.901
+device: 0.880
+assembly: 0.877
+permissions: 0.875
+semantic: 0.855
+debug: 0.854
+arm: 0.849
+socket: 0.841
+ppc: 0.841
+network: 0.838
+user-level: 0.835
+kernel: 0.829
+register: 0.823
+architecture: 0.820
+TCG: 0.780
+hypervisor: 0.772
+mistranslation: 0.760
+KVM: 0.748
+vnc: 0.739
+x86: 0.735
+boot: 0.731
+i386: 0.721
+VMM: 0.709
+risc-v: 0.686
+peripherals: 0.677
+
+Error by clone github.com qemu repository
+
+Hi.
+
+C:\Java\sources\kvm> git clone https://github.com/qemu/qemu.git
+Cloning into 'qemu'...
+remote: Counting objects: 279563, done.
+remote: Total 279563 (delta 0), reused 0 (delta 0), pack-reused 279563R
+Receiving objects: 100% (279563/279563), 122.45 MiB | 3.52 MiB/s, done.
+Resolving deltas: 100% (221942/221942), done.
+Checking connectivity... done.
+error: unable to create file hw/misc/aux.c (No such file or directory)
+error: unable to create file include/hw/misc/aux.h (No such file or directory)
+Checking out files: 100% (4795/4795), done.
+fatal: unable to checkout working tree
+warning: Clone succeeded, but checkout failed.
+You can inspect what was checked out with 'git status'
+and retry the checkout with 'git checkout -f HEAD'
+
+
+
+Windows has problems with any file named 'aux.*'.  The solution would be
+for qemu to rename it to something else, for the sake of Windows.
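+
+(AUX, like CON, NUL, PRN and COM1..COM9, is a reserved DOS device name on Windows, regardless of extension. A quick demonstration sketch, from memory; the exact error text varies by Windows version:)
+
+```
+C:\> type nul > aux.c
+:: fails - aux.c cannot be created, because AUX names a device
+```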
+
+On 06/22/2016 10:06 AM, Алексей Курган wrote:
+> Public bug reported:
+> 
+> Hi.
+> 
+> C:\Java\sources\kvm> git clone https://github.com/qemu/qemu.git
+> Cloning into 'qemu'...
+> remote: Counting objects: 279563, done.
+> remote: Total 279563 (delta 0), reused 0 (delta 0), pack-reused 279563R
+> Receiving objects: 100% (279563/279563), 122.45 MiB | 3.52 MiB/s, done.
+> Resolving deltas: 100% (221942/221942), done.
+> Checking connectivity... done.
+> error: unable to create file hw/misc/aux.c (No such file or directory)
+> error: unable to create file include/hw/misc/aux.h (No such file or directory)
+> Checking out files: 100% (4795/4795), done.
+> fatal: unable to checkout working tree
+> warning: Clone succeeded, but checkout failed.
+> You can inspect what was checked out with 'git status'
+> and retry the checkout with 'git checkout -f HEAD'
+> 
+> ** Affects: qemu
+>      Importance: Undecided
+>          Status: New
+> 
+> ** Attachment added: "2016-06-22_19-08-06.png"
+>    https://bugs.launchpad.net/bugs/1595240/+attachment/4688593/+files/2016-06-22_19-08-06.png
+> 
+
+-- 
+Eric Blake   eblake redhat com    +1-919-301-3266
+Libvirt virtualization library http://libvirt.org
+
+
+
+Patch has been included in QEMU v2.7.0:
+http://git.qemu.org/?p=qemu.git;a=commitdiff;h=e0dadc1e9ef1f35208e5d2a
+
+
diff --git a/results/classifier/118/performance/162 b/results/classifier/118/performance/162
new file mode 100644
index 00000000..ecc58570
--- /dev/null
+++ b/results/classifier/118/performance/162
@@ -0,0 +1,31 @@
+performance: 0.828
+device: 0.659
+ppc: 0.528
+vnc: 0.483
+mistranslation: 0.453
+TCG: 0.445
+network: 0.444
+architecture: 0.428
+arm: 0.375
+VMM: 0.364
+semantic: 0.306
+risc-v: 0.281
+graphic: 0.233
+debug: 0.217
+i386: 0.196
+PID: 0.195
+files: 0.164
+peripherals: 0.159
+hypervisor: 0.155
+permissions: 0.154
+boot: 0.152
+x86: 0.125
+register: 0.111
+virtual: 0.105
+socket: 0.103
+KVM: 0.080
+kernel: 0.070
+user-level: 0.069
+assembly: 0.037
+
+util/path.c/follow_path() does not handle "/" well
diff --git a/results/classifier/118/performance/1635 b/results/classifier/118/performance/1635
new file mode 100644
index 00000000..a0cd7b22
--- /dev/null
+++ b/results/classifier/118/performance/1635
@@ -0,0 +1,67 @@
+performance: 0.864
+graphic: 0.645
+i386: 0.631
+architecture: 0.481
+device: 0.466
+PID: 0.281
+mistranslation: 0.253
+ppc: 0.248
+permissions: 0.216
+register: 0.193
+kernel: 0.190
+semantic: 0.177
+socket: 0.168
+hypervisor: 0.134
+peripherals: 0.127
+files: 0.126
+vnc: 0.122
+arm: 0.095
+debug: 0.082
+network: 0.074
+VMM: 0.069
+virtual: 0.067
+assembly: 0.065
+risc-v: 0.063
+boot: 0.059
+user-level: 0.041
+x86: 0.024
+TCG: 0.014
+KVM: 0.005
+
+Slow graphics output under aarch64 hvf (no dirty bitmap tracking)
+Description of problem:
+When using a display adapter such as `bochs-display` (which, yes, I realize is not the ideal choice for an aarch64 guest, but it works fine under TCG and KVM, so bear with me) under `hvf` acceleration on an M1 Mac, display output is slow enough to be measured in seconds-per-frame.
+
+The issue seems to stem from each write to the framebuffer memory resulting in a data abort, while the expected behavior is that only one such write results in a data abort exception, which is handled by marking the region dirty and then subsequent writes do not yield exceptions until the display management in QEMU resets the dirty flag. Instead, every pixel drawn causes the VM to trap, and performance is degraded.
+Steps to reproduce:
+1. Start an aarch64 HVF guest with the `bochs-display` display adapter.
+2. Observe performance characteristics.
+Additional information:
+I reported this issue on IRC around a year ago, and was provided with a patch by @agraf which I have confirmed works. That patch was shared on the `qemu-devel` mailing list in February, 2022, with a response from @pm215: https://lists.gnu.org/archive/html/qemu-devel/2022-02/msg00609.html
+
+As a quick summary, the patch takes this snippet from the i386 HVF target:
+
+https://gitlab.com/qemu-project/qemu/-/blob/master/target/i386/hvf/hvf.c#L132-138
+
+And applies a variation of it to the ARM target when handling a data abort exception, before this assert:
+
+https://gitlab.com/qemu-project/qemu/-/blob/master/target/arm/hvf/hvf.c#L1381
+
+Something to the effect of:
+
+```c
+        if (iswrite) {
+            uint64_t gpa = hvf_exit->exception.physical_address;
+            hvf_slot *slot = hvf_find_overlap_slot(gpa, 1);
+
+            if (slot && slot->flags & HVF_SLOT_LOG) {
+                memory_region_set_dirty(slot->region, 0, slot->size);
+                hv_vm_protect(slot->start, slot->size, HV_MEMORY_READ |
+                              HV_MEMORY_WRITE | HV_MEMORY_EXEC);
+                break;
+            }
+        }
+```
+
+I am reporting this issue now as I updated my git checkout with the release of QEMU 8.0.0 and was surprised to find that the patch had never made it upstream and the issue persists.
diff --git a/results/classifier/118/performance/1661758 b/results/classifier/118/performance/1661758
new file mode 100644
index 00000000..5f026ebb
--- /dev/null
+++ b/results/classifier/118/performance/1661758
@@ -0,0 +1,157 @@
+performance: 0.889
+device: 0.871
+user-level: 0.857
+graphic: 0.853
+risc-v: 0.849
+architecture: 0.847
+register: 0.838
+permissions: 0.832
+virtual: 0.831
+debug: 0.815
+assembly: 0.806
+arm: 0.787
+PID: 0.786
+peripherals: 0.782
+semantic: 0.775
+files: 0.769
+kernel: 0.759
+socket: 0.755
+network: 0.754
+vnc: 0.744
+boot: 0.728
+ppc: 0.719
+KVM: 0.717
+TCG: 0.712
+mistranslation: 0.700
+hypervisor: 0.697
+VMM: 0.608
+x86: 0.490
+i386: 0.409
+
+qemu-nbd causes data corruption in VDI-format disk images
+
+Hi,
+
+This is a duplicate of #1422307.  I can't figure out a way to re-open
+it--the status of "Fix Released" is changeable only by a project
+maintainer or bug supervisor--so I'm opening a new bug to make sure
+this gets looked at again.
+
+qemu-nbd will sometimes corrupt VDI disk images.  The bug was thought
+to be fixed in commit f0ab6f109630940146cbaf47d0cd99993ddba824, but
+I'm able to reproduce it in both that commit and in the latest commit
+(a951316b8a5c3c63254f20a826afeed940dd4cba).  I just needed to run more
+iterations of the test.  It's possible that it was partially fixed, or
+that the added serialization made it harder to catch this
+non-deterministic bug, but the same symptoms persist: data corruption
+of VDI-format disk images.
+
+This affects at least qemu-nbd.  I haven't tried reproducing the issue
+with qemu proper or qemu-img, but the original bug report suggests
+that the bug in the common VDI backend may corrupt data written by
+those programs.
+
+Please let me know if I can provide any further information or help
+with testing.  Thank you very much for looking into this!
+
+Test procedure
+**************
+
+The procedure used is the one given by Max Reitz (xanclic) in the
+original bug report, comment 3
+(https://bugs.launchpad.net/qemu/+bug/1422307/comments/3), in the
+section "VDI and NBD over /dev/nbd0", but with up to 1000 iterations
+instead of 10:
+
+  $ cd ~/qemu-origfix-f0ab6f1/bin
+  $ dd if=/dev/urandom of=blob.raw bs=1M count=64
+  64+0 records in
+  64+0 records out
+  67108864 bytes (67 MB) copied, 4.36475 s, 15.4 MB/s
+  $ sudo sh -c 'for i in $(seq 0 999); do ./qemu-img create -f vdi test.vdi 64M > /dev/null; ./qemu-nbd -c /dev/nbd0 test.vdi; sleep 1; ./qemu-img convert -n blob.raw /dev/nbd0; ./qemu-img convert /dev/nbd0 test1.raw; sync; echo 1 > /proc/sys/vm/drop_caches; ./qemu-img convert /dev/nbd0 test2.raw; ./qemu-nbd -d /dev/nbd0 > /dev/null; if ! ./qemu-img compare -q test1.raw test2.raw; then md5sum test1.raw test2.raw; echo "$i failed"; break; fi; done; echo "done"'
+27a66c3a8ac2cf06f2c925968ea9e964  test1.raw
+2da9bf169041a7c2bd144c4ab3a29aea  test2.raw
+64 failed
+done
+
+I've run this process a handful of times, and I've seen it take as
+little as 10 iterations and as many as 161 (taking 32 minutes in the
+latter case).  Please be patient.  Putting the images on tmpfs will
+probably help it go faster, and I have successfully reproduced the
+issue on tmpfs in addition to ext4.
+
+Nothing different was needed to reproduce the issue in a directory
+containing a build of the latest commit.  It still takes somewhere
+between 1 and 200 iterations to trigger, in my testing.
+
+Build procedure
+***************
+
+  $ git clone git://git.qemu-project.org/qemu.git
+  [omitted]
+  $ git clone qemu qemu-origfix-f0ab6f1
+  Cloning into 'qemu-origfix-f0ab6f1'...
+  done.
+  $ cd qemu-origfix-f0ab6f1
+  $ git checkout f0ab6f109630940146cbaf47d0cd99993ddba824
+  Note: checking out 'f0ab6f109630940146cbaf47d0cd99993ddba824'.
+  
+  You are in 'detached HEAD' state. You can look around, make experimental
+  changes and commit them, and you can discard any commits you make in this
+  state without impacting any branches by performing another checkout.
+  
+  If you want to create a new branch to retain commits you create, you may
+  do so (now or later) by using -b with the checkout command again. Example:
+  
+    git checkout -b new_branch_name
+  
+  HEAD is now at f0ab6f1... block/vdi: Add locking for parallel requests
+  $ mkdir bin
+  $ cd bin
+  $ script -c'time (../configure --enable-debug --target-list=x86_64-softmmu && make -j6; echo "result: $?")'
+  Script started, file is typescript
+  [omitted; the build typescript is attached separately]
+    LINK  x86_64-softmmu/qemu-system-x86_64
+  result: 0
+  
+  real    1m5.733s
+  user    2m3.904s
+  sys     0m13.828s
+  Script done, file is typescript
+
+Nothing different was done when building the latest commit (besides
+cloning to a different directory, and not running `git checkout`).
+
+Environment
+***********
+
+  * Machine: x86_64
+  
+  * Hypervisor: Xen 4.4 (Debian package xen-hypervisor-4.4-amd64,
+    version 4.4.1-9+deb8u8)
+  
+  * A Xen domU (guest) for building QEMU and reproducing the issue.
+    All testing was done within the virtual machine for convenience
+    and access to better hardware than what I have for my development
+    machine (I expected the build to take much longer than it really
+    does).
+  
+      - x86_64 architecture with six VCPUs and 1.2 GiB RAM allocated,
+        operating in HVM (fully virtualized) mode.
+      
+      - Distribution: Debian 8.7 Jessie amd64
+      
+      - Kernel: Linux 3.16.0 x86_64 (Debian package
+        linux-image-3.16.0-4-amd64, version 3.16.39-1)
+      
+      - Compiler: GCC 4.9.2 (Debian package gcc-4.9, version 4.9.2-10)
+
+
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting all older bugs to
+"Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1672383 b/results/classifier/118/performance/1672383
new file mode 100644
index 00000000..f250e20c
--- /dev/null
+++ b/results/classifier/118/performance/1672383
@@ -0,0 +1,53 @@
+performance: 0.933
+device: 0.519
+PID: 0.517
+x86: 0.489
+ppc: 0.473
+architecture: 0.468
+boot: 0.462
+i386: 0.438
+vnc: 0.421
+graphic: 0.395
+socket: 0.387
+files: 0.385
+network: 0.375
+user-level: 0.363
+peripherals: 0.337
+hypervisor: 0.337
+kernel: 0.331
+TCG: 0.327
+permissions: 0.306
+VMM: 0.285
+semantic: 0.281
+arm: 0.255
+risc-v: 0.254
+register: 0.211
+debug: 0.196
+mistranslation: 0.141
+virtual: 0.135
+assembly: 0.102
+KVM: 0.009
+
+Slow Windows XP load after commit a9353fe897ca2687e5b3385ed39e3db3927a90e0
+
+I've recently discovered that in QEMU 2.8+ my Windows XP loading time has significantly worsened. In 2.7 it took 30-40 seconds to boot, but in 2.8 it takes 2-2.5 minutes.
+
+I've used git bisect and found that the change happened after commit a9353fe897ca2687e5b3385ed39e3db3927a90e0, which, as far as I can tell from the commit message, handles a race condition when invalidating breakpoints.
+
+I've set a breakpoint in static void breakpoint_invalidate(CPUState *cpu, target_ulong pc), and here's a backtrace:
+#0  cpu_breakpoint_insert (cpu=cpu@entry=0x555556a73be0, pc=144, 
+    flags=flags@entry=32, breakpoint=breakpoint@entry=0x555556a7c670)
+    at /media/sdd2/qemu-work/exec.c:830
+#1  0x00005555558746ac in hw_breakpoint_insert (env=env@entry=0x555556a7be60, 
+    index=index@entry=0) at /media/sdd2/qemu-work/target-i386/bpt_helper.c:64
+#2  0x00005555558748ed in cpu_x86_update_dr7 (env=0x555556a7be60, 
+    new_dr7=<optimised out>)
+    at /media/sdd2/qemu-work/target-i386/bpt_helper.c:160
+#3  0x00007fffa17421f6 in code_gen_buffer ()
+#4  0x000055555577fcb4 in cpu_tb_exec (itb=<optimised out>, 
+    itb=<optimised out>, cpu=0x7fff8b7763b0)
+    at /media/sdd2/qemu-work/cpu-exec.c:164
+It seems that XP sets some hardware breakpoints during its boot, and this leads to frequent TB flushes and slow execution.
+
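+For context, a sketch of what that commit apparently did (my reading of the commit message, not the verbatim source): breakpoint invalidation was changed to flush the entire translation cache, so every guest write to the debug registers forces QEMU to retranslate everything.
+
+```c
+/* Sketch (an assumption based on the a9353fe897ca commit message):
+ * instead of invalidating only the page containing the breakpoint,
+ * the whole TB cache is flushed to avoid a race. A guest that pokes
+ * DR7 frequently then pays for a full retranslation each time. */
+static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
+{
+    tb_flush(cpu);
+}
+```
+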
+Supposedly fixed by commit 406bc339b0505fcfc2ffcbca1f05a3756e338a65
+
diff --git a/results/classifier/118/performance/1677 b/results/classifier/118/performance/1677
new file mode 100644
index 00000000..2b757d89
--- /dev/null
+++ b/results/classifier/118/performance/1677
@@ -0,0 +1,43 @@
+performance: 0.972
+graphic: 0.892
+x86: 0.879
+boot: 0.868
+device: 0.839
+semantic: 0.743
+risc-v: 0.736
+socket: 0.706
+ppc: 0.680
+files: 0.678
+vnc: 0.658
+PID: 0.639
+architecture: 0.635
+kernel: 0.615
+debug: 0.603
+hypervisor: 0.568
+VMM: 0.534
+arm: 0.447
+register: 0.422
+permissions: 0.406
+i386: 0.403
+mistranslation: 0.373
+peripherals: 0.348
+user-level: 0.348
+virtual: 0.341
+network: 0.309
+TCG: 0.296
+assembly: 0.269
+KVM: 0.193
+
+qemu-system-x86_64 cannot run on Windows when -smp is specified with a value higher than `1`, an important option for any expectation of VM performance
+Description of problem:
+qemu-system-x86_64 seems to crash on Windows the moment you try to use -smp to define more vcpus; even the basic usage of `-smp 4` will cause qemu to segfault after the guest's boot option is selected.
+Steps to reproduce:
+1. `qemu-system-x86_64 -smp 4 -cdrom rhel-9.2-x86_64-dvd.iso -drive if=pflash,format=raw,unit=0,readonly=on,file=edk2-x64/OVMF_CODE.fd -m 6G -nodefaults -serial mon:stdio`
+2. Select the boot option to begin your installation
+3. qemu hangs for 10 or so seconds then throws a Segmentation Fault.
+Additional information:
+1. This does not happen if -smp arguments are omitted, but running VMs with a single vcpu thread is slow and painful.
+2. This still happens even without OVMF (Traditional bios booting)
+3. This still happens even without -defaults and without a serial device
+
+Only output from qemu at death is `Segmentation fault`
diff --git a/results/classifier/118/performance/1693 b/results/classifier/118/performance/1693
new file mode 100644
index 00000000..9e7ce24a
--- /dev/null
+++ b/results/classifier/118/performance/1693
@@ -0,0 +1,59 @@
+performance: 0.805
+graphic: 0.796
+architecture: 0.796
+mistranslation: 0.760
+files: 0.685
+device: 0.684
+network: 0.647
+vnc: 0.645
+socket: 0.632
+semantic: 0.629
+kernel: 0.622
+ppc: 0.612
+PID: 0.586
+permissions: 0.564
+risc-v: 0.563
+boot: 0.552
+TCG: 0.549
+debug: 0.517
+VMM: 0.506
+register: 0.500
+x86: 0.495
+peripherals: 0.428
+arm: 0.389
+assembly: 0.357
+hypervisor: 0.288
+KVM: 0.253
+user-level: 0.226
+virtual: 0.182
+i386: 0.086
+
+qemu-system-nios2 not working on s390x (big endian) hosts
+Description of problem:
+qemu-system-nios2 fails to boot a Linux kernel on s390x hosts.
+Steps to reproduce:
+1. wget https://qemu-advcal.gitlab.io/qac-best-of-multiarch/download/day14.tar.xz
+2. tar -xJf day14.tar.xz 
+3. cd day14/
+4. qemu-system-nios2 -nographic -kernel vmlinux.elf
+Additional information:
+When running with "-d in_asm", it seems like the code initially starts executing OK, but in one of the early translation blocks there is a difference when comparing the log with a run on an x86 host:
+
+```
+IN: fdt_check_header
+0xc81afd48:  ldw	r3,0(r4)
+0xc81afd4c:  srli	r5,r3,24
+0xc81afd50:  slli	r2,r3,24
+0xc81afd54:  or	r2,r2,r5
+0xc81afd58:  slli	r5,r3,8
+0xc81afd5c:  srli	r3,r3,8
+0xc81afd60:  andhi	r5,r5,255
+0xc81afd64:  andi	r3,r3,65280
+0xc81afd68:  or	r2,r2,r5
+0xc81afd6c:  or	r2,r2,r3
+0xc81afd70:  movhi	r3,53262
+0xc81afd74:  addi	r3,r3,-275
+0xc81afd78:  bne	r2,r3,0xc81afde8
+```
+
+On the x86 host, the branch at the end is not taken, while on the s390x host, the branch is taken.
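+
+For context, this translation block is just an open-coded 32-bit byte swap followed by a comparison against the flattened-device-tree magic (movhi r3,53262 / addi r3,r3,-275 builds 0xd00e0000 - 0x113 = 0xd00dfeed). A C sketch of what the guest code computes:
+
+```c
+#include <stdint.h>
+
+/* Open-coded bswap32, mirroring the srli/slli/and/or sequence above. */
+static uint32_t guest_bswap32(uint32_t x)
+{
+    return (x >> 24) | (x << 24) |
+           ((x << 8) & 0x00ff0000) | ((x >> 8) & 0x0000ff00);
+}
+
+/* fdt_check_header() accepts the header when the byte-swapped first
+ * word equals the FDT magic; on the s390x host this compare fails. */
+static int fdt_magic_ok(uint32_t first_word)
+{
+    return guest_bswap32(first_word) == 0xd00dfeed;
+}
+```
+
+Since the compiled guest code is host-independent, a taken branch here points at TCG mis-emulating one of the shift/or steps (or the initial ldw) on big-endian hosts.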
diff --git a/results/classifier/118/performance/1718 b/results/classifier/118/performance/1718
new file mode 100644
index 00000000..3e3ec901
--- /dev/null
+++ b/results/classifier/118/performance/1718
@@ -0,0 +1,77 @@
+performance: 0.952
+virtual: 0.940
+device: 0.899
+graphic: 0.860
+semantic: 0.740
+i386: 0.712
+ppc: 0.691
+hypervisor: 0.688
+VMM: 0.654
+user-level: 0.652
+mistranslation: 0.611
+architecture: 0.610
+boot: 0.609
+vnc: 0.577
+permissions: 0.577
+risc-v: 0.568
+peripherals: 0.561
+PID: 0.557
+network: 0.556
+kernel: 0.550
+KVM: 0.508
+register: 0.491
+files: 0.476
+debug: 0.476
+x86: 0.435
+TCG: 0.418
+socket: 0.410
+arm: 0.404
+assembly: 0.362
+
+Strange throttle-group test results
+Description of problem:
+I have a question about throttle-group test results.
+
+I ran a test limiting IO with a throttle-group, and the result is not what I expected.
+
+The setup is as follows: a throttle-group with x-iops-total=500 and x-bps-total=524288000 applied to vdb, benchmarked with the fio commands below.
+
+```
+# mount -t xfs /dev/vdb1 /mnt/disk
+
+# fio --direct=1 --bs=1M --iodepth=128 --rw=read --size=1G --numjobs=1 --runtime=600 --time_based --name=/mnt/disk/fio-file --ioengine=libaio --output=/mnt/disk/read-1M
+```
+
+When I test with a --bs value of 1M, I get 500 MiB/s throughput.
+![iops_500-1M](/uploads/f63ecbfdb13adc87bd4524f5298a224c/iops_500-1M.png)
+
+
+When I test with a --bs value of 2M, I don't get 500 MiB/s but only 332 MiB/s throughput.
+```
+fio --direct=1 --bs=2M --iodepth=128 --rw=read --size=1G --numjobs=1 --runtime=600 --time_based --name=/mnt/disk/fio-file --ioengine=libaio --output=/mnt/disk/read-2M
+```
+![iops_500-2M](/uploads/0a384fd9f026943e5e40af1c4b5d6dcd/iops_500-2M.png)
+
+
+If I set the qemu x-iops-total value to 1500 and test again with a fio --bs value of 2M, I get 500 MiB/s throughput.
+
+![iops_1500-2M](/uploads/f31eb8213d034d612e915e355b52a324/iops_1500-2M.png)
+
+
+To summarize, here are the test results.
+
+| fio bs | qemu x-iops-total | qemu x-bps-total | Result IOPS | Result throughput (MiB/s) |
+| ------ | ------ | ------ | ------ | ------ |
+| 2M     | 1500   | 524288000 | 250 | 500 |
+| **2M** | **500** | **524288000** | **166** | **332** |
+| 1M     | 1500   | 524288000 | 500 | 500 |
+| 1M     | 500    | 524288000 | 500 | 500 |
+
+
+When the --bs value is 2M and the x-iops-total value is 500, the throughput should still be 500 MiB/s (250 IOPS x 2 MiB), but it is not, so I don't know what the problem is. See the sketch below for the model behind that expectation.
+
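+A minimal sketch of the naive model I am assuming (a plain min() of the two limits; whether QEMU's leaky-bucket accounting actually behaves this way for large requests is exactly the question):
+
+```c
+#include <stdint.h>
+#include <stdio.h>
+
+/* Naive expectation: throughput is bounded both by the byte/s budget
+ * and by iops * request_size, whichever is smaller. */
+static uint64_t expected_bps(uint64_t iops_total, uint64_t bps_total,
+                             uint64_t request_size)
+{
+    uint64_t by_iops = iops_total * request_size;
+    return by_iops < bps_total ? by_iops : bps_total;
+}
+
+int main(void)
+{
+    /* 500 IOPS x 2 MiB = 1000 MiB/s, capped by bps to 500 MiB/s,
+     * yet the measurement shows only 332 MiB/s (166 IOPS). */
+    printf("%llu\n", (unsigned long long)
+           expected_bps(500, 524288000, 2 * 1024 * 1024));
+    return 0;
+}
+```
+
+Under this model a 2M block size at 500 IOPS should still saturate the byte budget, so the measured 166 IOPS suggests each 2M request is being accounted as more than one I/O.
+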
+If there is anything I missed, please let me know.
+Steps to reproduce:
+1. Apply throttle-group to vdb and start the VM
+2. mount vdb1
+3. test fio
diff --git a/results/classifier/118/performance/1720969 b/results/classifier/118/performance/1720969
new file mode 100644
index 00000000..c5689c7a
--- /dev/null
+++ b/results/classifier/118/performance/1720969
@@ -0,0 +1,42 @@
+performance: 0.830
+device: 0.564
+graphic: 0.452
+vnc: 0.278
+semantic: 0.274
+ppc: 0.272
+mistranslation: 0.257
+i386: 0.243
+x86: 0.222
+network: 0.198
+VMM: 0.186
+PID: 0.155
+TCG: 0.148
+arm: 0.105
+register: 0.104
+socket: 0.099
+risc-v: 0.095
+peripherals: 0.080
+kernel: 0.079
+debug: 0.069
+boot: 0.069
+permissions: 0.069
+virtual: 0.064
+KVM: 0.063
+hypervisor: 0.062
+files: 0.056
+user-level: 0.055
+architecture: 0.038
+assembly: 0.023
+
+qemu/memory.c:206:  pointless copies of large structs ?
+
+[qemu/memory.c:206]: (performance) Function parameter 'a' should be passed by reference.
+[qemu/memory.c:207]: (performance) Function parameter 'b' should be passed by reference.
+
+Source code is
+
+static bool memory_region_ioeventfd_equal(MemoryRegionIoeventfd a,
+                                          MemoryRegionIoeventfd b)
+
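+The fix, per the commit linked below, is presumably to pass the structs by const pointer, as cppcheck suggests. A minimal sketch of the pattern with a hypothetical stand-in struct (the real MemoryRegionIoeventfd definition is not reproduced here):
+
+```c
+#include <stdbool.h>
+
+/* Hypothetical stand-in for MemoryRegionIoeventfd, just to show the
+ * pattern: pass large structs by const pointer rather than by value. */
+typedef struct {
+    unsigned long addr;
+    unsigned long size;
+    int fd;
+    char pad[64]; /* pretend the struct is big */
+} BigDescriptor;
+
+static bool big_descriptor_equal(const BigDescriptor *a,
+                                 const BigDescriptor *b)
+{
+    return a->addr == b->addr && a->size == b->size && a->fd == b->fd;
+}
+```
+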
+Fix committed and sent upstream: https://github.com/qemu/qemu/commit/73bb753d24a702b37913ce4b5ddb6dca40dab067
+
diff --git a/results/classifier/118/performance/1721187 b/results/classifier/118/performance/1721187
new file mode 100644
index 00000000..87ed6d6d
--- /dev/null
+++ b/results/classifier/118/performance/1721187
@@ -0,0 +1,67 @@
+performance: 0.940
+graphic: 0.916
+peripherals: 0.730
+mistranslation: 0.672
+device: 0.671
+boot: 0.486
+architecture: 0.454
+ppc: 0.436
+semantic: 0.388
+user-level: 0.376
+x86: 0.373
+PID: 0.325
+virtual: 0.246
+files: 0.242
+register: 0.230
+socket: 0.221
+i386: 0.168
+permissions: 0.142
+hypervisor: 0.127
+kernel: 0.124
+debug: 0.112
+vnc: 0.112
+network: 0.108
+risc-v: 0.084
+TCG: 0.084
+assembly: 0.082
+VMM: 0.069
+arm: 0.056
+KVM: 0.044
+
+Install CentOS 7 or Fedora 27 with QEMU on Windows 8.1
+
+Hello,
+I have tried to install CentOS 7 or Fedora 27 on my Windows 8 machine using QEMU. I work on a notebook with 4 GB of RAM.
+Unfortunately, neither my touchpad nor my USB mouse is recognized in the graphical installers of CentOS and Fedora, so I cannot install them.
+Here are the commands I use for installation :
+
+qemu-img create -f qcow2 fedora27b2_hd.qcow2 80G
+
+qemu-system-x86_64 -k fr -hda fedora27b2_hd.qcow2 -cdrom Fedora-Workstation-Live-x86_64-27_Beta-1.5.iso -m 512 -boot d
+
+I have tried to add the option -device usb-mouse, but I got an error message that no 'usb-bus' was found for the usb-mouse device.
+
+What is wrong? QEMU or my installation command?
+
+Thank, BRgds,
+Laurent
+
+Which version of QEMU are you using? Did you compile QEMU on your own, or are you using a pre-built binary?
+Anyway, to be able to use USB devices, you've got to specify the "-usb" parameter when starting QEMU.
+
+I use qemu-w64-setup-20170830.exe on Windows 8 64-bit.
+I tried the following command, but it is very, very slow:
+
+qemu-img create centos7_hd.img 80G
+
+qemu-system-x86_64 -k fr -cpu core2duo -m 1024 -usb -device usb-mouse -hda centos7_hd.img --drive media=cdrom,file=CentOS-7-x86_64-Everything-1708.iso,readonly
+
+BRgds,
+Laurent
+
+
+So I assume the mouse is working now? I think we can close this ticket then.
+Concerning the speed: QEMU is emulating the CPU by default, so this is of course slower than running everything natively. You've got to use an accelerator to get more speed - for Windows, you can use HAXM: https://www.qemu.org/2017/11/22/haxm-usage-windows/
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1723984 b/results/classifier/118/performance/1723984
new file mode 100644
index 00000000..81057126
--- /dev/null
+++ b/results/classifier/118/performance/1723984
@@ -0,0 +1,76 @@
+architecture: 0.959
+register: 0.956
+performance: 0.950
+peripherals: 0.946
+device: 0.934
+kernel: 0.927
+files: 0.927
+user-level: 0.924
+arm: 0.919
+semantic: 0.915
+permissions: 0.891
+socket: 0.886
+graphic: 0.881
+network: 0.873
+ppc: 0.873
+debug: 0.870
+assembly: 0.869
+risc-v: 0.857
+hypervisor: 0.848
+mistranslation: 0.827
+VMM: 0.825
+virtual: 0.782
+PID: 0.766
+boot: 0.737
+vnc: 0.715
+KVM: 0.669
+x86: 0.638
+TCG: 0.624
+i386: 0.482
+
+ID_MMFR0 has an invalid value on aarch64 cpu (A57, A53)
+
+The ID_MMFR0 register, accessed from aarch64 state, has an invalid value:
+- ARM ARM v8 documentation (D7.2 General system control registers) described bits AuxReg[23:20] to be
+  "In ARMv8-A the only permitted value is 0010"
+- Cortex A53 and Cortex A57 TRM describe the value to be 0x10201105, so AuxReg[23:20] is 0010 too
+- in QEMU target/arm/cpu64.c, the relevant value is
+  cpu->id_mmfr0 = 0x10101105;
+
+The 1 should be changed to 2.
+
+Spotted & Tested on the following qemu revision:
+
+commit 48ae1f60d8c9a770e6da64407984d84e25253c69
+Merge: 78b62d3 b867eaa
+Author: Peter Maydell <email address hidden>
+Date:   Mon Oct 16 14:28:13 2017 +0100
+
+QEMU's behaviour in this case is matching the hardware. We claim to model an r1p0 (based on the MIDR value we report), and for the r1p0 the A53 and A57 reported the ID_MMFR0 as 0x10101105 -- this is documented in the TRMs for that rev of the CPUs. r1p3 reports the 0x10201105 you describe, but this isn't the rev of the CPU we claim to be.
+
+In theory we could bump the rXpX but I'm not sure there's much point unless it's causing a real problem (we'd need to check what else might have changed between the two revisions).
+
+
+Oh I see. I didn't check the older TRMs since the ARM ARM was quite strict on the value, sorry.
+I'll read the MIDR to make the code more robust, then. Thank you.
+
+You shouldn't need to read the MIDR at all.
+
+There are two sensible strategies for software I think:
+
+ (1) trust the architectural statement that v8 implies that the AIFSR and ADFSR both exist -- AIUI both QEMU and the hardware implementations that report 0001 in this MMFR0 field do actually implement those registers, so this is safe.
+
+ (2) read and pay attention to the AuxReg field, by handling 0001 as "only Auxiliary Control Register is supported, AIFSR and ADFSR are not supported". This will work fine too -- on implementations that report 0001 you may be not using the AIFSR/ADFSR but that's ok because on those implementations they only RAZ/WI anyhow so you couldn't do anything interesting with them anyway.
+
+If your code is genuinely v8 only then (1) is easiest. If you also need to support ARMv7 then (2) is best, because 0001 is a permitted value in ID_MMFR0 for an ARMv7 implementation, so you need to handle it regardless of the A53/A57 behaviour.
+
+Neither approach requires detecting and special casing A53/A57 revisions via the MIDR.
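+
+As a minimal sketch of approach (2), with a hypothetical helper name (illustrative only, not kernel code):
+
+```c
+#include <stdint.h>
+
+/* Hypothetical helper: gate AIFSR/ADFSR use on ID_MMFR0.AuxReg
+ * (bits [23:20]). 0b0001 means only the Auxiliary Control Register
+ * is supported; 0b0010 means ACTLR, AIFSR and ADFSR are all present. */
+static inline int aux_fault_regs_present(uint32_t id_mmfr0)
+{
+    return ((id_mmfr0 >> 20) & 0xf) >= 2;
+}
+```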
+
+
+I see your point. Thank you for the advice. I'm doing some low-level checks to be sure I am on a known platform, so this MIDR-based code is very localized. For the "core" of the kernel, I'm mostly using (1), as accesses to MMU registers are localized in armv7/armv8-specific sub-directories.
+
+
+
+Thanks for the update -- I'm going to close this bug. (Incidentally, my experience with checks of the "insist we're on a known platform with ID register values we recognize" kind is that they're more trouble than they're worth, especially if you plan on running the software in an emulator.)
+
+
diff --git a/results/classifier/118/performance/1725707 b/results/classifier/118/performance/1725707
new file mode 100644
index 00000000..a1acd295
--- /dev/null
+++ b/results/classifier/118/performance/1725707
@@ -0,0 +1,114 @@
+performance: 0.925
+debug: 0.918
+semantic: 0.909
+architecture: 0.905
+graphic: 0.894
+network: 0.889
+virtual: 0.886
+register: 0.880
+device: 0.872
+assembly: 0.864
+vnc: 0.858
+arm: 0.853
+peripherals: 0.836
+mistranslation: 0.819
+permissions: 0.818
+PID: 0.817
+boot: 0.810
+VMM: 0.789
+KVM: 0.788
+user-level: 0.786
+risc-v: 0.781
+files: 0.780
+hypervisor: 0.770
+TCG: 0.754
+socket: 0.745
+ppc: 0.725
+kernel: 0.715
+x86: 0.570
+i386: 0.561
+
+QEMU sends excess VNC data to websockify even when network is poor
+
+Description of problem
+-------------------------
+In my latest topic, I reported a bug related to QEMU's websocket support:
+https://bugs.launchpad.net/qemu/+bug/1718964
+
+It has been fixed, but someone mentioned hitting the same problem when using QEMU with a standalone websocket proxy.
+That confused me, because in that scenario QEMU sees a plain "RAW" VNC connection.
+So I ran a test and found that there is indeed a problem:
+
+When the client's network is poor (on a low-speed WAN), QEMU still sends a lot of data to the websocket proxy, and the client gets stuck. It seems that only QEMU has this problem; other VNC servers work fine.
+
+Environment
+-------------------------
+All of the following versions have been tested:
+
+QEMU: 2.8.1.1 / 2.9.1 / 2.10.1 / master (Up to date)
+Host OS: Ubuntu 16.04 Server LTS / CentOS 7 x86_64_1611
+Websocket Proxy: websockify 0.6.0 / 0.7.0 / 0.8.0 / master
+VNC Web Client: noVNC 0.5.1 / 0.61 / 0.62 / master
+Other VNC Servers: TigerVNC 1.8 / x11vnc 0.9.13 / TightVNC 2.8.8
+
+Steps to reproduce:
+-------------------------
+100% reproducible.
+
+1. Launch a QEMU instance (no websocket option needed):
+qemu-system-x86_64 -enable-kvm -m 6G ./win_x64.qcow2 -vnc :0
+
+2. Launch websockify on a separate host and connect to QEMU's VNC port
+
+3. Open the VNC web client (noVNC/vnc.html) in a browser and connect to websockify
+
+4. Play a video (e.g. watch YouTube) in the VM (to produce a lot of framebuffer updates)
+
+5. Limit the client's inbound bandwidth to 300 KB/s (e.g. using NetLimiter) to simulate a low-speed WAN
+
+6. Then the client's output gets stuck (less than 1 fps); the cursor is almost impossible to move
+
+7. Monitor network traffic on the proxy server
+
+Current result:
+-------------------------
+Monitor Downlink/Uplink network traffic on the proxy server
+(Refer to the attachments for more details).
+
+1. Used with QEMU
+- D: 5.9 MB/s U: 5.7 MB/s (Client on LAN)
+- D: 4.3 MB/s U: 334 KB/s (Client on WAN)
+
+2. Used with other VNC servers
+- D: 5.9 MB/s U: 5.6 MB/s (Client on LAN)
+- D: 369 KB/s U: 328 KB/s (Client on WAN)
+
+It is found that when the client's network is poor, all the other VNC servers (tigervnc/x11vnc/tightvnc)
+reduce the VNC data sent to the websocket proxy (uplink and downlink stay symmetric), but QEMU never drops any frames and keeps sending a lot of data to websockify. The client cannot accept data that fast, so more and more data accumulates in websockify until it crashes.
+
+Expected results:
+-------------------------
+When the client's network is poor (WAN), QEMU should reduce the VNC data sent to the websocket proxy.
+
+
+
+
+
+
+
+This is nothing specific to websockets AFAIK. Even using regular VNC QEMU doesn't try to dynamically throttle data / quality settings.
+
+NB, if websockify crashes, then that is a serious flaw in websockify - it shouldn't read an unbounded amount of data from QEMU if it is unable to send it on to the client. If websockify stopped reading data from QEMU, then QEMU would in turn stop sending once the TCP buffer was full.
+
+
+Reference:
+https://github.com/novnc/noVNC/issues/431#issuecomment-71883085
+
+QEMU uses many more (30x) operations with much smaller amounts of data than other VNC servers; perhaps this leads to the different result.
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting older bugs to "Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has been solved with a newer version of QEMU already. Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1728116 b/results/classifier/118/performance/1728116
new file mode 100644
index 00000000..000d464b
--- /dev/null
+++ b/results/classifier/118/performance/1728116
@@ -0,0 +1,89 @@
+performance: 0.981
+virtual: 0.980
+graphic: 0.976
+user-level: 0.974
+x86: 0.964
+ppc: 0.937
+PID: 0.935
+device: 0.921
+kernel: 0.915
+semantic: 0.911
+socket: 0.908
+arm: 0.895
+files: 0.880
+hypervisor: 0.878
+architecture: 0.876
+network: 0.871
+mistranslation: 0.869
+debug: 0.860
+KVM: 0.855
+permissions: 0.843
+vnc: 0.833
+risc-v: 0.796
+i386: 0.796
+VMM: 0.794
+peripherals: 0.793
+boot: 0.793
+TCG: 0.757
+register: 0.695
+assembly: 0.507
+
+Empty /proc/self/auxv (linux-user)
+
+The userspace Linux API virtualization used to fake access to /proc/self/auxv, to provide meaningful data for the guest process.
+
+For newer qemu versions, this fails: The openat() is intercepted, but there's no content: /proc/self/auxv has length zero (i.e. reading from it returns 0 bytes).
+
+Good:
+
+$ x86_64-linux-user/qemu-x86_64 /usr/bin/cat /proc/self/auxv | wc -c
+256 /proc/self/auxv
+
+Bad:
+
+$ x86_64-linux-user/qemu-x86_64 /usr/bin/cat /proc/self/auxv | wc -c
+0 /proc/self/auxv
+
+This worked in 2.7.1, and fails in 2.10.1.
+
+This causes e.g. any procps-ng-based tool to segfault while reading from /proc/self/auxv in an endless loop (probably worth another bug report...)
+
+Doing a "git bisect" shows that this commit: https://github.com/qemu/qemu/commit/7c4ee5bcc introduced the problem.
+
+It might be a simple logic (subtraction in the wrong direction?) or signedness error. Adding some logging (to v2.10.1):
+
+diff --git a/linux-user/syscall.c b/linux-user/syscall.c
+index 9b6364a..49285f9 100644
+--- a/linux-user/syscall.c
++++ b/linux-user/syscall.c
+@@ -7469,6 +7469,9 @@ static int open_self_auxv(void *cpu_env, int fd)
+     abi_ulong len = ts->info->auxv_len;
+     char *ptr;
+ 
++    gemu_log(TARGET_ABI_FMT_lu"\n", len);
++    gemu_log(TARGET_ABI_FMT_ld"\n", len);
++
+     /*
+      * Auxiliary vector is stored in target process stack.
+      * read in whole auxv vector and copy it to file
+
+shows this output:
+
+$  x86_64-linux-user/qemu-x86_64 /usr/bin/cat /proc/self/auxv | wc -c
+18446744073709551264
+-352
+0
+
+And 352 could be the expected length.
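+
+A minimal demonstration of that reading (an assumption about the arithmetic, not the actual linux-user code): -352 reinterpreted as an unsigned 64-bit value is exactly the number printed above.
+
+```c
+#include <inttypes.h>
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+    /* A length computed with the operands swapped comes out as -352;
+     * printed unsigned it wraps to 2^64 - 352. */
+    int64_t len = -352;
+    printf("%" PRIu64 "\n", (uint64_t)len); /* 18446744073709551264 */
+    printf("%" PRId64 "\n", len);           /* -352 */
+    return 0;
+}
+```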
+
+Oops, yes, commit 7c4ee5bcc82e643 broke this -- it switched the order in which we fill in the AUXV info, but forgot to adjust the calculation of the length, which as you've guessed we now get backwards.
+
+
+I've just sent this patch which fixes this bug:
+https://lists.gnu.org/archive/html/qemu-devel/2017-11/msg01199.html
+(it turns out it wasn't quite as simple as getting the sign wrong, we were subtracting two things that were totally wrong).
+
+
+Fix has been released with QEMU 2.11:
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=f516511ea84d8bb3395d6e
+
diff --git a/results/classifier/118/performance/1730099 b/results/classifier/118/performance/1730099
new file mode 100644
index 00000000..ec0317c9
--- /dev/null
+++ b/results/classifier/118/performance/1730099
@@ -0,0 +1,44 @@
+performance: 0.889
+device: 0.885
+network: 0.845
+mistranslation: 0.807
+x86: 0.678
+vnc: 0.665
+user-level: 0.656
+graphic: 0.626
+risc-v: 0.613
+socket: 0.608
+permissions: 0.604
+register: 0.574
+i386: 0.573
+architecture: 0.563
+semantic: 0.555
+ppc: 0.550
+virtual: 0.540
+hypervisor: 0.452
+debug: 0.440
+boot: 0.436
+VMM: 0.420
+PID: 0.415
+TCG: 0.364
+KVM: 0.358
+peripherals: 0.339
+files: 0.326
+arm: 0.273
+kernel: 0.272
+assembly: 0.161
+
+Sometimes, when not touching the SDL window, the guest freezes
+
+I often just run some development guest machine, and leave its SDL window on a workspace I don’t touch, and only interact with it via TCP.
+
+And sometimes, the guest just freezes.
+
+After it gets the focus back, it comes back to life (starts responding via network).
+
+QEMU release version: 2.8.1.1
+
+Which version of SDL are you using? SDL 1.2 or SDL 2.0? If you were using 1.2, could you please try 2.0 instead? Support for SDL 1.2 has been removed now.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1731 b/results/classifier/118/performance/1731
new file mode 100644
index 00000000..bf899337
--- /dev/null
+++ b/results/classifier/118/performance/1731
@@ -0,0 +1,42 @@
+performance: 0.971
+virtual: 0.968
+graphic: 0.967
+boot: 0.948
+device: 0.935
+peripherals: 0.772
+ppc: 0.754
+semantic: 0.685
+PID: 0.639
+socket: 0.573
+vnc: 0.492
+VMM: 0.483
+permissions: 0.479
+hypervisor: 0.452
+arm: 0.449
+mistranslation: 0.446
+register: 0.425
+architecture: 0.411
+user-level: 0.389
+risc-v: 0.338
+debug: 0.325
+TCG: 0.303
+files: 0.245
+assembly: 0.186
+kernel: 0.114
+i386: 0.086
+x86: 0.084
+network: 0.054
+KVM: 0.017
+
+i440fx ide cdrom pathological slow on early win10 install  screen
+Description of problem:
+If you choose i440fx virtual hardware (the default in Proxmox) for Windows 10 instead of q35, getting from power-on to the Windows boot logo is 10 times slower. On my hardware you need to wait more than 1m45s until the blinking cursor in the upper left goes away and the blue Windows boot logo appears. That leads to the false assumption that your setup hangs.
+
+What's causing this slowness?
+
+Is the implementation really that bad?
+
+I compared the read performance of IDE, SATA and SCSI CD-ROMs in a Linux VM and could not observe such a big difference.
+
+See:
+https://forum.proxmox.com/threads/win10-installation-pathological-slowness-with-i440fx-ide-cdrom.129351/
diff --git a/results/classifier/118/performance/1734810 b/results/classifier/118/performance/1734810
new file mode 100644
index 00000000..f313408c
--- /dev/null
+++ b/results/classifier/118/performance/1734810
@@ -0,0 +1,77 @@
+performance: 0.972
+virtual: 0.965
+graphic: 0.942
+KVM: 0.927
+mistranslation: 0.885
+x86: 0.866
+architecture: 0.842
+ppc: 0.824
+device: 0.792
+kernel: 0.750
+permissions: 0.741
+boot: 0.740
+arm: 0.738
+hypervisor: 0.734
+user-level: 0.726
+register: 0.701
+network: 0.696
+semantic: 0.695
+socket: 0.667
+TCG: 0.657
+files: 0.644
+PID: 0.640
+risc-v: 0.635
+VMM: 0.632
+peripherals: 0.632
+vnc: 0.606
+debug: 0.514
+i386: 0.505
+assembly: 0.213
+
+Windows guest virtual PC running abnormally slow
+
+Guest systems running Windows 10 in a virtualized environment run unacceptably slowly, and Boxes offers no option to give the virtual machine more (or fewer) cores from my physical CPU.
+
+ProblemType: Bug
+DistroRelease: Ubuntu 17.10
+Package: gnome-boxes 3.26.1-1
+ProcVersionSignature: Ubuntu 4.13.0-17.20-lowlatency 4.13.8
+Uname: Linux 4.13.0-17-lowlatency x86_64
+ApportVersion: 2.20.7-0ubuntu3.5
+Architecture: amd64
+CurrentDesktop: ubuntu:GNOME
+Date: Tue Nov 28 00:37:11 2017
+ProcEnviron:
+ TERM=xterm-256color
+ PATH=(custom, no user)
+ XDG_RUNTIME_DIR=<set>
+ LANG=en_US.UTF-8
+ SHELL=/bin/bash
+SourcePackage: gnome-boxes
+UpgradeStatus: No upgrade log present (probably fresh install)
+
+
+
+Any news or fixes?
+
+Which command line parameters are passed to QEMU? Is your system able to use KVM (e.g. did you enable virtualization support in your BIOS)?
+
+I am constantly running Windows 10 and Windows Server 2016 and I don't experience specific slowdowns.
+
+QEMU command line is needed to understand the specific setup that might be problematic.
+
+If you don't provide the CLI parameters, there's no way we can help here, sorry. So marking this as "invalid" for the QEMU project.
+
+Windows installs are still abnormally slow on the latest Gnome Boxes flatpaks in Ubuntu 18.10.
+I'll try to get my CLI parameters and add them to the bug.
+
+Sorry if this sounds dumb, where do I find my CLI Parameters for my Windows VM?
+
+Jeb, if you open a bug against QEMU here, we expect some information about how QEMU is run. If you only interact with Gnome Boxes, then please only open a bug against Boxes - best in their bug tracker here: https://bugzilla.gnome.org/ ... I guess nobody from the Boxes project is checking Launchpad, so reporting Boxes bugs here in Launchpad does not make much sense.
+
+At least please try to answer my questions in comment #3: Is virtualization enabled in your BIOS? Is KVM enabled on your system (i.e. are the kvm.ko and kvm_intel.ko or kvm_amd.ko modules loaded)?
+
+And for the CLI parameters, you could run this in a console window for example, after starting your guest:
+
+ps aux | grep qemu
+
diff --git a/results/classifier/118/performance/1735576 b/results/classifier/118/performance/1735576
new file mode 100644
index 00000000..6e69c249
--- /dev/null
+++ b/results/classifier/118/performance/1735576
@@ -0,0 +1,62 @@
+performance: 0.949
+hypervisor: 0.914
+architecture: 0.905
+device: 0.895
+x86: 0.889
+PID: 0.815
+graphic: 0.784
+boot: 0.731
+VMM: 0.729
+register: 0.728
+files: 0.723
+network: 0.686
+user-level: 0.686
+ppc: 0.679
+peripherals: 0.677
+permissions: 0.676
+socket: 0.654
+semantic: 0.652
+virtual: 0.592
+vnc: 0.577
+mistranslation: 0.523
+risc-v: 0.517
+KVM: 0.469
+TCG: 0.449
+debug: 0.369
+kernel: 0.354
+arm: 0.345
+i386: 0.332
+assembly: 0.129
+
+Support more than 4G memory for guest with Intel HAXM acceleration
+
+setup:
+
+host: windows 7 professional 64bit
+guest: centos 7
+qemu 2.10.92
+haxm 6.2.1
+
+Issue: when assigning 4096M or more memory to the guest, I get the following error message:
+E:\qemuvm\vm-svr>qemu-system-x86_64 -accel hax -hda centos-1.vdi -m 4096
+HAX is working and emulator runs in fast virt mode.
+Failed to allocate 0 memory
+hax_transaction_commit: Failed mapping @0x0000000000000000+0xc0000000 flags 00
+hax_transaction_commit: Failed mapping @0x0000000100000000+0x40000000 flags 00
+VCPU shutdown request
+VCPU shutdown request
+If I change the memory to 4095M, the guest VM boots up without issue:
+
+E:\qemuvm\vm-svr>qemu-system-x86_64 -accel hax -hda centos-1.vdi -m 4095
+HAX is working and emulator runs in fast virt mode.
+
+
+This is a known limitation. I already raised a request on the HAXM GitHub site to fix this: https://github.com/intel/haxm/issues/13, and it got accepted and will be fixed in the next HAXM release; however, it seems there is also QEMU-side work needed (according to a HAXM dev), so I am raising this for the QEMU-side fix.
+
+Update:
+According to the HAXM devs, they will submit a patch for the QEMU side of the work.
+
+
+Fix has been included here:
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=7a5235c9e679c58be4
+
diff --git a/results/classifier/118/performance/1737 b/results/classifier/118/performance/1737
new file mode 100644
index 00000000..9533f930
--- /dev/null
+++ b/results/classifier/118/performance/1737
@@ -0,0 +1,79 @@
+architecture: 0.968
+performance: 0.958
+graphic: 0.919
+arm: 0.894
+files: 0.862
+device: 0.848
+assembly: 0.842
+ppc: 0.833
+peripherals: 0.829
+socket: 0.826
+permissions: 0.825
+PID: 0.799
+debug: 0.786
+user-level: 0.781
+kernel: 0.778
+vnc: 0.774
+register: 0.701
+network: 0.701
+hypervisor: 0.695
+semantic: 0.687
+boot: 0.686
+mistranslation: 0.642
+risc-v: 0.639
+virtual: 0.630
+VMM: 0.618
+x86: 0.529
+TCG: 0.480
+i386: 0.421
+KVM: 0.346
+
+qemu-aarch64: Incorrect result for ssra instruction when using vector lengths of 1024-bit or higher.
+Description of problem:
+```
+#include <arm_sve.h>
+#include <stdio.h>
+
+#define SZ 32
+
+int main(int argc, char* argv[]) {
+  svbool_t pg = svptrue_b64();
+  uint64_t VL = svcntd();
+
+  fprintf(stderr, "One SVE vector can hold %li uint64_ts\n", VL);
+
+  int64_t sr[SZ], sx[SZ], sy[SZ];
+  uint64_t ur[SZ], ux[SZ], uy[SZ];
+
+  for (uint64_t i = 0; i < SZ; ++i) {
+    sx[i] = ux[i] = 0;
+    sy[i] = uy[i] = 1024;
+  }
+
+  for (uint64_t i = 0; i < SZ; i+=VL) {
+    fprintf(stderr, "Processing elements %li - %li\n", i, i + VL - 1);
+
+    svint64_t SX = svld1(pg, sx + i);
+    svint64_t SY = svld1(pg, sy + i);
+    svint64_t SR = svsra(SX, SY, 4);
+    svst1(pg, sr + i, SR);
+
+    svuint64_t UX = svld1(pg, ux + i);
+    svuint64_t UY = svld1(pg, uy + i);
+    svuint64_t UR = svsra(UX, UY, 4);
+    svst1(pg, ur + i, UR);
+  }
+
+  for (uint64_t i = 0; i < SZ; ++i) {
+    fprintf(stderr, "sr[%li]=%li, ur[%li]\n", i, sr[i], ur[i]);
+  }
+
+  return 0;
+}
+```
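+
+For reference, ssra is signed shift-right-and-accumulate, so with the inputs above every element should come out as 0 + (1024 >> 4) = 64. A scalar sketch of the expected per-element semantics (my reading of the intrinsic, for illustration):
+
+```c
+#include <stdint.h>
+
+/* Scalar model of svsra(x, y, 4): accumulate the arithmetic
+ * right shift of y into x. With x = 0 and y = 1024 this is 64. */
+static int64_t ssra_elem(int64_t x, int64_t y, unsigned shift)
+{
+    return x + (y >> shift);
+}
+```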
+Steps to reproduce:
+1. Build the above C source using "gcc -march=armv9-a -O1 ssra.c", can also use clang.
+2. Run with "qemu-aarch64 -cpu max,sve-default-vector-length=64 ./a.out" and you'll see the expected result of 64 (signed and unsigned)
+3. Run with "qemu-aarch64 -cpu max,sve-default-vector-length=128 ./a.out" and you'll see the expected result of 64 for unsigned but the signed result is 0. This suggests the emulation of SVE2 ssra instruction is incorrect for this and bigger vector lengths.
+Additional information:
+
diff --git a/results/classifier/118/performance/1743 b/results/classifier/118/performance/1743
new file mode 100644
index 00000000..ef710f22
--- /dev/null
+++ b/results/classifier/118/performance/1743
@@ -0,0 +1,46 @@
+x86: 0.952
+performance: 0.942
+device: 0.923
+graphic: 0.913
+PID: 0.809
+socket: 0.726
+files: 0.620
+vnc: 0.588
+network: 0.587
+register: 0.547
+permissions: 0.508
+boot: 0.507
+arm: 0.490
+ppc: 0.472
+semantic: 0.451
+debug: 0.436
+risc-v: 0.415
+mistranslation: 0.413
+architecture: 0.402
+kernel: 0.381
+TCG: 0.349
+i386: 0.258
+user-level: 0.255
+peripherals: 0.212
+VMM: 0.210
+virtual: 0.161
+hypervisor: 0.092
+assembly: 0.038
+KVM: 0.031
+
+QEMU+Android emulator crashes on x86 hosts (but not Mac M1)
+Description of problem:
+The QEMU-based Android emulator crashes when using tflite on x86 hosts (but not M1 Macs).
+Steps to reproduce:
+1. Install android toolchain, including emulator (sdkmanager, adb, avdmanager etc)
+2. Start android emulator on an x86 host
+3. Follow instructions to download and run tflite benchmarking tool [here](https://www.tensorflow.org/lite/performance/measurement)
+4. Crashes with the following error
+
+```
+06-27 17:38:28.093  8355  8355 F ndk_translation: vendor/unbundled_google/libs/ndk_translation/intrinsics/intrinsics_impl_x86_64.cc:86: CHECK failed: 524288 == 0
+```
+
+We have tried with many different models and the result is always the same. The same models run fine when the emulator runs on a mac M1 host.
+Additional information:
+
diff --git a/results/classifier/118/performance/1750229 b/results/classifier/118/performance/1750229
new file mode 100644
index 00000000..d206f10f
--- /dev/null
+++ b/results/classifier/118/performance/1750229
@@ -0,0 +1,770 @@
+performance: 0.913
+permissions: 0.912
+mistranslation: 0.901
+debug: 0.898
+semantic: 0.892
+graphic: 0.887
+register: 0.885
+architecture: 0.885
+assembly: 0.880
+device: 0.868
+network: 0.864
+arm: 0.861
+files: 0.859
+PID: 0.854
+kernel: 0.854
+virtual: 0.851
+user-level: 0.843
+boot: 0.836
+risc-v: 0.835
+hypervisor: 0.829
+TCG: 0.819
+KVM: 0.818
+peripherals: 0.810
+socket: 0.804
+ppc: 0.796
+x86: 0.776
+vnc: 0.774
+VMM: 0.731
+i386: 0.532
+
+virtio-blk-pci regression: softlock in guest kernel at module loading
+
+Hello,
+
+I am running qemu from the master git branch on an x86_64 host with kernel
+4.4.114. I've found that commit
+    9a4c0e220d8a "hw/virtio-pci: fix virtio behaviour"
+
+introduces an regression with the following command:
+
+    qemu-system-x86_64 -enable-kvm -nodefaults -no-reboot -nographic -vga none -runas qemu -kernel .build.kernel.kvm -initrd .build.initrd.kvm -append 'panic=1 softlockup_panic=1 no-kvmclock nmi_watchdog=0 console=ttyS0 root=/dev/disk/by-id/virtio-0' -m 2048 -drive file=./root,format=raw,if=none,id=disk,serial=0,cache=unsafe -device virtio-blk-pci,drive=disk -serial stdio -smp 2
+
+Starting from this commit to master the following happens with a wide variety of guest kernels (4.4 to 4.15):
+
+[   62.428107] BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=-20 stuck for 59s!
+[   62.437426] Showing busy workqueues and worker pools:
+[   62.443117] workqueue events: flags=0x0
+[   62.447512]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
+[   62.448161]     pending: check_corruption
+[   62.458570] workqueue kblockd: flags=0x18
+[   62.463082]   pwq 1: cpus=0 node=0 flags=0x0 nice=-20 active=3/256
+[   62.463082]     in-flight: 4:blk_mq_run_work_fn
+[   62.463082]     pending: blk_mq_run_work_fn, blk_mq_timeout_work
+[   62.474831] pool 1: cpus=0 node=0 flags=0x0 nice=-20 hung=59s workers=2 idle: 214
+[   62.492121] INFO: rcu_preempt detected stalls on CPUs/tasks:
+[   62.492121]  Tasks blocked on level-0 rcu_node (CPUs 0-1): P4
+[   62.492121]  (detected by 0, t=15002 jiffies, g=-130, c=-131, q=32)
+[   62.492121] kworker/0:0H    R  running task        0     4      2 0x80000000
+[   62.492121] Workqueue: kblockd blk_mq_run_work_fn
+[   62.492121] Call Trace:
+[   62.492121]  <IRQ>
+[   62.492121]  sched_show_task+0xdf/0x100
+[   62.492121]  rcu_print_detail_task_stall_rnp+0x48/0x69
+[   62.492121]  rcu_check_callbacks+0x93d/0x9d0
+[   62.492121]  ? tick_sched_do_timer+0x40/0x40
+[   62.492121]  update_process_times+0x28/0x50
+[   62.492121]  tick_sched_handle+0x22/0x70
+[   62.492121]  tick_sched_timer+0x34/0x70
+[   62.492121]  __hrtimer_run_queues+0xcc/0x250
+[   62.492121]  hrtimer_interrupt+0xab/0x1f0
+[   62.492121]  smp_apic_timer_interrupt+0x62/0x150
+[   62.492121]  apic_timer_interrupt+0xa2/0xb0
+[   62.492121]  </IRQ>
+[   62.492121] RIP: 0010:iowrite16+0x1d/0x30
+[   62.492121] RSP: 0018:ffffa477c034fcc8 EFLAGS: 00010292 ORIG_RAX: ffffffffffffff11
+[   62.492121] RAX: ffffffffa24fbdb0 RBX: ffff92a1f8f82000 RCX: 0000000000000001
+[   62.492121] RDX: ffffa477c0371000 RSI: ffffa477c0371000 RDI: 0000000000000000
+[   62.492121] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000001080020
+[   62.492121] R10: ffffdc7cc1e4fc00 R11: 0000000000000000 R12: 0000000000000000
+[   62.492121] R13: 0000000000000000 R14: ffff92a1f93f0000 R15: ffff92a1f8e1aa80
+[   62.492121]  ? vp_synchronize_vectors+0x60/0x60
+[   62.492121]  vp_notify+0x12/0x20
+[   62.492121]  virtqueue_notify+0x18/0x30
+[   62.492121]  virtio_queue_rq+0x2f5/0x300 [virtio_blk]
+[   62.492121]  blk_mq_dispatch_rq_list+0x7e/0x4a0
+[   62.492121]  blk_mq_do_dispatch_sched+0x4a/0xd0
+[   62.492121]  blk_mq_sched_dispatch_requests+0x106/0x170
+[   62.492121]  __blk_mq_run_hw_queue+0x80/0x90
+[   62.492121]  process_one_work+0x1e3/0x420
+[   62.492121]  worker_thread+0x2b/0x3d0
+[   62.492121]  ? process_one_work+0x420/0x420
+[   62.492121]  kthread+0x113/0x130
+[   62.492121]  ? kthread_create_worker_on_cpu+0x50/0x50
+[   62.492121]  ret_from_fork+0x3a/0x50
+[   62.492121] kworker/0:0H    R  running task        0     4      2 0x80000000
+[   62.492121] Workqueue: kblockd blk_mq_run_work_fn
+[   62.492121] Call Trace:
+[   62.492121]  <IRQ>
+[   62.492121]  sched_show_task+0xdf/0x100
+[   62.492121]  rcu_print_detail_task_stall_rnp+0x48/0x69
+[   62.492121]  rcu_check_callbacks+0x972/0x9d0
+[   62.492121]  ? tick_sched_do_timer+0x40/0x40
+[   62.492121]  update_process_times+0x28/0x50
+[   62.492121]  tick_sched_handle+0x22/0x70
+[   62.492121]  tick_sched_timer+0x34/0x70
+[   62.492121]  __hrtimer_run_queues+0xcc/0x250
+[   62.492121]  hrtimer_interrupt+0xab/0x1f0
+[   62.492121]  smp_apic_timer_interrupt+0x62/0x150
+[   62.492121]  apic_timer_interrupt+0xa2/0xb0
+[   62.492121]  </IRQ>
+[   62.492121] RIP: 0010:iowrite16+0x1d/0x30
+[   62.492121] RSP: 0018:ffffa477c034fcc8 EFLAGS: 00010292 ORIG_RAX: ffffffffffffff11
+[   62.492121] RAX: ffffffffa24fbdb0 RBX: ffff92a1f8f82000 RCX: 0000000000000001
+[   62.492121] RDX: ffffa477c0371000 RSI: ffffa477c0371000 RDI: 0000000000000000
+[   62.492121] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000001080020
+[   62.492121] R10: ffffdc7cc1e4fc00 R11: 0000000000000000 R12: 0000000000000000
+[   62.492121] R13: 0000000000000000 R14: ffff92a1f93f0000 R15: ffff92a1f8e1aa80
+[   62.492121]  ? vp_synchronize_vectors+0x60/0x60
+[   62.492121]  vp_notify+0x12/0x20
+[   62.492121]  virtqueue_notify+0x18/0x30
+[   62.492121]  virtio_queue_rq+0x2f5/0x300 [virtio_blk]
+[   62.492121]  blk_mq_dispatch_rq_list+0x7e/0x4a0
+[   62.492121]  blk_mq_do_dispatch_sched+0x4a/0xd0
+[   62.492121]  blk_mq_sched_dispatch_requests+0x106/0x170
+[   62.492121]  __blk_mq_run_hw_queue+0x80/0x90
+[   62.492121]  process_one_work+0x1e3/0x420
+[   62.492121]  worker_thread+0x2b/0x3d0
+[   62.492121]  ? process_one_work+0x420/0x420
+[   62.492121]  kthread+0x113/0x130
+[   62.492121]  ? kthread_create_worker_on_cpu+0x50/0x50
+[   62.492121]  ret_from_fork+0x3a/0x50
+
+Another important thing is that the commit works well on other hardware with the same setup (same host kernel binaries, same qemu command line). How could I try to track down the cause of the issue?
+
+
+
+
+
+
+
+Well, I've found that on the qemu side the VirtQueue for the virtio_blk device
+infinitely calls virtio_blk_handle_vq(), where virtio_blk_get_request()
+(virtqueue_pop() in essence) always returns NULL.
+
+
+virtqueue_pop() returns NULL because virtio_queue_empty_rcu() returns
+true all the time.
+
+
+
+Well, last_avail_idx equals shadow_avail_idx and both of them are 1
+on the qemu side, so only one request is transferred.
+I wonder why; probably something is badly cached, but the new avail_idx
+(which is supposed to become 2) never shows up.
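+
+For context, QEMU treats the ring as empty by comparing the guest-published
+avail index against the index it has already consumed; a simplified model of
+that check (names approximate, not the exact QEMU source):
+
+    #include <stdbool.h>
+    #include <stdint.h>
+
+    /* Minimal model of the bookkeeping above: the device has consumed
+     * requests up to last_avail_idx; shadow_avail_idx caches the guest's
+     * avail->idx as last read from shared memory. */
+    typedef struct {
+        uint16_t last_avail_idx;
+        uint16_t shadow_avail_idx;
+    } VirtQueueModel;
+
+    /* guest_avail_idx stands in for a fresh read of avail->idx from guest
+     * memory. The hang above matches a stale read: both indexes stay at 1
+     * even though the guest has queued a second request. */
+    static bool queue_empty(VirtQueueModel *vq, uint16_t guest_avail_idx)
+    {
+        if (vq->shadow_avail_idx != vq->last_avail_idx) {
+            return false;               /* unconsumed requests already known */
+        }
+        vq->shadow_avail_idx = guest_avail_idx;  /* refresh the cached copy */
+        return vq->shadow_avail_idx == vq->last_avail_idx;
+    }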
+
+
+The same story with 4.15.4 host kernel.
+
+
+This has been fixed in the kernel.
+
+
diff --git a/results/classifier/118/performance/1768 b/results/classifier/118/performance/1768
new file mode 100644
index 00000000..9d55119a
--- /dev/null
+++ b/results/classifier/118/performance/1768
@@ -0,0 +1,62 @@
+performance: 0.967
+user-level: 0.868
+architecture: 0.858
+TCG: 0.843
+mistranslation: 0.839
+graphic: 0.794
+device: 0.756
+x86: 0.548
+assembly: 0.541
+debug: 0.538
+kernel: 0.524
+vnc: 0.511
+hypervisor: 0.483
+arm: 0.466
+risc-v: 0.462
+semantic: 0.438
+i386: 0.415
+peripherals: 0.402
+ppc: 0.392
+network: 0.338
+socket: 0.336
+PID: 0.321
+VMM: 0.314
+boot: 0.307
+files: 0.288
+virtual: 0.218
+permissions: 0.165
+register: 0.159
+KVM: 0.084
+
+Could not allocate more than ~2GB with qemu-user
+Description of problem:
+On qemu-user, allocation fails beyond about 2GB on 32-bit platforms that support up to 4GB (arm, ppc, etc.)
+Steps to reproduce:
+1. Try to allocate more than 2GB [e.g. for(i=0;i<64;i++) if(malloc(64*1024*1024)==NULL) perror("Failed to allocate 64MB");]
+2. Only one 64MB chunk is allocated in the upper 2GB memory space
+3. Allocation fails after about 2GB.
+Additional information:
+The problem is in the 3rd parameter of the **pageflags_find** and **pageflags_next** functions (found in _accel/tcg/user-exec.c_), which should be **target_ulong** instead of the incorrect _target_long_ (the parameter is sign-extended when converted to uint64_t).
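+A minimal standalone illustration of that sign-extension hazard (the type
+names below are stand-ins for a 32-bit guest, not the QEMU definitions):
+```
+#include <inttypes.h>
+#include <stdint.h>
+#include <stdio.h>
+
+typedef int32_t  guest_long;   /* hypothetical stand-in for target_long */
+typedef uint32_t guest_ulong;  /* hypothetical stand-in for target_ulong */
+
+int main(void) {
+  uint32_t addr = 0x80000000u;  /* a guest address in the upper 2GB */
+  /* widening through the signed type sign-extends the address ... */
+  uint64_t bad  = (uint64_t)(guest_long)addr;
+  /* ... while the unsigned type zero-extends it, as intended */
+  uint64_t good = (uint64_t)(guest_ulong)addr;
+  printf("sign-extended: 0x%016" PRIx64 "\n", bad);   /* 0xffffffff80000000 */
+  printf("zero-extended: 0x%016" PRIx64 "\n", good);  /* 0x0000000080000000 */
+  return 0;
+}
+```
+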
+The testing program is the following:
+```
+#include <stdio.h>
+#include <stdlib.h>
+
+int main(void) {
+  unsigned int a;
+  char *al;
+  unsigned int sss = 1U*1024*1024*64;  /* 64MB per allocation */
+  for (a = 0; a < 128; a++) {          /* up to 8GB in total */
+    al = malloc(sss);
+    if (al != NULL) {
+      /* %p is the correct conversion for a pointer; the running total is
+         widened first so it cannot overflow unsigned int */
+      printf("ALLOC OK %llu (%p)!\n", (unsigned long long)sss*(a+1), (void *)al);
+    }
+    else {
+      printf("Cannot alloc %llu\n", (unsigned long long)sss*(a+1));
+      perror("Cannot alloc");
+      exit(1);
+    }
+  }
+  return 0;
+}
+```
diff --git a/results/classifier/118/performance/1784 b/results/classifier/118/performance/1784
new file mode 100644
index 00000000..188ea400
--- /dev/null
+++ b/results/classifier/118/performance/1784
@@ -0,0 +1,43 @@
+performance: 0.955
+device: 0.921
+graphic: 0.911
+permissions: 0.808
+assembly: 0.805
+peripherals: 0.785
+files: 0.771
+semantic: 0.757
+mistranslation: 0.705
+network: 0.695
+register: 0.695
+boot: 0.679
+user-level: 0.654
+architecture: 0.643
+debug: 0.637
+vnc: 0.633
+socket: 0.606
+PID: 0.577
+kernel: 0.513
+arm: 0.489
+risc-v: 0.473
+ppc: 0.420
+i386: 0.420
+hypervisor: 0.413
+x86: 0.392
+VMM: 0.389
+TCG: 0.344
+virtual: 0.336
+KVM: 0.206
+
+Mac M1 Max / Debian guest / Luks password / Switching to graphical login manager (lightdm/Gdm) hangs in 75% of cases
+Description of problem:
+In approximately 70% of the cases in which I start QEMU with a Debian guest that was installed with full disk encryption, QEMU hangs (does not respond) after I unlock the encrypted guest and the guest tries to start the graphical login manager (gdm or lightdm).
+
+I need to force-quit QEMU and restart it multiple times until the graphical login manager starts successfully.
+Steps to reproduce:
+1. Install Debian with (guided) full disk encryption and either the Gnome or the XFCE desktop environment
+2. To be able to unlock the hard disk after the installation has finished, the Linux boot parameter 'console=tty1' needs to be added to the Linux command line within grub
+3. Restart/reboot QEMU several times; QEMU will become unresponsive multiple times in this process.
+Additional information:
+I have been encountering this problem for several months now, with different versions of QEMU, macOS and Debian.
+
+There is one observation which might help: I installed [DropBear](https://packages.debian.org/buster/dropbear-initramfs) to experiment with remote unlocking of Luks-encrypted Linux boxes. It seems that QEMU does not go into the unresponsive state when I unlock the hard disk via SSH and do not focus the QEMU window until after the graphical login manager has started. (I have only tried remote unlocking a few times, so it is too early to confirm whether this works 100% of the time.)
diff --git a/results/classifier/118/performance/179 b/results/classifier/118/performance/179
new file mode 100644
index 00000000..ad9a2841
--- /dev/null
+++ b/results/classifier/118/performance/179
@@ -0,0 +1,31 @@
+performance: 0.856
+device: 0.845
+peripherals: 0.826
+graphic: 0.695
+network: 0.568
+architecture: 0.315
+arm: 0.313
+debug: 0.300
+permissions: 0.275
+semantic: 0.250
+user-level: 0.218
+mistranslation: 0.211
+virtual: 0.187
+boot: 0.146
+hypervisor: 0.143
+VMM: 0.123
+register: 0.116
+files: 0.092
+ppc: 0.091
+socket: 0.082
+vnc: 0.075
+risc-v: 0.062
+PID: 0.053
+i386: 0.038
+x86: 0.035
+assembly: 0.030
+TCG: 0.018
+kernel: 0.004
+KVM: 0.003
+
+qemu guest crashes on spice client USB redirected device removal
diff --git a/results/classifier/118/performance/1790460 b/results/classifier/118/performance/1790460
new file mode 100644
index 00000000..509cd5d5
--- /dev/null
+++ b/results/classifier/118/performance/1790460
@@ -0,0 +1,66 @@
+performance: 0.837
+debug: 0.544
+graphic: 0.526
+virtual: 0.472
+hypervisor: 0.449
+device: 0.441
+network: 0.429
+ppc: 0.425
+socket: 0.405
+user-level: 0.393
+semantic: 0.375
+vnc: 0.365
+architecture: 0.348
+PID: 0.337
+peripherals: 0.332
+TCG: 0.327
+kernel: 0.327
+x86: 0.321
+boot: 0.320
+register: 0.310
+files: 0.310
+risc-v: 0.296
+i386: 0.293
+assembly: 0.272
+permissions: 0.272
+mistranslation: 0.257
+arm: 0.255
+VMM: 0.246
+KVM: 0.192
+
+-icount,sleep=off mode is broken (target slows down or hangs)
+
+QEMU running with the options "-icount,sleep=off -rtc clock=vm" doesn't execute emulation at the maximum possible speed.
+The target virtual clock may run N times faster or slower than the realtime clock, where N depends on various unrelated conditions (i.e. it is effectively random from the user's point of view). The worst case is that the target hangs (hopefully early in the booting stage).
+I've described example scenarios here: http://lists.nongnu.org/archive/html/qemu-discuss/2018-08/msg00102.html
+
+The QEMU process just sleeps most of the time (polling, waiting on some condition, etc.). I've tried to debug the issue and am 99% convinced that there is a race somewhere in qemu's internals.
+
+The feature has been broken since the v2.6.0 release.
+The bad commit is 281b2201e4e18d5b9a26e1e8d81b62b5581a13be by Pavel Dovgalyuk, 03/10/2016 05:56 PM:
+
+  icount: remove obsolete warp call
+  
+  qemu_clock_warp call in qemu_tcg_wait_io_event function is not needed
+  anymore, because it is called in every iteration of main_loop_wait.
+  
+  Reviewed-by: Paolo Bonzini <email address hidden>
+
+  Signed-off-by: Pavel Dovgalyuk <email address hidden>
+  Message-Id: <20160310115603.4812.67559.stgit@PASHA-ISP>
+  Signed-off-by: Paolo Bonzini <email address hidden>
+
+I've reverted the commit on all major releases and the latest git master branch. The issue was fixed for all of them. My adaptation is trivial: just restore the removed function call before the "qemu_cond_wait(...)" line.
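+
+A sketch of that trivial adaptation in diff form (reconstructed from the
+reverted commit; the exact context in cpus.c varies between releases):
+
+--- a/cpus.c
++++ b/cpus.c
+@@ static void qemu_tcg_wait_io_event(CPUState *cpu)
+     while (all_cpu_threads_idle()) {
++        qemu_clock_warp(QEMU_CLOCK_VIRTUAL);
+         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+     }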
+
+I'm sure the following bugs are just particular cases of this issue: #1774677, #1653063.
+
+This modification also fixes issue:
+https://git.greensocs.com/qemu/qbox/commit/a8ed106032e375e715a531d6e93e4d9ec295dbdb
+Although I tested it only on v2.12.0 and didn't notice a performance improvement.
+
+Long-term testing showed that my trivial adaptation didn't fix the issue, so I stuck with the modification from QBox. It hasn't failed yet.
+And yes, the issue is still relevant to the current master branch.
+
+I think we fixed this bug in commit 013aabdc665e4256b38d which would have been in the 3.1.0 release (this is why we closed #1774677, which as you say is the same issue).
+
+
diff --git a/results/classifier/118/performance/1798057 b/results/classifier/118/performance/1798057
new file mode 100644
index 00000000..0945dfd8
--- /dev/null
+++ b/results/classifier/118/performance/1798057
@@ -0,0 +1,72 @@
+performance: 0.812
+KVM: 0.665
+architecture: 0.657
+files: 0.650
+hypervisor: 0.637
+graphic: 0.612
+semantic: 0.609
+kernel: 0.573
+debug: 0.565
+device: 0.479
+ppc: 0.473
+user-level: 0.412
+x86: 0.412
+socket: 0.376
+network: 0.329
+risc-v: 0.326
+mistranslation: 0.301
+register: 0.296
+PID: 0.287
+vnc: 0.284
+boot: 0.256
+arm: 0.229
+permissions: 0.216
+VMM: 0.206
+peripherals: 0.198
+virtual: 0.179
+TCG: 0.177
+assembly: 0.092
+i386: 0.081
+
+Not able to start instances larger than 1 TB
+
+Specs:
+
+CPU: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
+OS: Ubuntu 18.04 AMD64
+QEMU: 1:2.11+dfsg-1ubuntu7.6 (Ubuntu Bionic Package)
+Openstack: Openstack Queens (Ubuntu Bionic Package)
+Libvirt-daemon: 4.0.0-1ubuntu8.5
+Seabios: 1.10.2-1ubuntu1
+
+
+The Problem:
+We are not able to start instances which have a memory size over 1 TB.
+Shortly after starting, the instances lock up. Starting guests with a lower amount of RAM works
+perfectly. We dealt with the same problem in the past with an older QEMU version (2.5) by patching some source files according to this patch:
+
+https://git.centos.org/blob/rpms!!qemu-kvm.git/34b32196890e2c41b0aee042e600ba422f29db17/SOURCES!kvm-fix-guest-physical-bits-to-match-host-to-go-beyond-1.patch
+
+
+I think we now have somewhat the same problem here; however, the source base has changed and I'm not able to find the corresponding snippet to patch.
+
+Also, guests show a wrong physical address size, which is probably the cause of the lockups on high-memory guests:
+root@debug:~# grep physical /proc/cpuinfo
+physical id	: 0
+address sizes	: 40 bits physical, 48 bits virtual 
+
+Any way to fix this?
+
+Hi Alex,
+  You should be able to fix this by passing the right cpu flags, e.g.:
+
+-cpu IvyBridge,host-phys-bits=yes
+
+or
+
+-cpu IvyBridge,phys-bits=46
+
+Dave
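+
+For what it's worth, sufficiently new libvirt (which OpenStack drives) can
+express the first option in the domain XML; a hedged sketch, assuming a
+libvirt version that supports the maxphysaddr element:
+
+  <cpu mode='host-model'>
+    <maxphysaddr mode='passthrough'/>
+  </cpu>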
+
+I'm assuming that the right phys-bits setting fixed the bug ... so I'm marking this ticket as "Invalid". If the problem still persists, then please reopen it.
+
diff --git a/results/classifier/118/performance/1808824 b/results/classifier/118/performance/1808824
new file mode 100644
index 00000000..7c116ac0
--- /dev/null
+++ b/results/classifier/118/performance/1808824
@@ -0,0 +1,49 @@
+performance: 0.829
+graphic: 0.814
+device: 0.806
+ppc: 0.730
+mistranslation: 0.700
+semantic: 0.674
+vnc: 0.666
+register: 0.631
+PID: 0.605
+user-level: 0.603
+VMM: 0.575
+socket: 0.563
+peripherals: 0.559
+virtual: 0.546
+hypervisor: 0.546
+architecture: 0.537
+risc-v: 0.536
+permissions: 0.533
+assembly: 0.515
+kernel: 0.509
+network: 0.497
+TCG: 0.495
+arm: 0.482
+KVM: 0.476
+debug: 0.447
+boot: 0.436
+files: 0.431
+x86: 0.424
+i386: 0.363
+
+Mouse leaves VM window when Grab on Hover isn't selected, on Windows 10 with Intel HAXM
+
+On Windows 10.0.17134 I have been having a problem where the mouse leaves the VM window after a short time when "grab on hover" isn't selected. The VM then tries to grab on hover, the mouse grabs in weird places, and it becomes very unwieldy to control the mouse in the VM window.
+
+This is exacerbated by a super slow response, making it nearly unusable, if the Intel® Hardware Accelerated Execution Manager (Intel® HAXM) is not installed on my machine.
+
+I know they are different things, but they compound each other: with a mouse that does not stay in the VM window and a virtualized CPU that is acting very slow, the system is unusable.
+
+https://youtu.be/Vpi59ptOiyc
+Here is a video demonstrating the main issue. It does not show any lag, but it does show how the VM doesn't capture the mouse correctly.
+
+After capture, if you move the mouse far enough off the VM window, the mouse leaves the window in odd places that aren't representative of where the mouse actually is.
+The mouse will at times leave the VM from the middle of the VM window.
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting older bugs to "Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has been solved with a newer version of QEMU already. Thank you and sorry for the inconvenience.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1811720 b/results/classifier/118/performance/1811720
new file mode 100644
index 00000000..611d2ca1
--- /dev/null
+++ b/results/classifier/118/performance/1811720
@@ -0,0 +1,42 @@
+performance: 0.856
+device: 0.794
+graphic: 0.596
+semantic: 0.589
+mistranslation: 0.486
+architecture: 0.436
+vnc: 0.404
+kernel: 0.392
+socket: 0.385
+ppc: 0.380
+boot: 0.346
+risc-v: 0.345
+arm: 0.330
+network: 0.288
+VMM: 0.246
+debug: 0.201
+user-level: 0.194
+x86: 0.192
+permissions: 0.185
+KVM: 0.177
+PID: 0.176
+TCG: 0.169
+files: 0.163
+i386: 0.147
+register: 0.115
+assembly: 0.112
+peripherals: 0.102
+hypervisor: 0.095
+virtual: 0.066
+
+storage physical_block_size is restricted to uint16_t
+
+It is desirable to set -global scsi-hd.physical_block_size=4194304 for RBD-backed storage.
+
+But unfortunately, this value is restricted to a uint16_t, i.e. 65536 maximum.
+
+For example, scsi-hd.discard_granularity=4194304 is not so restricted (and works as expected).
+
+I think the SCSI spec is limited to 16 bits for representing the block length (in bytes) (see READ CAPACITY(10) command). It's also probably sub-optimal to force a full 4MiB write even for small IOs. You might achieve what you are looking for by setting the minimal and optimal IO size hints to 4MiB (exposed via SCSI block limits VPD page) using "min_io_size" and "opt_io_size" settings.
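+
+For example, something along these lines (an untested sketch following the
+-global syntax above; check the units and the width of these properties for
+your QEMU version, since some of them are also narrow integer types):
+
+-global scsi-hd.min_io_size=4194304 -global scsi-hd.opt_io_size=4194304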
+
+Yes, you are right. Thanks for the response.
+
diff --git a/results/classifier/118/performance/1820 b/results/classifier/118/performance/1820
new file mode 100644
index 00000000..9790e6ac
--- /dev/null
+++ b/results/classifier/118/performance/1820
@@ -0,0 +1,40 @@
+performance: 0.992
+device: 0.972
+graphic: 0.940
+architecture: 0.919
+hypervisor: 0.820
+virtual: 0.792
+semantic: 0.636
+risc-v: 0.526
+network: 0.507
+permissions: 0.430
+debug: 0.417
+kernel: 0.401
+register: 0.396
+boot: 0.366
+x86: 0.359
+mistranslation: 0.346
+PID: 0.345
+user-level: 0.344
+arm: 0.344
+vnc: 0.331
+KVM: 0.303
+VMM: 0.271
+peripherals: 0.260
+TCG: 0.145
+socket: 0.129
+files: 0.111
+ppc: 0.057
+assembly: 0.044
+i386: 0.002
+
+whpx is slower than tcg
+Description of problem:
+I find whpx much slower than tcg, which is rather odd.
+Steps to reproduce:
+1. Enable Hyper-V
+2. Run qemu with **-accel whpx,kernel-irqchip=off**
+Additional information:
+my cpu: intel i7 6500u
+memory: 8GB
+my gpu: intel hd graphics 520
diff --git a/results/classifier/118/performance/1833668 b/results/classifier/118/performance/1833668
new file mode 100644
index 00000000..23ce66b5
--- /dev/null
+++ b/results/classifier/118/performance/1833668
@@ -0,0 +1,81 @@
+architecture: 0.994
+mistranslation: 0.991
+arm: 0.982
+user-level: 0.977
+performance: 0.953
+debug: 0.924
+files: 0.902
+ppc: 0.896
+peripherals: 0.874
+semantic: 0.871
+graphic: 0.868
+PID: 0.867
+hypervisor: 0.856
+device: 0.831
+permissions: 0.830
+socket: 0.806
+kernel: 0.806
+network: 0.793
+register: 0.772
+assembly: 0.748
+TCG: 0.732
+vnc: 0.719
+KVM: 0.694
+VMM: 0.691
+boot: 0.655
+virtual: 0.632
+risc-v: 0.620
+i386: 0.509
+x86: 0.379
+
+linux-user: Unable to run ARM binaries on Aarch64
+
+Download an ARM package from https://packages.debian.org/sid/busybox-static
+
+Here tested with: busybox-static_1.30.1-4_armel.deb
+
+$ file busybox.armel
+busybox.armel: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=12cf572e016bafa240e113b57b3641e94b837f37, stripped
+
+$ qemu-aarch64 --version
+qemu-aarch64 version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.14)
+
+$ qemu-aarch64 busybox.armel
+busybox.armel: Invalid ELF image for this architecture
+
+$ qemu-aarch64 -cpu cortex-a7 busybox.armel
+unable to find CPU model 'cortex-a7'
+
+Also reproduced with commit 33d609990621dea6c7d056c86f707b8811320ac1;
+while the aarch64_cpus[] array contains Aarch64 CPUs, the arm_cpus[] array is empty:
+
+$ gdb -q aarch64-linux-user/qemu-aarch64
+(gdb) p aarch64_cpus
+$1 = {{name = 0x1fe4e8 "cortex-a57", initfn = 0x109bc0 <aarch64_a57_initfn>, class_init = 0x0}, {name = 0x1fe508 "cortex-a53", initfn = 0x109a10 <aarch64_a53_initfn>, class_init = 0x0}, {name = 0x1fe518 "cortex-a72", 
+    initfn = 0x109868 <aarch64_a72_initfn>, class_init = 0x0}, {name = 0x218020 "max", initfn = 0x109d70 <aarch64_max_initfn>, class_init = 0x0}, {name = 0x0, initfn = 0x0, class_init = 0x0}}
+(gdb) p arm_cpus
+$2 = {{name = 0x0, initfn = 0x0, class_init = 0x0}}
+
+Of course.  There's a separate qemu-arm executable for that.
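+
+That is, the expected invocation for this binary would be (using the same
+static busybox binary as above):
+
+$ qemu-arm busybox.armel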
+
+On 25/06/2019 at 16:43, Richard Henderson wrote:
+> Of course.  There's a separate qemu-arm executable for that.
+
+On some other architectures (like ppc/ppc64) the idea is that the 64-bit
+version also supports all the 32-bit CPUs.
+
+I think that's why this bug was opened.
+
+
+On 6/25/19 5:27 PM, Laurent Vivier wrote:
+> Le 25/06/2019 à 16:43, Richard Henderson a écrit :
+>> Of course.  There's a separate qemu-arm executable for that.
+> 
+> On some other architectures (like ppc/ppc64) the idea is the 64bit
+> version supports also all 32bit versions CPUs.
+> 
+> I think it's why this bug has been opened.
+
+At any rate the error message could be more explicit, to avoid confusion.
+
+
diff --git a/results/classifier/118/performance/1834496 b/results/classifier/118/performance/1834496
new file mode 100644
index 00000000..330f6e8a
--- /dev/null
+++ b/results/classifier/118/performance/1834496
@@ -0,0 +1,83 @@
+arm: 0.950
+TCG: 0.928
+performance: 0.927
+i386: 0.918
+architecture: 0.918
+user-level: 0.833
+PID: 0.831
+debug: 0.789
+permissions: 0.776
+ppc: 0.775
+device: 0.757
+graphic: 0.755
+register: 0.738
+risc-v: 0.730
+network: 0.722
+peripherals: 0.707
+semantic: 0.696
+VMM: 0.683
+x86: 0.663
+files: 0.655
+vnc: 0.621
+hypervisor: 0.612
+kernel: 0.612
+KVM: 0.602
+socket: 0.580
+assembly: 0.570
+boot: 0.558
+mistranslation: 0.503
+virtual: 0.494
+
+Regressions on arm target with some GCC tests
+
+Hi,
+
+After trying qemu master:
+commit 474f3938d79ab36b9231c9ad3b5a9314c2aeacde
+Merge: 68d7ff0 14f5d87
+Author: Peter Maydell <email address hidden>
+Date:   Fri Jun 21 15:40:50 2019 +0100
+
+I found several regressions compared to qemu-3.1 when running the GCC testsuite.
+I'm attaching a tarball containing several GCC tests (binaries), needed shared libs, and a short script to run all the tests.
+
+All tests used to pass w/o error (one of them is verbose), but with a recent qemu, all of them make qemu crash:
+
+qemu: uncaught target signal 6 (Aborted) - core dumped
+
+This was noticed with GCC master configured with
+--target arm-none-linux-gnueabi
+--with-mode arm
+--with-cpu cortex-a9
+
+and calling qemu with --cpu cortex-a9 (the script uses "any"; this makes no difference).
+
+I have noticed other failures with arm-v8 code, but this is probably the same root cause. Since it's a bit tedious to manually rebuild & extract the testcases, I'd prefer to start with this subset, and I can extract more if needed later.
+
+Thanks
+
+
+
+I bisected a chunk of the errors to:
+
+  commit c6fb8c0cf704c4a1a48c3e99e995ad4c58150dab (refs/bisect/bad)
+  Author: Richard Henderson <email address hidden>
+  Date:   Mon Feb 25 11:42:35 2019 -0800
+
+      tcg/i386: Support INDEX_op_extract2_{i32,i64}
+
+      Signed-off-by: Richard Henderson <email address hidden>
+
+Specifically, I think the problem is in how tcg_gen_deposit_i32 handles the if (ofs + len == 32) case.
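+
+For reference, a self-contained model of the i32 extract2 operation involved
+(my paraphrase of the TCG semantics, not QEMU source):
+
+    #include <stdint.h>
+
+    /* extract2(lo, hi, pos): the low 32 bits of the 64-bit value hi:lo
+     * shifted right by pos, for 0 < pos < 32. A deposit with
+     * ofs + len == 32 can be expanded in terms of this operation. */
+    static uint32_t extract2_i32(uint32_t lo, uint32_t hi, unsigned pos)
+    {
+        return (lo >> pos) | (hi << (32 - pos));
+    }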
+
+
+Fixed by:
+
+Subject: [PATCH for-4.1] tcg: Fix constant folding of INDEX_op_extract2_i32
+Date: Tue,  9 Jul 2019 14:19:00 +0200
+Message-Id: <email address hidden>
+
+
+I confirm this patch fixes the problem I reported. Thanks!
+
+
diff --git a/results/classifier/118/performance/1840865 b/results/classifier/118/performance/1840865
new file mode 100644
index 00000000..562344d8
--- /dev/null
+++ b/results/classifier/118/performance/1840865
@@ -0,0 +1,74 @@
+performance: 0.825
+device: 0.805
+ppc: 0.796
+PID: 0.795
+graphic: 0.769
+register: 0.686
+files: 0.669
+peripherals: 0.606
+VMM: 0.601
+network: 0.586
+debug: 0.584
+vnc: 0.575
+TCG: 0.570
+kernel: 0.558
+permissions: 0.553
+x86: 0.529
+mistranslation: 0.513
+socket: 0.509
+semantic: 0.476
+arm: 0.449
+architecture: 0.429
+risc-v: 0.413
+user-level: 0.409
+i386: 0.404
+virtual: 0.397
+boot: 0.383
+hypervisor: 0.291
+KVM: 0.181
+assembly: 0.158
+
+qemu crashes when doing iotest on virtio-9p filesystem
+
+QEMU crashes when running an avocado-vt test on a virtio-9p filesystem.
+This bug can be reproduced by running https://github.com/autotest/tp-qemu/blob/master/qemu/tests/9p.py.
+The crash stack looks like:
+
+Program terminated with signal SIGSEGV, Segmentation fault.
+#0  v9fs_mark_fids_unreclaim (pdu=pdu@entry=0xaaab00046868, path=path@entry=0xffff851e2fa8)
+    at hw/9pfs/9p.c:505
+#1  0x0000aaaae3585acc in v9fs_unlinkat (opaque=0xaaab00046868) at hw/9pfs/9p.c:2590
+#2  0x0000aaaae3811c10 in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>)
+    at util/coroutine-ucontext.c:116
+#3  0x0000ffffa13ddb20 in ?? () from /lib64/libc.so.6
+Backtrace stopped: not enough registers or memory available to unwind further
+
+A segmentation fault is triggered at hw/9pfs/9p.c line 505:
+
+    for (fidp = s->fid_list; fidp; fidp = fidp->next) {
+        if (fidp->path.size != path->size) {     /* fidp is invalid */
+            continue;
+        }
+
+(gdb) p path
+$10 = (V9fsPath *) 0xffff851e2fa8
+(gdb) p *path
+$11 = {size = 21, data = 0xaaaafed6f420 "./9p_test/p2a1/d0/f1"}
+(gdb) p *fidp
+Cannot access memory at address 0x101010101010101
+(gdb) p *pdu
+$12 = {size = 19, tag = 54, id = 76 'L', cancelled = 0 '\000', complete = {entries = {
+      sqh_first = 0x0, sqh_last = 0xaaab00046870}}, s = 0xaaab000454b8, next = {
+    le_next = 0xaaab000467c0, le_prev = 0xaaab00046f88}, idx = 88}
+(gdb) 
+
+Address Sanitizer reports an error saying that there is a heap-use-after-free on *fidp*.
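+
+Generically, walking a linked list whose entries can be freed while the walk
+blocks calls for fetching the next pointer before the body runs; an
+illustrative sketch only (the real fix also has to keep a reference on the
+fid across blocking operations):
+
+    V9fsFidState *fidp, *fidp_next;
+
+    for (fidp = s->fid_list; fidp; fidp = fidp_next) {
+        fidp_next = fidp->next;          /* grab before fidp can be freed */
+        if (fidp->path.size != path->size) {
+            continue;
+        }
+        /* ... body that may drop the last reference to fidp ... */
+    }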
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/181
+
+
diff --git a/results/classifier/118/performance/1841990 b/results/classifier/118/performance/1841990
new file mode 100644
index 00000000..cad6b1df
--- /dev/null
+++ b/results/classifier/118/performance/1841990
@@ -0,0 +1,416 @@
+performance: 0.930
+PID: 0.925
+debug: 0.917
+architecture: 0.916
+user-level: 0.903
+register: 0.900
+assembly: 0.897
+graphic: 0.895
+permissions: 0.895
+risc-v: 0.894
+semantic: 0.889
+arm: 0.881
+device: 0.873
+ppc: 0.841
+socket: 0.831
+vnc: 0.812
+kernel: 0.812
+virtual: 0.808
+mistranslation: 0.803
+KVM: 0.790
+files: 0.780
+TCG: 0.770
+boot: 0.759
+network: 0.747
+VMM: 0.732
+peripherals: 0.706
+hypervisor: 0.690
+x86: 0.622
+i386: 0.566
+
+instruction 'denbcdq' misbehaving
+
+Instruction 'denbcdq' appears to have no effect.  Test case attached.
+
+On ppc64le native:
+--
+gcc -g -O -mcpu=power9 bcdcfsq.c test-denbcdq.c -o test-denbcdq
+$ ./test-denbcdq
+0x00000000000000000000000000000000
+0x0000000000000000000000000000000c
+0x22080000000000000000000000000000
+$ ./test-denbcdq 1
+0x00000000000000000000000000000001
+0x0000000000000000000000000000001c
+0x22080000000000000000000000000001
+$ ./test-denbcdq $(seq 0 99)
+0x00000000000000000000000000000064
+0x0000000000000000000000000000100c
+0x22080000000000000000000000000080
+--
+
+With "qemu-ppc64le -cpu power9"
+--
+$ qemu-ppc64le -cpu power9 -L [...] ./test-denbcdq
+0x00000000000000000000000000000000
+0x0000000000000000000000000000000c
+0x0000000000000000000000000000000c
+$ qemu-ppc64le -cpu power9 -L [...] ./test-denbcdq 1
+0x00000000000000000000000000000001
+0x0000000000000000000000000000001c
+0x0000000000000000000000000000001c
+$ qemu-ppc64le -cpu power9 -L [...] ./test-denbcdq $(seq 100)
+0x00000000000000000000000000000064
+0x0000000000000000000000000000100c
+0x0000000000000000000000000000100c
+--
+
+I started looking at the code, but I got confused rather quickly. Could it be related to endianness? I think denbcdq arrived on the scene before little-endian was a big deal. Maybe it is something to do with utilizing implicit floating-point register pairs... I don't think the right data is getting to helper_denbcdq, which would point back to the gen_fprp_ptr uses in dfp-impl.inc.c (GEN_DFP_T_FPR_I32_Rc). (Maybe?)
+
+
+
+I tried to compile your test program with 2 different GCC versions but it keeps failing; do you need a special/recent version? Meanwhile, can you attach a statically linked binary?
+
+$ gcc -v
+gcc version 6.3.0 20170516 (Debian 6.3.0-18) 
+
+$ gcc -g -O -mcpu=power9 test-denbcdq.c -o test-denbcdq
+test-denbcdq.c: In function 'bcdcfsq':
+test-denbcdq.c:7:2: error: impossible register constraint in 'asm'
+  asm volatile ( "bcdcfsq. %0,%1,0" : "=v" (r) : "v" (i128) );
+  ^~~
+
+--
+
+$ gcc version 8.1.1 20180626 (Red Hat Cross 8.1.1-3) (GCC)
+
+$ gcc -g -O -mcpu=power9 test-denbcdq.c -o test-denbcdq
+test-denbcdq.c: In function ‘main’:
+test-denbcdq.c:15:3: error: decimal floating point not supported for this target
+   _Decimal128 d128;
+   ^~~~~~~~~~~
+
+FWIW I could compile the attached test with:
+
+$ gcc -v
+gcc version 8.3.1 20190507 (Red Hat 8.3.1-4) (GCC)
+
+@Philippe, thank you for spending the time to find a compiler that works with the testcase. I've been operating on RHEL 8 primarily:
+gcc version 8.2.1 20180905 (Red Hat 8.2.1-3) (GCC)
+
+This seems related to this change:
+
+commit ef96e3ae9698d6726a8113f448c82985a9f31ff5
+Author: Mark Cave-Ayland <email address hidden>
+Date:   Wed Jan 2 09:14:22 2019 +0000
+
+    target/ppc: move FP and VMX registers into aligned vsr register array
+    
+    The VSX register array is a block of 64 128-bit registers where the first 32
+    registers consist of the existing 64-bit FP registers extended to 128-bit
+    using new VSR registers, and the last 32 registers are the VMX 128-bit
+    registers as show below:
+    
+                64-bit               64-bit
+        +--------------------+--------------------+
+        |        FP0         |                    |  VSR0
+        +--------------------+--------------------+
+        |        FP1         |                    |  VSR1
+        +--------------------+--------------------+
+        |        ...         |        ...         |  ...
+        +--------------------+--------------------+
+        |        FP30        |                    |  VSR30
+        +--------------------+--------------------+
+        |        FP31        |                    |  VSR31
+        +--------------------+--------------------+
+        |                  VMX0                   |  VSR32
+        +-----------------------------------------+
+        |                  VMX1                   |  VSR33
+        +-----------------------------------------+
+        |                  ...                    |  ...
+        +-----------------------------------------+
+        |                  VMX30                  |  VSR62
+        +-----------------------------------------+
+        |                  VMX31                  |  VSR63
+        +-----------------------------------------+
+    
+    In order to allow for future conversion of VSX instructions to use TCG vector
+    operations, recreate the same layout using an aligned version of the existing
+    vsr register array.
+    
+    Since the old fpr and avr register arrays are removed, the existing callers
+    must also be updated to use the correct offset in the vsr register array. This
+    also includes switching the relevant VMState fields over to using subarrays
+    to make sure that migration is preserved.
+
+@@ -1055,11 +1053,10 @@ struct CPUPPCState {
+-    /* VSX registers */
+-    uint64_t vsr[32];
++    /* VSX registers (including FP and AVR) */
++    ppc_vsr_t vsr[64] QEMU_ALIGNED(16);
+
+The denbcdq helper is:
+
+#define DFP_HELPER_ENBCD(op, size)                                           \
+void helper_##op(CPUPPCState *env, uint64_t *t, uint64_t *b, uint32_t s)     \
+{                                                                            \
+[...]
+    if ((size) == 64) {                                                      \
+        t[0] = dfp.t64[0];                                                   \
+    } else if ((size) == 128) {                                              \
+        t[0] = dfp.t64[HI_IDX];                                              \
+        t[1] = dfp.t64[LO_IDX];                                              \
+    }                                                                        \
+}
+
+t[1] doesn't point to the proper vsr register anymore.
+
+Thanks for the report Paul (and also the investigation work Philippe).
+
+So yes, it seems the DFP code is more fallout from the conversion of the floating-point registers over to the host-endian/VSR format. I've had a quick look at this, and it seems that the simple fix of compensating for the FP registers no longer being contiguous still won't work on ppc64le.
+
+In order to fix this properly I think the best solution is to use an approach similar to that used in my last set of VSX patches, i.e. using macros to avoid having separate code paths for big and little endian hosts.
+
+I can certainly come up with some patches for this, however I don't have any ppc64le hardware to test it myself. If I were to do a trial conversion of denbcdq would you be able to test it for me?
+
+
+I have access to lots of Power hardware, and I am happy to test and help however I can! Thanks, Mark!
+
+Sorry I didn't get a chance to look at this before I went away on holiday, however I've just posted a patchset at https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg05577.html which should resolve the issue for you.
+
+With the above patchset applied I now see the following results with your test program:
+
+LE host:
+$ ../qemu-ppc64le -L /usr/powerpc64le-linux-gnu -cpu power9 test-denbcdqle
+0x00000000000000000000000000000000
+0x0000000000000000000000000000000c
+0x22080000000000000000000000000000
+$ ../qemu-ppc64le -L /usr/powerpc64le-linux-gnu -cpu power9 test-denbcdqle 1
+0x00000000000000000000000000000001
+0x0000000000000000000000000000001c
+0x22080000000000000000000000000001
+$ ../qemu-ppc64le -L /usr/powerpc64le-linux-gnu -cpu power9 test-denbcdqle $(seq 0 99)
+0x00000000000000000000000000000064
+0x0000000000000000000000000000100c
+0x22080000000000000000000000000080
+
+BE host:
+$ ../qemu-ppc64 -L /usr/powerpc64-linux-gnu -cpu power9 test-denbcdq
+0x00000000000000000000000000000000
+0x000000000000000c0000000000000000
+0x00000000000000002208000000000000
+$ ../qemu-ppc64 -L /usr/powerpc64-linux-gnu -cpu power9 test-denbcdq 1
+0x00000000000000010000000000000000
+0x000000000000001c0000000000000000
+0x00000000000000012208000000000000
+$ ../qemu-ppc64 -L /usr/powerpc64-linux-gnu -cpu power9 test-denbcdq $(seq 0 99)
+0x00000000000000640000000000000000
+0x000000000000100c0000000000000000
+0x00000000000000802208000000000000
+
+If you could confirm that the BE host results above match those on real hardware then that would be great as I've switched over to use macros that should do the right thing regardless of host endian.
+
+Finally, if you have access to a more comprehensive test suite, that would be helpful for testing more of the 64-bit DFP number paths and some of the more esoteric DFP instructions.
+
+I'm still trying to track down a BE system.  Everything I have which is newer than POWER7 is LE, and POWER7 is not sufficient to run the test.
+
+The test suite that produced the problem is from https://github.com/open-power-sdk/pveclib. The good news is that with your (v1) changes, 275 tests no longer fail. 22 tests still fail, but I bet those are different issue(s).
+
+That certainly sounds like progress. Did you see the follow up email indicating the typo that I found in patch 6? It can be fixed by applying the following diff on top:
+
+diff --git a/target/ppc/dfp_helper.c b/target/ppc/dfp_helper.c
+index c2d335e928..b801acbedc 100644
+--- a/target/ppc/dfp_helper.c
++++ b/target/ppc/dfp_helper.c
+@@ -1054,7 +1054,7 @@ static inline void dfp_set_sign_64(ppc_vsr_t *t, uint8_t sgn)
+ static inline void dfp_set_sign_128(ppc_vsr_t *t, uint8_t sgn)
+ {
+     t->VsrD(0) <<= 4;
+-    t->VsrD(0) |= (t->VsrD(0) >> 60);
++    t->VsrD(0) |= (t->VsrD(1) >> 60);
+     t->VsrD(1) <<= 4;
+     t->VsrD(1) |= (sgn & 0xF);
+ }
+
+Does that help any more tests to pass? Also the changes to the FP register layout were made in QEMU 4.0 and so it seems to me that even if some tests fail, if the results between QEMU 3.1 and QEMU git master with the patchset applied are equivalent then we can assume that the patchset functionality is correct.
+
+
+> Did you see the follow up email indicating the typo that I found in patch 6?
+
+I did, then forgot to include it in my build.  I've included that change now...
+
+> Does that help any more tests to pass?
+
+I'm down from 22 failures to 8.
+
+That's looking much better :)  And finally, how many failures do you get running the same test under QEMU 3.1? If that gives you zero failures then I'll need to look a lot closer at the changes to try and figure out what is going on.
+
+As a matter of interest, which tests are the ones that are failing?
+
+I haven't tried QEMU 3.1 yet. Adding it to my to-do list.
+
+I am narrowing down the remaining failures. Within the pveclib test suite there are two tests; the failing one, "pveclib_test", contains numerous subtests. The failing subtests are:
+- test_setb_bcdsq
+- test_setb_bcdinv
+- test_bcdsr
+- test_bcdsrrqi
+
+Investigating the first two so far, it looks like "bcdadd." and "bcdsub." are not operating correctly. Here are gdb sessions showing the difference in behavior between QEMU 4.2+patches and hardware (in that order):
+
+QEMU 4.2+patches:
+
+(gdb) x/i $pc                                                                                                       
+=> 0x10000698 <vec_setbool_bcdsq+60>:   bcdsub. v0,v0,v1,0                                                          
+(gdb) p $vr0.uint128                                                                                                
+$3 = 0x9999999999999999999999999999999d                                                                             
+(gdb) p $vr1.uint128                                                                                                
+$4 = 0x1d                                                                                                           
+(gdb) stepi                                                                                                         
+(gdb) p $vr1.uint128                                                                                                
+$5 = 0x1d
+
+hardware:
+
+1: x/i $pc
+=> 0x10000698 <vec_setbool_bcdsq+60>:	bcdsub. v0,v0,v1,0
+(gdb) p $vr0.uint128
+$2 = 0x9999999999999999999999999999999d
+(gdb) p $vr1.uint128
+$3 = 0x1d
+(gdb) nexti
+(gdb) p $vr0.uint128
+$4 = 0x9999999999999999999999999999998d
+
+--
+
+QEMU 4.2+patches:
+
+=> 0x10000740 <vec_setbool_bcdinv+60>:  bcdadd. v0,v0,v1,0
+(gdb) p $vr0.uint128                                      
+$1 = 0xa999999999999999000000000000000c                   
+(gdb) p $vr1.uint128                                      
+$2 = 0xc                                                  
+(gdb) p $cr                                               
+$4 = 0x24000242                                           
+(gdb) nexti                                               
+(gdb) p $vr0.uint128                                      
+$5 = 0xffffffffffffffffffffffffffffffff                   
+(gdb) p $cr                             
+$6 = 0x24000212                         
+
+hardware:
+
+=> 0x10000740 <vec_setbool_bcdinv+60>:  bcdadd. v0,v0,v1,0
+(gdb) p $vr0.uint128
+$2 = 0xa999999999999999000000000000000c
+(gdb) p $vr1.uint128
+$3 = 0xc
+(gdb) p $cr
+$4 = 0x24000442
+(gdb) nexti
+(gdb) p $vr0.uint128
+$5 = 0x999999999999999000000000000000c
+(gdb) p $cr
+$6 = 0x24000412
+
+Right, so this looks like a different bug: if you look at helper_bcdadd() and helper_bcdsub() in target/ppc/int_helper.c then you can see the problem straight away: the code is accessing the elements of ppc_avr_t directly without using the VsrX() macros, which correct for host endian.
+
+Fortunately the fix is really easy - replace the direct access with the relevant VsrX() macro from target/ppc/cpu.h instead. It does look as if there are several places in the BCD code that need fixing up, though.
+
+The first thing to fix is the #define BCD_DIG_BYTE around line 2055: the VsrX() macro offsets are in "big-endian" format to match the ISA specification, so VsrD(0) is the MSB and VsrD(1) is the LSB, which means that during the conversion you generally want the index from within the #if defined(HOST_WORDS_BIGENDIAN) ... #endif section.
+
+Given that the VsrX() macros invert the array index according to host endian then you can completely remove everything between #if defined(HOST_WORDS_BIGENDIAN) ... #endif and replace it with simply:
+
+    #define BCD_DIG_BYTE(n) (15 - ((n) / 2))
+
+Then as an example in the bcd_get_sgn() function below you can change the switch from:
+
+    switch (bcd->u8[BCD_DIG_BYTE(0)] & 0xF)
+
+to:
+
+    switch (bcd->VsrB(BCD_DIG_BYTE(0)) & 0xF)
+
+etc., and repeat for the remaining bcd helpers down to helper_vsbox() around line 2766. Note it seems the last few bcd helpers have a #if defined(HOST_WORDS_BIGENDIAN) ... #endif section towards the start that might need a bit of thought; however, once they are written in terms of the VsrX() macros then everything will "just work" regardless of host endian.
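+
+For illustration, the converted helper might end up looking roughly like
+this (a sketch, not the final patch):
+
+    #define BCD_DIG_BYTE(n) (15 - ((n) / 2))
+
+    static int bcd_get_sgn(ppc_avr_t *bcd)
+    {
+        switch (bcd->VsrB(BCD_DIG_BYTE(0)) & 0xF) {
+        case 0xA: case 0xC: case 0xE: case 0xF:
+            return 1;    /* preferred and alternate positive sign codes */
+        case 0xB: case 0xD:
+            return -1;   /* negative sign codes */
+        default:
+            return 0;    /* not a valid sign nibble */
+        }
+    }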
+
+
+`vsl` appears to be acting incorrectly as well, per the test 'vec_bcdsr':
+
+QEMU 4.2+patches:
+
+=> 0x100006e0 <vec_slq+132>:    vsl     v0,v0,v1
+(gdb) p $vr0.uint128                                  
+$21 = 0x10111213141516172021222324252650              
+(gdb) p $vr1.uint128                                  
+$22 = 0x0                                             
+(gdb) stepi                                           
+0x00000000100006e4 in vec_slq ()                      
+1: x/i $pc
+=> 0x100006e4 <vec_slq+136>:    xxlor   vs0,vs32,vs32 
+(gdb) p $vr0.uint128                                  
+$23 = 0x10111213141516572021222324252650
+
+hardware:
+
+=> 0x100006e0 <vec_slq+132>:    vsl     v0,v0,v1
+(gdb) p $vr0.uint128
+$21 = 0x10111213141516172021222324252650
+(gdb) p $vr1.uint128
+$22 = 0x0
+(gdb) stepi
+0x00000000100006e4 in vec_slq ()
+1: x/i $pc
+=> 0x100006e4 <vec_slq+136>:    xxlor   vs0,vs32,vs32
+(gdb) p $vr0.uint128
+$23 = 0x10111213141516172021222324252650
+
+Note that the final result differs in the first nybble of the 8th most significant byte ('57' vs '17').
+
+The final failure is 'vsr' acting incorrectly, with basically the same issue as 'vsl'.
+
+Ahhh in that case I suspect that you may be seeing a bug in this commit:
+
+commit 4e6d0920e7547e6af4bbac5ffe9adfe6ea621822
+Author: Stefan Brankovic <email address hidden>
+Date:   Mon Jul 15 16:22:48 2019 +0200
+
+    target/ppc: Optimize emulation of vsl and vsr instructions
+    
+    Optimization of altivec instructions vsl and vsr(Vector Shift Left/Rigt).
+    Perform shift operation (left and right respectively) on 128 bit value of
+    register vA by value specified in bits 125-127 of register vB. Lowest 3
+    bits in each byte element of register vB must be identical or result is
+    undefined.
+    
+    For vsl instruction, the first step is bits 125-127 of register vB have
+    to be saved in variable sh. Then, the highest sh bits of the lower
+    doubleword element of register vA are saved in variable shifted,
+    in order not to lose those bits when shift operation is performed on
+    the lower doubleword element of register vA, which is the next
+    step. After shifting the lower doubleword element shift operation
+    is performed on higher doubleword element of vA, with replacement of
+    the lowest sh bits(that are now 0) with bits saved in shifted.
+    
+    For vsr instruction, firstly, the bits 125-127 of register vB have
+    to be saved in variable sh. Then, the lowest sh bits of the higher
+    doubleword element of register vA are saved in variable shifted,
+    in odred not to lose those bits when the shift operation is
+    performed on the higher doubleword element of register vA, which is
+    the next step. After shifting higher doubleword element, shift operation
+    is performed on lower doubleword element of vA, with replacement of
+    highest sh bits(that are now 0) with bits saved in shifted.
+    
+    Signed-off-by: Stefan Brankovic <email address hidden>
+    Reviewed-by: Richard Henderson <email address hidden>
+    Message-Id: <email address hidden>
+    Signed-off-by: David Gibson <email address hidden>
+
+In fact, looking at that commit I think you should just be able to revert it for a quick test - does that enable your regression tests to pass?
+
+
+Reverted 4e6d0920e7547e6af4bbac5ffe9adfe6ea621822, and those 'vsl/vsr' tests now succeed.
+
+Great! It looks as if I can't add Stefan to the bug report in Launchpad since he isn't registered there, so I'll send a quick email to qemu-devel and add him as CC.
+
+In the meantime whilst your test setup is working and everything is fresh, I'll have a quick go at switching the BCD_DIG_BYTE bits over to use the VsrX() macros to abstract out more host endian behaviour...
+
+If I got that right, this has been fixed by this commit here:
+https://gitlab.com/qemu-project/qemu/-/commit/8d745875c28528a3015
+... so I'm closing this now. If you disagree, feel free to open it again.
+
diff --git a/results/classifier/118/performance/1853123 b/results/classifier/118/performance/1853123
new file mode 100644
index 00000000..19b35142
--- /dev/null
+++ b/results/classifier/118/performance/1853123
@@ -0,0 +1,79 @@
+performance: 0.931
+KVM: 0.893
+device: 0.877
+peripherals: 0.842
+user-level: 0.819
+network: 0.813
+graphic: 0.785
+architecture: 0.774
+files: 0.664
+semantic: 0.654
+hypervisor: 0.640
+debug: 0.632
+PID: 0.605
+mistranslation: 0.555
+TCG: 0.500
+permissions: 0.467
+virtual: 0.454
+kernel: 0.429
+ppc: 0.413
+x86: 0.402
+boot: 0.388
+socket: 0.364
+vnc: 0.364
+register: 0.345
+risc-v: 0.338
+i386: 0.273
+VMM: 0.228
+assembly: 0.132
+arm: 0.104
+
+Memory synchronization error between kvm and target, e1000(dpdk)
+
+Hi folks.
+
+I use Linux with DPDK drivers on the target system, and the emulated e1000 device with a tap interface on the host. I use KVM for acceleration.
+Version qemu 4.0.94 and master (Nov 12 10:14:33 2019)
+Version dpdk stable-17.11.4
+Version linux host 4.15.0-66-generic (ubuntu 18.04)
+
+I run "ping <target ip> -f" and wait about 1-2 minutes; then the network subsystem freezes.
+
+To deliver Ethernet packets from the host system (tap interface) to the target system, the e1000 uses a ring buffer.
+
+The e1000 writes the packet body, sets the E1000_RXD_STAT_DD flag and advances RDH (Ring Device Head).
+(file hw/net/e1000.c, function e1000_receive_iov())
+
+The DPDK driver polls the E1000_RXD_STAT_DD flag (ignoring RDH); if the flag is set, it reads the buffer, clears E1000_RXD_STAT_DD and advances RDT (Ring Device Tail).
+(source drivers/net/e1000/em_rxtx.c, function eth_em_recv_scattered_pkts())
+
+I can see the driver clearing E1000_RXD_STAT_DD (rxdp->status = 0;), but sometimes rxdp->status remains equal to 7. On the next cycle this buffer is read again, RDT is moved too far, RDH becomes equal to RDT and the network freezes.
+
+If I insert a small delay after clearing E1000_RXD_STAT_DD, and clear E1000_RXD_STAT_DD again (if rxdp->status == 7), then everything works fine.
+If I re-check E1000_RXD_STAT_DD without the delay, rxdp->status always looks valid.
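+
+A minimal sketch of that workaround on the driver side (helper names such
+as read_packet() are illustrative, not the actual code in em_rxtx.c):
+
+    static void poll_rx_once(volatile struct e1000_rx_desc *rxdp)
+    {
+        if (!(rxdp->status & E1000_RXD_STAT_DD)) {
+            return;                  /* nothing received yet */
+        }
+        read_packet(rxdp);           /* consume the buffer (illustrative) */
+        rxdp->status = 0;            /* hand the descriptor back */
+        rte_delay_us(1);             /* workaround: brief delay ...       */
+        if (rxdp->status == 7) {     /* ... then clear again if the write
+                                        raced with the device */
+            rxdp->status = 0;
+        }
+        /* finally advance RDT so the device may reuse this descriptor */
+    }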
+
+This only appears with KVM. If I use TCG, everything works fine.
+
+I tried setting a memory watchpoint in QEMU (with TCG) and saw that for each packet the set/unset cycle of STAT_DD happens exactly once.
+
+I tried setting a memory watchpoint in QEMU (with KVM) and saw that rxdp->status is changed to 0 (unset) only once, but the change lands immediately before the flag is set again.
+
+
+Please help me with advice on how to catch and fix this error.
+In theory, it would help me to trace the memory accesses that write E1000_RXD_STAT_DD, RDH and RDT, both from the target and from the host system, but I have no idea how this can be done.
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1857 b/results/classifier/118/performance/1857
new file mode 100644
index 00000000..2409d187
--- /dev/null
+++ b/results/classifier/118/performance/1857
@@ -0,0 +1,82 @@
+performance: 0.996
+arm: 0.983
+architecture: 0.974
+x86: 0.963
+device: 0.938
+socket: 0.916
+graphic: 0.901
+user-level: 0.897
+PID: 0.895
+register: 0.886
+peripherals: 0.877
+debug: 0.861
+semantic: 0.842
+permissions: 0.841
+ppc: 0.837
+vnc: 0.817
+network: 0.817
+files: 0.779
+hypervisor: 0.766
+VMM: 0.765
+TCG: 0.747
+risc-v: 0.720
+kernel: 0.719
+KVM: 0.713
+assembly: 0.698
+i386: 0.683
+boot: 0.639
+virtual: 0.436
+mistranslation: 0.328
+
+Major qemu-aarch64 performance slowdown since commit 59b6b42cd3
+Description of problem:
+I have observed a major performance slowdown between qemu 8.0.0 and 8.1.0:
+
+
+qemu 8.0.0: 0.8s
+
+qemu 8.1.0: 6.8s
+
+
+After bisecting the commits between 8.0.0 and 8.1.0, the offending commit is 59b6b42cd3:
+
+
+commit 59b6b42cd3446862567637f3a7ab31d69c9bef51
+Author: Richard Henderson <richard.henderson@linaro.org>
+Date:   Tue Jun 6 10:19:39 2023 +0100
+
+    target/arm: Enable FEAT_LSE2 for -cpu max
+
+    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
+    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
+    Message-id: 20230530191438.411344-21-richard.henderson@linaro.org
+    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
+
+
+Reverting the commit in latest master fixes the problem:
+
+qemu 8.0.0: 0.8s
+
+qemu 8.1.0: 6.8s
+
+qemu master + revert 59b6b42cd3: 0.8s
+
+Alternatively, specify `-cpu cortex-a35` to disable LSE2:
+
+`time ./qemu-aarch64 -cpu cortex-a35`: 0.8s
+
+`time ./qemu-aarch64`: 6.77s
+
+The slowdown is also observed when running qemu-aarch64 on aarch64 machine:
+
+`time ./qemu-aarch64 /usr/bin/node -e 1`: 2.91s
+
+`time ./qemu-aarch64 -cpu cortex-a35 /usr/bin/node -e 1`: 1.77s
+
+The slowdown on x86_64 machine is small: 362ms -> 378ms.
+Steps to reproduce:
+1. Run `time ./qemu-aarch64 node-aarch64 -e 1` (node-aarch64 is NodeJS v16 built for AArch64)
+2. Using qemu master, the output says `6.77s`
+3. Using qemu master with commit 59b6b42cd3 reverted, the output says `0.8s`
+Additional information:
+
diff --git a/results/classifier/118/performance/1859081 b/results/classifier/118/performance/1859081
new file mode 100644
index 00000000..1f594791
--- /dev/null
+++ b/results/classifier/118/performance/1859081
@@ -0,0 +1,72 @@
+performance: 0.927
+peripherals: 0.826
+graphic: 0.812
+virtual: 0.807
+semantic: 0.760
+device: 0.742
+user-level: 0.730
+ppc: 0.685
+network: 0.659
+boot: 0.654
+vnc: 0.632
+mistranslation: 0.624
+architecture: 0.614
+files: 0.605
+socket: 0.559
+register: 0.530
+assembly: 0.515
+permissions: 0.512
+hypervisor: 0.495
+PID: 0.480
+x86: 0.464
+kernel: 0.422
+VMM: 0.417
+risc-v: 0.415
+debug: 0.404
+KVM: 0.362
+TCG: 0.332
+arm: 0.321
+i386: 0.286
+
+Mouse way too fast when Qemu is on a Windows VM with a OS 9 Guest
+
+On a server, I have a Windows 10 VM with Qemu 4.1.0 (latest) from https://qemu.weilnetz.de/w64/ installed.
+There I have a Mac OS 9.2.2 machine.
+Now if I connect to the Windows VM with VNC or RDP or even the VMware console, the mouse in the Mac OS guest inside QEMU is waaaay too fast. Even when lowering the mouse speed in the Mac OS mouse settings, one pixel of movement in the host (the Windows 10 VM) still moves the mouse by 10 pixels inside the QEMU machine.
+I tried different resolutions but that does not help.
+Is there any way to fix this or any way how I can provide more information?
+Thanks
+
+What is the QEMU command-line you use?
+Does this problem exist with the usb mouse (-device usb-mouse)?
+Could you try upgrading to the latest version of QEMU and see if the issue is resolved please?
+
+The command line I currently use is:
+
+".\qemu-4.2.0-win64\qemu-system-ppc.exe" -L pc-bios -boot c -M mac99,via=pmu -m 512 ^
+-prom-env "auto-boot?=true" -prom-env "boot-args=-v" -prom-env "vga-ndrv?=true" ^
+-drive file=c:\qemu\MacOS9.2.img,format=raw,media=disk ^
+-drive file=c:\qemu\MacOS9.2.2_Universal_Install.iso,format=raw,media=cdrom ^
+-sdl ^
+-netdev user,id=network01 -device sungem,netdev=network01 ^
+-device VGA,edid=on
+
+I also tried by adding "-device usb-mouse" but it does not make any difference.
+I now tried with 4.2.0 from omledom (yesterday with 4.1.0 from weilnetz).
+There is no difference in 4.1.0 and 4.2.0 with or without the usb-mouse.
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1873341 b/results/classifier/118/performance/1873341
new file mode 100644
index 00000000..e561cae8
--- /dev/null
+++ b/results/classifier/118/performance/1873341
@@ -0,0 +1,50 @@
+performance: 0.923
+device: 0.877
+graphic: 0.862
+KVM: 0.768
+virtual: 0.694
+vnc: 0.663
+socket: 0.557
+risc-v: 0.517
+ppc: 0.483
+VMM: 0.471
+boot: 0.470
+architecture: 0.453
+peripherals: 0.438
+PID: 0.429
+semantic: 0.423
+register: 0.418
+files: 0.397
+arm: 0.357
+debug: 0.334
+TCG: 0.320
+permissions: 0.303
+mistranslation: 0.290
+user-level: 0.274
+network: 0.252
+kernel: 0.239
+hypervisor: 0.229
+i386: 0.177
+x86: 0.158
+assembly: 0.093
+
+Qemu Win98 VM with KVM videocard passthrough DOS mode video is not working for most of games..
+
+Hello,
+I'm using a Win98 machine with KVM videocard passthrough, which is working fine, but when I try Windows 98 DOS mode, there is something wrong with all videocards I tried (PCI-E/PCI - Nvidia, 3dfx, Matrox).
+
+ Often the framerate is very slow, like a slideshow:
+Doom 2, Blood, even starting Fdisk - I can see how it slowly renders individual lines - or it does not work at all (freeze / black screen only) - Warcraft 2 demo (VESA 640x480).
+
+ There is something wrong with it.
+
+ Qemu 2.11 + 4.2, Linux Mint 19.3. Gigabyte Z170 MB.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/253
+
+
diff --git a/results/classifier/118/performance/1875762 b/results/classifier/118/performance/1875762
new file mode 100644
index 00000000..875de9f9
--- /dev/null
+++ b/results/classifier/118/performance/1875762
@@ -0,0 +1,78 @@
+performance: 0.952
+graphic: 0.895
+device: 0.851
+architecture: 0.848
+semantic: 0.817
+files: 0.816
+user-level: 0.809
+permissions: 0.774
+kernel: 0.772
+peripherals: 0.762
+hypervisor: 0.742
+KVM: 0.742
+risc-v: 0.741
+mistranslation: 0.733
+register: 0.706
+arm: 0.696
+vnc: 0.676
+PID: 0.666
+VMM: 0.636
+socket: 0.631
+debug: 0.631
+ppc: 0.623
+network: 0.566
+boot: 0.565
+assembly: 0.561
+virtual: 0.555
+x86: 0.539
+TCG: 0.516
+i386: 0.443
+
+Poor disk performance on sparse VMDKs
+
+Found in QEMU 4.1, and reproduced on master.
+
+QEMU appears to suffer from remarkably poor disk performance when writing to sparse-extent VMDKs. Of course it's to be expected that allocation takes time and sparse VMDKs perform worse than allocated VMDKs, but surely not on the orders of magnitude I'm observing. On my system, the fully allocated write speeds are approximately 1.5GB/s, while the fully sparse write speeds can be as low as 10MB/s. I've noticed that adding "cache unsafe" reduces the issue dramatically, bringing speeds up to around 750MB/s. I don't know if this is still slow or if this perhaps reveals a problem with the default caching method.
+
+To reproduce the issue I've attached two 4GiB VMDKs. Both are completely empty and both are technically sparse-extent VMDKs, but one is 100% pre-allocated and the other is 100% unallocated. If you attach these VMDKs as second and third disks to an Ubuntu VM running on QEMU (with KVM) and measure their write performance (using dd to write to /dev/sdb and /dev/sdc for example) the difference in write speeds is clear.
+
+For what it's worth, the flags I'm using that relate to the VMDK are as follows:
+
+`-drive if=none,file=sparse.vmdk,id=hd0,format=vmdk -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd0`
+
+
+
+On Tue, Apr 28, 2020 at 10:45:07PM -0000, Alan Murtagh wrote:
+> QEMU appears to suffer from remarkably poor disk performance when
+> writing to sparse-extent VMDKs. Of course it's to be expected that
+> allocation takes time and sparse VMDKs peform worse than allocated
+> VMDKs, but surely not on the orders of magnitude I'm observing.
+
+Hi Alan,
+This is expected behavior. The VMDK block driver is not intended for
+running VMs. It is primarily there for qemu-img convert support.
+
+You can get good performance by converting the image file to qcow2 or
+raw instead.
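+
+(For example, something like: qemu-img convert -f vmdk -O qcow2
+sparse.vmdk disk.qcow2)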
+
+The effort required to develop a high-performance image format driver
+for non-trivial file formats like VMDK is quite high. Therefore only
+qcow2 goes through the lengths required to deliver good performance
+(request parallelism, metadata caching, optimizing metadata update
+dependencies, etc).
+
+The non-native image format drivers are simple and basically only work
+well for sequential I/O with no parallel requests. That's all qemu-img
+convert needs!
+
+If someone volunteers to optimize VMDK then I'm sure the patches could
+be merged. In the meantime I suggest using QEMU's native image formats:
+qcow2 or raw.
+
+Stefan
+
+
+Thanks Stefan.
+
+Ok, I'm closing this now, since this is the expected behavior according to Stefan's description.
+
diff --git a/results/classifier/118/performance/1877418 b/results/classifier/118/performance/1877418
new file mode 100644
index 00000000..4c51d5ce
--- /dev/null
+++ b/results/classifier/118/performance/1877418
@@ -0,0 +1,61 @@
+performance: 0.855
+virtual: 0.848
+graphic: 0.801
+device: 0.762
+architecture: 0.738
+files: 0.681
+user-level: 0.630
+ppc: 0.624
+mistranslation: 0.544
+kernel: 0.530
+semantic: 0.505
+register: 0.459
+arm: 0.413
+vnc: 0.365
+debug: 0.356
+boot: 0.355
+VMM: 0.340
+socket: 0.319
+PID: 0.303
+x86: 0.285
+risc-v: 0.266
+i386: 0.247
+permissions: 0.234
+assembly: 0.230
+TCG: 0.187
+hypervisor: 0.168
+peripherals: 0.160
+network: 0.152
+KVM: 0.135
+
+qemu-nbd freezes access to VDI file
+
+Mounted an Oracle VirtualBox .vdi drive, which has GPT+BTRFS:
+sudo qemu-nbd -c /dev/nbd0 /storage/btrfs.vdi
+
+Then I am operating on the btrfs filesystem and suddenly it freezes.
+
+
+
+
+
+I don't recommend you use VDI images in this way; we do not intend to support performant RW access; support for VDI images is there to convert to qcow2 or raw, generally.
+
+That said, some questions that might be interesting to know the answer to:
+
+- Try converting your VDI image to raw or qcow2 and mounting that instead. Does the conversion work successfully? Can you export that image via qemu-nbd and mount it? Does it work?
+- Do non-BTRFS filesystems cause any problems?
+
+
+I thought qemu-img was there for that. Since qemu-nbd allows mounting images as rw block devices, it's logical to think that you can use it for that purpose. I will try to reproduce the issue again in case it was a kernel problem instead of qemu-nbd.
+
+I agree, the program doesn't stop you from doing such things. It should work without error, but it might be slow. Just offering some advice you may not want to use it like this.
+
+Try to reproduce with qcow2 and qemu-nbd to see if the problem is with our support of the disk image format or if it's a problem with e.g. the access patterns and qemu-nbd itself, for instance.
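+
+For example, assuming the paths from the report:
+
+    qemu-img convert -f vdi -O qcow2 /storage/btrfs.vdi /storage/btrfs.qcow2
+    sudo qemu-nbd -c /dev/nbd0 --format=qcow2 /storage/btrfs.qcow2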
+
+Tried again with the same .vdi as a block device for 4 hours and didn't experience further problems. I don't know how to reproduce the issue again, and don't even know whether it has been fixed by some software update.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
+[Expired for btrfs-progs (Ubuntu) because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1880722 b/results/classifier/118/performance/1880722
new file mode 100644
index 00000000..4c9170f0
--- /dev/null
+++ b/results/classifier/118/performance/1880722
@@ -0,0 +1,75 @@
+performance: 0.864
+graphic: 0.686
+user-level: 0.535
+architecture: 0.526
+device: 0.511
+mistranslation: 0.499
+permissions: 0.458
+files: 0.450
+semantic: 0.449
+debug: 0.436
+hypervisor: 0.426
+register: 0.421
+PID: 0.416
+network: 0.405
+peripherals: 0.399
+virtual: 0.396
+risc-v: 0.378
+VMM: 0.378
+vnc: 0.359
+x86: 0.348
+TCG: 0.344
+i386: 0.340
+socket: 0.329
+ppc: 0.327
+KVM: 0.307
+kernel: 0.305
+assembly: 0.276
+arm: 0.260
+boot: 0.253
+
+Problems related to checking page crossing in use_goto_tb()
+
+The discussion that led to this bug discovery can be found in this 
+mailing list thread:
+https://lists.nongnu.org/archive/html/qemu-devel/2020-05/msg05426.html
+
+A workaround for this problem would be to check for page crossings for 
+both the user and system modes in the use_goto_tb() function across 
+targets. Some targets like "hppa" already implement this fix but others
+don't.
+
+To solve the root cause of this problem, linux-user/mmap.c should be
+fixed to perform all the required invalidations. Doing so yields up to
+6.93% performance improvement.
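+
+A minimal sketch of the workaround check, modelled on what targets such as
+hppa already do (treat the field names as illustrative, not the exact
+patch):
+
+    static bool use_goto_tb(DisasContext *ctx, target_ulong dest)
+    {
+        /* Only chain TBs directly when the destination stays on the same
+         * guest page, in user and system mode alike. */
+        return ((ctx->base.pc_first ^ dest) & TARGET_PAGE_MASK) == 0;
+    }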
+
+The QEMU project is currently moving its bug tracking to another system.
+For this we need to know which bugs are still valid and which could be
+closed already. Thus we are setting older bugs to "Incomplete" now.
+
+If the bug has already been fixed in the latest upstream version of QEMU,
+then please close this ticket as "Fix released".
+
+If it is not fixed yet and you think that this bug report here is still
+valid, then you have two options:
+
+1) If you already have an account on gitlab.com, please open a new ticket
+for this problem in our new tracker here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues
+
+and then close this ticket here on Launchpad (or let it expire auto-
+matically after 60 days). Please mention the URL of this bug ticket on
+Launchpad in the new ticket on GitLab.
+
+2) If you don't have an account on gitlab.com and don't intend to get
+one, but still would like to keep this ticket opened, then please switch
+the state back to "New" within the next 60 days (otherwise it will get
+closed as "Expired"). We will then eventually migrate the ticket auto-
+matically to the new system.
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1881450 b/results/classifier/118/performance/1881450
new file mode 100644
index 00000000..70e901e2
--- /dev/null
+++ b/results/classifier/118/performance/1881450
@@ -0,0 +1,90 @@
+performance: 0.952
+files: 0.938
+user-level: 0.937
+graphic: 0.802
+device: 0.789
+debug: 0.775
+PID: 0.769
+permissions: 0.766
+semantic: 0.763
+boot: 0.751
+architecture: 0.746
+kernel: 0.716
+socket: 0.708
+mistranslation: 0.700
+ppc: 0.687
+register: 0.628
+vnc: 0.622
+risc-v: 0.616
+network: 0.605
+peripherals: 0.598
+assembly: 0.582
+x86: 0.576
+i386: 0.543
+arm: 0.541
+TCG: 0.535
+hypervisor: 0.534
+VMM: 0.522
+KVM: 0.473
+virtual: 0.355
+
+Emulation of a math function fails for m68k Linux user mode
+
+Please check the attached math-example.c file.
+When running the m68k executable under QEMU, it results in an "Illegal instruction" error.
+Other targets don't produce this error.
+
+Steps to reproduce the bug:
+
+1. Download the math-example.c attached file.
+2. Compile it by running:
+        m68k-linux-gnu-gcc -O2 -static math-example.c -o math-example-m68k -lm
+3. Run the executable with QEMU:
+        /build/qemu-5.0.0/build-gcc/m68k-linux-user/qemu-m68k math-example-m68k 
+
+The output of execution is:
+        Profiling function expm1f():
+        qemu: uncaught target signal 4 (Illegal instruction) - core dumped
+        Illegal instruction (core dumped)
+
+Expected output:
+        Profiling function expm1f():
+          Elapsed time: 47 ms
+          Control result: 71804.953125
+
+
+
+
+
+Tracing gives me:
+
+IN: expm1f
+0x800005cc:  fetoxm1x %fp2,%fp0
+Disassembler disagrees with translator over instruction decoding
+Please report this to <email address hidden>
+
+(gdb) x/2hx 0x800005cc
+0x800005cc:	0xf200	0x0808
+
+The instruction is not implemented in QEMU. I'll fix that.
+
+
+
+Fix available.
+
+Execution doesn't fail anymore:
+
+  Profiling function expm1f():
+    Elapsed time: 41 ms
+    Control result: 71805.108342
+
+Control result matches real hardware one:
+
+  Profiling function expm1f():
+    Elapsed time: 2152 ms
+    Control result: 71805.108342
+
+
+Fixed here:
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=250b1da35d579f423
+
diff --git a/results/classifier/118/performance/1882497 b/results/classifier/118/performance/1882497
new file mode 100644
index 00000000..1b2e6dc5
--- /dev/null
+++ b/results/classifier/118/performance/1882497
@@ -0,0 +1,66 @@
+performance: 0.911
+ppc: 0.680
+graphic: 0.535
+files: 0.533
+device: 0.527
+vnc: 0.503
+PID: 0.503
+semantic: 0.469
+network: 0.461
+socket: 0.444
+register: 0.439
+architecture: 0.424
+risc-v: 0.416
+permissions: 0.411
+kernel: 0.384
+hypervisor: 0.383
+debug: 0.380
+peripherals: 0.365
+TCG: 0.342
+VMM: 0.315
+i386: 0.289
+boot: 0.281
+arm: 0.269
+virtual: 0.241
+KVM: 0.235
+user-level: 0.233
+x86: 0.196
+assembly: 0.186
+mistranslation: 0.167
+
+Missing 'cmp' utility makes build take 10 times as long
+
+I have been doing some work cross compiling qemu for Windows using a minimal Fedora container. Recently I started hitting some timeouts on the CI service and noticed a build of all targets was going over 1 hour.
+
+It seems like the 'cmp' utility from diffutils is used somewhere in the process and if it's missing, either a configure or a make gets run way too many times - I'll try to pull logs from the CI system at some stage soon.
+
+Could a warning or error be added if cmp is missing?
+
+cmp is used in the makefiles. 
+
+And there is some kind of warning during build if it is missing:
+
+/bin/sh: cmp: command not found
+
+But perhaps it should abort the build in this case.
+
+Something like that helps:
+
+diff --git a/Makefile b/Makefile
+index 40e4f7677bde..05e029bd99db 100644
+--- a/Makefile
++++ b/Makefile
+@@ -482,6 +482,7 @@ include $(SRC_PATH)/tests/Makefile.include
+ all: $(DOCS) $(if $(BUILD_DOCS),sphinxdocs) $(TOOLS) $(HELPERS-y) recurse-all modules $(vhost-user-json-y)
+ 
+ qemu-version.h: FORCE
++       @type cmp
+        $(call quiet-command, \
+                 (printf '#define QEMU_PKGVERSION "$(QEMU_PKGVERSION)"\n'; \
+                printf '#define QEMU_FULL_VERSION "$(FULL_VERSION)"\n'; \
+
+
+Does this problem still persist with the latest version of QEMU (since we switched the build system mostly to meson now)?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1883400 b/results/classifier/118/performance/1883400
new file mode 100644
index 00000000..843513f8
--- /dev/null
+++ b/results/classifier/118/performance/1883400
@@ -0,0 +1,77 @@
+performance: 0.964
+semantic: 0.834
+graphic: 0.820
+device: 0.788
+user-level: 0.768
+architecture: 0.755
+files: 0.741
+PID: 0.726
+ppc: 0.710
+mistranslation: 0.709
+x86: 0.701
+permissions: 0.694
+network: 0.685
+socket: 0.684
+vnc: 0.675
+arm: 0.657
+peripherals: 0.646
+register: 0.645
+VMM: 0.621
+TCG: 0.598
+hypervisor: 0.595
+assembly: 0.575
+risc-v: 0.565
+virtual: 0.556
+KVM: 0.533
+kernel: 0.503
+boot: 0.498
+i386: 0.443
+debug: 0.320
+
+Windows 10 extremely slow and unresponsive
+
+Hi,
+
+Fedora 32, x64
+qemu-5.0.0-2.fc32.x86_64
+
+https://www.microsoft.com/en-us/software-download/windows10ISO
+Win10_2004_English_x64.iso
+
+Windows 10 is excruciatingly slow since upgrading to 5.0.0-2.fc32. Disabling your repo and downgrading to 2:4.2.0-7.fc32 (the package in the Fedora repo) corrects the issue.
+
+You can duplicate this off of the Windows 10 ISO (see above) and do not even have to install Windows 10 itself.
+
+Please fix,
+
+Many thanks,
+-T
+
+On Sun, Jun 14, 2020 at 01:30:07AM -0000, Toddandmargo-n wrote:
+> Public bug reported:
+> 
+> Hi,
+> 
+> Fedora 32, x64
+> qemu-5.0.0-2.fc32.x86_64
+> 
+> https://www.microsoft.com/en-us/software-download/windows10ISO
+> Win10_2004_English_x64.iso
+> 
+> Windows 10 is excruciatingly slow since upgrading to 5.0.0-2.fc32.
+> Disabling your repo and downgrading to 2:4.2.0-7.fc32 and corrects the
+> issue (the package in the Fedora repo).
+> 
+> You can duplicate this off of the Windows 10 ISO (see above) and do not
+> even have to install Windows 10 itself.
+
+Could this be a duplicate of
+https://bugs.launchpad.net/qemu/+bug/1877716?
+
+Stefan
+
+
+1877716 sounds exactly like what I experienced.
+
+ok, closing this as a duplicate
+
diff --git a/results/classifier/118/performance/1883593 b/results/classifier/118/performance/1883593
new file mode 100644
index 00000000..091fb4c5
--- /dev/null
+++ b/results/classifier/118/performance/1883593
@@ -0,0 +1,84 @@
+performance: 0.819
+TCG: 0.737
+boot: 0.695
+semantic: 0.611
+device: 0.601
+hypervisor: 0.590
+architecture: 0.573
+vnc: 0.549
+PID: 0.541
+x86: 0.536
+graphic: 0.527
+permissions: 0.508
+i386: 0.506
+kernel: 0.501
+register: 0.485
+user-level: 0.481
+risc-v: 0.479
+socket: 0.475
+files: 0.462
+network: 0.460
+virtual: 0.458
+debug: 0.456
+peripherals: 0.442
+arm: 0.428
+VMM: 0.418
+ppc: 0.411
+assembly: 0.354
+mistranslation: 0.279
+KVM: 0.080
+
+Windows XP takes much longer to boot in TCG mode since 5.0
+
+Since upgrading from 4.2 to 5.0, a Windows XP VM takes much longer to boot.
+
+It hangs for about three minutes on the "welcome" screen with the blue background, while previously the total boot time was less than a minute.
+
+The issue only happens in TCG mode (not with KVM) and also happens with the current master which includes the uring patches (7d3660e7).
+
+I can reproduce this issue with a clean XP install with no flags other than `-m 2G`. After booting, the performance seems to be normal.
+
+Are you able to bisect between 4.2 and 5.0 and identify what introduces the slowdown?
+
+Bisecting showed that this is the bad commit:
+
+    b55f54bc965607c45b5010a107a792ba333ba654 exec: flush CPU TB cache in breakpoint_invalidate
+
+And I can indeed confirm that this commit is much slower than the previous one, e18e5501d8ac692d32657a3e1ef545b14e72b730.
+
+The QEMU project is currently moving its bug tracking to another system.
+For this we need to know which bugs are still valid and which could be
+closed already. Thus we are setting older bugs to "Incomplete" now.
+
+If the bug has already been fixed in the latest upstream version of QEMU,
+then please close this ticket as "Fix released".
+
+If it is not fixed yet and you think that this bug report here is still
+valid, then you have two options:
+
+1) If you already have an account on gitlab.com, please open a new ticket
+for this problem in our new tracker here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues
+
+and then close this ticket here on Launchpad (or let it expire auto-
+matically after 60 days). Please mention the URL of this bug ticket on
+Launchpad in the new ticket on GitLab.
+
+2) If you don't have an account on gitlab.com and don't intend to get
+one, but still would like to keep this ticket opened, then please switch
+the state back to "New" within the next 60 days (otherwise it will get
+closed as "Expired"). We will then eventually migrate the ticket auto-
+matically to the new system.
+
+Thank you and sorry for the inconvenience.
+
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/404
+
+
diff --git a/results/classifier/118/performance/1884 b/results/classifier/118/performance/1884
new file mode 100644
index 00000000..22b32143
--- /dev/null
+++ b/results/classifier/118/performance/1884
@@ -0,0 +1,40 @@
+performance: 0.922
+graphic: 0.790
+device: 0.729
+vnc: 0.688
+register: 0.607
+PID: 0.594
+socket: 0.522
+network: 0.519
+i386: 0.517
+permissions: 0.479
+semantic: 0.458
+debug: 0.441
+x86: 0.432
+arm: 0.419
+ppc: 0.415
+boot: 0.390
+mistranslation: 0.390
+files: 0.386
+architecture: 0.375
+risc-v: 0.368
+TCG: 0.321
+VMM: 0.305
+user-level: 0.231
+assembly: 0.156
+KVM: 0.141
+hypervisor: 0.138
+kernel: 0.103
+peripherals: 0.090
+virtual: 0.069
+
+avocado-system-* CI jobs are unreliable
+Description of problem:
+The avocado-system-* CI jobs fail randomly:
+https://gitlab.com/qemu-project/qemu/-/jobs/5058610614  
+https://gitlab.com/qemu-project/qemu/-/jobs/5058610654  
+https://gitlab.com/qemu-project/qemu/-/jobs/5030428571  
+
+I don't know how to interpret the test output. Until these CI jobs pass reliably it won't be possible for me to identify when a subtest that is actually healthy/reliable breaks.
+
+Please take a look at the logs and fix or remove unreliable test cases.
diff --git a/results/classifier/118/performance/1886306 b/results/classifier/118/performance/1886306
new file mode 100644
index 00000000..b84290fc
--- /dev/null
+++ b/results/classifier/118/performance/1886306
@@ -0,0 +1,44 @@
+performance: 0.941
+mistranslation: 0.894
+x86: 0.874
+graphic: 0.795
+device: 0.668
+semantic: 0.584
+architecture: 0.547
+ppc: 0.436
+files: 0.420
+network: 0.415
+socket: 0.345
+vnc: 0.291
+PID: 0.263
+kernel: 0.254
+peripherals: 0.250
+permissions: 0.247
+i386: 0.237
+arm: 0.228
+register: 0.197
+debug: 0.188
+assembly: 0.173
+boot: 0.161
+hypervisor: 0.136
+user-level: 0.135
+virtual: 0.131
+VMM: 0.120
+TCG: 0.116
+risc-v: 0.116
+KVM: 0.083
+
+qemu running slow when the window is in background
+
+Reported by <jedinix> on IRC:
+
+QEMU almost freezes when running with `GDK_BACKEND=x11` set and the parameter `gl=on` added to the `-display` option.
+
+GDK_BACKEND=x11 qemu-system-x86_64 -nodefaults -no-user-config -enable-kvm -machine q35 -cpu host -m 4G -display gtk,gl=on -vga std -usb -device usb-kbd -drive file=/tmp/Win10.qcow2,media=disk,format=qcow2 -drive file=~/Downloads/Win10_2004_EnglishInternational_x64.iso,media=cdrom
+
+Leaving out `GDK_BACKEND=x11` or `gl=on` fixes the issue.
+
+I think there is quite a bit of information missing here? Which host OS / distribution are we talking about here? Which parameters were used for "configure"? Which QEMU version has been used? Is it still reproducible with the latest version? ... thus I wonder whether this should get closed, or whether it's worth the effort to move this to the new tracker at Gitlab?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1886602 b/results/classifier/118/performance/1886602
new file mode 100644
index 00000000..d0b37470
--- /dev/null
+++ b/results/classifier/118/performance/1886602
@@ -0,0 +1,160 @@
+performance: 0.913
+permissions: 0.911
+graphic: 0.905
+semantic: 0.900
+register: 0.894
+assembly: 0.881
+virtual: 0.862
+debug: 0.856
+network: 0.855
+peripherals: 0.854
+boot: 0.836
+architecture: 0.830
+user-level: 0.827
+PID: 0.825
+device: 0.814
+kernel: 0.798
+files: 0.789
+arm: 0.786
+socket: 0.773
+hypervisor: 0.744
+mistranslation: 0.742
+vnc: 0.741
+VMM: 0.690
+KVM: 0.660
+TCG: 0.659
+ppc: 0.649
+risc-v: 0.621
+i386: 0.469
+x86: 0.469
+
+Windows 10 very slow with OVMF
+
+Debian Buster
+
+Kernel 4.19.0-9-amd64
+qemu-kvm 1:3.1+dfsg-8+deb10u5
+ovmf 0~20181115.85588389-3+deb10u1
+
+Machine: Thinkpad T470, i7-7500u, 20GB RAM
+VM: 4 CPUs, 8GB RAM, Broadwell-noTSX CPU Model
+
+Windows 10, under this VM, seems to be exceedingly slow with all operations. This is a clean install with very few services running. Task Manager can take 30% CPU looking at an idle system.
+
+# dmidecode 3.2
+Getting SMBIOS data from sysfs.
+SMBIOS 3.0.0 present.
+Table at 0x9A694000.
+
+...
+
+Handle 0x000A, DMI type 4, 48 bytes
+Processor Information
+        Socket Designation: U3E1
+        Type: Central Processor
+        Family: Core i7
+...
+        Core Count: 2
+        Core Enabled: 2
+        Thread Count: 4
+        Characteristics:
+                64-bit capable
+                Multi-Core
+                Hardware Thread
+                Execute Protection
+                Enhanced Virtualization
+                Power/Performance Control
+
+
+Handle 0x000B, DMI type 0, 24 bytes
+BIOS Information
+        Vendor: LENOVO
+        Version: N1QET88W (1.63 )
+        Release Date: 04/22/2020
+        Address: 0xE0000
+        Runtime Size: 128 kB
+        ROM Size: 16 MB
+        Characteristics:
+                PCI is supported
+                PNP is supported
+                BIOS is upgradeable
+                BIOS shadowing is allowed
+                Boot from CD is supported
+                Selectable boot is supported
+                EDD is supported
+                3.5"/720 kB floppy services are supported (int 13h)
+                Print screen service is supported (int 5h)
+                8042 keyboard services are supported (int 9h)
+                Serial services are supported (int 14h)
+                Printer services are supported (int 17h)
+                CGA/mono video services are supported (int 10h)
+                ACPI is supported
+                USB legacy is supported
+                BIOS boot specification is supported
+                Targeted content distribution is supported
+                UEFI is supported
+        BIOS Revision: 1.63
+        Firmware Revision: 1.35
+
+
+
+Sorry, no input from me. OVMF is apparently from November 2018, and QEMU is version 3.1. Please try to reproduce with recent upstream components (build both OVMF and QEMU from source), and if the issue persists, please provide the complete QEMU command line, capture the OVMF debug log (see OvmfPkg/README for instructions on that), and please also provide the host CPU characteristics (/proc/cpuinfo, /sys/module/kvm_*/parameters/*).
+
+I did try the most recent OVMF from QEMU 5.0 (https://git.qemu.org/?p=qemu.git;a=blob_plain;f=pc-bios/edk2-x86_64-code.fd.bz2;hb=fdd76fecdde) and there was no difference.
+
+I will re-build qemu sometime soon.
+
+=======
+$ cat /proc/cpuinfo 
+processor       : 0
+vendor_id       : GenuineIntel
+cpu family      : 6
+model           : 142
+model name      : Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
+stepping        : 9
+microcode       : 0xca
+cpu MHz         : 659.478
+cache size      : 4096 KB
+physical id     : 0
+siblings        : 4
+core id         : 0
+cpu cores       : 2
+apicid          : 0
+initial apicid  : 0
+fpu             : yes
+fpu_exception   : yes
+cpuid level     : 22
+wp              : yes
+flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
+bogomips        : 5808.00
+clflush size    : 64
+cache_alignment : 64
+address sizes   : 39 bits physical, 48 bits virtual
+power management:
+
+=======
+$ grep . /sys/module/kvm_*/parameters/*
+/sys/module/kvm_intel/parameters/emulate_invalid_guest_state:Y
+/sys/module/kvm_intel/parameters/enable_apicv:N
+/sys/module/kvm_intel/parameters/enable_shadow_vmcs:N
+/sys/module/kvm_intel/parameters/enlightened_vmcs:N
+/sys/module/kvm_intel/parameters/ept:Y
+/sys/module/kvm_intel/parameters/eptad:Y
+/sys/module/kvm_intel/parameters/fasteoi:Y
+/sys/module/kvm_intel/parameters/flexpriority:Y
+/sys/module/kvm_intel/parameters/nested:N
+/sys/module/kvm_intel/parameters/ple_gap:128
+/sys/module/kvm_intel/parameters/ple_window:4096
+/sys/module/kvm_intel/parameters/ple_window_grow:2
+/sys/module/kvm_intel/parameters/ple_window_max:4294967295
+/sys/module/kvm_intel/parameters/ple_window_shrink:0
+/sys/module/kvm_intel/parameters/pml:Y
+/sys/module/kvm_intel/parameters/preemption_timer:Y
+/sys/module/kvm_intel/parameters/unrestricted_guest:Y
+/sys/module/kvm_intel/parameters/vmentry_l1d_flush:cond
+/sys/module/kvm_intel/parameters/vnmi:Y
+/sys/module/kvm_intel/parameters/vpid:Y
+
+Inactive for more than a month, significant amount of info was not provided. Closing.
+
diff --git a/results/classifier/118/performance/1891829 b/results/classifier/118/performance/1891829
new file mode 100644
index 00000000..fba4eb35
--- /dev/null
+++ b/results/classifier/118/performance/1891829
@@ -0,0 +1,85 @@
+performance: 0.832
+peripherals: 0.811
+graphic: 0.802
+architecture: 0.797
+register: 0.792
+device: 0.779
+semantic: 0.760
+kernel: 0.743
+user-level: 0.711
+mistranslation: 0.639
+risc-v: 0.609
+ppc: 0.604
+network: 0.589
+permissions: 0.577
+hypervisor: 0.571
+PID: 0.560
+socket: 0.540
+files: 0.534
+vnc: 0.529
+VMM: 0.477
+x86: 0.472
+TCG: 0.456
+debug: 0.440
+assembly: 0.438
+KVM: 0.426
+arm: 0.422
+virtual: 0.406
+i386: 0.400
+boot: 0.385
+
+High bit(s) sometimes set high on rcvd serial bytes when char size < 8 bits
+
+I *believe* (not confirmed) that the old standard PC serial ports, when configured with a character size of 7 bits or less, should set non-data bits to 0 when the CPU reads received chars from the read register.  qemu doesn't do this.
+
+Windows 1.01 will not make use of a serial mouse when bit 7 is 1.  The ID byte that the mouse sends on reset is ignored.  I added a temporary hack to set bit 7 to 0 on all incoming bytes, and this convinced windows 1.01 to use the mouse.
+
+note 1:  This was using a real serial mouse through a passed-through serial port.  The emulated msmouse doesn't work for other reasons.
+
+note 2:  The USB serial port I am passing through to the guest sets non-data bits to 1.  Not sure if this is the USB hardware or linux.
+
+note 3:  I also needed to add an -icount line to slow down the guest CPU, so that certain cpu-sensitive timing code in the guest didn't give up too quickly.
+
+I will hopefully submit a patch for review soon.
+
+
+If I connect a serial mouse to the built-in serial port on an old (kernel 2.4) box and go
+
+stty -F /dev/ttyS0 1200 cs7
+dd if=/dev/ttyS0 bs=1|hexdump -C
+
+The bytes received/printed when the mouse is moved all have bit7=0.
+
+
+The QEMU project is currently moving its bug tracking to another system.
+For this we need to know which bugs are still valid and which could be
+closed already. Thus we are setting the bug state to "Incomplete" now.
+
+If the bug has already been fixed in the latest upstream version of QEMU,
+then please close this ticket as "Fix released".
+
+If it is not fixed yet and you think that this bug report here is still
+valid, then you have two options:
+
+1) If you already have an account on gitlab.com, please open a new ticket
+for this problem in our new tracker here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues
+
+and then close this ticket here on Launchpad (or let it expire auto-
+matically after 60 days). Please mention the URL of this bug ticket on
+Launchpad in the new ticket on GitLab.
+
+2) If you don't have an account on gitlab.com and don't intend to get
+one, but still would like to keep this ticket opened, then please switch
+the state back to "New" or "Confirmed" within the next 60 days (other-
+wise it will get closed as "Expired"). We will then eventually migrate
+the ticket automatically to the new system (but you won't be the reporter
+of the bug in the new system and thus you won't get notified on changes
+anymore).
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1892081 b/results/classifier/118/performance/1892081
new file mode 100644
index 00000000..86fe55d3
--- /dev/null
+++ b/results/classifier/118/performance/1892081
@@ -0,0 +1,65 @@
+performance: 0.959
+graphic: 0.587
+device: 0.569
+vnc: 0.526
+kernel: 0.461
+network: 0.423
+VMM: 0.412
+risc-v: 0.407
+semantic: 0.395
+socket: 0.363
+ppc: 0.359
+architecture: 0.356
+x86: 0.327
+boot: 0.320
+files: 0.300
+i386: 0.265
+permissions: 0.261
+KVM: 0.249
+arm: 0.246
+hypervisor: 0.236
+TCG: 0.221
+PID: 0.218
+peripherals: 0.211
+mistranslation: 0.200
+assembly: 0.167
+virtual: 0.161
+register: 0.154
+debug: 0.145
+user-level: 0.108
+
+Performance improvement when using "QEMU_FLATTEN" with softfloat type conversions
+
+Attached below is a matrix multiplication program for double data
+types. The program performs the casting operation "(double)rand()"
+when generating random numbers.
+
+This operation calls the integer to float softfloat conversion
+function "int32_to_float_64".
+
+Adding the "QEMU_FLATTEN" attribute to the function definition
+decreases the instructions per call of the function by about 63%.
+
+Attached are before and after performance screenshots from
+KCachegrind.
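+
+The suggested change is a one-line attribute addition, roughly as follows
+(QEMU_FLATTEN wraps __attribute__((flatten)), which inlines the function's
+callees into a single unit; the conversion body itself is unchanged):
+
+    float64 QEMU_FLATTEN int32_to_float64(int32_t a, float_status *status)
+    {
+        /* ... existing conversion body ... */
+    }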
+
+
+
+
+
+
+
+Confirmed, although "65% decrease" is on 0.44% of the total
+execution for this test case, so the decrease isn't actually
+noticeable.
+
+Nevertheless, it's a simple enough change.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/134
+
+
diff --git a/results/classifier/118/performance/1895703 b/results/classifier/118/performance/1895703
new file mode 100644
index 00000000..05a014c3
--- /dev/null
+++ b/results/classifier/118/performance/1895703
@@ -0,0 +1,85 @@
+performance: 0.981
+permissions: 0.891
+TCG: 0.883
+graphic: 0.714
+device: 0.548
+architecture: 0.476
+PID: 0.342
+semantic: 0.335
+peripherals: 0.320
+mistranslation: 0.224
+KVM: 0.215
+ppc: 0.199
+risc-v: 0.178
+boot: 0.177
+hypervisor: 0.167
+network: 0.160
+user-level: 0.149
+kernel: 0.148
+debug: 0.146
+VMM: 0.141
+vnc: 0.136
+arm: 0.117
+register: 0.115
+x86: 0.113
+files: 0.108
+socket: 0.104
+i386: 0.093
+virtual: 0.049
+assembly: 0.037
+
+performance degradation in tcg since Meson switch
+
+The buildsys conversion to Meson (1d806cef0e3..7fd51e68c34)
+introduced a degradation in performance in some TCG targets:
+
+--------------------------------------------------------
+Test Program: matmult_double
+--------------------------------------------------------
+Target              Instructions     Previous    Latest
+                                     1d806cef   7fd51e68
+----------  --------------------  ----------  ----------
+alpha              3 233 957 639       -----     +7.472%
+m68k               3 919 110 506       -----    +18.433%
+--------------------------------------------------------
+
+Original report from Ahmed Karaman with further testing done
+by Aleksandar Markovic:
+https://<email address hidden>/msg740279.html
+
+Can I get a sample statically linked m68k binary that exhibits this effect?
+
+
+
+I get
+
+$ qemu-m68k ./matmult_double-m68k
+Error while loading /home/pbonzini/matmult_double-m68k: Permission denied
+
+
+Paolo: what are the permissions on matmult_double-m68k on your local fs? (needs to be readable/executable by you)
+
+
+Uff, of course...
+
+This patch shold fix the regression:
+
+diff --git a/configure b/configure
+index 0004c46525..0786144043 100755
+--- a/configure
++++ b/configure
+@@ -7414,6 +7414,7 @@ NINJA=${ninja:-$PWD/ninjatool} $meson setup \
+         -Dwerror=$(if test "$werror" = yes; then echo true; else echo false; fi) \
+         -Dstrip=$(if test "$strip_opt" = yes; then echo true; else echo false; fi) \
+         -Db_pie=$(if test "$pie" = yes; then echo true; else echo false; fi) \
++        -Db_staticpic=$(if test "$pie" = yes; then echo true; else echo false; fi) \
+         -Db_coverage=$(if test "$gcov" = yes; then echo true; else echo false; fi) \
+ 	-Dmalloc=$malloc -Dmalloc_trim=$malloc_trim -Dsparse=$sparse \
+ 	-Dkvm=$kvm -Dhax=$hax -Dwhpx=$whpx -Dhvf=$hvf \
+
+
+This was fixed initially by commit 0c3dd50eaecbfe2, which is the change suggested in Paolo's comment #6, and then refined by commit a5cb7c5afe717d4.
+
+
+Released with QEMU v5.2.0.
+
diff --git a/results/classifier/118/performance/1896 b/results/classifier/118/performance/1896
new file mode 100644
index 00000000..14649e1d
--- /dev/null
+++ b/results/classifier/118/performance/1896
@@ -0,0 +1,84 @@
+performance: 0.922
+semantic: 0.893
+architecture: 0.882
+i386: 0.867
+arm: 0.845
+mistranslation: 0.826
+peripherals: 0.820
+graphic: 0.807
+debug: 0.804
+device: 0.789
+network: 0.783
+user-level: 0.759
+kernel: 0.759
+socket: 0.749
+PID: 0.748
+hypervisor: 0.698
+permissions: 0.643
+files: 0.612
+TCG: 0.596
+vnc: 0.539
+x86: 0.513
+ppc: 0.481
+assembly: 0.459
+risc-v: 0.429
+register: 0.427
+VMM: 0.409
+boot: 0.380
+KVM: 0.363
+virtual: 0.323
+
+Use `qemu_exit()` function instead of `exit()`
+Additional information:
+I just saw a similar refactoring for the GDB part of QEMU and thought it might be useful in the more general case too: https://lore.kernel.org/qemu-devel/20230907112640.292104-1-chigot@adacore.com/T/#m540552946cfa960b34c4d76d2302324f5de8627f
+
+```
+$ rg "exit\(0" -t c -l
+gdbstub/gdbstub.c
+qemu-edid.c
+subprojects/libvhost-user/libvhost-user.c
+semihosting/arm-compat-semi.c
+softmmu/async-teardown.c
+softmmu/device_tree.c
+softmmu/vl.c
+softmmu/runstate.c
+os-posix.c
+dtc/util.c
+dtc/dtc.c
+dtc/tests/dumptrees.c
+qemu-keymap.c
+qemu-io.c
+contrib/ivshmem-server/main.c
+contrib/rdmacm-mux/main.c
+tests/qtest/vhost-user-blk-test.c
+tests/qtest/fuzz/fuzz.c
+tests/qtest/fuzz/generic_fuzz.c
+tests/unit/test-seccomp.c
+tests/unit/test-rcu-list.c
+tests/unit/rcutorture.c
+tests/bench/qht-bench.c
+tests/bench/atomic64-bench.c
+tests/bench/atomic_add-bench.c
+tests/unit/test-iov.c
+tests/tcg/multiarch/linux/linux-test.c
+tests/tcg/aarch64/mte-3.c
+tests/tcg/aarch64/pauth-2.c
+tests/tcg/aarch64/mte-5.c
+tests/tcg/aarch64/mte-6.c
+tests/tcg/aarch64/mte-2.c
+tests/tcg/cris/libc/check_glibc_kernelversion.c
+tests/tcg/cris/libc/check_lz.c
+tests/tcg/s390x/signals-s390x.c
+tests/tcg/i386/hello-i386.c
+tests/tcg/cris/bare/sys.c
+tests/tcg/ppc64/mtfsf.c
+qemu-nbd.c
+net/net.c
+hw/nvram/eeprom93xx.c
+hw/arm/allwinner-r40.c
+hw/rdma/rdma_backend.c
+hw/watchdog/watchdog.c
+trace/control.c
+hw/pci/pci.c
+hw/misc/sifive_test.c
+```
diff --git a/results/classifier/118/performance/1896754 b/results/classifier/118/performance/1896754
new file mode 100644
index 00000000..ec51af6c
--- /dev/null
+++ b/results/classifier/118/performance/1896754
@@ -0,0 +1,70 @@
+performance: 0.936
+user-level: 0.779
+boot: 0.745
+x86: 0.694
+architecture: 0.634
+hypervisor: 0.615
+graphic: 0.611
+semantic: 0.609
+device: 0.579
+network: 0.543
+kernel: 0.541
+PID: 0.539
+TCG: 0.534
+debug: 0.508
+socket: 0.504
+permissions: 0.497
+mistranslation: 0.496
+register: 0.473
+files: 0.447
+ppc: 0.440
+peripherals: 0.425
+vnc: 0.400
+risc-v: 0.342
+arm: 0.338
+VMM: 0.331
+virtual: 0.305
+i386: 0.302
+assembly: 0.302
+KVM: 0.281
+
+Performance degradation for WinXP boot time after b55f54bc
+
+QEMU 5.1 loads Windows XP in TCG mode 5-6 times slower (~2 minutes) than 4.2 (25 seconds). I git-bisected it, and it appears that commit b55f54bc965607c45b5010a107a792ba333ba654 causes this issue. Probably similar to an older fixed bug https://bugs.launchpad.net/qemu/+bug/1672383
+
+Command line is trivial: qemu-system-x86_64 -nodefaults -vga std -m 4096M -hda WinXP.qcow2 -monitor stdio -snapshot
+
+The QEMU project is currently moving its bug tracking to another system.
+For this we need to know which bugs are still valid and which could be
+closed already. Thus we are setting the bug state to "Incomplete" now.
+
+If the bug has already been fixed in the latest upstream version of QEMU,
+then please close this ticket as "Fix released".
+
+If it is not fixed yet and you think that this bug report here is still
+valid, then you have two options:
+
+1) If you already have an account on gitlab.com, please open a new ticket
+for this problem in our new tracker here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues
+
+and then close this ticket here on Launchpad (or let it expire auto-
+matically after 60 days). Please mention the URL of this bug ticket on
+Launchpad in the new ticket on GitLab.
+
+2) If you don't have an account on gitlab.com and don't intend to get
+one, but still would like to keep this ticket opened, then please switch
+the state back to "New" or "Confirmed" within the next 60 days (other-
+wise it will get closed as "Expired"). We will then eventually migrate
+the ticket automatically to the new system (but you won't be the reporter
+of the bug in the new system and thus you won't get notified on changes
+anymore).
+
+Thank you and sorry for the inconvenience.
+
+
+Ticket has been moved here (thanks, Maksim!):
+https://gitlab.com/qemu-project/qemu/-/issues/286
+Thus closing this one at Launchpad now.
+
diff --git a/results/classifier/118/performance/1901981 b/results/classifier/118/performance/1901981
new file mode 100644
index 00000000..e522a8e6
--- /dev/null
+++ b/results/classifier/118/performance/1901981
@@ -0,0 +1,106 @@
+performance: 0.948
+register: 0.939
+permissions: 0.938
+peripherals: 0.928
+user-level: 0.924
+risc-v: 0.919
+virtual: 0.917
+debug: 0.915
+files: 0.899
+network: 0.893
+ppc: 0.892
+device: 0.890
+semantic: 0.880
+PID: 0.877
+arm: 0.873
+graphic: 0.870
+socket: 0.868
+assembly: 0.848
+kernel: 0.842
+hypervisor: 0.830
+architecture: 0.829
+vnc: 0.820
+boot: 0.796
+KVM: 0.788
+VMM: 0.757
+mistranslation: 0.733
+TCG: 0.707
+x86: 0.698
+i386: 0.543
+
+assert issue locates in hw/usb/dev-storage.c:248: usb_msd_send_status
+
+Hello,
+
+I found an assertion failure through hw/usb/dev-storage.c.
+
+This was found in latest version 5.1.0.
+
+--------
+
+qemu-system-x86_64: hw/usb/dev-storage.c:248: usb_msd_send_status: Assertion `s->csw.sig == cpu_to_le32(0x53425355)' failed.
+[1]    29544 abort      sudo  -enable-kvm -boot c -m 2G -drive format=qcow2,file=./ubuntu.img -nic
+
+To reproduce the assertion failure, please run the QEMU with following command line.
+
+
+$ qemu-system-x86_64 -enable-kvm -boot c -m 2G -drive format=qcow2,file=./ubuntu.img -nic user,model=rtl8139,hostfwd=tcp:0.0.0.0:5555-:22 -device piix4-usb-uhci,id=uhci -device usb-storage,drive=mydrive -drive id=mydrive,file=null-co://,size=2M,format=raw,if=none
+
+The poc is attached.
+
+
+
+poc doesn't run on fedora:
+uhci: common.c:59: gva_to_gpa: Assertion `gfn != -1' failed.
+
+Can you build qemu with DEBUG_MSD enabled (see hw/usb/dev-storage.c),
+then attach both stderr log and stacktrace?
+
+thanks.
+
+Sorry, my reproduction environment is as follows:
+    Host: ubuntu 18.04
+    Guest: ubuntu 18.04
+
+Stderr log is as follows:
+usb-msd: Reset
+usb-msd: Command on LUN 0
+usb-msd: Command tag 0x0 flags 00000000 len 0 data 0
+[scsi.0 id=0] INQUIRY 0x00 0x00 0x00 0x01 0x00 - from-dev len=1
+usb-msd: Deferring packet 0x6110002d2d40 [wait status]
+usb-msd: Command status 0 tag 0x0, len 256
+qemu-system-x86_64: hw/usb/dev-storage.c:248: usb_msd_send_status: Assertion `s->csw.sig == cpu_to_le32(0x53425355)' failed.
+[1]    643 abort      sudo  -enable-kvm -boot c -m 4G -drive format=qcow2,file=./ubuntu.img -nic
+
+
+Backtrace is as follows:
+#0  0x00007f8b36a63f47 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
+#1  0x00007f8b36a658b1 in __GI_abort () at abort.c:79
+#2  0x00007f8b36a5542a in __assert_fail_base (fmt=0x7f8b36bdca38 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x55aef41e7440 "s->csw.sig == cpu_to_le32(0x53425355)", file=file@entry=0x55aef41e7180 "hw/usb/dev-storage.c", line=line@entry=248, function=function@entry=0x55aef41e7980 <__PRETTY_FUNCTION__.29124> "usb_msd_send_status") at assert.c:92
+#3  0x00007f8b36a554a2 in __GI___assert_fail (assertion=assertion@entry=0x55aef41e7440 "s->csw.sig == cpu_to_le32(0x53425355)", file=file@entry=0x55aef41e7180 "hw/usb/dev-storage.c", line=line@entry=248, function=function@entry=0x55aef41e7980 <__PRETTY_FUNCTION__.29124> "usb_msd_send_status") at assert.c:101
+#4  0x000055aef32226d5 in usb_msd_send_status (s=0x623000001d00, p=0x6110002e3500) at hw/usb/dev-storage.c:248
+#5  0x000055aef322804e in usb_msd_handle_data (dev=0x623000001d00, p=0x6110002e3500) at hw/usb/dev-storage.c:525
+#6  0x000055aef30bc46a in usb_device_handle_data (dev=dev@entry=0x623000001d00, p=p@entry=0x6110002e3500) at hw/usb/bus.c:179
+#7  0x000055aef30a0ab4 in usb_process_one (p=p@entry=0x6110002e3500) at hw/usb/core.c:387
+#8  0x000055aef30a9db0 in usb_handle_packet (dev=0x623000001d00, p=p@entry=0x6110002e3500) at hw/usb/core.c:419
+#9  0x000055aef30fe890 in uhci_handle_td (s=s@entry=0x61f000002a80, q=0x6060000c9200, q@entry=0x0, qh_addr=qh_addr@entry=0, td=td@entry=0x7ffd88f90620, td_addr=<optimized out>, int_mask=int_mask@entry=0x7ffd88f905a0) at hw/usb/hcd-uhci.c:899
+#10 0x000055aef3104c6f in uhci_process_frame (s=s@entry=0x61f000002a80) at hw/usb/hcd-uhci.c:1075
+#11 0x000055aef31098e0 in uhci_frame_timer (opaque=0x61f000002a80) at hw/usb/hcd-uhci.c:1174
+#12 0x000055aef3ae5f95 in timerlist_run_timers (timer_list=0x60b000051be0) at util/qemu-timer.c:572
+#13 0x000055aef3ae619b in qemu_clock_run_timers (type=QEMU_CLOCK_VIRTUAL) at util/qemu-timer.c:586
+#14 0x000055aef3ae6922 in qemu_clock_run_all_timers () at util/qemu-timer.c:672
+#15 0x000055aef3aca63d in main_loop_wait (nonblocking=0) at util/main-loop.c:523
+#16 0x000055aef1f320f5 in qemu_main_loop () at /home/zjusvn/new-hyper/qemu-5.1.0/softmmu/vl.c:1676
+#17 0x000055aef397475c in main (argc=18, argv=0x7ffd88f90e98, envp=0x7ffd88f90f30) at /home/zjusvn/new-hyper/qemu-5.1.0/softmmu/main.c:49
+#18 0x00007f8b36a46b97 in __libc_start_main (main=0x55aef397471d <main>, argc=18, argv=0x7ffd88f90e98, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffd88f90e88) at ../csu/libc-start.c:310
+#19 0x000055aef1a3481a in _start ()
+
+thanks.
+
+https://git.kraxel.org/cgit/qemu/log/?h=sirius/usb-asserts
+can you try that branch?
+
+OK, it seems to be fixed now.
+
+Released with QEMU v5.2.0.
+
diff --git a/results/classifier/118/performance/1913341 b/results/classifier/118/performance/1913341
new file mode 100644
index 00000000..4889af14
--- /dev/null
+++ b/results/classifier/118/performance/1913341
@@ -0,0 +1,57 @@
+performance: 0.892
+architecture: 0.860
+graphic: 0.807
+files: 0.790
+device: 0.766
+semantic: 0.746
+peripherals: 0.742
+socket: 0.738
+permissions: 0.734
+kernel: 0.730
+PID: 0.729
+assembly: 0.715
+network: 0.714
+register: 0.693
+debug: 0.680
+hypervisor: 0.672
+TCG: 0.662
+ppc: 0.656
+arm: 0.647
+mistranslation: 0.647
+virtual: 0.639
+vnc: 0.602
+x86: 0.599
+risc-v: 0.593
+user-level: 0.578
+boot: 0.552
+KVM: 0.541
+VMM: 0.521
+i386: 0.517
+
+Chardev behavior breaks polling based devices
+
+Currently, in the latest QEMU (9cd69f1a270235b652766f00b94114f48a2d603f at this time), the behavior of chardev sources is that when processed (before IO polling occurs), the chardev source checks the amount of space available for reading.
+
+If it reports more than 0 bytes available to accept a read and a callback is not set, the code will attach a child source connected to the QIOChannel submitted to the original source. If no buffer space is reported, it will check for an active source and, if one is registered, detach it.
+
+Next time the loop fires, if the buffer now reports space (most likely the guest has run, emptying some bytes from the buffer), it will set up the callback again.
+
+However, if we have a stupid simple device (or driver) that doesn't have buffers big enough to fit an available write when one is sent (say a single byte buffer, polled serial port), then the poll will be set, the poll will occur and return quickly, then the callback will (depending on the backend chardev used) most likely read the 1 byte it has space for from the source, push it over to the frontend hardware side, and the IO loop will run again.
+
+Most likely the guest will not clear this byte before the next io loop cycle, meaning that the next prepare call on the source will see a full buffer in the guest and remove the poll for the data source, to allow the guest time to run to clear the buffer. Except, without a poll or a timeout set, the io loop might now block forever, since there's no report from the guest after clearing that buffer. This only returns in a sane amount of time because often some other device/timer is scheduled which sets a timeout on the poll to a reasonable time.
+
+I don't have a simple submittable bit of code to replicate this at the moment, but if you connect a serial port to a pty and write a large amount of data while a guest that doesn't enable the FIFO spins on an RX-ready register, you can observe that RX on the guest takes anywhere from 1s to forever per byte.
+
+This logic all occurs in chardev/char-io.c
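+
+As a rough sketch of the prepare-side logic described above (assuming GLib's GSource API; the struct layout and the helper names here are illustrative, not the exact QEMU code):
+
+```c
+#include <glib.h>
+#include <stdbool.h>
+
+typedef struct {
+    GSource parent;                       /* must be first for the cast below */
+    GSource *child;                       /* the actual backend I/O watch */
+    gsize (*fd_can_read)(void *opaque);   /* frontend buffer space left */
+    void *opaque;
+} IOWatchPoll;
+
+static gboolean io_watch_poll_prepare(GSource *source, gint *timeout)
+{
+    IOWatchPoll *iwp = (IOWatchPoll *)source;
+    bool want_read = iwp->fd_can_read(iwp->opaque) > 0;
+
+    if (want_read && !iwp->child) {
+        /* buffer space appeared: start polling the backend again */
+        iwp->child = g_idle_source_new(); /* stand-in for the QIOChannel watch */
+        g_source_add_child_source(source, iwp->child);
+    } else if (!want_read && iwp->child) {
+        /* guest buffer full: detach the watch; with no timeout set, the
+         * main loop may now block until some unrelated event fires */
+        g_source_remove_child_source(source, iwp->child);
+        iwp->child = NULL;
+    }
+    *timeout = -1; /* no wakeup deadline of our own */
+    return FALSE;
+}
+```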
+
+Fixing this can be as simple as removing the logic to detach the child event source and changing the attach logic to only run if there's buffer space and the poll isn't already set up. That fix could potentially cause flow-control issues if the IO runs on the same thread as the emulated guest (I am not sure about the details of this) and the guest is in a tight loop doing the poll. I don't see that happening, but the logic might be there for a reason.
+
+Another option is to set a timeout when the source gets removed, forcing the poll to exit with a fixed delay, this delay could potentially be derived from something like the baud rate set, forcing a minimum time before forward progress.
+
+If removing the logic isn't an option, another solution is to make the emulated hardware code itself kick the IO loop and trigger it to reschedule the poll. Similar to how the non-blocking write logic works, the read logic could recognize when the buffer has been emptied and reschedule the hw on the guest. In theory this sounds nice, but making it work would require adding logic to all the emulated chardev frontends, and in reality we would likely go through the effort of removing the callback only to want to add it back a few nanoseconds later.
+
+I'm planning to submit a patch that just outright removes the logic, but am filing this bug as a place to reference, since tracking down this problem is non-obvious.
+
+commit f2c0fb93a44972a96f93510311c93ff4c2c6fab5
+
+
diff --git a/results/classifier/118/performance/192 b/results/classifier/118/performance/192
new file mode 100644
index 00000000..8991100e
--- /dev/null
+++ b/results/classifier/118/performance/192
@@ -0,0 +1,31 @@
+performance: 0.898
+device: 0.883
+virtual: 0.870
+boot: 0.863
+graphic: 0.844
+files: 0.709
+risc-v: 0.696
+kernel: 0.586
+vnc: 0.570
+VMM: 0.523
+mistranslation: 0.441
+architecture: 0.429
+assembly: 0.340
+arm: 0.317
+TCG: 0.303
+register: 0.258
+PID: 0.241
+socket: 0.227
+debug: 0.227
+network: 0.217
+ppc: 0.153
+hypervisor: 0.143
+user-level: 0.139
+KVM: 0.116
+semantic: 0.105
+permissions: 0.076
+peripherals: 0.036
+x86: 0.009
+i386: 0.003
+
+xv6 Bootloop
diff --git a/results/classifier/118/performance/1926174 b/results/classifier/118/performance/1926174
new file mode 100644
index 00000000..da54fec2
--- /dev/null
+++ b/results/classifier/118/performance/1926174
@@ -0,0 +1,78 @@
+performance: 0.924
+peripherals: 0.850
+graphic: 0.828
+virtual: 0.809
+device: 0.807
+user-level: 0.787
+hypervisor: 0.771
+x86: 0.754
+architecture: 0.705
+files: 0.703
+permissions: 0.663
+ppc: 0.650
+mistranslation: 0.639
+register: 0.630
+kernel: 0.625
+semantic: 0.622
+PID: 0.619
+network: 0.591
+debug: 0.590
+vnc: 0.577
+socket: 0.555
+VMM: 0.521
+TCG: 0.503
+risc-v: 0.493
+assembly: 0.468
+arm: 0.466
+KVM: 0.458
+boot: 0.447
+i386: 0.442
+
+Laggy and/or displaced mouse input on CloudReady (Chrome OS) VM
+
+This weekend I tried to get a CloudReady (Chrome OS) VM running on qemu 5.2. This seems to work quite well; performance seems to be great, in fact. The only problem is mouse input.
+
+Using the SDL display, there is no visible mouse unless I set "show-cursor=on". After that the mouse pointer flickers a bit and most of the time it is displaced, so I need to click below a button in order to hit it. After switching to fullscreen and back using ctrl-alt-f this effect seems to be fixed for a while, but the mouse pointer does not reach all parts of the emulated screen anymore.
+
+Using SPICE instead, the mouse pointer is drawn, but it is *very* laggy. In fact it is only drawn every few seconds, so it is unusable, but placement seems to be correct. Text input is instant, so general emulation speed is not an issue here.
+
+To reproduce, download the free image from https://www.neverware.com/freedownload#home-edition-install
+
+Then run one of the following commands:
+
+qemu-system-x86_64 -drive driver=raw,file=cloudready-free-89.3.3-64bit.bin -machine pc,accel=kvm -m 2048 -device virtio-vga,virgl=on -display sdl,gl=on,show-cursor=on -usb -device usb-mouse -device intel-hda -device hda-duplex
+
+qemu-system-x86_64 -drive driver=raw,file=cloudready-free-89.3.3-64bit.bin -machine pc,accel=kvm -m 2048 -device virtio-vga,virgl=on -display spice-app,gl=on -usb -device usb-mouse -device intel-hda -device hda-duplex
+
+The QEMU project is currently moving its bug tracking to another system.
+For this we need to know which bugs are still valid and which could be
+closed already. Thus we are setting the bug state to "Incomplete" now.
+
+If the bug has already been fixed in the latest upstream version of QEMU,
+then please close this ticket as "Fix released".
+
+If it is not fixed yet and you think that this bug report here is still
+valid, then you have two options:
+
+1) If you already have an account on gitlab.com, please open a new ticket
+for this problem in our new tracker here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues
+
+and then close this ticket here on Launchpad (or let it expire auto-
+matically after 60 days). Please mention the URL of this bug ticket on
+Launchpad in the new ticket on GitLab.
+
+2) If you don't have an account on gitlab.com and don't intend to get
+one, but still would like to keep this ticket opened, then please switch
+the state back to "New" or "Confirmed" within the next 60 days (other-
+wise it will get closed as "Expired"). We will then eventually migrate
+the ticket automatically to the new system (but you won't be the reporter
+of the bug in the new system and thus you won't get notified on changes
+anymore).
+
+Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/1940 b/results/classifier/118/performance/1940
new file mode 100644
index 00000000..b14079cc
--- /dev/null
+++ b/results/classifier/118/performance/1940
@@ -0,0 +1,50 @@
+performance: 0.923
+graphic: 0.914
+device: 0.871
+virtual: 0.858
+socket: 0.803
+boot: 0.803
+semantic: 0.726
+debug: 0.710
+PID: 0.703
+architecture: 0.698
+x86: 0.693
+permissions: 0.667
+hypervisor: 0.659
+network: 0.521
+user-level: 0.517
+peripherals: 0.503
+mistranslation: 0.488
+assembly: 0.483
+i386: 0.455
+register: 0.407
+kernel: 0.380
+files: 0.323
+ppc: 0.298
+risc-v: 0.273
+VMM: 0.272
+KVM: 0.268
+arm: 0.222
+vnc: 0.208
+TCG: 0.196
+
+Saving a VM with a shared folder results in Error: State blocked by non-migratable device '000.../vhost-user-fs'
+Description of problem:
+Saving a vm with savevm in the QEMU Monitor with a shared folder causes the following error message:
+`Error: State blocked by non-migratable device '0000:00:05.0/vhost-user-fs'`
+Steps to reproduce:
+1. Get a qcow2 image that can boot (not sure if a working qcow2 image is actually needed)
+2. Start virtiofsd like this: /usr/libexec/virtiofsd --socket-path=/tmp/virtiofs_socket -o source=/path/to/share
+3. Run qemu-system-x86_64 -m 4G -object memory-backend-file,id=mem,size=4G,mem-path=/dev/shm,share=on -numa node,memdev=mem  -smp 2 -hda image.qcow2 -vga qxl -virtfs local,path=/path/to/share,mount_tag=share,security_model=passthrough,id=virtiofs -chardev socket,id=char0,path=/tmp/virtiofs_socket -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=share
+4. Let the image boot and/or go into the QEMU monitor.
+5. type savevm testvm
+6. See error.
+Additional information:
+This happens with both the legacy virtio-fs and the rust version.
+
+According to the first reply to https://gitlab.com/virtio-fs/virtiofsd/-/issues/81 there needs to be "a lot of changes not only in virtiofsd but also in the rust-vmm crates and qemu (and maybe in the vhost-user protocol)" so I'm reporting this here in the hopes it will speed something up.
+
+I followed this guide to get virtiofsd working with command-line QEMU:
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
+
+This is blocking our migration from VirtualBox because it doesn't have problems like this. At the least I need a workaround or an alternative shared filesystem. We are trying to avoid networked shares.
diff --git a/results/classifier/118/performance/2014 b/results/classifier/118/performance/2014
new file mode 100644
index 00000000..f8631545
--- /dev/null
+++ b/results/classifier/118/performance/2014
@@ -0,0 +1,83 @@
+performance: 0.929
+device: 0.905
+graphic: 0.877
+architecture: 0.833
+debug: 0.770
+peripherals: 0.754
+socket: 0.731
+PID: 0.641
+network: 0.614
+VMM: 0.612
+semantic: 0.611
+mistranslation: 0.604
+virtual: 0.601
+hypervisor: 0.582
+register: 0.569
+kernel: 0.563
+x86: 0.554
+arm: 0.536
+ppc: 0.532
+TCG: 0.522
+vnc: 0.507
+i386: 0.482
+user-level: 0.474
+risc-v: 0.451
+boot: 0.406
+KVM: 0.377
+permissions: 0.328
+assembly: 0.321
+files: 0.319
+
+virtio: bounce.in_use==true in virtqueue_map_desc()
+Description of problem:
+
+Steps to reproduce:
+1. Build EDK II (edk2-stable202311) for riscv64
+2. Build UEFI SCT (commit 81dfa8d53d4290) for riscv64
+3. Run the UEFI SCT
+4. Observe the message "qemu: virtio: bogus descriptor or out of resources" after which the execution stalls.
+
+The full procedure is described in https://github.com/xypron/sct_release_test
+
+To save time you can call `sct -u` and select only test 'MediaAccessTest\\BlockIOProtocolTest'. Run it with `F9`.
+Additional information:
+virtqueue_map_desc() may be called for a large buffer size `sz`. It will then call dma_memory_map() multiple times in a loop. In address_space_map(), `bounce.in_use` is set to `true` on the first call, so each subsequent call is bound to fail.
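+
+As a self-contained toy model of that mechanism (an illustration, not QEMU code), the following program fails on its second piecewise mapping exactly as described:
+
+```c
+#include <stdatomic.h>
+#include <stdbool.h>
+#include <stddef.h>
+#include <stdio.h>
+
+static atomic_bool bounce_in_use;
+
+static void *toy_address_space_map(size_t *plen)
+{
+    static char bounce[4096];
+
+    if (atomic_exchange(&bounce_in_use, true)) {
+        *plen = 0;                /* every caller after the first fails */
+        return NULL;
+    }
+    if (*plen > sizeof(bounce)) { /* "avoid unbounded allocations" */
+        *plen = sizeof(bounce);
+    }
+    return bounce;
+}
+
+int main(void)
+{
+    size_t remaining = 3 * 4096;  /* a large buffer size `sz` */
+
+    while (remaining > 0) {
+        size_t len = remaining;
+        if (toy_address_space_map(&len) == NULL || len == 0) {
+            /* the second iteration lands here: the bounce buffer is
+             * still "in use" because nothing unmapped it in between */
+            puts("virtio: bogus descriptor or out of resources");
+            return 1;
+        }
+        remaining -= len;         /* note: the toy never unmaps */
+    }
+    return 0;
+}
+```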
+
+To verify this is the cause I applied the following diff:
+
+```plaintext
+diff --git a/system/physmem.c b/system/physmem.c
+index a63853a7bc..12b3c2f828 100644
+--- a/system/physmem.c
++++ b/system/physmem.c
+@@ -3151,12 +3151,16 @@ void *address_space_map(AddressSpace *as,
+ 
+     if (!memory_access_is_direct(mr, is_write)) {
+         if (qatomic_xchg(&bounce.in_use, true)) {
++           fprintf(stderr, "bounce.in_use in address_space_map\n");
++
+             *plen = 0;
+             return NULL;
+         }
+         /* Avoid unbounded allocations */
+         l = MIN(l, TARGET_PAGE_SIZE);
+         bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
++       if (!bounce.buffer)
++           fprintf(stderr, "Out of memory in address_space_map\n");
+         bounce.addr = addr;
+         bounce.len = l;
+```
+
+and saw this output:
+
+```plaintext
+Logfile: "\sct\Log\MediaAccessTest\BlockIOProtocolTest0\ReadBlocks_Conf_0_0_8261
+59D3-04A5-4CCE-8431-344707A8B57A.log"
+Test Started: 12/02/23  08:43a
+------------------------------------------------------------
+Current Device: Acpi(PNP0A03,0)/Pci(3|0)
+Bounce.in_use in address_space_map
+qemu: virtio: bogus descriptor or out of resources
+```
+
+See related bug #850.
diff --git a/results/classifier/118/performance/2016 b/results/classifier/118/performance/2016
new file mode 100644
index 00000000..6549a2c4
--- /dev/null
+++ b/results/classifier/118/performance/2016
@@ -0,0 +1,39 @@
+performance: 0.970
+graphic: 0.911
+device: 0.886
+architecture: 0.736
+PID: 0.669
+mistranslation: 0.658
+debug: 0.635
+virtual: 0.452
+register: 0.438
+semantic: 0.434
+socket: 0.407
+arm: 0.394
+files: 0.384
+user-level: 0.323
+boot: 0.296
+vnc: 0.262
+TCG: 0.261
+peripherals: 0.210
+ppc: 0.209
+VMM: 0.204
+network: 0.170
+permissions: 0.157
+i386: 0.082
+risc-v: 0.064
+kernel: 0.060
+hypervisor: 0.028
+assembly: 0.024
+KVM: 0.023
+x86: 0.014
+
+-virtfs not working on windows
+Description of problem:
+Performing the command in the steps below returns
+qemu-system-aarch64.exe: -virtfs abc: There is no option group 'virtfs'
+qemu-system-aarch64.exe: -virtfs abc: virtfs support is disabled
+Steps to reproduce:
+1. qemu-system-aarch64.exe -virtfs abc
+Additional information:
+
diff --git a/results/classifier/118/performance/2023 b/results/classifier/118/performance/2023
new file mode 100644
index 00000000..11a11360
--- /dev/null
+++ b/results/classifier/118/performance/2023
@@ -0,0 +1,31 @@
+performance: 0.868
+network: 0.815
+device: 0.721
+debug: 0.593
+arm: 0.405
+architecture: 0.399
+graphic: 0.323
+mistranslation: 0.276
+VMM: 0.268
+ppc: 0.257
+PID: 0.222
+socket: 0.217
+risc-v: 0.212
+semantic: 0.208
+vnc: 0.208
+TCG: 0.202
+kernel: 0.196
+boot: 0.151
+i386: 0.131
+virtual: 0.109
+x86: 0.098
+user-level: 0.072
+hypervisor: 0.070
+permissions: 0.068
+assembly: 0.057
+register: 0.054
+peripherals: 0.053
+KVM: 0.016
+files: 0.012
+
+[block jobs] qemu hangs when creating snapshot target node (iothread enabled)
diff --git a/results/classifier/118/performance/2068 b/results/classifier/118/performance/2068
new file mode 100644
index 00000000..254906aa
--- /dev/null
+++ b/results/classifier/118/performance/2068
@@ -0,0 +1,45 @@
+performance: 0.984
+graphic: 0.947
+architecture: 0.844
+device: 0.756
+x86: 0.706
+boot: 0.655
+peripherals: 0.641
+semantic: 0.581
+permissions: 0.496
+register: 0.487
+mistranslation: 0.479
+debug: 0.440
+socket: 0.418
+PID: 0.398
+user-level: 0.347
+kernel: 0.334
+risc-v: 0.322
+VMM: 0.296
+vnc: 0.275
+ppc: 0.274
+virtual: 0.254
+hypervisor: 0.248
+arm: 0.240
+assembly: 0.238
+TCG: 0.228
+files: 0.194
+KVM: 0.181
+i386: 0.144
+network: 0.144
+
+Regression: 8.1.3 -> 8.2.0 breaks virtio vga driver
+Description of problem:
+I have a number of emulated arch linuxes using the same x11/kde configuration. After updating from 8.1.3 to 8.2.0, they all broke in the following way:
+- screen tearing/artifacts seen from bios up until sddm
+- sddm is possibly affected
+- kde/x11 has so many artifacts that it's unusable. If I attempt to write in a console window, I can only see parts of what I've written if I attempt to gently resize the bottom of the window. Clicking a menu item will only render the menu 1/6 of the time and only partly. However, if I click where I remember the shutdown button to be, the system shuts down immediately, so this seems to be purely a graphics issue.
+- starting with -vga qxl fixes all issues.
+Steps to reproduce:
+1. make new qemu, install arch/kde
+2. boot said qemu with -vga virtio option
+3. observe issue from the moment it boots
+Additional information:
+Using nVidia card and drivers on host.
+
+Removing x86-video-vesa on the guest system seemed to significantly improve performance. There are still many artifacts, but it's almost usable with this driver removed.
diff --git a/results/classifier/118/performance/2128 b/results/classifier/118/performance/2128
new file mode 100644
index 00000000..2b5d79bb
--- /dev/null
+++ b/results/classifier/118/performance/2128
@@ -0,0 +1,31 @@
+performance: 0.807
+device: 0.670
+architecture: 0.573
+network: 0.530
+graphic: 0.522
+permissions: 0.429
+arm: 0.392
+VMM: 0.365
+register: 0.359
+semantic: 0.329
+debug: 0.281
+files: 0.252
+vnc: 0.247
+hypervisor: 0.237
+peripherals: 0.230
+virtual: 0.228
+ppc: 0.224
+mistranslation: 0.202
+i386: 0.190
+PID: 0.172
+TCG: 0.170
+risc-v: 0.156
+kernel: 0.139
+boot: 0.133
+socket: 0.123
+x86: 0.100
+KVM: 0.079
+user-level: 0.056
+assembly: 0.053
+
+avocado tests using landley.net URLs sometimes time out fetching assets
diff --git a/results/classifier/118/performance/2149 b/results/classifier/118/performance/2149
new file mode 100644
index 00000000..b37e17bf
--- /dev/null
+++ b/results/classifier/118/performance/2149
@@ -0,0 +1,41 @@
+performance: 0.854
+device: 0.793
+graphic: 0.777
+network: 0.747
+peripherals: 0.740
+ppc: 0.715
+vnc: 0.670
+register: 0.668
+hypervisor: 0.658
+permissions: 0.653
+user-level: 0.645
+risc-v: 0.631
+semantic: 0.610
+VMM: 0.609
+files: 0.585
+virtual: 0.583
+arm: 0.571
+TCG: 0.570
+architecture: 0.544
+i386: 0.535
+mistranslation: 0.535
+socket: 0.532
+debug: 0.528
+KVM: 0.508
+x86: 0.500
+PID: 0.475
+kernel: 0.462
+boot: 0.436
+assembly: 0.168
+
+Segfault in libvhost-user and libvduse because of invalid pointer arithmetic with indirect read
+Description of problem:
+Hello, this is my first experience communicating with the open-source community. I have already reported the problem and submitted patches through the qemu-devel mailing list https://mail.gnu.org/archive/html/qemu-devel/2024-01/msg02533.html, as instructed in https://www.qemu.org/docs/master/devel/submitting-a-patch.html, albeit getting no response from any maintainer. I know that everyone is very busy and spammed every day by millions of threads, but I am getting very upset that such a trivial bug has lived in the code base for many years and has even been copied to a "sister" library without proper review. So excuse me if I am taking this issue too personally.
+
+The problem: when one uses libvhost-user/libvduse and for some reason triggers the non-zero-copy mode of the indirect descriptor reading routine `virtqueue_read_indirect_desc` (for example by pushing a lot of data), any time more than one descriptor is read, the stack gets overwritten; depending on one's luck this produces some weird behaviour, or a simple crash moments later, when other code tries to access the broken data.
+
+Steps to reproduce are non-trivial because they depend on one's host and VM (one simply gets random crashes here and there, with core dumps pointing somewhere around the given libraries), but anyone who can read C code can clearly see that the pointer arithmetic on `struct vring_desc *desc` is wrong.
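+
+As a toy illustration of that error class (hypothetical code in the shape of the bug described, not the verbatim upstream routine):
+
+```c
+#include <stddef.h>
+#include <stdint.h>
+#include <string.h>
+
+struct vring_desc {
+    uint64_t addr;
+    uint32_t len;
+    uint16_t flags;
+    uint16_t next;
+};
+
+/* Copy `read_len` bytes of descriptors into `dst`, chunk by chunk. */
+void read_indirect(struct vring_desc *dst, const uint8_t *src, size_t read_len)
+{
+    memcpy(dst, src, read_len);
+    /*
+     * WRONG: pointer arithmetic scales by the element size, so this
+     * advances read_len * sizeof(struct vring_desc) bytes, and the next
+     * chunk is written far past the caller's buffer (a stack smash):
+     *
+     *     dst += read_len;
+     *
+     * RIGHT: advance by the number of descriptors actually copied:
+     */
+    dst += read_len / sizeof(struct vring_desc);
+    (void)dst; /* the next chunk would continue from here */
+}
+```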
+
+Maybe I got the instructions wrong and posted the fixes to the wrong mailing list; maybe nobody cares. Thank you for your attention. I'll be glad to hear any advice on how I can help fix this simple error, beyond what has been done already.
+
+Thank you.
diff --git a/results/classifier/118/performance/2153 b/results/classifier/118/performance/2153
new file mode 100644
index 00000000..028f07c2
--- /dev/null
+++ b/results/classifier/118/performance/2153
@@ -0,0 +1,31 @@
+performance: 0.911
+device: 0.761
+network: 0.580
+register: 0.445
+arm: 0.432
+socket: 0.418
+debug: 0.408
+vnc: 0.363
+ppc: 0.358
+mistranslation: 0.342
+architecture: 0.311
+PID: 0.289
+graphic: 0.287
+semantic: 0.271
+peripherals: 0.262
+hypervisor: 0.252
+boot: 0.248
+files: 0.247
+virtual: 0.232
+permissions: 0.226
+kernel: 0.225
+user-level: 0.186
+risc-v: 0.163
+VMM: 0.150
+TCG: 0.148
+x86: 0.141
+assembly: 0.113
+KVM: 0.080
+i386: 0.030
+
+ubuntu-20.04-s390x-all CI job is very flaky
diff --git a/results/classifier/118/performance/218 b/results/classifier/118/performance/218
new file mode 100644
index 00000000..936e3841
--- /dev/null
+++ b/results/classifier/118/performance/218
@@ -0,0 +1,31 @@
+performance: 0.810
+network: 0.695
+device: 0.665
+debug: 0.624
+graphic: 0.369
+architecture: 0.346
+socket: 0.344
+files: 0.326
+virtual: 0.316
+semantic: 0.305
+i386: 0.234
+x86: 0.184
+mistranslation: 0.157
+PID: 0.112
+arm: 0.098
+peripherals: 0.089
+user-level: 0.078
+ppc: 0.069
+boot: 0.034
+vnc: 0.031
+VMM: 0.029
+register: 0.026
+permissions: 0.020
+TCG: 0.019
+assembly: 0.015
+hypervisor: 0.008
+KVM: 0.004
+kernel: 0.003
+risc-v: 0.003
+
+qemu-storage-daemon --nbd-server fails with "too many connections" error
diff --git a/results/classifier/118/performance/2183 b/results/classifier/118/performance/2183
new file mode 100644
index 00000000..c7c0984c
--- /dev/null
+++ b/results/classifier/118/performance/2183
@@ -0,0 +1,50 @@
+performance: 0.945
+kernel: 0.925
+boot: 0.916
+architecture: 0.898
+graphic: 0.830
+arm: 0.804
+device: 0.789
+semantic: 0.758
+virtual: 0.642
+hypervisor: 0.577
+TCG: 0.537
+ppc: 0.525
+PID: 0.506
+VMM: 0.504
+vnc: 0.499
+risc-v: 0.498
+user-level: 0.433
+socket: 0.431
+debug: 0.426
+network: 0.298
+permissions: 0.293
+KVM: 0.251
+register: 0.239
+files: 0.198
+peripherals: 0.178
+assembly: 0.145
+i386: 0.141
+mistranslation: 0.116
+x86: 0.016
+
+aarch-64 emulation much slower since release 8.1.5 (issue also present on 8.2.1)
+Description of problem:
+Since QEMU 8.1.5 our aarch64-based emulation got much slower. We use a Linux 5.4 kernel which we cross-compile with the ARM toolchain. Things that are noticeable:
+- Boot time got a lot longer
+- All memory accesses seem to take 3x longer (this can be verified by e.g. executing the script below; the address does not matter):
+```
+date
+for i in $(seq 0 1000); do
+    devmem 0x200000000 2>/dev/null
+done
+date
+```
+Steps to reproduce:
+Just boot an ARM based kernel on the virt machine and execute above script.
+Additional information:
+I've tried reproducing the issue on the master branch. There the issue is not present. It only seems to be present on releases 8.1.5 and 8.2.1. 
+
+I've narrowed the problem down to following commit on the 8.2 branch (@bonzini): ef74024b76bf285e247add8538c11cb3c7399a1a accel/tcg: Revert mapping of PCREL translation block to multiple virtual addresses.
+
+Let me know if any other information / tests are required.
diff --git a/results/classifier/118/performance/2187 b/results/classifier/118/performance/2187
new file mode 100644
index 00000000..e78729e4
--- /dev/null
+++ b/results/classifier/118/performance/2187
@@ -0,0 +1,31 @@
+performance: 0.937
+graphic: 0.644
+architecture: 0.568
+debug: 0.539
+ppc: 0.436
+x86: 0.396
+risc-v: 0.284
+i386: 0.275
+semantic: 0.274
+mistranslation: 0.273
+vnc: 0.202
+KVM: 0.174
+arm: 0.156
+device: 0.143
+virtual: 0.121
+VMM: 0.087
+TCG: 0.085
+boot: 0.052
+kernel: 0.024
+PID: 0.016
+hypervisor: 0.010
+register: 0.008
+permissions: 0.008
+network: 0.007
+socket: 0.006
+user-level: 0.004
+assembly: 0.004
+peripherals: 0.002
+files: 0.001
+
+system/cpu: deadlock in pause_all_vcpus()
diff --git a/results/classifier/118/performance/2193 b/results/classifier/118/performance/2193
new file mode 100644
index 00000000..99960f9f
--- /dev/null
+++ b/results/classifier/118/performance/2193
@@ -0,0 +1,60 @@
+performance: 0.989
+virtual: 0.950
+boot: 0.937
+graphic: 0.929
+files: 0.869
+device: 0.818
+ppc: 0.757
+architecture: 0.755
+mistranslation: 0.728
+user-level: 0.690
+hypervisor: 0.667
+PID: 0.641
+VMM: 0.623
+network: 0.614
+semantic: 0.610
+i386: 0.603
+x86: 0.600
+register: 0.582
+vnc: 0.580
+peripherals: 0.577
+risc-v: 0.576
+assembly: 0.562
+debug: 0.529
+TCG: 0.519
+kernel: 0.501
+KVM: 0.489
+permissions: 0.479
+arm: 0.431
+socket: 0.431
+
+qemu-system-mips64el 70 times slower than qemu-system-ppc64, -riscv64, -s390x
+Description of problem:
+I installed Debian 12 inside a `qemu-system-mips64el` virtual machine. The performance is awfully slow, roughly 70 times slower than other qemu targets on the same host, namely ppc64, riscv64, s390x.
+
+The idea is to recompile and test an open source project on various platforms.
+
+Using a command such as `time make path/to/bin/file.o`, I compiled one single source file on the host and within qemu for various targets. The same source file, inside the same project, is used in all cases.
+
+The results are shown below (the "x" number between parentheses is the time factor compared to the compilation on the host).
+
+- Host (native): 0m1.316s
+- qemu-system-ppc64: 0m31.622s (x24)
+- qemu-system-riscv64: 0m40.691s (x31)
+- qemu-system-s390x: 0m43.459s (x33)
+- qemu-system-mips64el: 48m33.587s (x2214)
+
+The compilation of the same source is 24 to 33 times slower on the first three emulated targets, compared to the same compilation on the host, which is understandable. However, the same compilation on the mips64el target is 2214 time slower than the host, roughly 70 times slower than other emulated targets.
+
+Why do we have such a tremendous difference between qemu mips64el and other targets?
+Additional information:
+For reference, here are the other qemu to boot the other targets. Guest OS are Debian 12 or Ubuntu 22.
+```
+qemu-system-ppc64 -smp 8 -m 8192 -nographic ...
+qemu-system-riscv64 -machine virt -smp 8 -m 8192 -nographic ...
+qemu-system-s390x -machine s390-ccw-virtio -cpu max,zpci=on -smp 8 -m 8192 -nographic ...
+```
+
+The other targets use `-smp 8` while qemu-system-mips64el does not support SMP. However, the test compiles one single source file and does not (or only marginally) use more than one CPU.
+
+Arguably, each compilation addresses a different target, uses a different backend, and the compilation time is not necessarily identical. OK, but 70 times slower seems way too much for this.
diff --git a/results/classifier/118/performance/2216 b/results/classifier/118/performance/2216
new file mode 100644
index 00000000..996bec8f
--- /dev/null
+++ b/results/classifier/118/performance/2216
@@ -0,0 +1,33 @@
+performance: 0.956
+graphic: 0.919
+mistranslation: 0.808
+semantic: 0.567
+device: 0.429
+arm: 0.344
+architecture: 0.307
+assembly: 0.231
+VMM: 0.212
+i386: 0.200
+ppc: 0.194
+vnc: 0.187
+register: 0.167
+boot: 0.164
+network: 0.162
+KVM: 0.155
+TCG: 0.151
+risc-v: 0.140
+debug: 0.112
+x86: 0.106
+PID: 0.063
+socket: 0.057
+hypervisor: 0.049
+virtual: 0.039
+permissions: 0.030
+user-level: 0.021
+files: 0.016
+kernel: 0.016
+peripherals: 0.002
+
+Increased artifact generation speed with parallelized processing
+Additional information:
+`parallel-jobs` was referenced `main`
diff --git a/results/classifier/118/performance/2241 b/results/classifier/118/performance/2241
new file mode 100644
index 00000000..681952eb
--- /dev/null
+++ b/results/classifier/118/performance/2241
@@ -0,0 +1,31 @@
+performance: 0.826
+device: 0.800
+debug: 0.594
+graphic: 0.341
+mistranslation: 0.335
+arm: 0.292
+permissions: 0.291
+risc-v: 0.271
+network: 0.250
+user-level: 0.213
+semantic: 0.194
+register: 0.178
+vnc: 0.148
+ppc: 0.108
+architecture: 0.107
+socket: 0.105
+i386: 0.099
+virtual: 0.096
+boot: 0.060
+files: 0.056
+hypervisor: 0.055
+x86: 0.045
+assembly: 0.044
+PID: 0.040
+peripherals: 0.031
+kernel: 0.023
+TCG: 0.015
+VMM: 0.011
+KVM: 0.001
+
+QMP Commands dont't work properly
diff --git a/results/classifier/118/performance/226 b/results/classifier/118/performance/226
new file mode 100644
index 00000000..694fb5a8
--- /dev/null
+++ b/results/classifier/118/performance/226
@@ -0,0 +1,31 @@
+performance: 0.842
+device: 0.792
+network: 0.733
+VMM: 0.620
+register: 0.614
+PID: 0.574
+vnc: 0.561
+risc-v: 0.531
+arm: 0.520
+semantic: 0.493
+hypervisor: 0.491
+architecture: 0.488
+mistranslation: 0.484
+debug: 0.479
+TCG: 0.479
+ppc: 0.460
+boot: 0.459
+permissions: 0.429
+i386: 0.429
+KVM: 0.401
+socket: 0.386
+x86: 0.386
+graphic: 0.326
+virtual: 0.284
+kernel: 0.265
+peripherals: 0.264
+user-level: 0.264
+assembly: 0.249
+files: 0.013
+
+host window size does not change when guest video screen size changes while moving host window
diff --git a/results/classifier/118/performance/2319 b/results/classifier/118/performance/2319
new file mode 100644
index 00000000..f3148c68
--- /dev/null
+++ b/results/classifier/118/performance/2319
@@ -0,0 +1,47 @@
+performance: 0.922
+register: 0.914
+graphic: 0.911
+architecture: 0.909
+semantic: 0.851
+ppc: 0.842
+device: 0.824
+peripherals: 0.781
+mistranslation: 0.777
+socket: 0.746
+PID: 0.736
+vnc: 0.726
+network: 0.692
+arm: 0.679
+risc-v: 0.665
+assembly: 0.664
+debug: 0.663
+VMM: 0.658
+permissions: 0.616
+hypervisor: 0.613
+user-level: 0.565
+virtual: 0.563
+kernel: 0.557
+boot: 0.553
+files: 0.549
+TCG: 0.505
+x86: 0.441
+KVM: 0.417
+i386: 0.230
+
+SPARC32-bit SDIV of negative divisor gives wrong result
+Description of problem:
+SDIV of a negative divisor gives a wrong result because of a typo in helper_sdiv(). This is true for QEMU 9.0.0 and earlier.
+
+Place -1 in the Y register and -128 in another register, then -120 in another register, and do SDIV into a result register: instead of the proper value of 1 for the result, the incorrect value of 0 is produced.
+
+There is a typo in target/sparc/helper.c that causes the divisor to be considered unsigned; this patch fixes it:
+
+```
+*** helper.c.ori  Tue Apr 23 16:23:45 2024
+--- helper.c  Mon Apr 29 20:14:07 2024
+***************
+*** 121,127 ****
+          return (uint32_t)(b32 < 0 ? INT32_MAX : INT32_MIN) | (-1ull << 32);
+      }
+
+!     a64 /= b;
+      r = a64;
+      if (unlikely(r != a64)) {
+          return (uint32_t)(a64 < 0 ? INT32_MIN : INT32_MAX) | (-1ull << 32);
+--- 121,127 ----
+          return (uint32_t)(b32 < 0 ? INT32_MAX : INT32_MIN) | (-1ull << 32);
+      }
+
+!     a64 /= b32;
+      r = a64;
+      if (unlikely(r != a64)) {
+          return (uint32_t)(a64 < 0 ? INT32_MIN : INT32_MAX) | (-1ull << 32);
+```
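+
+For reference, a self-contained toy model of the intended semantics (illustrative, not the actual helper): the 64-bit dividend is Y:rs1, and the divisor must be treated as a *signed* 32-bit value.
+
+```c
+#include <stdint.h>
+#include <stdio.h>
+
+static int32_t toy_sdiv(int32_t y, int32_t rs1, int32_t rs2)
+{
+    /* 64-bit dividend formed from Y (high word) and rs1 (low word) */
+    int64_t a64 = (int64_t)(((uint64_t)(uint32_t)y << 32) | (uint32_t)rs1);
+    int32_t b32 = rs2;             /* signed, per the fix above */
+    return (int32_t)(a64 / b32);   /* overflow clamping omitted */
+}
+
+int main(void)
+{
+    /* the report's example: Y = -1, rs1 = -128, divisor = -120 -> 1 */
+    printf("%d\n", toy_sdiv(-1, -128, -120));
+    return 0;
+}
+```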
diff --git a/results/classifier/118/performance/2325 b/results/classifier/118/performance/2325
new file mode 100644
index 00000000..806f29f8
--- /dev/null
+++ b/results/classifier/118/performance/2325
@@ -0,0 +1,41 @@
+performance: 0.980
+KVM: 0.964
+hypervisor: 0.931
+virtual: 0.869
+device: 0.837
+graphic: 0.830
+architecture: 0.765
+debug: 0.735
+semantic: 0.625
+permissions: 0.533
+arm: 0.524
+register: 0.448
+VMM: 0.428
+vnc: 0.397
+socket: 0.397
+ppc: 0.365
+risc-v: 0.322
+mistranslation: 0.306
+PID: 0.265
+boot: 0.255
+kernel: 0.241
+x86: 0.232
+files: 0.213
+network: 0.168
+user-level: 0.161
+TCG: 0.133
+assembly: 0.066
+i386: 0.058
+peripherals: 0.053
+
+[Performance Regression] Constant freezes on Alder lake and Raptor lake CPUs.
+Description of problem:
+Strangely, no logs are recorded. The guest just freezes. It can however be rescued by a simple pause and unpause.
+
+This issue only happens when using the KVM hypervisor. Other hypervisors are fine.
+
+This issue does NOT happen on my Intel Core i7-8700K.
+Steps to reproduce:
+1. Create a basic virtual machine for Windows 11 (Or 10).
+2. Run it for about 5 - 30 minutes (Sometimes it happens in 20 seconds or even less).
+3. The problem should occur.
diff --git a/results/classifier/118/performance/2344 b/results/classifier/118/performance/2344
new file mode 100644
index 00000000..80c1bf10
--- /dev/null
+++ b/results/classifier/118/performance/2344
@@ -0,0 +1,75 @@
+performance: 0.878
+graphic: 0.855
+ppc: 0.770
+device: 0.758
+semantic: 0.758
+socket: 0.736
+kernel: 0.728
+architecture: 0.714
+PID: 0.661
+vnc: 0.630
+risc-v: 0.619
+VMM: 0.604
+boot: 0.585
+arm: 0.565
+network: 0.546
+register: 0.541
+hypervisor: 0.518
+i386: 0.518
+mistranslation: 0.506
+peripherals: 0.501
+files: 0.483
+debug: 0.462
+user-level: 0.436
+KVM: 0.412
+permissions: 0.388
+TCG: 0.382
+x86: 0.380
+assembly: 0.364
+virtual: 0.135
+
+Plugin scoreboard deadlock (plugin.lock vs start_exclusive)
+Description of problem:
+Deadlock
+
+In frame 9 the thread grabs the plugin.lock, and starts to wait for other cpus to enter exclusive idle.
+```
+#7  0x00005555555a1295 in start_exclusive () at ../hw/core/cpu-common.c:199
+#8  plugin_grow_scoreboards__locked (cpu=0x7fff0c2b4720) at ../plugins/core.c:238
+#9  qemu_plugin_vcpu_init_hook (cpu=0x7fff0c2b4720) at ../plugins/core.c:258
+```
+
+The other thread has just finished a TB and does the callback to the plugin, so it will not become exclusive idle until it finishes.
+That callback tries to create a new 'scoreboard', but plugin.lock is already taken.
+```
+#7  qemu_plugin_scoreboard_new (element_size=element_size@entry=8) at ../plugins/api.c:464
+#8  0x00007ffff7fb973d in vcpu_tb_trans (id=<optimized out>, tb=0x555555858d60) at /home/rehn/source/qemu/contrib/plugins/hotblocks.c:125
+#9  0x00005555557394f1 in qemu_plugin_tb_trans_cb (cpu=<optimized out>, tb=0x555555858d60) at ../plugins/core.c:418
+```
+
+Locally I'm using this fix, which reverses the order so we enter exclusive idle before grabbing the plugin.lock:
+```
+diff --git a/plugins/core.c b/plugins/core.c
+index 1e58a57bf1..0e41c4ef22 100644
+--- a/plugins/core.c
++++ b/plugins/core.c
+@@ -236,4 +236,2 @@ static void plugin_grow_scoreboards__locked(CPUState *cpu)
+ 
+-    /* cpus must be stopped, as tb might still use an existing scoreboard. */
+-    start_exclusive();
+     struct qemu_plugin_scoreboard *score;
+@@ -244,3 +242,2 @@ static void plugin_grow_scoreboards__locked(CPUState *cpu)
+     tb_flush(cpu);
+-    end_exclusive();
+ }
+@@ -250,2 +247,4 @@ void qemu_plugin_vcpu_init_hook(CPUState *cpu)
+     bool success;
++    /* cpus must be stopped, as tb might still use an existing scoreboard. */
++    start_exclusive();
+ 
+@@ -259,2 +258,3 @@ void qemu_plugin_vcpu_init_hook(CPUState *cpu)
+     qemu_rec_mutex_unlock(&plugin.lock);
++    end_exclusive();
+```
+Steps to reproduce:
+Run command a few times and get 'unlucky'
diff --git a/results/classifier/118/performance/236 b/results/classifier/118/performance/236
new file mode 100644
index 00000000..4d30cd95
--- /dev/null
+++ b/results/classifier/118/performance/236
@@ -0,0 +1,31 @@
+performance: 0.911
+device: 0.829
+architecture: 0.786
+boot: 0.489
+risc-v: 0.469
+x86: 0.450
+graphic: 0.430
+i386: 0.398
+semantic: 0.373
+ppc: 0.369
+debug: 0.331
+vnc: 0.323
+TCG: 0.321
+VMM: 0.281
+PID: 0.268
+permissions: 0.245
+arm: 0.232
+virtual: 0.186
+mistranslation: 0.172
+kernel: 0.162
+user-level: 0.100
+register: 0.092
+KVM: 0.036
+socket: 0.016
+assembly: 0.016
+hypervisor: 0.008
+network: 0.006
+files: 0.002
+peripherals: 0.001
+
+CPU fetch from unpopulated ROM on reset
diff --git a/results/classifier/118/performance/2365 b/results/classifier/118/performance/2365
new file mode 100644
index 00000000..9a604cbf
--- /dev/null
+++ b/results/classifier/118/performance/2365
@@ -0,0 +1,38 @@
+performance: 0.963
+graphic: 0.946
+boot: 0.873
+kernel: 0.855
+VMM: 0.833
+device: 0.804
+risc-v: 0.787
+debug: 0.683
+architecture: 0.669
+vnc: 0.642
+semantic: 0.612
+hypervisor: 0.592
+PID: 0.567
+ppc: 0.539
+virtual: 0.455
+arm: 0.414
+mistranslation: 0.413
+socket: 0.380
+TCG: 0.380
+user-level: 0.370
+KVM: 0.338
+x86: 0.336
+network: 0.278
+i386: 0.259
+register: 0.256
+files: 0.249
+peripherals: 0.119
+permissions: 0.100
+assembly: 0.098
+
+[Regression v8.2/v9.0+] stuck at SeaBIOS for >30s with 100% CPU (1T)
+Description of problem:
+Starting our Linux direct-kernel-boot VMs with the same args on different hosts/hardware gets stuck at SeaBIOS for 30-60s with 100% single-threaded CPU load, starting with v8.2 (tested v8.2.3) and also in v9.0 (tested v9.0.0); v8.1.5 is OK. To be clear, everything seems to be fine after that, though I did not do any benchmarks to compare performance. It just delays (re)booting by almost 1 minute, which is a shame, because before that update/regression it was instant: our VMs only take 4s to boot, which is now more like 60s.
+Downgrading to v8.1 instantly fixes it, upgrading to v8.2/v9.0 instantly breaks it.
+Steps to reproduce:
+1. start VM with same args on different versions
+
+somehow if I save this bug with `/label ~"kind::Bug"` it disappears, so I'm unable to add/keep the label
diff --git a/results/classifier/118/performance/2410 b/results/classifier/118/performance/2410
new file mode 100644
index 00000000..c9043374
--- /dev/null
+++ b/results/classifier/118/performance/2410
@@ -0,0 +1,122 @@
+performance: 0.930
+graphic: 0.926
+hypervisor: 0.918
+user-level: 0.912
+register: 0.905
+assembly: 0.901
+peripherals: 0.899
+device: 0.898
+arm: 0.893
+architecture: 0.890
+permissions: 0.885
+virtual: 0.877
+risc-v: 0.876
+network: 0.875
+debug: 0.868
+files: 0.864
+vnc: 0.855
+PID: 0.847
+ppc: 0.844
+boot: 0.833
+semantic: 0.828
+TCG: 0.825
+socket: 0.823
+kernel: 0.818
+KVM: 0.814
+VMM: 0.801
+mistranslation: 0.774
+i386: 0.648
+x86: 0.575
+
+linux-user: `Setsockopt` with IP_OPTIONS returns "Protocol not available" error
+Description of problem:
+It seems that a call to `setsockopt(sd, SOL_IP, IP_OPTIONS, _)` behaves differently on RISC-V QEMU than on x64 Linux.
+On Linux the syscall returns 0, but on QEMU it fails with `Protocol not available`.
+According to the [man page](https://man7.org/linux/man-pages/man7/ip.7.html), `IP_OPTIONS` on a `SOCK_STREAM` socket "should work".
+Steps to reproduce:
+1. Use below toy program `setsockopt.c` and compile it without optimizations like:
+```
+    gcc -Wall -W -Wextra -std=gnu17 -pedantic setsockopt.c -o setsockopt
+```
+
+```
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <arpa/inet.h>
+#include <netinet/in.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+int main() {
+    {
+        int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+        if(sd < 0) {
+            perror("Opening stream socket error");
+            exit(1);
+        }
+        else
+            printf("Opening stream socket....OK.\n");
+
+        struct sockaddr_in local_address = {AF_INET, htons(1234), {inet_addr("255.255.255.255")}, {0}};
+        int err = connect(sd, (struct sockaddr*)&local_address, (socklen_t)16);
+
+        if (err < 0) {
+            perror("Connect error");
+            close(sd);
+        }
+        else
+            printf("Connect...OK.\n");
+    }
+    {
+        int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+        if(sd < 0) {
+            perror("Opening stream socket error");
+            exit(1);
+        }
+        else
+            printf("Opening stream socket....OK.\n");
+
+        char option[4] = {0};
+        if(setsockopt(sd, SOL_IP, IP_OPTIONS, (char *)option, sizeof(option)) < 0) {
+            perror("setsockopt error");
+            close(sd);
+            exit(1);
+        }
+        else
+            printf("setsockopt...OK.\n");
+
+        struct sockaddr_in local_address = {AF_INET, htons(1234), {inet_addr("255.255.255.255")}, {0}};
+        int err = connect(sd, (struct sockaddr*)&local_address, (socklen_t)16);
+
+        if (err < 0) {
+            perror("Connect error");
+            close(sd);
+        }
+        else
+            printf("Connect...OK.\n");
+    }
+    return 0;
+}
+```
+
+
+2. Run program on Qemu and compare output with output from x64 build. In my case it looks like:
+```
+root@AMDC4705:~/runtime/connect$ ./setsockopt-x64
+Opening stream socket....OK.
+Connect error: Network is unreachable
+Opening stream socket....OK.
+setsockopt...OK.
+Connect error: Network is unreachable
+
+root@AMDC4705:/runtime/connect# ./setsockopt-riscv
+Opening stream socket....OK.
+Connect error: Network is unreachable
+Opening stream socket....OK.
+setsockopt error: Protocol not available
+```
+Additional information:
+In the above demo the `option` value is quite artificial. However, I tried passing many different `option` arguments (with the same `SOL_IP` + `IP_OPTIONS` combination) but always ended up with a `setsockopt` failure.
+On the other hand, on x64 it worked fine. Then I realized that the appropriate path in QEMU is unimplemented: https://github.com/qemu/qemu/blob/master/linux-user/syscall.c#L2141
diff --git a/results/classifier/118/performance/2417 b/results/classifier/118/performance/2417
new file mode 100644
index 00000000..4bb152fd
--- /dev/null
+++ b/results/classifier/118/performance/2417
@@ -0,0 +1,35 @@
+performance: 0.867
+device: 0.861
+graphic: 0.755
+mistranslation: 0.723
+semantic: 0.621
+virtual: 0.618
+network: 0.590
+debug: 0.590
+hypervisor: 0.564
+architecture: 0.558
+socket: 0.342
+boot: 0.305
+kernel: 0.295
+x86: 0.286
+PID: 0.284
+user-level: 0.220
+i386: 0.168
+permissions: 0.163
+files: 0.121
+peripherals: 0.100
+KVM: 0.093
+register: 0.086
+VMM: 0.075
+vnc: 0.063
+arm: 0.052
+TCG: 0.044
+ppc: 0.034
+risc-v: 0.031
+assembly: 0.025
+
+qemu-img allocates full size on exFAT when metadata preallocation is requested
+Description of problem:
+`qemu-img` seems to preallocate the full size of a qcow2 image on exFAT rather than just the metadata when that is requested. This was initially seen via libvirt/libvirt#649. exFAT does not support sparse files.
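+
+A minimal reproduction sketch (the exFAT mount point `/mnt/exfat` is illustrative):
+
+```
+qemu-img create -f qcow2 -o preallocation=metadata /mnt/exfat/test.qcow2 10G
+du -h /mnt/exfat/test.qcow2   # expected: a few MiB of metadata; observed on exFAT: ~10G
+```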
+Steps to reproduce:
+1. Run command
diff --git a/results/classifier/118/performance/2460 b/results/classifier/118/performance/2460
new file mode 100644
index 00000000..a090759f
--- /dev/null
+++ b/results/classifier/118/performance/2460
@@ -0,0 +1,38 @@
+performance: 0.998
+architecture: 0.980
+x86: 0.949
+semantic: 0.903
+mistranslation: 0.895
+arm: 0.888
+graphic: 0.880
+user-level: 0.847
+device: 0.695
+register: 0.685
+network: 0.551
+vnc: 0.543
+socket: 0.528
+risc-v: 0.521
+peripherals: 0.484
+files: 0.449
+hypervisor: 0.434
+TCG: 0.399
+PID: 0.387
+ppc: 0.378
+permissions: 0.375
+i386: 0.359
+boot: 0.359
+debug: 0.351
+VMM: 0.350
+assembly: 0.305
+virtual: 0.190
+kernel: 0.154
+KVM: 0.082
+
+Significant performance degradation of qemu-x86_64 starting from version 3 on aarch64
+Description of problem:
+When I ran CoreMark with different QEMU user-mode versions (guest x86-64 -> host arm64), I found that the performance was highest with QEMU 2.x versions, and that there was a significant performance degradation starting from QEMU version 3. What is the reason?
+
+| qemu version   | 2.5.1       | 2.8.0       | 2.9.0       | 2.9.1       | 3.0.0       | 4.0.0       | 5.2.0      | 6.2.0       | 7.2.13      | 8.2.6       | 9.0.1       |
+|----------------|-------------|-------------|-------------|-------------|-------------|-------------|------------|-------------|-------------|-------------|-------------|
+| coremark score | 3905.995703 | 4465.947153 | 4534.119247 | 4538.577912 | 1167.337886 | 1163.399453 | 928.348384 | 1327.051954 | 1301.659616 | 1034.714677 | 1085.304971 |
diff --git a/results/classifier/118/performance/2472 b/results/classifier/118/performance/2472
new file mode 100644
index 00000000..f26672f6
--- /dev/null
+++ b/results/classifier/118/performance/2472
@@ -0,0 +1,31 @@
+performance: 0.895
+device: 0.870
+VMM: 0.802
+network: 0.765
+vnc: 0.625
+architecture: 0.581
+arm: 0.543
+kernel: 0.513
+boot: 0.473
+files: 0.444
+PID: 0.435
+ppc: 0.397
+TCG: 0.388
+hypervisor: 0.324
+socket: 0.324
+permissions: 0.311
+KVM: 0.301
+i386: 0.280
+semantic: 0.258
+graphic: 0.255
+assembly: 0.247
+risc-v: 0.241
+virtual: 0.218
+x86: 0.205
+register: 0.143
+mistranslation: 0.130
+user-level: 0.086
+peripherals: 0.068
+debug: 0.063
+
+optimize nvme_directive_receive() function
diff --git a/results/classifier/118/performance/2483 b/results/classifier/118/performance/2483
new file mode 100644
index 00000000..4296edeb
--- /dev/null
+++ b/results/classifier/118/performance/2483
@@ -0,0 +1,50 @@
+performance: 0.919
+graphic: 0.916
+device: 0.801
+debug: 0.775
+peripherals: 0.760
+architecture: 0.722
+assembly: 0.714
+network: 0.704
+files: 0.627
+arm: 0.626
+PID: 0.602
+ppc: 0.600
+vnc: 0.586
+socket: 0.572
+semantic: 0.544
+permissions: 0.542
+hypervisor: 0.511
+i386: 0.510
+kernel: 0.476
+boot: 0.442
+user-level: 0.410
+register: 0.395
+mistranslation: 0.349
+risc-v: 0.320
+x86: 0.268
+virtual: 0.250
+VMM: 0.229
+TCG: 0.129
+KVM: 0.006
+
+m68k: jsr (sp) doesn't work as expected
+Description of problem:
+Consider the following code (disassembly from ghidra). This copies the current `SP` to `A1` then copies 0x68 bytes from the address pointed at by `A0` to the address pointed at by `A1` with increment. This should end up with a copy of some bytes and `SP` pointing at the first.
+
+```
+        ff8241e6 22 4f           movea.l    SP,A1
+        ff8241e8 70 68           moveq      #0x68,D0
+                             LAB_ff8241ea                                    XREF[1]:     ff8241ee(j)  
+        ff8241ea 12 d8           move.b     (A0)+,(A1)+
+        ff8241ec 53 80           subq.l     #0x1,D0
+        ff8241ee 66 fa           bne.b      LAB_ff8241ea
+        ff8241f0 4e 97           jsr        (SP)
+```
+
+`SP` is `0x3bfc` at the `jsr`, so we'd expect to jump to `0x3bfc` and put the address to return to at `0x3bf8`, so the `jsr` can return, I think?
+What currently happens in QEMU is that the return address is put at `0x3bf8` and `PC` also becomes `0x3bf8`, so the return address starts being executed as code and things go off the rails.
+
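+For comparison, a sketch of the semantics the report expects (an illustrative model, not QEMU's implementation):
+
+```c
+#include <stdint.h>
+
+static void jsr_indirect_sp(uint32_t *sp, uint32_t *pc, uint32_t pc_next,
+                            void (*write_long)(uint32_t addr, uint32_t val))
+{
+    uint32_t ea = *sp;        /* (SP): the old SP value, 0x3bfc here */
+    *sp -= 4;                 /* predecrement for the push */
+    write_long(*sp, pc_next); /* the return address lands at 0x3bf8 */
+    *pc = ea;                 /* execution continues at 0x3bfc, not 0x3bf8 */
+}
+```
+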
+Forgive the screenshot but this is what it looks like with GDB connected. Dumping the memory where the `PC` is shows that the return address is actually there and we can see there is garbage before the instructions it should be executing.
+
+![image](/uploads/d5fd6f455e5a433735d8fae2be3d53ee/image.png){width=289 height=759}
diff --git a/results/classifier/118/performance/2519 b/results/classifier/118/performance/2519
new file mode 100644
index 00000000..d991a834
--- /dev/null
+++ b/results/classifier/118/performance/2519
@@ -0,0 +1,31 @@
+performance: 0.878
+device: 0.778
+peripherals: 0.719
+mistranslation: 0.715
+kernel: 0.700
+network: 0.689
+architecture: 0.562
+arm: 0.509
+hypervisor: 0.474
+ppc: 0.463
+TCG: 0.425
+boot: 0.422
+vnc: 0.410
+virtual: 0.403
+KVM: 0.396
+debug: 0.370
+semantic: 0.355
+risc-v: 0.276
+x86: 0.218
+assembly: 0.213
+permissions: 0.192
+VMM: 0.175
+i386: 0.174
+user-level: 0.150
+register: 0.143
+graphic: 0.141
+socket: 0.127
+files: 0.078
+PID: 0.007
+
+make check TIMEOUT_MULTIPLIER variable is undocumented
diff --git a/results/classifier/118/performance/2551 b/results/classifier/118/performance/2551
new file mode 100644
index 00000000..6a58180f
--- /dev/null
+++ b/results/classifier/118/performance/2551
@@ -0,0 +1,43 @@
+performance: 0.993
+graphic: 0.974
+device: 0.854
+boot: 0.784
+semantic: 0.704
+TCG: 0.603
+risc-v: 0.596
+debug: 0.586
+vnc: 0.586
+register: 0.511
+network: 0.492
+hypervisor: 0.477
+PID: 0.465
+user-level: 0.428
+arm: 0.421
+virtual: 0.384
+architecture: 0.378
+ppc: 0.371
+kernel: 0.370
+VMM: 0.350
+KVM: 0.291
+peripherals: 0.288
+mistranslation: 0.278
+assembly: 0.259
+i386: 0.224
+socket: 0.211
+x86: 0.165
+files: 0.163
+permissions: 0.144
+
+RTC time can run 3s slower than host time when clock=vm & base=utc
+Description of problem:
+When starting QEMU with `-rtc base=utc,clock=vm`, the guest time can sometimes be 3s slower than the host. There's no problem (and it often goes unnoticed) as we usually start an NTP service, which adjusts our system time. But let's consider the case where the NTP service isn't enabled yet (for example, the system just booted).
+
+After inspecting the code, I found that there are two problems we should think about:
+#
+Steps to reproduce:
+1. start vm with `-rtc base=utc,clock=vm`
+2. disable NTP (OS specific)`systemctl disable --now ntpd;systemctl disable --now ntpdate`
+3. reboot in the guest
+4. after guest started, compare guest time with host time(at the same time) `date +'%F %T.%3N'`
+Additional information:
+
diff --git a/results/classifier/118/performance/2562 b/results/classifier/118/performance/2562
new file mode 100644
index 00000000..35bd9e52
--- /dev/null
+++ b/results/classifier/118/performance/2562
@@ -0,0 +1,82 @@
+performance: 0.889
+architecture: 0.866
+semantic: 0.856
+graphic: 0.851
+boot: 0.841
+assembly: 0.839
+user-level: 0.823
+debug: 0.818
+peripherals: 0.798
+PID: 0.798
+device: 0.792
+permissions: 0.788
+register: 0.787
+arm: 0.771
+mistranslation: 0.748
+risc-v: 0.724
+socket: 0.695
+x86: 0.692
+VMM: 0.691
+kernel: 0.686
+network: 0.682
+hypervisor: 0.678
+ppc: 0.657
+virtual: 0.634
+KVM: 0.634
+files: 0.602
+vnc: 0.582
+TCG: 0.554
+i386: 0.507
+
+Booting EFI shell from GRUB using "chainloader" in Qemu with UEFI boot shows video artifacts if we have all_video, gfxterm
+Steps to reproduce:
+- Start Qemu in UEFI mode, i. e. `qemu-system-x86_64 -bios OVMF.fd ...`
+- Qemu should load GRUB from the disk as the first thing after firmware
+- GRUB should run the commands `loadfont unicode; insmod all_video; terminal_output gfxterm` (note: this is a perfectly ordinary sequence executed by Debian's default configuration)
+- Then GRUB should execute EFI shell using `chainloader` command
+
+If we do all this, then instead of the EFI shell we will see a broken image, i.e. the video output will be completely broken/mangled/damaged. But the EFI shell will still respond to commands. If we type "exit", we will exit from the EFI shell back to GRUB.
+
+I will repeat: my configuration is not special at all. `loadfont unicode; insmod all_video; terminal_output gfxterm` are absolutely ordinary commands executed by Debian's default GRUB setup. So essentially this bug means the following: if I add an EFI shell entry to the GRUB menu in Debian, then this new menu entry will not work properly when I boot in Qemu in UEFI mode.
+
+Okay, now let me give you more detailed steps to reproduce.
+
+- Execute the following script on Linux x86_64 host:
+```bash
+#!/bin/bash
+# This script was tested on Debian trixie (as on 2024-09-07) with the following packages installed:
+# dosfstools grub-efi-amd64-bin qemu-system-x86 ovmf efi-shell-x64
+set -e
+DIR="$(mktemp -d /tmp/qemu-bug-XXXXXX)"
+truncate --size=100M "$DIR/disk"
+echo ',+,' | sfdisk --label gpt "$DIR/disk"
+LOOP="$(losetup --find --show --partscan --nooverlap "$DIR/disk")"
+sleep 1
+mkfs.vfat "${LOOP}p1"
+mkdir "$DIR/root"
+mount "${LOOP}p1" "$DIR/root"
+losetup --detach "$LOOP"
+mkdir -p "$DIR/root/EFI/boot" "$DIR/root/boot/grub/fonts"
+grub-mkimage --format=x86_64-efi --output="$DIR/root/EFI/boot/bootx64.efi" --prefix=/boot/grub part_gpt fat
+cp -r /usr/lib/grub/x86_64-efi "$DIR/root/boot/grub"
+cp /usr/share/efi-shell-x64/shellx64.efi "$DIR/root/boot"
+cp /usr/share/grub/unicode.pf2 "$DIR/root/boot/grub/fonts"
+cat << "EOF" > "$DIR/root/boot/grub/grub.cfg"
+loadfont unicode
+insmod all_video
+terminal_output gfxterm
+menuentry "EFI shell" {
+  chainloader /boot/shellx64.efi
+}
+EOF
+umount "$DIR/root"
+qemu-system-x86_64 -m 2048 -bios OVMF.fd -drive file="$DIR/disk",format=raw
+```
+- When you see Qemu window, choose "EFI shell" menu entry in GRUB menu
+- You will immediately see damaged video output instead of proper EFI shell
+
+This bug doesn't reproduce on real hardware, i.e. without Qemu, so this is a Qemu bug. Qemu's task is to duplicate real hardware behaviour; on real hardware this bug does not exist, so Qemu should not have it, either.
+
+Note: if I remove `loadfont unicode; insmod all_video; terminal_output gfxterm`, then the bug disappears.
+
+Also note: if I replace `all_video` with `efi_gop`, then the bug disappears, too. So a workaround is to use `efi_gop` instead of `all_video` in UEFI mode. But I still believe the bug is in Qemu, because `all_video` doesn't cause any problems on real hardware, so Qemu should work, too.
diff --git a/results/classifier/118/performance/2565 b/results/classifier/118/performance/2565
new file mode 100644
index 00000000..b9f79d08
--- /dev/null
+++ b/results/classifier/118/performance/2565
@@ -0,0 +1,43 @@
+performance: 0.995
+graphic: 0.965
+device: 0.793
+debug: 0.768
+architecture: 0.703
+semantic: 0.700
+peripherals: 0.674
+PID: 0.595
+KVM: 0.588
+x86: 0.497
+user-level: 0.493
+permissions: 0.471
+arm: 0.375
+i386: 0.345
+risc-v: 0.319
+ppc: 0.306
+virtual: 0.298
+mistranslation: 0.266
+vnc: 0.242
+socket: 0.238
+kernel: 0.208
+boot: 0.175
+VMM: 0.166
+register: 0.163
+TCG: 0.157
+files: 0.116
+network: 0.088
+hypervisor: 0.054
+assembly: 0.039
+
+Bisected: 176e3783f2ab14 results in a heavy performance regression with the SDL interface
+Description of problem:
+With the patch 176e3783f2ab14 a significant 3D performance regression was introduced when using the SDL GUI and VirGL. Before the patch glxgears runs at about 4000 FPS on my machine; with the patch this drops to about 150 FPS, and if one moves the mouse the reported frame rate drops even more.
+Steps to reproduce:
+1. Run the qemu like given above with a current Debian-SID guest
+2. Start glxgears from a terminal 
+3. Move the mouse continuously to see the extra drop in frame rate
+Additional information:
+* (Guest) OpenGL Renderer string: virgl (AMD Radeon RX 6700 XT (radeonsi, navi22, LLVM 18.1.8 ...)
+* Reverting the commit 176e3783f2ab14 fixes the problem on SDL 
+* I don't think the host kernel version is an issue here (namely the KVM patches that are required to run Venus on discrete graphics cards) 
+* I've seen a similar issue when using GTK, but unlike with SDL it's already present in version 7.2.11 (the one I used as a "good" base when I was bisecting the regression) - so I was not able to bisect it yet.
+* I've looked around in the code and I'm aware that the commit *shouldn't* have the impact it seems to have. I can only assume that there is some unexpected side effect when creating the otherwise unused renderer.
diff --git a/results/classifier/118/performance/2572 b/results/classifier/118/performance/2572
new file mode 100644
index 00000000..ca65b9c3
--- /dev/null
+++ b/results/classifier/118/performance/2572
@@ -0,0 +1,60 @@
+performance: 0.939
+device: 0.930
+peripherals: 0.891
+graphic: 0.839
+virtual: 0.804
+hypervisor: 0.804
+ppc: 0.792
+architecture: 0.762
+semantic: 0.727
+PID: 0.716
+vnc: 0.711
+register: 0.707
+risc-v: 0.707
+debug: 0.699
+VMM: 0.686
+mistranslation: 0.658
+network: 0.648
+i386: 0.644
+TCG: 0.632
+files: 0.629
+arm: 0.626
+permissions: 0.610
+x86: 0.607
+socket: 0.606
+boot: 0.579
+KVM: 0.545
+kernel: 0.545
+user-level: 0.471
+assembly: 0.262
+
+Guest OS = Windows, QEMU: shutdown very slow - memory allocation issue
+Description of problem:
+Simplifying - libvirt config:
+```
+<memory unit='KiB'>33554432</memory>
+  <currentMemory unit='KiB'>131072</currentMemory>
+```
+When `<currentMemory>` is less than `<memory>`, at/after shutdown of the guest OS the CPU hangs at 100% and stays there for a long time - approximately 3-5 minutes.
+If I change it to
+```
+<memory unit='KiB'>33554432</memory>
+  <currentMemory unit='KiB'>33554432</currentMemory>
+```
+then shutdown takes only a few seconds.
+
+The problem does not occur (shutdown of the VM takes a few seconds) in cases where the balloon device is not used:
+1. `<currentMemory>` equal to `<memory>`
+2. memballoon driver disabled in Windows
+3. memballoon disabled in libvirt with "model=none" (and therefore not passed to the qemu command line)
+Additional information:
+On the guest:
+ * used drivers from virtio-win-0.1.262.iso - memballoon ver 100.95.104.26200 
+ * possibly a combination of all or some components 
+
+Monitored the following:
+With `virsh dommemstat VMName` at shutdown time, "rss" grows up to MaxMem, but very slowly.
+Also, on `virsh setmem VMName --live --size 32G`,
+rss grows slowly - but it takes half the time it does at a plain shutdown (so at shutdown, memory allocation and deallocation seem to happen at the same time).
+
+So something in some or all of the libvirt/qemu/balloon parts is not quite right. A sketch of the monitoring loop used above follows.
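+
+A minimal sketch of the monitoring described above (assuming the domain is named VMName):
+
+```
+# watch guest RSS growth while the guest shuts down
+while true; do
+    virsh dommemstat VMName | grep rss
+    sleep 5
+done
+```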
diff --git a/results/classifier/118/performance/2682 b/results/classifier/118/performance/2682
new file mode 100644
index 00000000..20f6e771
--- /dev/null
+++ b/results/classifier/118/performance/2682
@@ -0,0 +1,71 @@
+performance: 0.942
+device: 0.891
+hypervisor: 0.862
+ppc: 0.857
+graphic: 0.850
+peripherals: 0.846
+kernel: 0.830
+socket: 0.830
+architecture: 0.822
+files: 0.810
+arm: 0.803
+network: 0.803
+VMM: 0.798
+risc-v: 0.792
+KVM: 0.789
+PID: 0.749
+boot: 0.736
+vnc: 0.708
+semantic: 0.707
+permissions: 0.687
+assembly: 0.681
+debug: 0.664
+TCG: 0.637
+x86: 0.620
+register: 0.615
+i386: 0.553
+user-level: 0.524
+virtual: 0.340
+mistranslation: 0.299
+
+QEMU throws errors at the beginning of the build
+Description of problem:
+QEMU throws errors at the beginning of the build:
+```
+ninja: no work to do.
+/tmp/qemu-8.1.5/build/pyvenv/bin/meson introspect --targets --tests --benchmarks | /tmp/qemu-8.1.5/build/pyvenv/bin/python3 -B scripts/mtest2make.py > Makefile.mtest
+pc-bios/optionrom: -fcf-protection=none detected
+pc-bios/optionrom: -fno-pie detected
+pc-bios/optionrom: -no-pie detected
+pc-bios/optionrom: -fno-stack-protector detected
+pc-bios/optionrom: -Wno-array-bounds detected
+pc-bios/optionrom: Assembling multiboot.o
+pc-bios/optionrom: Assembling linuxboot.o
+pc-bios/optionrom: Assembling multiboot_dma.o
+pc-bios/optionrom: Compiling linuxboot_dma.o
+pc-bios/optionrom: Assembling pvh.o
+pc-bios/optionrom: Assembling kvmvapic.o
+pc-bios/optionrom: Compiling pvh_main.o
+pc-bios/optionrom: Linking multiboot.img
+pc-bios/optionrom: Linking linuxboot.img
+pc-bios/optionrom: Linking kvmvapic.img
+pc-bios/optionrom: Extracting raw object multiboot.raw
+/bin/sh: 1: -O: not found
+make[1]: *** [Makefile:53: multiboot.raw] Error 127
+make[1]: *** Waiting for unfinished jobs....
+pc-bios/optionrom: Linking multiboot_dma.img
+pc-bios/optionrom: Extracting raw object linuxboot.raw
+/bin/sh: 1: -O: not found
+make[1]: *** [Makefile:53: linuxboot.raw] Error 127
+make: *** [Makefile:190: pc-bios/optionrom/all] Error 2
+make: *** Waiting for unfinished jobs....
+[1/10003] Generating trace/trace-hw_i2c.h with a custom command
+
+...
+```
+Then the build proceeds. Whether it fails at the end is not reliably reproducible: it fails one time and builds successfully the next. However, I don't know whether these errors cause runtime problems in the case of a successful build.
+Steps to reproduce:
+1. `../configure --enable-strip --audio-drv-list=alsa --enable-tools --enable-modules`
+2. `make -j16`
+Additional information:
+Configuration log is available here: http://oscomp.hu/depot/qemu-8.1.5-configure.log
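+
+A plausible diagnosis (an assumption, not something confirmed above): `/bin/sh: 1: -O: not found` suggests the raw images are produced by a rule of the form `$(OBJCOPY) -O binary ...` and that `OBJCOPY` expanded to an empty string, leaving the shell trying to run `-O` itself. A hypothetical way to check:
+
+```
+# show the exact command being run (V=1 is QEMU's verbose-build switch)
+make -C build/pc-bios/optionrom V=1
+# retry with an explicit objcopy, in case OBJCOPY was left empty
+make -C build/pc-bios/optionrom OBJCOPY=objcopy
+```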
diff --git a/results/classifier/118/performance/2686 b/results/classifier/118/performance/2686
new file mode 100644
index 00000000..c4f5bb2f
--- /dev/null
+++ b/results/classifier/118/performance/2686
@@ -0,0 +1,78 @@
+performance: 0.886
+graphic: 0.855
+debug: 0.843
+device: 0.803
+socket: 0.763
+network: 0.755
+architecture: 0.744
+PID: 0.723
+vnc: 0.714
+permissions: 0.695
+risc-v: 0.684
+VMM: 0.673
+arm: 0.654
+hypervisor: 0.635
+peripherals: 0.616
+files: 0.615
+boot: 0.601
+TCG: 0.595
+i386: 0.580
+kernel: 0.572
+register: 0.559
+mistranslation: 0.557
+ppc: 0.552
+user-level: 0.495
+semantic: 0.494
+assembly: 0.469
+x86: 0.418
+virtual: 0.391
+KVM: 0.279
+
+rng-seed addition causing test_loongarch64_virt.py to hang in EFI startup
+Description of problem:
+Since the rng-seed addition, the test_loongarch64_virt.py test will periodically hang.
+
+git bisect blames this
+
+```
+commit d9bd1ccbf1d84d872aed684c65fec33814b8ac1b
+Author: Jason A. Donenfeld <Jason@zx2c4.com>
+Date:   Thu Sep 5 17:33:16 2024 +0200
+
+    hw/loongarch: virt: pass random seed to fdt
+    
+    If the FDT contains /chosen/rng-seed, then the Linux RNG will use it to
+    initialize early. Set this using the usual guest random number
+    generation function.
+    
+    This is the same procedure that's done in b91b6b5a2c ("hw/microblaze:
+    pass random seed to fdt"), e4b4f0b71c ("hw/riscv: virt: pass random seed
+    to fdt"), c6fe3e6b4c ("hw/openrisc: virt: pass random seed to fdt"),
+    67f7e426e5 ("hw/i386: pass RNG seed via setup_data entry"), c287941a4d
+    ("hw/rx: pass random seed to fdt"), 5e19cc68fb ("hw/mips: boston: pass
+    random seed to fdt"), 6b23a67916 ("hw/nios2: virt: pass random seed to fdt")
+    c4b075318e ("hw/ppc: pass random seed to fdt"), and 5242876f37
+    ("hw/arm/virt: dt: add rng-seed property").
+    
+    These earlier commits later were amended to rerandomize the RNG seed on
+    snapshot load, but the LoongArch code somehow already does that, despite
+    not having this patch here, presumably due to some lucky copy and
+    pasting.
+    
+    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
+    Reviewed-by: Song Gao <gaosong@loongson.cn>
+    Message-Id: <20240905153316.2038769-1-Jason@zx2c4.com>
+    Signed-off-by: Song Gao <gaosong@loongson.cn>
+```
+
+When it hangs, test_loongarch64_virt.py will get stuck waiting for serial console output from the guest.
+
+Looking at the console.log file shows it to be completely empty. 
+
+This appears to indicate it has hung before EDK has even initialized, as it has not even printed the 'Entering C environment' message.
+Steps to reproduce:
+1. ./configure --target-list=loongarch64-softmmu
+2. make -j 20
+3. n=0 ; while true ; do n=$(expr $n + 1); echo $n ; QEMU_TEST_QEMU_BINARY=./build/qemu-system-loongarch64  PYTHONPATH=./python ./tests/functional/test_loongarch64_virt.py  ; done
+
+Most commonly it will hang within 10 iterations, very occasionally needing up to 25.
diff --git a/results/classifier/118/performance/2689 b/results/classifier/118/performance/2689
new file mode 100644
index 00000000..37eac4bd
--- /dev/null
+++ b/results/classifier/118/performance/2689
@@ -0,0 +1,31 @@
+performance: 0.907
+device: 0.902
+arm: 0.896
+network: 0.877
+register: 0.607
+vnc: 0.586
+architecture: 0.552
+socket: 0.549
+files: 0.491
+graphic: 0.473
+peripherals: 0.452
+VMM: 0.440
+risc-v: 0.438
+kernel: 0.357
+boot: 0.350
+TCG: 0.347
+debug: 0.339
+semantic: 0.313
+ppc: 0.294
+permissions: 0.255
+mistranslation: 0.251
+PID: 0.246
+assembly: 0.165
+hypervisor: 0.155
+virtual: 0.098
+user-level: 0.084
+KVM: 0.064
+x86: 0.016
+i386: 0.007
+
+arm64be tuxrun test is sometimes failing with I/O errors
diff --git a/results/classifier/118/performance/277 b/results/classifier/118/performance/277
new file mode 100644
index 00000000..e4afe75b
--- /dev/null
+++ b/results/classifier/118/performance/277
@@ -0,0 +1,31 @@
+performance: 0.861
+network: 0.800
+device: 0.794
+architecture: 0.709
+VMM: 0.659
+debug: 0.652
+graphic: 0.562
+virtual: 0.396
+arm: 0.370
+user-level: 0.343
+PID: 0.312
+semantic: 0.251
+boot: 0.232
+vnc: 0.230
+TCG: 0.227
+hypervisor: 0.204
+ppc: 0.180
+mistranslation: 0.165
+risc-v: 0.129
+register: 0.111
+peripherals: 0.099
+permissions: 0.083
+x86: 0.059
+i386: 0.032
+socket: 0.031
+KVM: 0.024
+kernel: 0.024
+assembly: 0.007
+files: 0.005
+
+Multi-queue vhost-user fails to reconnect with qemu version >=4.2
diff --git a/results/classifier/118/performance/2817 b/results/classifier/118/performance/2817
new file mode 100644
index 00000000..abe47d57
--- /dev/null
+++ b/results/classifier/118/performance/2817
@@ -0,0 +1,81 @@
+performance: 0.868
+architecture: 0.865
+device: 0.806
+VMM: 0.806
+PID: 0.780
+graphic: 0.777
+user-level: 0.774
+network: 0.771
+ppc: 0.765
+files: 0.756
+permissions: 0.736
+kernel: 0.714
+socket: 0.713
+debug: 0.699
+semantic: 0.699
+vnc: 0.686
+virtual: 0.684
+arm: 0.682
+risc-v: 0.660
+peripherals: 0.629
+assembly: 0.607
+hypervisor: 0.605
+register: 0.598
+boot: 0.560
+mistranslation: 0.520
+TCG: 0.502
+x86: 0.361
+i386: 0.325
+KVM: 0.260
+
+Strange floating-point behaviour under Windows with some CPU models
+Description of problem:
+I'm encountering a very weird bug with some floating-point maths code, but only under very specific configurations. First I thought it was a Clang bug, but further digging eventually showed it to only occur under Windows VMs with specific QEMU CPU options. I'm not certain whether it is a QEMU/KVM bug or a Windows bug, but I thought starting here would be easiest.
+
+When compiled under MSVC Clang with modern CPU instructions disabled (e.g. `-march=pentium3` or `-march=pentium-mmx`), the `floorf()` call in the following program always returns 0.0, while the truncation works correctly:
+
+```
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+int main(int argc, char **argv)
+{
+	if (argc < 2)
+		return 1;         /* expects the value to floor as argv[1] */
+
+	float n = atof(argv[1]);
+	printf("n = %f\n", n);
+	
+	float f = floorf(n);      /* library floor - returns 0.0 on affected VMs */
+	printf("f = %f\n", f);
+	
+	float c = (int)(n);       /* truncation via int cast - always correct */
+	printf("c = %f\n", c);
+	
+	return 0;
+}
+```
+
+Example output on an affected VM:
+
+```
+C:\Users\Administrator> floorf-p3.exe 10
+n = 10.000000
+f = 0.000000
+c = 10.000000
+
+C:\Users\Administrator> floorf-p4.exe 10
+n = 10.000000
+f = 10.000000
+c = 10.000000
+```
+
+(`floorf-p3.exe` was compiled with `-march=pentium3` and `floorf-p4.exe` with `-march=pentium4` above)
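+
+Hypothetical compile commands matching that description (assuming the regular clang driver rather than clang-cl, since `-march=` is a clang-style flag):
+
+```
+clang -m32 -march=pentium3 -O2 floorf.c -o floorf-p3.exe
+clang -m32 -march=pentium4 -O2 floorf.c -o floorf-p4.exe
+```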
+
+I've tried a few QEMU CPU models on a variety of Intel/AMD VM hosts and two different Windows versions (10 and Server 2022), and observed the following:
+
+* `host-passthrough` - works (on AMD and Intel hosts)
+* `qemu64` - broken
+* `EPYC-Milan` - works
+* `Westmere` - works
+* `Penryn` - broken
+
+(I also reported this via the mailing list, but I think it might've swallowed my post)
diff --git a/results/classifier/118/performance/2821 b/results/classifier/118/performance/2821
new file mode 100644
index 00000000..0d3859f3
--- /dev/null
+++ b/results/classifier/118/performance/2821
@@ -0,0 +1,53 @@
+performance: 0.801
+graphic: 0.774
+KVM: 0.731
+architecture: 0.704
+x86: 0.571
+device: 0.568
+hypervisor: 0.479
+network: 0.455
+files: 0.434
+PID: 0.394
+kernel: 0.372
+vnc: 0.328
+semantic: 0.319
+ppc: 0.316
+socket: 0.287
+permissions: 0.272
+user-level: 0.266
+debug: 0.249
+boot: 0.237
+virtual: 0.227
+risc-v: 0.215
+VMM: 0.201
+register: 0.200
+TCG: 0.194
+peripherals: 0.142
+arm: 0.131
+i386: 0.119
+mistranslation: 0.088
+assembly: 0.068
+
+Emulated newer x86 chipsets are noticeably slower on CPU-bound loads than "-cpu qemu64"
+Description of problem:
+I noticed that "-cpu qemu64" is much faster than "-cpu max" or "-cpu Icelake-Server-noTSX" for CPU-bound loads with more than one CPU under load.
+Steps to reproduce:
+1. Run a guest as per the "qemu-system-x86_64 -cpu max [..]" command from above. Any Linux distro should do.
+2. Run through the setup questions if you use Fedora-Server-KVM-41-1.4.x86_64.qcow2 from the example command line above
+3. Log into the guest via ssh, i.e. "ssh chris@amd64" here
+4. cd /dev/shm; wget http://archive.apache.org/dist/httpd/httpd-2.4.57.tar.bz2; wget https://fluxcoil.net/files/tmp/job_httpd_extract_cpu.sh
+5. bash ./job_httpd_extract_cpu.sh 4 300
+6. cat /tmp/counter
+
+Step 5 executes a script which simply uses 4 parallel loops, where each loop runs "bzcat httpd-2.4.57.tar.bz2" constantly. After 300 sec, the successful decompressions over all 4 loops are summed up and stored in /tmp/counter. A hypothetical reconstruction of the script is sketched below.
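+
+A sketch of what such a script could look like (a reconstruction under stated assumptions, not the original; $1 = number of loops, $2 = seconds):
+
+```
+#!/bin/sh
+LOOPS=$1; SECS=$2
+END=$(( $(date +%s) + SECS ))
+: > /tmp/counter.parts
+for i in $(seq "$LOOPS"); do
+    (
+        n=0
+        while [ "$(date +%s)" -lt "$END" ]; do
+            # one successful decompression = one count
+            bzcat httpd-2.4.57.tar.bz2 > /dev/null && n=$((n + 1))
+        done
+        echo "$n" >> /tmp/counter.parts
+    ) &
+done
+wait
+# sum the per-loop counts
+awk '{ s += $1 } END { print s }' /tmp/counter.parts > /tmp/counter
+```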
+
+- result with "-cpu qemu64": 96
+- result with "-cpu max": 84
+- result with "-cpu Icelake-Server-noTSX": 44
+Additional information:
+- For "-cpu Icelake-Server-noTSX" on this Thinkpad T590 I get these warnings, I think they are not relevant:
+  qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.01H:ECX.pcid [bit 17]
+  qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.01H:ECX.tsc-deadline [bit 24]
+  [..]
+- I also looked at Broadwell etc., and all of them seem to be in the same ballpark.
+  Graph over some emulated architectures: https://fluxcoil.net/files/tmp/gnuplot_cpu-performance-emulated-only.png
diff --git a/results/classifier/118/performance/2848 b/results/classifier/118/performance/2848
new file mode 100644
index 00000000..c6ff1a45
--- /dev/null
+++ b/results/classifier/118/performance/2848
@@ -0,0 +1,43 @@
+i386: 0.981
+performance: 0.965
+x86: 0.955
+VMM: 0.894
+graphic: 0.758
+device: 0.691
+architecture: 0.665
+semantic: 0.657
+vnc: 0.558
+kernel: 0.484
+socket: 0.461
+ppc: 0.438
+risc-v: 0.330
+virtual: 0.326
+PID: 0.317
+network: 0.305
+boot: 0.303
+mistranslation: 0.286
+arm: 0.255
+KVM: 0.214
+register: 0.177
+TCG: 0.177
+hypervisor: 0.169
+files: 0.168
+permissions: 0.166
+debug: 0.132
+user-level: 0.119
+assembly: 0.093
+peripherals: 0.065
+
+i386 max_cpus off by one
+Description of problem:
+X86 VMs are currently limited to 255 vCPUs (`mc->max_cpus = 255;` in `pc.c`).
+The first occurrence I can find of this limit is in d3e9db933f416c9f1c04df4834d36e2315952e42 from 2005, where both `MAX_APICS` and `MAX_CPUS` were set to 255. This is becoming relevant for some people as servers with 256 cores become more available. 
+
+**Can we increase the limit to 256 vCPUs?** 
+I think so. 
+
+Today, the APIC ID limit (see `apic_id_limit` in `x86-common.c`) is based on the CPU ID limit. 
+According to a comment for `typedef uint32_t apic_id_t;` (see `topology.h`), we can have 256 APICs; more APICs require x2APIC support. 
+The APIC therefore seems to be no hindrance to increasing max_cpus to 256. 
+
+**Can we increase the limit to 512?** Maybe not? We would need x2APIC support, of which I have no clue. Also, there is always a performance risk of exceeding the size at which current data structures work efficiently.
diff --git a/results/classifier/118/performance/285 b/results/classifier/118/performance/285
new file mode 100644
index 00000000..13c7611b
--- /dev/null
+++ b/results/classifier/118/performance/285
@@ -0,0 +1,31 @@
+performance: 0.921
+device: 0.696
+arm: 0.635
+debug: 0.564
+user-level: 0.538
+architecture: 0.523
+network: 0.496
+graphic: 0.442
+files: 0.271
+i386: 0.257
+PID: 0.217
+register: 0.195
+peripherals: 0.171
+boot: 0.165
+x86: 0.156
+VMM: 0.142
+semantic: 0.141
+virtual: 0.132
+ppc: 0.130
+TCG: 0.110
+assembly: 0.088
+hypervisor: 0.083
+mistranslation: 0.079
+permissions: 0.078
+vnc: 0.051
+socket: 0.041
+kernel: 0.024
+risc-v: 0.021
+KVM: 0.008
+
+qemu-user child process hangs when forking due to glib allocation
diff --git a/results/classifier/118/performance/286 b/results/classifier/118/performance/286
new file mode 100644
index 00000000..ed728ade
--- /dev/null
+++ b/results/classifier/118/performance/286
@@ -0,0 +1,31 @@
+performance: 0.997
+device: 0.879
+graphic: 0.872
+boot: 0.857
+architecture: 0.488
+kernel: 0.376
+network: 0.370
+debug: 0.278
+register: 0.273
+user-level: 0.247
+arm: 0.229
+peripherals: 0.227
+socket: 0.222
+files: 0.211
+permissions: 0.206
+semantic: 0.175
+mistranslation: 0.136
+hypervisor: 0.095
+assembly: 0.066
+ppc: 0.064
+VMM: 0.053
+TCG: 0.050
+vnc: 0.046
+i386: 0.044
+risc-v: 0.041
+virtual: 0.041
+PID: 0.035
+x86: 0.004
+KVM: 0.001
+
+Performance degradation for WinXP boot time after b55f54bc
diff --git a/results/classifier/118/performance/2883 b/results/classifier/118/performance/2883
new file mode 100644
index 00000000..346374b2
--- /dev/null
+++ b/results/classifier/118/performance/2883
@@ -0,0 +1,31 @@
+performance: 0.840
+device: 0.727
+network: 0.376
+vnc: 0.374
+graphic: 0.357
+boot: 0.357
+risc-v: 0.309
+PID: 0.304
+TCG: 0.282
+VMM: 0.275
+ppc: 0.234
+i386: 0.222
+x86: 0.212
+architecture: 0.209
+user-level: 0.194
+arm: 0.194
+mistranslation: 0.173
+register: 0.173
+files: 0.147
+KVM: 0.143
+hypervisor: 0.138
+semantic: 0.136
+virtual: 0.133
+socket: 0.114
+peripherals: 0.110
+kernel: 0.100
+assembly: 0.084
+debug: 0.061
+permissions: 0.035
+
+Advice regarding implementation of smooth scrolling
diff --git a/results/classifier/118/performance/2900 b/results/classifier/118/performance/2900
new file mode 100644
index 00000000..7c2221a6
--- /dev/null
+++ b/results/classifier/118/performance/2900
@@ -0,0 +1,41 @@
+performance: 0.877
+device: 0.837
+graphic: 0.816
+debug: 0.661
+vnc: 0.528
+files: 0.488
+boot: 0.477
+network: 0.438
+semantic: 0.396
+ppc: 0.374
+i386: 0.313
+risc-v: 0.307
+socket: 0.297
+arm: 0.277
+TCG: 0.249
+PID: 0.229
+x86: 0.216
+permissions: 0.188
+register: 0.174
+mistranslation: 0.135
+user-level: 0.114
+peripherals: 0.104
+architecture: 0.102
+virtual: 0.094
+VMM: 0.082
+hypervisor: 0.052
+kernel: 0.050
+assembly: 0.026
+KVM: 0.019
+
+Data races in test-bdrv-drain test
+Description of problem:
+Data races in the access of `Job` fields in the `test-bdrv-drain` test were identified using TSAN.
+Steps to reproduce:
+```sh
+QEMU_BUILD_DIR=<path to the QEMU build directory>
+QEMU_DIR=<path to the QEMU repository directory>
+configure --enable-tsan --cc=clang --cxx=clang++ --enable-trace-backends=ust --enable-fdt=system --disable-slirp
+make tests/unit/test-bdrv-drain
+MALLOC_PERTURB_=186 G_TEST_SRCDIR=$QEMU_BUILD_DIR/tests/unit G_TEST_BUILDDIR=$QEMU_BUILD_DIR/tests/unit $QEMU_BUILD_DIR/tests/unit/test-bdrv-drain --tap -k
+```
diff --git a/results/classifier/118/performance/2906 b/results/classifier/118/performance/2906
new file mode 100644
index 00000000..3757a298
--- /dev/null
+++ b/results/classifier/118/performance/2906
@@ -0,0 +1,43 @@
+performance: 0.982
+x86: 0.975
+architecture: 0.972
+graphic: 0.802
+mistranslation: 0.802
+device: 0.801
+arm: 0.636
+semantic: 0.617
+permissions: 0.351
+files: 0.332
+debug: 0.323
+register: 0.309
+boot: 0.307
+network: 0.289
+risc-v: 0.279
+PID: 0.278
+socket: 0.272
+hypervisor: 0.252
+user-level: 0.242
+vnc: 0.236
+i386: 0.225
+VMM: 0.170
+ppc: 0.154
+kernel: 0.140
+TCG: 0.124
+virtual: 0.089
+peripherals: 0.086
+KVM: 0.065
+assembly: 0.048
+
+x86 (32-bit) multicore very slow, but x86-64 is fast (on macOS arm64 host)
+Description of problem:
+Adding more cores doesn't slow down an x86-32 guest on an x86-64 host, nor an x86-64 guest on an arm64 host. However, it massively slows down an x86-32 guest on an arm64 host.
+Steps to reproduce:
+1. Run 32-bit guest or 32-bit installer
+2.
+3.
+
+I have replicated this on several OSes using Homebrew QEMU, source-built QEMU, and UTM. This is not to be confused with a different bug in UTM that caused its version of QEMU to be slow.
+
+This also seems to apply to 32-bit processes in an x86-64 guest.
+Additional information:
+https://github.com/utmapp/UTM/issues/5468
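+
+One thing that may be worth checking (an assumption on my part, not something established above): whether the x86-32-on-arm64 combination falls back to single-threaded TCG. Something like:
+
+```
+# force multi-threaded TCG and compare; QEMU will warn or refuse
+# if the guest/host combination does not support it
+qemu-system-i386 -accel tcg,thread=multi -smp 4 [...]
+```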
diff --git a/results/classifier/118/performance/301 b/results/classifier/118/performance/301
new file mode 100644
index 00000000..f0d535df
--- /dev/null
+++ b/results/classifier/118/performance/301
@@ -0,0 +1,31 @@
+performance: 0.875
+semantic: 0.715
+device: 0.671
+network: 0.513
+architecture: 0.490
+arm: 0.470
+graphic: 0.300
+register: 0.252
+socket: 0.244
+assembly: 0.240
+debug: 0.221
+virtual: 0.185
+i386: 0.180
+mistranslation: 0.173
+hypervisor: 0.168
+x86: 0.153
+vnc: 0.101
+peripherals: 0.092
+boot: 0.084
+permissions: 0.037
+PID: 0.033
+files: 0.030
+user-level: 0.027
+ppc: 0.023
+VMM: 0.018
+TCG: 0.017
+KVM: 0.007
+risc-v: 0.005
+kernel: 0.003
+
+Assertion `addr < cache->len && 2 <= cache->len - addr' in virtio-blk
diff --git a/results/classifier/118/performance/321 b/results/classifier/118/performance/321
new file mode 100644
index 00000000..95c9cca2
--- /dev/null
+++ b/results/classifier/118/performance/321
@@ -0,0 +1,31 @@
+performance: 0.814
+device: 0.788
+graphic: 0.638
+network: 0.577
+arm: 0.479
+permissions: 0.448
+debug: 0.412
+architecture: 0.362
+socket: 0.297
+peripherals: 0.281
+VMM: 0.275
+semantic: 0.242
+ppc: 0.237
+hypervisor: 0.235
+boot: 0.227
+kernel: 0.225
+vnc: 0.194
+register: 0.192
+mistranslation: 0.192
+PID: 0.189
+TCG: 0.159
+risc-v: 0.131
+i386: 0.130
+files: 0.126
+virtual: 0.100
+user-level: 0.096
+x86: 0.085
+assembly: 0.044
+KVM: 0.010
+
+qemu 5.2.0 configure script explodes when in read only directory
diff --git a/results/classifier/118/performance/343 b/results/classifier/118/performance/343
new file mode 100644
index 00000000..04c7a2f4
--- /dev/null
+++ b/results/classifier/118/performance/343
@@ -0,0 +1,31 @@
+mistranslation: 0.964
+performance: 0.928
+device: 0.905
+semantic: 0.819
+virtual: 0.655
+graphic: 0.464
+network: 0.462
+architecture: 0.399
+arm: 0.323
+user-level: 0.313
+permissions: 0.164
+vnc: 0.141
+ppc: 0.140
+boot: 0.139
+hypervisor: 0.139
+files: 0.133
+i386: 0.122
+register: 0.106
+VMM: 0.106
+risc-v: 0.097
+debug: 0.087
+peripherals: 0.076
+x86: 0.062
+assembly: 0.051
+socket: 0.050
+PID: 0.025
+kernel: 0.017
+KVM: 0.009
+TCG: 0.009
+
+madvise reports success, but doesn't implement WIPEONFORK.
diff --git a/results/classifier/118/performance/404 b/results/classifier/118/performance/404
new file mode 100644
index 00000000..d26e5a0e
--- /dev/null
+++ b/results/classifier/118/performance/404
@@ -0,0 +1,31 @@
+performance: 0.958
+TCG: 0.917
+device: 0.822
+hypervisor: 0.715
+architecture: 0.685
+graphic: 0.654
+boot: 0.611
+network: 0.559
+permissions: 0.499
+peripherals: 0.492
+semantic: 0.489
+virtual: 0.452
+arm: 0.447
+register: 0.439
+socket: 0.390
+kernel: 0.382
+risc-v: 0.368
+files: 0.363
+mistranslation: 0.277
+vnc: 0.241
+ppc: 0.225
+user-level: 0.219
+debug: 0.155
+PID: 0.154
+assembly: 0.124
+i386: 0.039
+KVM: 0.036
+VMM: 0.030
+x86: 0.015
+
+Windows XP takes much longer to boot in TCG mode since 5.0
diff --git a/results/classifier/118/performance/409 b/results/classifier/118/performance/409
new file mode 100644
index 00000000..ba4bac5c
--- /dev/null
+++ b/results/classifier/118/performance/409
@@ -0,0 +1,31 @@
+performance: 0.917
+files: 0.880
+device: 0.846
+architecture: 0.809
+network: 0.711
+arm: 0.575
+ppc: 0.457
+peripherals: 0.435
+kernel: 0.435
+semantic: 0.433
+permissions: 0.410
+graphic: 0.386
+VMM: 0.382
+risc-v: 0.351
+PID: 0.350
+TCG: 0.339
+boot: 0.312
+register: 0.289
+hypervisor: 0.245
+debug: 0.245
+socket: 0.225
+KVM: 0.193
+vnc: 0.180
+virtual: 0.163
+i386: 0.129
+mistranslation: 0.120
+user-level: 0.105
+x86: 0.088
+assembly: 0.046
+
+tar can only read 4096 bytes from some files on 9p
diff --git a/results/classifier/118/performance/445 b/results/classifier/118/performance/445
new file mode 100644
index 00000000..d90bb16c
--- /dev/null
+++ b/results/classifier/118/performance/445
@@ -0,0 +1,31 @@
+performance: 0.955
+device: 0.909
+graphic: 0.482
+mistranslation: 0.420
+peripherals: 0.292
+user-level: 0.279
+boot: 0.256
+semantic: 0.253
+debug: 0.180
+permissions: 0.114
+virtual: 0.112
+ppc: 0.110
+arm: 0.108
+risc-v: 0.108
+architecture: 0.035
+vnc: 0.025
+network: 0.023
+register: 0.022
+assembly: 0.017
+files: 0.016
+hypervisor: 0.014
+PID: 0.011
+socket: 0.009
+VMM: 0.006
+TCG: 0.003
+kernel: 0.003
+i386: 0.002
+KVM: 0.001
+x86: 0.001
+
+QEMU + DOS keyboard behavior
diff --git a/results/classifier/118/performance/472 b/results/classifier/118/performance/472
new file mode 100644
index 00000000..b4184c4c
--- /dev/null
+++ b/results/classifier/118/performance/472
@@ -0,0 +1,31 @@
+performance: 0.858
+device: 0.840
+ppc: 0.632
+x86: 0.542
+TCG: 0.537
+i386: 0.500
+kernel: 0.427
+arm: 0.405
+architecture: 0.371
+VMM: 0.360
+network: 0.351
+graphic: 0.345
+boot: 0.317
+mistranslation: 0.314
+semantic: 0.272
+risc-v: 0.260
+KVM: 0.247
+peripherals: 0.242
+debug: 0.238
+socket: 0.236
+vnc: 0.214
+hypervisor: 0.204
+virtual: 0.103
+register: 0.057
+PID: 0.032
+assembly: 0.019
+user-level: 0.016
+files: 0.005
+permissions: 0.005
+
+Device trees should specify `clock-frequency` property for `/cpus/cpu*` nodes
diff --git a/results/classifier/118/performance/485 b/results/classifier/118/performance/485
new file mode 100644
index 00000000..ca043c43
--- /dev/null
+++ b/results/classifier/118/performance/485
@@ -0,0 +1,31 @@
+performance: 0.851
+virtual: 0.761
+debug: 0.729
+device: 0.666
+graphic: 0.520
+hypervisor: 0.468
+architecture: 0.359
+semantic: 0.338
+mistranslation: 0.333
+network: 0.311
+VMM: 0.230
+boot: 0.213
+i386: 0.157
+assembly: 0.143
+x86: 0.132
+PID: 0.120
+vnc: 0.095
+register: 0.089
+TCG: 0.082
+permissions: 0.079
+ppc: 0.078
+arm: 0.045
+KVM: 0.029
+risc-v: 0.009
+socket: 0.005
+user-level: 0.004
+peripherals: 0.003
+files: 0.003
+kernel: 0.002
+
+Failed to restore domain - error load load virtio-balloon:virtio
diff --git a/results/classifier/118/performance/490484 b/results/classifier/118/performance/490484
new file mode 100644
index 00000000..5976baf2
--- /dev/null
+++ b/results/classifier/118/performance/490484
@@ -0,0 +1,93 @@
+architecture: 0.953
+performance: 0.936
+semantic: 0.914
+graphic: 0.910
+device: 0.886
+PID: 0.884
+user-level: 0.883
+x86: 0.872
+kernel: 0.870
+KVM: 0.868
+boot: 0.859
+mistranslation: 0.854
+virtual: 0.848
+ppc: 0.848
+debug: 0.846
+peripherals: 0.843
+vnc: 0.833
+permissions: 0.831
+assembly: 0.812
+arm: 0.801
+network: 0.789
+register: 0.787
+VMM: 0.752
+risc-v: 0.726
+TCG: 0.722
+socket: 0.715
+hypervisor: 0.712
+files: 0.706
+i386: 0.563
+
+Running a 64-bit client on a 64-bit Intel host crashes
+
+Binary package hint: qemu-kvm
+
+Running a Windows 7 VM halts in early boot with
+
+kvm: unhandled exit 80000021
+kvm_run returned -22
+
+ProblemType: Bug
+Architecture: amd64
+Date: Mon Nov 30 21:28:54 2009
+DistroRelease: Ubuntu 9.10
+KvmCmdLine: Error: command ['ps', '-C', 'kvm', '-F'] failed with exit code 1: UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
+MachineType: System manufacturer P5Q-PRO
+NonfreeKernelModules: fglrx
+Package: kvm (not installed)
+ProcCmdLine: BOOT_IMAGE=/vmlinuz-2.6.31-14-generic root=UUID=17a8e181-fac7-461e-8cad-8aea97be2536 ro quiet splash
+ProcEnviron:
+ LANGUAGE=en_US:en
+ PATH=(custom, user)
+ LANG=en_US.UTF-8
+ SHELL=/bin/bash
+ProcVersionSignature: Ubuntu 2.6.31-14.48-generic
+SourcePackage: qemu-kvm
+Uname: Linux 2.6.31-14-generic x86_64
+dmi.bios.date: 07/10/2008
+dmi.bios.vendor: American Megatrends Inc.
+dmi.bios.version: 1004
+dmi.board.asset.tag: To Be Filled By O.E.M.
+dmi.board.name: P5Q-PRO
+dmi.board.vendor: ASUSTeK Computer INC.
+dmi.board.version: Rev 1.xx
+dmi.chassis.asset.tag: Asset-1234567890
+dmi.chassis.type: 3
+dmi.chassis.vendor: Chassis Manufacture
+dmi.chassis.version: Chassis Version
+dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvr1004:bd07/10/2008:svnSystemmanufacturer:pnP5Q-PRO:pvrSystemVersion:rvnASUSTeKComputerINC.:rnP5Q-PRO:rvrRev1.xx:cvnChassisManufacture:ct3:cvrChassisVersion:
+dmi.product.name: P5Q-PRO
+dmi.product.version: System Version
+dmi.sys.vendor: System manufacturer
+
+
+
+Thanks for the information.
+
+regards
+chuck
+
+Hey Chuck-
+
+You marked this confirmed... Are you able to reproduce this?
+
+Hi Sarunas-
+
+Were you able to install windows7 and just the reboot failed?  Or are you using a windows7 image that was installed elsewhere (or otherwise)?
+
+Anthony, any idea of the state of 64bit Windows7 on a 64bit QEMU host?
+
+I was able to install windows7 and just the reboot failed. It all works in VirtualBox OSE though.
+
+Looks like the install failed and no MBR was written.
+
diff --git a/results/classifier/118/performance/498523 b/results/classifier/118/performance/498523
new file mode 100644
index 00000000..03d045b6
--- /dev/null
+++ b/results/classifier/118/performance/498523
@@ -0,0 +1,77 @@
+performance: 0.984
+mistranslation: 0.981
+device: 0.926
+architecture: 0.894
+files: 0.894
+virtual: 0.886
+permissions: 0.883
+graphic: 0.877
+semantic: 0.866
+PID: 0.861
+hypervisor: 0.846
+arm: 0.845
+assembly: 0.824
+ppc: 0.816
+user-level: 0.813
+socket: 0.807
+network: 0.805
+VMM: 0.796
+vnc: 0.784
+kernel: 0.736
+register: 0.735
+risc-v: 0.727
+debug: 0.665
+x86: 0.652
+peripherals: 0.648
+TCG: 0.640
+i386: 0.633
+KVM: 0.619
+boot: 0.611
+
+Add on-line write compression support to qcow2
+
+This is a wishlist item.  Launchpad really needs a way for the submitter to indicate this.
+
+It would be really cool if qemu were to support disk compression on-line for writes.
+
+I know this wouldn't be really easy.  Although most OS's use blocks, you can really only count on being able to compress 512-byte sectors, which doesn't give much room for a good compression ratio.  Moreover, the index indicating where in the image file each sector is located would be complex to manage, since the compressed blocks would be variable sized, and you'd be wanting to do some kind of best-fit allocation of space in the image file.  (If you were to make the image file compressed block size granularity, say, 64 bytes, you could probably do this best fit O(1).)  If you were to buffer enough writes, you could group arbitrary sequences of written sectors into blocks to compress (which with writeback could be sent to a helper thread on another CPU, so the throughput would be good).
+
++1 vote for this feature.
+
+As far as I know, QEMU v5.1 now has support for compression filters, e.g. by creating a qcow2 image with:
+
+ qemu-img create -f qcow2 -o compression_type=zlib image.qcow2 1G
+
+... so I think we can finally mark this ticket here as done.
+
+On 11/19/20 3:39 AM, Thomas Huth wrote:
+> As far as I know, QEMU v5.1 now has support for compression filters,
+> e.g. by creating a qcow2 image with:
+> 
+>  qemu-img create -f qcow2 -o compression_type=zlib image.qcow2 1G
+> 
+> ... so I think we can finally mark this ticket here as done.
+
+That says what compression type to use when writing the entire disk in
+one pass, but not online write compression. I think we may be a bit
+premature in calling this 'fix released', although I'm not certain we
+will ever try to add the feature requested.
+
+> 
+> ** Changed in: qemu
+>        Status: Confirmed => Fix Released
+> 
+
+-- 
+Eric Blake, Principal Software Engineer
+Red Hat, Inc.           +1-919-301-3226
+Virtualization:  qemu.org | libvirt.org
+
+
+
+Ok, sorry, seems like I misunderstood that new compression_type feature. If the requested feature will likely never be implemented, should we move this to WontFix instead?
+
+The compression filter can be used e.g. with -drive driver=compress,file.driver=qcow2,file.file.filename=foo.qcow2.  However, it shouldn’t be used lightly, as it will only do the right thing in very specific circumstances, namely every cluster that’s written to must not be allocated already.  So writing to the same cluster twice will not work.  (Which is why I was hesitant to merge this filter, but in the end I was content with the fact that it’s at least difficult enough to use that unsuspecting users hopefully won’t accidentally enable it.)
+
+(It should be noted that this is not a limitation of the compression filter, though, but of qcow2’s implementation (VMDK isn’t any better).  So technically qemu has the feature now, but qcow2 is still missing it.)
+
diff --git a/results/classifier/118/performance/534 b/results/classifier/118/performance/534
new file mode 100644
index 00000000..98ea5fa6
--- /dev/null
+++ b/results/classifier/118/performance/534
@@ -0,0 +1,31 @@
+performance: 0.822
+device: 0.804
+arm: 0.459
+graphic: 0.396
+network: 0.378
+kernel: 0.281
+architecture: 0.238
+peripherals: 0.227
+boot: 0.214
+VMM: 0.192
+ppc: 0.190
+risc-v: 0.162
+i386: 0.162
+debug: 0.158
+vnc: 0.151
+hypervisor: 0.123
+TCG: 0.119
+KVM: 0.094
+x86: 0.092
+semantic: 0.092
+virtual: 0.066
+register: 0.048
+socket: 0.043
+files: 0.036
+mistranslation: 0.035
+PID: 0.032
+user-level: 0.030
+permissions: 0.029
+assembly: 0.013
+
+Memcpy param-overlap through e1000e_write_to_rx_buffers
diff --git a/results/classifier/118/performance/548 b/results/classifier/118/performance/548
new file mode 100644
index 00000000..14fcc1ac
--- /dev/null
+++ b/results/classifier/118/performance/548
@@ -0,0 +1,31 @@
+performance: 0.829
+graphic: 0.575
+device: 0.490
+debug: 0.472
+architecture: 0.462
+assembly: 0.142
+semantic: 0.078
+hypervisor: 0.066
+virtual: 0.056
+mistranslation: 0.047
+vnc: 0.044
+boot: 0.043
+VMM: 0.042
+peripherals: 0.032
+network: 0.024
+ppc: 0.018
+arm: 0.014
+user-level: 0.011
+register: 0.010
+permissions: 0.009
+risc-v: 0.008
+i386: 0.008
+socket: 0.007
+TCG: 0.005
+files: 0.005
+x86: 0.004
+PID: 0.004
+KVM: 0.003
+kernel: 0.002
+
+Null-ptr dereference in megasas_finish_dcmd
diff --git a/results/classifier/118/performance/549 b/results/classifier/118/performance/549
new file mode 100644
index 00000000..3079b668
--- /dev/null
+++ b/results/classifier/118/performance/549
@@ -0,0 +1,31 @@
+performance: 0.827
+device: 0.798
+PID: 0.718
+network: 0.657
+files: 0.468
+architecture: 0.387
+graphic: 0.346
+mistranslation: 0.267
+ppc: 0.237
+permissions: 0.230
+assembly: 0.222
+boot: 0.208
+peripherals: 0.196
+arm: 0.185
+semantic: 0.168
+virtual: 0.156
+debug: 0.105
+user-level: 0.104
+socket: 0.090
+register: 0.072
+hypervisor: 0.062
+kernel: 0.040
+vnc: 0.032
+VMM: 0.028
+x86: 0.015
+risc-v: 0.012
+i386: 0.008
+KVM: 0.006
+TCG: 0.004
+
+FPE in npcm7xx_clk_update_pll
diff --git a/results/classifier/118/performance/588735 b/results/classifier/118/performance/588735
new file mode 100644
index 00000000..0f2c5c75
--- /dev/null
+++ b/results/classifier/118/performance/588735
@@ -0,0 +1,68 @@
+performance: 0.857
+socket: 0.856
+virtual: 0.853
+KVM: 0.772
+architecture: 0.769
+x86: 0.760
+mistranslation: 0.757
+semantic: 0.718
+permissions: 0.674
+register: 0.604
+network: 0.591
+graphic: 0.589
+device: 0.575
+user-level: 0.569
+peripherals: 0.469
+VMM: 0.456
+kernel: 0.456
+debug: 0.437
+PID: 0.423
+ppc: 0.408
+hypervisor: 0.389
+vnc: 0.264
+risc-v: 0.201
+i386: 0.196
+assembly: 0.187
+TCG: 0.186
+boot: 0.185
+files: 0.137
+arm: 0.116
+
+Quit command not working
+
+Qemu strace
+
+
+
+rt_sigreturn(0x1b)                      = 56
+clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f6fddecbad0) = ? ERESTARTNOINTR (To be restarted)
+--- SIGPROF (Profiling timer expired) @ 0 (0) ---
+rt_sigreturn(0x1b)                      = 56
+
+
+started with :
+
+[root@virtual-test ~]# /root/qemu-test/qemu-kvm/x86_64-softmmu/qemu-system-x86_64 -net tap,vlan=0,name=tap.0 -chardev socket,id=serial0,host=0.0.0.0,port=$CONSOLEPORT,telnet,server,nowait -serial chardev:serial0 -hda hda -hdb hdb -hdc hdc -hdd hdd -fda fd0 -fdb fd1 -chardev socket,id=monitor,host=0.0.0.0,port=$MONITORPORT,telnet,server,nowait -monitor chardev:monitor -net nic,macaddr=$MAC,vlan=0,model=e1000,name=e1000.0 -M pc -m 4096
+
+When removing -m 4096, the quit command works.
+
+But I think it's a combination of different args that causes the problem.
+
+I tried this exact syntax and could not reproduce.  What version of qemu are you using?
+
+how much memory do you have in the host?
+
+The host now has 8GB of memory, problem remains.
+
+
+Try
+
+ ./configure --target-list=x86_64-softmmu --enable-profiler --enable-gprof --enable-io-thread --enable-debug-tcg --enable-debug
+
+
+Without these options it magically works :)
+
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/589231 b/results/classifier/118/performance/589231
new file mode 100644
index 00000000..393b08d3
--- /dev/null
+++ b/results/classifier/118/performance/589231
@@ -0,0 +1,44 @@
+performance: 0.963
+peripherals: 0.869
+KVM: 0.859
+device: 0.837
+graphic: 0.830
+risc-v: 0.731
+architecture: 0.639
+mistranslation: 0.611
+network: 0.606
+semantic: 0.550
+kernel: 0.514
+register: 0.503
+user-level: 0.498
+PID: 0.494
+hypervisor: 0.461
+virtual: 0.431
+socket: 0.422
+debug: 0.416
+ppc: 0.395
+boot: 0.392
+arm: 0.372
+i386: 0.317
+files: 0.296
+VMM: 0.283
+vnc: 0.273
+x86: 0.266
+permissions: 0.261
+TCG: 0.239
+assembly: 0.158
+
+cirrus vga is very slow in qemu-kvm-0.12
+
+As has been reported multiple times (*), there was a regression in qemu-kvm from 0.11 to 0.12 which causes a significant slowdown in cirrus vga emulation.  For Windows guests, where the "standard VGA" driver works reasonably well, -vga std is a good workaround. But for e.g. Linux guests, where the vesa driver is painfully slow on its own, that's not a solution.
+
+(*)
+ debian qemu-kvm bug report #574988: http://bugs.debian.org/574988#17
+ debian qemu bugreport (might be related): http://bugs.debian.org/575720
+ kvm mailinglist thread: http://<email address hidden>/msg33459.html
+ another kvm ml thread: http://<email address hidden>/msg32744.html
+
+QEMU 0.12 is pretty much outdated - has this been fixed in a newer version of QEMU? I.e. do you think we can close this bug nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/597 b/results/classifier/118/performance/597
new file mode 100644
index 00000000..920d7906
--- /dev/null
+++ b/results/classifier/118/performance/597
@@ -0,0 +1,49 @@
+performance: 0.931
+virtual: 0.876
+device: 0.815
+graphic: 0.769
+network: 0.682
+hypervisor: 0.660
+mistranslation: 0.570
+boot: 0.550
+semantic: 0.540
+register: 0.533
+PID: 0.513
+kernel: 0.492
+architecture: 0.487
+socket: 0.471
+arm: 0.425
+user-level: 0.398
+vnc: 0.388
+i386: 0.378
+VMM: 0.368
+debug: 0.348
+ppc: 0.345
+TCG: 0.317
+x86: 0.311
+permissions: 0.302
+peripherals: 0.286
+risc-v: 0.272
+KVM: 0.232
+files: 0.183
+assembly: 0.055
+
+sunhme sometimes causes the VM to hang forever
+Description of problem:
+When using sunhme, sometimes on receiving traffic (and doing disk IO?) it will get slower and slower until it becomes entirely unresponsive, which does not happen on the real hardware I have sitting next to me (Sun Netra T1, running the same OS+kernel, though not the same image)
+
+virtio-net-pci does not, so far, demonstrate the problem, and neither does just sending a lot of traffic out over the sunhme interface, so it appears to require receiving or some more complex interaction.
+
+It doesn't always happen immediately - it sometimes takes a couple of tries with the command - but when it does happen, the VM is gone.
+
+Output logged to console below.
+Steps to reproduce:
+1. Log into VM (rich/omgqemu)
+2. sudo apt clean;sudo apt update;
+3. If it doesn't lock up the VM, repeat step 2 a few times.
+Additional information:
+Disk image can be found [here](https://www.dropbox.com/s/0oosyf7xej44v9n/sunhme_repro_disk.tgz?dl=0) (tarred in the hope that it does something reasonable with sparseness)
+ 
+Console output can be found [here](https://www.dropbox.com/s/t1wxx41vzv8p3l6/sunhme%20sadness.txt?dl=0)
+
+Ah yes, [the initrd and vmlinux](https://www.dropbox.com/s/t7i4gs7poqaeanz/oops_boot.tgz?dl=0) would help, wouldn't they, though I imagine the ones in the VM itself would boot...
diff --git a/results/classifier/118/performance/597351 b/results/classifier/118/performance/597351
new file mode 100644
index 00000000..57278a7d
--- /dev/null
+++ b/results/classifier/118/performance/597351
@@ -0,0 +1,67 @@
+performance: 0.968
+graphic: 0.682
+device: 0.648
+semantic: 0.548
+user-level: 0.514
+architecture: 0.456
+peripherals: 0.440
+network: 0.415
+mistranslation: 0.389
+virtual: 0.332
+socket: 0.313
+x86: 0.190
+PID: 0.182
+permissions: 0.171
+ppc: 0.163
+hypervisor: 0.148
+debug: 0.136
+assembly: 0.134
+i386: 0.112
+KVM: 0.111
+register: 0.085
+boot: 0.082
+TCG: 0.072
+kernel: 0.063
+VMM: 0.055
+risc-v: 0.051
+arm: 0.048
+vnc: 0.043
+files: 0.036
+
+Slow UDP performance with virtio device
+
+I'm working on an app that is very sensitive to round-trip latency
+between the guest and host, and qemu/kvm seems to be significantly
+slower than it needs to be.
+
+The attached program is a ping/pong over UDP.  Call it with a single
+argument to start a listener/echo server on that port.  With three
+arguments it becomes a counted "pinger" that will exit after a
+specified number of round trips for performance measurements.  For
+example:
+
+  $ gcc -o udp-pong udp-pong.c
+  $ ./udp-pong 12345 &                       # start a listener on port 12345
+  $ time ./udp-pong 127.0.0.1 12345 1000000  # time a million round trips
+
+When run on the loopback device on a single machine (true on the host
+or within a guest), I get about 100k/s.
+
+When run across a port forward using "user" networking on qemu (or
+kvm, the performance is the same) and the default rtl8139 driver (both
+the host and guest are Ubuntu Lucid), I get about 10k/s.  This seems
+very slow, but perhaps unavoidably so?
+
+When run in the same configuration using the "virtio" driver, I get
+only 2k/s.  This is almost certainly a bug in the virtio driver, given
+that it's a paravirtualized device that is 5x slower than the "slow"
+hardware emulation.
+
+I get no meaningful change in performance between kvm/qemu.
+
+
+
+Triaging old bug tickets ... can you still reproduce this issue with the latest version of QEMU? Have you already tried vhost?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/601946 b/results/classifier/118/performance/601946
new file mode 100644
index 00000000..aecb8490
--- /dev/null
+++ b/results/classifier/118/performance/601946
@@ -0,0 +1,107 @@
+performance: 0.839
+permissions: 0.836
+mistranslation: 0.815
+user-level: 0.802
+virtual: 0.767
+graphic: 0.756
+device: 0.754
+register: 0.746
+peripherals: 0.741
+debug: 0.737
+hypervisor: 0.734
+semantic: 0.711
+ppc: 0.708
+assembly: 0.702
+VMM: 0.679
+socket: 0.658
+risc-v: 0.653
+architecture: 0.650
+PID: 0.641
+network: 0.637
+boot: 0.611
+vnc: 0.609
+arm: 0.599
+files: 0.572
+TCG: 0.564
+kernel: 0.561
+KVM: 0.532
+x86: 0.507
+i386: 0.335
+
+[Feature request] qemu-img multi-threaded compressed image conversion
+
+Feature request:
+qemu-img multi-threaded compressed image conversion
+
+Suppose I want to convert a raw image to compressed qcow2. Multi-threaded conversion would be much faster, because the bottleneck is compressing the data.
+
+Hi,
+
+The problem is that it is more than just the compression that is at issue: with modern CPUs, disk speed is a problem, and compression is often stream-based. For now there isn't enough valid data that this qualifies as a bug/RFE.
+
+If you decide to try and implement it, and provide data showing that this is actually a win, please reopen this.
+
+Regards,
+Jes
+
+
+1. During the benchmark I used iotop and plain top: qemu-img was eating all of my CPU (3.07 GHz) while disk streaming was at low speed.
+2. Writing to disk on ext4 is cached very strongly, so writing in 4 streams is not the problem.
+3. For example, 7z gives a huge speed increase when compressing with multiple threads.
+4. Yes, I understand that compression is stream-based, so we can split the input stream into chunks and compress each chunk individually.
+
+You can use `time qemu-img convert ...` and look at the user/system/real timings. In my case, user time is nearly equal to real time, so CPU work is the bottleneck.
+
+There are also projects like http://compression.ca/pbzip2/ . We'll be facing more and more cores per CPU, so we should use these techniques.
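+
+For example (file names hypothetical), a compressed conversion timed as suggested above:
+
+```
+# if "user" time is close to "real" time, compression is CPU-bound
+time qemu-img convert -c -O qcow2 disk.raw disk.qcow2
+```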
+
+The compression in this case is certainly chunked already, otherwise you couldn't implement a pseudo block device without reading the entire stream to read the last block!  As the data in the new disk is necessarily chunk-compressed, parallelisation is perfectly feasible; it's just a question of the algorithm you use to arbitrate the work between the threads, which may need some thought as you'd likely be navigating a tree structure.
+
+There's no question that Jes' suggestion would create a 12x speed-up for me, and there's pretty standard off-the-shelf server hardware with 48 cores.  As Jan-Simon Möller points out, being single-threaded and single-process isn't much of an option any more.  If one is trying to compress, say, a 4TB virtual disk image, then using a little over 2% of the available CPU time - meaning you have to wait a week - is going to be... frustrating :)
+
+
+I'd like to note that I use qemu-img to back up snapshots of images. This works fine, it's just so slow. Of my 24 cores, only 1 is used to compress the image. 
+
+It could be so much faster.
+
+qcow2_write_compressed in block/qcow2.c would need to be changed. 
+Currently it seems to need bigger changes, as it always does compress+write for one block.
+Not sure how well it would handle multiple writes in parallel, so the safest would be to avoid that and just wait for the previous writer to finish before starting to write.
+
+
+It looks like qcow2_write_compressed() has been removed and turned into a qemu co-routine in qemu 2.8.0 (released in December 2016) to support live compressed back-ups.  Any pointers to start working on this?  We have servers with 128 CPUs and it's very sad to see them compress on a single CPU and take tens of minutes instead of a few seconds.. :)
+
+The fact that it's now a coroutine_fn doesn't change much; if anything, it makes it simpler to handle multiple writes in parallel.
+
+That was also my feeling, so nice to get a confirmation!
+
+Another related thing would be to allow qemu-nbd to write compressed blocks to its backing image - today, if you use a qcow2 with compression, any block which is written to gets uncompressed in the resulting image, and you need to recompress the image offline with qemu-img.
+
+Would you have any pointers/documentation on how best to implement this so both qemu-img and qemu-nbd can use multithreaded compressed writes?  I'm totally new to the qemu block subsystem.
+
+@~quentin.casasnovas please report this as a new feature request, instead of adding a comment to this one.
+
+@~quentin.casasnovas Are you still working on this? If not, then I would like to give this a shot.
+
+@Jinank I have not started working on this at all, so please go ahead!  Let me know if I can help with testing or anything, we make quite extensive use of nbd and qcow2 images internally.
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/80
+
+
diff --git a/results/classifier/118/performance/62 b/results/classifier/118/performance/62
new file mode 100644
index 00000000..aaf2a3a4
--- /dev/null
+++ b/results/classifier/118/performance/62
@@ -0,0 +1,31 @@
+performance: 0.835
+device: 0.765
+debug: 0.719
+assembly: 0.599
+boot: 0.530
+network: 0.471
+architecture: 0.420
+graphic: 0.419
+register: 0.192
+arm: 0.160
+semantic: 0.146
+PID: 0.141
+mistranslation: 0.140
+i386: 0.127
+VMM: 0.098
+virtual: 0.096
+hypervisor: 0.087
+vnc: 0.047
+TCG: 0.041
+peripherals: 0.040
+kernel: 0.032
+user-level: 0.032
+ppc: 0.031
+x86: 0.030
+KVM: 0.016
+socket: 0.015
+files: 0.011
+permissions: 0.010
+risc-v: 0.006
+
+[OSS-Fuzz] ahci: stack overflow in ahci_cond_start_engines
diff --git a/results/classifier/118/performance/642 b/results/classifier/118/performance/642
new file mode 100644
index 00000000..c3a2fe91
--- /dev/null
+++ b/results/classifier/118/performance/642
@@ -0,0 +1,34 @@
+performance: 0.993
+device: 0.887
+graphic: 0.817
+semantic: 0.793
+mistranslation: 0.773
+boot: 0.391
+files: 0.356
+ppc: 0.354
+network: 0.277
+architecture: 0.273
+PID: 0.219
+vnc: 0.206
+risc-v: 0.190
+TCG: 0.166
+register: 0.133
+arm: 0.128
+debug: 0.127
+hypervisor: 0.118
+VMM: 0.101
+peripherals: 0.095
+user-level: 0.078
+socket: 0.056
+virtual: 0.045
+permissions: 0.030
+assembly: 0.006
+i386: 0.005
+kernel: 0.003
+x86: 0.002
+KVM: 0.001
+
+Slow QEMU I/O on macOS host
+Description of problem:
+QEMU on a macOS host gives very low I/O speed, tested with the fio tool and compared to a Linux host.
+Tested on QEMU v6.1.0 and the recent master.
diff --git a/results/classifier/118/performance/672 b/results/classifier/118/performance/672
new file mode 100644
index 00000000..4b19ef0f
--- /dev/null
+++ b/results/classifier/118/performance/672
@@ -0,0 +1,33 @@
+performance: 0.847
+architecture: 0.705
+device: 0.560
+ppc: 0.316
+boot: 0.140
+mistranslation: 0.114
+semantic: 0.097
+user-level: 0.081
+PID: 0.072
+network: 0.068
+arm: 0.065
+debug: 0.058
+virtual: 0.055
+permissions: 0.048
+socket: 0.047
+graphic: 0.037
+peripherals: 0.033
+register: 0.032
+files: 0.029
+TCG: 0.025
+hypervisor: 0.022
+assembly: 0.021
+VMM: 0.019
+risc-v: 0.014
+kernel: 0.014
+vnc: 0.007
+x86: 0.003
+i386: 0.002
+KVM: 0.002
+
+Slow emulation of mac99 (PowerPC G4) due to being single-threaded.
+Additional information:
+None
diff --git a/results/classifier/118/performance/680 b/results/classifier/118/performance/680
new file mode 100644
index 00000000..80b35ee3
--- /dev/null
+++ b/results/classifier/118/performance/680
@@ -0,0 +1,31 @@
+performance: 0.837
+peripherals: 0.757
+architecture: 0.700
+graphic: 0.595
+device: 0.475
+mistranslation: 0.233
+ppc: 0.180
+semantic: 0.161
+debug: 0.047
+network: 0.039
+virtual: 0.038
+hypervisor: 0.035
+permissions: 0.035
+risc-v: 0.027
+arm: 0.025
+assembly: 0.022
+boot: 0.021
+VMM: 0.017
+TCG: 0.013
+vnc: 0.011
+PID: 0.008
+x86: 0.005
+register: 0.005
+socket: 0.004
+i386: 0.003
+user-level: 0.002
+kernel: 0.002
+KVM: 0.002
+files: 0.001
+
+multi-threaded qemu instance and pci bar
diff --git a/results/classifier/118/performance/719 b/results/classifier/118/performance/719
new file mode 100644
index 00000000..6f595b1d
--- /dev/null
+++ b/results/classifier/118/performance/719
@@ -0,0 +1,49 @@
+performance: 0.996
+graphic: 0.977
+network: 0.956
+device: 0.890
+ppc: 0.857
+PID: 0.780
+semantic: 0.756
+debug: 0.737
+socket: 0.731
+architecture: 0.712
+vnc: 0.705
+VMM: 0.702
+hypervisor: 0.674
+risc-v: 0.669
+x86: 0.636
+mistranslation: 0.624
+i386: 0.618
+permissions: 0.609
+KVM: 0.555
+register: 0.551
+kernel: 0.548
+peripherals: 0.545
+arm: 0.517
+TCG: 0.514
+user-level: 0.463
+boot: 0.419
+assembly: 0.331
+virtual: 0.178
+files: 0.168
+
+live migration's performance with compression enabled is much worse than with compression disabled
+Description of problem:
+
+Steps to reproduce:
+1. Run the QEMU guests with a 1 Gbps network on the source host and destination host with the QEMU command line
+2. Run some memory work loads on Guest, for example, ./memtester 1G 1
+3. Set migration parameters in QEMU monitor. On source and destination, 
+   execute: #migrate_set_capability compress on
+   Other compression parameters are all default. 
+4. Run migrate command, # migrate -d tcp:10.156.208.154:4000
+5. The results: 
+   - without compression:  total time:  197366 ms   throughput:   937.81 mbps  transferred Ram: 22593703 kbytes 
+   - with compression: total time:  281711 ms   throughput:  90.24 mbps    transferred Ram: 3102898 kbytes  
+
+When compression is enabled, the transferred RAM is reduced a lot, but the throughput drops badly.
+The total time of live migration with compression is longer than without compression. 
+I also tried with 100G network bandwidth; it has the same problem.
+Additional information:
+
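+A few parameters worth recording when comparing runs (these HMP migration parameters exist, but the values shown are assumptions on my part, not tested here):
+
+```
+(qemu) migrate_set_parameter compress-threads 8
+(qemu) migrate_set_parameter compress-level 1
+(qemu) info migrate
+```
+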
diff --git a/results/classifier/118/performance/721793 b/results/classifier/118/performance/721793
new file mode 100644
index 00000000..902d5626
--- /dev/null
+++ b/results/classifier/118/performance/721793
@@ -0,0 +1,61 @@
+performance: 0.871
+graphic: 0.801
+device: 0.746
+architecture: 0.657
+network: 0.634
+user-level: 0.592
+socket: 0.588
+semantic: 0.587
+peripherals: 0.585
+ppc: 0.578
+x86: 0.549
+kernel: 0.541
+boot: 0.525
+hypervisor: 0.511
+PID: 0.482
+files: 0.479
+virtual: 0.442
+debug: 0.424
+vnc: 0.398
+i386: 0.364
+mistranslation: 0.363
+arm: 0.345
+permissions: 0.331
+register: 0.318
+risc-v: 0.286
+VMM: 0.260
+assembly: 0.257
+TCG: 0.254
+KVM: 0.176
+
+QEMU freezes on startup (100% CPU utilization)
+
+0.12.5 was the last version of QEMU that ran OK and booted any OS image.
+
+0.13.0-0.14.0 just freeze: the only thing I see is a black screen, and both of them also drive the CPU to 100%.
+This happens with both kernels 2.6.35.11 and 2.6.37.1, with and without PAE support.
+
+tested commands:
+
+W2000:
+$ qemu -m 256 -localtime -net nic,model=rtl8139 -net tap -usbdevice host:0e21:0750 /var/opt/vm/w2000.img
+W2000:
+$ qemu /var/opt/vm/w2000.img
+OpenBSD 4.8:
+$ qemu -cdrom ~/cd48.iso -boot d empty-qcow2.img
+
+Tried the `-M pc-0.12` selector and different audio cards (I've found it caused an infinite loop on startup once) -- no luck.
+Tried recent SeaBIOS from git -- still no luck.
+
+Attached: strace log of 0.14.0.
+
+Everything was tested on an HP Mini 311C with an Intel Atom N270.
+
+
+
+
+
+QEMU 0.15.0 fixes the problem.
+
+Closing as "Fix released" according to comment #3
+
diff --git a/results/classifier/118/performance/724 b/results/classifier/118/performance/724
new file mode 100644
index 00000000..3ab01a5c
--- /dev/null
+++ b/results/classifier/118/performance/724
@@ -0,0 +1,31 @@
+performance: 0.817
+graphic: 0.642
+debug: 0.516
+mistranslation: 0.433
+device: 0.392
+i386: 0.206
+arm: 0.159
+semantic: 0.157
+architecture: 0.152
+assembly: 0.140
+KVM: 0.138
+VMM: 0.068
+boot: 0.059
+ppc: 0.042
+hypervisor: 0.041
+network: 0.041
+x86: 0.040
+virtual: 0.032
+user-level: 0.030
+TCG: 0.028
+vnc: 0.025
+kernel: 0.024
+PID: 0.018
+risc-v: 0.015
+socket: 0.012
+register: 0.009
+peripherals: 0.007
+permissions: 0.007
+files: 0.002
+
+esp: heap-buffer-overflow in esp_fifo_pop_buf
diff --git a/results/classifier/118/performance/753916 b/results/classifier/118/performance/753916
new file mode 100644
index 00000000..54433ca0
--- /dev/null
+++ b/results/classifier/118/performance/753916
@@ -0,0 +1,64 @@
+performance: 0.983
+user-level: 0.903
+mistranslation: 0.876
+graphic: 0.863
+debug: 0.851
+files: 0.845
+boot: 0.836
+semantic: 0.800
+network: 0.795
+device: 0.784
+PID: 0.783
+permissions: 0.778
+ppc: 0.766
+architecture: 0.764
+register: 0.749
+vnc: 0.741
+peripherals: 0.710
+VMM: 0.704
+kernel: 0.694
+virtual: 0.690
+socket: 0.688
+x86: 0.685
+risc-v: 0.682
+arm: 0.658
+hypervisor: 0.629
+i386: 0.619
+TCG: 0.612
+assembly: 0.573
+KVM: 0.441
+
+performance bug with SeaBios 0.6.x
+
+In my tests SeaBios 0.5.1 has the best performance (100% faster).
+I run the QEMU port on Windows XP (Phenom II X4 945, 4 GB DDR3 RAM) and Windows XP (Pentium 4, 1 GB DDR RAM).
+
+Hi. Thanks for reporting this issue.
+
+Can you tell us a bit more about the problem?
+I'm not sure what the cause could be, but perhaps we can understand it better with some of the following information (plus anything else you can think of that could be related):
+ - What version of QEMU are you running on each machine?
+ - Did you build it yourself? If so, can you describe how? If not, can you provide a pointer to where you got it?
+ - What are you running as the guest environment(s)?
+ - I'm assuming that Windows XP is the host environment (two different host machines from your description). Which version / service packs do you have installed?
+ - How did you do the tests? For example, what is the benchmarking tool or load that you are using? How are you using those tools / loads? Can you provide the numbers for each host?
+
+I use QEMU to test PEs (preinstalled environments) on pen drives, with a .bat script:
+
+```
+SET SDL_VIDEODRIVER=directx
+qemu.exe -m 512 -localtime -M pc -hda \\.\physicaldrive1
+```
+
+My workstation runs Windows XP SP3 with all hotfixes, and I use QEMU 0.14.0 (this port: http://www.megaupload.com/?d=8LUG85F9).
+
+I boot the syslinux loader from a Linux PLD rescue .iso file.
+
+I recorded a test with CamStudio: http://www.megaupload.com/?d=37LDTOS3
+
+OK, from your test.swf file, I assume that the way you're testing is the boot-up of a Linux ISO, and that "100%" is an estimate of boot speed.
+
+I'm really not sure what the problem is. I can only suggest that you try various SeaBIOS versions and try to isolate which version is the problem. It also might be worth seeing if the problem affects other Linux distro boot-up.
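+
+A minimal way to A/B-test BIOS builds against the same guest, assuming the SeaBIOS `bios.bin` files are available locally (paths are illustrative):
+```
+qemu.exe -m 512 -localtime -M pc -bios C:\seabios-0.5.1\bios.bin -hda \\.\physicaldrive1
+qemu.exe -m 512 -localtime -M pc -bios C:\seabios-0.6.0\bios.bin -hda \\.\physicaldrive1
+```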
+
+SeaBIOS 0.x is pretty outdated nowadays, so I think we should close this bug ... anyway, if you still have problems with SeaBIOS, you should likely report them on the SeaBIOS mailing list (https://www.seabios.org/Mailinglist) instead of using the QEMU bug tracker.
+
diff --git a/results/classifier/118/performance/756 b/results/classifier/118/performance/756
new file mode 100644
index 00000000..0809374e
--- /dev/null
+++ b/results/classifier/118/performance/756
@@ -0,0 +1,31 @@
+performance: 0.853
+device: 0.775
+debug: 0.747
+network: 0.541
+architecture: 0.470
+kernel: 0.450
+virtual: 0.401
+semantic: 0.395
+graphic: 0.393
+peripherals: 0.318
+socket: 0.303
+assembly: 0.296
+hypervisor: 0.282
+vnc: 0.228
+mistranslation: 0.192
+boot: 0.182
+files: 0.159
+register: 0.156
+x86: 0.153
+permissions: 0.146
+VMM: 0.133
+KVM: 0.122
+arm: 0.122
+PID: 0.107
+ppc: 0.098
+user-level: 0.092
+TCG: 0.060
+i386: 0.040
+risc-v: 0.018
+
+qemu-system-m68k -M q800 -bios /dev/null segfaults
diff --git a/results/classifier/118/performance/760976 b/results/classifier/118/performance/760976
new file mode 100644
index 00000000..873ca9a3
--- /dev/null
+++ b/results/classifier/118/performance/760976
@@ -0,0 +1,59 @@
+performance: 0.988
+x86: 0.793
+boot: 0.721
+device: 0.694
+architecture: 0.648
+graphic: 0.632
+semantic: 0.582
+ppc: 0.567
+hypervisor: 0.511
+PID: 0.495
+network: 0.472
+socket: 0.469
+debug: 0.450
+files: 0.401
+user-level: 0.378
+VMM: 0.345
+register: 0.341
+vnc: 0.338
+permissions: 0.305
+risc-v: 0.300
+mistranslation: 0.258
+TCG: 0.244
+peripherals: 0.232
+i386: 0.228
+kernel: 0.217
+virtual: 0.215
+assembly: 0.205
+arm: 0.154
+KVM: 0.102
+
+Nexenta 3.0.1 fails to install
+
+The latest git version of QEMU (commit 420b6c317de87890e06225de6e2f8af7bf714df0) fails to boot Nexenta 3.0.1. I don't know if this is a bug in Nexenta, in QEMU, or both.
+
+You can obtain a bootable image of Nexenta from http://www.nexenta.org/releases/nexenta-core-platform_3.0.1-b134_x86.iso.zip
+
+Host: Linux/x86_64 gcc4.5 ./configure --enable-linux-aio --enable-io-thread --enable-kvm
+
+qemu-img create nexenta3.0.1 3G
+qemu -hda nexenta3.0.1 -cdrom nexenta-core-platform_3.0.1-b134_x86.iso -boot d -k en-us -m 256
+
+Boots to GRUB OK, but when you hit install you get: panic[cpu0]/thread=fec226c0: vmem_hash_delete(d4404690, d445abc0, 0): bad free.
+
+You get the same error with or without -enable-kvm.
+
+
+I have found that I can get to the installer if I give the -no-acpi argument.
+
+As others have noted, Nexenta is very slow. To get any sort of speed I used the 32-bit version and relaxed QEMU's disk caching (cache=unsafe), thus:
+
+qemu -drive file=nexenta3.0.1,index=0,media=disk,cache=unsafe -cdrom nexenta-core-platform_3.0.1-b134_x86.iso -boot d -k en-us -m 512 -enable-kvm -no-acpi
+
+Even then, performance is painful.
+
+Triaging old bug tickets ... Is this issue still reproducible with the latest version of QEMU?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/118/performance/78 b/results/classifier/118/performance/78
new file mode 100644
index 00000000..7fe42403
--- /dev/null
+++ b/results/classifier/118/performance/78
@@ -0,0 +1,31 @@
+performance: 0.805
+device: 0.783
+network: 0.712
+peripherals: 0.656
+arm: 0.634
+debug: 0.587
+VMM: 0.456
+PID: 0.393
+architecture: 0.385
+TCG: 0.383
+boot: 0.383
+graphic: 0.381
+register: 0.345
+risc-v: 0.336
+semantic: 0.258
+mistranslation: 0.217
+files: 0.214
+i386: 0.202
+permissions: 0.183
+kernel: 0.170
+ppc: 0.166
+user-level: 0.156
+x86: 0.146
+socket: 0.118
+vnc: 0.111
+KVM: 0.085
+virtual: 0.080
+assembly: 0.051
+hypervisor: 0.003
+
+msmouse serial mouse emulation broken? No id byte sent on reset
diff --git a/results/classifier/118/performance/781 b/results/classifier/118/performance/781
new file mode 100644
index 00000000..4b121909
--- /dev/null
+++ b/results/classifier/118/performance/781
@@ -0,0 +1,31 @@
+performance: 0.885
+debug: 0.696
+mistranslation: 0.666
+network: 0.621
+assembly: 0.544
+device: 0.525
+architecture: 0.440
+arm: 0.399
+graphic: 0.398
+semantic: 0.346
+ppc: 0.286
+i386: 0.270
+TCG: 0.264
+VMM: 0.245
+KVM: 0.224
+boot: 0.214
+x86: 0.213
+kernel: 0.212
+hypervisor: 0.173
+socket: 0.156
+vnc: 0.140
+risc-v: 0.128
+virtual: 0.081
+PID: 0.062
+register: 0.055
+user-level: 0.052
+peripherals: 0.036
+permissions: 0.025
+files: 0.017
+
+Assertion `addr < cache->len && 2 <= cache->len - addr' failed in address_space_stw_le_cached
diff --git a/results/classifier/118/performance/80 b/results/classifier/118/performance/80
new file mode 100644
index 00000000..06b30030
--- /dev/null
+++ b/results/classifier/118/performance/80
@@ -0,0 +1,31 @@
+performance: 0.947
+device: 0.812
+graphic: 0.500
+network: 0.344
+semantic: 0.339
+arm: 0.277
+ppc: 0.259
+i386: 0.251
+x86: 0.240
+virtual: 0.240
+boot: 0.236
+VMM: 0.223
+TCG: 0.193
+PID: 0.167
+peripherals: 0.155
+permissions: 0.154
+mistranslation: 0.152
+architecture: 0.114
+debug: 0.108
+vnc: 0.105
+hypervisor: 0.087
+register: 0.066
+files: 0.058
+user-level: 0.056
+kernel: 0.051
+risc-v: 0.021
+KVM: 0.013
+socket: 0.010
+assembly: 0.007
+
+[Feature request] qemu-img multi-threaded compressed image conversion
diff --git a/results/classifier/118/performance/81 b/results/classifier/118/performance/81
new file mode 100644
index 00000000..8c7dad8c
--- /dev/null
+++ b/results/classifier/118/performance/81
@@ -0,0 +1,31 @@
+performance: 0.800
+device: 0.800
+graphic: 0.712
+network: 0.477
+architecture: 0.450
+files: 0.351
+hypervisor: 0.343
+semantic: 0.342
+arm: 0.332
+boot: 0.321
+virtual: 0.317
+PID: 0.314
+i386: 0.273
+x86: 0.260
+peripherals: 0.225
+register: 0.223
+VMM: 0.188
+kernel: 0.175
+ppc: 0.162
+permissions: 0.161
+TCG: 0.160
+debug: 0.150
+mistranslation: 0.129
+vnc: 0.128
+user-level: 0.126
+assembly: 0.118
+socket: 0.101
+risc-v: 0.037
+KVM: 0.029
+
+[Feature request] qemu-img option about recompressing
diff --git a/results/classifier/118/performance/815 b/results/classifier/118/performance/815
new file mode 100644
index 00000000..3e2fb1d6
--- /dev/null
+++ b/results/classifier/118/performance/815
@@ -0,0 +1,31 @@
+performance: 0.911
+device: 0.610
+virtual: 0.545
+network: 0.535
+architecture: 0.376
+graphic: 0.358
+semantic: 0.345
+mistranslation: 0.150
+i386: 0.133
+x86: 0.126
+hypervisor: 0.092
+arm: 0.075
+files: 0.051
+debug: 0.049
+user-level: 0.039
+vnc: 0.035
+assembly: 0.032
+VMM: 0.030
+register: 0.029
+boot: 0.026
+risc-v: 0.025
+peripherals: 0.023
+ppc: 0.015
+permissions: 0.015
+TCG: 0.010
+socket: 0.007
+kernel: 0.006
+PID: 0.004
+KVM: 0.002
+
+Using spdk Vhost to accelerate QEMU, which QEMU version is the most appropriate?
diff --git a/results/classifier/118/performance/821 b/results/classifier/118/performance/821
new file mode 100644
index 00000000..e1b26e18
--- /dev/null
+++ b/results/classifier/118/performance/821
@@ -0,0 +1,31 @@
+performance: 0.861
+device: 0.845
+risc-v: 0.706
+user-level: 0.464
+boot: 0.442
+register: 0.371
+debug: 0.316
+ppc: 0.288
+graphic: 0.273
+vnc: 0.268
+hypervisor: 0.261
+arm: 0.258
+semantic: 0.238
+virtual: 0.175
+network: 0.161
+mistranslation: 0.143
+permissions: 0.141
+architecture: 0.135
+PID: 0.101
+VMM: 0.084
+socket: 0.056
+i386: 0.022
+peripherals: 0.018
+TCG: 0.018
+assembly: 0.007
+x86: 0.004
+files: 0.003
+kernel: 0.001
+KVM: 0.001
+
+[SOLVED] ReactOS video problems...
diff --git a/results/classifier/118/performance/849 b/results/classifier/118/performance/849
new file mode 100644
index 00000000..589753f4
--- /dev/null
+++ b/results/classifier/118/performance/849
@@ -0,0 +1,52 @@
+performance: 0.929
+peripherals: 0.895
+graphic: 0.885
+device: 0.676
+semantic: 0.637
+virtual: 0.596
+ppc: 0.372
+mistranslation: 0.322
+register: 0.320
+risc-v: 0.302
+boot: 0.296
+VMM: 0.287
+permissions: 0.269
+debug: 0.229
+vnc: 0.217
+PID: 0.205
+arm: 0.195
+architecture: 0.182
+user-level: 0.138
+KVM: 0.131
+kernel: 0.115
+hypervisor: 0.109
+TCG: 0.101
+assembly: 0.101
+socket: 0.100
+network: 0.060
+files: 0.053
+x86: 0.008
+i386: 0.003
+
+High mouse polling rate stutters some applications
+Description of problem:
+There are a couple of reports where moving the mouse slows down some applications, especially games:
+
+https://www.reddit.com/r/VFIO/comments/ect3sd/having_an_issue_with_my_vm_where_games_stutter/
+
+https://www.reddit.com/r/VFIO/comments/n9hwtg/game_fps_drop_on_mouse_input/
+
+https://www.reddit.com/r/VFIO/comments/ln1uwb/evdev_mouse_passthrough_with_1000hz_mouse_causes/
+
+https://www.reddit.com/r/VFIO/comments/se92rq/looking_for_advice_on_poor_gpu_passthrough/
+
+I myself am impacted by this mysterious issue as well. I'm not sure whether it is related to VFIO or QEMU or both, but I'm definitely sure it is some kind of regression, since I had no such issue before.
+Steps to reproduce:
+1. Do a GPU passthrough
+2. Get a mouse capable of a high polling rate like 1000 Hz; these are usually categorized as gaming mice
+3. Start any 3D application, including things like Unreal Engine 4 Editor or any game
+4. See mysterious stuttering
+Additional information:
+I'm using an AMD Ryzen 7 3700X CPU as the host, and I have made scripts that pin the VM's vCPUs to host cores on the same CCX, to minimize memory latency as much as possible. This alleviated some of the terrible lag, but not by much (roughly 11 FPS rising to 20 FPS while moving the mouse, which is still poor compared to 90+ FPS when the mouse is static).
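+
+The sort of pinning script I use (a minimal sketch; the libvirt domain name and core numbers are assumptions for illustration):
+```
+#!/bin/sh
+# Pin each vCPU of the "win10" domain onto a fixed host core (0-7).
+for vcpu in 0 1 2 3 4 5 6 7; do
+    virsh vcpupin win10 "$vcpu" "$vcpu"
+done
+```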
+
+I suspect there is something wrong with the USB subsystem.
diff --git a/results/classifier/118/performance/861 b/results/classifier/118/performance/861
new file mode 100644
index 00000000..b0857e23
--- /dev/null
+++ b/results/classifier/118/performance/861
@@ -0,0 +1,31 @@
+performance: 0.982
+device: 0.830
+KVM: 0.468
+network: 0.416
+arm: 0.407
+PID: 0.382
+register: 0.369
+permissions: 0.320
+peripherals: 0.310
+debug: 0.302
+graphic: 0.295
+semantic: 0.288
+boot: 0.284
+kernel: 0.251
+architecture: 0.232
+socket: 0.222
+hypervisor: 0.202
+files: 0.199
+ppc: 0.196
+TCG: 0.161
+risc-v: 0.150
+mistranslation: 0.128
+vnc: 0.126
+user-level: 0.114
+virtual: 0.103
+VMM: 0.086
+assembly: 0.035
+i386: 0.034
+x86: 0.032
+
+Using qemu+kvm is slower than using qemu in rv6 (xv6 Rust port)
diff --git a/results/classifier/118/performance/864 b/results/classifier/118/performance/864
new file mode 100644
index 00000000..6db3a8b5
--- /dev/null
+++ b/results/classifier/118/performance/864
@@ -0,0 +1,45 @@
+performance: 0.910
+hypervisor: 0.866
+virtual: 0.855
+device: 0.840
+graphic: 0.777
+semantic: 0.754
+architecture: 0.661
+user-level: 0.485
+network: 0.459
+debug: 0.448
+mistranslation: 0.414
+kernel: 0.385
+permissions: 0.361
+register: 0.346
+vnc: 0.346
+risc-v: 0.300
+peripherals: 0.283
+PID: 0.273
+socket: 0.257
+ppc: 0.244
+i386: 0.220
+x86: 0.219
+boot: 0.218
+arm: 0.208
+files: 0.193
+assembly: 0.163
+TCG: 0.140
+VMM: 0.131
+KVM: 0.056
+
+HVF virtual counter diverges from CLOCK_VIRTUAL when the host sleeps
+Description of problem:
+HVF's virtual counter diverges from `CLOCK_VIRTUAL` when the host sleeps, causing an inconsistency between Linux's system counter and everything else.
+
+HVF's virtual counter apparently relies on something similar to `mach_absolute_time`, which stops when the host sleeps and resumes after it wakes up. However, `CLOCK_VIRTUAL` is implemented with `mach_continuous_time`, which continues even while the host sleeps. Linux uses the virtual counter as the source of the system counter and sees inconsistencies between the system counter and the other devices.
+Steps to reproduce:
+1. Launch Fedora.
+2. Compare the time shown at the top of the guest display with the one at the top of the host display. The difference should be less than 2 minutes.
+3. Let the host sleep for 3 minutes.
+4. Compare the times again. The difference is now greater than 2 minutes (a scripted version of this check follows).
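+
+A scripted version of the same comparison (a sketch; it assumes the guest's sshd is forwarded to host port 2222):
+```
+date; ssh -p 2222 user@localhost date   # should differ by < 2 minutes
+sudo pmset sleepnow                     # sleep the host; wake it ~3 minutes later
+date; ssh -p 2222 user@localhost date   # guest now lags by roughly the sleep time
+```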
+Additional information:
+Here are the solutions I've come up with so far. There are trade-offs, but any of them should be better than the current situation. I'm happy to implement one if the maintainers decide which one is best or figure out a superior alternative.
+- Implement `cpus_get_virtual_clock` of `AccelOpsClass` with `mach_absolute_time`. This would make HVF inconsistent with the other accelerators. Linux also expects the virtual clock to be "continuous", and this option leaves the divergence from real time in place.
+- Request XNU `HOST_NOTIFY_CALENDAR_CHANGE` to update the virtual clock with the continuous time. The interface is undocumented.
+- Use `IORegisterForSystemPower` to update the virtual clock with the continuous time. It is undocumented whether this interface covers every case where `mach_absolute_time` and `mach_continuous_time` diverge, but it actually does if I read XNU's source code correctly.
diff --git a/results/classifier/118/performance/874 b/results/classifier/118/performance/874
new file mode 100644
index 00000000..be248292
--- /dev/null
+++ b/results/classifier/118/performance/874
@@ -0,0 +1,31 @@
+performance: 0.904
+network: 0.803
+architecture: 0.763
+device: 0.679
+risc-v: 0.625
+arm: 0.483
+ppc: 0.409
+boot: 0.389
+graphic: 0.382
+virtual: 0.243
+mistranslation: 0.231
+debug: 0.164
+semantic: 0.157
+vnc: 0.145
+PID: 0.127
+TCG: 0.116
+register: 0.102
+user-level: 0.095
+peripherals: 0.074
+permissions: 0.073
+socket: 0.056
+i386: 0.052
+files: 0.043
+hypervisor: 0.042
+VMM: 0.016
+kernel: 0.016
+x86: 0.010
+assembly: 0.004
+KVM: 0.002
+
+New Python QMP library races on NetBSD
diff --git a/results/classifier/118/performance/919 b/results/classifier/118/performance/919
new file mode 100644
index 00000000..31956f45
--- /dev/null
+++ b/results/classifier/118/performance/919
@@ -0,0 +1,35 @@
+performance: 0.942
+graphic: 0.872
+device: 0.680
+semantic: 0.600
+debug: 0.547
+arm: 0.292
+boot: 0.284
+network: 0.272
+register: 0.255
+risc-v: 0.251
+mistranslation: 0.197
+socket: 0.195
+vnc: 0.163
+peripherals: 0.162
+architecture: 0.141
+files: 0.127
+permissions: 0.112
+PID: 0.108
+VMM: 0.092
+ppc: 0.080
+user-level: 0.069
+TCG: 0.067
+kernel: 0.062
+virtual: 0.037
+hypervisor: 0.036
+i386: 0.029
+x86: 0.013
+assembly: 0.012
+KVM: 0.006
+
+Slow in Windows
+Description of problem:
+E.g. Win8.1 in QEMU on Windows is very slow, and other OSes are also very slow
+Steps to reproduce:
+Just run a QEMU instance
diff --git a/results/classifier/118/performance/928676 b/results/classifier/118/performance/928676
new file mode 100644
index 00000000..82421c15
--- /dev/null
+++ b/results/classifier/118/performance/928676
@@ -0,0 +1,72 @@
+performance: 0.899
+architecture: 0.816
+register: 0.618
+kernel: 0.590
+device: 0.571
+graphic: 0.532
+files: 0.475
+risc-v: 0.423
+vnc: 0.406
+x86: 0.400
+virtual: 0.346
+permissions: 0.314
+network: 0.262
+boot: 0.243
+PID: 0.241
+ppc: 0.238
+socket: 0.234
+TCG: 0.204
+arm: 0.194
+user-level: 0.189
+semantic: 0.189
+hypervisor: 0.187
+peripherals: 0.178
+VMM: 0.157
+debug: 0.114
+mistranslation: 0.111
+assembly: 0.101
+KVM: 0.037
+i386: 0.018
+
+QEMU does not support Westmere (Intel Xeon) CPU model
+
+Setting the CPU model to Westmere (Intel Xeon server CPU) is not possible.
+
+libvirt uses 'core2duo' as fallback:
+https://bugzilla.redhat.com/show_bug.cgi?id=708927
+
+
+$ qemu -cpu ?
+x86           [n270]
+x86         [athlon]
+x86       [pentium3]
+x86       [pentium2]
+x86        [pentium]
+x86            [486]
+x86        [coreduo]
+x86          [kvm32]
+x86         [qemu32]
+x86          [kvm64]
+x86       [core2duo]
+x86         [phenom]
+x86         [qemu64]
+
+$ qemu --version
+QEMU emulator version 1.0 (Debian 1.0+dfsg-3), Copyright (c) 2003-2008 Fabrice Bellard
+
+An application test with high CPU load gives these timing statistics:
+
+                      bare metal    virtual    percent
+X4560 CPU               50m28s      54m0s       107%
+X5690 (Westmere)        29m20s      38m0s       134%
+
+Westmere seems to be available in the latest version of QEMU:
+$ qemu-system-x86_64 -cpu ? | grep Westmere
+x86         Westmere  Westmere E56xx/L56xx/X56xx (Nehalem-C)
+==> Setting status to "Fix released" now.
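+
+For reference, selecting the model once available (an illustrative invocation):
+```
+qemu-system-x86_64 -enable-kvm -cpu Westmere -m 2048 -hda guest.img
+```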
+
diff --git a/results/classifier/118/performance/933 b/results/classifier/118/performance/933
new file mode 100644
index 00000000..538e736b
--- /dev/null
+++ b/results/classifier/118/performance/933
@@ -0,0 +1,56 @@
+performance: 0.885
+device: 0.852
+graphic: 0.814
+PID: 0.776
+files: 0.749
+architecture: 0.700
+socket: 0.697
+network: 0.685
+register: 0.680
+ppc: 0.677
+semantic: 0.663
+kernel: 0.648
+vnc: 0.640
+assembly: 0.620
+peripherals: 0.614
+x86: 0.597
+hypervisor: 0.584
+permissions: 0.581
+i386: 0.560
+risc-v: 0.560
+VMM: 0.515
+KVM: 0.487
+TCG: 0.473
+boot: 0.464
+arm: 0.426
+debug: 0.421
+user-level: 0.391
+virtual: 0.337
+mistranslation: 0.266
+
+Changing CD ROM medium sometimes fails with 'Tray of device is not open'
+Description of problem:
+QEMU reports that a CD ROM tray is not open when exchanging media:
+`unable to execute QEMU command 'blockdev-remove-medium': Tray of device 'ide0-1-0' is not open`
+
+We see the issue in upstream libvirt integration tests. However, this issue is a race and the reproducibility rate is <15%.
+Steps to reproduce:
+At a high level, this is what we do:
+1. eject medium that the machine was started with
+2. insert a different medium into the CD ROM
+
+Translating the above into QMP commands, this is what the test exercises (a hand-driven version of the sequence follows the list):
+1. blockdev-open-tray
+2. blockdev-remove-medium
+3. blockdev-del
+4. blockdev-close-tray
+5. blockdev-open-tray
+6. blockdev-remove-medium
+7. blockdev-add
+8. blockdev-insert-medium <<< This is where the test fails
+9. blockdev-close-tray
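+
+For reference, the same sequence can be driven by hand over a QMP socket (a sketch; the socket path, node name, and ISO path are illustrative, and 'ide0-1-0' is the device id from the error above):
+```
+# QEMU started with: -qmp unix:/tmp/qmp.sock,server,nowait
+socat - UNIX-CONNECT:/tmp/qmp.sock <<'EOF'
+{"execute": "qmp_capabilities"}
+{"execute": "blockdev-open-tray", "arguments": {"id": "ide0-1-0"}}
+{"execute": "blockdev-remove-medium", "arguments": {"id": "ide0-1-0"}}
+{"execute": "blockdev-add", "arguments": {"driver": "raw", "node-name": "cd1",
+  "file": {"driver": "file", "filename": "/path/to/new.iso"}}}
+{"execute": "blockdev-insert-medium", "arguments": {"id": "ide0-1-0", "node-name": "cd1"}}
+{"execute": "blockdev-close-tray", "arguments": {"id": "ide0-1-0"}}
+EOF
+```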
+Additional information:
+I bisected the code (3 times just to be sure since it's a race) and the following commit fell out of it:
+55adb3c45620c31f29978f209e2a44a08d34e2da
+
+I'm attaching QEMU trace events and a bunch of libvirt test logs (good and bad, for comparison). If you think of anything else I should provide in order to help with the analysis, please let me know which other options should be turned on. [qemu_traces.tar.gz](/uploads/32e48c92efce3484e552df063795af4d/qemu_traces.tar.gz)
diff --git a/results/classifier/118/performance/985 b/results/classifier/118/performance/985
new file mode 100644
index 00000000..b0d17f1f
--- /dev/null
+++ b/results/classifier/118/performance/985
@@ -0,0 +1,89 @@
+performance: 0.952
+architecture: 0.752
+boot: 0.719
+network: 0.700
+user-level: 0.600
+device: 0.594
+PID: 0.578
+graphic: 0.573
+x86: 0.498
+debug: 0.442
+virtual: 0.436
+semantic: 0.404
+ppc: 0.391
+mistranslation: 0.331
+assembly: 0.316
+kernel: 0.314
+permissions: 0.298
+files: 0.235
+VMM: 0.219
+hypervisor: 0.200
+socket: 0.199
+register: 0.198
+i386: 0.183
+TCG: 0.154
+peripherals: 0.152
+vnc: 0.083
+risc-v: 0.078
+KVM: 0.058
+arm: 0.057
+
+pkg_add is working very slow on NetBSD
+Description of problem:
+pkg_add works very slowly: it installs one package in ~30 minutes, although network speed is normal.
+Steps to reproduce:
+1. `wget https://cdn.netbsd.org/pub/NetBSD/NetBSD-9.2/images/NetBSD-9.2-amd64.iso`
+2. `qemu-img create -f qcow2 disk.qcow2 15G`
+3. Install
+```
+qemu-system-x86_64 -m 2048 -enable-kvm \
+  -drive if=virtio,file=disk.qcow2,format=qcow2 \
+  -netdev user,id=mynet0,hostfwd=tcp::7722-:22 \
+  -device e1000,netdev=mynet0 \
+  -cdrom NetBSD-9.2-amd64.iso
+```
+   Installation steps:
+   - 1) Boot Normally
+   - a) Installation messages in English
+   - a) unchanged
+   - a) Install NetBSD to hard disk
+   - b) Yes
+   - a) 15G
+   - a) GPT
+   - a) This is the correct geometry
+   - b) Use default partition sizes
+   - x) Partition sizes are ok
+   - b) Yes
+   - a) Use BIOS console
+   - b) Installation without X11
+   - a) CD-ROM / DVD / install image media
+   - Hit enter to continue
+   - a) configure network (select defaults here, perform autoconf)
+   - x) Finished configuring
+   - Hit enter to continue
+   - x) Exit Install System
+   - Close QEMU
+4. Run
+```
+ qemu-system-x86_64 -m 2048 \
+  -drive if=virtio,file=disk.qcow2,format=qcow2 \
+  -enable-kvm  \
+  -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:7722-:22 \
+  -device e1000,netdev=mynet0
+```
+5. Login as root
+6. In NetBSD
+```
+export PKG_PATH="http://cdn.NetBSD.org/pub/pkgsrc/packages/NetBSD/$(uname -p)/$(uname -r)/All/" && \
+pkg_add pkgin
+
+```
+You should see that each package's installation takes ~30 minutes.
+Additional information:
+NetBSD 9.2 was also tested on Debian 11 with QEMU 6.2.0 and showed the same slowness.
+
+NetBSD 7.1 and 8.1 were tested on openSUSE Tumbleweed and showed the same slowness.
+
+OpenBSD's pkg_add works correctly.
+
+I am not sure if it will help, but VirtualBox (at least 6.1) works correctly.
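+
+One variable worth isolating is the emulated NIC model (an assumption, not a confirmed cause): NetBSD 9 ships a virtio network driver (vioif), so step 4 can be re-run with virtio-net in place of e1000:
+```
+qemu-system-x86_64 -m 2048 -enable-kvm \
+  -drive if=virtio,file=disk.qcow2,format=qcow2 \
+  -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:7722-:22 \
+  -device virtio-net-pci,netdev=mynet0
+```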
diff --git a/results/classifier/118/performance/992067 b/results/classifier/118/performance/992067
new file mode 100644
index 00000000..035d66d5
--- /dev/null
+++ b/results/classifier/118/performance/992067
@@ -0,0 +1,63 @@
+performance: 0.975
+KVM: 0.938
+hypervisor: 0.921
+kernel: 0.920
+graphic: 0.916
+architecture: 0.905
+boot: 0.869
+vnc: 0.801
+debug: 0.792
+device: 0.786
+permissions: 0.784
+user-level: 0.781
+files: 0.772
+peripherals: 0.770
+assembly: 0.758
+mistranslation: 0.749
+risc-v: 0.732
+semantic: 0.717
+register: 0.702
+virtual: 0.691
+ppc: 0.664
+socket: 0.606
+x86: 0.603
+PID: 0.581
+network: 0.579
+VMM: 0.544
+i386: 0.318
+TCG: 0.315
+arm: 0.314
+
+Windows 2008R2 very slow cold boot when >4GB memory
+
+I've been having a consistent problem booting 2008R2 guests with 4096 MB of RAM or more. On the initial boot the KVM process starts out with a ~200 MB memory allocation and uses 100% of the CPUs allocated to it. The RES memory of the KVM process then rises slowly, by around 200 MB every few minutes, until it reaches its full allocation (several hours in some cases). While this is happening the guest will usually blue-screen with the message:
+
+A clock interrupt was not received on a secondary processor within the allocated time interval
+
+If I let the KVM process continue to run, it will eventually allocate the required memory and the guest will run at full speed, usually restarting after the blue screen and booting into startup repair. From there you can restart it and it will boot perfectly. Once booted, the guest has no performance issues at all.
+
+I've tried everything I could think of: removing PAE, playing with huge pages, different kernels, different userspaces, different systems, different backing file systems, different processor feature sets, with or without virtio, etc. My best theory is that the problem is caused by Windows 2008 zeroing out all of its memory on boot, and something is holding that up or slowing it to a crawl. The hosts always have memory free to boot the guest and are not using swap at all.
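+
+One experiment along the huge-pages line is to pre-allocate the guest RAM up front, so the guest's zeroing pass hits already-backed memory (a sketch for the QEMU of that era; mount point and sizes are illustrative):
+```
+mount -t hugetlbfs none /dev/hugepages
+echo 2048 > /proc/sys/vm/nr_hugepages        # reserve 4 GB of 2 MB pages
+qemu-kvm -m 4096 -mem-path /dev/hugepages -mem-prealloc -hda w2008r2.img
+```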
+
+Nothing so far has solved the issue. A few observations I've made:
+- Large-memory 2008R2 guests seem to boot fine (or with a small delay) when they are the first to boot on the host after a reboot.
+- Sometimes dropping the disk cache (echo 1 > /proc/sys/vm/drop_caches) makes them boot faster.
+
+
+The hosts I've tried:
+- All Nehalem based (5540, 5620 and 5660)
+- Host RAM of 48GB, 96GB and 192GB
+- Storage on NFS, Gluster and local (ext4, xfs and zfs)
+- QED, QCOW and RAW formats
+- Scientific Linux 6.1 with the standard kernel 2.6.32, 2.6.38 and 3.3.1
+- KVM userspaces 0.12, 0.14 and (currently) 0.15.1
+
+This should be resolved by using Hyper-V relaxed timers, which are in the latest development version of QEMU. You would need to add -cpu host,+hv_relaxed to the command line to verify this.
+
+Thanks for the quick reply,
+
+I pulled the latest version from Git, and on the first attempt it said the hv_relaxed feature was not present. I checked the source: the 'hv_relaxed' feature was not included in the 'feature_name' array, so the flag was being discarded before it could be enabled.
+
+Once added to the 'feature_name' array it was enabled, but the VM crashes on boot with a blue screen and the error message "Phase0_exception", followed by a reboot.
+
+Triaging old bug tickets... QEMU 0.12/0.14/0.15 is pretty outdated nowadays. Can you still reproduce this behavior with the latest version of QEMU? If not, I think we should close this bug...
+
diff --git a/results/classifier/118/performance/997631 b/results/classifier/118/performance/997631
new file mode 100644
index 00000000..4dbbd572
--- /dev/null
+++ b/results/classifier/118/performance/997631
@@ -0,0 +1,52 @@
+performance: 0.889
+graphic: 0.877
+x86: 0.724
+boot: 0.715
+semantic: 0.685
+architecture: 0.682
+device: 0.680
+mistranslation: 0.594
+kernel: 0.493
+user-level: 0.483
+ppc: 0.462
+register: 0.457
+socket: 0.443
+permissions: 0.440
+PID: 0.393
+vnc: 0.393
+hypervisor: 0.376
+files: 0.367
+risc-v: 0.358
+i386: 0.354
+assembly: 0.351
+network: 0.341
+debug: 0.318
+TCG: 0.283
+peripherals: 0.278
+VMM: 0.266
+arm: 0.236
+virtual: 0.222
+KVM: 0.097
+
+Windows 2008R2 very slow cold boot when 4 CPUs
+
+Hi,
+
+Well, I'm in a similar boat to #992067, but regardless of any memory settings.
+It takes "ages" to cold-boot Windows 2008R2 with qemu-1.0.1, qemu-1.0.50 and the latest-and-greatest from today (1.0.50 / qemu-1b3e76e). It eats up 400% host CPU load until the login prompt is shown on the console.
+
+Meanwhile I have tried a couple of settings with -cpu features: hv_spinlocks, hv_relaxed and hv_vapic.
+Due to some clock glitches I start qemu-system-x86_64 with "-no-hpet".
+
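+The sort of invocation this amounts to (a sketch; memory size, disk, and the hv_spinlocks value are illustrative):
+```
+qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 -no-hpet \
+  -cpu host,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
+  -hda w2008r2.img
+```
+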
+With 2 processors the system is up after 2 minutes; with 4 processors it takes almost 10 minutes. After a reset (warm start) the 4-processor system is up within about 20 seconds.
+
+Hints welcome, though once started, the system seems to operate "normally".
+
+Thanks in advance,
+
+Oliver.
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+