path: root/results/classifier/108/other/218
Diffstat (limited to '')
-rw-r--r-- results/classifier/108/other/218  | 16
-rw-r--r-- results/classifier/108/other/2180 | 51
-rw-r--r-- results/classifier/108/other/2181 | 18
-rw-r--r-- results/classifier/108/other/2182 | 16
-rw-r--r-- results/classifier/108/other/2184 | 68
-rw-r--r-- results/classifier/108/other/2185 | 16
-rw-r--r-- results/classifier/108/other/2188 | 25
-rw-r--r-- results/classifier/108/other/2189 | 29
8 files changed, 239 insertions, 0 deletions
diff --git a/results/classifier/108/other/218 b/results/classifier/108/other/218
new file mode 100644
index 00000000..92fbdda1
--- /dev/null
+++ b/results/classifier/108/other/218
@@ -0,0 +1,16 @@
+performance: 0.810
+network: 0.695
+device: 0.665
+debug: 0.624
+graphic: 0.369
+socket: 0.344
+files: 0.326
+semantic: 0.305
+PID: 0.112
+other: 0.079
+boot: 0.034
+vnc: 0.031
+permissions: 0.020
+KVM: 0.004
+
+qemu-storage-daemon --nbd-server fails with "too many connections" error
diff --git a/results/classifier/108/other/2180 b/results/classifier/108/other/2180
new file mode 100644
index 00000000..5ba62bc0
--- /dev/null
+++ b/results/classifier/108/other/2180
@@ -0,0 +1,51 @@
+debug: 0.842
+device: 0.808
+other: 0.766
+graphic: 0.744
+socket: 0.738
+files: 0.714
+KVM: 0.663
+vnc: 0.656
+PID: 0.623
+boot: 0.619
+network: 0.609
+permissions: 0.605
+performance: 0.566
+semantic: 0.550
+
+QEMU crashes when an interrupt is triggered whose descriptor is not in physical memory
+Description of problem:
+When an interrupt is triggered whose descriptor is mapped but not in physical memory, QEMU crashes with the following message:
+```
+**
+ERROR:../system/cpus.c:524:bql_lock_impl: assertion failed: (!bql_locked())
+Bail out! ERROR:../system/cpus.c:524:bql_lock_impl: assertion failed: (!bql_locked())
+Aborted (core dumped)
+```
+
+The given code triggers the bug by moving the IDT's base address, but it can also be triggered by any other method of relocating the IDT in physical memory, e.g. paging. With KVM enabled, this specific example loops forever instead of crashing; if the code is altered to use paging instead, an internal KVM error is reported and the VM is paused.
+Steps to reproduce:
+1. Assemble the code listed below using NASM: `nasm test.asm -o test.bin`
+2. Run the code using `qemu-system-i386 -drive format=raw,file=test.bin`. Note that the given code only triggers the bug if the guest has 2 gigabytes or less of physical memory.
+3. QEMU crashes.
+Additional information:
+NASM assembly of the code used:
+```
+bits 16
+org 0x7c00
+
+_start:
+    ; Disable interrupts and load new IDT
+    cli
+    o32 lidt [idtdesc]
+    ; Descriptor for INT 0 is in nonexistent physical memory, which crashes QEMU.
+    int 0x00
+
+idtdesc:
+    dw 0x3ff      ; Limit: 1 KiB for IDT
+    dd 0x80000000 ; Base: 2 GiB
+
+; Like most BIOSes, SeaBIOS requires this magic number to boot
+times 510-($-$$) db 0
+dw 0xaa55
+```
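For reference, the 6-byte operand loaded by `o32 lidt` above is a 16-bit limit followed by a 32-bit linear base. A minimal Python sketch (an illustrative helper, not part of the report or of QEMU) decoding the bytes assembled at `idtdesc`:

```python
import struct

def decode_idt_descriptor(blob: bytes):
    """Decode the 6-byte pseudo-descriptor loaded by `o32 lidt`."""
    # Little-endian: 16-bit limit, then 32-bit linear base address.
    limit, base = struct.unpack("<HI", blob)
    return limit, base

# The bytes assembled from `dw 0x3ff` / `dd 0x80000000` above:
limit, base = decode_idt_descriptor(struct.pack("<HI", 0x3FF, 0x80000000))
# A limit of 0x3ff covers (0x3ff + 1) / 8 = 128 eight-byte gate entries;
# the base of 0x80000000 places the IDT at the 2 GiB mark, which lies
# beyond guest RAM whenever the guest has 2 GiB or less of memory.
```

This is why step 2 of the reproduction notes that the guest needs at most 2 gigabytes of physical memory: with more RAM, the descriptor for INT 0 would land in memory that actually exists.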
diff --git a/results/classifier/108/other/2181 b/results/classifier/108/other/2181
new file mode 100644
index 00000000..4ab58971
--- /dev/null
+++ b/results/classifier/108/other/2181
@@ -0,0 +1,18 @@
+device: 0.826
+other: 0.654
+performance: 0.635
+network: 0.577
+boot: 0.418
+semantic: 0.369
+graphic: 0.356
+vnc: 0.267
+PID: 0.239
+debug: 0.141
+socket: 0.134
+KVM: 0.080
+permissions: 0.022
+files: 0.019
+
+-icount mips/gips/kips options in QEMU for a more advanced icount option
+Additional information:
+Changing IPS in QEMU affects the frequency of VGA updates, the duration of time before a key starts to autorepeat, and the measurement of BogoMips and other benchmarks.
diff --git a/results/classifier/108/other/2182 b/results/classifier/108/other/2182
new file mode 100644
index 00000000..a2a5af50
--- /dev/null
+++ b/results/classifier/108/other/2182
@@ -0,0 +1,16 @@
+network: 0.915
+performance: 0.313
+vnc: 0.284
+graphic: 0.259
+KVM: 0.197
+boot: 0.091
+socket: 0.074
+PID: 0.065
+other: 0.064
+semantic: 0.047
+device: 0.032
+files: 0.009
+debug: 0.006
+permissions: 0.003
+
+Replication and Network
diff --git a/results/classifier/108/other/2184 b/results/classifier/108/other/2184
new file mode 100644
index 00000000..e36f8ce1
--- /dev/null
+++ b/results/classifier/108/other/2184
@@ -0,0 +1,68 @@
+graphic: 0.829
+permissions: 0.769
+debug: 0.758
+other: 0.743
+semantic: 0.735
+KVM: 0.730
+performance: 0.723
+device: 0.713
+PID: 0.688
+vnc: 0.674
+network: 0.673
+socket: 0.661
+files: 0.653
+boot: 0.642
+
+NVMe differences between QEMU v4.1.0 and v8.2.1
+Description of problem:
+We are currently upgrading QEMU from v4.1.0 to v8.2.1. To keep compatibility between the two QEMU versions, we are adding ``-machine pc-q35-4.1``. One of our tests is to ensure that a guest that hibernated on the previous QEMU is able to resume on the new one.
+
+When resuming, we get the following error:
+
+```
+[    7.394709] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
+[    7.926188] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
+[    7.938235] Read-error on swap-device (259:0:4874880)
+[    7.938237] Read-error on swap-device (259:0:4620184)
+[    7.938240] Read-error on swap-device (259:0:5536464)
+[    7.938311] Read-error on swap-device (259:0:5006840)
+[    7.938316] Read-error on swap-device (259:0:5791888)
+[    7.938386] Read-error on swap-device (259:0:6579728)
+[    7.938391] Read-error on swap-device (259:0:5536680)
+[    7.938431] Read-error on swap-device (259:0:4877384)
+[    7.938434] Read-error on swap-device (259:0:5005376)
+[    7.938457] Read-error on swap-device (259:0:5269328)
+[    7.939200] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
+[    7.939267] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
+[    7.946359] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
+[    8.063186] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
+[    8.069556] Aborting journal on device nvme0n1p1-8.
+[    8.069561] Buffer I/O error on dev nvme0n1p1, logical block 262144, lost sync page write
+[    8.069564] JBD2: Error -5 detected when updating journal superblock for nvme0n1p1-8.
+[    8.081218] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
+[    8.081242] Buffer I/O error on dev nvme0n1p1, logical block 0, lost sync page write
+[    8.081247] EXT4-fs (nvme0n1p1): I/O error while writing superblock
+[    8.147693] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
+[    8.147753] Buffer I/O error on dev nvme0n1p1, logical block 0, lost sync page write
+[    8.163478] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
+[    8.174179] EXT4-fs (nvme0n1p1): I/O error while writing superblock
+[    8.198741] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:2: reading directory lblock 0
+[    8.214483] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:1: reading directory lblock 0
+[    8.230322] EXT4-fs error (device nvme0n1p1): __ext4_find_entry:1611: inode #1561: comm kworker/u8:2: reading directory lblock 0
+[    8.246249] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
+[    8.246269] Core dump to |/usr/share/apport/apport pipe failed
+[    8.246291] Core dump to |/usr/share/apport/apport pipe failed
+[    8.246336] Core dump to |/usr/share/apport/apport pipe failed
+[    8.246826] Core dump to |/usr/share/apport/apport pipe failed
+[    8.249232] Core dump to |/usr/share/apport/apport pipe failed
+[    8.249320] Core dump to |/usr/share/apport/apport pipe failed
+[    8.249880] Core dump to |/usr/share/apport/apport pipe failed
+```
+
+Digging through the NVMe code, I have found one [patch](https://lists.gnu.org/archive/html/qemu-devel/2021-01/msg04202.html) changing the BAR layout. It doesn't look like there is a way to select the previous BAR layout.
+
+When selecting the ``-machine``, I was expecting that the underlying HW (including devices) would not change. Can you clarify if hibernating from QEMU A and resuming to QEMU B is meant to be supported?
+Steps to reproduce:
+1. Start the guest with qemu v4.1.0 and an NVME disk
+2. Hibernate the OS
+3. Resume the guest with qemu v8.2.1
diff --git a/results/classifier/108/other/2185 b/results/classifier/108/other/2185
new file mode 100644
index 00000000..6f226216
--- /dev/null
+++ b/results/classifier/108/other/2185
@@ -0,0 +1,16 @@
+semantic: 0.621
+device: 0.454
+performance: 0.238
+permissions: 0.215
+network: 0.124
+graphic: 0.116
+boot: 0.102
+vnc: 0.097
+KVM: 0.087
+other: 0.075
+PID: 0.060
+debug: 0.038
+socket: 0.038
+files: 0.019
+
+spapr watchdog should honour watchdog-set-action and similar monitor commands
diff --git a/results/classifier/108/other/2188 b/results/classifier/108/other/2188
new file mode 100644
index 00000000..476cc154
--- /dev/null
+++ b/results/classifier/108/other/2188
@@ -0,0 +1,25 @@
+device: 0.787
+graphic: 0.731
+vnc: 0.558
+semantic: 0.450
+network: 0.417
+other: 0.408
+performance: 0.376
+boot: 0.373
+permissions: 0.359
+debug: 0.328
+socket: 0.324
+PID: 0.201
+files: 0.148
+KVM: 0.067
+
+virtio_gpu_gl_update_cursor_data() ignores the cursor resource's pixel format
+Description of problem:
+The function virtio_gpu_gl_update_cursor_data() ignores the pixel format of the resource it is reading from: it simply memcpy()s the pixel data. This works fine only if the guest OS and the display backend happen to use the same pixel format.
+
+The SDL backend seems to use a different pixel format from the GTK display backend, so you get the correct colours in one but not the other.
+Steps to reproduce:
+1. Run a VM using Virtio GPU using the GTK backend. Set the guest OS' mouse pointer to one that's red instead of white, and note the mouse pointer's actual colour
+2. Now run the same VM using the SDL display backend. Check the colour of the mouse pointer (that should be red)
+
+NOTE: The choice of guest OS shouldn't matter.
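A fix would need a format-aware copy rather than a raw memcpy(). As an illustration only (a Python sketch, not QEMU's actual code), swizzling BGRA cursor pixels into RGBA:

```python
def bgra_to_rgba(data: bytes) -> bytes:
    """Swap the blue and red channels of each 4-byte BGRA pixel."""
    out = bytearray(data)
    for i in range(0, len(out), 4):
        out[i], out[i + 2] = out[i + 2], out[i]
    return bytes(out)

# One opaque red pixel: BGRA (0, 0, 255, 255) becomes RGBA (255, 0, 0, 255).
# A raw memcpy() would instead hand the backend the unconverted bytes,
# which an RGBA display interprets as blue -- the wrong-colour cursor above.
red_rgba = bgra_to_rgba(bytes([0, 0, 255, 255]))
```

The real fix would have to consult the resource's declared virtio-gpu format and the backend's expected format rather than hard-coding one conversion; this sketch only shows why a blind byte copy produces swapped colours.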
diff --git a/results/classifier/108/other/2189 b/results/classifier/108/other/2189
new file mode 100644
index 00000000..54b30551
--- /dev/null
+++ b/results/classifier/108/other/2189
@@ -0,0 +1,29 @@
+network: 0.903
+graphic: 0.869
+performance: 0.862
+device: 0.841
+vnc: 0.807
+files: 0.727
+PID: 0.699
+socket: 0.648
+semantic: 0.607
+other: 0.576
+boot: 0.527
+debug: 0.459
+permissions: 0.451
+KVM: 0.430
+
+vhost_user: when the configured queue count of a vhost-user NIC exceeds max_queues, the virtual machine stays paused
+Description of problem:
+When the virtual machine uses a vhost-user network card and the number of queues configured for the card exceeds the maximum number of supported queues, the virtual machine fails to start and stays in the paused state.
+The virtual machine log file keeps printing "qemu-system-x86_64: -netdev vhost-user,chardev=charnet0,queues=5,id=hostnet0: you are asking more queues than supported: 4".
+Steps to reproduce:
+1. Configure vhost-user network cards for the VM and use multiple queues.
+2. In the VM XML file, set the number of NIC queues greater than the maximum number supported by the VM, i.e. the number of vCPUs of the VM.
+3. Run the "virsh create VM_xml_file" command to start the VM.
+Additional information:
+According to normal logic, if the number of configured vhost-user NIC queues exceeds max_queues, the qemu process should be stopped rather than leaving the virtual machine paused.
+I am confused about this patch: https://github.com/qemu/qemu/commit/c89804d674e4e3804bd3ac1fe79650896044b4e8
+When vhost_user_start is called in net_vhost_user_event and queues > max_queues in vhost_user_start, the process remains stuck in the do...while loop.
+/label ~"kind::Bug"