Diffstat
-rw-r--r--  results/classifier/108/other/49      16
-rw-r--r--  results/classifier/108/other/492     42
-rw-r--r--  results/classifier/108/other/493     22
-rw-r--r--  results/classifier/108/other/494     16
-rw-r--r--  results/classifier/108/other/495     16
-rw-r--r--  results/classifier/108/other/495566  47
-rw-r--r--  results/classifier/108/other/498417  55
-rw-r--r--  results/classifier/108/other/499     16
8 files changed, 230 insertions, 0 deletions
diff --git a/results/classifier/108/other/49 b/results/classifier/108/other/49
new file mode 100644
index 000000000..f87d5a5dd
--- /dev/null
+++ b/results/classifier/108/other/49
@@ -0,0 +1,16 @@
+device: 0.833
+performance: 0.759
+graphic: 0.594
+other: 0.586
+semantic: 0.542
+boot: 0.169
+permissions: 0.158
+network: 0.144
+debug: 0.118
+vnc: 0.056
+socket: 0.025
+PID: 0.019
+files: 0.014
+KVM: 0.002
+
+[Feature request] MDIO bus
diff --git a/results/classifier/108/other/492 b/results/classifier/108/other/492
new file mode 100644
index 000000000..5c15b2269
--- /dev/null
+++ b/results/classifier/108/other/492
@@ -0,0 +1,42 @@
+graphic: 0.771
+semantic: 0.708
+device: 0.673
+performance: 0.631
+PID: 0.619
+debug: 0.602
+files: 0.569
+vnc: 0.548
+socket: 0.546
+network: 0.519
+permissions: 0.422
+KVM: 0.380
+boot: 0.367
+other: 0.329
+
+[git] "qemu-system-x86_64: Parameter 'drive' is missing" when I tried to launch an existing VM in Virt-Manager.
+Description of problem:
+This bug is related in some way to bug #488.
+
+I cannot start an existing virtual machine using qemu-git.
+Additional information:
+```
+internal error: process exited while connecting to monitor: 2021-07-19T19:24:27.044654Z qemu-system-x86_64: Parameter 'drive' is missing
+
+Traceback (most recent call last):
+  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65, in cb_wrapper
+    callback(asyncjob, *args, **kwargs)
+  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 101, in tmpcb
+    callback(*args, **kwargs)
+  File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 57, in newfn
+    ret = fn(self, *args, **kwargs)
+  File "/usr/share/virt-manager/virtManager/object/domain.py", line 1329, in startup
+    self._backend.create()
+  File "/usr/lib/python3.9/site-packages/libvirt.py", line 1353, in create
+    raise libvirtError('virDomainCreate() failed')
+libvirt.libvirtError: internal error: process exited while connecting to monitor: 2021-07-19T19:24:27.044654Z qemu-system-x86_64: Parameter 'drive' is missing
+
+```
+
+My last working build was made using commit 9bef7ea9. Using Peter Maydell's commits as milestones, I noticed that commit 9aef0954 was the first one showing the bug.
+
+I'll try to bisect between these two commits and report ASAP. There are about 40 commits to verify.
diff --git a/results/classifier/108/other/493 b/results/classifier/108/other/493
new file mode 100644
index 000000000..6905eb69a
--- /dev/null
+++ b/results/classifier/108/other/493
@@ -0,0 +1,22 @@
+graphic: 0.849
+device: 0.810
+performance: 0.696
+semantic: 0.644
+files: 0.588
+vnc: 0.466
+other: 0.461
+socket: 0.369
+network: 0.353
+debug: 0.295
+boot: 0.272
+permissions: 0.230
+PID: 0.206
+KVM: 0.064
+
+RISC-V: Setting mtimecmp to -1 immediately triggers an interrupt
+Description of problem:
+When setting mtimecmp to -1, which should set a timer infinitely far in the future, a timer interrupt is triggered immediately. This happens for most values over 2^61. It is the same for both 32-bit and 64-bit, and for M-mode writing to mtimecmp directly and S-mode using OpenSBI.
+
+I have looked through the source code, and the problem is in the function `sifive_clint_write_mtimecmp` in the file `hw/intc/sifive_clint.c`. First, `muldiv64` multiplies `diff` by 100, causing an overflow (at least for -M virt; other machines might use a different timebase_freq). Then the unsigned `next` is passed to `timer_mod`, which takes a signed integer. In `timer_mod_ns_locked` the value is set to `MAX(next, 0)`, which means that if the MSB of `next` is set, the interrupt happens immediately. This makes it impossible to set timers more than 2^63 nanoseconds in the future.
+
+This problem basically only affects programs that disable timer interrupts by setting the next one infinitely far in the future. However, the SBI documentation specifically says that this is a valid approach, so it should be supported. Making values with the MSB set actually work would require changing timer code in QEMU, but it should be sufficient to cap `next` at the maximum signed value.
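+
+As an illustration of the suggested cap, here is a minimal sketch. The real code lives in `hw/intc/sifive_clint.c` and uses QEMU's `muldiv64()` and `timer_mod()`; the function and variable names below are made up, and the check is only meant to show the idea:
+```
+#include <stdint.h>
+
+#define NANOSECONDS_PER_SECOND 1000000000ULL
+
+/* Compute the host-timer expiry for a given mtimecmp, clamping instead of
+ * overflowing.  For -M virt, ns_per_tick is 100 (10 MHz timebase). */
+static int64_t next_timer_ns(uint64_t mtimecmp, uint64_t rtc_ticks,
+                             uint32_t timebase_freq)
+{
+    uint64_t diff_ticks = mtimecmp - rtc_ticks;
+    uint64_t ns_per_tick = NANOSECONDS_PER_SECOND / timebase_freq;
+
+    /* If diff_ticks * ns_per_tick would not fit into a signed 64-bit value,
+     * return INT64_MAX: "infinitely far away" instead of a wrapped value
+     * that makes timer_mod() fire the interrupt immediately. */
+    if (ns_per_tick != 0 && diff_ticks > (uint64_t)INT64_MAX / ns_per_tick) {
+        return INT64_MAX;
+    }
+    return (int64_t)(diff_ticks * ns_per_tick);
+}
+```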
diff --git a/results/classifier/108/other/494 b/results/classifier/108/other/494
new file mode 100644
index 000000000..1f0d4e1c3
--- /dev/null
+++ b/results/classifier/108/other/494
@@ -0,0 +1,16 @@
+device: 0.742
+performance: 0.684
+graphic: 0.638
+debug: 0.363
+semantic: 0.151
+permissions: 0.146
+network: 0.138
+files: 0.056
+boot: 0.043
+PID: 0.041
+socket: 0.028
+vnc: 0.025
+other: 0.021
+KVM: 0.002
+
+cmake crashes on qemu-alpha-user with Illegal Instruction
diff --git a/results/classifier/108/other/495 b/results/classifier/108/other/495
new file mode 100644
index 000000000..a855ef47c
--- /dev/null
+++ b/results/classifier/108/other/495
@@ -0,0 +1,16 @@
+other: 0.948
+network: 0.693
+device: 0.692
+performance: 0.540
+graphic: 0.288
+semantic: 0.260
+permissions: 0.223
+files: 0.192
+debug: 0.167
+boot: 0.119
+socket: 0.045
+PID: 0.033
+KVM: 0.020
+vnc: 0.009
+
+sdhci: Another way to trigger Assertion wpnum < sd->wpgrps_size failed
diff --git a/results/classifier/108/other/495566 b/results/classifier/108/other/495566
new file mode 100644
index 000000000..109d6ba14
--- /dev/null
+++ b/results/classifier/108/other/495566
@@ -0,0 +1,47 @@
+network: 0.758
+graphic: 0.640
+device: 0.597
+semantic: 0.569
+performance: 0.445
+PID: 0.413
+vnc: 0.403
+socket: 0.340
+other: 0.301
+boot: 0.262
+debug: 0.228
+files: 0.214
+permissions: 0.190
+KVM: 0.136
+
+qemu network adapter initialization fails when using macaddr=<multicast MAC-address>
+
+Not sure whether this is an ultra-strange, undocumented feature in qemu (or the Linux kernel) or really a bug: network card initialization fails if the first byte of the MAC address is not 00. The problem occurs at least with model=pcnet and model=rtl8139; in both cases the network adapter is not usable.
+
+How to reproduce:
+
+* Take a standard initrd/kernel (tested with Ubuntu Hardy)
+
+* Start qemu (command line below) and enter "modprobe pcnet32" at the prompt:
+qemu -name SetupTest -no-acpi -m 128 -drive file=/dev/null,if=ide,index=0 -net nic,macaddr=00:22:33:44:55:66,model=pcnet -net user -kernel vmlinuz-2.6.24-26-generic -initrd initrd.img-2.6.24-26-generic -append break=premount
+
+You will see "pcnet32 ... at 0x..., 00:22:33:44:55:66"
+
+* Do the same with MAC address 11:22:33:44:55:66
+qemu -name SetupTest -no-acpi -m 128 -drive file=/dev/null,if=ide,index=0 -net nic,macaddr=11:22:33:44:55:66,model=pcnet -net user -kernel vmlinuz-2.6.24-26-generic -initrd initrd.img-2.6.24-26-generic -append break=premount
+
+You will see "pcnet32 ... at 0x..., 00:00:00:00:00:00"
+
+The network adapter is non-functional: "ip link set eth0 up" does not report an error, but the interface does not work (which indicates at least some Linux kernel involvement).
+
+With the rtl8139 adapter, the MAC address in the guest is correct, but the adapter does not work either (which indicates qemu involvement).
+
+I tested other MAC addresses; it seems that the first byte has to be even. Would it make sense to issue a warning if a user requests a MAC address with an odd first byte? What is the special meaning of bit 0?
+
+That bit indicates whether the address is a multicast or unicast address.  A network card cannot have a multicast address.
+
+It would make sense to do sanity checking for this.
+
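+As a sketch of what such a sanity check could look like (not necessarily the exact code of the check referenced below; the helper name is made up):
+```
+#include <stdbool.h>
+#include <stdint.h>
+
+/* In an Ethernet MAC address, bit 0 of the first byte is the group
+ * (multicast) bit; a NIC's own station address must have it cleared,
+ * i.e. the first byte must be even. */
+static bool mac_is_multicast(const uint8_t mac[6])
+{
+    return (mac[0] & 0x01) != 0;
+}
+
+/* NIC configuration code could then reject e.g. 11:22:33:44:55:66
+ * (0x11 has bit 0 set) with a clear error message instead of silently
+ * producing a non-working adapter. */
+```
+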
+A check for multicast MAC addresses has been introduced by this commit here:
+http://git.qemu.org/?p=qemu.git;a=commitdiff;h=d60b20cf2ae6644b051
+So I think we can close this ticket now.
+
diff --git a/results/classifier/108/other/498417 b/results/classifier/108/other/498417
new file mode 100644
index 000000000..a5cc9a457
--- /dev/null
+++ b/results/classifier/108/other/498417
@@ -0,0 +1,55 @@
+performance: 0.748
+other: 0.524
+permissions: 0.461
+device: 0.451
+PID: 0.438
+graphic: 0.413
+semantic: 0.411
+vnc: 0.385
+network: 0.363
+socket: 0.350
+files: 0.343
+debug: 0.292
+KVM: 0.291
+boot: 0.268
+
+cache=writeback on disk image doesn't do write-back
+
+I noticed that qemu seems to have poor disk performance.  Here's a test that has miserable performance but which should be really fast:
+
+- Configure qemu to use the disk image with cache=writeback
+- Configure a 2GiB Linux VM on an 8GiB Linux host
+- In the VM, write a 4GiB file (dd if=/dev/zero of=/tmp/x bs=4K count=1M)
+- In the VM, read it back (dd if=/tmp/x of=/dev/null bs=4K count=1M)
+
+With writeback, the whole file should end up in the host pagecache.  So when I read it back, there should be no I/O to the real disk, and it should be really fast.  Instead, I see disk activity through the duration of the test, and the performance is roughly the native hard disk throughput (somewhat slower).
+
+I'm using version 0.11.1, and this is my command line:
+
+qemu-system-x86_64 -drive cache=writeback,index=0,media=disk,file=ubuntu.img -k en-us -m 2048 -smp 2 -vnc :3100 -usbdevice tablet -enable-kvm &
+
+Can you please try reproducing with 0.12.1?
+
+I'm using 0.12.1.2.  With this command line:
+
+qemu-system-x86_64 -drive cache=writeback,index=0,media=disk,file=ubuntu.img -k en-us -m 2048 -smp 2 -vnc :3102 -usbdevice tablet -enable-kvm &
+
+Here's what I'm seeing:
+
+With "dd if=/dev/zero of=./x bs=1024k count=1024", I get at best 29 MiB/sec.  
+
+This fits in the VM, so with "dd if=/dev/zero of=./x bs=1024k count=1024", which spills into the host, I get at best 25 MiB/sec.  (I am aware that with the expanding qcow2 disk image, there's also some overhead for allocating space during the write, so I run the same test multiple times and report the best result.)
+
+Since I'm using writeback, I would expect to see much faster write performance.  This is slower than the host's disk performance with fsync involved.  Is there an fsync going on here?  Is dd doing an fsync?  That should sync the VM's disk.  But then is there an ATA command to sync the drive that qemu interprets as a request to sync the disk image?  It would be "safe" to take this hint, but since I'm explicitly requesting writeback, I'm not expecting "safe".  I'm intentionally defeating "safe" for the sake of performance, with full knowledge of the risks.
+
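+To make my question concrete, this is roughly what I would expect a guest-issued flush to do under each cache mode (a simplified sketch of the expected semantics, not QEMU's actual block-layer code; the type and function names are made up):
+```
+#include <unistd.h>
+
+typedef enum { CACHE_WRITETHROUGH, CACHE_WRITEBACK } CacheMode;
+
+/* Expected handling of a guest flush request (e.g. an ATA FLUSH CACHE
+ * triggered by fsync in the guest) against the disk image. */
+static int handle_guest_flush(int image_fd, CacheMode mode)
+{
+    switch (mode) {
+    case CACHE_WRITETHROUGH:
+        /* Every write already went out synchronously, so there is
+         * nothing extra to do here. */
+        return 0;
+    case CACHE_WRITEBACK:
+        /* Writes sit in the host page cache; a guest-initiated flush
+         * is still honoured, which would explain the fsync-like write
+         * throughput I am seeing. */
+        return fdatasync(image_fd);
+    }
+    return 0;
+}
+```
+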
+With "dd if=./x of=/dev/null bs=1024k count=1024", it varies a LOT, but I've gotten as much as 4.9GiB/sec and as little as 2.5GiB/sec.  This is WAY better than what I was getting with 0.11.1.
+
+With "dd if=./x of=/dev/null bs=1024k count=4096", the best I get is 460MiB/sec.  That's still very good.  Could be better, but the overhead of emulating SATA has got to be really high under any circumstances.
+
+Using a VM imposes a lot of overhead that makes the guest OS quite a bit slower than running it directly on the hardware. I'm trying to make up for this by using the rest of my host's RAM (more or less) as a huge disk cache.
+
+
+QEMU 0.11 / 0.12 is pretty much outdated nowadays ... can you still reproduce this issue with the latest version of QEMU?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/499 b/results/classifier/108/other/499
new file mode 100644
index 000000000..7e84e6d3a
--- /dev/null
+++ b/results/classifier/108/other/499
@@ -0,0 +1,16 @@
+boot: 0.869
+device: 0.824
+performance: 0.695
+network: 0.617
+graphic: 0.386
+debug: 0.373
+PID: 0.339
+semantic: 0.297
+other: 0.233
+permissions: 0.208
+files: 0.190
+vnc: 0.185
+socket: 0.077
+KVM: 0.024
+
+booting a linux guest with qemu-system-sparc with icount enabled hangs