| author | Christian Krinitsin <mail@krinitsin.com> | 2025-07-03 07:27:52 +0000 |
|---|---|---|
| committer | Christian Krinitsin <mail@krinitsin.com> | 2025-07-03 07:27:52 +0000 |
| commit | d0c85e36e4de67af628d54e9ab577cc3fad7796a | |
| tree | f8f784b0f04343b90516a338d6df81df3a85dfa2 /results/classifier/gemma3:12b/kvm | |
| parent | 7f4364274750eb8cb39a3e7493132fca1c01232e | |
add deepseek and gemma results
Diffstat (limited to 'results/classifier/gemma3:12b/kvm')
397 files changed, 20047 insertions, 0 deletions
diff --git a/results/classifier/gemma3:12b/kvm/1002 b/results/classifier/gemma3:12b/kvm/1002 new file mode 100644 index 00000000..d700efb3 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1002 @@ -0,0 +1,2 @@ + +qemu-system-aarch64: Synchronous Exception with smp > 1 (on M1 running Asahi Linux with KVM) diff --git a/results/classifier/gemma3:12b/kvm/1003 b/results/classifier/gemma3:12b/kvm/1003 new file mode 100644 index 00000000..8e76c562 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1003 @@ -0,0 +1,22 @@ + +"Cannot allocate memory" when boots a VM > 1026GB memory with -accel kvm +Description of problem: +I can boot an empty VM using command `qemu-system-x86_64 -m 1026G -accel kvm -vnc :1` or `qemu-system-x86_64 -m 8T -vnc :1` + +But when I use `qemu-system-x86_64 -m 1027G -accel kvm -vnc :1`, it will not boot: + +``` +root@debian11:~# qemu-system-x86_64 -m 1027G -accel kvm -vnc :1 +qemu-system-x86_64: kvm_set_user_memory_region: KVM_SET_USER_MEMORY_REGION failed, slot=1, start=0x100000000, size=0x10000000000: Cannot allocate memory +kvm_set_phys_mem: error registering slot: Cannot allocate memory +Aborted +``` + +Which means, with `-accel kvm`, it only can boot a VM which memory <= 1026G, but without these args, it can boot whatever you want. +Steps to reproduce: +1. sysctl vm.overcommit_memory=1 # enable overcommit first +2. qemu-system-x86_64 -m 1027G -accel kvm -vnc :1 +Additional information: +The qemu I use is compiled from the latest source, not the package provided by debian. + +Hardware is `PowerEdge R630` with `E5-2630 v4` * 2, 128G physical RAM. diff --git a/results/classifier/gemma3:12b/kvm/1004408 b/results/classifier/gemma3:12b/kvm/1004408 new file mode 100644 index 00000000..43f97fd7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1004408 @@ -0,0 +1,35 @@ + +BUG: Soft Lockup - CPU#0 stuck for 22s! [qemu-system-x86: 31867] + +Environment: +------------------- + * Upstream git version: qemu-kvm-1.1-rc2-4-g3fd9fed + * Host Kernel: Mainline Kernel - 3.4.0 x86_64 GNU/Linux (Arch: x86_64) + * CPU model: Intel(R) Xeon(R) CPU X5570 @ 2.93GHz + * Guest OS: Red Hat Enterprise Linux Server release 6.2 + * Guest Kernel: 2.6.32-220.el6.x86_64 + * Qemu-command line: +/usr/local/bin/qemu-system-x86_64 -name 'vm1' -nodefaults -monitor unix:'/tmp/monitor-humanmonitor1-20120525-214210-Zua6',server,nowait -serial unix:'/tmp/serial-20120525-214210-Zua6',server,nowait -device ich9-usb-uhci1,id=usb1 -drive file='/tmp/kvm_autotest_root/images/rhel62-64.qcow2',index=0,if=ide,cache=none -device rtl8139,netdev=idvVySvg,mac='9a:6d:16:b9:b5:06',id='idiX1NmG' -netdev tap,id=idvVySvg,fd=21 -m 7198 -smp 2 -device usb-tablet,id=usb-tablet1,bus=usb1.0 -vnc :0 -vga std + +The qemu is started through autotest. + +Description: +----------------- + +While running the cgroup test through autotest, the host was hung and was not responding. When viewed through serial console, found the error "BUG: Soft lockup" error as attached in the screenshot 1. + +There are no errors displayed in /var/log/messages (no call trace) and in dmesg.* +There is a call trace seen in serial console, which is show in screenshot 2. + +Steps to reproduce: +---------------------------- +Currently am not able to consistently reproduce this error. However when I tried to reproduce it again by running the cgroup test, found another error from syslogd as shown below + +"Message from syslogd@phx3 at May 25 21:56:04 ... 
+ kernel:Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 3" + +So this time I got a hard Lockup error. Attached is the screenshot of the same. (screenshot-3, see the message at the bottom of the screen). This time the cgroup test had completed. + +Please let me know if you require more info on this. + +-prem \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1008 b/results/classifier/gemma3:12b/kvm/1008 new file mode 100644 index 00000000..18d4e83e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1008 @@ -0,0 +1,21 @@ + +nested virtualisation with old host kernel, qemu 7.0.0 broken +Description of problem: +``` +$ qemu-system-x86_64 -enable-kvm -nographic +qemu-system-x86_64: error: failed to set MSR 0xc0000104 to 0x100000000 +qemu-system-x86_64: ../target/i386/kvm/kvm.c:2996: kvm_buf_set_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed. +Aborted (core dumped) + +$ +``` +Steps to reproduce: +1. (hardware) Host 1 running kernel 5.10 with nested kvm enabled +2. (virtual) Host 2, with qemu 7.0.0 installed +3. In the inner/virtual host, run: `qemu-system-x86 -enable-kvm -nographic` +Additional information: +It is fixed by using either a more up-to-date kernel version on the hardware/outer host (5.17.x for example), or by reverting to qemu 6.2.0 in the virtual/inner host. + +I have also reproduced this with latest qemu master, commit 731340813fdb4cb8339edb8630e3f923b7d987ec. + +**Reverting commit 3e4546d5bd38a1e98d4bd2de48631abf0398a3a2 also fixes the issue.** diff --git a/results/classifier/gemma3:12b/kvm/101 b/results/classifier/gemma3:12b/kvm/101 new file mode 100644 index 00000000..f577255f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/101 @@ -0,0 +1,2 @@ + +Running a virtual machine on a Haswell system produces machine check events diff --git a/results/classifier/gemma3:12b/kvm/1013714 b/results/classifier/gemma3:12b/kvm/1013714 new file mode 100644 index 00000000..75342da2 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1013714 @@ -0,0 +1,17 @@ + +Data corruption after block migration (LV->LV) + +We quite frequently use the live block migration feature to move VM's between nodes without downtime. These sometimes result in data corruption on the receiving end. It only happens if the VM is actually doing I/O (doesn't have to be all that much to trigger the issue). + +We use logical volumes and each VM has two disks. We use cache=none for all VM disks. + +All guests use virtio (a mix of various Linux distro's and Windows 2008R2). + +We currently have two stacks in use and have seen the issue on both of them: + +Fedora - qemu-kvm 0.13 +Scientific Linux 6.2 (RHEL derived) - qemu-kvm package 0.12.1.2 + +Even though we don't run the most recent versions of KVM I highly suspect this issue is still unreported and that filing a bug is therefore appropriate. (There doesn't seem to be any similar bug report in launchpad or RedHat's bugzilla and nothing related in change logs, release notes and git commit logs.) + +I have no idea where to look or where to start debugging this issue, but if there is any way I can provide useful debug information please let me know. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1021 b/results/classifier/gemma3:12b/kvm/1021 new file mode 100644 index 00000000..81b37e13 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1021 @@ -0,0 +1,10 @@ + +nVMX: QEMU does not clear nVMX state through KVM(L0) when guest(L2) trigger a reboot event through I/O-Port(0xCF9) +Description of problem: +# +Steps to reproduce: +Guest(L2) write 0xCF9 to trigger a platform reboot. + +# +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/1023 b/results/classifier/gemma3:12b/kvm/1023 new file mode 100644 index 00000000..6c0f575c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1023 @@ -0,0 +1,61 @@ + +TCG & LA57 (5-level page tables) causes intermittent triple fault when setting %CR3 +Description of problem: +Enabling LA57 (5-level page tables) + TCG causes an intermittent triple fault when the kernel loads %cr3 in preparation for jumping to protected mode. It is quite rare, only happening on perhaps 1 in 20 runs. + +The observed behaviour for most users is that we see SeaBIOS messages, and no kernel messages, and qemu exits. (Triple fault in TCG code causes qemu to reset the virtual CPU, and we are using `-no-reboot` so that causes qemu to exit). + +There's a simple reproducer below. I enabled qemu -d options to capture the full instruction traces which can be found here: + +http://oirase.annexia.org/tmp/fullexec-failed (error case) +http://oirase.annexia.org/tmp/fullexec-good (successful run) + +I also added an `abort()` into qemu after the triple fault message in order to capture a stack trace, which can be found here: https://bugzilla.redhat.com/show_bug.cgi?id=2082806#c8 +Steps to reproduce: +1. Save the following script into a file, adjusting the two variables at the top as appropriate: + +``` +#!/bin/bash - + +# Point this to any kernel in /boot: +kernel=/boot/vmlinuz-4.18.0-387.el8.x86_64 + +# Point this to qemu: +qemu=/usr/libexec/qemu-kvm +#qemu=/home/rjones/d/qemu/build/qemu-system-x86_64 + +log=/tmp/log + +cpu=max +#cpu=max,la57=off + +while $qemu \ + -global virtio-blk-pci.scsi=off \ + -no-user-config \ + -nodefaults \ + -display none \ + -machine accel=tcg,graphics=off \ + -cpu "$cpu" \ + -m 2048 \ + -no-reboot \ + -rtc driftfix=slew \ + -no-hpet \ + -global kvm-pit.lost_tick_policy=discard \ + -kernel $kernel \ + -object rng-random,filename=/dev/urandom,id=rng0 \ + -device virtio-rng-pci,rng=rng0 \ + -device virtio-serial-pci \ + -serial stdio \ + -append "panic=1 console=ttyS0" >& $log && + grep -sq "Linux version" $log; do + echo -n . +done +``` + +2. Run the script. It will run qemu many times, checking that it reaches the kernel. +3. Eventually the script may exit. +4. Check `/tmp/log` and see if you only see SeaBIOS messages. +5. Modify the script to add `-cpu max,la57=off` and the error will stop happening. +Additional information: +Downstream bug report: https://bugzilla.redhat.com/show_bug.cgi?id=2082806 +LA57 was enabled here: https://gitlab.com/qemu-project/qemu/-/issues/661 diff --git a/results/classifier/gemma3:12b/kvm/1034423 b/results/classifier/gemma3:12b/kvm/1034423 new file mode 100644 index 00000000..669ba1e8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1034423 @@ -0,0 +1,61 @@ + +Guests running OpenIndiana (and relatives) fail to boot on AMD hardware + +First observed with OpenSolaris 2009.06, and also applies to the latest OpenIndiana release. + +Version: qemu-kvm 1.1.1 + +Hardware: + +2 x AMD Opteron 6128 8-core processors, 64GB RAM. 
+ +These guests boot on equivalent Intel hardware. + +To reproduce: + +qemu-kvm -nodefaults -m 512 -cpu host -vga cirrus -usbdevice tablet -vnc :99 -monitor stdio -hda drive.img -cdrom oi-dev-151a5-live-x86.iso -boot order=dc + +I've tested with "-vga std" and various different emulated CPU types, to no effect. + +What happens: + +GRUB loads, and offers multiple boot options, but none work. Some kind of kernel panic flies by very fast before restarting the VM, and careful use of the screenshot button reveals that it reads as follows: + +panic[cpu0]/thread=fec22de0: BAD TRAP: type=8 (#df Double fault) rp=fec2b48c add r=0 + +#df Double fault +pid=0, pc=0xault +pid=0, pc=0xfe800377, sp=0xfec40090, eflags=0x202 +cr0: 80050011<pg,wp,et,pe> cr4:b8<pge,pae,pse,de> +cr2: 0cr3: ae2f000 + gs: 1b0 fs: 0 es: 160 ds: 160 + edi: 0 esi: 0 ebp: 0 esp: fec2b4c4 + ebx: c0010015 edx: 0 ecx: 0 eax: fec40400 + trp: 8 err: 0 eip: fe800377 cs: 158 + efl: 202 usp: fec40090 ss: 160 +tss.tss_link: 0x0 +tss.tss_esp0: 0x0 +tss.tss_ss0: 0x160 +tss.tss_esp1: 0x0 +tss.tss_ss1: 0x0 +tss.tss esp2: 0x0 +tss.tss_ss2: 0x0 +tss.tss_cr3: 0xae2f000 +tss.tss_eip: 0xfec40400 +tss.tss_eflags: 0x202 +tss.tss_eax: 0xfec40400 +tss.tss_ebx: 0xc0010015 +tss.tss_ecx: 0xc0010000 +tss.tss_edx: 0x0 +tss.tss_esp: 0xfec40090 + +Warning - stack not written to the dumpbuf +fec2b3c8 unix:due+e4 (8, fec2b48c, 0, 0) +fec2b478 unix:trap+12fa (fec2b48c, 0, 0) +fec2b48c unix:_cmntrap+7c (1b0, 0, 160, 160, 0) + +If there's any more, I haven't managed to catch it. + +Solaris 11 does not seem to suffer from the same issue, although the first message that appears at boot (after the version info) is "trap: Unkown trap type 8 in user mode". Could be related? + +As always, thanks in advance and please let me know if I can help to test, or provide any more information. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1035 b/results/classifier/gemma3:12b/kvm/1035 new file mode 100644 index 00000000..6f8d3412 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1035 @@ -0,0 +1,17 @@ + +Hyper-V on KVM does not work on AMD CPUs +Description of problem: +Can not enable hytper-v on KVM on AMD 3970x +``` +[ 3743.647780] SVM: kvm [17094]: vcpu0, guest rIP: 0xfffff8125288d7d7 unimplemented wrmsr: 0xc0010115 data 0x0 +[ 3744.014046] SVM: kvm [17094]: vcpu1, guest rIP: 0xfffff8125288d7d7 unimplemented wrmsr: 0xc0010115 data 0x0 +[ 3744.016101] SVM: kvm [17094]: vcpu2, guest rIP: 0xfffff8125288d7d7 unimplemented wrmsr: 0xc0010115 data 0x0 +[ 3744.018011] SVM: kvm [17094]: vcpu3, guest rIP: 0xfffff8125288d7d7 unimplemented wrmsr: 0xc0010115 data 0x0 +[ 3744.020032] SVM: kvm [17094]: vcpu4, guest rIP: 0xfffff8125288d7d7 unimplemented wrmsr: 0xc0010115 data 0x0 +[ 3744.021834] SVM: kvm [17094]: vcpu5, guest rIP: 0xfffff8125288d7d7 unimplemented wrmsr: 0xc0010115 data 0x0 +[ 3744.023644] SVM: kvm [17094]: vcpu6, guest rIP: 0xfffff8125288d7d7 unimplemented wrmsr: 0xc0010115 data 0x0 +[ 3744.025478] SVM: kvm [17094]: vcpu7, guest rIP: 0xfffff8125288d7d7 unimplemented wrmsr: 0xc0010115 data 0x0 +``` +Additional information: +Related issue: +https://bugzilla.kernel.org/show_bug.cgi?id=203477 diff --git a/results/classifier/gemma3:12b/kvm/1042561 b/results/classifier/gemma3:12b/kvm/1042561 new file mode 100644 index 00000000..127c7c1a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1042561 @@ -0,0 +1,35 @@ + +Guest has no "xsave" feature with parameter "-cpu qemu64,+xsave" in qemu command line. 
+ +Environment: +------------ +Host OS (ia32/ia32e/IA64):ia32e +Guest OS (ia32/ia32e/IA64):ia32e +Guest OS Type (Linux/Windows):Linux +kvm.git Commit:1a577b72475d161b6677c05abe57301362023bb2 +qemu-kvm Commit:98f1f30a89901c416e51cc70c1a08d9dc15a2ad4 +Host Kernel Version:3.5.0-rc1 +Hardware:Romley-EP, Ivy-bridge + + +Bug detailed description: +-------------------------- +Guest has no "xsave" feature when create guest with parameter "-cpu qemu64,+xsave,+avx" in qemu command line, but the guest has avx feature. +When starting a guest with parameter "-cpu host" in qemu command line, the guest has 'avx' and 'xsave' features (as /proc/cpuinfo shows). + +This is not a recent regression; it exists for a long time. + +Reproduce steps: +---------------- +1. qemu-system-x86_64 -m 1024 -smp 2 -hda rhel6u3.img -cpu qemu64,+xsave +2. cat /proc/cpuinfo | grep xsave ( check guest's xsave feature) + +Current result: +---------------- +The guest has no xsave feature + +Expected result: +---------------- +The guest has xsave feature + +Basic root-causing log: \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1045 b/results/classifier/gemma3:12b/kvm/1045 new file mode 100644 index 00000000..385c8357 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1045 @@ -0,0 +1,27 @@ + +When a break point is set, nested virtualization sees "kvm_queue_exception: Assertion `!env->exception_has_payload' failed." +Description of problem: +I am debugging XMHF and LHV using QEMU + KVM. I found that if I set a break point using GDB, QEMU will crash when LHV is booting. The message is +``` +qemu-system-i386: ../../../target/i386/kvm/kvm.c:678: kvm_queue_exception: Assertion `!env->exception_has_payload' failed. +``` + +The address of the break point is arbitrary. The break point does not need to hit. So I chose 0 as the address in this bug report. +Steps to reproduce: +1. Start QEMU using `qemu-system-i386 -m 512M -gdb tcp::1234 -smp 2 -cpu Haswell,vmx=yes -enable-kvm -serial stdio -drive media=disk,file=1.img,index=1 -drive media=disk,file=2.img,index=2 -S` +2. In another shell, start GDB using `gdb --ex 'target remote :::1234' --ex 'hb *0' --ex c` +3. See many serial output lines. The tail of the output is + ``` + CPU #0: vcpu_vaddr_ptr=0x01e06080, esp=0x01e11000 + CPU #1: vcpu_vaddr_ptr=0x01e06540, esp=0x01e15000 + BSP(0x00): Rallying APs... + BSP(0x00): APs ready, doing DRTM... + LAPIC base and status=0xfee00900 + Sending INIT IPI to all APs... + ``` +4. See assertion error in QEMU + ``` + qemu-system-i386: ../target/i386/kvm/kvm.c:645: kvm_queue_exception: Assertion `!env->exception_has_payload' failed. + ``` +Additional information: +This bug was first incorrectly filed in KVM's bug tracker at <https://bugzilla.kernel.org/show_bug.cgi?id=216002>. diff --git a/results/classifier/gemma3:12b/kvm/1046 b/results/classifier/gemma3:12b/kvm/1046 new file mode 100644 index 00000000..98e35673 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1046 @@ -0,0 +1,13 @@ + +Using more than 2G of RAM on armv7l guest with RPI4 +Description of problem: +I was able to run my armv7l guest on RPI4 8G using qemu 6.2, but on 7.0 it doesn't work: +`qemu-kvm: Addressing limited to 32 bits, but memory exceeds it by 3221225472 bytes`. + +The only reference I found is this issue: https://gitlab.com/qemu-project/qemu/-/issues/903 +Steps to reproduce: +1. `-M virt,highmem=off,gic-version=host,accel=kvm` +2. `-cpu host,aarch64=off` +3. 
`-m 6G` +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/1047470 b/results/classifier/gemma3:12b/kvm/1047470 new file mode 100644 index 00000000..07d46bef --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1047470 @@ -0,0 +1,60 @@ + +qemu/kvm hangs reading from serial console + +This is for a qemu-kvm running on RHEL 5, so it's pretty old, +but i think the problem still exists in 1.2 + +We have conman running on our hosts, connecting to the +kvm/qemu's using + virsh console +which just opens up the console /dev/pts/slave that qemu +opens up when run with options + -nographic + -serial mon:pty + +Sometimes virsh console exits and then qemu locks up. +My guess is that something like this happens: + +virsh console exits +qemu does a select() on /dev/ptmx (and other FDs) +select() returns the FD of /dev/ptmx in the read-fdset +qemu does a read() +read() returns -1 (EIO) +qemu does other stuff for a while +select() ... /dev/ptmx +read() .. EIO +other stuff +select() ... read() ... select() ... read() ... select() +conman starts a new virsh console that connects +qemu does a read() +read() blocks b/c there is now a writer on the tty slave + +So i don't see any way around this, given the sorta rudi- +mentary semantics of TTY IO on Linux (not that i know of +any platform that does it better ... ?), except ... + +maybe qemu should + fcntl(master_fd, F_SETFL, flags | O_NONBLOCK) +in qemu-char.c:qemu_char_open_pty() +and be prepared to handle E_WOULDBLOCK|E_AGAIN in +qemu-char.c:fd_chr_read() ... ? + +--buck + +[*] i think, b/c in the old version we are running, sometimes + the guest spits out the + ^] + character to its console, and virsh console reads it and + doesn't check to see if its from stdin or the pty and exits, + which, i think, can be fixed like this: + +--- libvirt-0.8.2/tools/console.c.ctrl_close_bracket_handling_fix 2012-09-06 10:30:43.606997191 -0400 ++++ libvirt-0.8.2/tools/console.c 2012-09-06 10:34:52.154000464 -0400 +@@ -155,6 +155,7 @@ int vshRunConsole(const char *tty) { + + /* Quit if end of file, or we got the Ctrl-] key */ + if (!got || ++ fds[i].fd == STDIN_FILENO && + (got == 1 && + buf[0] == CTRL_CLOSE_BRACKET)) + goto done; \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1047576 b/results/classifier/gemma3:12b/kvm/1047576 new file mode 100644 index 00000000..9183d8f4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1047576 @@ -0,0 +1,55 @@ + +qemu unittest emulator failure on latest git master + +Running the emulator unittest, using the cmdline: + +16:01:30 INFO | Running emulator +16:01:30 INFO | Running qemu command (reformatted): +16:01:30 INFO | /home/lmr/Code/autotest.git/autotest/client/tests/virt/kvm/qemu +16:01:30 INFO | -S +16:01:30 INFO | -name 'unittest_vm' +16:01:30 INFO | -nodefaults +16:01:30 INFO | -chardev socket,id=hmp_id_humanmonitor1,path=/tmp/monitor-humanmonitor1-20120907-155940-WomlFZY3,server,nowait +16:01:30 INFO | -mon chardev=hmp_id_humanmonitor1,mode=readline +16:01:30 INFO | -chardev socket,id=serial_id_20120907-155940-WomlFZY3,path=/tmp/serial-20120907-155940-WomlFZY3,server,nowait +16:01:30 INFO | -device isa-serial,chardev=serial_id_20120907-155940-WomlFZY3 +16:01:30 INFO | -chardev socket,id=seabioslog_id_20120907-155940-WomlFZY3,path=/tmp/seabios-20120907-155940-WomlFZY3,server,nowait +16:01:30 INFO | -device isa-debugcon,chardev=seabioslog_id_20120907-155940-WomlFZY3,iobase=0x402 +16:01:30 INFO | -m 512 +16:01:30 INFO | -smp 2,cores=1,threads=1,sockets=2 +16:01:30 INFO | -kernel 
'/home/lmr/Code/autotest.git/autotest/client/tests/virt/kvm/unittests/emulator.flat' +16:01:30 INFO | -vnc :0 +16:01:30 INFO | -chardev file,id=testlog,path=/tmp/testlog-20120907-155940-WomlFZY3 +16:01:30 INFO | -device testdev,chardev=testlog +16:01:30 INFO | -rtc base=utc,clock=host,driftfix=none +16:01:30 INFO | -boot order=cdn,once=c,menu=off +16:01:30 INFO | -S +16:01:30 INFO | -enable-kvm + +We get + +16:01:32 INFO | Waiting for unittest emulator to complete, timeout 600, output in /tmp/testlog-20120907-155940-WomlFZY3 +16:01:32 INFO | [qemu output] KVM internal error. Suberror: 1 +16:01:32 INFO | [qemu output] emulation failure +16:01:32 INFO | [qemu output] RAX=ffffffffffffeff8 RBX=ffffffffffffe000 RCX=fffffffffffff000 RDX=000000000044d2b0 +16:01:32 INFO | [qemu output] RSI=000000000044c9fa RDI=000000000044e370 RBP=ffffffffffffeff8 RSP=000000000044d2b0 +16:01:32 INFO | [qemu output] R8 =000000000000000a R9 =00000000000003f8 R10=0000000000000000 R11=0000000000000000 +16:01:32 INFO | [qemu output] R12=ffffffffffffe000 R13=000000001fff6000 R14=000000001fff5000 R15=0000000000000000 +16:01:32 INFO | [qemu output] RIP=0000000000400a89 RFL=00010002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +16:01:32 INFO | [qemu output] ES =0010 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +16:01:32 INFO | [qemu output] CS =0008 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +16:01:32 INFO | [qemu output] SS =0000 0000000000000000 ffffffff 00000000 +16:01:32 INFO | [qemu output] DS =0010 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +16:01:32 INFO | [qemu output] FS =0010 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +16:01:32 INFO | [qemu output] GS =0010 000000000044c370 ffffffff 00c09300 DPL=0 DS [-WA] +16:01:32 INFO | [qemu output] LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT +16:01:32 INFO | [qemu output] TR =0048 000000000040a452 0000ffff 00008b00 DPL=0 TSS64-busy +16:01:32 INFO | [qemu output] GDT= 000000000040a00a 00000447 +16:01:32 INFO | [qemu output] IDT= 0000000000000000 00000fff +16:01:32 INFO | [qemu output] CR0=80010011 CR2=0000000000000000 CR3=000000001ffff000 CR4=00000020 +16:01:32 INFO | [qemu output] DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +16:01:32 INFO | [qemu output] DR6=00000000ffff0ff0 DR7=0000000000000400 +16:01:32 INFO | [qemu output] EFER=0000000000000500 +16:01:32 INFO | [qemu output] Code=88 77 00 49 8d 84 24 f8 0f 00 00 48 89 e2 48 89 e9 48 89 c5 <c9> 48 87 e2 48 87 e9 48 81 f9 99 88 77 00 0f 94 c0 48 39 d5 40 0f 94 c6 40 0f b6 f6 21 c6 + +More logs will be attached to this bug report. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1062411 b/results/classifier/gemma3:12b/kvm/1062411 new file mode 100644 index 00000000..f09ddf84 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1062411 @@ -0,0 +1,85 @@ + +QEMU fails during migration and reports "qemu: VQ 0 size 0x80 Guest index 0x2d6 inconsistent with Host index 0x18: delta 0x2be" + +This issue is reproducing consistently on automated testing, verified on manual testing (although it may require many tries). + +Steps to reproduce: + +1) Start a linux guest. 
The command line used by automated testing was: + +10/05 06:48:27 INFO | kvm_vm:1605| MALLOC_PERTURB_=1 /usr/local/autotest/tests/kvm/qemu +10/05 06:48:27 INFO | kvm_vm:1605| -S +10/05 06:48:27 INFO | kvm_vm:1605| -name 'vm1' +10/05 06:48:27 INFO | kvm_vm:1605| -nodefaults +10/05 06:48:27 INFO | kvm_vm:1605| -chardev socket,id=hmp_id_humanmonitor1,path=/tmp/monitor-humanmonitor1-20121005-062311-r6UwQhzg,server,nowait +10/05 06:48:27 INFO | kvm_vm:1605| -mon chardev=hmp_id_humanmonitor1,mode=readline +10/05 06:48:27 INFO | kvm_vm:1605| -chardev socket,id=qmp_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20121005-062311-r6UwQhzg,server,nowait +10/05 06:48:27 INFO | kvm_vm:1605| -mon chardev=qmp_id_qmpmonitor1,mode=control +10/05 06:48:27 INFO | kvm_vm:1605| -chardev socket,id=serial_id_20121005-062311-r6UwQhzg,path=/tmp/serial-20121005-062311-r6UwQhzg,server,nowait +10/05 06:48:27 INFO | kvm_vm:1605| -device isa-serial,chardev=serial_id_20121005-062311-r6UwQhzg +10/05 06:48:27 INFO | kvm_vm:1605| -chardev socket,id=seabioslog_id_20121005-062311-r6UwQhzg,path=/tmp/seabios-20121005-062311-r6UwQhzg,server,nowait +10/05 06:48:27 INFO | kvm_vm:1605| -device isa-debugcon,chardev=seabioslog_id_20121005-062311-r6UwQhzg,iobase=0x402 +10/05 06:48:27 INFO | kvm_vm:1605| -device ich9-usb-uhci1,id=usb1 +10/05 06:48:27 INFO | kvm_vm:1605| -drive file='/tmp/kvm_autotest_root/images/rhel62-64.qcow2',if=none,cache=none,id=virtio0 +10/05 06:48:27 INFO | kvm_vm:1605| -device virtio-blk-pci,drive=virtio0 +10/05 06:48:27 INFO | kvm_vm:1605| -device virtio-net-pci,netdev=idbdMz8N,mac='9a:cf:d0:d1:d2:d3',id='ida0Kc7l' +10/05 06:48:27 INFO | kvm_vm:1605| -netdev tap,id=idbdMz8N,fd=21 +10/05 06:48:27 INFO | kvm_vm:1605| -m 2048 +10/05 06:48:27 INFO | kvm_vm:1605| -smp 2,cores=1,threads=1,sockets=2 +10/05 06:48:27 INFO | kvm_vm:1605| -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 +10/05 06:48:27 INFO | kvm_vm:1605| -vnc :0 +10/05 06:48:27 INFO | kvm_vm:1605| -vga std +10/05 06:48:27 INFO | kvm_vm:1605| -rtc base=utc,clock=host,driftfix=none +10/05 06:48:27 INFO | kvm_vm:1605| -boot order=cdn,once=c,menu=off +10/05 06:48:27 INFO | kvm_vm:1605| -enable-kvm + +Start a new VM in incoming mode. 
The example on this bug is using TCP protocol: + +10/05 07:18:56 INFO | kvm_vm:1605| MALLOC_PERTURB_=1 /usr/local/autotest/tests/kvm/qemu +10/05 07:18:56 INFO | kvm_vm:1605| -S +10/05 07:18:56 INFO | kvm_vm:1605| -name 'vm1' +10/05 07:18:56 INFO | kvm_vm:1605| -nodefaults +10/05 07:18:56 INFO | kvm_vm:1605| -chardev socket,id=hmp_id_humanmonitor1,path=/tmp/monitor-humanmonitor1-20121005-071855-5QYsCtRS,server,nowait +10/05 07:18:56 INFO | kvm_vm:1605| -mon chardev=hmp_id_humanmonitor1,mode=readline +10/05 07:18:56 INFO | kvm_vm:1605| -chardev socket,id=qmp_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20121005-071855-5QYsCtRS,server,nowait +10/05 07:18:56 INFO | kvm_vm:1605| -mon chardev=qmp_id_qmpmonitor1,mode=control +10/05 07:18:56 INFO | kvm_vm:1605| -chardev socket,id=serial_id_20121005-071855-5QYsCtRS,path=/tmp/serial-20121005-071855-5QYsCtRS,server,nowait +10/05 07:18:56 INFO | kvm_vm:1605| -device isa-serial,chardev=serial_id_20121005-071855-5QYsCtRS +10/05 07:18:56 INFO | kvm_vm:1605| -chardev socket,id=seabioslog_id_20121005-071855-5QYsCtRS,path=/tmp/seabios-20121005-071855-5QYsCtRS,server,nowait +10/05 07:18:56 INFO | kvm_vm:1605| -device isa-debugcon,chardev=seabioslog_id_20121005-071855-5QYsCtRS,iobase=0x402 +10/05 07:18:56 INFO | kvm_vm:1605| -device ich9-usb-uhci1,id=usb1 +10/05 07:18:56 INFO | kvm_vm:1605| -drive file='/tmp/kvm_autotest_root/images/rhel62-64.qcow2',if=none,cache=none,id=virtio0 +10/05 07:18:56 INFO | kvm_vm:1605| -device virtio-blk-pci,drive=virtio0 +10/05 07:18:56 INFO | kvm_vm:1605| -device virtio-net-pci,netdev=idERNnUO,mac='9a:cf:d0:d1:d2:d3',id='ideI7zfw' +10/05 07:18:56 INFO | kvm_vm:1605| -netdev tap,id=idERNnUO,fd=32 +10/05 07:18:56 INFO | kvm_vm:1605| -m 2048 +10/05 07:18:56 INFO | kvm_vm:1605| -smp 2,cores=1,threads=1,sockets=2 +10/05 07:18:56 INFO | kvm_vm:1605| -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 +10/05 07:18:56 INFO | kvm_vm:1605| -vnc :1 +10/05 07:18:56 INFO | kvm_vm:1605| -vga std +10/05 07:18:56 INFO | kvm_vm:1605| -rtc base=utc,clock=host,driftfix=none +10/05 07:18:56 INFO | kvm_vm:1605| -boot order=cdn,once=c,menu=off +10/05 07:18:56 INFO | kvm_vm:1605| -enable-kvm +10/05 07:18:56 INFO | kvm_vm:1605| -incoming tcp:0:5200 + +Start the migration, typing on monitor + +10/05 07:18:58 DEBUG|kvm_monito:0177| (monitor humanmonitor1) Sending command 'migrate -d tcp:0:5200' + +The VM will start migrating state to the new qemu instance, and at some point it will stop with: + +10/05 07:19:10 INFO | aexpect:0786| [qemu output] qemu: VQ 0 size 0x80 Guest index 0x2d6 inconsistent with Host index 0x18: delta 0x2be +10/05 07:19:10 INFO | aexpect:0786| [qemu output] qemu: warning: error while loading state for instance 0x0 of device '0000:00:04.0/virtio-blk' +10/05 07:19:10 INFO | aexpect:0786| [qemu output] load of migration failed +10/05 07:19:10 INFO | aexpect:0786| [qemu output] (Process terminated with status 0) + +Due to the large number of migrations executed during a virt job (vm state keeps being passed back and forth many times, using many protocols), we get this problem on every single virt job. 
+ +Latest commits we found this issue: + +Kernel (kvm.git, avi's tree) +10/05 05:38:12 INFO | git:0153| git commit ID is 1a95620f45155ac523cd1419d89150fbb4eb858b (tag kvm-3.6-2-136-g1a95620) + +Userspace (qemu.git) + +10/05 06:20:30 INFO | git:0153| git commit ID is a14c74928ba1fdaada515717f4d3c3fa3275d6f7 (tag v1.2.0-546-ga14c749) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1063807 b/results/classifier/gemma3:12b/kvm/1063807 new file mode 100644 index 00000000..71c4054a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1063807 @@ -0,0 +1,71 @@ + +KVM crashes when booting a PointSec encrypted Windows 7 + +Hi all, + +KVM crashes each time the VM boots after installing PointSec. + +Steps to reproduce are: +1) install win7 64bits +2) install PointSec FDE (Full Disk Encryption => http://www.checkpoint.com/products/full-disk-encryption/index.html) +3) regardless any other qemu parameters, one gets a "KVM internal error. Suberror: 1 / emulation failure" error message and a qemu dump like this one: + +KVM internal error. Suberror: 1 +emulation failure +EAX=00000130 EBX=00000000 ECX=00014000 EDX=00050000 +ESI=00000000 EDI=00000000 EBP=00008e3f ESP=0001802d +EIP=000006d3 EFL=00017087 [--S--PC] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0048 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +CS =25a1 00025a10 0000ffff 00009b00 DPL=0 CS16 [-RA] +SS =0040 00028050 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0040 00028050 ffffffff 00c09300 DPL=0 DS [-WA] +FS =0130 00300000 ffffffff 00c09300 DPL=0 DS [-WA] +GS =0040 00028050 ffffffff 00c09300 DPL=0 DS [-WA] +LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT +TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy +GDT= 00028050 00001dd8 +IDT= 00029e40 00000188 +CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=00 8e c0 b8 30 01 8e e0 66 b9 00 00 00 00 66 ba 00 00 00 00 <66> 26 67 8b 9a 00 00 05 00 66 64 67 89 1a 66 83 c2 04 66 41 66 81 f9 00 80 01 00 75 e3 0f + + +My system info: +root@RJZ-LNX:/home/rjz# cat /proc/cpuinfo | tail -24 +cpu family : 6 +model : 37 +model name : Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz +stepping : 5 +microcode : 0x2 +cpu MHz : 1199.000 +cache size : 3072 KB +physical id : 0 +siblings : 4 +core id : 2 +cpu cores : 2 +apicid : 5 +initial apicid : 5 +fpu : yes +fpu_exception : yes +cpuid level : 11 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt lahf_lm ida arat dtherm tpr_shadow vnmi flexpriority ept vpid +bogomips : 5319.72 +clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual +power management: + + + +and qemu (Ubuntu distribution) info is: + +root@RJZ-LNX:/home/rjz# qemu-system-x86_64 --version +QEMU emulator version 1.0 (qemu-kvm-1.0), Copyright (c) 2003-2008 Fabrice Bellard + + + +Best regards, +Rolando. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1086782 b/results/classifier/gemma3:12b/kvm/1086782 new file mode 100644 index 00000000..e09b850c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1086782 @@ -0,0 +1,85 @@ + +HPET time drift windows 7 64bits guest + +Using latest qemu-kvm (1.2.0), time drift (clock slow in guest) in Windows 7 64 bits guest when HPET is enabled (default). +Disabling HPET (-no-hpet) solves the time drift. + +UsePlatformClock enable/disable doesn't make a difference in the guest. +bcdedit /set useplatformclock true + +Using driftfix slew doesn't make a difference too. + + +# qemu-system-x86_64 --version +QEMU emulator version 1.2.0 (qemu-kvm-1.2.0), Copyright (c) 2003-2008 Fabrice Bellard + +Kernel is 3.6.8: +# uname -a +Linux pulsar 3.6.8 #1 SMP Sat Dec 1 16:26:10 CET 2012 x86_64 x86_64 x86_64 GNU/Linux + +TSC is stable in the host: +=== +# cat /sys/devices/system/clocksource/clocksource0/current_clocksource +tsc + +Dmesg: +[ 0.000000] hpet clockevent registered +[ 0.000000] tsc: Fast TSC calibration using PIT +[ 0.000000] tsc: Detected 2660.096 MHz processor +[ 0.001002] Calibrating delay loop (skipped), value calculated using timer frequency.. 5320.19 BogoMIPS (lpj=2660096) +[ 0.001138] pid_max: default: 32768 minimum: 301 +... +[ 1.492019] tsc: Refined TSC clocksource calibration: 2659.973 MHz +[ 1.492093] Switching to clocksource tsc + + +CPUinfo, constant_tsc: +vendor_id : GenuineIntel +cpu family : 6 +model : 23 +model name : Intel(R) Core(TM)2 Quad CPU Q8400 @ 2.66GHz +stepping : 10 +microcode : 0xa0b +cpu MHz : 2667.000 +cache size : 2048 KB +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm tpr_shadow vnmi flexpriority +bogomips : 5320.19 + +# grep -i hpet .config +CONFIG_HPET_TIMER=y +CONFIG_HPET_EMULATE_RTC=y +CONFIG_HPET=y +# CONFIG_HPET_MMAP is not set +=== + +Qemu command line: +/usr/bin/qemu-system-x86_64 -drive file=/dev/vol0/KVMORION01,cache=none,aio=native,if=virtio \ + -drive file=/dev/vol0/KVMORION02,cache=none,aio=native,if=virtio \ + -cpu host \ + -m 2048 \ + -smp 4,maxcpus=4,cores=4,threads=1,sockets=1 \ + -rtc base=localtime,driftfix=slew \ + -vnc 10.124.241.211:0,password -k es \ + -monitor telnet:localhost:37200,server,nowait \ + -netdev tap,id=kvmorion,ifname=kvmorion,script=/etc/qemu-ifup-br0,downscript=/etc/qemu-ifdown-br0 \ + -device virtio-net-pci,netdev=kvmorion,id=virtio-nic0,mac=02:85:64:02:c2:aa \ + -device virtio-balloon-pci,id=balloon0 \ + -boot menu=on \ + -pidfile /var/run/kvmorion.pid \ + -daemonize + +Using 1 CPU doesn't make a difference. 
+Only workaround is disabling hpet (-no-hpet) + +Sample time drift in guest: +>ntpdate -q 10.124.241.211 + 5 Dec 13:36:06 ntpdate[3464]: Raised to realtime priority class +server 10.124.241.211, stratum 2, offset 3.694184, delay 0.02551 + 5 Dec 13:36:12 ntpdate[3464]: step time server 10.124.241.211 offset 3.694184 s +ec + +>ntpdate -q 10.124.241.211 + 5 Dec 13:52:02 ntpdate[1964]: Raised to realtime priority class +server 10.124.241.211, stratum 2, offset 4.719968, delay 0.02554 + 5 Dec 13:52:08 ntpdate[1964]: step time server 10.124.241.211 offset 4.719968 s +ec \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1089281 b/results/classifier/gemma3:12b/kvm/1089281 new file mode 100644 index 00000000..7752ece2 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1089281 @@ -0,0 +1,55 @@ + +kvm crash when writing on disk + +When running the following command: + +/usr/bin/kvm -S -M pc-1.0 -cpu qemu32 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name winxp -uuid f86ef88f-b90e-699a-74b8-9675063fc26e -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/winxp.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device lsi,id=scsi0,bus=pci.0,addr=0x4 -drive file=/home/master/xpnew.iso,if=none,media=cdrom,id=drive-ide0-0-0,readonly=on,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/var/lib/zentyal/machines/winxp/winxp.img,if=none,id=drive-scsi0-0-0,format=qcow2 -device scsi-disk,bus=scsi0.0,scsi-id=0,drive=drive-scsi0-0-0,id=scsi0-0-0 -netdev tap,fd=18,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=b3:b8:a9:49:a2:f8,bus=pci.0,addr=0x3 -usb -device usb-mouse,id=input0 -vnc 0.0.0.0:0,password -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 + + +running a windows installation (for instance, it has crashed with other OS), when the guest OS installer has reached 60% of the copying files process, the following errors can be found, and KVM gets Force Closed (i am recollecting errors from different times I have tried to references to memory positions may vary) + +syslog: + +Nov 26 19:46:59 mikeboxx kernel: [2254718.689953] kvm6983 general protection ip:7fc451d4be08 sp:7fc44991ab80 error:0 in libc-2.15.so[7fc451ccd000+1b5000] + +/var/log/libvirt/libvirtd.log: + +2012-11-21 10:01:26.464+0000: 16050: error : qemuMonitorIO:603 : internal error End of file from monitor + +/var/log/libvirt/qemu/winxp-ajur.log + +**enclosed as it has a long size due to the core dump + + +The linux kernel running is this one: + +3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux + +Libvirtd versions are these: +root@mikeboxx:/home/ebox-remote-support# dpkg -l | grep libvirt +ii libvirt-bin 0.9.8-2ubuntu17.4 programs for the libvirt library +ii libvirt0 0.9.8-2ubuntu17.4 library for interfacing with different virtualization systems + +and KVM - QEMU versions are these ones: +root@mikeboxx:/home/ebox-remote-support# dpkg -l | grep qemu +ii qemu-common 1.0+noroms-0ubuntu14.3 qemu common functionality (bios, documentation, etc) +ii qemu-kvm 1.0+noroms-0ubuntu14.3 Full virtualization on i386 and amd64 hardware +ii qemu-utils 1.0+noroms-0ubuntu14.3 qemu utilities + + + +I have checked bug #1022901 in https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/1022901 due to the similarity of the error "internal error End of file from monitor", but the sintoms are not the same as long as the 
partition where the img file resides has plenty of space and so does the img itself: + +root@mikeboxx:/home/ebox-remote-support# df -h +Filesystem Size Used Avail Use% Mounted on +/dev/sda1 226G 3.4G 211G 2% / + +root@mikeboxx:/home/ebox-remote-support# qemu-img info /var/lib/zentyal/machines/winxp/winxp.img +image: /var/lib/zentyal/machines/winxp/winxp.img +file format: qcow2 +virtual size: 11G (11559501824 bytes) +disk size: 384M +cluster_size: 65536 + + +Can you help us to solve this? Case you needed any information else, please do not hesitate to ask for it \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1091 b/results/classifier/gemma3:12b/kvm/1091 new file mode 100644 index 00000000..2b1e7184 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1091 @@ -0,0 +1,14 @@ + +qemu-system-x86_64 hard crashes when using `--accel hvf` on intel Mac +Description of problem: +The QEMU process hard crashes after a few minutes. The only message is: + +``` +vmx_write_mem: mmu_gva_to_gpa ffff990489fa0000 failed +``` +Steps to reproduce: +1. Run QEMU with the above commandline +2. Do something to keep the VM busy - running `git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git` reliably crashes it for me +3. Wait a 3-5 minutes +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/1103 b/results/classifier/gemma3:12b/kvm/1103 new file mode 100644 index 00000000..9d6c720b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1103 @@ -0,0 +1,2 @@ + +VTCR fields are not checked when building parameters for aarch64 secure EL2 page table walk diff --git a/results/classifier/gemma3:12b/kvm/1114 b/results/classifier/gemma3:12b/kvm/1114 new file mode 100644 index 00000000..b8272495 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1114 @@ -0,0 +1,2 @@ + +Non-deterministic hang in libvfio-user:functional/test-client-server test causing timeout in CentOS 8 CI job diff --git a/results/classifier/gemma3:12b/kvm/1120383 b/results/classifier/gemma3:12b/kvm/1120383 new file mode 100644 index 00000000..81a156d7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1120383 @@ -0,0 +1,28 @@ + +incremental live block migration of qemu 1.3.1 doesn't work + +We tested qemu 1.3.1 for live migration of block device. It failed with error. Since qemu-kvm 1.2.0 is ok for this test, we think this problem is introduced by new qemu 1.3.x releases. + +To reproduce: + +1. compile qemu 1.3.1: + # cd qemu-1.3.1 + # ./configure --prefix=/usr --sysconfdir=/etc --target-list=x86_64-softmmu + # make; make install + +2. prepare source(172.16.1.13): + # qemu-img create -f qcow2 os.img -b /home/reno/wheezyx64 ###Note: wheezyx64 is a template image for Debian Wheezy + # qemu-system-x86_64 -hda os.img -m 512 --enable-kvm -vnc :51 -monitor stdio + +3. prepare destination(172.16.1.14): + # qemu-img create -f qcow2 os.img -b /home/reno/wheezyx64 + # qemu-system-x86_64 -hda os.img -m 512 --enable-kvm -vnc :51 -incoming tcp:0:4444 + +4. 
do live migrate: + on source monitor command prompt, input: + (qemu) migrate -i tcp:172.16.1.14:4444 + +monitor command will quit immediately and on destination host, there are errors thrown: + Receiving block device images + Co-routine re-entered recursively + Aborted \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1123975 b/results/classifier/gemma3:12b/kvm/1123975 new file mode 100644 index 00000000..93785004 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1123975 @@ -0,0 +1,31 @@ + +QEmu 1.3+ cannot restore a 1.1- live snapshot made in qemu-kvm + +I have upgraded to QEmu 1.3.90 (Debian 1.4.0~rc0+dfsg-1exp) but now when I try to restore a live snapshot made in QEmu 1.1.2 (Debian 1.1.2+dfsg-5) I get the following message: + +virsh # snapshot-revert fgtbbuild wtb +error: operation failed: Error -22 while loading VM state + +I have test VMs with live snapshots coreresponding to different testing configurations. So I typically revert the VMs in one of the live snapshots and run the tests. It would be pretty annoying to have to recreate all these live snapshots any time I upgrade QEmu. + + +ipxe-qemu 1.0.0+git-20120202.f6840ba-3 +qemu 1.4.0~rc0+dfsg-1exp +qemu-keymaps 1.4.0~rc0+dfsg-1exp +qemu-kvm 1.4.0~rc0+dfsg-1exp +qemu-system 1.4.0~rc0+dfsg-1exp +qemu-system-arm 1.4.0~rc0+dfsg-1exp +qemu-system-common 1.4.0~rc0+dfsg-1exp +qemu-system-mips 1.4.0~rc0+dfsg-1exp +qemu-system-misc 1.4.0~rc0+dfsg-1exp +qemu-system-ppc 1.4.0~rc0+dfsg-1exp +qemu-system-sparc 1.4.0~rc0+dfsg-1exp +qemu-system-x86 1.4.0~rc0+dfsg-1exp +qemu-user 1.4.0~rc0+dfsg-1exp +qemu-utils 1.4.0~rc0+dfsg-1exp +libvirt-bin 1.0.2-1 +libvirt-dev 1.0.2-1 +libvirt-doc 1.0.2-1 +libvirt-glib-1.0-0 0.1.2-1 +libvirt0 1.0.2-1 +libvirtodbc0 6.1.4+dfsg1-5 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1127053 b/results/classifier/gemma3:12b/kvm/1127053 new file mode 100644 index 00000000..7786b7f8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1127053 @@ -0,0 +1,25 @@ + +assertion failed in exec.c while attempting to start a guest (latest commit) + +Hi team, + +I decided to try the latest commit on git (previously used version 1.3.0), and I got failed assertions while attempting to start my guests: + +eclipse ~ # qemu-kvm -enable-kvm -hda arch.img -m 4096 -smp sockets=1,cores=4 -vnc :0 -cpu host -vga std -net nic,model=e1000,macaddr=00:00:00:00:00:00 -net tap,ifname=vm0 -qmp tcp:0.0.0.0:4900,server,nowait +qemu-kvm: /var/tmp/portage/app-emulation/qemu-9999/work/qemu-9999/exec.c:982: qemu_ram_set_idstr: Assertion `!new_block->idstr[0]' failed. +Aborted + +The assertion seems valid, so whatever's causing it is probably to blame. I haven't dug around much to find out what calls the method (qemu_ram_set_idstr()), but that is probably the best place to start. + +The host contains a Xeon E3-1240 CPU, virtualising a bunch of guests one of which is Arch Linux 64-bit, if that helps. + +eclipse ~ # qemu-kvm -version +QEMU emulator version 1.4.50, Copyright (c) 2003-2008 Fabrice Bellard + +It looks like this assertion happens if you call the executable without any parameters as well: + +eclipse ~ # qemu-kvm +qemu-kvm: /var/tmp/portage/app-emulation/qemu-9999/work/qemu-9999/exec.c:982: qemu_ram_set_idstr: Assertion `!new_block->idstr[0]' failed. +Aborted + +Thanks. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1129571 b/results/classifier/gemma3:12b/kvm/1129571 new file mode 100644 index 00000000..d2108a42 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1129571 @@ -0,0 +1,15 @@ + +libreoffice armhf FTBFS + +We have been experiencing FTBFS of LibreOffice 3.5.7, 12.04, armhf in the launchpad buildds. We believe this is likely due to an error in qemu. + +While we do not have a small test case yet, we do have a build log (attaching here). + +The relevant snippet from the build log is: + +3.5.7/solver/unxlngr.pro/bin/jaxp.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/juh.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/parser.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/xt.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/unoil.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/ridl.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/jurt.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/xmlsearch.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/LuceneHelpWrapper.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/HelpIndexerTool.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/lucene-core-2.3.jar:/build/buildd/libreoffice-3.5.7/solver/unxlngr.pro/bin/lucene-analyzers-2.3.jar" com.sun.star.help.HelpIndexerTool -lang cs -mod swriter -zipdir ../../unxlngr.pro/misc/ziptmpswriter_cs -o ../../unxlngr.pro/bin/swriter_cs.zip.unxlngr.pro +dmake: Error code 132, while making '../../unxlngr.pro/bin/swriter_cs.zip' + +We believe this is from bash error code 128 + 4, where 4 is illegal instruction, thus leading us to suspect qemu. + +Any help in tracking this down would be appreciated. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1136 b/results/classifier/gemma3:12b/kvm/1136 new file mode 100644 index 00000000..0199dabd --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1136 @@ -0,0 +1,2 @@ + +qemu-system-ppc64: KVM HPT guest sometimes fails to migrate diff --git a/results/classifier/gemma3:12b/kvm/1138 b/results/classifier/gemma3:12b/kvm/1138 new file mode 100644 index 00000000..7fafcfe9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1138 @@ -0,0 +1,2 @@ + +Not able to get KVM in qemu-system-s390x built from 6.2.0 source on Fedora 31 diff --git a/results/classifier/gemma3:12b/kvm/1151 b/results/classifier/gemma3:12b/kvm/1151 new file mode 100644 index 00000000..097f74db --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1151 @@ -0,0 +1,50 @@ + +when guest unexpect shutdown,can't enter system,the terminal has a black screen +Description of problem: + +Steps to reproduce: +1.guest unexpect shutdown + +2.when start again,cpu usage is high and can't enter the guest system + +3.restart guest can recovery + +**libvirt print:** + +`2022-08-11 14:39:58.080+0000: 1942: warning : qemuDomainObjTaint:6079 : Domain id=117 name='GDT99d2578e-f06e-4fbe-88dd-7d9dd56fd02d' uuid=99d2578e-f06e-4fbe-88dd-7d9dd56fd02d is tainted: high-privileges + +2022-08-11 14:39:58.080+0000: 1942: warning : qemuDomainObjTaint:6079 : Domain id=117 name='GDT99d2578e-f06e-4fbe-88dd-7d9dd56fd02d' uuid=99d2578e-f06e-4fbe-88dd-7d9dd56fd02d is tainted: custom-argv + +2022-08-11 14:40:28.792+0000: 741037: warning : qemuDomainObjBeginJobInternal:946 : Cannot start job (modify, none, none) for domain GDT99d2578e-f06e-4fbe-88dd-7d9dd56fd02d; current job is (none, none, migration in) owned by (0 <null>, 0 <null>, 0 
remoteDispatchDomainMigratePrepare3Params (flags=0x203)) for (0s, 0s, 30s) + +2022-08-11 14:40:28.792+0000: 741037: error : qemuDomainObjBeginJobInternal:968 : Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainMigratePrepare3Params) +` + + +**user perf to analyse:** + +\#top -d 3 -Hp 1311519 + + + +\#perf record -a -g -p 1311519 sleep 20 + +\#report -n --header --stdio + + + + +**query kvm stat:** + + \# perf stat -e 'kvm:*' -a -p 1311519 sleep 20 + + + + +kvm vmexit stat: + +\#perf kvm stat record -a -p 1311519 sleep 10 + +\#perf kvm stat report --event=vmexit + + diff --git a/results/classifier/gemma3:12b/kvm/1155 b/results/classifier/gemma3:12b/kvm/1155 new file mode 100644 index 00000000..13a784d5 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1155 @@ -0,0 +1,28 @@ + +RISC-V: Instruction fetch exceptions can have invalid tval/epc combination +Description of problem: +Instruction page fault / guest-page fault / access fault exceptions can have invalid `epc`/`tval` combinations, for example as shown in the debug log: + +``` +riscv_cpu_do_interrupt: hart:0, async:0, cause:0000000000000014, epc:0xffffffff802fec76, tval:0xffffffff802ff000, desc=guest_exec_page_fault +riscv_cpu_do_interrupt: hart:0, async:0, cause:0000000000000014, epc:0xffffffff80243fe6, tval:0xffffffff80244000, desc=guest_exec_page_fault +``` + +From the privileged spec: + +> If `mtval` is written with a nonzero value when an instruction access-fault or page-fault exception occurs on a system with variable-length instructions, then `mtval` will contain the virtual address of the portion of the instruction that caused the fault, while `mepc` will point to the beginning of the instruction. + +Currently RISC-V only has 32-bit and 16-bit instructions, so the difference `tval - epc` should be either `0` or `2`. In the examples above the differences are `906` and `26` respectively. + +Possibly notable: all occurrences of these invalid combinations to have `tval` aligned to a page-boundary. +Steps to reproduce: +This one only gives invalid `tval`/`epc` combinations with instruction guest-page faults, but I've found it to be the easiest reproducer to describe, since presumably running KVM in RISC-V QEMU is a standard setup. I have not otherwise been able to find a more minimal case. + +1. Start a QEMU-based `riscv64` machine +2. Start a KVM-based virtual machine with QEMU inside it +3. Do some stuff in the KVM-based virtual machine to increase the chance of page faults +4. Look in the debug log of the outer QEMU for `guest_exec_page_fault` exceptions with `tval` ending in `000`, but `epc` ending in neither `000` nor `ffe` + +Everything in both layers of guests should otherwise work without issue, but other/future software that relies on the spec-mandated relationship of `epc`/`tval` may break. +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/1162 b/results/classifier/gemma3:12b/kvm/1162 new file mode 100644 index 00000000..be534292 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1162 @@ -0,0 +1,13 @@ + +`./configure` gives `big/little test failed` error when attempting to statically link on Fedora 36 +Description of problem: +I'm having trouble attempting to build the QEMU System emulator statically linked. The error `./configure` gives `big/little test failed` with nothing else. I couldn't find any information relating to this. I'm not sure where to start fixing this. If anyone can help me with this, thanks! +Steps to reproduce: +1. 
`git clone https://gitlab.com/qemu-project/qemu.git` +2. `cd qemu` +3. `git submodule init` +4. `git submodule update` +5. `./configure --enable-kvm --enable-vnc --enable-vhost-net --enable-avx2 --enable-avx512f --target-list=x86_64-softmmu --static` +6. Observe build error +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/1168 b/results/classifier/gemma3:12b/kvm/1168 new file mode 100644 index 00000000..1f5c37a1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1168 @@ -0,0 +1,14 @@ + +ivshmem: ivshmem-doorbell can't notify the MSI-X interrupt on Arm64 guest +Description of problem: +I init several qemu-kvm VMs on my arm64 host, which is a NVIDIA Xavier board. I want to use qemu's ivshmem-doorbell to build a sync shared memory communition with its MSI-X interrupt mechanism. I init the ivshmem-server and ivshmem-client on the host first, after then init the guests. The visul PCI-e device named "Inter-VM shared memory" can be successfully seen in my guests with command "lspci". +I write a driver for this pci-e device to request and handle the MSI-X interrupts, which init well in the guest and can ring or receive from an interrupt vector on other peerID with the driver's IOCTL interface, the peer that receive vector in my environment is the ivshmem-client. However, when i use the ivshmem-client command "int" to ring my guest , the guest can't receive the msi-x interrupt notification. +Steps to reproduce: +1. init ivshmem-server on the host, with command "ivshmem-server -l 4M -M fg-doorbell -n 8 -F -v". +2. init ivshmem-client on the host, with command "ivshmem-client -v". +3. init the qemu-kvm VM . +4. init the driver with "insmod" in guest to request the msi-x interrupt, while "cat /proc/interrupts" shows the interrupt request successfully! +5. on host, ivshmem-client use command "int 1 0" to ring the guest's interrupt trigger, however ,nothing happened. +Additional information: +I am fully sure that there is no problem about the driver I wrote for the pci-e inter-VM shared memory device, for i has tested that the driver works on my X86 PC, where I deployed qemu-x86 VMs and the driver can work well in X86 guests with the inshmem-doorbell mechanism. The ivshmem-client work on host can notify the guest to trigger the correct msix-x interrupt. +Therefore, I digged the msi-x interrupt structure and use devmem tool to write the data to the messageAddress manually, which can correctly trigger the msi-x interrupt in my arm64 guest in the Xavier board, meaning the msi-x interrupt is OK in the guest. So I doubt maybe there is any issue on the ivshmem-doorbell mechanism that ring a interrupt vector in the guset of qemu-aarch64. diff --git a/results/classifier/gemma3:12b/kvm/1175089 b/results/classifier/gemma3:12b/kvm/1175089 new file mode 100644 index 00000000..47e21d49 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1175089 @@ -0,0 +1,25 @@ + +Crash why dragon fly 3.4.1 + +Hello, all is here (kernel 3.8, qemu 1.2.2-r3): +/usr/bin/qemu-system-x86_64 -k fr -alt-grab -m 2048 -vga vmware -net nic,vlan=0,model=virtio -net user -rtc base=localtime -smp 4,cores=4,sockets=1 -boot once=d -cdrom dfly-x86_64-gui-3.4.1_REL.iso +KVM internal error. 
Suberror: 1 +emulation failure +EAX=00000010 EBX=00009338 ECX=00000000 EDX=00000000 +ESI=000017fc EDI=000017c8 EBP=000364a0 ESP=000017b8 +EIP=00009318 EFL=00003002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0010 00000000 ffffffff 00c09300 +CS =0018 00000000 0000ffff 00009b00 +SS =0010 00000000 ffffffff 00c09300 +DS =0010 00000000 ffffffff 00c09300 +FS =0033 0000a000 ffffffff 00c0f300 +GS =0033 0000a000 ffffffff 00c0f300 +LDT=0000 00000000 0000ffff 00008200 +TR =0038 00005f98 00002067 00008b00 +GDT= 00009590 0000003f +IDT= 00005e00 00000197 +CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=00 a3 ea 5d 00 00 66 ea 10 93 18 00 0f 20 c0 fe c8 0f 22 c0 <ea> 1d 93 00 00 31 c0 8e d8 8e d0 0f 01 1e dc 95 66 07 66 1f 66 0f a1 66 0f a9 66 61 bc ea \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1183 b/results/classifier/gemma3:12b/kvm/1183 new file mode 100644 index 00000000..297aa1a2 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1183 @@ -0,0 +1,132 @@ + +KVM crash due to qcow2 out of space condition during virsh-snapshot creation +Description of problem: +virsh snapshot failed due to out of space condition (into the qcow2 image ?) + +libvirt log: + +``` +2022-08-27T06:41:41.164368Z qemu-kvm-one: terminating on signal 15 from pid 1782 (/usr/sbin/libvirtd) +2022-08-27T06:41:41.172667Z qemu-kvm-one: Failed to flush the L2 table cache: Input/output error +2022-08-27T06:41:41.172692Z qemu-kvm-one: Failed to flush the refcount block cache: Input/output error +``` +Steps to reproduce: +1. not possible for that moment - i did resize/increase the qcow2 image - +now its running again. +Additional information: +as i saw - there was a very old qemu-snapshot, which was not properly deleted. +After removing this snapshot, i did reszie the image. +I do suppose, this could be one reason the image (qcow2) got full ? + +Because all is THIN i was not aware of it (fs level ok, storage layer ok). +Is there any tool, how free space in a thin qcow2 file can be monitored ? 
+ + + +``` +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \ +HOME=/var/lib/libvirt/qemu/domain-13-one-89 \ +XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-13-one-89/.local/share \ +XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-13-one-89/.cache \ +XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-13-one-89/.config \ +QEMU_AUDIO_DRV=none \ +/usr/bin/qemu-kvm-one \ +-name guest=one-89,debug-threads=on \ +-S \ +-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-one-89/master-key.aes \ +-machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off \ +-cpu qemu64 \ +-m 8192 \ +-overcommit mem-lock=off \ +-smp 4,sockets=4,cores=1,threads=1 \ +-uuid 8c920c7f-f687-4c47-bfc7-671425c7436b \ +-no-user-config \ +-nodefaults \ +-chardev socket,id=charmonitor,fd=40,server,nowait \ +-mon chardev=charmonitor,id=monitor,mode=control \ +-rtc base=utc \ +-no-shutdown \ +-boot strict=on \ +-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \ +-device virtio-scsi-pci,id=scsi0,num_queues=1,bus=pci.0,addr=0x4 \ +-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 \ +-blockdev '{"driver":"file","filename":"/var/lib/one//xxxx/disk.0","aio":"threads","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \ +-blockdev '{"node-name":"libvirt-3-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-3-storage","backing":null}' \ +-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=libvirt-3-format,id=scsi0-0-0-0,bootindex=1,write-cache=off \ +-blockdev '{"driver":"file","filename":"/var/lib/one//xxxx/disk.1","aio":"threads","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \ +-blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage","backing":null}' \ +-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,device_id=drive-scsi0-0-1-0,drive=libvirt-2-format,id=scsi0-0-1-0,write-cache=off \ +-blockdev '{"driver":"file","filename":"/var/lib/one//xxxx/disk.2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \ +-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \ +-device ide-cd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0 \ +-netdev tap,fd=42,id=hostnet0 \ +-device e1000,netdev=hostnet0,id=net0,mac=02:00:c0:a8:02:17,bus=pci.0,addr=0x3 \ +-chardev socket,id=charchannel0,fd=43,server,nowait \ +-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ +-vnc 0.0.0.0:89 \ +-device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \ +-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \ +-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ +-msg timestamp=on +``` + +as the time of the crash the qcow2 status was: +(so i'm not sure the issue is about a space problem or a bug in qemu): + +``` +qemu-img info xxx/0/xxx +image: xxx/0/xxx +file format: qcow2 +virtual size: 1.46 TiB (1610612736000 bytes) +disk size: 988 GiB +cluster_size: 65536 +Snapshot list: +ID TAG VM SIZE DATE VM CLOCK ICOUNT +112 snap-111 0 B 2022-03-11 01:59:15 49:07:53.846 +282 snap-281 0 B 2022-08-20 01:59:17538:16:30.416 +283 snap-282 0 B 2022-08-21 01:59:16562:10:40.759 +284 snap-283 0 B 2022-08-22 01:59:16585:59:16.170 +285 
snap-284 0 B 2022-08-23 01:59:16609:51:44.825 +286 snap-285 0 B 2022-08-24 01:59:16633:45:32.243 +287 snap-286 0 B 2022-08-25 01:59:16657:36:44.718 +288 snap-287 0 B 2022-08-26 01:59:16681:29:00.793 +Format specific information: + compat: 1.1 + compression type: zlib + lazy refcounts: false + refcount bits: 16 + corrupt: false + extended l2: false +root@proxpve1:~# qemu-img check xxxx/0/xxx +No errors were found on the image. +15252433/24576000 = 62.06% allocated, 6.32% fragmented, 0.00% compressed clusters +Image end offset: 1062936117248 + +1rst (OS) Disk on the VM: +------------------------------------------ +file format: qcow2 +virtual size: 100 GiB (107374182400 bytes) +disk size: 190 GiB +cluster_size: 65536 +Snapshot list: +ID TAG VM SIZE DATE VM CLOCK ICOUNT +282 snap-281 7.66 GiB 2022-08-20 01:59:17538:16:30.416 +283 snap-282 7.6 GiB 2022-08-21 01:59:16562:10:40.759 +284 snap-283 7.62 GiB 2022-08-22 01:59:16585:59:16.170 +285 snap-284 7.65 GiB 2022-08-23 01:59:16609:51:44.825 +286 snap-285 7.62 GiB 2022-08-24 01:59:16633:45:32.243 +287 snap-286 7.63 GiB 2022-08-25 01:59:16657:36:44.718 +288 snap-287 7.65 GiB 2022-08-26 01:59:16681:29:00.793 +Format specific information: + compat: 1.1 + compression type: zlib + lazy refcounts: false + refcount bits: 16 + corrupt: false + extended l2: false + + +No errors were found on the image. +782257/1638400 = 47.75% allocated, 22.16% fragmented, 0.00% compressed clusters +Image end offset: 315680292864 +``` diff --git a/results/classifier/gemma3:12b/kvm/1186984 b/results/classifier/gemma3:12b/kvm/1186984 new file mode 100644 index 00000000..84f91005 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1186984 @@ -0,0 +1,24 @@ + +large -initrd can wrap around in memory causing memory corruption + +We don't use large -initrd in libguestfs any more, but I noticed that a large -initrd file now crashes qemu spectacularly: + +$ ls -lh /tmp/kernel /tmp/initrd +-rw-r--r--. 1 rjones rjones 273M Jun 3 14:02 /tmp/initrd +lrwxrwxrwx. 1 rjones rjones 35 Jun 3 14:02 /tmp/kernel -> /boot/vmlinuz-3.9.4-200.fc18.x86_64 + +$ ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios \ + -kernel /tmp/kernel -initrd /tmp/initrd -hda /tmp/test1.img -serial stdio \ + -append console=ttyS0 + +qemu crashes with one of several errors: + +PFLASH: Possible BUG - Write block confirm + +qemu: fatal: Trying to execute code outside RAM or ROM at 0x00000000000b96cd + +If -enable-kvm is used: + +KVM: injection failed, MSI lost (Operation not permitted) + +In all cases the SDL display fills up with coloured blocks before the crash (see the attached screenshot). \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1198 b/results/classifier/gemma3:12b/kvm/1198 new file mode 100644 index 00000000..b067dac7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1198 @@ -0,0 +1,54 @@ + +Windows 11 Guest keeps crashing with abort in cpu_asidx_from_attrs +Steps to reproduce: +1. Create Windows 11 guest, SWTPM, SECBOOT (haven't tested without since this is not an option for installing Windows 11) +2. Use OS +3. Will eventually crash. 
Have tried across multiple kernels 5.17, 5.18, 5.19 +Additional information: +``` + + Stack trace of thread 76223: + #0 0x00007f24072d44dc n/a (libc.so.6 + 0x884dc) + #1 0x00007f2407284998 raise (libc.so.6 + 0x38998) + #2 0x00007f240726e53d abort (libc.so.6 + 0x2253d) + #3 0x00007f240726e45c n/a (libc.so.6 + 0x2245c) + #4 0x00007f240727d4c6 __assert_fail (libc.so.6 + 0x314c6) + #5 0x0000555681a35101 cpu_asidx_from_attrs (qemu-system-x86_64 + 0x572101) + #6 0x0000555681c6531e cpu_memory_rw_debug (qemu-system-x86_64 + 0x7a231e) + #7 0x0000555681bfb54a x86_cpu_dump_state (qemu-system-x86_64 + 0x73854a) + #8 0x0000555681d84a65 kvm_cpu_exec (qemu-system-x86_64 + 0x8c1a65) + #9 0x0000555681d85e48 kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x8c2e48) + #10 0x0000555681fed0a8 qemu_thread_start (qemu-system-x86_64 + 0xb2a0a8) + #11 0x00007f24072d278d n/a (libc.so.6 + 0x8678d) + #12 0x00007f24073538e4 __clone (libc.so.6 + 0x1078e4) +``` + + +``` +KVM: entry failed, hardware error 0x80000021 + +If you're running a guest on an Intel machine without unrestricted mode +support, the failure can be most likely due to the guest entering an invalid +state for Intel VT. For example, the guest maybe running in big real mode +which is not supported on less recent Intel processors. + +EAX=00000000 EBX=00000000 ECX=00000000 EDX=04c6d3e0 +ESI=12af7eb0 EDI=9e55d420 EBP=821b5aa0 ESP=10db0fb0 +EIP=00008000 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=1 HLT=0 +ES =0000 00000000 ffffffff 00809300 +CS =b500 7ffb5000 ffffffff 00809300 +SS =0000 00000000 ffffffff 00809300 +DS =0000 00000000 ffffffff 00809300 +FS =0000 00000000 ffffffff 00809300 +GS =0000 00000000 ffffffff 00809300 +LDT=0000 00000000 000fffff 00000000 +TR =0040 10d97000 00000067 00008b00 +GDT= 10d98fb0 00000057 +IDT= 00000000 00000000 +CR0=00050032 CR2=f80ff80c CR3=e47e7000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=qemu-system-x86_64: ../qemu-7.0.0/hw/core/cpu-sysemu.c:77: cpu_asidx_from_attrs: Assertion `ret < cpu->num_ases && ret >= 0' failed. +2022-09-06 14:48:15.392+0000: shutting down, reason=crashed +``` diff --git a/results/classifier/gemma3:12b/kvm/1201447 b/results/classifier/gemma3:12b/kvm/1201447 new file mode 100644 index 00000000..88d7acad --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1201447 @@ -0,0 +1,18 @@ + +Blue screen when disk uses cache='writeback' + +I am running Windows 2008R2 as KVM guest on Ubuntu 12.04 hypervisor. Disk controller and network card are virtio devices with drivers from https://launchpad.net/kvm-guest-drivers-windows/+download (virtio-win-drivers-20120712-1.iso). +Everything worked fine until I changed disk controller cache from the default (writethrough) to writeback. This introduced occasional blue screens. I noticed that they are linked to high disk IO. For example restoring over 1GB of backup files will results in a blue screen on around 4 out of 5 attempts. Also Windows update crashes the system sometimes. When idle the system will run fine for hours or sometimes even days. +After removing cache='writeback' from the config everything came back to normal. 
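For reference, the cache mode the reporter toggles lives on the disk's <driver> element in the libvirt domain XML; on a bare QEMU command line the same setting is the cache= suboption of -drive. The sketch below is illustrative only, with a placeholder image path, and is not the reporter's exact invocation:

```sh
# cache= accepts values such as writethrough, writeback and none.
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=/path/to/win2008r2.qcow2,if=virtio,cache=writethrough
```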
+ +qemu-kvm: + Installed: 1.0+noroms-0ubuntu14.8 + Candidate: 1.0+noroms-0ubuntu14.8 + Version table: + *** 1.0+noroms-0ubuntu14.8 0 + 500 http://archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages + 100 /var/lib/dpkg/status + 1.0+noroms-0ubuntu14.7 0 + 500 http://security.ubuntu.com/ubuntu/ precise-security/main amd64 Packages + 1.0+noroms-0ubuntu13 0 + 500 http://archive.ubuntu.com/ubuntu/ precise/main amd64 Packages \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1203 b/results/classifier/gemma3:12b/kvm/1203 new file mode 100644 index 00000000..45f387eb --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1203 @@ -0,0 +1,46 @@ + +migrate with block-dirty-bitmap (disk size is big enough) can't be finished +Description of problem: +when disk size is big enough(this case using the 4T,related to the bandwith of migration), migrate the VM with block-dirty-bitmap , +the migration will not be finished! +Steps to reproduce: +1. **Start up the source VM,using the commands**: + +/usr/libexec/qemu-kvm -name guest=i-00001C,debug-threads=on -machine pc,accel=kvm,usb=off,dump-guest-core=off -cpu qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 4096 -smp 4,sockets=1,cores=4,threads=1 -uuid 991c2994-e1c9-48c0-9554-6b23e43900eb -smbios type=1,manufacturer=data,serial=7C1A9ABA-02DD-4E7D-993C-E1CDAB47A19B,family="Virtual Machine" -no-user-config -nodefaults -device sga -rtc base=2022-09-09T02:54:38,clock=host,driftfix=slew -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=on,splash-time=0,strict=on -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x6 -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0xa -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0xb -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0xc -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0xd -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0xe -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-1,readonly=on -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1,bootindex=2 -drive if=none,id=drive-fdc0-0-0,readonly=on -drive file=/datastore/e88e2b29-cd39-4b21-9629-5ef2458f7ddd/c08fee8e-caf4-4217-ab4d-351a021c2c3d,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,num-queues=1,bus=pci.1,addr=0x1,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -device usb-tablet,id=input0,bus=usb.0,port=1 -device intel-hda,id=sound0,bus=pci.0,addr=0x3 -device hda-micro,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -sandbox off -device pvpanic,ioport=1285 -msg timestamp=on -qmp tcp:127.0.0.1:4444,server,nowait + +**Start the dst VM using commands as:** + +/usr/libexec/qemu-kvm -name guest=i-00001C,debug-threads=on -machine pc,accel=kvm,usb=off,dump-guest-core=off -cpu qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 4096 -smp 4,sockets=1,cores=4,threads=1 -uuid 991c2994-e1c9-48c0-9554-6b23e43900eb -smbios type=1,manufacturer=data,serial=7C1A9ABA-02DD-4E7D-993C-E1CDAB47A19B,family="Virtual Machine" -no-user-config -nodefaults -device sga -rtc base=2022-09-09T02:54:38,clock=host,driftfix=slew -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=on,splash-time=0,strict=on -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x6 -device 
pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0xa -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0xb -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0xc -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0xd -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0xe -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-1,readonly=on -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1,bootindex=2 -drive if=none,id=drive-fdc0-0-0,readonly=on -drive file=/datastore/e88e2b29-cd39-4b21-9629-5ef2458f7ddd/c08fee8e-caf4-4217-ab4d-351a021c2c3d,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,num-queues=1,bus=pci.1,addr=0x1,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -device usb-tablet,id=input0,bus=usb.0,port=1 -device intel-hda,id=sound0,bus=pci.0,addr=0x3 -device hda-micro,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -sandbox off -device pvpanic,ioport=1285 -msg timestamp=on -qmp tcp:127.0.0.1:4444,server,nowait -incoming tcp:0:3333 + +2. **image info as:** + +image: /datastore/e88e2b29-cd39-4b21-9629-5ef2458f7ddd/c08fee8e-caf4-4217-ab4d-351a021c2c3d + +file format: qcow2 +virtual size: 4.0T (4380866641920 bytes) +disk size: 1.0M +cluster_size: 65536 + +Format specific information: + compat: 1.1 + lazy refcounts: false + refcount bits: 16 + corrupt: false + +3. **Add the bitmap :** {"execute":"block-dirty-bitmap-add","arguments":{"node":"drive-virtio-disk0", "name":"bitmap-2022-09-09-16-10-23"}} +4. **set the dirty-bitmaps capability** :{ "execute": "migrate-set-capabilities" , "arguments":{"capabilities":[ {"capability":"dirty-bitmaps","state": true }]}} +5. **start migrate ** { "execute": "migrate", "arguments": { "uri": "tcp:10.49.35.23:3333" } } +6. **quert migrate parameters** {"execute":"query-migrate-parameters"} the retrun message : +{"return": {"cpu-throttle-tailslow": false, "xbzrle-cache-size": 67108864, "cpu-throttle-initial": 20, "announce-max": 550, "decompress-threads": 2, "compress-threads": 8, "compress-level": 1, "multifd-channels": 2, "multifd-zstd-level": 1, "announce-initial": 50, "block-incremental": false, "compress-wait-thread": true, "downtime-limit": 300, "tls-authz": "", "multifd-compression": "none", "announce-rounds": 5, "announce-step": 100, "tls-creds": "", "multifd-zlib-level": 1, "max-cpu-throttle": 99, "max-postcopy-bandwidth": 0, "tls-hostname": "", "throttle-trigger-threshold": 50, "max-bandwidth": 134217728, "x-checkpoint-delay": 20000, "cpu-throttle-increment": 10}} + +7. 
**query-migrate-capabilities** : +{"execute":"query-migrate-capabilities"} the retrun message : +{"return": [{"state": false, "capability": "xbzrle"}, {"state": false, "capability": "rdma-pin-all"}, {"state": false, "capability": "auto-converge"}, {"state": false, "capability": "zero-blocks"}, {"state": false, "capability": "compress"}, {"state": false, "capability": "events"}, {"state": false, "capability": "postcopy-ram"}, {"state": false, "capability": "x-colo"}, {"state": false, "capability": "release-ram"}, {"state": false, "capability": "return-path"}, {"state": false, "capability": "pause-before-switchover"}, {"state": false, "capability": "multifd"}, {"state": true, "capability": "dirty-bitmaps"}, {"state": false, "capability": "postcopy-blocktime"}, {"state": false, "capability": "late-block-activate"}, {"state": false, "capability": "x-ignore-shared"}, {"state": false, "capability": "validate-uuid"}, {"state": false, "capability": "background-snapshot"}]} + +8. **query the info of migrate** using the command {"execute":"query-migrate"} +{"return": {"expected-downtime": 0, "status": "active", "setup-time": 64, "total-time": 1320361, "ram": {"total": 4295499776, "postcopy-requests": 0, "dirty-sync-count": 7909410, "multifd-bytes": 0, "pages-per-second": 80, "page-size": 4096, "remaining": 0, "mbps": 3.5006399999999998, "transferred": 430971236, "duplicate": 1048569, "dirty-pages-rate": 66, "skipped": 0, "normal-bytes": 357560320, "normal": 87295}}} + +**the state of migrate is always active ,no matter how long it takes.** +The bug is : migration with big block dirty bitmap can not be finished +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/1204697 b/results/classifier/gemma3:12b/kvm/1204697 new file mode 100644 index 00000000..9f1c0487 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1204697 @@ -0,0 +1,36 @@ + +guest disk accesses lead to ATA errors + host vcpu0 unhandled wrmsr/rdmsr + +Hi. + +This is from http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=717724. + +Using Debian sid with 1.5.0-5 my Linux VMs (also Debian sid) are broken. When they boot I get gazillions of ATA errors inside the guest, as well as: +[ 242.479951] kvm [7790]: vcpu0 unhandled rdmsr: 0x345 +[ 242.483683] kvm [7790]: vcpu0 unhandled wrmsr: 0x680 data 0 +[ 242.483687] kvm [7790]: vcpu0 unhandled wrmsr: 0x6c0 data 0 +[ 242.483689] kvm [7790]: vcpu0 unhandled wrmsr: 0x681 data 0 +[ 242.483691] kvm [7790]: vcpu0 unhandled wrmsr: 0x6c1 data 0 +[ 242.483693] kvm [7790]: vcpu0 unhandled wrmsr: 0x682 data 0 +[ 242.483696] kvm [7790]: vcpu0 unhandled wrmsr: 0x6c2 data 0 +[ 242.483698] kvm [7790]: vcpu0 unhandled wrmsr: 0x683 data 0 +[ 242.483700] kvm [7790]: vcpu0 unhandled wrmsr: 0x6c3 data 0 +[ 242.483702] kvm [7790]: vcpu0 unhandled wrmsr: 0x684 data 0 +[ 242.483704] kvm [7790]: vcpu0 unhandled wrmsr: 0x6c4 data 0 +[ 242.988307] kvm [7790]: vcpu0 unhandled rdmsr: 0xe8 +[ 242.988312] kvm [7790]: vcpu0 unhandled rdmsr: 0xe7 +[ 242.988314] kvm [7790]: vcpu0 unhandled rdmsr: 0xce +[ 242.988316] kvm [7790]: vcpu0 unhandled rdmsr: 0xce +[ 242.988318] kvm [7790]: vcpu0 unhandled rdmsr: 0x1ad +[ 242.988320] kvm [7790]: vcpu0 unhandled rdmsr: 0xce +[ 242.988322] kvm [7790]: vcpu0 unhandled rdmsr: 0xe8 +[ 242.988324] kvm [7790]: vcpu0 unhandled rdmsr: 0xe7 +[ 242.988598] kvm [7790]: vcpu0 unhandled rdmsr: 0xce + +in the host. + +Please have a look at the Debian bug, for screenshots and more info. +The problem didn't occur in 1.5.0-4 and there were basically no changes inside the VM (i.e. 
no kernel upgrade or so). + +Thanks, +Chris. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1212402 b/results/classifier/gemma3:12b/kvm/1212402 new file mode 100644 index 00000000..125e4666 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1212402 @@ -0,0 +1,35 @@ + +Enabling KVM with recent QEMU builds from GIT hang at boot on Ubuntu Precise AMD64 kernel 3.8.0 + +When I compile QEMU from GIT and run it with './x86_64-softmmu/qemu-system-x86_64 -enable-kvm' it just hangs, the QEMU screen stays black. (Everything else in the GTK UI is responsive though, I can use the QEMU console as well.) +I'm running Ubuntu Precise with kernel 3.8.0-27-generic on an Intel Core2 Duo P9500. + +With bisecting, I found this commit caused the problem: + +235e8982ad393e5611cb892df54881c872eea9e1 is the first bad commit +commit 235e8982ad393e5611cb892df54881c872eea9e1 +Author: Jordan Justen <email address hidden> +Date: Wed May 29 01:27:26 2013 -0700 + + kvm: support using KVM_MEM_READONLY flag for regions + + For readonly memory regions and rom devices in romd_mode, + we make use of the KVM_MEM_READONLY. A slot that uses + KVM_MEM_READONLY can be read from and code can execute from the + region, but writes will exit to qemu. + + For rom devices with !romd_mode, we force the slot to be + removed so reads or writes to the region will exit to qemu. + (Note that a memory region in this state is not executable + within kvm.) + + v7: + * Update for readable => romd_mode rename (5f9a5ea1) + + Signed-off-by: Jordan Justen <email address hidden> + Reviewed-by: Xiao Guangrong <email address hidden> (v4) + Reviewed-by: Paolo Bonzini <email address hidden> (v5) + Message-id: <email address hidden> + Signed-off-by: Anthony Liguori <email address hidden> + +:100644 100644 327ae12f08b9dddc796d753d8adfb1f70c78b2c1 8e7bbf8698f6bcaa5ae945ef86e7b51effde06fe M kvm-all.c \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1219 b/results/classifier/gemma3:12b/kvm/1219 new file mode 100644 index 00000000..a2d0c700 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1219 @@ -0,0 +1,14 @@ + +--enable-kvm not work for riscv64-softmmu +Description of problem: +I want to enable kvm for qemu-system-riscv64, so I compile it with `--enable-kvm` as above. But the log shows + +```sh + Targets and accelerators + KVM support : NO +``` + +And also compiled qemu-system-riscv64 does not support kvm. +Steps to reproduce: +1. clone the repo +2. `./configure --target-list=riscv64-softmmu --enable-kvm` diff --git a/results/classifier/gemma3:12b/kvm/1221 b/results/classifier/gemma3:12b/kvm/1221 new file mode 100644 index 00000000..4b5be134 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1221 @@ -0,0 +1,30 @@ + +qga return "frozen" when vm just been created from snapfile +Steps to reproduce: +1. virsh create lisa.xml +Domain lisa created from lisa.xml + +2. virsh domblklist lisa + vda /mnt/a/b/srv.qcow2 + +3. virsh snapshot-create-as lisa --disk-only --diskspec vda,file=/tmp/f1,snapfile=/tmp/sp1 --no-metadata --quiesce +Domain snapshot 20220919165217 created + +4. virsh shutdown lisa +Domain lisa is being shutdown + +5. modify lisa.xml: replace /mnt/a/b/srv/qcow2 with /tmp/sp1 + +6. virsh create lisa.xml +Domain lisa created from lisa.xml + +7. virsh domblklist lisa + vda /tmp/sp1 + +8. virsh qemu-agent-command lisa '{"execute":"guest-fsfreeze-status"}' +{"return":"frozen"} + +9. 
virsh snapshot-create-as lisa --disk-only --diskspec vda,file=/tmp/f2,snapfile=/tmp/sp2 --no-metadata --quiesce +error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance +Additional information: +Is "frozen" a normal value in step 8? If not, what's the best way to avoid this? diff --git a/results/classifier/gemma3:12b/kvm/1221797 b/results/classifier/gemma3:12b/kvm/1221797 new file mode 100644 index 00000000..87fb36e6 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1221797 @@ -0,0 +1,23 @@ + +virt-install gets stuck while downloading install.img + +I have tried to install CentOS 6.4 using the latest version of all KVM-related tools. My problem is that I get stuck at around 20% during the download of the file install.img. Everything before that works just fine. + +I am using the following command to install the server: +virt-install \ + -n test \ + -r 1024 \ + --disk path=/var/kvm/images/test.img,size=25 \ + --vcpus=1 \ + --os-type linux \ + --os-variant=rhel6 \ + --network bridge=br2 \ + --nographics \ + --location='http://mirror.netcologne.de/centos/6.4/os/x86_64' \ + --extra-args='console=tty0 console=ttyS0,115200n8 serial' \ + +I have checked that my server is ready for virtualization. + +If you need any further information, I am happy to provide it. + +Patrick Gramsl \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1235 b/results/classifier/gemma3:12b/kvm/1235 new file mode 100644 index 00000000..384b9ef1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1235 @@ -0,0 +1,181 @@ + +Using Packer's qemu builder (configured in a JSON template) to create a RHEL 8.4 KVM guest fails with an SSH timeout, while the same setup runs fine on RHEL 7.9 +Description of problem: +I have RHEL 8.5 as the KVM host. I want to create a guest VM of RHEL 8.4 through the Packer qemu builder and have a JSON file where all the configurations are mentioned.
+ +{ + +“builders”: [ + +{ + +“type”: “qemu”, + +“iso_url”: “/var/lib/libvirt/images/test.iso”, + +“iso_checksum”: “md5:3959597d89e8c20d58c4514a7cf3bc7f”, + +“output_directory”: “/var/lib/libvirt/images/iso-dir/test”, + +“disk_size”: “55G”, + +“headless”: “true”, + +“qemuargs”: [ + + [ + + "-m", + + "4096" + + ], + + [ + + "-smp", + + "2" + + ] +], + +“format”: “qcow2”, + +“shutdown_command”: “echo ‘nonrootuser’ | sudo -S shutdown -P now”, + +“accelerator”: “kvm”, + +“ssh_username”: “nonrootuser”, + +“ssh_password”: “********”, + +“ssh_timeout”: “20m”, + +“vm_name”: “test”, + +“net_device”: “virtio-net”, + +“disk_interface”: “virtio”, + +“http_directory”: “/home/azureuser/http”, + +“boot_wait”: “10s”, + +“boot_command”: [ + +“e inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/anaconda-ks.cfg” + +] + +} + +], + +“provisioners”: + +[ + +{ + + "type": "file", + + "source": "/home/azureuser/service_status_check.sh", + + "destination": "/tmp/service_status_check.sh" + +}, + +{ + + "type": "file", + + "source": "/home/azureuser/service_check.sh", + + "destination": "/tmp/service_check.sh" + +}, + +{ + + "type": "file", + + "source": "/home/azureuser/azure.sh", + + "destination": "/tmp/azure.sh" + +}, + +{ + + + "type": "file", + + "source": "/home/azureuser/params.cfg", + + "destination": "/tmp/params.cfg" + +}, + + + +{ + + "type": "shell" , + + + + "execute_command": "echo 'siedgerexuser' | {{.Vars}} sudo -E -S bash '{{.Path}}'", + + + + "inline": [ +"echo copying" , "cp /tmp/params.cfg /root/", + "sudo ls -lrt /root/params.cfg", + "sudo ls -lrt /opt/scripts/" + ], + + + "inline_shebang": "/bin/sh -x" + +}, + +{ + + "type": "shell", + + "pause_before": "5s", + "expect_disconnect": true , + + "inline": [ + "echo runningconfigurescript" , "sudo sh /opt/scripts/configure-env.sh" + + ] + +}, + +{ + + "type": "shell", + + "pause_before": "200s", + + "inline": [ + + "sudo sh /tmp/service_check.sh", + "sudo sh /tmp/azure.sh" + + ] + +} +] + +} + +It is working fine in rhel 7.9, but the same thing giving ssh timeout error in RHEL 8.4. + +But when i am creating guest vm with virt-install it is able to create a vm and i am able to see it in cockpit web ui, but when i initiate packer build with qemu plugin then giving ssh timeout error it is not visible in cockpit UI, so not able to see where the guest vm created get stuck. + +Can anyone please help me to fix this issue where why vm not able to come up and also why qemu created vm not visible in cockpit web ui as that will be really helpful while debugging +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/1237 b/results/classifier/gemma3:12b/kvm/1237 new file mode 100644 index 00000000..5ccfd4df --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1237 @@ -0,0 +1,2 @@ + +after OS upgrade usb-redir connection broken during migration and qemu-kvm: terminating on signal 15 diff --git a/results/classifier/gemma3:12b/kvm/1243287 b/results/classifier/gemma3:12b/kvm/1243287 new file mode 100644 index 00000000..19d70ec1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1243287 @@ -0,0 +1,27 @@ + +[KVM/QEMU][ARM][SAUCY] fails to boot cloud-image due to host kvm fail + +On booting the cloud image using qemu/kvm fails with the following error: + +Cloud-init v. 0.7.3 running 'init' at Thu, 03 Oct 2013 16:45:21 +0000. Up 360.78 seconds. 
+ci-info: +++++++++++++++++++++++++Net device info+++++++++++++++++++++++++ +ci-info: +--------+------+-----------+---------------+-------------------+ +ci-info: | Device | Up | Address | Mask | Hw-Address | +ci-info: +--------+------+-----------+---------------+-------------------+ +ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | . | +ci-info: | eth0 | True | 10.0.2.15 | 255.255.255.0 | 52:54:00:12:34:56 | +ci-info: +--------+------+-----------+---------------+-------------------+ +ci-info: ++++++++++++++++++++++++++++++Route info++++++++++++++++++++++++++++++ +ci-info: +-------+-------------+----------+---------------+-----------+-------+ +ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | +ci-info: +-------+-------------+----------+---------------+-----------+-------+ +ci-info: | 0 | 0.0.0.0 | 10.0.2.2 | 0.0.0.0 | eth0 | UG | +ci-info: | 1 | 10.0.2.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U | +ci-info: +-------+-------------+----------+---------------+-----------+-------+ +error: kvm run failed Function not implemented + +/usr/lib/python2.7/dist-packages/cloudinit/sources/DataSourceAltCloud.py assumes that dmidecode command is availabe (ie it assumes that system is x86) on arm systems there is no dmidecode command so host kvm fails with the message "error: kvm run failed Function not implemented" + +The patch makes get_cloud_type() function return with UNKNOWN for ARM systems. + +I was able to boot the cloud-image on ARM after applying this change. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1248959 b/results/classifier/gemma3:12b/kvm/1248959 new file mode 100644 index 00000000..bc97c50a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1248959 @@ -0,0 +1,82 @@ + +pdpe1gb flag is missing in guest running on Intel h/w + +I need to utilize 1G hugepages on my guest system. But this is not possible as long as there is no pdpe1gb support in guest system. The latest source code contains pdpe1gb support for AMD but not for Intel. + +Are there any obstacles that does not allow to implement it for modern Intel chips? + +My configuration: +Host: +------- +uname -a +Linux tripel.salab.cic.nsn-rdnet.net 2.6.32-358.14.1.el6.x86_64 #1 SMP Tue Jul 16 23:51:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux + +cat /etc/*-release +CentOS release 6.4 (Final) + +yum list installed | grep qemu +gpxe-roms-qemu.noarch 0.9.7-6.9.el6 @base +qemu-img.x86_64 2:0.12.1.2-2.355.0.1.el6.centos.5 +qemu-kvm.x86_64 2:0.12.1.2-2.355.0.1.el6.centos.5 + +cat /proc/cpuinfo +processor : 0 +vendor_id : GenuineIntel +cpu family : 6 +model : 45 +model name : Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz +stepping : 7 +cpu MHz : 2700.000 +cache size : 20480 KB +physical id : 0 +siblings : 16 +core id : 0 +cpu cores : 8 +apicid : 0 +initial apicid : 0 +fpu : yes +fpu_exception : yes +cpuid level : 13 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid +bogomips : 5387.09 +clflush size : 64 +cache_alignment : 64 +address sizes : 46 bits physical, 48 bits virtual + +/usr/libexec/qemu-kvm -cpu ? 
+Recognized CPUID flags: + f_edx: pbe ia64 tm ht ss sse2 sse fxsr mmx acpi ds clflush pn pse36 pat cmov mca pge mtrr sep apic cx8 mce pae msr tsc pse de vme fpu + f_ecx: hypervisor rdrand f16c avx osxsave xsave aes tsc-deadline popcnt movbe x2apic sse4.2|sse4_2 sse4.1|sse4_1 dca pcid pdcm xtpr cx16 fma cid ssse3 tm2 est smx vmx ds_cpl monitor dtes64 pclmulqdq|pclmuldq pni|sse3 + extf_edx: 3dnow 3dnowext lm|i64 rdtscp pdpe1gb fxsr_opt|ffxsr fxsr mmx mmxext nx|xd pse36 pat cmov mca pge mtrr syscall apic cx8 mce pae msr tsc pse de vme fpu + extf_ecx: perfctr_nb perfctr_core topoext tbm nodeid_msr tce fma4 lwp wdt skinit xop ibs osvw 3dnowprefetch misalignsse sse4a abm cr8legacy extapic svm cmp_legacy lahf_lm + +ps ax | grep qemu + 7197 ? Sl 0:15 /usr/libexec/qemu-kvm -name vladimir.AS-0 -S -M rhel6.4.0 -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -enable-kvm -m 8192 -mem-prealloc -mem-path /var/lib/hugetlbfs/pagesize-1GB/libvirt/qemu -smp 4,sockets=4,cores=1,threads=1 -uuid ec2d3c58-a7f0-fdbd-9de5-b547a5b3130f -nographic -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vladimir.AS-0.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -netdev tap,fd=28,id=hostnet0 -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:81:5b:df,bus=pci.0,addr=0x3,bootindex=1 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device pci-assign,host=02:00.0,id=hostdev0,configfd=29,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 + +Guest: +--------- +# uname -a +Linux AS-0 2.6.34.13-WR4.3.fp_x86_64_standard-00019-g052bb3e #1 SMP Wed May 8 12:21:02 EEST 2013 x86_64 x86_64 x86_64 GNU/Linux + +# cat /etc/*-release +Wind River Linux 4.3 glibc_cgl + +# cat /proc/cpuinfo +processor : 0 +vendor_id : GenuineIntel +cpu family : 6 +model : 42 +model name : Intel Xeon E312xx (Sandy Bridge) +stepping : 1 +cpu MHz : 2693.893 +cache size : 4096 KB +fpu : yes +fpu_exception : yes +cpuid level : 13 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx lm constant_tsc rep_good pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm xsaveopt +bogomips : 5387.78 +clflush size : 64 +cache_alignment : 64 +address sizes : 46 bits physical, 48 bits virtual \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1250360 b/results/classifier/gemma3:12b/kvm/1250360 new file mode 100644 index 00000000..73f4f290 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1250360 @@ -0,0 +1,35 @@ + +qcow2 image logical corruption after host crash + +Description of problem: +In case of power failure disk images that were active and created in qcow2 format can become logically corrupt so that they actually appear as unused (full of zeroes). +Data seems to be there, but at this moment i cannot find any reliable method to recover it. Should it be a raw image, a recovery path would be available, but a qcow2 image only presents zeroes once it gets corrupted. My understanding is that the blockmap of the image gets reset and the image is then assumed to be unused. 
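The remaining qcow2 mapping can be inspected from the host; a sketch follows (the path is a placeholder, and `qemu-img map` is assumed to be available, which the 0.12-era qemu-img used here most likely is not, so a newer build would be needed):

```sh
# Show which guest ranges the qcow2 metadata still maps to data and which read as zeroes.
qemu-img map --output=human /path/to/image.qcow2

# Consistency check of refcounts and L1/L2 tables (reported clean in this case).
qemu-img check /path/to/image.qcow2
```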
+My detailed setup : + +Kernel 2.6.32-358.18.1.el6.x86_64 +qemu-kvm-0.12.1.2-2.355.0.1.el6.centos.7.x86_64 +Used via libvirt libvirt-0.10.2-18.el6_4.14.x86_64 +The image was used from a NFS share (the nfs server did NOT crash and remained permanently active). + +qemu-img check finds no corruption; +qemu-img convert will fully convert the image to raw at a raw image full of zeroes. However, there is data in the file, and the storage backend was not restarted, inactivated during the incident. +I encountered this issue on two different machines, in both cases i was not able to recover the data. +Image was qcow2, thin provisioned, created like this : + qemu-img create -f qcow2 -o cluster_size=2M imagename.img + +While addressing the root cause in order to not have this issue repeat would be the ideal scenario, a temporary workaround to run on the affected qcow2 image to "patch" it and recover the data (eventually after a full fsck/recovery inside the guest) would also be good. Otherwise we are basically losing data on a large scale when using qcow2. + + + +Version-Release number of selected component (if applicable): +Kernel 2.6.32-358.18.1.el6.x86_64 +qemu-kvm-0.12.1.2-2.355.0.1.el6.centos.7.x86_64 +Used via libvirt libvirt-0.10.2-18.el6_4.14.x86_64 + +How reproducible: +I am not able (and don't have at the moment enough resources to try to manually reproduce it), but the probability of the issue seems quite high as this is the second case of such corruption in weeks. +Additional info: +I can privately provide an image displaying the corruption. + +The reported problem has actually two aspects : first is the cause that eventually produces this issue. +The second is the fact that once the logical corruption has occured, qemu-img check finds nothing wrong with the image - this is obviously wrong. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1251470 b/results/classifier/gemma3:12b/kvm/1251470 new file mode 100644 index 00000000..e89e552d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1251470 @@ -0,0 +1,19 @@ + +Guest not working in KVM mode but does in TCG mode + +Qemu: 1.5.0 (Debian 1.5.0+dfsg-3ubuntu5) +Host: Ubuntu 13.10 x86_64 (Intel Core i7-950) +Guest: FreeBSD 9.2 RELEASE x86_64 + +As initially reported here: https://www.redhat.com/archives/libvirt-users/2013-November/msg00066.html +I was told that this is a bug in Qemu. + +Basically this command works: +qemu-system-x86_64 -hda images/FreeBSD-9.2-RELEASE-amd64.img + +But this one doesn't: +qemu-system-x86_64 -machine accel=kvm -hda images/FreeBSD-9.2-RELEASE-amd64.img + +FreeBSD is stuck after the bootloader: +Booting... +CPU doesn't support long mode \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1254443 b/results/classifier/gemma3:12b/kvm/1254443 new file mode 100644 index 00000000..4d21dc27 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1254443 @@ -0,0 +1,6 @@ + +Periodic mode of LAPIC doesn't fire interrupts when using kvm + +It works fine when not using kvm and it does also work fine when using oneshot mode. + +Tested with qemu 1.6.1 (commit 62ecc3a0e3c77a4944c92a02dd7fae2ab1f2290d). 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1254940 b/results/classifier/gemma3:12b/kvm/1254940 new file mode 100644 index 00000000..2684b139 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1254940 @@ -0,0 +1,49 @@ + +qemu-KVM guest OS shows many ext3-fs errors after multiple forced shutdowns + +Hi: +I encountered some filesystem errors in a SLES guest on KVM. My system environment is: +HOST: + suse 10, the kernel version is 2.6.32.43 + Qemu-KVM 1.2 + Libvirt 1.0 +guest OS: + suse 10, the kernel version is 2.6.32.43 +VMs use a qcow2 disk. + +Description of problem: +I have 20+ VMs with qcow2 disks; these VMs have been forced to shut down by +"virsh destroy" many times during and after VM installation. +When these VMs reboot, dmesg shows an ext3-fs mount error on the /usr/local +partition /dev/vda3: + EXT3-fs warning: mounting fs with errors, running e2fsck is recommended +and when I wrote to partition /dev/vda3, some errors occurred in dmesg: +1. error (device vda3): ext3_free_blocks: Freeing blocks not in datazone - block += 1869619311, count = 1 error (device vda3): ext3_free_blocks_sb: bit already +cleared for block 2178152 error (device vda3): ext3_readdir: bad entry in +directory #1083501: +2. [347470.661893] attempt to access beyond end of device [347470.661896] vda3: +rw=0, want=6870892952, limit=41945715 [347470.661897] EXT3-fs error (device +vda3): ext3_free_branches: Read failure, inode=1083508, block=858861618 +3. EXT3-fs error (device vda3): ext3_new_block: block(4295028581) >= blocks +count(-1) - block_group = 1, es == ffff88021b6c7400 + +I suspect these fs errors are caused by the multiple forced shutdowns, but I can't +reproduce this bug now. + +Does anyone have an idea or suggestion to help me? + +Thanks in advance +Regards +Ben + +Reproducible: Always + +Steps to Reproduce: +I can't reproduce this bug now. + + +additional: +1. multiple forced shutdowns during and after the VM installation +2. VMs with qcow2 disks +3. different VMs show different dmesg errors from the error list above (1/2/3) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1256826 b/results/classifier/gemma3:12b/kvm/1256826 new file mode 100644 index 00000000..db85310f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1256826 @@ -0,0 +1,12 @@ + +INT instruction bug in Windows XP + +This bug is in -no-kvm mode. + +In Windows XP, IDT entries 2 and 8 are task gates. + +When an application uses INT 2 or INT 8 it causes a blue screen in XP. + +I found that it should cause a #GP, not generate a hardware interrupt. + +I also checked this with -enable-kvm and it works correctly. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1257352 b/results/classifier/gemma3:12b/kvm/1257352 new file mode 100644 index 00000000..8127ab2c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1257352 @@ -0,0 +1,32 @@ + +kvm hangs occasionally when switching out of the qemu console + +To recreate (although this does *NOT* fail most of the time, alas): + +1) press "ctrl-alt-2" to switch to the qemu console. +2) type, say, "sendkey ctrl-alt-f1" +3) press "ctrl-alt-1". + +Expected outcome: Switch to tty1 in the VM. + +Actual outcome: No switch to tty1 in the VM, and the qemu console is unresponsive to any keyboard input. + + +Rather a vague problem description I'm afraid, but this has happened to me 3 times recently. No crash and no excessive CPU usage is observed. + +I'll grab an strace when it happens again and attach...
+ +ProblemType: Bug +DistroRelease: Ubuntu 14.04 +Package: qemu-system-x86 1.6.0+dfsg-2ubuntu4 +ProcVersionSignature: Ubuntu 3.12.0-4.12-generic 3.12.1 +Uname: Linux 3.12.0-4-generic i686 +NonfreeKernelModules: nvidia +ApportVersion: 2.12.7-0ubuntu1 +Architecture: i386 +CurrentDesktop: Unity +Date: Tue Dec 3 15:41:40 2013 +InstallationDate: Installed on 2010-10-21 (1139 days ago) +InstallationMedia: Ubuntu 10.10 "Maverick Meerkat" - Release i386 (20101007) +SourcePackage: qemu +UpgradeStatus: Upgraded to trusty on 2013-11-01 (31 days ago) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1268279 b/results/classifier/gemma3:12b/kvm/1268279 new file mode 100644 index 00000000..5c1e8e1b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1268279 @@ -0,0 +1,144 @@ + +Windows 7 x86 does not start on 1.7.50 from git + +I have "Debian 7.2 x64". + +Install last QEMU from git: + +aptitude install git gcc make autoconf libglib2.0-dev libcurl4-gnutls-dev libpixman-1-dev libcap-dev libaio-dev libcap-ng-dev libjpeg8-dev libpng12-dev libssh2-1-dev uuid-dev + +#cd /usr/src +#git clone git://git.qemu.org/qemu.git +#cd qemu +# ./configure --prefix=/usr --sysconfdir=/etc --target-list=x86_64-softmmu --enable-kvm +Install prefix /usr +BIOS directory /usr/share/qemu +binary directory /usr/bin +library directory /usr/lib +libexec directory /usr/libexec +include directory /usr/include +config directory /etc +local state directory /usr/var +Manual directory /usr/share/man +ELF interp prefix /usr/gnemul/qemu-%M +Source path /usr/src/qemu +C compiler cc +Host C compiler cc +C++ compiler c++ +Objective-C compiler cc +ARFLAGS rv +CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g +QEMU_CFLAGS -Werror -fPIE -DPIE -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -Wendif-labels -Wmissing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-all -I/usr/include/p11-kit-1 -I/usr/include/libpng12 -I/usr/include/pixman-1 +LDFLAGS -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g +make make +install install +python python -B +smbd /usr/sbin/smbd +host CPU x86_64 +host big endian no +target list x86_64-softmmu +tcg debug enabled no +gprof enabled no +sparse enabled no +strip binaries yes +profiler no +static build no +-Werror enabled yes +pixman system +SDL support yes +GTK support no +curses support no +curl support yes +mingw32 support no +Audio drivers oss +Block whitelist (rw) +Block whitelist (ro) +VirtFS support yes +VNC support yes +VNC TLS support yes +VNC SASL support no +VNC JPEG support yes +VNC PNG support yes +VNC WS support yes +xen support no +brlapi support no +bluez support no +Documentation yes +GUEST_BASE yes +PIE yes +vde support no +netmap support no +Linux AIO support yes +ATTR/XATTR support yes +Install blobs yes +KVM support yes +RDMA support no +TCG interpreter no +fdt support no +preadv support yes +fdatasync yes +madvise yes +posix_madvise yes +sigev_thread_id yes +uuid support yes +libcap-ng support yes +vhost-net support yes +vhost-scsi support yes +Trace backend nop +Trace output file trace-<pid> +spice support no (/) +rbd support no +xfsctl support no +nss used no +libusb no +usb net redir no +GLX support yes +libiscsi support no +build guest agent yes +QGA VSS support no +seccomp support no +coroutine backend 
ucontext +coroutine pool yes +GlusterFS support no +virtio-blk-data-plane yes +gcov gcov +gcov enabled no +TPM support no +libssh2 support yes +TPM passthrough no +QOM debugging yes +vhdx yes +#make && make install + +QEMU is successfully builded and installed: + +# /usr/bin/qemu-system-x86_64 --version +QEMU emulator version 1.7.50, Copyright (c) 2003-2008 Fabrice Bellard + +Create virtual HDD: + +# qemu-img create -f qcow2 /kvm/vm/test.img 50G +Formatting '/kvm/vm/test.img', fmt=qcow2 size=53687091200 encryption=off cluster_size=65536 lazy_refcounts=off + +Start virtual machine: + + # /usr/bin/qemu-system-x86_64 -cpu qemu64 -M pc -smp 1 -m 1024 -monitor tcp:127.0.0.1:4444,server,nowait -drive file=/kvm/vm/test.img,cache=writeback,aio=native -boot order=dc,menu=on -enable-kvm -vnc 127.0.0.1:14 -localtime -no-hpet -rtc-td-hack -global kvm-pit.lost_tick_policy=discard -daemonize -usb -device usb-tablet,id=input0 -runas kvm + +Connect ISO image: + +# nc 127.0.0.1 4444 +QEMU 1.7.50 monitor - type 'help' for more information +(qemu) change ide1-cd0 http://iso.vpsnow.ru/windows/7/ru_windows_7_professional_with_sp1_x86_dvd.iso +change ide1-cd0 http://someiso.host.com/ru_windows_7_professional_with_sp1_x86_dvd.iso +(qemu) + +Open NVC console to VM, reboot and boot (F12) from connected ISO. +Windows installation start and successfully goes to first reboot. + +========================================== +After reboot it hangs on "Starting windows" with 100% load on one of CPU cores for many hours. +========================================== + +Other Windows OS (XP, Server 2003, Server 2008 R2) are installed and work good. + +Is this a QEMU BUG? Could you please try reproduce it? \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1268671 b/results/classifier/gemma3:12b/kvm/1268671 new file mode 100644 index 00000000..3d0ec29e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1268671 @@ -0,0 +1,45 @@ + +CentOS guest crashing due to assertion failure in qemu-char.c + +Here is the log in /var/log/libvirt/qemu/centos_heavy.log + +qemu-kvm: /builddir/build/BUILD/qemu-kvm-0.12.1.2/qemu-char.c:630: io_watch_poll_finalize: Assertion `iwp->src == ((void *)0)' failed. +2014-01-13 16:50:31.576+0000: shutting down + +The code it's failing the assertion on has an interesting comment: + + static void io_watch_poll_finalize(GSource *source) + { + /* Due to a glib bug, removing the last reference to a source + * inside a finalize callback causes recursive locking (and a + * deadlock). This is not a problem inside other callbacks, + * including dispatch callbacks, so we call io_remove_watch_poll + * to remove this source. A t this point, iwp->src must + * be NULL, or we would leak it. + * + * This would be solved much more elegantly by child sources, + * but we support older glib versions that do not have them. 
+ */ + IOWatchPoll *iwp = io_watch_poll_from_source(source); + assert(iwp->src == NULL); + } + +------ +CPU Info: + +http://pastebin.com/U7MrzFxK + +-------- + +Relevant RPM versions: + +qemu-kvm-0.12.1.2-2.415.el6_5.3.x86_64 +libvirt-0.10.2-29.el6_5.2.x86_64 + +-------- + +Domain config: + +http://pastebin.com/Nf2VsER8 + +(Note the use of the vmchannels; I believe this is playing a part in this crash) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1270397 b/results/classifier/gemma3:12b/kvm/1270397 new file mode 100644 index 00000000..822ee2c6 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1270397 @@ -0,0 +1,66 @@ + +Systemd segfaults after live migration + +After live migrating my virtual machine it panics because of segmentation fault in systemd (see attachment). + +Software used (on archlinux): +qemu 1.7.0-1 +libvirt 1.2.0-1 +linux 3.12.7-1 + +This is configuration of this VM: +<domain type='kvm'> + <name>vbroker</name> + <uuid>455c9c62-10a6-11e3-a7f2-441ea153aac8</uuid> + <description>455c9c62-10a6-11e3-a7f2-441ea153aac8</description> + <memory unit='KiB'>6291456</memory> + <currentMemory unit='KiB'>6291456</currentMemory> + <vcpu placement='static'>4</vcpu> + <os> + <type arch='x86_64' machine='pc-i440fx-1.7'>hvm</type> + <boot dev='cdrom'/> + <bootmenu enable='no'/> + </os> + <features> + <acpi/> + <apic/> + </features> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <devices> + <emulator>/usr/bin/qemu-kvm</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2' cache='none'/> + <source file='/var/lib/libvirt/images/archipel/drives/455c9c62-10a6-11e3-a7f2-441ea153aac8/vbroker.qcow2'/> + <target dev='vda' bus='virtio'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </disk> + <controller type='usb' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> + </controller> + <controller type='pci' index='0' model='pci-root'/> + <interface type='bridge'> + <mac address='de:ad:fb:8e:17:c2'/> + <source bridge='br0'/> + <model type='virtio'/> + <filterref filter='clean-traffic'> + <parameter name='IP' value='10.0.0.2'/> + </filterref> + <bandwidth> + </bandwidth> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <input type='tablet' bus='usb'/> + <input type='mouse' bus='ps2'/> + <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/> + <video> + <model type='cirrus' vram='9216' heads='1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + <memballoon model='virtio'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> + </memballoon> + </devices> +</domain> \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1278 b/results/classifier/gemma3:12b/kvm/1278 new file mode 100644 index 00000000..01440bc9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1278 @@ -0,0 +1,7 @@ + +Error creating encrypted qcow2 disk using qemu-img +Description of problem: +Error creating encrypted qcow2 disk using qemu-img:No crypto library supporting PBKDF in this build: Function not implemented + +Steps to reproduce: +1.qemu-img create --object secret,id=sec0,data=123456 -f qcow2 -o encrypt.format=luks,encrypt.key-secret=sec0 base.qcow2 1G diff --git a/results/classifier/gemma3:12b/kvm/1279500 b/results/classifier/gemma3:12b/kvm/1279500 new file mode 100644 index 
00000000..62736da7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1279500 @@ -0,0 +1,93 @@ + +system_powerdown causes SMP OpenBSD guest to freeze + +system_powerdown causes an SMP OpenBSD guest to freeze. I can reproduce it with the following systems/versions: + + - Debian 6: QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5) + - Fedora 20: + qemu-system-x86-1.6.1 (from Fedora repository) + qemu-1.7.0 (latest release version) + qemu-1.7.50 (latest development snapshot, "git cloned" today, 20140212) + +all of the above hosts are running x86_64 linux. + +The first OpenBSD version that I ran as a VM, v5.1, experienced the problem. All subsequent versions experience the problem. The above tests were performed using OpenBSD v5.4 (amd64). + +I will open an OpenBSD bug report for this problem as well, and update this report with the OpenBSD bug ID. + +There's an interesting RedHat bug report concerning this problem: + URL: https://bugzilla.redhat.com/show_bug.cgi?id=508801#c34 + +Here an excerpt: +-snip- +Gleb Natapov 2009-12-23 10:37:44 EST + +I posted patch to provide correct PCI irq routing info in mptable to kvm +mailing list. It works for all devices except for SCI interrupt. BIOS +programs SCI interrupt to be 9 as spec requires, but OpenBSD thinks that +it is smarter and moves it to interrupts 10. Qemu will still send it on +vector 9 and OpenBSD will enter the same infinity recursion. This can +be triggered by issuing system_powerdown on qemu monitor. +-snip- + +Michael Tokarev reported this problem on the kvm mailing list in 2011: + URL: http://www.spinics.net/lists/kvm/msg51311.html + +I compiled qemu as follows: +-snip- +cd qemu-src-dir +mkdir -p bin/native +cd bin/native +../../configure \ + --prefix=/usr/local/qemu-dev-snapshot-20140212 \ + --target-list=x86_64-softmmu \ + --enable-kvm \ + --enable-spice \ + --with-gtkabi="3.0" \ + --audio-drv-list=pa,sdl,alsa,oss \ + --extra-cflags='-I/usr/include/SDL' +-snip- + +I'm running OpenBSD with the following command: +-snip- +#!/bin/bash + +DEF=/usr/bin/qemu-system-x86_64 +QEMU_LATEST=/usr/local/qemu-1.7.0/bin/qemu-system-x86_64 +QEMU_DEV=/usr/local/qemu-dev-snapshot-20140212/bin/qemu-system-x86_64 + +$QEMU_DEV \ + -machine accel=kvm \ + -name obsdtest-v54 \ + -S \ + -machine pc-i440fx-1.6,accel=kvm,usb=off \ + -boot c \ + -m 2048 \ + -realtime mlock=off \ + -smp 2,sockets=2,cores=1,threads=1 \ + -uuid 8b685793-2510-473e-b97e-822a4cf2fbca \ + -no-user-config \ + -monitor stdio \ + -rtc base=utc,driftfix=slew \ + -global kvm-pit.lost_tick_policy=discard \ + -no-hpet \ + -drive file=/guest_images/obsdtest_v54.raw,if=none,id=drive-virtio-disk0,format=raw,cache=none \ + -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \ + -drive if=none,id=drive-ide0-0-0,readonly=on,format=raw \ + -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \ + -chardev pty,id=charserial0 \ + -device isa-serial,chardev=charserial0,id=serial0 \ + -k en-us \ + -device cirrus-vga,id=video0,bus=pci.0,addr=0x3 \ + -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \ + -net nic \ + -net user +-snip- + +The OpenBSD disk image I used for testing is 143MB compressed, 10G uncompressed. It can be found here: + + http://www.spielwiese.de/OpenBSD/obsd54.raw.7z + +The root password is "x". 
+ +Rob Urban \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1288259 b/results/classifier/gemma3:12b/kvm/1288259 new file mode 100644 index 00000000..d0bb181b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1288259 @@ -0,0 +1,40 @@ + +KVM vms are paused and cannot be deleted due to hardware error 0x0 + +Upon creation of instances via OpenStack nova api instances got stuck in spawning state. Then, after trying to delete instances they got stuck in deleting state. After investigation the following error was found: + +KVM: entry failed, hardware error 0x0 +EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000623 +ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000 +EIP=0000fff0 EFL=00000002 [-------] CPL=3 II=0 A20=1 SMM=0 HLT=0 +ES =0000 00000000 0000ffff 00009300 +CS =f000 000f0000 0000ffff 0000f300 +SS =0000 00000000 0000ffff 0000f300 +DS =0000 00000000 0000ffff 00009300 +FS =0000 00000000 0000ffff 00009300 +GS =0000 00000000 0000ffff 00009300 +LDT=0000 00000000 0000ffff 00008200 +TR =0000 00000000 0000ffff 00008b00 +GDT= 00000000 0000ffff +IDT= 00000000 0000ffff +CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=28 95 66 ba 01 4a 03 00 66 89 d8 66 5b 66 5e e9 15 79 66 c3 <ea> 5b e0 00 f0 30 36 2f 32 33 2f 39 39 00 fc 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 + +All instances were in paused state: +root@node-7:~# virsh list +setlocale: No such file or directory + Id Name State +---------------------------------------------------- + 4 instance-00000004 paused + 5 instance-00000005 paused + 6 instance-00000006 paused + 7 instance-00000007 paused + 8 instance-00000008 paused + 9 instance-00000009 paused + +The only way to delete VM is to reset it and then resume it. After this, VM is deleted properly. +OpenStack version: Havana on Ubuntu 12.04 +KVM version: QEMU emulator version 1.2.0 (qemu-kvm-1.2.0+noroms-0ubuntu7.12.10, Debian), Copyright (c) 2003-2008 Fabrice Bellard \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1294227 b/results/classifier/gemma3:12b/kvm/1294227 new file mode 100644 index 00000000..cdab1ebf --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1294227 @@ -0,0 +1,15 @@ + +migration wrong handling of KVM_GET_DIRTY_LOG ioctl + +In the code below kvm_vm_ioctl(...) can return --errno != -1 from ioctl call, but return only checks for -1. +Found during KVM-ARM migration which apperead to go through but was actually failing getting +memslot dirty bitmap. + +static int kvm_physical_sync_dirty_bitmap(....) +{ + .... + if(kvm_vm_ioctl(s, KVM_GET_DIRTY_LOG, &d) == -1) { + - err out + } + ... continue +} \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1297218 b/results/classifier/gemma3:12b/kvm/1297218 new file mode 100644 index 00000000..41b7599c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1297218 @@ -0,0 +1,21 @@ + +guest hangs after live migration due to tsc jump + +We have two identical Ubuntu servers running libvirt/kvm/qemu, sharing a Gluster filesystem. Guests can be live migrated between them. However, live migration often leads to the guest being stuck at 100% for a while. In that case, the dmesg output for such a guest will show (once it recovers): Clocksource tsc unstable (delta = 662463064082 ns). In this particular example, a guest was migrated and only after 11 minutes (662 seconds) did it become responsive again. 
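For anyone reproducing this, the guest-side checks below are a generic sketch (standard Linux sysfs paths and dmesg, not commands taken from the report) for confirming which clocksource the guest kernel is using and whether it has flagged the TSC as unstable:

```sh
# Inside the guest: current and available clocksources.
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource

# Kernel messages such as "Clocksource tsc unstable (delta = ...)".
dmesg | grep -i clocksource
```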
+ +It seems that newly booted guests do not suffer from this problem; they can be migrated back and forth at will. After a day or so, the problem becomes apparent. It also seems that migrating from server A to server B causes many more problems than going from B back to A. If necessary, I can do more measurements to qualify these observations. + +The VM servers run Ubuntu 13.04 with these packages: +Kernel: 3.8.0-35-generic x86_64 +Libvirt: 1.0.2 +Qemu: 1.4.0 +Gluster-fs: 3.4.2 (libvirt accesses the images via the filesystem, not using libgfapi yet as the Ubuntu libvirt is not linked against libgfapi). +The interconnect between both machines (both for migration and gluster) is 10GbE. +Both servers are synced to NTP and well within 1ms of one another. + +Guests are either Ubuntu 13.04 or 13.10. + +On the guests, the current_clocksource is kvm-clock. +The XML definition of the guests only contains: <clock offset='utc'/> + +Now as far as I've read in the documentation of kvm-clock, it specifically supports live migrations, so I'm a bit surprised at these problems. There isn't all that much information to find on this issue, although I have found postings by others that seem to have run into the same issues, but without a solution. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1305402 b/results/classifier/gemma3:12b/kvm/1305402 new file mode 100644 index 00000000..e3f5a71a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1305402 @@ -0,0 +1,34 @@ + +libvirt fails to start VirtualMachines + +I've created several kvm-based machines with the 'trusty' designation using virtual machine manager. They have operated well over the last 4 days without issue. I did an apt-get upgrade, and qemu was in the list of updates. + +After upgrading, I am unable to start any of the provisioned virtual machines with the following error output: + +virsh # start node2 +error: Failed to start domain node2 +error: internal error: process exited while connecting to monitor: qemu-system-x86_64: -machine trusty,accel=kvm,usb=off: Unsupported machine type +Use -machine help to list supported machines! + + +virsh # start node3 +error: Failed to start domain node3 +error: internal error: process exited while connecting to monitor: qemu-system-x86_64: -machine trusty,accel=kvm,usb=off: Unsupported machine type +Use -machine help to list supported machines! + + + +$ dpkg -l | grep kvm +ii qemu-kvm 2.0.0~rc1+dfsg-0ubuntu3 amd64 QEMU Full virtualization on x86 hardware (transitional package) + +Log snippet from vm 'media' that was verified working, and fails to start after the upgrade.
+ +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name media -S -machine trusty,accel=kvm,usb=off -m 1548 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 60b20f7b-6d20-bcb3-f4fc-808a9b2fe0d0 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/media.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot menu=off,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/media.img,if=none,id=drive-virtio-disk0,format=raw -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/home/charles/iso/ubuntu-desktop-12.04.4-amd64.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a0:69:d9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 +char device redirected to /dev/pts/13 (label charserial0) +qemu: terminating on signal 15 from pid 31688 +2014-04-10 03:36:54.593+0000: shutting down +2014-04-10 03:59:25.487+0000: starting up +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name media -S -machine trusty,accel=kvm,usb=off -m 1548 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 60b20f7b-6d20-bcb3-f4fc-808a9b2fe0d0 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/media.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot menu=off,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/media.img,if=none,id=drive-virtio-disk0,format=raw -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/home/charles/iso/ubuntu-desktop-12.04.4-amd64.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:a0:69:d9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 +qemu-system-x86_64: -machine trusty,accel=kvm,usb=off: Unsupported machine type +Use -machine help to list supported machines! +2014-04-10 03:59:25.696+0000: shutting down \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1307656 b/results/classifier/gemma3:12b/kvm/1307656 new file mode 100644 index 00000000..d2ffa269 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1307656 @@ -0,0 +1,16 @@ + +qemu segfault when starting virt-manager + +libvirtd 1.2.3 +virt-manager 1.0.1 +qemu 1.7.92 (2.0.0-rc2) + +1. Initially virt-manager is NOT running + +2. 
I start a VM manually using "virsh start ...", causing a qemu instance to be run as + +/usr/bin/qemu-system-x86_64 -machine accel=kvm -name Zeus_Virtualized -S -machine pc-i440fx-2.0,accel=kvm,usb=off -cpu Penryn -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 6384b4d2-1c58-4595-bce2-b248230e2c9c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/Zeus_Virtualized.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/home/pief/libvirt VMs/Zeus_Virtualized_USBStick.qcow2,if=none,id=drive-usb-disk0,format=qcow2 -device usb-storage,drive=drive-usb-disk0,id=usb-disk0,removable=off -drive file=/home/pief/isos/openSUSE-13.1-DVD-x86_64.iso,if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/home/pief/libvirt VMs/Zeus_Virtualized_HDD1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/home/pief/libvirt VMs/Zeus_Virtualized_HDD2.qcow2,if=none,id=drive-virtio-disk1,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=/home/pief/libvirt VMs/Zeus_Virtualized_SSD.qcow2,if=none,id=drive-virtio-disk2,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=2 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -vnc 127.0.0.1:0 -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x3 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -chardev spicevmc,id=charredir2,name=usbredir -device usb-redir,chardev=charredir2,id=redir2 -chardev spicevmc,id=charredir3,name=usbredir -device usb-redir,chardev=charredir3,id=redir3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x9 + +3. I start virt-manager (just starting it, nothing special). + +4. The qemu instance segfaults with the attached backtrace. 
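For reference, a multi-thread backtrace like the attached one can be captured from the core file once core dumps are enabled; a minimal sketch, assuming the qemu binary carries debug symbols and using an illustrative core-file path:

```
# on the host, before reproducing the crash
ulimit -c unlimited
# after the segfault, dump backtraces for every thread from the core file
gdb -batch -ex 'thread apply all bt' /usr/bin/qemu-system-x86_64 /path/to/core
```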
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/131 b/results/classifier/gemma3:12b/kvm/131 new file mode 100644 index 00000000..786c252f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/131 @@ -0,0 +1,2 @@ + +QEMU's default msrs handling causes Windows 10 64 bit to crash diff --git a/results/classifier/gemma3:12b/kvm/1312668 b/results/classifier/gemma3:12b/kvm/1312668 new file mode 100644 index 00000000..858cb9ef --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1312668 @@ -0,0 +1,54 @@ + +x86 cpu nx feature: guest reboots after migrate exec + +Using instruction on +http://www.linux-kvm.org/page/Migration +I save VM state to external file and try load it, but VM starts, shows saved screen and reboots immediatly. + +Cmdline for vm state saving: + +$ sudo ./i386-softmmu/qemu-system-i386 -machine accel=kvm,kernel_irqchip=on -enable-kvm -m 512 -hda image.raw -vga std -net none -M pc -monitor stdio -cpu SandyBridge +(or -cpu "n270" , or "kvm32,+sse2,+pae,+nx") + +Monitor cmd: +(qemu) stop +(qemu) migrate_set_speed 4095m +(qemu) migrate "exec:gzip -c > STATEFILE.gz" +(qemu) q + +Cmdline for vm state loading: + +$ sudo ./i386-softmmu/qemu-system-i386 -machine accel=kvm,kernel_irqchip=on -enable-kvm -m 512 -hda image.raw -vga std -net none -M pc -monitor stdio -cpu SandyBridge -incoming "exec: gzip -c -d STATEFILE.gz" +(or -cpu "n270" , or "kvm32,+sse2,+pae,+nx") + +If I do the same without NX cpu feature (-cpu option "n270,-nx" / "SandyBridge,-nx" / "kvm32,+pae,+sse2") or on qemu-system-x86_64, VM save/load works correctly. + +Log kvm-all.c, DEBUG_KVM=y: + +QEMU 2.0.0 monitor - type 'help' for more information +(qemu) kvm_init_vcpu +...handle_io.../...handle_mmio... +kvm_cpu_exec() +shutdown +kvm_cpu_exec() +interrupt exit requested +io window exit +kvm_cpu_exec() + +Host: + + $ lsb_release -rd + Description: Ubuntu 12.04.4 LTS + Release: 12.04 + + $ uname -a + Linux <username> 3.8.0-38-generic #56~precise1 SMP Tue Apr 22 12:46:44 MSK 2014 x86_64 x86_64 x86_64 GNU/Linux + +Guest: + 1. Ubuntu 12.04 32bit + 2. WIndows 8 32bit + +Qemu: v2.0.0 +commit a9e8aeb3755bccb7b51174adcf4a3fc427e0d147 +Author: Peter Maydell <email address hidden> +Date: Thu Apr 17 13:41:45 2014 +0100 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1318091 b/results/classifier/gemma3:12b/kvm/1318091 new file mode 100644 index 00000000..0dee74c9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1318091 @@ -0,0 +1,40 @@ + +Perfctr MSRs not available to Guest OS on AMD Phenom II + +The AMD Phenom(tm) II X4 965 Processor (family 16, model 4, stepping 3) has the 4 architecturally supported perf counters at MSRs. The selectors are c001000-c001003, and the counters are c001004-c001007. I've verified that the MSRs are there and working by manually setting the MSRs with msr-tools to count cycles. + +The processor does not support the extended perfctr or the perfctr_nb. These are in cpuid leaf 8000_0001. Qemu also sees that these cpuid flags are not set, when I try launching with -cpu host,perfctr_core,check. However, this flag is only for the extended perfctr MSRs, which also happen to map the original four counters at c0010200. + +When I run a Guest OS, that OS is unable to use the perf counter registers from c001000-7. rdmsr and wrmsr complete, but the results are always 0. By contrast, a wrmsr to one of the c0010200 registers causes a general protection fault (as it should, since those aren't supported). 
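For comparison, the host-side verification with msr-tools mentioned above looks roughly like this; a sketch assuming the msr module is loaded and using the standard AMD selector/counter MSRs 0xc0010000-0xc0010003 and 0xc0010004-0xc0010007 (the event encoding is an illustrative choice, not taken from the report):

```
modprobe msr
# PERF_EVT_SEL0: event 0x76 (CPU clocks not halted), count user+kernel, enable bit 22
wrmsr -p 0 0xc0010000 0x430076
sleep 1
rdmsr -p 0 0xc0010004   # advances on bare metal; inside the guest it reads back 0
```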
+ +Kernel: 3.14.0-gentoo +Qemu: 2.0.0 (gentoo) and also with 2.0.50 (commit 06b4f00d5) +Qemu command: qemu-system-x86_64 -enable-kvm -cpu host -smp 8 -m 1024 -nographic -monitor /dev/pts/4 mnt/hdd.img +cat /proc/cpuinfo: +processor : 3 +vendor_id : AuthenticAMD +cpu family : 16 +model : 4 +model name : AMD Phenom(tm) II X4 965 Processor +stepping : 3 +cpu MHz : 800.000 +cache size : 512 KB +physical id : 0 +siblings : 4 +core id : 3 +cpu cores : 4 +apicid : 3 +initial apicid : 3 +fpu : yes +fpu_exception : yes +cpuid level : 5 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt hw_pstate npt lbrv svm_lock nrip_save +bogomips : 6803.79 +TLB size : 1024 4K pages +clflush size : 64 +cache_alignment : 64 +address sizes : 48 bits physical, 48 bits virtual +power management: ts ttp tm stc 100mhzsteps hwpstate + +thanks. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1318746 b/results/classifier/gemma3:12b/kvm/1318746 new file mode 100644 index 00000000..21d0ce4a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1318746 @@ -0,0 +1,27 @@ + +qemu Windows 7 BSOD when using hv-time + +When I use hv-time sub option and run CPU-Z or 3DMark (Physics Test) the Windiws 7 guest stops with BSOD (SYSTEM_SERVICE_EXCEPTION). It can be easily reproduced by running CPU-Z. It will fail every second or third time you execute CPU-Z and fail during "PCI detection". If I disable hv-time I can run CPU-Z and 3DMark (Physics Test) without any problems. QEMU was called with the following options: + +/usr/bin/taskset -c 4,5,6,7 /usr/bin/qemu-system-x86_64 \ + -machine q35,accel=kvm,kernel_irqchip=on \ + -enable-kvm \ + -serial none \ + -parallel none \ + -monitor none \ + -vga std \ + -boot order=dc \ + -cpu host,hv-time \ + -smp cores=4,threads=1,sockets=1 \ + -m 8192 \ + -k de \ + -rtc base=localtime \ + -drive file=/srv/kvm/maggie-drive0.img,id=drive0,if=none,cache=none,aio=threads \ + -mon chardev=monitor0 \ + -chardev socket,id=monitor0,path=/tmp/maggie.monitor,nowait,server \ + -netdev tap,id=net0,vhost=on,helper=/usr/lib/qemu/qemu-bridge-helper \ + -device virtio-net-pci,netdev=net0,mac=00:00:00:02:01:01 \ + -device virtio-blk-pci,drive=drive0,ioeventfd=on \ + -device ioh3420,bus=pcie.0,id=pcie0,port=1,chassis=1,multifunction=on + +I've removed the VFIO PCI passthrough line of my GPU to make reproduction easier. In any case it happens in both scenarios so VGA passthrough is not the root cause. It happens with linux-3.15-rc5 and linux-3.14.3 with patch from commit mentioned at https://bugzilla.kernel.org/show_bug.cgi?id=73721#c3 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1329956 b/results/classifier/gemma3:12b/kvm/1329956 new file mode 100644 index 00000000..7b720d6a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1329956 @@ -0,0 +1,21 @@ + +multi-core FreeBSD guest hangs after warm reboot + +On some Linux KVM hosts in our environment, FreeBSD guests fail to reboot properly if they have more than one CPU (socket, core, and/or thread). They will boot fine the first time, but after issuing a "reboot" command via the OS the guest starts to boot but hangs during SMP initialization. Fully shutting down and restarting the guest works in all cases. 
+ +The only meaningful difference between hosts with the problem and those without is the CPU. Hosts with Xeon E5-26xx v2 processors have the problem, including at least the "Intel(R) Xeon(R) CPU E5-2667 v2" and the "Intel(R) Xeon(R) CPU E5-2650 v2". +Hosts with any other CPU, including "Intel(R) Xeon(R) CPU E5-2650 0", "Intel(R) Xeon(R) CPU E5-2620 0", or "AMD Opteron(TM) Processor 6274", do not have the problem. Note the "v2" in the names of the problematic CPUs. + +On hosts with a "v2" Xeon, I can reproduce the problem under Linux kernel 3.10 or 3.12 and Qemu 1.7.0 or 2.0.0. + +The problem occurs with all currently-supported versions of FreeBSD, including 8.4, 9.2, 10.0 and 11-CURRENT. + +On a Linux KVM host with a "v2" Xeon, this command line is adequate to reproduce the problem: + +/usr/bin/qemu-system-x86_64 -machine accel=kvm -name bsdtest -m 512 -smp 2,sockets=1,cores=1,threads=2 -drive file=./20140613_FreeBSD_9.2-RELEASE_ufs.qcow2,if=none,id=drive0,format=qcow2 -device virtio-blk-pci,scsi=off,drive=drive0 -vnc 0.0.0.0:0 -net none + +I have tried many variations including different models of -machine and -cpu for the guest with no visible difference. + +A native FreeBSD installation on a host with a "v2" Xeon does not have the problem, nor does a paravirtualized FreeBSD guest under bhyve (the BSD legacy-free hypervisor) using the same FreeBSD disk images as on the Linux hosts. So it seems unlikely the cause is on the FreeBSD side of things. + +I would greatly appreciate any feedback or developer attention to this. I am happy to provide additional details, test patches, etc. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1333651 b/results/classifier/gemma3:12b/kvm/1333651 new file mode 100644 index 00000000..56735a3d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1333651 @@ -0,0 +1,591 @@ + +qemu-2.0 occasionally segfaults with Windows 2012R2 + +This is with qemu-2.0 (KVM), linux kernel 3.10.35, using qcow2 images directly accessed via libgfapi (glusterfs-3.4.2). +Such a segfault happens roughly once every 2 weeks and only for VMs with high network and/or disk activity. + +The guest OS with which we could reproduce this was always Windows Server 2012R2 using virtio-win-0.1-75. + +vhost-net is active, the disks are attached as virtio-blk devices (see also the XML definition from libvirt further below) + +Following are the backtraces for all threads: + +(gdb) threads +Undefined command: "threads". Try "help".
+(gdb) info threads + Id Target Id Frame + 32 Thread 0x7f5c1affd700 (LWP 16783) 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 + 31 Thread 0x7f5bfe2fc700 (LWP 19906) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 30 Thread 0x7f5c45f87880 (LWP 16769) 0x00007f5c42637ff6 in __GI_ppoll (fds=0x7f5c48bcd750, nfds=74, + timeout=0x7fff92d94e60, timeout@entry=0x7fff92d94f20, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:57 + 29 Thread 0x7f5c1bfff700 (LWP 16781) 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 + 28 Thread 0x7f5c28de1700 (LWP 16780) 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 + 27 Thread 0x7f5c1a7fc700 (LWP 16784) __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 + 26 Thread 0x7f5c295e2700 (LWP 16779) __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 + 25 Thread 0x7f57b2ffd700 (LWP 18170) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 24 Thread 0x7f57c97fa700 (LWP 31326) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 23 Thread 0x7f57b3fff700 (LWP 5016) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 22 Thread 0x7f57c9ffb700 (LWP 25116) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 21 Thread 0x7f5c31f7c700 (LWP 16776) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 20 Thread 0x7f5c1b7fe700 (LWP 16782) __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 + 19 Thread 0x7f57ca7fc700 (LWP 24029) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 18 Thread 0x7f57cbfff700 (LWP 19985) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 17 Thread 0x7f57c8ff9700 (LWP 31327) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 16 Thread 0x7f5bfcefa700 (LWP 19924) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 15 Thread 0x7f5c30ee7700 (LWP 16777) 0x00007f5c426421b3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81 + 14 Thread 0x7f5c3dc17700 (LWP 16772) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 13 Thread 0x7f5bfd6fb700 (LWP 19907) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 12 Thread 0x7f5c18bff700 (LWP 16788) 0x00007f5c42637ded in poll () at ../sysdeps/unix/syscall-template.S:81 + 11 Thread 0x7f5c19ffb700 (LWP 16785) 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 + 10 Thread 0x7f57caffd700 (LWP 20235) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 9 Thread 0x7f5c2bfff700 (LWP 16778) 0x00007f5c4290e43d in nanosleep () at ../sysdeps/unix/syscall-template.S:81 + 8 Thread 0x7f5bfecfd700 (LWP 17854) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 7 Thread 0x7f5c3e418700 (LWP 16771) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 6 Thread 0x7f57b37fe700 (LWP 18169) pthread_cond_timedwait () + at 
../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 5 Thread 0x7f5c3bb57700 (LWP 16774) 0x00007f5c4290e43d in nanosleep () at ../sysdeps/unix/syscall-template.S:81 + 4 Thread 0x7f5c3c97f700 (LWP 16773) 0x00007f5c426421b3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81 + 3 Thread 0x7f5c3277d700 (LWP 16775) pthread_cond_timedwait () + at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 + 2 Thread 0x7f5c197fa700 (LWP 16786) 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +* 1 Thread 0x7f57cb7fe700 (LWP 19986) event_notifier_set (e=0x124) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/util/event_notifier-posix.c:97 +(gdb) bt +#0 event_notifier_set (e=0x124) at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/util/event_notifier-posix.c:97 +#1 0x00007f5c457145d1 in ?? () from /usr/lib64/libgfapi.so.0 +#2 0x00007f5c454d1d0a in synctask_wrap () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c4259d760 in ?? () from /lib64/libc.so.6 +#4 0x0000000000000000 in ?? () +(gdb) thread 2 +[Switching to thread 2 (Thread 0x7f5c197fa700 (LWP 16786))] +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. +(gdb) bt +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c4627b5e9 in kvm_vcpu_ioctl (cpu=cpu@entry=0x7f5c48b8ccd0, type=type@entry=44672) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1790 +#2 0x00007f5c4627b725 in kvm_cpu_exec (cpu=cpu@entry=0x7f5c48b8ccd0) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1675 +#3 0x00007f5c4622095c in qemu_kvm_cpu_thread_fn (arg=0x7f5c48b8ccd0) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:873 +#4 0x00007f5c42906fda in start_thread (arg=0x7f5c197fa700) at pthread_create.c:308 +#5 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 3 +[Switching to thread 3 (Thread 0x7f5c3277d700 (LWP 16775))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory. +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5c3277d700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 4 +[Switching to thread 4 (Thread 0x7f5c3c97f700 (LWP 16773))] +#0 0x00007f5c426421b3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. +(gdb) bt +#0 0x00007f5c426421b3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c454ea917 in ?? () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c45712584 in ?? () from /usr/lib64/libgfapi.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5c3c97f700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 5 +[Switching to thread 5 (Thread 0x7f5c3bb57700 (LWP 16774))] +#0 0x00007f5c4290e43d in nanosleep () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. 
+(gdb) bt +#0 0x00007f5c4290e43d in nanosleep () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c454b4874 in gf_timer_proc () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c42906fda in start_thread (arg=0x7f5c3bb57700) at pthread_create.c:308 +#3 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 6 +[Switching to thread 6 (Thread 0x7f57b37fe700 (LWP 18169))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory. +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57b37fe700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 7 +[Switching to thread 7 (Thread 0x7f5c3e418700 (LWP 16771))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5c3e418700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 8 +[Switching to thread 8 (Thread 0x7f5bfecfd700 (LWP 17854))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5bfecfd700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 9 +[Switching to thread 9 (Thread 0x7f5c2bfff700 (LWP 16778))] +#0 0x00007f5c4290e43d in nanosleep () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. +(gdb) bt +#0 0x00007f5c4290e43d in nanosleep () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c454b4874 in gf_timer_proc () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c42906fda in start_thread (arg=0x7f5c2bfff700) at pthread_create.c:308 +#3 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 10 +[Switching to thread 10 (Thread 0x7f57caffd700 (LWP 20235))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory. 
+(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57caffd700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 11 +[Switching to thread 11 (Thread 0x7f5c19ffb700 (LWP 16785))] +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. +(gdb) bt +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c4627b5e9 in kvm_vcpu_ioctl (cpu=cpu@entry=0x7f5c48b7c720, type=type@entry=44672) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1790 +#2 0x00007f5c4627b725 in kvm_cpu_exec (cpu=cpu@entry=0x7f5c48b7c720) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1675 +#3 0x00007f5c4622095c in qemu_kvm_cpu_thread_fn (arg=0x7f5c48b7c720) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:873 +#4 0x00007f5c42906fda in start_thread (arg=0x7f5c19ffb700) at pthread_create.c:308 +#5 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 12 +[Switching to thread 12 (Thread 0x7f5c18bff700 (LWP 16788))] +#0 0x00007f5c42637ded in poll () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. +(gdb) bt +#0 0x00007f5c42637ded in poll () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c43521494 in ?? () from /usr/lib64/libspice-server.so.1 +#2 0x00007f5c42906fda in start_thread (arg=0x7f5c18bff700) at pthread_create.c:308 +#3 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 13 +[Switching to thread 13 (Thread 0x7f5bfd6fb700 (LWP 19907))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory. +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5bfd6fb700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 14 +[Switching to thread 14 (Thread 0x7f5c3dc17700 (LWP 16772))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5c3dc17700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 15 +[Switching to thread 15 (Thread 0x7f5c30ee7700 (LWP 16777))] +#0 0x00007f5c426421b3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. 
+(gdb) bt +#0 0x00007f5c426421b3 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c454ea917 in ?? () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c45712584 in ?? () from /usr/lib64/libgfapi.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5c30ee7700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 16 +[Switching to thread 16 (Thread 0x7f5bfcefa700 (LWP 19924))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory. +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5bfcefa700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 17 +[Switching to thread 17 (Thread 0x7f57c8ff9700 (LWP 31327))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57c8ff9700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 18 +[Switching to thread 18 (Thread 0x7f57cbfff700 (LWP 19985))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57cbfff700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 19 +[Switching to thread 19 (Thread 0x7f57ca7fc700 (LWP 24029))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57ca7fc700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 20 +[Switching to thread 20 (Thread 0x7f5c1b7fe700 (LWP 16782))] +#0 __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 +135 ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S: No such file or directory. 
+(gdb) bt +#0 __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 +#1 0x00007f5c4290923c in _L_lock_1001 () from /lib64/libpthread.so.0 +#2 0x00007f5c4290908b in __GI___pthread_mutex_lock (mutex=0x7f5c46b87dc0 <qemu_global_mutex>) at pthread_mutex_lock.c:64 +#3 0x00007f5c4631c6c9 in qemu_mutex_lock (mutex=mutex@entry=0x7f5c46b87dc0 <qemu_global_mutex>) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/util/qemu-thread-posix.c:76 +#4 0x00007f5c46221c50 in qemu_mutex_lock_iothread () + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:1043 +#5 0x00007f5c4627b72d in kvm_cpu_exec (cpu=cpu@entry=0x7f5c48b4b610) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1677 +#6 0x00007f5c4622095c in qemu_kvm_cpu_thread_fn (arg=0x7f5c48b4b610) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:873 +#7 0x00007f5c42906fda in start_thread (arg=0x7f5c1b7fe700) at pthread_create.c:308 +#8 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 21 +[Switching to thread 21 (Thread 0x7f5c31f7c700 (LWP 16776))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory. +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5c31f7c700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 22 +[Switching to thread 22 (Thread 0x7f57c9ffb700 (LWP 25116))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57c9ffb700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 23 +[Switching to thread 23 (Thread 0x7f57b3fff700 (LWP 5016))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57b3fff700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 24 +[Switching to thread 24 (Thread 0x7f57c97fa700 (LWP 31326))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task 
() from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57c97fa700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 25 +[Switching to thread 25 (Thread 0x7f57b2ffd700 (LWP 18170))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 in ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f57b2ffd700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 26 +[Switching to thread 26 (Thread 0x7f5c295e2700 (LWP 16779))] +#0 __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 +135 ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S: No such file or directory. +(gdb) bt +#0 __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 +#1 0x00007f5c4290923c in _L_lock_1001 () from /lib64/libpthread.so.0 +#2 0x00007f5c4290908b in __GI___pthread_mutex_lock (mutex=0x7f5c46b87dc0 <qemu_global_mutex>) at pthread_mutex_lock.c:64 +#3 0x00007f5c4631c6c9 in qemu_mutex_lock (mutex=mutex@entry=0x7f5c46b87dc0 <qemu_global_mutex>) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/util/qemu-thread-posix.c:76 +#4 0x00007f5c46221c50 in qemu_mutex_lock_iothread () + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:1043 +#5 0x00007f5c4627b72d in kvm_cpu_exec (cpu=cpu@entry=0x7f5c48aefab0) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1677 +#6 0x00007f5c4622095c in qemu_kvm_cpu_thread_fn (arg=0x7f5c48aefab0) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:873 +#7 0x00007f5c42906fda in start_thread (arg=0x7f5c295e2700) at pthread_create.c:308 +#8 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 27 +[Switching to thread 27 (Thread 0x7f5c1a7fc700 (LWP 16784))] +#0 __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 +135 in ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S +(gdb) bt +#0 __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135 +#1 0x00007f5c4290923c in _L_lock_1001 () from /lib64/libpthread.so.0 +#2 0x00007f5c4290908b in __GI___pthread_mutex_lock (mutex=0x7f5c46b87dc0 <qemu_global_mutex>) at pthread_mutex_lock.c:64 +#3 0x00007f5c4631c6c9 in qemu_mutex_lock (mutex=mutex@entry=0x7f5c46b87dc0 <qemu_global_mutex>) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/util/qemu-thread-posix.c:76 +#4 0x00007f5c46221c50 in qemu_mutex_lock_iothread () + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:1043 +#5 0x00007f5c4627b72d in kvm_cpu_exec (cpu=cpu@entry=0x7f5c48b6c170) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1677 +#6 0x00007f5c4622095c in qemu_kvm_cpu_thread_fn (arg=0x7f5c48b6c170) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:873 +#7 0x00007f5c42906fda in start_thread (arg=0x7f5c1a7fc700) at pthread_create.c:308 +#8 0x00007f5c42641b1d in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 28 +[Switching to thread 28 (Thread 0x7f5c28de1700 (LWP 16780))] +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. +(gdb) bt +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c4627b5e9 in kvm_vcpu_ioctl (cpu=cpu@entry=0x7f5c48b2aab0, type=type@entry=44672) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1790 +#2 0x00007f5c4627b725 in kvm_cpu_exec (cpu=cpu@entry=0x7f5c48b2aab0) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1675 +#3 0x00007f5c4622095c in qemu_kvm_cpu_thread_fn (arg=0x7f5c48b2aab0) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:873 +#4 0x00007f5c42906fda in start_thread (arg=0x7f5c28de1700) at pthread_create.c:308 +#5 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 29 +[Switching to thread 29 (Thread 0x7f5c1bfff700 (LWP 16781))] +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +81 in ../sysdeps/unix/syscall-template.S +(gdb) bt +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c4627b5e9 in kvm_vcpu_ioctl (cpu=cpu@entry=0x7f5c48b3b060, type=type@entry=44672) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1790 +#2 0x00007f5c4627b725 in kvm_cpu_exec (cpu=cpu@entry=0x7f5c48b3b060) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1675 +#3 0x00007f5c4622095c in qemu_kvm_cpu_thread_fn (arg=0x7f5c48b3b060) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:873 +#4 0x00007f5c42906fda in start_thread (arg=0x7f5c1bfff700) at pthread_create.c:308 +#5 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 30 +[Switching to thread 30 (Thread 0x7f5c45f87880 (LWP 16769))] +#0 0x00007f5c42637ff6 in __GI_ppoll (fds=0x7f5c48bcd750, nfds=74, timeout=0x7fff92d94e60, timeout@entry=0x7fff92d94f20, + sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:57 +57 ../sysdeps/unix/sysv/linux/ppoll.c: No such file or directory. 
+(gdb) bt +#0 0x00007f5c42637ff6 in __GI_ppoll (fds=0x7f5c48bcd750, nfds=74, timeout=0x7fff92d94e60, timeout@entry=0x7fff92d94f20, + sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:57 +#1 0x00007f5c461d5c39 in ppoll (__ss=0x0, __timeout=0x7fff92d94f20, __nfds=<optimized out>, __fds=<optimized out>) + at /usr/include/bits/poll2.h:77 +#2 qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=timeout@entry=313102) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/qemu-timer.c:316 +#3 0x00007f5c46199154 in os_host_main_loop_wait (timeout=313102) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/main-loop.c:229 +#4 main_loop_wait (nonblocking=<optimized out>) at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/main-loop.c:484 +#5 0x00007f5c460457ae in main_loop () at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/vl.c:2051 +#6 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/vl.c:4507 +(gdb) thread 31 +[Switching to thread 31 (Thread 0x7f5bfe2fc700 (LWP 19906))] +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +238 ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: No such file or directory. +(gdb) bt +#0 pthread_cond_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238 +#1 0x00007f5c454d34e3 in syncenv_task () from /usr/lib64/libglusterfs.so.0 +#2 0x00007f5c454d3920 in syncenv_processor () from /usr/lib64/libglusterfs.so.0 +#3 0x00007f5c42906fda in start_thread (arg=0x7f5bfe2fc700) at pthread_create.c:308 +#4 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 +(gdb) thread 32 +[Switching to thread 32 (Thread 0x7f5c1affd700 (LWP 16783))] +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +81 ../sysdeps/unix/syscall-template.S: No such file or directory. 
+(gdb) bt +#0 0x00007f5c42639607 in ioctl () at ../sysdeps/unix/syscall-template.S:81 +#1 0x00007f5c4627b5e9 in kvm_vcpu_ioctl (cpu=cpu@entry=0x7f5c48b5bbc0, type=type@entry=44672) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1790 +#2 0x00007f5c4627b725 in kvm_cpu_exec (cpu=cpu@entry=0x7f5c48b5bbc0) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/kvm-all.c:1675 +#3 0x00007f5c4622095c in qemu_kvm_cpu_thread_fn (arg=0x7f5c48b5bbc0) + at /var/tmp/portage/app-emulation/qemu-2.0.0/work/qemu-2.0.0/cpus.c:873 +#4 0x00007f5c42906fda in start_thread (arg=0x7f5c1affd700) at pthread_create.c:308 +#5 0x00007f5c42641b1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113 + +XML definition from libvirt: + +<domain type='kvm' id='9'> + <name>b1032388-abb8-4176-897b-0c30a1a73714</name> + <uuid>b1032388-abb8-4176-897b-0c30a1a73714</uuid> + <memory unit='KiB'>16777216</memory> + <currentMemory unit='KiB'>16777216</currentMemory> + <vcpu placement='static'>8</vcpu> + <resource> + <partition>/machine</partition> + </resource> + <os> + <type arch='x86_64' machine='pc-i440fx-1.5'>hvm</type> + <boot dev='hd'/> + </os> + <features> + <acpi/> + <pae/> + <hap/> + </features> + <cpu mode='host-model'> + <model fallback='allow'/> + </cpu> + <clock offset='localtime'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/bin/qemu-kvm</emulator> + <disk type='file' device='cdrom'> + <driver name='qemu' type='raw'/> + <source file='/var/virtualization/iso/571196f0-4b29-48c9-b250-50697cbe4317.iso'/> + <backingStore/> + <target dev='hdb' bus='ide'/> + <readonly/> + <alias name='ide0-0-1'/> + <address type='drive' controller='0' bus='0' target='0' unit='1'/> + </disk> + <disk type='file' device='cdrom'> + <driver name='qemu' type='raw'/> + <source file='/var/virtualization/iso/85d7e9f5-4288-4a3f-b209-c12ff11c61f3.iso'/> + <backingStore/> + <target dev='hdc' bus='ide'/> + <readonly/> + <alias name='ide0-1-0'/> + <address type='drive' controller='0' bus='1' target='0' unit='0'/> + </disk> + <disk type='network' device='disk'> + <driver name='qemu' type='qcow2' cache='none'/> + <source protocol='gluster' name='virtualization/vm-persistent/0f83f084-8080-413e-b558-b678e504836e/d35a6600-91bf-4fd1-aba4-1fe6a813d481.qcow2'> + <host name='1.2.3.4'/> + </source> + <backingStore/> + <target dev='vda' bus='virtio'/> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> + </disk> + <disk type='network' device='disk'> + <driver name='qemu' type='qcow2' cache='none'/> + <source protocol='gluster' name='virtualization/vm-persistent/0f83f084-8080-413e-b558-b678e504836e/V5qtOlCs-8PgV-64tQ-a79N-j4ko4GIBijT4.qcow2'> + <host name='1.2.3.4'/> + </source> + <backingStore/> + <target dev='vdb' bus='virtio'/> + <alias name='virtio-disk1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> + </disk> + <controller type='usb' index='0' model='ich9-ehci1'> + <alias name='usb0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x7'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci1'> + <alias name='usb0'/> + <master startport='0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci2'> + <alias name='usb0'/> + <master startport='2'/> + <address type='pci' domain='0x0000' bus='0x00' 
slot='0x08' function='0x1'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci3'> + <alias name='usb0'/> + <master startport='4'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x2'/> + </controller> + <controller type='pci' index='0' model='pci-root'> + <alias name='pci.0'/> + </controller> + <controller type='ide' index='0'> + <alias name='ide0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> + </controller> + <controller type='virtio-serial' index='0'> + <alias name='virtio-serial0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> + </controller> + <interface type='bridge'> + <mac address='XX.XX.XX.XX:XX:XX'/> + <source bridge='vmbr0'/> + <target dev='kvm-XYZ_0'/> + <model type='virtio'/> + <alias name='net0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <channel type='spicevmc'> + <target type='virtio' name='com.redhat.spice.0'/> + <alias name='channel0'/> + <address type='virtio-serial' controller='0' bus='0' port='1'/> + </channel> + <channel type='unix'> + <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/b1032388-abb8-4176-897b-0c30a1a73714.org.qemu.guest_agent.0'/> + <target type='virtio' name='org.qemu.guest_agent.0'/> + <alias name='channel1'/> + <address type='virtio-serial' controller='0' bus='0' port='2'/> + </channel> + <input type='tablet' bus='usb'> + <alias name='input0'/> + </input> + <input type='mouse' bus='ps2'/> + <input type='keyboard' bus='ps2'/> + <graphics type='spice' port='5900' autoport='no' listen='1.2.3.4'> + <listen type='address' address='1.2.3.4'/> + </graphics> + <sound model='ac97'> + <alias name='sound0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </sound> + <video> + <model type='qxl' ram='65536' vram='65536' heads='1'/> + <alias name='video0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + <redirdev bus='usb' type='spicevmc'> + <alias name='redir0'/> + </redirdev> + <redirdev bus='usb' type='spicevmc'> + <alias name='redir1'/> + </redirdev> + <redirdev bus='usb' type='spicevmc'> + <alias name='redir2'/> + </redirdev> + <redirfilter> + <usbdev allow='no'/> + </redirfilter> + <memballoon model='virtio'> + <alias name='balloon0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> + </memballoon> + <rng model='virtio'> + <rate bytes='1024' period='1000'/> + <backend model='random'>/dev/random</backend> + <alias name='rng0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> + </rng> + </devices> + <seclabel type='none'/> +</domain> \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1344 b/results/classifier/gemma3:12b/kvm/1344 new file mode 100644 index 00000000..99754dc7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1344 @@ -0,0 +1,2 @@ + +custom kernel give me KVM internal error. 
Suberror: 4 diff --git a/results/classifier/gemma3:12b/kvm/1348106 b/results/classifier/gemma3:12b/kvm/1348106 new file mode 100644 index 00000000..c32d46db --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1348106 @@ -0,0 +1,206 @@ + +kvm crash on Kali Linux + +platform: DELL Vostro 2421 +#uname -a +Linux x-linux 3.14-kali1-686-pae #1 SMP Debian 3.14.4-1kali1 (2014-05-14) i686 GNU/Linux + +#kvm --version +QEMU emulator version 1.1.2 (qemu-kvm-1.1.2+dfsg-6+deb7u3, Debian), Copyright (c) 2003-2008 Fabrice Bellard + +#qemu --version +QEMU emulator version 1.1.2 (Debian 1.1.2+dfsg-6a+deb7u3), Copyright (c) 2003-2008 Fabrice Bellard + +# cat /etc/issue +Kali GNU/Linux 1.0.7 \n \l + +# cat /proc/cpuinfo +processor : 0 +vendor_id : GenuineIntel +cpu family : 6 +model : 58 +model name : Intel(R) Core(TM) i3-3227U CPU @ 1.90GHz +stepping : 9 +microcode : 0x19 +cpu MHz : 790.875 +cache size : 3072 KB +physical id : 0 +siblings : 4 +core id : 0 +cpu cores : 2 +apicid : 0 +initial apicid : 0 +fdiv_bug : no +f00f_bug : no +coma_bug : no +fpu : yes +fpu_exception : yes +cpuid level : 13 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms +bogomips : 3791.39 +clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual +power management: + +processor : 1 +vendor_id : GenuineIntel +cpu family : 6 +model : 58 +model name : Intel(R) Core(TM) i3-3227U CPU @ 1.90GHz +stepping : 9 +microcode : 0x19 +cpu MHz : 790.875 +cache size : 3072 KB +physical id : 0 +siblings : 4 +core id : 1 +cpu cores : 2 +apicid : 2 +initial apicid : 2 +fdiv_bug : no +f00f_bug : no +coma_bug : no +fpu : yes +fpu_exception : yes +cpuid level : 13 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms +bogomips : 3791.39 +clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual +power management: + +processor : 2 +vendor_id : GenuineIntel +cpu family : 6 +model : 58 +model name : Intel(R) Core(TM) i3-3227U CPU @ 1.90GHz +stepping : 9 +microcode : 0x19 +cpu MHz : 790.875 +cache size : 3072 KB +physical id : 0 +siblings : 4 +core id : 0 +cpu cores : 2 +apicid : 1 +initial apicid : 1 +fdiv_bug : no +f00f_bug : no +coma_bug : no +fpu : yes +fpu_exception : yes +cpuid level : 13 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms +bogomips : 3791.39 
+clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual +power management: + +processor : 3 +vendor_id : GenuineIntel +cpu family : 6 +model : 58 +model name : Intel(R) Core(TM) i3-3227U CPU @ 1.90GHz +stepping : 9 +microcode : 0x19 +cpu MHz : 790.875 +cache size : 3072 KB +physical id : 0 +siblings : 4 +core id : 1 +cpu cores : 2 +apicid : 3 +initial apicid : 3 +fdiv_bug : no +f00f_bug : no +coma_bug : no +fpu : yes +fpu_exception : yes +cpuid level : 13 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms +bogomips : 3791.39 +clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual +power management: + +# cat /proc/meminfo +MemTotal: 4010792 kB +MemFree: 3123960 kB +MemAvailable: 3307340 kB +Buffers: 44908 kB +Cached: 389772 kB +SwapCached: 0 kB +Active: 476588 kB +Inactive: 348656 kB +Active(anon): 391436 kB +Inactive(anon): 71016 kB +Active(file): 85152 kB +Inactive(file): 277640 kB +Unevictable: 0 kB +Mlocked: 0 kB +HighTotal: 3148696 kB +HighFree: 2431604 kB +LowTotal: 862096 kB +LowFree: 692356 kB +SwapTotal: 2095100 kB +SwapFree: 2095100 kB +Dirty: 64 kB +Writeback: 0 kB +AnonPages: 390604 kB +Mapped: 89160 kB +Shmem: 71892 kB +Slab: 31688 kB +SReclaimable: 14196 kB +SUnreclaim: 17492 kB +KernelStack: 2864 kB +PageTables: 5448 kB +NFS_Unstable: 0 kB +Bounce: 0 kB +WritebackTmp: 0 kB +CommitLimit: 4100496 kB +Committed_AS: 1886836 kB +VmallocTotal: 122880 kB +VmallocUsed: 68924 kB +VmallocChunk: 43084 kB +HardwareCorrupted: 0 kB +AnonHugePages: 0 kB +HugePages_Total: 0 +HugePages_Free: 0 +HugePages_Rsvd: 0 +HugePages_Surp: 0 +Hugepagesize: 2048 kB +DirectMap4k: 26616 kB +DirectMap2M: 882688 kB + + +when I launch kvm with Juniper Simulator, it crashed after 1minute. +all the command line is below +kvm 1-JunOS-10.2R1.8.img \ + -m 128 \ + -net nic,macaddr=00:50:56:C0:00:01 \ + -net tap \ + -net nic,macaddr=00:50:56:C0:00:02 \ + -net tap \ + -net nic,macaddr=00:50:56:C0:00:03 \ + -net tap \ + -net nic,macaddr=00:50:56:C0:00:04 \ + -net tap \ + -net nic,macaddr=00:50:56:C0:00:05 \ + -net tap \ + -net nic,macaddr=00:50:56:C0:00:06 \ + -net tap \ + -display curses + +of course I had loaded the kvm module +#modpro kvm +#modpro kvm-intel + +for more detials, see the srceenshot \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1355644 b/results/classifier/gemma3:12b/kvm/1355644 new file mode 100644 index 00000000..89d7cc6d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1355644 @@ -0,0 +1,18 @@ + +windows7 reboot bluesreen 0x0000005c + +I have met sevaral blue screen with 0x0000005c(0x0000010b,0x00000003,0x00000000,0x00000000) after windows7 reboot. +It always happens just before the windows iron animation appears. + +my qemu version is qemu-2.1.0 +my guest os is windows7 32bits sp1 + +my qemu commandline is +./x86_64-softmmu/qemu-system-x86_64 -m 2048 -hda system.inst -spice port=5940,disable-ticketing -monitor stdio --enable-kvm + +This bug doesn't happen always,and i don‘t know how to reproduce it. + +But i have a special way to produce such a bluescreen. 
+I set nmi dump on and set windows to collect dump file and set auto reboot after system fail on. +Then i send a nmi to guest, and then after collecting dump file , windows will auto reboot and then such a blue screen happens. +And this can be reproduced always. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1358619 b/results/classifier/gemma3:12b/kvm/1358619 new file mode 100644 index 00000000..12cd4efa --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1358619 @@ -0,0 +1,94 @@ + +keep savevm/loadvm and migration cause snapshot crash + +--Version: qemu 2.1.0 public release +--OS: CentOS release 6.4 +--gcc: 4.4.7 + +Hi: + I found problems when doing some tests on qemu migration and savevm/loadvm. +On my experiment, a quest is migrated between two same host back and forth. +Source host would savevm after migration completed and incoming host loadvm before migration coming. + +image=./image/centos-1.qcow2 + +====Migration Part==== +qemu-system-x86_64 \ + -qmp tcp:$host_ip:4451,server,nowait \ + -drive file=$image \ + --enable-kvm \ + -monitor stdio \ + -m 8192 \ + -device virtio-net-pci,netdev=net0,mac=$mac \ + -netdev tap,id=net0,script=./qemu-ifup + +====Incoming Part==== +qemu-system-x86_64 \ + -qmp tcp:$host_ip:4451,server,nowait \ + -incoming tcp:0:4449 \ + --enable-kvm \ + -monitor stdio \ + -drive file=$image \ + -m 8192 \ + -device virtio-net-pci,netdev=net0,mac=$mac \ + -netdev tap,id=net0,script=./qemu-ifup + + +Command lines: + +host1 $: qemu-system-x86_64 ........ //migration part +host2 $: qemu-system-x86_64 ...incoming... //incoming part + +After finishing boot + +host1 (qemu) : migrate tcp:host2:4449 +host1 (qemu) : savevm s1 +host1 (qemu) : quit +host1 $: qemu-system-x86_64 ...incoming... //incoming part +host1 (qemu) : loadvm s1 + +host2 (qemu) : migrate tcp:host1:4449 +host2 (qemu) : savevm s2 +host2 (qemu) : quit +host2 $: qemu-system-x86_64 ...incoming... //incoming part +host2 (qemu) : loadvm s2 + +host1 (qemu) : migrate tcp:host2:4449 +host1 (qemu) : savevm s3 +host1 (qemu) : quit +host1 $: qemu-system-x86_64 ...incoming... //incoming part +host1 (qemu) : loadvm s3 + +I wish those operation can be success every time. +However problem happened irregularly when loadvm and error messages are not the same. + +host1 (qemu) : loadvm s3 +qcow2: Preventing invalid write on metadata (overlaps with active L1 table); image marked as corrupt. +Error -5 while activating snapshot 's3' on 'ide0-hd0' + +or + +host2 (qemu) : loadvm s2 +Error -22 while loading VM state + + +I have done some sample test on savevm/loadvm +On same host +(qemu) savevm test1 +(qemu) loadvm test1 +(qemu) savevm test2 +(qemu) loadvm test2 +(qemu) savevm test3 +(qemu) loadvm test3 +(qemu) savevm test4 +(qemu) loadvm test4 +(qemu) savevm test5 +(qemu) loadvm test5 +(qemu) savevm test6 +(qemu) loadvm test6 + +This is OK. No any problem. + + +Any idea? +I think it is related to migration. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1366836 b/results/classifier/gemma3:12b/kvm/1366836 new file mode 100644 index 00000000..25c000f8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1366836 @@ -0,0 +1,17 @@ + +Core2Duo and KVM may not boot Win8 properly on 3.x kernels + +When I start up QEMU w/ KVM 1.7.0 on a Core2Duo machine running a vanilla kernel +3.4.67 or 3.10.12 to run a Windows 8.0 guest, the guest freezes at Windows 8 boot without any error. 
+When I dump the CPU registers via "info registers", nothing changes, that means +the system really stalled. Same happens with QEMU 2.0.0 and QEMU 2.1.0. + +But - when I run the very same guest using Kernel 2.6.32.12 and QEMU 1.7.0 or 2.0.0 on +the host side it works on the Core2Duo. Also the system above but just with an +i3 or i5 CPU it works fine. + +I already disabled networking and USB for the guest and changed the graphics +card - no effect. I assume that some mean bits and bytes have to be set up +properly to get the thing running. + +Seems to be related to a kvm/progressor incompatibility. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1376 b/results/classifier/gemma3:12b/kvm/1376 new file mode 100644 index 00000000..768e16f1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1376 @@ -0,0 +1,16 @@ + +x86 LSL and LAR fault +Description of problem: +From the description of LSL and LAR instructions in manual, `If the segment descriptor cannot be accessed or is an invalid type for the instruction, the ZF flag is cleared and no value is loaded in the destination operand.`. When it happens at the CPU, it seems they do nothing (nop). However, in QEMU, it crashes. +Steps to reproduce: +1. Compile this code +``` +void main() { + asm("mov rax, 0xa02e698e741f5a6a"); + asm("mov rbx, 0x20959ddd7a0aef"); + asm("lsl ax, bx"); +} +``` +2. Execute. QEMU crashes but CPU does not. This problem happens with LAR, too. +Additional information: +This bug is discovered by research conducted by KAIST SoftSec. diff --git a/results/classifier/gemma3:12b/kvm/1377 b/results/classifier/gemma3:12b/kvm/1377 new file mode 100644 index 00000000..441707f1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1377 @@ -0,0 +1,15 @@ + +x86 CVT* series instructions fault +Description of problem: +For example, CVTSD2SS instruction converts SRC[63:0] double precision floating point to DEST[31:0] single precision floating point. Although the CVTSD2SS instruction uses only 8 bytes, if it overlaps page boundary, I think QEMU tries to access over the valid memory and crashes. +Steps to reproduce: +1. Compile this code +``` +void main() { + mmap(0x555555559000, 0x1000, flag, ~~, 0); + asm("cvtsd2ss xmm1, qword ptr [0x555555559ff8]"); +} +``` +2. Execute. QEMU crashes but CPU does not. +Additional information: +This bug is discovered by research conducted by KAIST SoftSec. diff --git a/results/classifier/gemma3:12b/kvm/1378 b/results/classifier/gemma3:12b/kvm/1378 new file mode 100644 index 00000000..6e33f40c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1378 @@ -0,0 +1,21 @@ + +iSCSI causes memory corruption +Description of problem: +This is a compound problem, which most likely involves a combination of how TrueNAS SCALE handles iSCSI triggering a problem **and** some memory-handling issue in QEMU leading to a crash. In short any Linux machine started with iSCSI handled by QEMU directly leads to a hard crash within 30s-1h. I was able to find a pattern in logs: + +1. First, a message like `QEMU[53139]: kvm: iSCSI Busy/TaskSetFull/TimeOut (retry #1 in 0 ms): TASK_SET_FULL` is logged + - it is always `TASK_SET_FULL` + - it is always `retry #1 in ... ms`, where only number of miliseconds varies + - the line is repeated multiple times, sometimes 5x and sometimes >200x +2. 
It is followed by a single line with one of the following: + - `double free or corruption (out)` + - `double free or corruption (!prev)` + - `kvm: ../block/block-backend.c:1567: blk_aio_write_entry: Assertion `!qiov || qiov->size == acb->bytes' failed.` + - `kvm: malloc.c:2379: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.` + - `kvm: iSCSI CheckCondition: SENSE KEY:UNIT_ATTENTION(6) ASCQ:BUS_RESET(0x2900)` + - `malloc(): invalid size (unsorted)` +3. The virtual machine crashes +Steps to reproduce: +I don't have a specific concrete steps, only clues really. This problem started happening after TrueNAS SCALE updated their iSCSI code in Bluefin release to a new upstream version. That iSCSI server still works when iSCSI is mounted by the kernel and QEMU uses a normal `/dev` entry. While there's probably some problem with it, QEMU shouldn't probably crash with memory errors. +Additional information: +While I'm a software developer, I don't code in C on a daily basis. However, looking at the errors, I have a suspicion the problem may be somewhere in the `iscsi_co_generic_cb()`, as it seems the struct is getting damaged (out of bound write?) and causes explosion somewhere down the line. diff --git a/results/classifier/gemma3:12b/kvm/1384 b/results/classifier/gemma3:12b/kvm/1384 new file mode 100644 index 00000000..bb6e6fdc --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1384 @@ -0,0 +1,2 @@ + +Update libvfio-user to latest upstream diff --git a/results/classifier/gemma3:12b/kvm/139 b/results/classifier/gemma3:12b/kvm/139 new file mode 100644 index 00000000..fb34c257 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/139 @@ -0,0 +1,2 @@ + +kvm rbd driver (and maybe others, i.e. qcow2, qed and so on) does not report DISCARD-ZERO flag diff --git a/results/classifier/gemma3:12b/kvm/1400 b/results/classifier/gemma3:12b/kvm/1400 new file mode 100644 index 00000000..8c54e49a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1400 @@ -0,0 +1,2 @@ + +helper_access_check_cp_reg() raising Undefined Instruction on big-endian host diff --git a/results/classifier/gemma3:12b/kvm/1416246 b/results/classifier/gemma3:12b/kvm/1416246 new file mode 100644 index 00000000..f78b0d89 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1416246 @@ -0,0 +1,51 @@ + +create guest fail when compile qemu with parameter "--disable-gtk" + +Environment: +------------ +Host OS (ia32/ia32e/IA64):ia32e +Guest OS (ia32/ia32e/IA64):ia32e +Guest OS Type (Linux/Windows):Linux +kvm.git Commit:8fff5e374a2f6047d1bb52288af7da119bc75765 +qemu.kvm Commit:16017c48547960539fcadb1f91d252124f442482 +Host Kernel Version:3.19.0-rc3 +Hardware:Ivytown_EP, Haswell_EP + + +Bug detailed description: +-------------------------- +compile the qemu with disable gtk, the create guest , the guest create fail + +note: +1.qemu.git: 699eae17b841e6784dc3864bf357e26bff1e9dfe +when compile the qemu with enable gtk or disable gtk, the guest create pass + +2. this should be a qemu bug +kvm.git + qemu.git = result +8fff5e37 + 16017c48 = bad +8fff5e37 + 699eae17 = good + +Reproduce steps: +---------------- +1. git clone git://vt-sync/qemu.git qemu.git +2. cd qemu.git +3. ./configure --target-list=x86_64-softmmu --disable-sdl --disable-gtk +4. make -j16 +5. 
./x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 4G -smp 2 -net none /root/rhel6u5.qcow + +Current result: +---------------- +create gust fail when compile qemu with disable gtk + +Expected result: +---------------- +create guest pass when compile qemu with disable or enable gtk + +Basic root-causing log: +---------------------- +[root@vt-ivt2 qemu.git]# ./x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 4G -smp 2 -net none /root/rhel6u5-1.qcow +qemu-system-x86_64: Invalid parameter 'to' +Segmentation fault (core dumped) + +some dmesg message: +qemu-system-x86[96364]: segfault at 24 ip 00007fe6d9636a69 sp 00007fffc03cf970 error 4 in qemu-system-x86_64[7fe6d9330000+4ba000] \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1422285 b/results/classifier/gemma3:12b/kvm/1422285 new file mode 100644 index 00000000..4caf6dad --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1422285 @@ -0,0 +1,55 @@ + +The guest will be destroyed when hot plug the VF to guest for the second time. + +Environment: +------------ +Host OS (ia32/ia32e/IA64):ia32e +Guest OS (ia32/ia32e/IA64):ia32e +Guest OS Type (Linux/Windows):linux +kvm.git Commit: 6557bada461afeaa920a189fae2cff7c8fdce39f +qemu.kvm Commit: cd2d5541271f1934345d8ca42f5fafff1744eee7 +Host Kernel Version:3.19.0-rc3 +Hardware:Haswell_EP,Ivytown_EP + + +Bug detailed description: +-------------------------- +create guest , then hot plug the VF to the guest for the second time, the guest will be destroyed. + +note: +1. hot plug the device to guest with vfio, the guest works fine +2.this should be a qemu bug: +kvm + qemu = result +6557bada + cd2d5541 = bad +6557bada + a805ca54 = good + + +Reproduce steps: +---------------- +1. qemu-system-x86_64 -enable-kvm -m 2G -net none -monitor pty rhel6u5.qcow +2. echo "device_add pci-assign,host=03:10.1,id=nic" >/dev/pts/2 +3. cat /dev/pts/2 & +4. echo "device_del nic" >/dev/pts/2 +5. echo "device_add pci-assign,host=03:10.0,id=nic" >/dev/pts/2 + +Current result: +---------------- +guest will be destroyed when hot plug the vf to guest for the second time. + +Expected result: +---------------- +guest works fine when hot plug the vf to guest for the second time + +Basic root-causing log: +---------------------- +[root@vt-hsw2 cathy]# qemu-system-x86_64 -enable-kvm -m 2G -net none -monitor pty rhel6u5.qcow +char device redirected to /dev/pts/2 (label compat_monitor0) +Segmentation fault (core dumped) + + +some dmesg log: + +pci-stub 0000:03:10.1: kvm deassign device +pci-stub 0000:03:10.1: enabling device (0000 -> 0002) +qemu-system-x86[9894]: segfault at 0 ip (null) sp 00007fa73df0cae8 error 14 +pci-stub 0000:03:10.1: kvm assign device \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1433081 b/results/classifier/gemma3:12b/kvm/1433081 new file mode 100644 index 00000000..a25e4461 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1433081 @@ -0,0 +1,131 @@ + +kvm hardware error 0xffffffff with vfio-pci VGA passthrough + +Hi, + +Using qcow2 format for an ide-hd device is causing "KVM: entry failed, hardware error 0xffffffff". When this error occurs, qemu-monitor shows the guest has stopped. The error did not occur immediately, but at the point that the boot, running from an attached Ubuntu 14.04.1 iso, switched to graphical mode after text-mode startup. + +The root-cause was verified by switching only the ide-hd disk to raw format (no OS installed), which allowed the guest to boot normally from the iso. The error and fix are reliably repeatable. 
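+
+For clarity, the isolating change is only the image format of that ide-hd disk; the two combinations tried (same path and options as in the qemu-img commands and the vm1 script below, presumably with the matching format= value on the -drive line) were:
+
+```
+# fails: guest stops with "KVM: entry failed, hardware error 0xffffffff"
+qemu-img create -f qcow2 /media/v2min/Data/VMachines-KVM/KVM-NVidia/kvm-nvidia.img 20G
+... -drive file=/media/v2min/Data/VMachines-KVM/KVM-NVidia/kvm-nvidia.img,id=disk,format=qcow2 -device ide-hd,bus=ide.0,drive=disk ...
+
+# works: guest boots the attached iso normally
+qemu-img create -f raw /media/v2min/Data/VMachines-KVM/KVM-NVidia/kvm-nvidia.img 20G
+... -drive file=/media/v2min/Data/VMachines-KVM/KVM-NVidia/kvm-nvidia.img,id=disk,format=raw -device ide-hd,bus=ide.0,drive=disk ...
+```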
+ +The interesting part is that the ide-hd (with no OS installed) with qcow2 format was not actually being used for boot - the boot was from a Ubuntu iso, with the intention of installing an ubuntu guest on the attached ide-hd device. The guest was using a vfio-pci passthrough GPU connected to an external UHD monitor. + +The commands used to create the disk images: +qemu-img create -f qcow2 /media/v2min/Data/VMachines-KVM/KVM-NVidia/kvm-nvidia.img 20G +qemu-img create -f raw /media/v2min/Data/VMachines-KVM/KVM-NVidia/kvm-nvidia.img 20G + +The script vm1 was used to launch the guests with "sudo ./vm1", with the only difference between launches being the ide-hd format (raw vs qcow2). With qcow2 this resulted in the terminal below. The corresponding dmesg snippets are attached. There were two dmesg entries each time the error occurred. + +The same problem occurred when using the latest packages from the ppa:jacob/virtualisation. However, when using jacob's packages, it was not verified that raw format resolves the error (I am running this on my primary system and purged jacob's ppa when this problem first occured). + +A fix would be helpful as the qcow2 format allows snapshots, while raw does not. + +----------------------------------System info----------------------------------------------------------- +Linux v2min 3.18.9-031809-generic #201503080036 SMP Sun Mar 8 00:37:46 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux + +/var/log/libvirt/libvirt.d is 0 bytes + +Libvirt versions are these: + +v2min@v2min:~/QCOW2-Error$ dpkg -l | grep libvirt +ii libvirt-bin 1.2.2-0ubuntu13.1.9 amd64 programs for the libvirt library +rc libvirt-glib-1.0-0 0.1.6-1ubuntu2 amd64 libvirt glib mainloop integration +ii libvirt0 1.2.2-0ubuntu13.1.9 amd64 library for interfacing with different virtualization systems +ii python-libvirt 1.2.2-0ubuntu2 amd64 libvirt Python bindings + + +v2min@v2min:~/QCOW2-Error$ dpkg -l | grep qemu +ii ipxe-qemu 1.0.0+git-20131111.c3d1e78-2ubuntu1.1 all PXE boot firmware - ROM images for qemu +ii qemu-keymaps 2.0.0+dfsg-2ubuntu1.10 all QEMU keyboard maps +ii qemu-kvm 2.0.0+dfsg-2ubuntu1.10 amd64 QEMU Full virtualization on x86 hardware (transitional package) +ii qemu-system-common 2.0.0+dfsg-2ubuntu1.10 amd64 QEMU full system emulation binaries (common files) +ii qemu-system-x86 2.0.0+dfsg-2ubuntu1.10 amd64 QEMU full system emulation binaries (x86) +ii qemu-utils 2.0.0+dfsg-2ubuntu1.10 amd64 QEMU utilities + +Passthrough GPU: Zotac GT 730 2GB. 
+Processor: AMD A10-5800K APU +Primary GPU: Radeon R9-290X + + +------------------------------------------vm1 script----------------------------------------------------- +#!/bin/bash + +configfile=/etc/vfio-pci1.cfg + +vfiobind() { + dev="$1" + vendor=$(cat /sys/bus/pci/devices/$dev/vendor) + device=$(cat /sys/bus/pci/devices/$dev/device) + if [ -e /sys/bus/pci/devices/$dev/driver ]; then + echo $dev > /sys/bus/pci/devices/$dev/driver/unbind + fi + echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id + +} + +modprobe vfio-pci + +cat $configfile | while read line;do + echo $line | grep ^# >/dev/null 2>&1 && continue + vfiobind $line +done + +sudo qemu-system-x86_64 -enable-kvm -M q35 -m 4096 -cpu host \ +-smp 2,sockets=1,cores=2,threads=1 \ +-bios /usr/share/qemu/bios.bin -vga none \ +-usb -device usb-host,hostbus=5,hostaddr=8 \ +-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \ +-device vfio-pci,host=04:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \ +-device vfio-pci,host=04:00.1,bus=root.1,addr=00.1 \ +-drive file=/media/v2min/Data/VMachines-KVM/KVM-NVidia/kvm-nvidia.img,id=disk,format=qcow2 -device ide-hd,bus=ide.0,drive=disk \ +-drive file=/media/v2min/Data/Shr/Software/OSes/ubuntu-14.04.1-desktop-amd64.iso,id=isocd -device ide-cd,bus=ide.1,drive=isocd \ +-boot menu=on \ +-boot d + +exit 0 + +------------------------------------------------------------gnome-terminal----------------------------- +v2min@v2min:~$ sudo ./vm1 +[sudo] password for v2min: +KVM: entry failed, hardware error 0xffffffff +RAX=0000000000000005 RBX=0000000000000000 RCX=0000000000000000 RDX=ffffffff81eaf3e8 +RSI=0000000000000000 RDI=0000000000000000 RBP=ffff880179553930 RSP=ffff880179553910 +R8 =ffffffff81eaf3e0 R9 =000000000000ffff R10=0000000000000206 R11=000000000000000f +R12=ffff880179597b1c R13=0000000000000028 R14=0000000000000000 R15=ffff880179597800 +RIP=ffffffff8104ed58 RFL=00000046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 ffffffff 00800000 +CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0000 0000000000000000 ffffffff 00800000 +FS =0000 00007f51d8a96880 ffffffff 00800000 +GS =0000 ffff88017fd00000 ffffffff 00800000 +LDT=0000 0000000000000000 0000ffff 00000000 +TR =0040 ffff88017fd11900 00002087 00008b00 DPL=0 TSS64-busy +GDT= ffff88017fd0a000 0000007f +IDT= ffffffffff576000 00000fff +CR0=8005003b CR2=00007f51d8a99000 CR3=00000001740be000 CR4=000406e0 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +Code=00 01 48 c7 c0 6a b0 00 00 31 db 0f b7 0c 01 b8 05 00 00 00 <0f> 01 c1 0f 1f 44 00 00 5b 41 5c 41 5d 41 5e 5d c3 89 f0 31 c9 f0 0f b0 0d 9b 06 e6 00 40 +v2min@v2min:~$ sudo ./vm1 +KVM: entry failed, hardware error 0xffffffff +RAX=0000000000000005 RBX=0000000000000000 RCX=0000000000000000 RDX=ffffffff81eaf3e8 +RSI=0000000000000000 RDI=0000000000000000 RBP=ffff88017957f9e8 RSP=ffff88017957f9c8 +R8 =ffffffff81eaf3e0 R9 =0000000000000000 R10=ffff88017b001d00 R11=0000000000000246 +R12=ffff880179527ac0 R13=000000000000003e R14=0000000000000000 R15=0000000000000001 +RIP=ffffffff8104ed58 RFL=00000046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 ffffffff 00800000 +CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0000 0000000000000000 ffffffff 00800000 +FS =0000 
00007f49153a6740 ffffffff 00800000 +GS =0000 ffff88017fd00000 ffffffff 00800000 +LDT=0000 0000000000000000 0000ffff 00000000 +TR =0040 ffff88017fd11900 00002087 00008b00 DPL=0 TSS64-busy +GDT= ffff88017fd0a000 0000007f +IDT= ffffffffff576000 00000fff +CR0=8005003b CR2=00007f4914e82170 CR3=0000000001c0e000 CR4=000406e0 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +Code=00 01 48 c7 c0 6a b0 00 00 31 db 0f b7 0c 01 b8 05 00 00 00 <0f> 01 c1 0f 1f 44 00 00 5b 41 5c 41 5d 41 5e 5d c3 89 f0 31 c9 f0 0f b0 0d 9b 06 e6 00 40 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1435359 b/results/classifier/gemma3:12b/kvm/1435359 new file mode 100644 index 00000000..b17e1c0e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1435359 @@ -0,0 +1,40 @@ + +Booting kernel 3.19.2 fails most of the time + +Host system: openSuSE 13.2 + kernel 4.0.0-rc4 + qemu 2.2.1. + +When I try to boot a virtual machine with Ubuntu 14.10 and kernel 3.13.0 every boot succeeds. However, with kernel 3.19.2 booting fails most of the time. The following appears in /var/log/libvirt/qemu/ubuntu-vm.log when I try to boot that VM with kernel 3.19.2: + +2015-03-23 02:44:18.801+0000: starting up +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-system-x86_64 -name ubuntu-vm -S -machine pc-i440fx-2.1,accel=kvm,usb=off -cpu Haswell -m 2048 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 395110dc-9fbe-4542-8fce-4ef958f24b2c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ubuntu-vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/lib/libvirt/images/ubuntusaucy.qcow2,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/libvirt/images/ubuntu-14.04-mini.iso,if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:5e:71:5e,bus=pci.0,addr=0x3 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -spice port=5900,addr=127.0.0.1,disable-ticketing,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,bus=pci.0,addr=0x2 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -chardev spicevmc,id=charredir2,name=usbredir -device usb-redir,chardev=charredir2,id=redir2 -chardev spicevmc,id=charredir3,name=usbredir -device usb-redir,chardev=charredir3,id=redir3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 
-object rng-random,id=rng0,filename=/dev/random -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x9 -msg timestamp=on +main_channel_link: add main channel client +main_channel_handle_parsed: net test: latency 0.229000 ms, bitrate 28444444444 bps (27126.736111 Mbps) +red_dispatcher_set_cursor_peer: +inputs_connect: inputs channel client create +((null):30728): SpiceWorker-ERROR **: red_worker.c:8337:red_marshall_qxl_drawable: invalid type +KVM: injection failed, MSI lost (Input/output error) +qemu-system-x86_64: /home/bart/software/qemu-2.2.1/hw/net/vhost_net.c:264: vhost_net_stop_one: Assertion `r >= 0' failed. +2015-03-23 02:44:44.952+0000: shutting down + +That message is similar to the message reported by the older qemu version provided by openSuse (qemu 2.1.0 + qemu-kvm 2.1.0): + +2015-03-21 13:51:00.724+0000: starting up +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-system-x86_64 -name ubuntu-vm -S -machine pc-i440fx-2.1,accel=kvm,usb=off -cpu Haswell -m 1024 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 395110dc-9fbe-4542-8fce-4ef958f24b2c -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ubuntu-vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr +=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/var/lib/libvirt/images/ubuntusaucy.qcow2,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/libvirt/images/ubuntu-14.04-mini.iso,if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id +=ide0-0-0,bootindex=2 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:5e:71:5e,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -spice port=5900,addr=127.0.0.1,disable-ticketing,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0 -chardev spicevmc,id=charredir1, +name=usbredir -device usb-redir,chardev=charredir1,id=redir1 -chardev spicevmc,id=charredir2,name=usbredir -device usb-redir,chardev=charredir2,id=redir2 -chardev spicevmc,id=charredir3,name=usbredir -device usb-redir,chardev=charredir3,id=redir3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -object rng-random,id=rng0,filename=/dev/random -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x9 -msg timestamp=on +char device redirected to /dev/pts/0 (label charserial0) +main_channel_link: add main channel client +main_channel_handle_parsed: net test: latency 0.233000 ms, 
bitrate 17964912280 bps (17132.675438 Mbps) +red_dispatcher_set_cursor_peer: +inputs_connect: inputs channel client create +((null):5798): SpiceWorker-ERROR **: red_worker.c:8337:red_marshall_qxl_drawable: invalid type +red_channel_client_disconnect: 0x7f90397ec0c0 (channel 0x7f903812a090 type 5 id 0) +((null):8349): Spice-Warning **: red_channel.c:1661:red_channel_remove_client: channel type 5 id 0 - channel->thread_id (0x7f90362cba80) != pthread_self (0x7f9011fff700).If one of the threads is != io-thread && != vcpu-thread, this might be a BUG +snd_channel_put: sound channel freed +red_channel_client_disconnect: 0x7f903a04c4c0 (channel 0x7f903812a230 type 6 id 0) +((null):8349): Spice-Warning **: red_channel.c:1661:red_channel_remove_client: channel type 6 id 0 - channel->thread_id (0x7f90362cba80) != pthread_self (0x7f9011fff700).If one of the threads is != io-thread && != vcpu-thread, this might be a BUG +snd_channel_put: sound channel freed +KVM: injection failed, MSI lost (Input/output error) +qemu-system-x86_64: /home/abuild/rpmbuild/BUILD/qemu-2.1.0/hw/virtio/vhost.c:1003: vhost_virtqueue_mask: Assertion `r >= 0' failed. +2015-03-21 15:30:10.148+0000: shutting down \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1438572 b/results/classifier/gemma3:12b/kvm/1438572 new file mode 100644 index 00000000..81b9347e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1438572 @@ -0,0 +1,17 @@ + +kvm does not support KVM_CAP_USER_MEMORY Please upgrade to at least kernel 2.6.29 or recent kvm-kmod (see http://sourceforge.net/projects/kvm) + +We have a machine which is having QEMU+KVM on below configuration of linux +uname -a +Linux cairotrior 2.6.18-308.13.1.el5 #1 SMP Thu Jul 26 05:45:09 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux +cat /etc/issue +Red Hat Enterprise Linux Server release 5.8 (Tikanga) +Kernel \r on an \m + + +But in another setup, we are trying on a different machine having RHEL 5.9 having higher kernel version but it still gives below error +kvm does not support KVM_CAP_USER_MEMORY Please upgrade to at least kernel 2.6.29 or recent kvm-kmod (see http://sourceforge.net/projects/kvm). +failed to initialize KVM: Invalid argument No accelerator found! + + +I don’t know if the qemu version have compatibility issues with redhat 5.9 version – need someone to check if the qemu can run on redhat 5.9 64 bit or not ? \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1439 b/results/classifier/gemma3:12b/kvm/1439 new file mode 100644 index 00000000..47474b29 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1439 @@ -0,0 +1,12 @@ + +QEMU crashes when there is an "[accel]" section in the config file +Description of problem: +QEMU crashes with a segmentation fault if there is a "[accel]" section in the config file with a type="kvm" entry. It would be maybe still be OK if there was an error message instead, but it should certainly not crash. +Steps to reproduce: +``` +$ cat > /tmp/config <<EOF +[accel] +type = "kvm" +EOF +$ qemu-system-x86_64 -readconfig /tmp/config +``` diff --git a/results/classifier/gemma3:12b/kvm/1448985 b/results/classifier/gemma3:12b/kvm/1448985 new file mode 100644 index 00000000..78fd72ef --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1448985 @@ -0,0 +1,25 @@ + +llvmpipe i386 crashes when running on qemu64 cpu + +I have installed Ubuntu 14.04.2 amd64 with all updates. +I have downloaded the Ubuntu 14.0.4.2 i386 iso (ubuntu-14.04.2-desktop-i386.iso, MD5SUM = a8a14f1f92c1ef35dae4966a2ae1a264). 
+ +It does not boot to Unity from QEMU-KVM with the all following commands: +* sudo kvm -m 1536 -cdrom ubuntu-14.04.2-desktop-i386.iso +* sudo kvm -m 1536 -cdrom ubuntu-14.04.2-desktop-i386.iso -vga std +* sudo kvm -m 1536 -cdrom ubuntu-14.04.2-desktop-i386.iso -vga vmware + +ProblemType: Bug +DistroRelease: Ubuntu 14.04 +Package: qemu-kvm 2.0.0+dfsg-2ubuntu1.10 +ProcVersionSignature: Ubuntu 3.13.0-49.83-generic 3.13.11-ckt17 +Uname: Linux 3.13.0-49-generic x86_64 +NonfreeKernelModules: nvidia +ApportVersion: 2.14.1-0ubuntu3.10 +Architecture: amd64 +CurrentDesktop: Unity +Date: Mon Apr 27 14:11:31 2015 +InstallationDate: Installed on 2015-01-04 (112 days ago) +InstallationMedia: Ubuntu 14.04.1 LTS "Trusty Tahr" - Release amd64 (20140722.2) +SourcePackage: qemu +UpgradeStatus: No upgrade log present (probably fresh install) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1456 b/results/classifier/gemma3:12b/kvm/1456 new file mode 100644 index 00000000..f8dd7077 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1456 @@ -0,0 +1,12 @@ + +qemu-system-alpha crashes during migration +Description of problem: +QEMU crashes (aborts) when trying to migrate with qemu-system-alpha. + +``` +qemu-system-alpha: migration/ram.c:874: pss_find_next_dirty: Assertion `pss->host_page_end' failed. +``` +Steps to reproduce: +1. Run `./qemu-system-alpha -incoming tcp:0:1234` in one terminal +2. Run `./qemu-system-alpha -monitor stdio` in another terminal +3. Type `migrate tcp:0:1234` in the HMP monitor diff --git a/results/classifier/gemma3:12b/kvm/1456804 b/results/classifier/gemma3:12b/kvm/1456804 new file mode 100644 index 00000000..a3b28589 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1456804 @@ -0,0 +1,199 @@ + +kvm_irqchip_commit_routes: Assertion `ret == 0' failed. + +During win8.1 boot on qemu.git eba05e922e8e7f307bc5d4104a78797e55124e97, kernel 4.1-rc4, I get the following assert: + +qemu-system-x86_64: /net/gimli/home/alwillia/Work/qemu.git/kvm-all.c:1033: kvm_irqchip_commit_routes: Assertion `ret == 0' failed. + +Bisected to: + +commit 851c2a75a6e80c8aa5e713864d98cfb512e7229b +Author: Jason Wang <email address hidden> +Date: Thu Apr 23 14:21:47 2015 +0800 + + virtio-pci: speedup MSI-X masking and unmasking + + This patch tries to speed up the MSI-X masking and unmasking through + the mapping between vector and queues. With this patch it will there's + no need to go through all possible virtqueues, which may help to + reduce the time spent when doing MSI-X masking/unmasking a single + vector when more than hundreds or even thousands of virtqueues were + supported. + + Tested with 80 queue pairs virito-net-pci by changing the smp affinity + in the background and doing netperf in the same time: + + Before the patch: + 5711.70 Gbits/sec + After the patch: + 6830.98 Gbits/sec + + About 19.6% improvements in throughput. + + Cc: Michael S. Tsirkin <email address hidden> + Signed-off-by: Jason Wang <email address hidden> + Reviewed-by: Michael S. Tsirkin <email address hidden> + Signed-off-by: Michael S. Tsirkin <email address hidden> + +Backtrace: + +Program received signal SIGABRT, Aborted. 
+[Switching to Thread 0x7f32fffff700 (LWP 23059)] +0x00007f33187438d7 in raise () from /lib64/libc.so.6 +(gdb) bt +#0 0x00007f33187438d7 in raise () at /lib64/libc.so.6 +#1 0x00007f331874553a in abort () at /lib64/libc.so.6 +#2 0x00007f331873c47d in __assert_fail_base () at /lib64/libc.so.6 +#3 0x00007f331873c532 in () at /lib64/libc.so.6 +#4 0x000055e0252fed5b in kvm_irqchip_commit_routes (s=0x55e027ce17e0) + at /net/gimli/home/alwillia/Work/qemu.git/kvm-all.c:1033 +#5 0x000055e0252fef46 in kvm_update_routing_entry (s=0x55e027ce17e0, new_entry=0x7f32ffffe4a0) at /net/gimli/home/alwillia/Work/qemu.git/kvm-all.c:1078 +#6 0x000055e0252ff78e in kvm_irqchip_update_msi_route (s=0x55e027ce17e0, virq=0, msg=...) at /net/gimli/home/alwillia/Work/qemu.git/kvm-all.c:1282 +#7 0x000055e0255899a0 in virtio_pci_vq_vector_unmask (proxy=0x55e02a0ee580, queue_no=2, vector=2, msg=...) at hw/virtio/virtio-pci.c:588 +#8 0x000055e025589b76 in virtio_pci_vector_unmask (dev=0x55e02a0ee580, vector=2, msg=...) at hw/virtio/virtio-pci.c:641 +#9 0x000055e0255186f0 in msix_set_notifier_for_vector (dev=0x55e02a0ee580, vector=2) at hw/pci/msix.c:513 +#10 0x000055e0255187ee in msix_set_vector_notifiers (dev=0x55e02a0ee580, use_notifier=0x55e025589ad2 <virtio_pci_vector_unmask>, release_notifier=0x55e025589c0c <virtio_pci_vector_mask>, poll_notifier=0x55e025589cb1 <virtio_pci_vector_poll>) at hw/pci/msix.c:540 +#11 0x000055e02558a1f0 in virtio_pci_set_guest_notifiers (d=0x55e02a0ee580, nvqs=2, assign=true) at hw/virtio/virtio-pci.c:794 +#12 0x000055e02533c2bc in vhost_net_start (dev=0x55e02a0eef90, ncs=0x55e02a866ac0, total_queues=1) + at /net/gimli/home/alwillia/Work/qemu.git/hw/net/vhost_net.c:318 +#13 0x000055e025336dce in virtio_net_vhost_status (n=0x55e02a0eef90, status=7 '\a') at /net/gimli/home/alwillia/Work/qemu.git/hw/net/virtio-net.c:146 +#14 0x000055e025336e78 in virtio_net_set_status (vdev=0x55e02a0eef90, status=7 '\a') at /net/gimli/home/alwillia/Work/qemu.git/hw/net/virtio-net.c:165 +#15 0x000055e0253504c6 in virtio_set_status (vdev=0x55e02a0eef90, val=7 '\a') + at /net/gimli/home/alwillia/Work/qemu.git/hw/virtio/virtio.c:551 +#16 0x000055e025588d6d in virtio_ioport_write (opaque=0x55e02a0ee580, addr=18, val=7) at hw/virtio/virtio-pci.c:259 +#17 0x000055e0255891d1 in virtio_pci_config_write (opaque=0x55e02a0ee580, addr=18, val=7, size=1) at hw/virtio/virtio-pci.c:385 +#18 0x000055e025303155 in memory_region_write_accessor (mr=0x55e02a0eee00, addr=18, value=0x7f32ffffe908, size=1, shift=0, mask=255, attrs=...) + at /net/gimli/home/alwillia/Work/qemu.git/memory.c:457 +#19 0x000055e025303308 in access_with_adjusted_size (addr=18, value=0x7f32ffffe908, size=1, access_size_min=1, access_size_max=4, access= + 0x55e0253030d0 <memory_region_write_accessor>, mr=0x55e02a0eee00, attrs=...) at /net/gimli/home/alwillia/Work/qemu.git/memory.c:516 +#20 0x000055e025305b15 in memory_region_dispatch_write (mr=0x55e02a0eee00, addr=18, data=7, size=1, attrs=...) 
+ at /net/gimli/home/alwillia/Work/qemu.git/memory.c:1166 +#21 0x000055e0252b68eb in address_space_rw (as=0x55e025b71a80 <address_space_io>, addr=49458, attrs=..., buf=0x7f3322f48000 "\a", len=1, is_write=true) + at /net/gimli/home/alwillia/Work/qemu.git/exec.c:2363 +#22 0x000055e02530041e in kvm_handle_io (port=49458, attrs=..., data=0x7f3322f48000, direction=1, size=1, count=1) + at /net/gimli/home/alwillia/Work/qemu.git/kvm-all.c:1679 +#23 0x000055e025300914 in kvm_cpu_exec (cpu=0x55e02a1383a0) +---Type <return> to continue, or q <return> to quit--- + at /net/gimli/home/alwillia/Work/qemu.git/kvm-all.c:1839 +#24 0x000055e0252e8062 in qemu_kvm_cpu_thread_fn (arg=0x55e02a1383a0) + at /net/gimli/home/alwillia/Work/qemu.git/cpus.c:947 +#25 0x00007f3321b2352a in start_thread () at /lib64/libpthread.so.0 +#26 0x00007f331880f22d in clone () at /lib64/libc.so.6 + +VM XML (-snapshot added only for bisect): + +<domain type='kvm' id='8' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> + <name>Steam</name> + <uuid>79f39860-d659-426b-89e8-90cb46ee57c6</uuid> + <memory unit='KiB'>4194304</memory> + <currentMemory unit='KiB'>4194304</currentMemory> + <memoryBacking> + <hugepages/> + </memoryBacking> + <vcpu placement='static'>4</vcpu> + <cputune> + <vcpupin vcpu='0' cpuset='0'/> + <vcpupin vcpu='1' cpuset='1'/> + <vcpupin vcpu='2' cpuset='2'/> + <vcpupin vcpu='3' cpuset='3'/> + </cputune> + <resource> + <partition>/machine</partition> + </resource> + <os> + <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type> + <loader readonly='yes' type='pflash'>/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> + <nvram template='/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd'>/var/lib/libvirt/qemu/nvram/Steam_VARS.fd</nvram> + </os> + <features> + <acpi/> + <apic/> + <pae/> + <kvm> + <hidden state='on'/> + </kvm> + </features> + <cpu mode='host-passthrough'> + <topology sockets='1' cores='4' threads='1'/> + </cpu> + <clock offset='localtime'> + <timer name='rtc' tickpolicy='catchup'/> + <timer name='pit' tickpolicy='delay'/> + <timer name='hpet' present='no'/> + </clock> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <devices> + <emulator>/usr/local/bin/qemu-system-x86_64</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2' cache='none' io='native'/> + <source file='/mnt/store/vm/Steam.qcow2'/> + <backingStore type='file' index='1'> + <format type='raw'/> + <source file='/mnt/store/vm/Steam.img'/> + <backingStore/> + </backingStore> + <target dev='sda' bus='scsi'/> + <boot order='2'/> + <alias name='scsi0-0-0-0'/> + <address type='drive' controller='0' bus='0' target='0' unit='0'/> + </disk> + <controller type='scsi' index='0' model='virtio-scsi'> + <alias name='scsi0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> + </controller> + <controller type='usb' index='0' model='ich9-ehci1'> + <alias name='usb0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci1'> + <alias name='usb0'/> + <master startport='0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci2'> + <alias name='usb0'/> + <master startport='2'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci3'> + <alias 
name='usb0'/> + <master startport='4'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/> + </controller> + <controller type='pci' index='0' model='pci-root'> + <alias name='pci.0'/> + </controller> + <interface type='bridge'> + <mac address='52:54:00:60:ef:ac'/> + <source bridge='br0'/> + <target dev='vnet0'/> + <model type='virtio'/> + <alias name='net0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <hostdev mode='subsystem' type='pci' managed='yes'> + <driver name='vfio'/> + <source> + <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> + </source> + <alias name='hostdev0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </hostdev> + <hostdev mode='subsystem' type='pci' managed='yes'> + <driver name='vfio'/> + <source> + <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/> + </source> + <alias name='hostdev1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> + </hostdev> + <memballoon model='none'> + <alias name='balloon0'/> + </memballoon> + </devices> + <qemu:commandline> + <qemu:arg value='-snapshot'/> + </qemu:commandline> +</domain> \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1465935 b/results/classifier/gemma3:12b/kvm/1465935 new file mode 100644 index 00000000..3aeb080f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1465935 @@ -0,0 +1,8 @@ + +kvm_irqchip_commit_routes: Assertion `ret == 0' failed + +Several my QEMU instances crashed, and in the qemu log, I can see this assertion failure, + + qemu-system-x86_64: /build/buildd/qemu-2.0.0+dfsg/kvm-all.c:984: kvm_irqchip_commit_routes: Assertion `ret == 0' failed. + +The QEMU version is 2.0.0, HV OS is ubuntu 12.04, kernel 3.2.0-38. Guest OS is RHEL 6.3. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1469924 b/results/classifier/gemma3:12b/kvm/1469924 new file mode 100644 index 00000000..4d683a0e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1469924 @@ -0,0 +1,51 @@ + +qemu-kvm crash when guest os is booting + +this is the command line of qemu. 
+ + +2015-06-30 01:52:59.508+0000: starting up +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name rhel7 -S -machine pc-i440fx-2.1,accel=kvm,usb=off -cpu SandyBridge -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 2a3f1d8a-850d-4e37-aecd-65cbf1e4e415 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel7.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=on,strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device lsi,id=scsi0,bus=pci.0,addr=0x6 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/var/lib/libvirt/images/rhel7.qcow2,if=none,id=drive-ide0-0-0,format=qcow2 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/home/jemmy/Downloads/rhel-server-7.1-x86_64-dvd.iso,if=none,id=drive-ide0-0-1,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1,bootindex=2 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:b4:7b:bb,bus=pci.0,addr=0x8 -chardev socket,id=charserial0,host=127.0.0.1,port=4555,telnet,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -chardev file,id=charserial1,path=/tmp/log.txt -device isa-serial,chardev=charserial1,id=serial1 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -vnc 127.0.0.1:0 -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on +char device redirected to /dev/pts/2 (label charconsole0) + + +this is the error log of qemu when crash. 
+ +id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0 +id 1, group 1, virt start 7fbe20000000, virt end 7fbe23ffe000, generation 0, delta 7fbe20000000 +id 2, group 1, virt start 7fbe1c000000, virt end 7fbe20000000, generation 0, delta 7fbe1c000000 +((null):16237): Spice-CRITICAL **: red_memslots.c:69:validate_virt: virtual address out of range + virt=0x0+0x18 slot_id=0 group_id=1 + slot=0x0-0x0 delta=0x0 +Thread 4 (Thread 0x7fbeb3a32700 (LWP 16278)): +#0 0x00007fbec182d407 in ioctl () at /lib64/libc.so.6 +#1 0x00007fbecc80e565 in kvm_vcpu_ioctl () +#2 0x00007fbecc80e61c in kvm_cpu_exec () +#3 0x00007fbecc7fd0a2 in qemu_kvm_cpu_thread_fn () +#4 0x00007fbecb2f652a in start_thread () at /lib64/libpthread.so.0 +#5 0x00007fbec183722d in clone () at /lib64/libc.so.6 +Thread 3 (Thread 0x7fbeb15ff700 (LWP 16287)): +#0 0x00007fbecb2fe1cd in read () at /lib64/libpthread.so.0 +#1 0x00007fbec2a50499 in spice_backtrace_gstack () at /lib64/libspice-server.so.1 +#2 0x00007fbec2a57dae in spice_logv () at /lib64/libspice-server.so.1 +#3 0x00007fbec2a57f05 in spice_log () at /lib64/libspice-server.so.1 +#4 0x00007fbec2a177ff in validate_virt () at /lib64/libspice-server.so.1 +#5 0x00007fbec2a1791e in get_virt () at /lib64/libspice-server.so.1 +#6 0x00007fbec2a17fb9 in red_get_clip_rects () at /lib64/libspice-server.so.1 +#7 0x00007fbec2a1976f in red_get_drawable () at /lib64/libspice-server.so.1 +#8 0x00007fbec2a30332 in red_process_commands.constprop () at /lib64/libspice-server.so.1 +#9 0x00007fbec2a3638a in red_worker_main () at /lib64/libspice-server.so.1 +#10 0x00007fbecb2f652a in start_thread () at /lib64/libpthread.so.0 +#11 0x00007fbec183722d in clone () at /lib64/libc.so.6 +Thread 2 (Thread 0x7fbeb0bff700 (LWP 16289)): +#0 0x00007fbecb2fb590 in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0 +#1 0x00007fbecca954c9 in qemu_cond_wait () +#2 0x00007fbecca3bfe3 in vnc_worker_thread_loop () +#3 0x00007fbecca3c3c8 in vnc_worker_thread () +#4 0x00007fbecb2f652a in start_thread () at /lib64/libpthread.so.0 +#5 0x00007fbec183722d in clone () at /lib64/libc.so.6 +Thread 1 (Thread 0x7fbeccd2ca80 (LWP 16237)): +#0 0x00007fbec182bd51 in ppoll () at /lib64/libc.so.6 +#1 0x00007fbecca4d2ec in qemu_poll_ns () +#2 0x00007fbecca4ca94 in main_loop_wait () +#3 0x00007fbecc7d58dd in main () \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1469978 b/results/classifier/gemma3:12b/kvm/1469978 new file mode 100644 index 00000000..9a4ed47f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1469978 @@ -0,0 +1,34 @@ + +compile qemu use with KVM machine not supported + +I have to compile qemu 2.3.0 and 2.2.0. and install follow this. + ./configure --enable-kvm --target-list=x86_64-softmmu + make + make install +It's located in /usr/local/bin +I want to use qemu with KVM so I copy /usr/local/bin/qemu-system-x86_64 to /usr/bin + +and I run VMM for start my VM. + + + +---------------------------------------------------------------------------------------------------------------------------------------------------------------------- +Error starting domain: internal error: process exited while connecting to monitor: qemu-system-x86_64: -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6: Unsupported machine type +Use -machine help to list supported machines! 
+ + +Traceback (most recent call last): + File "/usr/share/virt-manager/virtManager/asyncjob.py", line 96, in cb_wrapper + callback(asyncjob, *args, **kwargs) + File "/usr/share/virt-manager/virtManager/asyncjob.py", line 117, in tmpcb + callback(*args, **kwargs) + File "/usr/share/virt-manager/virtManager/domain.py", line 1162, in startup + self._backend.create() + File "/usr/lib/python2.7/dist-packages/libvirt.py", line 866, in create + if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self) +libvirtError: internal error: process exited while connecting to monitor: qemu-system-x86_64: -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6: Unsupported machine type +Use -machine help to list supported machines! + +---------------------------------------------------------------------------------------------------------------------------------------------------------------------- + +I can't use my VM except reinstall kvm, qemu-kvm. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1477538 b/results/classifier/gemma3:12b/kvm/1477538 new file mode 100644 index 00000000..c51122a1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1477538 @@ -0,0 +1,57 @@ + +Windows QEMU Guest Agent VSS Provider service stops during qemu backup + +I’m currently implementing the QEMU Guest Agent on all my KVM Windows guests. + +I’m using the stable VirtIO drivers and Guest agent from Fedora: https://fedoraproject.org/wiki/Windows_Virtio_Drivers + +Both the stable and latest release do provide Qemu Windows Guest agent 7.0.0.10. + +After the Guest agent installation I initially received VSS events with ID 8194: + + Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied. + +This is often caused by incorrect security settings in either the writer or requestor process. +Operation: + Gathering Writer Data +Context: + + Writer Class Id: {e8132975-6f93-4464-a53e-1050253ae220} + + Writer Name: System Writer + + Writer Instance ID: {6c777a34-53dd-4fb3-a4c9-b85d7e183e27} + + +I was able to fix this issue by adding local access permissions for the “Network Service” account at the Dcom security permissions: + +On the client computer from the Start Menu, select Run +The Run dialog opens. +In the Open field type dcomcnfg and click OK. +The Component Services dialog opens. +Expand Component Services, Computers, and My Computer. +Right-click My Computer and click Properties on the pop-up menu. +The My Computer Properties dialog opens. +Click the COM Security tab. +Under Access Permission click Edit Default. +The Access Permissions dialog opens. +From the Access Permissions dialog, add the "Network Service" account with Local Access allowed. +Close all open dialogs. + +Now an initial backup runs without any errors but this is causing the QEMU Guest Agent VSS Provider service to stop running without any error in the event log. + +As a result a second backup will cause an error: MSDTC Client 2 with event ID: 4879: + +MSDTC encountered an error (HR=0x80000171) while attempting to establish a secure connection with system SERVERNAME. + +This is probably caused because the QEMU Guest Agent VSS Provider service isn’t running anymore. + +I can manually start the QEMU Guest Agent VSS Provider service but every backup is causing the service to stop. + +I’m seeing this behavior at all my Windows based guests running both Windows Server 2012 R2 and Windows Server 2008 R2. 
+ + Since I can’t find any logging or troubleshooting possibilities for this particular service I'm open for suggestions how to troubleshoot this issue to receive detailed information about the reason why this services stops running during a backup. + + +qemu-server 3.4-3 amd64 +Guest agent 7.0.0.10 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1487264 b/results/classifier/gemma3:12b/kvm/1487264 new file mode 100644 index 00000000..7e24385f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1487264 @@ -0,0 +1,19 @@ + +Windows 8.1/10 Crashes during upgrade - SYSTEM_THREAD_EXCEPTION_NOT_HANDLED + +Ever since Windows 8.x, 10 I cannot upgrade or upgrade to tech builds within Windows 10 without hard shutting off the VM. + +Physical hardware: Intel(R) Core(TM) i7-4910MQ CPU @ 2.90GHz [Haswell] + +QEMU 2.1-2.3.x seem all broken, I am using Q35 chipset w/ BIOS mode. + +Launch command via virt-manager/libvirt launch: + +QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name Windows_10 -S -machine pc-q35-2.3,accel=kvm,usb=off -cpu Haswell-noTSX,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid ed7e372b-ebf9-4feb-a305-869f82e6aaee -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/Windows_10.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x1 -device ich9-usb-ehci1,id=usb,bus=pci.2,addr=0x3.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.2,multifunction=on,addr=0x3 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.2,addr=0x3.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.2,addr=0x3.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x4 -drive file=/var/lib/libvirt/images/Windows_10.qcow2,if=none,id=drive-sata0-0-0,format=qcow2 -device ide-hd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=1 -drive file=/usr/share/virtio-win/virtio-win-0.1.109.iso,if=none,media=cdrom,id=drive-sata0-0-1,readonly=on,format=raw -device ide-cd,bus=ide.1,drive=drive-sata0-0-1,id=sata0-0-1 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:54:14:20,bus=pci.2,addr=0x1 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -device usb-tablet,id=input0 -spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga,id=video0,ram_size=268435456,vram_size=268435456,vgamem_mb=256,bus=pcie.0,addr=0x1 -device ich9-intel-hda,id=sound0,bus=pci.2,addr=0x2 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device usb-host,hostbus=1,hostaddr=2,id=hostdev0 -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x5 -msg timestamp=on + +The workaround I've been able to come up with is to set boot menu in virt-manager, then put in a bootable CD so I have enough time to hard power off the QEMU/KVM instance, when I power it back on, it continues upgrade/install without issue, each time it needs to restart however I go though same exercise. 
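+
+The hard power-off/power-on cycle itself can also be done from the host (a sketch using libvirt's command-line tool and the domain name from the command line above, rather than clicking through virt-manager):
+
+```
+# forceful power-off, equivalent to cutting power to the guest
+virsh destroy Windows_10
+# start it again; the upgrade then resumes from where it left off
+virsh start Windows_10
+```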
+ +Anything known about this issue? The workaround is a kludge, but it does get it to upgrade/install Windows 8.1, and upgrade between Windows 10 X builds. + +Thanks, +Shawn \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1488901 b/results/classifier/gemma3:12b/kvm/1488901 new file mode 100644 index 00000000..5e186c9a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1488901 @@ -0,0 +1,163 @@ + +KVM guest crashes when doing a block commit command + +I scripted a simple backup procedure that works like this: + +1. Create snapshot + + virsh snapshot-create-as ${VM} "backup-${VM}" \ + --diskspec vda,file=${SNAP}/backup-snapshot-${VM}.qcow2 \ + --disk-only \ + --atomic \ + --quiesce \ + --no-metadata + +2. Copy disk image to different location + + cp -f --sparse=always /var/lib/libvirt/images/${VM}.img ${DST}/ + +3. Merge snapshot back to base image + + virsh blockcommit ${VM} vda --wait --active --verbose + virsh blockjob ${VM} ${SNAP}/backup-snapshot-${VM}.qcow2 --pivot + +4. Copy XML liver file + + cp -f /etc/libvirt/qemu/${VM}.xml ${DST}/ + +5. Remove old snapshot file + + rm -f ${SNAP}/backup-snapshot-${VM}.qcow2 + +When it comes to the blockcommit operation, the guest receives a SIGABRT. + +(gdb) bt +#0 0x00007f4b6e6ccb8e in raise () from /lib64/libc.so.6 +#1 0x00007f4b6e6ce391 in abort () from /lib64/libc.so.6 +#2 0x0000555a316a8c39 in qemu_coroutine_enter (co=0x555a34651a50, opaque=0x0) + at /var/tmp/portage/app-emulation/qemu-2.4.0/work/qemu-2.4.0/qemu-coroutine.c:111 +#3 0x0000555a316a8eda in qemu_co_queue_run_restart (co=co@entry=0x555a33d271b0) + at /var/tmp/portage/app-emulation/qemu-2.4.0/work/qemu-2.4.0/qemu-coroutine-lock.c:59 +#4 0x0000555a316a8b53 in qemu_coroutine_enter (co=0x555a33d271b0, opaque=<optimized out>) + at /var/tmp/portage/app-emulation/qemu-2.4.0/work/qemu-2.4.0/qemu-coroutine.c:118 +#5 0x0000555a316e3adf in bdrv_co_aio_rw_vector (bs=bs@entry=0x555a336a6be0, + sector_num=sector_num@entry=113551488, qiov=qiov@entry=0x555a3367d2c8, + nb_sectors=nb_sectors@entry=15360, flags=flags@entry=(unknown: 0), + cb=cb@entry=0x555a316e1fe0 <mirror_read_complete>, opaque=0x555a3367d2c0, is_write=is_write@entry=false) + at /var/tmp/portage/app-emulation/qemu-2.4.0/work/qemu-2.4.0/block/io.c:2142 +#6 0x0000555a316e4b1e in bdrv_aio_readv (bs=bs@entry=0x555a336a6be0, + sector_num=sector_num@entry=113551488, qiov=qiov@entry=0x555a3367d2c8, + nb_sectors=nb_sectors@entry=15360, cb=cb@entry=0x555a316e1fe0 <mirror_read_complete>, + opaque=opaque@entry=0x555a3367d2c0) + at /var/tmp/portage/app-emulation/qemu-2.4.0/work/qemu-2.4.0/block/io.c:1744 +#7 0x0000555a316e2ccf in mirror_iteration (s=0x555a34a0c250) + at /var/tmp/portage/app-emulation/qemu-2.4.0/work/qemu-2.4.0/block/mirror.c:302 +#8 mirror_run (opaque=0x555a34a0c250) + at /var/tmp/portage/app-emulation/qemu-2.4.0/work/qemu-2.4.0/block/mirror.c:512 +#9 0x0000555a316a9a5a in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) + at /var/tmp/portage/app-emulation/qemu-2.4.0/work/qemu-2.4.0/coroutine-ucontext.c:80 +#10 0x00007f4b6e6df4a0 in ?? () from /lib64/libc.so.6 +#11 0x00007ffe67b71840 in ?? () +#12 0x0000000000000000 in ?? () +(gdb) + +There is one very interesting aspect: + +After the guest died, I can restart it and if I do the exact same blockcommit and blockjob as shown above, everything succeeds without errors. That's strange. + +This is from my libvirt log. 
So you can see how the guest was started and the C-routine error message: + +2015-08-24 18:38:13.077+0000: starting up libvirt version: 1.2.18, qemu version: 2.4.0 +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-system-x86_64 -name mx.roessner-net.de-TESTING -S -machine pc-i440fx-2.1,accel=kvm,usb=off -cpu qemu64,+kvm_pv_eoi -m 4096 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid d86b82d5-153f-4dd9-aa66-d98c2e65db8c -no-user-config -nodefaults -device sga -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/mx.roessner-net.de-TESTING.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-shutdown -boot order=cd,menu=on,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x8 -drive file=/var/lib/libvirt/images/mx.roessner-net.de-TESTING.img,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=54:52:00:27:ac:8d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/mx.roessner-net.de-TESTING.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:7 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device i6300esb,id=watchdog0,bus=pci.0,addr=0x7 -watchdog-action reset -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on +char device redirected to /dev/pts/8 (label charserial0) +Formatting '/var/backups/snapshots/backup-snapshot-mx.roessner-net.de-TESTING.qcow2', fmt=qcow2 size=107374182400 backing_file='/var/lib/libvirt/images/mx.roessner-net.de-TESTING.img' backing_fmt='raw' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16 +Formatting '/var/backups/snapshots/backup-snapshot-mx.roessner-net.de-TESTING.qcow2', fmt=qcow2 size=107374182400 backing_file='/var/lib/libvirt/images/mx.roessner-net.de-TESTING.img' backing_fmt='raw' encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16 +Co-routine re-entered recursively +2015-08-24 19:43:17.700+0000: shutting down + +Here is my envirmornment information: + +emerge --info qemu +Portage 2.2.20.1 (python 2.7.9-final-0, hardened/linux/amd64/no-multilib, gcc-4.8.4, glibc-2.20-r2, 4.1.6-gentoo x86_64) +================================================================= + System Settings +================================================================= +System uname: Linux-4.1.6-gentoo-x86_64-Intel-R-_Xeon-R-_CPU_L5520_@_2.27GHz-with-gentoo-2.2 +KiB Mem: 49454932 total, 7729532 free +KiB Swap: 2097148 total, 2097148 free +Timestamp of repository gentoo: Tue, 25 Aug 2015 21:15:01 +0000 +sh bash 4.3_p39 +ld GNU ld (Gentoo 2.24 p1.4) 2.24 +ccache version 3.1.9 [enabled] +app-shells/bash: 4.3_p39::gentoo +dev-lang/perl: 5.20.2::gentoo +dev-lang/python: 2.7.9-r1::gentoo, 3.4.1::gentoo +dev-util/ccache: 3.1.9-r4::gentoo 
+dev-util/cmake: 3.2.2::gentoo +dev-util/pkgconfig: 0.28-r2::gentoo +sys-apps/baselayout: 2.2::gentoo +sys-apps/openrc: 0.17::gentoo +sys-apps/sandbox: 2.6-r1::gentoo +sys-devel/autoconf: 2.69::gentoo +sys-devel/automake: 1.14.1::gentoo, 1.15::gentoo +sys-devel/binutils: 2.24-r3::gentoo +sys-devel/gcc: 4.8.4::gentoo +sys-devel/gcc-config: 1.7.3::gentoo +sys-devel/libtool: 2.4.6::gentoo +sys-devel/make: 4.1-r1::gentoo +sys-kernel/linux-headers: 3.18::gentoo (virtual/os-headers) +sys-libs/glibc: 2.20-r2::gentoo +Repositories: + +gentoo + location: /usr/portage + sync-type: rsync + sync-uri: rsync://rsync.europe.gentoo.org/gentoo-portage + priority: -1000 + +x-portage + location: /usr/local/portage + masters: gentoo + priority: 0 + +ACCEPT_KEYWORDS="amd64" +ACCEPT_LICENSE="* -@EULA" +CBUILD="x86_64-pc-linux-gnu" +CFLAGS="-O2 -pipe" +CHOST="x86_64-pc-linux-gnu" +CONFIG_PROTECT="/etc /usr/share/easy-rsa /usr/share/gnupg/qualified.txt" +CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/fonts/fonts.conf /etc/gconf /etc/gentoo-release /etc/php/apache2-php5.6/ext-active/ /etc/php/cgi-php5.6/ext-active/ /etc/php/cli-php5.6/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo" +CXXFLAGS="-O2 -pipe" +DISTDIR="/usr/portage/distfiles" +EMERGE_DEFAULT_OPTS="--keep-going --with-bdeps=y --binpkg-respect-use=y --binpkg-changed-deps=y --usepkg=y --rebuilt-binaries=y --rebuilt-binaries-timestamp=20140405050000" +FCFLAGS="-O2 -pipe" +FEATURES="assume-digests binpkg-logs ccache compressdebug config-protect-if-modified distlocks ebuild-locks fixlafiles merge-sync news parallel-fetch preserve-libs protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch userpriv usersandbox usersync xattr" +FFLAGS="-O2 -pipe" +GENTOO_MIRRORS="http://de-mirror.org/gentoo/ rsync://de-mirror.org/gentoo/" +LANG="en_US.utf8" +LC_ALL="en_US.UTF-8" +LDFLAGS="-Wl,-O1 -Wl,--as-needed" +MAKEOPTS="-j17" +PKGDIR="/export/packages" +PORTAGE_CONFIGROOT="/" +PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --omit-dir-times --compress --force --whole-file --delete --stats --human-readable --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages" +PORTAGE_TMPDIR="/var/tmp" +USE="acl adns aio amd64 bacula-clientonly bacula-console bash-completion berkdb bindist btrfs bzip2 caps cli cracklib crypt curl cxx device-mapper dri gdbm hardened iconv ipv6 justify logrotate loop-aes lzo mmap mmx mmxext modules ncurses nls nptl nscd ntp openmp openssl pam pax_kernel pcre pie readline seccomp session sse sse2 ssl ssp systemd tcpd threads unicode urandom vim-syntax xattr xtpax zlib" ABI_X86="64" ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci" APACHE2_MODULES="authn_core authz_core socache_shmcb unixd actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias" CALLIGRA_FEATURES="kexi words flow plan sheets stage tables krita karbon braindump author" CAMERAS="ptp2" COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog aggregation cgroups contextswitch cpu 
cpufreq curl curl_json curl_xml disk email entropy ethstat exec filecount fscache hddtemp ipmi iptables logfile multimeter netlink network nfs nginx ntpd numa openvpn ping postgresql processes protocols python sensors snmp uptime users uuid" CPU_FLAGS_X86="mmx sse sse2" ELIBC="glibc" GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ublox ubx" INPUT_DEVICES="keyboard mouse evdev" KERNEL="linux" LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text" LIBREOFFICE_EXTENSIONS="presenter-console presenter-minimizer" LINGUAS="de en" NGINX_MODULES_HTTP="access auth_basic autoindex browser charset dav empty_gif fastcgi geo gzip headers_more limit_conn limit_req map memcached proxy referer rewrite scgi spdy split_clients ssi upstream_ip_hash userid uwsgi" OFFICE_IMPLEMENTATION="libreoffice" PHP_TARGETS="php5-6" PYTHON_SINGLE_TARGET="python2_7" PYTHON_TARGETS="python2_7 python3_4" QEMU_SOFTMMU_TARGETS="x86_64 i386" QEMU_USER_TARGETS="x86_64 i386" RUBY_TARGETS="ruby19 ruby20" USERLAND="GNU" VIDEO_CARDS="fbdev glint intel mach64 mga nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l" XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account" +Unset: CC, CPPFLAGS, CTARGET, CXX, INSTALL_MASK, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS, USE_PYTHON + +================================================================= + Package Settings +================================================================= + +app-emulation/qemu-2.4.0::gentoo was built with the following: +USE="aio caps curl debug fdt filecaps iscsi jpeg lzo ncurses nfs nls pin-upstream-blobs png python sasl seccomp snappy spice ssh systemtap threads tls usb usbredir uuid vhost-net virtfs vnc xattr xfs -accessibility -alsa -bluetooth -glusterfs -gtk -gtk2 -infiniband -numa -opengl -pulseaudio -rbd -sdl -sdl2 (-selinux) -smartcard -static -static-softmmu -static-user -tci -test -vde -vte -xen" PYTHON_TARGETS="python2_7" QEMU_SOFTMMU_TARGETS="i386 x86_64 -aarch64 (-alpha) (-arm) -cris -lm32 (-m68k) -microblaze -microblazeel (-mips) -mips64 -mips64el -mipsel -moxie -or32 (-ppc) (-ppc64) -ppcemb -s390x -sh4 -sh4eb (-sparc) -sparc64 -unicore32 -xtensa -xtensaeb" QEMU_USER_TARGETS="i386 x86_64 -aarch64 (-alpha) (-arm) -armeb -cris (-m68k) -microblaze -microblazeel (-mips) -mips64 -mips64el -mipsel -mipsn32 -mipsn32el -or32 (-ppc) (-ppc64) -ppc64abi32 -s390x -sh4 -sh4eb (-sparc) -sparc32plus -sparc64 -unicore32" + +>>> Attempting to run pkg_info() for 'app-emulation/qemu-2.4.0' +Using: + app-emulation/spice-protocol-0.12.3 + sys-firmware/ipxe-1.0.0_p20130925 + sys-firmware/seabios-1.7.5 + USE=binary + sys-firmware/vgabios-0.7a + +The server hardware is a HP ProLiant SE316M1-R2 (also known as DL160 G6) with 48GB RAM and a RAID1+0 with 15k SAS disks. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1490853 b/results/classifier/gemma3:12b/kvm/1490853 new file mode 100644 index 00000000..fd406e46 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1490853 @@ -0,0 +1,217 @@ + +qemu windows guest hangs on 100% cpu usage + +hi: +I have two VM , one is winXP Prefessional SP3 32bit, another on is WindowsServer2008 Enterprise SP2 64bit. 
+When I hot reboot winXP in guest OS, it'll hangs on progress bar, and all the vcpu thread in qemu is 100% usage. +I try to rebuild kvm and add some debug info , I found the cpu exit reason is EXIT_REASON_PAUSE_INSTRUCTION. +It seems like all the vcpu always in spinlock waiting. I not sure it's qemu's bug or kvm's. +Any help would be appreciated. + +How reproducible: +WinXP: seems always. +WinServer2008: rare. + +Steps to Reproduce: +winXP: 1. hot reboot the xp guest os, hot reboot is necessary. +WinServer2008: not sure, I didn't do anything, it just happened. + +The different between WinXP and WInServer2008: +1. When WinXP hangs, the boot progress bar is rolling, I think that vnc is work fine. +2. When WinServer2008 hangs, the vnc show the last screen and the screen won't change anything include system time. +3. When the VM hangs , if I execute "virsh suspend vm-name" and "virsh resume vm-name", the WinServer2008 will change to normal , and work fine not hangs anymore. But WinXP not change anything, still hangs. + +qemu version: +QEMU emulator version 1.5.0, Copyright (c) 2003-2008 Fabrice Bellard +host info: +Ubuntu 12.04 LTS \n \l +Linux cvknode2026 3.13.6 #1 SMP Fri Dec 12 09:17:35 CST 2014 x86_64 x86_64 x86_64 GNU/Linux + + + qemu command line (guest OS XP): +root 7124 1178 7.6 7750360 3761644 ? Sl 14:02 435:23 /usr/bin/kvm -name x -S -machine pc-i440fx-1.5,accel=kvm,usb=off,system=windows -cpu qemu64,hv_relaxed,hv_spinlocks=0x2000 -m 6144 -smp 12,maxcpus=72,sockets=12,cores=6,threads=1 -uuid d3832129-f77d-4b21-bbf7-fd337f53e572 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/x.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,clock=vm,driftfix=slew -no-hpet -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device usb-ehci,id=ehci,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/vms/images/sn1-of-ff.qcow2,if=none,id=drive-ide0-0-0,format=qcow2,cache=directsync -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive if=none,id=drive-ide0-1-1,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1,bootindex=2 -netdev tap,fd=24,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=0c:da:41:1d:f8:40,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/x.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0 -vnc 0.0.0.0:0 -device VGA,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 + + + all qemu thread (guest OS XP): +root@cvknode2026:/proc/7124/task# top -d 1 -H -p 7124 +top - 14:37:05 up 7 days, 4:07, 1 user, load average: 10.71, 10.90, 10.19 +Tasks: 14 total, 12 running, 2 sleeping, 0 stopped, 0 zombie +Cpu(s): 38.8%us, 11.2%sy, 0.0%ni, 50.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st +Mem: 49159888k total, 35665128k used, 13494760k free, 436312k buffers +Swap: 8803324k total, 0k used, 8803324k free, 28595100k cached + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P SWAP WCHAN COMMAND + 7130 root 20 0 7568m 3.6g 6628 R 101 7.7 33:43.48 3 3.8g - kvm + 7132 root 20 0 7568m 3.6g 6628 R 101 7.7 33:43.13 1 3.8g - kvm + 7133 root 20 0 7568m 3.6g 6628 R 101 7.7 33:42.70 6 3.8g - kvm + 7135 root 20 0 7568m 3.6g 6628 R 101 7.7 33:42.33 11 3.8g - kvm 
+ 7137 root 20 0 7568m 3.6g 6628 R 101 7.7 33:42.59 17 3.8g - kvm + 7126 root 20 0 7568m 3.6g 6628 R 100 7.7 34:06.76 4 3.8g - kvm + 7127 root 20 0 7568m 3.6g 6628 R 100 7.7 33:44.14 8 3.8g - kvm + 7128 root 20 0 7568m 3.6g 6628 R 100 7.7 33:43.64 13 3.8g - kvm + 7129 root 20 0 7568m 3.6g 6628 R 100 7.7 33:43.64 7 3.8g - kvm + 7131 root 20 0 7568m 3.6g 6628 R 100 7.7 33:44.24 10 3.8g - kvm + 7134 root 20 0 7568m 3.6g 6628 R 100 7.7 33:42.47 12 3.8g - kvm + 7136 root 20 0 7568m 3.6g 6628 R 100 7.7 33:42.16 2 3.8g - kvm + 7124 root 20 0 7568m 3.6g 6628 S 1 7.7 0:30.65 14 3.8g poll_sche kvm + 7139 root 20 0 7568m 3.6g 6628 S 0 7.7 0:01.71 14 3.8g futex_wai kvm + +all thread's kernel stack (guest OS XP): +root@cvknode2026:/proc/7124/task# cat 7130/stack +[<ffffffffa02b1fa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7132/stack +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7133/stack +[<ffffffffa02b1fa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7135/stack +[<ffffffffa02b1fa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffa02b6788>] vmx_vcpu_run+0x88/0x760 [kvm_intel] +[<ffffffffa0413aec>] __vcpu_run+0x63c/0xc30 [kvm] +[<ffffffffa0414188>] kvm_arch_vcpu_ioctl_run+0xa8/0x270 [kvm] +[<ffffffffa03fc042>] kvm_vcpu_ioctl+0x512/0x6d0 [kvm] +[<ffffffff811d4326>] do_vfs_ioctl+0x86/0x4f0 +[<ffffffff811d4821>] SyS_ioctl+0x91/0xb0 +[<ffffffff817610ad>] system_call_fastpath+0x1a/0x1f +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7137/stack +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7126/stack +[<ffffffffa02b1fa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7127/stack +[<ffffffffa02b74f6>] handle_pause+0x16/0x30 [kvm_intel] +[<ffffffffa02ba0d4>] vmx_handle_exit+0x94/0x8b0 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7128/stack +[<ffffffffa02b1fa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7129/stack +[<ffffffffa02b1fa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7131/stack +[<ffffffffa02b1fa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7134/stack +[<ffffffffa02b74fe>] handle_pause+0x1e/0x30 [kvm_intel] +[<ffffffffa02ba0d4>] vmx_handle_exit+0x94/0x8b0 [kvm_intel] +[<ffffffffa0413aec>] __vcpu_run+0x63c/0xc30 [kvm] +[<ffffffffa0414188>] kvm_arch_vcpu_ioctl_run+0xa8/0x270 [kvm] +[<ffffffffa03fc042>] kvm_vcpu_ioctl+0x512/0x6d0 [kvm] +[<ffffffff811d4326>] do_vfs_ioctl+0x86/0x4f0 +[<ffffffff811d4821>] SyS_ioctl+0x91/0xb0 +[<ffffffff817610ad>] system_call_fastpath+0x1a/0x1f +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7136/stack +[<ffffffffa02b1fa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7124/stack +[<ffffffff811d50c9>] poll_schedule_timeout+0x49/0x70 +[<ffffffff811d678a>] do_sys_poll+0x50a/0x590 +[<ffffffff811d68eb>] SyS_poll+0x6b/0x100 +[<ffffffff817610ad>] system_call_fastpath+0x1a/0x1f +[<ffffffffffffffff>] 
0xffffffffffffffff +root@cvknode2026:/proc/7124/task# cat 7139/stack +[<ffffffff810daf77>] futex_wait_queue_me+0xd7/0x150 +[<ffffffff810dc087>] futex_wait+0x1a7/0x2c0 +[<ffffffff810ddc14>] do_futex+0x334/0xb70 +[<ffffffff810de592>] SyS_futex+0x142/0x1a0 +[<ffffffff817610ad>] system_call_fastpath+0x1a/0x1f +[<ffffffffffffffff>] 0xffffffffffffffff + + qemu command line (guest OS WinServer2008): +root 25258 996 21.5 21174412 14181580 ? Sl Aug27 73740:11 /usr/bin/kvm -name zjx_1-clone -S -machine pc-i440fx-1.5,accel=kvm,usb=off,system=windows -cpu qemu64,hv_relaxed,hv_spinlocks=0x2000 -m 16384 -smp 12,maxcpus=72,sockets=12,cores=6,threads=1 -uuid 8c8b9abf-e9a6-4c3e-93cd-137a9550e593 -no-user-config -nodefaults -chardev so +cket,id=charmonitor,path=/var/lib/libvirt/qemu/zjx_1-clone.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,clock=vm,driftfix=slew -no-hpet -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device usb-ehci,id=ehci,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus +=pci.0,addr=0x5 -drive file=/vms/aaa/zjx_1-clone.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=directsync -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/vms/isos/virtio-win2008R2.vfd,if=none,id=drive-fdc0-0-0,readonly=on,format=raw,cache=directsync -global isa-fdc.driveA=drive-fdc0-0-0 -drive if=none,id=drive-ide0-1-1,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1,bootindex=2 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=0c:da:41:1d:b6:47,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-ser +ial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/zjx_1-clone.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0 -vnc 0.0.0.0:3 -device VGA,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 + + all qemu thread (guest OS WinServer2008): + top -d 1 -H -p 25258 +top - 14:53:37 up 24 days, 21:27, 2 users, load average: 19.12, 20.56, 20.20 +Tasks: 14 total, 13 running, 1 sleeping, 0 stopped, 0 zombie +Cpu(s): 48.1%us, 18.2%sy, 0.0%ni, 33.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st +Mem: 65674944k total, 64651012k used, 1023932k free, 194608k buffers +Swap: 8803324k total, 4140324k used, 4663000k free, 363712k cached + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P WCHAN COMMAND +25281 root 20 0 20.2g 13g 4020 R 157 21.6 5864:12 14 - kvm +25284 root 20 0 20.2g 13g 4020 R 155 21.6 5863:02 4 - kvm +25294 root 20 0 20.2g 13g 4020 R 153 21.6 5851:59 3 - kvm +25287 root 20 0 20.2g 13g 4020 R 152 21.6 5861:20 15 - kvm +25299 root 20 0 20.2g 13g 4020 R 152 21.6 5847:14 1 - kvm +25258 root 20 0 20.2g 13g 4020 R 122 21.6 3372:41 13 - kvm +25269 root 20 0 20.2g 13g 4020 R 101 21.6 5929:42 5 - kvm +25301 root 20 0 20.2g 13g 4020 R 101 21.6 5847:26 10 - kvm +25292 root 20 0 20.2g 13g 4020 R 100 21.6 5853:18 7 - kvm +25297 root 20 0 20.2g 13g 4020 R 100 21.6 5843:37 16 - kvm +25272 root 20 0 20.2g 13g 4020 R 98 21.6 5872:52 2 - kvm +25277 root 20 0 20.2g 13g 4020 R 93 21.6 5878:21 0 - kvm +25290 root 20 0 20.2g 13g 4020 R 51 21.6 5863:15 8 - kvm +25314 root 20 0 20.2g 13g 4020 S 0 21.6 0:41.42 1 futex_wai kvm + +all thread's kernel stack (guest OS WinServer2008): +root@cvk11:/proc/25258/task# cat 
25281/stack +[<ffffffffa03cdfa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffa03d60d4>] vmx_handle_exit+0x94/0x8b0 [kvm_intel] +[<ffffffffa062cbb4>] __vcpu_run+0x704/0xc30 [kvm] +[<ffffffffa062d188>] kvm_arch_vcpu_ioctl_run+0xa8/0x270 [kvm] +[<ffffffffa0615042>] kvm_vcpu_ioctl+0x512/0x6d0 [kvm] +[<ffffffff811d4326>] do_vfs_ioctl+0x86/0x4f0 +[<ffffffff811d4821>] SyS_ioctl+0x91/0xb0 +[<ffffffff817610ad>] system_call_fastpath+0x1a/0x1f +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25284/stack +[<ffffffffa0613537>] kvm_vcpu_yield_to+0x47/0xa0 [kvm] +[<ffffffffa06136ab>] kvm_vcpu_on_spin+0x11b/0x150 [kvm] +[<ffffffffa03cdfa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25294/stack +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25287/stack +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25299/stack +[<ffffffffa03d34f6>] handle_pause+0x16/0x30 [kvm_intel] +[<ffffffffa03d60d4>] vmx_handle_exit+0x94/0x8b0 [kvm_intel] +[<ffffffffa062caec>] __vcpu_run+0x63c/0xc30 [kvm] +[<ffffffffa062d188>] kvm_arch_vcpu_ioctl_run+0xa8/0x270 [kvm] +[<ffffffffa0615042>] kvm_vcpu_ioctl+0x512/0x6d0 [kvm] +[<ffffffff811d4326>] do_vfs_ioctl+0x86/0x4f0 +[<ffffffff811d4821>] SyS_ioctl+0x91/0xb0 +[<ffffffff817610ad>] system_call_fastpath+0x1a/0x1f +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25258/stack +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25269/stack +[<ffffffffa03d34fe>] handle_pause+0x1e/0x30 [kvm_intel] +[<ffffffffa03d60d4>] vmx_handle_exit+0x94/0x8b0 [kvm_intel] +[<ffffffffa062caec>] __vcpu_run+0x63c/0xc30 [kvm] +[<ffffffffa062d188>] kvm_arch_vcpu_ioctl_run+0xa8/0x270 [kvm] +[<ffffffffa0615042>] kvm_vcpu_ioctl+0x512/0x6d0 [kvm] +[<ffffffff811d4326>] do_vfs_ioctl+0x86/0x4f0 +[<ffffffff811d4821>] SyS_ioctl+0x91/0xb0 +[<ffffffff817610ad>] system_call_fastpath+0x1a/0x1f +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25301/stack +[<ffffffffa03d34fe>] handle_pause+0x1e/0x30 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25292/stack +[<ffffffffa03cdfa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25297/stack +[<ffffffffa03cdfa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25272/stack +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25277/stack +[<ffffffffa03cdfa3>] clear_atomic_switch_msr+0x133/0x170 [kvm_intel] +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25290/stack +[<ffffffffffffffff>] 0xffffffffffffffff +root@cvk11:/proc/25258/task# cat 25314/stack +[<ffffffff810daf77>] futex_wait_queue_me+0xd7/0x150 +[<ffffffff810dc087>] futex_wait+0x1a7/0x2c0 +[<ffffffff810ddc14>] do_futex+0x334/0xb70 +[<ffffffff810de592>] SyS_futex+0x142/0x1a0 +[<ffffffff817610ad>] system_call_fastpath+0x1a/0x1f +[<ffffffffffffffff>] 0xffffffffffffffff \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1494350 b/results/classifier/gemma3:12b/kvm/1494350 new file mode 100644 index 00000000..3792b0e6 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1494350 @@ -0,0 +1,258 @@ + +QEMU: causes vCPU steal time overflow on live migration + +I'm pasting in text from Debian Bug 785557 
+https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557 +b/c I couldn't find this issue reported. + +It is present in QEMU 2.3, but I haven't tested later versions. Perhaps someone else will find this bug and confirm for later versions. (Or I will when I have time!) + +-------------------------------------------------------------------------------------------- + +Hi, + +I'm trying to debug an issue we're having with some debian.org machines +running in QEMU 2.1.2 instances (see [1] for more background). In short, +after a live migration guests running Debian Jessie (linux 3.16) stop +accounting CPU time properly. /proc/stat in the guest shows no increase +in user and system time anymore (regardless of workload) and what stands +out are extremely large values for steal time: + + % cat /proc/stat + cpu 2400 0 1842 650879168 2579640 0 25 136562317270 0 0 + cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0 + cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0 + cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0 + cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0 + intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 +0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 + ctxt 837862829 + btime 1431642967 + processes 8529939 + procs_running 1 + procs_blocked 0 + softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225 + +Reading the memory pointed to by the steal time MSRs pre- and +post-migration, I can see that post-migration the high bytes are set to +0xff: + +(qemu) xp /8b 0x1fc0cfc0 +000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff + +The "jump" in steal time happens when the guest is resumed on the +receiving side. + +I've also been able to consistently reproduce this on a Ganeti cluster +at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The +issue goes away if I disable the steal time MSR using `-cpu +qemu64,-kvm_steal_time`. + +So, it looks to me as if the steal time MSR is not set/copied properly +during live migration, although AFAICT this should be the case after +917367aa968fd4fef29d340e0c7ec8c608dffaab. 
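+
+As a quick guest-side sanity check for this condition, the steal column can be read straight from /proc/stat — a minimal sketch, assuming the field layout shown above (the 8th value after the "cpuN" label is steal time, in USER_HZ ticks):
+
+    def read_steal_ticks(path="/proc/stat"):
+        steal = {}
+        with open(path) as f:
+            for line in f:
+                fields = line.split()
+                if fields and fields[0].startswith("cpu") and len(fields) > 8:
+                    # user nice system idle iowait irq softirq steal ...
+                    steal[fields[0]] = int(fields[8])
+        return steal
+
+    for cpu, ticks in read_steal_ticks().items():
+        # ~136562317270 ticks, as in the dump above, is over 43 years at 100 Hz --
+        # clearly an overflowed counter rather than real steal time
+        flag = "  <-- implausible, counter overflowed" if ticks > 10**9 else ""
+        print(cpu, ticks, flag)
+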
+ +After investigating a bit more, it looks like the issue comes from an overflow +in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023): + + static void accumulate_steal_time(struct kvm_vcpu *vcpu) + { + u64 delta; + + if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) + return; + + delta = current->sched_info.run_delay - vcpu->arch.st.last_steal; + +Using systemtap with the attached script to trace KVM execution on the +receiving host kernel, we can see that shortly before marking the vCPUs +as runnable on a migrated KVM instance with 2 vCPUs, the following +happens (** marks lines of interest): + + ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns + 0 qemu-system-x86(18446): -> kvm_arch_vcpu_load + 0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick + 5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick + 23 qemu-system-x86(18446): <- kvm_arch_vcpu_load + 0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl + 2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl + 0 qemu-system-x86(18446): -> kvm_arch_vcpu_put + 2 qemu-system-x86(18446): -> kvm_put_guest_fpu + 3 qemu-system-x86(18446): <- kvm_put_guest_fpu + 4 qemu-system-x86(18446): <- kvm_arch_vcpu_put + ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns + 0 qemu-system-x86(18446): -> kvm_arch_vcpu_load + 1 qemu-system-x86(18446): <- kvm_arch_vcpu_load + 0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl + 1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl + 0 qemu-system-x86(18446): -> kvm_arch_vcpu_put + 1 qemu-system-x86(18446): -> kvm_put_guest_fpu + 2 qemu-system-x86(18446): <- kvm_put_guest_fpu + 3 qemu-system-x86(18446): <- kvm_arch_vcpu_put + ** 0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns + 0 qemu-system-x86(18449): -> kvm_arch_vcpu_load + ** 7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns + 10 qemu-system-x86(18449): <- kvm_arch_vcpu_load + ** 0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run + 4 qemu-system-x86(18449): -> kvm_arch_vcpu_runnable + 6 qemu-system-x86(18449): <- kvm_arch_vcpu_runnable + ... + 0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns + 0 qemu-system-x86(18448): -> kvm_arch_vcpu_load + ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns + 40 qemu-system-x86(18448): <- kvm_arch_vcpu_load + ** 0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run + 5 qemu-system-x86(18448): -> kvm_arch_vcpu_runnable + +Now, what's really interesting is that current->sched_info.run_delay +gets reset because the tasks (threads) using the vCPUs change, and thus +have a different current->sched_info: it looks like task 18446 created +the two vCPUs, and then they were handed over to 18448 and 18449 +respectively. This is also verified by the fact that during the +overflow, both vCPUs have the old steal time of the last vcpu_load of +task 18446. However, according to Documentation/virtual/kvm/api.txt: + + - vcpu ioctls: These query and set attributes that control the operation + of a single virtual cpu. + + Only run vcpu ioctls from the same thread that was used to create the vcpu. + + + +So it seems qemu is doing something that it shouldn't: calling vCPU +ioctls from a thread that didn't create the vCPU. Note that this +probably happens on every QEMU startup, but is not visible because the +guest kernel zeroes out the steal time on boot. 
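+
+The jump itself is plain 64-bit unsigned wraparound: redoing the subtraction from accumulate_steal_time() with the numbers from the trace (standalone Python, not kernel code) reproduces the traced deltas exactly:
+
+    U64 = 2 ** 64
+
+    last_steal = 7856949          # vcpu->arch.st.last_steal, i.e. the main thread's run_delay (ns)
+
+    # one vCPU thread resumed with run_delay = 40304 ns in its own sched_info
+    print((40304 - last_steal) % U64)   # 18446744073701734971, matching the first traced delta
+
+    # the other vCPU thread resumed with run_delay = 0 ns
+    print((0 - last_steal) % U64)       # 18446744073701694667, matching the second traced delta
+
+Because the subtraction is done on u64, any vCPU thread whose own run_delay is smaller than the stored last_steal yields a delta close to 2^64, and that is the value that ends up accumulated into the guest's steal time.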
+ +There are at least two ways to mitigate the issue without a kernel +recompilation: + + - The first one is to disable the steal time propagation from host to + guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will + short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val & + KVM_MSR_ENABLED) and will completely disable steal time reporting in + the guest, which may not be desired if people rely on it to detect + CPU congestion. + + - The other one is using the following systemtap script to prevent the + steal time counter from overflowing by dropping the problematic + samples (WARNING: systemtap guru mode required, use at your own + risk): + + probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") { + if (@defined($delta) && $delta < 0) { + printk(4, "kvm: steal time delta < 0, dropping") + $delta = 0 + } + } + +Note that not all *guests* handle this condition in the same way: 3.2 +guests still get the overflow in /proc/stat, but their scheduler +continues to work as expected. 3.16 guests OTOH go nuts once steal time +overflows and stop accumulating system & user time, while entering an +erratic state where steal time in /proc/stat is *decreasing* on every +clock tick. +-------------------------------------------- Revised statement: +> Now, what's really interesting is that current->sched_info.run_delay +> gets reset because the tasks (threads) using the vCPUs change, and +> thus have a different current->sched_info: it looks like task 18446 +> created the two vCPUs, and then they were handed over to 18448 and +> 18449 respectively. This is also verified by the fact that during the +> overflow, both vCPUs have the old steal time of the last vcpu_load of +> task 18446. However, according to Documentation/virtual/kvm/api.txt: + +The above is not entirely accurate: the vCPUs were created by the +threads that are used to run them (18448 and 18449 respectively), it's +just that the main thread is issuing ioctls during initialization, as +illustrated by the strace output on a different process: + + [ vCPU #0 thread creating vCPU #0 (fd 20) ] + [pid 1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20 + [pid 1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0 + [pid 1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0 + [pid 1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0 + + [ vCPU #1 thread creating vCPU #1 (fd 21) ] + [pid 1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21 + [pid 1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0 + [pid 1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0 + [pid 1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0 + + [ Main thread calling kvm_arch_put_registers() on vCPU #0 ] + [pid 1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0 + [pid 1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0 + [pid 1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0 + [pid 1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0 + [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87 + [pid 1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0 + [pid 1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0 + [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1 + [pid 1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0 + [pid 1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0 + + [ Main thread calling kvm_arch_put_registers() on vCPU #1 ] + [pid 1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0 + [pid 1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0 + 
[pid 1859] ioctl(21, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0 + [pid 1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0 + [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87 + [pid 1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0 + [pid 1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0 + [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1 + [pid 1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0 + [pid 1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0 + +Using systemtap again, I noticed that the main thread's run_delay is copied to +last_steal only after a KVM_SET_MSRS ioctl which enables the steal time +MSR is issued by the main thread (see linux +3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I +reverted the following qemu commits: + + commit 0e5035776df31380a44a1a851850d110b551ecb6 + Author: Marcelo Tosatti <email address hidden> + Date: Tue Sep 3 18:55:16 2013 -0300 + + fix steal time MSR vmsd callback to proper opaque type + + Convert steal time MSR vmsd callback pointer to proper X86CPU type. + + Signed-off-by: Marcelo Tosatti <email address hidden> + Signed-off-by: Paolo Bonzini <email address hidden> + + commit 917367aa968fd4fef29d340e0c7ec8c608dffaab + Author: Marcelo Tosatti <email address hidden> + Date: Tue Feb 19 23:27:20 2013 -0300 + + target-i386: kvm: save/restore steal time MSR + + Read and write steal time MSR, so that reporting is functional across + migration. + + Signed-off-by: Marcelo Tosatti <email address hidden> + Signed-off-by: Gleb Natapov <email address hidden> + +and the steal time jump on migration went away. However, steal time was +not reported at all after migration, which is expected after reverting +917367aa. + +So it seems that after 917367aa, the steal time MSR is correctly saved +and copied to the receiving side, but then it is restored by the main +thread (probably during cpu_synchronize_all_post_init()), causing the +overflow when the vCPU threads are unpaused. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1497204 b/results/classifier/gemma3:12b/kvm/1497204 new file mode 100644 index 00000000..4cf77955 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1497204 @@ -0,0 +1,58 @@ + +qemu-system-s390x: no SMP support without KVM + +It seems SMP support is not implemented for s390x target, at least when not running under KVM. There is also no error message when starting qemu, it just fails when the kernel tries to bring up the CPUs: + +$ qemu-system-s390x -nographic -smp 8 -kernel s390x/kernel.debian +[ 0.003309] Initializing cgroup subsys cpuset +[ 0.004183] Initializing cgroup subsys cpu +[ 0.004263] Initializing cgroup subsys cpuacct +[ 0.004493] Linux version 3.16.0-4-s390x (<email address hidden>) (gcc version 4.8.4 (Debian 4.8.4-1) ) #1 SMP Debian 3.16.7-ckt9-2 (2015-04-13) +[ 0.005816] setup: Linux is running under KVM in 64-bit mode +[ 0.007231] setup: Max memory size: 128MB +[ 0.032383] Zone ranges: +[ 0.034115] DMA [mem 0x00000000-0x7fffffff] +[ 0.034652] Normal empty +[ 0.034686] Movable zone start for each node +[ 0.034737] Early memory node ranges +[ 0.034847] node 0: [mem 0x00000000-0x07ffffff] +[ 0.047489] PERCPU: Embedded 12 pages/cpu @0000000007f29000 s17920 r8192 d23040 u49152 +[ 0.049613] Built 1 zonelists in Zone order, mobility grouping on. 
Total pages: 32320 +[ 0.049802] Kernel command line: +[ 0.053715] PID hash table entries: 512 (order: 0, 4096 bytes) +[ 0.053993] Dentry cache hash table entries: 16384 (order: 5, 131072 bytes) +[ 0.054330] Inode-cache hash table entries: 8192 (order: 4, 65536 bytes) +[ 0.061216] Memory: 115912K/131072K available (5701K kernel code, 847K rwdata, 2512K rodata, 452K init, 776K bss, 15160K reserved) +[ 0.062432] Write protected kernel read-only data: 0x100000 - 0x905fff +[ 0.068906] Hierarchical RCU implementation. +[ 0.068934] CONFIG_RCU_FANOUT set to non-default value of 32 +[ 0.068953] RCU dyntick-idle grace-period acceleration is enabled. +[ 0.068989] RCU restricting CPUs from NR_CPUS=32 to nr_cpu_ids=9. +[ 0.069045] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=9 +[ 0.070043] NR_IRQS:260 +[ 0.094273] console [ttyS1] enabled +[ 0.095630] pid_max: default: 32768 minimum: 301 +[ 0.097792] Security Framework initialized +[ 0.100624] AppArmor: AppArmor disabled by boot time parameter +[ 0.100677] Yama: disabled by default; enable with sysctl kernel.yama.* +[ 0.102466] Mount-cache hash table entries: 512 (order: 0, 4096 bytes) +[ 0.102556] Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes) +[ 0.116828] Initializing cgroup subsys memory +[ 0.117460] Initializing cgroup subsys devices +[ 0.117678] Initializing cgroup subsys freezer +[ 0.118080] Initializing cgroup subsys net_cls +[ 0.118267] Initializing cgroup subsys blkio +[ 0.118393] Initializing cgroup subsys perf_event +[ 0.118477] Initializing cgroup subsys net_prio +[ 0.119176] ftrace: allocating 17140 entries in 67 pages +XXX unknown sigp: 0xb +XXX unknown sigp: 0xb +XXX unknown sigp: 0xb +[...] +XXX unknown sigp: 0xb +[ 0.211835] cpu: 8 configured CPUs, 0 standby CPUs +XXX unknown sigp: 0xb +XXX unknown sigp: 0xb +[endless stream of messages continues until qemu is killed] + +The XXX message is printed by qemu FWIW. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1502934 b/results/classifier/gemma3:12b/kvm/1502934 new file mode 100644 index 00000000..38554be3 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1502934 @@ -0,0 +1,25 @@ + +QEMU does not start when kvm enabled (SMM issue?) + +Hi! + +QEMU stopped working after "[355023f2010c4df619d88a0dd7012b4b9c74c12c] pc: add SMM property" on my server. It says "Guest has not initialized the display (yet)." and nothing happens. But only if I use -enable-kvm. + +However, the problem gone after I hardcoded pc_machine_is_smm_enabled() to always return false (but I have little to no understanding of what SMM really is). + +CMD line that reproduces the issue: qemu-system-x86_64 -enable-kvm -display curses . It doesn't work the server, but works perfectly on my laptop :(. + +I'm using Arch Linux with all updates. +Some info: +Linux machine 4.2.2-1-ARCH #1 SMP PREEMPT Tue Sep 29 22:21:33 CEST 2015 x86_64 GNU/Linux +Qemu-2.4.0 (tried HEAD as well) +CPU: Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz +Some messages from dmesg, just in case: +[ 6.996297] kvm: VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL does not work properly. Using workaround +[ 6381.722990] kvm: zapping shadow pages for mmio generation wraparound + + +I'm more than happy to provide additional information if needed. 
+ +Cheers, +Alex \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1508405 b/results/classifier/gemma3:12b/kvm/1508405 new file mode 100644 index 00000000..c5f1ed1d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1508405 @@ -0,0 +1,70 @@ + +qemu 2.4.0 with --enable-kvm hangs, takes 100% CPU + +When starting qemu-system-x86_64 from version 2.4.0 with --enable-kvm, it hangs and takes 100% CPU. The graphical display (SeaBIOS output) is not initialized. + +There have been multiple reports of this issue in the following thread: +https://bbs.archlinux.org/viewtopic.php?pid=1572405 + +There is no need to load a certain image, it already hangs with the following command: +qemu-system-x86_64 --enable-kvm + +There are three workarounds: +- Downgrading the kernel form 4.2.2 to 4.1.6 (according to the forum thread, have not tested this myself) +- Downgrading qemu to 2.3 (tested personally, works) +- passing -machine pc-i440fx-2.3 to qemu 2.4 (have not tested this myself, I will try that shortly) + +modules kvm and kvm_intel are loaded and rmmod && modprobing them does not change the situation + +I have an nvidia card and switching from official binary drivers to nouveau and back does not change the situation. + + +qemu is installed from Arch package. From the PKGBUILD you can see that is is built with the following configuration: +================================================================ +export ARFLAGS="rv" + export CFLAGS+=' -fPIC' + ./configure --prefix=/usr --sysconfdir=/etc --audio-drv-list='pa alsa sdl' \ + --python=/usr/bin/python2 --smbd=/usr/bin/smbd \ + --enable-docs --libexecdir=/usr/lib/qemu \ + --disable-gtk --enable-linux-aio --enable-seccomp \ + --enable-spice --localstatedir=/var \ + --enable-tpm \ + --enable-modules --enable-{rbd,glusterfs,libiscsi,curl} + make V=99 +================================================================ + +cpuinfo on my machine (for the first core only): + +================================================================ +processor : 0 +vendor_id : GenuineIntel +cpu family : 6 +model : 30 +model name : Intel(R) Core(TM) i7 CPU Q 820 @ 1.73GHz +stepping : 5 +microcode : 0x7 +cpu MHz : 1333.000 +cache size : 8192 KB +physical id : 0 +siblings : 8 +core id : 0 +cpu cores : 4 +apicid : 0 +initial apicid : 0 +fpu : yes +fpu_exception : yes +cpuid level : 11 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida dtherm tpr_shadow vnmi flexpriority ept vpid +bugs : +bogomips : 3459.21 +clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual +================================================================ + +Is there more information I can provide you with to help debug this problem? 
+ +Thanks, + +cptG \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1524 b/results/classifier/gemma3:12b/kvm/1524 new file mode 100644 index 00000000..c97ac3bc --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1524 @@ -0,0 +1,38 @@ + +error while loading state for instance 0x0 of device 'kvm-tpr-opt',load of migration failed: Operation not permitted +Description of problem: +when i save and restore a guest,it report the error: "error while loading state for instance 0x0 of device 'kvm-tpr-opt',load of migration failed: Operation not permitted" +Steps to reproduce: +1.virsh save test ccc.img + +2.virsh restore ccc.im + + +it report error: + +[root@TOS-9772 ~]# virsh save test ccc.img + +[root@TOS-9772 ~]# virsh restore ccc.img + +error: Failed to restore domain from ccc.img + +error: internal error: qemu unexpectedly closed the monitor: qmp_cmd_name: query-hotpluggable-cpus, arguments: {} + +qmp_cmd_name: query-cpus-fast, arguments: {} + +qmp_cmd_name: query-iothreads, arguments: {} + +qmp_cmd_name: expire_password, arguments: {"protocol": "spice", "time": "never"} + +qmp_cmd_name: balloon, arguments: {"value": 1073741824} + +qmp_cmd_name: migrate-incoming, arguments: {"uri": "fd:29"} + +{"timestamp": {"seconds": 1677661413, "microseconds": 275227}, "event": "MIGRATION", "data": {"status": "setup"}} + +{"timestamp": {"seconds": 1677661413, "microseconds": 275600}, "event": "MIGRATION", "data": {"status": "active"}} + +2023-03-01T09:03:33.316549Z qemu-system-x86_64: error while loading state for instance 0x0 of device 'kvm-tpr-opt' + +2023-03-01T09:03:33.317076Z qemu-system-x86_64: load of migration failed: Operation not permitted +{"timestamp": {"seconds": 1677661413, "microseconds": 317297}, "event": "MIGRATION", "data": {"status": "failed"}} diff --git a/results/classifier/gemma3:12b/kvm/1524637 b/results/classifier/gemma3:12b/kvm/1524637 new file mode 100644 index 00000000..25eebc48 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1524637 @@ -0,0 +1,16 @@ + +system_powerdown/system_reset not working when exec stop on hmp + +system_powerdown/system_reset stops working in qemu for centos kernels if KVM is enabled. + +qemu versioin: 2.4 +linux kernel versioin: 4.2.5 + +How to reproduce: + +1. qemu-system-x86_64 -enable-kvm -drive if=none,id=drive0,file=/media/sda5/image/fc21/fc21.raw -device virtio-blk-pci,drive=drive0,iothread=iothread0 -machine smm=off -object iothread,id=iothread0 -monitor stdio +2. Enter stop in the qemu console, we can see the vm is stopped. +3. Enter system_powerdown in the qemu console +4. Nothing happens. + +Can you please give a prompt or something else when the vm isn't allowed to powerdown or reset? 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1526 b/results/classifier/gemma3:12b/kvm/1526 new file mode 100644 index 00000000..fd608d38 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1526 @@ -0,0 +1,2 @@ + +hw/vfio/trace-events incorrect format diff --git a/results/classifier/gemma3:12b/kvm/1529187 b/results/classifier/gemma3:12b/kvm/1529187 new file mode 100644 index 00000000..90e12a76 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1529187 @@ -0,0 +1,37 @@ + +vfio passtrhough fails at 'No available IOMMU models' on Intel BDW-EP platform + +Environment: + ------------ + Host OS (ia32/ia32e/IA64): ia32e + Guest OS (ia32/ia32e/IA64): ia32e + Guest OS Type (Linux/Windows): linux + kvm.git Commit: da3f7ca3 + qemu.git Commit: 38a762fe + Host Kernel Version: 4.4.0-rc2 + Hardware: BDW EP (Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz, Grantley-EP) + +Bug description: + -------------------------- + when create guest with vt-d assignment using vfio-pci driver, the guest can not be created. +Warning 'No available IOMMU models' + + +Reproduce steps: + ---------------- + 1. bind device to vfio-pci driver + 2. qemu-system-x86_64 -enable-kvm -m 512 -smp 2 -device vfio-pci,host=81:00.0 -net none -drive file=rhel7u2.qcow2,if=none,id=virtio-disk0 -device virtio-blk-pci,drive=virtio-disk0 + +Current result: + ---------------- + qemu-system-x86_64: -device vfio-pci,host=81:00.0: vfio: No available IOMMU models + qemu-system-x86_64: -device vfio-pci,host=81:00.0: vfio: failed to setup container for group 41 + qemu-system-x86_64: -device vfio-pci,host=81:00.0: vfio: failed to get group 41 + qemu-system-x86_64: -device vfio-pci,host=81:00.0: Device initialization failed + +Expected result: + ---------------- + guest can be created +Basic root-causing log: + ---------------------- + \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1530246 b/results/classifier/gemma3:12b/kvm/1530246 new file mode 100644 index 00000000..1f62992a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1530246 @@ -0,0 +1,25 @@ + +Suppressing kvm rdmsr errors to console + +I am seeing numerous kvm rdmsr messages logged to /dev/tty1 (console), and would like to know how to suppress these messages. I've attempted "echo 1 > /sys/module/kvm/parameters/ignore_msrs" and the messages still appear on tty1. + +I'm seeing the following rdmsr messages: +kvm [22212]: vcpu0 ignored rdmsr: 0x606 +kvm [22212]: vcpu0 ignored rdmsr: 0x611 +kvm [22212]: vcpu0 ignored rdmsr: 0x639 +kvm [22212]: vcpu0 ignored rdmsr: 0x641 +kvm [22212]: vcpu0 ignored rdmsr: 0x619 +kvm [22212]: vcpu0 ignored rdmsr: 0x1ad + + +The following QEMU/KVM RPMs are installed: +ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch +libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.x86_64 +libvirt-daemon-kvm-1.2.17-13.el7_2.2.x86_64 +qemu-img-ev-2.3.0-31.el7_2.4.1.x86_64 +qemu-kvm-common-ev-2.3.0-31.el7_2.4.1.x86_64 +qemu-kvm-ev-2.3.0-31.el7_2.4.1.x86_64 +qemu-kvm-tools-ev-2.3.0-31.el7_2.4.1.x86_64 + +uname -a +Linux server 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1532 b/results/classifier/gemma3:12b/kvm/1532 new file mode 100644 index 00000000..42417b5a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1532 @@ -0,0 +1,504 @@ + +libivrtd fork qemu to create vm ,which start with ceph rbd device, after vm status:runing , the qemu stuck at booting from hard disk.... 
+Description of problem: +[root@ceph-client ceph]# virsh list --all + Id Name State +---------------------------------------------------- + 19 c7_ceph running + +the vm qemu stuck at booting from hard disk..... +Steps to reproduce: +1. use ceph-deploy deploy a ceph distribute storage, which use to store vm's qcow2 files,this ceph has 3 osd node +2. refer the link https://docs.ceph.com/en/quincy/rbd/libvirt/ create a ceph user :client.libvirt +3. import a exists qcow2 file into ceph libvit-pool, then start vm + +[root@ceph-1 ~]# ceph -s + cluster: + id: 3fbbf51f-88fd-4883-9f24-595bf853c5f2 + health: HEALTH_OK + + services: + mon: 1 daemons, quorum ceph-1 + mgr: ceph-1(active) + osd: 3 osds: 3 up, 3 in + + data: + pools: 1 pools, 128 pgs + objects: 940 objects, 3.6 GiB + usage: 31 GiB used, 209 GiB / 240 GiB avail + pgs: 128 active+clean + +[root@ceph-1 ~]#ceph auth ls +client.libvirt + key: AQD/XwFkq7kHMhAA1OmPtKPVno6gjmZleOevOA== + caps: [mon] allow r + caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool + +[root@ceph-client ceph]# cat ceph.conf +[global] +fsid = 3fbbf51f-88fd-4883-9f24-595bf853c5f2 +mon_initial_members = ceph-1 +mon_host = 172.24.193.62 +auth_cluster_required = cephx +auth_service_required = cephx +auth_client_required = cephx + +osd_pool_default_size = 2 +[root@ceph-client ceph]# + +[root@ceph-client ceph]# virsh start c7_ceph +Domain c7_ceph started + +[root@ceph-client ceph]# +[root@ceph-client ceph]# virsh list --all + Id Name State +---------------------------------------------------- + 19 c7_ceph running + + + <emulator>/usr/local/qemu-3.0/bin/qemu-system-x86_64</emulator> + <disk type='network' device='disk'> + <driver name='qemu' type='raw' cache='writeback'/> + <auth username='libvirt'> + <secret type='ceph' uuid='fb57a2a3-8cdf-44cb-afc1-2d8bdc0fc5d0'/> + </auth> + <source protocol='rbd' name='libvirt-pool/root-vsys_c5.qcow2'> + <host name='172.24.193.62' port='6789'/> + <host name='172.24.193.63' port='6789'/> + <host name='172.24.193.64' port='6789'/> + </source> + <target dev='vda' bus='virtio'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> + </disk> + +======================== +[root@ceph-client ceph]# cat /run/libvirt/qemu/c7_ceph.xml + + +<domstatus state='running' reason='booted' pid='57437'> + <monitor path='/var/lib/libvirt/qemu/domain-19-c7_ceph/monitor.sock' json='1' type='unix'/> + <namespaces> + <mount/> + </namespaces> + <vcpus> + <vcpu id='0' pid='57487'/> + <vcpu id='1' pid='57488'/> + </vcpus> + <qemuCaps> + <flag name='kvm'/> + <flag name='no-hpet'/> + <flag name='spice'/> + <flag name='boot-index'/> + <flag name='hda-duplex'/> + <flag name='ccid-emulated'/> + <flag name='ccid-passthru'/> + <flag name='virtio-tx-alg'/> + <flag name='virtio-blk-pci.ioeventfd'/> + <flag name='sga'/> + <flag name='virtio-blk-pci.event_idx'/> + <flag name='virtio-net-pci.event_idx'/> + <flag name='piix3-usb-uhci'/> + <flag name='piix4-usb-uhci'/> + <flag name='usb-ehci'/> + <flag name='ich9-usb-ehci1'/> + <flag name='vt82c686b-usb-uhci'/> + <flag name='pci-ohci'/> + <flag name='usb-redir'/> + <flag name='usb-hub'/> + <flag name='ich9-ahci'/> + <flag name='no-acpi'/> + <flag name='virtio-blk-pci.scsi'/> + <flag name='scsi-disk.channel'/> + <flag name='scsi-block'/> + <flag name='transaction'/> + <flag name='block-job-async'/> + <flag name='scsi-cd'/> + <flag name='ide-cd'/> + <flag name='hda-micro'/> + <flag name='dump-guest-memory'/> + <flag name='nec-usb-xhci'/> + <flag name='balloon-event'/> + <flag 
name='lsi'/> + <flag name='virtio-scsi-pci'/> + <flag name='blockio'/> + <flag name='disable-s3'/> + <flag name='disable-s4'/> + <flag name='usb-redir.filter'/> + <flag name='ide-drive.wwn'/> + <flag name='scsi-disk.wwn'/> + <flag name='seccomp-sandbox'/> + <flag name='reboot-timeout'/> + <flag name='seamless-migration'/> + <flag name='block-commit'/> + <flag name='vnc'/> + <flag name='drive-mirror'/> + <flag name='usb-redir.bootindex'/> + <flag name='usb-host.bootindex'/> + <flag name='blockdev-snapshot-sync'/> + <flag name='qxl'/> + <flag name='VGA'/> + <flag name='cirrus-vga'/> + <flag name='vmware-svga'/> + <flag name='device-video-primary'/> + <flag name='usb-serial'/> + <flag name='usb-net'/> + <flag name='add-fd'/> + <flag name='nbd-server'/> + <flag name='virtio-rng'/> + <flag name='rng-random'/> + <flag name='rng-egd'/> + <flag name='megasas'/> + <flag name='tpm-passthrough'/> + <flag name='tpm-tis'/> + <flag name='pci-bridge'/> + <flag name='vfio-pci'/> + <flag name='vfio-pci.bootindex'/> + <flag name='scsi-generic'/> + <flag name='scsi-generic.bootindex'/> + <flag name='mem-merge'/> + <flag name='vnc-websocket'/> + <flag name='drive-discard'/> + <flag name='mlock'/> + <flag name='device-del-event'/> + <flag name='dmi-to-pci-bridge'/> + <flag name='i440fx-pci-hole64-size'/> + <flag name='q35-pci-hole64-size'/> + <flag name='usb-storage'/> + <flag name='usb-storage.removable'/> + <flag name='ich9-intel-hda'/> + <flag name='kvm-pit-lost-tick-policy'/> + <flag name='boot-strict'/> + <flag name='pvpanic'/> + <flag name='spice-file-xfer-disable'/> + <flag name='spiceport'/> + <flag name='usb-kbd'/> + <flag name='msg-timestamp'/> + <flag name='active-commit'/> + <flag name='change-backing-file'/> + <flag name='memory-backend-ram'/> + <flag name='numa'/> + <flag name='memory-backend-file'/> + <flag name='usb-audio'/> + <flag name='rtc-reset-reinjection'/> + <flag name='splash-timeout'/> + <flag name='iothread'/> + <flag name='migrate-rdma'/> + <flag name='ivshmem'/> + <flag name='drive-iotune-max'/> + <flag name='VGA.vgamem_mb'/> + <flag name='vmware-svga.vgamem_mb'/> + <flag name='qxl.vgamem_mb'/> + <flag name='pc-dimm'/> + <flag name='machine-vmport-opt'/> + <flag name='aes-key-wrap'/> + <flag name='dea-key-wrap'/> + <flag name='pci-serial'/> + <flag name='vhost-user-multiqueue'/> + <flag name='migration-event'/> + <flag name='ioh3420'/> + <flag name='x3130-upstream'/> + <flag name='xio3130-downstream'/> + <flag name='rtl8139'/> + <flag name='e1000'/> + <flag name='virtio-net'/> + <flag name='gic-version'/> + <flag name='incoming-defer'/> + <flag name='virtio-gpu'/> + <flag name='virtio-keyboard'/> + <flag name='virtio-mouse'/> + <flag name='virtio-tablet'/> + <flag name='virtio-input-host'/> + <flag name='chardev-file-append'/> + <flag name='ich9-disable-s3'/> + <flag name='ich9-disable-s4'/> + <flag name='vserport-change-event'/> + <flag name='virtio-balloon-pci.deflate-on-oom'/> + <flag name='mptsas1068'/> + <flag name='qxl.vram64_size_mb'/> + <flag name='chardev-logfile'/> + <flag name='debug-threads'/> + <flag name='secret'/> + <flag name='pxb'/> + <flag name='pxb-pcie'/> + <flag name='device-tray-moved-event'/> + <flag name='nec-usb-xhci-ports'/> + <flag name='virtio-scsi-pci.iothread'/> + <flag name='name-guest'/> + <flag name='qxl.max_outputs'/> + <flag name='spice-unix'/> + <flag name='drive-detect-zeroes'/> + <flag name='tls-creds-x509'/> + <flag name='intel-iommu'/> + <flag name='smm'/> + <flag name='virtio-pci-disable-legacy'/> + <flag name='query-hotpluggable-cpus'/> + 
<flag name='virtio-net.rx_queue_size'/> + <flag name='virtio-vga'/> + <flag name='drive-iotune-max-length'/> + <flag name='ivshmem-plain'/> + <flag name='ivshmem-doorbell'/> + <flag name='query-qmp-schema'/> + <flag name='gluster.debug_level'/> + <flag name='drive-iotune-group'/> + <flag name='query-cpu-model-expansion'/> + <flag name='virtio-net.host_mtu'/> + <flag name='nvdimm'/> + <flag name='pcie-root-port'/> + <flag name='query-cpu-definitions'/> + <flag name='block-write-threshold'/> + <flag name='query-named-block-nodes'/> + <flag name='cpu-cache'/> + <flag name='qemu-xhci'/> + <flag name='kernel-irqchip'/> + <flag name='kernel-irqchip.split'/> + <flag name='intel-iommu.intremap'/> + <flag name='intel-iommu.caching-mode'/> + <flag name='intel-iommu.eim'/> + <flag name='intel-iommu.device-iotlb'/> + <flag name='virtio.iommu_platform'/> + <flag name='virtio.ats'/> + <flag name='loadparm'/> + <flag name='vnc-multi-servers'/> + <flag name='virtio-net.tx_queue_size'/> + <flag name='chardev-reconnect'/> + <flag name='virtio-gpu.max_outputs'/> + <flag name='vxhs'/> + <flag name='virtio-blk.num-queues'/> + <flag name='vmcoreinfo'/> + <flag name='numa.dist'/> + <flag name='disk-share-rw'/> + <flag name='iscsi.password-secret'/> + <flag name='isa-serial'/> + <flag name='dump-completed'/> + <flag name='qcow2-luks'/> + <flag name='pcie-pci-bridge'/> + <flag name='seccomp-blacklist'/> + <flag name='query-cpus-fast'/> + <flag name='disk-write-cache'/> + <flag name='nbd-tls'/> + <flag name='tpm-crb'/> + <flag name='pr-manager-helper'/> + <flag name='qom-list-properties'/> + <flag name='memory-backend-file.discard-data'/> + <flag name='sdl-gl'/> + <flag name='screendump_device'/> + <flag name='hda-output'/> + <flag name='blockdev-del'/> + <flag name='vmgenid'/> + <flag name='vhost-vsock'/> + <flag name='chardev-fd-pass'/> + <flag name='tpm-emulator'/> + <flag name='mch'/> + <flag name='mch.extended-tseg-mbytes'/> + <flag name='usb-storage.werror'/> + <flag name='egl-headless'/> + <flag name='vfio-pci.display'/> + </qemuCaps> + <devices> + <device alias='rng0'/> + <device alias='virtio-disk0'/> + <device alias='virtio-serial0'/> + <device alias='video0'/> + <device alias='serial0'/> + <device alias='balloon0'/> + <device alias='channel0'/> + <device alias='net0'/> + <device alias='input0'/> + <device alias='scsi0'/> + <device alias='usb'/> + </devices> + <libDir path='/var/lib/libvirt/qemu/domain-19-c7_ceph'/> + <channelTargetDir path='/var/lib/libvirt/qemu/channel/target/domain-19-c7_ceph'/> + <cpu mode='custom' match='exact' check='partial'> + <model fallback='forbid'>Broadwell</model> + </cpu> + <chardevStdioLogd/> + <allowReboot value='yes'/> + <blockjobs active='no'/> + <domain type='kvm' id='19'> + <name>c7_ceph</name> + <uuid>ff08671e-824c-4939-80ec-602235c0662e</uuid> + <memory unit='KiB'>4194304</memory> + <currentMemory unit='KiB'>4194304</currentMemory> + <vcpu placement='static'>2</vcpu> + <resource> + <partition>/machine</partition> + </resource> + <os> + <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type> + <boot dev='hd'/> + </os> + <features> + <acpi/> + <apic/> + </features> + <cpu mode='custom' match='exact' check='full'> + <model fallback='forbid'>Broadwell</model> + <feature policy='require' name='vme'/> + <feature policy='require' name='f16c'/> + <feature policy='require' name='rdrand'/> + <feature policy='require' name='hypervisor'/> + <feature policy='require' name='arat'/> + <feature policy='disable' name='erms'/> + <feature policy='require' name='xsaveopt'/> + <feature 
policy='require' name='abm'/> + </cpu> + <clock offset='utc'> + <timer name='rtc' tickpolicy='catchup'/> + <timer name='pit' tickpolicy='delay'/> + <timer name='hpet' present='no'/> + </clock> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <pm> + <suspend-to-mem enabled='no'/> + <suspend-to-disk enabled='no'/> + </pm> + <devices> + <emulator>/usr/local/qemu-3.0/bin/qemu-system-x86_64</emulator> + <disk type='network' device='disk'> + <driver name='qemu' type='raw' cache='writeback'/> + <auth username='libvirt'> + <secret type='ceph' uuid='fb57a2a3-8cdf-44cb-afc1-2d8bdc0fc5d0'/> + </auth> + <source protocol='rbd' name='libvirt-pool/root-vsys_c5.qcow2' tlsFromConfig='0'> + <host name='172.24.193.62' port='6789'/> + <host name='172.24.193.63' port='6789'/> + <host name='172.24.193.64' port='6789'/> + <privateData> + <objects> + <secret type='auth' alias='virtio-disk0-secret0'/> + </objects> + </privateData> + </source> + <target dev='vda' bus='virtio'/> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> + </disk> + <controller type='usb' index='0' model='ich9-ehci1'> + <alias name='usb'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci1'> + <alias name='usb'/> + <master startport='0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci2'> + <alias name='usb'/> + <master startport='2'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci3'> + <alias name='usb'/> + <master startport='4'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/> + </controller> + <controller type='pci' index='0' model='pci-root'> + <alias name='pci.0'/> + </controller> + <controller type='virtio-serial' index='0'> + <alias name='virtio-serial0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> + </controller> + <controller type='scsi' index='0' model='lsilogic'> + <alias name='scsi0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> + </controller> + <controller type='ide' index='0'> + <alias name='ide'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> + </controller> + <interface type='bridge'> + <mac address='52:54:00:2e:e1:1f'/> + <source bridge='virbr0'/> + <target dev='vnet0'/> + <model type='virtio'/> + <alias name='net0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <serial type='pty'> + <source path='/dev/pts/2'/> + <target type='isa-serial' port='0'> + <model name='isa-serial'/> + </target> + <alias name='serial0'/> + </serial> + <console type='pty' tty='/dev/pts/2'> + <source path='/dev/pts/2'/> + <target type='serial' port='0'/> + <alias name='serial0'/> + </console> + <channel type='unix'> + <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-19-c7_ceph/org.qemu.guest_agent.0'/> + <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/> + <alias name='channel0'/> + <address type='virtio-serial' controller='0' bus='0' port='1'/> + </channel> + <input type='tablet' bus='usb'> + <alias name='input0'/> + <address type='usb' bus='0' port='1'/> + </input> + <input type='mouse' bus='ps2'> + <alias 
name='input1'/> + </input> + <input type='keyboard' bus='ps2'> + <alias name='input2'/> + </input> + <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'> + <listen type='address' address='0.0.0.0' fromConfig='0' autoGenerated='no'/> + </graphics> + <video> + <model type='cirrus' vram='16384' heads='1' primary='yes'/> + <alias name='video0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + <memballoon model='virtio'> + <alias name='balloon0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> + </memballoon> + <rng model='virtio'> + <backend model='random'>/dev/urandom</backend> + <alias name='rng0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> + </rng> + </devices> + <seclabel type='dynamic' model='selinux' relabel='yes'> + <label>system_u:system_r:svirt_t:s0:c99,c659</label> + <imagelabel>system_u:object_r:svirt_image_t:s0:c99,c659</imagelabel> + </seclabel> + <seclabel type='dynamic' model='dac' relabel='yes'> + <label>+107:+107</label> + <imagelabel>+107:+107</imagelabel> + </seclabel> + </domain> +</domstatus> +[root@ceph-client ceph]# + +/usr/local/qemu-3.0/bin/qemu-system-x86_64 which is build by qemu-3.0 source code , first i build qemu-3.0 source with --enable-rbd , +later i rebuild qemu-3.0 source with more config paramter from centos7-2009 qemu, those config paramter from qemu-kvm-1.5.3-175.el7.src.rpm ,which has those paramters: +# QEMU configure log Fri Mar 3 18:22:31 CST 2023 +# Configured with: './configure' '--prefix=/usr' '--libdir=/usr/lib64' '--sysconfdir=/etc' '--interp-prefix=/usr/qemu-%M' '--audio-drv-list=pa,alsa' '--with-confsuffix=/qemu-kvm' '--localstatedir=/var' '--libexecdir=/usr/libexec' '--wit +h-pkgversion=qemu-kvm-1.5.3-175.el7' '--disable-strip' '--disable-qom-cast-debug' '--extra-ldflags=-Wl,--build-id -pie -Wl,-z,relro -Wl,-z,now' '--extra-cflags=-O2 -g -pipe -Wall -fexceptions -fstack-protector-strong --param=ssp-buffer +-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIE -DPIE' '--enable-trace-backend=dtrace' '--enable-werror' '--disable-xen' '--disable-virtfs' '--enable-kvm' '--enable-libusb' '--enable-spice' '--enable-seccomp' '--disable-fdt' '-- +enable-docs' '--disable-sdl' '--disable-debug-tcg' '--disable-sparse' '--disable-brlapi' '--disable-bluez' '--disable-vde' '--disable-curses' '--enable-curl' '--enable-libssh2' '--enable-vnc-tls' '--enable-vnc-sasl' '--enable-linux-aio' + '--enable-smartcard-nss' '--enable-lzo' '--enable-snappy' '--enable-usb-redir' '--enable-vnc-png' '--disable-vnc-jpeg' '--enable-vnc-ws' '--enable-uuid' '--disable-vhost-scsi' '--disable-guest-agent' '--disable-live-block-ops' '--disab +le-live-block-migration' '--enable-rbd' '--enable-glusterfs' '--enable-tcmalloc' '--block-drv-rw-whitelist=qcow2,raw,file,host_device,blkdebug,nbd,iscsi,gluster,rbd' '--block-drv-ro-whitelist=vmdk,vhdx,vpc,ssh,https' '--iasl=/bin/false' + '--target-list=x86_64-softmmu' + + +, after rebuild the qemu-system-x86_64 : + +virsh start c7_ceph +[root@ceph-client ceph]# virsh list --all + Id Name State +---------------------------------------------------- + 19 c7_ceph running + +qemu still stuck at booting from hard disk... + + + +to my surprised if the libvirtd xml file if i replace /usr/local/qemu-3.0/bin/qemu-system-x86_64 with /usr/libexec/bin/qemu-kvm , then the vm +can start successfully . 
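A quick way to narrow this down (not part of the original report) is to confirm that the rebuilt binary really gained rbd support and that the image stored in the pool is readable outside the guest. The sketch below reuses the binary path, pool and image name from the report; the exact rbd option spelling varies between QEMU versions, so treat it as an outline rather than verified commands.

```
# Does the custom build link librbd at all?
ldd /usr/local/qemu-3.0/bin/qemu-system-x86_64 | grep -i rbd

# Ask the binary which block formats it was built with; 'rbd' should be listed
/usr/local/qemu-3.0/bin/qemu-system-x86_64 -drive format=help | grep -i rbd

# Read the image header directly over rbd, bypassing libvirt and the guest
qemu-img info rbd:libvirt-pool/root-vsys_c5.qcow2:id=libvirt:conf=/etc/ceph/ceph.conf
```

Because the domain XML attaches the disk with type='raw' while the object name ends in .qcow2, the qemu-img output also tells you whether the data in the pool is really raw; a qcow2 image imported verbatim but presented to the guest as raw could by itself leave the boot stuck at "booting from hard disk".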
diff --git a/results/classifier/gemma3:12b/kvm/1534382 b/results/classifier/gemma3:12b/kvm/1534382 new file mode 100644 index 00000000..c532bdb7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1534382 @@ -0,0 +1,21 @@ + +loadvm makes Windows 7 x86 guest crash with some CPUs + +Running qemu with kvm enabled and -cpu set to some of the more "modern" CPUs, +and having Windows 7 x86 as the guest. + +After guest OS loads, start some app (I started "cmd"), then do "savevm". +After that, do some more activity (I closed cmd window and opened IE), +then do "loadvm" of the previously saved snapshot. + +loadvm shows briefly the state that the system was in at the snapshot time, +then guest OS crashes (blue screen). + +Originally I saw this problem on qemu 1.4.0, +then I also tried qemu 2.5.0 and found the same problem. + +The CPUs that I tried were mostly those that support NX bit (core2duo, +qemu64, kvm64, Nehalem, etc.) + +If I use the default CPU, or some other like qemu32/kvm32, +the problem does not occur. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1541643 b/results/classifier/gemma3:12b/kvm/1541643 new file mode 100644 index 00000000..5ebd1bda --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1541643 @@ -0,0 +1,8 @@ + +IA32_FEATURE_CONTROL MSR unset for nested virtualization + +I enabled nested virtualization for the kvm_intel module, and passed -enable-kvm and -cpu host to qemu. However, the qemu BIOS did not set IA32_FEATURE_CONTROL MSR (index 0x3a) to a non-zero value allow VMXON. According to the Intel manual Section 23.7 ENABLING AND ENTERING VMX OPERATION: "To enable VMX support in a platform, BIOS must set bit 1, bit 2, or both (see below), as well as the lock bit." + +I noticed an old mailing list thread on this (https://lists.nongnu.org/archive/html/qemu-devel/2015-01/msg01372.html), but I wanted to point out that the Intel manual (and all the physical hardware I've tested) specifically contradicts this response. + +Tested on kernel 4.3.3 and qemu 2.4.1. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1545 b/results/classifier/gemma3:12b/kvm/1545 new file mode 100644 index 00000000..8cf223cf --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1545 @@ -0,0 +1,10 @@ + +SSL is out of date on website +Description of problem: +The Linux KVM website is running an out of date SSL certificate. +Steps to reproduce: +1. visit the website. https://www.linux-kvm.org/page/Main_Page +2. +3. +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/1557057 b/results/classifier/gemma3:12b/kvm/1557057 new file mode 100644 index 00000000..46af0fa7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1557057 @@ -0,0 +1,73 @@ + +Windows 10 guest under qemu cannot wake up from S3 using rtc wake with -no_hpet + +Problem : Windows 10 guest cannot wake up from S3 using rtc wake + +Steps to reproduce. + +1. Boot Windows 10 Guest VM. +2. Create scheduled task (using Task Scheduler) to +5 minutes time from current time to run notepad and enabling "Wake the computer to run this task" option +3. Click Start->Power ->Sleep +4. Guest VM enters suspend mode( screen is black) +5. Wait 10 minutes - nothing happens +6. Press key in spicy window +7. VM resumes + +Expected behavior - VM should wake after 5 minutes in step 5. 
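One hedged debugging aid that is not part of the original report: the QEMU configuration further down keeps `-monitor stdio`, so the suspend state can be inspected and a wake forced from the monitor, which helps separate "the RTC alarm never fires" from "the guest never armed a wake timer".

```
(qemu) info status        # should show the suspended run state while the guest is in S3
(qemu) system_wakeup      # injects a wake-up event; the guest should resume as in steps 6-7
```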
+ +More information: +#uname -a +Linux vm-host 4.4.3-300.fc23.x86_64 #1 SMP Fri Feb 26 18:45:40 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux + +# /usr/local/bin/qemu-system-x86_64 --version +QEMU emulator version 2.5.50, Copyright (c) 2003-2008 Fabrice Bellard + + +-----------------QEMU guest config--------------------- +OPTS="$OPTS -enable-kvm " +OPTS="$OPTS -name win10_35" +#OPTS="$OPTS -bios seabios/out/bios.bin" +OPTS="$OPTS -machine pc-q35-2.4,accel=kvm,usb=off,vmport=off" +OPTS="$OPTS -cpu Broadwell,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff" +OPTS="$OPTS -m 4096" +OPTS="$OPTS -realtime mlock=off" +OPTS="$OPTS -smp 2,sockets=2,cores=1,threads=1" +OPTS="$OPTS -uuid e09cbfe5-9016-40b0-a027-62e0d2ef0ba1" +OPTS="$OPTS -no-user-config" +OPTS="$OPTS -nodefaults " +OPTS="$OPTS -rtc base=localtime,driftfix=slew" +OPTS="$OPTS -global kvm-pit.lost_tick_policy=discard" +OPTS="$OPTS -no-hpet" +OPTS="$OPTS -no-shutdown" +OPTS="$OPTS -global ICH9-LPC.disable_s3=0" +OPTS="$OPTS -global ICH9-LPC.disable_s4=0" +OPTS="$OPTS -boot order=c,menu=on,strict=on" +OPTS="$OPTS -device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e" +OPTS="$OPTS -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x1" +OPTS="$OPTS -device ich9-usb-ehci1,id=usb,bus=pci.2,addr=0x3.0x7" +OPTS="$OPTS -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.2,multifunction=on,addr=0x3" +OPTS="$OPTS -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.2,addr=0x3.0x1" +OPTS="$OPTS -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.2,addr=0x3.0x2" +OPTS="$OPTS -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x4" +OPTS="$OPTS -drive file=/var/lib/images/win10-run2.qcow2,format=qcow2,if=none,id=drive-sata0-0-0,cache=none" +OPTS="$OPTS -device ide-hd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0" +OPTS="$OPTS -drive file=/var/lib/images/diskd.vhd,format=vpc,if=none,id=drive-sata0-0-1" +OPTS="$OPTS -device ide-hd,bus=ide.1,drive=drive-sata0-0-1,id=sata0-0-1" +OPTS="$OPTS -drive file=virtio-win.iso,format=raw,if=none,media=cdrom,id=drive-sata0-0-2,readonly=on" +OPTS="$OPTS -device ide-cd,bus=ide.2,drive=drive-sata0-0-2,id=sata0-0-2 " +OPTS="$OPTS -chardev pty,id=charserial0" +OPTS="$OPTS -device isa-serial,chardev=charserial0,id=serial0" +OPTS="$OPTS -chardev spicevmc,id=charchannel0,name=vdagent" +OPTS="$OPTS -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0" +OPTS="$OPTS -device usb-tablet,id=input0" +OPTS="$OPTS -spice port=5901,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on" +OPTS="$OPTS -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pcie.0,addr=0x1" +OPTS="$OPTS -device intel-hda,id=sound0,bus=pci.2,addr=0x2" +OPTS="$OPTS -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0" +OPTS="$OPTS -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x5" +OPTS="$OPTS -msg timestamp=on" +OPTS="$OPTS -monitor stdio" +#OPTS="$OPTS -qmp stdio" +#OPTS="$OPTS -chardev stdio,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios" + +/usr/local/bin/qemu-system-x86_64 $OPTS \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1559 b/results/classifier/gemma3:12b/kvm/1559 new file mode 100644 index 00000000..14599f74 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1559 @@ -0,0 +1,8 @@ + +7.2 (regression?): ppc64 KVM-HV hangs during boot +Description of problem: +qemu 7.2.0 hangs at " * Mounting ZFS filesystem(s) ..." whereas 7.1.0 would fully boot. 
+ +Without -smp, sometimes gets further and hangs later on at " * Seeding random number generator ..." +Additional information: +7.1.0 used to work before upgrading to 7.2.0, but would hang randomly after booting (usually during my benchmark). Not sure if related. Unfortunately, after downgrading back to 7.1.0, it also now hangs the same way as 7.2.0 does. diff --git a/results/classifier/gemma3:12b/kvm/1562 b/results/classifier/gemma3:12b/kvm/1562 new file mode 100644 index 00000000..891d90c3 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1562 @@ -0,0 +1,130 @@ + +qemu live migration with compression ( zstd or zlib ) in same server always(100% reproduce) failed (recevied ram page flag 0x0) +Description of problem: + +Steps to reproduce: +1. live migration with compress mode in same server +2. src: qemu-system-x86_64 -cpu Cascadelake-Server-v4 -smp 10 -enable-kvm -m 50G -nographic -serial telnet:localhost:4321,server,nowait -nic tap,ifname=tap0,script=no,downscript=no CentOS-Stream-GenericCloud-9-20230123.0.x86_64_test_0.qcow2 + +``` + QEMU 7.2.91 monitor - type 'help' for more information +(qemu) migrate_set_capability compress on +(qemu) migrate_set_parameter multifd-compression zstd +(qemu) info migrate_capabilities +xbzrle: off +rdma-pin-all: off +auto-converge: off +zero-blocks: off +compress: on +events: off +postcopy-ram: off +x-colo: off +release-ram: off +block: off +return-path: off +pause-before-switchover: off +multifd: off +dirty-bitmaps: off +postcopy-blocktime: off +late-block-activate: off +x-ignore-shared: off +validate-uuid: off +background-snapshot: off +zero-copy-send: off +postcopy-preempt: off +(qemu) info migrate_parameters +announce-initial: 50 ms +announce-max: 550 ms +announce-rounds: 5 +announce-step: 100 ms +compress-level: 1 +compress-threads: 8 +compress-wait-thread: on +decompress-threads: 2 +throttle-trigger-threshold: 50 +cpu-throttle-initial: 20 +cpu-throttle-increment: 10 +cpu-throttle-tailslow: off +max-cpu-throttle: 99 +tls-creds: '' +tls-hostname: '' +max-bandwidth: 134217728 bytes/second +downtime-limit: 300 ms +x-checkpoint-delay: 20000 ms +block-incremental: off +multifd-channels: 2 +multifd-compression: zstd +xbzrle-cache-size: 67108864 bytes +max-postcopy-bandwidth: 0 +tls-authz: '' +(qemu) migrate -d tcp:localhost:4444 +(qemu) qemu-system-x86_64: failed to save SaveStateEntry with id(name): 2(ram): -5 +qemu-system-x86_64: Unable to write to socket: Connection reset by peer +``` + +3.dest(in same server): qemu-system-x86_64 -cpu Cascadelake-Server-v4 -smp 10 -enable-kvm -m 50G -nographic -serial telnet:localhost:4322,server,nowait -nic tap,ifname=tap1,script=no,downscript=no --incoming tcp:0:4444 CentOS-Stream-GenericCloud-9-20230123.0.x86_64_test_0.qcow2 + +``` + QEMU 7.2.91 monitor - type 'help' for more information +(qemu) migrate_set_capability compress on +(qemu) migrate_set_parameter multifd-compression zstd +(qemu) info mi +mice migrate migrate_capabilities +migrate_parameters +(qemu) info migrate_capabilities +xbzrle: off +rdma-pin-all: off +auto-converge: off +zero-blocks: off +compress: on +events: off +postcopy-ram: off +x-colo: off +release-ram: off +block: off +return-path: off +pause-before-switchover: off +multifd: off +dirty-bitmaps: off +postcopy-blocktime: off +late-block-activate: off +x-ignore-shared: off +validate-uuid: off +background-snapshot: off +zero-copy-send: off +postcopy-preempt: off +(qemu) info migr +migrate migrate_capabilities migrate_parameters +(qemu) info migrate_parameters +announce-initial: 50 ms 
+announce-max: 550 ms +announce-rounds: 5 +announce-step: 100 ms +compress-level: 1 +compress-threads: 8 +compress-wait-thread: on +decompress-threads: 2 +throttle-trigger-threshold: 50 +cpu-throttle-initial: 20 +cpu-throttle-increment: 10 +cpu-throttle-tailslow: off +max-cpu-throttle: 99 +tls-creds: '' +tls-hostname: '' +max-bandwidth: 134217728 bytes/second +downtime-limit: 300 ms +x-checkpoint-delay: 20000 ms +block-incremental: off +multifd-channels: 2 +multifd-compression: zstd +xbzrle-cache-size: 67108864 bytes +max-postcopy-bandwidth: 0 +tls-authz: '' +(qemu) info migrate_capabilitiesqemu-system-x86_64: Unknown combination of migration flags: 0x0 +qemu-system-x86_64: decompress data failed +qemu-system-x86_64: error while loading state section id 2(ram) +qemu-system-x86_64: load of migration failed: Operation not permitted +``` +Additional information: +$ zstd -V +*** zstd command line interface 64-bits v1.5.1, by Yann Collet *** diff --git a/results/classifier/gemma3:12b/kvm/1562653 b/results/classifier/gemma3:12b/kvm/1562653 new file mode 100644 index 00000000..9cb1688c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1562653 @@ -0,0 +1,81 @@ + +Ubuntu 15.10: QEMU VM hang if memory >= 1T + +1. Ubuntu 15.10 x86_64 installed on HP SuperDome X with 8CPUs and 4T memory. + +2. Create a VM, install Ubuntu 15.10, if memory >= 1T , VM hang when start. If memory < 1T, it is good. +<domain type='kvm'> + <name>u1510-1</name> + <uuid>39eefe1e-4829-4843-b892-026d143f3ec7</uuid> + <memory unit='KiB'>1073741824</memory> + <currentMemory unit='KiB'>1073741824</currentMemory> + <vcpu placement='static'>16</vcpu> + <os> + <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type> + <boot dev='hd'/> + <boot dev='cdrom'/> + </os> + <features> + <acpi/> + <apic/> + <pae/> + </features> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <devices> + <emulator>/usr/bin/kvm</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2' cache='directsync'/> + <source file='/vms/images/u1510-1.img'/> + <target dev='vda' bus='virtio'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> + </disk> + <disk type='file' device='cdrom'> + <driver name='qemu' type='raw'/> + <target dev='hdc' bus='ide'/> + <readonly/> + <address type='drive' controller='0' bus='1' target='0' unit='0'/> + </disk> + <controller type='pci' index='0' model='pci-root'/> + <controller type='ide' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> + </controller> + <controller type='usb' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> + </controller> + <interface type='bridge'> + <mac address='0c:da:41:1d:ae:f1'/> + <source bridge='vswitch0'/> + <model type='virtio'/> + <driver name='vhost'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <input type='mouse' bus='ps2'/> + <input type='keyboard' bus='ps2'/> + <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'> + <listen type='address' address='0.0.0.0'/> + </graphics> + <video> + <model type='cirrus' vram='16384' heads='1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + <memballoon model='virtio'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> + </memballoon> + </devices> +</domain> + +3. The panic stack is + ... 
cannot show + async_page_fault+0x28 + ioread32_rep+0x38 + ata_sff_data_xfer32+0x8a + ata_pio_sector+0x93 + ata_pio_sectors+0x34 + ata_sff_hsm_move+0x226 + RIP: kthread_data+0x10 + CR2: FFFFFFFF_FFFFFFD8 + +4. Change the host os to Redhat 7.2 , the vm is good even memory >=3.8T. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1563152 b/results/classifier/gemma3:12b/kvm/1563152 new file mode 100644 index 00000000..82351809 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1563152 @@ -0,0 +1,50 @@ + +general protection fault running VirtualBox in KVM guest + +I'm trying to run nested VMs using qemu-kvm on the physical host and VirtualBox on the guest host: + * physical host: Ubuntu 14.04 running Linux 4.2.0, qemu-kvm 2.0.0 + * guest host: Ubuntu 16.04 beta 2 running Linux 4.2.0, VirtualBox 5.0.16 + +When I try to start up a VirtualBox VM in the guest host, I get a general protection fault (see below for dmesg output). According to https://www.virtualbox.org/ticket/14965 this is caused by a bug in QEMU/KVM: + + The problem in more detail: As written above, VirtualBox tries to + read the MSR 0x9B (IA32_SMM_MONITOR_CTL). This is an + architectural MSR which is present if CPUID.01 / ECX bit 5 or bit + 6 are set (VMX or SMX). As KVM has nested virtualization enabled + and therefore pretends to support VT-x, this MSR must be + accessible and reading from this MSR must not raise a + #GP. KVM/QEmu does not behave like real hardware in this case. + +dmesg output: + +SUPR0GipMap: fGetGipCpu=0x3 +general protection fault: 0000 [#1] SMP +Modules linked in: pci_stub vboxpci(OE) vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack xt_tcpudp bridge stp llc iptable_filter ip_tables x_tables ppdev kvm_intel kvm irqbypass snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer i2c_piix4 snd input_leds soundcore joydev 8250_fintek mac_hid serio_raw pvpanic parport_pc parport ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi autofs4 btrfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul qxl ttm drm_kms_helper syscopyarea sysfillrect aesni_intel sysimgblt fb_sys_fops aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd psmouse floppy drm pata_acpi +CPU: 0 PID: 31507 Comm: EMT Tainted: G OE 4.4.0-15-generic #31-Ubuntu +Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 +task: ffff880034c0a580 ti: ffff880002e00000 task.ti: ffff880002e00000 +RIP: 0010:[<ffffffffc067e506>] [<ffffffffc067e506>] 0xffffffffc067e506 +RSP: 0018:ffff880002e03d70 EFLAGS: 00010206 +RAX: 00000000000006f0 RBX: 00000000ffffffdb RCX: 000000000000009b +RDX: 0000000000000000 RSI: ffff880002e03d00 RDI: ffff880002e03cc8 +RBP: ffff880002e03d90 R08: 0000000000000004 R09: 00000000000006f0 +R10: 0000000049656e69 R11: 000000000f8bfbff R12: 0000000000000020 +R13: 0000000000000000 R14: ffffc9000057407c R15: ffffffffc0645260 +FS: 00007f89b8f6b700(0000) GS:ffff88007fc00000(0000) knlGS:0000000000000000 +CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +CR2: 00007f89b8d10000 CR3: 0000000035ae1000 CR4: 00000000000006f0 +Stack: + 0000000000000000 ffffffff00000000 0000000000000000 0000000000000000 + ffff880002e03db0 ffffffffc0693e93 ffffc90000574010 ffff880035aae550 + 
ffff880002e03e30 ffffffffc060a3e7 ffff880002e03e10 0000000000000282 +Call Trace: + [<ffffffffc060a3e7>] ? supdrvIOCtl+0x2de7/0x3250 [vboxdrv] + [<ffffffffc06035b0>] ? VBoxDrvLinuxIOCtl_5_0_16+0x150/0x250 [vboxdrv] + [<ffffffff8121e7df>] ? do_vfs_ioctl+0x29f/0x490 + [<ffffffff8106a554>] ? __do_page_fault+0x1b4/0x400 + [<ffffffff8121ea49>] ? SyS_ioctl+0x79/0x90 + [<ffffffff81821ff2>] ? entry_SYSCALL_64_fastpath+0x16/0x71 +Code: 88 e4 fc ff ff b9 3a 00 00 00 0f 32 48 c1 e2 20 89 c0 48 09 d0 48 89 05 f9 db 0e 00 0f 20 e0 b9 9b 00 00 00 48 89 05 d2 db 0e 00 <0f> 32 48 c1 e2 20 89 c0 b9 80 00 00 c0 48 09 d0 48 89 05 cb db +RIP [<ffffffffc067e506>] 0xffffffffc067e506 + RSP <ffff880002e03d70> +---[ end trace b3284b6520f49e0d ]--- \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1565 b/results/classifier/gemma3:12b/kvm/1565 new file mode 100644 index 00000000..71d7aa9c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1565 @@ -0,0 +1,35 @@ + +s390x TCG migration failure +Description of problem: +We're seeing failures running s390x migration kvm-unit-tests tests with TCG. + +Some initial findings: + +What seems to be happening is that after migration a control block header accessed by the test code is all zeros which causes an unexpected exception. + +I did a bisection which points to c8df4a7aef ("migration: Split save_live_pending() into state_pending_*") as the culprit. +The migration issue persists after applying the fix e264705012 ("migration: I messed state_pending_exact/estimate") on top of c8df4a7aef. + +Applying + +``` +diff --git a/migration/ram.c b/migration/ram.c +index 56ff9cd29d..2dc546cf28 100644 +--- a/migration/ram.c ++++ b/migration/ram.c +@@ -3437,7 +3437,7 @@ static void ram_state_pending_exact(void *opaque, uint64_t max_size, + + uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE; + +- if (!migration_in_postcopy()) { ++ if (!migration_in_postcopy() && remaining_size < max_size) { + qemu_mutex_lock_iothread(); + WITH_RCU_READ_LOCK_GUARD() { + migration_bitmap_sync_precopy(rs); +``` +on top fixes or hides the issue. (The comparison was removed by c8df4a7aef.) + +I arrived at this by experimentation, I haven't looked into why this makes a difference. +Steps to reproduce: +1. Run ACCEL=tcg ./run_tests.sh migration-skey-sequential with current QEMU master +2. Repeat until the test fails (doesn't happen every time, but still easy to reproduce) diff --git a/results/classifier/gemma3:12b/kvm/1569053 b/results/classifier/gemma3:12b/kvm/1569053 new file mode 100644 index 00000000..5f81fd41 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1569053 @@ -0,0 +1,39 @@ + +Qemu crashes when I start a second VM from command line + +I am using Qemu on 64 bit x86 platform, operating system ubuntu 14.04. I cloned the latest version of qemu from github and installed on my system. + +I booted the 1st VM with the instruction: + +sudo qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /var/lib/libvirt/images/vm1p4.img -boot c -enable-kvm -no-reboot -net none -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc + +It is was launched successfully. 
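One detail worth checking in this setup (an addition, not from the original report): both guests back their 1024M of RAM with memory-backend-file on /dev/hugepages and -mem-prealloc, so the second launch only works if the hugepage pool still has room. A rough sketch, assuming the default 2M hugepage size:

```
grep Huge /proc/meminfo    # HugePages_Free must still cover another 1024M (512 x 2M pages)
```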
+Then I lanched the second VM with the instruction: + +sudo qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /var/lib/libvirt/images/vm1p4-2.img -boot c -enable-kvm -no-reboot -net none -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user2 -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2 -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc + +Qemu crashed right away. Plea see the log below. + + + +sudo qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /var/lib/libvirt/images/vm1p4-2.img -boot c -enable-kvm -no-reboot -net none -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user2 -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2 -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc +KVM internal error. Suberror: 1 +emulation failure +EAX=000cc765 EBX=00000007 ECX=000cc6ac EDX=0000df00 +ESI=1ff00000 EDI=0000d5d7 EBP=ffffffff ESP=0000f9ce +EIP=d5d70000 EFL=00010012 [----A--] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =df00 000df000 ffffffff 00809300 +CS =f000 000f0000 ffffffff 00809b00 +SS =df00 000df000 ffffffff 00809300 +DS =df00 000df000 ffffffff 00809300 +FS =0000 00000000 ffffffff 00809300 +GS =0000 00000000 ffffffff 00809300 +LDT=0000 00000000 0000ffff 00008200 +TR =0000 00000000 0000ffff 00008b00 +GDT= 00000000 00000000 +IDT= 00000000 000003ff +CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1574 b/results/classifier/gemma3:12b/kvm/1574 new file mode 100644 index 00000000..160e4910 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1574 @@ -0,0 +1,90 @@ + +The guest paused after living migration on destination host, vm-entry error code 0x80000021 +Description of problem: +The guest start on source host, then living migration to destination host, the guest status is pausing. +source host CPU: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz +destination host CPU: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz +If the guest migration from E5-2650 to Silver 4114, the guest runs normally without pausing. +Steps to reproduce: +1. start guest, on source host, host CPU: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz. +2. living migration guest to destination host, host CPU: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz. +3. migration finished, the guest pausing. +Additional information: +/label ~"kind::Bug" +qemu log: +``` +KVM: entry failed, hardware error 0x80000021 + +If you're running a guest on an Intel machine without unrestricted mode +support, the failure can be most likely due to the guest entering an invalid +state for Intel VT. For example, the guest maybe running in big real mode +which is not supported on less recent Intel processors. 
+ +EAX=94d14da0 EBX=95341e20 ECX=00000000 EDX=00000000 +ESI=00000000 EDI=00000046 EBP=95203eb0 ESP=95203eb0 +EIP=94d14f76 EFL=00000286 [--S--P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 00000000 0000ffff 00009300 +CS =f000 ffff0000 0000ffff 00009b00 +SS =0000 00000000 0000ffff 00009300 +DS =0000 00000000 0000ffff 00009300 +FS =0000 00000000 0000ffff 00009300 +GS =0000 00000000 0000ffff 00009300 +LDT=0000 00000000 0000ffff 00008200 +TR =0000 00000000 0000ffff 00008b00 +GDT= 00000000 0000ffff +IDT= 00000000 0000ffff +CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 +``` +host log: +``` +kernel: [228693.951391] *** Guest State *** +kernel: [228693.951411] CR0: actual=0x0000000000000030, shadow=0x0000000060000010, gh_mask=fffffffffffffff7 +kernel: [228693.951422] CR4: actual=0x0000000000002040, shadow=0x0000000000000000, gh_mask=fffffffffffff871 +kernel: [228693.951430] CR3 = 0x0000000000000000 +kernel: [228693.951437] PDPTR0 = 0x0000000000000000 PDPTR1 = 0x0000000000000000 +kernel: [228693.951445] PDPTR2 = 0x0000000000000000 PDPTR3 = 0x0000000000000000 +kernel: [228693.951452] RSP = 0xffffffff95203eb0 RIP = 0xffffffff94d14f76 +kernel: [228693.951459] RFLAGS=0x00000286 DR7 = 0x0000000000000400 +kernel: [228693.951467] Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000 +kernel: [228693.951476] CS: sel=0xf000, attr=0x0009b, limit=0x0000ffff, base=0x00000000ffff0000 +kernel: [228693.951485] DS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951494] SS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951502] ES: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951510] FS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951519] GS: sel=0x0000, attr=0x00093, limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951527] GDTR: limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951537] LDTR: sel=0x0000, attr=0x00082, limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951545] IDTR: limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951553] TR: sel=0x0000, attr=0x0008b, limit=0x0000ffff, base=0x0000000000000000 +kernel: [228693.951562] EFER = 0x0000000000000000 PAT = 0x0007040600070406 +kernel: [228693.951569] DebugCtl = 0x0000000000000000 DebugExceptions = 0x0000000000000000 +kernel: [228693.951578] Interruptibility = 00000000 ActivityState = 00000000 +kernel: [228693.951586] InterruptStatus = 00b1 +kernel: [228693.951591] *** Host State *** +kernel: [228693.951597] RIP = 0xffffffffc4b064ff RSP = 0xffffaf14ccf87d10 +kernel: [228693.951606] CS=0010 SS=0018 DS=0000 ES=0000 FS=0000 GS=0000 TR=0040 +kernel: [228693.951614] FSBase=00007f0b2657a640 GSBase=ffff9c083f580000 TRBase=fffffe00001a0000 +kernel: [228693.951623] GDTBase=fffffe000019e000 IDTBase=fffffe0000000000 +kernel: [228693.951631] CR0=0000000080050033 CR3=000000029800c004 CR4=00000000003726e0 +kernel: [228693.951639] Sysenter RSP=fffffe00001a0000 CS:RIP=0010:ffffffff95801590 +kernel: [228693.951648] EFER = 0x0000000000000d01 PAT = 0x0407050600070106 +kernel: [228693.951655] *** Control State *** +kernel: [228693.951662] CPUBased=0xb5a06dfa 
SecondaryExec=0x00032ff2 TertiaryExec=0x0000000000000000 +kernel: [228693.951671] PinBased=0x000000ff EntryControls=0000d1ff ExitControls=002befff +kernel: [228693.951679] ExceptionBitmap=00060042 PFECmask=00000000 PFECmatch=00000000 +kernel: [228693.951686] VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000 +kernel: [228693.951695] VMExit: intr_info=00000000 errcode=00000000 ilen=00000000 +kernel: [228693.951702] reason=80000021 qualification=0000000000000000 +kernel: [228693.951709] IDTVectoring: info=00000000 errcode=00000000 +kernel: [228693.951717] TSC Offset = 0xfffe2c437c9ab552 +kernel: [228693.951724] SVI|RVI = 00|b1 TPR Threshold = 0x00 +kernel: [228693.951734] virt-APIC addr = 0x00000002a3014000 +kernel: [228693.951736] PostedIntrVec = 0xf2 +kernel: [228693.951743] EPT pointer = 0x000000012dfe705e +kernel: [228693.951749] PLE Gap=00000080 Window=00001000 +kernel: [228693.951755] Virtual processor ID = 0x0009 +``` diff --git a/results/classifier/gemma3:12b/kvm/1574572 b/results/classifier/gemma3:12b/kvm/1574572 new file mode 100644 index 00000000..8b7f39ae --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1574572 @@ -0,0 +1,48 @@ + +config 20 sriov direct bond ports,vm create failed. + +nova log: + + 2016-04-08 09:57:48.640 5057 INFO nova.compute.manager [req-4e1b4d70-62b6-4158-8413-3c9f226fd13b - - - - -] report alarm_instance_shutoff success + +2016-04-08 09:57:48.712 5057 INFO nova.compute.manager [req-4e1b4d70-62b6-4158-8413-3c9f226fd13b - - - - -] [instance: d860169c-0dac-448f-a644-01a9b200cebe] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor. + +2016-04-08 09:57:48.791 5057 WARNING nova.compute.manager [req-4e1b4d70-62b6-4158-8413-3c9f226fd13b - - - - -] [instance: d860169c-0dac-448f-a644-01a9b200cebe] Instance shutdown by itself. Calling the heal_instance_state. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4 + +2016-04-08 09:57:48.892 5057 INFO nova.compute.manager [req-4e1b4d70-62b6-4158-8413-3c9f226fd13b - - - - -] alarm_notice_heal_event result:1,host_name:tfg120,instance_id:d860169c-0dac-448f-a644-01a9b200cebe,instance_name:vfnicdirect,vm_state:active,power_state:shutdown,action:start + +2016-04-08 09:57:48.997 5057 INFO nova.compute.manager [req-4e1b4d70-62b6-4158-8413-3c9f226fd13b - - - - -] Refresh_instance_block_device_info:False + +2016-04-08 09:57:48.998 5057 INFO nova.compute.manager [req-4e1b4d70-62b6-4158-8413-3c9f226fd13b - - - - -] [instance: d860169c-0dac-448f-a644-01a9b200cebe] Rebooting instance + +2016-04-08 09:57:49.373 5057 WARNING nova.compute.manager [req-4e1b4d70-62b6-4158-8413-3c9f226fd13b - - - - -] [instance: d860169c-0dac-448f-a644-01a9b200cebe] trying to reboot a non-running instance: (state: 4 expected: 1) + +2016-04-08 09:57:49.479 5057 INFO nova.virt.libvirt.driver [-] [instance: d860169c-0dac-448f-a644-01a9b200cebe] Instance destroyed successfully. 
+ + +libvirtd log: + +2016-04-08 02:05:05.785+0000: 4778: info : qemuDomainDestroyFlags:2227 : Log: VM: name= instance-000000b8 + +2016-04-08 02:05:16.156+0000: 4771: info : qemuDomainDefineXMLFlags:7576 : Creating domain 'instance-000000b8' + +2016-04-08 02:05:16.158+0000: 4773: info : qemuDomainCreateWithFlags:7448 : Log: VM: name= instance-000000b8 + +2016-04-08 02:05:16.158+0000: 4773: info : qemuProcessStart:4412 : vm=0x7f19482fdb30 name=instance-000000b8 id=-1 asyncJob=0 migrateFrom=<null> stdin_fd=-1 stdin_path=<null> snapshot=(nil) vmop=0 flags=0x1 + +2016-04-08 02:05:16.169+0000: 4773: info : virNetDevReplaceNetConfig:2541 : Replace Net Config of linkdev enp132s0f0, vf 28, macaddress 00:d1:d4:00:05:03, vlanid 1250, stateDir /var/run/libvirt/hostdevmgr + +2016-04-08 02:05:16.169+0000: 4773: info : virNetDevReplaceNetConfig:2566 : Replace Vf Config of enp132s0f0, vf 28, vlanid 1250, stateDir /var/run/libvirt/hostdevmgr + +2016-04-08 02:05:16.169+0000: 4773: info : virNetDevReplaceVfConfig:2390 : pflinkdev enp132s0f0, vf 28,vlanid 1250 + +2016-04-08 02:05:16.178+0000: 4773: info : virNetDevReplaceVfConfig:2428 : save oldmac 00:d1:d4:00:05:03, oldvlanid 1250 + +2016-04-08 02:05:16.178+0000: 4773: info : virNetDevSetVfConfig:2196 : ifname enp132s0f0,ifindex -1,vf 28,macaddress 00:d1:d4:00:05:03, vlanid 1250 + + +qemu log: + +kvm_alloc_slot: no free slot available + +2016-04-08 06:21:04.793+0000: shutting down \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1575607 b/results/classifier/gemma3:12b/kvm/1575607 new file mode 100644 index 00000000..a9e09764 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1575607 @@ -0,0 +1,52 @@ + + vm startup failed, qemu returned "kvm run failed Bad address" + + create a virtual machine and start by libvirt. vm startup failed, qemu returned "kvm run failed Bad address" + the error log is : + +error: kvm run failed Bad address + +EAX=7ffc0000 EBX=7ffbffd0 ECX=fffffff0 EDX=0000002c + +ESI=00006f5c EDI=7ffbffd0 EBP=7ffc0000 ESP=00006f34 + +EIP=000dec7b EFL=00010046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 + +ES =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] + +CS =0008 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA] + +SS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] + +DS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] + +FS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] + +GS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] + +LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT + +TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy + +GDT= 000f6a80 00000037 + +IDT= 000f6abe 00000000 + +CR0=60000011 CR2=00000000 CR3=00000000 CR4=00000000 + +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 + +DR6=00000000ffff0ff0 DR7=0000000000000400 + +EFER=0000000000000000 + +Code=c3 29 d3 21 cb 39 c3 77 27 3b 5e 0c 72 22 85 ff 75 02 89 df <89> 5f 08 01 da 89 57 0c 89 47 10 89 5e 10 8b 56 04 89 f8 e8 8c fc ff ff 89 d8 eb 06 8b 36 + + we add numa in the vm, the numatune info is: + <numatune> + + <memory mode='strict' placement='auto'/> + + </numatune> + + the version of qemu is 2.5.0. 
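Since the report ties the failure to the added <numatune> block (mode='strict', placement='auto'), a hedged first check is whether the node that auto placement picked can actually satisfy the guest's memory: strict mode fails the allocation instead of falling back to another node. A rough sketch, with the domain name left as a placeholder:

```
numactl --hardware        # free memory per host NUMA node
virsh numatune <domain>   # nodeset that libvirt's auto placement actually selected
```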
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1580 b/results/classifier/gemma3:12b/kvm/1580 new file mode 100644 index 00000000..db7a1ff2 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1580 @@ -0,0 +1,45 @@ + +QEMU crashes when running inside Hyper-V VM on AMD EPYC +Description of problem: +Starting the VM very rarely succeeds and often it crashes with: +``` +# qemu-system-x86_64 -cpu EPYC -machine accel=kvm -smp 1 -m 512 -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,file=OVMF_VARS.fd -drive file=debian-11-nocloud-amd64-20230124-1270.qcow2,format=qcow2 -snapshot -monitor none +qemu: module ui-ui-gtk not found, do you want to install qemu-system-gui package? +qemu: module ui-ui-sdl not found, do you want to install qemu-system-gui package? +VNC server running on ::1:5900 +KVM internal error. Suberror: 1 +extra data[0]: 0x0000000000000001 +extra data[1]: 0x96d0cff2bed0cf0f +extra data[2]: 0x0bfd29af72b35c7c +extra data[3]: 0x0000000000000400 +extra data[4]: 0x0000000100000004 +extra data[5]: 0x00000000581c356c +extra data[6]: 0x0000000000000000 +extra data[7]: 0x0000000000000000 +emulation failure +EAX=fffd26a4 EBX=00000000 ECX=00000000 EDX=b731cdad +ESI=00000101 EDI=00005042 EBP=fffcc000 ESP=581c3564 +EIP=fffff8a8 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0008 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +CS =0010 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA] +SS =0008 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0008 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +FS =0008 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +GS =0008 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT +TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy +GDT= fffffee0 00000027 +IDT= 00000000 00000000 +CR0=40000033 CR2=00000000 CR3=00800000 CR4=00000660 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000100 +Code=00 0f 20 e0 0f ba e8 05 0f 22 e0 31 db e9 13 02 00 00 85 c0 <75> 38 b9 80 00 00 c0 0f 32 0f ba e8 08 0f 30 31 db b9 01 00 00 00 0f a3 0d 04 b0 80 00 74 +``` +Steps to reproduce: +1. Create a [Standard_D8ads_v5 VM](https://learn.microsoft.com/en-us/azure/virtual-machines/dasv5-dadsv5-series) (AMD EPYC 7763 64-Core Processor) in Azure with Debian 11 +2. Install `qemu-system-x86` (1:7.2+dfsg-5~bpo11+1) from `bullseye-backports` +3. Install `ovmf` (2022.11-6) from `bookworm` (testing) +4. Run the commands under "QEMU command line" +Additional information: +VNC displays "Guest has not initialized the display (yet)". The setup works perfectly on a [Standard_D8ds_v5 VM](https://learn.microsoft.com/en-us/azure/virtual-machines/ddv5-ddsv5-series) (Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz). diff --git a/results/classifier/gemma3:12b/kvm/1583 b/results/classifier/gemma3:12b/kvm/1583 new file mode 100644 index 00000000..02675e09 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1583 @@ -0,0 +1,20 @@ + +SGX Device mapping is not listed into QEMU KVM +Description of problem: +I want to run SGX into QEMU VM, the vm is up and running but SGX device mappings are not listed there. 
I also looked in dmesg | grep sgx and it returned "There are zero epc section" + +I have upgraded the libvirt to 8.6.0 because of below issue +https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1982896 + +I tried with libvirt-8.0.0 but it did not help + +I have attached the xml, please let me know why sgx mappings are not showing inside VM +Steps to reproduce: +1. Create a Ubuntu 20.04 VM with SGX mapping +Additional information: +Please let me know if any other logs are required + + + + +[ubuntu20.04.xml](/uploads/2609abc31db08e04cc3e3dbf923cd8d7/ubuntu20.04.xml) diff --git a/results/classifier/gemma3:12b/kvm/1585971 b/results/classifier/gemma3:12b/kvm/1585971 new file mode 100644 index 00000000..c0857bb1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1585971 @@ -0,0 +1,21 @@ + +Host system crashes on qemu with DMA remapping + +Hy, + +the host system crashes completely, when i try to pass an physical device without boot option intel_iommu=on set. In older kernel versions you didn't have to pass that option. + +I wonder if this can be easily checked by asking iommu state, avoiding a crash of the complete system. + +My data: +cpu model: Intel(R) Core(TM) i7 CPU +qemu version: 2.4.1-r2 +kernel version: 4.1.2 x86_64 +command line: +qemu-system-x86_64 -enable-kvm -drive file=/vms/prod/fw/fw.iso,if=virtio,format=raw -drive file=/vms/prod/fw/swap,if=virtio,format=raw -drive file=/vms/prod/fw/fwdata.iso,if=virtio,format=raw -m 512 -nographic -kernel /data/kernels/vmlinuz-2.6.36-gentoo-r8 -append "root=/dev/vda console=ttyS0 earlyprintk=serial" -net nic,model=virtio,macaddr=DE:AD:BE:EF:2D:AD -net tap,ifname=tapfw0,script=/etc/qemu/qemu-ifup -device pci-assign,host=03:00.0 + +There are also more detailed informations (if needed) here: +https://forums.gentoo.org/viewtopic-p-7923976.html + +Thanks, +Antonios. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1589153 b/results/classifier/gemma3:12b/kvm/1589153 new file mode 100644 index 00000000..3290ac55 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1589153 @@ -0,0 +1,35 @@ + +qemu-system-x86_64 version 2.5.0 freezes during windows 7 installation in lubuntu 16.04 + +Hi! + +I have been using qemu - kvm for several years in different versions of ubuntu (lubuntu). I am trying to migrate from 15.04 to 16.04 and am having a problem. In particular, on my machine (a samsung series 9 with dual core i7 processor and 8gb ram) the following commands worked in 15.04 but do not work in 15.10 and 16.04. FYI, I tested them on a clean machine, where I have created a 60GB image file in its own partition.. In particular, I am using the command to start installing windows 7 and it works in a clean install of 15.04 (yesterday) but not in 15.10 (yesterday) or 16.04 (the day before). I do not get any error messages in my xterminal when running this and do not know how to check for windows error messages. By not working I mean that after loading files it gets to a windows screen and then stays there forever. 
+ +The command lines used to invoke qemu is: +echo "*** Installing windows 7 virtual machine - Step 2" + + +echo "*** Try command for slow mouse" +export SDL_VIDEO_X11_DGAMOUSE=0 + +sudo qemu-system-x86_64 \ + -enable-kvm \ + -machine pc,accel=kvm \ + -cdrom /home/Archives/Software/OperatingSystems.Windows7HP.64/Windows7HP64_Install.iso \ + -boot d \ + -net nic,macaddr=56:44:45:30:31:34 \ + -net user \ + -cpu host \ + -vga qxl \ + -spice port=5900,disable-ticketing \ + -uuid 8373c3d6-1e6c-f022-38e2-b94e6e14e170 \ + -smp cpus=2,maxcpus=3 \ + -m 6144 \ + -name DrPhilSS9AWin7VM \ + -hda /mnt/Windows7Image/Windows7Guest.img \ + -localtime \ + -k en-us \ + -usb \ + -usbdevice tablet& +sleep 10 +spicy --host 127.0.0.1 --port 5900 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1593605 b/results/classifier/gemma3:12b/kvm/1593605 new file mode 100644 index 00000000..d802718f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1593605 @@ -0,0 +1,84 @@ + +windows2008r2 boot failed with uefi + +I want to run my win2008r2 with uefi. Hypervisor is ubuntu16.04 and my qemu command line show below: + +qemu-system-x86_64 -enable-kvm -name win2008r2 -S -machine pc-i440fx-2.5,accel=kvm,usb=off -cpu host,hv_time,hv_relaxed,hv_spinlocks=0x2000 -drive file=/usr/share/qemu/OVMF.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/var/lib/libvirt/qemu/nvram/win2008r2_VARS.fd,if=pflash,format=raw,unit=1 -m size=8388608k,slots=10,maxmem=1073741824k -realtime mlock=off -smp 8,maxcpus=96,sockets=24,cores=4,threads=1 -numa node,nodeid=0,cpus=0-7,mem=8192 -uuid 030638c5-c6aa-4f06-82f8-dd2d04fd5705 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-win2008r2/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,clock=vm,driftfix=slew -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device usb-ehci,id=usb1,bus=pci.0,addr=0x4 -device nec-usb-xhci,id=usb2,bus=pci.0,addr=0x5 -device lsi,id=scsi0,bus=pci.0,addr=0x6 -device virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x7 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x8 -drive file=/vms/images/win2008r2,format=qcow2,if=none,id=drive-ide0-0-0,cache=directsync -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/vms/isos/cn_windows_server_2008_r2_standard_enterprise_datacenter_and_web_with_sp1_x64_dvd_617598.iso,format=raw,if=none,id=drive-ide0-1-1,readonly=on -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1,bootindex=2 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/win2008r2.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0.0.0.0:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa -msg timestamp=on + + +OVMF.fd is download from http://sourceforge.net/projects/edk2/files/OVMF/ OVMF-X64-r15214.zip. + +When I boot my domain with windows2008 iso, the kvm was caught in endless interrupt. I enable trace on my host and I got this. + + + +1. echo 1 > /sys/kernel/debug/tracing/events/kvm/enable +2. cat /sys/kernel/debug/tracing/trace_pipe +qemu-system-x86-1969 [006] .... 2093.019588: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000fd + qemu-system-x86-1969 [006] d... 
2093.019590: kvm_entry: vcpu 0 + qemu-system-x86-1966 [017] .... 2093.021424: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.021429: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.021430: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1969 [006] .... 2093.021683: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.021686: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.022592: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000ef + qemu-system-x86-1969 [006] d... 2093.022595: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.022746: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.022749: kvm_entry: vcpu 0 + qemu-system-x86-1966 [017] .... 2093.023434: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.023444: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.023446: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1969 [006] .... 2093.023610: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.023612: kvm_entry: vcpu 0 + qemu-system-x86-1966 [017] .... 2093.025430: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.025435: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.025436: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1969 [006] .... 2093.025599: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.025601: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .N.. 2093.026593: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000ef + qemu-system-x86-1969 [006] d... 2093.026596: kvm_fpu: unload + qemu-system-x86-1969 [006] .... 2093.026598: kvm_ple_window: vcpu 0: ple_window 4096 (shrink 4096) + qemu-system-x86-1969 [006] .... 2093.026599: kvm_fpu: load + qemu-system-x86-1969 [006] d... 2093.026599: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.026741: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.026744: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.026841: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.026844: kvm_entry: vcpu 0 + qemu-system-x86-1966 [017] .... 2093.027448: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.027454: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.027455: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1966 [017] .... 2093.029444: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.029449: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.029450: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1969 [006] .... 2093.029452: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000ef + qemu-system-x86-1969 [006] d... 2093.029454: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 
2093.029633: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.029636: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.030592: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000ef + qemu-system-x86-1969 [006] d... 2093.030595: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.030745: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.030748: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.030840: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.030843: kvm_entry: vcpu 0 + qemu-system-x86-1966 [017] .... 2093.031454: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.031459: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.031460: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1966 [017] .... 2093.032968: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.032974: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.032975: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1969 [006] .... 2093.033229: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.033231: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.034592: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000ef + qemu-system-x86-1969 [006] d... 2093.034595: kvm_entry: vcpu 0 + qemu-system-x86-1969 [006] .... 2093.034781: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.034783: kvm_entry: vcpu 0 + qemu-system-x86-1966 [017] .... 2093.034975: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.034980: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.034981: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1969 [006] .... 2093.035113: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae8e info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.035116: kvm_entry: vcpu 0 + qemu-system-x86-1966 [017] .... 2093.036983: kvm_set_irq: gsi 8 level 1 source 0 + qemu-system-x86-1966 [017] .... 2093.036989: kvm_pic_set_irq: chip 1 pin 0 (edge|masked) + qemu-system-x86-1966 [017] .... 2093.036990: kvm_ioapic_set_irq: pin 8 dst 1 vec=209 (Fixed|logical|edge) (coalesced) + qemu-system-x86-1969 [006] .... 2093.037154: kvm_exit: reason EXTERNAL_INTERRUPT rip 0xfffff8001080ae78 info 0 800000fd + qemu-system-x86-1969 [006] d... 2093.037157: kvm_entry: vcpu 0 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1595 b/results/classifier/gemma3:12b/kvm/1595 new file mode 100644 index 00000000..3f72230e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1595 @@ -0,0 +1,30 @@ + +CPU boot sometimes fails on big.LITTLE CPUs with varying cache sizes +Description of problem: +The RK3588 SoC has three core clusters; one with A55 cores, and the other two have A76 cores. The big cores have more L2 cache than the little cores, so the value of `CCSIDR` depends on the core that it is read from. + +In `write_list_to_kvmstate`, QEMU attempts to use `KVM_SET_ONE_REG` with an ID for `KVM_REG_ARM_DEMUX_ID_CCSIDR`, trying to set `CCSIDR` to a previously read value. 
+ +Normally, that works fine, but if the host kernel has moved QEMU from one core cluster to the other, then the value will be different and `demux_c15_set` will return `EINVAL`, causing the entire `arm_set_cpu_on` to fail, and the guest kernel to print an error. + +https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/kvm/sys_regs.c?h=v6.2#n2827 + +I tried changing the condition for the `ok = false` line in `write_list_to_kvmstate` to `ret && r.id >> 8 != 0x60200000001100`. This causes all CPUs to initialize correctly in the guest, but obviously that's a hack. + +I assume that `CCSIDR` not being uniform across all CPUs means that the guest's copy of `CCSIDR` may be wrong, and so cache maintenance operations may not act on the entire cache. I do not know whether that could actually cause problems. Will QEMU need to find the maximum cache size across all CPUs and present that to guests? +Steps to reproduce: +On a SoC where big and little cores have different cache sizes (e.g. RK3588): + +```text +$ qemu-system-aarch64 -M virt -accel kvm -cpu host -smp 4 -nographic -kernel arch/arm64/boot/Image -append quiet +[ 0.001399][ T1] psci: failed to boot CPU1 (-22) +[ 0.001407][ T1] CPU1: failed to boot: -22 +[ 0.001685][ T1] psci: failed to boot CPU2 (-22) +[ 0.001691][ T1] CPU2: failed to boot: -22 +[ 0.001809][ T1] psci: failed to boot CPU3 (-22) +[ 0.001814][ T1] CPU3: failed to boot: -22 +``` + +The error is not always printed, because it depends on which core cluster the processes are scheduled on. + +Using `taskset -c 0-3` or `taskset -c 4-7` to force QEMU to stick to the little or big cores respectively makes the bug not reproduce. diff --git a/results/classifier/gemma3:12b/kvm/1596579 b/results/classifier/gemma3:12b/kvm/1596579 new file mode 100644 index 00000000..a13eeb3d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1596579 @@ -0,0 +1,27 @@ + +segfault upon reboot + +[ 31.167946] VFIO - User Level meta-driver version: 0.3 +[ 34.969182] kvm: zapping shadow pages for mmio generation wraparound +[ 43.095077] vfio-pci 0000:1a:00.0: irq 50 for MSI/MSI-X +[166493.891331] perf interrupt took too long (2506 > 2500), lowering kernel.perf_event_max_sample_rate to 50000 +[315765.858431] qemu-kvm[1385]: segfault at 0 ip (null) sp 00007ffe5430db18 error 14 +[315782.002077] vfio-pci 0000:1a:00.0: transaction is not cleared; proceeding with reset anyway +[315782.910854] mptsas 0000:1a:00.0: Refused to change power state, currently in D3 +[315782.911236] mptbase: ioc1: Initiating bringup +[315782.911238] mptbase: ioc1: WARNING - Unexpected doorbell active! +[315842.957613] mptbase: ioc1: ERROR - Failed to come READY after reset! IocState=f0000000 +[315842.957670] mptbase: ioc1: WARNING - ResetHistory bit failed to clear! +[315842.957675] mptbase: ioc1: ERROR - Diagnostic reset FAILED! (ffffffffh) +[315842.957717] mptbase: ioc1: WARNING - NOT READY WARNING! +[315842.957720] mptbase: ioc1: ERROR - didn't initialize properly! (-1) +[315842.957890] mptsas: probe of 0000:1a:00.0 failed with error -1 + +The qemu-kvm segfault happens when I issue a reboot on the Windows VM. The card I have is: +1a:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS (rev ff) + +I have two of these cards (bought with many years difference), exact same model, and they fail the same way. I'm using PCI passthrough on this card for access to the tape drive. +This is very easy to reproduce, so feel free to let me know what to try. 
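Since it reproduces easily, one thing worth trying (a sketch on my part, assuming gdb is available on the CentOS 7 host) is to attach a debugger to the running qemu-kvm process before issuing the reboot in the guest, so the segfault yields a usable backtrace instead of just the dmesg line:

```
# attach before triggering the reboot in the Windows guest
gdb -p $(pidof qemu-kvm)
(gdb) handle SIGSEGV stop print
(gdb) continue
# ... when the guest reboot makes it fault:
(gdb) thread apply all bt
```

Alternatively, running the VM with `ulimit -c unlimited` set lets the kernel write a core dump that can be inspected after the fact.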
+Kernel 3.10.0-327.18.2.el7.x86_64 (Centos 7.2.1511). +qemu-kvm-1.5.3-105.el7_2.4.x86_64 +Reporting it here because of the segfault, but I guess I might have to open a bug report with mptbase as well? \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1598 b/results/classifier/gemma3:12b/kvm/1598 new file mode 100644 index 00000000..8a7e6734 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1598 @@ -0,0 +1,59 @@ + +vfio-pci - Intel Arc DG2 - host errors +Description of problem: +The host continues to respond (slowly) after the VM is shutdown. Speeds back up to normal after about an hour. However, a reboot is required to get the host to operate normally. + +When shutting down the VM, the host starts to display the following messages in dmesg: + +[Thu Apr 13 01:30:47 2023] vfio-pci 0000:18:00.0: not ready 1023ms after FLR; waiting +[Thu Apr 13 01:30:49 2023] vfio-pci 0000:18:00.0: not ready 2047ms after FLR; waiting +[Thu Apr 13 01:30:52 2023] vfio-pci 0000:18:00.0: not ready 4095ms after FLR; waiting +[Thu Apr 13 01:30:57 2023] vfio-pci 0000:18:00.0: not ready 8191ms after FLR; waiting +[Thu Apr 13 01:31:06 2023] vfio-pci 0000:18:00.0: not ready 16383ms after FLR; waiting +[Thu Apr 13 01:31:25 2023] vfio-pci 0000:18:00.0: not ready 32767ms after FLR; waiting +[Thu Apr 13 01:31:59 2023] vfio-pci 0000:18:00.0: not ready 65535ms after FLR; giving up +[Thu Apr 13 01:32:11 2023] vfio-pci 0000:18:00.0: not ready 1023ms after bus reset; waiting +[Thu Apr 13 01:32:13 2023] vfio-pci 0000:18:00.0: not ready 2047ms after bus reset; waiting +[Thu Apr 13 01:32:16 2023] vfio-pci 0000:18:00.0: not ready 4095ms after bus reset; waiting +[Thu Apr 13 01:32:21 2023] vfio-pci 0000:18:00.0: not ready 8191ms after bus reset; waiting +[Thu Apr 13 01:32:31 2023] vfio-pci 0000:18:00.0: not ready 16383ms after bus reset; waiting +[Thu Apr 13 01:32:48 2023] vfio-pci 0000:18:00.0: not ready 32767ms after bus reset; waiting +[Thu Apr 13 01:33:22 2023] vfio-pci 0000:18:00.0: not ready 65535ms after bus reset; giving up +Steps to reproduce: +1. Shutdown VM. +Additional information: +I have startup and shutdown scripts that detach and reattach the card and these scripts work fine if I test them alone. It's only when I shutdown the VM that issue presents itself. 
+ +revert.sh + +``` +#!/bin/bash +set -x + +systemctl reboot # to workaround host lockup on shutdown + +# Load the config file with our environmental variables +source "/etc/libvirt/hooks/kvm.conf" +source "/etc/libvirt/hooks/vmPreBootSetup" + +cpuSchedutil + +# Unload VFIO-PCI Kernel Driver +modprobe -r vfio_pci +modprobe -r vfio_iommu_type1 +modprobe -r vfio + +# Re-Bind GPU to our display drivers +virsh nodedev-reattach $VIRSH_GPU_VIDEO +virsh nodedev-reattach $VIRSH_GPU_AUDIO + +#modprobe drm_buddy intel_gtt video drm_display_helper cec ttm i915 + +# Restart Display Manager +systemctl restart sddm.service +``` + + + +Full dmesg log: +[vfio_13_april_2023.txt](/uploads/5d5b642595c53cabb3c3608c07d59eb3/vfio_13_april_2023.txt) diff --git a/results/classifier/gemma3:12b/kvm/1603970 b/results/classifier/gemma3:12b/kvm/1603970 new file mode 100644 index 00000000..1671c402 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1603970 @@ -0,0 +1,31 @@ + +KVM freezes after live migration (AMD 4184 -> 4234) + +Hi, + +i have two host systems with different CPU types: + +Host A: +AMD Opteron(tm) Processor 4234 + +Host B: +AMD Opteron(tm) Processor 4184 + +Live migration from B -> A works as expected, migration from A -> B always ends in a freezed KVM. If the KVM is frozen, VNC output is still present, however, you can't type anything. CPU usage is always at 100% for one core (so if i set two cores, one is at 100% the other one at 0% usage). + +My command to launch the KVM is the following: + +/usr/bin/kvm -id 104 -chardev socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait -mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/104.pid -daemonize -smbios type=1,uuid=26dd83a9-b9bd-4641-8016-c55f255f1bdf -name kilian-test -smp 1,sockets=1,cores=1,maxcpus=1 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -vga cirrus -vnc unix:/var/run/qemu-server/104.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 512 -object memory-backend-ram,id=ram-node0,size=512M -numa node,nodeid=0,cpus=0,memdev=ram-node0 -k de -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:5ca1e9d334b2 -drive file=/mnt/pve/nfs_synology/images/104/vm-104-disk-2.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=66:33:31:36:35:36,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 + + +KVM / QEMU version: QEMU emulator version 2.5.1.1 + +I have tried to set different CPU types, but no one works (qemu64, vm64, Opteron_G1, ...). 
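One way to rule out a silently dropped CPU feature (a sketch to run on both hosts; substitute whatever model the two machines are supposed to share) is to start QEMU with the `enforce` flag, which makes it refuse to start and list the missing flags instead of quietly degrading the model:

```
qemu-system-x86_64 -machine accel=kvm -cpu Opteron_G1,enforce -S -display none -monitor stdio
```

If this starts cleanly on both the 4184 and the 4234 host, the guest-visible CPUID should be identical on both sides of the migration; if not, the printed missing features point at the mismatch.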
+ + +I have found an email from 2014 where another user reports exactly the same problem: + +http://lists.gnu.org/archive/html/qemu-discuss/2014-02/msg00002.html + +Greets +Kilian \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1616706 b/results/classifier/gemma3:12b/kvm/1616706 new file mode 100644 index 00000000..48096a3f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1616706 @@ -0,0 +1,17 @@ + +watchdog doesn't bring down the VM + +Qemu-kvm version : QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-105.el7), Copyright (c) 2003-2008 Fabrice Billard + +Qemu version: Virsh command line tool of libvirt 1.2.17 + +I have the VM stuck in bios (efi shell), but i don't see the watchdog in the host bringing it down? + +Couple of questions: + +1. Does the watchdog functionality requires the driver in adminvm to trigger the reload? or qemu detects it in the host and causes the reload. + +2. Does this work reliably? I have seen cases where in i have the watchdog daemon in the VM shut, still don't see the VM going down (I put the action in the XML file as power off) + +Thanks +Amit \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1619991 b/results/classifier/gemma3:12b/kvm/1619991 new file mode 100644 index 00000000..86d58d33 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1619991 @@ -0,0 +1,125 @@ + +Concurrent VMs crash w/ GPU passthrough and multiple disks + +When running multiple VMs with GPU passthrugh, both VMs will crash unless all virtual disks are on the same physical volume as root, likely on all X58 chipset motherboards. I've tested with 3. + +Expected Behavior: No Crash +Result: Both VMs GPU drivers fail and the guest OS are unrecoverable, usually within seconds, though the degree of "fickleness" of it depends on the multidisk setup. +Reproducibility: 100% + + +Steps to reproduce: + +* Install OS (In my case Debian Jessie/Proxmox), and update to latest +* Setup VMs +* Setup up GPU passthrough with 1 GPU per VM, and one for host, as per https://pve.proxmox.com/wiki/Pci_passthrough +* Setup up USB passthrough +* Launch both VM +* Observe "everything is working" +* Stop VMs +* Add a second disk to one of the VMs, which exists on a separate physical disk from Host OS / +* Observe both VMs crash when the virtual disk which exists on separate physical media is used (i.e. copy files to the disk) +* Stop VMs +* Remove new disk, and move Guest OS virtual root disk to separate physical media. +* Observe both VMs crash around the time GPU driver is loaded on one + +As I mentioned earlier, there is some degree of difference in how difficult it is to trigger a crash, depending on the multidisk setup. For instance, when / is ZFS, and the virtual disks exist on a separate ZFS raid-z volume, both VMs must be doing some relatively intensive HW 3d acceleration in order to trigger the crash. + +Passing two GPU to one VM works fine all the time, and running either VM on its in general will not trigger a crash. + +There are many variables I have yet to test, such as using sata instead of virtio for the virtual disks, however unfortunately I do not have anything from std err or logs to indicate what the problem could be. + +kernel verion: Linux test-ve 4.4.15-1-pve (other versions >= 4.2.1 and <= 4.7.? 
tested) +qemu version: 2.6.0 pve-qemu-kvm_2.6-1 +motherboards tested: rampage iii, ga-ex58-ud5, asus Psomething +CPUs tested: i7 920, X5670 + + +KVM invocation 1: + +/usr/bin/kvm \ +-id 101 \ +-chardev socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait \ +-mon chardev=qmp,mode=control \ +-pidfile /var/run/qemu-server/101.pid \ +-daemonize \ +-smbios type=1,uuid=450e337e-244c-429b-9aa8-afb7aee037e8 \ +-drive if=pflash,format=raw,readonly,file=/usr/share/kvm/OVMF-pure-efi.fd \ +-drive if=pflash,format=raw,file=/root/101-OVMF_VARS-pure-efi.fd \ +-name Madzia-PC \ +-smp 12,sockets=1,cores=12,maxcpus=12 \ +-nodefaults \ +-boot menu=on,strict=on,reboot-timeout=1000 \ +-vga none \ +-nographic \ +-no-hpet \ +-cpu host,hv_vendor_id=Nvidia43FIX,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi,kvm=off \ +-m 8192 \ +-object memory-backend-ram,id=ram-node0,size=8192M \ +-numa node,nodeid=0,cpus=0-11,memdev=ram-node0 \ +-k en-us -readconfig /usr/share/qemu-server/pve-q35.cfg \ +-device usb-tablet,id=tablet,bus=ehci.0,port=1 \ +-device vfio-pci,host=04:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0 \ +-device vfio-pci,host=04:00.1,id=hostpci1,bus=ich9-pcie-port-2,addr=0x0 \ +-device usb-host,hostbus=1,hostport=6.1,id=usb0 \ +-device usb-host,hostbus=1,hostport=6.2.1,id=usb1 \ +-device usb-host,hostbus=1,hostport=6.2.2,id=usb2 \ +-device usb-host,hostbus=1,hostport=6.2.3,id=usb3 \ +-device usb-host,hostbus=1,hostport=6.2,id=usb4 \ +-device usb-host,hostbus=1,hostport=6.3,id=usb5 \ +-device usb-host,hostbus=1,hostport=6.4,id=usb6 \ +-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 \ +-iscsi initiator-name=iqn.1993-08.org.debian:01:3f3df5515b13 \ +-drive file=/dev/pve/vm-101-disk-1,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on \ +-device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 \ +-netdev type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on \ +-device virtio-net-pci,mac=4E:DD:47:D7:DF:C9,netdev=net0,bus=pci.0,addr=0x12,id=net0 \ +-rtc driftfix=slew,base=localtime \ +-machine type=q35 \ +-global kvm-pit.lost_tick_policy=discard + + +KVM invocation 2: + +/usr/bin/kvm \ +-id 102 \ +-chardev socket,id=qmp,path=/var/run/qemu-server/102.qmp,server,nowait \ +-mon chardev=qmp,mode=control \ +-pidfile /var/run/qemu-server/102.pid \ +-daemonize \ +-smbios type=1,uuid=450e337e-244c-429b-9aa8-afb7aee037e8 \ +-drive if=pflash,format=raw,readonly,file=/usr/share/kvm/OVMF-pure-efi.fd \ +-drive if=pflash,format=raw,file=/root/102-OVMF_VARS-pure-efi.fd \ +-name Madzia-PC \ +-smp 12,sockets=1,cores=12,maxcpus=12 \ +-nodefaults \ +-boot menu=on,strict=on,reboot-timeout=1000 \ +-vga none \ +-nographic \ +-no-hpet \ +-cpu host,hv_vendor_id=Nvidia43FIX,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,+kvm_pv_unhalt,+kvm_pv_eoi,kvm=off \ +-m 512 \ +-object memory-backend-ram,id=ram-node0,size=512M \ +-numa node,nodeid=0,cpus=0-11,memdev=ram-node0 \ +-k en-us \ +-readconfig /usr/share/qemu-server/pve-q35.cfg \ +-device usb-tablet,id=tablet,bus=ehci.0,port=1 \ +-device vfio-pci,host=05:00.0,id=hostpci2,bus=ich9-pcie-port-3,addr=0x0 \ +-device vfio-pci,host=05:00.1,id=hostpci3,bus=ich9-pcie-port-4,addr=0x0 \ +-device usb-host,hostbus=2,hostport=2.1,id=usb0 \ +-device usb-host,hostbus=2,hostport=2.2,id=usb1 \ +-device usb-host,hostbus=2,hostport=2.3,id=usb2 \ +-device 
usb-host,hostbus=2,hostport=2.4,id=usb3 \ +-device usb-host,hostbus=2,hostport=2.5,id=usb4 \ +-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 \ +-iscsi initiator-name=iqn.1993-08.org.debian:01:3f3df5515b13 \ +-drive file=/dev/pve/vm-102-disk-1,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on \ +-device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 \ +-netdev type=tap,id=net0,ifname=tap102i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on \ +-device virtio-net-pci,mac=4E:DD:47:D7:DF:C9,netdev=net0,bus=pci.0,addr=0x12,id=net0 \ +-rtc driftfix=slew,base=localtime \ +-machine type=q35 \ +-global kvm-pit.lost_tick_policy=discard + + +Please let me know what additional information may be helpful, or how I can be of any assistance. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1623276 b/results/classifier/gemma3:12b/kvm/1623276 new file mode 100644 index 00000000..fef60883 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1623276 @@ -0,0 +1,142 @@ + +qemu 2.7 / iPXE crash + +I am running Arch linux + +vanilla 4.7.2 kernel +qemu 2.7 +libvirt 2.2.0 +virt-manager 1.4.0 + + +Since the upgrade from qemu 2.6.1 to 2.7 a few days ago. I'm no longer +able to PXE boot at all. Everything else appears to function normally. +Non PXE booting and everything else is perfect. Obviously have +restarted everying etc. Have tried the various network drivers also. + +This occurs on domains created with 2.6.1 or with 2.7 + +When I choose PXE boot, the machine moves to a paused state (crashed) +immediately after the 'starting PXE rom execution...' message appears. + +Reverting to qemu 2.6.1 package corrects the issue. + +The qemu.log snippet follows below. + +I'm not sure how to troubleshoot this problem to determine if it's a +packaging error by the distribution or a problem with qemu/kvm/kernel? 
+ +Any help would be much appreciated - Thanks, +Greg + +--- qemu.log: + + +2016-09-12 16:36:33.867+0000: starting up libvirt version: 2.2.0, qemu +version: 2.7.0, hostname: seneca +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin +QEMU_AUDIO_DRV=spice /usr/sbin/qemu-system-x86_64 -name guest=c,debug- +threads=on -S -object +secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6- +c/master-key.aes -machine pc-i440fx-2.7,accel=kvm,usb=off,vmport=off +-cpu Nehalem -m 2048 -realtime mlock=off -smp +1,sockets=1,cores=1,threads=1 -uuid 348009be-26d5-4dc7-b515- +e8b45f5117ac -no-user-config -nodefaults -chardev +socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6- +c/monitor.sock,server,nowait -mon +chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew +-global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global +PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot +menu=on,strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 +-device ich9-usb- +uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6 +-device ich9-usb- +uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1 -device ich9- +usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2 -device +virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive +file=/var/lib/libvirt/images/c.qcow2,format=qcow2,if=none,id=drive- +virtio-disk0 -device virtio-blk- +pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio- +disk0,bootindex=1 -netdev tap,fd=28,id=hostnet0 -device +rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:a0:95:7c,bus=pci.0,addr=0x +3 -chardev pty,id=charserial0 -device isa- +serial,chardev=charserial0,id=serial0 -chardev +socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain +-6-c/org.qemu.guest_agent.0,server,nowait -device +virtserialport,bus=virtio- +serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_age +nt.0 -chardev spicevmc,id=charchannel1,name=vdagent -device +virtserialport,bus=virtio- +serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 +-device usb-tablet,id=input0,bus=usb.0,port=1 -spice +port=5901,addr=127.0.0.1,disable-ticketing,image- +compression=off,seamless-migration=on -device qxl- +vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vga +mem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device intel- +hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0- +codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir +-device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 +-chardev spicevmc,id=charredir1,name=usbredir -device usb- +redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device virtio- +balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on +char device redirected to /dev/pts/0 (label charserial0) +main_channel_link: add main channel client +red_dispatcher_set_cursor_peer: +inputs_connect: inputs channel client create +KVM internal error. 
Suberror: 1 +emulation failure +EAX=801a8d00 EBX=000000a0 ECX=00002e20 EDX=0009d5e8 +ESI=7ffa3c00 EDI=7fef4000 EBP=ffffffff ESP=00007b92 +EIP=000006ab EFL=00000087 [--S--PC] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 00000000 ffffffff 00c09300 +CS =9c4c 0009c4c0 ffffffff 00809b00 +SS =0000 00000000 ffffffff 00809300 +DS =9cd0 0009cd00 ffffffff 00c09300 +FS =0000 00000000 ffffffff 00c09300 +GS =0000 00000000 ffffffff 00c09300 +LDT=0000 00000000 0000ffff 00008200 +TR =0000 00000000 0000ffff 00008b00 +GDT= 00000000 00000000 +IDT= 00000000 000003ff +CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 +DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=00 16 66 9c 66 60 0f a8 0f a0 06 1e 16 0e fa 2e 8e 1e 90 06 <0f> +ae 06 d0 1c 0f 01 0e c6 1c 0f 01 06 c0 1c fc 66 b9 38 00 00 00 66 ba 10 +02 00 00 66 68 + + +--- /proc/cpuinfo +processor : 0 +vendor_id : GenuineIntel +cpu family : 6 +model : 26 +model name : Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz +stepping : 5 +microcode : 0x11 +cpu MHz : 3066.648 +cache size : 8192 KB +physical id : 0 +siblings : 8 +core id : 0 +cpu cores : 4 +apicid : 0 +initial apicid : 0 +fpu : yes +fpu_exception : yes +cpuid level : 11 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr +pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe +syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl +xtopology nonstop_tsc aperfmperf eagerfpu pni dtes64 monitor ds_cpl vmx +est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm tpr_shadow +vnmi flexpriority ept vpid dtherm +bugs : +bogomips : 6135.85 +clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual +power management: \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1629618 b/results/classifier/gemma3:12b/kvm/1629618 new file mode 100644 index 00000000..ad3978b8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1629618 @@ -0,0 +1,89 @@ + +QEMU causes host hang / reset on PPC64EL + +QEMU causes a host hang / reset on PPC64EL when used in KVM + HV mode (kvm_hv module). + +After a random amount of uptime, starting new QEMU virtual machines will cause the host to experience a soft CPU lockup. Depending on configuration and other random factors the host will either checkstop and reboot, or hang indefinitely. The following stacktrace was pulled from an instance where the host simply hung after starting a fourth virtual machine. 
+ +Command line: + +qemu-system-ppc64 --enable-kvm -name pbuild-vnode001 -M pseries -cpu host -smp 14,cores=14,threads=1,sockets=1 -m 64G -realtime mlock=on -kernel vmlinux-4.7.0-1-powerpc64le -initrd initrd.img-4.7.0-1-powerpc64le + +Lockup trace: + +[ 527.393933] KVM guest htab at c000003ae4000000 (order 29), LPID 4 +[ 574.637695] INFO: rcu_sched self-detected stall on CPU +[ 574.637799] 112-...: (5249 ticks this GP) idle=699/140000000000001/0 softirq=5358/5382 fqs=5072 +[ 574.637877] (t=5250 jiffies g=19853 c=19852 q=64401) +[ 574.637947] Task dump for CPU 112: +[ 574.637982] qemu-system-ppc R running task 0 12037 11828 0x00040004 +[ 574.638051] Call Trace: +[ 574.638081] [c000001c1cddb430] [c0000000000f2710] sched_show_task+0xe0/0x180 (unreliable) +[ 574.638164] [c000001c1cddb4a0] [c0000000001326f4] rcu_dump_cpu_stacks+0xe4/0x150 +[ 574.638246] [c000001c1cddb4f0] [c000000000137a04] rcu_check_callbacks+0x6b4/0x9c0 +[ 574.638328] [c000001c1cddb610] [c00000000013f7c4] update_process_times+0x54/0xa0 +[ 574.638409] [c000001c1cddb640] [c000000000156c28] tick_sched_handle.isra.5+0x48/0xe0 +[ 574.638489] [c000001c1cddb680] [c000000000156d24] tick_sched_timer+0x64/0xd0 +[ 574.638602] [c000001c1cddb6c0] [c000000000140274] __hrtimer_run_queues+0x124/0x420 +[ 574.638683] [c000001c1cddb750] [c00000000014123c] hrtimer_interrupt+0xec/0x2c0 +[ 574.638765] [c000001c1cddb810] [c00000000001fe5c] __timer_interrupt+0x8c/0x270 +[ 574.638847] [c000001c1cddb860] [c00000000002053c] timer_interrupt+0x9c/0xe0 +[ 574.638915] [c000001c1cddb890] [c000000000002750] decrementer_common+0x150/0x180 +[ 574.639001] --- interrupt: 901 at kvmppc_hv_get_dirty_log+0x1c4/0x570 [kvm_hv] +[ 574.639001] LR = kvmppc_hv_get_dirty_log+0x1f8/0x570 [kvm_hv] +[ 574.639114] [c000001c1cddbc30] [d00000001a524980] kvm_vm_ioctl_get_dirty_log_hv+0xd0/0x170 [kvm_hv] +[ 574.639209] [c000001c1cddbc80] [d00000001a4d4140] kvm_vm_ioctl_get_dirty_log+0x40/0x60 [kvm] +[ 574.639291] [c000001c1cddbcb0] [d00000001a4ca3cc] kvm_vm_ioctl+0x3fc/0x760 [kvm] +[ 574.639372] [c000001c1cddbd40] [c0000000002d9e18] do_vfs_ioctl+0xd8/0x8e0 +[ 574.639442] [c000001c1cddbde0] [c0000000002da6f4] SyS_ioctl+0xd4/0xf0 +[ 574.639512] [c000001c1cddbe30] [c000000000009260] system_call+0x38/0x108 +[ 580.601573] NMI watchdog: BUG: soft lockup - CPU#112 stuck for 22s! 
[qemu-system-ppc:12037] +[ 580.601655] Modules linked in: xt_tcpudp(E) rpcsec_gss_krb5(E) nfsv4(E) dns_resolver(E) ext4(E) ecb(E) crc16(E) jbd2(E) mbcache(E) tun(E) btrfs(E) crc32c_generic(E) raid6_pq(E) xor(E) dm_crypt(E) xts(E) gf128mul(E) algif_skcipher(E) af_alg(E) dm_mod(E) bonding(E) cpufreq_stats(E) iptable_filter(E) ip_tables(E) x_tables(E) bridge(E) stp(E) llc(E) ipmi_devintf(E) ipmi_msghandler(E) i2c_dev(E) fuse(E) raid1(E) md_mod(E) ses(E) sd_mod(E) enclosure(E) sg(E) binfmt_misc(E) radeon(E) ttm(E) drm_kms_helper(E) snd_hda_codec_hdmi(E) snd_hda_intel(E) drm(E) snd_hda_codec(E) snd_hda_core(E) snd_hwdep(E) snd_pcm(E) syscopyarea(E) sysfillrect(E) sysimgblt(E) fb_sys_fops(E) snd_timer(E) evdev(E) i2c_algo_bit(E) snd(E) soundcore(E) at24(E) ahci(E) mpt3sas(E) nvmem_core(E) libahci(E) raid_class(E) scsi_transport_sas(E) powernv_rng(E) rng_core(E) uinput(E) kvm_hv(E) kvm(E) ib_srp(E) scsi_transport_srp(E) ofpart(E) powernv_flash(E) mtd(E) nfsd(E) opal_prd(E) auth_rpcgss(E) parport_pc(E) lp(E) parport(E) autofs4(E) nfsv3(E) nfs_acl(E) nfs(E) lockd(E) grace(E) sunrpc(E) fscache(E) ib_ipoib(E) ib_umad(E) rdma_ucm(E) ib_uverbs(E) rdma_cm(E) iw_cm(E) ib_cm(E) ib_sa(E) configfs(E) hid_generic(E) usbhid(E) hid(E) xhci_pci(E) xhci_hcd(E) usbcore(E) tg3(E) usb_common(E) ptp(E) pps_core(E) libphy(E) ib_mthca(E) ib_mad(E) ib_core(E) ib_addr(E) +[ 580.603295] CPU: 112 PID: 12037 Comm: qemu-system-ppc Tainted: G E 4.6.0-2-powerpc64le #1 Debian 4.6.3-1 +[ 580.603386] task: c000001f706f0180 ti: c000001c1cdd8000 task.ti: c000001c1cdd8000 +[ 580.603456] NIP: d00000001a52cb54 LR: d00000001a52cb88 CTR: 0000000000000000 +[ 580.603524] REGS: c000001c1cddb900 TRAP: 0901 Tainted: G E (4.6.0-2-powerpc64le Debian 4.6.3-1) +[ 580.603613] MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 24048444 XER: 00000000 +[ 580.603784] CFAR: d00000001a52cb18 SOFTE: 1 +GPR00: d00000001a52cb88 c000001c1cddbb80 d00000001a53c580 40016e77790fe611 +GPR04: 0000000beceac194 00000000019a4544 0000000000000001 0000000000000000 +GPR08: 4000000000000000 0000000000000000 0000000000000001 8000000101a9b824 +GPR12: c00000000009aea0 c00000000fbbf000 +[ 580.604205] NIP [d00000001a52cb54] kvmppc_hv_get_dirty_log+0x1c4/0x570 [kvm_hv] +[ 580.604274] LR [d00000001a52cb88] kvmppc_hv_get_dirty_log+0x1f8/0x570 [kvm_hv] +[ 580.604341] Call Trace: +[ 580.604366] [c000001c1cddbb80] [d00000001a52cb88] kvmppc_hv_get_dirty_log+0x1f8/0x570 [kvm_hv] (unreliable) +[ 580.604469] [c000001c1cddbc30] [d00000001a524980] kvm_vm_ioctl_get_dirty_log_hv+0xd0/0x170 [kvm_hv] +[ 580.604562] [c000001c1cddbc80] [d00000001a4d4140] kvm_vm_ioctl_get_dirty_log+0x40/0x60 [kvm] +[ 580.604685] [c000001c1cddbcb0] [d00000001a4ca3cc] kvm_vm_ioctl+0x3fc/0x760 [kvm] +[ 580.604765] [c000001c1cddbd40] [c0000000002d9e18] do_vfs_ioctl+0xd8/0x8e0 +[ 580.604837] [c000001c1cddbde0] [c0000000002da6f4] SyS_ioctl+0xd4/0xf0 +[ 580.604906] [c000001c1cddbe30] [c000000000009260] system_call+0x38/0x108 +[ 580.604977] Instruction dump: +[ 580.605012] 2ba90003 419effc4 7fa97000 419effbc 81314310 2f890000 409effb0 7d00b0a8 +[ 580.605127] 7d09a039 40820014 7d08a378 7d00b1ad <41e20008> 7e89a378 4c00012c 2fa90000 +[ 637.648374] INFO: rcu_sched self-detected stall on CPU +[ 637.648473] 112-...: (21002 ticks this GP) idle=699/140000000000001/0 softirq=5358/5382 fqs=20825 +[ 637.648554] (t=21003 jiffies g=19853 c=19852 q=260741) +[ 637.648612] Task dump for CPU 112: +[ 637.648646] qemu-system-ppc R running task 0 12037 11828 0x00040004 +[ 637.648719] Call Trace: +[ 637.648745] [c000001c1cddb430] 
[c0000000000f2710] sched_show_task+0xe0/0x180 (unreliable) +[ 637.648825] [c000001c1cddb4a0] [c0000000001326f4] rcu_dump_cpu_stacks+0xe4/0x150 +[ 637.648903] [c000001c1cddb4f0] [c000000000137a04] rcu_check_callbacks+0x6b4/0x9c0 +[ 637.648985] [c000001c1cddb610] [c00000000013f7c4] update_process_times+0x54/0xa0 +[ 637.649067] [c000001c1cddb640] [c000000000156c28] tick_sched_handle.isra.5+0x48/0xe0 +[ 637.649147] [c000001c1cddb680] [c000000000156d24] tick_sched_timer+0x64/0xd0 +[ 637.649216] [c000001c1cddb6c0] [c000000000140274] __hrtimer_run_queues+0x124/0x420 +[ 637.649296] [c000001c1cddb750] [c00000000014123c] hrtimer_interrupt+0xec/0x2c0 +[ 637.649374] [c000001c1cddb810] [c00000000001fe5c] __timer_interrupt+0x8c/0x270 +[ 637.649456] [c000001c1cddb860] [c00000000002053c] timer_interrupt+0x9c/0xe0 +[ 637.649525] [c000001c1cddb890] [c000000000002750] decrementer_common+0x150/0x180 +[ 637.649645] --- interrupt: 901 at kvmppc_hv_get_dirty_log+0x1c4/0x570 [kvm_hv] +[ 637.649645] LR = kvmppc_hv_get_dirty_log+0x1f8/0x570 [kvm_hv] +[ 637.649760] [c000001c1cddbc30] [d00000001a524980] kvm_vm_ioctl_get_dirty_log_hv+0xd0/0x170 [kvm_hv] +[ 637.649854] [c000001c1cddbc80] [d00000001a4d4140] kvm_vm_ioctl_get_dirty_log+0x40/0x60 [kvm] +[ 637.649936] [c000001c1cddbcb0] [d00000001a4ca3cc] kvm_vm_ioctl+0x3fc/0x760 [kvm] +[ 637.650014] [c000001c1cddbd40] [c0000000002d9e18] do_vfs_ioctl+0xd8/0x8e0 +[ 637.650083] [c000001c1cddbde0] [c0000000002da6f4] SyS_ioctl+0xd4/0xf0 +[ 637.650153] [c000001c1cddbe30] [c000000000009260] system_call+0x38/0x108 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1636217 b/results/classifier/gemma3:12b/kvm/1636217 new file mode 100644 index 00000000..5d81108c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1636217 @@ -0,0 +1,51 @@ + +qemu-kvm 2.7 does not boot kvm VMs with virtio on top of VMware ESX + +After todays Proxmox update all my Linux VMs stopped booting. + +# How to reproduce +- Have KVM on top of VMware ESX (I use VMware ESX 6) +- Boot Linux VM with virtio Disk drive. + + +# Result +virtio based VMs do not boot anymore: + +root@demotuxdc:/etc/pve/nodes/demotuxdc/qemu-server# grep virtio0 100.conf +bootdisk: virtio0 +virtio0: pvestorage:100/vm-100-disk-1.raw,discard=on,size=20G + +(initially with cache=writethrough, but that doesn´t matter) + +What happens instead is: + +- BIOS displays "Booting from harddisk..." +- kvm process of VM loops at about 140% of Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz Skylake dual core CPU + +Disk of course has valid bootsector: + +root@demotuxdc:/srv/pvestorage/images/100# file -sk vm-100-disk-1.raw +vm-100-disk-1.raw: DOS/MBR boot sector DOS/MBR boot sector DOS executable (COM), boot code +root@demotuxdc:/srv/pvestorage/images/100# head -c 2048 vm-100-disk-1.raw | hd | grep GRUB +00000170 be 94 7d e8 2e 00 cd 18 eb fe 47 52 55 42 20 00 |..}.......GRUB .| + + +# Workaround 1 +- Change disk from virtio0 to scsi0 +- Debian boots out of the box after this change +- SLES 12 needs a rebuilt initrd +- CentOS 7 too, but it seems that is not even enough and it still fails (even in hostonly="no" mode for dracut) + + +# Workaround 2 +Downgrade pve-qemu-kvm 2.7.0-3 to 2.6.2-2. + + +# Expected results +Disk boots just fine via virtio like it did before. 
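# Note on Workaround 1
For the distributions that need a rebuilt initrd after switching from virtio0 to scsi0, a minimal sketch (assuming dracut is used; exact module names can vary by distro and kernel):

```
dracut --force --add-drivers "virtio_scsi sd_mod" /boot/initramfs-$(uname -r).img $(uname -r)
```

This only helps the workaround path; it does not address the underlying virtio-blk boot regression.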
+ + +# Downstream bug report +Downstream suggests an issue with upstream qemu-kvm: + +https://bugzilla.proxmox.com/show_bug.cgi?id=1181 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1640073 b/results/classifier/gemma3:12b/kvm/1640073 new file mode 100644 index 00000000..266202cd --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1640073 @@ -0,0 +1,76 @@ + +Guest pause because VMPTRLD failed in KVM + +1) Qemu command: +/usr/bin/qemu-kvm -name omu1 -S -machine pc-i440fx-2.3,accel=kvm,usb=off -cpu host -m 15625 -realtime mlock=off -smp 8,sockets=1,cores=8,threads=1 -uuid a2aacfff-6583-48b4-b6a4-e6830e519931 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/omu1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/home/env/guest1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/home/env/guest_300G.img,if=none,id=drive-virtio-disk1,format=raw,cache=none,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:80:05:00:00,bus=pci.0,addr=0x3 -netdev tap,fd=27,id=hostnet1,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:00:80:05:00:01,bus=pci.0,addr=0x4 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 0.0.0.0:0 -device cirrus-vga,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on + +2) Qemu log: +KVM: entry failed, hardware error 0x4 +RAX=00000000ffffffed RBX=ffff8803fa2d7fd8 RCX=0100000000000000 RDX=0000000000000000 +RSI=0000000000000000 RDI=0000000000000046 RBP=ffff8803fa2d7e90 RSP=ffff8803fa2efe90 +R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=000000000000b69a +R12=0000000000000001 R13=ffffffff81a25b40 R14=0000000000000000 R15=ffff8803fa2d7fd8 +RIP=ffffffff81053e16 RFL=00000286 [--S--P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 ffffffff 00c00000 +CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0000 0000000000000000 ffffffff 00c00000 +FS =0000 0000000000000000 ffffffff 00c00000 +GS =0000 ffff88040f540000 ffffffff 00c00000 +LDT=0000 0000000000000000 ffffffff 00c00000 +TR =0040 ffff88040f550a40 00002087 00008b00 DPL=0 TSS64-busy +GDT= ffff88040f549000 0000007f +IDT= ffffffffff529000 00000fff +CR0=80050033 CR2=00007f81ca0c5000 CR3=00000003f5081000 CR4=000407e0 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +Code=?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? <??> ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? 
+ +3) Demsg +[347315.028339] kvm: vmptrld ffff8817ec5f0000/17ec5f0000 failed +klogd 1.4.1, ---------- state change ---------- +[347315.039506] kvm: vmptrld ffff8817ec5f0000/17ec5f0000 failed +[347315.051728] kvm: vmptrld ffff8817ec5f0000/17ec5f0000 failed +[347315.057472] vmwrite error: reg 6c0a value ffff88307e66e480 (err 2120672384) +[347315.064567] Pid: 69523, comm: qemu-kvm Tainted: GF X 3.0.93-0.8-default #1 +[347315.064569] Call Trace: +[347315.064587] [<ffffffff810049d5>] dump_trace+0x75/0x300 +[347315.064595] [<ffffffff8145e3e3>] dump_stack+0x69/0x6f +[347315.064617] [<ffffffffa03738de>] vmx_vcpu_load+0x11e/0x1d0 [kvm_intel] +[347315.064647] [<ffffffffa029a204>] kvm_arch_vcpu_load+0x44/0x1d0 [kvm] +[347315.064669] [<ffffffff81054ee1>] finish_task_switch+0x81/0xe0 +[347315.064676] [<ffffffff8145f0b4>] thread_return+0x3b/0x2a7 +[347315.064687] [<ffffffffa028d9b5>] kvm_vcpu_block+0x65/0xa0 [kvm] +[347315.064703] [<ffffffffa02a16d1>] __vcpu_run+0xd1/0x260 [kvm] +[347315.064732] [<ffffffffa02a2418>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 [kvm] +[347315.064759] [<ffffffffa028ecee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm] +[347315.064771] [<ffffffff8116bdfb>] do_vfs_ioctl+0x8b/0x3b0 +[347315.064776] [<ffffffff8116c1c1>] sys_ioctl+0xa1/0xb0 +[347315.064783] [<ffffffff81469272>] system_call_fastpath+0x16/0x1b +[347315.064797] [<00007fee51969ce7>] 0x7fee51969ce6 +[347315.064799] vmwrite error: reg 6c0c value ffff88307e664000 (err 2120630272) +[347315.064802] Pid: 69523, comm: qemu-kvm Tainted: GF X 3.0.93-0.8-default #1 +[347315.064803] Call Trace: +[347315.064807] [<ffffffff810049d5>] dump_trace+0x75/0x300 +[347315.064811] [<ffffffff8145e3e3>] dump_stack+0x69/0x6f +[347315.064817] [<ffffffffa03738ec>] vmx_vcpu_load+0x12c/0x1d0 [kvm_intel] +[347315.064832] [<ffffffffa029a204>] kvm_arch_vcpu_load+0x44/0x1d0 [kvm] +[347315.064851] [<ffffffff81054ee1>] finish_task_switch+0x81/0xe0 +[347315.064855] [<ffffffff8145f0b4>] thread_return+0x3b/0x2a7 +[347315.064865] [<ffffffffa028d9b5>] kvm_vcpu_block+0x65/0xa0 [kvm] +[347315.064880] [<ffffffffa02a16d1>] __vcpu_run+0xd1/0x260 [kvm] +[347315.064907] [<ffffffffa02a2418>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 [kvm] +[347315.064933] [<ffffffffa028ecee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm] +[347315.064943] [<ffffffff8116bdfb>] do_vfs_ioctl+0x8b/0x3b0 +[347315.064947] [<ffffffff8116c1c1>] sys_ioctl+0xa1/0xb0 +[347315.064951] [<ffffffff81469272>] system_call_fastpath+0x16/0x1b +[347315.064957] [<00007fee51969ce7>] 0x7fee51969ce6 +[347315.064959] vmwrite error: reg 6c10 value 0 (err 0) + +4) The isssue can't be reporduced. I search the Intel VMX sepc about reaseons of vmptrld failure: +The instruction fails if its operand is not properly aligned, sets unsupported physical-address bits, or is equal to the VMXON +pointer. In addition, the instruction fails if the 32 bits in memory referenced by the operand do not match the VMCS +revision identifier supported by this processor. + +But I can't find any cues from the KVM source code. It seems each error condition is impossible. 
:( \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1641 b/results/classifier/gemma3:12b/kvm/1641 new file mode 100644 index 00000000..0c6e5514 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1641 @@ -0,0 +1,25 @@ + +[abrt] qemu-system-x86-core: do_patch_instruction(): qemu-system-x86_64 killed by SIGABRT +Description of problem: +Copied from downstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=2195952 + +Description of problem: +Virtualizing a Windows XP system which tried to reboot. + +Version-Release number of selected component: +qemu-system-x86-core-2:7.2.1-1.fc38 + +Additional info: +reason: qemu-system-x86_64 killed by SIGABRT +backtrace_rating: 4 +crash_function: do_patch_instruction +comment: Virtualizing a Windows XP system which tried to reboot. + +Truncated backtrace: +Thread no. 1 (6 frames) + #4 do_patch_instruction at ../hw/i386/kvmvapic.c:439 + #5 process_queued_cpu_work at ../cpus-common.c:347 + #6 qemu_wait_io_event at ../softmmu/cpus.c:435 + #7 kvm_vcpu_thread_fn at ../accel/kvm/kvm-accel-ops.c:56 + #8 qemu_thread_start at ../util/qemu-thread-posix.c:505 + #10 clone3 at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81 diff --git a/results/classifier/gemma3:12b/kvm/1652011 b/results/classifier/gemma3:12b/kvm/1652011 new file mode 100644 index 00000000..733049ad --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1652011 @@ -0,0 +1,37 @@ + +VM shuts down due to error in qemu block.c + +On a Trusty KVM host one of the guest VMs shut down without any user interaction. The system is running: + +$ cat /etc/lsb-release +DISTRIB_ID=Ubuntu +DISTRIB_RELEASE=14.04 +DISTRIB_CODENAME=trusty +DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS" + +$ dpkg -l libvirt0 qemu-kvm qemu-system-common qemu-system-x86 +Desired=Unknown/Install/Remove/Purge/Hold +| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend +|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) +||/ Name Version Architecture Description ++++-============================================================-===================================-===================================-============================================================================================================================== +ii libvirt0 1.2.2-0ubuntu13.1.17 amd64 library for interfacing with different virtualization systems +ii qemu-kvm 2.0.0+dfsg-2ubuntu1.27 amd64 QEMU Full virtualization +ii qemu-system-common 2.0.0+dfsg-2ubuntu1.27 amd64 QEMU full system emulation binaries (common files) +ii qemu-system-x86 2.0.0+dfsg-2ubuntu1.27 amd64 QEMU full system emulation binaries (x86) + +In the VMs log in /var/lib/libvirt/qemu/hostname we see: + +2016-11-17 09:18:42.603+0000: starting up +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -name hostname,process=qemu:hostname -S -machine pc-i440fx-trusty,accel=kvm,usb=off -m 12697 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 51766564-ed8e-41aa-91b5-574220af4ac3 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/hostname.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/dev/disk1/hostname,if=none,id=drive-virtio-disk0,format=raw -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/dev/disk2/hostname_mnt_data,if=none,id=drive-virtio-disk1,format=raw 
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=/dev/disk1/hostname_tmp,if=none,id=drive-virtio-disk2,format=raw -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk2,id=virtio-disk2 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,tx=bh,netdev=hostnet0,id=net0,mac=52:54:00:45:e7:d9,bus=pci.0,addr=0x6 -netdev tap,fd=31,id=hostnet1,vhost=on,vhostfd=32 -device virtio-net-pci,tx=bh,netdev=hostnet1,id=net1,mac=52:54:00:f6:6c:77,bus=pci.0,addr=0x7 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 0.0.0.0:5 -device VGA,id=video0,bus=pci.0,addr=0x2 -device AC97,id=sound0,bus=pci.0,addr=0x3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x9 +char device redirected to /dev/pts/6 (label charserial0) +qemu-system-x86_64: /build/qemu-PVxDqC/qemu-2.0.0+dfsg/block.c:3491: bdrv_error_action: Assertion `error >= 0' failed. +2016-12-22 09:49:49.392+0000: shutting down + +In /var/lib/libvirt/libvirtd.log we see: + +2016-12-22 09:49:49.298+0000: 6946: error : qemuMonitorIO:656 : internal error: End of file from monitor + +We investigated to see if this is a known issue and came across a bug report for Fedora (https://bugzilla.redhat.com/show_bug.cgi?id=1147398), but nothing references changes upstream that fix this. + +The guest OS is Ubuntu Precise (12.04.5) running kernel linux-image-3.2.0-101-virtual 3.2.0-101.141. There wasn't any significant load (CPU or IO) on the guest at the time that it shut down and there wasn't any appreciable disk IO on the KVM host either. The disks for the guest are on the KVM host box. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1652459 b/results/classifier/gemma3:12b/kvm/1652459 new file mode 100644 index 00000000..e482f67c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1652459 @@ -0,0 +1,12 @@ + +kvm rbd driver (and maybe others, i.e. qcow2, qed and so on) does not report DISCARD-ZERO flag + +# lsblk -D +NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO +sda 0 4K 1G 0 +├─sda1 0 4K 1G 0 +├─sda2 1024 4K 1G 0 +└─sda5 0 4K 1G 0 + + +Last column should be `1` at least for "RBD+discard=unmap" since reading from discarded regions in RBD MUST return zeroes. The same with QCOW2, QED and sparse raw images. KVM should copy value of this flag when real raw device (i.e. real SSD) with discard capability is used as virtual disk. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1653419 b/results/classifier/gemma3:12b/kvm/1653419 new file mode 100644 index 00000000..8346b337 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1653419 @@ -0,0 +1,73 @@ + +SVM emulation fails due to EIP and FLAG register update optimization + +SVM emulation support has a bug due to which causes KVM emulation error when qemu-kvm is run over KVM installed on top of QEmu in software mode. + +Steps to reproduce +==================== +1. Run KVM inside QEmu(software mode with SVM emulation support). Make sure kvm_amd is running. +2. Run any guest OS on top of the KVM using qemu-kvm. +3. Following KVM emulation error is thrown immediately. + +KVM internal error. 
Suberror: 1 +emulation failure +EAX=ffffffff EBX=4000004b ECX=00000000 EDX=000f5ea0 +ESI=00000000 EDI=00000000 EBP=00000000 ESP=00006fd0 +EIP=40000000 EFL=00000086 [--S--P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +CS =0008 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA] +SS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +FS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +GS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT +TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy +GDT= 000f7180 00000037 +IDT= 000f71be 00000000 +CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 + +Reason for the error +==================== +Due to performance reasons, EIP and FLAG registers are not updated after executing every guest instructions. There are optimizations done to update these registers intelligently, for eg: EIP is updated at the end of translation block. This means EIP remains the address of the first instruction in the TB throughout the execution. + +In case of a VMEXIT because of a page fault happened after executing an instruction in the middle of the TB, the VMCB is updated with the wrong guest EIP and jumps to the address where host has left off. On the subsequent VMRUN by the host QEmu start executing some of the instructions that has already been executed. This can cause wrong execution flow. + +Following is the instruction execution trace of the above scenario. + +0x00000000000f368f: callq 0xeecc4 +vmexit(00000060, 0000000000000000, 0000000000000000, 00000000000eecc4)! +vmsave! 00000000b72e9000 +vmload! 00000000b72e9000 +vmrun! 00000000b72e9000 +0x00000000000eecc4: push %rbx +0x00000000000eecc5: xor %ecx,%ecx +0x00000000000eecc7: mov (%rax,%rcx,1),%bl +0x00000000000eecca: cmp (%rdx,%rcx,1),%bl +vmexit(0000004e, 0000000000000000, 00000000000f5ea0, 00000000000eecc4)! + +Page fault happens at 0x00000000000eecca which triggers a VMEXIT. vmcb->save->rip is updated with 0x00000000000eecc4 instead of 0x00000000000eecca. + +vmsave! 00000000b72e9000 +vmload! 00000000b72e9000 +vmrun! 00000000b72e9000 +0x00000000000eecc4: push %rbx +0x00000000000eecc5: xor %ecx,%ecx +0x00000000000eecc7: mov (%rax,%rcx,1),%bl +0x00000000000eecca: cmp (%rdx,%rcx,1),%bl +0x00000000000eeccd: je 0xeecdc +0x00000000000eeccf: setl %al +0x00000000000eecd2: movzbl %al,%eax +0x00000000000eecd5: neg %eax +0x00000000000eecd7: or $0x1,%eax +0x00000000000eecda: jmp 0xeece3 +0x00000000000eece3: pop %rbx +0x00000000000eece4: retq +vmexit(0000004e, 0000000000000000, 0000000040000000, 0000000040000000)! + +The subsequent VMRUN again starts executing from 0x00000000000eecc4 which causes %rbx being pushed to the stack for the second time. The retq instruction picks wrong return address and jumps to an illegal location. + +Similar issue is there with updating FLAG register as well. 
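To make the failure mode concrete, below is a small self-contained C sketch (an illustration of the deferred-PC idea only, not QEMU code): the emulated PC is committed only at translation-block boundaries, so a fault in the middle of the block is reported with the block's start address, which is exactly why the `push %rbx` above is executed a second time after the VMEXIT/VMRUN round trip.

```c
/* Illustration only: an interpreter that commits the guest PC just at
 * block boundaries mis-reports the PC for a mid-block fault. */
#include <stdio.h>

struct vcpu {
    unsigned long pc;            /* true architectural PC */
    unsigned long committed_pc;  /* PC visible to the outside world (VMCB) */
};

/* A pretend block of three instructions starting at 0xeecc4.
 * The third instruction faults (think: guest page fault -> VMEXIT). */
static int exec_block(struct vcpu *v)
{
    const unsigned long insn_len[] = { 1, 2, 3 };

    v->committed_pc = v->pc;                 /* committed at block entry only */
    for (int i = 0; i < 3; i++) {
        if (i == 2)
            return -1;                       /* mid-block fault, PC not re-committed */
        v->pc += insn_len[i];                /* real PC advances per instruction */
    }
    v->committed_pc = v->pc;                 /* ...and again only at block exit */
    return 0;
}

int main(void)
{
    struct vcpu v = { .pc = 0xeecc4, .committed_pc = 0 };

    if (exec_block(&v) < 0)
        printf("fault at pc 0x%lx, but committed pc is 0x%lx,\n"
               "so execution resumes at the block start and re-runs it\n",
               v.pc, v.committed_pc);
    return 0;
}
```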
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1654271 b/results/classifier/gemma3:12b/kvm/1654271 new file mode 100644 index 00000000..49826245 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1654271 @@ -0,0 +1,185 @@ + +host machine freezes + +When trying to install Radeon crimson relive 16.12.1, each host machine freezes at the machine environment check stage. +Even if you launch the installer in an environment where GPU is not installed, each host machine freezes in the same way. +When Gusest's CPU is changed from 4 to 1, the environment check is completed normally. +Even if FMA and AVX 2 are invalidated by CPU setting, environment check will be completed normally. +Since 1 CPU does not freeze, I thought that it would be better to fix the CPU, but I will still freeze. +Is it impossible to enable the function of AVX 2 (FMA?) In the virtual machine on KVM? + +HOST + Motherboard : Asrock Z97 extream6 + CPU : Core i7-4790 + Memory : 24GB + OS : Ubuntu 16.04(kernel 4.7.2-040702) + qemu : 2.5+dfsg-5ubuntu10.6 + libvirt : 1.3.1-1ubuntu10.5 + ovmf : 0~20160408.ffea0a2c-2 + +Guest + BIOS : UEFI + OS : Windows10 Pro Build 14986 + Memory : 8GB + GPU : Radeon HD7770 + +----------------------------------------------------------- +<domain type='kvm'> + <name>WinPC01</name> + <uuid>4f784d78-4d5e-416a-bb43-82ecd2cad409</uuid> + <memory unit='KiB'>8388608</memory> + <currentMemory unit='KiB'>8388608</currentMemory> + <vcpu placement='static'>4</vcpu> + <os> + <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type> + <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader> + <nvram>/var/lib/libvirt/qemu/nvram/WinPC01_VARS.fd</nvram> + <bootmenu enable='yes'/> + </os> + <features> + <acpi/> + <apic/> + <pae/> + <hap/> + <hyperv> + <relaxed state='on'/> + <vapic state='on'/> + <spinlocks state='on' retries='8191'/> + </hyperv> + <kvm> + <hidden state='on'/> + </kvm> + </features> + <cpu mode='custom' match='exact'> + <model fallback='allow'>Haswell</model> + <vendor>Intel</vendor> + <topology sockets='1' cores='4' threads='1'/> + </cpu> + <clock offset='localtime'> + <timer name='rtc' tickpolicy='catchup'/> + <timer name='pit' tickpolicy='delay'/> + <timer name='hpet' present='no'/> + <timer name='hypervclock' present='yes'/> + </clock> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <pm> + <suspend-to-mem enabled='no'/> + <suspend-to-disk enabled='no'/> + </pm> + <devices> + <emulator>/usr/bin/kvm-spice</emulator> + <disk type='block' device='disk'> + <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/> + <source dev='/dev/mapper/vg_yoshi--kvm01_ssd-lv_WinPC01'/> + <target dev='sda' bus='scsi'/> + <boot order='1'/> + <address type='drive' controller='0' bus='0' target='0' unit='0'/> + </disk> + <disk type='block' device='disk'> + <driver name='qemu' type='raw' cache='none' io='native'/> + <source dev='/dev/mapper/vg_yoshi--kvm01_hdd-lv_WinPC01_Data'/> + <target dev='sdb' bus='scsi'/> + <address type='drive' controller='0' bus='0' target='0' unit='1'/> + </disk> + <disk type='file' device='cdrom'> + <driver name='qemu' type='raw'/> + <target dev='hdb' bus='ide'/> + <readonly/> + <address type='drive' controller='0' bus='0' target='0' unit='1'/> + </disk> + <controller type='usb' index='0' model='ich9-ehci1'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci1'> + <master startport='0'/> 
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci2'> + <master startport='2'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci3'> + <master startport='4'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/> + </controller> + <controller type='pci' index='0' model='pci-root'/> + <controller type='ide' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> + </controller> + <controller type='virtio-serial' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> + </controller> + <controller type='sata' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/> + </controller> + <controller type='scsi' index='0' model='virtio-scsi'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/> + </controller> + <interface type='bridge'> + <mac address='52:54:00:0e:ca:c5'/> + <source bridge='br0'/> + <model type='virtio'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <interface type='bridge'> + <mac address='52:54:00:7e:4e:dd'/> + <source bridge='br1'/> + <model type='virtio'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </interface> + <channel type='spicevmc'> + <target type='virtio' name='com.redhat.spice.0'/> + <address type='virtio-serial' controller='0' bus='0' port='1'/> + </channel> + <input type='tablet' bus='usb'/> + <input type='mouse' bus='ps2'/> + <input type='keyboard' bus='ps2'/> + <graphics type='spice' port='5900' autoport='no' listen='0.0.0.0' keymap='ja'> + <listen type='address' address='0.0.0.0'/> + </graphics> + <video> + <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + <hostdev mode='subsystem' type='pci' managed='yes'> + <source> + <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> + </source> + <rom bar='off'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/> + </hostdev> + <hostdev mode='subsystem' type='pci' managed='yes'> + <source> + <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/> + </source> + <rom bar='off'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> + </hostdev> + <hostdev mode='subsystem' type='pci' managed='yes'> + <source> + <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/> + </source> + <rom bar='off'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> + </hostdev> + <hostdev mode='subsystem' type='pci' managed='yes'> + <source> + <address domain='0x0000' bus='0x00' slot='0x1d' function='0x0'/> + </source> + <rom bar='off'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> + </hostdev> + <memballoon model='virtio'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> + </memballoon> + </devices> +</domain> +----------------------------------------------------------- + +If you put the following designation in the CPU tag, it will not freeze. 
+----------------------------------------------------------- + <feature policy='disable' name='fma'/> + <feature policy='disable' name='avx2'/> +----------------------------------------------------------- \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1657841 b/results/classifier/gemma3:12b/kvm/1657841 new file mode 100644 index 00000000..3239e30a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1657841 @@ -0,0 +1,12 @@ + +QEMU Intel HAX Windows + +Hi, + +Using the latest exe's from http://qemu.weilnetz.de/w32/ + +C:\Users\therock247uk\Desktop\jan\qemu-w64-setup-20170113>qemu-system-i386 --enable-hax -m 512 -cdrom C:\Users\therock247uk\Desktop\jan\en_windows_xp_professional_with_service_pack_3_x86_cd_x14-80428.iso +HAX is working and emulator runs in fast virt mode. +Failed to allocate 20000000 memory + +The emulator seems to hang/get stuck during booting from the CD taking out --enable-hax allows it to boot. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1658141 b/results/classifier/gemma3:12b/kvm/1658141 new file mode 100644 index 00000000..116f3449 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1658141 @@ -0,0 +1,45 @@ + +QEMU's default msrs handling causes Windows 10 64 bit to crash + +Wine uses QEMU to run its conformance test suite on Windows virtual machines. Wine's conformance tests check the behavior of various Windows APIs and verify that they behave as expected. + +One such test checks handling of exceptions down. When run on Windows 10 64 bit in QEMU it triggers a "KMOD_EXCEPTION_NOT_HANDLED" BSOD in the VM. See: +https://bugs.winehq.org/show_bug.cgi?id=40240 + + +To reproduce this bug: +* Pick a Windows 10 64 bit VM on an Intel host. + +* Start the VM. I'm pretty sure any qemu command will do but here's what I used: + qemu-system-x86_64 -machine pc-i440fx-2.1,accel=kvm -cpu core2duo,+nx -m 2048 -hda /var/lib/libvirt/images/wtbw1064.qcow2 + +* Grab the attached source code. The tar file is a bit big at 85KB because I had to include some Wine headers. However the source file proper, exception.c, is only 85 lines, including the LGPL header. + +* Compile the source code with MinGW by typing 'make'. This produces a 32 bit exception.exe executable. I'll attach it for good measure. + +* Put exception.exe on the VM and run it. + + +After investigation it turns out this happens: + * Only for Windows 10 64 bit guests. Windows 10 32 bit and older Windows versions are unaffected. + + * Only on Intel hosts. At least both my Xeon E3-1226 v3 and i7-4790K hosts are impacted but not my Opteron 6128 one. + + * It does not seem to depend on the emulated CPU type: on the Intel hosts this happened with both +core2duo,nx and 'copy the host configuration' and did not depend on the number of emulated cpus/cores. + + * This happened with both QEMU 2.1 and 2.7, and both the 3.16.0 and 4.8.11 Linux kernels, both on Debian 8.6 and Debian Testing. 
+ + +After searching for quite some time I discovered that the kvm kernel module was sneaking the following messages into /var/log/syslog precisely when the BSOD happens: + +Dec 16 13:43:48 vm3 kernel: [ 191.624802] kvm [2064]: vcpu0, guest rIP: 0xfffff803cb3c0bf3 kvm_set_msr_common: MSR_IA32_DEBUGCTLMSR 0x1, nop +Dec 16 13:43:48 vm3 kernel: [ 191.624835] kvm [2064]: vcpu0, guest rIP: 0xfffff803cb3c0c5c unhandled rdmsr: 0x1c9 + +A search on the Internet turned up a post suggesting to change kvm's ignore_msrs setting: + + echo 1 >/sys/module/kvm/parameters/ignore_msrs + +https://www.reddit.com/r/VFIO/comments/42dj7n/some_games_crash_to_biosboot_on_launch/ + +This does actually work and provides a workaround at least. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1660946 b/results/classifier/gemma3:12b/kvm/1660946 new file mode 100644 index 00000000..14e636a1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1660946 @@ -0,0 +1,183 @@ + +[nested] virt-install falls to SLOF + + +[nested] virt-install falls to SLOF: after starting installer (ISO/CDROM), it crashes w/ a kernel panic due to an HTM (Hardware Transactional Memory) exception. + +Scenario: +Host=Ubuntu 16.04 Xenial (Ubuntu KVM ppc64el POWER8E 8247-22L) +Guest=Ubuntu 16.04 Xenial (Ubuntu Xenial Cloud Image QCOW2) +Nested=Ubuntu 16.04 Xenial (Ubuntu Xenial Server ISO *or* NetInstall mini.iso) + +Inside Guest (1st level), run virt-install as shown below to reproduce the bug. + +Facts: + * ISO images (from both Server or netinstall mini.iso) fail to boot on xenial/yakkety + * Cloud image (xenial/yakkety/zesty) on nested virt boots fine, the login prompt is seen. + * Reproducible with Xenial and Yakkety + * NOT reproducible with Zesty (Installer menu starts just normally) + * virtio-scsi, virtio-net and virtio-blk modules are seen in zesty. Only virtio-scsi is seen on xenial/yakkety (-net and -blk are built-in modules?) + * kvm-pr is loaded for all tested scenarios + * This patch[1] rings a bell, however, it doesn't explain how cloud images boot just fine and don't hit the bug, since the kernel used in the cloud images also enable HTM[2]. + +[1] https://lists.ozlabs.org/pipermail/linuxppc-dev/2016-April/141292.html + +[2] grep TRANSA /boot/config-4.8.0-26-generic +CONFIG_PPC_TRANSACTIONAL_MEM=y + + +# cat virt-inst.sh +virt-install --virt-type=kvm --cpu=host --name=nested-xenial --controller type=scsi,model=virtio-scsi --graphics none --console pty,target_type=serial --disk path=/home/nested-xenial.qcow2,size=20 --vcpus=4 --ram=4096 --os-type=linux --os-variant ubuntu16.04 --network bridge=virbr0,model=virtio --cdrom=$1 + + +# ./virt-inst.sh ubuntu-16.04-server-ppc64el.iso +WARNING CDROM media does not print to the text console by default, so you likely will not see text install output. You might want to use --location. See the man page for examples of using --location with CDROM media + +Starting install... +Creating domain... 
| 0 B 00:00:00 +Connected to domain nested-xenial +Escape character is ^] +Populating /vdevice methods +Populating /vdevice/vty@30000000 +Populating /vdevice/nvram@71000000 +Populating /pci@800000020000000 + 00 2800 (D) : 1af4 1002 unknown-legacy-device* + 00 2000 (D) : 1af4 1001 virtio [ block ] + 00 1800 (D) : 106b 003f serial bus [ usb-ohci ] + 00 1000 (D) : 1af4 1004 virtio [ scsi ] +Populating /pci@800000020000000/scsi@2 + SCSI: Looking for devices + 100000000000000 CD-ROM : "QEMU QEMU CD-ROM 2.5+" + 00 0800 (D) : 10ec 8139 network [ ethernet ] +No NVRAM common partition, re-initializing... +Scanning USB + OHCI: initializing +Using default console: /vdevice/vty@30000000 + + Welcome to Open Firmware + + Copyright (c) 2004, 2011 IBM Corporation All rights reserved. + This program and the accompanying materials are made available + under the terms of the BSD License available at + http://www.opensource.org/licenses/bsd-license.php + + +Trying to load: from: /pci@800000020000000/scsi@2/disk@100000000000000 ... Successfully loaded + + GNU GRUB version 2.02~beta2-36ubuntu3 + + +----------------------------------------------------------------------------+ + |*Install | + | Rescue mode | + | | + | | + | | + | | + | | + | | + | | + | | + | | + | | + +----------------------------------------------------------------------------+ + + Use the ^ and v keys to select which entry is highlighted. + Press enter to boot the selected OS, `e' to edit the commands + before booting or `c' for a command-line. + + +OF stdout device is: /vdevice/vty@30000000 +Preparing to boot Linux version 4.4.0-21-generic (buildd@bos01-ppc64el-017) (gcc version 5.3.1 20160413 (Ubuntu/IBM 5.3.1-14ubuntu2) ) #37-Ubuntu SMP Mon Apr 18 18:30:22 UTC 2016 (Ubuntu 4.4.0-21.37-generic 4.4.6) +Detected machine type: 0000000000000101 +Max number of cores passed to firmware: 2048 (NR_CPUS = 2048) +Calling ibm,client-architecture-support... done +command line: BOOT_IMAGE=/install/vmlinux tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false --- quiet +memory layout at init: + memory_limit : 0000000000000000 (16 MB aligned) + alloc_bottom : 0000000004640000 + alloc_top : 0000000030000000 + alloc_top_hi : 0000000100000000 + rmo_top : 0000000030000000 + ram_top : 0000000100000000 +instantiating rtas at 0x000000002fff0000... done +prom_hold_cpus: skipped +copying OF device tree... +Building dt strings... +Building dt structure... +Device tree strings 0x0000000004650000 -> 0x0000000004650a5b +Device tree struct 0x0000000004660000 -> 0x0000000004670000 +Quiescing Open Firmware ... +Booting Linux via __start() ... + -> smp_release_cpus() +spinning_secondaries = 3 + <- smp_release_cpus() + <- setup_system() +Linux ppc64le +#37-Ubuntu SMP M[ 2.155665] Facility 'TM' unavailable, exception at 0x3fff9f3d8644, MSR=b00000014280f033 +[ 2.161582] Facility 'TM' unavailable, exception at 0x3fff8a488644, MSR=b00000014280f033 +[ 2.168973] Facility 'TM' unavailable, exception at 0x3fffb2df8644, MSR=b00000014280f033 +[ 2.174818] Facility 'TM' unavailable, exception at 0x3fff902f8644, MSR=b00000014280f033 +[ 2.180887] Facility 'TM' unavailable, exception at 0x3fff84728644, MSR=b00000014280f033 +[ 2.186023] Facility 'TM' unavailable, exception at 0x3fff8f1f8644, MSR=b00000014280f033 +[ 2.193073] Facility 'TM' unavailable, exception at 0x3fffa8ecfe30, MSR=b00000014280f033 +[ 2.193697] Kernel panic - not syncing: Attempted to kill init! 
exitcode=0x00000004 +[ 2.193697] +[ 2.193751] CPU: 3 PID: 1 Comm: init Not tainted 4.4.0-21-generic #37-Ubuntu +[ 2.193788] Call Trace: +[ 2.193826] [c0000000fea83a50] [c000000000aedc1c] dump_stack+0xb0/0xf0 (unreliable) +[ 2.193868] [c0000000fea83a90] [c000000000ae9e50] panic+0x100/0x2c0 +[ 2.193914] [c0000000fea83b20] [c0000000000bd474] do_exit+0xc24/0xc30 +[ 2.193945] [c0000000fea83be0] [c0000000000bd564] do_group_exit+0x64/0x100 +[ 2.193979] [c0000000fea83c20] [c0000000000ce9cc] get_signal+0x55c/0x7b0 +[ 2.194012] [c0000000fea83d10] [c000000000017424] do_signal+0x54/0x2b0 +[ 2.194043] [c0000000fea83e00] [c00000000001787c] do_notify_resume+0xbc/0xd0 +[ 2.194072] [c0000000fea83e30] [c000000000009838] ret_from_except_lite+0x64/0x68 + +Domain creation completed. +Restarting guest. +Connected to domain nested-xenial +Escape character is ^] +Populating /vdevice methods +Populating /vdevice/vty@30000000 +Populating /vdevice/nvram@71000000 +Populating /pci@800000020000000 + 00 2800 (D) : 1af4 1002 unknown-legacy-device* + 00 2000 (D) : 1af4 1001 virtio [ block ] + 00 1800 (D) : 106b 003f serial bus [ usb-ohci ] + 00 1000 (D) : 1af4 1004 virtio [ scsi ] +Populating /pci@800000020000000/scsi@2 + SCSI: Looking for devices + 100000000000000 CD-ROM : "QEMU QEMU CD-ROM 2.5+" + 00 0800 (D) : 10ec 8139 network [ ethernet ] +No NVRAM common partition, re-initializing... +Scanning USB + OHCI: initializing +Using default console: /vdevice/vty@30000000 + + Welcome to Open Firmware + + Copyright (c) 2004, 2011 IBM Corporation All rights reserved. + This program and the accompanying materials are made available + under the terms of the BSD License available at + http://www.opensource.org/licenses/bsd-license.php + + +Trying to load: from: /pci@800000020000000/scsi@4 ... +E3404: Not a bootable device! +Trying to load: from: HALT ... +E3405: No such device + +E3407: Load failed + + ..`. .. ....... .. ...... ....... + ..`...`''.`'. .''``````..''. .`''```''`. `''`````` + .`` .:' ': `''..... .''. ''` .''..''....... + ``.':.';. ``````''`.''. .''. ''``''`````'` + ``.':':` .....`''.`'`...... `'`.....`''.`'` + .`.`'`` .'`'`````. ``'''''' ``''`'''`. `'` + Type 'boot' and press return to continue booting the system. + Type 'reset-all' and press return to reboot the system. + + +Ready! +0 > \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1661386 b/results/classifier/gemma3:12b/kvm/1661386 new file mode 100644 index 00000000..cd2eec7c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1661386 @@ -0,0 +1,58 @@ + +Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed + +Hello, + + +I see the following when try to run qemu from master as the following: + +# ./x86_64-softmmu/qemu-system-x86_64 --version +QEMU emulator version 2.8.50 (v2.8.0-1006-g4e9f524) +Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers +# ./x86_64-softmmu/qemu-system-x86_64 -machine accel=kvm -nodefaults +-no-reboot -nographic -cpu host -vga none -kernel .build.kernel.kvm +-initrd .build.initrd.kvm -append 'panic=1 no-kvmclock console=ttyS0 +loglevel=7' -m 1024 -serial stdio +qemu-system-x86_64: /home/matwey/lab/qemu/target/i386/kvm.c:1849: +kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed. + +First broken commit has been bisected: + +commit 48e1a45c3166d659f781171a47dabf4a187ed7a5 +Author: Paolo Bonzini <email address hidden> +Date: Wed Mar 30 22:55:29 2016 +0200 + + target-i386: assert that KVM_GET/SET_MSRS can set all requested MSRs + + This would have caught the bug in the previous patch. 
+ + Signed-off-by: Paolo Bonzini <email address hidden> + +My cpuinfo is the following: + +processor : 0 +vendor_id : GenuineIntel +cpu family : 6 +model : 44 +model name : Intel(R) Xeon(R) CPU X5675 @ 3.07GHz +stepping : 2 +microcode : 0x14 +cpu MHz : 3066.775 +cache size : 12288 KB +physical id : 0 +siblings : 2 +core id : 0 +cpu cores : 2 +apicid : 0 +initial apicid : 0 +fpu : yes +fpu_exception : yes +cpuid level : 11 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq vmx ssse3 cx16 sse4_1 sse4_2 popcnt aes hypervisor lahf_lm ida arat epb dtherm tpr_shadow vnmi ept vpid +bugs : +bogomips : 6133.55 +clflush size : 64 +cache_alignment : 64 +address sizes : 40 bits physical, 48 bits virtual +power management: \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1665 b/results/classifier/gemma3:12b/kvm/1665 new file mode 100644 index 00000000..6564400f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1665 @@ -0,0 +1,2 @@ + +When using the"yum install qemu-kvm" command in in rhel 9 , it is not possible to proceed past the "Windows Installer Select Disk" page by iso install diff --git a/results/classifier/gemma3:12b/kvm/1665389 b/results/classifier/gemma3:12b/kvm/1665389 new file mode 100644 index 00000000..5a56e221 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1665389 @@ -0,0 +1,39 @@ + +Nested kvm guest fails to start on a emulated Westmere CPU guest under a Broadwell CPU host + +Using latest master(5dae13), qemu fails to start any nested guest in a Westmere emulated guest(layer 1), under a Broadwell host(layer 0), with the error: + +qemu-custom: /root/qemu/target/i386/kvm.c:1849: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed. 
+ +The qemu command used(though other CPUs didn't work either): +/usr/bin/qemu-custom -name guest=12ed9230-vm-el73,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-12ed9230-vm-el73/master-key.aes -machine pc-i440fx-2.9,accel=kvm,usb=off -cpu Westmere,+vmx -m 512 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -object iothread,id=iothread1 -uuid f4ce4eba-985f-42a3-94c4-6e4a8a530347 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-12ed9230-vm-el73/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/root/lago/.lago/default/images/vm-el73_root.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,serial=1,discard=unmap -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=54:52:c0:a7:c8:02,bus=pci.0,addr=0x2 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-5-12ed9230-vm-el73/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x9 -msg timestamp=on +2017-02-16T15:14:45.840412Z qemu-custom: -chardev pty,id=charserial0: char device redirected to /dev/pts/2 (label charserial0) +qemu-custom: /root/qemu/target/i386/kvm.c:1849: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed. + + +The CPU flags in the Westmere guest: +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm constant_tsc rep_good nopl pni pclmulqdq vmx ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lahf_lm arat tpr_shadow vnmi flexpriority ept vpid + +The guest kernel is 3.10.0-514.2.2.el7.x86_64. 
+ +The CPU flags of the host(Broadwell): +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp + +qemu command on the host - Broadwell(which works): +/usr/bin/qemu-kvm -name 4ffcd448-vm-el73,debug-threads=on -S -machine pc-i440fx-2.6,accel=kvm,usb=off -cpu Westmere,+x2apic,+vmx,+vme -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -object iothread,id=iothread1 -uuid 8cc0a2cf-d25a-4014-acdb-f159c376a532 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-4-4ffcd448-vm-el73/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot menu=off,strict=on -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/home/ngoldin/src/nvgoldin.github.com/lago-init-files/.lago/flags-tests/default/images/vm-el73_root.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,serial=1,discard=unmap -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/home/ngoldin/src/nvgoldin.github.com/lago-init-files/.lago/flags-tests/default/images/vm-el73_additonal.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0,serial=2,discard=unmap -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=2 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=54:52:c0:a8:c9:02,bus=pci.0,addr=0x2 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-4-4ffcd448-vm-el73/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x9 -msg timestamp=on + +On the Broadwell host I'm using a distribution package if it matters(qemu-kvm-2.6.2-5.fc24.x86_64 and 4.8.15-200.fc24.x86_64) + +As the error indicates, I think this assertion was put in: +commit 48e1a45c3166d659f781171a47dabf4a187ed7a5 +Author: Paolo Bonzini <email address hidden> +Date: Wed Mar 30 22:55:29 2016 +0200 + + target-i386: assert that KVM_GET/SET_MSRS can set all requested MSRs + + This would have caught the bug in the previous patch. + + Signed-off-by: Paolo Bonzini <email address hidden> + +I tried going back one commit before to 273c515, and then the error is gone and the nested guest comes up as expected. If I try to run with head at the above commit(48e145c) the error output is slightly different, though it looks the same: +/root/qemu/target-i386/kvm.c:1713: kvm_put_msrs: Assertion `ret == n' failed. 
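In case someone else wants to reproduce the bisection result on their own machine, here is a minimal sketch of the check described above: rebuilding QEMU at the parent of the assertion commit and retrying a nested guest. The checkout path, configure flags and the trimmed command line are assumptions, not taken from this report.

```bash
# Sketch (assumptions: tree in ~/qemu, x86_64 softmmu target only).
cd ~/qemu
git checkout 48e1a45c3166d659f781171a47dabf4a187ed7a5^   # parent of the assertion commit
./configure --target-list=x86_64-softmmu
make -j"$(nproc)"
# Minimal nested-guest smoke test, run inside the Westmere L1 guest;
# the assertion fires during vCPU setup, so no disk image is needed:
./x86_64-softmmu/qemu-system-x86_64 -machine accel=kvm -cpu Westmere,+vmx \
    -m 512 -nodefaults -nographic -no-reboot
```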
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1682128 b/results/classifier/gemma3:12b/kvm/1682128 new file mode 100644 index 00000000..16736185 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1682128 @@ -0,0 +1,6 @@ + +solaris can't power off + +I have created solaris 10 VM on KVM. Everything in VM is running OK, but finally I use shell command ‘poweroff’ or ‘init 5’, the solaris VM system could’t be poweroff but with promoting me the message: perss any key to reboot ….. + +but on Xen, solaris can be powerofff \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1686 b/results/classifier/gemma3:12b/kvm/1686 new file mode 100644 index 00000000..1367b293 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1686 @@ -0,0 +1,44 @@ + +VPS does not boots with CPU Model QEMU64 or KVM64 +Description of problem: + +Steps to reproduce: +1. Boot the VPS using AlmaLinux 9 ISO / image and it boots to kernel panic +Additional information: +VNC shows this message : + +[ 1.749935] do_exit.cold+0x14/0x9f + +[1.7502581 do_group_exit+0x33/0xa0 + +1.7506001 _x64_sys_exit_group+0x14/0x20 + +1.7510081 do_syscall 64+0x5c/0x90 + +[1.751361] ? syscall_exit_to_user_mode+0x12/0x30 + +[1.7517911 ? do_syscall_64+0x69/0x90 + +[1.752131] ? do_user_addr_fault+0x1d8/0x698 + +[1.7525091 ? exc_page_fault+0x62/0x150 1.752896] entry_SYSCALL_64_after_hwframe+ +0x63/0xcd + +[1.753612] RIP: 0033:0x7fb0e95b62d1 + +[ 1.7539561 Code: c3 of 1f 84 00 00 00 00 00 f3 Of le fa be e7 00 00 00 ba 3c 00 00 00 eb Od 89 de Of 05 48 3d 00 fe ff ff 77 1c f4 89 fe of 05 <48> 3d 00 fe ff ff 76 e7 f7 d8 89 05 ff fe 00 00 eb dd of 1f 44 00 + +[ 1.755047] RSP: 002b:00007ffe484df 288 EFLAGS: 00000246 ORIG_RAX: 00000000000 + +000e7 + +[ 1.755590] RAX: fffff ffffda RBX: 00007fb0e95b0f30 RCX: 00007fb0e95b62d1 1.756100] RDX: 000000000000003c RSI: 00000000000000e7 RDI: 000000000000007f + +[1.756565] RBP: 00007ffe484df410 R08: 00007ffe484dedf9 R09: 0000000000000000 + +[ 1.757034] R10: 00000000ffffffff R11: 0000000000000246 R12: 00007fb0e958f000 + +[ 1.7574981 R13: 0000002300000007 R14: 0000000000000007 R15: 00007ffe484df420 + +[ 1.7579921 Kernel Offset: 0x3aa00000 from Oxffffffff81000000 (relocation ran ge: 0xffffffff80000000-0xffffffffbfffffff) + +[ 1.7589051---[ end Kernel panic code=0x00007f00 --- not syncing: Attempted to kill init! 
exit diff --git a/results/classifier/gemma3:12b/kvm/1686350 b/results/classifier/gemma3:12b/kvm/1686350 new file mode 100644 index 00000000..f5ad31d1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1686350 @@ -0,0 +1,48 @@ + +[KVM] The qemu ‘-cpu’ option not have skylake server cpu model + +Environment: +------------------- +KVM commit/branch: bd17117b/next +Qemu commit/branch: cd1ea508/master +Host OS: RHEL7.3 ia32e +Host Kernel:4.11.0-rc3 +Bug detailed description: +---------------------------------- +In latest qemu commit the qemu still not have skylake server cpu model +Reproduce steps: +------------------------- +[root@skl-2s2 ~]# qemu-system-x86_64 -cpu help +Available CPUs: +x86 486 +x86 Broadwell-noTSX Intel Core Processor (Broadwell, no TSX) +x86 Broadwell Intel Core Processor (Broadwell) +x86 Conroe Intel Celeron_4x0 (Conroe/Merom Class Core 2) +x86 Haswell-noTSX Intel Core Processor (Haswell, no TSX) +x86 Haswell Intel Core Processor (Haswell) +x86 IvyBridge Intel Xeon E3-12xx v2 (Ivy Bridge) +x86 Nehalem Intel Core i7 9xx (Nehalem Class Core i7) +x86 Opteron_G1 AMD Opteron 240 (Gen 1 Class Opteron) +x86 Opteron_G2 AMD Opteron 22xx (Gen 2 Class Opteron) +x86 Opteron_G3 AMD Opteron 23xx (Gen 3 Class Opteron) +x86 Opteron_G4 AMD Opteron 62xx class CPU +x86 Opteron_G5 AMD Opteron 63xx class CPU +x86 Penryn Intel Core 2 Duo P9xxx (Penryn Class Core 2) +x86 SandyBridge Intel Xeon E312xx (Sandy Bridge) +x86 Skylake-Client Intel Core Processor (Skylake) +x86 Westmere Westmere E56xx/L56xx/X56xx (Nehalem-C) +x86 athlon QEMU Virtual CPU version 2.5+ +x86 core2duo Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz +x86 coreduo Genuine Intel(R) CPU T2600 @ 2.16GHz +x86 kvm32 Common 32-bit KVM processor +x86 kvm64 Common KVM processor +x86 n270 Intel(R) Atom(TM) CPU N270 @ 1.60GHz +x86 pentium +x86 pentium2 +x86 pentium3 +x86 phenom AMD Phenom(tm) 9550 Quad-Core Processor +x86 qemu32 QEMU Virtual CPU version 2.5+ +x86 qemu64 QEMU Virtual CPU version 2.5+ +x86 base base CPU model type with no features enabled +x86 host KVM processor with all supported host features (only available in KVM mode) +x86 max Enables all features supported by the accelerator in the current host \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1687653 b/results/classifier/gemma3:12b/kvm/1687653 new file mode 100644 index 00000000..5e295586 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1687653 @@ -0,0 +1,66 @@ + +QEMU-KVM / detect_zeroes causes KVM to start unlimited number of threads on Guest-Sided High-IO with big Blocksize + +QEMU-KVM in combination with "detect_zeroes=on" makes a Guest able to DoS the Host. This is possible if the Host itself has "detect_zeroes" enabled and the Guest writes a large Chunk of data with a huge blocksize onto the drive. + +E.g.: dd if=/dev/zero of=/tmp/DoS bs=1G count=1 oflag=direct + +All QEMU-Versions after implementation of detect_zeroes are affected. Prior are unaffected. This is absolutely critical, please fix this ASAP! 
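For context, a sketch of how the setting in question usually ends up on the QEMU command line; the image path and the other drive options are placeholders, not taken from this report:

```bash
# Sketch: a drive with zero detection enabled (placeholder image path).
# detect-zeroes=unmap additionally needs discard=unmap on the same drive.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/var/lib/images/guest.qcow2,format=qcow2,if=virtio,cache=none,discard=unmap,detect-zeroes=unmap
```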
+ +##### + +Provided by Dominik Csapak: + +source , bs , count , O_DIRECT, behaviour + +urandom , bs 1M, count 1024, O_DIRECT: OK +file , bs 1M, count 1024, O_DIRECT: OK +/dev/zero , bs 1M, count 1024, O_DIRECT: OK +zero file , bs 1M, count 1024, O_DIRECT: OK +/dev/zero , bs 1G, count 1, O_DIRECT: NOT OK +zero file , bs 1G, count 1, O_DIRECT: NOT OK +zero file , bs 1G, count 1, no O_DIRECT: NOT OK +rand file , bs 1G, count 1, O_DIRECT: OK +rand file , bs 1G, count 1, no O_DIRECT: OK + +discard on: + +urandom , bs 1M, count 1024, O_DIRECT: OK +rand file , bs 1M, count 1024, O_DIRECT: OK +/dev/zero , bs 1M, count 1024, O_DIRECT: OK +zero file , bs 1M, count 1024, O_DIRECT: OK +/dev/zero , bs 1G, count 1, O_DIRECT: NOT OK +zero file , bs 1G, count 1, O_DIRECT: NOT OK +zero file , bs 1G, count 1, no O_DIRECT: NOT OK +rand file , bs 1G, count 1, O_DIRECT: OK +rand file , bs 1G, count 1, no O_DIRECT: OK + +detect_zeros off: + +urandom , bs 1M, count 1024, O_DIRECT: OK +rand file , bs 1M, count 1024, O_DIRECT: OK +/dev/zero , bs 1M, count 1024, O_DIRECT: OK +zero file , bs 1M, count 1024, O_DIRECT: OK +/dev/zero , bs 1G, count 1, O_DIRECT: OK +zero file , bs 1G, count 1, O_DIRECT: OK +zero file , bs 1G, count 1, no O_DIRECT: OK +rand file , bs 1G, count 1, O_DIRECT: OK +rand file , bs 1G, count 1, no O_DIRECT: OK + +##### + +Provided by Florian Strankowski + +bs - count - io-threads + +512K - 2048 - 2 +1M - 1024 - 2 +2M - 512 - 4 +4M - 256 - 6 +8M - 128 - 10 +16M - 64 - 18 +32M - 32 - uncountable + +Please refer to further information here: + +https://bugzilla.proxmox.com/show_bug.cgi?id=1368 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1688 b/results/classifier/gemma3:12b/kvm/1688 new file mode 100644 index 00000000..33fffa33 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1688 @@ -0,0 +1,34 @@ + +target/riscv KVM_RISCV_SET_TIMER macro is not configured correctly +Description of problem: +When riscv kvm vm state changed, guest virtual time would stop/continue. But KVM_RISCV_SET_TIMER is wrong, qemu-kvm can only set 'time'. 
+Steps to reproduce: +1.start host kernel +2.start qemu-kvm +Additional information: +Below code has some probelm: +``` +=================================================================== +#define KVM_RISCV_SET_TIMER(cs, env, name, reg) \ + do { \ + int ret = kvm_set_one_reg(cs, RISCV_TIMER_REG(env, time), ®); \ + +=================================================================== +``` +I think it should be like this: + +``` +diff --git a/target/riscv/kvm.c b/target/riscv/kvm.c +index 30f21453d6..0c567f668c 100644 +--- a/target/riscv/kvm.c ++++ b/target/riscv/kvm.c +@@ -99,7 +99,7 @@ static uint64_t kvm_riscv_reg_id(CPURISCVState *env, uint64_t type, + + #define KVM_RISCV_SET_TIMER(cs, env, name, reg) \ + do { \ +- int ret = kvm_set_one_reg(cs, RISCV_TIMER_REG(env, time), ®); \ ++ int ret = kvm_set_one_reg(cs, RISCV_TIMER_REG(env, name), ®); \ + if (ret) { \ + abort(); \ + } \ +``` diff --git a/results/classifier/gemma3:12b/kvm/1691109 b/results/classifier/gemma3:12b/kvm/1691109 new file mode 100644 index 00000000..daccfc39 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1691109 @@ -0,0 +1,14 @@ + +qemu-kvm not working as nested inside ESX 6.0 + +ESX 6.0 (virt bits exposed) - Ubuntu 16.04 + qemu-kvm 1:2.8+dfsg-3ubuntu2~cloud0 - CirrOS launched by OpenStack (devstack master) + + +VM will start with -machine = 'pc-i440fx-zesty' and will stuck in "booting from hard disk" + +to fix it you can manually change -machine to 'pc-i440fx-2.3' + +also, ISOs boots well, so I think it`s something about block devices configuration introduced in new machine type. + +p.s. +also confirmed with RHEL instead of Ubuntu as KVM host - new machine type don`t work \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1693667 b/results/classifier/gemma3:12b/kvm/1693667 new file mode 100644 index 00000000..c1c7eb55 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1693667 @@ -0,0 +1,29 @@ + +-cpu haswell / broadwell have no MONITOR in features1 + +In qemu 2.9.0 if you run + + qemu-system-x86_64 -cpu Broadwell (or Haswell) + +then the CPU features1 flag include the SSE3 bit, but do NOT include the MONITOR/MWAIT bit. This is so even when the host includes the features. + + +Additionally, running qemu in this manner results in several error messages: + +warning: TCG doesn't support requested feature: CPUID.01H:ECX.fma [bit 12] +warning: TCG doesn't support requested feature: CPUID.01H:ECX.pcid [bit 17] +warning: TCG doesn't support requested feature: CPUID.01H:ECX.x2apic [bit 21] +warning: TCG doesn't support requested feature: CPUID.01H:ECX.tsc-deadline [bit 24] +warning: TCG doesn't support requested feature: CPUID.01H:ECX.avx [bit 28] +warning: TCG doesn't support requested feature: CPUID.01H:ECX.f16c [bit 29] +warning: TCG doesn't support requested feature: CPUID.01H:ECX.rdrand [bit 30] +warning: TCG doesn't support requested feature: CPUID.07H:EBX.hle [bit 4] +warning: TCG doesn't support requested feature: CPUID.07H:EBX.avx2 [bit 5] +warning: TCG doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10] +warning: TCG doesn't support requested feature: CPUID.07H:EBX.rtm [bit 11] +warning: TCG doesn't support requested feature: CPUID.07H:EBX.rdseed [bit 18] +warning: TCG doesn't support requested feature: CPUID.80000001H:ECX.3dnowprefetch + + +(Among possible other uses, the lack of the MONITOR feature bit causes NetBSD to fall-back on a +check-and-pause loop while an application CPU is waiting to be told to proceed by the boot CPU.) 
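A quick way to double-check the observation from inside the guest, since both bits live in CPUID leaf 01H ECX (SSE3 is reported in /proc/cpuinfo as "pni", MONITOR/MWAIT as "monitor"); the explicit "+monitor" override at the end is an untested suggestion, not something taken from this report:

```bash
# Sketch: list which of the two flags the guest kernel actually sees.
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -wE 'pni|monitor'
# Untested: explicitly requesting the flag when starting the guest.
#   qemu-system-x86_64 -cpu Broadwell,+monitor ...
```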
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1699567 b/results/classifier/gemma3:12b/kvm/1699567 new file mode 100644 index 00000000..1cdbb028 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1699567 @@ -0,0 +1,16 @@ + +Qemu does not force SSE data alignment + +I have an OS that tries to use SSE operations. It works fine in qemu. But it crashes when I try to run the OS at the host cpu using KVM. + +The instruction that crahes with #GP(0) is + movaps ADDR,%xmm0 + +The documentation says ADDR has to be 16-bytes alignment otherwise #GP is generated. And indeed the problem was with the data alignment. After adjusting it at my side the OS works fine both with Qemu and KVM. + +It would be great if QEMU followed specification more closely and forced SSE data alignment requirements. It will help to catch alignment issues early and debug it easier. + + +$ qemu-system-x86_64 -version +QEMU emulator version 2.9.50 (v2.9.0-1363-g95eef1c68b) +Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1703 b/results/classifier/gemma3:12b/kvm/1703 new file mode 100644 index 00000000..065f595a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1703 @@ -0,0 +1,46 @@ + +Undefined behaviour when running guest with -enable-kvm and attached debugger +Description of problem: +When attaching a debugger to a Qemu instance with `-enable-kvm` my linux kernel panics on (f.e.) module load. +I am not sure if this is a Qemu bug, however the issue is not occurring if I a) do not attach the debugger (even though Qemu is listening for one) or b) I do not pass `-enable-kvm` (and attach a debugger). +The issue seems to relate to the `lx-symbols` command provided by the Linux kernel gdb script suite. +Every time a module is loaded this script will reload the symbols for said module which may take some time, so maybe there is some race involved? +The issue does not reproduce if you do not run `lx-symbols` prior to continuing (it will however run automatically after first module load as it adds a breakpoint to kernel/module/main.c:do_init_module, so the kernel will crash after the second module load) +Steps to reproduce: +1. Start kernel with some img +2. Attach gdb debugger +3. Run the `lx-symbols` command provided by the Linux kernel gdb scripts in gdb, run `continue` in gdb +3. 
Load a kernel module +Additional information: +This is the kernel stack trace: +``` +[ 22.930691] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI +[ 22.931174] CPU: 2 PID: 241 Comm: modprobe Tainted: G E 6.1.31+ #2 +[ 22.931675] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-1.fc37 04/01/2014 +[ 22.931675] RIP: 0010:do_init_module+0x1/0x210 +[ 22.931675] Code: 74 0c 48 8b 78 08 48 89 de e8 8b df ff ff 65 ff 0d 84 94 ef 7e 0f 85 e5 fe ff ff 0f 1f 44 00 008 +[ 22.931675] RSP: 0018:ffffc90000593e40 EFLAGS: 00010246 +[ 22.931675] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000006e202 +[ 22.931675] RDX: 000000000006e002 RSI: 5b4504de76578f76 RDI: ffffffffc024e180 +[ 22.931675] RBP: ffffc90000593e50 R08: ffffea0000174a88 R09: ffffea0000174ac0 +[ 22.931675] R10: ffff888006a9c270 R11: 0000000000000100 R12: 0000562f9087b4a0 +[ 22.931675] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 +[ 22.931675] FS: 00007f0dbc5a4040(0000) GS:ffff88801f500000(0000) knlGS:0000000000000000 +[ 22.931675] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +[ 22.931675] CR2: 00007ffdc94bc3f8 CR3: 0000000006f8e000 CR4: 00000000003506e0 +[ 22.931675] Call Trace: +[ 22.931675] <TASK> +[ 22.931675] ? die+0x32/0x80 +[ 22.931675] ? do_trap+0xd6/0x100 +[ 22.931675] ? do_init_module+0x1/0x210 +[ 22.931675] ? do_error_trap+0x6a/0x90 +[ 22.931675] ? do_init_module+0x1/0x210 +[ 22.931675] ? exc_invalid_op+0x4c/0x60 +[ 22.931675] ? do_init_module+0x1/0x210 +[ 22.931675] ? asm_exc_invalid_op+0x16/0x20 +[ 22.931675] ? do_init_module+0x1/0x210 +[ 22.931675] __do_sys_finit_module+0x9e/0xf0 +[ 22.931675] do_syscall_64+0x63/0x90 +[ 22.931675] ? exit_to_user_mode_prepare+0x1a/0x120 +[ 22.931675] entry_SYSCALL_64_after_hwframe+0x63/0xcd +``` diff --git a/results/classifier/gemma3:12b/kvm/1705717 b/results/classifier/gemma3:12b/kvm/1705717 new file mode 100644 index 00000000..5ad36193 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1705717 @@ -0,0 +1,50 @@ + +Live migration fails with 'host' cpu when KVM is inserted with nested=1 + +Qemu v2.9.0 +Linux kernel 4.9.34 + +Live migration(pre-copy) being done from one physical host to another: + +Source Qemu: +sudo qemu-system-x86_64 -drive file=${IMAGE_DIR}/${IMAGE_NAME},if=virtio -m 2048 -smp 1 -net nic,model=virtio,macaddr=${MAC} -net tap,ifname=qtap0,script=no,downscript=no -vnc :1 --enable-kvm -cpu kvm64 -qmp tcp:*:4242,server,nowait + +And KVM is inserted with nested=1 on both source and destination machine. 
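Before digging further it may be worth confirming that nesting really is enabled the same way on both hosts; a minimal sketch (Intel module shown, kvm_amd uses the same parameter name):

```bash
# Sketch: check and, if needed, re-enable nested support (requires all
# VMs on the host to be stopped before the module reload).
cat /sys/module/kvm_intel/parameters/nested   # prints Y (or 1) when enabled
modprobe -r kvm_intel && modprobe kvm_intel nested=1
```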
+ +Migration fails with a nested specific assertion failure on destination at target/i386/kvm.c +1629 + +Migration is successful in the following cases- + +A) cpu model is 'host' and kvm is inserted without nested=1 parameter +B) If instead of 'host' cpu model, 'kvm64' is used (KVM nested=1) +C) If instead of 'host' cpu model, 'kvm64' is used (KVM nested=0) +D) Between an L0 and a guest Hypervisor L1, with 'kvm64' as CPU type (and nested=1 for L0 KVM) + +Physical host(s)- +$ lscpu +Architecture: x86_64 +CPU op-mode(s): 32-bit, 64-bit +Byte Order: Little Endian +CPU(s): 12 +On-line CPU(s) list: 0-11 +Thread(s) per core: 1 +Core(s) per socket: 6 +Socket(s): 2 +NUMA node(s): 2 +Vendor ID: GenuineIntel +CPU family: 6 +Model: 62 +Model name: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz +Stepping: 4 +CPU MHz: 1200.091 +CPU max MHz: 2600.0000 +CPU min MHz: 1200.0000 +BogoMIPS: 4203.28 +Virtualization: VT-x +L1d cache: 32K +L1i cache: 32K +L2 cache: 256K +L3 cache: 15360K +NUMA node0 CPU(s): 0-5 +NUMA node1 CPU(s): 6-11 +Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1706866 b/results/classifier/gemma3:12b/kvm/1706866 new file mode 100644 index 00000000..153bf643 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1706866 @@ -0,0 +1,30 @@ + +migrate: add tls option in virsh, migrate failed + +version: + libvirt-3.4.0 + qemu-2.9.90(latest) + +domain: + any + +step: + 1. generate tls certificate in /etc/pki/libvirt-migrate + 2. start vm + 3. migrate vm, cmdline: + virsh migrate rh7.1-3 --live --undefinesource --persistent --verbose --tls qemu+ssh://IP/system + 4. 
then migrate failed and reported: + Migration: [ 64 %]error: internal error: qemu unexpectedly closed the monitor: Domain pid=5288, libvirtd pid=49634 + kvm: warning: CPU(s) not present in any NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 + kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config + kvm: error while loading state section id 2(ram) + kvm: load of migration failed: Input/output error + +other: + Analysis qemu code and debug: +#0 ram_save_page (f=0x55ca8be370e0, pss=0x7fefdfc7b9a0, last_stage=false, bytes_transferred=0x55ca885ec1d8) +#1 0x000055ca87b00b21 in ram_save_target_page (ms=0x55ca885b8d80, f=0x55ca8be370e0, pss=0x7fefdfc7b9a0, last_stage=false, bytes_transferred=0x55ca885ec1d8, dirty_ram_abs=0) +#2 0x000055ca87b00bda in ram_save_host_page (ms=0x55ca885b8d80, f=0x55ca8be370e0, pss=0x7fefdfc7b9a0, last_stage=false, bytes_transferred=0x55ca885ec1d8, dirty_ram_abs=0) +#3 0x000055ca87b00d39 in ram_find_and_save_block (f=0x55ca8be370e0, last_stage=false, bytes_transferred=0x55ca885ec1d8) +#4 0x000055ca87b020b8 in ram_save_iterate (f=0x55ca8be370e0, opaque=0x0) +#5 0x000055ca87b07a9a in qemu_savevm_state_iterate (f=0x55ca8be370e0, postcopy=false) +#6 0x000055ca87e404e5 in migration_thread (opaque=0x55ca885b8d80) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1707274 b/results/classifier/gemma3:12b/kvm/1707274 new file mode 100644 index 00000000..53b759c5 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1707274 @@ -0,0 +1,7 @@ + +Segfaults inside QEMU + +I'm running a server with QEMU emulator version 2.9.0. Although i gave the machine plenty RAM it begins segfaulting some processes after some hours which ends in a complete crash. +This is the commandline from libvirt: + 
+/usr/bin/qemu-system-x86_64-nameguest=server,debug-threads=on-S-objectsecret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-server/master-key.aes-machinepc-i440fx-2.9,accel=kvm,usb=off,dump-guest-core=off-cpuhost-m8192-realtimemlock=off-smp4,sockets=4,cores=1,threads=1-uuid5329bfd3-b947-473b-9880-d95b4ca78f28-no-user-config-nodefaults-chardevsocket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-server/monitor.sock,server,nowait-monchardev=charmonitor,id=monitor,mode=control-rtcbase=utc,driftfix=slew-globalkvm-pit.lost_tick_policy=delay-no-hpet-no-shutdown-globalPIIX4_PM.disable_s3=1-globalPIIX4_PM.disable_s4=1-bootstrict=on-deviceich9-usb-ehci1,id=usb,bus=pci.0,addr=0x9.0x7-deviceich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x9-deviceich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x9.0x1-deviceich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x9.0x2-deviceahci,id=sata0,bus=pci.0,addr=0x6-devicevirtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5-drivefile=/mnt/htpc/windows/VM/disks/qcow2/server-boot.qcow2,format=qcow2,if=none,id=drive-virtio-disk0-devicevirtio-blk-pci,scsi=off,bus=pci.0,addr=0xa,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1-drivefile=/dev/sda,format=raw,if=none,id=drive-virtio-disk1-devicevirtio-blk-pci,scsi=off,bus=pci.0,addr=0xb,drive=drive-virtio-disk1,id=virtio-disk1-netdevtap,fd=24,id=hostnet0,vhost=on,vhostfd=26-devicevirtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:12:34:56,bus=pci.0,addr=0x3-spiceport=5902,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on-deviceqxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2-devicevirtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8-msgtimestamp=on \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1707297 b/results/classifier/gemma3:12b/kvm/1707297 new file mode 100644 index 00000000..98a33445 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1707297 @@ -0,0 +1,11 @@ + +qemu became more picky parsing -m option + +With qemu-kvm-2.9.0-3.fc26.x86_64 I am no longer to specify the memory size using something like "-m 1.00000GiB" but with qemu-kvm-2.7.1-7.fc25.x86_64 I could without any problem. I now get an error message like: + +qemu-system-x86_64: -m 1.00000GiB: Parameter 'size' expects a non-negative number below 2^64 +Optional suffix k, M, G, T, P or E means kilo-, mega-, giga-, tera-, peta- +and exabytes, respectively. + + +Is this expected or a regression? 
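For reference, spellings that the stricter parser still accepts, going by its own error text (a non-negative integer with an optional single-letter suffix); the values below are placeholders:

```bash
# Sketch: -m forms that parse on 2.9.
qemu-system-x86_64 -m 1024                        # plain number, taken as MiB
qemu-system-x86_64 -m 1G                          # integer plus single-letter suffix
qemu-system-x86_64 -m size=1G,slots=2,maxmem=4G   # long form with hotplug slots
```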
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1709784 b/results/classifier/gemma3:12b/kvm/1709784 new file mode 100644 index 00000000..cdbc53a2 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1709784 @@ -0,0 +1,117 @@ + +KVM on 16.04.3 throws an error + +Problem Description +==================== +KVM on Ubuntu 16.04.3 throws an error when used + +---uname output--- +Linux bastion-1 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:37:08 UTC 2017 ppc64le ppc64le ppc64le GNU/Linux + +Machine Type = 8348-21C Habanero + +---Steps to Reproduce--- + Install 16.04.3 + +install KVM like: + +apt-get install libvirt-bin qemu qemu-slof qemu-system qemu-utils + +then exit and log back in so virsh will work without sudo + +then run my spawn script + +$ cat spawn.sh +#!/bin/bash + +img=$1 +qemu-system-ppc64 \ +-machine pseries,accel=kvm,usb=off -cpu host -m 512 \ +-display none -nographic \ +-net nic -net user \ +-drive "file=$img" + +with a freshly downloaded ubuntu cloud image + +sudo ./spawn.sh xenial-server-cloudimg-ppc64el-disk1.img + +And I get nothing on the output. + +and errors in dmesg + + +ubuntu@bastion-1:~$ [ 340.180295] Facility 'TM' unavailable, exception at 0xd0000000148b7f10, MSR=9000000000009033 +[ 340.180399] Oops: Unexpected facility unavailable exception, sig: 6 [#1] +[ 340.180513] SMP NR_CPUS=2048 NUMA PowerNV +[ 340.180547] Modules linked in: xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ip_tables x_tables kvm_hv kvm binfmt_misc joydev input_leds mac_hid opal_prd ofpart cmdlinepart powernv_flash ipmi_powernv ipmi_msghandler mtd at24 uio_pdrv_genirq uio ibmpowernv powernv_rng vmx_crypto ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi autofs4 btrfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid1 raid0 multipath linear mlx4_en hid_generic usbhid hid uas usb_storage ast i2c_algo_bit bnx2x ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops mlx4_core drm ahci vxlan libahci ip6_udp_tunnel udp_tunnel mdio libcrc32c +[ 340.181331] CPU: 46 PID: 5252 Comm: qemu-system-ppc Not tainted 4.4.0-89-generic #112-Ubuntu +[ 340.181382] task: c000001e34c30b50 ti: c000001e34ce4000 task.ti: c000001e34ce4000 +[ 340.181432] NIP: d0000000148b7f10 LR: d000000014822a14 CTR: d0000000148b7e40 +[ 340.181475] REGS: c000001e34ce77b0 TRAP: 0f60 Not tainted (4.4.0-89-generic) +[ 340.181519] MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 22024848 XER: 00000000 +[ 340.181629] CFAR: d0000000148b7ea4 SOFTE: 1 +GPR00: d000000014822a14 c000001e34ce7a30 d0000000148cc018 c000001e37bc0000 +GPR04: c000001db9ac0000 c000001e34ce7bc0 0000000000000000 0000000000000000 +GPR08: 0000000000000001 c000001e34c30b50 0000000000000001 d0000000148278f8 +GPR12: d0000000148b7e40 c00000000fb5b500 0000000000000000 000000000000001f +GPR16: 00003fff91c30000 0000000000800000 00003fffa8e34390 00003fff9242f200 +GPR20: 00003fff92430010 000001001de5c030 00003fff9242eb60 00000000100c1ff0 +GPR24: 00003fffc91fe990 00003fff91c10028 0000000000000000 c000001e37bc0000 +GPR28: 0000000000000000 c000001db9ac0000 c000001e37bc0000 c000001db9ac0000 +[ 340.182315] NIP [d0000000148b7f10] kvmppc_vcpu_run_hv+0xd0/0xff0 [kvm_hv] +[ 340.182357] LR [d000000014822a14] kvmppc_vcpu_run+0x44/0x60 
[kvm] +[ 340.182394] Call Trace: +[ 340.182413] [c000001e34ce7a30] [c000001e34ce7ab0] 0xc000001e34ce7ab0 (unreliable) +[ 340.182468] [c000001e34ce7b70] [d000000014822a14] kvmppc_vcpu_run+0x44/0x60 [kvm] +[ 340.182522] [c000001e34ce7ba0] [d00000001481f674] kvm_arch_vcpu_ioctl_run+0x64/0x170 [kvm] +[ 340.182581] [c000001e34ce7be0] [d000000014813918] kvm_vcpu_ioctl+0x528/0x7b0 [kvm] +[ 340.182634] [c000001e34ce7d40] [c0000000002fffa0] do_vfs_ioctl+0x480/0x7d0 +[ 340.182678] [c000001e34ce7de0] [c0000000003003c4] SyS_ioctl+0xd4/0xf0 +[ 340.182723] [c000001e34ce7e30] [c000000000009204] system_call+0x38/0xb4 +[ 340.182766] Instruction dump: +[ 340.182788] e92d02a0 e9290a50 e9290108 792a07e3 41820058 e92d02a0 e9290a50 e9290108 +[ 340.182863] 7927e8a4 78e71f87 40820ed8 e92d02a0 <7d4022a6> f9490ee8 e92d02a0 7d4122a6 +[ 340.182938] ---[ end trace bc5080cb7d18f102 ]--- +[ 340.276202] + + +This was with the latest ubuntu cloud image. I get the same thing when trying to use virt-install with an ISO image. + +I have no way of loading a KVM on 16.04.3 + +== Comment: #2 - Jason M. Furmanek <email address hidden> - 2017-08-09 17:42:34 == +I reinstalled with the HWE kernel (4.10). +I can install VM and see the console and eveything seems fine. + +== Comment: #3 - Jason M. Furmanek <email address hidden> - 2017-08-09 17:44:03 == +I had another system at 16.04.2 (4.4) and updated that one to the latest and it hit the same issue as above. +No qemu or libvirt updates were applied. Just kernel updates and a handful of other stuff. + +Seems this issue is specific to the latest kernel + +old version worked: +Linux fs7 4.4.0-83-generic #106 + +new version did not: +Linux fs7 4.4.0-89-generic #112-Ubuntu + +== Comment: #4 - Gustavo Bueno Romero <email address hidden> - 2017-08-09 20:26:42 == +Looks like 46a704f8409f79fd66567ad3f8a7304830a84293 was backported on 88 but e47057151422a67ce08747176fa21cb3b526a2c9 was not: + +[gromero@localhost ubuntu-xenial]$ git remote -vv +origin git://kernel.ubuntu.com/ubuntu/ubuntu-xenial.git (fetch) +origin git://kernel.ubuntu.com/ubuntu/ubuntu-xenial.git (push) + +[gromero@localhost ubuntu-xenial]$ git log Ubuntu-4.4.0-83.106..Ubuntu-4.4.0-89.112 --oneline | fgrep "Preserve userspace HTM state properly" +a97e978 KVM: PPC: Book3S HV: Preserve userspace HTM state properly +[gromero@localhost ubuntu-xenial]$ git log Ubuntu-4.4.0-83.106..Ubuntu-4.4.0-89.112 --oneline | fgrep "Enable TM before accessing TM registers" +[gromero@localhost ubuntu-xenial]$ git tag --contains a97e978574f41ffcf1813c180aba2772d46fbb5b +Ubuntu-4.4.0-88.111 +Ubuntu-4.4.0-89.112 +Ubuntu-raspi2-4.4.0-1066.74 +Ubuntu-raspi2-4.4.0-1067.75 +Ubuntu-snapdragon-4.4.0-1068.73 +Ubuntu-snapdragon-4.4.0-1069.74 + +So +https://github.com/torvalds/linux/commit/46a704f8409f79fd66567ad3f8a7304830a84293 is present in ISO but https://github.com/torvalds/linux/commit/e47057151422a67ce08747176fa21cb3b526a2c9 is not. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1712818 b/results/classifier/gemma3:12b/kvm/1712818 new file mode 100644 index 00000000..780ebb5c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1712818 @@ -0,0 +1,17 @@ + +live migration with storage encounter assert(!(bs->open_flags & BDRV_O_INACTIVE)) crashes + +The vm guest runs a iotest program, and i migrate it with virsh --copy-storage-all,then the qemu process on the source host happens to crash with the following message: + +kvm: block/io.c:1543: bdrv_co_pwritev: Assertion `!(bs->open_flags & 0x0800)' failed. 
+2017-08-24 11:43:45.919+0000: shutting down, reason=crashed + + +here is the release: +qemu 2.7 & 2.10.rc3 were tested. +libvirt 3.0.0 & 3.2.0 were tested. + +command line: +src_host:virsh migrate --verbose --live --persistent --copy-storage-all vm-core qemu+ssh://dst_host/system + +Resaon: After bdrv_inactivate_all() was called, mirror_run coroutine stills write the left dirty disk data to remote nbd server, which triggers the assertion. But I don't known how to avoid the problem, help is needed! Thanks. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1714331 b/results/classifier/gemma3:12b/kvm/1714331 new file mode 100644 index 00000000..4c3f3fe1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1714331 @@ -0,0 +1,76 @@ + +Virtual machines not working anymore on 2.10 + +Using 2.10, my virtual machine(s) don't work anymore. This happens 100% of the times. + +----- + +I use QEMU compiling it from source, on Ubuntu 16.04 amd64. This is the configure command: + + configure --target-list=x86_64-softmmu --enable-debug --enable-gtk --enable-spice --audio-drv-list=pa + +I have one virtual disk, with a Windows 10 64-bit, which I launch in two different ways; both work perfectly on 2.9 (and used to do on 2.8, but I haven't used it for a long time). + +This is the first way: + + qemu-system-x86_64 + -drive if=pflash,format=raw,readonly,file=/path/to/OVMF_CODE.fd + -drive if=pflash,format=raw,file=/tmp/OVMF_VARS.fd.tmp + -enable-kvm + -machine q35,accel=kvm,mem-merge=off + -cpu host,kvm=off,hv_vendor_id=vgaptrocks,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time + -smp 4,cores=4,sockets=1,threads=1 + -m 4096 + -display gtk + -vga qxl + -rtc base=localtime + -serial none + -parallel none + -usb + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device virtio-scsi-pci,id=scsi + -drive file=/path/to/image-diff.img,id=hdd1,format=qcow2,if=none,cache=writeback + -device scsi-hd,drive=hdd1 + -net nic,model=virtio + -net user + +On QEMU 2.10, I get the `Recovery - Your PC/Device needs to be repaired` windows screen; on 2.9, it boots regularly. + +This is the second way: + + qemu-system-x86_64 + -drive if=pflash,format=raw,readonly,file=/path/to/OVMF_CODE.fd + -drive if=pflash,format=raw,file=/tmp/OVMF_VARS.fd.tmp + -enable-kvm + -machine q35,accel=kvm,mem-merge=off + -cpu host,kvm=off,hv_vendor_id=vgaptrocks,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time + -smp 4,cores=4,sockets=1,threads=1 + -m 10240 + -vga none + -rtc base=localtime + -serial none + -parallel none + -usb + -device vfio-pci,host=01:00.0,multifunction=on + -device vfio-pci,host=01:00.1 + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device usb-host,vendorid=0xNNNN,productid=0xNNNN + -device virtio-scsi-pci,id=scsi + -drive file=/path/to/image-diff.img,id=hdd1,format=qcow2,if=none,cache=writeback + -device scsi-hd,drive=hdd1 + -net nic,model=virtio + -net user + +On QEMU 2.10, I get the debug window on the linux monitor, and blank screen on VFIO one (no BIOS screen at all); after 10/20 seconds, QEMU crashes without any message. +On 2.9, this works perfectly. 
+ +----- + +I am able to perform a git bisect, if that helps, but if this is the case, I'd need this issue to be reviewed, since bisecting is going to take me a lot of time. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1721221 b/results/classifier/gemma3:12b/kvm/1721221 new file mode 100644 index 00000000..67d81d5f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1721221 @@ -0,0 +1,56 @@ + +PCI-E passthrough of Nvidia GTX GFX card to Win 10 guest fails with "kvm_set_phys_mem: error registering slot: Invalid argument" + +Problem: +Passthrough of a PCI-E Nvidia GTX 970 GFX card to a Windows 10 guest from a Debian Stretch host fails after recent changes to kvm in QEMU master/trunk. Before this recent commit, everything worked as expected. + +QEMU Version: +Master/trunk pulled from github 4/10/17 ( git reflog: d147f7e815 HEAD@{0} ) + +Host: +Debian Stretch kernel SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux + +Guest: +Windows 10 Professional + +Issue is with this commit: +https://github.com/qemu/qemu/commit/f357f564be0bd45245b3ccfbbe20ace08fe83ca8 + +Subsequent commit does not help: +https://github.com/qemu/qemu/commit/3110cdbd8a4845c5b5fb861b0a664c56d993dd3c#diff-7b7a17f6e8ba4195198dd685073f43cb + +Error output from qemu: +(qemu) kvm_set_phys_mem: error registering slot: Invalid argument + +QEMU commandline used: + +./sources/qemu/x86_64-softmmu/qemu-system-x86_64 -machine q35,accel=kvm -serial none -parallel none -name Windows \ +-enable-kvm -cpu host,kvm=off,hv_vendor_id=sugoidesu,-hypervisor -smp 6,sockets=1,cores=3,threads=2 \ +-m 8G -mem-path /dev/hugepages -mem-prealloc -balloon none \ +-drive if=pflash,format=raw,readonly,file=vms/ovmf-x64/ovmf-x64/OVMF_CODE-pure-efi.fd \ +-drive if=pflash,format=raw,file=vms/ovmf-x64/ovmf-x64/OVMF_VARS-pure-efi.fd \ +-rtc clock=host,base=localtime \ +-readconfig ./vms/q35-virtio-graphical.cfg \ +-object iothread,id=iothread0 -object iothread,id=iothread1 -object iothread,id=iothread2 -object iothread,id=iothread3 \ +-device virtio-scsi-pci,iothread=iothread0,id=scsi0 -device virtio-scsi-pci,iothread=iothread1,id=scsi1 -device virtio-scsi-pci,iothread=iothread2,id=scsi2 -device virtio-scsi-pci,iothread=iothread3,id=scsi3 \ +-device scsi-hd,bus=scsi0.0,drive=drive0,bootindex=1 -device scsi-hd,bus=scsi1.0,drive=drive1 -device scsi-hd,bus=scsi2.0,drive=drive2 -device scsi-hd,bus=scsi3.0,drive=drive3 -device scsi-hd,bus=scsi1.0,drive=drive4 -device scsi-hd,bus=scsi2.0,drive=drive5 -device scsi-hd,bus=scsi3.0,drive=drive6 -device scsi-hd,bus=scsi1.0,drive=drive7 -device scsi-hd,bus=scsi2.0,drive=drive8 -device scsi-hd,bus=scsi3.0,drive=drive9 \ +-drive if=none,id=drive0,file=vms/w10p64.qcow2,format=qcow2,cache=none,discard=unmap \ +-drive if=none,id=drive1,file=vms/w10p64-2.qcow2,format=qcow2,cache=none,discard=unmap \ +-drive if=none,id=drive2,file=/dev/mapper/w10p64-3,format=raw,cache=none \ +-drive if=none,id=drive3,file=vms/w10p64-4.qcow2,format=qcow2,cache=none \ +-drive if=none,id=drive4,file=vms/w10p64-5.qcow2,format=qcow2,cache=none \ +-drive if=none,id=drive5,file=vms/w10p64-6.qcow2,format=qcow2,cache=none,discard=unmap \ +-drive if=none,id=drive6,file=/dev/mapper/w10p64-7,format=raw,cache=none \ +-drive if=none,id=drive7,file=vms/w10p64-8.qcow2,format=qcow2,cache=none,discard=unmap \ +-device vfio-pci,host=01:00.0,multifunction=on,x-vga=on \ +-device vfio-pci,host=01:00.1,multifunction=on \ +-netdev type=tap,id=net1,ifname=tap1,script=no,downscript=no,vhost=on \ +-device 
virtio-net-pci,netdev=net1,mac=52:54:00:18:32:c9,bus=pcie.2,addr=00.0,ioeventfd=on \ +-device usb-host,bus=usb.0,hostbus=3,hostport=2.1 \ +-device usb-host,hostbus=3,hostport=2.2 \ +-device usb-host,bus=ich9-ehci-1.0,hostbus=3,hostport=2.4 \ +-object input-linux,id=kbd1,evdev=/dev/input/event0,grab_all=yes,repeat=on \ +-drive if=none,id=drive8,file=vms/w10p64.qcow2-9,format=qcow2,discard=unmap \ +-drive if=none,id=drive9,file=vms/w10p64-10.qcow2,format=qcow2,cache=none,discard=unmap \ +-device usb-host,bus=usb.0,hostbus=3,hostport=9 \ +-monitor stdio \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1731347 b/results/classifier/gemma3:12b/kvm/1731347 new file mode 100644 index 00000000..1d3fafd6 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1731347 @@ -0,0 +1,31 @@ + +VFIO Passthrough of SAS2008-based HBA card fails on E3-1225v3 due to failed DMA mapping (-14) + +There is a bug preventing multiple people with my combination of hardware from using PCI passthrough. I am not actually sure whether the bug is in kernel/kvm, vfio or qemu, however, as qemu is the highest-level of these, I am reporting the bug here as you will likely know better where the origin of the bug may be found. + +When attempting to pass through this device to a KVM using VFIO, this results in error -14 (Bad Address): + +# qemu-system-x86_64 -enable-kvm -m 10G -net none -monitor stdio -serial +# none -parallel none -vnc :1 -device vfio-pci,host=1:00.0 -S +QEMU 2.9.1 monitor - type 'help' for more information +(qemu) c +(qemu) qemu-system-x86_64: VFIO_MAP_DMA: -14 +qemu-system-x86_64: vfio_dma_map(0x7f548f0a1fc0, 0xfebd0000, 0x2000, 0x7f54a909d000) = -14 (Bad address) +qemu: hardware error: vfio: DMA mapping failed, unable to continue + +See also: +https://bugzilla.proxmox.com/show_bug.cgi?id=1556 +https://www.redhat.com/archives/vfio-users/2016-May/msg00088.html + +This has occurred on Proxmox (Proxmox and Debian packages, Ubuntu kernel), Ubuntu, +and pure Debian packages and kernel on Proxmox. However, this error +reportedly does NOT occur for: + +- different distributions(!) (Fedora 24, 25) +- different HBA cards (SAS2308, SAS3008) +- different CPU (E3-1220v5) + +I would be thankful for any input and I'll be happy to provide any further info necessary. This is my first time delving this deep into anything close to the kernel. + +Thanks and best regards, +Johannes Falke \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1732959 b/results/classifier/gemma3:12b/kvm/1732959 new file mode 100644 index 00000000..d1347e74 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1732959 @@ -0,0 +1,60 @@ + +[regression] stop/cont triggers clock jump proportional to host clock drift + +We (ab)use migration + block mirroring to perform transparent zero downtime VM backups. Basically: + +1) do a block mirror of the source VM's disk +2) migrate the source VM to a destination VM using the disk copy +3) cancel the block mirroring +4) resume the source VM +5) shut down the destination VM gracefully and move the disk to backup + +Relatively recently, the source VM's clock started jumping after step #4. More specifically, the clock jumps an amount of time proportional to the time since it was last migrated. With a week between migrations, clock jumps between ~2.5s and ~12s have been observed. 
For a particular host, the amount of clock jump is fairly consistent, but there is a large variation from one host to the next (this is likely down to hardware variations and the amount of NTP adjusted clock drift on the host). + +This is caused by a kernel regression which I was able to bisect. The result of the bisect was: + +108b249c453dd7132599ab6dc7e435a7036c193f is the first bad commit +commit 108b249c453dd7132599ab6dc7e435a7036c193f +Author: Paolo Bonzini <email address hidden> +Date: Thu Sep 1 14:21:03 2016 +0200 + + KVM: x86: introduce get_kvmclock_ns + + Introduce a function that reads the exact nanoseconds value that is + provided to the guest in kvmclock. This crystallizes the notion of + kvmclock as a thin veneer over a stable TSC, that the guest will + (hopefully) convert with NTP. In other words, kvmclock is *not* a + paravirtualized host-to-guest NTP. + + Drop the get_kernel_ns() function, that was used both to get the base + value of the master clock and to get the current value of kvmclock. + The former use is replaced by ktime_get_boot_ns(), the latter is + the purpose of get_kernel_ns(). + + This also allows KVM to provide a Hyper-V time reference counter that + is synchronized with the time that is computed from the TSC page. + + Reviewed-by: Roman Kagan <email address hidden> + Signed-off-by: Paolo Bonzini <email address hidden> + +I am able to reproduce the issue with much newer kernels as well, including 4.12.5 and 4.9.6. + +Reliably reproducing the problem in isolation is difficult, as one must run a VM for many hours before the clock jump from this bug is noticeable over the clock jump inherent with a pause and resume of the VM. The reproducer I am including is set to run the VM for 18 hours before migration and looks for >= 150 ms of clock jump. On different hardware, you may need to let the VM run for more than 18 hours to reliably reproduce the issue. + +To reproduce the issue, please see the attached reproducer. The host needs to have perl, screen and socat installed for the backup script to work. Both the host and guest need to be running NTP (and NTP must autostart at boot in the guest). The host needs to be able to SSH into the guest using SSH keys (to measure the clock jump), so you will need to configure the network and SSH keys appropriately, then change the hardcoded IP address in checktime.sh and test.sh. I have only tested with CentOS 7 guests. + +The qemu command that gets run is in .kvmscreen (the destination VM's command line is programmatically constructed from this command as well), you may need to tweak the bridge configuration. Also, although the reproducer is relatively self contained, it has several built in assumptions that will break if the image file is not in the /var/lib/kvm directory or if the monitor file is not in the /var/lib/kvm/monitor directory, or if the /backup directory does not exist. Finally, if you change the process name or socket name in .kvmscreen, you'll need to adjust the cleanup section in test.sh. + +With all of the above in place, run test.sh and check back in a little over 18 hours, part of the output should include something along these lines: + +Target not found (wanted 150, at 10) + +- or - + +Target found (wanted 150, found 340) + +If the target is reported as found, that means that we have probably reproduced the described issue. + +The version of QEMU in use does not appear to matter. At one point I tested every major version from 2.4 to 2.9 (inclusive) and reproduced the issue in all of them. 
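+Stripped of the full 18-hour reproducer, the core of the check is just a stop/cont cycle plus a host/guest clock comparison (a sketch; the monitor socket path and guest address are placeholders matching the layout described above):
+
+    # offset before pausing (the guest must already have been running for many hours)
+    echo "host: $(date +%s.%N)  guest: $(ssh guest-ip date +%s.%N)"
+    # pause and resume through the HMP socket, as the backup flow does around its migrate/resume steps
+    echo stop | socat - UNIX-CONNECT:/var/lib/kvm/monitor/vm1.sock
+    sleep 5
+    echo cont | socat - UNIX-CONNECT:/var/lib/kvm/monitor/vm1.sock
+    # offset after resuming; on affected kernels the guest clock has jumped
+    echo "host: $(date +%s.%N)  guest: $(ssh guest-ip date +%s.%N)"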
+ +This was initially observed on two different Gentoo hosts. I have also started to see this issue happening with four different RHEL 7 hosts as of the upgrade to RHEL 7.4. This is not too surprising as it appears that the above commit has been backported into RHEL 7. All hosts and guests are 64-bit. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1744654 b/results/classifier/gemma3:12b/kvm/1744654 new file mode 100644 index 00000000..fb19f9b4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1744654 @@ -0,0 +1,4 @@ + +commit: 4fe6d78 "virtio: postpone the execution of event_notifier_cleanup function" will cause vhost-user device crash + +The new commit: 4fe6d78 break the existing vhost-user devices, such as vhost-user-scsi/blk and vhost-vsocks when exit the host driver, kvm_io_ioeventfd_del will hit the abort(). \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1752026 b/results/classifier/gemma3:12b/kvm/1752026 new file mode 100644 index 00000000..57af37c6 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1752026 @@ -0,0 +1,149 @@ + +Ubuntu18.04:POWER9:DD2.2 - Unable to start a KVM guest with default machine type(pseries-bionic) complaining "KVM implementation does not support Transactional Memory, try cap-htm=off" (kvm) + +== Comment: #0 - Satheesh Rajendran <email address hidden> - 2018-02-23 08:31:06 == +---Problem Description--- +libvirt unable to start a KVM guest complaining about cap-htm machine property to be off + +Host Env: +# lscpu +Architecture: ppc64le +Byte Order: Little Endian +CPU(s): 160 +On-line CPU(s) list: 0-159 +Thread(s) per core: 4 +Core(s) per socket: 20 +Socket(s): 2 +NUMA node(s): 2 +Model: 2.2 (pvr 004e 1202) +Model name: POWER9 (raw), altivec supported +CPU max MHz: 3800.0000 +CPU min MHz: 2166.0000 +L1d cache: 32K +L1i cache: 32K +L2 cache: 512K +L3 cache: 10240K +NUMA node0 CPU(s): 0-79 +NUMA node8 CPU(s): 80-159 + +ii qemu-kvm 1:2.11+dfsg-1ubuntu2 ppc64el QEMU Full virtualization on x86 hardware + +ii libvirt-bin 4.0.0-1ubuntu3 ppc64el programs for the libvirt library + +# lsmcode +Version of System Firmware : + Product Name : OpenPOWER Firmware + Product Version : open-power-SUPERMICRO-P9DSU-V1.03-20180205-imp + Product Extra : occ-577915f + Product Extra : skiboot-v5.9-240-g081882690163-pcbedce4 + Product Extra : petitboot-v1.6.6-p019c87e + Product Extra : sbe-095e608 + Product Extra : machine-xml-fb5f933 + Product Extra : hostboot-9bfb201 + Product Extra : linux-4.14.13-openpower1-p78d7eee + + + +Contact Information = <email address hidden> + +---uname output--- +4.15.0-10-generic + +Machine Type = power9 boston 2.2 (pvr 004e 1202) + +---Debugger--- +A debugger is not configured + +---Steps to Reproduce--- + 1. Boot a guest from libvirt with default pseries machine type or pseries-bionic + +/usr/bin/virt-install --connect=qemu:///system --hvm --accelerate --name 'virt-tests-vm1' --machine pseries --memory=32768 --vcpu=32,sockets=1,cores=32,threads=1 --import --nographics --serial pty --memballoon model=virtio --controller type=scsi,model=virtio-scsi --disk path=/var/lib/libvirt/images/workspace/runAvocadoFVTTest/avocado-fvt-wrapper/data/avocado-vt/images/ubuntu-18.04-ppc64le.qcow2,bus=scsi,size=10,format=qcow2 --network=bridge=virbr0,model=virtio,mac=52:54:00:77:78:79 --noautoconsole +WARNING No operating system detected, VM performance may suffer. Specify an OS with --os-variant for optimal results. + +Starting install... 
+ERROR internal error: process exited while connecting to monitor: ,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:77:78:79,bus=pci.0,addr=0x1 -chardev pty,id=charserial0 -device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on +2018-02-23T14:21:11.081809Z qemu-system-ppc64: KVM implementation does not support Transactional Memory, try cap-htm=off +Domain installation does not appear to have been successful. +If it was, you can restart your domain by running: + virsh --connect qemu:///system start virt-tests-vm1 +otherwise, please restart your installation. + +2. Fails to boot.. + +Note: if we specify machine type as pseries=2.12 it boots fine like below + +/usr/bin/virt-install --connect=qemu:///system --hvm --accelerate --name 'virt-tests-vm1' --machine pseries-2.12 --memory=32768 --vcpu=32,sockets=1,cores=32,threads=1 --import --nographics --serial pty --memballoon model=virtio --controller type=scsi,model=virtio-scsi --disk path=/var/lib/libvirt/images/workspace/runAvocadoFVTTest/avocado-fvt-wrapper/data/avocado-vt/images/ubuntu-18.04-ppc64le.qcow2,bus=scsi,size=10,format=qcow2 --network=bridge=virbr0,model=virtio,mac=52:54:00:77:78:79 --noautoconsole +WARNING No operating system detected, VM performance may suffer. Specify an OS with --os-variant for optimal results. + +qemu-cmd line: + +libvirt+ 4283 1 99 09:26 ? 00:00:38 qemu-system-ppc64 -enable-kvm -name guest=virt-tests-vm1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-virt-tests-vm1/master-key.aes -machine pseries-2.12,accel=kvm,usb=off,dump-guest-core=off -m 32768 -realtime mlock=off -smp 32,sockets=1,cores=32,threads=1 -uuid 108ac2b5-e8b2-4399-a925-a707e8020871 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-4-virt-tests-vm1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device qemu-xhci,id=usb,bus=pci.0,addr=0x3 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -drive file=/var/lib/libvirt/images/workspace/runAvocadoFVTTest/avocado-fvt-wrapper/data/avocado-vt/images/ubuntu-18.04-ppc64le.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:77:78:79,bus=pci.0,addr=0x1 -chardev pty,id=charserial0 -device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on + + +Userspace tool common name: ii libvirt-bin 4.0.0-1ubuntu3 ppc64el programs for the libvirt library + +The userspace tool has the following bit modes: both + +Userspace rpm: ii libvirt-bin 4.0.0-1ubuntu3 ppc64el programs for the libvirt library + +Userspace tool obtained from project website: na + +*Additional Instructions for <email address hidden>: +-Post a private note with access information to the machine that the bug is occuring on. +-Attach ltrace and strace of userspace application. 
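+For reference, the workaround named in the error message can be exercised directly with a raw QEMU command line, bypassing libvirt (a sketch; disk path and sizes are placeholders):
+
+    qemu-system-ppc64 -machine pseries,accel=kvm,cap-htm=off \
+        -m 4096 -smp 4 -nographic \
+        -drive file=/var/lib/libvirt/images/guest.qcow2,if=virtio
+
+At the time of this report libvirt itself had no XML knob for cap-htm (see the discussion in the comments below), so the choice is between passing the property like this or using a machine type whose default is already cap-htm=off, such as pseries-2.12.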
+ +== Comment: #1 - Satheesh Rajendran <email address hidden> - 2018-02-23 08:35:17 == +vm qemu log for failed and passed cases: + +Failed:(pseries-bionic) +2018-02-23 14:21:10.806+0000: starting up libvirt version: 4.0.0, package: 1ubuntu3 (Christian Ehrhardt <email address hidden> Mon, 19 Feb 2018 14:18:44 +0100), qemu version: 2.11.1(Debian 1:2.11+dfsg-1ubuntu2), hostname: ltc-boston8.aus.stglabs.ibm.com +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -name guest=virt-tests-vm1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-virt-tests-vm1/master-key.aes -machine pseries-bionic,accel=kvm,usb=off,dump-guest-core=off -m 32768 -realtime mlock=off -smp 32,sockets=1,cores=32,threads=1 -uuid 36c37d3b-fb24-4350-94f9-3271b257f75c -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-virt-tests-vm1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device qemu-xhci,id=usb,bus=pci.0,addr=0x3 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -drive file=/var/lib/libvirt/images/workspace/runAvocadoFVTTest/avocado-fvt-wrapper/data/avocado-vt/images/ubuntu-18.04-ppc64le.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:77:78:79,bus=pci.0,addr=0x1 -chardev pty,id=charserial0 -device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on +2018-02-23T14:21:10.909242Z qemu-system-ppc64: -chardev pty,id=charserial0: char device redirected to /dev/pts/1 (label charserial0) +2018-02-23T14:21:11.081809Z qemu-system-ppc64: KVM implementation does not support Transactional Memory, try cap-htm=off +2018-02-23 14:21:18.857+0000: shutting down, reason=failed + + +Passed:(pseries-2.12) +2018-02-23 14:26:07.047+0000: starting up libvirt version: 4.0.0, package: 1ubuntu3 (Christian Ehrhardt <email address hidden> Mon, 19 Feb 2018 14:18:44 +0100), qemu version: 2.11.1(Debian 1:2.11+dfsg-1ubuntu2), hostname: ltc-boston8.aus.stglabs.ibm.com +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -name guest=virt-tests-vm1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-virt-tests-vm1/master-key.aes -machine pseries-2.12,accel=kvm,usb=off,dump-guest-core=off -m 32768 -realtime mlock=off -smp 32,sockets=1,cores=32,threads=1 -uuid 108ac2b5-e8b2-4399-a925-a707e8020871 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-4-virt-tests-vm1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device qemu-xhci,id=usb,bus=pci.0,addr=0x3 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -drive file=/var/lib/libvirt/images/workspace/runAvocadoFVTTest/avocado-fvt-wrapper/data/avocado-vt/images/ubuntu-18.04-ppc64le.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:77:78:79,bus=pci.0,addr=0x1 -chardev pty,id=charserial0 -device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on +2018-02-23T14:26:07.116991Z qemu-system-ppc64: -chardev pty,id=charserial0: char device redirected to /dev/pts/1 (label charserial0) + +Regards, +-Satheesh + +== Comment: #8 - VIPIN K. PARASHAR <email address hidden> - 2018-02-25 23:38:29 == +Starting install... +ERROR internal error: process exited while connecting to monitor: ,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:77:78:79,bus=pci.0,addr=0x1 -chardev pty,id=charserial0 -device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on +2018-02-23T14:21:11.081809Z qemu-system-ppc64: KVM implementation does not support Transactional Memory, try cap-htm=off +Domain installation does not appear to have been successful. +If it was, you can restart your domain by running: + virsh --connect qemu:///system start virt-tests-vm1 +otherwise, please restart your installation. + +As per above message, qemu is reporting TM to be not supported by KVM on this hardware +and thus recommending to turn off cap-htm. + +== Comment: #12 - Suraj Jitindar Singh <email address hidden> - 2018-02-26 23:35:02 == +I don't know what a pseries-bionic is, there is no reference to it upstream. + +What you are seeing is expected behaviour as far as I can tell. POWER9 currently does not support HTM for a guest and thus it must not be turned on, otherwise qemu will fail to start. + +HTM can be disabled from the qemu command line by setting cap-htm=off, as stated in the the error message. The pseries-2.12 machine type has htm disabled by default and thus with that machine type there is no requirement to set cap-htm=off on the command line to get qemu to start. + +So depending on what machine pseries-bionic is based on it will be required to disable htm on the command line (with cap-htm=off) if it is not disabled by default for the machine. + +== Comment: #13 - Satheesh Rajendran <email address hidden> - 2018-02-27 00:05:12 == +Had a chat with Suraj and here is the summary + +1. Currently Power9 DD2.2(host kernel) does not support HTM, so guest should be booted with cap-htm=off, looks like host kernel patch rework in progress--> Initial patch, https://www.spinics.net/lists/kvm-ppc/msg13378.html +2. Libvirt does not know about this cap-htm yet and currently it does not set any default values? +3. pseries-2.12 does have cap-htm=off by default but not the older machine types, so we see the guest is booting from libvirt with pseries-2.12 +4. Once 1 is fixed, we can boot cap-htm=on, I guess by that time pseries-2.12 to be changed to cap-htm=on bydefault. + +---> 3. Immediate fix can be Canonical defaults their machine type(pseries-bioic) to pseries-2.12... +---> Future 1 and 4 to be addressed, not sure about 2? + +Needs a mirror to Canonical to address this. 
+ +Regards, +-Satheesh \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1761798 b/results/classifier/gemma3:12b/kvm/1761798 new file mode 100644 index 00000000..a55aad84 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1761798 @@ -0,0 +1,28 @@ + +live migration intermittently fails in CI with "VQ 0 size 0x80 Guest index 0x12c inconsistent with Host index 0x134: delta 0xfff8" + +Seen here: + +http://logs.openstack.org/37/522537/20/check/legacy-tempest-dsvm-multinode-live-migration/8de6e74/logs/subnode-2/libvirt/qemu/instance-00000002.txt.gz + +2018-04-05T21:48:38.205752Z qemu-system-x86_64: -chardev pty,id=charserial0,logfile=/dev/fdset/1,logappend=on: char device redirected to /dev/pts/0 (label charserial0) +warning: TCG doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5] +2018-04-05T21:48:43.153268Z qemu-system-x86_64: VQ 0 size 0x80 Guest index 0x12c inconsistent with Host index 0x134: delta 0xfff8 +2018-04-05T21:48:43.153288Z qemu-system-x86_64: Failed to load virtio-blk:virtio +2018-04-05T21:48:43.153292Z qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:04.0/virtio-blk' +2018-04-05T21:48:43.153347Z qemu-system-x86_64: load of migration failed: Operation not permitted +2018-04-05 21:48:43.198+0000: shutting down, reason=crashed + +And in the n-cpu logs on the other host: + +http://logs.openstack.org/37/522537/20/check/legacy-tempest-dsvm-multinode-live-migration/8de6e74/logs/screen-n-cpu.txt.gz#_Apr_05_21_48_43_257541 + +There is a related Red Hat bug: + +https://bugzilla.redhat.com/show_bug.cgi?id=1450524 + +The CI job failures are at present using the Pike UCA: + +ii libvirt-bin 3.6.0-1ubuntu6.2~cloud0 + +ii qemu-system-x86 1:2.10+dfsg-0ubuntu3.5~cloud0 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1762707 b/results/classifier/gemma3:12b/kvm/1762707 new file mode 100644 index 00000000..1e608b42 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1762707 @@ -0,0 +1,17 @@ + +VFIO device gets DMA failures when virtio-balloon leak from highmem to lowmem + +Is there any known conflict between VFIO passthrough device and virtio-balloon? + +The VM has: +1. 4GB system memory +2. one VFIO passthrough device which supports high address memory DMA and uses GFP_HIGHUSER pages. +3. Memory balloon device with 4GB target. + +When setting the memory balloon target to 1GB and 4GB in loop during runtime (I used the command "virsh qemu-monitor-command debian --hmp --cmd balloon 1024"), the VFIO device DMA randomly gets failure. + +More clues: +1. configure 2GB system memory (no highmem) VM, no issue with similar operations +2. setting the memory balloon to higher like 8GB, no issue with similar operations + +I'm also trying to narrow down this issue. It's appreciated for that you guys may share some thoughts. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1769 b/results/classifier/gemma3:12b/kvm/1769 new file mode 100644 index 00000000..45072f8c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1769 @@ -0,0 +1,2 @@ + +RHEL9 ppc64le Power9 pseries guest userspace segfaults diff --git a/results/classifier/gemma3:12b/kvm/1769053 b/results/classifier/gemma3:12b/kvm/1769053 new file mode 100644 index 00000000..cef670d2 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1769053 @@ -0,0 +1,34 @@ + +Ability to control phys-bits through libvirt + +Attempting to start a KVM guest with more than 1TB of RAM fails. 
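+At the QEMU level the knobs being asked for already exist as x86 CPU properties (a sketch; the memory size and bit width below are example values):
+
+    # mirror the host's physical address width into the guest ...
+    qemu-system-x86_64 -enable-kvm -cpu host,host-phys-bits=on -m 1200G ...
+    # ... or set an explicit width that covers the configured RAM
+    qemu-system-x86_64 -enable-kvm -cpu host,phys-bits=46 -m 1200G ...
+
+The request here is essentially for libvirt to expose these properties from the domain XML.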
+ +It looks like we might need some extra patches: https://lists.gnu.org/archive/html/qemu-discuss/2017-12/msg00005.html + +ProblemType: Bug +DistroRelease: Ubuntu 18.04 +Package: qemu-system-x86 1:2.11+dfsg-1ubuntu7 +ProcVersionSignature: Ubuntu 4.15.0-20.21-generic 4.15.17 +Uname: Linux 4.15.0-20-generic x86_64 +ApportVersion: 2.20.9-0ubuntu7 +Architecture: amd64 +CurrentDesktop: Unity:Unity7:ubuntu +Date: Fri May 4 16:21:14 2018 +InstallationDate: Installed on 2017-04-05 (393 days ago) +InstallationMedia: Ubuntu 16.10 "Yakkety Yak" - Release amd64 (20161012.2) +MachineType: Dell Inc. XPS 13 9360 +ProcKernelCmdLine: BOOT_IMAGE=/vmlinuz-4.15.0-20-generic root=/dev/mapper/ubuntu--vg-root ro quiet splash transparent_hugepage=madvise vt.handoff=1 +SourcePackage: qemu +UpgradeStatus: Upgraded to bionic on 2018-04-30 (3 days ago) +dmi.bios.date: 02/26/2018 +dmi.bios.vendor: Dell Inc. +dmi.bios.version: 2.6.2 +dmi.board.name: 0PF86Y +dmi.board.vendor: Dell Inc. +dmi.board.version: A00 +dmi.chassis.type: 9 +dmi.chassis.vendor: Dell Inc. +dmi.modalias: dmi:bvnDellInc.:bvr2.6.2:bd02/26/2018:svnDellInc.:pnXPS139360:pvr:rvnDellInc.:rn0PF86Y:rvrA00:cvnDellInc.:ct9:cvr: +dmi.product.family: XPS +dmi.product.name: XPS 13 9360 +dmi.sys.vendor: Dell Inc. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1773753 b/results/classifier/gemma3:12b/kvm/1773753 new file mode 100644 index 00000000..c8fcf624 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1773753 @@ -0,0 +1,65 @@ + +virsh start, after virsh managed save hangs and vm goes to paused state with qemu version v2.12.0-813-g5a5c383b13-dirty on powerpc + +Host Env: +IBM Power8 with Fedora28 base with compiled upstream kernel, qemu, libvirt. + +Host Kernel: 4.17.0-rc5-00069-g3acf4e395260 + +qemu-kvm(5a5c383b1373aeb6c87a0d6060f6c3dc7c53082b): v2.12.0-813-g5a5c383b13-dirty + +libvirt(4804a4db33a37f828d033733bc47f6eff5d262c3): + +Guest Kernel: 4.17.0-rc7 + +Steps to recreate: +Define a guest attached with above setup and start. +# virsh start avocado-vt-vm1 + +guest console;... +# uname -r +4.17.0-rc7 +[root@atest-guest ~]# lscpu +Architecture: ppc64le +Byte Order: Little Endian +CPU(s): 3 +On-line CPU(s) list: 0-2 +Thread(s) per core: 1 +Core(s) per socket: 1 +Socket(s): 3 +NUMA node(s): 1 +Model: 2.1 (pvr 004b 0201) +Model name: POWER8 (architected), altivec supported +Hypervisor vendor: KVM +Virtualization type: para +L1d cache: 64K +L1i cache: 32K +NUMA node0 CPU(s): 0-2 + + +# virsh managedsave avocado-vt-vm1 + +Domain avocado-vt-vm1 state saved by libvirt + +# virsh list + Id Name State +---------------------------------------------------- + +# virsh start avocado-vt-vm1 ----Hangs forever and vm state goes to paused. + + +# virsh list + Id Name State +---------------------------------------------------- + 87 avocado-vt-vm1 paused + + +P:S:- with same above setup, just changing the qemu-kvm comes bydefault with F28 works fine. + +/usr/bin/qemu-kvm --version +QEMU emulator version 2.11.1(qemu-2.11.1-2.fc28) + +Summary: with above other setup. +machine type pseries-2.12 and qemu-2.11.1-2.fc28 -Works fine. + +machine type pseries-2.12/pseries-2.13 and qemu 5a5c383b1373aeb6c87a0d6060f6c3dc7c53082b - Does not work. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1774605 b/results/classifier/gemma3:12b/kvm/1774605 new file mode 100644 index 00000000..3a747d0c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1774605 @@ -0,0 +1,41 @@ + +PowerPC guest does not emulate L2 and L3 cache for KVM vCPUs + +PowerPC KVM guest does not emulate L2 and L2 caches for vCPU, it would be good to have them enabled if not any known issues/limitation already with PowerPC. + +Host Env: +kernel: 4.17.0-rc7-00045-g0512e0134582 +qemu: v2.12.0-923-gc181ddaa17-dirty +#libvirtd -V +libvirtd (libvirt) 4.4.0 + + +Guest Kernel: +# uname -a +Linux atest-guest 4.17.0-rc7-00045-g0512e0134582 #9 SMP Fri Jun 1 02:55:50 EDT 2018 ppc64le ppc64le ppc64le GNU/Linux + +Guest: +# lscpu +Architecture: ppc64le +Byte Order: Little Endian +CPU(s): 16 +On-line CPU(s) list: 0-15 +Thread(s) per core: 8 +Core(s) per socket: 2 +Socket(s): 1 +NUMA node(s): 1 +Model: 2.1 (pvr 004b 0201) +Model name: POWER8 (architected), altivec supported +Hypervisor vendor: KVM +Virtualization type: para +L1d cache: 64K +L1i cache: 32K +NUMA node0 CPU(s): 0-15 + + + +background: x86 enabling cpu L2 cache bydefault and L3 cache on demand for kvm guest +and claims performance improvement as vcpus can be +benefited with lesser `vmexits due to guest send IPIs.` with L3 cache enabled, below was patch for same. + +https://git.qemu.org/?p=qemu.git;a=commit;h=14c985cffa6cb177fc01a163d8bcf227c104718c \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1777301 b/results/classifier/gemma3:12b/kvm/1777301 new file mode 100644 index 00000000..39850829 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1777301 @@ -0,0 +1,58 @@ + +Boot failed after installing Checkpoint Pointsec FDE + +Boot failed after installing Checkpoint Pointsec FDE + + +Hi, +I installed Windows 10 64-bit guest on CentOS 7. Everything works great as expected. +However after installing CheckPoint AlertSec full disk encryption, the guest failed to boot. + +The following error is displayed in qemu log file. +KVM internal error. Suberror: 1 +emulation failure + + + + + +Installed Software +[root@sesamvmh01 qemu]# yum list installed | grep qemu +ipxe-roms-qemu.noarch 20170123-1.git4e85b27.el7_4.1 @base +libvirt-daemon-driver-qemu.x86_64 3.9.0-14.el7_5.5 @updates +qemu-guest-agent.x86_64 10:2.8.0-2.el7 @base +qemu-img-ev.x86_64 10:2.3.0-29.1.el7 @qemu-kvm-rhev +qemu-kvm-common-ev.x86_64 10:2.3.0-29.1.el7 @qemu-kvm-rhev +qemu-kvm-ev.x86_64 10:2.3.0-29.1.el7 @qemu-kvm-rhev + +# uname -r +3.10.0-862.3.2.el7.x86_64 + +CPU info: +processor : 0..3 +vendor_id : GenuineIntel +cpu family : 6 +model : 30 +model name : Intel(R) Xeon(R) CPU X3430 @ 2.40GHz +stepping : 5 +microcode : 0x7 +cpu MHz : 1200.000 +cache size : 8192 KB +physical id : 0 +siblings : 4 +core id : 0 +cpu cores : 4 +apicid : 0 +initial apicid : 0 +fpu : yes +fpu_exception : yes +cpuid level : 11 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm ida +bogomips : 4799.98 +clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual +power management: + +Please also check attached logs. I am new to qemu-kvm so please don't hesitate to ask missing info. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1778966 b/results/classifier/gemma3:12b/kvm/1778966 new file mode 100644 index 00000000..a2b63a74 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1778966 @@ -0,0 +1,6 @@ + +Windows 1803 and later crashes on KVM + +For a bionic host, using the current public kvm modules, KVM is not capable of booting a WindowsInsider or msdn Windows 1803 iso. In stallign from an ISO from a started windows 2016 guest results in an unbootable and unrepairable guest. + +The hardware is a threadripper 1920x with 32GB of main memory, disk mydigital BPX SSD and WD based 4 column RAID 5 via mdadm. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1779162 b/results/classifier/gemma3:12b/kvm/1779162 new file mode 100644 index 00000000..19579f83 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1779162 @@ -0,0 +1,45 @@ + +qemu versions 2.10 and 2.11 have error during migration of larger guests + +== Comment: #0 - Christian Borntraeger - 2018-06-28 06:39:27 == +Migration fails with larger guests (e.g. 10GB) on a z system prints an error message in the log + +see /var/log/libvirt/qemu/... +[...] +qemu-system-s390x: KVM_S390_SET_CMMA_BITS failed: Bad address + +This messes up guest state for the CMMA values (guest data corruption) + +This is fixed with + +commit 46fa893355e0bd88f3c59b886f0d75cbd5f0bbbe +Author: Claudio Imbrenda <email address hidden> +AuthorDate: Thu Jan 18 18:51:44 2018 +0100 +Commit: Cornelia Huck <email address hidden> +CommitDate: Mon Jan 22 11:04:52 2018 +0100 + + s390x: fix storage attributes migration for non-small guests + + Fix storage attribute migration so that it does not fail for guests + with more than a few GB of RAM. + With such guests, the index in the buffer would go out of bounds, + usually by large amounts, thus receiving -EFAULT from the kernel. + Migration itself would be successful, but storage attributes would then + not be migrated completely. + + This patch fixes the out of bounds access, and thus migration of all + storage attributes when the guest have large amounts of memory. + + Cc: <email address hidden> + Signed-off-by: Claudio Imbrenda <email address hidden> + Fixes: 903fd80b03243476 ("s390x/migration: Storage attributes device") + Message-Id: <email address hidden> + Reviewed-by: Christian Borntraeger <email address hidden> + Signed-off-by: Cornelia Huck <email address hidden> + +This fix is part of 2.11.1 so the qemu in bionic is fine. +The qemu in artful, as well as the qemu in the cloud archives for 16.04 need this fix, so we have +affected qemus in 17.10 and 16.04. + +Regarding 16.04: +The bug only triggers for host kernels >= 4.13 - in other words when you combine HWE kernel with the qemu from the cloud archive. 
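+A quick sanity check for whether a given host is running an exposed combination (a sketch):
+
+    uname -r                       # host kernels >= 4.13 can trigger the bug
+    qemu-system-s390x --version    # needs a build that already carries the 2.11.1 fix above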
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1779634 b/results/classifier/gemma3:12b/kvm/1779634 new file mode 100644 index 00000000..93bc07f4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1779634 @@ -0,0 +1,36 @@ + +qemu-x86_64 on aarch64 reports "Synchronous External Abort" + +Purpose: to run x86_64 utilities on aarch64 platform (Intel/Dell network adapters' firmware upgrade tools) +System: aarch64 server platform, with ubuntu 16.04 (xenial) Linux 4.13.0-45-generic #50~16.04.1-Ubuntu SMP Wed May 30 11:14:25 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux + +Reproduce: +1) build linux-user qemu-x86_64 static from source (tried both version 1.12.0 & 1.11.02) + ./configure --target-list=x86_64-linux-user --disable-system --static --enable-linux-user + +2) install the interpreter into binfmt_misc filesystem + $ cat /proc/sys/fs/binfmt_misc/qemu-x86_64 + enabled + interpreter /usr/local/bin/qemu-x86_64 + flags: + offset 0 + magic 7f454c4602010100000000000000000002003e00 + mask fffffffffffefefcfffffffffffffffffeffffff + +3) packaging Intel/Dell upgrade utilities into docker images, I've published two on docker hub: + REPOSITORY TAG IMAGE ID CREATED SIZE + heyi/dellupdate latest 8e013f5511cd 6 hours ago 210MB + heyi/nvmupdate64e latest 9d2de9d0edaa 3 days ago 451MB + +4) run the docker container on aarch64 server platform: + docker run -it --privileged --network host --volume /usr/local/bin/qemu-x86_64:/usr/local/bin/qemu-x86_64 heyi/dellupdate:latest + +5) finally, within docker container run the upgrade tool: + # ./Network_Firmware_T6VN9_LN_18.5.17_A00.BIN + +Errors: in dmesg it reports excessive 'Synchronous External Abort': + +kernel: [242850.159893] Synchronous External Abort: synchronous external abort (0x92000610) at 0x0000000000429958 +kernel: [242850.169199] Unhandled fault: synchronous external abort (0x92000610) at 0x0000000000429958 + +thanks and best regards, Yi \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1780928 b/results/classifier/gemma3:12b/kvm/1780928 new file mode 100644 index 00000000..a118e1f0 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1780928 @@ -0,0 +1,31 @@ + +v2.12.0-2321-gb34181056c: vcpu hotplug crashes qemu-kvm with segfault + +vcpu hotplug crashes upstream qemu(v2.12.0-2321-gb34181056c), vcpu hotplug works fine in v2.12.0-rc4. 
+ +Host: Power8, kernel: 4.18.0-rc2-00037-g6f0d349d922b +Guest: Power8, kernel: 4.18.0-rc3-00183-gc42c12a90545 (base image: fedora27 ppc64le) + +/usr/share/avocado-plugins-vt/build/qemu/ppc64-softmmu/qemu-system-ppc64 -M pseries,accel=kvm,max-cpu-compat=power8 -m 8192 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -drive file=/var/lib/avocado/data/avocado-vt/images/jeos-27-ppc64le.qcow2,format=qcow2,if=none,id=drive1 -device scsi-hd,drive=drive1,bus=scsi0.0 -smp 1,cores=1,threads=1,sockets=1,maxcpus=8 -serial /dev/pts/0 -monitor stdio -vga none -nographic -kernel /home/kvmci/linux/vmlinux -append 'root=/dev/sda2 rw console=tty0 console=ttyS0,115200 init=/sbin/init initcall_debug' -nic user,model=virtio-net-pci +QEMU 2.12.50 monitor - type 'help' for more information +(qemu) device_add host-spapr-cpu-core,id=core1,core-id=1 +Segmentation fault (core dumped) + + +Guest initial cpu: +# lscpu +Architecture: ppc64le +Byte Order: Little Endian +CPU(s): 1 +On-line CPU(s) list: 0 +Thread(s) per core: 1 +Core(s) per socket: 1 +Socket(s): 1 +NUMA node(s): 1 +Model: 2.1 (pvr 004b 0201) +Model name: POWER8 (architected), altivec supported +Hypervisor vendor: KVM +Virtualization type: para +L1d cache: 64K +L1i cache: 32K +NUMA node0 CPU(s): 0 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1781211 b/results/classifier/gemma3:12b/kvm/1781211 new file mode 100644 index 00000000..f960c748 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1781211 @@ -0,0 +1,16 @@ + +HAXM acceleration does not work at all. + +I have qemu windows build 2.12.90, haxm 7.2.0. Ubuntu, nor arch linux does not works when i turn on hax acceleration. Permanent kernel panics, black screen freezing and other crashes happens when i run qemu. +Qemu crashed with hax - when i ran it from iso. It crashed on already installed system - it's not matters. + +Versions: +archlinux-2018.07.01-x86_64 +ubuntu-18.04-live-server-amd64.iso + +I run qemu-system-x86_64.exe binary. + +My CPU: +core i7 2600k + +See screenshot \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1782300 b/results/classifier/gemma3:12b/kvm/1782300 new file mode 100644 index 00000000..66979a6a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1782300 @@ -0,0 +1,97 @@ + +COLO unable to failover to secondary VM + +I test COLO feature on my host following docs/COLO-FT.txt in qemu folder, but fail to failover to secondary VM. +Is there any mistake in my execution steps? 
+ +Execution environment: +QEMU v2.12.0-rc4 +OS: Ubuntu 16.04.3 LTS +Kernel: Linux 4.4.35 +Secondary VM IP: noted as "a.b.c.d" + +Execution steps: +# Primary +${COLO_PATH}/x86_64-softmmu/qemu-system-x86_64 \ + -enable-kvm \ + -m 512M \ + -smp 2 \ + -qmp stdio \ + -vnc :7 \ + -name primary \ + -device piix3-usb-uhci \ + -device usb-tablet \ + -netdev tap,id=tap0,vhost=off \ + -device virtio-net-pci,id=net-pci0,netdev=tap0 \ + -drive if=virtio,id=primary-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\ + children.0.file.filename=${IMG_PATH},\ + children.0.driver=raw -S + +# Secondary +${COLO_PATH}/x86_64-softmmu/qemu-system-x86_64 \ + -enable-kvm \ + -m 512M \ + -smp 2 \ + -qmp stdio \ + -vnc :8 \ + -name secondary \ + -device piix3-usb-uhci \ + -device usb-tablet \ + -netdev tap,id=tap1,vhost=off \ + -device virtio-net-pci,id=net-pci0,netdev=tap1 \ + -drive if=none,id=secondary-disk0,file.filename=${IMG_PATH},driver=raw,node-name=node0 \ + -drive if=virtio,id=active-disk0,driver=replication,mode=secondary,\ + file.driver=qcow2,top-id=active-disk0,\ + file.file.filename=$ACTIVE_DISK,\ + file.backing.driver=qcow2,\ + file.backing.file.filename=$HIDDEN_DISK,\ + file.backing.backing=secondary-disk0 \ + -incoming tcp:0:8888 + +# Enter into Secondary: +{'execute':'qmp_capabilities'} +{ 'execute': 'nbd-server-start', + 'arguments': {'addr': {'type': 'inet', 'data': {'host': 'a.b.c.d', 'port': '8889'} } } +} +{'execute': 'nbd-server-add', 'arguments': {'device': 'secondary-disk0', 'writable': true } } + +# Enter into Primary: +{'execute':'qmp_capabilities'} +{'execute': 'human-monitor-command', + 'arguments': { + 'command-line': 'drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=a.b.c.d,file.port=8889,file.export=secondary-disk0,node-name=nbd_client0' + } +} +{ 'execute':'x-blockdev-change', 'arguments':{'parent': 'primary-disk0', 'node': 'nbd_client0' } } +{ 'execute': 'migrate-set-capabilities', + 'arguments': {'capabilities': [ {'capability': 'x-colo', 'state': true } ] } } +{ 'execute': 'migrate', 'arguments': {'uri': 'tcp:a.b.c.d:8888' } } + +# To test failover +Primary +{ 'execute': 'x-blockdev-change', 'arguments': {'parent': 'primary-disk0', 'child': 'children.1'}} +{ 'execute': 'human-monitor-command','arguments': {'command-line': 'drive_del nbd_client0'}} + +Secondary +{ 'execute': 'nbd-server-stop' } + +Stop Primary +Send ^C signal to terminate PVM. + +Secondary +{ "execute": "x-colo-lost-heartbeat" } + + +# Result: +Primary (Use ^C to terminate) +qemu-system-x86_64: Can't receive COLO message: Input/output error +qemu-system-x86_64: terminating on signal 2 +{"timestamp": {"seconds": 1531815575, "microseconds": 997696}, "event": "SHUTDOWN", "data": {"guest":false}} + +Secondary +{ 'execute': 'nbd-server-stop' } +{"return": {}} +{ "execute": "x-colo-lost-heartbeat" } +{"return": {}} +qemu-system-x86_64: Can't receive COLO message: Input/output error +Segmentation fault \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1785734 b/results/classifier/gemma3:12b/kvm/1785734 new file mode 100644 index 00000000..fe040888 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1785734 @@ -0,0 +1,76 @@ + +movdqu partial write at page boundary + +In TCG mode, when a 16-byte write instruction (such as movdqu) is executed at a page boundary and causes a page fault, a partial write is executed in the first page. See the attached code for an example. + +Tested on the qemu-3.0.0-rc1 release. 
+ + +% gcc -m32 qemu-bug2.c && ./a.out && echo && qemu-i386 ./a.out +*(0x70000ff8+ 0) = aa +*(0x70000ff8+ 1) = aa +*(0x70000ff8+ 2) = aa +*(0x70000ff8+ 3) = aa +*(0x70000ff8+ 4) = aa +*(0x70000ff8+ 5) = aa +*(0x70000ff8+ 6) = aa +*(0x70000ff8+ 7) = aa +*(0x70000ff8+ 8) = 55 +*(0x70000ff8+ 9) = 55 +*(0x70000ff8+10) = 55 +*(0x70000ff8+11) = 55 +*(0x70000ff8+12) = 55 +*(0x70000ff8+13) = 55 +*(0x70000ff8+14) = 55 +*(0x70000ff8+15) = 55 +page fault: addr=0x70001000 err=0x7 +*(0x70000ff8+ 0) = aa +*(0x70000ff8+ 1) = aa +*(0x70000ff8+ 2) = aa +*(0x70000ff8+ 3) = aa +*(0x70000ff8+ 4) = aa +*(0x70000ff8+ 5) = aa +*(0x70000ff8+ 6) = aa +*(0x70000ff8+ 7) = aa +*(0x70000ff8+ 8) = 55 +*(0x70000ff8+ 9) = 55 +*(0x70000ff8+10) = 55 +*(0x70000ff8+11) = 55 +*(0x70000ff8+12) = 55 +*(0x70000ff8+13) = 55 +*(0x70000ff8+14) = 55 +*(0x70000ff8+15) = 55 + +*(0x70000ff8+ 0) = aa +*(0x70000ff8+ 1) = aa +*(0x70000ff8+ 2) = aa +*(0x70000ff8+ 3) = aa +*(0x70000ff8+ 4) = aa +*(0x70000ff8+ 5) = aa +*(0x70000ff8+ 6) = aa +*(0x70000ff8+ 7) = aa +*(0x70000ff8+ 8) = 55 +*(0x70000ff8+ 9) = 55 +*(0x70000ff8+10) = 55 +*(0x70000ff8+11) = 55 +*(0x70000ff8+12) = 55 +*(0x70000ff8+13) = 55 +*(0x70000ff8+14) = 55 +*(0x70000ff8+15) = 55 +page fault: addr=0x70001000 err=0x6 +*(0x70000ff8+ 0) = 77 +*(0x70000ff8+ 1) = 66 +*(0x70000ff8+ 2) = 55 +*(0x70000ff8+ 3) = 44 +*(0x70000ff8+ 4) = 33 +*(0x70000ff8+ 5) = 22 +*(0x70000ff8+ 6) = 11 +*(0x70000ff8+ 7) = 0 +*(0x70000ff8+ 8) = 55 +*(0x70000ff8+ 9) = 55 +*(0x70000ff8+10) = 55 +*(0x70000ff8+11) = 55 +*(0x70000ff8+12) = 55 +*(0x70000ff8+13) = 55 +*(0x70000ff8+14) = 55 +*(0x70000ff8+15) = 55 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1785972 b/results/classifier/gemma3:12b/kvm/1785972 new file mode 100644 index 00000000..62e32a0b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1785972 @@ -0,0 +1,49 @@ + +v3.0.0-rc4: VM fails to start after vcpuhotunplug, managedsave sequence + +VM fails to start after vcpu hot un-plug, managedsave sequence + +Host info: +Kernel: 4.18.0-rc8-00002-g1236568ee3cb + +qemu: commit 6ad90805383e6d04b3ff49681b8519a48c9f4410 (HEAD -> master, tag: v3.0.0-rc4) +QEMU emulator version 2.12.94 (v3.0.0-rc4-dirty) + +libvirt: commit 087de2f5a3dffb27d2eeb0c50a86d5d6984e5a5e (HEAD -> master) +libvirtd (libvirt) 4.6.0 + +Guest Kernel: 4.18.0-rc8-00002-g1236568ee3cb + + +Steps to reproduce: +1. Start a guest(VM) with 2 current , 4 max vcpus +virsh start vm1 +Domain vm1 started + +2. Hotplug 2 vcpus +virsh setvcpus vm1 4 --live + +3. Hot unplug 2 vcpus +virsh setvcpus vm1 2 --live + +4. Managedsave the VM +virsh managedsave vm1 + +Domain vm1 state saved by libvirt + +5. 
Start the VM ---Fails to start +virsh start vm1 + +error: Failed to start domain avocado-vt-vm1 +error: internal error: qemu unexpectedly closed the monitor: 2018-08-08T06:27:53.853818Z qemu: Unknown savevm section or instance 'spapr_cpu' 2 +2018-08-08T06:27:53.854949Z qemu: load of migration failed: Invalid argument + + + +Testcase: +avocado run libvirt_vcpu_plug_unplug.positive_test.vcpu_set.live.vm_operate.managedsave_with_unplug --vt-type libvirt --vt-extra-params emulator_path=/usr/share/avocado-plugins-vt/bin/qemu create_vm_libvirt=yes kill_vm_libvirt=yes env_cleanup=yes smp=8 backup_image_before_testing=no libvirt_controller=virtio-scsi scsi_hba=virtio-scsi-pci drive_format=scsi-hd use_os_variant=no restore_image_after_testing=no vga=none display=nographic kernel=/home/kvmci/linux/vmlinux kernel_args='root=/dev/sda2 rw console=tty0 console=ttyS0,115200 init=/sbin/init initcall_debug' take_regular_screendumps=no --vt-guest-os JeOS.27.ppc64le +JOB ID : 1f869477ad87e7d7e7e7777f631ae08965f41a74 +JOB LOG : /root/avocado/job-results/job-2018-08-08T02.42-1f86947/job.log + (1/1) type_specific.io-github-autotest-libvirt.libvirt_vcpu_plug_unplug.positive_test.vcpu_set.live.vm_operate.managedsave_with_unplug: ERROR (91.58 s) +RESULTS : PASS 0 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0 +JOB TIME : 95.89 s \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1788098 b/results/classifier/gemma3:12b/kvm/1788098 new file mode 100644 index 00000000..5ec03e2f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1788098 @@ -0,0 +1,12 @@ + +Avoid migration issues with aligned 2MB THB + +------- Comment From <email address hidden> 2018-08-20 17:12 EDT------- +Hi, in some environments it was observed that this qemu patch to enable THP made it more likely to hit guest migration issues, however the following kernel patch resolves those migration issues: + +https://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc.git/commit/?h=kvm-ppc-next&id=c066fafc595eef5ae3c83ae3a8305956b8c3ef15 +KVM: PPC: Book3S HV: Use correct pagesize in kvm_unmap_radix() + +Once merged upstream, it would be good to include that change as well to avoid potential migration problems. Should I open a new bug for that or is it better to track here? + +Note Paelzer: I have not seen related migration issues myself, but it seems reasonable and confirmed by IBM. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1788275 b/results/classifier/gemma3:12b/kvm/1788275 new file mode 100644 index 00000000..33d3e835 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1788275 @@ -0,0 +1,65 @@ + +-cpu ...,+topoext works only with EPYC CPU model + +See bug report at: +https://bugzilla.redhat.com/show_bug.cgi?id=1615682 + +Probably this is caused by the inconsistent legacy cache information on all CPU models except EPYC. 
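+For comparison, the combination that does work is the EPYC model, where the cache/topology CPUID leaves are consistent and the guest sees both threads per core (a sketch; other options omitted):
+
+    qemu-system-x86_64 -enable-kvm -cpu EPYC,+topoext \
+        -smp 4,sockets=1,cores=2,threads=2 ...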
+ +--------------------------------------------- +Description of problem: +Guest should get 2 threads per core and all of them should be on-line when booting guest with old amd cpu model + smt + +Steps to Reproduce: +1.Boot rhel7.6 guest with cli: +/usr/libexec/qemu-kvm -name rhel7.6 -m 16G -machine pc,accel=kvm \ + -S \ + -cpu Opteron_G3,+topoext,xlevel=0x8000001e,enforce \ + -smp 2,threads=2 \ + -monitor stdio \ + -qmp unix:/tmp/qmp2,server,nowait \ + -device VGA \ + -vnc :0 \ + -serial unix:/tmp/console2,server,nowait \ + -uuid 115e11b2-a869-41b5-91cd-6a32a907be7f \ + -drive file=rhel7.6-20180812.qcow2,if=none,id=drive-scsi-disk0,format=qcow2,cache=none,werror=stop,rerror=stop -device ide-hd,drive=drive-scsi-disk0,id=scsi-disk0 \ + -netdev tap,id=idinWyYY,vhost=on -device virtio-net-pci,mac=2e:39:fa:ff:88:a1,id=idlbq7eA,netdev=idinWyYY \ + +2.check cpu info inside guest +3. + +Actual results: +Guest gets one online cpu, one offline cpu and one thread per core: +# lscpu +lscpu +Architecture: x86_64 +CPU op-mode(s): 32-bit, 64-bit +Byte Order: Little Endian +CPU(s): 2 +On-line CPU(s) list: 0 +Off-line CPU(s) list: 1 +Thread(s) per core: 1 +Core(s) per socket: 1 +Socket(s): 1 +NUMA node(s): 1 +Vendor ID: AuthenticAMD +CPU family: 16 +Model: 2 +Model name: AMD Opteron 23xx (Gen 3 Class Opteron) +Stepping: 3 +CPU MHz: 2096.060 +BogoMIPS: 4192.12 +Hypervisor vendor: KVM +Virtualization type: full +L1d cache: 64K +L1i cache: 64K +L2 cache: 512K +L3 cache: 16384K +NUMA node0 CPU(s): 0 +Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm art rep_good nopl extd_apicid pni cx16 x2apic popcnt hypervisor lahf_lm cmp_legacy abm sse4a misalignsse topoext retpoline_amd ibp_disable vmmcall + + +Expected results: +Guest should get 2 threads per core and all of them should be on-line + +--------------------------------------------- \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1792193 b/results/classifier/gemma3:12b/kvm/1792193 new file mode 100644 index 00000000..3b78eae4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1792193 @@ -0,0 +1,17 @@ + +AMD Athlon(tm) X2 Dual-Core QL-64 bug + +I upgrade my qemu 2.12.0-2 => 3.0.0-1. After that I can't load virtual machine with "-cpu host" option. Full command line is +qemu-system-x86_64 \ + -monitor stdio \ + -enable-kvm \ + -cpu host \ + -smp cpus=2 \ + -m 1G \ + -vga virtio \ + -display gtk,gl=on \ + -soundhw ac97 \ + -drive file=/ehdd/qemu/arch_hw_12_08_2018/arch_shrinked.raw,format=raw,if=virtio +I have Arch Linux on virtual machine. When I start QEMU, GRUB tries to load initial ramdisk and stops. System doesn't load. If I try to start virtual machine with "-cpu athlon" option then get the same bug. +I downgrade back to qemu 2.12.0-2 and virtual machine works fine, system loads. +My processor is AMD Athlon(tm) X2 Dual-Core QL-64. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1793275 b/results/classifier/gemma3:12b/kvm/1793275 new file mode 100644 index 00000000..dcce123f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1793275 @@ -0,0 +1,27 @@ + +Hosts fail to start after update to QEMU 3.0 + +Host OS: Archlinux +Host Architecture: AMD64 +Guest OS: FreeBSD-11.2 (x2) and Archlinux (x1) +Guest Architecture: AMD64 + +I have been using QEMU 2.x without issue for a number of years but since updating to QEMU 3.0 my guests do not complete startup. 
+ +FreeBSD 11.2 guest failure symptom: +The two FreeBSD-11.2 guests output repeated messages of "unexpected cache type 4". This appears to be an internal error message and I've not found any instances of it through Google search. + +Archlinux guest failure symptom: +The single Archlinux guest gets no further than the message "uncompressing initial ramdisk". + +The guests are started by a qemu-kvm invokation. No virtual machine managers are used. The command lines used (from ps awx) to launch the VMs are: + +[neil@optimus ~]$ ps awx |grep qemu + 1492 ? Sl 3:19 /usr/bin/qemu-system-x86_64 -daemonize -pidfile /run/qemu_vps1.pid -enable-kvm -cpu host -smp 2 -k en-gb -boot order=c -drive file=/dev/system/vps1,cache=none,format=raw,if=virtio,index=0,media=disk -m 1024 -name FreeBSD_1 -net nic,macaddr=52:54:AD:86:64:00,model=virtio -net vde,sock=/run/vde_switch-tap0.sock -monitor telnet:127.0.0.2:23,server,nowait -vnc 192.168.0.1:0 + 1510 ? Sl 0:54 /usr/bin/qemu-system-x86_64 -daemonize -pidfile /run/qemu_vps2.pid -enable-kvm -cpu host -smp 2 -k en-gb -boot order=c -drive file=/dev/system/vps2,cache=none,format=raw,if=virtio,index=0,media=disk -m 1024 -name Archlinux -net nic,macaddr=52:54:AD:86:64:01,model=virtio -net vde,sock=/run/vde_switch-tap0.sock -monitor telnet:127.0.0.3:23,server,nowait -vnc 192.168.0.1:1 + 1529 ? Sl 3:07 /usr/bin/qemu-system-x86_64 -daemonize -pidfile /run/qemu_vps3.pid -enable-kvm -cpu host -smp 2 -k en-gb -boot order=c -drive file=/dev/system/vps3,cache=none,format=raw,if=virtio,index=0,media=disk -m 1024 -name FreeBSD_2 -net nic,macaddr=52:54:AD:86:64:02,model=virtio -net vde,sock=/run/vde_switch-tap0.sock -monitor telnet:127.0.0.4:23,server,nowait -vnc 192.168.0.1:2 + +The VMs were installed to LVM volumes on the host machine (hence the /dev/system/vpsN device names). Networking is over a Linux tap interface connected to a VDE2 virtual network switch. + +Currently working version of QEMU: qemu-headless 2.12.1-1 +Failing version of QEMU: qemu-headless-3.0.0-1 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1795527 b/results/classifier/gemma3:12b/kvm/1795527 new file mode 100644 index 00000000..451c5b4b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1795527 @@ -0,0 +1,68 @@ + +Malformed audio and video output stuttering after upgrade to QEMU 3.0 + +My host is an x86_64 Arch Linux OS with a recompiled 4.18.10 hardened kernel, running a few KVM guests with varying OSes and configurations managed through a Libvirt stack. + +Among these guests I have two Windows 10 VMs with VGA passthrough and PulseAudio-backed virtual audio devices. + +After upgrading to QEMU 3.0.0, both of the Win10 guests started showing corrupted audio output in the form of unnatural reproduction speed and occasional but consistently misplaced audio fragments originating from what seems to be a circular buffer wrapping over itself (misbehaviour detected by starting some games with known OSTs and dialogues: soundtracks sound accelerated and past dialogue lines start replaying middle-sentence until the next line starts playing). + +In addition, the video output of the malfunctioning VMs regularly stutters roughly twice a second for a fraction of a second (sync'ed with the suspected buffer wrapping and especially pronounced during not-pre-rendered cutscenes), toghether with mouse freezes that look like actual input misses more than simple lack of screen refreshes. 
+ + +The issue was succesfully reproduced without the managing stack, directly with the following command line, on the most capable Windows guest: + + QEMU_AUDIO_DRV=pa + QEMU_PA_SERVER=127.0.0.1 + /usr/bin/qemu-system-x86_64 -name guest=win10_gms,debug-threads=on \ + -machine pc-i440fx-3.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off \ + -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=123456789abc,kvm=off \ + -drive file=/usr/share/ovmf/x64/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \ + -drive file=/var/lib/libvirt/qemu/nvram/win10_gms_VARS.fd,if=pflash,format=raw,unit=1 \ + -m 5120 \ + -realtime mlock=off \ + -smp 3,sockets=1,cores=3,threads=1 \ + -uuid 39b56ee2-6bae-4009-9108-7be26d5d63ac \ + -display none \ + -no-user-config \ + -nodefaults \ + -rtc base=localtime,driftfix=slew \ + -global kvm-pit.lost_tick_policy=delay \ + -no-hpet \ + -no-shutdown \ + -global PIIX4_PM.disable_s3=1 \ + -global PIIX4_PM.disable_s4=1 \ + -boot strict=on \ + -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 \ + -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 \ + -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 \ + -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 \ + -device ahci,id=sata0,bus=pci.0,addr=0x9 \ + -drive file=/dev/vms/win10_gaming,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native \ + -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on \ + -drive file=/dev/sr0,format=raw,if=none,id=drive-sata0-0-0,media=cdrom,readonly=on \ + -device ide-cd,bus=sata0.0,drive=drive-sata0-0-0,id=sata0-0-0 \ + -device intel-hda,id=sound0,bus=pci.0,addr=0x3 \ + -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \ + -device usb-host,hostbus=2,hostaddr=3,id=hostdev0,bus=usb.0,port=1 \ + -device vfio-pci,host=01:00.0,id=hostdev1,bus=pci.0,addr=0x6 \ + -device vfio-pci,host=01:00.1,id=hostdev2,bus=pci.0,addr=0x7 \ + -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 \ + -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ + -msg timestamp=on + + +By "purposedly misconfiguring" the codepaths and replacing "pc-i440fx-3.0" with "pc-i440fx-2.11" (basically reverting the config changes I needed to do in order to update the domain definitions), the stuttering seems to disappear (or at least becomes negligible) and the audio output, despite becoming incredibly distorted, is consistent in every other way, with in-order dialogues and (perceived) correct tempo. + + +In order to exclude eventual misconfigurations in the host's audio processing pipeline, I proceeded to update the domain definition's codepath of another guest running Ubuntu 18.04 with a completely different hardware configuration (no video card passthrough and no PulseAudio backconnection, just a plain emulated VirtIO display and Spice audio device). + +The audio issue presented itself again in the form of slightly sped up audio playback from Internet videos interleaved with occasional "quenches" of playing speed. +Stutters are difficult to detect because of the poor refresh rate of the emulated VGA adapter, but I wouldn't be surprised to find them here too (actually, I *think* I sensed them, but I'm not sure enough to assess their existence). + +Once again, by reverting to the old 2.11 directive everything is back to normal. 
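+One low-cost experiment on the QEMU 3.0 side (a sketch, not a confirmed workaround) is to look at which legacy audio tunables the build understands and try adjusting the PulseAudio buffer/timer related ones:
+
+    qemu-system-x86_64 -audio-help    # prints the QEMU_AUDIO_*/QEMU_PA_* environment variables known to this build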
+ + + +Given the fact that no official upgrade directives regarding required sampling rate, period or sheduling adjustments were stated or handed-out to administrators, I decided to report this behaviour as a bug. +I hope this is the appropriate channel and that I didn't annoy anyone (this is my first proper bug report, please forgive me for any innaccuracy). \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1797332 b/results/classifier/gemma3:12b/kvm/1797332 new file mode 100644 index 00000000..72e92931 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1797332 @@ -0,0 +1,275 @@ + +qemu nested virtualization is not working with Ubuntu16.04 + Intel CPU + +# 1 What am I trying to do ? # + +I want to use `libvirt` `qemu/KVM` with **nested virtualization** like described +in [1] and [2]. +**But it does not work with Ubuntu16.04.** It worked some times ago, but not +anymore. + + +I want 2 levels of virtualization like this: + +* L0 – the bare metal host, running KVM on `Ubuntu 16.04` +* L1 – a `Ubuntu 16.04` VM running on L0; also called the "guest hypervisor" + — as it itself is capable of running KVM +* L2 – a `Ubuntu 16.04` VM running on L1, also called the "nested guest" + + +[1] https://docs.fedoraproject.org/en-US/quick-docs/using-nested-virtualization-in-kvm/ +[2] https://www.linux-kvm.org/page/Nested_Guests + + +My goal is to deploy an `OpenStack` environnement on top of VMs rather than on +bare metal hosts for convenience for a lab experiment. As a result, the +`OpenStack` nodes are L1 VMs. Compute nodes are L1 VMs as well and the VMs +created with `OpenStack` and wich are running on the compute nodes are L2 VMs. + + + + + + +# 2 What is my problem ? # + +I can **not** run my 2nd levels of virtualization in 16.04: + +* L0 is just fine: running `Ubuntu 16.04.5 LTS`, installed with the `.iso` image +* L1: I install `libvirt` + `KVM` on L0. I can run VMs like the `Ubuntu16.04` + cloud image on L0. +* L2: I install `libvirt` + `KVM` on L1 as well. But I **can not** run VMs on + L1: I get `kernel panic` or `general protection fault`. + + +**But if I do the same with Ubuntu18.04** (on the same hardware) instead of +`Ubuntu16.04`, it works without faults. +I don't change the configuration or `virt-install scripts` (other than using +the 18.04 .iso and cloud image). + + + + + + +# 3 My libvirt installation for Ubuntu16.04 # + +I install `libvir KVM` in both L0 and L1 using a custom repository [3] from +`OpenStack` team, because their version of libvirt in this repo is newer than +the one on Ubuntu 16.04 official repo and it match the version of `libvirt` +in Ubuntu 18.04. + +[3] https://wiki.ubuntu.com/OpenStack/CloudArchive + + + + + + +# 4 hardware and CPU # + +CPU is: +> Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz +> Intel virt is enable in the bios/uefi. + +The rest is standard HDD, standard I/O... + + + + + + +# 5 .iso and cloud image # + +I download .iso for L0 bare metal server and cloud image +for L1/L2 VMs from official repository: + +Ubuntu 16.04 + * http://releases.ubuntu.com/16.04/ + * https://cloud-images.ubuntu.com/releases/16.04/release/ + +Ubuntu 18.04 + * http://releases.ubuntu.com/bionic/ + * https://cloud-images.ubuntu.com/releases/18.04/release/ + + + + + + +# 6 Details # + +## Details about L0 Ubuntu 16.04 bare metal host ## +L0 is running `Ubuntu 16.04.5 LTS` installed with the .iso. 
+ + +**kernel** +``` +user@L0:~$ uname -a +Linux L0 4.4.0-137-generic #163-Ubuntu SMP Mon Sep 24 13:14:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux +``` + +**libvirt version** running on L0 +``` +user@L0:~$ virsh version +Compiled against library: libvirt 4.0.0 +Using library: libvirt 4.0.0 +Using API: QEMU 4.0.0 +Running hypervisor: QEMU 2.11.1 +``` + +**qemu version detail** +``` +ukvm2@kvm2:~$ qemu-system-x86_64 --version +QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.5~cloud0) +Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers +``` + +**KVM acceleration** +``` +user@L0:~$ kvm-ok +INFO: /dev/kvm exists +KVM acceleration can be used +``` + +**nested parameter** +``` +user@L0:~$ cat /sys/module/kvm_intel/parameters/nested +Y +``` + +**number of CPU** +``` +user@L0:~$ egrep -c '(vmx|svm)' /proc/cpuinfo +48 +``` + + + +## Details about a L1 Ubuntu 16.04 VM ## +A VM in L1 (which is running on L0) which is running `Ubuntu 16.04.5 LTS` +installed by a cloud image. + +**kernel** +``` +user@L1-VM:~$ uname -a +Linux L1 4.4.0-137-generic #163-Ubuntu SMP Mon Sep 24 13:14:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux +``` + +**libvirt version** running on the L1 VM +``` +user@L1-VM:~$ sudo virsh version +Compiled against library: libvirt 4.0.0 +Using library: libvirt 4.0.0 +Using API: QEMU 4.0.0 +Running hypervisor: QEMU 2.11.1 +``` + +**qemu version detail** +``` +user@L1-VM:~$ qemu-system-x86_64 --version +QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.5~cloud0) +Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers +``` + +**KVM acceleration** +``` +user@L1-VM:~$ kvm-ok +INFO: /dev/kvm exists +KVM acceleration can be used +``` + +**nested parameter** +``` +user@L1-VM:~$ cat /sys/module/kvm_intel/parameters/nested +Y +``` + +**number of CPU**, which are vCPU given by L0 to the L1 VM +I give 20 vCPU. +``` +user@L1-VM:~$ egrep -c '(vmx|svm)' /proc/cpuinfo +20 +``` + + + +## L1 VM virt-install script parameter ## +If you want to reproduce an L1 VM, I followed this [4]: + +``` +virt-install \ + --connect=qemu:///system \ + --name $VMName \ + --memory $RAM \ + --vcpus $VCPUS \ + --cpu host \ + --metadata description=$DESCRIPTION \ + --os-type linux \ + --os-variant ubuntu16.04 \ + --disk $DISK_PATH/$VMName.$DISK_FORMAT,size=$DISK_SIZE,bus=virtio \ + --disk $CFGIMG_PATH/config_$VMName.$DISK_FORMAT,device=cdrom \ + --network bridge=virbr0 \ + --graphics none \ + --console pty,target_type=serial \ + --hvm +``` + +[4] https://youth2009.org/post/kvm-with-ubuntu-cloud-image/ + + + +## Details about a L2 VM ## + +I want to create a L2 `Ubuntu 16.04.5 LTS` VM installed by a cloud image VM +within my L1 `KVM` VM. But whatever I do, my L2 VM crash before finishing to be +instantiated. I get `kernel panic` or `general protection fault`. + + +Here is the log of an L2 VM after the instanciation failed: +``` +user@L1-VM:~$ less /var/log/libvirt/qemu/VMNAME.log + +2018-10-11T07:40:45.837151Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/1 (label charserial0) +2018-10-11T07:40:45.844279Z qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10] +2018-10-11T07:40:45.848532Z qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.invpcid [bit 10] +``` + + +If you want to reproduce an L2 VM running on L1, follow [4]. + + +**However** a Cirros OS image can run on a L1 VM ! + + + + + + +# 7 Thoughts # +I think this is a bug in either `Ubuntu16.04` or `libvirt`. 
+All the information are here to reproduce the bug, I think. + + +If I do the same with `Ubuntu 18.04`, on the same hardware, following the same +steps but with Ubuntu 18.04 .iso and cloud image, it works. + +It works if: + +* L0 = Ubuntu18.04 (.iso) + qemu/KVM +* L1 = Ubuntu18.04 (cloud image) + qemu/KVM +* L2 = Ubuntu18.04 (cloud image) + + +It also works if: + +* L0 = Ubuntu18.04 (.iso) + qemu/KVM +* L1 = Ubuntu18.04 (cloud image) + qemu/KVM +* L2 = Ubuntu16.04 (cloud image) + + + + +Thank you for your time reading ! +-- +nico \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1798057 b/results/classifier/gemma3:12b/kvm/1798057 new file mode 100644 index 00000000..799b561e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1798057 @@ -0,0 +1,29 @@ + +Not able to start instances larger than 1 TB + +Specs: + +CPU: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz +OS: Ubuntu 18.04 AMD64 +QEMU: 1:2.11+dfsg-1ubuntu7.6 (Ubuntu Bionic Package) +Openstack: Openstack Queens (Ubuntu Bionic Package) +Libvirt-daemon: 4.0.0-1ubuntu8.5 +Seabios: 1.10.2-1ubuntu1 + + +The Problem: +We are not able to start instances, which have a memory size over 1 TB. +After starting the instance, they shortly lock up. Starting guests with a lower amount of RAM works +perfectly. We dealt with the same problem in the past with an older Qemu Version (2.5) by patching some source files according to this patch: + +https://git.centos.org/blob/rpms!!qemu-kvm.git/34b32196890e2c41b0aee042e600ba422f29db17/SOURCES!kvm-fix-guest-physical-bits-to-match-host-to-go-beyond-1.patch + + +I think we now have somewhat the same problem here, however the source base changed and I'am not able to find the corresponding snippet to patch this. + +Also, guests show a wrong physical address size which is probably the cause of the lock ups on high memory guests: +root@debug:~# grep physical /proc/cpuinfo +physical id : 0 +address sizes : 40 bits physical, 48 bits virtual + +Any way to fix this? \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1801674 b/results/classifier/gemma3:12b/kvm/1801674 new file mode 100644 index 00000000..d3b1ad82 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1801674 @@ -0,0 +1,53 @@ + +Ubuntu16.04 LTS - PCI Pass Through in Ubuntu KVM 16.04.x must use QEMU with DDW support from PPA (Documentation) + +== Comment: #0 - Siraj M. Ismail <email address hidden> - 2016-10-05 11:35:38 == +---Problem Description--- + +Customer running PCI pass through with 16.04.1 KVM and with the stock QEMU packages will hit a DI if they do not update to the QEMU packages with DDW support. The packages are in PPA for customers to download, and test has verified the fix. The details are in Bugzilla 144123. + +---uname output--- +Ubuntu 16.04.1 with 4.4.0 Kernel + +Machine Type = 8247-22L + +---Debugger--- +A debugger is not configured + +---Steps to Reproduce--- + Run guest with adapters assigned via PCI PT, The guest will start having DI issues as soon as I/O started on the Pass Through adapter. 
+ +Contact Information = <email address hidden> + +---Patches Installed--- +QEMU was updated from 2.5 version to 2.6.1 with patches for DDW support, can be found at PPA location : +https://launchpad.net/~ibmpackages/+archive/ubuntu/ddw + +Userspace tool common name: QEMU + +Documentation version: N/A + +The userspace tool has the following bit modes: N/A + +Userspace rpm: N/A + +Userspace tool obtained from project website: na + +*Additional Instructions for <email address hidden>: +-Post a private note with access information to the machine that the bug is occuring on. +-Attach ltrace and strace of userspace application. + +== Comment: #5 - Leonardo Augusto Guimaraes Garcia <email address hidden> - 2017-04-25 16:23:57 == +I would rephrase to: + +------------------------------- + +Under KVM recommendations + +PCI passthrough recommendations + +If you are running PCI passthrough with 16.04.x KVM and with the stock QEMU package, update to the QEMU package that have DDW support. If you do not update the package, you will hit data integrity issues as soon as I/O is started on the adapter. + +The package is available at: https://launchpad.net/~ibmpackages/+archive/ubuntu/ddw + +------------------------------- \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1806040 b/results/classifier/gemma3:12b/kvm/1806040 new file mode 100644 index 00000000..531c2bcc --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1806040 @@ -0,0 +1,16 @@ + +Nested VMX virtualization error on last Qemu versions + +Recently updated Qemu on a Sony VAIO sve14ag18m with Ubuntu Bionic 4.15.0-38 from Git + +After launching a few VMs, noticed that i could not create Snapshot due to this error: +"Nested VMX virtualization does not support live migration yet" + +I've created a new Windows 7 X64 machine with this compilation of Qemu and the problem persisted, so it's not because of the old machines. + +I launch Qemu with this params (I use them for malware analisys adn re...): +qemu-system-x86_64 -monitor stdio -display none -m 4096 -smp cpus=4 -usbdevice tablet -drive file=VM.img,index=0,media=disk,format=qcow2,cache=unsafe -net nic,macaddr="...." -net bridge,br=br0 -cpu host,-hypervisor,kvm=off -vnc 127.0.0.1:0 -enable-kvm + + +Deleting the changes made on this commit solved the problem, but I dont have idea what is this for, so... xDD +https://github.com/qemu/qemu/commit/d98f26073bebddcd3da0ba1b86c3a34e840c0fb8 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1811862 b/results/classifier/gemma3:12b/kvm/1811862 new file mode 100644 index 00000000..dea9ce4b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1811862 @@ -0,0 +1,39 @@ + +microcode version stays 0x1 even if -cpu host is used + +The microcode version of my host cpu has the following version: + +grep microcode /proc/cpuinfo | head -1 +microcode : 0x3d + +while trying to run ESXi in an nested VM, the boot bailed out with +error message that at least microcode version 0x19 is needed. It +seems they have introduced such a check on certain CPU types. + +The VM in question is using the "host-passthrough" option in libvirt +and the qemu command line reads as this: + +21172 ? 
Sl 0:09 /usr/libexec/qemu-kvm -name guest=hpe-env-client1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-33-hpe-env-client1/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu host <rest stripped> + +Running a regular Linux VM with `host-passthrough` shows that the +microcode version is still reported as 0x1. + +Within the VM: + +[root@hpe-env-client1 ~]# cat /proc/cpuinfo +processor : 0 +vendor_id : GenuineIntel +cpu family : 6 +model : 63 +model name : Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz +stepping : 2 +microcode : 0x1 +cpu MHz : 2397.222 + + +My impression is qemu should copy the hosts microcode version in this case? + +Running Qemu von RHEl8 beta here. + +[root@3parserver ~]# /usr/libexec/qemu-kvm --version +QEMU emulator version 2.12.0 (qemu-kvm-2.12.0-41.el8+2104+3e32e6f8) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1813165 b/results/classifier/gemma3:12b/kvm/1813165 new file mode 100644 index 00000000..a47f5941 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1813165 @@ -0,0 +1,36 @@ + +KVM internal error. Suberror: 1 emulation failure + +Hello Devs. + +Having problems getting VM to run with qemu 3.1.0. + +2019-01-24 13:46:08.648+0000: starting up libvirt version: 4.10.0, qemu version: 3.1.0, kernel: 4.14.94, hostname: one.lordcritical.de +LC_ALL=C PATH=/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin HOME=/root USER=root QEMU_AUDIO_DRV=none /usr/bin/kvm -name guest=one-266,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-one-266/master-key.aes -machine pc-i440fx-2.9,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client-IBRS,ss=on,hypervisor=on,tsc_adjust=on,clflushopt=on,ssbd=on,xsaves=on,pdpe1gb=on -m 1024 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid b219b45d-a2f0-4128-a948-8673a7abf968 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=21,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/266/disk.0,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -drive file=/var/lib/one//datastores/0/266/disk.1,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=23,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:00:76:69:85,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 0.0.0.0:266 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on +char device redirected to /dev/pts/1 (label charserial0) +KVM internal error. 
Suberror: 1 +emulation failure +EAX=00000001 EBX=000f7c2c ECX=00000001 EDX=00000001 +ESI=00006a26 EDI=3ffbdc48 EBP=000069e6 ESP=000a8000 +EIP=000fd057 EFL=00010016 [----AP-] CPL=0 II=0 A20=1 SMM=1 HLT=0 +ES =0010 00000000 ffffffff 00c09300 +CS =0000 00000000 00000fff 00809b00 +SS =0010 00000000 ffffffff 00c09300 +DS =0010 00000000 ffffffff 00c09300 +FS =0010 00000000 ffffffff 00c09300 +GS =0010 00000000 ffffffff 00c09300 +LDT=0000 00000000 0000ffff 00008200 +TR =0000 00000000 0000ffff 00008b00 +GDT= 10387cfe 0000fe6c +IDT= 0010387c 00003810 +CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000fffecffc DR7=000000000e1e0400 +EFER=0000000000000000 +Code=cb 66 ba 4d d0 0f 00 e9 c8 fe bc 00 80 0a 00 e8 31 3a ff ff <0f> aa fa fc 66 ba 66 d0 0f 00 e9 b1 fe f3 90 f0 0f ba 2d ac 3b 0f 00 00 72 f3 8b 25 a8 3b +2019-01-24T13:47:39.383366Z kvm: terminating on signal 15 from pid 2708 (/usr/sbin/libvirtd) + +Someone has an idea whats going wrong here? + +thanks and cheers +t. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1813940 b/results/classifier/gemma3:12b/kvm/1813940 new file mode 100644 index 00000000..14296a44 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1813940 @@ -0,0 +1,47 @@ + +kvm_mem_ioeventfd_add: error adding ioeventfd: No space left on device + +Latest QEMU master fails to run with too many MMIO devices specified. + +After patch 3ac7d43a6fb [1] QEMU just prints an error message and exits. +> kvm_mem_ioeventfd_add: error adding ioeventfd: No space left on device + +This is reproducible e.g. with the following setup: + +qemu-3.1.50-dirty \ + -machine pc-i440fx-2.7,accel=kvm \ + -cpu host -m 4096 \ + -smp 2,sockets=2,cores=1,threads=1 \ + -drive file=freebsd_vm_1.qcow2,format=qcow2,if=none,id=bootdr \ + -device ide-hd,drive=bootdr,bootindex=0 \ + -device virtio-scsi-pci,id=vc0 \ + -device virtio-scsi-pci,id=vc1 \ + -device virtio-scsi-pci,id=vc2 \ + -device virtio-scsi-pci,id=vc3 \ + +Running with just 3 Virtio-SCSI controllers seems to work fine, adding more than that causes the error above. Note that this is not Virtio-SCSI specific. I've also reproduced this without any Virtio devices whatsoever. + +strace shows the following ioctl chain over and over: + +145787 ioctl(11, KVM_UNREGISTER_COALESCED_MMIO, 0x7f60a4985410) = 0 +145787 ioctl(11, KVM_UNREGISTER_COALESCED_MMIO, 0x7f60a4985410) = 0 +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = 0 +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = -1 ENOSPC (No space left on device) +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = -1 ENOSPC (No space left on device) +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = -1 ENOSPC (No space left on device) +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = -1 ENOSPC (No space left on device) +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = -1 ENOSPC (No space left on device) +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = -1 ENOSPC (No space left on device) +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = -1 ENOSPC (No space left on device) +145787 ioctl(11, KVM_REGISTER_COALESCED_MMIO, 0x7f60a49853b0) = -1 ENOSPC (No space left on device) + +Which suggests there's some kind of MMIO region leak. 
+ +[1] +commit 3ac7d43a6fbb5d4a3d01fc9a055c218030af3727 +Author: Paolo Bonzini <email address hidden> +AuthorDate: Wed Nov 28 17:28:45 2018 +0100 +Commit: Paolo Bonzini <email address hidden> +CommitDate: Fri Jan 11 13:57:24 2019 +0100 + + memory: update coalesced_range on transaction_commit \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1814 b/results/classifier/gemma3:12b/kvm/1814 new file mode 100644 index 00000000..d052049c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1814 @@ -0,0 +1,17 @@ + +`-M none` breaks on ARM64 platforms with max IPA size < 40 +Description of problem: +QEMU fails to initialize the KVM type properly when `-M none` is used. On ARM64, the KVM type sets the IPA size. Without that setting, the kernel defaults to 40 bits. This fails on machines which cannot support that IPA size, such as Apple M1 machines. + +This presumably happens because `virt_machine_class_init()` in `hw/arm/virt.c` never gets called in that case, which means it doesn't initialize `mc->kvm_type` to the correct callback to do the IPA check. + +Since the max IPA size is a property of the host CPU and must be queried properly for things to work at all, this logic should be invoked unconditionally for all machines, even `none`. + +This is breaking libvirt on Apple M1/M2 systems, since it uses `-M none,accel=kvm` for its KVM test, and when it fails it considers KVM support unavailable. See: https://gitlab.com/libvirt/libvirt/-/issues/365 +Steps to reproduce: +On any ARM64 machine: + +1. strace -e ioctl qemu-system-aarch64 -M none,accel=kvm 2>&1 | grep -C1 CREATE_VM +2. strace -e ioctl qemu-system-aarch64 -M virt,accel=kvm 2>&1 | grep -C1 CREATE_VM + +Observe that the first command line does not issue a `KVM_CAP_ARM_VM_IPA_SIZE` and does not set the machine type argument to `KVM_CREATE_VM`, while the second one does. On machines with <40 bit max IPA, the first invocation would fail to initialize KVM. diff --git a/results/classifier/gemma3:12b/kvm/1815143 b/results/classifier/gemma3:12b/kvm/1815143 new file mode 100644 index 00000000..60cb43bc --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1815143 @@ -0,0 +1,86 @@ + + qemu-system-s390x fails when running without kvm: fatal: EXECUTE on instruction prefix 0x7f4 not implemented + +just wondering if TCG implements instruction prefix 0x7f4 + +server3:~ # zcat /boot/vmlinux-4.4.162-94.72-default.gz > /tmp/kernel + +--> starting qemu with kvm enabled works fine +server3:~ # qemu-system-s390x -nographic -kernel /tmp/kernel -initrd /boot/initrd -enable-kvm +Initializing cgroup subsys cpuset +Initializing cgroup subsys cpu +Initializing cgroup subsys cpuacct +Linux version 4.4.162-94.72-default (geeko@buildhost) (gcc version 4.8.5 (SUSE Linux) ) #1 SMP Mon Nov 12 18:57:45 UTC 2018 (9de753f) +setup.289988: Linux is running under KVM in 64-bit mode +setup.b050d0: The maximum memory size is 128MB +numa.196305: NUMA mode: plain +Write protected kernel read-only data: 8692k +[...] 
+ +--> but starting qemu without kvm enabled works fails +server3:~ # qemu-system-s390x -nographic -kernel /tmp/kernel -initrd /boot/initrd +qemu: fatal: EXECUTE on instruction prefix 0x7f4 not implemented + +PSW=mask 0000000180000000 addr 000000000067ed6e cc 00 +R00=0000000080000000 R01=000000000067ed76 R02=0000000000000000 R03=0000000000000000 +R04=0000000000111548 R05=0000000000000000 R06=0000000000000000 R07=0000000000000000 +R08=00000000000100f6 R09=0000000000000000 R10=0000000000000000 R11=0000000000000000 +R12=0000000000ae2000 R13=0000000000681978 R14=0000000000111548 R15=000000000000bef0 +F00=0000000000000000 F01=0000000000000000 F02=0000000000000000 F03=0000000000000000 +F04=0000000000000000 F05=0000000000000000 F06=0000000000000000 F07=0000000000000000 +F08=0000000000000000 F09=0000000000000000 F10=0000000000000000 F11=0000000000000000 +F12=0000000000000000 F13=0000000000000000 F14=0000000000000000 F15=0000000000000000 +V00=00000000000000000000000000000000 V01=00000000000000000000000000000000 +V02=00000000000000000000000000000000 V03=00000000000000000000000000000000 +V04=00000000000000000000000000000000 V05=00000000000000000000000000000000 +V06=00000000000000000000000000000000 V07=00000000000000000000000000000000 +V08=00000000000000000000000000000000 V09=00000000000000000000000000000000 +V10=00000000000000000000000000000000 V11=00000000000000000000000000000000 +V12=00000000000000000000000000000000 V13=00000000000000000000000000000000 +V14=00000000000000000000000000000000 V15=00000000000000000000000000000000 +V16=00000000000000000000000000000000 V17=00000000000000000000000000000000 +V18=00000000000000000000000000000000 V19=00000000000000000000000000000000 +V20=00000000000000000000000000000000 V21=00000000000000000000000000000000 +V22=00000000000000000000000000000000 V23=00000000000000000000000000000000 +V24=00000000000000000000000000000000 V25=00000000000000000000000000000000 +V26=00000000000000000000000000000000 V27=00000000000000000000000000000000 +V28=00000000000000000000000000000000 V29=00000000000000000000000000000000 +V30=00000000000000000000000000000000 V31=00000000000000000000000000000000 +C00=0000000000000000 C01=0000000000000000 C02=0000000000000000 C03=0000000000000000 +C04=0000000000000000 C05=0000000000000000 C06=0000000000000000 C07=0000000000000000 +C08=0000000000000000 C09=0000000000000000 C10=0000000000000000 C11=0000000000000000 +C12=0000000000000000 C13=0000000000000000 C14=0000000000000000 C15=0000000000000000 + +Aborted (core dumped) + + +server3:~ # lscpu +Architecture: s390x +CPU op-mode(s): 32-bit, 64-bit +Byte Order: Big Endian +CPU(s): 2 +On-line CPU(s) list: 0,1 +Thread(s) per core: 1 +Core(s) per socket: 1 +Socket(s) per book: 1 +Book(s) per drawer: 1 +Drawer(s): 2 +NUMA node(s): 1 +Vendor ID: IBM/S390 +Machine type: 2964 +BogoMIPS: 20325.00 +Hypervisor: z/VM 6.4.0 +Hypervisor vendor: IBM +Virtualization type: full +Dispatching mode: horizontal +L1d cache: 128K +L1i cache: 96K +L2d cache: 2048K +L2i cache: 2048K +L3 cache: 65536K +L4 cache: 491520K +NUMA node0 CPU(s): 0-63 +Flags: esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx sie +server3:~ # uname -a +Linux server3 4.4.126-94.22-default #1 SMP Wed Apr 11 07:45:03 UTC 2018 (9649989) s390x s390x s390x GNU/Linux +server3:~ # \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1815889 b/results/classifier/gemma3:12b/kvm/1815889 new file mode 100644 index 00000000..c237da61 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1815889 @@ -0,0 +1,42 @@ + 
+qemu-system-x86_64 crashed with signal 31 in __pthread_setaffinity_new() + +Unable to launch Default Fedora 29 images in gnome-boxes + +ProblemType: Crash +DistroRelease: Ubuntu 19.04 +Package: qemu-system-x86 1:3.1+dfsg-2ubuntu1 +ProcVersionSignature: Ubuntu 4.19.0-12.13-generic 4.19.18 +Uname: Linux 4.19.0-12-generic x86_64 +ApportVersion: 2.20.10-0ubuntu20 +Architecture: amd64 +Date: Thu Feb 14 11:00:45 2019 +ExecutablePath: /usr/bin/qemu-system-x86_64 +KvmCmdLine: COMMAND STAT EUID RUID PID PPID %CPU COMMAND +MachineType: Dell Inc. Precision T3610 +ProcEnviron: PATH=(custom, user) +ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-4.19.0-12-generic root=UUID=939b509b-d627-4642-a655-979b44972d17 ro splash quiet vt.handoff=1 +Signal: 31 +SourcePackage: qemu +StacktraceTop: + __pthread_setaffinity_new (th=<optimized out>, cpusetsize=128, cpuset=0x7f5771fbf680) at ../sysdeps/unix/sysv/linux/pthread_setaffinity.c:34 + () at /usr/lib/x86_64-linux-gnu/dri/radeonsi_dri.so + () at /usr/lib/x86_64-linux-gnu/dri/radeonsi_dri.so + start_thread (arg=<optimized out>) at pthread_create.c:486 + clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 +Title: qemu-system-x86_64 crashed with signal 31 in __pthread_setaffinity_new() +UpgradeStatus: Upgraded to disco on 2018-11-14 (91 days ago) +UserGroups: adm cdrom dip lpadmin plugdev sambashare sudo video +dmi.bios.date: 11/14/2018 +dmi.bios.vendor: Dell Inc. +dmi.bios.version: A18 +dmi.board.name: 09M8Y8 +dmi.board.vendor: Dell Inc. +dmi.board.version: A01 +dmi.chassis.type: 7 +dmi.chassis.vendor: Dell Inc. +dmi.modalias: dmi:bvnDellInc.:bvrA18:bd11/14/2018:svnDellInc.:pnPrecisionT3610:pvr00:rvnDellInc.:rn09M8Y8:rvrA01:cvnDellInc.:ct7:cvr: +dmi.product.name: Precision T3610 +dmi.product.sku: 05D2 +dmi.product.version: 00 +dmi.sys.vendor: Dell Inc. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1821771 b/results/classifier/gemma3:12b/kvm/1821771 new file mode 100644 index 00000000..435d3b34 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1821771 @@ -0,0 +1,71 @@ + +KVM guest does not reflect numa distances configured through qemu + +KVM guest does not reflect numa distances configured through qemu + +Env: +Host/Guest Kernel: 5.1.0-rc1-g72999bbdc +qemu : 3.1.90 (v2.8.0-rc0-18614-g278aebafa0-dirty) [repo: https://github.com/dgibson/qemu; branch:ppc-for-4.1 ] +# git log -1 +commit 278aebafa02f699857ca082d966bcbc05dc9bffb (HEAD -> ppc-for-4.1) +Author: Jafar Abdi <email address hidden> +Date: Sat Mar 23 17:26:36 2019 +0300 + + tests/libqos: fix usage of bool in pci-spapr.c + + Clean up wrong usage of FALSE and TRUE in places that use "bool" from stdbool.h. + + FALSE and TRUE (with capital letters) are the constants defined by glib for + being used with the "gboolean" type of glib. But some parts of the code also use + TRUE and FALSE for variables that are declared as "bool" (the type from <stdbool.h>). + + Signed-off-by: Jafar Abdi <email address hidden> + Reviewed-by: Eric Blake <email address hidden> + Message-Id: <email address hidden> + Signed-off-by: David Gibson <email address hidden> + +# libvirtd -V +libvirtd (libvirt) 5.1.0 + + + +Steps to reproduce: +1. Boot attached guest xml with predefined numa distance. 
+ +qemu-commandline: +/usr/share/avocado-plugins-vt/bin/install_root/bin/qemu-system-ppc64 -name guest=vm2,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-15-vm2/master-key.aes -machine pseries-4.0,accel=kvm,usb=off,dump-guest-core=off -m 4096 -realtime mlock=off -smp 4,sockets=1,cores=4,threads=1 -numa node,nodeid=0,cpus=0-1,mem=2048 -numa node,nodeid=1,cpus=2-3,mem=2048 -uuid 1a870f1d-269a-4a8c-84bc-2b5bda72823a -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=28,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -kernel /home/kvmci/linux/vmlinux -append root=/dev/sda2 rw console=tty0 console=ttyS0,115200 init=/sbin/init initcall_debug selinux=0 -device qemu-xhci,id=usb,bus=pci.0,addr=0x3 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -drive file=/var/lib/avocado/data/avocado-vt/images/jeos-27-ppc64le.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:f4:f5:f6,bus=pci.0,addr=0x1 -chardev pty,id=charserial0 -device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on + + +2. Check numa distance and other details inside guest +# numactl -H +available: 2 nodes (0-1) +node 0 cpus: 0 1 +node 0 size: 2025 MB +node 0 free: 1837 MB +node 1 cpus: 2 3 +node 1 size: 2045 MB +node 1 free: 1646 MB +node distances: +node 0 1 + 0: 10 40 -----------------------------------NOK + 1: 40 10 + +# lsprop /proc/device-tree/cpus/PowerPC\,POWER9\@*/ibm\,associativity +/proc/device-tree/cpus/PowerPC,POWER8@0/ibm,associativity + 00000005 00000000 00000000 00000000 00000000 00000000 +/proc/device-tree/cpus/PowerPC,POWER8@10/ibm,associativity + 00000005 00000000 00000000 00000000 00000001 00000010 +/proc/device-tree/cpus/PowerPC,POWER8@18/ibm,associativity + 00000005 00000000 00000000 00000000 00000001 00000018 +/proc/device-tree/cpus/PowerPC,POWER8@8/ibm,associativity + 00000005 00000000 00000000 00000000 00000000 00000008 + +# lsprop /proc/device-tree/rtas/ibm,associativity-reference-points +/proc/device-tree/rtas/ibm,associativity-reference-points + 00000004 00000004 + +Expected numa distances: +node distances: +node 0 1 + 0: 10 20 + 1: 20 10 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1826422 b/results/classifier/gemma3:12b/kvm/1826422 new file mode 100644 index 00000000..250a26f8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1826422 @@ -0,0 +1,50 @@ + +Regression: QEMU 4.0 hangs the host (*bisect included*) + +The commit b2fc91db84470a78f8e93f5b5f913c17188792c8 seemingly introduced a regression on my system. + +When I start QEMU, the guest and the host hang (I need a hard reset to get back to a working system), before anything shows on the guest. + +I use QEMU with GPU passthrough (which worked perfectly until the commit above). 
This is the command I use: + +``` +/path/to/qemu-system-x86_64 + -drive if=pflash,format=raw,readonly,file=/path/to/OVMF_CODE.fd + -drive if=pflash,format=raw,file=/tmp/OVMF_VARS.fd.tmp + -enable-kvm + -machine q35,accel=kvm,mem-merge=off + -cpu host,kvm=off,hv_vendor_id=vgaptrocks,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time + -smp 4,cores=4,sockets=1,threads=1 + -m 10240 + -vga none + -rtc base=localtime + -serial none + -parallel none + -usb + -device usb-tablet + -device vfio-pci,host=01:00.0,multifunction=on + -device vfio-pci,host=01:00.1 + -device usb-host,vendorid=<vid>,productid=<pid> + -device usb-host,vendorid=<vid>,productid=<pid> + -device usb-host,vendorid=<vid>,productid=<pid> + -device usb-host,vendorid=<vid>,productid=<pid> + -device usb-host,vendorid=<vid>,productid=<pid> + -device usb-host,vendorid=<vid>,productid=<pid> + -device virtio-scsi-pci,id=scsi + -drive file=/path/to/guest.img,id=hdd1,format=qcow2,if=none,cache=writeback + -device scsi-hd,drive=hdd1 + -net nic,model=virtio + -net user,smb=/path/to/shared +``` + +If I run QEMU without GPU passthrough, it runs fine. + +Some details about my system: + +- O/S: Mint 19.1 x86-64 (it's based on Ubuntu 18.04) +- Kernel: 4.15 +- `configure` options: `--target-list=x86_64-softmmu --enable-gtk --enable-spice --audio-drv-list=pa` +- EDK2 version: 1a734ed85fda71630c795832e6d24ea560caf739 (20/Apr/2019) +- CPU: i7-6700k +- Motherboard: ASRock Z170 Gaming-ITX/ac +- VGA: Gigabyte GTX 960 Mini-ITX \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1830864 b/results/classifier/gemma3:12b/kvm/1830864 new file mode 100644 index 00000000..17970748 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1830864 @@ -0,0 +1,75 @@ + +Assertion `no_aa32 || ({ ARMCPU *cpu_ = (cpu); isar_feature_arm_div(&cpu_->isar); })' failed + +The following assertion: + + assert(no_aa32 || cpu_isar_feature(arm_div, cpu)); + +introduced in commit 0f8d06f16c9d ("target/arm: Conditionalize some +asserts on aarch32 support", 2018-11-02), fails for me. I intended to +launch a 32-bit ARM guest (with KVM acceleration) on my AArch64 host +(APM Mustang A3). 
+ +Libvirt generated the following QEMU command line: + +> LC_ALL=C \ +> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \ +> QEMU_AUDIO_DRV=none \ +> /opt/qemu-installed-optimized/bin/qemu-system-aarch64 \ +> -name guest=f28.32bit,debug-threads=on \ +> -S \ +> -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-f28.32bit/master-key.aes \ +> -machine virt-4.1,accel=kvm,usb=off,dump-guest-core=off,gic-version=2 \ +> -cpu host,aarch64=off \ +> -drive file=/root/QEMU_EFI.fd.padded,if=pflash,format=raw,unit=0,readonly=on \ +> -drive file=/var/lib/libvirt/qemu/nvram/f28.32bit_VARS.fd,if=pflash,format=raw,unit=1 \ +> -m 8192 \ +> -realtime mlock=off \ +> -smp 8,sockets=8,cores=1,threads=1 \ +> -uuid d525042e-1b37-4058-86ca-c6a2086e8485 \ +> -no-user-config \ +> -nodefaults \ +> -chardev socket,id=charmonitor,fd=27,server,nowait \ +> -mon chardev=charmonitor,id=monitor,mode=control \ +> -rtc base=utc \ +> -no-shutdown \ +> -boot strict=on \ +> -device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 \ +> -device pcie-root-port,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1 \ +> -device pcie-root-port,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2 \ +> -device pcie-root-port,port=0xb,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x3 \ +> -device pcie-root-port,port=0xc,chassis=5,id=pci.5,bus=pcie.0,addr=0x1.0x4 \ +> -device pcie-root-port,port=0xd,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x5 \ +> -device qemu-xhci,id=usb,bus=pci.1,addr=0x0 \ +> -device virtio-scsi-pci,id=scsi0,bus=pci.2,addr=0x0 \ +> -device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 \ +> -drive file=/var/lib/libvirt/images/f28.32bit.root.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0,werror=enospc,cache=writeback,discard=unmap \ +> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1,write-cache=on \ +> -drive file=/var/lib/libvirt/images/f28.32bit.home.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-1,werror=enospc,cache=writeback,discard=unmap \ +> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,write-cache=on \ +> -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 \ +> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:6f:d1:c8,bus=pci.4,addr=0x0,romfile= \ +> -chardev pty,id=charserial0 \ +> -serial chardev:charserial0 \ +> -chardev socket,id=charchannel0,fd=31,server,nowait \ +> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ +> -device usb-tablet,id=input0,bus=usb.0,port=1 \ +> -device usb-kbd,id=input1,bus=usb.0,port=2 \ +> -vnc 127.0.0.1:0 \ +> -device virtio-gpu-pci,id=video0,max_outputs=1,bus=pci.5,addr=0x0 \ +> -object rng-random,id=objrng0,filename=/dev/urandom \ +> -device virtio-rng-pci,rng=objrng0,id=rng0,max-bytes=1048576,period=1000,bus=pci.6,addr=0x0 \ +> -msg timestamp=on + +and then I got: + +> qemu-system-aarch64: /root/src/upstream/qemu/target/arm/cpu.c:986: +> arm_cpu_realizefn: Assertion `no_aa32 || ({ ARMCPU *cpu_ = (cpu); +> isar_feature_arm_div(&cpu_->isar); })' failed. + +QEMU was built at commit 8dc7fd56dd4f ("Merge remote-tracking branch +'remotes/philmd-gitlab/tags/fw_cfg-20190523-pull-request' into staging", +2019-05-23). + +(Originally reported on the mailing list in the following thread: +<http://<email address hidden>>.) 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1831225 b/results/classifier/gemma3:12b/kvm/1831225 new file mode 100644 index 00000000..9ee26a3c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1831225 @@ -0,0 +1,110 @@ + +guest migration 100% cpu freeze bug + +# Investigate migration cpu hog(100%) bug + +I have some issues when migrating from kernel 4.14.63 running qemu 2.11.2 to kernel 4.19.43 running qemu 2.11.2. +The hypervisors are running on debian jessie with libvirt v5.3.0. +Linux, libvirt and qemu are all custom compiled. + +I migrated around 10.000 vms and every once in a while a vm is stuck at 100% cpu after what we can see right now is that the target hypervisor runs on linux 4.19.53. This happened with 4 vms so far. It is not that easy to debug, we found this out pretty quickly because we are running monitoring on frozen vms after migrations. + +Last year we were having the same "kind of" bug https://bugs.launchpad.net/qemu/+bug/1775555 when trying to upgrade qemu 2.6 to 2.11. +This bug was fixed after applying the following patch: http://lists.nongnu.org/archive/html/qemu-devel/2018-04/msg00820.html + +This patch is still applied as you can see because of the available pre_load var on the kvmclock_vmsd struct: +(gdb) ptype kvmclock_vmsd +type = const struct VMStateDescription { + const char *name; + int unmigratable; + int version_id; + int minimum_version_id; + int minimum_version_id_old; + MigrationPriority priority; + LoadStateHandler *load_state_old; + int (*pre_load)(void *); + int (*post_load)(void *, int); + int (*pre_save)(void *); + _Bool (*needed)(void *); + VMStateField *fields; + const VMStateDescription **subsections; +} + +I attached gdb to a vcpu thread of one stuck vm, and a bt showed the following info: +Thread 4 (Thread 0x7f3a431a4700 (LWP 37799)): +#0 0x00007f3a576f5017 in ioctl () at ../sysdeps/unix/syscall-template.S:84 +#1 0x000055d84d15de57 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55d84fca78d0, type=type@entry=44672) at /home/dbosschieter/src/qemu-pkg/src/accel/kvm/kvm-all.c:2050 +#2 0x000055d84d15dfc6 in kvm_cpu_exec (cpu=cpu@entry=0x55d84fca78d0) at /home/dbosschieter/src/qemu-pkg/src/accel/kvm/kvm-all.c:1887 +#3 0x000055d84d13ab64 in qemu_kvm_cpu_thread_fn (arg=0x55d84fca78d0) at /home/dbosschieter/src/qemu-pkg/src/cpus.c:1136 +#4 0x00007f3a579ba4a4 in start_thread (arg=0x7f3a431a4700) at pthread_create.c:456 +#5 0x00007f3a576fcd0f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97 + +Thread 3 (Thread 0x7f3a439a5700 (LWP 37798)): +#0 0x00007f3a576f5017 in ioctl () at ../sysdeps/unix/syscall-template.S:84 +#1 0x000055d84d15de57 in kvm_vcpu_ioctl (cpu=cpu@entry=0x55d84fc5cbb0, type=type@entry=44672) at /home/dbosschieter/src/qemu-pkg/src/accel/kvm/kvm-all.c:2050 +#2 0x000055d84d15dfc6 in kvm_cpu_exec (cpu=cpu@entry=0x55d84fc5cbb0) at /home/dbosschieter/src/qemu-pkg/src/accel/kvm/kvm-all.c:1887 +#3 0x000055d84d13ab64 in qemu_kvm_cpu_thread_fn (arg=0x55d84fc5cbb0) at /home/dbosschieter/src/qemu-pkg/src/cpus.c:1136 +#4 0x00007f3a579ba4a4 in start_thread (arg=0x7f3a439a5700) at pthread_create.c:456 +#5 0x00007f3a576fcd0f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97 + +The ioctl call is a ioctl(18, KVM_RUN and it looks like it is looping inside the vm itself. + +I saved the state of the VM (with `virsh save`) after I found it was hanging on its vcpu threads. Then I restored this vm on a test environment running the same kernel, QEMU and libvirt version). 
After the restore the VM still was haning at 100% cpu usage on all the vcpus. +I tried to use the perf kvm guest option to trace the guest vm with a copy of the kernel, modules and kallsyms files from inside the guest vm and I got to the following perf stat: + + Event Total %Total CurAvg/s + kvm_entry 5198993 23.1 277007 + kvm_exit 5198976 23.1 277006 + kvm_apic 1732103 7.7 92289 + kvm_msr 1732101 7.7 92289 + kvm_inj_virq 1731904 7.7 92278 + kvm_eoi 1731900 7.7 92278 + kvm_apic_accept_irq 1731900 7.7 92278 + kvm_hv_timer_state 1731780 7.7 92274 + kvm_pv_eoi 1731701 7.7 92267 + kvm_ple_window 36 0.0 2 + Total 22521394 1199967 + +We tried to run the crash tool against a dump of guest vm memory and that gave us the following backtrace: +crash> bt +PID: 0 TASK: ffffffff81610040 CPU: 0 COMMAND: "swapper/0" + [exception RIP: native_read_tsc+2] + RIP: ffffffff810146a9 RSP: ffff88003fc03df0 RFLAGS: 00000046 + RAX: 000000008762c0fa RBX: ffff88003fc13680 RCX: 0000000000000001 + RDX: 0000000000fe4871 RSI: 0000000000000000 RDI: ffff88003fc13603 + RBP: 000000000003052c R8: 0000000000000200 R9: ffffffff8169b180 + R10: 0000000000000020 R11: 0000000000000005 R12: 006a33290b40455c + R13: 00000000df1fd292 R14: 000000002ca284ff R15: 00fe485f3febe21a + CS: 0010 SS: 0018 + #0 [ffff88003fc03df0] pvclock_clocksource_read at ffffffff8102cbb3 + #1 [ffff88003fc03e40] kvm_clock_read at ffffffff8102c2c9 + #2 [ffff88003fc03e50] timekeeping_get_ns at ffffffff810691b0 + #3 [ffff88003fc03e60] ktime_get at ffffffff810695c8 + #4 [ffff88003fc03e90] sched_rt_period_timer at ffffffff8103e4f5 + #5 [ffff88003fc03ee0] __run_hrtimer at ffffffff810652d3 + #6 [ffff88003fc03f20] hrtimer_interrupt at ffffffff81065abd + #7 [ffff88003fc03f90] smp_apic_timer_interrupt at ffffffff81024ba8 + #8 [ffff88003fc03fb0] apic_timer_interrupt at ffffffff813587e2 +--- <IRQ stack> --- + #9 [ffffffff81601e98] apic_timer_interrupt at ffffffff813587e2 + [exception RIP: native_safe_halt+2] + RIP: ffffffff8102c360 RSP: ffffffff81601f40 RFLAGS: 00010246 + RAX: 0000000000000000 RBX: ffffffff81601fd8 RCX: 00000000ffffffff + RDX: 00000000ffffffff RSI: 0000000000000000 RDI: 0000000000000001 + RBP: 0000000000000000 R8: 0000000000000000 R9: 0000000000000000 + R10: 0000000000000020 R11: 0000000000000005 R12: ffffffff816f5d80 + R13: ffffffffffffffff R14: 000000000008c800 R15: 0000000000000000 + ORIG_RAX: ffffffffffffff10 CS: 0010 SS: 0018 +#10 [ffffffff81601f40] default_idle at ffffffff81014c35 +#11 [ffffffff81601f50] cpu_idle at ffffffff8100d258 + +So it seems like the vm is reading its clock constantly trying to catch up some time after the migration. + +Last time it was a bug that was only triggered on newer Gold cpu hardware, but this time we also see this coming back on older Intel E5 cpus we tried to reproduce with a migrate loop of 3 days times between kernel 4.14.63 and 4.19.43 but this gave us no results. + +The vms were running ubuntu 14.04, centos 7, debian 7, debian 8 these vms are running linux kernel 3.*. + +The thing is that we are out of ideas for reproducing this, it seems like it the same kind of bug we are hitting, just like last time the vm is basically only trying to read the clock. Perhaps we can try to read the clock data and also try to read what the guest is actually waiting for, which value of the counter does it want to reach. + +I am not sure how to pinpoint the cause of this issue, I would like some help and possible some extra tips on debugging. 
+We are able to read the guests kernel which makes it a bit easier to debug, reproducing and finding the source of the problem is still something we are trying to figure out. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1833204 b/results/classifier/gemma3:12b/kvm/1833204 new file mode 100644 index 00000000..ec8615aa --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1833204 @@ -0,0 +1,109 @@ + +VM failed to start in nested virtualization with error "KVM: entry failed, hardware error 0x0" + +Hi, + +I have 3 ubuntu nodes provisioned by IaaS. +Then I tried to launch VM again in my ubuntu nodes. +It's a little strange that VM could be started successfully in two nodes. +And always failed in one nodes with error "KVM: entry failed, hardware error 0x0". + +When using virsh to resume the VM, it failed with following error, +virsh # list + Id Name State +---------------------------------- + 1 default_vm-cirros paused + +virsh # resume default_vm-cirros +error: Failed to resume domain default_vm-cirros +error: internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required + + +The detailed log from /var/log/libvirt/qemu/default_vm-cirros.log is as below. +``` +2019-06-18 09:55:52.397+0000: starting up libvirt version: 5.0.0, package: 1.fc28 (Unknown, 2019-01-22-08:04:34, 64723eea657e48d296e6beb0b1be9c4c), qemu version: 3.1.0qemu-3.1.0-4.fc28, kernel: 4.15.0-47-generic, hostname: vm-cirros +LC_ALL=C \ +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \ +HOME=/root \ +QEMU_AUDIO_DRV=none \ +/usr/bin/qemu-system-x86_64 \ +-name guest=default_vm-cirros,debug-threads=on \ +-S \ +-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-default_vm-cirros/master-key.aes \ +-machine pc-q35-3.1,accel=kvm,usb=off,dump-guest-core=off \ +-cpu Broadwell-IBRS,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc_adjust=on,mpx=on,avx512f=on,avx512cd=on,ssbd=on,xsaveopt=on,abm=on,invpcid=off \ +-m 489 \ +-realtime mlock=off \ +-smp 1,sockets=1,cores=1,threads=1 \ +-object iothread,id=iothread1 \ +-uuid 0d2a2043-41c0-59c3-9b17-025022203668 \ +-no-user-config \ +-nodefaults \ +-chardev socket,id=charmonitor,fd=22,server,nowait \ +-mon chardev=charmonitor,id=monitor,mode=control \ +-rtc base=utc \ +-no-shutdown \ +-boot strict=on \ +-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \ +-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \ +-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \ +-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \ +-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \ +-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \ +-drive file=/var/run/kubevirt-ephemeral-disks/container-disk-data/default/vm-cirros/disk_containerdisk/disk-image.raw,format=raw,if=none,id=drive-ua-containerdisk,cache=none \ +-device virtio-blk-pci,scsi=off,bus=pci.3,addr=0x0,drive=drive-ua-containerdisk,id=ua-containerdisk,bootindex=1,write-cache=on \ +-drive file=/var/run/kubevirt-ephemeral-disks/cloud-init-data/default/vm-cirros/noCloud.iso,format=raw,if=none,id=drive-ua-cloudinitdisk,cache=none \ +-device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=drive-ua-cloudinitdisk,id=ua-cloudinitdisk,write-cache=on \ +-netdev tap,fd=24,id=hostua-default,vhost=on,vhostfd=25 \ +-device 
virtio-net-pci,host_mtu=1430,netdev=hostua-default,id=ua-default,mac=16:57:38:cd:57:cb,bus=pci.1,addr=0x0 \ +-chardev socket,id=charserial0,fd=26,server,nowait \ +-device isa-serial,chardev=charserial0,id=serial0 \ +-chardev socket,id=charchannel0,fd=27,server,nowait \ +-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ +-vnc vnc=unix:/var/run/kubevirt-private/3b22a138-91af-11e9-af36-0016ac101123/virt-vnc \ +-device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \ +-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ +-msg timestamp=on +KVM: entry failed, hardware error 0x0 +EAX=00000000 EBX=00000000 ECX=00000000 EDX=000306d2 +ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000 +EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 00000000 0000ffff 00009300 +CS =f000 ffff0000 0000ffff 00009b00 +SS =0000 00000000 0000ffff 00009300 +DS =0000 00000000 0000ffff 00009300 +FS =0000 00000000 0000ffff 00009300 +GS =0000 00000000 0000ffff 00009300 +LDT=0000 00000000 0000ffff 00008200 +TR =0000 00000000 0000ffff 00008b00 +GDT= 00000000 0000ffff +IDT= 00000000 0000ffff +CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=06 66 05 00 00 01 00 8e c1 26 66 a3 14 f5 66 5b 66 5e 66 c3 <ea> 5b e0 00 f0 30 36 2f 32 33 2f 39 39 00 fc 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 +``` + +Ubuntu node version as follow, +cat /etc/os-release +NAME="Ubuntu" +VERSION="18.04.2 LTS (Bionic Beaver)" +ID=ubuntu +ID_LIKE=debian +PRETTY_NAME="Ubuntu 18.04.2 LTS" +VERSION_ID="18.04" +HOME_URL="https://www.ubuntu.com/" +SUPPORT_URL="https://help.ubuntu.com/" +BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" +PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" +VERSION_CODENAME=bionic +UBUNTU_CODENAME=bionic + +Output of `uname -a` is: +4.15.0-47-generic #50-Ubuntu SMP Wed Mar 13 10:44:52 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux + + + +Any additional information needed, please let me know. +Thx. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1834051 b/results/classifier/gemma3:12b/kvm/1834051 new file mode 100644 index 00000000..2819847f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1834051 @@ -0,0 +1,12 @@ + +IRQ2 ignored under KVM when using IOAPIC + +When using KVM, and an OS that supports the IOAPIC, interrupts mapped on IRQ2 (for instance, routing an HPET timer on interrupt 2) will cause the interrupts to never be delivered. This is because QEmu, when setting up the KVM interrupt routes, will not set one up for IRQ2[0]. When running without KVM, IRQ2 is identity-mapped to GSI2. + +My understanding is that IRQs should be identity mapped to their equivalent GSI unless a redirection entry is present in the MADT. This is supported by ACPI 6.2 spec[1], 5.2.12.5 Interrupt Source Override Structure, which claims: "It is assumed that the ISA interrupts will be identity-mapped into the first I/O APIC sources.". + +I stumbled across this while working on my own custom OS, got very confused why the HPET wasn't triggering any interruption - and even more confused why the behavior only happened in KVM and not in non-KVM. 
+ +[0]: https://github.com/qemu/qemu/blob/37560c259d7a0d6aceb96e9d6903ee002f4e5e0c/hw/i386/kvm/ioapic.c#L40 + +[1]: https://uefi.org/sites/default/files/resources/ACPI_6_2.pdf \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1836501 b/results/classifier/gemma3:12b/kvm/1836501 new file mode 100644 index 00000000..5007dc84 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1836501 @@ -0,0 +1,34 @@ + +cpu_address_space_init fails with assertion + +qemu-system-arm does not start with version >= 2.6 and KVM enabled. + + cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed. + +Hardware is Odroid XU4 with Exynos with 4.9.61+ Tested with Debian Stretch (9) or Buster (10). + +Without KVM it is running fine but slow. I'm operating Debian Jessie with qemu 2.1 for a long time with KVM virtualization working flawlessly. When I upgraded to Stretch I ran into the trouble described before. I tried Debian Stretch and Buster with all Kernels provided by the Board manufacturer (Hardkernel). + +It seems to be related to the feature introduced in Version 2.6: +https://wiki.qemu.org/ChangeLog/2.6 +- Support for a separate EL3 address space + +KVM is enabled, so I assume the adress space index asidx to be causing the assert to fail. + +dmesg | grep -i KVM +[ 0.741714] kvm [1]: 8-bit VMID +[ 0.741721] kvm [1]: IDMAP page: 40201000 +[ 0.741729] kvm [1]: HYP VA range: c0000000:ffffffff +[ 0.742543] kvm [1]: Hyp mode initialized successfully +[ 0.742600] kvm [1]: vgic-v2@10484000 +[ 0.742924] kvm [1]: vgic interrupt IRQ16 +[ 0.742943] kvm [1]: virtual timer IRQ60 + +Full command line is: +qemu-system-arm -M vexpress-a15 -smp 2 -m 512 -cpu host -enable-kvm -kernel vmlinuz -initrd initrd.gz -dtb vexpress-v2p-ca15-tc1.dtb -device virtio-blk-device,drive=inst-blk -drive file=PATHTOFILE,id=inst-blk,if=none,format=raw -append "vga=normal rw console=ttyAMA0" -nographic + +Is there anything to do to understand, if this is a hardware related failure or probably just a missing parameter? + +Regards + +Lutz \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1837851 b/results/classifier/gemma3:12b/kvm/1837851 new file mode 100644 index 00000000..376f55a3 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1837851 @@ -0,0 +1,12 @@ + +hv-tlbflush malfunctions on Intel host CPUs with neither EPT nor VPID (qemu-kvm) + +Enabling hv-tlbflush on older hosts using Intel CPUs supporting VT-x but neither EPT nor VPID will lead to bluescreens on the guest. + +It seems KVM only checks if EPT is available, and if it isn't it forcibly uses VPID. If that's *also* not available, it defaults to basically a no-op hypercall, though windows is expecting the TLB to be flushed. + +hv-tlbflush is pretty useless on machines not supporting these extensions anyway (only reasonably fix I can see would be to flush the *entire* TLB on tlbflush hypercall in KVM (i.e. a kernel fix), but that would remove any performance benefits), so I would suggest some kind of preliminary check and warning/error if hv-tlbflush is specified on such a host. + +All CPUs mentioned in this thread[0] are confirmed to be affected by the bug, and I have successfully reproduced it on an Intel Core2Duo E8500. 
+ +[0] https://forum.proxmox.com/threads/windows-guest-bluescreen-with-proxmox-6.56053/ \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1838312 b/results/classifier/gemma3:12b/kvm/1838312 new file mode 100644 index 00000000..5abe2ebb --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1838312 @@ -0,0 +1,32 @@ + +Qemu virt-manager Segmentation fault + +Hi! + +I installed all these packages: + +sudo apt install qemu +sudo apt install ipxe-qemu-256k-compat-efi-roms libspice-server1 libbluetooth3 +sudo apt install libbrlapi0.6 libcacard0 libfdt1 libusbredirparser1 libvirglrenderer0 libxen-4.9 libxenstore3.0 +sudo apt install cpu-checker ibverbs-providers ipxe-qemu libibverbs1 libiscsi7 libnl-route-3-200 librados2 librbd1 librdmacm1 msr-tools sharutils +sudo apt install qemu-block-extra qemu-system-common qemu-system-data qemu-system-gui qemu-utils +sudo apt install --no-install-recommends qemu-kvm qemu-system-x86 +sudo apt install libauparse0 ebtables gir1.2-gtk-vnc-2.0 gir1.2-libosinfo-1.0 gir1.2-libvirt-glib-1.0 gir1.2-spiceclientglib-2.0 gir1.2-spiceclientgtk-3.0 libvde0 libvdeplug2 libgovirt-common libgovirt2 libgtk-vnc-2.0-0 libgvnc-1.0-0 libosinfo-1.0-0 libphodav-2.0-0 libphodav-2.0-common libspice-client-glib-2.0-8 libspice-client-gtk-3.0-5 libusbredirhost1 libvirt-clients libvirt-daemon libvirt-daemon-driver-storage-rbd libvirt-daemon-system libvirt-glib-1.0-0 libvirt0 osinfo-db python3-libvirt python3-libxml2 spice-client-glib-usb-acl-helper vde2 vde2-cryptcab virt-viewer virtinst virt-manager + +without the i386 packages for Qemu because I want only 64 bit. + +I installed all these packages without error, but when I run + +# virt-manager + +Output: ...shows me: + +Segmentation fault + + +My hardware is 100% ok. +Maybee a broken lib? + + + +How can I fix that? \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1838390 b/results/classifier/gemma3:12b/kvm/1838390 new file mode 100644 index 00000000..214f6087 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1838390 @@ -0,0 +1,13 @@ + +vmx_write_mem: mmu_gva_to_gpa failed when using hvf + +Installed qemu 4.0.0 by homebrew, used below commands: + +1. qemu-img create -f raw arch-vm.img 100G +2. qemu-system-x86_64 -show-cursor -only-migratable -nodefaults -boot order=d -cdrom archlinux-2019.07.01-x86_64.iso -cpu host -device virtio-keyboard -device virtio-mouse -device virtio-tablet -drive file=arch-vm.img,format=raw,if=virtio -m 4096 -machine q35,accel=hvf,vmport=off -nic user,ipv6=off,model=virtio -smp 4,sockets=1,cores=2,threads=2 -soundhw hda -vga virtio + +Displayed bootloader menu successfully, select "Boot Arch Linux" then crashed with message: vmx_write_mem: mmu_gva_to_gpa ffff91953b540000 failed. + +Use tcg accelerator has no problem but very slow. + +See attachment for full crash report. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1842038 b/results/classifier/gemma3:12b/kvm/1842038 new file mode 100644 index 00000000..73bb84bb --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1842038 @@ -0,0 +1,215 @@ + +qemu 4.0/4.1 segfault on live migrate with virtio-scsi iothread + +[root@kvm-nvme5 qemu]# uname -a +Linux kvm-nvme5 4.14.35-1902.4.8.el7uek.x86_64 #2 SMP Sun Aug 4 22:25:18 GMT 2019 x86_64 x86_64 x86_64 GNU/Linux + +[root@kvm-nvme5 qemu]# qemu-system-x86_64 --version +QEMU emulator version 4.1.0 (qemu-4.1.0-1.el7) +Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers + +[root@kvm-nvme5 qemu]# libvirtd --version +libvirtd (libvirt) 5.6.0 + +when migrate +MIGR_OPTS="--live --copy-storage-all --verbose --persistent --undefinesource" +virsh migrate $MIGR_OPTS p12345 qemu+ssh://$SERV/system + +we got segfault if we have option <driver iothread='1'/> in config for virtio-scsi controller + +[1205674.818067] qemu-system-x86[39744]: segfault at 38 ip 00005575890ad411 sp 00007ffd3c10a0e0 error 6 in qemu-system-x86_64[5575889ad000+951000] + +On 4.0 we have error with this context(dont save all output) "qemu_coroutine_get_aio_context(co)' failed" + +If we remove option +<driver iothread='1'/> +migrate work fine without segfaults + +2019-08-30 08:25:35.402+0000: starting up libvirt version: 5.6.0, package: 1.el7 (Unknown, 2019-08-06-09:57:56, mock), qemu version: 4.1.0qemu-4.1.0-1.el7, kernel: 4.14.35-1902.4.8.el7uek.x86_64, hostname: kvm-nvme5 +LC_ALL=C \ +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \ +HOME=/var/lib/libvirt/qemu/domain-75-p541999 \ +XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-75-p541999/.local/share \ +XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-75-p541999/.cache \ +XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-75-p541999/.config \ +QEMU_AUDIO_DRV=none \ +/usr/bin/qemu-system-x86_64 \ +-name guest=p541999,debug-threads=on \ +-S \ +-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-75-p541999/master-key.aes \ +-machine pc-q35-4.0,accel=kvm,usb=off,dump-guest-core=off \ +-cpu Cascadelake-Server,ss=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1000,hv-vpindex,hv-runtime,hv-synic,hv-stimer,hv-fre +quencies,hv-reenlightenment,hv-tlbflush \ +-m 2148 \ +-overcommit mem-lock=off \ +-smp 1,sockets=1,cores=1,threads=1 \ +-object iothread,id=iothread1 \ +-uuid ff20ae7f-8cfe-4ec5-bd50-e78f8a167414 \ +-no-user-config \ +-nodefaults \ +-chardev socket,id=charmonitor,fd=44,server,nowait \ +-mon chardev=charmonitor,id=monitor,mode=control \ +-rtc base=utc,driftfix=slew \ +-global kvm-pit.lost_tick_policy=delay \ +-no-shutdown \ +-boot menu=on,strict=on \ +-device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x5.0x7 \ +-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x5 \ +-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x5.0x1 \ +-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x5.0x2 \ +-device virtio-scsi-pci,iothread=iothread1,id=scsi0,bus=pcie.0,addr=0x9 \ +-device virtio-serial-pci,id=virtio-serial0,bus=pcie.0,addr=0x6 \ +-drive file=/dev/vm/p541999,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,discard=unmap,aio=threads,throttling.bps-write=52428800,throttling.bps-write-max=314572800,throttling.bps-write-max-length=120 \ +-device 
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=2,write-cache=on \ +-drive if=none,id=drive-sata0-0-0,readonly=on \ +-device ide-cd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=1 \ +-netdev tap,fd=47,id=hostnet0,vhost=on,vhostfd=48 \ +-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:00:54:19:99,bus=pcie.0,addr=0x3 \ +-chardev pty,id=charserial0 \ +-device isa-serial,chardev=charserial0,id=serial0 \ +-chardev socket,id=charchannel0,fd=49,server,nowait \ +-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ +-vnc 0.0.0.0:6128,password \ +-device cirrus-vga,id=video0,bus=pcie.0,addr=0x1 \ +-device virtio-balloon-pci,id=balloon0,bus=pcie.0,addr=0x8 \ +-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ +-msg timestamp=on +char device redirected to /dev/pts/5 (label charserial0) +2019-08-30 08:27:00.539+0000: shutting down, reason=crashed + + +config: +<domain type='kvm'> + <name>p541999</name> + <uuid>ff20ae7f-8cfe-4ec5-bd50-e78f8a167414</uuid> + <memory unit='KiB'>2199552</memory> + <currentMemory unit='KiB'>2199552</currentMemory> + <vcpu placement='static'>1</vcpu> + <iothreads>1</iothreads> + <resource> + <partition>/machine</partition> + </resource> + <os> + <type arch='x86_64' machine='pc-q35-4.0'>hvm</type> + <boot dev='cdrom'/> + <boot dev='hd'/> + <bootmenu enable='yes'/> + </os> + <features> + <acpi/> + <apic/> + <pae/> + <hyperv> + <relaxed state='on'/> + <vapic state='on'/> + <spinlocks state='on' retries='4096'/> + <vpindex state='on'/> + <runtime state='on'/> + <synic state='on'/> + <stimer state='on'/> + <frequencies state='on'/> + <reenlightenment state='on'/> + <tlbflush state='on'/> + </hyperv> + <msrs unknown='ignore'/> + </features> + <cpu mode='host-model' check='full'> + <model fallback='forbid'/> + </cpu> + <clock offset='utc'> + <timer name='rtc' tickpolicy='catchup'/> + <timer name='pit' tickpolicy='delay'/> + <timer name='hpet' present='yes'/> + <timer name='hypervclock' present='yes'/> + </clock> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <devices> + <emulator>/usr/bin/qemu-system-x86_64</emulator> + <disk type='block' device='disk'> + <driver name='qemu' type='raw' cache='none' io='threads' discard='unmap'/> + <source dev='/dev/vm/p541999'/> + <backingStore/> + <target dev='sda' bus='scsi'/> + <iotune> + <write_bytes_sec>52428800</write_bytes_sec> + <write_bytes_sec_max>314572800</write_bytes_sec_max> + <write_bytes_sec_max_length>120</write_bytes_sec_max_length> + </iotune> + <address type='drive' controller='0' bus='0' target='0' unit='0'/> + </disk> + <disk type='file' device='cdrom'> + <driver name='qemu' type='raw'/> + <target dev='sdb' bus='sata'/> + <readonly/> + <address type='drive' controller='0' bus='0' target='0' unit='0'/> + </disk> + <controller type='usb' index='0' model='ich9-ehci1'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci1'> + <master startport='0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci2'> + <master startport='2'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci3'> + <master 
startport='4'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/> + </controller> + <controller type='virtio-serial' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> + </controller> + <controller type='scsi' index='0' model='virtio-scsi'> + <driver iothread='1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> + </controller> + <controller type='pci' index='0' model='pcie-root'/> + <controller type='sata' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> + </controller> + <interface type='bridge'> + <mac address='00:00:00:54:19:99'/> + <source bridge='br0'/> + <bandwidth> + <inbound average='12500' peak='12500' burst='1024'/> + <outbound average='12500' peak='12500' burst='1024'/> + </bandwidth> + <model type='virtio'/> + <filterref filter='clean-traffic'> + <parameter name='CTRL_IP_LEARNING' value='none'/> + <parameter name='IP' value='1.2.3.4'/> + </filterref> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <serial type='pty'> + <target type='isa-serial' port='0'> + <model name='isa-serial'/> + </target> + </serial> + <console type='pty'> + <target type='serial' port='0'/> + </console> + <channel type='unix'> + <source mode='bind' path='/var/lib/libvirt/qemu/p541999.agent'/> + <target type='virtio' name='org.qemu.guest_agent.0'/> + <address type='virtio-serial' controller='0' bus='0' port='1'/> + </channel> + <input type='mouse' bus='ps2'/> + <input type='keyboard' bus='ps2'/> + <graphics type='vnc' port='12028' autoport='no' listen='0.0.0.0' passwd='SUPERPASSWORD'> + <listen type='address' address='0.0.0.0'/> + </graphics> + <video> + <model type='cirrus' vram='16384' heads='1' primary='yes'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> + </video> + <memballoon model='virtio'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> + </memballoon> + </devices> + <seclabel type='none' model='none'/> +</domain> \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1846427 b/results/classifier/gemma3:12b/kvm/1846427 new file mode 100644 index 00000000..b6fc5299 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1846427 @@ -0,0 +1,76 @@ + +4.1.0: qcow2 corruption on savevm/quit/loadvm cycle + +I'm seeing massive corruption of qcow2 images with qemu 4.1.0 and git master as of 7f21573c822805a8e6be379d9bcf3ad9effef3dc after a few savevm/quit/loadvm cycles. I've narrowed it down to the following reproducer (further notes below): + +# qemu-img check debian.qcow2 +No errors were found on the image. +251601/327680 = 76.78% allocated, 1.63% fragmented, 0.00% compressed clusters +Image end offset: 18340446208 +# bin/qemu/bin/qemu-system-x86_64 -machine pc-q35-4.0.1,accel=kvm -m 4096 -chardev stdio,id=charmonitor -mon chardev=charmonitor -drive file=debian.qcow2,id=d -S +qemu-system-x86_64: warning: dbind: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. 
+QEMU 4.1.50 monitor - type 'help' for more information +(qemu) loadvm foo +(qemu) c +(qemu) qcow2_free_clusters failed: Invalid argument +qcow2_free_clusters failed: Invalid argument +qcow2_free_clusters failed: Invalid argument +qcow2_free_clusters failed: Invalid argument +quit +[m@nargothrond:~] qemu-img check debian.qcow2 +Leaked cluster 85179 refcount=2 reference=1 +Leaked cluster 85180 refcount=2 reference=1 +ERROR cluster 266150 refcount=0 reference=2 +[...] +ERROR OFLAG_COPIED data cluster: l2_entry=422840000 refcount=1 + +9493 errors were found on the image. +Data may be corrupted, or further writes to the image may corrupt it. + +2 leaked clusters were found on the image. +This means waste of disk space, but no harm to data. +259266/327680 = 79.12% allocated, 1.67% fragmented, 0.00% compressed clusters +Image end offset: 18340446208 + +This is on a x86_64 Linux 5.3.1 Gentoo host with qemu-system-x86_64 and accel=kvm. The compiler is gcc-9.2.0 with the rest of the system similarly current. + +Reproduced with qemu-4.1.0 from distribution package as well as vanilla git checkout of tag v4.1.0 and commit 7f21573c822805a8e6be379d9bcf3ad9effef3dc (today's master). Does not happen with qemu compiled from vanilla checkout of tag v4.0.0. Build sequence: + +./configure --prefix=$HOME/bin/qemu-bisect --target-list=x86_64-softmmu --disable-werror --disable-docs +[...] +CFLAGS -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g +[...] (can provide full configure output if helpful) +make -j8 install + +The kind of guest OS does not matter: seen with Debian testing 64bit, Windows 7 x86/x64 BIOS and Windows 7 x64 EFI. + +The virtual storage controller does not seem to matter: seen with VirtIO SCSI, emulated SCSI and emulated SATA AHCI. + +Caching modes (none, directsync, writeback), aio mode (threads, native) or discard (ignore, unmap) or detect-zeroes (off, unmap) does not influence occurence either. + +Having more RAM in the guest seems to increase odds of corruption: With 512MB to the Debian guest problem hardly occurs at all, with 4GB RAM it happens almost instantly. + +An automated reproducer works as follows: + +- the guest *does* mount its root fs and swap with option discard and my testing leaves me with the impression that file deletion rather than reading is causing the issue + +- foo is a snapshot of the running Debian VM which is already running command + +# while true ; do dd if=/dev/zero of=foo bs=10240k count=400 ; done + +to produce some I/O to the disk (4GB file with 4GB of RAM). + +- on the host a loop continuously resumes and saves the guest state and quits qemu inbetween: + +# while true ; do (echo loadvm foo ; echo c ; sleep 10 ; echo stop ; echo savevm foo ; echo quit ) | bin/qemu-bisect/bin/qemu-system-x86_64 -machine pc-q35-3.1,accel=kvm -m 4096 -chardev stdio,id=charmonitor -mon chardev=charmonitor -drive file=debian.qcow2,id=d -S -display none ; done + +- quitting qemu inbetween saves and loads seems to be necessary for the problem to occur. Just continusouly in one session saving and loading guest state does not trigger it. + +- For me, after about 2 to 6 iterations of above loop the image is corrupted. + +- corruption manifests with other messages from qemu as well, e.g.: + +(qemu) loadvm foo +Error: Device 'd' does not have the requested snapshot 'foo' + +Using above reproducer I have to the be best of my ability bisected the introduction of the problem to commit 69f47505ee66afaa513305de0c1895a224e52c45 (block: avoid recursive block_status call if possible). 
qemu compiled from the commit before does not exhibit the issue, from that commit on it does and reverting the commit off of current master makes it disappear. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1847440 b/results/classifier/gemma3:12b/kvm/1847440 new file mode 100644 index 00000000..5e88eeff --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1847440 @@ -0,0 +1,69 @@ + +ppc64le: KVM guest fails to boot with an error `virtio_scsi: probe of virtio1 failed with error -22` on master + +PowerPC KVM Guest fails to boot on current qemu master(98b2e3c9ab3abfe476a2b02f8f51813edb90e72d), + +Env: +HW: IBM Power8 +Host Kernel: 5.4.0-rc2-00038-ge3280b54afed +Guest Kernel: 4.13.9-300.fc27.ppc64le +Qemu: https://github.com/qemu/qemu.git (master) +Libvirt: 5.4.0 + +Guest boot gets stuck: +... +[ OK ] Mounted Kernel Configuration File System. +[ 7.598740] virtio-pci 0000:00:01.0: enabling device (0000 -> 0003) +[ 7.598828] virtio-pci 0000:00:01.0: virtio_pci: leaving for legacy driver +[ 7.598957] virtio-pci 0000:00:02.0: enabling device (0000 -> 0003) +[ 7.599017] virtio-pci 0000:00:02.0: virtio_pci: leaving for legacy driver +[ 7.599123] virtio-pci 0000:00:04.0: enabling device (0000 -> 0003) +[ 7.599182] virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver +[ 7.620620] synth uevent: /devices/vio: failed to send uevent +[ 7.620624] vio vio: uevent: failed to send synthetic uevent +[ OK ] Started udev Coldplug all Devices. +[ 7.624559] audit: type=1130 audit(1570610300.990:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' +[ OK ] Reached target System Initialization. +[ OK ] Reached target Basic System. +[ OK ] Reached target Remote File Systems (Pre). +[ OK ] Reached target Remote File Systems. +[ 7.642961] virtio_scsi: probe of virtio1 failed with error -22 +[ *** ] A start job is running for dev-disk…21b3519a80.device (14s / no limit) +... + + + +git bisect, yielded a bad commit [e68cd0cb5cf49d334abe17231a1d2c28b846afa2] spapr: Render full FDT on ibm,client-architecture-support, reverting this commit boot the guest properly. 
+ +git bisect start +# good: [9e06029aea3b2eca1d5261352e695edc1e7d7b8b] Update version for v4.1.0 release +git bisect good 9e06029aea3b2eca1d5261352e695edc1e7d7b8b +# bad: [98b2e3c9ab3abfe476a2b02f8f51813edb90e72d] Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging +git bisect bad 98b2e3c9ab3abfe476a2b02f8f51813edb90e72d +# good: [56e6250ede81b4e4b4ddb623874d6c3cdad4a96d] target/arm: Convert T16, nop hints +git bisect good 56e6250ede81b4e4b4ddb623874d6c3cdad4a96d +# good: [5d69cbdfdd5cd6dadc9f0c986899844a0e4de703] tests/tcg: target/s390x: Test MVC +git bisect good 5d69cbdfdd5cd6dadc9f0c986899844a0e4de703 +# good: [88112488cf228df8b7588c8aa38e16ecd0dff48e] qapi: Make check_type()'s array case a bit more obvious +git bisect good 88112488cf228df8b7588c8aa38e16ecd0dff48e +# good: [972bd57689f1e11311d86b290134ea2ed9c7c11e] ppc/kvm: Skip writing DPDES back when in run time state +git bisect good 972bd57689f1e11311d86b290134ea2ed9c7c11e +# bad: [1aba8716c8335e88b8c358002a6e1ac89f7dd258] ppc/pnv: Remove the XICSFabric Interface from the POWER9 machine +git bisect bad 1aba8716c8335e88b8c358002a6e1ac89f7dd258 +# bad: [00ed3da9b5c2e66e796a172df3e19545462b9c90] xics: Minor fixes for XICSFabric interface +git bisect bad 00ed3da9b5c2e66e796a172df3e19545462b9c90 +# good: [33432d7737b53c92791f90ece5dbe3b7bb1c79f5] target/ppc: introduce set_dfp{64,128}() helper functions +git bisect good 33432d7737b53c92791f90ece5dbe3b7bb1c79f5 +# good: [f6d4c423a222f02bfa84a49c3d306d7341ec9bab] target/ppc: remove unnecessary if() around calls to set_dfp{64,128}() in DFP macros +git bisect good f6d4c423a222f02bfa84a49c3d306d7341ec9bab +# bad: [e68cd0cb5cf49d334abe17231a1d2c28b846afa2] spapr: Render full FDT on ibm,client-architecture-support +git bisect bad e68cd0cb5cf49d334abe17231a1d2c28b846afa2 +# good: [c4ec08ab70bab90685d1443d6da47293e3aa312a] spapr-pci: Stop providing assigned-addresses +git bisect good c4ec08ab70bab90685d1443d6da47293e3aa312a +# first bad commit: [e68cd0cb5cf49d334abe17231a1d2c28b846afa2] spapr: Render full FDT on ibm,client-architecture-support + + +attached vmxml. 
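A side note (not from the original report): a bisection like the one above can be automated with `git bisect run` once a boot check is scripted. The bisect.sh helper named below is hypothetical; it would need to rebuild qemu-system-ppc64, boot the guest and exit 0 only when the guest reaches userspace.

```
# sketch, with a hypothetical bisect.sh helper: exit 0 = boots, non-zero = hangs
git bisect start
git bisect bad  98b2e3c9ab3abfe476a2b02f8f51813edb90e72d   # current master, fails
git bisect good 9e06029aea3b2eca1d5261352e695edc1e7d7b8b   # v4.1.0, boots
git bisect run ./bisect.sh
```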
+ +qemu commandline: +/home/sath/qemu/ppc64-softmmu/qemu-system-ppc64 -name guest=vm1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-19-vm1/master-key.aes -machine pseries-4.2,accel=kvm,usb=off,dump-guest-core=off -m 81920 -overcommit mem-lock=off -smp 512,sockets=1,cores=128,threads=4 -uuid fd4a5d54-0216-490e-82d2-1d4e89683b3d -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=24,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device qemu-xhci,id=usb,bus=pci.0,addr=0x3 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -drive file=/home/sath/tests/data/avocado-vt/images/jeos-27-ppc64le_vm1.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:e6:df:24,bus=pci.0,addr=0x1 -chardev pty,id=charserial0 -device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -M pseries,ic-mode=xics -msg timestamp=on \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1848244 b/results/classifier/gemma3:12b/kvm/1848244 new file mode 100644 index 00000000..db69739c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1848244 @@ -0,0 +1,12 @@ + +QEMU KVM IGD SandyBridge Passthrough crash + +I try to passthrough my Intel GPU with this command: + +qemu-system-x86_64 -nodefaults -parallel none -k de -rtc base=localtime -serial unix:/run/qemu/win7-serial.sock,server,nowait -monitor unix:/run/qemu/win7-monitor.sock,server,nowait -netdev user,id=net0 -device virtio-net-pci,netdev=net0,mac=52:54:00:00:00:07 -device vfio-pci,host=0000:00:02.0,addr=0x2 -device vfio-pci,host=0000:00:1b.0 -device virtio-keyboard-pci -device virtio-mouse-pci -object input-linux,id=kbd1,evdev=/dev/input/by-path/pci-0000:00:1a.0-usb-0:1.2.2:1.2-event-kbd,grab_all=on,repeat=on -object input-linux,id=mouse1,evdev=/dev/input/by-path/pci-0000:00:1a.0-usb-0:1.2.2:1.2-event-mouse -enable-kvm -cpu host -smp 4,sockets=1,cores=4,threads=1 -vga none -display none -m 2g -device virtio-blk-pci,drive=boot,bootindex=1 -drive file=/opt/vm/qcow2/win7.qcow2,format=qcow2,if=none,id=boot + +This ONLY works if i remove "-enable-kvm" else the windows (7 and 10) boot crashes in bluescreen "stop 0x0000003b" (probably while loading the intel gpu driver (intel graphics 3000). + +The system is an older ThinkPad T420 with Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz. 
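A side note (not from the original report): before blaming KVM for an IGD passthrough bluescreen it is worth confirming that the device really is isolated in its own IOMMU group and bound to vfio-pci. A small check, assuming the 0000:00:02.0 address used in the command above:

```
# sketch: list IOMMU groups and the driver currently bound to the IGD
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}
    echo "group ${g%%/*}: ${d##*/}"
done
readlink /sys/bus/pci/devices/0000:00:02.0/driver   # expect .../vfio-pci
```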
+ +CMDLINE: BOOT_IMAGE=/vmlinuz-linux root=LABEL=root rw ipv6.disable=0 net.ifnames=0 intel_iommu=on iommu=pt video=LVDS-1:d \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1848901 b/results/classifier/gemma3:12b/kvm/1848901 new file mode 100644 index 00000000..0b887160 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1848901 @@ -0,0 +1,36 @@ + +kvm_mem_ioeventfd_add: error adding ioeventfd: No space left on device (28) + +=> QEMU process has stopped, return code: -6 + +Start QEMU with /usr/bin/qemu-system-x86_64 -name CiscoASAv9.8.1-1 -m 2048M -smp cpus=1 -enable-kvm -machine smm=off -boot order=c -drive 'file=/home/deemon/GNS3/projects/ASAv my ass/project-files/qemu/7725cdea-5e66-4777-b4dd-c3905f258394/hda_disk.qcow2,if=virtio,index=0,media=disk,id=drive0' -uuid 7725cdea-5e66-4777-b4dd-c3905f258394 -serial telnet:127.0.0.1:5000,server,nowait -monitor tcp:127.0.0.1:44629,server,nowait -net none -device e1000,mac=0c:7a:1d:83:94:00,netdev=gns3-0 -netdev socket,id=gns3-0,udp=127.0.0.1:10001,localaddr=127.0.0.1:10000 -device e1000,mac=0c:7a:1d:83:94:01,netdev=gns3-1 -netdev socket,id=gns3-1,udp=127.0.0.1:10003,localaddr=127.0.0.1:10002 -device e1000,mac=0c:7a:1d:83:94:02,netdev=gns3-2 -netdev socket,id=gns3-2,udp=127.0.0.1:10005,localaddr=127.0.0.1:10004 -device e1000,mac=0c:7a:1d:83:94:03,netdev=gns3-3 -netdev socket,id=gns3-3,udp=127.0.0.1:10007,localaddr=127.0.0.1:10006 -device e1000,mac=0c:7a:1d:83:94:04,netdev=gns3-4 -netdev socket,id=gns3-4,udp=127.0.0.1:10009,localaddr=127.0.0.1:10008 -device e1000,mac=0c:7a:1d:83:94:05,netdev=gns3-5 -netdev socket,id=gns3-5,udp=127.0.0.1:10011,localaddr=127.0.0.1:10010 -device e1000,mac=0c:7a:1d:83:94:06,netdev=gns3-6 -netdev socket,id=gns3-6,udp=127.0.0.1:10013,localaddr=127.0.0.1:10012 -device e1000,mac=0c:7a:1d:83:94:07,netdev=gns3-7 -netdev socket,id=gns3-7,udp=127.0.0.1:10015,localaddr=127.0.0.1:10014 -nographic + + +Execution log: +kvm_mem_ioeventfd_add: error adding ioeventfd: No space left on device (28) + +and then it just closes... + + + +[deemon@Zen ~]$ coredumpctl info 8638 + PID: 8638 (qemu-system-x86) + UID: 1000 (deemon) + GID: 1000 (deemon) + Signal: 6 (ABRT) + Timestamp: Sun 2019-10-20 04:27:29 EEST (5min ago) + Command Line: /usr/bin/qemu-system-x86_64 -name CiscoASAv9.8.1-1 -m 2048M -smp cpus=1 -enable-kvm -machine smm=off -boot order=c -drive file=/home/deemon/GNS3/projects/ASAv my ass/project-files/qemu> + Executable: /usr/bin/qemu-system-x86_64 + Control Group: /user.slice/user-1000.slice/session-2.scope + Unit: session-2.scope + Slice: user-1000.slice + Session: 2 + Owner UID: 1000 (deemon) + Boot ID: cd30f69a8d194359a31889dc7b6b026c + Machine ID: d0a2d74a5cd9430797d902f5237c448d + Hostname: Zen + Storage: /var/lib/systemd/coredump/core.qemu-system-x86.1000.cd30f69a8d194359a31889dc7b6b026c.8638.1571534849000000.lz4 (truncated) + Message: Process 8638 (qemu-system-x86) of user 1000 dumped core. + + Stack trace of thread 8642: + #0 0x00007f1a33609f25 n/a (n/a) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1850751 b/results/classifier/gemma3:12b/kvm/1850751 new file mode 100644 index 00000000..2aef9e6a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1850751 @@ -0,0 +1,9 @@ + +kvm flag is not exposed by default + +Hi I found that the kvm flags is not exposed by default, but according to the source code, it should be exposed by default when the CPU Model is a X86CPU. 
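A side note (not from the original report): whether the KVM paravirtualization signature is actually visible can be checked from inside a guest; if the flag is hidden, the commands below report no hypervisor or a different vendor.

```
# sketch: quick in-guest checks for the exposed KVM signature
systemd-detect-virt              # prints "kvm" when the signature is exposed
lscpu | grep -i hypervisor       # shows "Hypervisor vendor: KVM" when detected
```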
+ +we have to specifically add "kvm=on" in QEMU custom cpu args like this: +<qemu:arg value='host,kvm=on,+invtsc,+hypervisor'/> + +Also the libvirt can't expose kvm because this (libvirt assumes the kvm flag is exposed by default, only "kvm hidden = 'true'" can be used. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1851845 b/results/classifier/gemma3:12b/kvm/1851845 new file mode 100644 index 00000000..8b844452 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1851845 @@ -0,0 +1,13 @@ + +Windows 10 panics with BlueIris + +Running Windows 10 64bit. Starting BlueIris 64 bit causes Windows to panic with CPU type is set higher than Penryn or CPU type = host. + +I have been able to reproduce the same issue on Proxmox 4,5,6 as well as oVirt 3. and 4. + +Does not panic when CPU type is set to kvm64. + + +pve-qemu-kvm/stable 4.0.1-4 amd64 + + /usr/bin/kvm -id 102 -name win7-01 -chardev socket,id=qmp,path=/var/run/qemu-server/102.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/102.pid -daemonize -smbios type=1,uuid=3ec61114-c30c-4719-aa00-f3f05be22d48 -smp 8,sockets=1,cores=8,maxcpus=8 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/102.vnc,password -no-hpet -cpu penryn,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,enforce -m 12000 -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device vmgenid,guid=50deb929-1974-4fd0-9ad3-71722149d568 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device VGA,id=vga,bus=pci.0,addr=0x2 -chardev socket,path=/var/run/qemu-server/102.qga,server,nowait,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:203582cea152 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=/disk02/prox/images/102/vm-102-disk-0.raw,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -drive file=/dev/disk/by-id/ata-WDC_WD80EMAZ-00WJTA0_7SGZLHYC-part1,if=none,id=drive-virtio1,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb -netdev type=tap,id=net0,ifname=tap102i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=1e:be:cb:0b:6f:13,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -netdev type=tap,id=net1,ifname=tap102i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=EA:76:56:16:2F:D7,netdev=net1,bus=pci.0,addr=0x13,id=net1,bootindex=301 -rtc driftfix=slew,base=localtime -machine type=pc -global kvm-pit.lost_tick_policy=discard \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1859310 b/results/classifier/gemma3:12b/kvm/1859310 new file mode 100644 index 00000000..4e6e1e0f --- /dev/null +++ 
b/results/classifier/gemma3:12b/kvm/1859310 @@ -0,0 +1,21 @@ + +libvirt probing fails due to assertion failure with KVM and 'none' machine type + +Using libvirt on Ubuntu 19.10, I get the following error when I try to set <emulator> to the latest qemu from git (commit dc65a5bdc9): + + error: internal error: Failed to start QEMU binary /usr/local/bin/qemu-system-x86_64 for probing: /home/joey/git/qemu/target/i386/kvm.c:2176:kvm_arch_init: Object 0x564bfd5c3200 is not an instance of type x86-machine + +Qemu command line to reproduce: + + sudo x86_64-softmmu/qemu-system-x86_64 -machine 'none,accel=kvm' + +Commit ed9e923c3c (Dec 12, 2019) introduced the issue by removing an object_dynamic_cast call. In this scenario, kvm_arch_init is passed an instance of "none-machine" instead of "x86-machine". + +The following one-line change to target/i386/kvm.c reintroduces the cast: + + if (kvm_check_extension(s, KVM_CAP_X86_SMM) && ++ object_dynamic_cast(OBJECT(ms), TYPE_X86_MACHINE) && + x86_machine_is_smm_enabled(X86_MACHINE(ms))) { + smram_machine_done.notify = register_smram_listener; + qemu_add_machine_init_done_notifier(&smram_machine_done); + } \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1863819 b/results/classifier/gemma3:12b/kvm/1863819 new file mode 100644 index 00000000..b58ef71e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1863819 @@ -0,0 +1,42 @@ + +repeated KVM single step crashes leaks into SMP guest and crashes guest application + +Guest: Windows 7 x64 +Host: Ubuntu 18.04.4 (kernel 5.3.0-40-generic) +QEMU: master 6c599282f8ab382fe59f03a6cae755b89561a7b3 + +If I try to use GDB to repeatedly single-step a userspace process while running a KVM guest, the userspace process will eventually crash with a 0x80000004 exception (single step). This is easily reproducible on a Windows guest, I've not tried another guest type but I've been told it's the same there also. + +On a Ubuntu 16 host with an older kernel, this will hang the entire machine. However, it seems it may have been fixed by https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5cc244a20b86090c087073c124284381cdf47234 ? + +It's not clear to me whether this is a KVM or a QEMU bug. A TCG guest does not crash the userspace process in the same way, but it does hang the VM. + +I've tried a variety of QEMU versions (3.0, 4.2, master) and they all exhibit the same behavior. I'm happy to dig into this more if someone can point me in the right direction. + +Here's the outline for reproducing the bug: + +* Compile iloop.cpp (attached) as a 32-bit application using MSVC +* Start Windows 7 x64 guest under GDB + * Pass '-enable-kvm -smp 4,cores=2 -gdb tcp::4567' to QEMU along with other typical options + +(need to get CR3 to ensure we're in the right application context -- if there's an easier way to do this I'd love to hear it!) +* Install WinDBG on guest +* Copy SysInternals LiveKD to guest +* Start iloop.exe in guest, note loop address +* Run LiveKD from administrative prompt + * livekd64.exe -w +* In WinDBG: + * !process 0 0 + * Search for iloop.exe, note DirBase (this is CR3) + +In GDB: +* Execute 'target remote tcp::4567' +* Execute 'c' +* Hit CTRL-C to pause the VM +* Execute 'p/x $cr3' + .. 
continue if not equal to DirBase in WinDBG, keep stopping until it is equal +* Once $cr3 is correct value, if you 'stepi' a few times you'll note the process going in a loop, it should keep hitting the address echoed to the console by iloop.exe + +Crash the process from GDB: +* Execute 'stepi 100000000' +* Watch the process, eventually it'll die with an 0x80000004 error \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1864536 b/results/classifier/gemma3:12b/kvm/1864536 new file mode 100644 index 00000000..7dea9ba3 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1864536 @@ -0,0 +1,27 @@ + +Support for XSAVES intel instructions in QEMU + +Dear QEMU developers, + +I am running Hyper-V on qemu+kvm. During it initialization, it checks for XSAVES support: first it executes CPUID with EAX = 0xd and ECX = 1 and looks at bit 3 in the returned value of EAX (Supports XSAVES/XRSTORS and IA32_XSS [1]), and then it reads the MSR IA32_VMX_PROCBASED_CTLS2 (index 0x48B) and looks at bit 20 (Enable XSAVES/XSTORS [2]). If CPUID shows that XSAVES is supported and the bit is not enabled in the MSR, Hyper-V decides to fail and stops its initialization. It used to work until last spring/summer where something might have changed in either KVM or QEMU. + +It seems that KVM sets the correct flags (in CPUID and the MSR) when the host CPU supports XSAVES. In QEMU, based on comments in target/i386/cpu.c it seems that XSAVES is not added in +builtin_x86_defs[].features[FEAT_VMX_SECONDARY_CTLS] because it might break live migration. Therefore, when setting the MSR for the vcpu, QEMU is masking off the feature. + +I have tested two possible solutions: +- adding the flag in .features[FEAT_VMX_SECONDARY_CTLS] +- removing the support of the instruction in feature_word_info[FEAT_XSAVE].feat_names + +Both solutions work and Hyper-v is happily running. I can provide a patch for the solution you might consider applying. Otherwise, is there a better way to fix the issue? + +Qemu version: 4.2.0 +Kernel version: 5.5.4 +Qemu command: https://gist.github.com/0xabe-io/b4d797538e2160252addc1d1d64738e2 + + +Many thanks, +Alexandre + +Ref: +[1] Intel SDM Volume 2A, chapter 3, page 196 +[2] Intel SDM Volume 3C, chapter 24, page 11 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1866870 b/results/classifier/gemma3:12b/kvm/1866870 new file mode 100644 index 00000000..f45a0704 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1866870 @@ -0,0 +1,10 @@ + +KVM Guest pauses after upgrade to Ubuntu 20.04 + +As outlined here: https://bugs.launchpad.net/qemu/+bug/1813165/comments/15 + +After upgrade, all KVM guests are in a default pause state. Even after forcing them off via virsh, and restarting them the guests are paused. + +These Guests are not nested. + +A lot of diganostic information are outlined in the previous bug report link provided. The solution mentioned in previous report had been allegedly integrated into the downstream updates. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1866962 b/results/classifier/gemma3:12b/kvm/1866962 new file mode 100644 index 00000000..f569f9ee --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1866962 @@ -0,0 +1,150 @@ + +[Regression]Powerpc kvm guest unable to start with hugepage backed memory + +Current upstream qemu master does not boot a powerpc kvm guest backed by hugepage. 
+ +HW: Power9 (DD2.3) +Host Kernel: 5.6.0-rc5 +Guest Kernel: 5.6.0-rc5 +Qemu: ba29883206d92a29ad5a466e679ccfc2ee6132ef + +Steps to reproduce: +1. Allocate enough hugepage to boot a KVM guest +# cat /proc/meminfo |grep ^HugePages +HugePages_Total: 5000 +HugePages_Free: 5000 +HugePages_Rsvd: 0 +HugePages_Surp: 0 + +2. Define and boot a guest +/usr/bin/virt-install --connect=qemu:///system --hvm --accelerate --name 'vm1' --machine pseries --memory=8192,hugepages=yes --vcpu=8,maxvcpus=8,sockets=1,cores=8,threads=1 --import --nographics --serial pty --memballoon model=virtio --controller type=scsi,model=virtio-scsi --disk path=/home/kvmci/tests/data/avocado-vt/images/f31-ppc64le.qcow2,bus=scsi,size=10,format=qcow2 --network=bridge=virbr0,model=virtio,mac=52:54:00:5f:82:83 --mac=52:54:00:5f:82:83 --boot emulator=/home/sath/qemu/ppc64-softmmu/qemu-system-ppc64,kernel=/home/kvmci/linux/vmlinux,kernel_args="root=/dev/sda5 rw console=tty0 console=ttyS0,115200 init=/sbin/init initcall_debug selinux=0" --noautoconsole + +Starting install... +ERROR internal error: qemu unexpectedly closed the monitor: qemu-system-ppc64: util/qemu-thread-posix.c:76: qemu_mutex_lock_impl: Assertion `mutex->initialized' failed. +qemu-system-ppc64: util/qemu-thread-posix.c:76: qemu_mutex_lock_impl: Assertion `mutex->initialized' failed. + + -----------NOK + + +Bisected the issue to below commit. + +037fb5eb3941c80a2b7c36a843e47207ddb004d4 is the first bad commit +commit 037fb5eb3941c80a2b7c36a843e47207ddb004d4 +Author: bauerchen <email address hidden> +Date: Tue Feb 11 17:10:35 2020 +0800 + + mem-prealloc: optimize large guest startup + + [desc]: + Large memory VM starts slowly when using -mem-prealloc, and + there are some areas to optimize in current method; + + 1、mmap will be used to alloc threads stack during create page + clearing threads, and it will attempt mm->mmap_sem for write + lock, but clearing threads have hold read lock, this competition + will cause threads createion very slow; + + 2、methods of calcuating pages for per threads is not well;if we use + 64 threads to split 160 hugepage,63 threads clear 2page,1 thread + clear 34 page,so the entire speed is very slow; + + to solve the first problem,we add a mutex in thread function,and + start all threads when all threads finished createion; + and the second problem, we spread remainder to other threads,in + situation that 160 hugepage and 64 threads, there are 32 threads + clear 3 pages,and 32 threads clear 2 pages. + + [test]: + 320G 84c VM start time can be reduced to 10s + 680G 84c VM start time can be reduced to 18s + + Signed-off-by: bauerchen <email address hidden> + Reviewed-by: Pan Rui <email address hidden> + Reviewed-by: Ivan Ren <email address hidden> + [Simplify computation of the number of pages per thread. 
- Paolo] + Signed-off-by: Paolo Bonzini <email address hidden> + + util/oslib-posix.c | 32 ++++++++++++++++++++++++-------- + 1 file changed, 24 insertions(+), 8 deletions(-) + + + +bisect log: + +# git bisect log +git bisect start +# good: [52901abf94477b400cf88c1f70bb305e690ba2de] Update version for v4.2.0-rc5 release +git bisect good 52901abf94477b400cf88c1f70bb305e690ba2de +# bad: [ba29883206d92a29ad5a466e679ccfc2ee6132ef] Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20200310' into staging +git bisect bad ba29883206d92a29ad5a466e679ccfc2ee6132ef +# good: [d1ebbc9d16297b54b153ee33abe05eb4f1df0c66] target/arm/kvm: trivial: Clean up header documentation +git bisect good d1ebbc9d16297b54b153ee33abe05eb4f1df0c66 +# good: [87b74e8b6edd287ea2160caa0ebea725fa8f1ca1] target/arm: Vectorize USHL and SSHL +git bisect good 87b74e8b6edd287ea2160caa0ebea725fa8f1ca1 +# bad: [e0175b71638cf4398903c0d25f93fe62e0606389] Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20200228' into staging +git bisect bad e0175b71638cf4398903c0d25f93fe62e0606389 +# bad: [ca6155c0f2bd39b4b4162533be401c98bd960820] Merge tag '<email address hidden>' of https://github.com/patchew-project/qemu into HEAD +git bisect bad ca6155c0f2bd39b4b4162533be401c98bd960820 +# good: [ab74e543112957696f7c79b0c33ecebd18b52af5] ppc/spapr: use memdev for RAM +git bisect good ab74e543112957696f7c79b0c33ecebd18b52af5 +# good: [cb06fdad05f3e546a4e20f1f3c0127f9ae53de1a] fuzz: support for fork-based fuzzing. +git bisect good cb06fdad05f3e546a4e20f1f3c0127f9ae53de1a +# bad: [037fb5eb3941c80a2b7c36a843e47207ddb004d4] mem-prealloc: optimize large guest startup +git bisect bad 037fb5eb3941c80a2b7c36a843e47207ddb004d4 +# good: [88e2b97aa3e369a454c9d8360afddc348070c708] Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-virtiofs-20200221' into staging +git bisect good 88e2b97aa3e369a454c9d8360afddc348070c708 +# good: [b1db8c63169f2139af9f26c884e5e2abd27dd290] fuzz: add virtio-net fuzz target +git bisect good b1db8c63169f2139af9f26c884e5e2abd27dd290 +# good: [e5c59355ae9f724777c61c859292ec9db2c8c2ab] fuzz: add documentation to docs/devel/ +git bisect good e5c59355ae9f724777c61c859292ec9db2c8c2ab +# good: [920d557e5ae58671d335acbcfba3f9a97a02911c] memory: batch allocate ioeventfds[] in address_space_update_ioeventfds() +git bisect good 920d557e5ae58671d335acbcfba3f9a97a02911c +# first bad commit: [037fb5eb3941c80a2b7c36a843e47207ddb004d4] mem-prealloc: optimize large guest startup + + + + +Qemu cmdline: +``` +/home/sath/qemu/ppc64-softmmu/qemu-system-ppc64 \ +-name guest=vm1,debug-threads=on \ +-S \ +-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-9-vm1/master-key.aes \ +-machine pseries-5.0,accel=kvm,usb=off,dump-guest-core=off \ +-m 8192 \ +-mem-prealloc \ +-mem-path /dev/hugepages/libvirt/qemu/9-vm1 \ +-overcommit mem-lock=off \ +-smp 8,sockets=1,cores=8,threads=1 \ +-uuid e5875dd8-0d1c-422f-ae46-9a0b88919902 \ +-display none \ +-no-user-config \ +-nodefaults \ +-chardev socket,id=charmonitor,fd=36,server,nowait \ +-mon chardev=charmonitor,id=monitor,mode=control \ +-rtc base=utc \ +-no-shutdown \ +-boot strict=on \ +-kernel /home/kvmci/linux/vmlinux \ +-append 'root=/dev/sda5 rw console=tty0 console=ttyS0,115200 init=/sbin/init initcall_debug selinux=0' \ +-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x3 \ +-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 \ +-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \ +-drive 
file=/home/kvmci/tests/data/avocado-vt/images/f31-ppc64le.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
+-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
+-netdev tap,fd=38,id=hostnet0,vhost=on,vhostfd=39 \
+-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:5f:82:83,bus=pci.0,addr=0x1 \
+-chardev pty,id=charserial0 \
+-device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 \
+-chardev socket,id=charchannel0,fd=40,server,nowait \
+-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
+-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
+-msg timestamp=on
+2020-03-11 08:11:46.639+0000: 494632: info : libvirt version: 5.6.0, package: 5.fc31 (Fedora Project, 2019-11-11-20:24:40, )
+2020-03-11 08:11:46.639+0000: 494632: info : hostname: ltcmihawk50.aus.stglabs.ibm.com
+2020-03-11 08:11:46.639+0000: 494632: info : virObjectUnref:349 : OBJECT_UNREF: obj=0x7fff3c0f6fb0
+char device redirected to /dev/pts/2 (label charserial0)
+qemu-system-ppc64: util/qemu-thread-posix.c:76: qemu_mutex_lock_impl: Assertion `mutex->initialized' failed.
+qemu-system-ppc64: util/qemu-thread-posix.c:76: qemu_mutex_lock_impl: Assertion `mutex->initialized' failed.
+2020-03-11 08:11:47.195+0000: shutting down, reason=failed
+```
\ No newline at end of file
diff --git a/results/classifier/gemma3:12b/kvm/1869858 b/results/classifier/gemma3:12b/kvm/1869858
new file mode 100644
index 00000000..7704db70
--- /dev/null
+++ b/results/classifier/gemma3:12b/kvm/1869858
@@ -0,0 +1,7 @@
+
+qemu can't start Windows10arm64 19H1(with kvm)
+
+My CPU is arm64 (Cortex-A53) and I want to start Windows 10 arm64 with KVM because it is faster than x86 emulation, but it does not work: the screen is stuck at the UEFI logo. I am already using ramfb, so it has nothing to do with the graphics card. If I drop KVM it does start, but very slowly. With the same UEFI firmware and KVM I can boot Debian arm64 buster, so where is the problem: qemu, KVM, or Microsoft? Others reportedly start it successfully. I don't know what else to try.
+This is the start command (QEMU version is 4.1):
+qemu-system-aarch64 -hda /win10.vhdx -cdrom /win10arm.iso -m 1G -accel kvm -smp 4 -cpu host -pflash efi.img -pflash var.img -device ramfb -device qemu-xhci -device usb-kbd -device usb-mouse -device usb-tablet
+If I replace the above three parameters with "-cpu cortex-a53", "-accel tcg" and "-device VGA", it starts normally. What's the matter?
\ No newline at end of file
diff --git a/results/classifier/gemma3:12b/kvm/1873542 b/results/classifier/gemma3:12b/kvm/1873542
new file mode 100644
index 00000000..8934390e
--- /dev/null
+++ b/results/classifier/gemma3:12b/kvm/1873542
@@ -0,0 +1,14 @@
+
+Windows 98 videocard passthrough - unable to load higher resolution -Desktop, after some games crashes, without whole physical machine reset..
+
+When using games that switch fullscreen resolutions (some old games are 640x480 or 800x600 max), the video card often gets stuck after a crash and the whole Linux machine has to be rebooted to fix it; rebooting the VM is not enough.
+
+ The stuck state is a strange one: after restarting the machine, text mode works fine, but when graphics mode should switch to a higher resolution (loading the Windows 98 desktop) there is only a black screen and a blinking input cursor.
+
+ I reproduced it with multiple video cards and graphics drivers; it happens quite often, and a full Linux reboot always clears it.
Im using right roms for my cards, because otherwise i get often even boot machine twice in one Linux boot session. + + Some there is need for some better card reset. + + Also some videocard reset on Linux level workaround would be nice. + + Simulated on Qemu 2.11 and 4.2 and Linux Mint 19.3, but my guess its whole KVM videocard passthrough problem. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1875012 b/results/classifier/gemma3:12b/kvm/1875012 new file mode 100644 index 00000000..bd4b4a66 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1875012 @@ -0,0 +1,27 @@ + +UC20 running in OVMF triggers qemu emulation error (cloudimage works fine on the same) + +Trying to boot a core20 amd64 image on an amd64 Eoan or Focal host via libvirt leads to: + +KVM internal error. Suberror: 1 +emulation failure +RAX=0000000000000000 RBX=000000003bdcd5c0 RCX=000000003ff1d030 RDX=00000000000019a0 +RSI=00000000000000ff RDI=000000003bd73ee0 RBP=000000003bd73e40 RSP=000000003ff1d1f8 +R8 =000000003df52168 R9 =0000000000000000 R10=ffffffffffffffff R11=000000003bd44c40 +R12=000000003bd76500 R13=000000003bd73e00 R14=0000000000020002 R15=000000003df4b483 +RIP=00000000000b0000 RFL=00210246 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +CS =0038 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +FS =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +GS =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT +TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy +GDT= 000000003fbee698 00000047 +IDT= 000000003f2d8018 00000fff +CR0=80010033 CR2=0000000000000000 CR3=000000003fc01000 CR4=00000668 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d00 +Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <ff> ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1876678 b/results/classifier/gemma3:12b/kvm/1876678 new file mode 100644 index 00000000..e64de868 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1876678 @@ -0,0 +1,128 @@ + +Ubuntu 20.04 KVM / QEMU Failure with nested FreeBSD bhyve + +BUG: + +Starting FreeBSD Layer 2 bhyve Guest within Layer 1 FreeBSD VM Host on Layer 0 Ubuntu 20.04 KVM / QEMU Host result in Layer 1 Guest / Host Pausing with "Emulation Failure" + +TESTING: + +My test scenario is nested virtualisation: +Layer 0 - Ubuntu 20.04 Host +Layer 1 - FreeBSD 12.1 with OVMF + bhyve hypervisor Guest/Host +Layer 2 - FreeBSD 12.1 guest + +Layer 0 Host is: Ubuntu 20.04 LTS KVM / QEMU / libvirt + +<<START QEMU VERSION>> +$ virsh -c qemu:///system version --daemon +Compiled against library: libvirt 6.0.0 +Using library: libvirt 6.0.0 +Using API: QEMU 6.0.0 +Running hypervisor: QEMU 4.2.0 +Running against daemon: 6.0.0 +<<END QEMU VERSION> + +<<START Intel VMX Support & Nesting Enabled>> +$ cat /proc/cpuinfo | grep -c vmx +64 +$ cat /sys/module/kvm_intel/parameters/nested +Y +<<END Intel VMS>> + + + +Layer 1 Guest / Host is: FreeBSD Q35 v4.2 with OVMF: + +Pass Host VMX support to Layer 1 Guest via <cpu mode='host-model> + +<<LIBVIRT CONFIG SNIPPET>> +... +... 
+ <os> + <type arch='x86_64' machine='pc-q35-4.2'>hvm</type> + <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader> + <nvram>/home/USER/swarm.bhyve.freebsd/OVMF_VARS.fd</nvram> + </os> + <features> + <acpi/> + <apic/> + <vmport state='off'/> + </features> + <cpu mode='host-model' check='partial'/> +... +... +<END LIBVIRT CONFIG SNIPPET>> + +Checked that Layer 1 - FreeBSD Quest / Host has VMX feature available: + +<<LAYER 1 - FreeBSD CPU Features>> +# uname -a +FreeBSD swarm.DOMAIN.HERE 12.1-RELEASE FreeBSD 12.1-RELEASE GENERIC amd64 + +# grep Features /var/run/dmesg.boot + Features=0xf83fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,SS> + Features2=0xfffa3223<SSE3,PCLMULQDQ,VMX,SSSE3,FMA,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,TSCDLT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND,HV> + AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM> + AMD Features2=0x121<LAHF,ABM,Prefetch> + Structured Extended Features=0x1c0fbb<FSGSBASE,TSCADJ,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM,RDSEED,ADX,SMAP> + Structured Extended Features2=0x4<UMIP> + Structured Extended Features3=0xac000400<MD_CLEAR,IBPB,STIBP,ARCH_CAP,SSBD> + XSAVE Features=0x1<XSAVEOPT> +<<END LAYER 1 - FreeBSD CPU Features> + +On Layer 1 FreeBSD Guest / Host start up the Layer 2 guest.. + +<<START LAYER 2 GUEST START>> +# ls +FreeBSD-11.2-RELEASE-amd64-bootonly.iso FreeBSD-12.1-RELEASE-amd64-dvd1.iso bee-hd1-01.img +# /usr/sbin/bhyve -c 2 -m 2048 -H -A -s 0:0,hostbridge -s 1:0,lpc -s 2:0,e1000,tap0 -s 3:0,ahci-hd,bee-hd1-01.img -l com1,stdio -s 5:0,ahci-cd,./FreeBSD-12.1-RELEASE-amd64-dvd1.iso bee +<<END LAYER 2 GUEST START>> + +Result is that Layer 1 - FreeBSD Host guest "paused". + +To Layer 1 machines freezes I cannot get any further diagnostics from this machine, so I run tail on libvirt log from Layer 0 - Ubuntu Host + +<<LAYER 0 LOG TAIL>> +char device redirected to /dev/pts/29 (label charserial0) +2020-05-04T06:09:15.310474Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12] +2020-05-04T06:09:15.310531Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13] +2020-05-04T06:09:15.312533Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12] +2020-05-04T06:09:15.312548Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13] +2020-05-04T06:09:15.313828Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12] +2020-05-04T06:09:15.313841Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13] +2020-05-04T06:09:15.315185Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12] +2020-05-04T06:09:15.315201Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13] +KVM internal error. 
Suberror: 1 +emulation failure +EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000000 +ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000 +EIP=00000000 EFL=00000000 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 00000000 00000000 00008000 DPL=0 <hiword> +CS =0000 00000000 00000000 00008000 DPL=0 <hiword> +SS =0000 00000000 00000000 00008000 DPL=0 <hiword> +DS =0000 00000000 00000000 00008000 DPL=0 <hiword> +FS =0000 00000000 00000000 00008000 DPL=0 <hiword> +GS =0000 00000000 00000000 00008000 DPL=0 <hiword> +LDT=0000 00000000 00000000 00008000 DPL=0 <hiword> +TR =0000 00000000 00000000 00008000 DPL=0 <hiword> +GDT= 0000000000000000 00000000 +IDT= 0000000000000000 00000000 +CR0=80050033 CR2=0000000000000000 CR3=0000000000000000 CR4=00372060 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +Code=<??> ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? +2020-05-04T06:35:39.186799Z qemu-system-x86_64: terminating on signal 15 from pid 2155 (/usr/sbin/libvirtd) +2020-05-04 06:35:39.386+0000: shutting down, reason=destroyed +<<END LAYER 0 LOG TAIL>> + + +I am reporting this bug here as result is very similar to that seen with QEMU seabios failure reported here: https://bugs.launchpad.net/qemu/+bug/1866870 + +However in this case my VM Layer 1 VM is using OVMF. + +NOTE 1: I have also tested with Q35 v3.1 and 2.12 and get the same result. +NOTE 2: Due to bug in FreeBSD networking code, I had to compile custom kernel with "netmap driver disabled". This is known bug in FreeBSD that I have reported separately. +NOTE 3: I will cross posted this bug report on FreeBSD bugzilla as well: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=246168 +NOTE 4: Have done extensive testing of Ubuntu 20.04 Nested virtualisation with just Ubuntu hosts and OVMF and the nested virtualisation runs correctly, so problem is specific to using FreeBSD / bhyve guest / host. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1877052 b/results/classifier/gemma3:12b/kvm/1877052 new file mode 100644 index 00000000..93e409ae --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1877052 @@ -0,0 +1,13 @@ + +KVM Win 10 guest pauses after kernel upgrade + + + +Hello! +Unfortunately the bug has apparently reappeared. I have a Windows 10 running in a VM, which after my today's "apt upgrade" goes into pause mode after a few seconds of running time. + +Until yesterday it used to work and I was able to boot the VM. During the kernel update (from 5.4.0-28.33 to 5.4.0-29.34) the VM was active and then went into pause mode. Even after a reboot of my host system the problem still persists: the VM boots for a few seconds and then switches to pause mode. + + +Kind regards, + Andreas \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1877526 b/results/classifier/gemma3:12b/kvm/1877526 new file mode 100644 index 00000000..9297310b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1877526 @@ -0,0 +1,54 @@ + +KVM internal crash + +Hi, +I am new to this. (apologies if I miss something) + +I see the following error when I run an application on my QEMU based VM running ubuntu linux: + +Code=4d 39 c8 7f 64 0f 1f 40 00 4d 8d 40 80 49 81 f8 80 00 00 00 <66> 0f 7f 07 66 0f 7f 47 10 66 0f 7f 47 20 66 0f 7f 47 30 +66 0f 7f 47 40 66 0f 7f 47 50 66 +KVM internal error. 
Suberror: 1 +emulation failure +RAX=00007fffeb85a000 RBX=00000000069ee400 RCX=0000000000000000 RDX=0000000000000000 +RSI=0000000000000000 RDI=00007fffeb85a000 RBP=00007fffffff9570 RSP=00007fffffff9548 +R8 =0000000000000f80 R9 =0000000001000000 R10=0000000000000000 R11=0000003694e83f3a +R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=0000000006b75350 +RIP=0000003694e8443b RFL=00010206 [-----P-] CPL=3 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 ffffffff 00000000 +CS =0033 0000000000000000 ffffffff 00a0fb00 DPL=3 CS64 [-RA] +SS =002b 0000000000000000 ffffffff 00c0f300 DPL=3 DS [-WA] +DS =0000 0000000000000000 ffffffff 00000000 +FS =0000 00007ffff45b5720 ffffffff 00000000 +GS =0000 0000000000000000 ffffffff 00000000 +LDT=0000 0000000000000000 ffffffff 00000000 +TR =0040 ffff88047fd13140 00002087 00008b00 DPL=0 TSS64-busy +GDT= ffff88047fd04000 0000007f +IDT= ffffffffff57c000 00000fff +CR0=80050033 CR2=00007ffff7ff4000 CR3=000000046cb38000 CR4=000006e0 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 + +This occurs with qemu-kvm version(host m/c has RHEL 6.6) : +Name : qemu-kvm +Arch : x86_64 +Epoch : 2 +Version : 0.12.1.2 +Release : 2.506.el6_10.7 + +I have another m/c with RHEL 7.5, and the same test case passes with the 1.5.3 version. +yum info qemu-kvm +Name : qemu-kvm +Arch : x86_64 +Epoch : 10 +Version : 1.5.3 + + +How do I investigate this? +I would need to patch up the qemu-kvm on the host to get this fixed, I think. + +Please let me know if I need to provide more info, (and what?) + +Regards, +Prashant \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1879425 b/results/classifier/gemma3:12b/kvm/1879425 new file mode 100644 index 00000000..01ae23f1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1879425 @@ -0,0 +1,34 @@ + +The thread of "CPU 0 /KVM" keeping 99.9%CPU + +Hi Expert: + +The VM is hung here after (2, or 3, or 5 and the longest time is 10 hours) by qemu-kvm. 
+Notes: +for VM: + OS: RHEL 7.6 + CPU: 1 + MEM:4G +For qemu-kvm: + 1) version: + /usr/libexec/qemu-kvm -version + QEMU emulator version 2.10.0(qemu-kvm-ev-2.10.0-21.el7_5.4.1) + 2) once the issue is occurred, the CPU of "CPU0 /KVM" is more than 99% by com "top -p VM_pro_ID" + PID UDER PR NI RES S % CPU %MEM TIME+ COMMAND +872067 qemu 20 0 1.6g R 99.9 0.6 37:08.87 CPU 0/KVM + 3) use "pstack 493307" and below is function trace +Thread 1 (Thread 0x7f2572e73040 (LWP 872067)): +#0 0x00007f256cad8fcf in ppoll () from /lib64/libc.so.6 +#1 0x000055ff34bdf4a9 in qemu_poll_ns () +#2 0x000055ff34be02a8 in main_loop_wait () +#3 0x000055ff348bfb1a in main () + 4) use strace "strace -tt -ff -p 872067 -o cfx" and below log keep printing +21:24:02.977833 ppoll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN}, {fd=8, events=POLLIN}, {fd=9, events=POLLIN}, {fd=80, events=POLLIN}, {fd=82, events=POLLIN}, {fd=84, events=POLLIN}, {fd=115, events=POLLIN}, {fd=121, events=POLLIN}], 9, {0, 0}, NULL, 8) = 0 (Timeout) +21:24:02.977918 ppoll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN}, {fd=8, events=POLLIN}, {fd=9, events=POLLIN}, {fd=80, events=POLLIN}, {fd=82, events=POLLIN}, {fd=84, events=POLLIN}, {fd=115, events=POLLIN}, {fd=121, events=POLLIN}], 9, {0, 911447}, NULL, 8) = 0 (Timeout) +21:24:02.978945 ppoll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN}, {fd=8, events=POLLIN}, {fd=9, events=POLLIN}, {fd=80, events=POLLIN}, {fd=82, events=POLLIN}, {fd=84, events=POLLIN}, {fd=115, events=POLLIN}, {fd=121, events=POLLIN}], 9, {0, 0}, NULL, 8) = 0 (Timeout) +Therefore, I think the thread "CPU 0/KVM" is in tight loop. + 5) use reset can recover this issue. however, it will reoccurred again. +Current work around is increase one CPU for this VM, then issue is gone. + +thanks +Cliff \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1879646 b/results/classifier/gemma3:12b/kvm/1879646 new file mode 100644 index 00000000..db8b0c11 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1879646 @@ -0,0 +1,31 @@ + +[Feature request] x86: dump MSR features in human form + +QEMU might fail because host/guest cpu features are not properly configured: + +qemu-system-x86_64: error: failed to set MSR 0x48f to 0x7fefff00036dfb +qemu-system-x86_64: /root/qemu-master/target/i386/kvm.c:2695: +kvm_buf_set_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed. + +To ease debugging, it the MSR features bit could be dumped. + +Example in this thread: + +https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg05593.html + + The high 32 bits are 0111 1111 1110 1111 1111 1111. + + The low 32 bits are 0000 0011 0110 1101 1111 1011. + + The features that are set are the xor, so 0111 1100 1000 0010 0000 0100: + + - bit 2, vmx-exit-nosave-debugctl + - bit 9, host address space size, is handled automatically by QEMU + - bit 15, vmx-exit-ack-intr + - bit 17, vmx-exit-save-pat + - bit 18, vmx-exit-load-pat + - bit 19, vmx-exit-save-efer + - bit 20, vmx-exit-load-efer + - bit 21, vmx-exit-save-preemption-timer + +This output ^^^ is easier to digest. 
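A side note (not from the original report): until QEMU prints the decoded bits itself, the decoding described above can be reproduced directly from the value in the error message. The sketch below only prints bit positions, following the report's convention that the interesting bits are the xor of the high and low 32-bit halves; mapping each bit to its vmx-exit-* name still has to be done by hand against the Intel SDM tables.

```
# sketch: decode a "failed to set MSR" value into set bit positions
decode_msr () {
    v=$(( $1 ))
    lo=$(( v & 0xffffffff ))
    hi=$(( (v >> 32) & 0xffffffff ))
    bits=$(( hi ^ lo ))
    for b in $(seq 0 31); do
        if (( (bits >> b) & 1 )); then echo "bit $b"; fi
    done
}
decode_msr 0x7fefff00036dfb   # value from the MSR 0x48f error above
```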
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/188 b/results/classifier/gemma3:12b/kvm/188 new file mode 100644 index 00000000..318936dc --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/188 @@ -0,0 +1,2 @@ + +savevm with hax saves wrong register state diff --git a/results/classifier/gemma3:12b/kvm/1880507 b/results/classifier/gemma3:12b/kvm/1880507 new file mode 100644 index 00000000..bfbcb5ae --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1880507 @@ -0,0 +1,4 @@ + +VMM from Ubuntu 20.04 does not show the memory consumption + +KVM host system: Ubuntu 18.04 and 20.04, guest machines: Windows and Ubuntu. Management through Ubuntu 20.04, vmm does not show RAM consumption for Windows guest systems (Win7, Win2008R2), for Ubuntu values are shown. The error is not observed in Ubuntu 18.04/vmm. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1881231 b/results/classifier/gemma3:12b/kvm/1881231 new file mode 100644 index 00000000..ba97c8a9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1881231 @@ -0,0 +1,32 @@ + +colo: Can not recover colo after svm failover twice + +Hi Expert, +x-blockdev-change met some error, during testing colo + +Host os: +CentOS Linux release 7.6.1810 (Core) + +Reproduce steps: +1. create colo vm following https://github.com/qemu/qemu/blob/master/docs/COLO-FT.txt +2. kill secondary vm and remove the nbd child from the quorum to wait for recover + type those commands on primary vm console: + { 'execute': 'x-blockdev-change', 'arguments': {'parent': 'colo-disk0', 'child': 'children.1'}} + { 'execute': 'human-monitor-command','arguments': {'command-line': 'drive_del replication0'}} + { 'execute': 'x-colo-lost-heartbeat'} +3. recover colo +4. kill secondary vm again after recover colo and type same commands as step 2: + { 'execute': 'x-blockdev-change', 'arguments': {'parent': 'colo-disk0', 'child': 'children.1'}} + { 'execute': 'human-monitor-command','arguments': {'command-line': 'drive_del replication0'}} + { 'execute': 'x-colo-lost-heartbeat'} + but the first command got error + { 'execute': 'x-blockdev-change', 'arguments': {'parent': 'colo-disk0', 'child': 'children.1'}} +{"error": {"class": "GenericError", "desc": "Node 'colo-disk0' does not have child 'children.1'"}} + +according to https://www.qemu.org/docs/master/qemu-qmp-ref.html +Command: x-blockdev-change +Dynamically reconfigure the block driver state graph. It can be used to add, remove, insert or replace a graph node. Currently only the Quorum driver implements this feature to add or remove its child. This is useful to fix a broken quorum child. + +It seems x-blockdev-change not worked as expected. + +Thanks. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1884095 b/results/classifier/gemma3:12b/kvm/1884095 new file mode 100644 index 00000000..4e2030b8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1884095 @@ -0,0 +1,24 @@ + +QEMU not sufficiently focused on qEMUlation, with resulting holes in TCG emulation coverage + +It seems that QEMU has stopped emphasizing the EMU part of the name, and is too much focused on virtualization. + +My interest is at running legacy operating systems, and as such, they must run on foreign CPU platforms. m68 on intel, intel on ARM, etc. +Time doesn't stand still, and reliance on KVM and similar x86-on-x86 tricks, which allow the delegation of certain CPU features to the host CPU is going to not work going forward. 
+ +If the rumored transition of Apple to ARM is going to take place, people will want to e.g. emulate for testing or legacy purposes a variety of operating systems, incl. NeXTSTEP, Windows, earlier versions of MacOS on ARM Macs. + +Testing that scenario, i.e. macOS on an ARM board with the lowest possible CPU capable of running modern macOS, results in these problems (and of course utter failure achieving the goal): + +qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.01H:ECX.fma [bit 12] +qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.01H:ECX.avx [bit 28] +qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.07H:EBX.avx2 [bit 5] +qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.80000007H:EDX.invtsc [bit 8] +qemu-system-x86_64: warning: TCG doesn't support requested feature: CPUID.0DH:EAX.xsavec [bit 1] + +And this is emulating a lowly Penryn CPU with the required CPU flags for macOS: +-cpu Penryn,vendor=GenuineIntel,+sse3,+sse4.2,+aes,+xsave,+avx,+xsaveopt,+xsavec,+xgetbv1,+avx2,+bmi2,+smep,+bmi1,+fma,+movbe,+invtsc + +Attempting to emulate a more feature laden intel CPU results in even more issues. + +I would propose that no CPU should be considered supported unless it can be fully handled by TCG on a non-native host. KVM, native-on-native etc. are nice to have, but peripheral to qEMUlation when it boils down to it. At the very least, there should be a CLEAR distinction which CPUs require KVM to be used, and which can be fully emulated. It should not require wasting an afternoon to figure out that an emulation attempt is futile because TCG lacks essential functionality. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1885175 b/results/classifier/gemma3:12b/kvm/1885175 new file mode 100644 index 00000000..dc5eb2b6 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1885175 @@ -0,0 +1,53 @@ + +memory.c range assertion hit at full invalidating + +I am able to hit this assertion when a Red Hat 7 guest virtio_net device raises an "Invalidation" of all the TLB entries. This happens in the guest's startup if 'intel_iommu=on' argument is passed to the guest kernel and right IOMMU/ATS devices are declared in qemu's command line. 
+ +Command line: /home/qemu/x86_64-softmmu/qemu-system-x86_64 -name guest=rhel7-test,debug-threads=on -machine pc-q35-5.1,accel=kvm,usb=off,dump-guest-core=off,kernel_irqchip=split -cpu Broadwell,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,umip=on,arch-capabilities=on,xsaveopt=on,pdpe1gb=on,abm=on,skip-l1dfl-vmentry=on,rtm=on,hle=on -m 8096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid d022ecbf-679e-4755-87ce-eb87fc5bbc5d -display none -no-user-config -nodefaults -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot strict=on -device intel-iommu,intremap=on,device-iotlb=on -device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 -device pcie-root-port,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1 -device pcie-root-port,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2 -device pcie-root-port,port=0xb,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x3 -device pcie-root-port,port=0xc,chassis=5,id=pci.5,bus=pcie.0,addr=0x1.0x4 -device pcie-root-port,port=0xd,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x5 -device pcie-root-port,port=0xe,chassis=7,id=pci.7,bus=pcie.0,addr=0x1.0x6 -device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 -device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 -drive file=/home/virtio-test2.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,id=hostnet0,vhost=on,vhostforce=on -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:1d:f2,bus=pci.1,addr=0x0,iommu_platform=on,ats=on -device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 -object rng-random,id=objrng0,filename=/dev/urandom -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.6,addr=0x0 -s -msg timestamp=on + +Full backtrace: + +#0 0x00007ffff521370f in raise () at /lib64/libc.so.6 +#1 0x00007ffff51fdb25 in abort () at /lib64/libc.so.6 +#2 0x00007ffff51fd9f9 in _nl_load_domain.cold.0 () at /lib64/libc.so.6 +#3 0x00007ffff520bcc6 in .annobin_assert.c_end () at /lib64/libc.so.6 +#4 0x0000555555888171 in memory_region_notify_one (notifier=0x7ffde05dfde8, entry=0x7ffde5dfe200) at /home/qemu/memory.c:1918 +#5 0x0000555555888247 in memory_region_notify_iommu (iommu_mr=0x555556f6c0b0, iommu_idx=0, entry=...) at /home/qemu/memory.c:1941 +#6 0x0000555555951c8d in vtd_process_device_iotlb_desc (s=0x555557609000, inv_desc=0x7ffde5dfe2d0) + at /home/qemu/hw/i386/intel_iommu.c:2468 +#7 0x0000555555951e6a in vtd_process_inv_desc (s=0x555557609000) at /home/qemu/hw/i386/intel_iommu.c:2531 +#8 0x0000555555951fa5 in vtd_fetch_inv_desc (s=0x555557609000) at /home/qemu/hw/i386/intel_iommu.c:2563 +#9 0x00005555559520e5 in vtd_handle_iqt_write (s=0x555557609000) at /home/qemu/hw/i386/intel_iommu.c:2590 +#10 0x0000555555952b45 in vtd_mem_write (opaque=0x555557609000, addr=136, val=2688, size=4) at /home/qemu/hw/i386/intel_iommu.c:2837 +#11 0x0000555555883e17 in memory_region_write_accessor + (mr=0x555557609330, addr=136, value=0x7ffde5dfe478, size=4, shift=0, mask=4294967295, attrs=...) at /home/qemu/memory.c:483 +#12 0x000055555588401d in access_with_adjusted_size + (addr=136, value=0x7ffde5dfe478, size=4, access_size_min=4, access_size_max=8, access_fn= + 0x555555883d38 <memory_region_write_accessor>, mr=0x555557609330, attrs=...) 
at /home/qemu/memory.c:544 +#13 0x0000555555886f37 in memory_region_dispatch_write (mr=0x555557609330, addr=136, data=2688, op=MO_32, attrs=...) + at /home/qemu/memory.c:1476 +#14 0x0000555555827a03 in flatview_write_continue + (fv=0x7ffde00935d0, addr=4275634312, attrs=..., ptr=0x7ffff7ff0028, len=4, addr1=136, l=4, mr=0x555557609330) at /home/qemu/exec.c:3146 +#15 0x0000555555827b48 in flatview_write (fv=0x7ffde00935d0, addr=4275634312, attrs=..., buf=0x7ffff7ff0028, len=4) + at /home/qemu/exec.c:3186 +#16 0x0000555555827e9d in address_space_write + (as=0x5555567ca640 <address_space_memory>, addr=4275634312, attrs=..., buf=0x7ffff7ff0028, len=4) at /home/qemu/exec.c:3277 +#17 0x0000555555827f0a in address_space_rw + (as=0x5555567ca640 <address_space_memory>, addr=4275634312, attrs=..., buf=0x7ffff7ff0028, len=4, is_write=true) + at /home/qemu/exec.c:3287 +#18 0x000055555589b633 in kvm_cpu_exec (cpu=0x555556b65640) at /home/qemu/accel/kvm/kvm-all.c:2511 +#19 0x0000555555876ba8 in qemu_kvm_cpu_thread_fn (arg=0x555556b65640) at /home/qemu/cpus.c:1284 +#20 0x0000555555dafff1 in qemu_thread_start (args=0x555556b8c3b0) at util/qemu-thread-posix.c:521 +#21 0x00007ffff55a62de in start_thread () at /lib64/libpthread.so.0 +#22 0x00007ffff52d7e83 in clone () at /lib64/libc.so.6 + +-- + +If we examinate *entry in frame 4 of backtrace: +*entry = {target_as = 0x555556f6c050, iova = 0x0, translated_addr = 0x0, addr_mask = 0xffffffffffffffff, perm = 0x0} + +Which (I think) tries to invalidate all the TLB registers of the device. + +Just deleting that assert is enough for the VM to start and communicate using IOMMU, but maybe a better alternative is possible. We could move it to the caller functions in other cases than IOMMU invalidation, or make it conditional only if not invalidating. + +Guest kernel version: kernel-3.10.0-1136.el7 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1886318 b/results/classifier/gemma3:12b/kvm/1886318 new file mode 100644 index 00000000..e3226ddb --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1886318 @@ -0,0 +1,40 @@ + +Qemu after v5.0.0 breaks macos guests + +The Debian Sid 5.0-6 qemu-kvm package can no longer get further than the Clover bootloader whereas 5.0-6 and earlier worked fine. + +So I built qemu master from github and it has the same problem, whereas git tag v5.0.0 (or 4.2.1) does not, so something between v5.0.0 release and the last few days has caused the problem. 
+ +Here's my qemu script, pretty standard macOS-Simple-KVM setup on a Xeon host: + +qemu-system-x86_64 \ + -enable-kvm \ + -m 4G \ + -machine q35,accel=kvm \ + -smp 4,sockets=1,cores=2,threads=2 \ + -cpu +Penryn,vendor=GenuineIntel,kvm=on,+sse3,+sse4.2,+aes,+xsave,+avx,+xsaveopt,+xsavec,+xgetbv1,+avx2,+bmi2,+smep,+bmi1,+fma,+movbe,+invtsc +\ + -device +isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" +\ + -smbios type=2 \ + -drive if=pflash,format=raw,readonly,file="/tmp/OVMF_CODE.fd" \ + -drive if=pflash,format=raw,file="/tmp/macos_catalina_VARS.fd" \ + -vga qxl \ + -device ich9-ahci,id=sata \ + -drive id=ESP,if=none,format=raw,file=/tmp/ESP.img \ + -device ide-hd,bus=sata.2,drive=ESP \ + -drive id=InstallMedia,format=raw,if=none,file=/tmp/BaseSystem.img \ + -device ide-hd,bus=sata.3,drive=InstallMedia \ + -drive id=SystemDisk,if=none,format=raw,file=/tmp/macos_catalina.img \ + -device ide-hd,bus=sata.4,drive=SystemDisk \ + -usb -device usb-kbd -device usb-mouse + +Perhaps something has changed in Penryn support recently, as that's required for macos? + +See also https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964247 + +Also on a related note, kernel 5.6/5.7 (on Debian) hard crashes the host when I try GPU passthrough on macos, whereas Ubuntu20/Win10 work fine - as does 5.5 kernel. + +See also https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=961676 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1888165 b/results/classifier/gemma3:12b/kvm/1888165 new file mode 100644 index 00000000..d8759ff6 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1888165 @@ -0,0 +1,15 @@ + +loopz/loopnz clearing previous instruction's modified flags on cx -> 0 + +If you run QBasic in qemu, printing a double-type single-digit number will print an extra decimal point (e.g. PRINT CDBL(3) prints "3.") that does not appear when running on a real CPU (or on qemu with -enable-kvm). I tracked this down to the state of the status flags after a loopnz instruction. + +After executing a sequence like this in qemu: + + mov bx,1 + mov cx,1 + dec bx ; sets Z bit in flags +A: loopnz A ; should not modify flags + +Z is incorrectly clear afterwards. loopz does the same thing (but not plain loop). Interestingly, inserting pushf+popf after dec results in Z set, so loopnz/loopz does not always clear Z itself but is rather interfering with the previous instruction's flag setting. + +Version 5.1.0-rc0, x86-64 host. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1888971 b/results/classifier/gemma3:12b/kvm/1888971 new file mode 100644 index 00000000..7cc90f00 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1888971 @@ -0,0 +1,14 @@ + +SMI trigger causes hang with multiple cores + +When using qemu , SMI trigger causes hand/reboot under following conditions: + +1. No KVM but there are more than 1 threads (-smp > 1) +2. When using KVM. + +Info: +qemu-system-x86_64 --version +QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.29) +Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers + +SMI trigger was done by writing 0x00 in IO port 0xB2. 
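For reference, the port write described above ("writing 0x00 in IO port 0xB2") can be reproduced from inside a Linux guest without a kernel module, for example via /dev/port. This is only a sketch of one possible trigger; it must be run as root inside the guest and assumes the guest kernel exposes /dev/port:

```python
# Sketch: trigger an SMI by writing 0x00 to I/O port 0xB2 (the APM control
# port), as described in the report. Run as root inside the guest; requires
# a guest kernel built with /dev/port support.
import os

fd = os.open("/dev/port", os.O_WRONLY)
os.lseek(fd, 0xB2, os.SEEK_SET)   # offset into /dev/port == I/O port number
os.write(fd, b"\x00")
os.close(fd)
```

With the configurations listed above (TCG with -smp > 1, or KVM), this write is reportedly what leads to the hang/reboot.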
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1890290 b/results/classifier/gemma3:12b/kvm/1890290 new file mode 100644 index 00000000..28533a79 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1890290 @@ -0,0 +1,71 @@ + +PowerPC L2(nested virt) kvm guest fails to boot with ic-mode=dual,kernel-irqchip=on - `KVM is too old to support ic-mode=dual,kernel-irqchip=on` + +Env: +HW: Power 9 DD2.3 +Host L0: 5.8.0-rc5-g8ba4ffcd8 +Qemu: 5.0.50 (v5.0.0-533-gdebe78ce14) +Libvirt: 6.4.0 +L1: 5.8.0-rc5-ge9919e11e +qemu_version': '5.0.50 (v5.1.0-rc2-dirty) +libvirt_version': '6.4.0' +L2: 5.8.0-rc7-g6ba1b005f + + +1. boot a L2 KVM guest with `ic-mode=dual,kernel-irqchip=on` + +/usr/bin/virt-install --connect=qemu:///system --hvm --accelerate --name 'vm1' --machine pseries --memory=8192 --vcpu=8,maxvcpus=8,sockets=1,cores=2,t +hreads=4 --import --nographics --serial pty --memballoon model=virtio --disk path=/home/tests/data/avocado-vt/images/f31-ppc64le.qcow2,bus=virtio,size=10,format=qcow2 --network +=bridge=virbr0,model=virtio,mac=52:54:00:e6:fe:f6 --mac=52:54:00:e6:fe:f6 --boot emulator=/usr/share/avocado-plugins-vt/bin/qemu,kernel=/tmp/linux/vmlinux,kernel_args="root=/de +v/vda2 rw console=tty0 console=ttyS0,115200 init=/sbin/init initcall_debug selinux=0" --noautoconsole --qemu-commandline=" -M pseries,ic-mode=dual,kernel-irqchip=on" + + +ERROR internal error: process exited while connecting to monitor: 2020-08-04T11:12:53.304482Z qemu: KVM is too old to support ic-mode=dual,kernel-irqchip=on + + + + +Qemu Log: +``` +/usr/share/avocado-plugins-vt/bin/qemu \ +-name guest=vm1,debug-threads=on \ +-S \ +-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-vm1/master-key.aes \ +-machine pseries-5.1,accel=kvm,usb=off,dump-guest-core=off \ +-cpu POWER9 \ +-m 8192 \ +-overcommit mem-lock=off \ +-smp 8,sockets=1,dies=1,cores=2,threads=4 \ +-uuid 20a3351b-2776-4e75-9059-c070fe3dd44b \ +-display none \ +-no-user-config \ +-nodefaults \ +-chardev socket,id=charmonitor,fd=34,server,nowait \ +-mon chardev=charmonitor,id=monitor,mode=control \ +-rtc base=utc \ +-no-shutdown \ +-boot strict=on \ +-kernel /tmp/linux/vmlinux \ +-append 'root=/dev/vda2 rw console=tty0 console=ttyS0,115200 init=/sbin/init initcall_debug selinux=0' \ +-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x2 \ +-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 \ +-blockdev '{"driver":"file","filename":"/home/tests/data/avocado-vt/images/f31-ppc64le.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \ +-blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \ +-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-1-format,id=virtio-disk0,bootindex=1 \ +-netdev tap,fd=37,id=hostnet0,vhost=on,vhostfd=38 \ +-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:e6:fe:f6,bus=pci.0,addr=0x1 \ +-chardev pty,id=charserial0 \ +-device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 \ +-chardev socket,id=charchannel0,fd=39,server,nowait \ +-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ +-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \ +-M pseries,ic-mode=dual,kernel-irqchip=on \ +-msg timestamp=on +2020-08-04 11:12:53.169+0000: Domain id=5 is tainted: custom-argv +2020-08-04 11:12:53.179+0000: 11120: info : libvirt version: 6.4.0, package: 1.fc31 (Unknown, 2020-06-02-05:09:40, 
ltc-wspoon4.aus.stglabs.ibm.com) +2020-08-04 11:12:53.179+0000: 11120: info : hostname: atest-guest +2020-08-04 11:12:53.179+0000: 11120: info : virObjectUnref:347 : OBJECT_UNREF: obj=0x7fff0c117c40 +char device redirected to /dev/pts/0 (label charserial0) +2020-08-04T11:12:53.304482Z qemu: KVM is too old to support ic-mode=dual,kernel-irqchip=on +2020-08-04 11:12:53.694+0000: shutting down, reason=failed +``` \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1894836 b/results/classifier/gemma3:12b/kvm/1894836 new file mode 100644 index 00000000..c253ae40 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1894836 @@ -0,0 +1,47 @@ + +kernel panic using hvf with CPU passthrough + +Host Details +QEMU 5.1 (Homebrew) +macOS 10.15.6 Catalina +Late 2014 model +i5-4690 @ 3.5 GHz +8 GB RAM + +Guest Details +Ubuntu Desktop 20.04.1 Installer ISO + +Problem +Whenever I boot with "-accel hvf -cpu host", the Ubuntu desktop installer will immediately crash with a kernel panic after the initial splash screen. +See the attached picture of the kernel panic for more details. + +Steps to recreate +From https://www.jwillikers.com/posts/virtualize_ubuntu_desktop_on_macos_with_qemu/ + +1. Install QEMU with Homebrew. +$ brew install qemu + +2. Create a qcow2 disk image to which to install. +$ qemu-img create -f qcow2 ubuntu2004.qcow2 60G + +3. Download the ISO. +$ curl -L -o ubuntu-20.04.1-desktop-amd64.iso https://releases.ubuntu.com/20.04/ubuntu-20.04.1-desktop-amd64.iso + +4. Run the installer in QEMU. +$ qemu-system-x86_64 \ + -accel hvf \ + -cpu host \ + -smp 2 \ + -m 4G \ + -usb \ + -device usb-tablet \ + -vga virtio \ + -display default,show-cursor=on \ + -device virtio-net,netdev=vmnic -netdev user,id=vmnic \ + -audiodev coreaudio,id=snd0 \ + -device ich9-intel-hda -device hda-output,audiodev=snd0 \ + -cdrom ubuntu-20.04.1-desktop-amd64.iso \ + -drive file=ubuntu2004.qcow2,if=virtio + +Workaround +Emulating the CPU with "-cpu qemu64" does not result in a kernel panic. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1895471 b/results/classifier/gemma3:12b/kvm/1895471 new file mode 100644 index 00000000..bc465b67 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1895471 @@ -0,0 +1,24 @@ + +compilation error with clang in util/async.c + +configured with ` CC=clang CXX=clang++ ../configure --target-list=x86_64-softmmu --enable-kvm --enable-curl --enable-debug --enable-jemalloc --enable-fuzzing --enable-sdl` and after make I get the following error related to c11 atomics. I'm using clang because I'm experimenting with fuzzer + +[glitz@archlinux /code/qemu/build]$ ninja -j5 +[479/2290] Compiling C object libqemuutil.a.p/util_async.c.o +FAILED: libqemuutil.a.p/util_async.c.o +clang -Ilibqemuutil.a.p -I. -I.. 
-Iqapi -Itrace -Iui -Iui/shader -I/usr/include/p11-kit-1 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/gio-unix-2.0 -Ilinux-headers -Xclang -fcolor-diagnostics -pipe -Wall -Winvalid-pch -Werror -std=gnu99 -g -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wno-initializer-overrides -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-string-plus-int -Wno-typedef-redefinition -Wno-tautological-type-limit-compare -fstack-protector-strong -fsanitize=fuzzer-no-link -iquote /code/qemu/tcg/i386 -isystem /code/qemu/linux-headers -iquote . -iquote /code/qemu -iquote /code/qemu/accel/tcg -iquote /code/qemu/include -iquote /code/qemu/disas/libvixl -pthread -fPIC -MD -MQ libqemuutil.a.p/util_async.c.o -MF libqemuutil.a.p/util_async.c.o.d -o libqemuutil.a.p/util_async.c.o -c ../util/async.c +../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid) + old_flags = atomic_fetch_or(&bh->flags, BH_PENDING | new_flags); + ^ ~~~~~~~~~~ +/usr/lib/clang/10.0.1/include/stdatomic.h:138:42: note: expanded from macro 'atomic_fetch_or' +#define atomic_fetch_or(object, operand) __c11_atomic_fetch_or(object, operand, __ATOMIC_SEQ_CST) + ^ ~~~~~~ +../util/async.c:105:14: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid) + *flags = atomic_fetch_and(&bh->flags, + ^ ~~~~~~~~~~ +/usr/lib/clang/10.0.1/include/stdatomic.h:144:43: note: expanded from macro 'atomic_fetch_and' +#define atomic_fetch_and(object, operand) __c11_atomic_fetch_and(object, operand, __ATOMIC_SEQ_CST) + ^ ~~~~~~ +2 errors generated. +[483/2290] Compiling C object libqemuutil.a.p/util_qemu-error.c.o +ninja: build stopped: subcommand failed. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1896263 b/results/classifier/gemma3:12b/kvm/1896263 new file mode 100644 index 00000000..ec88c00e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1896263 @@ -0,0 +1,112 @@ + +The bios-tables-test test causes QEMU to crash (Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed) on AMD processors + +QEMU release version: Any recent version (5.0.0, 5.1.0, git master) +Host CPU: AMD Ryzen 3900X + +The following backtrace is from commit e883b492c221241d28aaa322c61536436090538a. + +QTEST_QEMU_BINARY=./build/qemu-system-x86_64 gdb ./build/tests/qtest/bios-tables-test +GNU gdb (GDB) 9.2 +Copyright (C) 2020 Free Software Foundation, Inc. +License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> +This is free software: you are free to change and redistribute it. +There is NO WARRANTY, to the extent permitted by law. +Type "show copying" and "show warranty" for details. +This GDB was configured as "x86_64-unknown-linux-gnu". +Type "show configuration" for configuration details. +For bug reporting instructions, please see: +<http://www.gnu.org/software/gdb/bugs/>. +Find the GDB manual and other documentation resources online at: + <http://www.gnu.org/software/gdb/documentation/>. + +For help, type "help". +Type "apropos word" to search for commands related to "word"... +Reading symbols from ./build/tests/qtest/bios-tables-test... 
+(gdb) run +Starting program: /home/mcournoyer/src/qemu/build/tests/qtest/bios-tables-test +[Thread debugging using libthread_db enabled] +Using host libthread_db library "/gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libthread_db.so.1". +[New Thread 0x7ffff7af6700 (LWP 18955)] +# random seed: R02S5106b7afa2fd84a0353605795c04ab7d +1..19 +# Start of x86_64 tests +# Start of acpi tests +# starting QEMU: exec ./build/qemu-system-x86_64 -qtest unix:/tmp/qtest-18951.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-18951.qmp,id=char0 -mon chardev=char0,mode=control -display none -machine pc,kernel-irqchip=off -accel kvm -accel tcg -net none -display none -drive id=hd0,if=none,file=tests/acpi-test-disk-R3kbyc,format=raw -device ide-hd,drive=hd0 -accel qtest +[Attaching after Thread 0x7ffff7af7900 (LWP 18951) fork to child process 18956] +[New inferior 2 (process 18956)] +[Detaching after fork from parent process 18951] +[Inferior 1 (process 18951) detached] +[Thread debugging using libthread_db enabled] +Using host libthread_db library "/gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libthread_db.so.1". +[New Thread 0x7ffff7af6700 (LWP 18957)] +[Thread 0x7ffff7af6700 (LWP 18957) exited] +process 18956 is executing new program: /gnu/store/87kif0bpf0anwbsaw0jvg8fyciw4sz67-bash-5.0.16/bin/bash +process 18956 is executing new program: /home/mcournoyer/src/qemu/build/qemu-system-x86_64 +[Thread debugging using libthread_db enabled] +Using host libthread_db library "/gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libthread_db.so.1". +[New Thread 0x7ffff48ed700 (LWP 18958)] +[New Thread 0x7fffeffff700 (LWP 18960)] +[New Thread 0x7fffef61c700 (LWP 18961)] +[New Thread 0x7fffed5ff700 (LWP 18962)] +qemu-system-x86_64: error: failed to set MSR 0x4b564d02 to 0x0 +qemu-system-x86_64: ../target/i386/kvm.c:2714: kvm_buf_set_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed. + +Thread 2.5 "qemu-system-x86" received signal SIGABRT, Aborted. 
+[Switching to Thread 0x7fffef61c700 (LWP 18961)] +0x00007ffff65dbaba in raise () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 +(gdb) taas bt + +Thread 2.6 (Thread 0x7fffed5ff700 (LWP 18962)): +#0 0x00007ffff6770c4d in pthread_cond_timedwait@@GLIBC_2.3.2 () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libpthread.so.0 +#1 0x0000555555cc8a0e in qemu_sem_timedwait (sem=sem@entry=0x55555662f758, ms=ms@entry=10000) at ../util/qemu-thread-posix.c:282 +#2 0x0000555555cd91b5 in worker_thread (opaque=opaque@entry=0x55555662f6e0) at ../util/thread-pool.c:91 +#3 0x0000555555cc7e86 in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521 +#4 0x00007ffff6769f64 in start_thread () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libpthread.so.0 +#5 0x00007ffff669b9af in clone () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 + +Thread 2.5 (Thread 0x7fffef61c700 (LWP 18961)): +#0 0x00007ffff65dbaba in raise () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 +#1 0x00007ffff65dcbf5 in abort () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 +#2 0x00007ffff65d470a in __assert_fail_base () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 +#3 0x00007ffff65d4782 in __assert_fail () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 +#4 0x0000555555a3e979 in kvm_buf_set_msrs (cpu=0x555556688a20) at ../target/i386/kvm.c:2714 +#5 0x0000555555a438cc in kvm_put_msrs (level=3, cpu=0x555556688a20) at ../target/i386/kvm.c:3005 +#6 kvm_arch_put_registers (cpu=cpu@entry=0x555556688a20, level=level@entry=3) at ../target/i386/kvm.c:3989 +#7 0x0000555555af7b0e in do_kvm_cpu_synchronize_post_init (cpu=0x555556688a20, arg=...) 
at ../accel/kvm/kvm-all.c:2355 +#8 0x00005555558ef8e2 in process_queued_cpu_work (cpu=cpu@entry=0x555556688a20) at ../cpus-common.c:343 +#9 0x0000555555b6ac25 in qemu_wait_io_event_common (cpu=cpu@entry=0x555556688a20) at ../softmmu/cpus.c:1117 +#10 0x0000555555b6ac84 in qemu_wait_io_event (cpu=cpu@entry=0x555556688a20) at ../softmmu/cpus.c:1157 +#11 0x0000555555b6aec8 in qemu_kvm_cpu_thread_fn (arg=arg@entry=0x555556688a20) at ../softmmu/cpus.c:1193 +#12 0x0000555555cc7e86 in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521 +#13 0x00007ffff6769f64 in start_thread () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libpthread.so.0 +#14 0x00007ffff669b9af in clone () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 + +Thread 2.4 (Thread 0x7fffeffff700 (LWP 18960)): +#0 0x00007ffff66919d9 in poll () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 +#1 0x00007ffff78f0051 in g_main_context_iterate.isra () from /gnu/store/n1mx1dp0hcrzm1akf8qdqa9gmybzazs2-profile/lib/libglib-2.0.so.0 +#2 0x00007ffff78f0392 in g_main_loop_run () from /gnu/store/n1mx1dp0hcrzm1akf8qdqa9gmybzazs2-profile/lib/libglib-2.0.so.0 +#3 0x000055555584b5a1 in iothread_run (opaque=opaque@entry=0x555556557720) at ../iothread.c:80 +#4 0x0000555555cc7e86 in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521 +#5 0x00007ffff6769f64 in start_thread () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libpthread.so.0 +#6 0x00007ffff669b9af in clone () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 + +Thread 2.3 (Thread 0x7ffff48ed700 (LWP 18958)): +#0 0x00007ffff66657a1 in clock_nanosleep@GLIBC_2.2.5 () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 +#1 0x00007ffff666ac03 in nanosleep () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 +#2 0x00007ffff7919cdf in g_usleep () from /gnu/store/n1mx1dp0hcrzm1akf8qdqa9gmybzazs2-profile/lib/libglib-2.0.so.0 +#3 0x0000555555cb3b04 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:250 +#4 0x0000555555cc7e86 in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:521 +#5 0x00007ffff6769f64 in start_thread () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libpthread.so.0 +#6 0x00007ffff669b9af in clone () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libc.so.6 + +Thread 2.1 (Thread 0x7ffff48f2c80 (LWP 18956)): +#0 0x00007ffff677094c in pthread_cond_wait@@GLIBC_2.3.2 () from /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libpthread.so.0 +#1 0x0000555555cc854f in qemu_cond_wait_impl (cond=0x5555563b0020 <qemu_work_cond>, mutex=0x5555563cd620 <qemu_global_mutex>, file=0x555555dbad06 "../cpus-common.c", line=154) at ../util/qemu-thread-posix.c:174 +#2 0x00005555558ef484 in do_run_on_cpu (cpu=cpu@entry=0x555556688a20, func=func@entry=0x555555af7b00 <do_kvm_cpu_synchronize_post_init>, data=..., mutex=mutex@entry=0x5555563cd620 <qemu_global_mutex>) at ../cpus-common.c:154 +#3 0x0000555555b6aa7c in run_on_cpu (cpu=cpu@entry=0x555556688a20, func=func@entry=0x555555af7b00 <do_kvm_cpu_synchronize_post_init>, data=..., data@entry=...) 
at ../softmmu/cpus.c:1085 +#4 0x0000555555af8d4e in kvm_cpu_synchronize_post_init (cpu=cpu@entry=0x555556688a20) at ../accel/kvm/kvm-all.c:2361 +#5 0x0000555555b6a94a in cpu_synchronize_post_init (cpu=0x555556688a20) at /home/mcournoyer/src/qemu/include/sysemu/hw_accel.h:55 +#6 cpu_synchronize_all_post_init () at ../softmmu/cpus.c:953 +#7 0x0000555555b0dca7 in qemu_init (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/vl.c:4387 +#8 0x0000555555840609 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:49 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1900241 b/results/classifier/gemma3:12b/kvm/1900241 new file mode 100644 index 00000000..c768d692 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1900241 @@ -0,0 +1,116 @@ + +[regression][powerpc] some vcpus are found offline inside guest with different vsmt setting from qemu-cmdline and breaks subsequent vcpu hotplug operation (xive) + +Env: +Host: Power9 HW ppc64le + +# lscpu +Architecture: ppc64le +Byte Order: Little Endian +CPU(s): 128 +On-line CPU(s) list: 24-31,40-159 +Thread(s) per core: 4 +Core(s) per socket: 16 +Socket(s): 2 +NUMA node(s): 2 +Model: 2.3 (pvr 004e 1203) +Model name: POWER9, altivec supported +Frequency boost: enabled +CPU max MHz: 3800.0000 +CPU min MHz: 2300.0000 +L1d cache: 1 MiB +L1i cache: 1 MiB +L2 cache: 8 MiB +L3 cache: 160 MiB +NUMA node0 CPU(s): 24-31,40-79 +NUMA node8 CPU(s): 80-159 +Vulnerability Itlb multihit: Not affected +Vulnerability L1tf: Mitigation; RFI Flush, L1D private per thread +Vulnerability Mds: Not affected +Vulnerability Meltdown: Mitigation; RFI Flush, L1D private per thread +Vulnerability Spec store bypass: Mitigation; Kernel entry/exit barrier (eieio) +Vulnerability Spectre v1: Mitigation; __user pointer sanitization, ori31 speculation barrier enabled +Vulnerability Spectre v2: Mitigation; Software count cache flush (hardware accelerated), Software link stack flush +Vulnerability Srbds: Not affected +Vulnerability Tsx async abort: Not affected + + + +Host Kernel: 5.9.0-0.rc8.28.fc34.ppc64le (Fedora rawhide) +Guest Kernel: Fedora33(5.8.6-301.fc33.ppc64le) + +Qemu: e12ce85b2c79d83a340953291912875c30b3af06 (qemu/master) + + +Steps to reproduce: + +Boot below kvm guest: (-M pseries,vsmt=2 -smp 8,cores=8,threads=1) + + /home/sath/qemu/build/qemu-system-ppc64 -name vm1 -M pseries,vsmt=2 -accel kvm -m 4096 -smp 8,cores=8,threads=1 -nographic -nodefaults -serial mon:stdio -vga none -nographic -device virtio-scsi-pci -drive file=/home/sath/tests/data/avocado-vt/images/fdevel-ppc64le.qcow2,if=none,id=hd0,format=qcow2,cache=none -device scsi-hd,drive=hd0 + + +lscpu inside guest: +Actual: +[root@atest-guest ~]# lscpu +Architecture: ppc64le +Byte Order: Little Endian +CPU(s): 8 +On-line CPU(s) list: 0,2,4,6 +Off-line CPU(s) list: 1,3,5,7 --------------------------NOK +Thread(s) per core: 1 +Core(s) per socket: 4 +Socket(s): 1 +NUMA node(s): 1 +Model: 2.3 (pvr 004e 1203) +Model name: POWER9 (architected), altivec supported +Hypervisor vendor: KVM +Virtualization type: para +L1d cache: 128 KiB +L1i cache: 128 KiB +NUMA node0 CPU(s): 0,2,4,6 +Vulnerability Itlb multihit: Not affected +Vulnerability L1tf: Mitigation; RFI Flush, L1D private per thread +Vulnerability Mds: Not affected +Vulnerability Meltdown: Mitigation; RFI Flush, L1D private per thread +Vulnerability Spec store bypass: Mitigation; Kernel entry/exit barrier (eieio) +Vulnerability Spectre v1: Mitigation; __user pointer sanitization, 
ori31 + speculation barrier enabled +Vulnerability Spectre v2: Mitigation; Software count cache flush (hardwar + e accelerated), Software link stack flush +Vulnerability Srbds: Not affected +Vulnerability Tsx async abort: Not affected + + +Expected: + +[root@atest-guest ~]# lscpu +Architecture: ppc64le +Byte Order: Little Endian +CPU(s): 8 +On-line CPU(s) list: 0-7 +Thread(s) per core: 1 +Core(s) per socket: 8 +Socket(s): 1 +NUMA node(s): 1 +Model: 2.3 (pvr 004e 1203) +Model name: POWER9 (architected), altivec supported +Hypervisor vendor: KVM +Virtualization type: para +L1d cache: 256 KiB +L1i cache: 256 KiB +NUMA node0 CPU(s): 0-7 +Vulnerability Itlb multihit: Not affected +Vulnerability L1tf: Mitigation; RFI Flush, L1D private per thread +Vulnerability Mds: Not affected +Vulnerability Meltdown: Mitigation; RFI Flush, L1D private per thread +Vulnerability Spec store bypass: Mitigation; Kernel entry/exit barrier (eieio) +Vulnerability Spectre v1: Mitigation; __user pointer sanitization, ori31 + speculation barrier enabled +Vulnerability Spectre v2: Mitigation; Software count cache flush (hardwar + e accelerated), Software link stack flush +Vulnerability Srbds: Not affected +Vulnerability Tsx async abort: Not affected + + + +There by further vcpuhotplug operation fails... \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1902394 b/results/classifier/gemma3:12b/kvm/1902394 new file mode 100644 index 00000000..d8e148bd --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1902394 @@ -0,0 +1,76 @@ + +Guest stuck in Paused state right after created It + +Im using Centos 8 . I have try to use many Distribution such as : Centos, Ubuntum, Debian,.. on the guest but still all the the VM get into paused state immidiately after using virt-install ( I have tried using virt-manager too ) + +CPU INFO : +Architecture: x86_64 +CPU op-mode(s): 32-bit, 64-bit +Byte Order: Little Endian +CPU(s): 8 +On-line CPU(s) list: 0-7 +Thread(s) per core: 1 +Core(s) per socket: 1 +Socket(s): 8 +NUMA node(s): 1 +Vendor ID: GenuineIntel +CPU family: 6 +Model: 85 +Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz +Stepping: 7 +CPU MHz: 2199.998 +BogoMIPS: 4399.99 +Virtualization: VT-x +Hypervisor vendor: KVM +Virtualization type: full +L1d cache: 32K +L1i cache: 32K +L2 cache: 4096K +L3 cache: 16384K +NUMA node0 CPU(s): 0-7 +Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f rdseed adx smap clflushopt clwb avx512cd xsaveopt xsavec xgetbv1 arat + +VM Log : + +2020-10-31 08:29:51.737+0000: starting up libvirt version: 4.5.0, package: 42.module_el8.2.0+320+13f867d7 (CentOS Buildsys <email address hidden>, 2020-05-28-17:13:31, ), qemu version: 2.12.0qemu-kvm-2.12.0-99.module_el8.2.0+524+f765f7e0.4, kernel: 4.18.0-193.28.1.el8_2.x86_64, hostname: interns.novalocal +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=cirros,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-18-cirros/master-key.aes -machine 
pc-i440fx-rhel7.6.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu Cascadelake-Server,ss=on,hypervisor=on,tsc-adjust=on,arch-capabilities=on,ibpb=on,skip-l1dfl-vmentry=on,invpcid=off,avx512dq=off,avx512bw=off,avx512vl=off,pku=off,avx512vnni=off,pdpe1gb=off -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid ef9573a3-a02d-4ef0-86cb-e38da7b7b20d -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=29,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/home/kvm/cirros-0.3.0-x86_64-disk.img,format=qcow2,if=none,id=drive-ide0-0-0 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=31,id=hostnet0 -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:c3:32:b0,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -spice port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on +2020-10-31T08:29:51.815604Z qemu-kvm: -chardev pty,id=charserial0: char device redirected to /dev/pts/1 (label charserial0) +KVM: exception 0 exit (error code 0x0) +EAX=00000000 EBX=00000000 ECX=00000000 EDX=00050656 +ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000 +EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 00000000 0000ffff 00009300 +CS =f000 ffff0000 0000ffff 00009b00 +SS =0000 00000000 0000ffff 00009300 +DS =0000 00000000 0000ffff 00009300 +FS =0000 00000000 0000ffff 00009300 +GS =0000 00000000 0000ffff 00009300 +LDT=0000 00000000 0000ffff 00008200 +TR =0000 00000000 0000ffff 00008b00 +GDT= 00000000 0000ffff +IDT= 00000000 0000ffff +CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +Code=06 66 05 00 00 01 00 8e c1 26 66 a3 74 f0 66 5b 66 5e 66 c3 <ea> 5b e0 00 f0 30 36 2f 32 33 2f 39 39 00 fc 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 + +The Error I have when try to resume the Guest with Virt Manager : + +Error unpausing domain: internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required + +Traceback (most recent call last): + File 
"/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper + callback(asyncjob, *args, **kwargs) + File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb + callback(*args, **kwargs) + File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 66, in newfn + ret = fn(self, *args, **kwargs) + File "/usr/share/virt-manager/virtManager/object/domain.py", line 1311, in resume + self._backend.resume() + File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2012, in resume + if ret == -1: raise libvirtError ('virDomainResume() failed', dom=self) +libvirt.libvirtError: internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required + + +Any help would be so helpful cause I stuck in this case for like 4 days already. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1902451 b/results/classifier/gemma3:12b/kvm/1902451 new file mode 100644 index 00000000..259811c7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1902451 @@ -0,0 +1,22 @@ + +incorrect cpuid feature detection + +Hello, + +I am currently developing a x64 kernel and I wanted to check through cpuid if some features are available in the guest. When I try to enable cpu features like vmcb_clean or constant_tsc qemu is saying that my host doesn't support the requested features. However cat /proc/cpuinfo tells a different story: + +model name: AMD Ryzen 5 3500U +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme pti ssbd sev ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca + +I also checked it myself by running cpuid and check the bits as in the AMD Manual. Everything checks out but qemu still fails. + +QEMU version: QEMU emulator version 4.2.0 + +$ qemu-system-x86_64 -cpu host,+vmcb_clean,enforce -enable-kvm -drive format=raw,file=target/x86_64-os/debug/bootimage-my_kernel.bin -serial stdio -display none +qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.8000000AH:EDX.vmcb-clean [bit 5] +qemu-system-x86_64: Host doesn't support requested features + +or + +$ qemu-system-x86_64 -cpu host,+constant_tsc,enforce -enable-kvm -drive format=raw,file=target/x86_64-os/debug/bootimage-my_kernel.bin -serial stdio -display none +qemu-system-x86_64: Property '.constant_tsc' not found \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1903 b/results/classifier/gemma3:12b/kvm/1903 new file mode 100644 index 00000000..acb52416 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1903 @@ -0,0 +1,42 @@ + +qemu/kvm are instantly SIGKILLed by systemd on shutdown, without wait. +Description of problem: +systemd assumes it cannot terminate qemu, and SIGKILLs it. Instantly. +Steps to reproduce: +1. Start qemu on a systemd managed host +2. 
Shutdown/Reboot +Additional information: +Nothing on qemu's own log, besides that it is starting a vnc server. + +```plaintext +# journalctl -b -1 +... +Sep 22 18:38:04 local kernel: kvm_amd: TSC scaling supported +Sep 22 18:38:04 local kernel: kvm_amd: Nested Virtualization enabled +Sep 22 18:38:04 local kernel: kvm_amd: Nested Paging enabled +Sep 22 18:38:04 local kernel: kvm_amd: Virtual VMLOAD VMSAVE supported +Sep 22 18:38:04 local kernel: kvm_amd: Virtual GIF supported +Sep 22 18:38:04 local kernel: kvm_amd: LBR virtualization supported +... +Sep 22 18:38:50 local systemd-logind[721]: The system will reboot now! +Sep 22 18:38:50 local systemd-logind[721]: System is rebooting. +Sep 22 18:38:50 local sddm-helper[850]: Signal received: SIGTERM +... +Sep 22 18:38:50 local systemd[1]: Stopping User Manager for UID 1000... +Sep 22 18:38:50 local systemd-logind[721]: Removed session 1. +Sep 22 18:38:50 local systemd[854]: Activating special unit Exit the Session... +Sep 22 18:38:50 local systemd[854]: app-org.kde.konsole-1ab3dac6a1db4b29b55899b477b32975.scope: Failed to kill control group /user.slice/user-1000.slice/user@1000.service/app.slice/> +Sep 22 18:38:50 local systemd[854]: app-org.kde.konsole-1ab3dac6a1db4b29b55899b477b32975.scope: Killing process 1708 (qemu-system-x86) with signal SIGKILL. +Sep 22 18:38:50 local systemd[854]: app-org.kde.konsole-1ab3dac6a1db4b29b55899b477b32975.scope: Killing process 1712 (kvm-nx-lpage-recovery-1708) with signal SIGKILL. +Sep 22 18:38:50 local systemd[854]: app-org.kde.konsole-1ab3dac6a1db4b29b55899b477b32975.scope: Failed to kill control group /user.slice/user-1000.slice/user@1000.service/app.slice/> +Sep 22 18:38:50 local systemd[854]: Stopped Konsole - Terminal. +... (some other applications terminanting normally ) +Sep 22 18:38:50 local systemd[854]: app-org.kde.konsole-1ab3dac6a1db4b29b55899b477b32975.scope: Consumed 10.068s CPU time. +Sep 22 18:38:50 local systemd[854]: Removed slice User Background Tasks Slice. +Sep 22 18:38:50 local systemd[854]: background.slice: Consumed 2.960s CPU time. +... +``` + +I cannot explain why it sends SIGKILL to qemu/kvm... it is the same second as the shutdown started, their docs says there's a delay for that. + +Also, other processes owned by the user received a single SIGTERM after qemu was SIGKILLed. Some even take a couple seconds to exit and are not SIGKILLed. diff --git a/results/classifier/gemma3:12b/kvm/1905562 b/results/classifier/gemma3:12b/kvm/1905562 new file mode 100644 index 00000000..9286be8c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1905562 @@ -0,0 +1,31 @@ + +Guest seems suspended after host freed memory for it using oom-killer + +Host: qemu 5.1.0, linux 5.5.13 +Guest: Windows 7 64-bit + +This guest ran a memory intensive process, and triggered oom-killer on host. Luckily, it killed chromium. My understanding is this should mean qemu should have continued running unharmed. But, the spice connection shows the host system clock is stuck at the exact time oom-killer was triggered. The host is completely unresponsive. + +I can telnet to the qemu monitor. "info status" shows "running". But, multiple times running "info registers -a" and saving the output to text files shows the registers are 100% unchanged, so it's not really running. + +On the host, top shows around 4% CPU usage by qemu. 
strace shows about 1,000 times a second, these 6 lines repeat: + +0.000698 ioctl(18, KVM_IRQ_LINE_STATUS, 0x7fff1f030c10) = 0 <0.000010> +0.000034 ioctl(18, KVM_IRQ_LINE_STATUS, 0x7fff1f030c60) = 0 <0.000009> +0.000031 ioctl(18, KVM_IRQ_LINE_STATUS, 0x7fff1f030c20) = 0 <0.000007> +0.000028 ioctl(18, KVM_IRQ_LINE_STATUS, 0x7fff1f030c70) = 0 <0.000007> +0.000030 ppoll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=9, events=POLLIN}, {fd=11, events =POLLIN}, {fd=16, events=POLLIN}, {fd=32, events=POLLIN}, {fd=34, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=41, events=POLLI N}, {fd=42, events=POLLIN}, {fd=43, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}], 16, {tv_sec=0, tv_nsec=0}, NULL, 8) = 0 (Timeout) <0.000009> +0.000043 ppoll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=8, events=POLLIN}, {fd=9, events=POLLIN}, {fd=11, events =POLLIN}, {fd=16, events=POLLIN}, {fd=32, events=POLLIN}, {fd=34, events=POLLIN}, {fd=39, events=POLLIN}, {fd=40, events=POLLIN}, {fd=41, events=POLLI N}, {fd=42, events=POLLIN}, {fd=43, events=POLLIN}, {fd=44, events=POLLIN}, {fd=45, events=POLLIN}], 16, {tv_sec=0, tv_nsec=769662}, NULL, 8) = 0 (Tim eout) <0.000788> + +In the monitor, "info irq" shows IRQ 0 is increasing about 1,000 times a second. IRQ 0 seems to be for the system clock, and 1,000 times a second seems to be the frequency a windows 7 guest might have the clock at. + +Those fd's are for: (9) [eventfd]; [signalfd], type=STREAM, 4 x the spice socket file, and "TCP localhost:ftnmtp->localhost:36566 (ESTABLISHED)". + +Because the guest's registers aren't changing, it seems to me like monitor thinks the VM is running, but it's actually effectively in a paused state. I think all the strace activity shown above must be generated by the host. Perhaps it's repeatedly trying to contact the guest to inject a new clock, and communicate with it on the various eventfd's, spice socket, etc. So, I'm thinking the strace doesn't give any information about the real reason why the VM is acting as if it's paused. + +I've checked "info block", and there's nothing showing that a device is paused, or that there's any issues with them. (Can't remember what term can be there, but a paused/blocked/etc block device I think caused a VM to act like this for me in the past.) + + +Is there something I can provide to help fix the bug here? + +Is there something I can do, to try to get the VM running again? (I sadly have unsaved work in it.) \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1908781 b/results/classifier/gemma3:12b/kvm/1908781 new file mode 100644 index 00000000..bb5d71ce --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1908781 @@ -0,0 +1,10 @@ + +x86-64 not faulting when CS.L = 1 and CS.D = 1 + +In a UEFI application I accidentally created a code segment descriptor where both the L and D bits were 1. This is supposed to generate a GP fault (e.g. see page 2942 of https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf). When running with KVM a fault did indeed occur, but when not specifying any acceleration, no fault occurred. + +Let me know if you need me to develop a minimum example to debug from. At the moment it's all part of a slightly more complicated bit of code. + +Version: 5.2.0 (compiled from source) +Command line options: -smp cores=4 -m 8192 (plus whatever uefi-run adds to plug in OVMF and my UEFI application). 
+Environment: Ubuntu 20.04 on Ryzen 3700X \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1912777 b/results/classifier/gemma3:12b/kvm/1912777 new file mode 100644 index 00000000..4425a493 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1912777 @@ -0,0 +1,96 @@ + +KVM_EXIT_MMIO has increased in QEMU 4.0.0 when compared to QEMU 2.11.0 + +I was able to generate a trace dump of the kvm_run_exit event in both QEMU 2.11.0 and QEMU 4.0.0. +From the trace I noticed that the number of KVM_EXIT_MMIO exits has increased a lot and is causing a delay in testcase execution. + +I executed the same testcase on QEMU 2.11 and QEMU 4.0.0. +Inside the virtual machine, when using QEMU 2.11 the testcase completed in 11 seconds, +but the same testcase executed on QEMU 4.0.0 took 26 seconds. + + +I did a bit of digging and extracted the kvm_run_exit events to figure out what's going on. + +Please find the stats below. +Stats from QEMU 2.11: + +KVM_EXIT_UNKNOWN : 0 +KVM_EXIT_EXCEPTION : 0 +KVM_EXIT_IO : 182513 +KVM_EXIT_HYPERCALL : 0 +KVM_EXIT_DEBUG : 0 +KVM_EXIT_HLT : 0 +KVM_EXIT_MMIO : 216701 +KVM_EXIT_IRQ_WINDOW_OPEN : 0 +KVM_EXIT_SHUTDOWN : 0 +KVM_EXIT_FAIL_ENTRY : 0 +KVM_EXIT_INTR : 0 +KVM_EXIT_SET_TPR : 0 +KVM_EXIT_TPR_ACCESS : 0 +KVM_EXIT_S390_SIEIC : 0 +KVM_EXIT_S390_RESET : 0 +KVM_EXIT_DCR : 0 +KVM_EXIT_NMI : 0 +KVM_EXIT_INTERNAL_ERROR : 0 +KVM_EXIT_OSI : 0 +KVM_EXIT_PAPR_HCALL : 0 +KVM_EXIT_S390_UCONTROL : 0 +KVM_EXIT_WATCHDOG : 0 +KVM_EXIT_S390_TSCH : 0 +KVM_EXIT_EPR : 0 +KVM_EXIT_SYSTEM_EVENT : 0 +KVM_EXIT_S390_STSI : 0 +KVM_EXIT_IOAPIC_EOI : 0 +KVM_EXIT_HYPERV : 0 + +KVM_RUN_EXIT : 399214 (Total in QEMU 2.11 for the testcase) + + +Stats for QEMU 4.0.0: + +KVM_EXIT_UNKNOWN : 0 +KVM_EXIT_EXCEPTION : 0 +KVM_EXIT_IO : 163729 +KVM_EXIT_HYPERCALL : 0 +KVM_EXIT_DEBUG : 0 +KVM_EXIT_HLT : 0 +KVM_EXIT_MMIO : 1094231 +KVM_EXIT_IRQ_WINDOW_OPEN : 46 +KVM_EXIT_SHUTDOWN : 0 +KVM_EXIT_FAIL_ENTRY : 0 +KVM_EXIT_INTR : 0 +KVM_EXIT_SET_TPR : 0 +KVM_EXIT_TPR_ACCESS : 0 +KVM_EXIT_S390_SIEIC : 0 +KVM_EXIT_S390_RESET : 0 +KVM_EXIT_DCR : 0 +KVM_EXIT_NMI : 0 +KVM_EXIT_INTERNAL_ERROR : 0 +KVM_EXIT_OSI : 0 +KVM_EXIT_PAPR_HCALL : 0 +KVM_EXIT_S390_UCONTROL : 0 +KVM_EXIT_WATCHDOG : 0 +KVM_EXIT_S390_TSCH : 0 +KVM_EXIT_EPR : 0 +KVM_EXIT_SYSTEM_EVENT : 0 +KVM_EXIT_S390_STSI : 0 +KVM_EXIT_IOAPIC_EOI : 464 +KVM_EXIT_HYPERV : 0 + +KVM_RUN_EXIT : 1258470 (Total in QEMU 4.0.0 for the same testcase) + + + +From the above analysis I found that the number of KVM_EXIT_MMIO exits has increased by 4.x. + +Could someone from the QEMU community help me understand why the MMIO exits have increased in QEMU 4.0.0? + +The results I obtained are after running the same testcase. +On QEMU 2.11 the testcase completes in 11 seconds; +on QEMU 4.0.0 the testcase completes in 26 seconds. + +The VM qcow2 image used is Ubuntu 16.04. +The VM kernel is 4.4 generic. + + +Let me know in case more information is required.
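The per-reason totals above were evidently tallied from the kvm_run_exit trace events. A small sketch of that counting step, assuming a plain-text trace file named trace.log (a hypothetical name) and the stock "cpu_index %d, reason %d" formatting of the kvm_run_exit event; adjust the regex and the reason-name table to your trace backend and to the kvm.h of your kernel:

```python
# Sketch: count kvm_run_exit events per exit reason from a QEMU trace log.
# Only a few reason numbers from <linux/kvm.h> are mapped here - extend as needed.
import re
from collections import Counter

REASON_NAMES = {2: "KVM_EXIT_IO", 5: "KVM_EXIT_HLT", 6: "KVM_EXIT_MMIO"}

counts = Counter()
with open("trace.log") as trace:
    for line in trace:
        m = re.search(r"kvm_run_exit .*reason (\d+)", line)
        if m:
            counts[int(m.group(1))] += 1

for reason, n in counts.most_common():
    print(f"{REASON_NAMES.get(reason, reason)}: {n}")
```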
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1914696 b/results/classifier/gemma3:12b/kvm/1914696 new file mode 100644 index 00000000..5f91aaf9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1914696 @@ -0,0 +1,91 @@ + +aarch64: migration failed: Segmentation fault (core dumped) + +reproduce: + +arch: aarch64 +source qemu: v4.2.0 +destination qemu: 1ed9228f63ea4bcc0ae240365305ee264e9189ce + +cmdline: +source: +$ ./aarch64-softmmu/qemu-system-aarch64 -name 'avocado-vt-vm1' -machine virt-4.2,gic-version=host,graphics=on -nodefaults -m 1024 -smp 2 -cpu 'host' -vnc :10 -enable-kvm -monitor stdio +(qemu) +(qemu) migrate -d tcp:10.19.241.167:888 +(qemu) info status +VM status: paused (postmigrate) + +destination: +./build/aarch64-softmmu/qemu-system-aarch64 -name 'avocado-vt-vm1' -machine virt-4.2,gic-version=host,graphics=on -nodefaults -m 1024 -smp 2 -cpu 'host' -vnc :10 -enable-kvm -monitor stdio -incoming tcp:0:888 +QEMU 5.2.50 monitor - type 'help' for more information +(qemu) Segmentation fault (core dumped) + + +i have bisected and confirmed that the first bad commit is: [f9506e162c33e87b609549157dd8431fcc732085] target/arm: Remove ARM_FEATURE_VFP* + +bisect log: +git bisect log +# bad: [1ed9228f63ea4bcc0ae240365305ee264e9189ce] Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2021-02-02-v2' into staging +git bisect bad 1ed9228f63ea4bcc0ae240365305ee264e9189ce +# good: [b0ca999a43a22b38158a222233d3f5881648bb4f] Update version for v4.2.0 release +git bisect good b0ca999a43a22b38158a222233d3f5881648bb4f +# bad: [59093cc407cb044c72aa786006a07bd404eb36b9] hw/char: Convert the Ibex UART to use the registerfields API +git bisect bad 59093cc407cb044c72aa786006a07bd404eb36b9 +# bad: [4dabf39592e92d692c6f2a1633571114ae25d843] aspeed/smc: Fix DMA support for AST2600 +git bisect bad 4dabf39592e92d692c6f2a1633571114ae25d843 +# good: [93c86fff53a267f657e79ec07dcd04b63882e330] Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20200207' into staging +git bisect good 93c86fff53a267f657e79ec07dcd04b63882e330 +# bad: [2ac031d171ccd18c973014d9978b4a63f0ad5fb0] Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-5.0-sf3' into staging +git bisect bad 2ac031d171ccd18c973014d9978b4a63f0ad5fb0 +# good: [4036b7d1cd9fb1097a5f4bc24d7d31744256260f] target/arm: Use isar_feature function for testing AA32HPD feature +git bisect good 4036b7d1cd9fb1097a5f4bc24d7d31744256260f +# good: [002375895c10df40615fc615e2639f49e0c442fe] tests/iotests: be a little more forgiving on the size test +git bisect good 002375895c10df40615fc615e2639f49e0c442fe +# good: [c695724868ce4049fd79c5a509880dbdf171e744] target/riscv: Emulate TIME CSRs for privileged mode +git bisect good c695724868ce4049fd79c5a509880dbdf171e744 +# good: [f67957e17cbf8fc3cc5d1146a2db2023404578b0] target/arm: Add isar_feature_aa32_{fpsp_v2, fpsp_v3, fpdp_v3} +git bisect good f67957e17cbf8fc3cc5d1146a2db2023404578b0 +# bad: [a1229109dec4375259d3fff99f362405aab7917a] target/arm: Implement v8.4-RCPC +git bisect bad a1229109dec4375259d3fff99f362405aab7917a +# bad: [906b60facc3d3dd3af56cb1a7860175d805e10a3] target/arm: Add formats for some vfp 2 and 3-register insns +git bisect bad 906b60facc3d3dd3af56cb1a7860175d805e10a3 +# good: [c52881bbc22b50db99a6c37171ad3eea7d959ae6] target/arm: Replace ARM_FEATURE_VFP4 with isar_feature_aa32_simdfmac +git bisect good c52881bbc22b50db99a6c37171ad3eea7d959ae6 +# good: [f0f6d5c81be47d593e5ece7f06df6fba4c15738b] target/arm: Move the vfp decodetree calls next to 
the base isa +git bisect good f0f6d5c81be47d593e5ece7f06df6fba4c15738b +# bad: [f9506e162c33e87b609549157dd8431fcc732085] target/arm: Remove ARM_FEATURE_VFP* +git bisect bad f9506e162c33e87b609549157dd8431fcc732085 +# good: [bfa8a370d2f5d4ed03f7a7e2987982f15fe73758] linux-user/arm: Replace ARM_FEATURE_VFP* tests for HWCAP +git bisect good bfa8a370d2f5d4ed03f7a7e2987982f15fe73758 +# first bad commit: [f9506e162c33e87b609549157dd8431fcc732085] target/arm: Remove ARM_FEATURE_VFP* + + +the root cause is that, some feature bit is not consistent any more with below changes in this commit: +diff --git a/target/arm/cpu.h b/target/arm/cpu.h +index b29b0eddfc..05aa9711cd 100644 +--- a/target/arm/cpu.h ++++ b/target/arm/cpu.h +@@ -1880,7 +1880,6 @@ QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK); + * mapping in linux-user/elfload.c:get_elf_hwcap(). + */ + enum arm_features { +- ARM_FEATURE_VFP, + ARM_FEATURE_AUXCR, /* ARM1026 Auxiliary control register. */ + ARM_FEATURE_XSCALE, /* Intel XScale extensions. */ + ARM_FEATURE_IWMMXT, /* Intel iwMMXt extension. */ +@@ -1889,7 +1888,6 @@ enum arm_features { + ARM_FEATURE_V7, + ARM_FEATURE_THUMB2, + ARM_FEATURE_PMSA, /* no MMU; may have Memory Protection Unit */ +- ARM_FEATURE_VFP3, + ARM_FEATURE_NEON, + ARM_FEATURE_M, /* Microcontroller profile. */ + ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */ +@@ -1900,7 +1898,6 @@ enum arm_features { + ARM_FEATURE_V5, + ARM_FEATURE_STRONGARM, + ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */ +- ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */ + ARM_FEATURE_GENERIC_TIMER, + ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */ + ARM_FEATURE_DUMMY_C15_REGS, /* RAZ/WI all of cp15 crn=15 */ \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1914748 b/results/classifier/gemma3:12b/kvm/1914748 new file mode 100644 index 00000000..98c9acef --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1914748 @@ -0,0 +1,16 @@ + +Confuse error message when KVM can not start requested CPU + +As of commit 1ba089f2255, on Cavium CN8890 (ThunderX cores): + +$ qemu-system-aarch64 -display none -accel kvm -M virt,gic-version=3 -accel kvm -cpu cortex-a57 --trace \*kvm_vcpu\* +kvm_vcpu_ioctl cpu_index 0, type 0x4020aeae, arg 0xffff9b7f9b18 +qemu-system-aarch64: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Invalid argument + +(same using "-cpu cortex-a53" or cortex-a72). + +Explanation from Peter Maydell on IRC: +> using a specific cpu type will only work with KVM if the host CPU really is that +> exact CPU type, otherwise, use "-cpu host" or "-cpu max" + +Having a better error description would help to understand the reason. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1914986 b/results/classifier/gemma3:12b/kvm/1914986 new file mode 100644 index 00000000..b1597d77 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1914986 @@ -0,0 +1,56 @@ + +KVM internal error. Suberror: 1 - OVMF / Audio related + +This is latest release QEMU-5.2.0 on Arch Linux running kernel 5.10.13, latest OVMF etc. + +I'm seeing the following crash when loading an audio driver from the OpenCore[1] project in the UEFI shell: + +KVM internal error. 
Suberror: 1 +emulation failure +RAX=0000000000000000 RBX=0000000000000000 RCX=0000000000000000 RDX=0000000000000000 +RSI=0000000000000000 RDI=000000007e423628 RBP=000000007fee6a90 RSP=000000007fee6a08 +R8 =0000000000000000 R9 =0000000000000080 R10=0000000000000000 R11=0000000000000000 +R12=000000007eeaf828 R13=0000000000000000 R14=0000000000000000 R15=000000007fee6a67 +RIP=00000000000b0000 RFL=00000246 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +CS =0038 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +FS =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +GS =0030 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT +TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy +GDT= 000000007f9ee698 00000047 +IDT= 000000007f27a018 00000fff +CR0=80010033 CR2=0000000000000000 CR3=000000007fc01000 CR4=00000668 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d00 +Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <ff> ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff + + +Here's the QEMU command line I'm using: + +qemu-system-x86_64 \ +-machine q35,accel=kvm \ +-cpu host,+topoext,+invtsc \ +-smp 4,sockets=1,cores=2 \ +-m 4096 \ +-drive file=/usr/share/edk2-ovmf/x64/OVMF_CODE.fd,if=pflash,format=raw,readonly=on \ +-drive file=OVMF_VARS.fd,if=pflash,format=raw \ +-usb -device usb-tablet -device usb-kbd \ +-drive file=OpenCore-0.6.6.img,format=raw \ +-device ich9-intel-hda,bus=pcie.0,addr=0x1b \ +-device hda-micro,audiodev=hda \ +-audiodev pa,id=hda,server=/run/user/1000/pulse/native + +The driver loads fine when using the "no connect" switch. eg: + +Shell> load -nc fs0:\efi\oc\drivers\audiodxe.efi +Shell> Image 'fs0:\EFI\OC\Drivers\AudioDxe.efi' loaded at 7E3C7000 - Success + +However, the crash occurs when loading normally. + +Any ideas? Thanks. + +[1]: https://github.com/acidanthera/OpenCorePkg/releases \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1916112 b/results/classifier/gemma3:12b/kvm/1916112 new file mode 100644 index 00000000..1c7d1636 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1916112 @@ -0,0 +1,60 @@ + +Illegal instruction crash of QEMU on Jetson Nano + +I have a jetson nano (arm64 SBC) and I want to check the native emulation performance of Raspbian Buster. I used the info available here: + +https://github.com/dhruvvyas90/qemu-rpi-kernel/tree/master/native-emuation + +I have Xubuntut 20.04 with KVM enabled kernel running on the Jetson Nano + +However QEMU crashes with "Illegal Instruction" during kernel boot. 
I have a built latest QEMU from sources with following configuration + +./configure --prefix=/usr/local --target-list=aarch64-softmmu,arm-softmmu --enable-guest-agent --enable-vnc --enable-vnc-jpeg --enable-vnc-png --enable-kvm --enable-spice --enable-sdl --enable-gtk --enable-virglrenderer --enable-opengl + +qemu-system-aarch64 --version +QEMU emulator version 5.2.50 (v5.2.0-1731-g5b19cb63d9) + +When I run as follows: + +../build/qemu-system-aarch64 -M raspi3 +-append "rw earlyprintk loglevel=8 console=ttyAMA0,115200 dwc_otg.lpm_enable=0 root=/dev/mmcblk0p2 rootdelay=1" +-dtb ./bcm2710-rpi-3-b-plus.dtb +-sd /media/96747D21747D0571/JetsonNano/2020-08-20-raspios-buster-armhf-full.qcow2 +-kernel ./kernel8.img +-m 1G -smp 4 -serial stdio -usb -device usb-mouse -device usb-kbd + +I get : +[ 74.994834] systemd[1]: Condition check resulted in FUSE Control File System being skipped. +[ 76.281274] systemd[1]: Starting Apply Kernel Variables... +Starting Apply Kernel Variables... +Illegal instruction (core dumped) + +When I use GDB I see this: + +Thread 8 "qemu-system-aar" received signal SIGILL, Illegal instruction. +[Switching to Thread 0x7fad7f9ba0 (LWP 28037)] +0x0000007f888ac690 in code_gen_buffer () +(gdb) bt +#0 0x0000007f888ac690 in code_gen_buffer () +#1 0x0000005555d7c038 in cpu_tb_exec (tb_exit=, itb=, cpu=0x7fb4502c40) +at ../accel/tcg/cpu-exec.c:191 +#2 cpu_loop_exec_tb (tb_exit=, last_tb=, tb=, cpu=0x7fb4502c40) +at ../accel/tcg/cpu-exec.c:708 +#3 cpu_exec (cpu=cpu@entry=0x7fb4502c40) at ../accel/tcg/cpu-exec.c:819 +.. + +I have just two questions: + +Is this a problem with QEMU or is there anything specific build or options I need to use. Any specific version of QEMU should be used ? + +Why is TCG used as the accelerator when KVM is present. Is it possible and how to use KVM ? + +If I enabled the KVM then I get this error: + +../build/qemu-system-aarch64 -M raspi3 -enable-kvm -append "rw earlyprintk loglevel=8 console=ttyAMA0,115200 dwc_otg.lpm_enable=0 root=/dev/mmcblk0p2 rootdelay=1" -dtb ./bcm2710-rpi-3-b-plus.dtb -sd /media/96747D21747D0571/JetsonNano/2020-08-20-raspios-buster-armhf-full.qcow2 -kernel ./kernel8.img -m 1G -smp 4 -serial stdio -usb -device usb-mouse -device usb-kbd +WARNING: Image format was not specified for '/media/96747D21747D0571/JetsonNano/2020-08-20-raspios-buster-armhf-full.img' and probing guessed raw. + Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted. + Specify the 'raw' format explicitly to remove the restrictions. +qemu-system-aarch64: ../softmmu/physmem.c:750: cpu_address_space_init: Assertion `asidx == 0 || !kvm_enabled()' failed. + +Thanks a lot. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1919169 b/results/classifier/gemma3:12b/kvm/1919169 new file mode 100644 index 00000000..c66f2a29 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1919169 @@ -0,0 +1,21 @@ + +[git]Startup crash when trying to use an EFI enabled VM in accel/kvm/kvm-all.c + +Hello. + +I build a git version based on commit 6157b0e19721aadb4c7fdcfe57b2924af6144b14. + +When I try to launch an EFI enabled VM, it crashes on start. 
Here is the command line used: + +qemu-system-x86_64 -bios /usr/share/edk2-ovmf/x64/OVMF.fd -enable-kvm -smp 4 -soundhw all -k fr -m 4096 -vga qxl -hda disk.img -cdrom archlinux-2021.03.01-x86_64.iso -boot cd & + +Here is the log I get: + +``` +qemu-system-x86_64: ../accel/kvm/kvm-all.c:690: kvm_log_clear_one_slot: Assertion `QEMU_IS_ALIGNED(start | size, psize)' failed. +``` + + +ed2k-ovmf version: 202102 + +I tried an older version, edk2-ovmf 202011, same crash on start. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1920784 b/results/classifier/gemma3:12b/kvm/1920784 new file mode 100644 index 00000000..772f2ab5 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1920784 @@ -0,0 +1,69 @@ + +qemu-system-ppc64le fails with kvm acceleration + +(Suspected glibc issue!) + +qemu-system-ppc64(le) fails when invoked with kvm acceleration with error "illegal instruction" + +> qemu-system-ppc64(le) -M pseries,accel=kvm + +Illegal instruction (core dumped) + +In dmesg: + +Facility 'SCV' unavailable (12), exception at 0x7624f8134c0c, MSR=900000000280f033 + + +Version-Release number of selected component (if applicable): +qemu 5.2.0 +Linux kernel 5.11 +glibc 2.33 +all latest updates as of submitting the bug report + +How reproducible: +Always + +Steps to Reproduce: +1. Run qemu with kvm acceleration + +Actual results: +Illegal instruction + +Expected results: +Normal VM execution + +Additional info: +The machine is a Raptor Talos II Lite with a Sforza V1 8-core, but was also observed on a Raptor Blackbird with the same processor. + +This was also observed on Fedora 34 beta, which uses glibc 2.33 +Also tested on ArchPOWER (unofficial port of Arch Linux for ppc64le) with glibc 2.33 +Fedora 33 and Ubuntu 20.10, both using glibc 2.32 do not have this issue, and downgrading the Linux kernel from 5.11 to 5.4 LTS on ArchPOWER solved the problem. 
Kernel 5.9 and 5.10 have the same issue when combined with glibc2.33 + +ProblemType: Bug +DistroRelease: Ubuntu 21.04 +Package: qemu-system 1:5.2+dfsg-6ubuntu2 +ProcVersionSignature: Ubuntu 5.11.0-11.12-generic 5.11.0 +Uname: Linux 5.11.0-11-generic ppc64le +.sys.firmware.opal.msglog: Error: [Errno 13] Permission denied: '/sys/firmware/opal/msglog' +ApportVersion: 2.20.11-0ubuntu60 +Architecture: ppc64el +CasperMD5CheckResult: pass +CurrentDesktop: Unity:Unity7:ubuntu +Date: Mon Mar 22 14:48:39 2021 +InstallationDate: Installed on 2021-03-22 (0 days ago) +InstallationMedia: Ubuntu-Server 21.04 "Hirsute Hippo" - Alpha ppc64el (20210321) +KvmCmdLine: COMMAND STAT EUID RUID PID PPID %CPU COMMAND +ProcKernelCmdLine: root=UUID=f3d03315-0944-4a02-9c87-09c00eba9fa1 ro +ProcLoadAvg: 1.20 0.73 0.46 1/1054 6071 +ProcSwaps: + Filename Type Size Used Priority + /swap.img file 8388544 0 -2 +ProcVersion: Linux version 5.11.0-11-generic (buildd@bos02-ppc64el-002) (gcc (Ubuntu 10.2.1-20ubuntu1) 10.2.1 20210220, GNU ld (GNU Binutils for Ubuntu) 2.36.1) #12-Ubuntu SMP Mon Mar 1 19:26:20 UTC 2021 +SourcePackage: qemu +UpgradeStatus: No upgrade log present (probably fresh install) +VarLogDump_list: total 0 +acpidump: + +cpu_cores: Number of cores present = 8 +cpu_coreson: Number of cores online = 8 +cpu_smt: SMT=4 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1921444 b/results/classifier/gemma3:12b/kvm/1921444 new file mode 100644 index 00000000..dd41a1cc --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1921444 @@ -0,0 +1,71 @@ + +Q35 doesn't support to hot add the 2nd PCIe device to KVM guest + +KVM: https://git.kernel.org/pub/scm/virt/kvm/kvm.git branch: next, commit: 4a98623d +Qemu: https://git.qemu.org/git/qemu.git branch: master, commit: 9e2e9fe3 + +Created a KVM guest with Q35 chipset, and try to hot add 2 PCIe device to guest with qemu internal command device_add, the 1st device can be added successfully, but the 2nd device failed to hot add. + +If guest chipset is legacy i440fx, the 2 device can be added successfully. + +1. Enable VT-d in BIOS +2. load KVM modules in Linux OS: modprobe kvm; modprobe kvm_intel +3. Bind 2 device to vfio-pci + echo 0000:b1:00.0 > /sys/bus/pci/drivers/i40e/unbind + echo "8086 1572" > /sys/bus/pci/drivers/vfio-pci/new_id + echo 0000:b1:00.1 > /sys/bus/pci/drivers/i40e/unbind + echo "8086 1572" > /sys/bus/pci/drivers/vfio-pci/new_id + +4. create guest with Q35 chipset: +qemu-system-x86_64 --accel kvm -m 4096 -smp 4 -drive file=/home/rhel8.2.qcow2,if=none,id=virtio-disk0 -device virtio-blk-pci,drive=virtio-disk0 -cpu host -machine q35 -device pcie-root-port,id=root1 -daemonize + +5. hot add the 1st device to guest successfully +in guest qemu monitor "device_add vfio-pci,host=b1:00.0,id=nic0,bus=root1" +6. hot add the 2nd device to guest +in guest qemu monitor "device_add vfio-pci,host=b1:00.1,id=nic1,bus=root1" +The 2nd device doesn't be added in guest, and the 1st device is removed from guest. 
+ +Guest partial log: +[ 110.452272] pcieport 0000:00:04.0: pciehp: Slot(0): Attention button pressed +[ 110.453314] pcieport 0000:00:04.0: pciehp: Slot(0) Powering on due to button press +[ 110.454156] pcieport 0000:00:04.0: pciehp: Slot(0): Card present +[ 110.454792] pcieport 0000:00:04.0: pciehp: Slot(0): Link Up +[ 110.580927] pci 0000:01:00.0: [8086:1572] type 00 class 0x020000 +[ 110.582560] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x007fffff 64bit pref] +[ 110.583453] pci 0000:01:00.0: reg 0x1c: [mem 0x00000000-0x00007fff 64bit pref] +[ 110.584278] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] +[ 110.585051] pci 0000:01:00.0: Max Payload Size set to 128 (was 512, max 2048) +[ 110.586621] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold +[ 110.588140] pci 0000:01:00.0: BAR 0: no space for [mem size 0x00800000 64bit pref] +[ 110.588954] pci 0000:01:00.0: BAR 0: failed to assign [mem size 0x00800000 64bit pref] +[ 110.589797] pci 0000:01:00.0: BAR 6: assigned [mem 0xfe800000-0xfe87ffff pref] +[ 110.590703] pci 0000:01:00.0: BAR 3: assigned [mem 0xfe000000-0xfe007fff 64bit pref] +[ 110.592085] pcieport 0000:00:04.0: PCI bridge to [bus 01] +[ 110.592755] pcieport 0000:00:04.0: bridge window [io 0x1000-0x1fff] +[ 110.594403] pcieport 0000:00:04.0: bridge window [mem 0xfe800000-0xfe9fffff] +[ 110.595847] pcieport 0000:00:04.0: bridge window [mem 0xfe000000-0xfe1fffff 64bit pref] +[ 110.597867] PCI: No. 2 try to assign unassigned res +[ 110.597870] release child resource [mem 0xfe000000-0xfe007fff 64bit pref] +[ 110.597871] pcieport 0000:00:04.0: resource 15 [mem 0xfe000000-0xfe1fffff 64bit pref] released +[ 110.598881] pcieport 0000:00:04.0: PCI bridge to [bus 01] +[ 110.600789] pcieport 0000:00:04.0: BAR 15: assigned [mem 0x180000000-0x180bfffff 64bit pref] +[ 110.601731] pci 0000:01:00.0: BAR 0: assigned [mem 0x180000000-0x1807fffff 64bit pref] +[ 110.602849] pci 0000:01:00.0: BAR 3: assigned [mem 0x180800000-0x180807fff 64bit pref] +[ 110.604069] pcieport 0000:00:04.0: PCI bridge to [bus 01] +[ 110.604941] pcieport 0000:00:04.0: bridge window [io 0x1000-0x1fff] +[ 110.606237] pcieport 0000:00:04.0: bridge window [mem 0xfe800000-0xfe9fffff] +[ 110.607401] pcieport 0000:00:04.0: bridge window [mem 0x180000000-0x180bfffff 64bit pref] +[ 110.653661] i40e: Intel(R) Ethernet Connection XL710 Network Driver +[ 110.654443] i40e: Copyright (c) 2013 - 2019 Intel Corporation. 
+[ 110.655314] i40e 0000:01:00.0: enabling device (0140 -> 0142) +[ 110.672396] i40e 0000:01:00.0: fw 6.0.48442 api 1.7 nvm 6.01 0x800035b1 1.1747.0 [8086:1572] [8086:0008] +[ 110.750054] i40e 0000:01:00.0: MAC address: 3c:fd:fe:c0:59:98 +[ 110.751792] i40e 0000:01:00.0: FW LLDP is enabled +[ 110.764644] i40e 0000:01:00.0 eth1: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None +[ 110.779390] i40e 0000:01:00.0: PCI-Express: Speed 8.0GT/s Width x8 +[ 110.789841] i40e 0000:01:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 4 RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA +[ 111.817553] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready +[ 205.130288] pcieport 0000:00:04.0: pciehp: Slot(0): Attention button pressed +[ 205.131743] pcieport 0000:00:04.0: pciehp: Slot(0): Powering off due to button press +[ 205.133233] pcieport 0000:00:04.0: pciehp: Slot(0): Card not present +[ 205.135728] i40e 0000:01:00.0: i40e_ptp_stop: removed PHC on eth1 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1925966 b/results/classifier/gemma3:12b/kvm/1925966 new file mode 100644 index 00000000..dfdc1a83 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1925966 @@ -0,0 +1,34 @@ + +Win10 guest freezes randomly + +In addition to bug #1916775, my Win10 Home guest freezes randomly and infrequently. Unlike bug +#1916775, this is unrecoverable and I see on the host (Debian 4.19.171-2) via iotop that all disk IO has stopped. My only recourse is a hard reset of the guest. + +My setup uses PCI-pass-through graphics (GTX 1650), host cpu (Ryzen 7 3800XT). It seems to occur more frequently when I plug in 3 monitors rather than 2 into the pass-through graphics card. It occurs whether or not I use the qcow disk drive. + +qemu-system-x86_64 + -cpu host,kvm=on,l3-cache=on,hv_relaxed,hv_vapic,hv_time,hv_spinlocks=0x1fff,hv_vendor_id=hv_dummy + -smp 8 + -rtc clock=host,base=localtime + -machine type=q35,accel=kvm,kernel_irqchip=on + -enable-kvm + -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd + -drive if=pflash,format=raw,file=/tmp/OVMF_VARS.fd + -m 32G + -usb + -device usb-tablet + -vga none + -serial none + -parallel none + -boot cd + -nographic + -device usb-host,vendorid=0x045e,productid=0x00db + -device usb-host,vendorid=0x1bcf,productid=0x0005 + -drive id=disk0,index=0,format=qcow2,if=virtio,cache=off,file=./win10_boot_priv.qcow2 + -drive id=disk2,index=2,aio=native,cache.direct=on,if=virtio,cache=off,format=raw,discard=unmap,detect-zeroes=unmap,file=/dev/vg0/win10_hdpriv + -device vfio-pci,host=09:00.0,addr=0x02.0x0,multifunction=on + -device vfio-pci,host=09:00.1,addr=0x02.0x1 + -device vfio-pci,host=09:00.2,addr=0x02.0x2 + -device vfio-pci,host=09:00.3,addr=0x02.0x3 + -netdev tap,id=netid,ifname=taplan,script=no,downscript=no + -device e1000,netdev=netid,mac=52:54:00:01:02:03 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1926249 b/results/classifier/gemma3:12b/kvm/1926249 new file mode 100644 index 00000000..165076c5 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1926249 @@ -0,0 +1,20 @@ + +postcopy migration fails in hirsute (solved) + +FYI: this is an intended change, can be overwritten via config and this bug is mostly to have something puzzled users can find via search engines to explain and solve their issue. 
+ +postcopy migration can in some cases be very useful +=> https://wiki.qemu.org/Features/PostCopyLiveMigration + +But with Hirsute kernel being 5.11 that now contains the following upstream change +=> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d0d4730ac2 + +Due to that postcopy migration will fail like: + ++ lxc exec testkvm-focal-from -- virsh migrate --unsafe --live --postcopy --postcopy-after-precopy kvmguest-focal-postcopy qemu+ssh://10.85.93.248/system +error: internal error: unable to execute QEMU command 'migrate-set-capabilities': Postcopy is not supported + +This will also apply to e.g. a Focal-HWE kernel once on v5.11 or to Focal userspaces in a container under a Hirsute kernel (that is the example above). + +This was done for security reasons, if you want/need to re-enable un-limited userfault handling to be able to use postcopy again you'd want/need to set the control knob to one like: +$ sudo sysctl -w "vm.unprivileged_userfaultfd=1" \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1926596 b/results/classifier/gemma3:12b/kvm/1926596 new file mode 100644 index 00000000..d3d14ce1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1926596 @@ -0,0 +1,35 @@ + +qemu-monitor-event command gets stuck randomly + +We are using kvm virtualization on our servers, We use "qemu-monitor-command"(drive-backup) to take qcow2 backups and to monitor them we use "qemu-monitor-event" command +For eg:- +/usr/bin/virsh qemu-monitor-event VPSNAME --event "BLOCK_JOB_COMPLETED\|BLOCK_JOB_ERROR" --regex + +the above command stucks randomly (backup completes but still it is waiting) and because of which other vms backup are stucked until we kill that process. + +Can you suggest how can we debug this further to find the actual issue. + + +/usr/bin/virsh version + +Compiled against library: libvirt 4.5.0 +Using library: libvirt 4.5.0 +Using API: QEMU 4.5.0 +Running hypervisor: QEMU 2.0.0 + +cat /etc/os-release +NAME="CentOS Linux" +VERSION="7 (Core)" +ID="centos" +ID_LIKE="rhel fedora" +VERSION_ID="7" +PRETTY_NAME="CentOS Linux 7 (Core)" +ANSI_COLOR="0;31" +CPE_NAME="cpe:/o:centos:centos:7" +HOME_URL="https://www.centos.org/" +BUG_REPORT_URL="https://bugs.centos.org/" + +CENTOS_MANTISBT_PROJECT="CentOS-7" +CENTOS_MANTISBT_PROJECT_VERSION="7" +REDHAT_SUPPORT_PRODUCT="centos" +REDHAT_SUPPORT_PRODUCT_VERSION="7" \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1928 b/results/classifier/gemma3:12b/kvm/1928 new file mode 100644 index 00000000..4376e210 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1928 @@ -0,0 +1,66 @@ + +Run testpmd in VM on virtio-net cause qemu crash/assert +Description of problem: +Run testpmd in VM on virtio-net device(vhost-user), dpdk virtio pmd as backend. Qemu crash as: +``` +qemu-system-x86_64: ../accel/kvm/kvm-all.c:1717: kvm_irqchip_commit_routes: Assertion `ret == 0' failed. +2023-10-11 04:44:51.058+0000: shutting down, reason=crashed +``` +If revert this commit `1680542862 virtio-pci: add support for configure interrupt <Cindy Lu>`, no issue observed. +And previous hash `cd336e8346 virtio-mmio: add support for configure interrupt <Cindy Lu>` also tested fine. +Steps to reproduce: +1. Run dpdk-testpmd as vhost-user backend in HV. +``` +build/app/dpdk-testpmd -a 0000:00:00.0 -l 0-3 -n 4 --vdev 'net_vhost0,iface=/tmp/vfe-net0,queues=4' +``` +2. 
Prepare virtio device inside VM + +``` +ifconfig eth1 down +echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages +mount -t hugetlbfs nodev /mnt/huge +modprobe uio + +insmod dpdk-kmods/linux/igb_uio/igb_uio.ko + +dpdk/usertools/dpdk-devbind.py --bind=igb_uio 00:06.0 +``` +3. Run testpmd inside VM + +``` +dpdk/build/app/dpdk-testpmd -a 00:06.0 -- --txd=128 --rxd=128 --txq=4 --rxq=4 --nb-cores=1 --forward-mode=txonly --stats-period=1 +``` +4. QEMU crashed +Additional information: +Testpmd is working on polling mode, so no VQ interrupt enable. Not sure about config interrupt. + +[dpdk.log.txt](/uploads/d98d6eb959f16c24fc4ebfefbc56b98b/dpdk.log.txt) +``` +#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 +#1 0x00007fab56c7ddb5 in __GI_abort () at abort.c:79 +#2 0x00007fab56c7dc89 in __assert_fail_base (fmt=0x7fab56de65f8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5611b5df95e3 "ret == 0", + file=0x5611b5df9202 "../accel/kvm/kvm-all.c", line=1717, function=<optimized out>) at assert.c:92 +#3 0x00007fab56c8ba76 in __GI___assert_fail (assertion=0x5611b5df95e3 "ret == 0", file=0x5611b5df9202 "../accel/kvm/kvm-all.c", line=1717, + function=0x5611b5df9fd0 <__PRETTY_FUNCTION__.37261> "kvm_irqchip_commit_routes") at assert.c:101 +#4 0x00005611b5a5094b in kvm_irqchip_commit_routes (s=0x5611b7ba2150) at ../accel/kvm/kvm-all.c:1717 +#5 0x00005611b573d00a in virtio_pci_one_vector_unmask (proxy=0x5611b8d6b460, queue_no=4294967295, vector=0, msg=..., n=0x5611b8d73a10) at ../hw/virtio/virtio-pci.c:980 +#6 0x00005611b573d276 in virtio_pci_vector_unmask (dev=0x5611b8d6b460, vector=0, msg=...) at ../hw/virtio/virtio-pci.c:1045 +#7 0x00005611b567eb78 in msix_fire_vector_notifier (dev=0x5611b8d6b460, vector=0, is_masked=false) at ../hw/pci/msix.c:118 +#8 0x00005611b567ebe9 in msix_handle_mask_update (dev=0x5611b8d6b460, vector=0, was_masked=true) at ../hw/pci/msix.c:131 +#9 0x00005611b567efe3 in msix_table_mmio_write (opaque=0x5611b8d6b460, addr=12, val=0, size=4) at ../hw/pci/msix.c:222 +#10 0x00005611b59ae141 in memory_region_write_accessor (mr=0x5611b8d6ba90, addr=12, value=0x7fab3b7fd348, size=4, shift=0, mask=4294967295, attrs=...) at ../softmmu/memory.c:493 +#11 0x00005611b59ae37c in access_with_adjusted_size (addr=12, value=0x7fab3b7fd348, size=4, access_size_min=1, access_size_max=4, + access_fn=0x5611b59ae04f <memory_region_write_accessor>, mr=0x5611b8d6ba90, attrs=...) at ../softmmu/memory.c:555 +#12 0x00005611b59b1470 in memory_region_dispatch_write (mr=0x5611b8d6ba90, addr=12, data=0, op=MO_32, attrs=...) 
at ../softmmu/memory.c:1515 +#13 0x00005611b59bef55 in flatview_write_continue (fv=0x5611b7ea2860, addr=4273815564, attrs=..., ptr=0x7fab5d980028, len=4, addr1=12, l=4, mr=0x5611b8d6ba90) + at ../softmmu/physmem.c:2825 +#14 0x00005611b59bf0b8 in flatview_write (fv=0x5611b7ea2860, addr=4273815564, attrs=..., buf=0x7fab5d980028, len=4) at ../softmmu/physmem.c:2867 +#15 0x00005611b59bf46a in address_space_write (as=0x5611b6752f80 <address_space_memory>, addr=4273815564, attrs=..., buf=0x7fab5d980028, len=4) at ../softmmu/physmem.c:2963 +#16 0x00005611b59bf4d7 in address_space_rw (as=0x5611b6752f80 <address_space_memory>, addr=4273815564, attrs=..., buf=0x7fab5d980028, len=4, is_write=true) + at ../softmmu/physmem.c:2973 +#17 0x00005611b5a53435 in kvm_cpu_exec (cpu=0x5611b7e4b5f0) at ../accel/kvm/kvm-all.c:2900 +#18 0x00005611b5a560c6 in kvm_vcpu_thread_fn (arg=0x5611b7e4b5f0) at ../accel/kvm/kvm-accel-ops.c:51 +#19 0x00005611b5c42e9b in qemu_thread_start (args=0x5611b7e537d0) at ../util/qemu-thread-posix.c:505 +#20 0x00007fab580d814a in start_thread (arg=<optimized out>) at pthread_create.c:479 +#21 0x00007fab56d58dc3 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 +``` diff --git a/results/classifier/gemma3:12b/kvm/1936 b/results/classifier/gemma3:12b/kvm/1936 new file mode 100644 index 00000000..daeac91b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1936 @@ -0,0 +1,2 @@ + +Pass file descriptor to /dev/kvm device node? diff --git a/results/classifier/gemma3:12b/kvm/1947 b/results/classifier/gemma3:12b/kvm/1947 new file mode 100644 index 00000000..34d4a19f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1947 @@ -0,0 +1,21 @@ + +ACPI (Stop code 0x000000A5) BSOD During Windows XP Professional x64 Edition Setup +Description of problem: +When attempting to launch Windows XP Professional x64 Edition setup, the setup crashes with BSOD stop code 0x000000A5 and the following message: +``` +A problem has been detected and Windows has been shut down to prevent damage to your computer. + +If this is the first time you've seen this Stop error screen, restart your computer. If this screen appears again, follow these steps: + +The BIOS in this system is not fully ACPI compliant. Please contact your system vendor for an updated BIOS. If you are unable to obtain an updated BIOS or the latest BIOS supplied by your vendor is not ACPI compliant, you can turn off ACPI mode during textmode setup. To do this, press the F7 key when you are prompted to install storage drivers. The system will not notify you that the F7 key was pressed - it will silently disable ACPI and allow you to continue your installation. + +Technical information: + +*** STOP: 0x000000A5 (0x0000000000000014, 0xFFFFFA80000CBFC6, 0x000000000000008A, 0xFFFFFADFC8E31A90) +``` +Steps to reproduce: +1. Obtain a copy of Windows XP Professional x64 Edition SP2. +2. Run QEMU using the provided command line (with the name & location of your ISO in place of "Windows XP Professional x64 Edition.iso") +Additional information: +It appears the bug may be dependent on KVM, I've seen some conflicting results, but with the provided command line removing "accel=kvm" or replacing it with "accel=tcg" changes the BSOD to one about lack of disk space. +Also, a similar bug occurs with Windows 2000 SP4, but the setup will hang instead of crash. (The hang can be avoided by pressing F5 and selecting "Standard PC" instead of either ACPI option during setup.) 
diff --git a/results/classifier/gemma3:12b/kvm/1967814 b/results/classifier/gemma3:12b/kvm/1967814 new file mode 100644 index 00000000..a56ec88e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1967814 @@ -0,0 +1,23 @@ + +Ubuntu 20.04.3 - ilzlnx3g1 - virtio-scsi devs on KVM guest having miscompares on disktests when there is a failed path. + +== Comment: #63 - Halil Pasic <email address hidden> - 2022-03-28 17:33:34 == +I'm pretty confident I've figured out what is going on. + +From the guest side, the decision whether the SCSI command was completed successfully or not comes down to looking at the sense data. Prior to commit +a108557bbf ("scsi: inline sg_io_sense_from_errno() into the callers."), we don't +build sense data as a response to seeing a host status presented by the host SCSI stack (e.g. kernel). + +Thus when the kernel tells us that a given SCSI command did not get completed via +SCSI_HOST_TRANSPORT_DISRUPTED or SCSI_HOST_NO_LUN, we end up fooling the guest into believing that the command completed successfully. + +The guest kernel, and especially virtio and multipath are at no fault (AFAIU). Given these facts, it isn't all that surprising, that we end up with corruptions. + +All we have to do is do backports for QEMU (when necessary). I didn't investigate vhost-scsi -- my guess is, that it ain't affected. + +How do we want to handle the back-ports? + +== Comment: #66 - Halil Pasic <email address hidden> - 2022-04-04 05:36:33 == +This is a proposed backport containing 7 patches in mbox format. I tried to pick patches sanely, and all I had to do was basically resolving merge conflicts. + +I have to admit I have no extensive experience in doing such invasive backports, and my knowledge of the QEMU SCSI stack is very limited. I would be happy if the Ubuntu folks would have a good look at this, and if possible improve on it. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/1999 b/results/classifier/gemma3:12b/kvm/1999 new file mode 100644 index 00000000..43750d5c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/1999 @@ -0,0 +1,52 @@ + +qemu got sigabrt when using vpp in guest and dpdk for qemu +Description of problem: +When set the interface up in vpp, the qemu process is crashed with signal sigabrt. + +After some debug, i have identified that the problem lies in the following function. + +```c +static int setup_routing_entry(struct kvm *kvm, + struct kvm_irq_routing_table *rt, + struct kvm_kernel_irq_routing_entry *e, + const struct kvm_irq_routing_entry *ue) +{ + struct kvm_kernel_irq_routing_entry *ei; + int r; + u32 gsi = array_index_nospec(ue->gsi, KVM_MAX_IRQ_ROUTES); + + /* + * Do not allow GSI to be mapped to the same irqchip more than once. + * Allow only one to one mapping between GSI and non-irqchip routing. 
+ */ + hlist_for_each_entry(ei, &rt->map[gsi], link) + if (ei->type != KVM_IRQ_ROUTING_IRQCHIP || + ue->type != KVM_IRQ_ROUTING_IRQCHIP || + ue->u.irqchip.irqchip == ei->irqchip.irqchip) + return -EINVAL; + +``` + +I added some debug printk like following + +```c + hlist_for_each_entry(ei, &rt->map[gsi], link) + if (ei->type != KVM_IRQ_ROUTING_IRQCHIP || + ue->type != KVM_IRQ_ROUTING_IRQCHIP || + ue->u.irqchip.irqchip == ei->irqchip.irqchip){ + printk("ei->type: %u, KVM_IRQ_ROUTING_IRQCHIP: %u, ue->type: %u, ue->u.irqchip.irqchip: %u , ei->irqchip.irqchip: %u", ei->type, KVM_IRQ_ROUTING_IRQCHIP , ue->type, ue->u.irqchip.irqchip , ei->irqchip.irqchip); + return -EINVAL; + } +``` + +Then i got following in dmesg + +``` +[Thu Nov 23 09:29:10 2023] ei->type: 2, KVM_IRQ_ROUTING_IRQCHIP: 1, ue->type: 1, ue->u.irqchip.irqchip: 2 , ei->irqchip.irqchip: 4276097024 +[Thu Nov 23 09:29:10 2023] ei->type: 2, KVM_IRQ_ROUTING_IRQCHIP: 1, ue->type: 1, ue->u.irqchip.irqchip: 2 , ei->irqchip.irqchip: 4276097024 +``` +Steps to reproduce: +This is a kube-ovn + dpdk env, not easy to reproduce now.. +Additional information: +* I also file a bug on kernel.org: https://bugzilla.kernel.org/show_bug.cgi?id=218177 +* the libvirt xml file is also attached [instance.xml](/uploads/05b391046fdc1263fd7e63bcfab6f4fb/instance.xml) diff --git a/results/classifier/gemma3:12b/kvm/2003 b/results/classifier/gemma3:12b/kvm/2003 new file mode 100644 index 00000000..ad233818 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2003 @@ -0,0 +1,16 @@ + +Windows guest boot happens blue screen and crash by using "-cpu Skylake-Server,+la57,phys-bits=52" +Description of problem: +We are verifying 5-level paging enabling on Windows guest. After creating Windows guest, the system boot caused blue screen and no screen interface response. + +Same QEMU parameter without **+la57,phys-bits=52** (i.e., `./qemu-system-x86_64 -accel kvm -smp 4 -m 4096 -machine q35 -drive file=Winvm5l_host5l_ept5_1698034398,if=none,id=virtio-disk0 -device virtio-blk-pci,drive=virtio-disk0,bootindex=0 -cpu Skylake-Server -monitor pty -daemonize -vnc :40541 -device virtio-net-pci,netdev=nic0,mac=00:5b:0b:59:0d:26 -netdev tap,id=nic0,br=virbr0,helper=/usr/local/libexec/qemu-bridge-helper,vhost=on`), the same Windows image can be booted successfully. Initially suspected this new QEMU release does not support 5-level paging related features. +Steps to reproduce: +1. Create guest by using the command + +``` +./qemu-system-x86_64 -accel kvm -smp 4 -m 4096 -machine q35 -drive file=Winvm5l_host5l_ept5_1698034398,if=none,id=virtio-disk0 -device virtio-blk-pci,drive=virtio-disk0,bootindex=0 -cpu Skylake-Server,+la57,phys-bits=52 -monitor pty -daemonize -vnc :40541 -device virtio-net-pci,netdev=nic0,mac=00:5b:0b:59:0d:26 -netdev tap,id=nic0,br=virbr0,helper=/usr/local/libexec/qemu-bridge-helper,vhost=on +``` +Additional information: +Suspected to be a QEMU regression issue, the first bad commit id: 14f5a7bae4cb5ca45a03e16b5bb0c5d766fd51b7. 
+ +Latest successful version commit id: cea3ea670fe265421131aad90c36fbb87bc4d206 diff --git a/results/classifier/gemma3:12b/kvm/2006 b/results/classifier/gemma3:12b/kvm/2006 new file mode 100644 index 00000000..093c87d0 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2006 @@ -0,0 +1,43 @@ + +migrating failed with rcu_preempt message on proxmox 8 +Description of problem: +when i migrate the VM from one host to another, it fails and give messages: + + ``` +[ 584.109502] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks: +[ 584.109534] rcu: 1-...!: (0 ticks this GP) idle=1408/0/0x0 softirq=8428/8428 fqs=0 (false positive?) +[ 584.109556] (detected by 0, t=5252 jiffies, g=2953, q=74 ncpus=2) +[ 584.109561] Sending NMI from CPU 0 to CPUs 1: +[ 584.109587] NMI backtrace for cpu 1 skipped: idling at native_safe_halt+0xb/0x10 +[ 584.110564] rcu: rcu_preempt kthread timer wakeup didn't happen for 5251 jiffies! g2953 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 +[ 584.110585] rcu: Possible timer handling issue on cpu=1 timer-softirq=8006 +[ 584.110597] rcu: rcu_preempt kthread starved for 5252 jiffies! g2953 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=1 +[ 584.110614] rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior. +[ 584.110645] rcu: RCU grace-period kthread stack dump: +[ 584.110658] task:rcu_preempt state:I stack:0 pid:15 ppid:2 flags:0x00004000 +[ 584.110667] Call Trace: +[ 584.110672] <TASK> +[ 584.110688] __schedule+0x351/0xa20 +[ 584.110699] ? rcu_gp_cleanup+0x480/0x480 +[ 584.110704] schedule+0x5d/0xe0 +[ 584.110705] schedule_timeout+0x94/0x150 +[ 584.110709] ? __bpf_trace_tick_stop+0x10/0x10 +[ 584.110714] rcu_gp_fqs_loop+0x141/0x4c0 +[ 584.110717] rcu_gp_kthread+0xd0/0x190 +[ 584.110720] kthread+0xe9/0x110 +[ 584.110725] ? kthread_complete_and_exit+0x20/0x20 +[ 584.110728] ret_from_fork+0x22/0x30 +[ 584.110735] </TASK> +[ 584.110736] rcu: Stack dump where RCU GP kthread last ran: +[ 584.110747] Sending NMI from CPU 0 to CPUs 1: +[ 584.110757] NMI backtrace for cpu 1 skipped: idling at native_safe_halt+0xb/0x10 + + ``` + +we can reproduce on our R630 cluster easily, but it is OK on R730 cluster and R740 cluster. +Steps to reproduce: +1. create and run an VM +2. migrate the vm to other host +3. it failed with message +Additional information: +i downgrade the pve-qemu-kvm from 8.1.2-4 to 8.0.2-3, same problem. diff --git a/results/classifier/gemma3:12b/kvm/2007 b/results/classifier/gemma3:12b/kvm/2007 new file mode 100644 index 00000000..0a5627b1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2007 @@ -0,0 +1,30 @@ + +Unable to update APIC_TPR when x2APIC is enabled and -global kvm-pit.lost_tick_policy=discard parameter provided +Description of problem: +I am developing a custom OS and I wanted to implement x2APIC support. I was able to enable x2APIC, read and write some registers, like APIC_VER and APIC_SIVR. Everything looks good, except that I cannot update APIC_TPR register. Reading it always returns 0. The code I wrote works properly on bare metal. Below some observations: + +Scenario 1: +1. Enable x2APIC +2. Write to CR8 - success +3. Read from CR8 - gives correct value +4. Read from APIC_TPR - gives correct value + +Scenario 2: +1. Enable x2APIC +2. Read from APIC_TPR - gives 0 +3. Write to APIC_TPR +4. Read from APIC_TPR - gives 0 again + +Scenario 3: +1. Initialize APIC (LAPIC or xAPIC) +2. Write to APIC_TPR +3. Read from APIC_TPR - gives correct value +4. Switch to x2APIC +5. Read from APIC_TPR - gives correct value stored in pt. 2 +6. 
Write to APIC_TPR +7. Read from APIC_TPR - gives values stored in pt.2, not in point 6! + +Looks like APIC_TPR is stuck at value stored there before switching to x2APIC and it cannot be updated with MSR. Only update CR8 works. +I have checked parameters I passed to qemu. After removing `-global kvm-pit.lost_tick_policy=discard` problem is gone and APIC_TPR is updated correctly. +Additional information: +Please let me know if you need additional information. diff --git a/results/classifier/gemma3:12b/kvm/2025586 b/results/classifier/gemma3:12b/kvm/2025586 new file mode 100644 index 00000000..879ad6d3 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2025586 @@ -0,0 +1,44 @@ + +Align the iov length to the logical block size + +[Impact] +When the logical block size of the virtual block device is smaller than the block device it is backed by on the host, +qemu encounters a situation where it needs to bounce unaligned buffers during the use of direct IO. +In the past, the logical block size happened to align with the memory page offset, leading qemu to mistakenly use the memory offset as the block size. +However, a kernel commit b1a000d3b8ec resolved this issue by separating memory alignment from the logical block size. +As a result, qemu now has an incorrect understanding of the minimum vector size. + +[Fix] +Upstream commit 25474d90aa50 fixed this issue. +========== +Author: Keith Busch <email address hidden> +CommitDate: Fri Sep 30 18:43:44 2022 +0200 + + block: use the request length for iov alignment + + An iov length needs to be aligned to the logical block size, which may + be larger than the memory alignment. + + Tested-by: Jens Axboe <email address hidden> + Signed-off-by: Keith Busch <email address hidden> + Message-Id: <email address hidden> + Reviewed-by: Kevin Wolf <email address hidden> + Signed-off-by: Kevin Wolf <email address hidden> +========== + +[Test Plan] +1. Get a ubuntu image and convert it to RAW format +wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64-disk-kvm.img +qemu-img convert jammy-server-cloudimg-amd64-disk-kvm.img jammy-server-cloudimg-amd64-disk-kvm.raw +2. Set up a loop device with RAW image +losetup -b 4096 -f jammy-server-cloudimg-amd64-disk-kvm.raw +3. Get loop device number by `losetup -a` command +4. Start the virtual machine +qemu-system-x86_64 -enable-kvm -drive file=/dev/loopX,format=raw,cache=none --nographic + +[Where problems could occur] +The patch addressed the issue of misusing the memory offset as the block size. +This problem only occurred when the cache option was set to "none" and the Linux kernel being used had the commit b1a000d3b8ec. +However, it is worth noting that the patch also worked effectively with older kernels. + +[Other Info] \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/2037 b/results/classifier/gemma3:12b/kvm/2037 new file mode 100644 index 00000000..1782a2d9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2037 @@ -0,0 +1,16 @@ + +CPUID.07H:EBX.intel-pt not supported warning info shown in terminal when start guest with -cpu qemu64,+intel-pt +Description of problem: +When launch guest with qemu-system-x86_64 with parameter -cpu host,+intel-pt, it will show warning info in terminal : +qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.intel-pt [bit 25] 'intel_pt' can not be found in guest's CPU flag. +While host already support intel_pt. +Steps to reproduce: +1. Run the above QEMU command. 
+Additional information: +This issue was observed with kernel 5.13 + +qemu-system-x86_64 -accel kvm -m 4096 -smp 4 -cpu host,+intel-pt,min-level=0x14 +qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.intel-pt [bit 25] +qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.intel-pt [bit 25] +qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.intel-pt [bit 25] +qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.intel-pt [bit 25] diff --git a/results/classifier/gemma3:12b/kvm/2041 b/results/classifier/gemma3:12b/kvm/2041 new file mode 100644 index 00000000..130135f4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2041 @@ -0,0 +1,28 @@ + +RISC-V KVM build error with Alpine Linux +Description of problem: +Native build of qemu fails on alpine linux riscv64. +Steps to reproduce: +1. install alpine on riscv or set up a container with qemu-riscv64 +2. build qemu 8.1.3 from source +3. +Additional information: +``` +kvm.c:(.text+0xc50): undefined reference to `strerrorname_np' +/usr/lib/gcc/riscv64-alpine-linux-musl/13.2.1/../../../../riscv64-alpine-linux-musl/bin/ld: libqemu-riscv64-softmmu.fa.p/target_riscv_kvm.c.o: in function `.L0 ': +kvm.c:(.text+0xcda): undefined reference to `strerrorname_np' +/usr/lib/gcc/riscv64-alpine-linux-musl/13.2.1/../../../../riscv64-alpine-linux-musl/bin/ld: libqemu-riscv64-softmmu.fa.p/target_riscv_kvm.c.o: in function `.L111': +kvm.c:(.text+0xd02): undefined reference to `strerrorname_np' +``` + +The `strerrorname_np` is a GNU specific non-portable function (that what _np stands for). This is the only place where it is use in the entire qemu codebase: +``` +$ rg strerrorname_np +target/riscv/kvm/kvm-cpu.c +837: strerrorname_np(errno)); +899: strerrorname_np(errno)); +909: strerrorname_np(errno)); +932: strerrorname_np(errno)); +``` + +Seems like other places uses `strerror(errno)`. 
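Editorial note on the report above: a minimal portability sketch (an assumption for illustration, not the report's patch or QEMU's actual fix). Since `strerrorname_np()` is a glibc-only extension, a guarded fallback to `strerror(errno)` — which the reporter points out the rest of the code base already uses — is one way to keep the build working on musl-based systems such as Alpine riscv64. The `__GLIBC__` guard and the shim function are hypothetical names chosen here, not taken from QEMU.

```c
/* Hypothetical shim: use strerrorname_np() where glibc provides it
 * (assumes glibc >= 2.32), otherwise fall back to strerror(), as on
 * musl-based Alpine where the build in the report fails to link. */
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>

#if !defined(__GLIBC__)
/* musl has no strerrorname_np(); return the textual description instead. */
static const char *strerrorname_np(int errnum)
{
    return strerror(errnum);
}
#endif

int main(void)
{
    errno = EINVAL;
    /* On glibc this prints the error name ("EINVAL");
     * on musl it prints the message ("Invalid argument"). */
    printf("kvm ioctl failed: %s\n", strerrorname_np(errno));
    return 0;
}
```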
diff --git a/results/classifier/gemma3:12b/kvm/2046 b/results/classifier/gemma3:12b/kvm/2046 new file mode 100644 index 00000000..cf68bd13 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2046 @@ -0,0 +1,2 @@ + +live migration error : qemu-kvm: Missing section footer for 0000:00:01.3/piix4_pm diff --git a/results/classifier/gemma3:12b/kvm/2069 b/results/classifier/gemma3:12b/kvm/2069 new file mode 100644 index 00000000..dce41e7b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2069 @@ -0,0 +1,353 @@ + +[virtio_blk:iothread-vq-mapping]Qemu core dump when checking the deleted device via "info qtree" +Description of problem: +[virtio_blk:iothread-vq-mapping]Qemu core dump when checking the deleted device via "info qtree" +Steps to reproduce: +1.Start guest with qemu cmds: \ + qemu-system-x86_64 \ + -S \ + -name 'avocado-vt-vm1' \ + -machine pc,memory-backend=mem-machine_mem \ + -nodefaults \ + -device '{"driver": "VGA", "bus": "pci.0", "addr": "0x2"}' \ + -m 30720 \ + -object '{"size": 32212254720, "id": "mem-machine_mem", "qom-type": "memory-backend-ram"}' \ + -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2 \ + -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \ + -chardev socket,path=/tmp/monitor-qmpmonitor1-20240104-043347-5Miq4hMP,wait=off,server=on,id=qmp_id_qmpmonitor1 \ + -mon chardev=qmp_id_qmpmonitor1,mode=control \ + -chardev socket,path=tmp/monitor-catch_monitor-20240104-043347-5Miq4hMP,wait=off,server=on,id=qmp_id_catch_monitor \ + -mon chardev=qmp_id_catch_monitor,mode=control \ + -device '{"ioport": 1285, "driver": "pvpanic", "id": "id3KTLMV"}' \ + -chardev socket,path=/tmp/serial-serial0-20240104-043347-5Miq4hMP,wait=off,server=on,id=chardev_serial0 \ + -device '{"id": "serial0", "driver": "isa-serial", "chardev": "chardev_serial0"}' \ + -chardev socket,id=seabioslog_id_20240104-043347-5Miq4hMP,path=/tmp/seabios-20240104-043347-5Miq4hMP,server=on,wait=off \ + -device isa-debugcon,chardev=seabioslog_id_20240104-043347-5Miq4hMP,iobase=0x402 \ + -device '{"driver": "ich9-usb-ehci1", "id": "usb1", "addr": "0x1d.0x7", "multifunction": true, "bus": "pci.0"}' \ + -device '{"driver": "ich9-usb-uhci1", "id": "usb1.0", "multifunction": true, "masterbus": "usb1.0", "addr": "0x1d.0x0", "firstport": 0, "bus": "pci.0"}' \ + -device '{"driver": "ich9-usb-uhci2", "id": "usb1.1", "multifunction": true, "masterbus": "usb1.0", "addr": "0x1d.0x2", "firstport": 2, "bus": "pci.0"}' \ + -device '{"driver": "ich9-usb-uhci3", "id": "usb1.2", "multifunction": true, "masterbus": "usb1.0", "addr": "0x1d.0x4", "firstport": 4, "bus": "pci.0"}' \ + -device '{"driver": "usb-tablet", "id": "usb-tablet1", "bus": "usb1.0", "port": "1"}' \ + -object '{"qom-type": "iothread", "id": "t1"}' \ + -object '{"qom-type": "iothread", "id": "t2"}' \ + -object '{"qom-type": "iothread", "id": "t3"}' \ + -object '{"qom-type": "iothread", "id": "t4"}' \ + -blockdev '{"node-name": "file_image1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/kvm_autotest_root/images/rhel9-virtio.qcow2", "cache": {"direct": true, "no-flush": false}}' \ + -blockdev '{"node-name": "drive_image1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_image1"}' \ + -device '{"driver": "virtio-blk-pci", "id": "image1", "drive": "drive_image1", "bootindex": 0, "write-cache": "on", "bus": "pci.0", "addr": "0x3"}' \ + -blockdev '{"node-name": "file_stg1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": 
"/home/kvm_autotest_root/images/stg1.qcow2", "cache": {"direct": true, "no-flush": false}}' \ + -blockdev '{"node-name": "drive_stg1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_stg1"}' \ + -device '{"driver": "virtio-blk-pci", "id": "stg1", "drive": "drive_stg1", "bootindex": 1, "write-cache": "on", "serial": "stg1", "bus": "pci.0", "addr": "0x4", "iothread-vq-mapping": [{"iothread": "t2"}, {"iothread": "t3"}]}' \ + -blockdev '{"node-name": "file_stg2", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/kvm_autotest_root/images/stg2.qcow2", "cache": {"direct": true, "no-flush": false}}' \ + -blockdev '{"node-name": "drive_stg2", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_stg2"}' \ + -device '{"driver": "virtio-blk-pci", "id": "stg2", "drive": "drive_stg2", "bootindex": 2, "write-cache": "on", "serial": "stg2", "num-queues": 6, "iothread-vq-mapping": [{"iothread": "t1", "vqs": [0, 1, 2]}, {"iothread": "t2", "vqs": [3]}, {"iothread": "t4", "vqs": [4, 5]}], "bus": "pci.0", "addr": "0x5"}' \ + -device '{"driver": "virtio-net-pci", "mac": "9a:5b:6c:5f:5b:5b", "id": "iddNmpYv", "netdev": "idG9Emyl", "bus": "pci.0", "addr": "0x6"}' \ + -netdev '{"id": "idG9Emyl", "type": "tap", "vhost": true}' \ + -vnc :0 \ + -rtc base=utc,clock=host,driftfix=slew \ + -boot menu=off,order=cdn,once=c,strict=off \ + -enable-kvm \ + +2. Continue VM: \ + {"execute": "cont"} \ + +3. Check disk info before hot unplug: \ + (guest)#ls /dev/[vhs]d* | grep -v [0-9]$ \ + +4. Unplug device from vm: \ + {"execute": "device_del", "arguments": {"id": "stg1"}} \ + {"timestamp": {"seconds": 1704360854, "microseconds": 751289}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/stg1/virtio-backend"}} \ + {"timestamp": {"seconds": 1704360854, "microseconds": 752078}, "event": "DEVICE_DELETED", "data": {"device": "stg1", "path": "/machine/peripheral/stg1"}} \ + +5. Check device info via "info qtree": \ + {"execute": "human-monitor-command", "arguments": {"command-line": "info qtree"}} \ + +Actual Result: \ + After step5, qemu core dump with info: \ + qemu-system-x86_64: ../qapi/string-output-visitor.c:316: start_list: Assertion `sov->list_mode == LM_NONE' failed. \ + /tmp/aexpect_fNRmaiS3/aexpect-okx056xs.sh: line 1: 480254 Aborted (core dumped) MALLOC_PERTURB_=1 qemu-system-x86_64 -S -name 'avocado-vt-vm1' -machine pc,memory-backend=mem-machine_mem ... 
\ + +Coredump info as bellow: \ + #coredumpctl debug 480254 \ + Stack trace of thread 480254: + #0 0x00007f9397ea365c __pthread_kill_implementation (libc.so.6 + 0xa365c) \ + #1 0x00007f9397e54d06 __GI_raise (libc.so.6 + 0x54d06) \ + #2 0x00007f9397e287f3 __GI_abort (libc.so.6 + 0x287f3) \ + #3 0x00007f9397e2871b __assert_fail_base (libc.so.6 + 0x2871b) \ + #4 0x00007f9397e4dca6 __assert_fail (libc.so.6 + 0x4dca6) \ + #5 0x000056472e810e0d start_list (qemu-system-x86_64 + 0xa92e0d) \ + #6 0x000056472e80acb9 visit_start_list (qemu-system-x86_64 + 0xa8ccb9) \ + #7 0x000056472e75e9c0 visit_type_uint16List (qemu-system-x86_64 + 0x9e09c0) \ + #8 0x000056472e7e9955 visit_type_IOThreadVirtQueueMapping_members (qemu-system-x86_64 + 0xa6b955) \ + #9 0x000056472e7e9a1b visit_type_IOThreadVirtQueueMapping (qemu-system-x86_64 + 0xa6ba1b) \ + #10 0x000056472e7e9b0d visit_type_IOThreadVirtQueueMappingList (qemu-system-x86_64 + 0xa6bb0d) \ + #11 0x000056472e1519b2 get_iothread_vq_mapping_list (qemu-system-x86_64 + 0x3d39b2) \ + #12 0x000056472e629d0f field_prop_get (qemu-system-x86_64 + 0x8abd0f) \ + #13 0x000056472e635b24 object_property_get (qemu-system-x86_64 + 0x8b7b24) \ + #14 0x000056472e6368b3 object_property_print (qemu-system-x86_64 + 0x8b88b3) \ + #15 0x000056472e38f97a qdev_print_props (qemu-system-x86_64 + 0x61197a) \ + #16 0x000056472e38fc9f qdev_print (qemu-system-x86_64 + 0x611c9f) \ + #17 0x000056472e38fdd9 qbus_print (qemu-system-x86_64 + 0x611dd9) \ + #18 0x000056472e38fd03 qdev_print (qemu-system-x86_64 + 0x611d03) \ + #19 0x000056472e38fdd9 qbus_print (qemu-system-x86_64 + 0x611dd9) \ + #20 0x000056472e38fd03 qdev_print (qemu-system-x86_64 + 0x611d03) \ + #21 0x000056472e38fdd9 qbus_print (qemu-system-x86_64 + 0x611dd9) \ + #22 0x000056472e38fe26 hmp_info_qtree (qemu-system-x86_64 + 0x611e26) \ + #23 0x000056472e3ed6ed handle_hmp_command_exec (qemu-system-x86_64 + 0x66f6ed) \ + #24 0x000056472e3ed91a handle_hmp_command (qemu-system-x86_64 + 0x66f91a) \ + #25 0x000056472e3eef02 qmp_human_monitor_command (qemu-system-x86_64 + 0x670f02) \ + #26 0x000056472e7cc89b qmp_marshal_human_monitor_command (qemu-system-x86_64 + 0xa4e89b) \ + #27 0x000056472e8117d0 do_qmp_dispatch_bh (qemu-system-x86_64 + 0xa937d0) \ + #28 0x000056472e83be78 aio_bh_call (qemu-system-x86_64 + 0xabde78) \ + #29 0x000056472e83bf93 aio_bh_poll (qemu-system-x86_64 + 0xabdf93) \ + #30 0x000056472e81eb3e aio_dispatch (qemu-system-x86_64 + 0xaa0b3e) \ + #31 0x000056472e83c3d2 aio_ctx_dispatch (qemu-system-x86_64 + 0xabe3d2) \ + #32 0x00007f939829ff4f g_main_dispatch (libglib-2.0.so.0 + 0x54f4f) \ + #33 0x000056472e83d8a8 glib_pollfds_poll (qemu-system-x86_64 + 0xabf8a8) \ + #34 0x000056472e83d925 os_host_main_loop_wait (qemu-system-x86_64 + 0xabf925) \ + #35 0x000056472e83da33 main_loop_wait (qemu-system-x86_64 + 0xabfa33) \ + #36 0x000056472e396150 qemu_main_loop (qemu-system-x86_64 + 0x618150) \ + #37 0x000056472e628b7f qemu_default_main (qemu-system-x86_64 + 0x8aab7f) \ + #38 0x000056472e628bba main (qemu-system-x86_64 + 0x8aabba) \ + #39 0x00007f9397e3feb0 __libc_start_call_main (libc.so.6 + 0x3feb0) \ + #40 0x00007f9397e3ff60 __libc_start_main_impl (libc.so.6 + 0x3ff60) \ + #41 0x000056472e08e435 _start (qemu-system-x86_64 + 0x310435) \ + \ + Stack trace of thread 480255: \ + #0 0x00007f9397e3ee5d syscall (libc.so.6 + 0x3ee5d) \ + #1 0x000056472e82343c qemu_futex_wait (qemu-system-x86_64 + 0xaa543c) \ + #2 0x000056472e823623 qemu_event_wait (qemu-system-x86_64 + 0xaa5623) \ + #3 0x000056472e830d03 call_rcu_thread 
(qemu-system-x86_64 + 0xab2d03) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480258: \ + #0 0x00007f9397f429be __ppoll (libc.so.6 + 0x1429be) \ + #1 0x000056472e841cf0 qemu_poll_ns (qemu-system-x86_64 + 0xac3cf0) \ + #2 0x000056472e81f95f fdmon_poll_wait (qemu-system-x86_64 + 0xaa195f) \ + #3 0x000056472e81f29b aio_poll (qemu-system-x86_64 + 0xaa129b) \ + #4 0x000056472e67440c iothread_run (qemu-system-x86_64 + 0x8f640c) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480266: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480267: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480257: \ + #0 0x00007f9397f429be __ppoll (libc.so.6 + 0x1429be) \ + #1 0x000056472e841cf0 qemu_poll_ns (qemu-system-x86_64 + 0xac3cf0) \ + #2 0x000056472e81f95f fdmon_poll_wait (qemu-system-x86_64 + 0xaa195f) \ + #3 0x000056472e81f29b aio_poll (qemu-system-x86_64 + 0xaa129b) \ + #4 0x000056472e67440c iothread_run (qemu-system-x86_64 + 0x8f640c) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480256: \ + #0 0x00007f9397f429be __ppoll (libc.so.6 + 0x1429be) \ + #1 0x000056472e841d87 qemu_poll_ns (qemu-system-x86_64 + 0xac3d87) \ + #2 0x000056472e81f95f fdmon_poll_wait (qemu-system-x86_64 + 0xaa195f) \ + #3 0x000056472e81f29b aio_poll (qemu-system-x86_64 + 0xaa129b) \ + #4 0x000056472e67440c iothread_run (qemu-system-x86_64 + 0x8f640c) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480260: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0fb4 __pthread_cond_wait_common (libc.so.6 + 0xa0fb4) \ + #2 0x000056472e823041 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0xaa5041) \ + #3 0x000056472e8230dc qemu_cond_timedwait_impl (qemu-system-x86_64 + 0xaa50dc) \ + #4 0x000056472e840595 worker_thread (qemu-system-x86_64 + 0xac2595) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 
__clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480264: \ + #0 0x00007f9397f428bf __GI___poll (libc.so.6 + 0x1428bf) \ + #1 0x00007f93982f51fc g_main_context_poll (libglib-2.0.so.0 + 0xaa1fc) \ + #2 0x00007f939829f5a3 g_main_loop_run (libglib-2.0.so.0 + 0x545a3) \ + #3 0x000056472e67443f iothread_run (qemu-system-x86_64 + 0x8f643f) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480274: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480337: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0fb4 __pthread_cond_wait_common (libc.so.6 + 0xa0fb4) \ + #2 0x000056472e823041 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0xaa5041) \ + #3 0x000056472e8230dc qemu_cond_timedwait_impl (qemu-system-x86_64 + 0xaa50dc) \ + #4 0x000056472e840595 worker_thread (qemu-system-x86_64 + 0xac2595) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480273: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480358: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0fb4 __pthread_cond_wait_common (libc.so.6 + 0xa0fb4) \ + #2 0x000056472e823041 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0xaa5041) \ + #3 0x000056472e8230dc qemu_cond_timedwait_impl (qemu-system-x86_64 + 0xaa50dc) \ + #4 0x000056472e840595 worker_thread (qemu-system-x86_64 + 0xac2595) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480276: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0cb0 __pthread_cond_wait_common (libc.so.6 + 0xa0cb0) \ + #2 0x000056472e822f8e qemu_cond_wait_impl (qemu-system-x86_64 + 0xaa4f8e) \ + #3 0x000056472e0c6f39 vnc_worker_thread_loop (qemu-system-x86_64 + 0x348f39) \ + #4 0x000056472e0c7544 vnc_worker_thread (qemu-system-x86_64 + 0x349544) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480259: \ + #0 0x00007f9397f429be __ppoll (libc.so.6 + 0x1429be) \ + #1 0x000056472e841cf0 qemu_poll_ns 
(qemu-system-x86_64 + 0xac3cf0) \ + #2 0x000056472e81f95f fdmon_poll_wait (qemu-system-x86_64 + 0xaa195f) \ + #3 0x000056472e81f29b aio_poll (qemu-system-x86_64 + 0xaa129b) \ + #4 0x000056472e67440c iothread_run (qemu-system-x86_64 + 0x8f640c) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480357: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0fb4 __pthread_cond_wait_common (libc.so.6 + 0xa0fb4) \ + #2 0x000056472e823041 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0xaa5041) \ + #3 0x000056472e8230dc qemu_cond_timedwait_impl (qemu-system-x86_64 + 0xaa50dc) \ + #4 0x000056472e840595 worker_thread (qemu-system-x86_64 + 0xac2595) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912)\ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480268: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480269: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480353: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0fb4 __pthread_cond_wait_common (libc.so.6 + 0xa0fb4) \ + #2 0x000056472e823041 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0xaa5041) \ + #3 0x000056472e8230dc qemu_cond_timedwait_impl (qemu-system-x86_64 + 0xaa50dc) \ + #4 0x000056472e840595 worker_thread (qemu-system-x86_64 + 0xac2595) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480271: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480354: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0fb4 __pthread_cond_wait_common (libc.so.6 + 0xa0fb4) \ + #2 0x000056472e823041 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0xaa5041) \ + #3 0x000056472e8230dc qemu_cond_timedwait_impl (qemu-system-x86_64 + 
0xaa50dc) \ + #4 0x000056472e840595 worker_thread (qemu-system-x86_64 + 0xac2595) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480356: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0fb4 __pthread_cond_wait_common (libc.so.6 + 0xa0fb4) \ + #2 0x000056472e823041 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0xaa5041) \ + #3 0x000056472e8230dc qemu_cond_timedwait_impl (qemu-system-x86_64 + 0xaa50dc) \ + #4 0x000056472e840595 worker_thread (qemu-system-x86_64 + 0xac2595) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480355: \ + #0 0x00007f9397e9e4aa __futex_abstimed_wait_common64 (libc.so.6 + 0x9e4aa) \ + #1 0x00007f9397ea0fb4 __pthread_cond_wait_common (libc.so.6 + 0xa0fb4) \ + #2 0x000056472e823041 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0xaa5041) \ + #3 0x000056472e8230dc qemu_cond_timedwait_impl (qemu-system-x86_64 + 0xaa50dc) \ + #4 0x000056472e840595 worker_thread (qemu-system-x86_64 + 0xac2595) \ + #5 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #6 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #7 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480270: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480272: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + \ + Stack trace of thread 480265: \ + #0 0x00007f9397e3ec6b ioctl (libc.so.6 + 0x3ec6b) \ + #1 0x000056472e619a24 kvm_vcpu_ioctl (qemu-system-x86_64 + 0x89ba24) \ + #2 0x000056472e619236 kvm_cpu_exec (qemu-system-x86_64 + 0x89b236) \ + #3 0x000056472e61c0fc kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x89e0fc) \ + #4 0x000056472e8237d6 qemu_thread_start (qemu-system-x86_64 + 0xaa57d6) \ + #5 0x00007f9397ea1912 start_thread (libc.so.6 + 0xa1912) \ + #6 0x00007f9397e3f450 __clone3 (libc.so.6 + 0x3f450) \ + ELF object binary architecture: AMD x86-64 \ diff --git a/results/classifier/gemma3:12b/kvm/2071 b/results/classifier/gemma3:12b/kvm/2071 new file mode 100644 index 00000000..370d71c7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2071 @@ -0,0 +1,113 @@ + +Segfault when starting a guest with spice configured to listen on a unix socket +Description of problem: +Guest crash immediately when spice is configured to listen on a unix socket. +Steps to reproduce: +1. 
Configure spice to listen on a unix socket +2. Start the guest +Additional information: +Here's the log when I start the guest: + +``` +[root@localhost ~]# virsh start fedora-waydroid +error: Failed to start domain 'fedora-waydroid' +error: internal error: qemu unexpectedly closed the monitor +``` +Here's the relevant output in journald: + +`SECCOMP auid=4294967295 uid=107 gid=107 ses=4294967295 pid=17930 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=31 arch=c000003e syscall=56 compat=0 ip=0x7f7b95459397 code=0x80000000` + +<details><summary>Full journald</summary> + +``` +Jan 04 11:59:03 localhost polkitd[1436]: Registered Authentication Agent for unix-process:17895:5747660 (system bus name :1.160 [/usr/bin/pkttyagent --process 17895 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) +Jan 04 11:59:03 localhost audit[1595]: VIRT_MACHINE_ID pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 vm-ctx=+107:+107 img-ctx=+107:+107 model=dac exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost virtlogd[1659]: Client hit max requests limit 1. This may result in keep-alive timeouts. Consider tuning the max_client_requests server parameter +Jan 04 11:59:03 localhost virtlogd[1659]: Client hit max requests limit 1. This may result in keep-alive timeouts. Consider tuning the max_client_requests server parameter +Jan 04 11:59:03 localhost polkitd[1436]: Unregistered Authentication Agent for unix-process:17895:5747660 (system bus name :1.160, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus) +Jan 04 11:59:03 localhost audit: ANOM_PROMISCUOUS dev=vnet12 prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295 +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=net reason=open vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 net=52:54:00:72:c3:92 path="/dev/net/tun" rdev=0A:C8 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=net reason=open vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 net=52:54:00:72:c3:92 path="/dev/vhost-net" rdev=0A:EE exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? 
res=success' +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2422] manager: (vnet12): new Tun device (/org/freedesktop/NetworkManager/Devices/19) +Jan 04 11:59:03 localhost kernel: br-dmz: port 4(vnet12) entered blocking state +Jan 04 11:59:03 localhost kernel: br-dmz: port 4(vnet12) entered disabled state +Jan 04 11:59:03 localhost kernel: vnet12: entered allmulticast mode +Jan 04 11:59:03 localhost kernel: vnet12: entered promiscuous mode +Jan 04 11:59:03 localhost kernel: br-dmz: port 4(vnet12) entered blocking state +Jan 04 11:59:03 localhost kernel: br-dmz: port 4(vnet12) entered forwarding state +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2468] device (vnet12): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2470] device (vnet12): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2473] device (vnet12): Activation: starting connection 'vnet12' (abcdefgh-ijkl-mnop-qrst-uvwx12345679) +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2478] device (vnet12): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2479] device (vnet12): state change: prepare -> config (reason 'none', sys-iface-state: 'external') +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2480] device (vnet12): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2480] device (br-dmz): bridge port vnet12 was attached +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2480] device (vnet12): Activation: connection 'vnet12' enslaved, continuing activation +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2481] device (vnet12): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') +Jan 04 11:59:03 localhost systemd-machined[1368]: New machine qemu-10-fedora-waydroid. +Jan 04 11:59:03 localhost systemd[1]: Started machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope - Virtual Machine qemu-10-fedora-waydroid. +Jan 04 11:59:03 localhost systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... +Jan 04 11:59:03 localhost audit: BPF prog-id=112 op=LOAD +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=deny vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=all exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/null" rdev=01:03 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? 
res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/full" rdev=01:07 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/zero" rdev=01:05 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/random" rdev=01:08 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/urandom" rdev=01:09 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/ptmx" rdev=05:02 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/kvm" rdev=0A:E8 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=major category=pty maj=88 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/dri/by-path/pci-0000:00:02.0-render" rdev=E2:80 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? 
res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=cgroup reason=allow vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 cgroup="/sys/fs/cgroup/machine.slice/machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope/" class=path path="/dev/urandom" rdev=01:09 acl=rw exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. +Jan 04 11:59:03 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2796] device (vnet12): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2797] device (vnet12): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.2799] device (vnet12): Activation: successful, device activated. +Jan 04 11:59:03 localhost systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive. +Jan 04 11:59:03 localhost audit[17930]: SECCOMP auid=4294967295 uid=107 gid=107 ses=4294967295 pid=17930 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=31 arch=c000003e syscall=56 compat=0 ip=0x7f7b95459397 code=0x80000000 +Jan 04 11:59:03 localhost audit[17930]: ANOM_ABEND auid=4294967295 uid=107 gid=107 ses=4294967295 pid=17930 comm="qemu-system-x86" exe="/usr/bin/qemu-system-x86_64" sig=31 res=1 +Jan 04 11:59:03 localhost audit: BPF prog-id=113 op=LOAD +Jan 04 11:59:03 localhost audit: BPF prog-id=114 op=LOAD +Jan 04 11:59:03 localhost audit: BPF prog-id=115 op=LOAD +Jan 04 11:59:03 localhost systemd[1]: Started systemd-coredump@3-17978-0.service - Process Core Dump (PID 17978/UID 0). +Jan 04 11:59:03 localhost audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@3-17978-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost systemd-coredump[17980]: Resource limits disable core dumping for process 17930 (qemu-system-x86). +Jan 04 11:59:03 localhost systemd-coredump[17980]: [🡕] Process 17930 (qemu-system-x86) of user 107 terminated abnormally without generating a coredump. +Jan 04 11:59:03 localhost systemd[1]: systemd-coredump@3-17978-0.service: Deactivated successfully. +Jan 04 11:59:03 localhost audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@3-17978-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' +Jan 04 11:59:03 localhost audit: ANOM_PROMISCUOUS dev=vnet12 prom=0 old_prom=256 auid=4294967295 uid=107 gid=107 ses=4294967295 +Jan 04 11:59:03 localhost kernel: br-dmz: port 4(vnet12) entered disabled state +Jan 04 11:59:03 localhost kernel: vnet12 (unregistering): left allmulticast mode +Jan 04 11:59:03 localhost kernel: vnet12 (unregistering): left promiscuous mode +Jan 04 11:59:03 localhost kernel: br-dmz: port 4(vnet12) entered disabled state +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.3895] device (vnet12): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') +Jan 04 11:59:03 localhost NetworkManager[1338]: <info> [1704394743.3897] device (vnet12): released from master device br-dmz +Jan 04 11:59:03 localhost virtqemud[1595]: Unable to read from monitor: Connection reset by peer +Jan 04 11:59:03 localhost virtqemud[1595]: internal error: qemu unexpectedly closed the monitor +Jan 04 11:59:03 localhost virtqemud[1595]: internal error: process exited while connecting to monitor +Jan 04 11:59:03 localhost virtlogd[1659]: Client hit max requests limit 1. This may result in keep-alive timeouts. Consider tuning the max_client_requests server parameter +Jan 04 11:59:03 localhost virtqemud[1595]: Failed to acquire pid file '/run/libvirt/qemu/swtpm/10-fedora-waydroid-swtpm.pid': Resource temporarily unavailable +Jan 04 11:59:03 localhost systemd[1]: machine-qemu\x2d10\x2dfedora\x2dwaydroid.scope: Deactivated successfully. +Jan 04 11:59:03 localhost systemd-machined[1368]: Machine qemu-10-fedora-waydroid terminated. +Jan 04 11:59:03 localhost audit: BPF prog-id=115 op=UNLOAD +Jan 04 11:59:03 localhost audit: BPF prog-id=114 op=UNLOAD +Jan 04 11:59:03 localhost audit: BPF prog-id=113 op=UNLOAD +Jan 04 11:59:03 localhost audit: BPF prog-id=112 op=UNLOAD +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=disk reason=start vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 old-disk="?" new-disk="/var/lib/libvirt/images/fedora-waydroid.img" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=net reason=start vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 old-net="?" new-net="52:54:00:72:c3:92" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=dev reason=start vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=dev reason=start vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 bus=usb device=555342207265646972646576 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=rng reason=start vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 old-rng="?" new-rng="/dev/urandom" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? 
res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=tpm-emulator reason=start vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 device="?" exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=mem reason=start vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 old-mem=0 new-mem=4194304 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_RESOURCE pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm resrc=vcpu reason=start vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 old-vcpu=0 new-vcpu=4 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=success' +Jan 04 11:59:03 localhost audit[1595]: VIRT_CONTROL pid=1595 uid=0 auid=4294967295 ses=4294967295 msg='virt=kvm op=start reason=booted vm="fedora-waydroid" uuid=abcdefgh-ijkl-mnop-qrst-uvwx12345678 vm-pid=0 exe="/usr/sbin/virtqemud" hostname=? addr=? terminal=? res=failed' +``` + +<details> + +For the record I filed a bug earlier in libvirt (https://gitlab.com/libvirt/libvirt/-/issues/573) but I now think it's qemu related. + + +/label ~"kind::Bug" diff --git a/results/classifier/gemma3:12b/kvm/2110 b/results/classifier/gemma3:12b/kvm/2110 new file mode 100644 index 00000000..33a12f7d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2110 @@ -0,0 +1,12 @@ + +live migrations fail qemu-kvm +Description of problem: +live migrations fail between two identical hosts +``` +2024-01-18T00:16:31.582070Z qemu-kvm: Missing section footer for 0000:00:01.3/piix4_pm +2024-01-18T00:16:31.582169Z qemu-kvm: load of migration failed: Invalid argument +2024-01-18 00:16:31.611+0000: shutting down, reason=failed +``` +Additional information: +source log for vm [source.log](/uploads/5816f929a5e543f423bb909a0df23fb7/source.log) +dest log for vm [dest.log](/uploads/a1b6ae02e4c8235536e740b86d16ddd6/dest.log) diff --git a/results/classifier/gemma3:12b/kvm/2180 b/results/classifier/gemma3:12b/kvm/2180 new file mode 100644 index 00000000..aa95b7d4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2180 @@ -0,0 +1,37 @@ + +QEMU crashes when an interrupt is triggered whose descriptor is not in physical memory +Description of problem: +When an interrupt is triggered whose descriptor is mapped but not in physical memory, QEMU crashes with the following message: +``` +** +ERROR:../system/cpus.c:524:bql_lock_impl: assertion failed: (!bql_locked()) +Bail out! ERROR:../system/cpus.c:524:bql_lock_impl: assertion failed: (!bql_locked()) +Aborted (core dumped) +``` + +The given code triggers the bug by moving the IDT's base address, but it can also be triggered by any other method of moving the IDT's physical memory location, f.ex paging. With KVM enabled, this specific example loops forever instead of crashing, but if the code is altered to use paging, an internal KVM error is reported and the VM is paused. +Steps to reproduce: +1. Assemble the code listed below using NASM: `nasm test.asm -o test.bin` +2. Run the code using `qemu-system-i386 -drive format=raw,file=test.bin`. Note that the given code only triggers the bug if the guest has 2 gigabytes or less of physical memory. +3. QEMU crashes. 
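For reference, the 6-byte pseudo-descriptor consumed by the `o32 lidt` in the assembly listing under "Additional information" below can be pictured as the following C struct (a minimal sketch; the field names are illustrative and not taken from QEMU or the report):

```c
#include <stdint.h>

/* Operand of `o32 lidt`: a 16-bit limit followed by a 32-bit linear base. */
struct __attribute__((packed)) idt_pseudo_descriptor {
    uint16_t limit;  /* 0x03ff -> 1 KiB, i.e. 256 four-byte real-mode vectors */
    uint32_t base;   /* 0x80000000 -> 2 GiB, above guest RAM when -m <= 2G    */
};
```

With the base set to 2 GiB, the `int 0x00` makes QEMU fetch vector 0 from a guest-physical address with no RAM behind it, which is exactly the "descriptor in nonexistent physical memory" situation the report describes.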
+Additional information: +NASM assembly of the code used: +``` +bits 16 +org 0x7c00 + +_start: + ; Disable interrupts and load new IDT + cli + o32 lidt [idtdesc] + ; Descriptor for INT 0 is in nonexistent physical memory, which crashes QEMU. + int 0x00 + +idtdesc: + dw 0x3ff ; Limit: 1 KiB for IDT + dd 0x80000000 ; Base: 2 GiB + +; Like most BIOSes, SeaBIOS requires this magic number to boot +times 510-($-$$) db 0 +dw 0xaa55 +``` diff --git a/results/classifier/gemma3:12b/kvm/2220 b/results/classifier/gemma3:12b/kvm/2220 new file mode 100644 index 00000000..e5aafda9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2220 @@ -0,0 +1,529 @@ + +Intermittent QEMU segfaults on x86_64 with TCG accelerator +Description of problem: +Recently(-ish) in our upstream systemd CI we started seeing an uptrend of QEMU segfaults when running our integration tests. This was first observed in CentOS Stream 9 runs, but was later followed by Fedora Rawhide and Ubuntu Noble, once they picked up the QEMU 8.x branch. I filed a RHEL-only ticked first (before we started seeing it on other distros as well), so I'll share the same information here as well. + +This seems to happen only with TCG - in the CentOS CI infrastructure, where this was first observed, we run two jobs - one on a baremetal, that runs the test VMs with KVM, and one already on VMs that runs the same jobs using TCG; only the TCG job suffer from this issue. The same goes for the Fedora Rawhide and Ubuntu Noble jobs - they also use TCG. + +I managed to get a stack trace from one of the segmentation faults on CentOS Stream 9: +```gdb +[coredumpctl_collect] Collecting coredumps for '/usr/libexec/qemu-kvm' + PID: 1154719 (qemu-system-x86) + UID: 0 (root) + GID: 0 (root) + Signal: 11 (SEGV) + Timestamp: Thu 2024-02-01 21:50:04 UTC (1min 23s ago) + Command Line: /bin/qemu-system-x86_64 -smp 8 -net none -m 768M -nographic -kernel /boot/vmlinuz-5.14.0-412.el9.x86_64 -drive format=raw,cache=unsafe,file=/var/tmp/systemd-test-TEST-63-PATH_1/default.img -device virtio-rng-pci,max-bytes=1024,period=1000 -cpu max -initrd /var/tmp/ci-initramfs-5.14.0-412.el9.x86_64.img -append $'root=LABEL=systemd_boot rw raid=noautodetect rd.luks=0 loglevel=2 init=/usr/lib/systemd/systemd console=ttyS0 SYSTEMD_UNIT_PATH=/usr/lib/systemd/tests/testdata/testsuite-63.units:/usr/lib/systemd/tests/testdata/units: systemd.unit=testsuite.target systemd.wants=testsuite-63.service noresume oops=panic panic=1 softlockup_panic=1 systemd.wants=end.service enforcing=0 watchdog_thresh=60 workqueue.watchdog_thresh=120' + Executable: /usr/libexec/qemu-kvm + Control Group: /user.slice/user-0.slice/session-1.scope + Unit: session-1.scope + Slice: user-0.slice + Session: 1 + Owner UID: 0 (root) + Boot ID: 011f8fd0783c464184955c281ce2c1b7 + Machine ID: af8d424897a0479fa2fc0e5afcff3198 + Hostname: n27-39-6.pool.ci.centos.org + Storage: /var/lib/systemd/coredump/core.qemu-system-x86.0.011f8fd0783c464184955c281ce2c1b7.1154719.1706824204000000.zst (present) + Size on Disk: 124.7M + Message: Process 1154719 (qemu-system-x86) of user 0 dumped core. 
+ + Stack trace of thread 1154728: + #0 0x0000557669385a13 address_space_translate_for_iotlb (qemu-kvm + 0x73ba13) + #1 0x00005576693d149f tlb_set_page_full (qemu-kvm + 0x78749f) + #2 0x0000557669248a18 x86_cpu_tlb_fill (qemu-kvm + 0x5fea18) + #3 0x00005576693db519 mmu_lookup1 (qemu-kvm + 0x791519) + #4 0x00005576693db31b mmu_lookup.llvm.5973256065011438912 (qemu-kvm + 0x79131b) + #5 0x00005576693d3173 do_ld4_mmu.llvm.5973256065011438912 (qemu-kvm + 0x789173) + #6 0x00005576692d44cf do_interrupt_all (qemu-kvm + 0x68a4cf) + #7 0x000055766924f605 x86_cpu_exec_interrupt (qemu-kvm + 0x605605) + #8 0x00005576693bdc25 cpu_exec_loop (qemu-kvm + 0x773c25) + #9 0x00005576693bcee1 cpu_exec_setjmp (qemu-kvm + 0x772ee1) + #10 0x00005576693bcd64 cpu_exec (qemu-kvm + 0x772d64) + #11 0x00007fe0c5e4011c mttcg_cpu_thread_fn (accel-tcg-x86_64.so + 0x411c) + #12 0x0000557669662ada qemu_thread_start.llvm.13264588188580115644 (qemu-kvm + 0xa18ada) + #13 0x00007fe0c68a1912 start_thread (libc.so.6 + 0xa1912) + #14 0x00007fe0c683f450 __clone3 (libc.so.6 + 0x3f450) + + Stack trace of thread 1154721: + #0 0x00007fe0c69159e5 clock_nanosleep@GLIBC_2.2.5 (libc.so.6 + 0x1159e5) + #1 0x00007fe0c691a597 __nanosleep (libc.so.6 + 0x11a597) + #2 0x00007fe0c6b70c87 g_usleep (libglib-2.0.so.0 + 0x7ec87) + #3 0x0000557669670c18 call_rcu_thread (qemu-kvm + 0xa26c18) + #4 0x0000557669662ada qemu_thread_start.llvm.13264588188580115644 (qemu-kvm + 0xa18ada) + #5 0x00007fe0c68a1912 start_thread (libc.so.6 + 0xa1912) + #6 0x00007fe0c683f450 __clone3 (libc.so.6 + 0x3f450) + + Stack trace of thread 1154727: + #0 0x00007fe0c689e4aa __futex_abstimed_wait_common (libc.so.6 + 0x9e4aa) + #1 0x00007fe0c68a0cb0 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0xa0cb0) + #2 0x00005576696620c6 qemu_cond_wait_impl (qemu-kvm + 0xa180c6) + #3 0x000055766919425b qemu_wait_io_event (qemu-kvm + 0x54a25b) + #4 0x00007fe0c5e40180 mttcg_cpu_thread_fn (accel-tcg-x86_64.so + 0x4180) + #5 0x0000557669662ada qemu_thread_start.llvm.13264588188580115644 (qemu-kvm + 0xa18ada) + #6 0x00007fe0c68a1912 start_thread (libc.so.6 + 0xa1912) + #7 0x00007fe0c683f450 __clone3 (libc.so.6 + 0x3f450) + + Stack trace of thread 1154719: + #0 0x00007fe0c689e670 __GI___lll_lock_wait (libc.so.6 + 0x9e670) + #1 0x00007fe0c68a4d02 __pthread_mutex_lock@GLIBC_2.2.5 (libc.so.6 + 0xa4d02) + #2 0x0000557669661b76 qemu_mutex_lock_impl (qemu-kvm + 0xa17b76) + #3 0x000055766967c937 main_loop_wait (qemu-kvm + 0xa32937) + #4 0x00005576691a30c7 qemu_main_loop (qemu-kvm + 0x5590c7) + #5 0x0000557668fe3cca qemu_default_main (qemu-kvm + 0x399cca) + #6 0x00007fe0c683feb0 __libc_start_call_main (libc.so.6 + 0x3feb0) + #7 0x00007fe0c683ff60 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x3ff60) + #8 0x0000557668fe33e5 _start (qemu-kvm + 0x3993e5) + + Stack trace of thread 1154725: + #0 0x00007fe0c689e670 __GI___lll_lock_wait (libc.so.6 + 0x9e670) + #1 0x00007fe0c68a4d02 __pthread_mutex_lock@GLIBC_2.2.5 (libc.so.6 + 0xa4d02) + #2 0x0000557669661b76 qemu_mutex_lock_impl (qemu-kvm + 0xa17b76) + #3 0x00005576693dc514 do_st_mmio_leN.llvm.5973256065011438912 (qemu-kvm + 0x792514) + #4 0x00005576693d3d22 do_st4_mmu.llvm.5973256065011438912 (qemu-kvm + 0x789d22) + #5 0x00007fe07cbfe35b n/a (n/a + 0x0) + ELF object binary architecture: AMD x86-64 + + +[coredumpctl_collect] Trying to run gdb with 'set print pretty on\nbt full' for '/usr/libexec/qemu-kvm' +GNU gdb (GDB) Red Hat Enterprise Linux 10.2-13.el9 +Copyright (C) 2021 Free Software Foundation, Inc. 
+License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> +This is free software: you are free to change and redistribute it. +There is NO WARRANTY, to the extent permitted by law. +Type "show copying" and "show warranty" for details. +This GDB was configured as "x86_64-redhat-linux-gnu". +Type "show configuration" for configuration details. +For bug reporting instructions, please see: +<https://www.gnu.org/software/gdb/bugs/>. +Find the GDB manual and other documentation resources online at: + <http://www.gnu.org/software/gdb/documentation/>. + +For help, type "help". +Type "apropos word" to search for commands related to "word"... +/root/.gdbinit:1: Error in sourced command file: +No symbol table is loaded. Use the "file" command. +Reading symbols from /usr/libexec/qemu-kvm... +Downloading separate debug info for /usr/libexec/qemu-kvm... +Reading symbols from /root/.cache/debuginfod_client/6fdfad7763b68956a31a335edd490cef23088a9a/debuginfo... +Downloading separate debug info for /root/.cache/debuginfod_client/6fdfad7763b68956a31a335edd490cef23088a9a/debuginfo... +[New LWP 1154728] +[New LWP 1154721] +[New LWP 1154727] +[New LWP 1154719] +[New LWP 1154725] +[New LWP 1154729] +[New LWP 1154726] +[New LWP 1154723] +[New LWP 1154730] +[New LWP 1154724] +[New LWP 1154722] +Downloading separate debug info for /lib64/libpixman-1.so.0... +Downloading separate debug info for /lib64/libcapstone.so.4... +Downloading separate debug info for /root/.cache/debuginfod_client/fabd9508a8df77430d74e376fc1853545deaa9a4/debuginfo... +Downloading separate debug info for /lib64/libgnutls.so.30... +Downloading separate debug info for /root/.cache/debuginfod_client/3ca805ea0a9583fc8272d443181745507c6c1391/debuginfo... +Downloading separate debug info for /lib64/libpng16.so.16... +Downloading separate debug info for /lib64/libz.so.1... +Downloading separate debug info for /lib64/libsasl2.so.3... +Downloading separate debug info for /root/.cache/debuginfod_client/d5669a4356bbdf6b9dba9d25fe4674098af42f8d/debuginfo... +Downloading separate debug info for /lib64/libsnappy.so.1... +Downloading separate debug info for /lib64/liblzo2.so.2... +Downloading separate debug info for /lib64/libpmem.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/571e30ee251154a37d94e8c45def4e0b40fdaa92/debuginfo... +Downloading separate debug info for /lib64/libseccomp.so.2... +Downloading separate debug info for /lib64/libfdt.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/31a56e0009a8824c7a09267c8205034c91cb4095/debuginfo... +Downloading separate debug info for /lib64/libnuma.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/e78797386b6fc540350223e432c3bfee6034d2e1/debuginfo... +Downloading separate debug info for /lib64/libgio-2.0.so.0... +Downloading separate debug info for /root/.cache/debuginfod_client/56c6122b97d5e4dd5fdf68756bdc02058ce02bbf/debuginfo... +Downloading separate debug info for /lib64/libgobject-2.0.so.0... +Downloading separate debug info for /lib64/libglib-2.0.so.0... +Downloading separate debug info for /lib64/librdmacm.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/7714785fff3ebddc1077a3fad30fffa35283766f/debuginfo... +Downloading separate debug info for /lib64/libibverbs.so.1... +Downloading separate debug info for /lib64/libslirp.so.0... +Downloading separate debug info for /lib64/liburing.so.2... 
+Downloading separate debug info for /root/.cache/debuginfod_client/8f52f15e8dff019c877c3c25083ef4a459429b99/debuginfo... +Downloading separate debug info for /lib64/libgmodule-2.0.so.0... +Downloading separate debug info for /lib64/libaio.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/9b75d21282f8e17ddfa06aff78dae4f8dcce4106/debuginfo... +Downloading separate debug info for /lib64/libm.so.6... +Downloading separate debug info for /lib64/libresolv.so.2... +Downloading separate debug info for /root/.cache/debuginfod_client/8a914905acea217452c928c2e200afceb83341c5/debuginfo... +Downloading separate debug info for /lib64/libgcc_s.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/ef4c928f1372ad155fea761f0e840ecd264fb153/debuginfo... +Downloading separate debug info for /lib64/libc.so.6... +Downloading separate debug info for /lib64/libp11-kit.so.0... +Downloading separate debug info for /root/.cache/debuginfod_client/b935d795aaf6f8cbc392c922b6c97a4c8db44c41/debuginfo... +Downloading separate debug info for /lib64/libidn2.so.0... +Downloading separate debug info for /root/.cache/debuginfod_client/958c50fc94ecb196b24f3619762e7ec3f28a5b40/debuginfo... +Downloading separate debug info for /lib64/libunistring.so.2... +Downloading separate debug info for /lib64/libtasn1.so.6... +Downloading separate debug info for /lib64/libnettle.so.8... +Downloading separate debug info for /root/.cache/debuginfod_client/0dd622456d9a5330679490d3bd9d812582d9f9d3/debuginfo... +Downloading separate debug info for /lib64/libhogweed.so.6... +Downloading separate debug info for /lib64/libcrypt.so.2... +Downloading separate debug info for /root/.cache/debuginfod_client/6ce4e5eb200e61d07398af52f8bcb316cf8466e0/debuginfo... +Downloading separate debug info for /lib64/libgssapi_krb5.so.2... +Downloading separate debug info for /root/.cache/debuginfod_client/5ce5f00c8b502e99ab96853950db60f97a710b28/debuginfo... +Downloading separate debug info for /lib64/libkrb5.so.3... +Downloading separate debug info for /lib64/libk5crypto.so.3... +Downloading separate debug info for /lib64/libcom_err.so.2... +Downloading separate debug info for /root/.cache/debuginfod_client/2313e22f074e5b67e97bb22e01a722cc727512b1/debuginfo... +Downloading separate debug info for /lib64/libstdc++.so.6... +Downloading separate debug info for /lib64/libndctl.so.6... +Downloading separate debug info for /root/.cache/debuginfod_client/e2e24fd2c7061434b2a0cc849cdcd2854a4a0557/debuginfo... +Downloading separate debug info for /lib64/libdaxctl.so.1... +Downloading separate debug info for /lib64/libmount.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/98bababfe2b3d1d0ca128831439521f2b5b7aa95/debuginfo... +Downloading separate debug info for /lib64/libselinux.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/bdc4adbb0901b548f448d6f0d92b49c352e3b9f6/debuginfo... +Downloading separate debug info for /lib64/libffi.so.8... +Downloading separate debug info for /lib64/libpcre.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/cffb947bcc416dca3cd249cdb0a1c6f614549c30/debuginfo... +Downloading separate debug info for /lib64/libnl-3.so.200... +Downloading separate debug info for /root/.cache/debuginfod_client/22262a5a1956360f9f4c1daa89e592b1be03cd14/debuginfo... +Downloading separate debug info for /lib64/libnl-route-3.so.200... +Downloading separate debug info for /lib64/libkrb5support.so.0... 
+Downloading separate debug info for /lib64/libkeyutils.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/5f6459dcec3e266d994b8d4e5b23507c4c0df11e/debuginfo... +Downloading separate debug info for /lib64/libcrypto.so.3... +Downloading separate debug info for /root/.cache/debuginfod_client/fb8a738ffca8bdbe3172c842ee9d56f969516473/debuginfo... +Downloading separate debug info for /lib64/libuuid.so.1... +Downloading separate debug info for /lib64/libkmod.so.2... +Downloading separate debug info for /root/.cache/debuginfod_client/9057cef69769e25914be12563e5d821aef1bd9cb/debuginfo... +Downloading separate debug info for /lib64/libblkid.so.1... +Downloading separate debug info for /lib64/libpcre2-8.so.0... +Downloading separate debug info for /root/.cache/debuginfod_client/10357f8fa75891b03cd08344d56efa49ad9d607f/debuginfo... +Downloading separate debug info for /lib64/libcap.so.2... +Downloading separate debug info for /root/.cache/debuginfod_client/94e5c930fa02b381df948b2d2909d96da9f31407/debuginfo... +Downloading separate debug info for /lib64/libzstd.so.1... +Downloading separate debug info for /root/.cache/debuginfod_client/f0c68ad1b3f8941857af47c6887736d835317ccc/debuginfo... +Downloading separate debug info for /lib64/liblzma.so.5... +Downloading separate debug info for /usr/libexec/../lib64/qemu-kvm/accel-tcg-x86_64.so... +Downloading separate debug info for /root/systemd/system-supplied DSO at 0x7ffd4cb6b000... +[Thread debugging using libthread_db enabled] +Using host libthread_db library "/lib64/libthread_db.so.1". +Core was generated by `/bin/qemu-system-x86_64 -smp 8 -net none -m 768M -nographic -kernel /boot/vmlin'. +Program terminated with signal SIGSEGV, Segmentation fault. +#0 memory_region_get_iommu (mr=0x418c0fdb85f05d8b) + at /usr/src/debug/qemu-kvm-8.2.0-2.el9.x86_64/include/exec/memory.h:1715 +Downloading source file /usr/src/debug/qemu-kvm-8.2.0-2.el9.x86_64/include/exec/memory.h... 
+1715 if (mr->alias) { +[Current thread is 1 (Thread 0x7fe033fff640 (LWP 1154728))] +(gdb) (gdb) #0 memory_region_get_iommu (mr=0x418c0fdb85f05d8b) + at /usr/src/debug/qemu-kvm-8.2.0-2.el9.x86_64/include/exec/memory.h:1715 + addr = 18446603473123421792 + d = 0x7fe03c135150 + section = 0x7fe03c621e70 + imrc = <optimized out> + iommu_idx = <optimized out> + iotlb = { + target_as = <optimized out>, + iova = <optimized out>, + translated_addr = <optimized out>, + addr_mask = <optimized out>, + perm = <optimized out> + } +#1 address_space_translate_for_iotlb + (cpu=0x55766c32c480, asidx=<optimized out>, orig_addr=472023040, xlat=0x7fe048df9ea0, plen=0x7fe048df9e98, attrs=..., prot=0x7fe048df9e94) + at ../system/physmem.c:688 + addr = 18446603473123421792 + d = 0x7fe03c135150 + section = 0x7fe03c621e70 + imrc = <optimized out> + iommu_idx = <optimized out> + iotlb = { + target_as = <optimized out>, + iova = <optimized out>, + translated_addr = <optimized out>, + addr_mask = <optimized out>, + perm = <optimized out> + } +#2 0x00005576693d149f in tlb_set_page_full + (cpu=0x55766c32c480, mmu_idx=<optimized out>, addr=18446741874686296064, full=0x7fe048df9ed8) at ../accel/tcg/cputlb.c:1140 + sz = 4096 + addr_page = 18446741874686296064 + paddr_page = 472023040 + prot = 1 + asidx = -536727968 + xlat = 18599936 + section = <optimized out> + read_flags = <optimized out> + is_romd = <optimized out> + addend = <optimized out> + write_flags = <optimized out> + iotlb = <optimized out> + wp_flags = <optimized out> + index = <optimized out> + te = <optimized out> + tn = { + { + addr_read = <optimized out>, + addr_write = <optimized out>, + addr_code = <optimized out>, + addend = <optimized out> + }, + addr_idx = {<optimized out>, <optimized out>, <optimized out>, <optimized out>} + } +#3 0x0000557669248a18 in tlb_set_page_with_attrs + (cpu=0x55766c32c480, addr=18446741874686296064, paddr=<optimized out>, attrs=..., prot=<optimized out>, mmu_idx=0, size=<optimized out>) + at ../accel/tcg/cputlb.c:1290 + out = { + paddr = 472027056, + prot = 1, + page_size = 4096 + } + err = { + exception_index = 472064000, + error_code = 0, + cr2 = 13915309287368685568, + stage2 = (unknown: 0x1c232b28) + } + env = <optimized out> +#4 x86_cpu_tlb_fill + (cs=0x55766c32c480, addr=<optimized out>, size=<optimized out>, access_type=MMU_DATA_LOAD, mmu_idx=0, probe=<optimized out>, retaddr=0) + at ../target/i386/tcg/sysemu/excp_helper.c:610 + out = { + paddr = 472027056, + prot = 1, + page_size = 4096 + } + err = { + exception_index = 472064000, + error_code = 0, + cr2 = 13915309287368685568, + stage2 = (unknown: 0x1c232b28) + } + env = <optimized out> +#5 0x00005576693db519 in tlb_fill + (addr=18446741874686300080, size=-2047844981, access_type=MMU_DATA_LOAD, mmu_idx=0, retaddr=0, cpu=<optimized out>) at ../accel/tcg/cputlb.c:1315 + ok = <optimized out> + addr = 18446741874686300080 + index = <optimized out> + entry = 0x7fe028017080 + tlb_addr = <optimized out> + maybe_resized = false + full = <optimized out> + flags = <optimized out> +#6 mmu_lookup1 + (cpu=<optimized out>, data=0x7fe048df9f00, mmu_idx=0, access_type=MMU_DATA_LOAD, ra=0) at ../accel/tcg/cputlb.c:1713 + addr = 18446741874686300080 + index = <optimized out> + entry = 0x7fe028017080 + tlb_addr = <optimized out> + maybe_resized = false + full = <optimized out> + flags = <optimized out> +#7 0x00005576693db31b in mmu_lookup + (cpu=0x55766c32c480, addr=18446741874686300080, oi=<optimized out>, ra=0, type=MMU_DATA_LOAD, l=0x7fe048df9f00) at ../accel/tcg/cputlb.c:1803 + 
a_bits = <optimized out> + flags = <optimized out> +#8 0x00005576693d3173 in do_ld4_mmu + (cpu=0x7fe03c135150, addr=18446603473123421792, oi=2247122315, ra=140601056453952, access_type=MMU_DATA_LOAD) at ../accel/tcg/cputlb.c:2416 + l = { + page = {{ + full = 0x1c232000, + haddr = 0xc0700000000, + addr = 18446741874686300080, + flags = 88995840, + size = 4 + }, { + full = 0x7fe033fff458, + haddr = 0xc11d1c12054df800, + addr = 18446741874686296064, + flags = 88995840, + size = 0 + }}, + memop = MO_32, + mmu_idx = 0 + } + crosspage = <optimized out> + ret = <optimized out> +#9 0x00005576692d44cf in cpu_ldl_mmu + (env=0x55766c32ec30, addr=18446741874686300080, oi=2247122315, ra=0) + at ../accel/tcg/ldst_common.c.inc:158 + oi = 2247122315 + has_error_code = <optimized out> + old_eip = 18446744072005078059 + dt = 0x55766c32edc0 + ptr = 18446741874686300080 + e1 = <optimized out> + e2 = <optimized out> + e3 = <optimized out> + type = <optimized out> + dpl = <optimized out> + cpl = <optimized out> + selector = <optimized out> + offset = <optimized out> + ist = <optimized out> + new_stack = <optimized out> + esp = <optimized out> + ss = <optimized out> + count = 0 + env = 0x55766c32ec30 +#10 cpu_ldl_le_mmuidx_ra + (env=0x55766c32ec30, addr=18446741874686300080, mmu_idx=<optimized out>, ra=0) at ../accel/tcg/ldst_common.c.inc:294 + oi = 2247122315 + has_error_code = <optimized out> + old_eip = 18446744072005078059 + dt = 0x55766c32edc0 + ptr = 18446741874686300080 + e1 = <optimized out> + e2 = <optimized out> + e3 = <optimized out> + type = <optimized out> + dpl = <optimized out> + cpl = <optimized out> + selector = <optimized out> + offset = <optimized out> + ist = <optimized out> + new_stack = <optimized out> + esp = <optimized out> + ss = <optimized out> + count = 0 + env = 0x55766c32ec30 +#11 do_interrupt64 + (env=0x55766c32ec30, intno=251, is_int=0, error_code=0, next_eip=<optimized out>, is_hw=<optimized out>) at ../target/i386/tcg/seg_helper.c:889 + has_error_code = <optimized out> + old_eip = 18446744072005078059 + dt = 0x55766c32edc0 + ptr = 18446741874686300080 + e1 = <optimized out> + e2 = <optimized out> + e3 = <optimized out> + type = <optimized out> + dpl = <optimized out> + cpl = <optimized out> + selector = <optimized out> + offset = <optimized out> + ist = <optimized out> + new_stack = <optimized out> + esp = <optimized out> + ss = <optimized out> + count = 0 + env = 0x55766c32ec30 +#12 do_interrupt_all + (cpu=0x55766c32c480, intno=251, is_int=0, error_code=0, next_eip=<optimized out>, is_hw=<optimized out>) at ../target/i386/tcg/seg_helper.c:1130 + count = 0 + env = 0x55766c32ec30 +#13 0x000055766924f605 in do_interrupt_x86_hardirq + (env=<optimized out>, intno=<optimized out>, is_hw=<optimized out>) + at ../target/i386/tcg/seg_helper.c:1162 + cpu = 0x55766c32c480 + env = <optimized out> + intno = <optimized out> +#14 0x000055766924f605 in x86_cpu_exec_interrupt () +#15 0x00005576693bdc25 in cpu_handle_interrupt + (cpu=0x55766c32c480, last_tb=<optimized out>) + at ../accel/tcg/cpu-exec.c:865 + cc = <optimized out> + interrupt_request = 2 + last_tb = <optimized out> + tb_exit = <optimized out> + ret = <optimized out> +#16 cpu_exec_loop (cpu=0x55766c32c480, sc=0x7fe048df9fb0) + at ../accel/tcg/cpu-exec.c:974 + last_tb = <optimized out> + tb_exit = <optimized out> + ret = <optimized out> +#17 0x00005576693bcee1 in cpu_exec_setjmp + (cpu=0x55766c32c480, sc=0x7fe048df9fb0) at ../accel/tcg/cpu-exec.c:1058 +#18 0x00005576693bcd64 in cpu_exec (cpu=0x55766c32c480) + at 
../accel/tcg/cpu-exec.c:1084 + sc = { + diff_clk = 0, + last_cpu_icount = 0, + realtime_clock = 0 + } + ret = <optimized out> +#19 0x00007fe0c5e4011c in tcg_cpus_exec (cpu=0x55766c32c480) + at ../accel/tcg/tcg-accel-ops.c:76 + ret = <optimized out> + r = <optimized out> + force_rcu = { + notifier = { + notify = 0x7fe0c5e40250 <mttcg_force_rcu>, + node = { + le_next = 0x0, + le_prev = 0x7fe033fff478 + } + }, + cpu = 0x55766c32c480 + } +#20 mttcg_cpu_thread_fn (arg=0x55766c32c480) + at ../accel/tcg/tcg-accel-ops-mttcg.c:95 + r = <optimized out> + force_rcu = { + notifier = { + notify = 0x7fe0c5e40250 <mttcg_force_rcu>, + node = { + le_next = 0x0, + le_prev = 0x7fe033fff478 + } + }, + cpu = 0x55766c32c480 + } +#21 0x0000557669662ada in qemu_thread_start (args=0x55766c3a1870) + at ../util/qemu-thread-posix.c:541 + __clframe = { + __cancel_routine = <optimized out>, + __cancel_arg = 0x0, + __do_it = 1, + __cancel_type = <synthetic pointer> + } + qemu_thread_args = 0x55766c3a1870 + start_routine = 0x7fe0c5e40000 <mttcg_cpu_thread_fn> + arg = 0x55766c32c480 + r = <optimized out> +#22 0x00007fe0c68a1912 in start_thread (arg=<optimized out>) + at pthread_create.c:443 + ret = <optimized out> + pd = <optimized out> + unwind_buf = { + cancel_jmp_buf = {{ + jmp_buf = {140725889877392, 270352123062618637, 140600921814592, 0, 140603380340288, 0, -288199396121933299, -287677566653593075}, + mask_was_saved = 0 + }}, + priv = { + pad = {0x0, 0x0, 0x0, 0x0}, + data = { + prev = 0x0, + cleanup = 0x0, + canceltype = 0 + } + } + } + not_first_call = <optimized out> +#23 0x00007fe0c683f450 in clone3 () + at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81 +``` + +Also, a couple runs failed with: +``` ++ /usr/libexec/qemu-kvm -smp 8 -net none -m 768M -nographic -kernel /boot/vmlinuz-5.14.0-427.el9.x86_64 -drive format=raw,cache=unsafe,file=/var/tmp/systemd-test.7FKAS9/basic.img -device virtio-rng-pci,max-bytes=1024,period=1000 -cpu Nehalem -initrd /var/tmp/ci-sanity-initramfs-5.14.0-390.el9.x86_64.img -append 'root=LABEL=systemd_boot rw raid=noautodetect rd.luks=0 loglevel=2 init=/usr/lib/systemd/systemd console=ttyS0 SYSTEMD_UNIT_PATH=/usr/lib/systemd/tests/testdata/testsuite-01.units:/usr/lib/systemd/tests/testdata/units: systemd.unit=testsuite.target systemd.wants=testsuite-01.service oops=panic panic=1 softlockup_panic=1 systemd.wants=end.service debug systemd.log_level=debug rd.systemd.log_target=console systemd.default_standard_output=journal+console systemd.unified_cgroup_hierarchy=1 systemd.legacy_systemd_cgroup_controller=0 +' +Could not access KVM kernel module: No such file or directory +qemu-kvm: failed to initialize kvm: No such file or directory +qemu-kvm: falling back to tcg +qemu-kvm: warning: Machine type 'pc-i440fx-rhel7.6.0' is deprecated: machine types for previous major releases are deprecated +c[?7l[2J[0mSeaBIOS (version 1.16.3-2.el9) +Booting from ROM... +early console in setup codae +Probing EDD (edd=off to disable)... oc[?7l[2J[0mk +[ 0.000000] Linux version 5.14.0-427.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3), GNU ld version 2.35.2-42.el9) #1 SMP PREEMPT_DYNAMIC Fri Feb 23 04:45:07 UTC 2024 +... 
+[ 2.152522] pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] +[ 2.153914] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] +[ 2.156615] pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 +[ 2.159388] pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc01f] +qemu-kvm: ../system/memory.c:2424: void *memory_region_get_ram_ptr(MemoryRegion *): Assertion `mr->ram_block' failed. +/bin/qemu-system-x86_64: line 4: 137172 Aborted (core dumped) "/usr/libexec/qemu-kvm" "$@" +``` + +I'm not sure if the two issues are related, or if the assertion is something completely different. +Steps to reproduce: +I, unfortunately, don't have any concrete steps to reproduce the issue, it happens randomly throughout CI runs. However, when needed, I can reproduce the issue in some reliable-ish manner by running the integration tests in a loop (the issue manifests itself usually in a couple of hours in this case). +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/2242 b/results/classifier/gemma3:12b/kvm/2242 new file mode 100644 index 00000000..c5c43e07 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2242 @@ -0,0 +1,15 @@ + +Hugepages are not released after windows guest shutdown +Description of problem: +* Hugepages are not released after windows guest shutdown (tested with server 2019 and 2022), everything is ok with linux guests +* Issue is present in both cases: shutdown is initiated by guest, and with the qemu monitor command ``system_shutdown`` +* If the guest is configured with 4G as memory size, hugepages not released may vary but in most cases, only 1G are not released +* Host is a x86_64 linux system, with 1G hugepages only : kernel cmline contains ``default_hugepagesz=1G hugepagesz=1G hugepages=88`` +* I've done many tests with qemu components disabled (network, monitor, vnc), issue is still present with basic command line (launched as root) ``qemu-system-x86_64 -cpu host -enable-kvm -smp 4 -machine type=q35,accel=kvm -m 4G -mem-path /mnt/hugepages -drive id=drv0,file=win.qcow2 -nodefaults`` +* Same issue with args in command line, with or without prealloc: + + -m 4G -mem-path /mnt/hugepages [-mem-prealloc] + -m 4G -machine memory-backend=mem0 -object memory-backend-memfd,id=mem0,size=4G,hugetlb=on,hugetlbsize=1G[,prealloc=on] +Additional information: +* Hugepages release process is audited with command ``cat /proc/meminfo`` +* I can't find any online documentation to help to troubleshoot used hugepages : articles suggest to audit /proc/[pid]/smaps, but here, issue is raised after qemu process terminates diff --git a/results/classifier/gemma3:12b/kvm/2250 b/results/classifier/gemma3:12b/kvm/2250 new file mode 100644 index 00000000..a6532654 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2250 @@ -0,0 +1,45 @@ + +FEAT_RME: NS EL1/0 Address Translation from EL3 fails +Description of problem: +I'm playing around with the QEMU RME Stack (TF-A, TF-RMM, Linux/KVM) for a research project. +For this I want to access some virtual normal world memory address from within EL3. +To translate the address to the physical address I use the `AT` instructions (e.g., `ats1e2r`). +If the NW memory is initially mapped in the GPT as `GPT_GPI_ANY`, this works fine, however, if the NW memory is mapped as `GPT_GPI_NS` the address translation fails with the error `0b100101`/GPT on PTW. 
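For context, the failure described above is visible to EL3 software in PAR_EL1 once the AT instruction has completed. A minimal C sketch of that check follows; the `read_par_el1()` accessor and the decoding helper are assumptions for illustration (TF-A-style sysreg accessors), not code from the report:

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed firmware-provided MRS accessor for PAR_EL1. */
extern uint64_t read_par_el1(void);

/* Decode PAR_EL1 after one of the AT S1E*R instructions mentioned above
 * (followed by an ISB) has been issued from EL3. */
static bool translation_faulted(uint8_t *fst_out)
{
    uint64_t par = read_par_el1();

    if (par & 1ULL) {                  /* PAR_EL1.F == 1: the walk aborted  */
        *fst_out = (par >> 1) & 0x3f;  /* FST fault status code             */
        return true;
    }
    return false;                      /* PAR_EL1[51:12] then holds the PA  */
}
```

The `0b100101` value mentioned above lands in the FST field, which is what distinguishes a granule protection fault taken during the table walk from an ordinary translation fault.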
+However, EL3/Root World should be able to access memory from all PAS, and therefore, if I understand the ARM documentation correctly, should also be able to execute a PTW for an address marked NS in the GPT. +Steps to reproduce: +1. Setup GPT with some memory marked as `GPT_GPI_NS` +2. Forward some NW virtual address from the kernel to EL3 +3. Execute a PTW on this address via the `AT` instructions. +Additional information: +I also took a look into the QEMU source code and potentially found the issue. +When executing a PTW we execute `target/arm/ptw.c:granule_protection_check`. +The function extracts the target page's GPI (`ptw.c:440`): +```c + switch (gpi) { + case 0b0000: /* no access */ + break; + case 0b1111: /* all access */ + return true; + case 0b1000: + case 0b1001: + case 0b1010: + case 0b1011: + if (pspace == (gpi & 3)) { + return true; + } + break; + default: + goto fault_walk; /* reserved */ + } +``` +The if statement checks if the current `pspace` (previously set to `ptw->in_space`) is the same security state as the one contained in the GPI. +If this is not the case, we generate a GPF. +However, I think the code misses the fact that EL3/Root world can access memory from each PAS, meaning that the if statement should be something like +```c +if (pspace == (gpi & 3) || (pspace == ARMSS_Root)) { + return true; +} +``` +Additionally, as both Secure and Realm World can also access Normal World memory, similar checks should also be added in such cases. + +I have a patch prepared for this, however, I first want to check if I'm in line with the Arm ARM or if I'm missing something and EL3 is indeed not supposed to execute PTWs for NS memory. diff --git a/results/classifier/gemma3:12b/kvm/2251 b/results/classifier/gemma3:12b/kvm/2251 new file mode 100644 index 00000000..1b09eb78 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2251 @@ -0,0 +1,15 @@ + +Windows 11 VM with VBS enabled crashes +Description of problem: + +Steps to reproduce: +1. Run a Windows 11 VM on a node (both VM domain XML and node capabilities XML are provided below). +2. Enable VBS on the guest. For doing so you can use https://github.com/MicrosoftDocs/windows-itpro-docs/files/4020040/DG_Readinessv3.7.zip. Then, in Windows terminal, run DG_Readiness_Tool_{version}.ps1 -Enable. +3. Reboot the guest. +4. Windows cannot start (see picture below). +Additional information: +- Domain Capabilities: https://pastebin.com/GdQGQ639 +- VMX capabilities: https://pastebin.com/5nbUH0ev +- contents of /proc/cpuinfo: https://pastebin.com/xZM4x89z +- Domain XML: https://pastebin.com/s4VehTXK +- Windows crash at boot: https://ibb.co/Ny1xRbz diff --git a/results/classifier/gemma3:12b/kvm/2263 b/results/classifier/gemma3:12b/kvm/2263 new file mode 100644 index 00000000..f13dc177 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2263 @@ -0,0 +1,29 @@ + +guest panics when attempting to perform loadvm operation on x86_64 platform with kvm_intel ept=0 +Description of problem: +The guest experiences a panic when attempting to perform the `loadvm` operation after it has been running for a while on the x86_64 platform with `kvm_intel ept=0`. I'm unsure if this operation is permitted or not, but it functions properly when using `kvm_intel ept=1`. +Steps to reproduce: +1. Load the `kvm-intel` module with the parameter `ept=0`. +2. savevm +Boot the first guest using the previous command line and switch to the QEMU console to execute the `savevm` operation. After that, shut down the guest. +3.
loadvm +Boot the second guest using the same command line and switch to the QEMU console to execute the `loadevm` operation. After that, the guest panics. +Additional information: +I have performed some debugging and it seems that the issue lies in the fact that the VMM modifies the guest memory without informing the KVM module. Upon further investigation, I noticed that the `loadvm` operation only restores the memory and does not execute any ioctl to modify the user memory region recorded in the KVM module. + +The KVM module calls `kvm_mmu_reset_context()` to unload the current EPT or SPT page table when guest system registers (CR0/CR3/CR4) are restored. However, for EPT, the EPT page table is released directly and can be reconstructed at a later stage. In contrast, for SPT, the KVM only decreases the reference count and retains the outdated SPT page table in the active list that is maintained by the KVM. As a result, this outdated SPT page table is reused later, leading to incorrect mapping. + +To address this, I attempted to call `kvm_arch_flush_shadow_all()` to zap all the page tables in `kvm_mmu_reset_context()`, which allowed the guest to function properly with SPT after the `loadvm` operation. + +Therefore, I believe that QEMU should notify the KVM to clear all the page tables if the KVM is using shadow paging. However, it appears that there is no appropriate ioctl available for the VMM to achieve this. + +guest panic output: + + +Trace the `kvm_mmu_get_page()` event and observe that only one record indicates that the outdated page table is reused instead of being recreated. + + +```shell +perf record -a -e kvmmmu:kvm_mmu_get_page +``` + diff --git a/results/classifier/gemma3:12b/kvm/2313 b/results/classifier/gemma3:12b/kvm/2313 new file mode 100644 index 00000000..d83f9b9f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2313 @@ -0,0 +1,18 @@ + +RISC-V KVM strerrorname_np regression breaks build on Alpine Linux +Description of problem: +Build from source fails on Alpine Linux due to the use of the non-portable `strerrorname_np`: +``` +/usr/lib/gcc/riscv64-alpine-linux-musl/13.2.1/../../../../riscv64-alpine-linux-musl/bin/ld: libqemu-riscv64-softmmu.fa.p/target_riscv_kvm_kvm-cpu.c.o: in function `kvm_cpu_realize': +kvm-cpu.c:(.text+0x538): undefined reference to `strerrorname_np' +/usr/lib/gcc/riscv64-alpine-linux-musl/13.2.1/../../../../riscv64-alpine-linux-musl/bin/ld: libqemu-riscv64-softmmu.fa.p/target_riscv_kvm_kvm-cpu.c.o: in function `kvm_cpu_instance_init': +kvm-cpu.c:(.text+0x1244): undefined reference to `strerrorname_np' +``` +Steps to reproduce: +1. install alpine linux on a riscv64 machine +2. build qemu-9.0.0 from source. +3. +Additional information: +Same problem as https://gitlab.com/qemu-project/qemu/-/issues/2041 + +Re-introduced with d4ff3da8f45c52670941c6e1b94e771d69d887e9 and 0d71f0a34938a6ac11953ae3dbec40113d2838a1 diff --git a/results/classifier/gemma3:12b/kvm/2321 b/results/classifier/gemma3:12b/kvm/2321 new file mode 100644 index 00000000..6f9dbd77 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2321 @@ -0,0 +1,41 @@ + +Segfault when hibernating a KVM VM with QEMU 8.2.3 +Description of problem: +Attempting to hibernate the machine crashes QEMU. +Steps to reproduce: +This involves Nix, please tell me if you want a reproducer that doesn't. + +1. nix build github:NixOS/nixpkgs#nixosTests.hibernate.driver +2. ./result/bin/nixos-test-driver +3. 
Observe crash +Additional information: +Backtrace: + +``` +#0 kvm_virtio_pci_vq_vector_release (proxy=0x55bd979fd130, vector=<optimized out>) at ../hw/virtio/virtio-pci.c:834 +#1 kvm_virtio_pci_vector_release_one (proxy=proxy@entry=0x55bd979fd130, queue_no=queue_no@entry=0) at ../hw/virtio/virtio-pci.c:965 +#2 0x000055bd9380c430 in virtio_pci_set_vector (vdev=0x55bd97a05500, proxy=0x55bd979fd130, queue_no=0, old_vector=1, new_vector=65535) + at ../hw/virtio/virtio-pci.c:1445 +#3 0x000055bd939c5490 in memory_region_write_accessor (mr=0x55bd979fdc70, addr=26, value=<optimized out>, size=2, shift=<optimized out>, + mask=<optimized out>, attrs=...) at ../system/memory.c:497 +#4 0x000055bd939c4d56 in access_with_adjusted_size (addr=addr@entry=26, value=value@entry=0x7ff49d1ff3e8, size=size@entry=2, + access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x55bd939c5410 <memory_region_write_accessor>, mr=<optimized out>, + attrs=...) at ../system/memory.c:573 +#5 0x000055bd939c5081 in memory_region_dispatch_write (mr=mr@entry=0x55bd979fdc70, addr=addr@entry=26, data=<optimized out>, op=<optimized out>, + attrs=attrs@entry=...) at ../system/memory.c:1528 +#6 0x000055bd939ccb0c in flatview_write_continue (fv=fv@entry=0x7ff4445771c0, addr=addr@entry=61572651286554, attrs=..., attrs@entry=..., + ptr=ptr@entry=0x7ff4a082d028, len=len@entry=2, addr1=<optimized out>, l=<optimized out>, mr=0x55bd979fdc70) at ../system/physmem.c:2714 +#7 0x000055bd939ccd83 in flatview_write (fv=0x7ff4445771c0, addr=addr@entry=61572651286554, attrs=attrs@entry=..., buf=buf@entry=0x7ff4a082d028, + len=len@entry=2) at ../system/physmem.c:2756 +#8 0x000055bd939d0099 in address_space_write (len=2, buf=0x7ff4a082d028, attrs=..., addr=61572651286554, as=0x55bd94a4e720 <address_space_memory>) + at ../system/physmem.c:2863 +#9 address_space_rw (as=0x55bd94a4e720 <address_space_memory>, addr=61572651286554, attrs=attrs@entry=..., buf=buf@entry=0x7ff4a082d028, len=2, + is_write=<optimized out>) at ../system/physmem.c:2873 +#10 0x000055bd93a24548 in kvm_cpu_exec (cpu=cpu@entry=0x55bd9628a3e0) at ../accel/kvm/kvm-all.c:2915 +#11 0x000055bd93a25795 in kvm_vcpu_thread_fn (arg=arg@entry=0x55bd9628a3e0) at ../accel/kvm/kvm-accel-ops.c:51 +#12 0x000055bd93bb5fa8 in qemu_thread_start (args=0x55bd96294940) at ../util/qemu-thread-posix.c:541 +#13 0x00007ff4a19fd272 in start_thread (arg=<optimized out>) at pthread_create.c:447 +#14 0x00007ff4a1a78dcc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 +``` + +Bisected to https://gitlab.com/qemu-project/qemu/-/commit/fcbb086ae590e910614fe5b8bf76e264f71ef304, reverting that change seems to make things work again. diff --git a/results/classifier/gemma3:12b/kvm/2325 b/results/classifier/gemma3:12b/kvm/2325 new file mode 100644 index 00000000..1b0bbd07 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2325 @@ -0,0 +1,12 @@ + +[Performance Regression] Constant freezes on Alder lake and Raptor lake CPUs. +Description of problem: +Strangely, no logs are recorded. The guest just freezes. It can however be rescued by a simple pause and unpause. + +This issue only happens when using the KVM hypervisor. Other hypervisors are fine. + +This issue does NOT happen when I tested my Intel Core i7 8700K. +Steps to reproduce: +1. Create a basic virtual machine for Windows 11 (Or 10). +2. Run it for about 5 - 30 minutes (Sometimes it happens in 20 seconds or even less). +3. The problem should occur. 
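As a reference for the pause/unpause workaround mentioned in the freeze report above, a minimal sketch of applying it from the host (assuming a libvirt-managed domain; the domain name `win11-guest` and the monitor socket path are placeholders, not taken from the report):

```shell
# Suspend the frozen guest, then resume it right away (libvirt-managed domain).
virsh suspend win11-guest
virsh resume win11-guest

# Rough equivalent on the QEMU HMP monitor, e.g. for a VM started with
# -monitor unix:/tmp/hmp.sock,server,nowait (socket path is a placeholder):
#   (qemu) stop
#   (qemu) cont
```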
diff --git a/results/classifier/gemma3:12b/kvm/2334 b/results/classifier/gemma3:12b/kvm/2334 new file mode 100644 index 00000000..758751ba --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2334 @@ -0,0 +1,253 @@ + +[9.0.0] qemu breaks mac os vm +Description of problem: +Mac OS Monterey vm not able to boot after upgrading qemu to v. 9.0.0; no issue with qemu 8.2.2. +This vm is booted with opencore latest version. +The vm is not able to boot, apple logo is displayed on the screen for a bit, then the vm shutdowns, this is quite strange. +I can't see anything useful in the logs. +Changing machine type from q35-9.0 back to 8.2 doesn't solve the issue. +The vm is booted via libvirt (latest version) and it's not a quite "base" vm, it has multiple passthroughs and other things. +Before testing into details and starting to run base vms to see if it boots,maybe someone can see something wrong or maybe someone has the same issue. +Reverting back to qemu 8.2.2 fixes all the issues and the vm is able to boot again. +No issues with a windows 11 vm and with a kali vm. +I can say that it's not a DSDT issue (a problem I was having in the past was related with DSTD), because injecting the DSDT of the vm started from v 8.2.2 doesn't boot it. + +This is the xml of the vm: + +``` +<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> + <name>Monterey</name> + <memory unit='KiB'>33554432</memory> + <currentMemory unit='KiB'>33554432</currentMemory> + <memoryBacking> + <nosharepages/> + </memoryBacking> + <vcpu placement='static' current='28'>32</vcpu> + <vcpus> + <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/> + <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/> + <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/> + <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/> + <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/> + <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/> + <vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/> + <vcpu id='7' enabled='yes' hotpluggable='yes' order='8'/> + <vcpu id='8' enabled='yes' hotpluggable='yes' order='9'/> + <vcpu id='9' enabled='yes' hotpluggable='yes' order='10'/> + <vcpu id='10' enabled='yes' hotpluggable='yes' order='11'/> + <vcpu id='11' enabled='yes' hotpluggable='yes' order='12'/> + <vcpu id='12' enabled='yes' hotpluggable='yes' order='13'/> + <vcpu id='13' enabled='yes' hotpluggable='yes' order='14'/> + <vcpu id='14' enabled='yes' hotpluggable='yes' order='15'/> + <vcpu id='15' enabled='yes' hotpluggable='yes' order='16'/> + <vcpu id='16' enabled='yes' hotpluggable='yes' order='17'/> + <vcpu id='17' enabled='yes' hotpluggable='yes' order='18'/> + <vcpu id='18' enabled='yes' hotpluggable='yes' order='19'/> + <vcpu id='19' enabled='yes' hotpluggable='yes' order='20'/> + <vcpu id='20' enabled='yes' hotpluggable='yes' order='21'/> + <vcpu id='21' enabled='yes' hotpluggable='yes' order='22'/> + <vcpu id='22' enabled='yes' hotpluggable='yes' order='23'/> + <vcpu id='23' enabled='yes' hotpluggable='yes' order='24'/> + <vcpu id='24' enabled='yes' hotpluggable='yes' order='25'/> + <vcpu id='25' enabled='yes' hotpluggable='yes' order='26'/> + <vcpu id='26' enabled='yes' hotpluggable='yes' order='27'/> + <vcpu id='27' enabled='yes' hotpluggable='yes' order='28'/> + <vcpu id='28' enabled='no' hotpluggable='yes'/> + <vcpu id='29' enabled='no' hotpluggable='yes'/> + <vcpu id='30' enabled='no' hotpluggable='yes'/> + <vcpu id='31' enabled='no' hotpluggable='yes'/> + </vcpus> + <iothreads>2</iothreads> + 
<iothreadids> + <iothread id='1'/> + <iothread id='2'/> + </iothreadids> + <cputune> + <vcpupin vcpu='0' cpuset='1'/> + <vcpupin vcpu='1' cpuset='2'/> + <vcpupin vcpu='2' cpuset='3'/> + <vcpupin vcpu='3' cpuset='4'/> + <vcpupin vcpu='4' cpuset='5'/> + <vcpupin vcpu='5' cpuset='6'/> + <vcpupin vcpu='6' cpuset='7'/> + <vcpupin vcpu='7' cpuset='9'/> + <vcpupin vcpu='8' cpuset='10'/> + <vcpupin vcpu='9' cpuset='11'/> + <vcpupin vcpu='10' cpuset='12'/> + <vcpupin vcpu='11' cpuset='13'/> + <vcpupin vcpu='12' cpuset='14'/> + <vcpupin vcpu='13' cpuset='15'/> + <vcpupin vcpu='14' cpuset='17'/> + <vcpupin vcpu='15' cpuset='18'/> + <vcpupin vcpu='16' cpuset='19'/> + <vcpupin vcpu='17' cpuset='20'/> + <vcpupin vcpu='18' cpuset='21'/> + <vcpupin vcpu='19' cpuset='22'/> + <vcpupin vcpu='20' cpuset='23'/> + <vcpupin vcpu='21' cpuset='25'/> + <vcpupin vcpu='22' cpuset='26'/> + <vcpupin vcpu='23' cpuset='27'/> + <vcpupin vcpu='24' cpuset='28'/> + <vcpupin vcpu='25' cpuset='29'/> + <vcpupin vcpu='26' cpuset='30'/> + <vcpupin vcpu='27' cpuset='31'/> + <emulatorpin cpuset='0,8,16,24'/> + </cputune> + <os> + <type arch='x86_64' machine='pc-q35-8.2'>hvm</type> + <loader readonly='yes' type='pflash'>/opt/macos/AUDK_CODE.fd</loader> + <nvram>/opt/macos/AUDK_VARS.fd</nvram> + <boot dev='hd'/> + </os> + <features> + <acpi/> + <apic/> + </features> + <cpu mode='host-passthrough' check='none' migratable='on'> + <topology sockets='2' dies='1' clusters='1' cores='8' threads='2'/> + </cpu> + <clock offset='utc'> + <timer name='rtc' tickpolicy='catchup'/> + <timer name='pit' tickpolicy='delay'/> + <timer name='hpet' present='no'/> + </clock> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <devices> + <emulator>/usr/bin/qemu-system-x86_64</emulator> + <controller type='pci' index='0' model='pcie-root'/> + <controller type='pci' index='1' model='pcie-root-port'> + <model name='pcie-root-port'/> + <target chassis='1' port='0x8' hotplug='off'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/> + </controller> + <controller type='pci' index='2' model='pcie-root-port'> + <model name='pcie-root-port'/> + <target chassis='2' port='0x9' hotplug='off'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> + </controller> + <controller type='pci' index='3' model='pcie-root-port'> + <model name='pcie-root-port'/> + <target chassis='3' port='0xc' hotplug='off'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> + </controller> + <controller type='pci' index='4' model='pcie-root-port'> + <model name='pcie-root-port'/> + <target chassis='4' port='0x13' hotplug='off'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/> + </controller> + <controller type='virtio-serial' index='0'> + <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> + </controller> + <controller type='usb' index='0' model='ich9-ehci1'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/> + </controller> + <controller type='usb' index='0' model='ich9-uhci1'> + <master startport='0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/> + </controller> + <controller type='sata' index='0'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> + </controller> + <interface type='bridge'> + <mac address='c8:2a:14:66:2c:a1'/> + <source bridge='br0'/> + <model type='virtio'/> + 
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> + </interface> + <interface type='bridge'> + <mac address='c8:2a:14:31:32:e2'/> + <source bridge='br1'/> + <model type='virtio'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <serial type='pty'> + <target type='isa-serial' port='0'> + <model name='isa-serial'/> + </target> + </serial> + <console type='pty'> + <target type='serial' port='0'/> + </console> + <channel type='unix'> + <target type='virtio' name='org.qemu.guest_agent.0'/> + <address type='virtio-serial' controller='0' bus='0' port='1'/> + </channel> + <input type='keyboard' bus='ps2'/> + <input type='mouse' bus='ps2'/> + <audio id='1' type='none'/> + <hostdev mode='subsystem' type='pci' managed='yes'> + <driver name='vfio'/> + <source> + <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> + </source> + <rom file='/opt/gpu-bios/6900xt.rom'/> + <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/> + </hostdev> + <hostdev mode='subsystem' type='pci' managed='yes'> + <driver name='vfio'/> + <source> + <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/> + </source> + <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/> + </hostdev> + <hostdev mode='subsystem' type='pci' managed='yes'> + <driver name='vfio'/> + <source> + <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/> + </source> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </hostdev> + <hostdev mode='subsystem' type='pci' managed='yes'> + <driver name='vfio'/> + <source> + <address domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/> + </source> + <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> + </hostdev> + <hostdev mode='subsystem' type='pci' managed='yes'> + <driver name='vfio'/> + <source> + <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/> + </source> + <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> + </hostdev> + <hostdev mode='subsystem' type='usb' managed='no'> + <source> + <vendor id='0x046d'/> + <product id='0x0892'/> + </source> + <address type='usb' bus='0' port='2'/> + </hostdev> + <hostdev mode='subsystem' type='usb' managed='no'> + <source> + <vendor id='0x148f'/> + <product id='0x3070'/> + </source> + <address type='usb' bus='0' port='1'/> + </hostdev> + <watchdog model='itco' action='reset'/> + <memballoon model='none'/> + </devices> + <qemu:commandline> + <qemu:arg value='-smbios'/> + <qemu:arg value='type=2'/> + <qemu:arg value='-global'/> + <qemu:arg value='ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off'/> + <qemu:arg value='-global'/> + <qemu:arg value='pcie-root-port.x-speed=8'/> + <qemu:arg value='-global'/> + <qemu:arg value='pcie-root-port.x-width=16'/> + <qemu:arg value='-cpu'/> + <qemu:arg value='host,+hypervisor,migratable=no,-erms,kvm=on,+invtsc,+topoext,+avx,+aes,+xsave,+xsaveopt,+ssse3,+sse4_2,+popcnt,+arat,+pclmuldq,+pdpe1gb,+rdtscp,+vme,+umip,check'/> + </qemu:commandline> +</domain> +``` + +06:00.0/1 --> gpu +00:1b.0 --> audio +0c:00.0 --> sata controller +84:00.0 --> usb controller +0x046d 0x0892 --> usb webcam +0x148f 0x3070 --> usb wifi diff --git a/results/classifier/gemma3:12b/kvm/2335 b/results/classifier/gemma3:12b/kvm/2335 new file mode 100644 index 00000000..3248c668 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2335 @@ -0,0 +1,209 @@ + +SPICE Worker segfault +Description of problem: +Hello. 
Sometimes we have an error. kvm randomly crashes. +May 07 16:55:50 vdi1 kernel: SPICE Worker[249326]: segfault at 7f1c8c03af40 ip 00007f1fbbbb2579 sp 00007f1dabbf9d20 error 4 in libc.so.6[7f1fbbb41000+155000] likely on CPU 89 (core 20, socket 1) +Steps to reproduce: +1. +2. +3. +Additional information: +`# coredumpctl info + PID: 249293 (kvm) + UID: 0 (root) + GID: 0 (root) + Signal: 11 (SEGV) + Timestamp: Tue 2024-05-07 16:55:50 MSK (18h ago) + Command Line: /usr/bin/kvm -id 141 -name VDI,debug-threads=on -no-shutdown -chardev socket,id=qmp,path=/var/run/qemu-server/141.qmp,server=on,wait=off -mon chardev=qmp,mode=control -chard> + Executable: /usr/bin/qemu-system-x86_64 + Control Group: /qemu.slice/141.scope + Unit: 141.scope + Slice: qemu.slice + Boot ID: 5cfcd2d515a6425fa3880a61d8cd6bfc + Machine ID: 6e4c2fe391324304a856baa8e6c88002 + Hostname: vdi1 + Storage: /var/lib/systemd/coredump/core.kvm.0.5cfcd2d515a6425fa3880a61d8cd6bfc.249293.1715090150000000.zst (present) + Size on Disk: 2.3G + Message: Process 249293 (kvm) of user 0 dumped core. + + Module libsystemd.so.0 from deb systemd-252.22-1~deb12u1.amd64 + Module libudev.so.1 from deb systemd-252.22-1~deb12u1.amd64 + Stack trace of thread 249326: + #0 0x00007f1fbbbb2579 _int_malloc (libc.so.6 + 0x97579) + #1 0x00007f1fbbbb46e2 __libc_calloc (libc.so.6 + 0x996e2) + #2 0x00007f1fbd3f76d1 g_malloc0 (libglib-2.0.so.0 + 0x5a6d1) + #3 0x00007f1fbdadd7a3 red_get_data_chunks_ptr (libspice-server.so.1 + 0x3e7a3) + #4 0x00007f1fbdaddf6b red_get_data_chunks (libspice-server.so.1 + 0x3ef6b) + #5 0x00007f1fbdadedd9 red_get_copy_ptr (libspice-server.so.1 + 0x3fdd9) + #6 0x00007f1fbdadf1e5 red_get_native_drawable (libspice-server.so.1 + 0x401e5) + #7 0x00007f1fbdaf1a2c red_process_display (libspice-server.so.1 + 0x52a2c) + #8 0x00007f1fbdaf1cb7 worker_source_dispatch (libspice-server.so.1 + 0x52cb7) + #9 0x00007f1fbd3f17a9 g_main_context_dispatch (libglib-2.0.so.0 + 0x547a9) + #10 0x00007f1fbd3f1a38 n/a (libglib-2.0.so.0 + 0x54a38) + #11 0x00007f1fbd3f1cef g_main_loop_run (libglib-2.0.so.0 + 0x54cef) + #12 0x00007f1fbdaf0fa9 red_worker_main (libspice-server.so.1 + 0x51fa9) + #13 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #14 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 249321: + #0 0x00007f1fbbc18c5b __GI___ioctl (libc.so.6 + 0xfdc5b) + #1 0x000055b3bae626cf kvm_vcpu_ioctl (qemu-system-x86_64 + 0x72b6cf) + #2 0x000055b3bae62ba5 kvm_cpu_exec (qemu-system-x86_64 + 0x72bba5) + #3 0x000055b3bae6408d kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x72d08d) + #4 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #5 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #6 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 249327: + #0 0x00007f1fbdac9b48 glz_rgb_alpha_compress_seg (libspice-server.so.1 + 0x2ab48) + #1 0x00007f1fbdacc1cb glz_rgb_alpha_compress (libspice-server.so.1 + 0x2d1cb) + #2 0x00007f1fbdad08ed image_encoders_compress_glz (libspice-server.so.1 + 0x318ed) + #3 0x00007f1fbdaba608 _Z18dcc_compress_imageP20DisplayChannelClientP10SpiceImageP11SpiceBitmapP8DrawableiP20compress_send_data_t (libspice-server.so.1 + 0x1b608) + #4 0x00007f1fbdabb7f5 fill_bits (libspice-server.so.1 + 0x1c7f5) + #5 0x00007f1fbdabca2f red_marshall_qxl_draw_copy (libspice-server.so.1 + 0x1da2f) + #6 0x00007f1fbdabe82b marshall_lossless_qxl_drawable (libspice-server.so.1 + 0x1f82b) + #7 0x00007f1fbdadb5d3 _ZN16RedChannelClient4pushEv (libspice-server.so.1 + 0x3c5d3) + #8 
0x00007f1fbdadb700 red_channel_client_event (libspice-server.so.1 + 0x3c700) + #9 0x00007f1fbdac579d spice_watch_dispatch (libspice-server.so.1 + 0x2679d) + #10 0x00007f1fbd3f167f g_main_context_dispatch (libglib-2.0.so.0 + 0x5467f) + #11 0x00007f1fbd3f1a38 n/a (libglib-2.0.so.0 + 0x54a38) + #12 0x00007f1fbd3f1cef g_main_loop_run (libglib-2.0.so.0 + 0x54cef) + #13 0x00007f1fbdaf0fa9 red_worker_main (libspice-server.so.1 + 0x51fa9) + #14 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #15 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 249324: + #0 0x00007f1fbbc18c5b __GI___ioctl (libc.so.6 + 0xfdc5b) + #1 0x000055b3bae626cf kvm_vcpu_ioctl (qemu-system-x86_64 + 0x72b6cf) + #2 0x000055b3bae62ba5 kvm_cpu_exec (qemu-system-x86_64 + 0x72bba5) + #3 0x000055b3bae6408d kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x72d08d) + #4 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #5 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #6 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 249293: + #0 0x00007f1fbbc17256 __ppoll (libc.so.6 + 0xfc256) + #1 0x000055b3bb011dfe ppoll (qemu-system-x86_64 + 0x8dadfe) + #2 0x000055b3bb00f6ee os_host_main_loop_wait (qemu-system-x86_64 + 0x8d86ee) + #3 0x000055b3bac6caa7 qemu_main_loop (qemu-system-x86_64 + 0x535aa7) + #4 0x000055b3bae6cf46 qemu_default_main (qemu-system-x86_64 + 0x735f46) + #5 0x00007f1fbbb4224a __libc_start_call_main (libc.so.6 + 0x2724a) + #6 0x00007f1fbbb42305 __libc_start_main_impl (libc.so.6 + 0x27305) + #7 0x000055b3baa5f0a1 _start (qemu-system-x86_64 + 0x3280a1) + + Stack trace of thread 249322: + #0 0x00007f1fbbc18c5b __GI___ioctl (libc.so.6 + 0xfdc5b) + #1 0x000055b3bae626cf kvm_vcpu_ioctl (qemu-system-x86_64 + 0x72b6cf) + #2 0x000055b3bae62ba5 kvm_cpu_exec (qemu-system-x86_64 + 0x72bba5) + #3 0x000055b3bae6408d kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x72d08d) + #4 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #5 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #6 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 249323: + #0 0x00007f1fbbc18c5b __GI___ioctl (libc.so.6 + 0xfdc5b) + #1 0x000055b3bae626cf kvm_vcpu_ioctl (qemu-system-x86_64 + 0x72b6cf) + #2 0x000055b3bae62ba5 kvm_cpu_exec (qemu-system-x86_64 + 0x72bba5) + #3 0x000055b3bae6408d kvm_vcpu_thread_fn (qemu-system-x86_64 + 0x72d08d) + #4 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #5 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #6 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 249294: + #0 0x00007f1fbbc1c719 syscall (libc.so.6 + 0x101719) + #1 0x000055b3baffccfa qemu_futex_wait (qemu-system-x86_64 + 0x8c5cfa) + #2 0x000055b3bb006602 call_rcu_thread (qemu-system-x86_64 + 0x8cf602) + #3 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #4 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #5 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 249329: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba3558 __pthread_cond_wait_common (libc.so.6 + 0x88558) + #2 0x000055b3baffc68b qemu_cond_wait_impl (qemu-system-x86_64 + 0x8c568b) + #3 0x000055b3baa88f2b vnc_worker_thread_loop (qemu-system-x86_64 + 0x351f2b) + #4 0x000055b3baa89bc8 vnc_worker_thread (qemu-system-x86_64 + 0x352bc8) + #5 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 
start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 3982758: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba383c __pthread_cond_wait_common (libc.so.6 + 0x8883c) + #2 0x000055b3baffbd01 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0x8c4d01) + #3 0x000055b3baffc8a0 qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x8c58a0) + #4 0x000055b3bb0110d4 worker_thread (qemu-system-x86_64 + 0x8da0d4) + #5 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 969111: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba383c __pthread_cond_wait_common (libc.so.6 + 0x8883c) + #2 0x000055b3baffbd01 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0x8c4d01) + #3 0x000055b3baffc8a0 qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x8c58a0) + #4 0x000055b3bb0110d4 worker_thread (qemu-system-x86_64 + 0x8da0d4) + #5 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 969113: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba383c __pthread_cond_wait_common (libc.so.6 + 0x8883c) + #2 0x000055b3baffbd01 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0x8c4d01) + #3 0x000055b3baffc8a0 qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x8c58a0) + #4 0x000055b3bb0110d4 worker_thread (qemu-system-x86_64 + 0x8da0d4) + #5 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 969114: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba383c __pthread_cond_wait_common (libc.so.6 + 0x8883c) + #2 0x000055b3baffbd01 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0x8c4d01) + #3 0x000055b3baffc8a0 qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x8c58a0) + #4 0x000055b3bb0110d4 worker_thread (qemu-system-x86_64 + 0x8da0d4) + #5 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 969112: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba383c __pthread_cond_wait_common (libc.so.6 + 0x8883c) + #2 0x000055b3baffbd01 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0x8c4d01) + #3 0x000055b3baffc8a0 qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x8c58a0) + #4 0x000055b3bb0110d4 worker_thread (qemu-system-x86_64 + 0x8da0d4) + #5 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 4165267: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba383c __pthread_cond_wait_common (libc.so.6 + 0x8883c) + #2 0x000055b3baffbd01 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0x8c4d01) + #3 0x000055b3baffc8a0 qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x8c58a0) + #4 0x000055b3bb0110d4 worker_thread (qemu-system-x86_64 + 0x8da0d4) + #5 0x000055b3baffbb78 
qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 969116: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba383c __pthread_cond_wait_common (libc.so.6 + 0x8883c) + #2 0x000055b3baffbd01 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0x8c4d01) + #3 0x000055b3baffc8a0 qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x8c58a0) + #4 0x000055b3bb0110d4 worker_thread (qemu-system-x86_64 + 0x8da0d4) + #5 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + + Stack trace of thread 969115: + #0 0x00007f1fbbba0e96 __futex_abstimed_wait_common64 (libc.so.6 + 0x85e96) + #1 0x00007f1fbbba383c __pthread_cond_wait_common (libc.so.6 + 0x8883c) + #2 0x000055b3baffbd01 qemu_cond_timedwait_ts (qemu-system-x86_64 + 0x8c4d01) + #3 0x000055b3baffc8a0 qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x8c58a0) + #4 0x000055b3bb0110d4 worker_thread (qemu-system-x86_64 + 0x8da0d4) + #5 0x000055b3baffbb78 qemu_thread_start (qemu-system-x86_64 + 0x8c4b78) + #6 0x00007f1fbbba4134 start_thread (libc.so.6 + 0x89134) + #7 0x00007f1fbbc247dc __clone3 (libc.so.6 + 0x1097dc) + ELF object binary architecture: AMD x86-64` diff --git a/results/classifier/gemma3:12b/kvm/2363 b/results/classifier/gemma3:12b/kvm/2363 new file mode 100644 index 00000000..bc83725b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2363 @@ -0,0 +1,2 @@ + +How can I enable MBI support in QEMU when running in KVM mode? diff --git a/results/classifier/gemma3:12b/kvm/237164 b/results/classifier/gemma3:12b/kvm/237164 new file mode 100644 index 00000000..e9a822c8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/237164 @@ -0,0 +1,7 @@ + +kvm needs to correctly simulate a proper monitor + +Binary package hint: xorg + +With xserver-xor-video-cirrus 1.2.1, there should be no need to require special handling for kvm in dexconf any longer. +See also: bug 193323. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/2379 b/results/classifier/gemma3:12b/kvm/2379 new file mode 100644 index 00000000..0a9aa49f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2379 @@ -0,0 +1,127 @@ + +virHashRemoveAll remove all jobs in priv->blockjobs but not set disk->priv->blockjob is null for qemuDomainObjPrivateDataClear and qemuProcessStop +Description of problem: +it call virHashRemoveAll to remove all jobs in priv->blockjobs but the disk privateData blockjob is not null for qemuDomainObjPrivateDataClear and qemuProcessStop. when virHashRemoveAll is caled, accessing priv->blockjob cause segfault in others. +Steps to reproduce: +1. virsh blockcopy testvm vda /root/disk/centos7-copy.qcow2 --wait --verbose --pivot + migrate disk of vm +2. poweoff in guest vm +3. libvirt core dump +Additional information: +--Type <RET> for more, q to quit, c to continue without paging-- + Program terminated with signal SIGSEGV, Segmentation fault. 
+ #0 qemuBlockJobUnregister (vm=0x7f823c045050, job=0x7f827c03ca90) at ../src/qemu/qemu_blockjob.c:211 + 211 if (job == diskPriv->blockjob) { + [Current thread is 1 (Thread 0x7f8283640640 (LWP 152))] + (gdb) bt + #0 qemuBlockJobUnregister (vm=0x7f823c045050, job=0x7f827c03ca90) at ../src/qemu/qemu_blockjob.c:211 + #1 qemuBlockJobEventProcessConcluded (asyncJob=VIR_ASYNC_JOB_MIGRATION_OUT, vm=<optimized out>, driver=<optimized out>, + job=0x7f827c03ca90) at ../src/qemu/qemu_blockjob.c:1678 + #2 qemuBlockJobEventProcess (asyncJob=VIR_ASYNC_JOB_MIGRATION_OUT, job=0x7f827c03ca90, vm=<optimized out>, + driver=<optimized out>) at ../src/qemu/qemu_blockjob.c:1703 + #3 qemuBlockJobUpdate (vm=<optimized out>, job=0x7f827c03ca90, asyncJob=1) at ../src/qemu/qemu_blockjob.c:1756 + #4 0x00007f828050c95f in qemuMigrationSrcNBDStorageCopyReady (vm=0x7f823c045050, asyncJob=VIR_ASYNC_JOB_MIGRATION_OUT) + at ../src/qemu/qemu_migration.c:605 + #5 0x00007f8280518ca5 in qemuMigrationSrcNBDStorageCopy (flags=587, nbdURI=<optimized out>, tlsHostname=0x7f823c2b51d0 "", + tlsAlias=<optimized out>, dconn=0x7f823c014790, migrate_disks=0x7f827c006660, nmigrate_disks=2, speed=<optimized out>, + host=0x7f827c0156a0 "10.253.160.196", mig=0x7f827c027a30, vm=0x7f823c045050, driver=0x7f823c01ac40) + at ../src/qemu/qemu_migration.c:1202 + #6 qemuMigrationSrcRun (driver=0x7f823c01ac40, vm=0x7f823c045050, persist_xml=<optimized out>, cookiein=<optimized out>, + cookieinlen=<optimized out>, cookieout=0x7f828363f500, cookieoutlen=0x7f828363f4d4, flags=587, resource=1024, + spec=0x7f828363f330, dconn=0x7f823c014790, graphicsuri=0x0, nmigrate_disks=2, migrate_disks=0x7f827c006660, + migParams=0x7f827c00d890, nbdURI=0x0) at ../src/qemu/qemu_migration.c:4167 + #7 0x00007f828051a5dd in qemuMigrationSrcPerformNative (driver=0x7f823c01ac40, vm=0x7f823c045050, + persist_xml=0x7f827c020660 "<domain type=\"kvm\">\n <name>default_vm-8altm</name>\n <uuid>4a40fa64-fd9b-5078-8574-3ce5d0041d31</uuid>\n <metadata>\n <nodeagent xmlns=\"http://kubevirt.io/node-agent.io\">\n <vmid>13fb0e90-2930-"..., + uri=<optimized out>, + cookiein=0x7f827c0519e0 "<qemu-migration>\n <name>default_vm-8altm</name>\n <uuid>4a40fa64-fd9b-5078-8574-3ce5d0041d31</uuid>\n <hostname>ceasphere-node-1</hostname>\n <hostuuid>5b0a0842-6535-27c1-b2e7-89c4ac4fd785</hostuuid>"..., + cookieinlen=876, cookieout=0x7f828363f500, cookieoutlen=0x7f828363f4d4, flags=587, resource=1024, dconn=0x7f823c014790, + graphicsuri=0x0, nmigrate_disks=2, migrate_disks=0x7f827c006660, migParams=0x7f827c00d890, nbdURI=0x0) + at ../src/qemu/qemu_migration.c:4506 + #8 0x00007f828051c3e3 in qemuMigrationSrcPerformPeer2Peer3 (flags=<optimized out>, useParams=true, bandwidth=<optimized out>, + migParams=0x7f827c00d890, nbdURI=0x0, nbdPort=0, migrate_disks=0x7f827c006660, nmigrate_disks=<optimized out>, + listenAddress=<optimized out>, graphicsuri=0x0, uri=<optimized out>, dname=0x0, + persist_xml=0x7f827c020660 "<domain type=\"kvm\">\n <name>default_vm-8altm</name>\n <uuid>4a40fa64-fd9b-5078-8574-3ce5d0041d31</uuid>\n <metadata>\n <nodeagent xmlns=\"http://kubevirt.io/node-agent.io\">\n <vmid>13fb0e90-2930-"..., + xmlin=<optimized out>, vm=0x7f823c045050, + dconnuri=0x7f827c00c2b0 "qemu+unix:///system?socket=/var/run/kubevirt/migrationproxy/13fb0e90-2930-4f0b-959a-cc40346e7d64-source.sock", dconn=0x7f823c014790, sconn=0x7f826c00e890, driver=0x7f823c01ac40) at ../src/qemu/qemu_migration.c:4925 + #9 qemuMigrationSrcPerformPeer2Peer (v3proto=<synthetic pointer>, resource=<optimized out>, dname=0x0, 
flags=587, + migParams=0x7f827c00d890, nbdURI=0x0, nbdPort=0, migrate_disks=0x7f827c006660, nmigrate_disks=<optimized out>, + listenAddress=<optimized out>, graphicsuri=0x0, uri=<optimized out>, + dconnuri=0x7f827c00c2b0 "qemu+unix:///system?socket=/var/run/kubevirt/migrationproxy/13fb0e90-2930-4f0b-959a-cc40346e7d64-source.sock", + --Type <RET> for more, q to quit, c to continue without paging-- + persist_xml=0x7f827c020660 "<domain type=\"kvm\">\n <name>default_vm-8altm</name>\n <uuid>4a40fa64-fd9b-5078-8574-3ce5d0041d31</uuid>\n <metadata>\n <nodeagent xmlns=\"http://kubevirt.io/node-agent.io\">\n <vmid>13fb0e90-2930-"..., + xmlin=<optimized out>, vm=0x7f823c045050, sconn=0x7f826c00e890, driver=0x7f823c01ac40) at ../src/qemu/qemu_migration.c:5230 + #10 qemuMigrationSrcPerformJob (driver=0x7f823c01ac40, conn=0x7f826c00e890, vm=0x7f823c045050, xmlin=<optimized out>, + persist_xml=0x7f827c020660 "<domain type=\"kvm\">\n <name>default_vm-8altm</name>\n <uuid>4a40fa64-fd9b-5078-8574-3ce5d0041d31</uuid>\n <metadata>\n <nodeagent xmlns=\"http://kubevirt.io/node-agent.io\">\n <vmid>13fb0e90-2930-"..., + dconnuri=0x7f827c00c2b0 "qemu+unix:///system?socket=/var/run/kubevirt/migrationproxy/13fb0e90-2930-4f0b-959a-cc40346e7d64-source.sock", uri=<optimized out>, graphicsuri=<optimized out>, listenAddress=<optimized out>, nmigrate_disks=<optimized out>, + migrate_disks=<optimized out>, nbdPort=0, nbdURI=<optimized out>, migParams=<optimized out>, cookiein=<optimized out>, + cookieinlen=0, cookieout=<optimized out>, cookieoutlen=<optimized out>, flags=<optimized out>, dname=<optimized out>, + resource=<optimized out>, v3proto=<optimized out>) at ../src/qemu/qemu_migration.c:5307 + #11 0x00007f828051cce7 in qemuMigrationSrcPerform (driver=0x7f823c01ac40, conn=0x7f826c00e890, vm=0x7f823c045050, + xmlin=0x7f827c01e630 "<domain type=\"kvm\">\n <name>default_vm-8altm</name>\n <uuid>4a40fa64-fd9b-5078-8574-3ce5d0041d31</uuid>\n <metadata>\n <nodeagent xmlns=\"http://kubevirt.io/node-agent.io\">\n <vmid>13fb0e90-2930-"..., + persist_xml=0x7f827c020660 "<domain type=\"kvm\">\n <name>default_vm-8altm</name>\n <uuid>4a40fa64-fd9b-5078-8574-3ce5d0041d31</uuid>\n <metadata>\n <nodeagent xmlns=\"http://kubevirt.io/node-agent.io\">\n <vmid>13fb0e90-2930-"..., + dconnuri=0x7f827c00c2b0 "qemu+unix:///system?socket=/var/run/kubevirt/migrationproxy/13fb0e90-2930-4f0b-959a-cc40346e7d64-source.sock", uri=0x556a1f856b20 "tcp://10.253.160.196:27939", graphicsuri=0x0, listenAddress=0x0, nmigrate_disks=2, + migrate_disks=0x7f827c006660, nbdPort=0, nbdURI=0x0, migParams=0x7f827c00d890, cookiein=0x0, cookieinlen=0, + cookieout=0x7f828363f8a8, cookieoutlen=0x7f828363f89c, flags=587, dname=0x0, resource=1024, v3proto=true) + at ../src/qemu/qemu_migration.c:5513 + #12 0x00007f82804e34d0 in qemuDomainMigratePerform3Params (dom=0x7f8268002ee0, + dconnuri=0x7f827c00c2b0 "qemu+unix:///system?socket=/var/run/kubevirt/migrationproxy/13fb0e90-2930-4f0b-959a-cc40346e7d64-source.sock", params=0x7f827c01e380, nparams=7, cookiein=0x0, cookieinlen=0, cookieout=0x7f828363f8a8, + cookieoutlen=0x7f828363f89c, flags=587) at ../src/qemu/qemu_driver.c:11796 + #13 0x00007f82853256d6 in virDomainMigratePerform3Params (domain=domain@entry=0x7f8268002ee0, + dconnuri=0x7f827c00c2b0 "qemu+unix:///system?socket=/var/run/kubevirt/migrationproxy/13fb0e90-2930-4f0b-959a-cc40346e7d64-source.sock", params=<optimized out>, nparams=7, cookiein=0x0, cookieinlen=0, cookieout=0x7f828363f8a8, + cookieoutlen=0x7f828363f89c, flags=587) at ../src/libvirt-domain.c:5165 
+ #14 0x0000556a1f200f17 in remoteDispatchDomainMigratePerform3Params (server=<optimized out>, msg=0x556a1f86ba40, + ret=0x7f827c0197f0, args=0x7f827c019520, rerr=0x7f828363f9a0, client=<optimized out>) + at ../src/remote/remote_daemon_dispatch.c:5710 + #15 remoteDispatchDomainMigratePerform3ParamsHelper (server=<optimized out>, client=<optimized out>, msg=0x556a1f86ba40, + rerr=0x7f828363f9a0, args=0x7f827c019520, ret=0x7f827c0197f0) at src/remote/remote_daemon_dispatch_stubs.h:8761 + #16 0x00007f828522c676 in virNetServerProgramDispatchCall (msg=0x556a1f86ba40, client=0x556a1f85b510, server=0x556a1f84c080, + prog=0x556a1f850410) at ../src/rpc/virnetserverprogram.c:428 + #17 virNetServerProgramDispatch (prog=0x556a1f850410, server=0x556a1f84c080, client=0x556a1f85b510, msg=0x556a1f86ba40) + at ../src/rpc/virnetserverprogram.c:302 + --Type <RET> for more, q to quit, c to continue without paging-- + #18 0x00007f82852331d8 in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, + srv=0x556a1f84c080) at ../src/rpc/virnetserver.c:140 + #19 virNetServerHandleJob (jobOpaque=0x556a1f861f90, opaque=0x556a1f84c080) at ../src/rpc/virnetserver.c:160 + #20 0x00007f8285170653 in virThreadPoolWorker (opaque=<optimized out>) at ../src/util/virthreadpool.c:164 + #21 0x00007f828516fc09 in virThreadHelper (data=<optimized out>) at ../src/util/virthread.c:256 + #22 0x00007f8284b10802 in start_thread () from /lib64/libc.so.6 + #23 0x00007f8284ab0450 in clone3 () from /lib64/libc.so.6 + + + (gdb) p job + $1 = (qemuBlockJobData *) 0x7f827c03ca90 + (gdb) p *job + $2 = {parent = {parent_instance = {g_type_instance = {g_class = 0x7f827c00dc90}, ref_count = 1, qdata = 0x0}}, + name = 0x7f827c038cd0 "drive-ua-vol-vm-8altm", disk = 0x7f823c0475c0, chain = 0x556a1f8548f0, mirrorChain = 0x0, + jobflags = 0, jobflagsmissing = false, data = {pull = {base = 0x0}, commit = {topparent = 0x0, top = 0x0, base = 0x0, + deleteCommittedImages = false}, create = {storage = false, src = 0x0}, copy = {shallownew = false}, backup = { + store = 0x0, bitmap = 0x0}}, type = 2, state = 5, errmsg = 0x0, synchronous = true, newstate = 6, brokentype = 0, + invalidData = false, reconnected = false} + (gdb) p *job->disk + $3 = {src = 0x7f823c047, privateData = 0xffe8eec3390edb93, device = VIR_DOMAIN_DISK_DEVICE_DISK, + bus = VIR_DOMAIN_DISK_BUS_VIRTIO, dst = 0x7f823c047300 "\327_'ą\177", tray_status = VIR_DOMAIN_DISK_TRAY_CLOSED, + removable = VIR_TRISTATE_SWITCH_ABSENT, rotation_rate = 0, mirror = 0x0, mirrorState = 0, mirrorJob = 0, geometry = { + cylinders = 0, heads = 0, sectors = 0, trans = VIR_DOMAIN_DISK_TRANS_DEFAULT}, blockio = {logical_block_size = 0, + physical_block_size = 0}, blkdeviotune = {total_bytes_sec = 0, read_bytes_sec = 0, write_bytes_sec = 0, + total_iops_sec = 0, read_iops_sec = 0, write_iops_sec = 0, total_bytes_sec_max = 0, read_bytes_sec_max = 0, + write_bytes_sec_max = 0, total_iops_sec_max = 0, read_iops_sec_max = 0, write_iops_sec_max = 0, size_iops_sec = 0, + group_name = 0x0, total_bytes_sec_max_length = 0, read_bytes_sec_max_length = 0, write_bytes_sec_max_length = 0, + total_iops_sec_max_length = 0, read_iops_sec_max_length = 0, write_iops_sec_max_length = 0}, + driverName = 0x7f823c047270 "\267\262'ą\177", serial = 0x0, wwn = 0x0, vendor = 0x0, product = 0x0, + cachemode = VIR_DOMAIN_DISK_CACHE_DISABLE, error_policy = VIR_DOMAIN_DISK_ERROR_POLICY_RETRY, + rerror_policy = VIR_DOMAIN_DISK_ERROR_POLICY_DEFAULT, retry_interval = 1000, retry_timeout = 0, + iomode = 
VIR_DOMAIN_DISK_IO_NATIVE, ioeventfd = VIR_TRISTATE_SWITCH_ABSENT, event_idx = VIR_TRISTATE_SWITCH_ABSENT, + copy_on_read = VIR_TRISTATE_SWITCH_ABSENT, snapshot = VIR_DOMAIN_SNAPSHOT_LOCATION_DEFAULT, + startupPolicy = VIR_DOMAIN_STARTUP_POLICY_DEFAULT, transient = false, transientShareBacking = VIR_TRISTATE_BOOL_ABSENT, + info = {alias = 0x0, type = 0, addr = {pci = {domain = 0, bus = 0, slot = 0, function = 0, + multi = VIR_TRISTATE_SWITCH_ABSENT, extFlags = 0, zpci = {uid = {value = 0, isSet = false}, fid = {value = 0, + isSet = false}}}, drive = {controller = 0, bus = 0, target = 0, unit = 0, diskbus = 0}, vioserial = { + controller = 0, bus = 0, port = 0}, ccid = {controller = 0, slot = 0}, usb = {bus = 0, port = {0, 0, 0, 0}}, + spaprvio = {reg = 0, has_reg = false}, ccw = {cssid = 0, ssid = 0, devno = 0, assigned = false}, isa = {iobase = 0, + irq = 0}, dimm = {slot = 0, base = 0}}, mastertype = 0, master = {usb = {startport = 0}}, + romenabled = VIR_TRISTATE_BOOL_ABSENT, rombar = VIR_TRISTATE_SWITCH_ABSENT, romfile = 0x0, bootIndex = 1, + effectiveBootIndex = 1, acpiIndex = 0, pciConnectFlags = 9, pciAddrExtFlags = 0, loadparm = 0x0, isolationGroup = 0, + isolationGroupLocked = false}, rawio = VIR_TRISTATE_BOOL_ABSENT, sgio = VIR_DOMAIN_DEVICE_SGIO_DEFAULT, + discard = VIR_DOMAIN_DISK_DISCARD_DEFAULT, iothread = 1, detect_zeroes = VIR_DOMAIN_DISK_DETECT_ZEROES_DEFAULT, + domain_name = 0x0, queues = 0, queue_size = 0, model = VIR_DOMAIN_DISK_MODEL_DEFAULT, virtio = 0x7f823c047170, + diskElementAuth = false, diskElementEnc = false} diff --git a/results/classifier/gemma3:12b/kvm/239 b/results/classifier/gemma3:12b/kvm/239 new file mode 100644 index 00000000..e4b090fb --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/239 @@ -0,0 +1,2 @@ + +Confusing error message when KVM can not start requested ARM CPU diff --git a/results/classifier/gemma3:12b/kvm/2392 b/results/classifier/gemma3:12b/kvm/2392 new file mode 100644 index 00000000..2f99e425 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2392 @@ -0,0 +1,2 @@ + +Ability to use KVM on Windows diff --git a/results/classifier/gemma3:12b/kvm/2394 b/results/classifier/gemma3:12b/kvm/2394 new file mode 100644 index 00000000..19c18347 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2394 @@ -0,0 +1,30 @@ + +kvm-unit-tests vmx failed +Description of problem: +On the Sierra Forest platform, the vmx test in kvm-unit-tests failed. But this issue cannot be replicated on Emerald Rapids platform. + +The first bad commit is ba6780905943696d790cc880c8e5684b51f027fe. +Steps to reproduce: +1.git clone https://gitlab.com/kvm-unit-tests/kvm-unit-tests.git + +2.cd kvm-unit-tests; ./configure + +3.make standalone + +4.rmmod kvm_intel + +5.modprobe kvm_intel nested=Y allow_smaller_maxphyaddr=Y + +6.cd tests; ./vmx +Additional information: +... +FAIL: HOST_CR3 2000000001007000: vmlaunch fails + +FAIL: HOST_CR3 4000000001007000: vmlaunch fails +... 
+ +SUMMARY: 430013 tests, 2 unexpected failures, 2 expected failures, 5 skipped + +FAIL vmx (430013 tests, 2 unexpected failures, 2 expected failures, 5 skipped) + +[error.log](/uploads/02456b40f2736c0bf34d3f4b3a0c872a/error.log) diff --git a/results/classifier/gemma3:12b/kvm/2408 b/results/classifier/gemma3:12b/kvm/2408 new file mode 100644 index 00000000..9e3cd080 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2408 @@ -0,0 +1,238 @@ + +QEMU crashes during guest OS boot if virtserialport is present +Description of problem: +QEMU will load the firmware (`OVMF_CODE.fd`) and run the boot manager (`BootDisk.qcow2`) just fine, then shortly after control is passed to the OS installer (`InstallDisk.raw`) it will crash. + +This only happens if a `virtserialport` is present: dropping that single device from the configuration will allow the installer to run, even if the `virtio-serial-pci` device is still present. The exact value of the `name` attribute doesn't seem to make a difference either, I'm just using the standard one for qemu-ga here. + +Note that `InstallDisk.raw` is attached using `virtio-blk-pci`, so it's this specific virtio device triggering the crash, not the use of virtio devices in general. +Additional information: +The crash happens 100% of the time. + +Running a bisect between 8.2 (known to work) and 9.0 (known to crash) has identified the commit 2ce6cff94df2650c460f809e5ad263f1d22507c0 as the culprit: + +``` +commit 2ce6cff94df2650c460f809e5ad263f1d22507c0 +Author: Cindy Lu <lulu@redhat.com> +Date: Fri Apr 12 14:26:55 2024 +0800 + + virtio-pci: fix use of a released vector + + During the booting process of the non-standard image, the behavior of the + called function in qemu is as follows: + + 1. vhost_net_stop() was triggered by guest image. This will call the function + virtio_pci_set_guest_notifiers() with assgin= false, + virtio_pci_set_guest_notifiers() will release the irqfd for vector 0 + + 2. virtio_reset() was triggered, this will set configure vector to VIRTIO_NO_VECTOR + + 3.vhost_net_start() was called (at this time, the configure vector is + still VIRTIO_NO_VECTOR) and then call virtio_pci_set_guest_notifiers() with + assgin=true, so the irqfd for vector 0 is still not "init" during this process + + 4. The system continues to boot and sets the vector back to 0. After that + msix_fire_vector_notifier() was triggered to unmask the vector 0 and meet the crash + + To fix the issue, we need to support changing the vector after VIRTIO_CONFIG_S_DRIVER_OK is set.
+ + (gdb) bt + 0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) + at pthread_kill.c:44 + 1 0x00007fc87148ec53 in __pthread_kill_internal (signo=6, threadid=<optimized out>) at pthread_kill.c:78 + 2 0x00007fc87143e956 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26 + 3 0x00007fc8714287f4 in __GI_abort () at abort.c:79 + 4 0x00007fc87142871b in __assert_fail_base + (fmt=0x7fc8715bbde0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5606413efd53 "ret == 0", file=0x5606413ef87d "../accel/kvm/kvm-all.c", line=1837, function=<optimized out>) at assert.c:92 + 5 0x00007fc871437536 in __GI___assert_fail + (assertion=0x5606413efd53 "ret == 0", file=0x5606413ef87d "../accel/kvm/kvm-all.c", line=1837, function=0x5606413f06f0 <__PRETTY_FUNCTION__.19> "kvm_irqchip_commit_routes") at assert.c:101 + 6 0x0000560640f884b5 in kvm_irqchip_commit_routes (s=0x560642cae1f0) at ../accel/kvm/kvm-all.c:1837 + 7 0x0000560640c98f8e in virtio_pci_one_vector_unmask + (proxy=0x560643c65f00, queue_no=4294967295, vector=0, msg=..., n=0x560643c6e4c8) + at ../hw/virtio/virtio-pci.c:1005 + 8 0x0000560640c99201 in virtio_pci_vector_unmask (dev=0x560643c65f00, vector=0, msg=...) + at ../hw/virtio/virtio-pci.c:1070 + 9 0x0000560640bc402e in msix_fire_vector_notifier (dev=0x560643c65f00, vector=0, is_masked=false) + at ../hw/pci/msix.c:120 + 10 0x0000560640bc40f1 in msix_handle_mask_update (dev=0x560643c65f00, vector=0, was_masked=true) + at ../hw/pci/msix.c:140 + 11 0x0000560640bc4503 in msix_table_mmio_write (opaque=0x560643c65f00, addr=12, val=0, size=4) + at ../hw/pci/msix.c:231 + 12 0x0000560640f26d83 in memory_region_write_accessor + (mr=0x560643c66540, addr=12, value=0x7fc86b7bc628, size=4, shift=0, mask=4294967295, attrs=...) + at ../system/memory.c:497 + 13 0x0000560640f270a6 in access_with_adjusted_size + + (addr=12, value=0x7fc86b7bc628, size=4, access_size_min=1, access_size_max=4, access_fn=0x560640f26c8d <memory_region_write_accessor>, mr=0x560643c66540, attrs=...) at ../system/memory.c:573 + 14 0x0000560640f2a2b5 in memory_region_dispatch_write (mr=0x560643c66540, addr=12, data=0, op=MO_32, attrs=...) 
+ at ../system/memory.c:1521 + 15 0x0000560640f37bac in flatview_write_continue + (fv=0x7fc65805e0b0, addr=4273803276, attrs=..., ptr=0x7fc871e9c028, len=4, addr1=12, l=4, mr=0x560643c66540) + at ../system/physmem.c:2714 + 16 0x0000560640f37d0f in flatview_write + (fv=0x7fc65805e0b0, addr=4273803276, attrs=..., buf=0x7fc871e9c028, len=4) at ../system/physmem.c:2756 + 17 0x0000560640f380bf in address_space_write + (as=0x560642161ae0 <address_space_memory>, addr=4273803276, attrs=..., buf=0x7fc871e9c028, len=4) + at ../system/physmem.c:2863 + 18 0x0000560640f3812c in address_space_rw + (as=0x560642161ae0 <address_space_memory>, addr=4273803276, attrs=..., buf=0x7fc871e9c028, len=4, is_write=true) at ../system/physmem.c:2873 + --Type <RET> for more, q to quit, c to continue without paging-- + 19 0x0000560640f8aa55 in kvm_cpu_exec (cpu=0x560642f205e0) at ../accel/kvm/kvm-all.c:2915 + 20 0x0000560640f8d731 in kvm_vcpu_thread_fn (arg=0x560642f205e0) at ../accel/kvm/kvm-accel-ops.c:51 + 21 0x00005606411949f4 in qemu_thread_start (args=0x560642f292b0) at ../util/qemu-thread-posix.c:541 + 22 0x00007fc87148cdcd in start_thread (arg=<optimized out>) at pthread_create.c:442 + 23 0x00007fc871512630 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81 + (gdb) + + MST: coding style and typo fixups + + Fixes: f9a09ca3ea ("vhost: add support for configure interrupt") + Cc: qemu-stable@nongnu.org + Signed-off-by: Cindy Lu <lulu@redhat.com> + Message-ID: <2321ade5f601367efe7380c04e3f61379c59b48f.1713173550.git.mst@redhat.com> + Cc: Lei Yang <leiyang@redhat.com> + Cc: Jason Wang <jasowang@redhat.com> + Signed-off-by: Michael S. Tsirkin <mst@redhat.com> + Tested-by: Cindy Lu <lulu@redhat.com> +``` + +Considering that it touches virtio-pci, the results seem plausible. + +This commit was also backported to stable as part of the 8.2.3 release, and indeed I have verified that that version suffers from the crash while 8.2.2 didn't. + +Reverting the commit makes the crash go away, but obviously the change was made for a reason so we probably need a follow-up fix rather than a plain revert. + +Crash and stack trace: + +``` +Thread 10 "qemu-system-x86" received signal SIGSEGV, Segmentation fault. 
+[Switching to Thread 0x7fffe56006c0 (LWP 323938)] +kvm_virtio_pci_vq_vector_use (vector=0, proxy=0x555558e04690) at ../hw/virtio/virtio-pci.c:817 +817 if (irqfd->users == 0) { +(gdb) t a a bt + +Thread 33 (Thread 0x7fffe6a006c0 (LWP 323987) "qemu-system-x86"): +#0 0x00007ffff4ae1169 in __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fffe69fb010, op=393, expected=0, futex_word=0x555557ad4370) at futex-internal.c:57 +#1 __futex_abstimed_wait_common (futex_word=futex_word@entry=0x555557ad4370, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x7fffe69fb010, private=private@entry=0, cancel=cancel@entry=true) at futex-internal.c:87 +#2 0x00007ffff4ae11ef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555557ad4370, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x7fffe69fb010, private=private@entry=0) at futex-internal.c:139 +#3 0x00007ffff4ae3e72 in __pthread_cond_wait_common (abstime=0x7fffe69fb010, clockid=0, mutex=0x7fffe69faf90, cond=0x555557ad4348) at pthread_cond_wait.c:503 +#4 ___pthread_cond_timedwait64 (cond=cond@entry=0x555557ad4348, mutex=mutex@entry=0x555557ad42e0, abstime=abstime@entry=0x7fffe69fb010) at pthread_cond_wait.c:643 +#5 0x0000555555efc651 in qemu_cond_timedwait_ts (cond=cond@entry=0x555557ad4348, mutex=mutex@entry=0x555557ad42e0, ts=ts@entry=0x7fffe69fb010, file=file@entry=0x55555616c035 "../util/thread-pool.c", line=line@entry=91) at ../util/qemu-thread-posix.c:239 +#6 0x0000555555efd2f8 in qemu_cond_timedwait_impl (cond=0x555557ad4348, mutex=0x555557ad42e0, ms=<optimized out>, file=0x55555616c035 "../util/thread-pool.c", line=91) at ../util/qemu-thread-posix.c:253 +#7 0x0000555555f129bc in worker_thread (opaque=opaque@entry=0x555557ad42d0) at ../util/thread-pool.c:91 +#8 0x0000555555efc4c8 in qemu_thread_start (args=0x555557aef190) at ../util/qemu-thread-posix.c:541 +#9 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at pthread_create.c:444 +#10 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 32 (Thread 0x7fffece006c0 (LWP 323986) "qemu-system-x86"): +#0 0x00007ffff4ae1169 in __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x7fffecdfb010, op=393, expected=0, futex_word=0x555557ad4374) at futex-internal.c:57 +#1 __futex_abstimed_wait_common (futex_word=futex_word@entry=0x555557ad4374, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x7fffecdfb010, private=private@entry=0, cancel=cancel@entry=true) at futex-internal.c:87 +#2 0x00007ffff4ae11ef in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x555557ad4374, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x7fffecdfb010, private=private@entry=0) at futex-internal.c:139 +#3 0x00007ffff4ae3e72 in __pthread_cond_wait_common (abstime=0x7fffecdfb010, clockid=0, mutex=0x7fffecdfaf90, cond=0x555557ad4348) at pthread_cond_wait.c:503 +#4 ___pthread_cond_timedwait64 (cond=cond@entry=0x555557ad4348, mutex=mutex@entry=0x555557ad42e0, abstime=abstime@entry=0x7fffecdfb010) at pthread_cond_wait.c:643 +#5 0x0000555555efc651 in qemu_cond_timedwait_ts (cond=cond@entry=0x555557ad4348, mutex=mutex@entry=0x555557ad42e0, ts=ts@entry=0x7fffecdfb010, file=file@entry=0x55555616c035 "../util/thread-pool.c", line=line@entry=91) at ../util/qemu-thread-posix.c:239 +#6 0x0000555555efd2f8 in qemu_cond_timedwait_impl (cond=0x555557ad4348, mutex=0x555557ad42e0, ms=<optimized out>, file=0x55555616c035 
"../util/thread-pool.c", line=91) at ../util/qemu-thread-posix.c:253 +#7 0x0000555555f129bc in worker_thread (opaque=opaque@entry=0x555557ad42d0) at ../util/thread-pool.c:91 +#8 0x0000555555efc4c8 in qemu_thread_start (args=0x555557aee7b0) at ../util/qemu-thread-posix.c:541 +#9 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at pthread_create.c:444 +#10 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 10 (Thread 0x7fffe56006c0 (LWP 323938) "qemu-system-x86"): +#0 kvm_virtio_pci_vq_vector_use (vector=0, proxy=0x555558e04690) at ../hw/virtio/virtio-pci.c:817 +#1 kvm_virtio_pci_vector_use_one (proxy=0x555558e04690, queue_no=5) at ../hw/virtio/virtio-pci.c:893 +#2 0x0000555555cde680 in memory_region_write_accessor (mr=0x555558e05230, addr=26, value=<optimized out>, size=2, shift=<optimized out>, mask=<optimized out>, attrs=...) at ../system/memory.c:497 +#3 0x0000555555cddf26 in access_with_adjusted_size (addr=addr@entry=26, value=value@entry=0x7fffe55fae78, size=size@entry=2, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x555555cde600 <memory_region_write_accessor>, mr=<optimized out>, attrs=...) at ../system/memory.c:573 +#4 0x0000555555cde271 in memory_region_dispatch_write (mr=mr@entry=0x555558e05230, addr=addr@entry=26, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...) at ../system/memory.c:1528 +#5 0x0000555555ce623f in flatview_write_continue_step (attrs=attrs@entry=..., buf=buf@entry=0x7fffeef80028 "", mr_addr=26, l=l@entry=0x7fffe55faf90, mr=0x555558e05230, len=2) at ../system/physmem.c:2757 +#6 0x0000555555ce6918 in flatview_write_continue (mr=<optimized out>, l=<optimized out>, mr_addr=<optimized out>, len=2, ptr=0x8100401a, attrs=..., addr=2164277274, fv=0x7fff343ec810) at ../system/physmem.c:2787 +#7 flatview_write (fv=0x7fff343ec810, addr=addr@entry=2164277274, attrs=attrs@entry=..., buf=buf@entry=0x7fffeef80028, len=len@entry=2) at ../system/physmem.c:2818 +#8 0x0000555555ce9e61 in address_space_write (len=2, buf=0x7fffeef80028, attrs=..., addr=2164277274, as=0x555556e03d40 <address_space_memory>) at ../system/physmem.c:2938 +#9 address_space_rw (as=0x555556e03d40 <address_space_memory>, addr=2164277274, attrs=attrs@entry=..., buf=buf@entry=0x7fffeef80028, len=2, is_write=<optimized out>) at ../system/physmem.c:2948 +#10 0x0000555555d45118 in kvm_cpu_exec (cpu=cpu@entry=0x555557cde8b0) at ../accel/kvm/kvm-all.c:3031 +#11 0x0000555555d46845 in kvm_vcpu_thread_fn (arg=arg@entry=0x555557cde8b0) at ../accel/kvm/kvm-accel-ops.c:50 +#12 0x0000555555efc4c8 in qemu_thread_start (args=0x555557c5a370) at ../util/qemu-thread-posix.c:541 +#13 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at pthread_create.c:444 +#14 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 9 (Thread 0x7fffe60006c0 (LWP 323937) "qemu-system-x86"): +#0 futex_wait (private=0, expected=2, futex_word=0x555556deffe0 <bql>) at ../sysdeps/nptl/futex-internal.h:146 +#1 __GI___lll_lock_wait (futex=futex@entry=0x555556deffe0 <bql>, private=0) at lowlevellock.c:49 +#2 0x00007ffff4ae7e41 in lll_mutex_lock_optimized (mutex=0x555556deffe0 <bql>) at pthread_mutex_lock.c:48 +#3 ___pthread_mutex_lock (mutex=mutex@entry=0x555556deffe0 <bql>) at pthread_mutex_lock.c:93 +#4 0x0000555555efc8c3 in qemu_mutex_lock_impl (mutex=0x555556deffe0 <bql>, file=0x5555560e97ca "../system/physmem.c", line=2689) at ../util/qemu-thread-posix.c:94 +#5 0x0000555555ad6082 in bql_lock_impl 
(file=file@entry=0x5555560e97ca "../system/physmem.c", line=line@entry=2689) at ../system/cpus.c:536 +#6 0x0000555555ce632f in prepare_mmio_access (mr=0x55555874c4b0) at ../system/physmem.c:2689 +#7 flatview_write_continue_step (attrs=..., attrs@entry=..., buf=buf@entry=0x7fffeef83028 "", mr_addr=536, l=l@entry=0x7fffe5ffaf90, mr=0x55555874c4b0, len=4) at ../system/physmem.c:2738 +#8 0x0000555555ce6918 in flatview_write_continue (mr=<optimized out>, l=<optimized out>, mr_addr=<optimized out>, len=4, ptr=0x81084218, attrs=..., addr=2164802072, fv=0x7fff343ec810) at ../system/physmem.c:2787 +#9 flatview_write (fv=0x7fff343ec810, addr=addr@entry=2164802072, attrs=attrs@entry=..., buf=buf@entry=0x7fffeef83028, len=len@entry=4) at ../system/physmem.c:2818 +#10 0x0000555555ce9e61 in address_space_write (len=4, buf=0x7fffeef83028, attrs=..., addr=2164802072, as=0x555556e03d40 <address_space_memory>) at ../system/physmem.c:2938 +#11 address_space_rw (as=0x555556e03d40 <address_space_memory>, addr=2164802072, attrs=attrs@entry=..., buf=buf@entry=0x7fffeef83028, len=4, is_write=<optimized out>) at ../system/physmem.c:2948 +#12 0x0000555555d45118 in kvm_cpu_exec (cpu=cpu@entry=0x555557dbdcd0) at ../accel/kvm/kvm-all.c:3031 +#13 0x0000555555d46845 in kvm_vcpu_thread_fn (arg=arg@entry=0x555557dbdcd0) at ../accel/kvm/kvm-accel-ops.c:50 +#14 0x0000555555efc4c8 in qemu_thread_start (args=0x555557c0b4a0) at ../util/qemu-thread-posix.c:541 +#15 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at pthread_create.c:444 +#16 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 7 (Thread 0x7fffe74006c0 (LWP 323934) "dconf worker"): +#0 0x00007ffff4b5de3d in __GI___poll (fds=0x7fffc8000b90, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 +#1 0x00007ffff6e38f04 in g_main_context_poll_unlocked (priority=2147483647, n_fds=1, fds=0x7fffc8000b90, timeout=<optimized out>, context=0x555557adfef0) at ../glib/gmain.c:4653 +#2 g_main_context_iterate_unlocked.isra.0 (context=context@entry=0x555557adfef0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4344 +#3 0x00007ffff6ddbad3 in g_main_context_iteration (context=context@entry=0x555557adfef0, may_block=may_block@entry=1) at ../glib/gmain.c:4414 +#4 0x00007ffff7fb16b5 in dconf_gdbus_worker_thread (user_data=0x555557adfef0) at ../gdbus/dconf-gdbus-thread.c:82 +#5 0x00007ffff6e0e573 in g_thread_proxy (data=0x555557ae00d0) at ../glib/gthread.c:831 +#6 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at pthread_create.c:444 +#7 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 6 (Thread 0x7fffe7e006c0 (LWP 323933) "gdbus"): +#0 0x00007ffff4b5de3d in __GI___poll (fds=0x7fffd0000b90, nfds=3, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 +#1 0x00007ffff6e38f04 in g_main_context_poll_unlocked (priority=2147483647, n_fds=3, fds=0x7fffd0000b90, timeout=<optimized out>, context=0x7fffd4005a90) at ../glib/gmain.c:4653 +#2 g_main_context_iterate_unlocked.isra.0 (context=0x7fffd4005a90, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4344 +#3 0x00007ffff6ddf447 in g_main_loop_run (loop=0x7fffd4005b80) at ../glib/gmain.c:4551 +#4 0x00007ffff7048bc2 in gdbus_shared_thread_func (user_data=0x7fffd4005a60) at ../gio/gdbusprivate.c:284 +#5 0x00007ffff6e0e573 in g_thread_proxy (data=0x7fffd4005bc0) at ../glib/gthread.c:831 +#6 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at 
pthread_create.c:444 +#7 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 4 (Thread 0x7fffed8006c0 (LWP 323931) "gmain"): +#0 0x00007ffff4b5de3d in __GI___poll (fds=0x555557acd200, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29 +#1 0x00007ffff6e38f04 in g_main_context_poll_unlocked (priority=2147483647, n_fds=1, fds=0x555557acd200, timeout=<optimized out>, context=0x555557accfd0) at ../glib/gmain.c:4653 +#2 g_main_context_iterate_unlocked.isra.0 (context=context@entry=0x555557accfd0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4344 +#3 0x00007ffff6ddbad3 in g_main_context_iteration (context=0x555557accfd0, may_block=may_block@entry=1) at ../glib/gmain.c:4414 +#4 0x00007ffff6ddbb29 in glib_worker_main (data=<optimized out>) at ../glib/gmain.c:6574 +#5 0x00007ffff6e0e573 in g_thread_proxy (data=0x555557ac1140) at ../glib/gthread.c:831 +#6 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at pthread_create.c:444 +#7 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 3 (Thread 0x7fffee2006c0 (LWP 323930) "pool-spawner"): +#0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 +#1 0x00007ffff6e35b7d in g_cond_wait (cond=0x555557ac5f28, mutex=0x555557ac5f20) at ../glib/gthread-posix.c:1552 +#2 0x00007ffff6da922b in g_async_queue_pop_intern_unlocked (queue=0x555557ac5f20, wait=1, end_time=-1) at ../glib/gasyncqueue.c:425 +#3 0x00007ffff6e123e3 in g_thread_pool_spawn_thread (data=<optimized out>) at ../glib/gthreadpool.c:311 +#4 0x00007ffff6e0e573 in g_thread_proxy (data=0x555557ac7800) at ../glib/gthread.c:831 +#5 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at pthread_create.c:444 +#6 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 2 (Thread 0x7fffeec006c0 (LWP 323929) "qemu-system-x86"): +#0 syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38 +#1 0x0000555555efd7ca in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at /home/abologna/src/upstream/qemu/include/qemu/futex.h:29 +#2 qemu_event_wait (ev=ev@entry=0x555556e182e8 <rcu_call_ready_event>) at ../util/qemu-thread-posix.c:464 +#3 0x0000555555f07216 in call_rcu_thread (opaque=opaque@entry=0x0) at ../util/rcu.c:278 +#4 0x0000555555efc4c8 in qemu_thread_start (args=0x555556ea0ed0) at ../util/qemu-thread-posix.c:541 +#5 0x00007ffff4ae4897 in start_thread (arg=<optimized out>) at pthread_create.c:444 +#6 0x00007ffff4b6ba5c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 + +Thread 1 (Thread 0x7fffef0864c0 (LWP 323692) "qemu-system-x86"): +#0 futex_wait (private=0, expected=2, futex_word=0x555556deffe0 <bql>) at ../sysdeps/nptl/futex-internal.h:146 +#1 __GI___lll_lock_wait (futex=futex@entry=0x555556deffe0 <bql>, private=0) at lowlevellock.c:49 +#2 0x00007ffff4ae7e41 in lll_mutex_lock_optimized (mutex=0x555556deffe0 <bql>) at pthread_mutex_lock.c:48 +#3 ___pthread_mutex_lock (mutex=mutex@entry=0x555556deffe0 <bql>) at pthread_mutex_lock.c:93 +#4 0x0000555555efc8c3 in qemu_mutex_lock_impl (mutex=0x555556deffe0 <bql>, file=0x55555616b7ef "../util/main-loop.c", line=308) at ../util/qemu-thread-posix.c:94 +#5 0x0000555555ad6082 in bql_lock_impl (file=file@entry=0x55555616b7ef "../util/main-loop.c", line=line@entry=308) at ../system/cpus.c:536 +#6 0x0000555555f109a6 in os_host_main_loop_wait (timeout=6299288) at ../util/main-loop.c:308 +#7 main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:589 
+#8 0x0000555555ae0ce9 in qemu_main_loop () at ../system/runstate.c:795 +#9 0x0000555555d50f66 in qemu_default_main () at ../system/main.c:37 +#10 0x00007ffff4a7e14a in __libc_start_call_main (main=main@entry=0x555555897b80 <main>, argc=argc@entry=29, argv=argv@entry=0x7fffffffe0e8) at ../sysdeps/nptl/libc_start_call_main.h:58 +#11 0x00007ffff4a7e20b in __libc_start_main_impl (main=0x555555897b80 <main>, argc=29, argv=0x7fffffffe0e8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe0d8) at ../csu/libc-start.c:360 +#12 0x00005555558998a5 in _start () +``` diff --git a/results/classifier/gemma3:12b/kvm/2411 b/results/classifier/gemma3:12b/kvm/2411 new file mode 100644 index 00000000..9fbfae4e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2411 @@ -0,0 +1,12 @@ + +[SPICE] How to make SPICE work with GVT-g + DMA-BUF + egl-headless ? +Description of problem: +I try to use GVT-g + DMA-BUF in PVE , vGPU display output can be displayed normally on noVNC, + +but when I try use SPICE, VM would not boot, come up with error: kvm: **The console requires display DMABUF support**. +Steps to reproduce: +1. Create a windows virtual machine +2. Manually add args to the conf file, add the mdev device of GVT-g. +3. Starting the Virtual Machine + +# diff --git a/results/classifier/gemma3:12b/kvm/2414 b/results/classifier/gemma3:12b/kvm/2414 new file mode 100644 index 00000000..0b39a220 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2414 @@ -0,0 +1,118 @@ + +qemu 9.0.0 crashing with OpenBSD 7.5 +Description of problem: +After upgrading from Qemu 8.23 to 9.0 this virtual does not start anymore (others do). The bootloader runs fine and starts the OpenBSD kernel, some kernel messages are shown on VGA console. It never reaches userland. +Additional information: +``` +Jun 29 07:15:10 hypervisor kernel: qemu-system-x86[12021]: segfault at 14 ip 000056547310bee4 sp 00007fc6d68c8310 error 4 in qemu-system-x86_64[565472ee0000+6ea000] +Jun 29 07:15:10 hypervisor kernel: Code: 01 00 00 48 83 c4 58 5b 5d 41 5c 41 5d 41 5e 41 5f c3 0f 1f 40 00 89 f0 48 8b 8b 40 83 00 00 4c 8d 0c 40 49 c1 e1 03 4c 01 c9 <8b> 41 14 85 c0 0f 84 11 01 00 00 83 c0 01 89 41 14 41 80 bf d1 01 +Jun 29 07:15:10 hypervisor systemd[1]: Started Process Core Dump (PID 12122/UID 0). +Jun 29 07:15:39 hypervisor systemd-coredump[12123]: Process 12017 (qemu-system-x86) of user 954 dumped core. 
+ + Stack trace of thread 12021: + #0 0x000056547310bee4 n/a (qemu-system-x86_64 + 0x397ee4) + #1 0x000056547330d5e2 n/a (qemu-system-x86_64 + 0x5995e2) + #2 0x000056547330dba6 n/a (qemu-system-x86_64 + 0x599ba6) + #3 0x000056547330e059 memory_region_dispatch_write (qemu-system-x86_64 + 0x59a059) + #4 0x00005654735c1e1f n/a (qemu-system-x86_64 + 0x84de1f) + #5 0x0000565473314a7d n/a (qemu-system-x86_64 + 0x5a0a7d) + #6 0x0000565473314b76 address_space_write (qemu-system-x86_64 + 0x5a0b76) + #7 0x000056547336cafe kvm_cpu_exec (qemu-system-x86_64 + 0x5f8afe) + #8 0x000056547336f56e n/a (qemu-system-x86_64 + 0x5fb56e) + #9 0x000056547352fca8 n/a (qemu-system-x86_64 + 0x7bbca8) + #10 0x00007fc6d93b6ded n/a (libc.so.6 + 0x92ded) + #11 0x00007fc6d943a0dc n/a (libc.so.6 + 0x1160dc) + + Stack trace of thread 12026: + #0 0x00007fc6d93b3740 n/a (libc.so.6 + 0x8f740) + #1 0x00007fc6d93ba551 pthread_mutex_lock (libc.so.6 + 0x96551) + #2 0x0000565473535858 qemu_mutex_lock_impl (qemu-system-x86_64 + 0x7c1858) + #3 0x000056547313f906 bql_lock_impl (qemu-system-x86_64 + 0x3cb906) + #4 0x00005654735c1c7f n/a (qemu-system-x86_64 + 0x84dc7f) + #5 0x0000565473313776 flatview_read_continue (qemu-system-x86_64 + 0x59f776) + #6 0x0000565473314df0 n/a (qemu-system-x86_64 + 0x5a0df0) + #7 0x0000565473314eb6 address_space_read_full (qemu-system-x86_64 + 0x5a0eb6) + #8 0x000056547336cdf5 kvm_cpu_exec (qemu-system-x86_64 + 0x5f8df5) + #9 0x000056547336f56e n/a (qemu-system-x86_64 + 0x5fb56e) + #10 0x000056547352fca8 n/a (qemu-system-x86_64 + 0x7bbca8) + #11 0x00007fc6d93b6ded n/a (libc.so.6 + 0x92ded) + #12 0x00007fc6d943a0dc n/a (libc.so.6 + 0x1160dc) + + Stack trace of thread 12018: + #0 0x00007fc6d9402f43 clock_nanosleep (libc.so.6 + 0xdef43) + #1 0x00007fc6d940ed77 __nanosleep (libc.so.6 + 0xead77) + #2 0x00007fc6d98ccee0 g_usleep (libglib-2.0.so.0 + 0x8dee0) + #3 0x0000565473545a75 n/a (qemu-system-x86_64 + 0x7d1a75) + #4 0x000056547352fca8 n/a (qemu-system-x86_64 + 0x7bbca8) + #5 0x00007fc6d93b6ded n/a (libc.so.6 + 0x92ded) + #6 0x00007fc6d943a0dc n/a (libc.so.6 + 0x1160dc) + + Stack trace of thread 12020: + #0 0x00007fc6d942c39d __poll (libc.so.6 + 0x10839d) + #1 0x00007fc6d98fd8fd n/a (libglib-2.0.so.0 + 0xbe8fd) + #2 0x00007fc6d989c787 g_main_loop_run (libglib-2.0.so.0 + 0x5d787) + #3 0x00005654733bf7c2 n/a (qemu-system-x86_64 + 0x64b7c2) + #4 0x000056547352fca8 n/a (qemu-system-x86_64 + 0x7bbca8) + #5 0x00007fc6d93b6ded n/a (libc.so.6 + 0x92ded) + #6 0x00007fc6d943a0dc n/a (libc.so.6 + 0x1160dc) + + Stack trace of thread 12017: + #0 0x00007fc6d942c910 ppoll (libc.so.6 + 0x108910) + #1 0x000056547354ae83 qemu_poll_ns (qemu-system-x86_64 + 0x7d6e83) + #2 0x000056547355800e main_loop_wait (qemu-system-x86_64 + 0x7e400e) + #3 0x000056547337a337 qemu_default_main (qemu-system-x86_64 + 0x606337) + #4 0x00007fc6d9349c88 n/a (libc.so.6 + 0x25c88) + #5 0x00007fc6d9349d4c __libc_start_main (libc.so.6 + 0x25d4c) + #6 0x0000565472ef08b5 _start (qemu-system-x86_64 + 0x17c8b5) + + Stack trace of thread 12025: + #0 0x00007fc6d942c39d __poll (libc.so.6 + 0x10839d) + #1 0x00007fc6d98fd8fd n/a (libglib-2.0.so.0 + 0xbe8fd) + #2 0x00007fc6d989c787 g_main_loop_run (libglib-2.0.so.0 + 0x5d787) + #3 0x00007fc6d78ff0cb n/a (libspice-server.so.1 + 0x530cb) + #4 0x00007fc6d93b6ded n/a (libc.so.6 + 0x92ded) + #5 0x00007fc6d943a0dc n/a (libc.so.6 + 0x1160dc) + + Stack trace of thread 12117: + #0 0x00007fc6d93b34e9 n/a (libc.so.6 + 0x8f4e9) + #1 0x00007fc6d93b6242 pthread_cond_timedwait (libc.so.6 + 0x92242) + #2 0x0000565473536546 n/a 
(qemu-system-x86_64 + 0x7c2546) + #3 0x00005654735367ad qemu_cond_timedwait_impl (qemu-system-x86_64 + 0x7c27ad) + #4 0x00005654735569d5 n/a (qemu-system-x86_64 + 0x7e29d5) + #5 0x000056547352fca8 n/a (qemu-system-x86_64 + 0x7bbca8) + #6 0x00007fc6d93b6ded n/a (libc.so.6 + 0x92ded) + #7 0x00007fc6d943a0dc n/a (libc.so.6 + 0x1160dc) + + Stack trace of thread 12028: + #0 0x00007fc6d93b3740 n/a (libc.so.6 + 0x8f740) + #1 0x00007fc6d93ba551 pthread_mutex_lock (libc.so.6 + 0x96551) + #2 0x0000565473535858 qemu_mutex_lock_impl (qemu-system-x86_64 + 0x7c1858) + #3 0x000056547313f906 bql_lock_impl (qemu-system-x86_64 + 0x3cb906) + #4 0x00005654735c1c7f n/a (qemu-system-x86_64 + 0x84dc7f) + #5 0x0000565473313776 flatview_read_continue (qemu-system-x86_64 + 0x59f776) + #6 0x0000565473314df0 n/a (qemu-system-x86_64 + 0x5a0df0) + #7 0x0000565473314eb6 address_space_read_full (qemu-system-x86_64 + 0x5a0eb6) + #8 0x000056547336cdf5 kvm_cpu_exec (qemu-system-x86_64 + 0x5f8df5) + #9 0x000056547336f56e n/a (qemu-system-x86_64 + 0x5fb56e) + #10 0x000056547352fca8 n/a (qemu-system-x86_64 + 0x7bbca8) + #11 0x00007fc6d93b6ded n/a (libc.so.6 + 0x92ded) + #12 0x00007fc6d943a0dc n/a (libc.so.6 + 0x1160dc) + + Stack trace of thread 12027: + #0 0x00007fc6d93b3740 n/a (libc.so.6 + 0x8f740) + #1 0x00007fc6d93ba551 pthread_mutex_lock (libc.so.6 + 0x96551) + #2 0x0000565473535858 qemu_mutex_lock_impl (qemu-system-x86_64 + 0x7c1858) + #3 0x000056547313f906 bql_lock_impl (qemu-system-x86_64 + 0x3cb906) + #4 0x00005654735c1c7f n/a (qemu-system-x86_64 + 0x84dc7f) + #5 0x0000565473313776 flatview_read_continue (qemu-system-x86_64 + 0x59f776) + #6 0x0000565473314df0 n/a (qemu-system-x86_64 + 0x5a0df0) + #7 0x0000565473314eb6 address_space_read_full (qemu-system-x86_64 + 0x5a0eb6) + #8 0x000056547336cdf5 kvm_cpu_exec (qemu-system-x86_64 + 0x5f8df5) + #9 0x000056547336f56e n/a (qemu-system-x86_64 + 0x5fb56e) + #10 0x000056547352fca8 n/a (qemu-system-x86_64 + 0x7bbca8) + #11 0x00007fc6d93b6ded n/a (libc.so.6 + 0x92ded) + #12 0x00007fc6d943a0dc n/a (libc.so.6 + 0x1160dc) + ELF object binary architecture: AMD x86-64 +Jun 29 07:15:40 hypervisor systemd[1]: systemd-coredump@2-12122-0.service: Deactivated successfully. +Jun 29 07:15:40 hypervisor systemd[1]: systemd-coredump@2-12122-0.service: Consumed 20.231s CPU time. +``` diff --git a/results/classifier/gemma3:12b/kvm/2420 b/results/classifier/gemma3:12b/kvm/2420 new file mode 100644 index 00000000..a7c94b17 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2420 @@ -0,0 +1,45 @@ + +Error: Deprecated CPU topology (considered invalid): Unsupported cluster parameter musn't be specified as 1 +Description of problem: +warning: Deprecated CPU topology (considered invalid): Unsupported clusters parameter mustn't be specified as 1 +VM does not start + +What I've tried so far to fix: + +- Removed the offending `clusters="1"` parameter in the XML, both via virsh edit and virt-manager but the sucker comes back every time! + +- Creating a completely new VM from scratch, just keeping the qcow2 for Windows. What happens then is funny: The initial setup goes well. Machine type automatically gets set to q35 version 9.0. After setting up my cores (pinning) for the VM (7C/14T for the VM 1C/2T for host), there is no "clusters" parameter anymore. So the first start went well. After a RESTART of the whole host machine and subsequent launch of the VM guess what happened? The "clusters" thing is back in full swing. +Steps to reproduce: +1. Create Windows 11 VM with virt-manager +2. 
Try to do core pinning and setting up the following in virt manager before +- Copy CPU configuration from host (host-passthrough) +- Manually set CPU structure via GUI to 1 Socket, 7 Cores, 2 Threads on an 8 Core (in my case 11900k) +3. Observe result in XML being: + `<topology sockets="1" dies="1" clusters="1" cores="7" threads="2"/>` + +Again, the "clusters" entry leads to the VM not starting. Removing it doesn't work, it comes back straight away. I tried in virt-manager as well as with virsh edit. +Additional information: +My core pinning for reference: + +``` +<vcpu placement="static">14</vcpu> + <iothreads>1</iothreads> + <cputune> + <vcpupin vcpu="0" cpuset="0"/> + <vcpupin vcpu="1" cpuset="8"/> + <vcpupin vcpu="2" cpuset="1"/> + <vcpupin vcpu="3" cpuset="9"/> + <vcpupin vcpu="4" cpuset="2"/> + <vcpupin vcpu="5" cpuset="10"/> + <vcpupin vcpu="6" cpuset="3"/> + <vcpupin vcpu="7" cpuset="11"/> + <vcpupin vcpu="8" cpuset="4"/> + <vcpupin vcpu="9" cpuset="12"/> + <vcpupin vcpu="10" cpuset="5"/> + <vcpupin vcpu="11" cpuset="13"/> + <vcpupin vcpu="12" cpuset="6"/> + <vcpupin vcpu="13" cpuset="14"/> + <emulatorpin cpuset="7,15"/> + <iothreadpin iothread="1" cpuset="7,15"/> + </cputune> +``` diff --git a/results/classifier/gemma3:12b/kvm/2436 b/results/classifier/gemma3:12b/kvm/2436 new file mode 100644 index 00000000..89656c5f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2436 @@ -0,0 +1,2 @@ + +virtio kvm iofd sigfault bypass diff --git a/results/classifier/gemma3:12b/kvm/2442 b/results/classifier/gemma3:12b/kvm/2442 new file mode 100644 index 00000000..d4576143 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2442 @@ -0,0 +1,148 @@ + +kvm-unit-tests ept failed +Description of problem: +On the Sierra Forest and Emerald Rapids platform, the ept test in kvm-unit-tests failed on the latest QEMU. + +QEMU first bad commit is 0b2757412cb1d1947d7e2c1fe14985f1e72bba32. + +This bad commit also caused other errors, such as: + +1.kvm-unit-tests vmx_pf_invvpid_test + +Test suite: vmx_pf_invvpid_test + +Host skipping test: INVVPID ADDR unsupported + +filter = vmx_pf_invvpid_test, test = vmx_pf_vpid_test + +filter = vmx_pf_invvpid_test, test = vmx_exception_test + +SUMMARY: 0 tests + +SKIP vmx_pf_invvpid_test (0 tests) + +2.kvm-unit-tests vmx_pf_no_vpid_test + +Test suite: vmx_pf_no_vpid_test + +run + +x86/vmx_tests.c:10568: assert failed: false: Unexpected exit to L1, exit_reason: VMX_CR (0x1c) + STACK: 40717c 4072a3 402039 403f11 4001bd + +FAIL vmx_pf_no_vpid_test + +3.kvm-unit-tests vmx: + +Test suite: vmx_controls_test + +FAIL: Clear primary processor-based controls bit 15: vmlaunch fails + +FAIL: Clear primary processor-based controls bit 16: vmlaunch fails + +Test suite: vmx_mtf_test + +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000025 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0010'0101 - 37 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_MTF, got VMX_CR. + STACK: 406faa 407478 407911 402039 403f11 4001bd + +4.Failed to boot L2 guest on L1 windows guest, host does not support "Intel EPT" hardware assisted MMU virtualization. +Steps to reproduce: +1.git clone https://gitlab.com/kvm-unit-tests/kvm-unit-tests.git + +2.cd kvm-unit-tests; ./configure + +3.make standalone + +4.rmmod kvm_intel + +5.modprobe kvm_intel nested=Y allow_smaller_maxphyaddr=Y + +6.cd tests; ./ept +Additional information: +... 
+Test suite: ept_access_test_paddr_not_present_ad_disabled +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000012 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'0010 - 18 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_VMCALL, got VMX_CR. + STACK: 406faa 40730c 416905 416cf2 416f68 402039 403f11 4001bd +filter = ept_access*, test = ept_access_test_paddr_not_present_ad_enabled + +Test suite: ept_access_test_paddr_not_present_ad_enabled +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000012 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'0010 - 18 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_VMCALL, got VMX_CR. + STACK: 406faa 40730c 416905 416cf2 416f09 402039 403f11 4001bd +filter = ept_access*, test = ept_access_test_paddr_read_only_ad_disabled + +Test suite: ept_access_test_paddr_read_only_ad_disabled +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000012 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'0010 - 18 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_VMCALL, got VMX_CR. + STACK: 406faa 40730c 416905 416cf2 417150 402039 403f11 4001bd +filter = ept_access*, test = ept_access_test_paddr_read_only_ad_enabled + +Test suite: ept_access_test_paddr_read_only_ad_enabled +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000012 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'0010 - 18 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_VMCALL, got VMX_CR. + STACK: 406faa 40730c 416905 416cf2 416e14 402039 403f11 4001bd +filter = ept_access*, test = ept_access_test_paddr_read_write + +Test suite: ept_access_test_paddr_read_write +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000012 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'0010 - 18 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_VMCALL, got VMX_CR. + STACK: 406faa 40730c 416905 416fb1 4170fb 402039 403f11 4001bd +filter = ept_access*, test = ept_access_test_paddr_read_write_execute + +Test suite: ept_access_test_paddr_read_write_execute +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000012 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'0010 - 18 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_VMCALL, got VMX_CR. + STACK: 406faa 40730c 416905 416fb1 4170b0 402039 403f11 4001bd +filter = ept_access*, test = ept_access_test_paddr_read_execute_ad_disabled + +Test suite: ept_access_test_paddr_read_execute_ad_disabled +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000012 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'0010 - 18 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_VMCALL, got VMX_CR. 
+ STACK: 406faa 40730c 416905 416cf2 416fde 402039 403f11 4001bd +filter = ept_access*, test = ept_access_test_paddr_read_execute_ad_enabled + +Test suite: ept_access_test_paddr_read_execute_ad_enabled +FAIL: x86/vmx_tests.c:2164: Assertion failed: (expected) == (actual) + LHS: 0x0000000000000012 - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'0010 - 18 + RHS: 0x000000000000001c - 0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0000'0001'1100 - 28 +Expected VMX_VMCALL, got VMX_CR. + STACK: 406faa 40730c 416905 416cf2 416d1f 402039 403f11 4001bd +filter = ept_access*, test = ept_access_test_paddr_not_present_page_fault + +Test suite: ept_access_test_paddr_not_present_page_fault +filter = ept_access*, test = ept_access_test_force_2m_page + +Test suite: ept_access_test_force_2m_page +filter = ept_access*, test = atomic_switch_max_msrs_test +filter = ept_access*, test = atomic_switch_overflow_msrs_test +filter = ept_access*, test = rdtsc_vmexit_diff_test +filter = ept_access*, test = vmx_mtf_test +filter = ept_access*, test = vmx_mtf_pdpte_test +filter = ept_access*, test = vmx_pf_exception_test +filter = ept_access*, test = vmx_pf_exception_forced_emulation_test +filter = ept_access*, test = vmx_pf_no_vpid_test +filter = ept_access*, test = vmx_pf_invvpid_test +filter = ept_access*, test = vmx_pf_vpid_test +filter = ept_access*, test = vmx_exception_test +SUMMARY: 5824 tests, 8 unexpected failures +FAIL ept (5824 tests, 8 unexpected failures) + +[error.log](/uploads/407a04df83bae220bca6fad3c9bba9ff/error.log) diff --git a/results/classifier/gemma3:12b/kvm/2445 b/results/classifier/gemma3:12b/kvm/2445 new file mode 100644 index 00000000..49f81d30 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2445 @@ -0,0 +1,88 @@ + +virtio-pci: the number of irq routes keeps increasing and qemu abort +Description of problem: + +Steps to reproduce: +1. Start a virtual machine and add a virtio-scsi controller for vm, E.g: + + `<controller type='scsi' model='virtio-scsi' index='1'/>` +2. write rand value and rand address in port IO address space of virtio-scsi device in the guest, E.g: + + ``` + int main(){ + iopl(3); + srand(10001); + unsigned port_base = 0xc000; + unsigned port_space_size = 32; + time_t now; + struct tm *tm_struct; + int i; + + for (i=0;i<100000000;i++){ + outb(rand()&0xff,port_base+rand()%port_space_size); + outw(rand()&0xffff,port_base+rand()%port_space_size); + outl(rand(),port_base+rand()%port_space_size); + } + return 0; + } + ``` + + or write some special value: + + ``` + int main(){ + iopl(3); + srand(10001); + unsigned port_base = 0xc000; + unsigned port_space_size = 32; + int i; + + for (i=0;i<100000000;i++){ + outw(13170, port_base + 18); // DRIVER + outw(16, port_base + 20); // config_vector = 16 + outw(34244, port_base + 18); // DRIVE OK + outw(29, port_base + 20); // config_vector = 65535 + outw(5817, port_base + 18); // not DRIVE OK + usleep(1000); + } + return 0; + } + ``` +3. 
the number of irq routes will keep increasing and the QEMU process on the host will abort +Additional information: +stack information after the QEMU process aborts: + +``` +#0 0x00007f3cd38500ff in () at /usr/lib64/libc.so.6 +#1 0x00007f3cd3803d06 in raise () at /usr/lib64/libc.so.6 +#2 0x00007f3cd37ef1f7 in abort () at /usr/lib64/libc.so.6 +#3 0x0000563055c54d68 in kvm_irqchip_commit_routes (s=0x563058b24bc0) at ../accel/kvm/kvm-all.c:1872 +#4 kvm_irqchip_commit_routes (s=0x563058b24bc0) at ../accel/kvm/kvm-all.c:1855 +#5 0x0000563055a1c242 in kvm_irqchip_commit_route_changes (c=0x7f3ccaffc040) at /Images/syg/code/openEuler/qemu/include/sysemu/kvm.h:470 +#6 kvm_virtio_pci_vq_vector_use (vector=18, proxy=0x563059b7f320) at ../hw/virtio/virtio-pci.c:875 +#7 kvm_virtio_pci_vector_use_one (proxy=proxy@entry=0x563059b7f320, queue_no=queue_no@entry=17) at ../hw/virtio/virtio-pci.c:948 +#8 0x0000563055a1d718 in kvm_virtio_pci_vector_vq_use (nvqs=18, proxy=0x563059b7f320) at ../hw/virtio/virtio-pci.c:1010 +#9 virtio_pci_set_guest_notifiers (d=0x563059b7f320, nvqs=18, assign=<optimized out>) at ../hw/virtio/virtio-pci.c:1373 +#10 0x00005630559cb5f9 in virtio_scsi_dataplane_start (vdev=0x563059b876f0) at ../hw/scsi/virtio-scsi-dataplane.c:116 +#11 0x0000563055a194f2 in virtio_bus_start_ioeventfd (bus=bus@entry=0x563059b87670) at ../hw/virtio/virtio-bus.c:236 +#12 0x0000563055a1c9f2 in virtio_pci_start_ioeventfd (proxy=0x563059b7f320) at ../hw/virtio/virtio-pci.c:375 +#13 virtio_ioport_write (val=34244, addr=18, opaque=0x563059b7f320) at ../hw/virtio/virtio-pci.c:471 +#14 virtio_pci_config_write (opaque=0x563059b7f320, addr=18, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:617 +#15 0x0000563055bfb3af in memory_region_write_accessor (mr=mr@entry=0x563059b7fd50, addr=18, value=value@entry=0x7f3ccaffc2c8, size=size@entry=2, shift=<optimized out>, mask=mask@entry=65535, attrs=...) +    at ../system/memory.c:497 +#16 0x0000563055bfc05e in access_with_adjusted_size (addr=addr@entry=18, value=value@entry=0x7f3ccaffc2c8, size=size@entry=2, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn= +    0x563055bfb330 <memory_region_write_accessor>, mr=0x563059b7fd50, attrs=...) at ../system/memory.c:573 +#17 0x0000563055bfd074 in memory_region_dispatch_write (mr=0x563059b7fd50, addr=18, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
at ../system/memory.c:1528 +#18 0x0000563055c040f4 in flatview_write_continue + (fv=fv@entry=0x7f3aa40198b0, addr=addr@entry=49170, attrs=attrs@entry=..., ptr=ptr@entry=0x7f3cd0002000, len=len@entry=2, addr1=<optimized out>, l=<optimized out>, mr=<optimized out>) + at /Images/syg/code/openEuler/qemu/include/qemu/host-utils.h:238 +#19 0x0000563055c043e0 in flatview_write (fv=0x7f3aa40198b0, addr=addr@entry=49170, attrs=attrs@entry=..., buf=buf@entry=0x7f3cd0002000, len=len@entry=2) at ../system/physmem.c:2799 +#20 0x0000563055c07c48 in address_space_write (len=2, buf=0x7f3cd0002000, attrs=..., addr=49170, as=0x563056cc8fe0 <address_space_io>) at ../system/physmem.c:2906 +#21 address_space_rw (as=0x563056cc8fe0 <address_space_io>, addr=addr@entry=49170, attrs=attrs@entry=..., buf=0x7f3cd0002000, len=len@entry=2, is_write=is_write@entry=true) at ../system/physmem.c:2916 +#22 0x0000563055c58663 in kvm_handle_io (count=1, size=2, direction=<optimized out>, data=<optimized out>, attrs=..., port=49170) at ../accel/kvm/kvm-all.c:2670 +#23 kvm_cpu_exec (cpu=cpu@entry=0x563058ee2a40) at ../accel/kvm/kvm-all.c:2943 +#24 0x0000563055c59965 in kvm_vcpu_thread_fn (arg=0x563058ee2a40) at ../accel/kvm/kvm-accel-ops.c:51 +#25 0x0000563055ddb9df in qemu_thread_start (args=0x563058eecaa0) at ../util/qemu-thread-posix.c:541 +#26 0x00007f3cd384e51a in () at /usr/lib64/libc.so.6 +#27 0x00007f3cd38d0e00 in () at /usr/lib64/libc.so.6 +``` diff --git a/results/classifier/gemma3:12b/kvm/2452 b/results/classifier/gemma3:12b/kvm/2452 new file mode 100644 index 00000000..b73fb45a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2452 @@ -0,0 +1,2 @@ + +memory allocation for AMDVIIOTLBEntry in amdvi_update_iotlb() diff --git a/results/classifier/gemma3:12b/kvm/252 b/results/classifier/gemma3:12b/kvm/252 new file mode 100644 index 00000000..bafdeb42 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/252 @@ -0,0 +1,2 @@ + +KVM Old ATI(pre) AMD card passthrough is not working diff --git a/results/classifier/gemma3:12b/kvm/2555 b/results/classifier/gemma3:12b/kvm/2555 new file mode 100644 index 00000000..0a4fd3a8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2555 @@ -0,0 +1,21 @@ + +Can't start a guest with 2 IOAPICs +Description of problem: +For a host with multiple IOAPICs, I want to start a guest with 2 IOAPICs. I saw this commit about this function: **[x86: add support for second ioapic]**: + https://gitlab.com/qemu-project/qemu/-/commit/94c5a606379ddd04beecdb11fb34b51b4b28c7f2 + +But after I started a guest in a host with multiple IOAPICs, there was still only one IOAPIC in guest. How should I enable this feature? 
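+
+Independent of dmesg, the number of I/O APICs the firmware actually advertises to the guest can be cross-checked from the ACPI MADT. Below is a minimal diagnostic sketch, not part of QEMU, assuming a Linux guest where the table is readable as root; MADT entry type 1 is an I/O APIC:
+
+```c
+/* Count I/O APIC entries (type 1) in the guest's ACPI MADT.
+ * Hypothetical cross-check; run as root inside the guest. */
+#include <stdio.h>
+#include <stdint.h>
+
+int main(void)
+{
+    FILE *f = fopen("/sys/firmware/acpi/tables/APIC", "rb");
+    if (!f) { perror("MADT"); return 1; }
+
+    static uint8_t buf[65536];
+    size_t len = fread(buf, 1, sizeof(buf), f);
+    fclose(f);
+
+    int ioapics = 0;
+    /* 36-byte ACPI table header + 4-byte local APIC address + 4-byte flags = 44 */
+    for (size_t off = 44; off + 2 <= len && buf[off + 1] >= 2; off += buf[off + 1]) {
+        if (buf[off] == 1) {   /* interrupt controller structure type 1 = I/O APIC */
+            ioapics++;
+        }
+    }
+    printf("I/O APICs advertised by ACPI: %d\n", ioapics);
+    return 0;
+}
+```
+
+If this also reports a single I/O APIC, the limitation is on the QEMU/firmware side rather than in the guest kernel's probing.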
+Additional information: +Host IOAPICs Info: + ``` +[ 1.268280] IOAPIC[0]: apic_id 0, version 33, address 0xfec00000, GSI 0-23 +[ 1.268286] IOAPIC[1]: apic_id 1, version 33, address 0xfec20000, GSI 24-55 +[ 1.268291] IOAPIC[2]: apic_id 2, version 33, address 0xd9000000, GSI 56-87 +[ 4.415313] ACPI: Using IOAPIC for interrupt routing + ``` + +Guest IOAPIC Info: + ``` +[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 +[ 0.255045] ACPI: Using IOAPIC for interrupt routing + ``` diff --git a/results/classifier/gemma3:12b/kvm/2571 b/results/classifier/gemma3:12b/kvm/2571 new file mode 100644 index 00000000..bc346ded --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2571 @@ -0,0 +1,67 @@ + +9.1.0 spurious guest journal errors -> linux guest on AMD host +Description of problem: +Since upgrading to 9.1.0 I'm seeing new error messages (see below) inside the guest when booting linux guests on an AMD host. Bisection points to: +``` +2ba8b7ee63589d4063c3b8dff3b70dbf9e224fc6 is the first bad commit +commit 2ba8b7ee63589d4063c3b8dff3b70dbf9e224fc6 +Author: John Allen <john.allen@amd.com> +Date: Mon Jun 3 19:36:21 2024 +0000 + + i386: Add support for SUCCOR feature + + Add cpuid bit definition for the SUCCOR feature. This cpuid bit is required to + be exposed to guests to allow them to handle machine check exceptions on AMD + hosts. +``` +Everything still seems to work so possibly not a bug. But the errors are still very disconcerting. Any thoughts? +Steps to reproduce: +1. e.g. Boot linux with `-cpu host` on an AMD host +Additional information: +``` +Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly. +Sep 14 12:02:53 kernel: unchecked MSR access error: RDMSR from 0x852 at rIP: 0xffffffffb548ffa7 (native_read_msr+0x7/0x40) +Sep 14 12:02:53 kernel: Call Trace: +Sep 14 12:02:53 kernel: <TASK> +Sep 14 12:02:53 kernel: ? ex_handler_msr.isra.0.cold+0x28/0x60 +Sep 14 12:02:53 kernel: ? fixup_exception+0x157/0x380 +Sep 14 12:02:53 kernel: ? gp_try_fixup_and_notify+0x1e/0xb0 +Sep 14 12:02:53 kernel: ? exc_general_protection+0x104/0x400 +Sep 14 12:02:53 kernel: ? asm_exc_general_protection+0x26/0x30 +Sep 14 12:02:53 kernel: ? native_read_msr+0x7/0x40 +Sep 14 12:02:53 kernel: native_apic_msr_read+0x20/0x30 +Sep 14 12:02:53 kernel: setup_APIC_eilvt+0x47/0x110 +Sep 14 12:02:53 kernel: mce_amd_feature_init+0x485/0x4e0 +Sep 14 12:02:53 kernel: mcheck_cpu_init+0x1bb/0x470 +Sep 14 12:02:53 kernel: identify_cpu+0x396/0x5e0 +Sep 14 12:02:53 kernel: arch_cpu_finalize_init+0x20/0x140 +Sep 14 12:02:53 kernel: start_kernel+0x931/0x9c0 +Sep 14 12:02:53 kernel: x86_64_start_reservations+0x24/0x30 +Sep 14 12:02:53 kernel: x86_64_start_kernel+0x95/0xa0 +Sep 14 12:02:53 kernel: common_startup_64+0x13e/0x141 +Sep 14 12:02:53 kernel: </TASK> +Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 0, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x +0 on this cpu +Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly. +Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 2, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x +0 on this cpu +Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly. 
+Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 4, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x +0 on this cpu +Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly. +Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 6, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x +0 on this cpu +Sep 14 12:02:53 kernel: #1 #3 #5 #7 +Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly. +Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 1, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x +0 on this cpu +Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly. +Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 3, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x +0 on this cpu +Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly. +Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 5, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x +0 on this cpu +Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly. +Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 7, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x +0 on this cpu +``` diff --git a/results/classifier/gemma3:12b/kvm/2573 b/results/classifier/gemma3:12b/kvm/2573 new file mode 100644 index 00000000..2f0c878b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2573 @@ -0,0 +1,10 @@ + +RISC-V: Executing floating point instruction in VS mode under KVM acceleration leads to crash +Description of problem: +Executing `fcvt.d.w fa5,a5` in VS mode leads to crash. +Steps to reproduce: +1. Download the Ubuntu 24.10 image https://cdimage.ubuntu.com/ubuntu-server/daily-preinstalled/current/oracular-preinstalled-server-riscv64.img.xz +2. On your amd64 system launch a VM using -accel tcg +3. Inside the VM launch a new VM using -accel kvm with the payload mentioned above +Additional information: +For more details see https://bugs.launchpad.net/ubuntu/+source/edk2/+bug/2077731 diff --git a/results/classifier/gemma3:12b/kvm/2574 b/results/classifier/gemma3:12b/kvm/2574 new file mode 100644 index 00000000..27f8caa3 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2574 @@ -0,0 +1,50 @@ + +VM hang: 'error: kvm run failed Bad address' with some AMD GPUs since kernel 6.7 +Description of problem: +The Debian ROCm Team runs GPU-utilizing test workloads in QEMU VMs into which we pass through AMD GPUs attached to PCIe x16 slots on the host. We do this to quickly test various Debian distributions/kernels/firmwares on a single physical host per GPU, and to isolate the host as much as possible from potentially hostile code. + +Starting with kernel 6.7 in the **guest**, with Navi 31 GPUs (eg: RX 7900 XT), as soon as anything triggers access to the GPU's memory, the VM hangs with `error: kvm run failed Bad address` and dumps its state. + +I gather that [this](https://gitlab.com/qemu-project/qemu/-/blob/ea9cdbcf3a0b8d5497cddf87990f1b39d8f3bb0a/accel/kvm/kvm-all.c#L3046-L3048) is where this message originates from. 
It would seem that the preceding [ioctl](https://gitlab.com/qemu-project/qemu/-/blob/ea9cdbcf3a0b8d5497cddf87990f1b39d8f3bb0a/accel/kvm/kvm-all.c#L3025) runs into `EFAULT` which eventually leads to a break out of the surrounding loop. + +Since we can reliably reproduce this starting with 6.7, our assumption is that this is caused by a change in the kernel and/or the `amdgpu` driver. However, as the error originates from kvm *on the host*, we could not rule out that this might also be a emulation issue. In particular, it was only 9.1 [c15e568](c15e568) where the handling of the `EFAULT` and `KVM_EXIT_MEMORY_FAULT` case was added, so perhaps we ran into something that is still incomplete. + +I'd appreciate any advice you could give us for further debugging. We will bisect 6.7 to see what could have triggered this on the guest side, but is there something that we can do on the host to further track this down, in particular which `-trace`s might be helpful? + +Other notes: +- The VM boots and runs fine, GPU initializes fine according to `dmesg`. The issue is only triggered on GPU utilization +- The problematic GPU in question worked fine with kernels 6.3 - 6.6 +- All other GPU architectures that we test this way (eg: Navi 2x) do not experience this issue, they work fine with all kernels we tested +- We have checked with more than one GPU, to rule out a physical defect +Steps to reproduce: +Reproducing the issue requires +1. A suitable image +2. Access to a Navi 3x card. Remote access can be arranged, if necessary. + +Building a suitable image can be rather complicated and requires a Debian host. If needed, it would be easier for me to just share a pre-built image. +Additional information: +This is dumped just before the VM hangs: +``` +ROCk module is loaded +error: kvm run failed Bad address +RAX=00000000000035c8 RBX=00000000000006ba RCX=0003000108b08073 RDX=00000000000006b9 +RSI=ffff9994b00035c8 RDI=ffff899403c80000 RBP=ffff899408b285e0 RSP=ffff9994816ab620 +R8 =0003000000000073 R9 =ffff9994b0000000 R10=ffff899403c8fb18 R11=ffff899408b065b8 +R12=ffff899403c80000 R13=0003000000000073 R14=ffff9994b0000000 R15=00000000000006ba +RIP=ffffffffc11d8f93 RFL=00000282 [--S----] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 00000000 00000000 +CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0000 0000000000000000 00000000 00000000 +FS =0000 00007faa76aea780 00000000 00000000 +GS =0000 ffff899f0dd80000 00000000 00000000 +LDT=0000 0000000000000000 00000000 00000000 +TR =0040 fffffe41c66fc000 00004087 00008b00 DPL=0 TSS64-busy +GDT= fffffe41c66fa000 0000007f +IDT= fffffe0000000000 00000fff +CR0=80050033 CR2=000055bd5a5d8598 CR3=000000010342c000 CR4=00750ef0 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +Code=ff ff 00 00 48 21 c1 8d 04 d5 00 00 00 00 4c 09 c1 48 01 c6 <48> 89 0e 31 c0 e9 6e b1 92 d2 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 +``` diff --git a/results/classifier/gemma3:12b/kvm/2578 b/results/classifier/gemma3:12b/kvm/2578 new file mode 100644 index 00000000..5b08d25f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2578 @@ -0,0 +1,15 @@ + +x86: exception during hardware interrupt pushes wrong error code +Description of problem: +Exceptions during IDT traversal push the wrong error code when triggered by a hardware interrupt. +The EXT bit in TCG mode is never set. 
However, it works fine in KVM mode as hardware is generating the number. +Steps to reproduce: +1. load a short IDT e.g. with 64 entries +2. trigger a self IPI through the LAPIC with a vector 100 +3. the pushed error code is 802 instead of 803. +Additional information: +It can be fixed in the lines `raise_exception_err(env, EXCP0D_GPF, intno * 8 + 2);` in `seg_helper.c` +which must include the `is_hw` field when calculating the error number. Something like `intno * 8 + 2 + (is_hw != 0)` +works here. + +Nevertheless, all the other exception cases in the `do_interrupt_*` functions have to set the same bit as well. diff --git a/results/classifier/gemma3:12b/kvm/2582 b/results/classifier/gemma3:12b/kvm/2582 new file mode 100644 index 00000000..c09774ad --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2582 @@ -0,0 +1,24 @@ + +CR4.VMX leaks from L1 into L2 on Intel VMX +Description of problem: +In a nested virtualization setting, `savevm` can cause CR4 bits from leaking from L1 into L2. This causes general-protection faults in certain guests. + +The L2 guest executes this code: + +``` +mov rax, cr4 ; Get CR4 +mov rcx, rax ; Remember the old value +btc rax, 7 ; Toggle CR4.PGE +mov cr4, rax ; #GP! <- Shouldn't happen! +mov cr4, rcx ; Restore old value +``` + +If the guest code is interrupted at the right time (e.g. via `savevm`), Qemu marks CR4 dirty while the guest executes L2 code. Due to really complicated KVM semantics, this will result in L1 CR4 bits (VMXE) leaking into the L2 guest and the L2 will die with a GP: + +Instead of the expected CR4 value, the L2 guest reads a value with VMXE set. When it tries to write this back into CR4, this triggers the general protection fault. +Steps to reproduce: +This is only an issue on **Intel** systems. + +# +Additional information: +See also this discussion where we discussed a (flawed) approach to fixing this in KVM: https://lore.kernel.org/lkml/Zh6WlOB8CS-By3DQ@google.com/t/ diff --git a/results/classifier/gemma3:12b/kvm/2583 b/results/classifier/gemma3:12b/kvm/2583 new file mode 100644 index 00000000..af52c2f1 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2583 @@ -0,0 +1,26 @@ + +libvfio-user.so.0 missing in /lib/x86_64-linux-gnu/ in fresh install of 9.1.50 +Description of problem: +Library libvfio-user.so.0 is missing from /lib/x86_64-linux-gnu. qemu-system-x86_64 does not start due to missing library. + +```` +root@jpbdeb:~# ls -al /usr/local/bin/qemu-system-x86_64 +-rwxr-xr-x 1 root root 81734576 Sep 21 21:48 /usr/local/bin/qemu-system-x86_64 +root@jpbdeb:~# ldd /usr/local/bin/qemu-system-x86_64 + linux-vdso.so.1 (0x00007fff511de000) + libvfio-user.so.0 => not found + libslirp.so.0 => /lib/x86_64-linux-gnu/libslirp.so.0 (0x00007f73eba33000) + libxenctrl.so.4.17 => /lib/x86_64-linux-gnu/libxenctrl.so.4.17 (0x00007f73eba09000) + libxenstore.so.4 => /lib/x86_64-linux-gnu/libxenstore.so.4 (0x00007f73eb9fe000) + libxenforeignmemory.so.1 => /lib/x86_64-linux-gnu/libxenforeignmemory.so.1 (0x00007f73eb9f9000) + ... +```` +Steps to reproduce: +1. Fresh OS install, including all packages necessary to build from source. +2. Download source from gitlab and proceed with documented build instructions. +3. make install +4. Attempt to run /usr/local/bin/qemu-system-x86_64 fails, due to missing library. 
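+
+Before adding a manual symlink it can be worth confirming that this is purely a loader search-path problem: `make install` places the library under /usr/local/lib/<triplet>, and on Debian-style systems re-running `ldconfig` as root is often enough for the loader to pick it up again. A small, hypothetical check (not part of QEMU) that asks the dynamic loader to resolve the soname that `ldd` reports as missing:
+
+```c
+/* Hypothetical check: can the dynamic loader resolve libvfio-user.so.0?
+ * Build with: cc check.c -o check -ldl (-ldl is only needed on older glibc). */
+#include <stdio.h>
+#include <dlfcn.h>
+
+int main(void)
+{
+    const char *soname = "libvfio-user.so.0";
+    void *h = dlopen(soname, RTLD_NOW);
+    if (!h) {
+        fprintf(stderr, "loader cannot resolve %s: %s\n", soname, dlerror());
+        return 1;
+    }
+    printf("%s resolves fine\n", soname);
+    dlclose(h);
+    return 0;
+}
+```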
+Additional information: +Adding the link to the library that exists in /usr/lib/x86_64-linux-gnu resolves the issue: + +(as root) ln -s /usr/local/lib/x86_64-linux-gnu/libvfio-user.so.0 /lib/x86_64-linux-gnu/libvfio-user.so.0 diff --git a/results/classifier/gemma3:12b/kvm/2612 b/results/classifier/gemma3:12b/kvm/2612 new file mode 100644 index 00000000..f7066a85 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2612 @@ -0,0 +1,83 @@ + +In-guest ROCm tests fail with multiple AMD GPUs passed through (bisected to SeaBIOS update) +Description of problem: +We got a report of a VM setup with 8 passed-through AMD GPUs that works well with QEMU 8.1.5, but has issues with QEMU 8.2.2 (see below for details). A QEMU bisect points to commit [14f5a7ba](https://gitlab.com/qemu-project/qemu/-/commit/14f5a7bae4cb5ca45a03e16b5bb0c5d766fd51b7) which updated the seabios snapshot. +Even though Proxmox VE comes with its own packaged QEMU versions, for bisecting we used the [upstream repository](https://gitlab.com/qemu-project/qemu). + +Bisecting seabios between rel-1.16.2 and rel-1.16.3 brought the following 2 commits to attention: + +[bcfed7e2](https://gitlab.com/qemu-project/seabios/-/commit/bcfed7e270776ab5595cafc6f1794bea0cae1c6c) move 64bit pci window to end of address space + +[96a8d130](https://gitlab.com/qemu-project/seabios/-/commit/96a8d130a8c2e908e357ce62cd713f2cc0b0a2eb) be less conservative with the 64bit pci io window + + + +Since bcfed7e2 resulted in KVM errors when trying to start the guest, we could not narrow it down to a single commit. With 96a8d130 the issues in the guest began. + +The issues in the guest were reproduced by running some ROCm tests in the guest using all 8 GPUs. We had no insight into the tests in question, they, as well as the test setup, were provided by one of our customers. The failing test was a DeepSpeed test using all 8 GPUs. + +We're not sure if it's a driver issue in the guest (AMDGPU and ROCm 6.1.x and 6.2.1 tested), a hardware issue or a seabios issue. Since we narrowed it down to these commits (QEMU, seabios) we wanted to open an issue here first. 
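+
+Since the suspect SeaBIOS change moves the 64-bit PCI window, one low-effort data point would be to compare where the passed-through GPUs' BARs actually land in a working versus a non-working guest. A minimal, hypothetical sketch (not part of QEMU; the default BDF below is only a placeholder, and each line of the sysfs `resource` file holds start, end and flags of one region):
+
+```c
+/* Hypothetical guest-side diagnostic: print the BAR placement of one PCI device. */
+#include <stdio.h>
+#include <inttypes.h>
+
+int main(int argc, char **argv)
+{
+    const char *bdf = argc > 1 ? argv[1] : "0000:01:00.0";  /* placeholder BDF */
+    char path[128];
+    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/resource", bdf);
+
+    FILE *f = fopen(path, "r");
+    if (!f) { perror(path); return 1; }
+
+    uint64_t start, end, flags;
+    for (int i = 0; fscanf(f, "%" SCNx64 " %" SCNx64 " %" SCNx64, &start, &end, &flags) == 3; i++) {
+        if (end) {
+            printf("region %d: 0x%016" PRIx64 "-0x%016" PRIx64 " flags 0x%" PRIx64 "\n", i, start, end, flags);
+        }
+    }
+    fclose(f);
+    return 0;
+}
+```
+
+BARs that sit near the very top of the physical address space only in the non-working case would point at the relocated 64-bit window rather than at the ROCm stack itself.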
+ +The in-guest kernel warning received seems to indicate an issue with the driver:: +``` +kernel: ------------[ cut here ]------------ +kernel: WARNING: CPU: 2 PID: 149 at /tmp/amd.eT2ZshuE/ttm/ttm_bo.c:687 amdttm_bo_unpin+0x72/0x90 [amdttm] +kernel: Modules linked in: veth tls xt_conntrack nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack_netlink nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xfrm_user xfrm_algo xt_addrtype nft_compat n> +kernel: libahci video wmi i2c_algo_bit hid_generic usbhid hid aesni_intel crypto_simd cryptd +kernel: CPU: 2 PID: 149 Comm: kworker/2:1 Tainted: G OE 6.8.0-45-generic #45-Ubuntu +kernel: Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014 +kernel: Workqueue: kfd_process_wq kfd_process_wq_release [amdgpu] +kernel: RIP: 0010:amdttm_bo_unpin+0x72/0x90 [amdttm] +kernel: Code: 89 de e8 01 56 00 00 48 8b bb 60 01 00 00 48 81 c7 40 08 00 00 e8 6e 72 89 d2 48 8b 5d f8 c9 31 c0 31 f6 31 ff e9 79 54 b5 d2 <0f> 0b 48 8b 5d f8 c9 31 c0 31 f6 31 ff e9 67 54 b5> +kernel: RSP: 0018:ffffa03380687ca0 EFLAGS: 00010246 +kernel: RAX: 0000000000000000 RBX: ffff8ed6191b6848 RCX: 0000000000000000 +kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff8ed6191b6848 +kernel: RBP: ffffa03380687ca8 R08: 0000000000000000 R09: 0000000000000000 +kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff8ed62268ef38 +kernel: R13: ffff8ed6014fc800 R14: ffff8ed6015f0400 R15: ffff8ed60109b000 +kernel: FS: 0000000000000000(0000) GS:ffff8ef4ff700000(0000) knlGS:0000000000000000 +kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +kernel: CR2: 00007f923c000020 CR3: 00000106f083c006 CR4: 0000000000770ef0 +kernel: PKRU: 55555554 +kernel: Call Trace: +kernel: <TASK> +kernel: ? show_regs+0x6d/0x80 +kernel: ? __warn+0x89/0x160 +kernel: ? amdttm_bo_unpin+0x72/0x90 [amdttm] +kernel: ? report_bug+0x17e/0x1b0 +kernel: ? handle_bug+0x51/0xa0 +kernel: ? exc_invalid_op+0x18/0x80 +kernel: ? asm_exc_invalid_op+0x1b/0x20 +kernel: ? amdttm_bo_unpin+0x72/0x90 [amdttm] +kernel: amdgpu_bo_unpin+0x1f/0xb0 [amdgpu] +kernel: amdgpu_amdkfd_gpuvm_unpin_bo+0x35/0xd0 [amdgpu] +kernel: amdgpu_amdkfd_gpuvm_free_memory_of_gpu+0x3ea/0x460 [amdgpu] +kernel: kfd_process_device_free_bos+0xb7/0x150 [amdgpu] +kernel: kfd_process_wq_release+0x2db/0x410 [amdgpu] +kernel: process_one_work+0x16f/0x350 +kernel: worker_thread+0x306/0x440 +kernel: ? srso_alias_return_thunk+0x5/0xfbef5 +kernel: ? _raw_spin_unlock_irqrestore+0x11/0x60 +kernel: ? __pfx_worker_thread+0x10/0x10 +kernel: kthread+0xf2/0x120 +kernel: ? __pfx_kthread+0x10/0x10 +kernel: ret_from_fork+0x47/0x70 +kernel: ? __pfx_kthread+0x10/0x10 +kernel: ret_from_fork_asm+0x1b/0x30 +kernel: </TASK> +kernel: ---[ end trace 0000000000000000 ]--- +``` + +Does anyone have an idea how to troubleshoot this further? If any more information or logs are required, we can try to provide them. +Steps to reproduce: +Sadly we can't provide steps since we only had the customer's setup that included a proprietary docker image. +Additional information: +We used the options `-chardev pipe,path=qemudebugpipe,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios` specified in [0] to gather some debug logs from seabios: + +The non-working one is from commit `96a8d130` while the working one is from an earlier version. 
+ +[seabios.log](/uploads/4d7f43213c631fb5cf6aea519bfd79ad/seabios.log) +[seabios_working.log](/uploads/978e6c56ff8784bb5639963c9fb0c93f/seabios_working.log) + + +[0] https://gitlab.com/qemu-project/seabios/-/blob/master/docs/Debugging.md?ref_type=heads diff --git a/results/classifier/gemma3:12b/kvm/2622 b/results/classifier/gemma3:12b/kvm/2622 new file mode 100644 index 00000000..135d12d2 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2622 @@ -0,0 +1,268 @@ + +qemu abort in qemu_aio_coroutine_enter +Description of problem: +Start the virtual machine using NFS disk, run sysbench to test myql inside the virtual machine, + and execute command "virsh domblkinfo domid vda" in host. After running for a period of time, qemu crashes. + This issue is not a necessary problem and requires long-term operation for more than ten hours. +It maybe related to NFS disk and not appear with other types of storage. +the qemu log is + +qemu_aio_coroutine_enter Co-routine was already scheduled in aio_co_schedule + +``` +Core was generated by `/usr/libexec/qemu-kvm -name guest=default_vm-csv66,debug-threads=on -S -object'. +Program terminated with signal SIGABRT, Aborted. +#0 0x00007f9702f5a54c in __pthread_kill_implementation () from /lib64/libc.so.6 +[Current thread is 1 (Thread 0x7f9701f7bf40 (LWP 98))] +Missing separate debuginfos, use: dnf debuginfo-install capstone-4.0.2-10.cl9.x86_64 cyrus-sasl-lib-2.1.27-20.cl9.x86_64 daxctl-libs-71.1-7.cl9.x86_64 glib2-2.68.4-5.cl9.x86_64 glibc-2.34-40.cl9_1.2.x86_64 gnutls-3.7.6-18.cl9_1.x86_64 kmod-libs-28-7.cl9.x86_64 krb5-libs-1.19.1-24.cl9_1.x86_64 libaio-0.3.111-13.cl9.x86_64 libblkid-2.37.4-9.cl9.x86_64 libcom_err-1.46.5-3.cl9.x86_64 libfdt-1.6.0-7.cl9.x86_64 libffi-3--Type <RET> for more, q to quit, c to continue without paging-- +.4.2-7.cl9.x86_64 libgcc-11.3.1-2.1.cl9.x86_64 libibverbs-42.0-1.cl9.x86_64 libidn2-2.3.0-7.cl9.x86_64 libmount-2.37.4-9.cl9.x86_64 libnfs-5.0.3-2.cl9.x86_64 libnl3-3.7.0-1.cl9.x86_64 libpmem-1.12.1-1.cl9.x86_64 libpng-1.6.37-12.cl9.x86_64 librdmacm-42.0-1.cl9.x86_64 libseccomp-2.5.2-2.cl9.x86_64 libselinux-3.4-3.cl9.x86_64 libslirp-4.4.0-7.cl9.x86_64 libstdc++-11.3.1-2.1.cl9.x86_64 libtasn1-4.16.0-8.cl9_1.x86_64 libunistring-0.9.10-16.cl9.x86_64 liburing-0.7-7.cl9.x86_64 libuuid-2.37.4-9.cl9.x86_64 libxcrypt-4.4.18-3.cl9.x86_64 libzstd-1.5.1-2.cl9.x86_64 lzo-2.10-7.cl9.x86_64 nettle-3.8-3.cl9_0.x86_64 numactl-libs-2.0.14-8.cl9.x86_64 openssl-libs-3.0.1-49.cl9_1.x86_64 p11-kit-0.24.1-2.cl9.x86_64 pcre-8.44-3.cl9.3.x86_64 pcre2-10.40-2.cl9.x86_64 pixman-0.40.0-5.cl9.x86_64 snappy-1.1.8-8.cl9.x86_64 systemd-libs-250-12.cl9_1.3.x86_64 zlib-1.2.11-34.cl9.x86_64 +(gdb) bt +#0 0x00007f9702f5a54c in __pthread_kill_implementation () from /lib64/libc.so.6 +#1 0x00007f9702f0dce6 in raise () from /lib64/libc.so.6 +#2 0x00007f9702ee17f3 in abort () from /lib64/libc.so.6 +#3 0x00005631681ceed2 in qemu_aio_coroutine_enter (ctx=0x563169dd9550, co=<optimized out>) at ../util/qemu-coroutine.c:277 +#4 0x00005631680a99e9 in bdrv_poll_co (s=0x7ffe072eea80) + at /usr/src/debug/qemu-kvm-8.2.0-1.cl9.gcc.git908b11716.x86_64/block/block-gen.h:42 +#5 bdrv_get_info (bs=bs@entry=0x563169fc1680, bdi=bdi@entry=0x7ffe072eeaf0) at block/block-gen.c:600 +#6 0x00005631680efc3d in bdrv_do_query_node_info (bs=bs@entry=0x563169fc1680, info=info@entry=0x56316a0f6650, + errp=errp@entry=0x7ffe072eed48) at ../block/qapi.c:255 +#7 0x00005631680efe1a in bdrv_query_image_info (bs=0x563169fc1680, p_info=0x56316a53c0d8, flat=<optimized out>, + 
skip_implicit_filters=<optimized out>, errp=0x7ffe072eed48) at ../block/qapi.c:337 +#8 0x00005631680f026f in bdrv_block_device_info (blk=blk@entry=0x0, bs=bs@entry=0x563169fc1680, flat=flat@entry=true, + errp=errp@entry=0x7ffe072eed48) at ../block/qapi.c:155 +#9 0x00005631680b31e3 in bdrv_named_nodes_list (flat=<optimized out>, errp=errp@entry=0x7ffe072eed48) at ../block.c:6207 +#10 0x00005631680a4162 in qmp_query_named_block_nodes (has_flat=<optimized out>, flat=<optimized out>, errp=errp@entry=0x7ffe072eed48) + at ../blockdev.c:2785 +#11 0x00005631681593eb in qmp_marshal_query_named_block_nodes (args=0x7f96e80093d0, ret=0x7f9701777eb8, errp=0x7f9701777eb0) + at qapi/qapi-commands-block-core.c:553 +#12 0x00005631681ade8d in do_qmp_dispatch_bh (opaque=0x7f9701777ec0) at ../qapi/qmp-dispatch.c:128 +#13 0x00005631681cd155 in aio_bh_poll (ctx=ctx@entry=0x563169da6e70) at ../util/async.c:216 +#14 0x00005631681b7a42 in aio_dispatch (ctx=0x563169da6e70) at ../util/aio-posix.c:423 +#15 0x00005631681ccee2 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) + at ../util/async.c:358 +#16 0x00007f9703356d6f in g_main_context_dispatch () from /lib64/libglib-2.0.so.0 +#17 0x00005631681ce710 in glib_pollfds_poll () at ../util/main-loop.c:290 +#18 os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313 +#19 main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:592 +#20 0x0000563167edc9b7 in qemu_main_loop () at ../system/runstate.c:782 +#21 0x0000563167daa3ab in qemu_default_main () at ../system/main.c:37 +#22 0x00007f9702ef8eb0 in __libc_start_call_main () from /lib64/libc.so.6 +#23 0x00007f9702ef8f60 in __libc_start_main_impl () from /lib64/libc.so.6 +#24 0x0000563167daa2d5 in _start () +(gdb) list ../util/qemu-coroutine.c:277 +272 * been deleted */ +273 if (scheduled) { +**274 fprintf(stderr, +275 "%s: Co-routine was already scheduled in '%s'\n", +276 __func__, scheduled); +277 abort();** +278 } +279 +280 if (to->caller) { +281 fprintf(stderr, "Co-routine re-entered recursively\n"); + +(gdb) p *(AioContext *)0x563169dd9550 +$3 = {source = {callback_data = 0x0, callback_funcs = 0x0, source_funcs = 0x563168f029c0 <aio_source_funcs>, ref_count = 2, + context = 0x563169dd16b0, priority = 0, flags = 33, source_id = 1, poll_fds = 0x563169d34e70, prev = 0x0, next = 0x563169da6e70, + name = 0x563169ddd400 "aio-context", priv = 0x563169da7230}, lock = {m = {lock = {__data = {__lock = 0, __count = 0, __owner = 0, + __nusers = 0, __kind = 1, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, + __size = '\000' <repeats 16 times>, "\001", '\000' <repeats 22 times>, __align = 0}, initialized = true}}, + bdrv_graph = 0x563169dc7350, aio_handlers = {lh_first = 0x563169ddd020}, deleted_aio_handlers = {lh_first = 0x0}, notify_me = 0, + list_lock = {count = 0}, bh_list = {slh_first = 0x563169dad160}, bh_slice_list = {sqh_first = 0x0, sqh_last = 0x563169dd9608}, + notified = true, notifier = {rfd = 8, wfd = 8, initialized = true}, scheduled_coroutines = {slh_first = 0x563169fbff70}, + co_schedule_bh = 0x563169dad160, thread_pool_min = 0, thread_pool_max = 64, thread_pool = 0x0, linux_aio = 0x0, linux_io_uring = 0x0, + fdmon_io_uring = {sq = {khead = 0x7f9701675000, ktail = 0x7f9701675040, kring_mask = 0x7f9701675100, kring_entries = 0x7f9701675108, + kflags = 0x7f9701675114, kdropped = 0x7f9701675110, array = 0x7f9701676140, sqes = 0x7f9701673000, sqe_head = 0, sqe_tail = 0, + ring_sz = 4928, ring_ptr = 0x7f9701675000}, cq = {khead = 
0x7f9701675080, ktail = 0x7f97016750c0, kring_mask = 0x7f9701675104, + kring_entries = 0x7f970167510c, kflags = 0x7f9701675118, koverflow = 0x7f970167511c, cqes = 0x7f9701675140, ring_sz = 4928, + ring_ptr = 0x7f9701675000}, flags = 0, ring_fd = 7}, submit_list = {slh_first = 0x0}, tlg = {tl = {0x563169ddd390, 0x563169dd1a50, + 0x563169dd1ac0, 0x563169dd1b30}}, poll_disable_cnt = 0, poll_ns = 0, poll_max_ns = 0, poll_grow = 0, poll_shrink = 0, + aio_max_batch = 0, poll_aio_handlers = {lh_first = 0x563169ddd020}, poll_started = false, epollfd = -1, + fdmon_ops = 0x563168dfabe0 <fdmon_poll_ops>} +(gdb) list bdrv_poll_co +file: "/usr/src/debug/qemu-kvm-8.2.0-1.cl9.gcc.git908b11716.x86_64/block/block-gen.h", line number: 38, symbol: "bdrv_poll_co" +33 AioContext *ctx; +34 bool in_progress; +35 Coroutine *co; /* Keep pointer here for debugging */ +36 } BdrvPollCo; +37 +38 static inline void bdrv_poll_co(BdrvPollCo *s) +39 { +40 assert(!qemu_in_coroutine()); +41 +42 aio_co_enter(s->ctx, s->co); +file: "/usr/src/debug/qemu-kvm-8.2.0-1.cl9.gcc.git908b11716.x86_64/block/block-gen.h", line number: 40, symbol: "bdrv_poll_co" +35 Coroutine *co; /* Keep pointer here for debugging */ +36 } BdrvPollCo; +37 +38 static inline void bdrv_poll_co(BdrvPollCo *s) +39 { +40 assert(!qemu_in_coroutine()); +41 +42 aio_co_enter(s->ctx, s->co); +43 AIO_WAIT_WHILE(s->ctx, s->in_progress); +44 } +(gdb) p *(BdrvPollCo*)0x7ffe072eea80 +$4 = {ctx = 0x563169dd9550, in_progress = true, co = 0x563169fbff70} + +(gdb) p *(Coroutine*)0x563169fbff70 +$6 = {entry = 0x5631680a7bc0 <bdrv_co_get_info_entry>, entry_arg = 0x7ffe072eea80, caller = 0x0, caller_sp = 0x7ffe072eea28, pool_next = { + sle_next = 0x0}, locks_held = 0, ctx = 0x563169dd9550, scheduled = 0x5631683596c0 <__func__.3> "aio_co_schedule", co_queue_next = { + sqe_next = 0x0}, co_queue_wakeup = {sqh_first = 0x0, sqh_last = 0x563169fbffb8}, co_scheduled_next = {sle_next = 0x0}} +(gdb) +``` +Steps to reproduce: +1. start vm + +the virtual machine xml is +[libnfs-vm-xml](/uploads/f664fe2002a032064f3d574f3cc0b13f/libnfs-vm-xml) + +2. run sysbench test for mysql +the command line: + + +3. run command line: virsh domlbkinfo domid vda +Additional information: +``` +the all theads stack: + +Thread 12 (Thread 0x7f59b1a71640 (LWP 102)): +#0 0x00007f59ba2ed71f in poll () from /lib64/libc.so.6 +#1 0x00007f59ba69e0bc in split_replacement.constprop () from /lib64/libglib-2.0.so.0 +#2 0xddd73fd20744af00 in ?? () +#3 0x00080f8a5afdd040 in ?? () +#4 0x0000563110266fd8 in ?? () +#5 0x0000563110266fd0 in ?? () +#6 0x0000563110239680 in ?? () +#7 0x000056311023ad20 in ?? () +#8 0x00007f59ba24a530 in ?? () from /lib64/libc.so.6 +#9 0x0000000000000000 in ?? 
() + +Thread 11 (Thread 0x7f579cbbf640 (LWP 107)): +#0 0x00007f59ba24739a in __futex_abstimed_wait_common () from /lib64/libc.so.6 +#1 0x00007f59ba249ba0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libc.so.6 +#2 0x000056310e2edced in qemu_cond_wait_impl (cond=<optimized out>, mutex=0x56311160aea8, file=0x56310e392334 "../ui/vnc-jobs.c", line=248) at ../util/qemu-thread-posix.c:225 +#3 0x000056310df03c47 in vnc_worker_thread_loop (queue=queue@entry=0x56311160ae70) at ../ui/vnc-jobs.c:248 +#4 0x000056310df045c0 in vnc_worker_thread (arg=0x56311160ae70) at ../ui/vnc-jobs.c:362 +#5 0x000056310e2ed7f3 in qemu_thread_start (args=0x563110dbab70) at ../util/qemu-thread-posix.c:541 +#6 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#7 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 10 (Thread 0x7f59b896b640 (LWP 98)): +#0 0x00007f59ba2ed81e in ppoll () from /lib64/libc.so.6 +#1 0x000056310e303d05 in ppoll (__ss=0x0, __timeout=0x7f59b896a540, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:64 +#2 qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at ../util/qemu-timer.c:351 +#3 0x000056310e2eb2d9 in fdmon_poll_wait (ctx=0x56311043f190, ready_list=0x7f59b896a5d0, timeout=93350478717) at ../util/fdmon-poll.c:79 +#4 0x000056310e2eaadd in aio_poll (ctx=0x56311043f190, blocking=blocking@entry=true) at ../util/aio-posix.c:670 +#5 0x000056310e1d8fba in iothread_run (opaque=0x563110239ce0) at ../iothread.c:63 +#6 0x000056310e2ed7f3 in qemu_thread_start (args=0x56311043f6f0) at ../util/qemu-thread-posix.c:541 +#7 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#8 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 9 (Thread 0x7f579fffe640 (LWP 105)): +#0 0x00007f59ba1e9c6b in ioctl () from /lib64/libc.so.6 +--Type <RET> for more, q to quit, c to continue without paging-- +#1 0x000056310e19f0cd in kvm_vcpu_ioctl (cpu=cpu@entry=0x563110535c10, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3078 +#2 0x000056310e19f47a in kvm_cpu_exec (cpu=cpu@entry=0x563110535c10) at ../accel/kvm/kvm-all.c:2890 +#3 0x000056310e1a09cd in kvm_vcpu_thread_fn (arg=0x563110535c10) at ../accel/kvm/kvm-accel-ops.c:51 +#4 0x000056310e2ed7f3 in qemu_thread_start (args=0x56311053ea10) at ../util/qemu-thread-posix.c:541 +#5 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#6 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 8 (Thread 0x7f59b1270640 (LWP 103)): +#0 0x00007f59ba1e9c6b in ioctl () from /lib64/libc.so.6 +#1 0x000056310e19f0cd in kvm_vcpu_ioctl (cpu=cpu@entry=0x5631104fd5e0, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3078 +#2 0x000056310e19f47a in kvm_cpu_exec (cpu=cpu@entry=0x5631104fd5e0) at ../accel/kvm/kvm-all.c:2890 +#3 0x000056310e1a09cd in kvm_vcpu_thread_fn (arg=0x5631104fd5e0) at ../accel/kvm/kvm-accel-ops.c:51 +#4 0x000056310e2ed7f3 in qemu_thread_start (args=0x5631104ac540) at ../util/qemu-thread-posix.c:541 +#5 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#6 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 7 (Thread 0x7f59b926d640 (LWP 97)): +#0 0x00007f59ba1e9e5d in syscall () from /lib64/libc.so.6 +#1 0x000056310e2ee262 in qemu_futex_wait (val=<optimized out>, f=<optimized out>) at /usr/src/debug/qemu-kvm-8.2.0-1.cl9.gcc.gita8dcbf606.x86_64/include/qemu/futex.h:29 +#2 qemu_event_wait (ev=ev@entry=0x56310f060688 <rcu_call_ready_event>) at ../util/qemu-thread-posix.c:464 +#3 0x000056310e2f8a52 in call_rcu_thread (opaque=<optimized out>) 
at ../util/rcu.c:278 +#4 0x000056310e2ed7f3 in qemu_thread_start (args=0x5631101d9df0) at ../util/qemu-thread-posix.c:541 +#5 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#6 0x00007f59ba1ea450 in clone3 () from /lib64/libc.so.6 + +Thread 6 (Thread 0x7f59b37fe640 (LWP 100)): +#0 0x00007f59ba2ed81e in ppoll () from /lib64/libc.so.6 +#1 0x000056310e303d5d in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:64 +#2 0x000056310e2eb2d9 in fdmon_poll_wait (ctx=0x56311043fea0, ready_list=0x7f59b37fd5d0, timeout=-1) at ../util/fdmon-poll.c:79 +#3 0x000056310e2eaadd in aio_poll (ctx=0x56311043fea0, blocking=blocking@entry=true) at ../util/aio-posix.c:670 +#4 0x000056310e1d8fba in iothread_run (opaque=0x56311043f8f0) at ../iothread.c:63 +#5 0x000056310e2ed7f3 in qemu_thread_start (args=0x5631104404c0) at ../util/qemu-thread-posix.c:541 +#6 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#7 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 5 (Thread 0x7f59b2ffd640 (LWP 101)): +#0 0x00007f59ba2ed81e in ppoll () from /lib64/libc.so.6 +--Type <RET> for more, q to quit, c to continue without paging-- +#1 0x000056310e303d5d in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:64 +#2 0x000056310e2eb2d9 in fdmon_poll_wait (ctx=0x5631104407f0, ready_list=0x7f59b2ffc5d0, timeout=-1) at ../util/fdmon-poll.c:79 +#3 0x000056310e2eaadd in aio_poll (ctx=0x5631104407f0, blocking=blocking@entry=true) at ../util/aio-posix.c:670 +#4 0x000056310e1d8fba in iothread_run (opaque=0x56311043fc20) at ../iothread.c:63 +#5 0x000056310e2ed7f3 in qemu_thread_start (args=0x5631104437a0) at ../util/qemu-thread-posix.c:541 +#6 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#7 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 4 (Thread 0x7f59b0a6f640 (LWP 104)): +#0 0x00007f59ba1e9c6b in ioctl () from /lib64/libc.so.6 +#1 0x000056310e19f0cd in kvm_vcpu_ioctl (cpu=cpu@entry=0x56311052bd80, type=type@entry=44672) at ../accel/kvm/kvm-all.c:3078 +#2 0x000056310e19f47a in kvm_cpu_exec (cpu=cpu@entry=0x56311052bd80) at ../accel/kvm/kvm-all.c:2890 +#3 0x000056310e1a09cd in kvm_vcpu_thread_fn (arg=0x56311052bd80) at ../accel/kvm/kvm-accel-ops.c:51 +#4 0x000056310e2ed7f3 in qemu_thread_start (args=0x563110535330) at ../util/qemu-thread-posix.c:541 +#5 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#6 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 3 (Thread 0x7f59b3fff640 (LWP 99)): +#0 0x00007f59ba2ed81e in ppoll () from /lib64/libc.so.6 +#1 0x000056310e303d5d in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:64 +#2 0x000056310e2eb2d9 in fdmon_poll_wait (ctx=0x563110441e00, ready_list=0x7f59b3ffe5d0, timeout=-1) at ../util/fdmon-poll.c:79 +#3 0x000056310e2eaadd in aio_poll (ctx=0x563110441e00, blocking=blocking@entry=true) at ../util/aio-posix.c:670 +#4 0x000056310e1d8fba in iothread_run (opaque=0x56311043fa40) at ../iothread.c:63 +#5 0x000056310e2ed7f3 in qemu_thread_start (args=0x5631104423b0) at ../util/qemu-thread-posix.c:541 +#6 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#7 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 2 (Thread 0x7f579f7fd640 (LWP 106)): +#0 0x00007f59ba1e9c6b in ioctl () from /lib64/libc.so.6 +#1 0x000056310e19f0cd in kvm_vcpu_ioctl (cpu=cpu@entry=0x56311053f2f0, type=type@entry=44672) at 
../accel/kvm/kvm-all.c:3078 +#2 0x000056310e19f47a in kvm_cpu_exec (cpu=cpu@entry=0x56311053f2f0) at ../accel/kvm/kvm-all.c:2890 +#3 0x000056310e1a09cd in kvm_vcpu_thread_fn (arg=0x56311053f2f0) at ../accel/kvm/kvm-accel-ops.c:51 +#4 0x000056310e2ed7f3 in qemu_thread_start (args=0x563110548280) at ../util/qemu-thread-posix.c:541 +#5 0x00007f59ba24a802 in start_thread () from /lib64/libc.so.6 +#6 0x00007f59ba1ea314 in clone () from /lib64/libc.so.6 + +Thread 1 (Thread 0x7f59b9270f40 (LWP 95)): +#0 0x00007f59ba24c54c in __pthread_kill_implementation () from /lib64/libc.so.6 +--Type <RET> for more, q to quit, c to continue without paging-- +#1 0x00007f59ba1ffce6 in raise () from /lib64/libc.so.6 +#2 0x00007f59ba1d37f3 in abort () from /lib64/libc.so.6 +#3 0x000056310e301e02 in qemu_aio_coroutine_enter (ctx=0x563110266550, co=<optimized out>) at ../util/qemu-coroutine.c:277 +#4 0x000056310e1dc919 in bdrv_poll_co (s=0x7ffd1c9ec8d0) at /usr/src/debug/qemu-kvm-8.2.0-1.cl9.gcc.gita8dcbf606.x86_64/block/block-gen.h:42 +#5 bdrv_get_info (bs=bs@entry=0x563110481e10, bdi=bdi@entry=0x7ffd1c9ec940) at block/block-gen.c:600 +#6 0x000056310e222b6d in bdrv_do_query_node_info (bs=bs@entry=0x563110481e10, info=info@entry=0x563110480130, errp=errp@entry=0x7ffd1c9ecb98) at ../block/qapi.c:255 +#7 0x000056310e222d4a in bdrv_query_image_info (bs=0x563110481e10, p_info=0x56311121cc18, flat=<optimized out>, skip_implicit_filters=<optimized out>, errp=0x7ffd1c9ecb98) at ../block/qapi.c:337 +#8 0x000056310e22319f in bdrv_block_device_info (blk=blk@entry=0x0, bs=bs@entry=0x563110481e10, flat=flat@entry=true, errp=errp@entry=0x7ffd1c9ecb98) at ../block/qapi.c:155 +#9 0x000056310e1e6113 in bdrv_named_nodes_list (flat=<optimized out>, errp=errp@entry=0x7ffd1c9ecb98) at ../block.c:6207 +#10 0x000056310e1d7092 in qmp_query_named_block_nodes (has_flat=<optimized out>, flat=<optimized out>, errp=errp@entry=0x7ffd1c9ecb98) at ../blockdev.c:2785 +#11 0x000056310e28c31b in qmp_marshal_query_named_block_nodes (args=0x7f579800bbc0, ret=0x7f59b8a6ceb8, errp=0x7f59b8a6ceb0) at qapi/qapi-commands-block-core.c:553 +#12 0x000056310e2e0dbd in do_qmp_dispatch_bh (opaque=0x7f59b8a6cec0) at ../qapi/qmp-dispatch.c:128 +#13 0x000056310e300085 in aio_bh_poll (ctx=ctx@entry=0x563110233e70) at ../util/async.c:216 +#14 0x000056310e2ea972 in aio_dispatch (ctx=0x563110233e70) at ../util/aio-posix.c:423 +#15 0x000056310e2ffe12 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at ../util/async.c:358 +#16 0x00007f59ba648d6f in g_main_context_find_source_by_user_data () from /lib64/libglib-2.0.so.0 +#17 0x000056310f060908 in iohandler_ctx () +#18 0x00007ffd1c9ecd40 in ?? 
() +#19 0x000056310e301640 in glib_pollfds_poll () at ../util/main-loop.c:290 +#20 os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:313 +#21 main_loop_wait (nonblocking=nonblocking@entry=0) at ../util/main-loop.c:592 +#22 0x000056310e00f9b7 in qemu_main_loop () at ../system/runstate.c:782 +#23 0x000056310dedd3ab in qemu_default_main () at ../system/main.c:37 +#24 0x00007f59ba1eaeb0 in __libc_start_call_main () from /lib64/libc.so.6 +#25 0x00007f59ba1eaf60 in __libc_start_main_impl () from /lib64/libc.so.6 +#26 0x000056310dedd2d5 in _start () +``` diff --git a/results/classifier/gemma3:12b/kvm/2642 b/results/classifier/gemma3:12b/kvm/2642 new file mode 100644 index 00000000..fb7c17a4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2642 @@ -0,0 +1,6 @@ + +guest-set-time not supported +Description of problem: +guest-set-time is not supported un Ubuntu 24.04 guests. It still works on a Ubuntu 22.04 guest and on W10 and W11 guests + +feedback from the Ubuntu 24.04 guest: error: internal error: unable to execute QEMU agent command 'guest-set-time': this feature or command is not currently supported diff --git a/results/classifier/gemma3:12b/kvm/2658 b/results/classifier/gemma3:12b/kvm/2658 new file mode 100644 index 00000000..1261eb6e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2658 @@ -0,0 +1,2 @@ + +How to simulate the L2MERRSR_EL1 register in KVM mode? diff --git a/results/classifier/gemma3:12b/kvm/2678 b/results/classifier/gemma3:12b/kvm/2678 new file mode 100644 index 00000000..c9866dee --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2678 @@ -0,0 +1,10 @@ + +virsh blockcommit failed, however the snapshot was merged into base successfully. +Description of problem: + +Steps to reproduce: +1. +2. +3. +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/2692 b/results/classifier/gemma3:12b/kvm/2692 new file mode 100644 index 00000000..921625bd --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2692 @@ -0,0 +1,2 @@ + +Using the ldp instruction to access the I/O address space in KVM mode causes an exception diff --git a/results/classifier/gemma3:12b/kvm/2699 b/results/classifier/gemma3:12b/kvm/2699 new file mode 100644 index 00000000..3a05035c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2699 @@ -0,0 +1,19 @@ + +kvm_mem_ioeventfd_del: error deleting ioeventfd: Bad file descriptor (9) +Description of problem: +QEMU 9.1.91 monitor - type 'help' for more information +(qemu) kvm_mem_ioeventfd_del: error deleting ioeventfd: Bad file descriptor (9) +test.sh: line 14: 105283 Aborted (core dumped) /usr/local/bin/qemu-system-x86_64 -M q35 -m 8G -smp 8 -cpu host -enable-kvm -device VGA,bus=pcie.0,addr=0x2 -drive file=//home/fedora-38.qcow2,media=disk,if=virtio -device virtio-net-pci,mac=00:11:22:33:44:00,netdev=id8cxFGH,id=idaFLYjy,bus=pcie.0,addr=0x7 -netdev tap,id=id8cxFGH,vhost=on,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -vnc :0 -monitor stdio -qmp tcp:0:5555,server,nowait +Steps to reproduce: +1. Boot a guest +2. set_link false nic and set_link true nic + +{"execute": "qmp_capabilities"} +{"return": {}} +{"execute": "set_link", "arguments": {"name": "idaFLYjy", "up": false}} +{"return": {}} +{"execute": "set_link", "arguments": {"name": "idaFLYjy", "up": true}} + +3. 
Guest hit qemu core dump +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/2710 b/results/classifier/gemma3:12b/kvm/2710 new file mode 100644 index 00000000..3bfbf7c4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2710 @@ -0,0 +1,127 @@ + +QEMU can't detect guest debug support on older (pre v5.7) x86 host kernels due to missing KVM_CAP_SET_GUEST_DEBUG +Description of problem: +``` +qemu-system-x86_64: -s: gdbstub: current accelerator doesn't support guest debugging +``` +Additional information: +I initially located the QEMU source code to determine whether KVM supports gdbstub by checking for `KVM_CAP_SET_GUEST_DEBUG`. The corresponding code can be found at: +```c +// qemu/accel/kvm/kvm-all.c:2695 +#ifdef TARGET_KVM_HAVE_GUEST_DEBUG + kvm_has_guest_debug = + (kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0); +#endif +``` +It can be observed that if the return value is <= 0 (in practice, this function only returns 0 on failure), the debug_flag is set to false. + +Upon further investigation of the Linux 4.15 kernel code, I discovered that in earlier versions, support for checking VM debugging capabilities via `KVM_CAP_SET_GUEST_DEBUG` was almost non-existent (it was only supported on arm64). However, for x86_64, VM debugging is supported on the 4.15 kernel. + +```c +// linu4.15/arch/x86/kvm/x86.c:2672 +int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) +{ + int r; + + switch (ext) { + case KVM_CAP_IRQCHIP: + case KVM_CAP_HLT: + case KVM_CAP_MMU_SHADOW_CACHE_CONTROL: + case KVM_CAP_SET_TSS_ADDR: + case KVM_CAP_EXT_CPUID: + case KVM_CAP_EXT_EMUL_CPUID: + case KVM_CAP_CLOCKSOURCE: + case KVM_CAP_PIT: + case KVM_CAP_NOP_IO_DELAY: + case KVM_CAP_MP_STATE: + case KVM_CAP_SYNC_MMU: + case KVM_CAP_USER_NMI: + case KVM_CAP_REINJECT_CONTROL: + case KVM_CAP_IRQ_INJECT_STATUS: + case KVM_CAP_IOEVENTFD: + case KVM_CAP_IOEVENTFD_NO_LENGTH: + case KVM_CAP_PIT2: + case KVM_CAP_PIT_STATE2: + case KVM_CAP_SET_IDENTITY_MAP_ADDR: + case KVM_CAP_XEN_HVM: + case KVM_CAP_VCPU_EVENTS: + case KVM_CAP_HYPERV: + case KVM_CAP_HYPERV_VAPIC: + case KVM_CAP_HYPERV_SPIN: + case KVM_CAP_HYPERV_SYNIC: + case KVM_CAP_HYPERV_SYNIC2: + case KVM_CAP_HYPERV_VP_INDEX: + case KVM_CAP_PCI_SEGMENT: + case KVM_CAP_DEBUGREGS: + case KVM_CAP_X86_ROBUST_SINGLESTEP: + case KVM_CAP_XSAVE: + case KVM_CAP_ASYNC_PF: + case KVM_CAP_GET_TSC_KHZ: + case KVM_CAP_KVMCLOCK_CTRL: + case KVM_CAP_READONLY_MEM: + case KVM_CAP_HYPERV_TIME: + case KVM_CAP_IOAPIC_POLARITY_IGNORED: + case KVM_CAP_TSC_DEADLINE_TIMER: + case KVM_CAP_ENABLE_CAP_VM: + case KVM_CAP_DISABLE_QUIRKS: + case KVM_CAP_SET_BOOT_CPU_ID: + case KVM_CAP_SPLIT_IRQCHIP: + case KVM_CAP_IMMEDIATE_EXIT: + r = 1; + break; + case KVM_CAP_ADJUST_CLOCK: + r = KVM_CLOCK_TSC_STABLE; + break; + case KVM_CAP_X86_GUEST_MWAIT: + r = kvm_mwait_in_guest(); + break; + case KVM_CAP_X86_SMM: + /* SMBASE is usually relocated above 1M on modern chipsets, + * and SMM handlers might indeed rely on 4G segment limits, + * so do not report SMM to be available if real mode is + * emulated via vm86 mode. Still, do not go to great lengths + * to avoid userspace's usage of the feature, because it is a + * fringe case that is not enabled except via specific settings + * of the module parameters. 
+ */ + r = kvm_x86_ops->cpu_has_high_real_mode_segbase(); + break; + case KVM_CAP_VAPIC: + r = !kvm_x86_ops->cpu_has_accelerated_tpr(); + break; + case KVM_CAP_NR_VCPUS: + r = KVM_SOFT_MAX_VCPUS; + break; + case KVM_CAP_MAX_VCPUS: + r = KVM_MAX_VCPUS; + break; + case KVM_CAP_NR_MEMSLOTS: + r = KVM_USER_MEM_SLOTS; + break; + case KVM_CAP_PV_MMU: /* obsolete */ + r = 0; + break; + case KVM_CAP_MCE: + r = KVM_MAX_MCE_BANKS; + break; + case KVM_CAP_XCRS: + r = boot_cpu_has(X86_FEATURE_XSAVE); + break; + case KVM_CAP_TSC_CONTROL: + r = kvm_has_tsc_control; + break; + case KVM_CAP_X2APIC_API: + r = KVM_X2APIC_API_VALID_FLAGS; + break; + default: + r = 0; + break; + } + return r; + +} +``` + +I attempted to bypass this check in QEMU and verified that the QEMU gdbstub works normally on the 4.15 kernel. + +For modifications related to this part in QEMU, you can refer to the email: https://lore.kernel.org/all/20211111110604.207376-5-pbonzini@redhat.com/. diff --git a/results/classifier/gemma3:12b/kvm/2712 b/results/classifier/gemma3:12b/kvm/2712 new file mode 100644 index 00000000..55742c5f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2712 @@ -0,0 +1,12 @@ + +Windows VM doesn't boot on QEMU KVM when hypervisor is disabled in Linux 6.12 +Description of problem: +Windows VM doesn't boot on QEMU KVM when hypervisor is disabled in Linux 6.12. QEMU uses 100% CPU core usage and nothing happens. + +It boots properly in Linux 6.11.10. I don't know if it's a kernel bug or QEMU needs some changes to work with the new kernel correctly. +Steps to reproduce: +1. Boot Windows 10 or 11 (can be installation ISO form official website) with KVM, but set "hypervisor=off" CPU parameter. +2. Wait. +3. Nothing happens - doesn't boot. +Additional information: +Nothing is displayed in console. diff --git a/results/classifier/gemma3:12b/kvm/2782 b/results/classifier/gemma3:12b/kvm/2782 new file mode 100644 index 00000000..1d7478e3 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2782 @@ -0,0 +1,11 @@ + +WHPX won't enable x86_64v3 level instructions +Description of problem: +x86_64v3 support is not available inside guest +Steps to reproduce: +1. Boot the image +2. Open terminal +3. Run `/lib64/ld-linux-x86-64.so.2 --help` and check which levels are available in the output +4. Or run `/lib64/ld-linux-x86-64.so.2 --list-diagnostics | grep isa` and check `isa_1` value (expected 7 for v3 (3 bits being set)) +Additional information: +Due to this some Linux distribution, like Centos Stream 10, will not be able to boot with WHPX acceleration enabled. diff --git a/results/classifier/gemma3:12b/kvm/2817 b/results/classifier/gemma3:12b/kvm/2817 new file mode 100644 index 00000000..41320b99 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2817 @@ -0,0 +1,52 @@ + +Strange floating-point behaviour under Windows with some CPU models +Description of problem: +I'm encountering a very weird bug with some floating-point maths code, but only under very specific configurations. First I thought it was a Clang bug, but then further digging eventually showed it to only occur under Windows VMs with specific QEMU CPU options, I'm not certain whether it is a QEMU/KVM bug or a Windows bug, but thought starting here would be easiest. + +When compiled under MSVC Clang with modern CPU instructions disabled (e.g. 
`-march=pentium3` or `-march=pentium-mmx`), the `floorf()` call in the following program always returns 0.0, while the truncation works correctly: + +``` +#include <math.h> +#include <stdio.h> +#include <stdlib.h> + +int main(int argc, char **argv) +{ + float n = atof(argv[1]); + printf("n = %f\n", n); + + float f = floorf(n); + printf("f = %f\n", f); + + float c = (int)(n); + printf("c = %f\n", c); + + return 0; +} +``` + +Example output on an affected VM: + +``` +C:\Users\Administrator> floorf-p3.exe 10 +n = 10.000000 +f = 0.000000 +c = 10.000000 + +C:\Users\Administrator> floorf-p4.exe 10 +n = 10.000000 +f = 10.000000 +c = 10.000000 +``` + +(`floorf-p3.exe` was compiled with `-march=pentium3` and `floorf-p4.exe` with `-march=pentium4` above) + +I've tried a few QEMU CPU models on a variety of Intel/AMD VM hosts and two different Windows versions (10 and Server 2022), and observed the following: + +* `host-passthrough` - works (on AMD and Intel hosts) +* `qemu64` - broken +* `EPYC-Milan` - works +* `Westmere` - works +* `Penryn` - broken + +(I also reported this via the mailing list, but I think it might've swallowed my post) diff --git a/results/classifier/gemma3:12b/kvm/2834 b/results/classifier/gemma3:12b/kvm/2834 new file mode 100644 index 00000000..bd0105fd --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2834 @@ -0,0 +1,20 @@ + +qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.intel-pt [bit 25] +Description of problem: +when run `./qemu-system-x86_64 -cpu host,intel_pt -m 8192M -smp 4 -hda ubuntu.qcow2 --enable-kvm --nographic` warning `qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.intel-pt [bit 25]`. +Tried adding level/min-level=0x14, but still received a warning. +Steps to reproduce: +run command +``` +./qemu-system-x86_64 -cpu host,intel_pt -m 8192M -smp 4 -hda ubuntu.qcow2 --enable-kvm --nographic +``` +Additional information: +- CPU i5-13600kf +``` +~$ sudo rdmsr 0x485 -f 14:14 # MSR_IA32_VMX_MISC_INTEL_PT +1 +~$ sudo rdmsr 0x48B -f 56:56 # SECONDARY_EXEC_PT_USE_GPA +1 +~$ sudo rdmsr 0x484 -f 50:50 # VM_ENTRY_LOAD_IA32_RTIT_CTL +1 +``` diff --git a/results/classifier/gemma3:12b/kvm/2926 b/results/classifier/gemma3:12b/kvm/2926 new file mode 100644 index 00000000..97418084 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2926 @@ -0,0 +1,37 @@ + +Excessive memory allocation on guest and host with gpu passthrough +Description of problem: +While gpu passthrough is enabled, the maximum amount of ram is allocated on the host (64 GB), even if the guest only has 8 GB configured as "currently allocated". +If I disable the physical gpu, the guest only takes the 8 GB. +Steps to reproduce: +1. Install qemu-kvm virt-manager libvirt-daemon-system virtinst libvirt-clients and bridge-utils. +1. Create a Windows vm with virt-manager +1. Insert discrete GPU on a secondary pcie slot. +1. Add `intel_iommu=on iommu=pt vfio-pci.ids=10de:17c8,10de:0fb0` to the GRUB kernel parameters. +1. Add `options vfio-pci ids=10de:17c8,10de:0fb0` and `softdep nvidia pre: vfio-pci` to `/etc/modprobe.d/vfio.conf`. +1. Update initrmfs image. +1. Add pcie hardware on virt-manager. +1. Install virtio and nvidia drivers on guest. +Additional information: +I'm using an Nvidia gtx 980Ti on a secondary slot for the guest. +The first slot has an rtx 4090 used by the host. 
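As a quick sanity check for the binding described in the steps above (a sketch; the device ids are the ones from the report), the host should show `vfio-pci` as the driver in use for both functions of the passed-through card:

```sh
lspci -nnk -d 10de:17c8   # GTX 980 Ti GPU function   -> expect "Kernel driver in use: vfio-pci"
lspci -nnk -d 10de:0fb0   # its HDMI audio function   -> expect "Kernel driver in use: vfio-pci"
```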
+ +``` +OS: Linux Mint 22.1 x86_64 +Host: MS-7E07 2.0 +Kernel: 6.8.0-51-generic +Shell: bash 5.2.21 +Resolution: 3840x2160, 3840x2160 +DE: Cinnamon 6.4.8 +WM: Mutter (Muffin) +Terminal: gnome-terminal +CPU: Intel i9-14900K (32) @ 5.700GHz +GPU: NVIDIA GeForce GTX 980 Ti +GPU: NVIDIA GeForce RTX 4090 +GPU: Intel Raptor Lake-S GT1 [UHD Graphics 770] +Memory: 73717MiB / 96317MiB +``` + +[vWin.xml](/uploads/3fe8133f67577f8724b060908b390c32/vWin.xml) +[vWin.log](/uploads/efa029460a62b62cbcff464af7cdb72a/vWin.log) + diff --git a/results/classifier/gemma3:12b/kvm/2931 b/results/classifier/gemma3:12b/kvm/2931 new file mode 100644 index 00000000..cc7bb991 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2931 @@ -0,0 +1,27 @@ + +riscv: satp invalid while kvm set to cpu host +Description of problem: +After boot, no "mmu-type" in dtb +``` + cpu@0 { + + phandle = <0x7>; + device_type = "cpu"; + reg = <0x0>; + status = "okay"; + compatible = "riscv"; + riscv,isa-extensions = "i", "m", "a", "f", "d", "c", "zicntr", "zicsr", "zifencei", "zi +bb"; + riscv,isa-base = "rv64i"; + riscv,isa = "rv64imafdc_zicntr_zicsr_zifencei_zihpm_zba_zbb"; + interrupt-controller { + + #interrupt-cells = <0x1>; + interrupt-controller; + compatible = "riscv,cpu-intc"; + phandle = <0x8>; + }; + }; +``` +Steps to reproduce: +1. boot any qemu with `-cpu host` diff --git a/results/classifier/gemma3:12b/kvm/2943 b/results/classifier/gemma3:12b/kvm/2943 new file mode 100644 index 00000000..cad0ece5 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2943 @@ -0,0 +1,8 @@ + +Please add a configurable for disabling, or by default disable, KVM_X86_QUIRK_IGNORE_GUEST_PAT on Intel host CPU +Additional information: +I am not familiar with QEMU code base or much programming in general. I did a quick grep through the latest QEMU sources pulled from this repository for the string `KVM_X86_QUIRK_IGNORE_GUEST_PAT`. It does not seem to occur anywhere which makes me think its existence and effect on QEMU users has gone unnoticed. + +If there is a handling of this flag which I have not noticed in the QEMU source code or documentation please guide me to where I can read about and probably configure it. + +Thank you. diff --git a/results/classifier/gemma3:12b/kvm/2955 b/results/classifier/gemma3:12b/kvm/2955 new file mode 100644 index 00000000..2444aa9b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2955 @@ -0,0 +1,2 @@ + +Mellanox IRQs Still Showing In Host OS After Passthrough diff --git a/results/classifier/gemma3:12b/kvm/2966 b/results/classifier/gemma3:12b/kvm/2966 new file mode 100644 index 00000000..c563f92b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2966 @@ -0,0 +1,28 @@ + +KVM: Failed to create TCE64 table for liobn 0x80000001 +Description of problem: +When rebooting the system we hit : + ``` + KVM: Failed to create TCE64 table for liobn 0x80000001 + qemu-system-ppc64: ../system/memory.c:2666: memory_region_add_subregion_common: Assertion `!subregion->container' failed. + Aborted (core dumped) + ``` +Steps to reproduce: +1. Start the machine +2. 
Reboot it + + ``` + curl -LO https://cloud.centos.org/centos/10-stream/ppc64le/images/CentOS-Stream-GenericCloud-10-20250512.0.ppc64le.qcow2 + export LIBGUESTFS_BACKEND=direct + virt-customize -v -a CentOS-Stream-GenericCloud-10-20250512.0.ppc64le.qcow2 --root-password password:centos + qemu-system-ppc64 --enable-kvm -m 4096 -smp 8 -hda CentOS-Stream-GenericCloud-10-20250512.0.ppc64le.qcow2 -vga none -nographic -device qemu-xhci + # once logged into it + systemctl reboot + [...] + KVM: Failed to create TCE64 table for liobn 0x80000001 + qemu-system-ppc64: ../system/memory.c:2666: memory_region_add_subregion_common: Assertion `!subregion->container' failed. + Aborted (core dumped) + ``` +Additional information: +The issue was already reported on ML https://lists.nongnu.org/archive/html/qemu-devel/2025-03/msg05137.html +I also hit that issue while building a CoreOS CentOS Stream 10 image https://github.com/openshift/os/issues/1818. I was able to validate that the commit https://github.com/torvalds/linux/commit/6aa989ab2bd0d37540c812b4270006ff794662e7 introduced the bug. diff --git a/results/classifier/gemma3:12b/kvm/2975 b/results/classifier/gemma3:12b/kvm/2975 new file mode 100644 index 00000000..4f92d9ad --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/2975 @@ -0,0 +1,69 @@ + +qemu-system-x86_64: VFIO_MAP_DMA failed: -22 IVSHMEM +Description of problem: +QEMU do not run with looking glass KVMFR and with host model cpu +It only works when I set cpu to `Snowridge,vmx=on,fma=on,avx=on,f16c=on,hypervisor=on...` (you can see in kvm.sh) +Steps to reproduce: +1. I have a script ( search for 'WITH VFIO') +Additional information: +UPD +Some additional debug info from GDB + +``` +=== vfio_listener_region_add === +Arguments: +listener = 0x55555a4dd2f0 +section = 0x7fffedb389c0 + +Section details: + section->offset_within_address_space: 0x382000000000 + Memory region: 0x555558120dd0 + Memory region name: shmmem-shmem0 + Memory region size: 0x10000000 + Memory region addr: 0x382000000000 +Error accessing section details: There is no member named offset. + +=== vfio_get_section_iova_range ENTRY === +Arguments: +bcontainer = 0x55555a4dd2c0 +section = 0x7fffedb389c0 +out_iova = 0x7fffedb388b0 +out_end = 0x7fffedb388b8 +out_llend = 0x7fffedb38900 + +Local variables at entry: +llend = 140737181354144 +iova = 140737181354432 + +Thread 4 "CPU 0/KVM" hit Breakpoint -96, 0x0000555555b8511a in vfio_listener_region_add (listener=0x55555a4dd2f0, + section=0x7fffedb389c0) at ../../../hw/vfio/listener.c:467 +467 if (!vfio_get_section_iova_range(bcontainer, section, &iova, &end, +(gdb) +Continuing. 
+2025-05-20T22:46:27.819893Z qemu-system-x86_64: vfio_container_dma_map(0x55555a4dd2c0, 0x382000000000, 0x10000000, 0x7fffcffff000) = -22 (Invalid argument) +qemu: hardware error: vfio: DMA mapping failed, unable to continue +CPU #0: +RAX=00000000e0000000 RBX=00000000e0608004 RCX=0000000000608004 RDX=0000000000000003 +RSI=0000000000000003 RDI=0000000000000000 RBP=000000007ef6b640 RSP=000000007ef6b5f0 +R8 =0000000000000000 R9 =000000007ef6b70f R10=0000000000000000 R11=0000000000000004 +R12=000000007ef6b800 R13=0000000000000003 R14=0000000000000000 R15=000000007ef6b7fe +RIP=000000007e1fe2eb RFL=00000246 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 + +``` + +Tested with latest QEMU 2af4a82ab2cce3412ffc92cd4c96bd870e33bc8e same error +``` +sudo dnf builddep qemu +../../../configure --enable-debug +``` + +[ERROR-QEMU-GIT-2af4a82ab2cce3412ffc92cd4c96bd870e33bc8e.txt](/uploads/060b26f091f0391f0491ea91dbe78f6d/ERROR-QEMU-GIT-2af4a82ab2cce3412ffc92cd4c96bd870e33bc8e.txt) + +[ERROR-trace-iova-values.txt](/uploads/22cacf4a5cb2c91ff6375c792a25dde1/ERROR-trace-iova-values.txt) + +[WORKINg-trace-iova-values.txt](/uploads/d4d53c2e743cf5f2d5bf810d61b9f1e6/WORKINg-trace-iova-values.txt) + + +[kvm.log.txt](/uploads/ac31eebf6e63aa6abe2498d1a4064bef/kvm.log.txt) + +[kvm.sh](/uploads/7f656f7cf0d623a240309ee61b024dc9/kvm.sh) diff --git a/results/classifier/gemma3:12b/kvm/355410 b/results/classifier/gemma3:12b/kvm/355410 new file mode 100644 index 00000000..618fa28a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/355410 @@ -0,0 +1,32 @@ + +kvm crashed with SIGSEGV in malloc_consolidate() + +Binary package hint: kvm + +See Bug #355401. Oddly enough, when Windows tries to install drivers for a WDM device (a W380a on USB), kvm crashes. + +ProblemType: Crash +Architecture: amd64 +DistroRelease: Ubuntu 9.04 +ExecutablePath: /usr/bin/kvm +KvmCmdLine: Error: command ['ps', '-p', '5036', '-F'] failed with exit code 1: UID PID PPID C SZ RSS PSR STIME TTY TIME CMD +MachineType: ASUSTeK Computer Inc. F3Sa +NonfreeKernelModules: fglrx +Package: kvm 1:84+dfsg-0ubuntu10 +ProcCmdLine: root=UUID=1b4d3e6f-e7de-4dda-a22b-4ee8d3da378d ro splash +ProcCmdline: kvm -snapshot -net nic,model=ne2k_pci -net user -soundhw es1370 -usb -usbdevice tablet -m 256 winXP.SP3-IE7-20081018.qcow -usbdevice host:0fce:d0b5 -smb /home/username/temp/Unlock_Sony_Ericsson/ +ProcEnviron: + PATH=(custom, user) + LANG=en_CA.UTF-8 + SHELL=/bin/bash +ProcVersionSignature: Ubuntu 2.6.28-11.40-generic +Signal: 11 +SourcePackage: kvm +StacktraceTop: + malloc_consolidate (av=0x7fe3b0607a00) at malloc.c:4897 + _int_malloc (av=0x7fe3b0607a00, bytes=2128) + *__GI___libc_malloc (bytes=2128) at malloc.c:3551 + ?? () + ?? () +Title: kvm crashed with SIGSEGV in malloc_consolidate() +UserGroups: adm admin audio cdrom dialout dip disk fax fuse kvm lpadmin netdev plugdev sambashare scanner tape video \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/391879 b/results/classifier/gemma3:12b/kvm/391879 new file mode 100644 index 00000000..bbdc5037 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/391879 @@ -0,0 +1,25 @@ + +migrate exec ignores exit status + +Binary package hint: kvm + +Using + + migrate "exec:cat > foo; false" + +in the monitor results in the state of the VM being written to foo, as expected, and the VM then being stopped. This is surprising, as I think it stands to reason that in case of a failed migrate-exec process, which is what a non-zero exit status implies to me, the VM should continue. 
+ +== Version information + +$ lsb_release -rd +Description: Ubuntu 9.04 +Release: 9.04 + +$ apt-cache policy kvm +kvm: + Installed: 1:84+dfsg-0ubuntu11 + Candidate: 1:84+dfsg-0ubuntu11 + Version table: + *** 1:84+dfsg-0ubuntu11 0 + 500 http://gb.archive.ubuntu.com jaunty/main Packages + 100 /var/lib/dpkg/status \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/391880 b/results/classifier/gemma3:12b/kvm/391880 new file mode 100644 index 00000000..5a1694d8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/391880 @@ -0,0 +1,12 @@ + +migrate exec hangs for several minutes if the pipe is closed before all its data is written + +Binary package hint: kvm + +Using + + migrate "exec:true" + +in the monitor hangs the VM for several minutes. What I expect is that the VM stops attempting to migrate after the pipe has been closed. + +Indicating a background migrate with -d doesn't help. Presumably the migration is not backgrounded until a certain amount of data is written to the pipe, or the migration times out What I expect is that the migration is backgrounded immediately. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/412 b/results/classifier/gemma3:12b/kvm/412 new file mode 100644 index 00000000..4f3f65af --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/412 @@ -0,0 +1,2 @@ + +stable-5.0 crashes with SIGSEV while checking for kvm extension diff --git a/results/classifier/gemma3:12b/kvm/474968 b/results/classifier/gemma3:12b/kvm/474968 new file mode 100644 index 00000000..784358ad --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/474968 @@ -0,0 +1,37 @@ + +kvm smb server hogs cpu guest freeze + +Binary package hint: qemu-kvm + +kvm hogs the CPU reproducibly. I installed an Ubuntu using KVM. I run the machine with -net nic,model=rtl8139,macaddr=f0:00:BA:12:34:56 -net user,hostfwd=tcp::2223-:22,smb=/tmp/share, sshed into the machine and typed "telnet 10.0.2.4 139" to try whether the SMB server works. KVM then hogs the CPU. + +ProblemType: Bug +Architecture: amd64 +Date: Thu Nov 5 01:23:09 2009 +DistroRelease: Ubuntu 9.10 +KvmCmdLine: Error: command ['ps', '-C', 'kvm', '-F'] failed with exit code 1: UID PID PPID C SZ RSS PSR STIME TTY TIME CMD +MachineType: LENOVO 766636G +Package: kvm 1:84+dfsg-0ubuntu16+0.11.0+0ubuntu6.3 +PccardctlIdent: + Socket 0: + no product info available +PccardctlStatus: + Socket 0: + no card +ProcCmdLine: root=/dev/mapper/cryptroot source=UUID=9c3d5596-27c6-4fd5-bfcd-fa8eef6f1230 ro quiet splash crashkernel=384M-2G:64M,2G-:128M +SourcePackage: qemu-kvm +Uname: Linux 2.6.32-999-generic x86_64 +dmi.bios.date: 07/01/2008 +dmi.bios.vendor: LENOVO +dmi.bios.version: 7NETB6WW (2.16 ) +dmi.board.name: 766636G +dmi.board.vendor: LENOVO +dmi.board.version: Not Available +dmi.chassis.asset.tag: No Asset Information +dmi.chassis.type: 10 +dmi.chassis.vendor: LENOVO +dmi.chassis.version: Not Available +dmi.modalias: dmi:bvnLENOVO:bvr7NETB6WW(2.16):bd07/01/2008:svnLENOVO:pn766636G:pvrThinkPadX61s:rvnLENOVO:rn766636G:rvrNotAvailable:cvnLENOVO:ct10:cvrNotAvailable: +dmi.product.name: 766636G +dmi.product.version: ThinkPad X61s +dmi.sys.vendor: LENOVO \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/477 b/results/classifier/gemma3:12b/kvm/477 new file mode 100644 index 00000000..22af621b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/477 @@ -0,0 +1,13 @@ + +Nested kvm-svm does not work since f5cc5a5c16 +Description of problem: +Nested SVM virtualization seems to not work. 
I bisected this to f5cc5a5c16. +Steps to reproduce: +1. Boot up a Linux guest such as the Debian Live CD with -accel kvm -cpu host +2. ```dmesg | grep kvm; ls /dev/kvm```; # Shows that KVM is disabled within the guest +Additional information: +Details about my AMD host: +``` +model name : AMD Ryzen 5 2600 Six-Core Processor +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd sev ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca +``` diff --git a/results/classifier/gemma3:12b/kvm/490484 b/results/classifier/gemma3:12b/kvm/490484 new file mode 100644 index 00000000..69c32260 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/490484 @@ -0,0 +1,42 @@ + +running 64bit client in 64bit host with intel crashes + +Binary package hint: qemu-kvm + +running windows 7 VM halts on early boot with + +kvm: unhandled exit 80000021 +kvm_run returned -22 + +ProblemType: Bug +Architecture: amd64 +Date: Mon Nov 30 21:28:54 2009 +DistroRelease: Ubuntu 9.10 +KvmCmdLine: Error: command ['ps', '-C', 'kvm', '-F'] failed with exit code 1: UID PID PPID C SZ RSS PSR STIME TTY TIME CMD +MachineType: System manufacturer P5Q-PRO +NonfreeKernelModules: fglrx +Package: kvm (not installed) +ProcCmdLine: BOOT_IMAGE=/vmlinuz-2.6.31-14-generic root=UUID=17a8e181-fac7-461e-8cad-8aea97be2536 ro quiet splash +ProcEnviron: + LANGUAGE=en_US:en + PATH=(custom, user) + LANG=en_US.UTF-8 + SHELL=/bin/bash +ProcVersionSignature: Ubuntu 2.6.31-14.48-generic +SourcePackage: qemu-kvm +Uname: Linux 2.6.31-14-generic x86_64 +dmi.bios.date: 07/10/2008 +dmi.bios.vendor: American Megatrends Inc. +dmi.bios.version: 1004 +dmi.board.asset.tag: To Be Filled By O.E.M. +dmi.board.name: P5Q-PRO +dmi.board.vendor: ASUSTeK Computer INC. 
+dmi.board.version: Rev 1.xx +dmi.chassis.asset.tag: Asset-1234567890 +dmi.chassis.type: 3 +dmi.chassis.vendor: Chassis Manufacture +dmi.chassis.version: Chassis Version +dmi.modalias: dmi:bvnAmericanMegatrendsInc.:bvr1004:bd07/10/2008:svnSystemmanufacturer:pnP5Q-PRO:pvrSystemVersion:rvnASUSTeKComputerINC.:rnP5Q-PRO:rvrRev1.xx:cvnChassisManufacture:ct3:cvrChassisVersion: +dmi.product.name: P5Q-PRO +dmi.product.version: System Version +dmi.sys.vendor: System manufacturer \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/497273 b/results/classifier/gemma3:12b/kvm/497273 new file mode 100644 index 00000000..aa40acde --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/497273 @@ -0,0 +1,66 @@ + +winxp.64 fails to install in -rc2 with kvm + +Host: Fedora11, 64-bit +Kernel: 2.6.30.9-96.fc11.x86_64 +KVM modules: + +# modinfo kvm +filename: /lib/modules/2.6.30.9-96.fc11.x86_64/kernel/arch/x86/kvm/kvm.ko +license: GPL +author: Qumranet +srcversion: 23A53503602E48217AC12F1 +depends: +vermagic: 2.6.30.9-96.fc11.x86_64 SMP mod_unload +parm: oos_shadow:bool +parm: msi2intx:bool + +]# modinfo kvm-intel +filename: /lib/modules/2.6.30.9-96.fc11.x86_64/kernel/arch/x86/kvm/kvm-intel.ko +license: GPL +author: Qumranet +srcversion: 5DD68E0B8497DC4518A8797 +depends: kvm +vermagic: 2.6.30.9-96.fc11.x86_64 SMP mod_unload +parm: bypass_guest_pf:bool +parm: enable_vpid:bool +parm: flexpriority_enabled:bool +parm: enable_ept:bool +parm: emulate_invalid_guest_state:bool + +Host CPU: Intel(R) Xeon(R) CPU X5550 @ 2.67GHz + +Guest commandline: +sudo ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios -name 'vm1' -monitor stdio -drive file=~/work/images/winXP-64.qcow2,if=ide,cache=writeback -net nic,vlan=0,model=rtl8139,macaddr=52:54:00:12:34:56 -net user,vlan=0 -m 512 -cdrom ~/work/isos/en_windows_xp_professional_x64.iso -enable-kvm -redir tcp:5000::22 + +Steps to reproduce: + +1. git checkout -b 12rc2 v0.12.0-rc2 +2. ./configure --target-list=x86_64-softmmu +3. make +4. qemu-img create -f qcow2 ~/work/images/winXP-64.qcow2 20G +5. sudo ./x86_64-softmmu/qemu-system-x86_64 -L pc-bios -name 'vm1' -monitor stdio -drive file=~/work/images/winXP-64.qcow2,if=ide,cache=writeback -net nic,vlan=0,model=rtl8139,macaddr=52:54:00:12:34:56 -net user,vlan=0 -m 512 -cdrom ~/work/isos/en_windows_xp_professional_x64.iso -enable-kvm -redir tcp:5000::22 + +Guest boots XP.64 installer, loads some files and then hangs at "Starting Windows XP" + +Reverting to -rc1 and XP installs just fine. Git bisect points to: + +commit 066263f37701687c64af9d8825e3376d069ebfd4 +Author: Andre Przywara <email address hidden> +Date: Mon Dec 7 11:58:02 2009 +0100 + +cpuid: Fix multicore setup on Intel + + +Reverting this fixes the problem. + +Different kvm modules seem to affect this install as well. Switching +to different kvm-kmod packages: + +2.6.32 modules work fine with 0.12.0-rc2, no issues at all + +2.6.30 modules fail, reverting the above commit doesn't help, seems to +be in the same boat as 2.6.28 modules + +2.6.31.5 (roughly equivalent to Fedora11 modules) work on -rc1, fail on +rc2, reverting above commit fixes -rc2. 
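For completeness, a sketch of the bisect/revert flow described above (tags and commit id are taken from the report; each bisect step requires rebuilding and retrying the XP.64 install):

```sh
git bisect start v0.12.0-rc2 v0.12.0-rc1      # bad, then good
# ./configure --target-list=x86_64-softmmu && make, retest, then "git bisect good|bad"
git revert 066263f37701687c64af9d8825e3376d069ebfd4   # confirms the cpuid commit as the culprit
```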
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/498035 b/results/classifier/gemma3:12b/kvm/498035 new file mode 100644 index 00000000..68aca373 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/498035 @@ -0,0 +1,8 @@ + +qemu hangs on shutdown or reboot (XP guest) + +When I shut down or reboot my Windows XP guest, about half the time, it hangs at the point where it says "Windows is shutting down...". At that point qemu is using 100% of one host CPU, about 85% user, 15% system. (Core 2 Quad 2.66GHz) + +This is the command line I use to start qemu: + +qemu-system-x86_64 -hda winxp.img -k en-us -m 2048 -smp 2 -vnc :3100 -usbdevice tablet -boot c -enable-kvm & \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/502107 b/results/classifier/gemma3:12b/kvm/502107 new file mode 100644 index 00000000..c8b97134 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/502107 @@ -0,0 +1,49 @@ + +qemu-kvm 0.12.1.2 crashes booting Ubuntu 9.10 with "-vga std" + +I have an Ubuntu VM that works fine without "-vga std" but crashes if I add "-vga std". This is the full command line: + +qemu-system-x86_64 -vga std -drive +cache=writeback,index=0,media=disk,file=ubuntu.img -k en-us -m 2048 -smp 2 -vnc +:3102 -usbdevice tablet -enable-kvm & + +I get this error: + + KVM internal error. Suberror: 1 +rax 00007f789177e000 rbx 0000000000000000 rcx 0000000000000000 rdx +0000000000000000 +rsi 0000000000000000 rdi 00007f789177e000 rsp 00007fff361775e8 rbp +00007fff36177600 +r8 000000000000ff80 r9 0000000000200000 r10 0000000000000000 r11 +00007f789100a3f0 +r12 00000000004017c0 r13 00007fff36178cf0 r14 0000000000000000 r15 +0000000000000000 +rip 00007f789100aa7b rflags 00013206 +cs 0033 (00000000/ffffffff p 1 dpl 3 db 0 s 1 type b l 1 g 1 avl 0) +ds 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +es 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +ss 002b (00000000/ffffffff p 1 dpl 3 db 1 s 1 type 3 l 0 g 1 avl 0) +fs 0000 (7f78917906f0/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +gs 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +tr 0040 (ffff880001a09440/00002087 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) +ldt 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +gdt ffff8800019fa000/7f +idt ffffffff818fd000/fff +cr0 80050033 cr2 2408000 cr3 379d4000 cr4 6f0 cr8 0 efer d01 +emulation failure, check dmesg for details + +I'm running kernel 2.6.32, and I have the kvm stuff compiled directly into the +kernel. There's nothing in dmesg about kvm at all. + +Note that in the VM grub comes up, but the VM dies when I boot the kernel. + +This command line works: + +qemu-system-x86_64 -drive cache=writeback,index=0,media=disk,file=ubuntu.img -k +en-us -m 2048 -smp 2 -vnc :3102 -usbdevice tablet -enable-kvm & + +That is, removing "-vga std" fixes the problem. + +I recently added this option to both my Ubuntu and Windows XP VMs. The Windows VM still works fine. If Windows can detect that the graphics card has changed, then Ubuntu should also have no problem. That being said, I added the std option when using 0.12.1.1, so there may be a qemu regression. 
+ +I have reported this bug elsewhere: http://bugs.gentoo.org/show_bug.cgi?id=299211 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/504 b/results/classifier/gemma3:12b/kvm/504 new file mode 100644 index 00000000..c11c0806 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/504 @@ -0,0 +1,19 @@ + +kvm_log_clear_one_slot: KVM_CLEAR_DIRTY_LOG failed +Description of problem: +``` + $ ./qemu-system-i386 -enable-kvm -cdrom ubuntu-20.04.2.0-desktop-amd64.iso +qemu-system-i386: kvm_log_clear_one_slot: KVM_CLEAR_DIRTY_LOG failed, slot=9, start=0x0, size=0x10, errno=-14 +qemu-system-i386: kvm_log_clear: kvm log clear failed: mr=vga.vram offset=10000 size=10000 +Aborted + + $ ./qemu-system-x86_64 -enable-kvm -cdrom ubuntu-20.04.2.0-desktop-amd64.iso +qemu-system-x86_64: kvm_log_clear_one_slot: KVM_CLEAR_DIRTY_LOG failed, slot=9, start=0x0, size=0x10, errno=-14 +qemu-system-x86_64: kvm_log_clear: kvm log clear failed: mr=vga.vram offset=0 size=10000 +Aborted +``` +Steps to reproduce: +1. qemu crashes right at start +Additional information: +- last successfully used qemu version: 5.2.0 + - first seen failing qemu version: 6.0 diff --git a/results/classifier/gemma3:12b/kvm/525 b/results/classifier/gemma3:12b/kvm/525 new file mode 100644 index 00000000..1d4ec219 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/525 @@ -0,0 +1,15 @@ + +missing features with CPU `qemu64` +Description of problem: +The live migration complains about a missing feature when using the CPU qemu64, which is _guaranteed to work_. +Steps to reproduce: +1. start the VM with qemu64 on the CPU: Intel(R) Xeon(R) CPU E5-2620 v4 +2. live-migrate the VM to a CPU: Intel(R) Xeon(R) CPU E5-2670 0 +Additional information: +The migration fails: +``` +root@covid21:~# virsh migrate --verbose --live --persistent --undefinesource myvm.local qemu+ssh://covid24/system +error: operation failed: guest CPU doesn't match specification: missing features: abm +``` + +This should not happen on a generic CPU, which should always work. Note, that the migration succeeds when using `-cpu qemu64,abm=off …` diff --git a/results/classifier/gemma3:12b/kvm/526653 b/results/classifier/gemma3:12b/kvm/526653 new file mode 100644 index 00000000..200bcede --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/526653 @@ -0,0 +1,39 @@ + +Breakpoint on Memory address fails with KVM + +Using QEMU version 0.12.50 under ubuntu Karmic x64 + +To reproduce the error using a floppy with a bootloder: +qemu-system-x86_64 -s -S -fda floppy.img -boot a -enable-kvm + +connect with gdb: +(gdb) set arch i8086 +The target architecture is assumed to be i8086 +(gdb) target remote localhost:1234 +Remote debugging using localhost:1234 +0x0000fff0 in ?? () +(gdb) break *0x7c00 +Breakpoint 1 at 0x7c00 +(gdb) continue +Continuing. + +The breakpoint is not hit. + +If you close qemu and start it without kvm support: + +qemu-system-x86_64 -s -S -fda floppy.img -boot a + +(gdb) set arch i8086 +The target architecture is assumed to be i8086 +(gdb) target remote localhost:1234 +Remote debugging using localhost:1234 +0x0000fff0 in ?? () +(gdb) break *0x7c00 +Breakpoint 1 at 0x7c00 +(gdb) continue +Continuing. + +Breakpoint 1, 0x00007c00 in ?? () +(gdb) + +The breakpoint is hit. If you wait until after the bootloader has been loaded into memory, you can properly set breakpoints with or without kvm enabled. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/528 b/results/classifier/gemma3:12b/kvm/528 new file mode 100644 index 00000000..81777c0f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/528 @@ -0,0 +1,2 @@ + +arm: trying to use KVM with an EL3-enabled CPU hits an assertion failure diff --git a/results/classifier/gemma3:12b/kvm/530 b/results/classifier/gemma3:12b/kvm/530 new file mode 100644 index 00000000..69032796 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/530 @@ -0,0 +1,44 @@ + +Invalid guest state when rebooting a nesting hypervisor +Description of problem: +On a standard Linux machine, I run a custom hypervisor stack based on [Hedron](https://github.com/cyberus-technology/hedron) in a qemu VM with nesting capabilities. The Hedron stack starts a nested Linux guest with complete pass-through of all resources not required for virtualizing the nested guest. In particular, ACPI and PCI including the reset functionality are directly accessible to the nested guest. As soon as the nested guest issues a machine reset, I get a hardware error with the following error message: + +<details><summary>KVM: entry failed, hardware error 0x80000021</summary> +<pre> +If you're running a guest on an Intel machine without unrestricted mode +support, the failure can be most likely due to the guest entering an invalid +state for Intel VT. For example, the guest maybe running in big real mode +which is not supported on less recent Intel processors. + +EAX=00000000 EBX=00000000 ECX=00000000 EDX=00050657 +ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000 +EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 00000000 0000ffff 00009300 +CS =f000 ffff0000 0000ffff 00009b00 +SS =0000 00000000 0000ffff 00009300 +DS =0000 00000000 0000ffff 00009300 +FS =0000 00000000 0000ffff 00009300 +GS =0000 00000000 0000ffff 00009300 +LDT=0000 00000000 0000ffff 00008200 +TR =0000 00000000 0000ffff 00008b00 +GDT= 00000000 0000ffff +IDT= 00000000 0000ffff +CR0=60000010 CR2=00000000 CR3=00000000 CR4=003726f8 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +</pre> +</details> + +If I'm not mistaken, the CR4 value of `0x003726f8` is the offending state here, because PCIDE (bit 17) is set, even though the arch state indicates real-mode and the Intel SDM states: + +> If the “IA-32e mode guest” VM-entry control is 0, bit 17 in the CR4 field (corresponding to CR4.PCIDE) must be 0. + +Furthermore, the issue is not present when not using PCID in the L1 hypervisor or when PCID/VPID are fused out using `qemu-kvm -cpu host,-pcid,-vmx-vpid,-vmx-invpcid-exit`. +Steps to reproduce: +1. Boot custom hypervisor stack (unfortunately not yet publicly available, I'm working on that) +2. In nested Linux guest, type `reboot`, which eventually directly reboots the main VM (all main VM hardware is passed through to the single nested guest) +Additional information: +I have tracked down the [change](https://gitlab.com/qemu/qemu/-/commit/b16c0e20c74218f2d69710cedad11da7dd4d2190#063d8f78716c7a658841a1d51cc66bf30f697082_3920_3944) that likely introduced this issue. Moving the call to `kvm_put_sregs` back down (I suspect after `kvm_put_nested_state`, but I did not verify that yet) solves the reboot issue for me. The comment makes it clear that it is important to keep a certain order here, so I'm aware just reversing it is not an option. 
+ +Maybe this already helps enough to figure out what exactly the issue and correct fix is, and I am happy to try any suggestions as long as I cannot provide a proper reproducer. diff --git a/results/classifier/gemma3:12b/kvm/530077 b/results/classifier/gemma3:12b/kvm/530077 new file mode 100644 index 00000000..e8b02a51 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/530077 @@ -0,0 +1,12 @@ + +kvm: 16-bit code execution failure should be more friendly + +Today, when kvm fails at 16-bit code execution, we report: + + spirit:~/qemu> qemu-kvm ./hda-fedora.img + kvm: unhandled exit 80000021 + kvm_run returned -22 + +There are three reasons exit reason 21 happens. The first is that a user is executing an image containing a workload that uses GFXBOOT or some other bootloader that exercises big real mode. On pre-Westmere Intel processors, VT could not handle big real mode. The second reason is that the guest's image is corrupted and we're executing random code. We accidentally fall into one of the unsupported modes for VT. Again, this is addressed on WSM. The third case is where there's an actual bug in KVM. This should be exceedingly rare at this stage. + +We should present a friendly error message explaining the possible causes and recommending corrective action. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/563 b/results/classifier/gemma3:12b/kvm/563 new file mode 100644 index 00000000..2d383bc9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/563 @@ -0,0 +1,2 @@ + +KVM ubuntu 20 VPS on Ryzen 9 5950X diff --git a/results/classifier/gemma3:12b/kvm/584514 b/results/classifier/gemma3:12b/kvm/584514 new file mode 100644 index 00000000..dbe75c63 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/584514 @@ -0,0 +1,29 @@ + +Qemu-KVM 0.12.4 Guest entered Paused State + +I recently had a 0.12.4 qemu-kvm with a debian lenny guest which occasionally paused. + +There was no memory exhaustion as suggested earlier. + +qemu-kvm send the following output:: + +VM internal error. 
Suberror: 1 +rax 0000000000000100 rbx ffff880017585bc0 rcx 00007f84c6d5b000 rdx 0000000000000001 +rsi 0000000000000000 rdi ffff88001d322dec rsp ffff88001e133e88 rbp ffff88001e133e88 +r8 0000000001f25bc2 r9 0000000000000007 r10 00007f84c6b4d97b r11 0000000000000206 +r12 ffff88001d322dec r13 ffff88001d322de8 r14 0000000000000001 r15 0000000000000000 +rip ffffffff81039719 rflags 00010092 +cs 0010 (00000000/ffffffff p 1 dpl 0 db 0 s 1 type b l 1 g 1 avl 0) +ds 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +es 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +ss 0018 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0) +fs 0000 (7f84c6d53700/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +gs 0000 (ffff880001d00000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +tr 0040 (ffff880001d13780/00002087 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0) +ldt 0000 (00000000/ffffffff p 0 dpl 0 db 0 s 0 type 0 l 0 g 0 avl 0) +gdt ffff880001d04000/7f +idt ffffffff8195e000/fff +cr0 80050033 cr2 7f84c6b38ec8 cr3 1db7d000 cr4 6e0 cr8 0 efer 501 +emulation failure, check dmesg for details + +Unfortunately, I found nothing in syslog or dmesg \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/584516 b/results/classifier/gemma3:12b/kvm/584516 new file mode 100644 index 00000000..fb0b3361 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/584516 @@ -0,0 +1,48 @@ + +opensuse 11.2 guest hangs after live migration with clocksource=kvm-clock + +i would like to debug a problem that I encountered some time ago with opensuse 11.2 and also +with Ubuntu (karmic/lucid). + +If I run an opensuse guest 64-bit and do not touch the clocksource settings the guest almost +everytime hangs after live migration at: + +(gdb) thread apply all bt + +Thread 2 (Thread 0x7f846782a950 (LWP 27356)): +#0 0x00007f8467d24cd7 in ioctl () from /lib/libc.so.6 +#1 0x000000000042b945 in kvm_run (env=0x2468170) + at /usr/src/qemu-kvm-0.12.4/qemu-kvm.c:921 +#2 0x000000000042cea2 in kvm_cpu_exec (env=0x2468170) + at /usr/src/qemu-kvm-0.12.4/qemu-kvm.c:1651 +#3 0x000000000042d62c in kvm_main_loop_cpu (env=0x2468170) + at /usr/src/qemu-kvm-0.12.4/qemu-kvm.c:1893 +#4 0x000000000042d76d in ap_main_loop (_env=0x2468170) + at /usr/src/qemu-kvm-0.12.4/qemu-kvm.c:1943 +#5 0x00007f8468caa3ba in start_thread () from /lib/libpthread.so.0 +#6 0x00007f8467d2cfcd in clone () from /lib/libc.so.6 +#7 0x0000000000000000 in ?? () + +Thread 1 (Thread 0x7f84692d96f0 (LWP 27353)): +#0 0x00007f8467d25742 in select () from /lib/libc.so.6 +#1 0x000000000040c25a in main_loop_wait (timeout=1000) + at /usr/src/qemu-kvm-0.12.4/vl.c:3994 +#2 0x000000000042dcf1 in kvm_main_loop () + at /usr/src/qemu-kvm-0.12.4/qemu-kvm.c:2126 +#3 0x000000000040c98c in main_loop () at /usr/src/qemu-kvm-0.12.4/vl.c:4212 +#4 0x000000000041054b in main (argc=31, argv=0x7fffa91351c8, + envp=0x7fffa91352c8) at /usr/src/qemu-kvm-0.12.4/vl.c:6252 + +If I run the same guest with kernel parameter clocksource=acpi_pm, the migration succeeds reliably. 
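Since the hang only shows up when the guest selects kvm-clock, a quick cross-check is to read (and, for testing, switch) the clocksource from inside the guest before and after the migration. The helper below is a small sketch assuming a Linux guest with sysfs mounted; writing to `current_clocksource` switches the clocksource at runtime (root required), while the `clocksource=acpi_pm` kernel parameter mentioned above makes the choice persistent across boots.

```python
#!/usr/bin/env python3
# Report -- and optionally switch -- the guest's clocksource, e.g. to confirm
# whether kvm-clock is in use before/after a live migration.
# Assumes a Linux guest with sysfs mounted; switching requires root.
import sys

BASE = "/sys/devices/system/clocksource/clocksource0"

def read(name):
    with open(f"{BASE}/{name}") as f:
        return f.read().strip()

print("available:", read("available_clocksource"))
print("current:  ", read("current_clocksource"))

if len(sys.argv) > 1:                          # e.g. ./clocksource.py acpi_pm
    with open(f"{BASE}/current_clocksource", "w") as f:
        f.write(sys.argv[1])
    print("switched to:", read("current_clocksource"))
```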
+ +The hosts runs: +/kernel: /2.6.33.3, /bin: /qemu-kvm-0.12.4, /mod: /2.6.33.3 + +I invoke qemu-kvm with: +/usr/bin/qemu-kvm-0.12.4 -net none -drive file=/dev/sdb,if=ide,boot=on,cache=none,aio=native -m 1024 -cpu qemu64,model_id='Intel(R) Xeon(R) CPU E5430 @ 2.66GHz' -monitor tcp:0:4001,server,nowait -vnc :1 -name 'test' -boot order=dc,menu=on -k de -pidfile /var/run/qemu/vm-149.pid -mem-path /hugepages -mem-prealloc -rtc base=utc,clock=vm -usb -usbdevice tablet + +The Guest is: +OpenSuse 11.2 64-bit with Kernel 2.6.31.5-0.1-desktop #1 SMP PREEMPT 2009-10-26 15:49:03 +0100 x86_64 +The clocksource automatically choosen is kvm-clock. + +Feedback appreciated. I have observed the same problem with 0.12.2 and also with old binaries provided by Ubuntu Karmic (kvm-88). \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/588803 b/results/classifier/gemma3:12b/kvm/588803 new file mode 100644 index 00000000..b7c1b405 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/588803 @@ -0,0 +1,72 @@ + +Image corruption during snapshot creation/deletion + +Hello, + +The creation/deletion of snapshots sometimes crashes and corrupts the VM image and provoke a segmentation fault in "strcmp", called from "bdrv_snapshot_find". + +Here is a patch that temporarily fixes that (it fixes the segfault but not its reason) : + +--- qemu-kvm-0.12.2-old/savevm.c 2010-01-18 19:48:25.000000000 +0100 ++++ qemu-kvm-0.12.2/savevm.c 2010-02-12 13:45:07.225644169 +0100 +@@ -1624,6 +1624,7 @@ + int nb_sns, i, ret; + + ret = -ENOENT; ++ if (!name) return ret; + nb_sns = bdrv_snapshot_list(bs, &sn_tab); + if (nb_sns < 0) + return ret; +@@ -1649,6 +1650,8 @@ + QEMUSnapshotInfo sn1, *snapshot = &sn1; + int ret; + ++ if (!name) return 0; ++ + QTAILQ_FOREACH(dinfo, &drives, next) { + bs = dinfo->bdrv; + if (bdrv_can_snapshot(bs) && +@@ -1777,6 +1780,11 @@ + QTAILQ_FOREACH(dinfo, &drives, next) { + bs1 = dinfo->bdrv; + if (bdrv_has_snapshot(bs1)) { ++ if (!name) { ++ monitor_printf(mon, "Could not find snapshot 'NULL' on " ++ "device '%s'\n", ++ bdrv_get_device_name(bs1)); ++ } + ret = bdrv_snapshot_goto(bs1, name); + if (ret < 0) { + if (bs != bs1) +@@ -1804,6 +1812,11 @@ + } + } + ++ if (!name) { ++ monitor_printf(mon, "VM state name is NULL\n"); ++ return -EINVAL; ++ } ++ + /* Don't even try to load empty VM states */ + ret = bdrv_snapshot_find(bs, &sn, name); + if ((ret >= 0) && (sn.vm_state_size == 0)) +@@ -1840,6 +1853,11 @@ + QTAILQ_FOREACH(dinfo, &drives, next) { + bs1 = dinfo->bdrv; + if (bdrv_has_snapshot(bs1)) { ++ if (!name) { ++ monitor_printf(mon, "Could not find snapshot 'NULL' on " ++ "device '%s'\n", ++ bdrv_get_device_name(bs1)); ++ } + ret = bdrv_snapshot_delete(bs1, name); + if (ret < 0) { + if (ret == -ENOTSUP) + + +The patch is very simple. Some checks on the variable "name" were missing in "savevm.c". + +Regards, + +Nicolas Grandjean +Conix Security \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/597575 b/results/classifier/gemma3:12b/kvm/597575 new file mode 100644 index 00000000..bb312b41 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/597575 @@ -0,0 +1,33 @@ + +Hangs on HTTP errors when using the curl block driver + +Hi, + +It seems that qemu-kvm does not handle HTTP errors gracefully when using the curl block driver and a synchronous request is made (i.e. one using bdrv_read_em() for example). 
In these cases, if an HTTP error (such as 404 or 416) is returned, the aio thread exits but the main thread never finishes waiting for I/O completion, thus freezing KVM. + +Versions affected: +At least 0.11.1 and 0.12.4 were tested and found to be affected. + +How to reproduce: +Simply specify a non-existing path for an HTTP URL as a CDROM drive. +kvm -drive file=test.img,format=qcow2,if=ide,index=0,boot=on -drive file=http://127.0.0.1/static/test1.iso,media=cdrom,index=2,if=ide -boot c + +qemu-kvm will hang on boot using 100% cpu as it will try to open the block device. At that point, the backtrace is (qemu-kvm-0.12.4): + +#0 0x000000000047aaaf in qemu_aio_wait () at aio.c:163 +#1 0x000000000047a055 in bdrv_read_em (bs=0x1592320, sector_num=0, buf=0x7fffcf7e9ae0 "¨\237~Ïÿ\177", nb_sectors=4) + at block.c:1939 +#2 0x0000000000479c0e in bdrv_pread (bs=0x1592320, offset=<value optimized out>, buf=0x7fffcf7e9ae0, count1=2048) + at block.c:716 +#3 0x000000000047a862 in bdrv_open2 (bs=0x1591a30, filename=0x1559f00 "http://127.0.0.1/static/test1.iso", + flags=0, drv=0x84eca0) at block.c:316 +#4 0x000000000040dcb4 in drive_init (opts=0x1559e60, opaque=<value optimized out>, fatal_error=0x7fffcf7ea494) + at /build/buildd-qemu-kvm_0.12.4+dfsg-1~bpo50+1-amd64-KOah5G/qemu-kvm-0.12.4+dfsg/vl.c:2471 +#5 0x000000000040e086 in drive_init_func (opts=0x155db00, opaque=0x0) + at /build/buildd-qemu-kvm_0.12.4+dfsg-1~bpo50+1-amd64-KOah5G/qemu-kvm-0.12.4+dfsg/vl.c:2488 +#6 0x0000000000475421 in qemu_opts_foreach (list=<value optimized out>, func=0x40e070 <drive_init_func>, + opaque=0x8495e0, abort_on_failure=12) at qemu-option.c:817 +#7 0x000000000040e9af in main (argc=7, argv=0x7fffcf7ea838, envp=<value optimized out>) + at /build/buildd-qemu-kvm_0.12.4+dfsg-1~bpo50+1-amd64-KOah5G/qemu-kvm-0.12.4+dfsg/vl.c:6011 + +Thanks \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/599574 b/results/classifier/gemma3:12b/kvm/599574 new file mode 100644 index 00000000..61e51623 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/599574 @@ -0,0 +1,6 @@ + +qemu-kvm: -no-reboot option broken in 12.x + +When using the "-no-reboot" qemu option with kvm, qemu does nothing and immediately exits with no output or error message. If I add the --no-kvm option to the command line, it works as expected. + +It works fine in 11.0 and 11.1, but I tested all versions of 12.X, and they all have this problem. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/599958 b/results/classifier/gemma3:12b/kvm/599958 new file mode 100644 index 00000000..eebedad5 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/599958 @@ -0,0 +1,43 @@ + +Timedrift problems with Win7: hpet missing time drift fixups + +We've been finding timedrift issues witth Win7 under qemu-kvm on our daily testing + +kvm.qemu-kvm-git.smp2.Win7.64.timedrift.with_load FAIL 1 Time drift too large after rest period: 38.63% +kvm.qemu-kvm-git.smp2.Win7.64.timedrift.with_reboot FAIL 1 Time drift too large at iteration 1: 17.77 seconds +kvm.qemu-kvm-git.smp2.Win7.64.timedrift.with_migration FAIL 1 Time drift too large at iteration 2: 3.08 seconds + +Steps to reproduce: + +timedrift.with_load + +1) Log into a guest. +2) Take a time reading from the guest and host. +3) Run load on the guest and host. +4) Take a second time reading. +5) Stop the load and rest for a while. +6) Take a third time reading. +7) If the drift immediately after load is higher than a user- + specified value (in %), fail. 
+ If the drift after the rest period is higher than a user-specified value, + fail. + +timedrift.with_migration + +1) Log into a guest. +2) Take a time reading from the guest and host. +3) Migrate the guest. +4) Take a second time reading. +5) If the drift (in seconds) is higher than a user specified value, fail. + +timedrift.with_reboot + +1) Log into a guest. +2) Take a time reading from the guest and host. +3) Reboot the guest. +4) Take a second time reading. +5) If the drift (in seconds) is higher than a user specified value, fail. + +This bug is to register those issues and keep an eye on them. + +Attached, some logs from the autotest tests executed on the guest \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/628082 b/results/classifier/gemma3:12b/kvm/628082 new file mode 100644 index 00000000..f2657a33 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/628082 @@ -0,0 +1,8 @@ + +nl-be keymap is wrong + +As mentioned on https://bugs.launchpad.net/ubuntu/+source/kvm/+bug/429965 as well as the kvm mailinglist (http://thread.gmane.org/gmane.comp.emulators.kvm.devel/14413), the nl-be keymap does not work. The number keys above the regular keys (non-numeric keypad numbers) as well as vital keys such as slash, backslash, dash, ... are not working. + +The nl-be keymap that is presented in the above URLs (and also attached to this bug) does work properly. + +Would it be possible to include this keymap rather than the current one? \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/642304 b/results/classifier/gemma3:12b/kvm/642304 new file mode 100644 index 00000000..0a35bac4 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/642304 @@ -0,0 +1,10 @@ + +Solaris/x86 v10 hangs under KVM + +Solaris/x86 10 guest hangs when running under KVM with the message "Running Configuration Assistant". It runs fine when -enable-kvm isn't given as a command option. + +Host OS: Linux/x86_64 +Guest OS: Solaris/x86 +Command Line: qemu -hda solaris.img -m 192 -boot c -enable-kvm +Build Configure: ./configure --enable-linux-aio --enable-io-thread --enable-kvm +GIT commit: 58aebb946acff82c62383f350cab593e55cc13dc \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/643430 b/results/classifier/gemma3:12b/kvm/643430 new file mode 100644 index 00000000..ea0c2906 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/643430 @@ -0,0 +1,18 @@ + +system_powerdown NOT working in qemu-kvm with KVM enabled for FreeBSD guests + +system_powerdown stops working in qemu-kvm for FreeBSD guests if KVM is enabled. + +How to reproduce: + +1. qemu -cdrom ~/.VirtualBox/libvirt/FreeBSD-8.1-RELEASE-i386-bootonly.iso +2. Enter system_powerdown in the qemu console +3. Nothing happens. + +Adding --no-kvm option makes system_powerdown work: + +1. qemu --no-kvm -cdrom ~/.VirtualBox/libvirt/FreeBSD-8.1-RELEASE-i386-bootonly.iso +2. system_powerdown +3. FreeBSD installer shows the shutdown dialog as expected + +Tested on FreeBSD 6.4, 7.2, and 8.0 with qemu-kvm 0.12.5 and older versions. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/660060 b/results/classifier/gemma3:12b/kvm/660060 new file mode 100644 index 00000000..5e6a04e7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/660060 @@ -0,0 +1,33 @@ + +virtio block read errors + +Context : +- Gentoo Linux distribution on host and guests. +- qemu-kvm-0.12.5-r1 +- 2.6.34-gentoo-r11 host kernel +- 2.6.29-gentoo-r5 guest kernels +- VM boots from and uses a single virtio block device. 
+ +On the old kvm bugtracker there was a discussion about a bug with virtio block devices : +http://sourceforge.net/tracker/?func=detail&atid=893831&aid=2933400&group_id=180599 +I was affected (user gyver in the above discussion) and believed that the problem was fully solved : we had the read error problems on 4 physical hosts . I migrated 3 of the 4 hosts to Gentoo's qemu-kvm-0.12.5-r1 which fixed the problems and allowed us to use virtio block devices instead of emulated PIIX. + +It seems there's a corner case left or another bug with similar consequences. + +I just used a maintenance window on the last physical host (one hard disk switched for repair in a RAID1 array) to migrate the ancient kvm-85 (which worked for us with virtio) to 0.12.5. The read errors in virtio block mode came back instantly. + +We have 3 VMs on this 4th host, 2 are x86, 1 is x86_64. All of them try to boot from a virtio block device and fail to do so with Gentoo's qemu-kvm-0.12.5-r1. They report read errors on /dev/vda and remount the root fs read-only. Reconfiguring them to use emulated PIIX works. There's something interesting about PIIX mode that I'm not sure I've seen before though: there are errors reported by the ATA stack during the boot and the guest kernels switch to PIO after resetting the ide0 interface. More on that later. + +Booting all these VMs works properly with Gentoo's 0.11.1-r1 with virtio block. + +Two details that might help : +1/ +We use DRBD devices for all our virtual disks (on all 4 physical hosts), + +2/ +The "failing" host has different hardware, main differences : +- Core2 Duo architecture instead of Core i7, +- hardware RAID controller: a 3ware 8006-2LP with two SATA disks in RAID-1 mode instead of plain AHCI SATA controllers and software raid 1. +Currently the controller on the "failing" host is rebuilding the array after we switched a failing disk with a brand new one. Although there's no read error on the physical host as far as its kernel is concerned, read performance is suffering : 5MB/s top from the guest point of view with 0.11.1-r1 and virtio block with a dd if=/dev/vda (only one VM running and host idle to avoid interferences other than the RAID rebuild), ... + +This poor read performance might explain the problem we saw in the guest kernel with PIIX emulation on qemu-kvm-0.12.5-r1 (slow reads might be confused with buggy DMA transfers explaining the PIO fallback). We didn't have time to test PIIX emulation after the RAID array was fully synchronized but can do on request. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/661 b/results/classifier/gemma3:12b/kvm/661 new file mode 100644 index 00000000..cc6ea263 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/661 @@ -0,0 +1,45 @@ + +Unable to enable 5 level paging +Description of problem: +When attempting to set cr4.LA57, qemu just freezes on that instruction. When I say freeze I mean literally freeze, no exceptions, nothing, it just halts forever on that instruction. 
When this happened, the first thing I did was + +``` +(qemu) info registers +EAX=00001000 EBX=00000001 ECX=80224f08 EDX=00000000 +ESI=8034a3a0 EDI=00026520 EBP=000079f8 ESP=000079c8 +EIP=00019648 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0020 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +CS =0018 00000000 ffffffff 00c09a00 DPL=0 CS32 [-R-] +SS =0020 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0020 00000000 ffffffff 00c09300 DPL=0 DS [-WA] +FS =0020 00000000 ffffffff 00cf9300 DPL=0 DS [-WA] +GS =0020 00000000 ffffffff 00cf9300 DPL=0 DS [-WA] +LDT=0000 00000000 00000000 00008200 DPL=0 LDT +TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy +GDT= 0000e120 00000037 +IDT= 00000000 00000000 +CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000000 +... +``` + +then using gdb to figure out what instruction it is hanging on, I set a breakpoint at 0x19648 at and ran +``` +(gdb) x/1 0x19648 +=> 0x19648: mov %rax,%cr4 +(gdb) +``` + +This instruction corresponds to this LOC within limine https://github.com/limine-bootloader/limine/blob/trunk/stage23/protos/stivale.32.c#L33 +Steps to reproduce: +1. Try to enable 5 level paging +2. qemu freezes when trying to set cr4.LA57 +3. cry +Additional information: +This never happened prior to version 6.1, I test this on multiple different machines and a few of my friends +experienced the same issue + +I have not tested this on linux, however I assume it will do the same on anything else. +Either way, qemu should not be just halting diff --git a/results/classifier/gemma3:12b/kvm/674 b/results/classifier/gemma3:12b/kvm/674 new file mode 100644 index 00000000..59cbf3ac --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/674 @@ -0,0 +1,15 @@ + +Windows 7 fails with blue screen when KVM is enabled. +Description of problem: +The problem appeared immediately after a full system update of Arch Linux (The first for several months). Windows 7 images that had been running normally would fail with a blue screen and Error 0x7E immediately after displaying "Starting Windows". The same error would occur with a Windows 7 installation image, as in the command line above. When the "-enable-kvm" option was removed Windows would run normally but slowly. An old Clonezilla image booted without apparent problems. + +The final line on the blue screen reads: +*** STOP: 0x0000007E (0xC0000005,0x8BA3CA36,0x85186AA0,0x85186680) + +After getting the problem with the Arch package I cloned the source and built the latest version, getting the same error. However, when I build version 5.2.95 (v6.0.0-rc5-dirty) I found that this would run my existing Windows images (qcow2) and the installation ISO image. +Steps to reproduce: +1. +2. +3. +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/680758 b/results/classifier/gemma3:12b/kvm/680758 new file mode 100644 index 00000000..d5f1b5a0 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/680758 @@ -0,0 +1,32 @@ + +balloon only resizes by 2M + +when in monitor and running balloon 512 from a 1024M VM, the vm dropped the size to 1020 (this value changes), then every subsequent request to balloon 512 will drop it by another 2M. The system was running at above 60% RAM free when these requests were made. 
also requesting to up the ram results in no change above 1024 (I'm guessing this is intentional, but was unable to find any documentation) + +Versions: + +qemu-kvm 0.13.0 +qemu-kvm.git b377474e589e5a1fe2abc7b13fafa8bad802637a + + +Qemu Command Line: + +./x86_64-softmmu/qemu-system-x86_64 -ees/seven.base,if=virtio -net nic,model=virtio,macaddr=02:00:00:00:00:01 -net tap,script=/etc/qemu/qemu-ifup,downscript=/etc/qemu/qemu-ifdown -vga std -usb -usbdevice tablet -rtc base=localtime,clock=host -watchdog i6300esb -balloon virtio -m 1024 -no-quit -smp 2 -monitor stdio + + +Monitor Session: + +QEMU 0.13.50 monitor - type 'help' for more information +(qemu) info balloon +balloon: actual=1024 +(qemu) balloon 1536 +(qemu) info balloon +balloon: actual=1024 +(qemu) balloon 512 +(qemu) info balloon +balloon: actual=1020 +(qemu) info balloon +balloon: actual=1020 +(qemu) balloon 512 +(qemu) info balloon +balloon: actual=1018 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/712416 b/results/classifier/gemma3:12b/kvm/712416 new file mode 100644 index 00000000..b2aa5d27 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/712416 @@ -0,0 +1,12 @@ + +kvm_intel kernel module crash with via nano vmx + +kvm module for hardware virtualisation not work properly on via nano processors. + +Tested with processor: VIA Nano processor U2250. +Processors flags (visible in /proc/cpuinfo): fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush acpi mmx fxsr sse sse2 ss tm syscall nx lm constant_tsc up rep_good pni monitor vmx est tm2 ssse3 cx16 xtpr rng rng_en ace ace_en ace2 phe phe_en lahf_lm + +With kernel 2.6.32: kvm not work and dmesg contains a lot of: +handle_exception: unexpected, vectoring info 0x8000000d intr info 0x80000b0d + +With kernel 2.6.35: all the system crash. Nothing visible in logs \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/722311 b/results/classifier/gemma3:12b/kvm/722311 new file mode 100644 index 00000000..1b516eb8 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/722311 @@ -0,0 +1,8 @@ + +Segmentation fault if started without -enable-kvm parameter + +I start qemu (Linux) from the same USB memory stick on several computers. Up to and including qemu 0.12.5, I could use or not use qemu's "-enable-kvm" command line parameter as appropriate for the hardware, and qemu would run. In contrast, qemu 0.13.0 and 0.14.0 segfault if started without "-enable-kvm". I get a black window appearing for fractions of a second, disappearing immediately, and then the error message "Segmentation fault". + +Hardware: Pentium 4, and Core 2 Duo. +Command line: either "qemu" or "qemu -enable-kvm" (after manually loading the kvm-intel module on the Core 2 Duo). +Reproducible: always. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/727134 b/results/classifier/gemma3:12b/kvm/727134 new file mode 100644 index 00000000..fdfce751 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/727134 @@ -0,0 +1,4 @@ + +pci-stub.o: In function `do_pci_info':0.14.0 compile problem + +Please see this build log. I didn't compile thq qemu-kvm on Mandriva Cooker and haven't any idea. I'm the qemu maintainer on Mandriva. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/732155 b/results/classifier/gemma3:12b/kvm/732155 new file mode 100644 index 00000000..c874a2db --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/732155 @@ -0,0 +1,95 @@ + +system_reset doesn't work with qemu-kvm and latest SeaBIOS + +I've built qemu-kvm and seabios from the latest git sources, and found that the system_reset monitor command causes a freeze if I start qemu-system-x86_64 with the -no-kvm flag. This is a serial log from an attempt at rebooting: + +$ ./x86_64-softmmu/qemu-system-x86_64 -monitor stdio -bios ../seabios/out/bios.bin -serial /dev/stdout -no-kvm +QEMU 0.14.50 monitor - type 'help' for more information +(qemu) Changing serial settings was 0/0 now 3/0 +Start bios (version pre-0.6.3-20110309_171929-desk4) +Ram Size=0x08000000 (0x0000000000000000 high) +CPU Mhz=2202 +PCI: pci_bios_init_bus_rec bus = 0x0 +PIIX3/PIIX4 init: elcr=00 0c +PCI: bus=0 devfn=0x00: vendor_id=0x8086 device_id=0x1237 +PCI: bus=0 devfn=0x08: vendor_id=0x8086 device_id=0x7000 +PCI: bus=0 devfn=0x09: vendor_id=0x8086 device_id=0x7010 +region 4: 0x0000c000 +PCI: bus=0 devfn=0x0b: vendor_id=0x8086 device_id=0x7113 +PCI: bus=0 devfn=0x10: vendor_id=0x1013 device_id=0x00b8 +region 0: 0xf0000000 +region 1: 0xf2000000 +region 6: 0xf2010000 +PCI: bus=0 devfn=0x18: vendor_id=0x10ec device_id=0x8139 +region 0: 0x0000c100 +region 1: 0xf2020000 +region 6: 0xf2030000 +Found 1 cpu(s) max supported 1 cpu(s) +MP table addr=0x000fdb40 MPC table addr=0x000fdb50 size=224 +SMBIOS ptr=0x000fdb20 table=0x07fffef0 +ACPI tables: RSDP=0x000fdaf0 RSDT=0x07ffd6a0 +Scan for VGA option rom +Running option rom at c000:0003 +Turning on vga text mode console +SeaBIOS (version pre-0.6.3-20110309_171929-desk4) + +PS2 keyboard initialized +Found 1 lpt ports +Found 1 serial ports +ATA controller 0 at 1f0/3f4/0 (irq 14 dev 9) +ATA controller 1 at 170/374/0 (irq 15 dev 9) +DVD/CD [ata1-0: QEMU DVD-ROM ATAPI-4 DVD/CD] +Searching bootorder for: /pci@i0cf8/*@1,1/drive@1/disk@0 +Scan for option roms +Running option rom at c900:0003 +pnp call arg1=60 +pmm call arg1=0 +pmm call arg1=2 +pmm call arg1=0 +Searching bootorder for: /pci@i0cf8/*@3 +Searching bootorder for: /rom@genroms/vapic.bin +Running option rom at c980:0003 +ebda moved from 9fc00 to 9f400 +Returned 53248 bytes of ZoneHigh +e820 map has 6 items: + 0: 0000000000000000 - 000000000009f400 = 1 + 1: 000000000009f400 - 00000000000a0000 = 2 + 2: 00000000000f0000 - 0000000000100000 = 2 + 3: 0000000000100000 - 0000000007ffd000 = 1 + 4: 0000000007ffd000 - 0000000008000000 = 2 + 5: 00000000fffc0000 - 0000000100000000 = 2 +enter handle_19: + NULL +Booting from DVD/CD... +Device reports MEDIUM NOT PRESENT +atapi_is_ready returned -1 +Boot failed: Could not read from CDROM (code 0003) +enter handle_18: + NULL +Booting from ROM... +Booting from c900:0336 + +(qemu) +(qemu) system_reset +(qemu) RESET REQUESTEDChanging serial settings was 0/0 now 3/0 +Start bios (version pre-0.6.3-20110309_171929-desk4) +Attempting a hard reboot +prep_reset +apm_shutdown? +i8042_reboot +i8042: wait to write... +i8042: outb +RESET REQUESTED +(qemu) +(qemu) +(qemu) +(qemu) info cpus +* CPU #0: pc=0x00000000fffffff0 thread_id=18125 +(qemu) system_reset +(qemu) RESET REQUESTED +(qemu) +(qemu) q + +I've tried fiddling a few build options in SeaBIOS but I'm not sure that's where the issue lies. 
The RESET REQUESTED is me adding some extra debug to vl.c:1477 in the clause that tests for a reset request, and the i8042: lines are debug lines from seabios tracing the execution of the reset request. + +This may be a bug in SeaBIOS of course, since I can replicate the behaviour on my distro's qemu and kvm packages. However it seems odd that qemu behaves differently with KVM turned on (i.e. system_reset works) than with it disabled. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/735 b/results/classifier/gemma3:12b/kvm/735 new file mode 100644 index 00000000..2f1610d9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/735 @@ -0,0 +1,27 @@ + +softmmu 'at' not behaving +Description of problem: +This looks like a bug to me, please correct if I'm wrong. The execution context is EL2 here and we run KVM vms on top of the system emulation. Anyway, here we have stopped in the EL2 and want to translate a virtual address '0' with 'at'. While the '0' itself is not mapped, something in the first gigabyte is, and the softmmu refuses to walk to it: + +0x0000000100004a3c <at_s12e1r+8>: 80 78 0c d5 at s12e1r, x0 +0x0000000100004a40 <at_s12e1r+12>: 01 74 38 d5 mrs x1, par_el1 + +(gdb) info registers x0 x1 +x0 0x0 0 +x1 0x809 2057 + +So that would be translation fault level 0, stage 1 if I'm not mistaken. + +(gdb) info all-registers TCR_EL1 VTCR_EL2 TTBR1_EL1 +TCR_EL1 0x400035b5503510 18014629184681232 +VTCR_EL2 0x623590 6436240 +TTBR1_EL1 0x304000041731001 217298683118686209 + +(gdb) p print_table(0x41731000) +000:0x000000ffff9803 256:0x000000fffff803 507:0x00000041fbc803 +508:0x000000ff9ef803 + +The first gigabyte is populated, yet the 'at' knows nothing about it. Did I miss something? This seems to be working fine on the hardware. +Steps to reproduce: +1. Stop in the EL2 while the linux is running (GDB) +2. Use something along the lines of this function to translate any kernel virtual address: https://github.com/jkrh/kvms/blob/4c26c786be9971613b3b7f56121c1a1aa3b9585a/core/helpers.h#L74 diff --git a/results/classifier/gemma3:12b/kvm/747583 b/results/classifier/gemma3:12b/kvm/747583 new file mode 100644 index 00000000..38fc7f3d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/747583 @@ -0,0 +1,23 @@ + +Windows 2008 Time Zone Change Even When Using -locatime + +* What cpu model : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz +* What kvm version you are using. : qemu-kvm-0.12.3 +* The host kernel version : 2.6.32-30-server +* What host kernel arch you are using (i386 or x86_64) : x86_64 +* What guest you are using, including OS type: Windows 2008 Enterprise x86_64 +* The qemu command line you are using to start the guest : /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1 -name 2-6176 -uuid 4d1d56b1-d0b7-506b-31a5-a87c8cb0560b -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/2-6176.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/dev/disk/by-id/scsi-3600144f05c11090000004d9602950073,if=virtio,index=0,boot=on,format=raw -drive file=/dev/disk/by-id/scsi-3600144f0eae8810000004c7bb0920037,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=00:00:d1:d0:3f:5e,vlan=0,name=nic.1 -net tap,fd=212,vlan=0,name=tap.1 -net nic,macaddr=00:00:0a:d0:3f:5e,vlan=1,name=nic.1 -net tap,fd=213,vlan=1,name=tap.1 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 0.0.0.0:394,password -k en-us -vga cirrus +* Whether the problem goes away if using the -no-kvm-irqchip or -no-kvm-pit switch. 
: Unable to test +* Whether the problem also appears with the -no-kvm switch. : Unable to test + +Host time zone: EDT Guest time zone: PDT + +Steps to reproduce: +1) Set time zone to (GMT-08:00) Pacific Time (US & Canada) on guest +2) Power off Windows 2008 Enterprise x86_64 guest completely. Ensure the kvm process exits. +3) Power on Windows 2008 Enterprise x86_64 guest using virsh start <domain> +4) Server will show EDT time but have the time zone still set to (GMT-08:00) Pacific Time (US & Canada). + +Syncing the time after stopping and starting the kvm process using Windows "Internet Time" ntp time sync will sync the time to the correct PDT time. + +Doing a reboot from within the guest's operating system where kvm does not exit will not cause the timezone shift to happen. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/748 b/results/classifier/gemma3:12b/kvm/748 new file mode 100644 index 00000000..24032524 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/748 @@ -0,0 +1,2 @@ + +Enable postcopy migration for mixed Hugepage backed KVM guests and improve handling of dirty-page tracking by QEMU/KVM diff --git a/results/classifier/gemma3:12b/kvm/755 b/results/classifier/gemma3:12b/kvm/755 new file mode 100644 index 00000000..97788ed7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/755 @@ -0,0 +1,60 @@ + +Qemu is stuck on the startup intermittently. +Description of problem: +Qemu is stuck on the startup intermittently. + +We are using kubevirt to launch the VM in kubernetes env. We have compiled qemu with a few flags enabled and using it. +All things are working as expected except we are seeing qemu stuck issue during VM startup. Please find logs from system in additional information + +Qemu version: qemu-system-x86-core-5.1.0-9.fc32.x86_64.rpm +Libvirtd version: 6.6.0 +Steps to reproduce: +1. Create and start a VM. 
+Additional information: +TOP OUTPUT: +-------------- +``` + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + **125 qemu 0 -20 8519896 73392 15412 R 99.9 0.1 85:27.96 CPU 0/KVM ** + 113 qemu 20 0 8519896 73392 15412 S 0.0 0.1 0:00.14 qemu-system-ori + 121 qemu 20 0 8519896 73392 15412 S 0.0 0.1 0:00.00 qemu-system-ori + 122 qemu 20 0 8519896 73392 15412 S 0.0 0.1 0:00.00 IO iothread1 + 124 qemu 20 0 8519896 73392 15412 S 0.0 0.1 0:00.23 IO mon_iothread + 126 qemu 0 -20 8519896 73392 15412 S 0.0 0.1 0:00.00 CPU 1/KVM + 128 qemu 20 0 8519896 73392 15412 S 0.0 0.1 0:00.00 vnc_worker +``` + +qemu logs on error: +------------------- +``` +KVM: injection failed, MSI lost (Operation not permitted) +KVM: injection failed, MSI lost (Operation not permitted) +KVM: injection failed, MSI lost (Operation not permitted) +KVM: injection failed, MSI lost (Operation not permitted) +KVM: injection failed, MSI lost (Operation not permitted) +KVM: injection failed, MSI lost (Operation not permitted) +KVM: injection failed, MSI lost (Operation not permitted) +``` + +dmesg logs from host:- +---------------------- +``` +[ 7853.643187] kvm: apic: phys broadcast and lowest prio +[ 7853.643265] kvm: apic: phys broadcast and lowest prio +[ 7853.643341] kvm: apic: phys broadcast and lowest prio +[ 7853.643413] kvm: apic: phys broadcast and lowest prio +[ 7853.643486] kvm: apic: phys broadcast and lowest prio +[ 7853.643559] kvm: apic: phys broadcast and lowest prio +[ 7853.643631] kvm: apic: phys broadcast and lowest prio +[ 7853.643703] kvm: apic: phys broadcast and lowest prio +[ 7853.643776] kvm: apic: phys broadcast and lowest prio +[ 7853.643848] kvm: apic: phys broadcast and lowest prio +[ 7853.643920] kvm: apic: phys broadcast and lowest prio +[ 7853.643992] kvm: apic: phys broadcast and lowest prio +[ 7853.644065] kvm: apic: phys broadcast and lowest prio +[ 7853.644137] kvm: apic: phys broadcast and lowest prio +[ 7853.644209] kvm: apic: phys broadcast and lowest prio +[ 7853.644289] kvm: apic: phys broadcast and lowest prio +``` + +--> diff --git a/results/classifier/gemma3:12b/kvm/772 b/results/classifier/gemma3:12b/kvm/772 new file mode 100644 index 00000000..6847b45e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/772 @@ -0,0 +1,13 @@ + +Pop!_OS 20.10 host + RHEL 8.5 guest = Oh no! Something has gone wrong. +Description of problem: +Whenever starting the Qemu VM, there is an error covering the whole desktop "Oh no! Something has gone wrong. A problem has occurred and the system can't recover. Please log out and try again." After clicking the "Log Out" button and waiting for hours, the guest RHEL may or may not recover, based on your luck and other qemu options used. +Steps to reproduce: +1. Build qemu using the following `./configure` options: +``` +--prefix=$HOME/.bin --target-list=x86_64-softmmu --enable-kvm --enable-vnc --enable-gtk --enable-vte --enable-xkbcommon --enable-sdl --enable-spice --enable-spice-protocol --enable-virglrenderer --enable-opengl --enable-guest-agent --enable-avx2 --enable-avx512f --enable-hax --enable-system --enable-linux-user --enable-libssh --enable-linux-aio --enable-linux-io-uring --enable-modules --enable-gio --enable-fuse --enable-fuse-lseek +``` +2. Install Red Hat Enterprise Linux 8.5 in qemu +3. Run qemu using the above command line. 
+Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/772358 b/results/classifier/gemma3:12b/kvm/772358 new file mode 100644 index 00000000..d063d0ed --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/772358 @@ -0,0 +1,23 @@ + +VNC working depends on command line options order + +OS: Ubuntu 10.04.2, amd64 +Pkg version: 0.12.3+noroms-0ubuntu9.5 + +if -nographic option is specified before -vnc, vnc works, if vice-versa, it does not. I have been told (thanks, mjt), that -nographic is supposed to disable any graphic output, including vnc, so possibly it's a documentation bug: + +- kvm man page talks about -nographic disabling SDL , not VNC. While it might be the same to you, it was not to me and my colleagues + +- if -vnc and -nographic are conflicting, perhaps kvm should error out or at least warn + +- monitor console's message on "change vnc 127.0.0.1:1" command: "Could not start server on 127.0.0.1:1" is not helpful either + +- order of the options should not matter + +Example: (VNC works) + +/usr/bin/kvm -name ubuntu.example.com -m 3076 -smp 2 -drive if=virtio,file=/dev/vg0/kvm-ubuntu,boot=on,media=disk,cache=none,index=0 -net nic,vlan=0,model=virtio,macaddr=00:03:03:03:03:01 -net tap,ifname=kvm_ubuntu,vlan=0,script=no,downscript=no -balloon virtio -nographic -daemonize -vnc 127.0.0.1:1 -pidfile /var/run/kvm/1 + +Example: (VNC does not work, also confuses terminal): + +/usr/bin/kvm -name ubuntu.example.com -m 3076 -smp 2 -drive if=virtio,file=/dev/vg0/kvm-ubuntu,boot=on,media=disk,cache=none,index=0 -net nic,vlan=0,model=virtio,macaddr=00:03:03:03:03:01 -net tap,ifname=kvm_ubuntu,vlan=0,script=no,downscript=no -balloon virtio -vnc 127.0.0.1:1 -nographic -daemonize -pidfile /var/run/kvm/1 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/777 b/results/classifier/gemma3:12b/kvm/777 new file mode 100644 index 00000000..9c822f3f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/777 @@ -0,0 +1,10 @@ + +Hang on Alder Lake with multiple cores +Description of problem: +The guest silently hangs after a few seconds or minutes. No output in log, no errors in guest. +Steps to reproduce: +1. Start guest, do anything or nothing for a few minutes +Additional information: +More cores seem to make it less stable. With a single core, I haven't had a problem but at 8 cores it usually doesn't make it much past login on Windows or Linux. + +The guests are stable with 8 cores if I pin the vcpus to P cores. diff --git a/results/classifier/gemma3:12b/kvm/786442 b/results/classifier/gemma3:12b/kvm/786442 new file mode 100644 index 00000000..50de560b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/786442 @@ -0,0 +1,8 @@ + +GCC -O2 causes segfaults + +unless compiled without optimizations, no system may be ran except the default with -kvm-enabled +I had to modify config-host.mak and remove -O2 from CFLAGS to be able to work without kvm. + +GCC 4.4.4 qemu-0.14.1 +***NOTE: this has been an issue for several versions. 
\ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/790 b/results/classifier/gemma3:12b/kvm/790 new file mode 100644 index 00000000..19945320 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/790 @@ -0,0 +1,2 @@ + +Attribute bits in stage 1/stage 2 block descriptors are not fully masked during AArch64 page table walks diff --git a/results/classifier/gemma3:12b/kvm/795866 b/results/classifier/gemma3:12b/kvm/795866 new file mode 100644 index 00000000..dbeae9a7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/795866 @@ -0,0 +1,109 @@ + +pci passthrough doesn´t work + +Hi all, + +I have some problems passing through a pci device to kvm guest. +First I have to say that I´m using the latest kvm-kernel und qemu-kvm from git-tree (Date 11.06.2011). + +I want´t to passthrough this device to guest: + +lspci-output: + +02:00.0 Multimedia video controller: Micronas Semiconductor Holding AG Device 0720 (rev 01) + +So at first I have bind the driver to psi-stub: + +modprobe -r kvm-intel +modprobe -r kvm +echo "18c3 0720" > /sys/bus/pci/drivers/pci-stub/new_id +echo 0000:02:00.0 > /sys/bus/pci/devices/0000:02:00.0/driver/unbind +echo 0000:02:00.0 > /sys/bus/pci/drivers/pci-stub/bind +modprobe kvm +modprobe kvm-intel + +Then I have assigned device to guest: +-device pci-assign,host=02:00.0 + +When I start the guest. The device succesfully get´s an msi-IRQ on host-system: + +cat /proc/interrupt output: + + 32: 0 0 0 0 PCI-MSI-edge kvm_assigned_msi_device + + +On guest device is visibel: + +lspci output: +00:04.0 Multimedia video controller: Micronas Semiconductor Holding AG Device 0720 (rev 01) + + +Sometimes the device (on guest) get´s an IRQ between 10-16: + +00:05.0 Multimedia video controller: Micronas Semiconductor Holding AG Device 0720 (rev 01) + Subsystem: Micronas Semiconductor Holding AG Device dd00 + Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ + Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+ + Interrupt: pin A routed to IRQ 11 + Region 0: Memory at f2050000 (32-bit, non-prefetchable) [size=64K] + Region 1: Memory at f2060000 (32-bit, non-prefetchable) [size=64K] + Capabilities: [58] Express (v1) Endpoint, MSI 00 + DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us + ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset- + DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- + RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ + MaxPayload 128 bytes, MaxReadReq 128 bytes + DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend- + LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s, Latency L0 unlimited, L1 unlimited + ClockPM- Suprise- LLActRep- BwNot- + LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ + ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- + LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt- + Capabilities: [40] Power Management version 2 + Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-) + Status: D0 PME-Enable- DSel=0 DScale=0 PME- + Capabilities: [48] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable- + Address: 00000000 Data: 0000 + Kernel modules: ngene + + +In this case the kernel-modul (ngene) can not access the device: + +dmesg | grep ngene + +[ 69.977900] ngene 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 11 (level, high) -> IRQ 11 +[ 69.977909] ngene: Found Linux4Media cineS2 DVB-S2 Twin Tuner (v5) +[ 69.978962] ngene 
0000:00:05.0: setting latency timer to 64 +[ 69.979118] ngene: Device version 1 +[ 69.979129] ngene 0000:00:05.0: firmware: requesting ngene_18.fw +[ 69.980884] ngene: Loading firmware file ngene_18.fw. +[ 71.981052] ngene: Command timeout cmd=01 prev=00 +[ 71.981205] host_to_ngene (c000): 01 00 00 00 00 00 00 00 +[ 71.981457] ngene_to_host (c100): 00 00 00 00 00 00 00 00 +[ 71.981704] dev->hosttongene (ec902000): 01 00 00 00 00 00 00 00 +[ 71.981963] dev->ngenetohost (ec902100): 00 00 00 00 00 00 00 00 +[ 73.985111] ngene: Command timeout cmd=02 prev=00 +[ 73.985415] host_to_ngene (c000): 02 04 00 d0 00 04 00 00 +[ 73.985684] ngene_to_host (c100): 00 00 00 00 00 00 00 00 +[ 73.985931] dev->hosttongene (ec902000): 02 04 00 d0 00 04 00 00 +[ 73.986191] dev->ngenetohost (ec902100): 00 00 00 00 00 00 00 00 +[ 73.986568] ngene 0000:00:05.0: PCI INT A disabled +[ 73.986584] ngene: probe of 0000:00:05.0 failed with error -1 + + +Sometimes the device (on guest) gets an msi-irq f. e. IRQ 29. +Then kernel-modul (ngene) can succesfully load the driver and all works fine. + + +Short to say: + +HOST GUEST STATUS +MSI-IRQ MSI-IRQ ALL FINE +MSI-IRQ IOAPIC-IRQ DOESN´t WORK + +with modinfo I had a look at the kernel-modul if there is way to force msi, but without success. + +But I think IRQ between (10-16) should also work because when I load the kernel-modul on host with IRQ (10-16) +it works. (Device only get´s an MSI-IRQ If I start the vm to passthrough) + +Do anyone know where can be the problem? \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/801 b/results/classifier/gemma3:12b/kvm/801 new file mode 100644 index 00000000..aedee906 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/801 @@ -0,0 +1,13 @@ + +QEMU test build failure with --enable-modules +Description of problem: + +Steps to reproduce: +1. ./configure --target-list=x86_64-softmmu --enable-kvm --enable-modules +2. make -j8 check-qtest-x86_64 V=1 + + - A problem happens "qemu-system-x86_64: -accel qtest: invalid accelerator qtest" + - The file accel-qtest-x86_64.so is not built + - This problem happens since 69c4c5c1c47f5dac140eb6485c5281a9f145dcf3 Mon Sep 17 00:00:00 2001 +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/808 b/results/classifier/gemma3:12b/kvm/808 new file mode 100644 index 00000000..b2e98382 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/808 @@ -0,0 +1,19 @@ + +virtio-scsi in Windows guests cause QEMU to abort/crash +Description of problem: +* Attempting to load the virtio-scsi drivers in a Windows guest causes the VM to abort/crash. +Steps to reproduce: +* `qemu-system-x86_64 -accel kvm -m 4G -device virtio-scsi-pci,id=scsi0 -drive media=cdrom,file=windows7-x64.iso -drive media=cdrom,file=virtio-win-0.1.173.iso` + * Boot the installer ISO, click through all the menus to eventually get to Custom Install + * In "Where do you want to install" click Load driver + * Browse E: drive and pick the first amd64/w7 folder + * Should show "Red Had VirtIO SCSI pass-through controller" + * Click Next + * Abort/crash + +Same thing happens with VM's that used to work already running the virtio-scsi drivers. When they boot the VM aborts. +Additional information: +``` +qemu-system-x86_64: ../accel/kvm/kvm-all.c:1760: kvm_irqchip_commit_routes: Assertion `ret == 0' failed. 
+Aborted (core dumped) +``` diff --git a/results/classifier/gemma3:12b/kvm/809 b/results/classifier/gemma3:12b/kvm/809 new file mode 100644 index 00000000..1792772b --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/809 @@ -0,0 +1,2 @@ + +ppc cpu_interrupt_exittb kvm check is inverted diff --git a/results/classifier/gemma3:12b/kvm/809912 b/results/classifier/gemma3:12b/kvm/809912 new file mode 100644 index 00000000..65359434 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/809912 @@ -0,0 +1,12 @@ + +qemu-kvm -m bigger 4096 aborts with 'Bad ram offset' + +When I try to start a virtual machine (x86_64 guest on a x86_64 host that has 32GB memory, using kvm_amd module, both host and guest running linux-2.6.39 kernels) with "qemu-system-x86_64 -cpu host -smp 2 -m 4096 ...", shortly after the guest kernel starts, qemu aborts with a message "Bad ram offset 11811c000". + +With e.g. "-m 3500" (or lower), the virtual machine runs fine. + +I experience this both using qemu-kvm 0.14.1 and a recent version from git +commit 525e3df73e40290e95743d4c8f8b64d8d9cbe021 +Merge: d589310 75ef849 +Author: Avi Kivity <email address hidden> +Date: Mon Jul 4 13:36:06 2011 +0300 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/810 b/results/classifier/gemma3:12b/kvm/810 new file mode 100644 index 00000000..c589dcdf --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/810 @@ -0,0 +1,72 @@ + +i386/sev: Crash in pc_system_parse_ovmf_flash caused by bad firmware file +Description of problem: +A specially-crafted flash file can cause the `memcpy()` call in +`pc_system_parse_ovmf_flash` (`hw/i386/pc_sysfw_ovmf.c`) to READ out-of-bounds +memory, because there's no check on the `tot_len` field which is read +from the flash file. In such case, `ptr - tot_len` will point to a +memory location *below* `flash_ptr` (hence the out-of-bounds read). + +This path is only taken when SEV is enabled (which requires +KVM and x86_64). +Steps to reproduce: +1. Create `bad_ovmf.fd` using the following python script: + ``` + from uuid import UUID + OVMF_TABLE_FOOTER_GUID = "96b582de-1fb2-45f7-baea-a366c55a082d" + b = bytearray(4096) + b[4046:4048] = b'\xff\xff' # tot_len field + b[4048:4064] = UUID("{" + OVMF_TABLE_FOOTER_GUID + "}").bytes_le + with open("bad_ovmf.fd", "wb") as f: + f.write(b) + ``` +2. Build QEMU with `--enable-sanitizers` +3. Start QEMU with SEV and the bad flash file: + ``` + qemu-system-x86_64 -enable-kvm -cpu host -machine q35 \ + -drive if=pflash,format=raw,unit=0,file=bad_ovmf.fd,readonly=on \ + -machine confidential-guest-support=sev0 \ + -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1,policy=0x0 + ``` +4. QEMU crashes with: `SUMMARY: AddressSanitizer: stack-buffer-underflow` +Additional information: +Crash example: + +``` +$ sudo build/qemu-system-x86_64 -enable-kvm -cpu host -machine q35 \ + -drive if=pflash,format=raw,unit=0,file=bad_ovmf.fd,readonly=on \ + -machine confidential-guest-support=sev0 \ + -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1,policy=0x0 +==523314==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! 
+================================================================= +==523314==ERROR: AddressSanitizer: stack-buffer-underflow on address 0x7f05305fb180 at pc 0x7f0548d89480 bp 0x7ffed44a1980 sp 0x7ffed44a1128 +READ of size 65517 at 0x7f05305fb180 thread T0 + #0 0x7f0548d8947f (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x9b47f) + #1 0x556127c3331e in memcpy /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34 + #2 0x556127c3331e in pc_system_parse_ovmf_flash ../hw/i386/pc_sysfw_ovmf.c:82 + #3 0x556127c21a0c in pc_system_flash_map ../hw/i386/pc_sysfw.c:203 + #4 0x556127c21a0c in pc_system_firmware_init ../hw/i386/pc_sysfw.c:258 + #5 0x556127c1ddd9 in pc_memory_init ../hw/i386/pc.c:902 + #6 0x556127bdc387 in pc_q35_init ../hw/i386/pc_q35.c:207 + #7 0x5561273bfdd6 in machine_run_board_init ../hw/core/machine.c:1181 + #8 0x556127f77de1 in qemu_init_board ../softmmu/vl.c:2652 + #9 0x556127f77de1 in qmp_x_exit_preconfig ../softmmu/vl.c:2740 + #10 0x556127f7f24d in qemu_init ../softmmu/vl.c:3775 + #11 0x556126f947ac in main ../softmmu/main.c:49 + #12 0x7f05470e80b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2) + #13 0x556126fa639d in _start (/home/dmurik/git/qemu/build/qemu-system-x86_64+0x2a5739d) + +Address 0x7f05305fb180 is located in stack of thread T3 at offset 0 in frame + #0 0x556128a96f1f in qemu_sem_timedwait ../util/qemu-thread-posix.c:293 + + + This frame has 1 object(s): + [32, 48) 'ts' (line 295) <== Memory access at offset 0 partially underflows this variable +HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork + (longjmp and C++ exceptions *are* supported) +Thread T3 created by T0 here: + #0 0x7f0548d28805 in pthread_create (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x3a805) + #1 0x556128a97ecf in qemu_thread_create ../util/qemu-thread-posix.c:596 + +SUMMARY: AddressSanitizer: stack-buffer-underflow (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x9b47f) +``` diff --git a/results/classifier/gemma3:12b/kvm/814222 b/results/classifier/gemma3:12b/kvm/814222 new file mode 100644 index 00000000..1a749de0 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/814222 @@ -0,0 +1,127 @@ + +kvm cannot use vhd files over 127GB + +The primary use case for using vhds with KVM is to perform a conversion to a raw image file so that one could move from Hyper-V to Linux-KVM. See more on this http://blog.allanglesit.com/2011/03/linux-kvm-migrating-hyper-v-vhd-images-to-kvm/ + +# kvm-img convert -f raw -O vpc /root/file.vhd /root/file.img + +The above works great if you have VHDs smaller than 127GB, however if it is larger, then no error is generated during the conversion process, but it appears to just process up to that 127GB barrier and no more. Also of note. VHDs can also be run directly using KVM if they are smaller than 127GB. VHDs can be read and function well using virtualbox as well as hyper-v, so I suspect the problem lies not with the VHD format (since that has a 2TB limitation). But instead with how qemu-kvm is interpreting them. 
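Worth noting: 136899993600 bytes is exactly 65535 cylinders x 16 heads x 255 sectors x 512 bytes, i.e. the largest size the CHS geometry field in the VHD footer can express. That suggests kvm-img is deriving the virtual size from the footer's geometry field rather than from its 64-bit "current size" field (an inference from the numbers above, not from the code). The sketch below reads both fields from a VHD so they can be compared directly:

```python
#!/usr/bin/env python3
# Diagnostic sketch: read a VHD footer and compare its 64-bit "current size"
# field with the size implied by the CHS geometry field.  The footer is the
# last 512 bytes of the file (dynamic VHDs also keep a copy at offset 0);
# all fields are big-endian.
import struct, sys

with open(sys.argv[1], "rb") as f:
    f.seek(-512, 2)                    # 2 == os.SEEK_END
    footer = f.read(512)

assert footer[:8] == b"conectix", "not a VHD footer"
current_size = struct.unpack(">Q", footer[48:56])[0]            # bytes
cylinders, heads, sectors = struct.unpack(">HBB", footer[56:60])
chs_size = cylinders * heads * sectors * 512

print(f"current size field : {current_size} bytes")
print(f"CHS geometry       : {cylinders} x {heads} x {sectors} = {chs_size} bytes")
```

For the 140GB image the current-size field should report the full 143360 MBytes that VirtualBox shows, while the geometry tops out at 136899993600 bytes.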
+ +BORING VERSION INFO: +# cat /etc/issue +Ubuntu 11.04 \n \l +# uname -rmiv +2.6.38-8-server #42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011 x86_64 x86_64 +# apt-cache policy kvm +kvm: + Installed: 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1 + Candidate: 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1 + Version table: + *** 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1 0 + 500 http://apt.sonosite.com/ubuntu/ natty-updates/main amd64 Packages + 500 http://apt.sonosite.com/ubuntu/ natty-security/main amd64 Packages + 100 /var/lib/dpkg/status + 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4 0 + 500 http://apt.sonosite.com/ubuntu/ natty/main amd64 Packages +# apt-cache policy libvirt-bin +libvirt-bin: + Installed: 0.8.8-1ubuntu6.2 + Candidate: 0.8.8-1ubuntu6.2 + Version table: + *** 0.8.8-1ubuntu6.2 0 + 500 http://apt.sonosite.com/ubuntu/ natty-updates/main amd64 Packages + 500 http://apt.sonosite.com/ubuntu/ natty-security/main amd64 Packages + 100 /var/lib/dpkg/status + 0.8.8-1ubuntu6 0 + 500 http://apt.sonosite.com/ubuntu/ natty/main amd64 Packages + +qemu-img version 0.14.0 + +# vboxmanage -v +4.0.12r72916 + + +REPRODUCTION STEPS (requires Windows 7 or Windows 2008 R2 with < 1GB of free space) + +## WINDOWS MACHINE ## + +Use Computer Management > Disk Management +-Create 2 VHD files, both dynamically expanding 120GB and 140GB respectively. +-Do not initialize or format. + +These files will need to be transferred to an Ubuntu KVM machine (pscp is what I used but usb would work as well). + +## UBUNTU KVM MACHINE ## + +# ls *.vhd +120g-dyn.vhd 140g-dyn.vhd +# kvm-img info 120g-dyn.vhd +image: 120g-dyn.vhd +file format: vpc +virtual size: 120G (128847052800 bytes) +disk size: 244K +# kvm-img info 140g-dyn.vhd +image: 140g-dyn.vhd +file format: vpc +virtual size: 127G (136899993600 bytes) +disk size: 284K +# kvm-img info 120g-dyn.vhd | grep "virtual size" +virtual size: 120G (128847052800 bytes) +# kvm-img info 140g-dyn.vhd | grep "virtual size" +virtual size: 127G (136899993600 bytes) + +Regardless of how big the second vhd is I always get a virtual size of 127G + +Now if we use virtualbox to view the vhds we see markedly different results. + +# VBoxManage showhdinfo 120g-dyn.vhd +UUID: e63681e0-ff12-4114-85de-7d13562b36db +Accessible: yes +Logical size: 122880 MBytes +Current size on disk: 0 MBytes +Type: normal (base) +Storage format: VHD +Format variant: dynamic default +Location: /root/120g-dyn.vhd +# VBoxManage showhdinfo 140g-dyn.vhd +UUID: 94531905-46b4-469f-bb44-7a7d388fb38f +Accessible: yes +Logical size: 143360 MBytes +Current size on disk: 0 MBytes +Type: normal (base) +Storage format: VHD +Format variant: dynamic default +Location: /root/140g-dyn.vhd + +# kvm-img convert -f vpc -O raw 120g-dyn.vhd 120g-dyn.img +# +# kvm-img convert -f vpc -O raw 140g-dyn.vhd 140g-dyn.img +# + +# kvm-img info 120g-dyn.img +image: 120g-dyn.img +file format: raw +virtual size: 120G (128847052800 bytes) +disk size: 0 +# kvm-img info 120g-dyn.img | grep "virtual size" +virtual size: 120G (128847052800 bytes) +# kvm-img info 140g-dyn.img +image: 140g-dyn.img +file format: raw +virtual size: 127G (136899993600 bytes) +disk size: 0 +# kvm-img info 140g-dyn.img | grep "virtual size" +virtual size: 127G (136899993600 bytes) + +Notice after the conversion the raw image will the taken on the partial geometry of the vhd, thus rendering that image invalid. + +vboxmanage has a clonehd option which allows you to successfully convert vhd to a raw image, which kvm then sees properly. 
+ +For giggles I also tested with a 140GB fixed VHD (in the same manner as above) and it displayed the virtual size as correct, so a good work around is to convert your VHDs to fixed, then use kvm-img to convert them. + +Keep in mind that these reproduction steps will not have a file systems therefore no valid data, if there were for example NTFS with a text file the problem would still occur but more importantly the guest trying to use it would not be able to open the disk because of it being unable to find the final sector. + +So long story short I think we are dealing with 2 issues here. + +1) kvm not being able to deal with dynamic VHD files larger than 127GB +2) kvm-img not generating an error when it "fails" at converting or displaying information on dynamic VHDs larger than 127GB. The error should be something like "qemu-kvm does not support dynamic VHD files larger that 127GB..." \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/823 b/results/classifier/gemma3:12b/kvm/823 new file mode 100644 index 00000000..3962ea19 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/823 @@ -0,0 +1,22 @@ + +rcutorture: ../tests/unit/rcutorture.c:321: rcu_update_stress_test: Assertion `p != cp' failed. +Description of problem: +qemu rcutorture tests are failing when building qemu for Rawhide. See the scratch build I did here and the follow log files: + +https://koji.fedoraproject.org/koji/taskinfo?taskID=81316487 +https://kojipkgs.fedoraproject.org//work/tasks/6509/81316509/build.log +https://kojipkgs.fedoraproject.org//work/tasks/6508/81316508/build.log +https://kojipkgs.fedoraproject.org//work/tasks/6510/81316510/build.log + +The full error is: + +``` +MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))} G_TEST_SRCDIR=/builddir/build/BUILD/qemu-6.2.0/tests/unit G_TEST_BUILDDIR=/builddir/build/BUILD/qemu-6.2.0/qemu_kvm_build/tests/unit tests/unit/rcutorture --tap -k +ERROR rcutorture - too few tests run (expected 2, got 0) +rcutorture: ../tests/unit/rcutorture.c:321: rcu_update_stress_test: Assertion `p != cp' failed. +make: *** [Makefile.mtest:1208: run-test-149] Error 1 +``` +Steps to reproduce: +1. Compile qemu and run the test suite. +Additional information: +The only significant recent change since it was built successfully is adoption of GCC 12. Could it be a change in compiler that causes this? diff --git a/results/classifier/gemma3:12b/kvm/823733 b/results/classifier/gemma3:12b/kvm/823733 new file mode 100644 index 00000000..9a8a35b9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/823733 @@ -0,0 +1,105 @@ + +Solaris can't be powered off with ACPI shutdown/poweroff + +Thank you forgive my poor English. + +It seems KVM can’t poweroff solairs 10 or sloalrs 11 VM. +I have created solaris 10 and 11 as usual. Everything in VM is running OK, but finally I use shell command ‘poweroff’ or ‘init 5’, the solaris VM (both 10 & 11) system could’t be poweroff but with promoting me the message: perss any key to reboot ….. ,I pressed any key in vnc client, solaris VM reboot immediately. Endless reboot loop above. + +the solaris 10 & 11 from oracle iso file name : +sol-10-u9-ga-x86-dvd.iso +sol-11-exp-201011-text-x86.iso + +the solaris 10 & 11 from oracle iso file name : +sol-10-u9-ga-x86-dvd.iso +sol-11-exp-201011-text-x86.iso + +1. On my real physical machine,the solaris can be poweroff +2. On vmware ,the solaris can be poweroff +3. 
On my real physical machine,I have try to disbale the ACPI opiton in BOIS, then the solaris can't be poweroff,Like the problem I have described above +so ,I doubt the KVM has a little problem in ACPI + +I have try the suggestion as follows, but I can’t solve the problem. +7.2 Solaris reboot all the time on grub menu +• Run through the installer as usual +• On completion and reboot, the VM will perpetually reboot. "Stop" the VM. +• Start it up again, and immediately open a vnc console and select the Safe Boot from the options screen +• When prompted if you want to try and recover the boot block, say yes +• You should now have a Bourne terminal with your existing filesystem mounted on /a +• Run /a/usr/bin/bash (my preferred shell) +• export TERM=xterm +• vi /a/boot/grub/menu.1st (editing the bootloader on your mounted filesystem), to add "kernel/unix" to the kernel options for the non-safe-mode boot. Ex : +Config File : /a/boot/grub/menu.lst +kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS kernel/unix + +According to KVM requirements, I collected the following information: +CPU model name +model name : Intel(R) Xeon(R) CPU X3450 @ 2.67GHz + +kvm -version +QEMU PC emulator version 0.12.3 (qemu-kvm-0.12.3), Copyright (c) 2003-2008 Fabrice Bellard + +Host kernel version +Ubuntu 10.04.1 LTS 2.6.32-25-server + +What host kernel arch you are using (i386 or x86_64) +X86_64 + +Guest OS +Solaris 10 and Solaris 11,both can not shutdown + +The qemu command line you are using to start the guest + +First, I used the command line as follows: +kvm -m 1024 -drive file=solaris10.img,cache=writeback -net nic -net user -nographic -vnc :1 +then I try to use -no-kvm-irqchip or -no-kvm ,but the problem also appears! + +Secondly, have created and run solaris 10&11 by using Virsh, still solaris can't be poweroff, the XML file content is : +<domain type='kvm'> + <name>solairs</name> + <uuid>85badf15-244d-4719-a2da-8c3de064137d</uuid> + <memory>1677721</memory> + <currentMemory>1677721</currentMemory> + <vcpu>1</vcpu> + <os> + <type arch='i686' machine='pc-0.12'>hvm</type> + <boot dev='hd'/> + </os> + <features> + <acpi/> + <apic/> + </features> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/bin/kvm</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2' cache='writeback'/> + <source file='/opt/GuestOS/solaris10.img'/> + <target dev='hda' bus='ide'/> + </disk> + <interface type='bridge'> + <mac address='00:0c:29:d0:36:c3'/> + <source bridge='br1'/> + <target dev='vnet0'/> + </interface> + <input type='mouse' bus='ps2'/> + <graphics type='vnc' port='5901' autoport='no' keymap='en-us'/> + <video> + <model type='vga' vram='65536' heads='1'/> + </video> + </devices> + <seclabel type='dynamic' model='apparmor'> + <label>libvirt-f36f5289-692e-6f1c-fe71-c6ed19453e2f</label> + <imagelabel>libvirt-f36f5289-692e-6f1c-fe71-c6ed19453e2f</imagelabel> + </seclabel> + </domain> + + + + + + + diff --git a/results/classifier/gemma3:12b/kvm/825 b/results/classifier/gemma3:12b/kvm/825 new file mode 100644 index 00000000..c5e76ff7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/825 @@ -0,0 +1,39 @@ + +compilation error - "VIRTIO_F_VERSION" +Description of problem: +Encountered problem while "make" + +.... 
+`[65/2464] Compiling C object subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o +FAILED: subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o +cc -m64 -mcx16 -Isubprojects/libvhost-user/libvhost-user.a.p -Isubprojects/libvhost-user -I../subprojects/libvhost-user -fdiagnostics-color=auto -Wall -Winvalid-pch -Werror -std=gnu11 -O2 -g -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wold-style-declaration -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wimplicit-fallthrough=2 -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -fstack-protector-strong -fPIE -pthread -D_GNU_SOURCE -MD -MQ subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o -MF subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o.d -o subprojects/libvhost-user/libvhost-user.a.p/libvhost-user.c.o -c ../subprojects/libvhost-user/libvhost-user.c +../subprojects/libvhost-user/libvhost-user.c: In function 'vu_get_features_exec': +../subprojects/libvhost-user/libvhost-user.c:508:17: error: 'VIRTIO_F_VERSION_1' undeclared (first use in this function); did you mean 'INFLIGHT_VERSION'? + 1ULL << VIRTIO_F_VERSION_1 | + ^~~~~~~~~~~~~~~~~~ + INFLIGHT_VERSION +../subprojects/libvhost-user/libvhost-user.c:508:17: note: each undeclared identifier is reported only once for each function it appears in +../subprojects/libvhost-user/libvhost-user.c: In function 'vu_set_features_exec': +../subprojects/libvhost-user/libvhost-user.c:542:30: error: 'VIRTIO_F_VERSION_1' undeclared (first use in this function); did you mean 'INFLIGHT_VERSION'? + if (!vu_has_feature(dev, VIRTIO_F_VERSION_1)) { + ^~~~~~~~~~~~~~~~~~ + INFLIGHT_VERSION +../subprojects/libvhost-user/libvhost-user.c: In function 'generate_faults': +../subprojects/libvhost-user/libvhost-user.c:612:13: error: unused variable 'ret' [-Werror=unused-variable] + int ret; + ^~~ +../subprojects/libvhost-user/libvhost-user.c:611:22: error: unused variable 'dev_region' [-Werror=unused-variable] + VuDevRegion *dev_region = &dev->regions[i]; + ^~~~~~~~~~ +cc1: all warnings being treated as errors +ninja: build stopped: subcommand failed. +make[1]: *** [Makefile:163: run-ninja] Error 1 +make[1]: Leaving directory '/users/oneuser/qemu/qemu/build' +make: *** [GNUmakefile:11: all] Error 2 +` +Steps to reproduce: +1. ./configure --prefix=/users/oneuser/qemu/myqemu-1 --enable-kvm --target-list=x86_64-softmmu +2. make +3. +Additional information: +Please let me know if more info is needed. diff --git a/results/classifier/gemma3:12b/kvm/855800 b/results/classifier/gemma3:12b/kvm/855800 new file mode 100644 index 00000000..f122749c --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/855800 @@ -0,0 +1,40 @@ + +KVM crashes when attempting to restart migration + +Operations performed: +Sequence to trigger crash: + + * Start two kvm systems, one on gerph (primary), one on nbuild2 (listening for incoming migration) - do not use -daemonize + * On gerph, connect to monitor. + * "migrate -d -b tcp:nbuild2:4444" + * "info migrate" + * "migrate_cancel" + * "info migrate" + * "migrate -d -b tcp:nbuild2:4444" + * crashed with assertion: +kvm: block-migration.c:355: flush_blks: Assertion `block_mig_state.read_done >= 0' failed. + Connection closed by foreign host. 
+[1]+ Aborted (core dumped) kvm -drive file=./copy-disk2.img,boot=on -m 4096 -serial mon:telnet::23023,server,nowait -balloon virtio -vnc :99 -usbdevice tablet -net nic,macaddr=f6:a6:31:53:89:9a,model=rtl8139,vlan=0 -net tap,vlan=0 + + +Repeating the operations above often dies in different places; just repeat the cancel and restart the operation. Because the KVM system dies, the underlying VM is obviously terminated. + +Distribution: + +jfletcher@gerph:~$ lsb_release -rd +Description: Ubuntu 10.04.3 LTS +Release: 10.04 + +Package: + +jfletcher@gerph:~$ apt-cache policy kvm +kvm: + Installed: 1:84+dfsg-0ubuntu16+0.12.3+noroms+0ubuntu9.15 + Candidate: 1:84+dfsg-0ubuntu16+0.12.3+noroms+0ubuntu9.15 + Version table: + *** 1:84+dfsg-0ubuntu16+0.12.3+noroms+0ubuntu9.15 0 + 500 http://gb.archive.ubuntu.com/ubuntu/ lucid-updates/main Packages + 500 http://security.ubuntu.com/ubuntu/ lucid-security/main Packages + 100 /var/lib/dpkg/status + 1:84+dfsg-0ubuntu16+0.12.3+noroms+0ubuntu9 0 + 500 http://gb.archive.ubuntu.com/ubuntu/ lucid/main Packages \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/860 b/results/classifier/gemma3:12b/kvm/860 new file mode 100644 index 00000000..9bbe7858 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/860 @@ -0,0 +1,311 @@ + +Not able to launch guests in ppc64le P9 OPAL +Description of problem: +Not able to launch guests in ppc64le P9 OPAL +Steps to reproduce: +1. In a RHEL8 using 4.18.0-305.3.1.el8_4.ppc64le create a Fedora CoreOS VM using kernel-5.15.17-200.fc35.ppc64le. +2. Inside the FCOS vm run: +``` +virt-install --import \ + --name buildvm-ppc64le-fcos01.iad2.fedoraproject.org \ + --memory=32768,maxmemory=32768 \ + --vcpus=16,maxvcpus=16 \ + --feature nested-hv=on \ + --network bridge=br0,model=virtio,mac=RANDOM \ + --autostart \ + --memballoon virtio \ + --watchdog default \ + --rng /dev/random \ + --noautoconsole \ + --disk path=$PWD/fcos-ppc64le-builder.ign,format=raw,readonly=on,serial=ignition \ + --disk bus=virtio,path=/dev/vg_guests/buildvm-ppc64le-fcos01.iad2.fedoraproject.org,cache=unsafe,io=threads +``` + +3. 
Try to run it again and you will get: + +``` +KVM: Failed to create TCE64 table for liobn 0x71000002 +KVM: Failed to create TCE64 table for liobn 0x80000000 +KVM: unknown exit, hardware reason ffffffffffffffc9 +NIP 0000000000000100 LR 0000000000000000 CTR 0000000000000000 XER 0000000000000000 CPU#0 +MSR 8000000000001000 HID0 0000000000000000 HF 6c000004 iidx 3 didx 3 +TB 00000000 00000000 DECR 0 +GPR00 0000000000000000 0000000000000000 0000000000000000 000000007fe00000 +GPR04 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR08 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR12 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR16 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR20 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR24 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR28 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +CR 00000000 [ - - - - - - - - ] RES ffffffffffffffff + SRR0 0000000000000000 SRR1 0000000000000000 PVR 00000000004e1202 VRSAVE 0000000000000000 +SPRG0 0000000000000000 SPRG1 0000000000000000 SPRG2 0000000000000000 SPRG3 0000000000000000 +SPRG4 0000000000000000 SPRG5 0000000000000000 SPRG6 0000000000000000 SPRG7 0000000000000000 +HSRR0 0000000000000000 HSRR1 0000000000000000 + CFAR 0000000000000000 + LPCR 0000000000560413 + PTCR 0000000000000000 DAR 0000000000000000 DSISR 0000000000000000 +``` +Additional information: +Fedora xml: +``` +<domain type='kvm' id='24'> + <name>buildvm-ppc64le-fcos01.iad2.fedoraproject.org</name> + <uuid>ed30c95e-b7c0-4c25-a6ba-b739459f101b</uuid> + <memory unit='KiB'>33554432</memory> + <currentMemory unit='KiB'>33554432</currentMemory> + <vcpu placement='static'>16</vcpu> + <resource> + <partition>/machine</partition> + </resource> + <os> + <type arch='ppc64le' machine='pseries-rhel8.3.0'>hvm</type> + <boot dev='hd'/> + </os> + <features> + <nested-hv state='on'/> + </features> + <cpu mode='custom' match='exact' check='none'> + <model fallback='forbid'>POWER9</model> + </cpu> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <emulator>/usr/libexec/qemu-kvm</emulator> + <disk type='block' device='disk'> + <driver name='qemu' type='raw' cache='unsafe' io='threads'/> + <source dev='/dev/vg_guests/buildvm-ppc64le-fcos01.iad2.fedoraproject.org' index='2'/> + <backingStore/> + <target dev='vda' bus='virtio'/> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </disk> + <disk type='file' device='disk'> + <driver name='qemu' type='raw'/> + <source file='/tmp/fcos-ppc64le-builder.ign' index='1'/> + <backingStore/> + <target dev='vdb' bus='virtio'/> + <readonly/> + <serial>ignition</serial> + <alias name='virtio-disk1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> + </disk> + <controller type='usb' index='0' model='qemu-xhci' ports='15'> + <alias name='usb'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </controller> + <controller type='pci' index='0' model='pci-root'> + <model name='spapr-pci-host-bridge'/> + <target index='0'/> + <alias name='pci.0'/> + </controller> + <controller type='virtio-serial' index='0'> + <alias name='virtio-serial0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </controller> + <interface type='bridge'> + <mac 
address='52:54:00:c4:d2:aa'/> + <source bridge='br0'/> + <target dev='vnet23'/> + <model type='virtio'/> + <alias name='net0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> + </interface> + <serial type='pty'> + <source path='/dev/pts/11'/> + <target type='spapr-vio-serial' port='0'> + <model name='spapr-vty'/> + </target> + <alias name='serial0'/> + <address type='spapr-vio' reg='0x30000000'/> + </serial> + <console type='pty' tty='/dev/pts/11'> + <source path='/dev/pts/11'/> + <target type='serial' port='0'/> + <alias name='serial0'/> + <address type='spapr-vio' reg='0x30000000'/> + </console> + <channel type='unix'> + <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-24-buildvm-ppc64le-fcos/org.qemu.guest_agent.0'/> + <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/> + <alias name='channel0'/> + <address type='virtio-serial' controller='0' bus='0' port='1'/> + </channel> + <input type='tablet' bus='usb'> + <alias name='input0'/> + <address type='usb' bus='0' port='1'/> + </input> + <input type='keyboard' bus='usb'> + <alias name='input1'/> + <address type='usb' bus='0' port='2'/> + </input> + <graphics type='vnc' port='5910' autoport='yes' listen='127.0.0.1'> + <listen type='address' address='127.0.0.1'/> + </graphics> + <video> + <model type='vga' vram='16384' heads='1' primary='yes'/> + <alias name='video0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> + </video> + <watchdog model='i6300esb' action='reset'> + <alias name='watchdog0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> + </watchdog> + <memballoon model='virtio'> + <alias name='balloon0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> + </memballoon> + <rng model='virtio'> + <backend model='random'>/dev/random</backend> + <alias name='rng0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> + </rng> + <panic model='pseries'/> + </devices> + <seclabel type='dynamic' model='selinux' relabel='yes'> + <label>system_u:system_r:svirt_t:s0:c131,c913</label> + <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c913</imagelabel> + </seclabel> + <seclabel type='dynamic' model='dac' relabel='yes'> + <label>+107:+107</label> + <imagelabel>+107:+107</imagelabel> + </seclabel> +</domain> +``` + +Failure seen in journal when running `virt-ls` + +``` +Feb 04 16:19:39 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: KVMPPC-UVMEM: No support for secure guests +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: vcpu 000000004bd9d345 (0): +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: pc = 0000000000000100 msr = 8000000000001000 trap = ffffffc9 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 0 = 0000000000000000 r16 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 1 = 0000000000000000 r17 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 2 = 0000000000000000 r18 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 3 = 000000003fe00000 r19 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 4 = 0000000000000000 r20 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 5 = 0000000000000000 r21 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org 
kernel: r 6 = 0000000000000000 r22 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 7 = 0000000000000000 r23 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 8 = 0000000000000000 r24 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r 9 = 0000000000000000 r25 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r10 = 0000000000000000 r26 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r11 = 0000000000000000 r27 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r12 = 0000000000000000 r28 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r13 = 0000000000000000 r29 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r14 = 0000000000000000 r30 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: r15 = 0000000000000000 r31 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: ctr = 0000000000000000 lr = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: srr0 = 0000000000000000 srr1 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: sprg0 = 0000000000000000 sprg1 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: sprg2 = 0000000000000000 sprg3 = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: cr = 00000000 xer = 0000000000000000 dsisr = 00000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: dar = 0000000000000000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: fault dar = 0000000000000000 dsisr = 0c68f000 +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: SLB (0 entries): +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: lpcr = 0040000000560413 sdr1 = 0000000000000000 last_inst = ffffffff +Feb 04 16:19:40 buildvm-ppc64le-fcos01.iad2.fedoraproject.org kernel: trap=0xffffffc9 | pc=0x100 | msr=0x8000000000001000 +``` +Running via qemu: +``` +qemu-system-ppc64 -m 2048 -machine pseries,accel=kvm,kvm-type=HV -cpu host -nographic -snapshot -drive if=virtio,file=fedora-coreos-35.20220131.dev.0-qemu.ppc64le.qcow2 + +KVM: unknown exit, hardware reason ffffffffffffffc9 +NIP 0000000000000100 LR 0000000000000000 CTR 0000000000000000 XER 0000000000000000 CPU#0 +MSR 8000000000001000 HID0 0000000000000000 HF 6c000004 iidx 3 didx 3 +TB 00000000 00000000 DECR 0 +GPR00 0000000000000000 0000000000000000 0000000000000000 000000007fe00000 +GPR04 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR08 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR12 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR16 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR20 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR24 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR28 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +CR 00000000 [ - - - - - - - - ] RES ffffffffffffffff + SRR0 0000000000000000 SRR1 0000000000000000 PVR 00000000004e1202 VRSAVE 0000000000000000 +SPRG0 0000000000000000 SPRG1 0000000000000000 SPRG2 0000000000000000 SPRG3 0000000000000000 
+SPRG4 0000000000000000 SPRG5 0000000000000000 SPRG6 0000000000000000 SPRG7 0000000000000000 +HSRR0 0000000000000000 HSRR1 0000000000000000 + CFAR 0000000000000000 + LPCR 0000000000560413 + PTCR 0000000000000000 DAR 0000000000000000 DSISR 0000000000000000 +``` +libguestfs-test-tool also fails to launch guest + +``` +2022-02-04 18:10:02.355+0000: starting up libvirt version: 7.6.0, package: 5.fc35 (Fedora Project, 2021-12-16-17:58:25, ), qemu version: 6.1.0qemu-6.1.0-10.fc35, kernel: 5.15.17-200.fc35.ppc64le, hostname: buildvm-ppc64le-fcos01.iad2.fedoraproject.org +LC_ALL=C \ +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \ +HOME=/var/lib/libvirt/qemu/domain-1-guestfs-9ee177vxogzf \ +XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-guestfs-9ee177vxogzf/.local/share \ +XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-guestfs-9ee177vxogzf/.cache \ +XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-guestfs-9ee177vxogzf/.config \ +TMPDIR=/var/tmp \ +/usr/bin/qemu-system-ppc64 \ +-name guest=guestfs-9ee177vxogzfyj3v,debug-threads=on \ +-S \ +-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-guestfs-9ee177vxogzf/master-key.aes"}' \ +-machine pseries-6.1,accel=kvm,usb=off,dump-guest-core=off,memory-backend=ppc_spapr.ram \ +-cpu POWER9 \ +-m 1280 \ +-object '{"qom-type":"memory-backend-ram","id":"ppc_spapr.ram","size":1342177280}' \ +-overcommit mem-lock=off \ +-smp 1,sockets=1,cores=1,threads=1 \ +-uuid 08cd47d3-91e1-4322-aa53-6665a9bc13c8 \ +-display none \ +-no-user-config \ +-nodefaults \ +-chardev socket,id=charmonitor,fd=22,server=on,wait=off \ +-mon chardev=charmonitor,id=monitor,mode=control \ +-rtc base=utc,driftfix=slew \ +-no-reboot \ +-boot strict=on \ +-kernel /var/tmp/.guestfs-0/appliance.d/kernel \ +-initrd /var/tmp/.guestfs-0/appliance.d/initrd \ +-append 'panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=UUID=0c185770-d5fc-4a67-acc9-3ea85178bda2 selinux=0 guestfs_verbose=1 TERM=screen' \ +-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x1 \ +-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x2 \ +-blockdev '{"driver":"file","filename":"/tmp/libguestfskYy342/scratch1.img","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}' \ +-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":true},"driver":"raw","file":"libvirt-2-storage"}' \ +-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=libvirt-2-format,id=scsi0-0-0-0,bootindex=1,write-cache=on \ +-blockdev '{"driver":"file","filename":"/var/tmp/.guestfs-0/appliance.d/root","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}' \ +-blockdev '{"node-name":"libvirt-3-format","read-only":true,"cache":{"direct":false,"no-flush":true},"driver":"raw","file":"libvirt-3-storage"}' \ +-blockdev '{"driver":"file","filename":"/tmp/libguestfskYy342/overlay2.qcow2","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}' \ +-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":true},"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-3-format"}' \ +-device 
scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,device_id=drive-scsi0-0-1-0,drive=libvirt-1-format,id=scsi0-0-1-0,write-cache=on \ +-chardev socket,id=charserial0,path=/tmp/libguestfsFFWbf9/console.sock \ +-device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 \ +-chardev socket,id=charchannel0,path=/tmp/libguestfsFFWbf9/guestfsd.sock \ +-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 \ +-audiodev id=audio1,driver=none \ +-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' \ +-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x3 \ +-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ +-msg timestamp=on +KVM: unknown exit, hardware reason ffffffffffffffc9 +NIP 0000000000000100 LR 0000000000000000 CTR 0000000000000000 XER 0000000000000000 CPU#0 +MSR 8000000000001000 HID0 0000000000000000 HF 6c000004 iidx 3 didx 3 +TB 00000000 00000000 DECR 0 +GPR00 0000000000000000 0000000000000000 0000000000000000 000000003fe00000 +GPR04 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR08 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR12 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR16 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR20 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR24 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +GPR28 0000000000000000 0000000000000000 0000000000000000 0000000000000000 +CR 00000000 [ - - - - - - - - ] RES ffffffffffffffff + SRR0 0000000000000000 SRR1 0000000000000000 PVR 00000000004e1202 VRSAVE 0000000000000000 +SPRG0 0000000000000000 SPRG1 0000000000000000 SPRG2 0000000000000000 SPRG3 0000000000000000 +SPRG4 0000000000000000 SPRG5 0000000000000000 SPRG6 0000000000000000 SPRG7 0000000000000000 +HSRR0 0000000000000000 HSRR1 0000000000000000 + CFAR 0000000000000000 + LPCR 0000000000560413 + PTCR 0000000000000000 DAR 0000000000000000 DSISR 0000000000000000 +2022-02-04T18:19:47.323915Z qemu-system-ppc64: terminating on signal 15 from pid 1645 (<unknown process>) +2022-02-04 18:19:47.524+0000: shutting down, reason=destroyed +``` diff --git a/results/classifier/gemma3:12b/kvm/870 b/results/classifier/gemma3:12b/kvm/870 new file mode 100644 index 00000000..a975260d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/870 @@ -0,0 +1,13 @@ + +Throws a #GP when it should throw a #SS +Description of problem: +When stacks are switched as part of a 64-bit mode privilege-level change (resulting from an interrupt), IA-32e mode loads only an inner-level RSP from the TSS. If the value of rsp from tss is a non-canonical form. It will trigger #SS. But when I test it in qemu it throws #GP instead of #SS +Steps to reproduce: +In order to confirm that it is the #SS triggered by the non-canonical address, We can verify on a real machine. +1. Set the value of the current core's `TSS.IST7` to the the non-canonical address. +2. Set the `ist` field of the interrupt 4 (Overflow Exception) descriptor to 7. +3. Execute the `INT 4` instruction in Ring 3 and it will be taken over by the #SS handler. 
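
For clarity on what "non-canonical form" means here: with 48-bit virtual addresses (an assumption about the test machine), bits 63:48 of the value must all equal bit 47, so bits 63:47 are either all 0 or all 1. A quick sketch of that check:

```python
# Sketch: canonical-address check, assuming 48-bit virtual addresses.
# A value is canonical when bits 63:48 are a sign extension of bit 47.
def is_canonical(addr: int, vaddr_bits: int = 48) -> bool:
    top = addr >> (vaddr_bits - 1)                     # bits 63:47 as an integer
    return top == 0 or top == (1 << (64 - vaddr_bits + 1)) - 1

print(is_canonical(0x0000_7FFF_FFFF_F000))  # True  (low/user half)
print(is_canonical(0xFFFF_8000_0000_0000))  # True  (high/kernel half)
print(is_canonical(0x0000_8000_0000_0000))  # False (non-canonical; loading it as RSP should raise #SS)
```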
+ +Repeat the above steps in qemu this exception will be taken over by #GP +Additional information: + diff --git a/results/classifier/gemma3:12b/kvm/888016 b/results/classifier/gemma3:12b/kvm/888016 new file mode 100644 index 00000000..f65e03be --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/888016 @@ -0,0 +1,19 @@ + +RHEL 6.1 guest fails to boot with vhost + +Tried to boot 6.1 guest on hs22 blade with/without vhost enabled. + +With vhost enabled, guest aborted with core dump. + +installed guest with autotest. +Command : +/usr/local/bin/qemu-system-x86_64 -name 'vm1' -nodefaults -vga std -monitor unix:'/tmp/monitor-humanmonitor1-20111108-193209-fc6O',server,nowait -serial unix:'/tmp/serial-20111108-193209-fc6O',server,nowait -drive file='/home/pradeep/autotest/client/tests/kvm/images/rhel6.1-64.qed',index=0,if=virtio,cache=none -device virtio-net-pci,netdev=idQhUaOc,mac='9a:b7:ea:c9:0e:0d',id='idVR6XQz' -netdev tap,id=idQhUaOc,vhost=on,script=/home/pradeep/qemu-ifup-latest -m 1024 -smp 8 -vnc :0 -monitor stdio +QEMU 0.15.91 monitor - type 'help' for more information +(qemu) Aborted (core dumped) + + +host: + +2.6.32-214 +m/c: hs22 +vhost modules are loaded. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/889827 b/results/classifier/gemma3:12b/kvm/889827 new file mode 100644 index 00000000..0732e1c9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/889827 @@ -0,0 +1,202 @@ + +Qemu hangs on loadvm + +Hello. + +I'm new here but I believe this is a bug in Qemu since I did nothing special and it stoped to work. Please excuse me if this is not a bug but my error. + +Suddenly loadvm command stoped to work. It restores state (I see this in SDL window) and after that Qemu hangs, you even can close it only with SIGKILL. I did nothing special to cause this. Just only saved one more state with savevm, don't know if this is related. + +I already had this trouble, started from a new image and the trouble disappeared. And now this happened again. 
+ +Here is a piece of strace output (at the end, last few syscalls clock_gettime - ioctl repeat infinitely): +[pid 32681] ioctl(6, KVM_SET_USER_MEMORY_REGION, 0xbfd6305c) = 0 +[pid 32681] ioctl(6, KVM_SET_USER_MEMORY_REGION, 0xbfd6305c) = 0 +[pid 32681] ioctl(6, KVM_SET_USER_MEMORY_REGION, 0xbfd6305c) = 0 +[pid 32681] ioctl(6, KVM_GET_DIRTY_LOG, 0xbfd6313c) = 0 +[pid 32681] ioctl(6, KVM_GET_DIRTY_LOG, 0xbfd6313c) = 0 +[pid 32681] ioctl(6, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0xbfd6332c) = 0 +[pid 32681] ioctl(13, KVM_SET_REGS, 0xbfd62d9c) = 0 +[pid 32681] ioctl(13, KVM_SET_FPU, 0xbfd62c7c) = 0 +[pid 32681] ioctl(13, KVM_SET_SREGS, 0xbfd62e84) = 0 +[pid 32681] ioctl(13, KVM_SET_MSRS, 0xbfd62e84) = 49 +[pid 32681] ioctl(13, KVM_SET_MP_STATE, 0xbfd62e80) = 0 +[pid 32681] ioctl(13, KVM_SET_LAPIC, 0xbfd62a2c) = 0 +[pid 32681] ioctl(13, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0xbfd62e84) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79640, 743993495}) = 0 +[pid 32681] write(9, "\1\0\0\0\0\0\0\0", 8) = 8 +[pid 32681] write(9, "\1\0\0\0\0\0\0\0", 8) = 8 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79640, 744620600}) = 0 +[pid 32681] gettimeofday({1321185168, 995648}, NULL) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79640, 744994949}) = 0 +[pid 32681] timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0 +[pid 32681] timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 +[pid 32681] tgkill(32681, 32682, SIGRT_6) = 0 +[pid 32681] gettimeofday({1321185168, 996771}, NULL) = 0 +[pid 32681] write(5, "\0", 1) = 1 +[pid 32679] <... read resumed> "\0", 1) = 1 +[pid 32679] exit_group(0) = ? +[pid 32681] chdir("/") = 0 +[pid 32681] open("/dev/null", O_RDWR|O_LARGEFILE|O_CLOEXEC) = 16 +[pid 32681] dup2(16, 0) = 0 +[pid 32681] dup2(16, 1) = 1 +[pid 32681] dup2(16, 2) = 2 +[pid 32681] close(16) = 0 +[pid 32681] futex(0x8314124, FUTEX_CMP_REQUEUE_PRIVATE, 1, 2147483647, 0x88ea1d4, 2 <unfinished ...> +[pid 32682] <... futex resumed> ) = 0 +[pid 32682] futex(0x88ea1d4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...> +[pid 32681] <... futex resumed> ) = 1 +[pid 32681] futex(0x88ea1d4, FUTEX_WAKE_PRIVATE, 1 <unfinished ...> +[pid 32682] <... futex resumed> ) = 0 +[pid 32682] futex(0x88ea1d4, FUTEX_WAKE_PRIVATE, 1) = 0 +[pid 32682] ioctl(13, KVM_RUN, 0) = -1 EINTR (Interrupted system call) +[pid 32682] rt_sigtimedwait([BUS RT_6], <unfinished ...> +[pid 32681] <... futex resumed> ) = 1 +[pid 32682] <... rt_sigtimedwait resumed> {si_signo=SIGRT_6, si_code=SI_TKILL, si_pid=32681, si_uid=1000, si_value={int=0, ptr=0}}, {0, 0}, 8) = 38 +[pid 32681] select(13, [3 7 8 10 12], [], [], {0, 0} <unfinished ...> +[pid 32682] rt_sigpending( <unfinished ...> +[pid 32681] <... select resumed> ) = 3 (in [7 8 12], left {0, 0}) +[pid 32682] <... rt_sigpending resumed> [USR2]) = 0 +[pid 32681] futex(0x88ea1d4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...> +[pid 32682] futex(0x88ea1d4, FUTEX_WAKE_PRIVATE, 1 <unfinished ...> +[pid 32681] <... futex resumed> ) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32682] <... futex resumed> ) = 0 +[pid 32681] read(12, <unfinished ...> +[pid 32682] ioctl(13, KVM_RUN <unfinished ...> +[pid 32681] <... read resumed> "\f\0\0\0\0\0\0\0\0\0\0\0\251\177\0\0\350\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 128) = 128 +[pid 32681] read(8, "~*\0\0\0\0\0\0", 512) = 8 +[pid 32681] read(7, <unfinished ...> +[pid 32682] <... ioctl resumed> , 0) = 0 +[pid 32681] <... 
read resumed> "\16\0\0\0\0\0\0\0\376\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 128) = 128 +[pid 32682] futex(0x88ea1d4, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...> +[pid 32681] rt_sigaction(SIGALRM, NULL, {0x811bd80, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0 +[pid 32681] write(9, "\1\0\0\0\0\0\0\0", 8) = 8 +[pid 32681] read(7, 0xbfd635ac, 128) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79640, 757376201}) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79640, 757540467}) = 0 +[pid 32681] gettimeofday({1321185169, 8534}, NULL) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79640, 757846302}) = 0 +[pid 32681] timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0 +[pid 32681] timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79640, 758327369}) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79640, 758470124}) = 0 +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLIN|POLLOUT}]) +[pid 32681] read(14, "\34\0i\0\3\0 \2j\1\0\0:\321\275\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 192 +[pid 32681] writev(14, [{"\22\0\7\0\3\0 \2'\0\0\0\37\0\0\0\10\1\4\0\4\0\0\0QEMU\22\0\7\0"..., 116}, {NULL, 0}, {"", 0}], 3) = 116 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\34\0j\0\3\0 \2'\0\0\0\2708\277\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 160 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=15, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=15, revents=POLLOUT}]) +[pid 32681] writev(15, [{"+\0\1\0", 4}, {NULL, 0}, {"", 0}], 3) = 4 +[pid 32681] poll([{fd=15, events=POLLIN}], 1, -1) = 1 ([{fd=15, revents=POLLIN}]) +[pid 32681] read(15, "\1\2\n\0\0\0\0\0\4\0\340\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(15, 0xa540448, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] writev(14, [{"\22\0\30\0\3\0 \2(\0\0\0)\0\0\0 \1\4\0\22\0\0\0\0\0\0\0\0\0\0\0"..., 136}, {NULL, 0}, {"", 0}], 3) = 136 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\34\0o\0\3\0 \2(\0\0\0\3038\277\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 64 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] writev(14, [{"\20\1\5\0\n\0 \2_WIN_HINTS\4\0", 20}, {NULL, 0}, {"", 0}], 3) = 20 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\1\0r\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] writev(14, [{"\f\1\5\0\3\0 \2\f\0IN \3\0\0X\2\0\0\f\0\5\0\r\0 \2\f\0\0\0"..., 44}, {NULL, 0}, {"", 0}], 3) = 44 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\0010u\0\2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 40 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] 
writev(14, [{"&\1\2\0s\0\0\0", 8}, {NULL, 0}, {"", 0}], 3) = 8 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\26\0u\0\3\0 \2\3\0 \2\0\0\0\0\4\0\30\0 \3X\2\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\1\1v\0\0\0\0\0s\0\0\0*\1\0\1\352\0\303\2\352\0\303\2\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] writev(14, [{"\214\2\2\0\17\0 \2+\0\1\0", 12}, {NULL, 0}, {"", 0}], 3) = 12 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\1\2x\0\0\0\0\0\4\0\340\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] shmdt(0x747ac000) = 0 +[pid 32681] shmget(IPC_PRIVATE, 1920000, IPC_CREAT|0777) = 6848525 +[pid 32681] shmat(6848525, 0, 0) = 0x73dd7000 +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] writev(14, [{"\214\1\4\0\24\0 \2\r\200h\0\0\3\0\0+\2\1\0", 20}, {NULL, 0}, {"", 0}], 3) = 20 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\1\2z\0\0\0\0\0\4\0\340\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] shmctl(6848525, IPC_64|IPC_RMID, 0) = 0 +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] writev(14, [{"+\1\1\0", 4}, {NULL, 0}, {"", 0}], 3) = 4 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\1\2{\0\0\0\0\0\4\0\340\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] writev(14, [{"\2\1\4\0\r\0 \2\0@\0\0\v\0 \2+\2\1\0", 20}, {NULL, 0}, {"", 0}], 3) = 20 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\1\2}\0\0\0\0\0\4\0\340\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=14, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=14, revents=POLLOUT}]) +[pid 32681] writev(14, [{"&\1\2\0\r\0 \2", 8}, {NULL, 0}, {"", 0}], 3) = 8 +[pid 32681] poll([{fd=14, events=POLLIN}], 1, -1) = 1 ([{fd=14, revents=POLLIN}]) +[pid 32681] read(14, "\1\1~\0\0\0\0\0s\0\0\0\0\0\0\0\352\0\303\2\346\0z\2\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=15, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=15, revents=POLLOUT}]) +[pid 32681] writev(15, [{"b\0\4\0\7\0 \2", 8}, {"MIT-SHM", 7}, {"\0", 1}], 3) = 16 +[pid 32681] poll([{fd=15, events=POLLIN}], 1, -1) = 1 ([{fd=15, revents=POLLIN}]) +[pid 32681] read(15, "\1\0\v\0\0\0\0\0\1\214M\221\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(15, 0xa540448, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=15, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=15, revents=POLLOUT}]) +[pid 32681] writev(15, 
[{"b\0\10\0\27\0 \2", 8}, {"Generic Event Extension", 23}, {"\0", 1}], 3) = 32 +[pid 32681] poll([{fd=15, events=POLLIN}], 1, -1) = 1 ([{fd=15, revents=POLLIN}]) +[pid 32681] read(15, "\1\0\f\0\0\0\0\0\1\212\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096) = 32 +[pid 32681] read(15, 0xa540448, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=15, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=15, revents=POLLOUT}]) +[pid 32681] writev(15, [{"\212\0\2\0\1\0\0\0", 8}, {NULL, 0}, {"", 0}], 3) = 8 +[pid 32681] poll([{fd=15, events=POLLIN}], 1, -1) = 1 ([{fd=15, revents=POLLIN}]) +[pid 32681] read(15, "\1\0\r\0\0\0\0\0\1\0\0\0w\26\10\10\0\0\0\0\0\0\0\0\310\316\357\10\n\204\17\10", 4096) = 32 +[pid 32681] read(15, 0xa540448, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] poll([{fd=15, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=15, revents=POLLOUT}]) +[pid 32681] writev(15, [{"\214\3\n\0\r\0 \2\16\0 \2 \3X\2\0\0\0\0 \3X\2\0\0\0\0\30\2\0\0"..., 40}, {NULL, 0}, {"", 0}], 3) = 40 +[pid 32681] read(15, 0xa540448, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] select(15, [14], NULL, NULL, {0, 0}) = 0 (Timeout) +[pid 32681] read(14, 0xa535ed8, 4096) = -1 EAGAIN (Resource temporarily unavailable) +[pid 32681] select(15, [14], NULL, NULL, {0, 0}) = 0 (Timeout) +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 111491325}) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 111667535}) = 0 +[pid 32681] gettimeofday({1321185169, 362674}, NULL) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 111990481}) = 0 +[pid 32681] timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0 +[pid 32681] timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 +[pid 32681] gettimeofday({1321185169, 363389}, NULL) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 112738342}) = 0 +[pid 32681] gettimeofday({1321185169, 363745}, NULL) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 113119396}) = 0 +[pid 32681] timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0 +[pid 32681] timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 +[pid 32681] ioctl(6, KVM_IRQ_LINE_STATUS, 0xbfd63594) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 113866418}) = 0 +[pid 32681] gettimeofday({1321185169, 364876}, NULL) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 114204170}) = 0 +[pid 32681] timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0 +[pid 32681] timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 +[pid 32681] ioctl(6, KVM_IRQ_LINE_STATUS, 0xbfd63594) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 114923815}) = 0 +[pid 32681] gettimeofday({1321185169, 365931}, NULL) = 0 +[pid 32681] clock_gettime(CLOCK_MONOTONIC, {79641, 117299536}) = 0 +[pid 32681] timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0 +[pid 32681] timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 250000}}, NULL) = 0 +[pid 32681] ioctl(6, KVM_IRQ_LINE_STATUS, 0xbfd63594) = 0 + +Host system is Debian stable 32 bit, guest is Windows XP 32 bit, qemu version is 0.15.1. 
+ +I start qemu with this commandline: +qemu -m 1024 -hda image -localtime -monitor tcp:127.0.0.1:10000,server,nowait -net nic,model=rtl8139 -net user,hostfwd=tcp:127.0.0.1:4444-:4444 -cpu host -daemonize -loadvm somestate +(Or without -loadvm switch and use loadvm command in the monitor, result is the same) + +Now I have 4 states/snapshots in the image, the last one I added is 995 Mb, but I can't load old one either (170Mb). My computer has 2Gb of RAM, I also tried to add 2Gb of swap and that does not help. Tried to delete new state with qemu-img snapshot -d, but still can't load old states. + +I'm not much help in identifying the reason... Probably this is related to rather big states comparing with amount of memory... +May be related: https://bugzilla.redhat.com/show_bug.cgi?id=586643 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/891525 b/results/classifier/gemma3:12b/kvm/891525 new file mode 100644 index 00000000..068a10fd --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/891525 @@ -0,0 +1,55 @@ + +Guest kernel crashes when booting a NUMA guest without explicitly specifying cpus= in -numa option + +Target: x86_64-softmmu + +Qemu Command line: [root@hs22 qemu-1.0-rc2]# ./x86_64-softmmu/qemu-system-x86_64 -smp sockets=2,cores=4,threads=2 -numa node,nodeid=0,mem=4g -numa node,nodeid=1,mem=1g -cpu core2duo -m 5g /home/bharata/f15-lvm -nographic --enable-kvm -net nic,macaddr=54:52:00:46:26:84,model=e1000 -net tap,script=/etc/qemu-if,ifname=vnet0 + +Qemu version: 1.0-rc2 + +When guest is started with -numa option without explicitly specifying the cpus=, guest kernel crashes as below: + +[ 0.252159] divide error: 0000 [#1] SMP +[ 0.252970] last sysfs file: +[ 0.252970] CPU 1 +[ 0.252970] Modules linked in: +[ 0.252970] +[ 0.252970] Pid: 2, comm: kthreadd Not tainted 2.6.38.6-26.rc1.fc15.x86_64 #1 Bochs Bochs +[ 0.252970] RIP: 0010:[<ffffffff8104f4d4>] [<ffffffff8104f4d4>] select_task_rq_fair+0x44a/0x571 +[ 0.252970] RSP: 0000:ffff88011767fc60 EFLAGS: 00010046 +[ 0.252970] RAX: 0000000000000000 RBX: ffff88015d6ad300 RCX: 0000000000000000 +[ 0.252970] RDX: 0000000000000000 RSI: 0000000000000100 RDI: 0000000000000000 +[ 0.252970] RBP: ffff88011767fd10 R08: 0000000000000100 R09: ffff88015d6ad338 +[ 0.252970] R10: 0000000000013840 R11: 0000000000800711 R12: 0000000000000000 +[ 0.252970] R13: ffff88015fc0f810 R14: 0000000000000001 R15: 0000000000000000 +[ 0.252970] FS: 0000000000000000(0000) GS:ffff88015fc00000(0000) knlGS:0000000000000000 +[ 0.252970] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b +[ 0.252970] CR2: 00000000ffffffff CR3: 0000000001a03000 CR4: 00000000000006e0 +[ 0.252970] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 +[ 0.252970] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 +[ 0.252970] Process kthreadd (pid: 2, threadinfo ffff88011767e000, task ffff88015d671720) +[ 0.252970] Stack: +[ 0.252970] ffffffff81475873 ffffffff81a02140 ffff88011767fce0 ffffffff8106c5a3 +[ 0.252970] ffff88015d6ad318 000000010000000e 0000000000013840 0000000000013840 +[ 0.252970] ffff88015d6ad318 0000007d00000001 ffff880100000000 ffff88015d6d81e8 +[ 0.252970] Call Trace: +[ 0.252970] [<ffffffff81475873>] ? _raw_spin_lock_irq+0x1c/0x1e +[ 0.252970] [<ffffffff8106c5a3>] ? alloc_pid+0x2e6/0x335 +[ 0.252970] [<ffffffff81048960>] select_task_rq+0x16/0x46 +[ 0.252970] [<ffffffff8104e29a>] wake_up_new_task+0x3a/0xde +[ 0.252970] [<ffffffff810546ce>] do_fork+0x1f1/0x2bf +[ 0.252970] [<ffffffff8100804e>] ? 
load_TLS+0x10/0x14 +[ 0.252970] [<ffffffff81008714>] ? __switch_to+0xc6/0x220 +[ 0.252970] [<ffffffff81010c1a>] kernel_thread+0x75/0x77 +[ 0.252970] [<ffffffff8106eacf>] ? kthread+0x0/0x8c +[ 0.252970] [<ffffffff8100a9e0>] ? kernel_thread_helper+0x0/0x10 +[ 0.252970] [<ffffffff8106ee93>] kthreadd+0xe7/0x124 +[ 0.252970] [<ffffffff8100a9e4>] kernel_thread_helper+0x4/0x10 +[ 0.252970] [<ffffffff8106edac>] ? kthreadd+0x0/0x124 +[ 0.252970] [<ffffffff8100a9e0>] ? kernel_thread_helper+0x0/0x10 +[ 0.252970] Code: 01 45 c0 8b 8d 78 ff ff ff 48 8b 75 90 89 cf e8 4a 28 ff ff 3b 05 bd 89 ae 00 89 c1 7c c5 48 8b 45 c0 8b 4b 08 31 d2 48 c1 e0 0a +[ 0.252970] f7 f1 45 85 e4 75 08 48 3b 45 b0 72 08 eb 0d 48 89 45 b8 eb +[ 0.252970] RIP [<ffffffff8104f4d4>] select_task_rq_fair+0x44a/0x571 +[ 0.252970] RSP <ffff88011767fc60> + +When cpus= is specified for each node explicitly, guest boots fine. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/897750 b/results/classifier/gemma3:12b/kvm/897750 new file mode 100644 index 00000000..0a04ff5a --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/897750 @@ -0,0 +1,255 @@ + +libvirt/kvm problem with disk attach/detach/reattach on running virt + +Release: Ubuntu 11.10 (Oneiric) +libvirt-bin: 0.9.2-4ubuntu15.1 +qemu-kvm: 0.14.1+noroms-0ubuntu6 + +Summary: With a running KVM virt, performing an 'attach-disk', then a 'detach-disk', then another 'attach-disk' +in an attempt to reattach the volume at the same point on the virt, fails, with the qemu reporting back to +libvirt a 'Duplicate ID' error. + +Expected behavior: The 2nd invocation of 'attach-disk' should have succeeded +Actual behavior: Duplicate ID error reported + + +I believe this is most likely a qemu-kvm issue, as the DOM kvm spits back at libvirt after the 'detach-disk' +does not show the just-detached disk. There is some kind of registry/lookup for devices in qemu-kvm +and for whatever reason, the entry for the disk does not get removed when it is detached from the +virt. 
Specifically, the error gets reported at the 2nd attach-disk attempt from: + + qemu-option.c:qemu_opts_create:697 + +684 QemuOpts *qemu_opts_create(QemuOptsList *list, const char *id, int fail_if_exists) +685 { +686 QemuOpts *opts = NULL; +687 +688 if (id) { +689 if (!id_wellformed(id)) { +690 qerror_report(QERR_INVALID_PARAMETER_VALUE, "id", "an identifier"); +691 error_printf_unless_qmp("Identifiers consist of letters, digits, '-', '.', '_', starting with a letter.\n"); +692 return NULL; +693 } +694 opts = qemu_opts_find(list, id); +695 if (opts != NULL) { +696 if (fail_if_exists) { +697 qerror_report(QERR_DUPLICATE_ID, id, list->name); <<<< ====== HERE =========== +698 return NULL; +699 } else { +700 return opts; +701 } +702 } +703 } +704 opts = qemu_mallocz(sizeof(*opts)); +705 if (id) { +706 opts->id = qemu_strdup(id); +707 } +708 opts->list = list; +709 loc_save(&opts->loc); +710 QTAILQ_INIT(&opts->head); +711 QTAILQ_INSERT_TAIL(&list->head, opts, next); +712 return opts; +713 } + +======================================== +Output of my attach/detach/attach +======================================== +virsh # attach-disk base1 /var/lib/libvirt/images/extrastorage.img vdb +Disk attached successfully + +virsh # dumpxml base1 +<domain type='kvm' id='2'> + <name>base1</name> + <uuid>9ebebe7f-7dfa-4735-a80c-c19ebe4e1459</uuid> + <memory>1048576</memory> + <currentMemory>1048576</currentMemory> + <vcpu>2</vcpu> + <os> + <type arch='x86_64' machine='pc-0.14'>hvm</type> + <boot dev='hd'/> + </os> + <features> + <acpi/> + <apic/> + <pae/> + </features> + <cpu match='exact'> + <model>Opteron_G3</model> + <vendor>AMD</vendor> + <feature policy='require' name='skinit'/> + <feature policy='require' name='vme'/> + <feature policy='require' name='mmxext'/> + <feature policy='require' name='fxsr_opt'/> + <feature policy='require' name='cr8legacy'/> + <feature policy='require' name='ht'/> + <feature policy='require' name='3dnowprefetch'/> + <feature policy='require' name='3dnowext'/> + <feature policy='require' name='wdt'/> + <feature policy='require' name='extapic'/> + <feature policy='require' name='pdpe1gb'/> + <feature policy='require' name='osvw'/> + <feature policy='require' name='cmp_legacy'/> + <feature policy='require' name='3dnow'/> + </cpu> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <devices> + <emulator>/usr/bin/kvm</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='raw'/> + <source file='/dev/rbd1'/> + <target dev='vda' bus='virtio'/> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> + </disk> + <disk type='block' device='disk'> + <driver name='qemu' type='raw'/> + <source dev='/var/lib/libvirt/images/extrastorage.img'/> + <target dev='vdb' bus='virtio'/> + <alias name='virtio-disk1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> + </disk> + <controller type='ide' index='0'> + <alias name='ide0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> + </controller> + <interface type='bridge'> + <mac address='52:54:00:a2:c1:2d'/> + <source bridge='br0'/> + <target dev='vnet0'/> + <model type='virtio'/> + <alias name='net0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <serial type='pty'> + <source path='/dev/pts/1'/> + <target port='0'/> + <alias name='serial0'/> + </serial> + <console type='pty' tty='/dev/pts/1'> + 
<source path='/dev/pts/1'/> + <target type='serial' port='0'/> + <alias name='serial0'/> + </console> + <input type='mouse' bus='ps2'/> + <graphics type='vnc' port='5900' autoport='yes'/> + <sound model='ich6'> + <alias name='sound0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </sound> + <video> + <model type='cirrus' vram='9216' heads='1'/> + <alias name='video0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + <memballoon model='virtio'> + <alias name='balloon0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> + </memballoon> + </devices> + <seclabel type='dynamic' model='apparmor'> + <label>libvirt-9ebebe7f-7dfa-4735-a80c-c19ebe4e1459</label> + <imagelabel>libvirt-9ebebe7f-7dfa-4735-a80c-c19ebe4e1459</imagelabel> + </seclabel> +</domain> + +virsh # detach-disk base1 vdb +Disk detached successfully + +virsh # dumpxml base1 +<domain type='kvm' id='2'> + <name>base1</name> + <uuid>9ebebe7f-7dfa-4735-a80c-c19ebe4e1459</uuid> + <memory>1048576</memory> + <currentMemory>1048576</currentMemory> + <vcpu>2</vcpu> + <os> + <type arch='x86_64' machine='pc-0.14'>hvm</type> + <boot dev='hd'/> + </os> + <features> + <acpi/> + <apic/> + <pae/> + </features> + <cpu match='exact'> + <model>Opteron_G3</model> + <vendor>AMD</vendor> + <feature policy='require' name='skinit'/> + <feature policy='require' name='vme'/> + <feature policy='require' name='mmxext'/> + <feature policy='require' name='fxsr_opt'/> + <feature policy='require' name='cr8legacy'/> + <feature policy='require' name='ht'/> + <feature policy='require' name='3dnowprefetch'/> + <feature policy='require' name='3dnowext'/> + <feature policy='require' name='wdt'/> + <feature policy='require' name='extapic'/> + <feature policy='require' name='pdpe1gb'/> + <feature policy='require' name='osvw'/> + <feature policy='require' name='cmp_legacy'/> + <feature policy='require' name='3dnow'/> + </cpu> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>restart</on_crash> + <devices> + <emulator>/usr/bin/kvm</emulator> + <disk type='file' device='disk'> + <driver name='qemu' type='raw'/> + <source file='/dev/rbd1'/> + <target dev='vda' bus='virtio'/> + <alias name='virtio-disk0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> + </disk> + <controller type='ide' index='0'> + <alias name='ide0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> + </controller> + <interface type='bridge'> + <mac address='52:54:00:a2:c1:2d'/> + <source bridge='br0'/> + <target dev='vnet0'/> + <model type='virtio'/> + <alias name='net0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <serial type='pty'> + <source path='/dev/pts/1'/> + <target port='0'/> + <alias name='serial0'/> + </serial> + <console type='pty' tty='/dev/pts/1'> + <source path='/dev/pts/1'/> + <target type='serial' port='0'/> + <alias name='serial0'/> + </console> + <input type='mouse' bus='ps2'/> + <graphics type='vnc' port='5900' autoport='yes'/> + <sound model='ich6'> + <alias name='sound0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> + </sound> + <video> + <model type='cirrus' vram='9216' heads='1'/> + <alias name='video0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + <memballoon model='virtio'> + <alias name='balloon0'/> + <address type='pci' 
domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> + </memballoon> + </devices> + <seclabel type='dynamic' model='apparmor'> + <label>libvirt-9ebebe7f-7dfa-4735-a80c-c19ebe4e1459</label> + <imagelabel>libvirt-9ebebe7f-7dfa-4735-a80c-c19ebe4e1459</imagelabel> + </seclabel> +</domain> + +virsh # attach-disk base1 /var/lib/libvirt/images/extrastorage.img vdb +error: Failed to attach disk +error: operation failed: adding virtio-blk-pci,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1 device failed: Duplicate ID 'virtio-disk1' for device +====================================================== \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/898 b/results/classifier/gemma3:12b/kvm/898 new file mode 100644 index 00000000..605a18ab --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/898 @@ -0,0 +1,2 @@ + +check-tcg sha512-mvx test is failing on s390x hosts diff --git a/results/classifier/gemma3:12b/kvm/899961 b/results/classifier/gemma3:12b/kvm/899961 new file mode 100644 index 00000000..6000ff3f --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/899961 @@ -0,0 +1,4 @@ + +qemu/kvm locks up when run 32bit userspace with 64bit kernel + +Applies to both qemu and qemu-kvm 1.0, but only when kernel is 64bit and userspace is 32bit, on x86. Did not happen with previous released versions, such as 0.15. Not all guests triggers this issue - so far, only (32bit) windows 7 guest shows it, but does that quite reliable: first boot of an old guest with new qemu (or qemu-kvm), windows finds a new CPU and suggests rebooting - hit "Reboot" and in a few seconds it will be locked up (including the monitor), with 100% CPU usage. Killable with -9. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/906804 b/results/classifier/gemma3:12b/kvm/906804 new file mode 100644 index 00000000..64c02cac --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/906804 @@ -0,0 +1,42 @@ + +SIGSEGV using sheepdog + +While doing a mkfs on a Sheepdog volume attached inside a VM, qemu-kvm segfaults: + + +Program received signal SIGSEGV, Segmentation fault. +aio_read_response (opaque=0x0) at /build/buildd-qemu-kvm_1.0+dfsg-2-amd64-V1Rh0p/qemu-kvm-1.0+dfsg/block/sheepdog.c:784 +784 /build/buildd-qemu-kvm_1.0+dfsg-2-amd64-V1Rh0p/qemu-kvm-1.0+dfsg/block/sheepdog.c: No such file or directory. + in /build/buildd-qemu-kvm_1.0+dfsg-2-amd64-V1Rh0p/qemu-kvm-1.0+dfsg/block/sheepdog.c +(gdb) bt +#0 aio_read_response (opaque=0x0) at /build/buildd-qemu-kvm_1.0+dfsg-2-amd64-V1Rh0p/qemu-kvm-1.0+dfsg/block/sheepdog.c:784 +#1 0x00007effed02b7bb in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at /build/buildd-qemu-kvm_1.0+dfsg-2-amd64-V1Rh0p/qemu-kvm-1.0+dfsg/coroutine-ucontext.c:125 +#2 0x00007effe89e4d60 in ?? () from /lib/x86_64-linux-gnu/libc.so.6 +#3 0x00007fff90ed7fd0 in ?? () +#4 0x0000000000000000 in ?? 
() +(gdb) bt full +#0 aio_read_response (opaque=0x0) at /build/buildd-qemu-kvm_1.0+dfsg-2-amd64-V1Rh0p/qemu-kvm-1.0+dfsg/block/sheepdog.c:784 + rsp = {proto_ver = 8 '\b', opcode = 8 '\b', flags = 61231, epoch = 32511, id = 4023393600, data_length = 32511, result = 4022027568, copies = 32511, pad = {3902624371, 32511, 4022027680, 32511, 4022027680, 32511}} + s = <optimized out> + fd = <optimized out> + aio_req = <optimized out> + acb = <optimized out> + idx = 139637703787936 +#1 0x00007effed02b7bb in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at /build/buildd-qemu-kvm_1.0+dfsg-2-amd64-V1Rh0p/qemu-kvm-1.0+dfsg/coroutine-ucontext.c:125 + self = 0x7effefbb45a0 + co = 0x7effefbb45a0 +#2 0x00007effe89e4d60 in ?? () from /lib/x86_64-linux-gnu/libc.so.6 +No symbol table info available. +#3 0x00007fff90ed7fd0 in ?? () +No symbol table info available. +#4 0x0000000000000000 in ?? () +No symbol table info available. +(gdb) info threads + Id Target Id Frame + 12 Thread 0x7eff4d3ea700 (LWP 10461) "kvm" 0x00007effe8d3264b in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0 + 11 Thread 0x7eff4c3e8700 (LWP 10460) "kvm" 0x00007effe8d3264b in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0 + 9 Thread 0x7eff49be3700 (LWP 10442) "kvm" 0x00007effe8d3264b in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0 + 8 Thread 0x7eff4a3e4700 (LWP 10441) "kvm" 0x00007effe8d3264b in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0 + 7 Thread 0x7eff493e2700 (LWP 10440) "kvm" 0x00007effe8d3264b in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0 + 6 Thread 0x7effd2741700 (LWP 10270) "kvm" 0x00007effe8a71407 in ioctl () from /lib/x86_64-linux-gnu/libc.so.6 +* 1 Thread 0x7effecf39900 (LWP 10267) "kvm" aio_read_response (opaque=0x0) at /build/buildd-qemu-kvm_1.0+dfsg-2-amd64-V1Rh0p/qemu-kvm-1.0+dfsg/block/sheepdog.c:784 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/916 b/results/classifier/gemma3:12b/kvm/916 new file mode 100644 index 00000000..9d17c26d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/916 @@ -0,0 +1,12 @@ + +QEMU system emulators immediately crash on AMD hosts when KVM is used +Description of problem: +``` +$ qemu-system-x86_64 -accel kvm +qemu-system-x86_64: ../target/i386/kvm/kvm-cpu.c:105: kvm_cpu_xsave_init: Assertion `esa->size == eax' failed. +Aborted (core dumped) +``` + +This is a regression introduced in + +https://lists.gnu.org/archive/html/qemu-devel/2022-03/msg04312.html diff --git a/results/classifier/gemma3:12b/kvm/918791 b/results/classifier/gemma3:12b/kvm/918791 new file mode 100644 index 00000000..142a16d6 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/918791 @@ -0,0 +1,29 @@ + +qemu-kvm dies when using vmvga driver and unity in the guest + +12.04's qemu-kvm has been unstable for me and Marc Deslauriers and I figured out it has something to do with the interaction of qemu-kvm, unity and the vmvga driver. This is a regression over qemu-kvm in 11.10. + +TEST CASE: +1. start a VM that uses unity (eg, 11.04, 11.10 or 12.04). My tests use unity-2d on an amd64 host and amd64 guests +2. on 11.04 and 11.10, open empathy via the messaging indicator and click 'Chat'. 
On 12.04, open empathy via the messaging indicator and click 'Chat', close the empathy wizard, move the empathy window over the unity launcher (so it autohides), then do 'ctrl+alt+t' to open a terminal + +When the launcher tries to auto(un)hide, qemu-kvm dies with this: +[10574.958149] do_general_protection: 132 callbacks suppressed +[10574.958154] kvm[13192] general protection ip:7fab9680ea0f sp:7ffff4440148 error:0 in qemu-system-x86_64[7fab966c4000+2c9000] + +Relevant libvirt xml: + <video> + <model type='vmvga' vram='9216' heads='1'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + +If I change to using 'cirrus', then qemu-kvm no longer crashes. E.g.: + <video> + <model type='cirrus' vram='9216' heads='1'/> + <alias name='video0'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> + </video> + +The workaround is therefore to use the cirrus driver instead of vmvga; however, being able to kill qemu-kvm in this manner is not ideal. Also, unfortunately unity-2d does not run with the cirrus driver under 11.04, so the security and SRU teams are unable to properly test updates in GUI applications under unity when using the current 12.04 qemu-kvm. + +I tried to report this via apport, but apport complained about a CRC error, so I could not. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/920 b/results/classifier/gemma3:12b/kvm/920 new file mode 100644 index 00000000..bb9e4d09 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/920 @@ -0,0 +1,13 @@ + +Aarch64 QEMU+KVM+OVMF RAM Bug +Description of problem: +OVMF EDK2 does not recognize any amount of RAM. It always detects as 0 MB and causes operating systems to crash. +Steps to reproduce: +1. +2. +3. +Additional information: +There was a problem with the Redmi Note 10S device via Termux. +  + + ovmf diff --git a/results/classifier/gemma3:12b/kvm/921208 b/results/classifier/gemma3:12b/kvm/921208 new file mode 100644 index 00000000..e8fccef7 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/921208 @@ -0,0 +1,45 @@ + +win7/x64 installer hangs on startup with 0x0000005d. + +hi, + +during booting win7/x64 installer i'm observing a bsod with 0x0000005d ( msdn: unsupported_processor ). + +used command line: qemu-system-x86_64 -m 2048 -hda w7-system.img -cdrom win7_x64.iso -boot d + +adding '-machine accel=kvm' instead of default tcg accel helps to boot. 
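For reference, a minimal sketch of the working invocation implied above — simply the reporter's own command line with the KVM accelerator enabled (the `-machine accel=kvm` spelling matches qemu-1.0; newer builds also accept `-accel kvm` as an equivalent):

```
qemu-system-x86_64 -machine accel=kvm -m 2048 -hda w7-system.img -cdrom win7_x64.iso -boot d
```

This only restates the workaround already noted; it does not change the behaviour of the default TCG CPU model.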
+ + +installed software: + +qemu-1.0 +linux-3.2.1 +glibc-2.14.1 +gcc-4.6.2 + +hw cpu: + +processor : 0..7 +vendor_id : GenuineIntel +cpu family : 6 +model : 42 +model name : Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz +stepping : 7 +microcode : 0x14 +cpu MHz : 1995.739 +cache size : 6144 KB +physical id : 0 +siblings : 8 +core id : 3 +cpu cores : 4 +apicid : 7 +initial apicid : 7 +fpu : yes +fpu_exception : yes +cpuid level : 13 +wp : yes +flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid +bogomips : 3992.23 +clflush size : 64 +cache_alignment : 64 +address sizes : 36 bits physical, 48 bits virtual \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/922355 b/results/classifier/gemma3:12b/kvm/922355 new file mode 100644 index 00000000..2a81d6fa --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/922355 @@ -0,0 +1,10 @@ + +qemu crashes when invoked on Pandaboard + +root@omap:~# uname -a +Linux omap 3.1.6-x6 #1 SMP Thu Dec 22 11:17:51 UTC 2011 armv7l armv7l +armv7l GNU/Linux + +root@omap:~# qemu +Could not initialize KVM, will disable KVM support +/build/buildd/qemu-kvm-0.14.1+noroms/tcg/arm/tcg-target.c:848: tcg fatal error \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/939027 b/results/classifier/gemma3:12b/kvm/939027 new file mode 100644 index 00000000..eb5a1f07 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/939027 @@ -0,0 +1,377 @@ + +Combining direct_io and non-direct_io leads to hang + +Version 0.12.2 - I know this isn't the latest but I looked through the changelogs and couldn't find any references to this being fixed. + +This is related to issues that many apps have, and is described in more detail here: + http://oss.sgi.com/archives/xfs/2010-07/msg00163.html +and: + https://bugs.launchpad.net/percona-xtrabackup/+bug/606981 + +When using both direct io and buffered io on the same file, you can cause corruption on the filesystem and other issues. +XFS out right hangs, but the problem exists on ext3 and other filesystems which silently carry on. + +**this is a data corruption issue**. + +This is the full stack trace we got from a recent hang: + +Feb 22 19:55:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 19:55:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 19:55:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 19:55:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 19:55:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 19:55:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 19:55:14 virt11 kernel: Call Trace: +Feb 22 19:55:14 virt11 kernel: [<ffffffff810573ce>] ? 
activate_task+0x2e/0x40 +Feb 22 19:55:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 19:55:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 19:55:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 19:55:14 virt11 kernel: [<ffffffff814dcdb2>] ? down_write+0x32/0x40 +Feb 22 19:55:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 19:55:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 19:55:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 19:55:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 19:55:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 19:55:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 19:55:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 19:55:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 19:55:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 19:55:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 19:55:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 19:55:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 19:55:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 19:55:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 19:55:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 19:57:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 19:57:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 19:57:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 19:57:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 19:57:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 19:57:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 19:57:14 virt11 kernel: Call Trace: +Feb 22 19:57:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 19:57:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 19:57:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 19:57:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 19:57:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 19:57:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 19:57:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 19:57:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 19:57:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 19:57:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 19:57:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 19:57:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 19:57:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 19:57:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 19:57:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 19:57:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 19:57:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 19:57:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 19:57:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 19:57:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 19:59:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 19:59:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 19:59:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 19:59:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 19:59:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 19:59:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 19:59:14 virt11 kernel: Call Trace: +Feb 22 19:59:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 19:59:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 19:59:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 19:59:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 19:59:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 19:59:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 19:59:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 19:59:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 19:59:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 19:59:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 19:59:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 19:59:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 19:59:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 19:59:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 19:59:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 19:59:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 19:59:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 19:59:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 19:59:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 19:59:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 20:01:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 20:01:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 20:01:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 20:01:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 20:01:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 20:01:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 20:01:14 virt11 kernel: Call Trace: +Feb 22 20:01:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 20:01:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 20:01:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 20:01:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 20:01:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 20:01:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 20:01:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 20:01:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 20:01:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 20:01:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 20:01:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 20:01:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 20:01:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 20:01:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 20:01:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 20:01:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 20:01:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 20:01:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 20:01:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 20:01:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 20:03:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 20:03:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 20:03:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 20:03:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 20:03:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 20:03:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 20:03:14 virt11 kernel: Call Trace: +Feb 22 20:03:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 20:03:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 20:03:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 20:03:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 20:03:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 20:03:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 20:03:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 20:03:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 20:03:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 20:03:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 20:03:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 20:03:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 20:03:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 20:03:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 20:03:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 20:03:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 20:03:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 20:03:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 20:03:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 20:03:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 20:05:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 20:05:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 20:05:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 20:05:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 20:05:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 20:05:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 20:05:14 virt11 kernel: Call Trace: +Feb 22 20:05:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 20:05:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 20:05:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 20:05:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 20:05:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 20:05:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 20:05:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 20:05:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 20:05:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 20:05:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 20:05:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 20:05:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 20:05:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 20:05:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 20:05:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 20:05:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 20:05:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 20:05:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 20:05:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 20:05:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 20:07:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 20:07:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 20:07:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 20:07:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 20:07:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 20:07:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 20:07:14 virt11 kernel: Call Trace: +Feb 22 20:07:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 20:07:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 20:07:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 20:07:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 20:07:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 20:07:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 20:07:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 20:07:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 20:07:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 20:07:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 20:07:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 20:07:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 20:07:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 20:07:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 20:07:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 20:07:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 20:07:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 20:07:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 20:07:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 20:07:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 20:09:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 20:09:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 20:09:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 20:09:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 20:09:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 20:09:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 20:09:14 virt11 kernel: Call Trace: +Feb 22 20:09:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 20:09:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 20:09:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 20:09:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 20:09:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 20:09:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 20:09:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 20:09:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 20:09:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 20:09:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 20:09:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 20:09:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 20:09:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 20:09:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 20:09:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 20:09:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 20:09:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 20:09:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 20:09:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 20:09:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 20:11:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 20:11:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 20:11:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 20:11:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 20:11:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 20:11:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 20:11:14 virt11 kernel: Call Trace: +Feb 22 20:11:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 20:11:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 20:11:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 20:11:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 20:11:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 20:11:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 20:11:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 20:11:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 20:11:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 20:11:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 20:11:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 20:11:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 20:11:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 20:11:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 20:11:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 20:11:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 20:11:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 20:11:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 20:11:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 20:11:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b +Feb 22 20:13:14 virt11 kernel: INFO: task qemu-kvm:18360 blocked for more than 120 seconds. +Feb 22 20:13:14 virt11 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. +Feb 22 20:13:14 virt11 kernel: qemu-kvm D 0000000000000002 0 18360 1 0x00000080 +Feb 22 20:13:14 virt11 kernel: ffff8808614457e8 0000000000000086 ffff8800282d5f80 ffff8800282d5f80 +Feb 22 20:13:14 virt11 kernel: ffff880861445768 ffffffff810573ce ffff880861445768 0000000000000086 +Feb 22 20:13:14 virt11 kernel: ffff8808f15bba78 ffff880861445fd8 000000000000f598 ffff8808f15bba78 +Feb 22 20:13:14 virt11 kernel: Call Trace: +Feb 22 20:13:14 virt11 kernel: [<ffffffff810573ce>] ? activate_task+0x2e/0x40 +Feb 22 20:13:14 virt11 kernel: [<ffffffff814dd755>] rwsem_down_failed_common+0x95/0x1d0 +Feb 22 20:13:14 virt11 kernel: [<ffffffff814dd8b3>] rwsem_down_write_failed+0x23/0x30 +Feb 22 20:13:14 virt11 kernel: [<ffffffff8126e573>] call_rwsem_down_write_failed+0x13/0x20 +Feb 22 20:13:14 virt11 kernel: [<ffffffff814dcdb2>] ? 
down_write+0x32/0x40 +Feb 22 20:13:14 virt11 kernel: [<ffffffffa0332d6e>] xfs_ilock+0x7e/0xd0 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffffa033ab02>] xfs_iomap+0x2e2/0x440 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffffa0354116>] __xfs_get_blocks+0x86/0x200 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffffa03542aa>] xfs_get_blocks_direct+0x1a/0x20 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffff811ac132>] __blockdev_direct_IO+0x872/0xc40 +Feb 22 20:13:14 virt11 kernel: [<ffffffffa0353f50>] xfs_vm_direct_IO+0xb0/0xf0 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffffa0354290>] ? xfs_get_blocks_direct+0x0/0x20 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffffa0353cc0>] ? xfs_end_io_direct+0x0/0xe0 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffff8106dd57>] ? current_fs_time+0x27/0x30 +Feb 22 20:13:14 virt11 kernel: [<ffffffff8110df22>] generic_file_direct_write+0xc2/0x190 +Feb 22 20:13:14 virt11 kernel: [<ffffffffa034bfcf>] ? xfs_trans_unlocked_item+0x4f/0x60 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffffa035dd1d>] xfs_write+0x4fd/0xb70 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffff8105dc72>] ? default_wake_function+0x12/0x20 +Feb 22 20:13:14 virt11 kernel: [<ffffffffa03599a1>] xfs_file_aio_write+0x61/0x70 [xfs] +Feb 22 20:13:14 virt11 kernel: [<ffffffff8117241a>] do_sync_write+0xfa/0x140 +Feb 22 20:13:14 virt11 kernel: [<ffffffff8107fbc2>] ? send_signal+0x42/0x80 +Feb 22 20:13:14 virt11 kernel: [<ffffffff8108e160>] ? autoremove_wake_function+0x0/0x40 +Feb 22 20:13:14 virt11 kernel: [<ffffffff8107ff96>] ? group_send_sig_info+0x56/0x70 +Feb 22 20:13:14 virt11 kernel: [<ffffffff81211d3b>] ? selinux_file_permission+0xfb/0x150 +Feb 22 20:13:14 virt11 kernel: [<ffffffff812051a6>] ? security_file_permission+0x16/0x20 +Feb 22 20:13:14 virt11 kernel: [<ffffffff81172718>] vfs_write+0xb8/0x1a0 +Feb 22 20:13:14 virt11 kernel: [<ffffffff810d1b62>] ? audit_syscall_entry+0x272/0x2a0 +Feb 22 20:13:14 virt11 kernel: [<ffffffff81173212>] sys_pwrite64+0x82/0xa0 +Feb 22 20:13:14 virt11 kernel: [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/939437 b/results/classifier/gemma3:12b/kvm/939437 new file mode 100644 index 00000000..6e732b5e --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/939437 @@ -0,0 +1,9 @@ + +spice is not supported by this qemu build.(ubuntu 12.04) + +$ kvm -spice port=5900,addr=127.0.0.1,disable-ticketing +kvm: -spice port=5900,addr=127.0.0.1,disable-ticketing: there is no option group "spice" +spice is not supported by this qemu build. + +$ kvm -version +QEMU emulator version 1.0 (qemu-kvm-1.0), Copyright (c) 2003-2008 Fabrice Bellard \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/945 b/results/classifier/gemma3:12b/kvm/945 new file mode 100644 index 00000000..5b6a4ca9 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/945 @@ -0,0 +1,11 @@ + +For QEMU 7.0.0-rc1, nbd-server-add fails with qcow2 image with iothread in migration context +Description of problem: +Upon adding the drive for NBD (via QMP), there is an error message +````kvm: ../block.c:3657: bdrv_open_child: Assertion `qemu_in_main_thread()' failed.```` +and then the process aborts. +Steps to reproduce: +1. Create image: `qemu-img create -f qcow2 /root/target-disk.qcow2 4G` +2. Start QEMU as mentioned above. +3. Issue `nbd-server-start` QMP command (I used type unix). +4. Issue `nbd-server-add` command for the single disk. 
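A minimal sketch of the QMP exchange behind steps 3 and 4, assuming a hypothetical UNIX socket path and an illustrative device id `drive0` (the real id depends on how the qcow2 drive and its iothread were configured in step 2):

```
{ "execute": "qmp_capabilities" }
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "unix",
                           "data": { "path": "/tmp/nbd.sock" } } } }
{ "execute": "nbd-server-add",
  "arguments": { "device": "drive0", "writable": true } }
```

On the affected 7.0.0-rc1 build, the `bdrv_open_child` assertion quoted above fires when the last command is issued for a qcow2 drive bound to an iothread.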
diff --git a/results/classifier/gemma3:12b/kvm/956 b/results/classifier/gemma3:12b/kvm/956 new file mode 100644 index 00000000..73b5cf65 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/956 @@ -0,0 +1,43 @@ + +ARM: When 'virsh dump' exports vmcore, specifies --format compression format, virtual machine assert hangs +Description of problem: +**ARM: virsh dump exports vmcore, specifies --format compression format, virtual machine assert hangs** + +**why 'virsh dump' page size configured as target page size (64KiB), but 'Implement kvm-steal-time' page size configured as host page size (4KB)?** +Steps to reproduce: +The vm image page size is configured as 64KiB, and the host page size is configured as 4KiB + +1.start vm + +2.Execute the virsh dump command to export vmcore + +Specify the compression format of vmcore, --format (kdump-zlib, kdump-snappy, kdump-lzo) + +/usr/bin/virsh dump avocado-vt-vm1 /var/tmp/vm.core --memory-only --format kdump-zlib + +/usr/bin/virsh dump avocado-vt-vm1 /var/tmp/vm.core --memory-only --format kdump-lzo + +/usr/bin/virsh dump avocado-vt-vm1 /var/tmp/vm.core --memory-only --format kdump-snappy + +**expected results**: The vmcore file is successfully exported and the virtual machine is running normally. + +**actual results**: The vmcore file is not exported normally, and the virtual machine is shut down abnormally. +Additional information: +qemu log: + + +host page size: + + +vm page size: + + +dump.c: get_next_page assert: + + +The code for the error assert exit is shown above. Here, it will check whether the memory to be dumped is actually aligned with the termination address. It needs to be aligned with the page size of the virtual machine. You can see through gdb that it is 64KiB. + + + +After binary search, it was found that a feature of kvm_steal_time was added to arm in version 5.2. Added the following code: + diff --git a/results/classifier/gemma3:12b/kvm/961 b/results/classifier/gemma3:12b/kvm/961 new file mode 100644 index 00000000..db5103d2 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/961 @@ -0,0 +1,2 @@ + +Property not found when using aarch64 `-machine=virt,secure=on` with KVM enabled diff --git a/results/classifier/gemma3:12b/kvm/977391 b/results/classifier/gemma3:12b/kvm/977391 new file mode 100644 index 00000000..5a0a1ce0 --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/977391 @@ -0,0 +1,11 @@ + +BUG: soft lockup - CPU#8 stuck for 61s! [kvm:*] in lucid + +Two days back my KVM base machine got hung up all of a sudden. +Not sure what exactly happened. + +cat /proc/version_signature +Ubuntu 2.6.32-28.55-server 2.6.32.27+drm33.12 + + +-Rahul N. \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/992067 b/results/classifier/gemma3:12b/kvm/992067 new file mode 100644 index 00000000..349633aa --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/992067 @@ -0,0 +1,23 @@ + +Windows 2008R2 very slow cold boot when >4GB memory + +I've been having a consistent problem booting 2008R2 guests with 4096MB of RAM or greater. On the initial boot the KVM process starts out with a ~200MB memory allocation and will use 100% of all CPU allocated to it. The RES memory of the KVM process slowly rises by around 200mb every few minutes until it reaches it's memory allocation (several hours in some cases). 
Whilst this is happening the guest will usually blue screen with the message of - + +A clock interrupt was not received on a secondary processor within the allocated time interval + +If I let the KVM process continue to run it will eventually allocate the required memory the guest will run at full speed, usually restarting after the blue screen and booting into startup repair. From here you can restart it and it will boot perfectly. Once booted the guest has no performance issues at all. + +I've tried everything I could think of. Removing PAE, playing with huge pages, different kernels, different userspaces, different systems, different backing file systems, different processor feature set, with or without Virtio etc. My best theory is that the problem is caused by Windows 2008 zeroing out all the memory on boot and something is causing this to be held up or slowed to a crawl. The hosts always have memory free to boot the guest and are not using swap at all. + +Nothing so far has solved the issue. A few observations I've made about the issue are - +Large memory 2008R2 guests seem to boot fine (or with a small delay) when they are the first to boot on the host after a reboot +Sometimes dropping the disk cache (echo 1 > /proc/sys/vm/drop_caches) will cause them to boot faster + + +The hosts I've tried are - +All Nehalem based (5540, 5620 and 5660) +Host ram of 48GB, 96GB and 192GB +Storage on NFS, Gluster and local (ext4, xfs and zfs) +QED, QCOW and RAW formats +Scientific Linux 6.1 with the standard kernel 2.6.32, 2.6.38 and 3.3.1 +KVM userspaces 0.12, 0.14 and (currently) 0.15.1 \ No newline at end of file diff --git a/results/classifier/gemma3:12b/kvm/994662 b/results/classifier/gemma3:12b/kvm/994662 new file mode 100644 index 00000000..9db00a3d --- /dev/null +++ b/results/classifier/gemma3:12b/kvm/994662 @@ -0,0 +1,160 @@ + +QEMU crashes on ioport access + +While running a fuzzer inside the guest, QEMU crashed with the following message and dumped the state of all vcpus: + + +qemu: hardware error: register_ioport_read: invalid opaque for address 0x0Al +CPU #0: +RAX=ffff880007a73000 RBX=ffff8800095b6000 RCX=ffff880007a33530 RDX=ffff880007a33530 +RSI=0000000000aa6000 RDI=0000000000aa6000 RBP=ffff880007c13c68 RSP=ffff880007c13c48 +R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000001 +R12=0000000000aa6000 R13=8000000033556045 R14=0000000000aa6000 R15=ffff8800095b6000 +RIP=ffffffff8108ae02 RFL=00000282 [--S----] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 ffffffff 00000000 +CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0000 0000000000000000 ffffffff 00000000 +FS =0000 00007f7de18e8700 ffffffff 00000000 +GS =0000 ffff88000d800000 ffffffff 00000000 +LDT=0000 0000000000000000 ffffffff 00000000 +TR =0040 ffff88000d9d2540 00002087 00008b00 DPL=0 TSS64-busy +GDT= ffff88000d804000 0000007f +IDT= ffffffff8436d000 00000fff +CR0=8005003b CR2=00007f2f25752e9c CR3=0000000007a3d000 CR4=000407f0 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80 +FPR0=0000000000000000 0000 FPR1=0000000000000000 0000 +FPR2=0000000000000000 0000 FPR3=0000000000000000 0000 +FPR4=0000000000000000 0000 FPR5=0000000000000000 0000 +FPR6=0000000000000000 0000 FPR7=0000000000000000 0000 +XMM00=0000000000ff0000000000ff00000000 XMM01=25252525252525252525252525252525 
+XMM02=00000000000000000000000000000000 XMM03=ffff0000000000000000000000000000 +XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000 +XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000 +XMM08=00000000000000000000000000000000 XMM09=00000000000000000000000000000000 +XMM10=00000000000000000000000000000000 XMM11=00000000000000000000000000000000 +XMM12=00000000000000000000000000000000 XMM13=00000000000000000000000000000000 +XMM14=00000000000000000000000000000000 XMM15=00000000000000000000000000000000 +CPU #1: +RAX=ffff88001b588000 RBX=ffffea00004ab300 RCX=ffffc90000304000 RDX=0000000000000005 +RSI=ffffc90000304000 RDI=0050000000380028 RBP=ffff880012681c38 RSP=ffff880012681c28 +R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000002 +R12=0000000000000004 R13=ffff88001bfd3000 R14=0000000000fef000 R15=ffff88000ed51000 +RIP=ffffffff811daf87 RFL=00000006 [-----P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 ffffffff 00000000 +CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0000 0000000000000000 ffffffff 00000000 +FS =0000 00007fe38bb99700 ffffffff 00000000 +GS =0000 ffff88001b800000 ffffffff 00000000 +LDT=0000 0000000000000000 ffffffff 00000000 +TR =0040 ffff88001b9d2540 00002087 00008b00 DPL=0 TSS64-busy +GDT= ffff88001b804000 0000007f +IDT= ffffffff8436d000 00000fff +CR0=8005003b CR2=00007f2f25ac4518 CR3=000000001173e000 CR4=000407e0 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80 +FPR0=0000000000000000 0000 FPR1=0000000000000000 0000 +FPR2=0000000000000000 0000 FPR3=0000000000000000 0000 +FPR4=0000000000000000 0000 FPR5=0000000000000000 0000 +FPR6=0000000000000000 0000 FPR7=0000000000000000 0000 +XMM00=0000000000000000ff0000ff000000ff XMM01=25252525252525252525252525252525 +XMM02=00000000000000000000000000000000 XMM03=0000ff000000ff0000000000ff000000 +XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000 +XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000 +XMM08=00000000000000000000000000000000 XMM09=00000000000000000000000000000000 +XMM10=00000000000000000000000000000000 XMM11=00000000000000000000000000000000 +XMM12=00000000000000000000000000000000 XMM13=00000000000000000000000000000000 +XMM14=00000000000000000000000000000000 XMM15=00000000000000000000000000000000 +CPU #2: +RAX=000000000000001d RBX=0000000000000080 RCX=0000000000000080 RDX=0000000000000cfc +RSI=0000000000000000 RDI=0000000000000086 RBP=ffff8800121f7de8 RSP=ffff8800121f7db8 +R8 =0000000000000004 R9 =000000000000001d R10=0000000000000000 R11=0000000000000002 +R12=ffff88001b7b0000 R13=000000000000001d R14=0000000000000084 R15=ffff88003523ad00 +RIP=ffffffff82870591 RFL=00000046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 ffffffff 00000000 +CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0000 0000000000000000 ffffffff 00000000 +FS =0000 00007f2f25ce7700 ffffffff 00000000 +GS =0000 ffff880029800000 ffffffff 00000000 +LDT=0000 0000000000000000 ffffffff 00000000 +TR =0040 ffff8800299d2540 00002087 00008b00 DPL=0 TSS64-busy +GDT= ffff880029804000 0000007f +IDT= ffffffff8436d000 00000fff +CR0=80050033 CR2=00007f2f25750003 CR3=0000000011b88000 CR4=000407e0 
+DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80 +FPR0=0000000000000000 0000 FPR1=0000000000000000 0000 +FPR2=0000000000000000 0000 FPR3=0000000000000000 0000 +FPR4=0000000000000000 0000 FPR5=0000000000000000 0000 +FPR6=0000000000000000 0000 FPR7=0000000000000000 0000 +XMM00=0000000000000000ff0000ff000000ff XMM01=25252525252525252525252525252525 +XMM02=00000000000000000000000000000000 XMM03=0000ff000000ff0000000000ff000000 +XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000 +XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000 +XMM08=00000000000000000000000000000000 XMM09=00000000000000000000000000000000 +XMM10=00000000000000000000000000000000 XMM11=00000000000000000000000000000000 +XMM12=00000000000000000000000000000000 XMM13=00000000000000000000000000000000 +XMM14=00000000000000000000000000000000 XMM15=00000000000000000000000000000000 +CPU #3: +RAX=0000000000000086 RBX=0000000000000086 RCX=0000000000000001 RDX=ffff88001afb3000 +RSI=0000000000000001 RDI=ffffffff810f1904 RBP=ffff88001afb9c50 RSP=ffff88001afb9c38 +R8 =0000000000000000 R9 =0000000000000001 R10=0000000000000000 R11=0000000000000001 +R12=ffff88001afb38e0 R13=0000000000000001 R14=ffffffff82d967a8 R15=ffffffff82d967a8 +RIP=ffffffff811171ee RFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0 +ES =0000 0000000000000000 ffffffff 00000000 +CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA] +SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA] +DS =0000 0000000000000000 ffffffff 00000000 +FS =0000 0000000000000000 ffffffff 00000000 +GS =0000 ffff880035a00000 ffffffff 00000000 +LDT=0000 0000000000000000 ffffffff 00000000 +TR =0040 ffff880035bd2540 00002087 00008b00 DPL=0 TSS64-busy +GDT= ffff880035a04000 0000007f +IDT= ffffffff8436d000 00000fff +CR0=8005003b CR2=0000000000af7130 CR3=000000002cffb000 CR4=000407e0 +DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 +DR6=00000000ffff0ff0 DR7=0000000000000400 +EFER=0000000000000d01 +FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80 +FPR0=0000000000000000 0000 FPR1=0000000000000000 0000 +FPR2=0000000000000000 0000 FPR3=0000000000000000 0000 +FPR4=0000000000000000 0000 FPR5=0000000000000000 0000 +FPR6=0000000000000000 0000 FPR7=0000000000000000 0000 +XMM00=0000000000000000ff0000ff000000ff XMM01=25252525252525252525252525252525 +XMM02=00000000000000000000000000000000 XMM03=0000ff000000ff0000000000ff000000 +XMM04=00000000000000000000000000000000 XMM05=00000000000000000000000000000000 +XMM06=00000000000000000000000000000000 XMM07=00000000000000000000000000000000 +XMM08=00000000000000000000000000000000 XMM09=00000000000000000000000000000000 +XMM10=00000000000000000000000000000000 XMM11=00000000000000000000000000000000 +XMM12=00000000000000000000000000000000 XMM13=00000000000000000000000000000000 +XMM14=00000000000000000000000000000000 XMM15=00000000000000000000000000000000 + +And this is the trace: + +Thread 5 (Thread 0x7fffee7b8700 (LWP 1754)): +#0 0x00007ffff40d3ad5 in *__GI_raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64 +#1 0x00007ffff40d4f56 in *__GI_abort () at abort.c:93 +#2 0x000055555572a0fa in hw_error (fmt=<optimized out>) at /home/sasha/work/src/qemu-kvm/cpus.c:357 +#3 0x0000555555750265 in register_ioport_read (start=<optimized out>, length=<optimized out>, size=<optimized out>, + func=<optimized out>, opaque=<optimized out>) 
at /home/sasha/work/src/qemu-kvm/ioport.c:154 +#4 0x0000555555750364 in ioport_register (ioport=0x5555565401b8) at /home/sasha/work/src/qemu-kvm/ioport.c:240 +#5 0x000055555575e910 in access_with_adjusted_size (addr=0, value=0x7fffee7b7db8, size=4, access_size_min=<optimized out>, + access_size_max=<optimized out>, access=0x55555575e830 <memory_region_write_accessor>, opaque=0x5555564c1eb0) + at /home/sasha/work/src/qemu-kvm/memory.c:359 +#6 0x0000555555760212 in memory_region_iorange_write (iorange=<optimized out>, offset=0, width=4, data=29) + at /home/sasha/work/src/qemu-kvm/memory.c:436 +#7 0x000055555575375d in kvm_handle_io (count=1, size=4, direction=1025, data=<optimized out>, port=3324) + at /home/sasha/work/src/qemu-kvm/kvm-all.c:1132 +#8 kvm_cpu_exec (env=0x55555648b810) at /home/sasha/work/src/qemu-kvm/kvm-all.c:1274 +#9 0x0000555555729781 in qemu_kvm_cpu_thread_fn (arg=0x55555648b810) at /home/sasha/work/src/qemu-kvm/cpus.c:733 +#10 0x00007ffff647ad0c in start_thread (arg=0x7fffee7b8700) at pthread_create.c:301 +#11 0x00007ffff417af1d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115 \ No newline at end of file |
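For context on the abort above: the legacy ioport path in this qemu-kvm tree records one opaque pointer per port, and registering a handler for a port whose recorded opaque differs ends in hw_error(). A rough, illustrative reconstruction of that check — paraphrased from the error message and frames #3–#4 of the trace, not copied from the source:

```c
/* Illustrative reconstruction only, not the literal qemu-kvm ioport.c:
 * each port keeps a single opaque owner; a conflicting registration
 * triggers the hw_error() seen in frame #3 of the backtrace. */
static void register_ioport_read(pio_addr_t start, int length, int size,
                                 IOPortReadFunc *func, void *opaque)
{
    int i;

    for (i = start; i < start + length; i += size) {
        if (ioport_opaque[i] != NULL && ioport_opaque[i] != opaque) {
            hw_error("register_ioport_read: invalid opaque for address 0x%x", i);
        }
        ioport_opaque[i] = opaque;   /* record (or re-record) the owner */
    }
}
```

In the crash above this registration is reached lazily from the KVM PIO exit path (frames #7 down to #3), where the fuzzer-driven port write collides with an opaque already recorded for that port.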