Diffstat
 -rw-r--r-- results/classifier/108/other/1815    |  97
 -rw-r--r-- results/classifier/108/other/1815009 |  29
 -rw-r--r-- results/classifier/108/other/1815078 |  74
 -rw-r--r-- results/classifier/108/other/1815143 | 112
 -rw-r--r-- results/classifier/108/other/1815263 | 123
 -rw-r--r-- results/classifier/108/other/1815721 |  88
 -rw-r--r-- results/classifier/108/other/1815993 |  71
 7 files changed, 594 insertions(+), 0 deletions(-)
diff --git a/results/classifier/108/other/1815 b/results/classifier/108/other/1815
new file mode 100644
index 00000000..02eb3484
--- /dev/null
+++ b/results/classifier/108/other/1815
@@ -0,0 +1,97 @@
+other: 0.872
+semantic: 0.829
+graphic: 0.828
+device: 0.806
+performance: 0.791
+PID: 0.766
+KVM: 0.744
+debug: 0.742
+socket: 0.720
+network: 0.695
+vnc: 0.689
+permissions: 0.682
+boot: 0.665
+files: 0.633
+
+Null pointer access in nvme_directive_receive()
+Description of problem:
+Got a "member access within null pointer" error when fuzzing nvme.
+Steps to reproduce:
+Minimized reproducer for the error:
+
+```plaintext
+cat << EOF | ./qemu-system-x86_64 -display none -machine accel=qtest, -m 512M -machine q35 \
+-nodefaults -drive file=null-co://,if=none,format=raw,id=disk0 -device \
+nvme,drive=disk0,serial=1 -qtest /dev/null -qtest stdio
+outl 0xcf8 0x80000810
+outl 0xcfc 0xe0000000
+outl 0xcf8 0x80000804
+outw 0xcfc 0x06
+write 0xe0000024 0x4 0x040002
+write 0xe0000014 0x4 0x61004600
+write 0xe0001000 0x1 0x04
+write 0x0 0x1 0x1a
+write 0x4 0x1 0x01
+write 0x2c 0x1 0x01
+EOF
+```
+Additional information:
+The crash report triggered by the reproducer is:
+
+```plaintext
+[I 0.000000] OPENED
+[R +0.025407] outl 0xcf8 0x80000810
+[S +0.025443] OK
+OK
+[R +0.025456] outl 0xcfc 0xe0000000
+[S +0.025470] OK
+OK
+[R +0.025476] outl 0xcf8 0x80000804
+[S +0.025483] OK
+OK
+[R +0.025489] outw 0xcfc 0x06
+[S +0.025934] OK
+OK
+[R +0.025946] write 0xe0000024 0x4 0x040002
+[S +0.025958] OK
+OK
+[R +0.025964] write 0xe0000014 0x4 0x61004600
+[S +0.025988] OK
+OK
+[R +0.026025] write 0xe0001000 0x1 0x04
+[S +0.026041] OK
+OK
+[R +0.026048] write 0x0 0x1 0x1a
+[S +0.026256] OK
+OK
+[R +0.026268] write 0x4 0x1 0x01
+[S +0.026279] OK
+OK
+[R +0.026292] write 0x2c 0x1 0x01
+[S +0.026303] OK
+OK
+../hw/nvme/ctrl.c:6890:29: runtime error: member access within null pointer of type 'NvmeEnduranceGroup' (aka 'struct NvmeEnduranceGroup')
+SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../hw/nvme/ctrl.c:6890:29 in
+AddressSanitizer:DEADLYSIGNAL
+=================================================================
+==1085476==ERROR: AddressSanitizer: SEGV on unknown address 0x000000001fc8 (pc 0x56306b765ebf bp 0x7ffff17fd890 sp 0x7ffff17f6a00 T0)
+==1085476==The signal is caused by a READ memory access.
+    #0 0x56306b765ebf in nvme_directive_receive ../hw/nvme/ctrl.c:6890:33
+    #1 0x56306b765ebf in nvme_admin_cmd ../hw/nvme/ctrl.c:6958:16
+    #2 0x56306b765ebf in nvme_process_sq ../hw/nvme/ctrl.c:7015:13
+    #3 0x56306cda2c3b in aio_bh_call ../util/async.c:169:5
+    #4 0x56306cda3384 in aio_bh_poll ../util/async.c:216:13
+    #5 0x56306cd3f15b in aio_dispatch ../util/aio-posix.c:423:5
+    #6 0x56306cda72da in aio_ctx_dispatch ../util/async.c:358:5
+    #7 0x7fa321cc417c in g_main_context_dispatch (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x5217c) (BuildId: 5fdb313daf182a33a858ba2cc945211b11d34561)
+    #8 0x56306cda840f in glib_pollfds_poll ../util/main-loop.c:290:9
+    #9 0x56306cda840f in os_host_main_loop_wait ../util/main-loop.c:313:5
+    #10 0x56306cda840f in main_loop_wait ../util/main-loop.c:592:11
+    #11 0x56306bd17f76 in qemu_main_loop ../softmmu/runstate.c:732:9
+    #12 0x56306c721835 in qemu_default_main ../softmmu/main.c:37:14
+    #13 0x7fa320aeb082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16
+    #14 0x56306af0309d in _start (./qemu-system-x86_64+0x1e9109d)
+
+AddressSanitizer can not provide additional info.
+SUMMARY: AddressSanitizer: SEGV ../hw/nvme/ctrl.c:6890:33 in nvme_directive_receive
+```
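The UBSAN line pins the fault to an unchecked NvmeEnduranceGroup pointer in nvme_directive_receive(). Below is a minimal standalone sketch of the kind of NULL guard that prevents this class of crash; the struct layout, field names, and status codes are illustrative stand-ins, not QEMU's actual definitions or its eventual fix.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the QEMU types named in the trace. */
typedef struct NvmeEnduranceGroup {
    uint8_t fdp_enabled;
} NvmeEnduranceGroup;

typedef struct NvmeNamespace {
    NvmeEnduranceGroup *endgrp;   /* may be NULL for a fuzzed config */
} NvmeNamespace;

enum { NVME_SUCCESS = 0x0000, NVME_INVALID_FIELD = 0x0002 };

static uint16_t directive_receive(NvmeNamespace *ns)
{
    /* Guard before dereferencing: the guest-controlled command sequence
     * in the reproducer reaches this path with no endurance group set. */
    if (!ns->endgrp) {
        return NVME_INVALID_FIELD;
    }
    return ns->endgrp->fdp_enabled ? NVME_SUCCESS : NVME_INVALID_FIELD;
}

int main(void)
{
    NvmeNamespace ns = { .endgrp = NULL };
    printf("status: 0x%04x\n", directive_receive(&ns)); /* 0x0002, no SEGV */
    return 0;
}
```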
diff --git a/results/classifier/108/other/1815009 b/results/classifier/108/other/1815009
new file mode 100644
index 00000000..2b6f3715
--- /dev/null
+++ b/results/classifier/108/other/1815009
@@ -0,0 +1,29 @@
+graphic: 0.796
+device: 0.686
+other: 0.644
+network: 0.611
+semantic: 0.577
+permissions: 0.531
+files: 0.524
+performance: 0.521
+vnc: 0.454
+socket: 0.430
+debug: 0.430
+PID: 0.419
+boot: 0.371
+KVM: 0.247
+
+Qemu evdev multiple guests/host switch
+
+Hello,
+
+QEMU up to version 3.1
+
+It would be nice if a passed-through evdev device could be switched (using lctrl + rctrl) between all running guests configured for evdev and the host. Currently only the last-started guest and the host can be switched between, so the previously started guests cannot be controlled.
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting older bugs to "Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has been solved with a newer version of QEMU already. Thank you and sorry for the inconvenience.
+
+[Expired for QEMU because there has been no activity for 60 days.]
diff --git a/results/classifier/108/other/1815078 b/results/classifier/108/other/1815078
new file mode 100644
index 00000000..d79fc201
--- /dev/null
+++ b/results/classifier/108/other/1815078
@@ -0,0 +1,74 @@
+other: 0.931
+files: 0.907
+semantic: 0.882
+performance: 0.830
+permissions: 0.810
+graphic: 0.792
+PID: 0.777
+device: 0.766
+network: 0.736
+vnc: 0.732
+debug: 0.723
+socket: 0.713
+boot: 0.647
+KVM: 0.616
+
+Qemu 3.1.0 risc-v mie.MEIE
+
+Hello all,
+
+There is a bug in qemu for RISC-V related to the mie register: when we try to set the MEIE bit (11), nothing is done, even when we are running in machine mode.
+
+    li a0, 1 << 11
+    csrs mie, a0
+
+When we read mie back, it is as though nothing was done.
+
+Going through the qemu source code I was able to correct it: in file op_helper.c, line 94, the variable all_ints should be initialized with:
+
+    uint64_t all_ints = delegable_ints | MIP_MSIP | MIP_MTIP | MIP_MEIP;
+
+That is, MIP_MEIP was missing.
+
+I've successfully triggered uart interrupts with this patch (virt machine).
+
+All the best,
+Pharos team
+
+It looks like this is fixed as of c7b951718815 ("RISC-V: Implement modular CSR helper interface"), which was merged on January 14th.
+
+Good news,
+
+Thanks
+
+LMK if that patch doesn't fix your issue. QEMU master is pretty stable for RISC-V right now, and since there's a handful of intertwined patches the best bet is probably just to use the commit hash above.
+
+This should be fixed in the 4.0 release, which is targeted for the middle of April.
+
+OK, I'll give it a try and give you some feedback.
+
+Thanks
+
+So I tried it but got the error:
+
+ERROR: missing file ../qemu-3.1.0/ui/keycodemapdb/README
+
+This is not a GIT checkout but module content appears to
+be missing. Do not use 'git archive' or GitHub download links
+to acquire QEMU source archives. Non-GIT builds are only
+supported with source archives linked from:
+
+    https://www.qemu.org/download/
+
+Developers working with GIT can use scripts/archive-source.sh
+if they need to create valid source archives.
+
+Makefile.cross-compiler:259: recipe for target 'qemu-3.1' failed
+make: *** [qemu-3.1] Error 1
+
+I get that this is a standard error, but I don't have time right now to work around it. Maybe later I can.
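The reported one-liner is easy to demonstrate in isolation: with MIP_MEIP absent from the mask of writable interrupt bits, a csrs to mie bit 11 is silently dropped. A standalone sketch of that masking behavior, using the MIP_* bit positions from the RISC-V privileged spec and an illustrative value for delegable_ints:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Bit positions per the RISC-V privileged spec. */
#define MIP_MSIP (1ULL << 3)   /* machine software interrupt */
#define MIP_MTIP (1ULL << 7)   /* machine timer interrupt */
#define MIP_MEIP (1ULL << 11)  /* machine external interrupt */

int main(void)
{
    uint64_t delegable_ints = 0x222;  /* illustrative: SSIP|STIP|SEIP */
    uint64_t mie = 0;
    uint64_t wr  = 1ULL << 11;        /* csrs mie, (1 << 11) */

    /* Buggy mask (MIP_MEIP missing): the MEIE write is silently masked. */
    uint64_t all_ints_old = delegable_ints | MIP_MSIP | MIP_MTIP;
    printf("old: mie = 0x%" PRIx64 "\n", mie | (wr & all_ints_old));

    /* Fixed mask as reported above: bit 11 now sticks. */
    uint64_t all_ints_new = delegable_ints | MIP_MSIP | MIP_MTIP | MIP_MEIP;
    printf("new: mie = 0x%" PRIx64 "\n", mie | (wr & all_ints_new));
    return 0;
}
```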
diff --git a/results/classifier/108/other/1815143 b/results/classifier/108/other/1815143
new file mode 100644
index 00000000..d54bb183
--- /dev/null
+++ b/results/classifier/108/other/1815143
@@ -0,0 +1,112 @@
+other: 0.779
+debug: 0.748
+semantic: 0.745
+permissions: 0.731
+device: 0.723
+socket: 0.697
+performance: 0.676
+KVM: 0.652
+PID: 0.634
+boot: 0.634
+graphic: 0.606
+network: 0.582
+vnc: 0.574
+files: 0.518
+
+qemu-system-s390x fails when running without kvm: fatal: EXECUTE on instruction prefix 0x7f4 not implemented
+
+Just wondering if TCG implements instruction prefix 0x7f4.
+
+server3:~ # zcat /boot/vmlinux-4.4.162-94.72-default.gz > /tmp/kernel
+
+--> starting qemu with kvm enabled works fine
+server3:~ # qemu-system-s390x -nographic -kernel /tmp/kernel -initrd /boot/initrd -enable-kvm
+Initializing cgroup subsys cpuset
+Initializing cgroup subsys cpu
+Initializing cgroup subsys cpuacct
+Linux version 4.4.162-94.72-default (geeko@buildhost) (gcc version 4.8.5 (SUSE Linux) ) #1 SMP Mon Nov 12 18:57:45 UTC 2018 (9de753f)
+setup.289988: Linux is running under KVM in 64-bit mode
+setup.b050d0: The maximum memory size is 128MB
+numa.196305: NUMA mode: plain
+Write protected kernel read-only data: 8692k
+[...]
+
+--> but starting qemu without kvm enabled fails
+server3:~ # qemu-system-s390x -nographic -kernel /tmp/kernel -initrd /boot/initrd
+qemu: fatal: EXECUTE on instruction prefix 0x7f4 not implemented
+
+PSW=mask 0000000180000000 addr 000000000067ed6e cc 00
+R00=0000000080000000 R01=000000000067ed76 R02=0000000000000000 R03=0000000000000000
+R04=0000000000111548 R05=0000000000000000 R06=0000000000000000 R07=0000000000000000
+R08=00000000000100f6 R09=0000000000000000 R10=0000000000000000 R11=0000000000000000
+R12=0000000000ae2000 R13=0000000000681978 R14=0000000000111548 R15=000000000000bef0
+F00=0000000000000000 F01=0000000000000000 F02=0000000000000000 F03=0000000000000000
+F04=0000000000000000 F05=0000000000000000 F06=0000000000000000 F07=0000000000000000
+F08=0000000000000000 F09=0000000000000000 F10=0000000000000000 F11=0000000000000000
+F12=0000000000000000 F13=0000000000000000 F14=0000000000000000 F15=0000000000000000
+V00=00000000000000000000000000000000 V01=00000000000000000000000000000000
+V02=00000000000000000000000000000000 V03=00000000000000000000000000000000
+V04=00000000000000000000000000000000 V05=00000000000000000000000000000000
+V06=00000000000000000000000000000000 V07=00000000000000000000000000000000
+V08=00000000000000000000000000000000 V09=00000000000000000000000000000000
+V10=00000000000000000000000000000000 V11=00000000000000000000000000000000
+V12=00000000000000000000000000000000 V13=00000000000000000000000000000000
+V14=00000000000000000000000000000000 V15=00000000000000000000000000000000
+V16=00000000000000000000000000000000 V17=00000000000000000000000000000000
+V18=00000000000000000000000000000000 V19=00000000000000000000000000000000
+V20=00000000000000000000000000000000 V21=00000000000000000000000000000000
+V22=00000000000000000000000000000000 V23=00000000000000000000000000000000
+V24=00000000000000000000000000000000 V25=00000000000000000000000000000000
+V26=00000000000000000000000000000000 V27=00000000000000000000000000000000
+V28=00000000000000000000000000000000 V29=00000000000000000000000000000000
+V30=00000000000000000000000000000000 V31=00000000000000000000000000000000
+C00=0000000000000000 C01=0000000000000000 C02=0000000000000000 C03=0000000000000000
+C04=0000000000000000 C05=0000000000000000 C06=0000000000000000 C07=0000000000000000
+C08=0000000000000000 C09=0000000000000000 C10=0000000000000000 C11=0000000000000000
+C12=0000000000000000 C13=0000000000000000 C14=0000000000000000 C15=0000000000000000
+
+Aborted (core dumped)
+
+server3:~ # lscpu
+Architecture: s390x
+CPU op-mode(s): 32-bit, 64-bit
+Byte Order: Big Endian
+CPU(s): 2
+On-line CPU(s) list: 0,1
+Thread(s) per core: 1
+Core(s) per socket: 1
+Socket(s) per book: 1
+Book(s) per drawer: 1
+Drawer(s): 2
+NUMA node(s): 1
+Vendor ID: IBM/S390
+Machine type: 2964
+BogoMIPS: 20325.00
+Hypervisor: z/VM 6.4.0
+Hypervisor vendor: IBM
+Virtualization type: full
+Dispatching mode: horizontal
+L1d cache: 128K
+L1i cache: 96K
+L2d cache: 2048K
+L2i cache: 2048K
+L3 cache: 65536K
+L4 cache: 491520K
+NUMA node0 CPU(s): 0-63
+Flags: esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx sie
+server3:~ # uname -a
+Linux server3 4.4.126-94.22-default #1 SMP Wed Apr 11 07:45:03 UTC 2018 (9649989) s390x s390x s390x GNU/Linux
+server3:~ #
+
+Which version of QEMU are you using here? I think this should be working fine with the latest version of QEMU (>= v2.10).
+
+Hi Thomas, you are right, I am using 2.9.1, and it does look OK in 2.10. Do you mind pointing me to the part of the code that fixed it? Thanks.
+
+A little bit confused here: I tried to bisect it from 2.10, but it was always good on this branch. Then I went back to 2.9.1, and it always crashed. Machine-type related?
+
+This should be the commit that fixed this issue:
+https://git.qemu.org/?p=qemu.git;a=commitdiff;h=303c681a8f50eb88fbafc
+
+Confirmed the fix, thanks for the help.
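For background on the failing instruction: s390x EXECUTE (EX) fetches one target instruction and, unless its R1 field names register 0, ORs bits 56-63 of R1 into the target's second byte before executing it. A TCG implementation therefore has to cope with arbitrary executed instructions; the error message above suggests the 2.9-era code special-cased only a few target patterns and aborted on the rest. A standalone sketch of just the byte-patching semantics (illustrative, not QEMU's implementation):

```c
#include <stdint.h>
#include <stdio.h>

/* EX r1, d2(x2, b2): fetch the instruction at the second-operand address,
 * OR bits 56-63 of r1 into its second byte (unless r1 is register 0),
 * then execute the patched instruction. */
static void ex_patch(uint8_t *insn, uint64_t r1, int r1_num)
{
    if (r1_num != 0) {
        insn[1] |= (uint8_t)(r1 & 0xff);
    }
}

int main(void)
{
    /* Target: MVC (opcode 0xd2) with length byte 0. EX commonly patches
     * the length byte so one MVC template copies a variable byte count. */
    uint8_t target[6] = { 0xd2, 0x00, 0x10, 0x00, 0x20, 0x00 };
    ex_patch(target, 0x07, 1);  /* length-1 = 7, i.e. copy 8 bytes */
    printf("patched length byte: 0x%02x\n", target[1]);
    return 0;
}
```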
diff --git a/results/classifier/108/other/1815263 b/results/classifier/108/other/1815263
new file mode 100644
index 00000000..e10fbb50
--- /dev/null
+++ b/results/classifier/108/other/1815263
@@ -0,0 +1,123 @@
+debug: 0.762
+permissions: 0.749
+device: 0.737
+graphic: 0.722
+boot: 0.701
+KVM: 0.698
+other: 0.696
+semantic: 0.695
+vnc: 0.677
+performance: 0.671
+PID: 0.624
+socket: 0.543
+files: 0.517
+network: 0.505
+
+hvf accelerator crashes on guest boot
+
+Host OS: macOS High Sierra (10.13.6)
+MacBook Pro (Retina, Mid 2015)
+Processor: 2.8GHz Intel Core i7
+Guest OS: OpenBSD 6.4 install media (install64.iso)
+Qemu 3.1.0 release, built with:
+./configure --prefix=/usr/local/Cellar/qemu/3.1.0_1 --cc=clang
+    --host-cc=clang
+    --disable-bsd-user
+    --disable-guest-agent
+    --enable-curses
+    --enable-libssh2
+    --enable-vde
+    --extra-cflags=-DNCURSES_WIDECHAR=1
+    --enable-cocoa
+    --disable-sdl
+    --disable-gtk
+    --enable-hvf
+    --target-list=x86_64-softmmu
+    --enable-debug
+
+I invoke qemu like this:
+Last command had exit code: 0 at 22:58
+nwallace@nwallace-ltm3:~
+$ sudo qemu-system-x86_64 -M accel=hvf -boot d -cdrom ~/Downloads/install64.iso
+Password:
+qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
+bad size
+
+Abort trap: 6
+Last command had exit code: 134 at 22:58
+nwallace@nwallace-ltm3:~
+$
+
+I ran qemu in lldb to get a stack trace and I get:
+Last command had exit code: 0 at 22:54
+nwallace@nwallace-ltm3:~/Downloads
+$ sudo lldb -- qemu-system-x86_64 -M accel=hvf -boot d -cdrom /Users/nwallace/Downloads/install64.iso
+Password:
+(lldb) target create "qemu-system-x86_64"
+Current executable set to 'qemu-system-x86_64' (x86_64).
+(lldb) settings set -- target.run-args "-M" "accel=hvf" "-boot" "d" "-cdrom" "/Users/nwallace/Downloads/install64.iso"
+(lldb) run
+Process 96474 launched: '/usr/local/bin/qemu-system-x86_64' (x86_64)
+Process 96474 stopped
+* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGUSR2
+    frame #0: 0x00007fff5ef0c00a libsystem_kernel.dylib`__sigsuspend + 10
+libsystem_kernel.dylib`__sigsuspend:
+->  0x7fff5ef0c00a <+10>: jae 0x7fff5ef0c014 ; <+20>
+    0x7fff5ef0c00c <+12>: movq %rax, %rdi
+    0x7fff5ef0c00f <+15>: jmp 0x7fff5ef02b0e ; cerror
+    0x7fff5ef0c014 <+20>: retq
+Target 0: (qemu-system-x86_64) stopped.
+(lldb) process handle SIGUSR1 -n true -p true -s false
+NAME        PASS  STOP  NOTIFY
+=========== ===== ===== ======
+SIGUSR1     true  false true
+(lldb) process handle SIGUSR2 -n true -p true -s false
+NAME        PASS  STOP  NOTIFY
+=========== ===== ===== ======
+SIGUSR2     true  false true
+(lldb) c
+Process 96474 resuming
+qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
+Process 96474 stopped and restarted: thread 9 received signal: SIGUSR2
+<line above repeats about 64 times or so>
+Process 96474 stopped and restarted: thread 9 received signal: SIGUSR2
+bad size
+
+Process 96474 stopped
+* thread #9, stop reason = signal SIGABRT
+    frame #0: 0x00007fff5ef0bb66 libsystem_kernel.dylib`__pthread_kill + 10
+libsystem_kernel.dylib`__pthread_kill:
+->  0x7fff5ef0bb66 <+10>: jae 0x7fff5ef0bb70 ; <+20>
+    0x7fff5ef0bb68 <+12>: movq %rax, %rdi
+    0x7fff5ef0bb6b <+15>: jmp 0x7fff5ef02ae9 ; cerror_nocancel
+    0x7fff5ef0bb70 <+20>: retq
+Target 0: (qemu-system-x86_64) stopped.
+(lldb) bt
+* thread #9, stop reason = signal SIGABRT
+  * frame #0: 0x00007fff5ef0bb66 libsystem_kernel.dylib`__pthread_kill + 10
+    frame #1: 0x00007fff5f0d6080 libsystem_pthread.dylib`pthread_kill + 333
+    frame #2: 0x00007fff5ee671ae libsystem_c.dylib`abort + 127
+    frame #3: 0x000000010016b6ec qemu-system-x86_64`exec_cmps_single + 400
+    frame #4: 0x000000010016ada4 qemu-system-x86_64`exec_cmps + 65
+    frame #5: 0x0000000100169aaa qemu-system-x86_64`exec_instruction + 48
+    frame #6: 0x0000000100164eb2 qemu-system-x86_64`hvf_vcpu_exec + 2658
+    frame #7: 0x000000010005bed6 qemu-system-x86_64`qemu_hvf_cpu_thread_fn + 200
+    frame #8: 0x00000001003ee531 qemu-system-x86_64`qemu_thread_start + 107
+    frame #9: 0x00007fff5f0d3661 libsystem_pthread.dylib`_pthread_body + 340
+    frame #10: 0x00007fff5f0d350d libsystem_pthread.dylib`_pthread_start + 377
+    frame #11: 0x00007fff5f0d2bf9 libsystem_pthread.dylib`thread_start + 13
+(lldb) quit
+Quitting LLDB will kill one or more processes. Do you really want to proceed: [Y/n] Y
+Last command had exit code: 0 at 23:01
+nwallace@nwallace-ltm3:~/Downloads
+$
+
+I'm happy to work with someone more knowledgeable to reproduce this issue and provide debugging assistance as I'm able.
+
+Looking through old bug tickets... is this still an issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
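The backtrace ends in hvf's instruction emulator for CMPS (string compare). A plausible reading of "bad size", sketched standalone below, is an operand-size dispatch that aborts on a size it does not expect; the function shape and names are illustrative guesses, not the actual hvf code:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative per-element compare for a string op, dispatched by operand
 * size. An unexpected size falls through to abort(), matching the
 * "bad size" message followed by SIGABRT in the trace above. */
static int cmps_single(const void *src, const void *dst, int size)
{
    switch (size) {
    case 1: return *(const uint8_t *)src  - *(const uint8_t *)dst;
    case 2: return *(const uint16_t *)src - *(const uint16_t *)dst;
    case 4: return (*(const uint32_t *)src > *(const uint32_t *)dst) -
                   (*(const uint32_t *)src < *(const uint32_t *)dst);
    default:
        fprintf(stderr, "bad size\n");
        abort();
    }
}

int main(void)
{
    uint16_t a = 2, b = 3;
    printf("cmp: %d\n", cmps_single(&a, &b, 2));
    return 0;
}
```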
diff --git a/results/classifier/108/other/1815721 b/results/classifier/108/other/1815721
new file mode 100644
index 00000000..b1c0ed1b
--- /dev/null
+++ b/results/classifier/108/other/1815721
@@ -0,0 +1,88 @@
+permissions: 0.905
+debug: 0.876
+PID: 0.820
+other: 0.817
+device: 0.812
+performance: 0.803
+semantic: 0.800
+vnc: 0.797
+graphic: 0.778
+files: 0.724
+network: 0.718
+KVM: 0.708
+boot: 0.696
+socket: 0.668
+
+RISC-V PLIC enable interrupt for multicore
+
+Hello all,
+
+There is a bug in Qemu related to the enabling of external interrupts for multiple cores (Virt machine).
+
+After correcting Qemu as described in #1815078 (https://bugs.launchpad.net/qemu/+bug/1815078), when we try to enable interrupts for core 1 at address 0x0C00_2080 we don't seem to be able to trigger an external interrupt (e.g. UART0).
+
+This works perfectly for core 0, but for core 1 it does not work at all. I assume that since bug #1815078 prevented enabling any external interrupt, this feature has not been tested. I tried to look at the qemu source code but with no luck so far.
+
+I guess the problem is related to the function parse_hart_config (in sifive_plic.c), which initializes plic->addr_config[addrid].hartid incorrectly; this field is later read in sifive_plic_update. But this is a guess.
+
+Best regards,
+Pharos team
+
+Hi,
+
+After some debugging (and luck), the problem (at least on the Virt board) was that the PLIC code inside QEMU addresses core x 2 instead of just the core (core = hart). That is why it worked for core 0 (0 x 2 = 0), but for core 1 it has to address the PLIC memory area for core 2.
+
+For example, the interrupt enable address for core 1 starts at offset 0x002080 (see https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc), but we actually have to change the enable bit for core 2 (at 0x002100) to make it work for core 1.
+
+The same is true for the priority threshold and claim/complete registers (we need to multiply the core by 2).
+
+Either the documentation at https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc does not have the correct memory addresses for the qemu virt board, or qemu appears to be wrong.
+
+On Tue, Mar 24, 2020 at 4:20 PM RTOS Pharos <email address hidden> wrote:
+>
+> Hi,
+>
+> After some debugging (and luck), the problem (at least in the Virt
+> board) was that the PLIC code inside QEMU addresses the core x 2 instead
+> of just the core (core=hart). That is why it worked for core 0 (0x2 = 0)
+> but for core 1 it has to address the PLIC memory area for core 2.
+>
+> For example, the interrupt enable address for core 1 starts at offset
+> 0x002080 (see https://github.com/riscv/riscv-plic-spec/blob/master
+> /riscv-plic.adoc) but we actually have to change the enable bit for core
+> 2 (at 0x002100) to make to work for core 1.
+
+https://github.com/riscv/riscv-plic-spec/blob/master/riscv-plic.adoc says:
+
+"base + 0x002080: Enable bits for sources 0-31 on context 1"
+
+This is context 1, not core 1.
+
+It looks to me like you were running an image built for the SiFive FU540.
+Please test your image against the "sifive_u" machine instead.
+
+> The same is true for the priority threshold and claim complete registers
+> (we need to multiply the core by 2)
+>
+> Either the documentation at https://github.com/riscv/riscv-plic-
+> spec/blob/master/riscv-plic.adoc does not have the correct memory
+> addresses for qemu virt board, or qemu appears to be wrong.
+>
+> --
+
+Regards,
+Bin
+
+Thank you for the explanation. I actually built it for the "Virt" machine. I'll try "sifive_u" when I can.
+
+But I guess your explanation is correct, so this bug can be closed from my part.
+
+Hello, as far as I can tell there is a major problem with the PLIC implementation. When decompiling the DTB on the virt board with X harts, I see that hartid 0 has MEI and SEI, hartid 1 has MEI and SEI, etc. But when configuring context 1 (hartid 0 SEI) no interrupt is generated, while contexts 0, 2, 4, etc. work. So for me the problem is within the PLIC or RISC-V implementation... If anyone wants to correct it, I can help. Best regards, Serge Teodori
+
+I'm going to close this bug as it seems like the issue that RTOS Pharos raised is not an issue.
+
+@Teodori Serge please open a new issue if you have a bug. Make sure to include as much detail as possible and steps to reproduce it.
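The thread's resolution becomes mechanical once the offsets are computed per context rather than per hart: on the virt machine each hart exposes an M-mode and an S-mode context, so hart h's M-mode registers live at context 2h and its S-mode registers at context 2h+1. A small standalone sketch of that address arithmetic, using the offsets from the PLIC spec linked above:

```c
#include <stdio.h>

/* Offsets from the riscv-plic-spec memory map. */
#define PLIC_ENABLE_BASE    0x002000u
#define PLIC_ENABLE_STRIDE  0x80u
#define PLIC_CONTEXT_BASE   0x200000u  /* threshold + claim/complete */
#define PLIC_CONTEXT_STRIDE 0x1000u

/* On the qemu virt machine each hart has two contexts: M-mode, then S-mode. */
static unsigned context_id(unsigned hartid, int s_mode)
{
    return hartid * 2 + (s_mode ? 1 : 0);
}

int main(void)
{
    for (unsigned hart = 0; hart < 2; hart++) {
        for (int s = 0; s <= 1; s++) {
            unsigned ctx = context_id(hart, s);
            printf("hart %u %c-mode: ctx %u, enable 0x%06x, claim 0x%06x\n",
                   hart, s ? 'S' : 'M', ctx,
                   PLIC_ENABLE_BASE + ctx * PLIC_ENABLE_STRIDE,
                   PLIC_CONTEXT_BASE + ctx * PLIC_CONTEXT_STRIDE);
        }
    }
    return 0;
}
```

With this mapping, "core 1 needs the core 2 area" is exactly hart 1's M-mode context (2 x 1 = 2), and Serge's working contexts 0, 2, 4 are the M-mode contexts of successive harts.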
diff --git a/results/classifier/108/other/1815993 b/results/classifier/108/other/1815993
new file mode 100644
index 00000000..133fa58e
--- /dev/null
+++ b/results/classifier/108/other/1815993
@@ -0,0 +1,71 @@
+other: 0.756
+KVM: 0.752
+vnc: 0.727
+permissions: 0.727
+boot: 0.710
+performance: 0.705
+network: 0.684
+debug: 0.683
+device: 0.679
+graphic: 0.667
+socket: 0.663
+files: 0.657
+semantic: 0.656
+PID: 0.650
+
+drive-backup with iscsi causes the vm disk to stop responding
+
+virsh qemu-monitor-command ${DOMAIN} '{ "execute" : "drive-backup" , "arguments" : { "device" : "drive-virtio-disk0" , "sync" : "top" , "target" : "iscsi://192.168.1.100:3260/iqn.2019-01.com.iaas/0" } }'
+
+While the drive-backup is running, I manually crash the iscsi server (or interrupt the network, e.g. iptables -j DROP).
+
+Then, after less than 1 minute:
+virsh qemu-monitor-command ${DOMAIN} --pretty '{ "execute": "query-block" }' blocks with no response until it times out. This is still excusable.
+But the disk (drive-virtio-disk0) ends up in the same situation: in the vm os, the disk blocks and gives no response.
+
+In other words, when qemu and the iscsi server lose contact, the vm becomes unusable.
+
+---
+Host: centos 7.5
+qemu version: ovirt-4.2 (qemu-2.12.0)
+qemu command line: qemu-system-x86_64 -name guest=test,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-190-test./master-key.aes -machine pc-i440fx-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -m 1024 -mem-prealloc -mem-path /dev/hugepages1G/libvirt/qemu/190-test -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 1c8611c2-a18a-4b1c-b40b-9d82040eafa4 -smbios type=1,manufacturer=IaaS -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=31,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot menu=on,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/opt/vol/sas/fb0c7c37-13e7-41fe-b3f8-f0fbaaeec7ce,format=qcow2,if=none,id=drive-virtio-disk0,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -drive file=/opt/vol/sas/bde66671-536d-49cd-8b46-a4f1ea7be513,format=qcow2,if=none,id=drive-virtio-disk1,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1,write-cache=on -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:85:45:3e:d4:3a,bus=pci.0,addr=0x6 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=35,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
+
+iscsi:
+yum -y install targetcli python-rtslib
+systemctl start target
+systemctl enable target
+
+targetcli /iscsi create iqn.2019-01.com.iaas
+
+targetcli /iscsi/iqn.2019-01.com.iaas/tpg1 set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1
+
+targetcli /iscsi/iqn.2019-01.com.iaas/tpg1/portals create 192.168.1.100 3260
+targetcli /backstores/fileio create testfile1 /backup/file1 2G
+targetcli /iscsi/iqn.2019-01.com.iaas/tpg1/luns create /backstores/fileio/testfile1
+
+On Fri, Feb 15, 2019 at 03:03:34AM -0000, Cheng Chen wrote:
+> When the drive-backup is running, I manually crash the iscsi server(or
+> interrupt network, eg. iptables -j DROP).
+>
+> Then after less than 1 minute:
+> virsh qemu-monitor-command ${DOMAIN} --pretty '{ "execute": "query-block" }' will block and no any response, until timeout. This is still excusable.
+> But, the disk(drive-virtio-disk0)will occur the same situation:in vm os, the disk will block and no any response.
+>
+> In other words, when qemu and iscsi-server lose contact, It will cause
+> the vm unable.
+
+I haven't tried to reproduce this but I guess QEMU reaches a
+synchronization point where it waits for all outstanding requests to
+complete. Since the iSCSI target is unresponsive QEMU gets stuck.
+
+These issues can sometimes be fixed by avoiding the synchronization
+point (a backtrace should reveal where the main loop thread is stuck)
+but other times it really is necessary to wait for all requests and the
+solution isn't as obvious.
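Stefan's diagnosis is the key point: a drain-style synchronization loop has no timeout, so one stuck request wedges both the monitor and the guest's view of the disk. A standalone sketch of that failure shape (names illustrative, not QEMU's real API):

```c
#include <stdbool.h>
#include <stdio.h>

static int in_flight = 1;        /* one request stuck on the dead target */

static bool poll_once(void)      /* stands in for aio_poll(ctx, true) */
{
    /* With an unresponsive iSCSI target the request never completes,
     * so in_flight never drops and the caller spins forever. */
    return false;
}

static void drain(int max_iters) /* the real drain has no iteration cap */
{
    while (in_flight > 0 && max_iters-- > 0) {
        poll_once();
    }
}

int main(void)
{
    drain(3);                    /* capped here only so the demo returns */
    printf("still in flight: %d (query-block would be blocked here)\n",
           in_flight);
    return 0;
}
```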
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting older bugs to "Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has been solved with a newer version of QEMU already. Thank you and sorry for the inconvenience.
+
+[Expired for QEMU because there has been no activity for 60 days.]