Diffstat
-rw-r--r--  results/classifier/108/other/81        16
-rw-r--r--  results/classifier/108/other/810       86
-rw-r--r--  results/classifier/108/other/810588    54
-rw-r--r--  results/classifier/108/other/811       16
-rw-r--r--  results/classifier/108/other/812      139
-rw-r--r--  results/classifier/108/other/812398    40
-rw-r--r--  results/classifier/108/other/813       30
-rw-r--r--  results/classifier/108/other/813546    31
-rw-r--r--  results/classifier/108/other/814       52
-rw-r--r--  results/classifier/108/other/814222   260
-rw-r--r--  results/classifier/108/other/815       16
-rw-r--r--  results/classifier/108/other/816860    46
-rw-r--r--  results/classifier/108/other/81775929  245
-rw-r--r--  results/classifier/108/other/818       20
-rw-r--r--  results/classifier/108/other/818673   797
-rw-r--r--  results/classifier/108/other/819       90
16 files changed, 1938 insertions, 0 deletions
diff --git a/results/classifier/108/other/81 b/results/classifier/108/other/81 new file mode 100644 index 000000000..a3d325e4a --- /dev/null +++ b/results/classifier/108/other/81 @@ -0,0 +1,16 @@ +performance: 0.800 +device: 0.800 +graphic: 0.712 +network: 0.477 +files: 0.351 +semantic: 0.342 +boot: 0.321 +PID: 0.314 +other: 0.259 +permissions: 0.161 +debug: 0.150 +vnc: 0.128 +socket: 0.101 +KVM: 0.029 + +[Feature request] qemu-img option about recompressing diff --git a/results/classifier/108/other/810 b/results/classifier/108/other/810 new file mode 100644 index 000000000..f6321eb1d --- /dev/null +++ b/results/classifier/108/other/810 @@ -0,0 +1,86 @@ +other: 0.900 +device: 0.895 +graphic: 0.881 +vnc: 0.859 +KVM: 0.851 +debug: 0.850 +semantic: 0.848 +performance: 0.842 +permissions: 0.839 +PID: 0.822 +boot: 0.792 +socket: 0.749 +files: 0.742 +network: 0.727 + +i386/sev: Crash in pc_system_parse_ovmf_flash caused by bad firmware file +Description of problem: +A specially-crafted flash file can cause the `memcpy()` call in +`pc_system_parse_ovmf_flash` (`hw/i386/pc_sysfw_ovmf.c`) to READ out-of-bounds +memory, because there's no check on the `tot_len` field which is read +from the flash file. In such case, `ptr - tot_len` will point to a +memory location *below* `flash_ptr` (hence the out-of-bounds read). + +This path is only taken when SEV is enabled (which requires +KVM and x86_64). +Steps to reproduce: +1. Create `bad_ovmf.fd` using the following python script: + ``` + from uuid import UUID + OVMF_TABLE_FOOTER_GUID = "96b582de-1fb2-45f7-baea-a366c55a082d" + b = bytearray(4096) + b[4046:4048] = b'\xff\xff' # tot_len field + b[4048:4064] = UUID("{" + OVMF_TABLE_FOOTER_GUID + "}").bytes_le + with open("bad_ovmf.fd", "wb") as f: + f.write(b) + ``` +2. Build QEMU with `--enable-sanitizers` +3. 
Start QEMU with SEV and the bad flash file: + ``` + qemu-system-x86_64 -enable-kvm -cpu host -machine q35 \ + -drive if=pflash,format=raw,unit=0,file=bad_ovmf.fd,readonly=on \ + -machine confidential-guest-support=sev0 \ + -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1,policy=0x0 + ``` +4. QEMU crashes with: `SUMMARY: AddressSanitizer: stack-buffer-underflow` +Additional information: +Crash example: + +``` +$ sudo build/qemu-system-x86_64 -enable-kvm -cpu host -machine q35 \ + -drive if=pflash,format=raw,unit=0,file=bad_ovmf.fd,readonly=on \ + -machine confidential-guest-support=sev0 \ + -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1,policy=0x0 +==523314==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases! +================================================================= +==523314==ERROR: AddressSanitizer: stack-buffer-underflow on address 0x7f05305fb180 at pc 0x7f0548d89480 bp 0x7ffed44a1980 sp 0x7ffed44a1128 +READ of size 65517 at 0x7f05305fb180 thread T0 + #0 0x7f0548d8947f (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x9b47f) + #1 0x556127c3331e in memcpy /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34 + #2 0x556127c3331e in pc_system_parse_ovmf_flash ../hw/i386/pc_sysfw_ovmf.c:82 + #3 0x556127c21a0c in pc_system_flash_map ../hw/i386/pc_sysfw.c:203 + #4 0x556127c21a0c in pc_system_firmware_init ../hw/i386/pc_sysfw.c:258 + #5 0x556127c1ddd9 in pc_memory_init ../hw/i386/pc.c:902 + #6 0x556127bdc387 in pc_q35_init ../hw/i386/pc_q35.c:207 + #7 0x5561273bfdd6 in machine_run_board_init ../hw/core/machine.c:1181 + #8 0x556127f77de1 in qemu_init_board ../softmmu/vl.c:2652 + #9 0x556127f77de1 in qmp_x_exit_preconfig ../softmmu/vl.c:2740 + #10 0x556127f7f24d in qemu_init ../softmmu/vl.c:3775 + #11 0x556126f947ac in main ../softmmu/main.c:49 + #12 0x7f05470e80b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2) + #13 0x556126fa639d in _start 
(/home/dmurik/git/qemu/build/qemu-system-x86_64+0x2a5739d)
+
+Address 0x7f05305fb180 is located in stack of thread T3 at offset 0 in frame
+    #0 0x556128a96f1f in qemu_sem_timedwait ../util/qemu-thread-posix.c:293
+
+
+  This frame has 1 object(s):
+    [32, 48) 'ts' (line 295) <== Memory access at offset 0 partially underflows this variable
+HINT: this may be a false positive if your program uses some custom stack unwind mechanism, swapcontext or vfork
+      (longjmp and C++ exceptions *are* supported)
+Thread T3 created by T0 here:
+    #0 0x7f0548d28805 in pthread_create (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
+    #1 0x556128a97ecf in qemu_thread_create ../util/qemu-thread-posix.c:596
+
+SUMMARY: AddressSanitizer: stack-buffer-underflow (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x9b47f)
+```
diff --git a/results/classifier/108/other/810588 b/results/classifier/108/other/810588
new file mode 100644
index 000000000..635befc4b
--- /dev/null
+++ b/results/classifier/108/other/810588
@@ -0,0 +1,54 @@
+debug: 0.647
+KVM: 0.551
+device: 0.547
+semantic: 0.531
+PID: 0.481
+graphic: 0.473
+performance: 0.461
+other: 0.442
+socket: 0.373
+boot: 0.338
+vnc: 0.337
+files: 0.327
+network: 0.304
+permissions: 0.250
+
+Unexpected crash of qemu-kvm with SCSI disk emulation.
+
+Virtual machine with MS Windows 2003 installed on the virtual SCSI disk (-drive file=/my/path/myimage.qcow2.img,boot=on,if=scsi,media=disk,bus=0,unit=1) unexpectedly crashes without a core dump. When the image is connected as an IDE disk (-hda) the VM flies normally.
+Qemu-kvm version: 0.12.5
+Os/distr.: Debian squeeze, x86_64
+
+On Thu, Jul 14, 2011 at 5:43 PM, Constantine Chernov
+<email address hidden> wrote:
+> Virtual machine with MS Windows 2003 installed on the virtual SCSI disk (-drive file=/my/path/myimage.qcow2.img,boot=on,if=scsi,media=disk,bus=0,unit=1) unexpectedly crashes without a core dump. When the image is connected as an IDE disk (-hda) the VM flies normally.
+> Qemu-kvm version: 0.12.5
+> Os/distr.: Debian squeeze, x86_64
+
+Please post your full QEMU command-line.
+
+Did you enable core dumps before launching QEMU? Do "ulimit -c
+unlimited" in the same shell before running the QEMU command-line.
+
+If it is exiting instead of crashing I suggest launching QEMU from gdb
+and catching the exit:
+1. Install qemu-kvm-dbg to get the debuginfo for useful backtraces
+2. Start gdb with QEMU and its usual command-line arguments: gdb
+--args qemu-kvm ...
+3. Set breakpoints on exit(3) and abort(2):
+b exit
+b abort
+4. Run the VM and reproduce the exit:
+r
+5. When it exits you will hopefully be at an exit/abort breakpoint and
+can print the stack trace:
+bt
+
+Please post the backtrace so we have more information on how the exit happens.
+
+Thanks,
+Stefan
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/811 b/results/classifier/108/other/811
new file mode 100644
index 000000000..d9ba7e4a9
--- /dev/null
+++ b/results/classifier/108/other/811
@@ -0,0 +1,16 @@
+network: 0.665
+device: 0.642
+performance: 0.517
+other: 0.327
+semantic: 0.324
+graphic: 0.220
+boot: 0.210
+socket: 0.128
+debug: 0.121
+files: 0.102
+permissions: 0.084
+vnc: 0.069
+PID: 0.037
+KVM: 0.017
+
+qemu_irq_split() callers should use TYPE_SPLIT_IRQ device instead
diff --git a/results/classifier/108/other/812 b/results/classifier/108/other/812
new file mode 100644
index 000000000..d6d49451e
--- /dev/null
+++ b/results/classifier/108/other/812
@@ -0,0 +1,139 @@
+permissions: 0.682
+performance: 0.616
+network: 0.569
+PID: 0.564
+device: 0.559
+debug: 0.542
+vnc: 0.530
+other: 0.527
+KVM: 0.515
+semantic: 0.501
+graphic: 0.500
+boot: 0.444
+files: 0.429
+socket: 0.415
+
+Multicast packets (mDNS) are not sent out of VM
+Description of problem:
+The app is sending multicast packets (mDNS), but they are not sent out of the VM.
+Here is the configuration of the network: `-netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::50051-:50051,hostfwd=tcp::50050-:50050` +Steps to reproduce: +1. Install arduino-cli from https://github.com/arduino/arduino-cli/releases (eg. 0.20.2) +2. `arduino-cli config init` +3. `vi ~/.arduino15/arduino-cli.yaml` +4. edit it to have it as follows: +``` +board_manager: + additional_urls: ["http://arduino.esp8266.com/stable/package_esp8266com_index.json"] +daemon: + port: "50051" +directories: + data: /root/app/data + downloads: /root/app/downloads + user: /root/app/user +library: + enable_unsafe_install: false +logging: + file: "" + format: text + level: info +metrics: + addr: :9090 + enabled: false +output: + no_color: false +sketch: + always_export_binaries: false +updater: + enable_notification: true +``` + +5. `arduino-cli core update-index` +6. `arduino-cli core install esp8266:esp8266` +7. `arduino-cli board list -v` + +This will give an output similar to: +``` +INFO[0000] Using config file: /root/.arduino15/arduino-cli.yaml +INFO[0000] arduino-cli.x86_64 version git-snapshot +INFO[0000] Checking if CLI is Bundled into the IDE +INFO[0000] Adding libraries dir dir=/root/app/user/libraries location=user +INFO[0000] Checking signature index=/root/app/data/package_index.json signatureFile=/root/app/data/package_index.json.sig = +INFO[0000] Checking signature error="opening signature file: open /root/app/data/package_esp8266com_index.json.sig: no such file or d= +INFO[0000] Loading hardware from: /root/app/data/packages +INFO[0000] Loading package builtin from: /root/app/data/packages/builtin +INFO[0000] Checking existence of 'tools' path: /root/app/data/packages/builtin/tools +INFO[0000] Loading tools from dir: /root/app/data/packages/builtin/tools +INFO[0000] Loaded tool tool="builtin:ctags@5.8-arduino11" +INFO[0000] Loaded tool tool="builtin:mdns-discovery@1.0.2" +INFO[0000] Loaded tool tool="builtin:serial-discovery@1.3.1" +INFO[0000] Loaded tool 
tool="builtin:serial-monitor@0.9.1" +INFO[0000] Loading package esp8266 from: /root/app/data/packages/esp8266/hardware +INFO[0000] Checking signature error="opening signature file: open /root/app/data/packages/esp8266/hardware/esp8266/3.0.2/installed.js= +INFO[0000] Adding monitor tool protocol=serial tool="builtin:serial-monitor" +INFO[0000] Loaded platform platform="esp8266:esp8266@3.0.2" +INFO[0000] Checking existence of 'tools' path: /root/app/data/packages/esp8266/tools +INFO[0000] Loading tools from dir: /root/app/data/packages/esp8266/tools +INFO[0000] Loaded tool tool="esp8266:mklittlefs@3.0.4-gcc10.3-1757bed" +INFO[0000] Loaded tool tool="esp8266:mkspiffs@3.0.4-gcc10.3-1757bed" +INFO[0000] Loaded tool tool="esp8266:python3@3.7.2-post1" +INFO[0000] Loaded tool tool="esp8266:xtensa-lx106-elf-gcc@3.0.4-gcc10.3-1757bed" +INFO[0000] Adding libraries dir dir=/root/app/data/packages/esp8266/hardware/esp8266/3.0.2/libraries location=platform +INFO[0007] Executing `arduino-cli board list` +INFO[0007] starting discovery builtin:serial-discovery process +INFO[0007] started discovery builtin:serial-discovery process +INFO[0007] sending command HELLO 1 "arduino-cli git-snapshot" to discovery builtin:serial-discovery +INFO[0007] starting discovery builtin:mdns-discovery process +INFO[0007] started discovery builtin:mdns-discovery process +INFO[0007] sending command HELLO 1 "arduino-cli git-snapshot" to discovery builtin:mdns-discovery +INFO[0007] from discovery builtin:serial-discovery received message type: hello, message: OK, protocol version: 1 +INFO[0007] from discovery builtin:mdns-discovery received message type: hello, message: OK, protocol version: 1 +INFO[0007] sending command START to discovery builtin:serial-discovery +INFO[0007] sending command START to discovery builtin:mdns-discovery +INFO[0007] from discovery builtin:mdns-discovery received message type: start, message: OK +INFO[0007] from discovery builtin:serial-discovery received message type: start, 
message: OK +INFO[0008] sending command LIST to discovery builtin:serial-discovery +INFO[0008] sending command LIST to discovery builtin:mdns-discovery +INFO[0008] from discovery builtin:mdns-discovery received message type: list +INFO[0008] from discovery builtin:serial-discovery received message type: list, ports: [/dev/ttyS0] +INFO[0008] sending command STOP to discovery builtin:serial-discovery +INFO[0008] sending command STOP to discovery builtin:mdns-discovery +INFO[0008] from discovery builtin:mdns-discovery received message type: stop, message: OK +INFO[0008] from discovery builtin:serial-discovery received message type: stop, message: OK +Port Protocol Type Board Name FQBN Core +/dev/ttyS0 serial Unknown +``` + +Note `builtin:mdns-discovery` discovery started. It is expected to send the packets as follows (the screenshot from the host with Wireshark): + + + +The screenshot is taken if running the same app (but for macOS) from the host and **i can't see the packets sent if executed from the QEMU guest os**. +I believe i either configured it the wrong way (`-netdev user,id=net0,...`) or it's a QEMU bug. +Additional information: +I've tested on macOS host with qemu 6.0.0 and on Linux (Android) host with qemu 6.1.0 and both were not working. 
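For what it's worth, QEMU's user-mode (slirp) backend generally does not forward multicast traffic out to the physical network, so a tap/bridged netdev is usually needed for mDNS to be visible on the host LAN. To rule arduino-cli out entirely, multicast egress can be tested with a minimal, self-contained mDNS query sender (a sketch; the group 224.0.0.251 and port 5353 are the standard mDNS endpoint, and the queried service name is just an example):

```python
import socket
import struct

MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353

def build_mdns_query(name="_services._dns-sd._udp.local"):
    """Build a one-question mDNS query packet (PTR record, class IN)."""
    # DNS header: id=0 (as mDNS queries use), flags=0, QDCOUNT=1, AN/NS/AR=0
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1)
    return header + qname + struct.pack("!HH", 12, 1)

def send_mdns_query():
    """Send one query to the mDNS group; capture 'udp port 5353' on the host."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 255)
    s.sendto(build_mdns_query(), (MDNS_GROUP, MDNS_PORT))
    s.close()
```

Running `send_mdns_query()` in the guest while Wireshark captures on the host shows quickly whether any multicast leaves the VM at all, independent of the arduino-cli tooling.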
+
+the network interface seems to be configured for multicasting:
+```
+# ifconfig
+eth0      Link encap:Ethernet  HWaddr 52:54:00:12:34:57
+          inet addr:10.0.2.15  Bcast:0.0.0.0  Mask:255.255.255.0
+          inet6 addr: fec0::5054:ff:fe12:3457/64 Scope:Site
+          inet6 addr: fe80::5054:ff:fe12:3457/64 Scope:Link
+          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
+          RX packets:91955 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:25203 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:1000
+          RX bytes:119904373 (114.3 MiB)  TX bytes:1868274 (1.7 MiB)
+
+lo        Link encap:Local Loopback
+          inet addr:127.0.0.1  Mask:255.0.0.0
+          inet6 addr: ::1/128 Scope:Host
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:1000
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
+```
+
+It might be easier to skip using arduino-cli and just use any mDNS discovery app.
diff --git a/results/classifier/108/other/812398 b/results/classifier/108/other/812398
new file mode 100644
index 000000000..fc5b47e6c
--- /dev/null
+++ b/results/classifier/108/other/812398
@@ -0,0 +1,40 @@
+device: 0.792
+vnc: 0.601
+graphic: 0.525
+socket: 0.510
+other: 0.504
+network: 0.458
+semantic: 0.448
+performance: 0.440
+PID: 0.430
+files: 0.420
+boot: 0.382
+permissions: 0.326
+debug: 0.192
+KVM: 0.125
+
+powerpc 7450 MMU initialization broken
+
+The 7450 family of PPCs' MMU can update TLBs using hardware search (like a 604 or 7400) but also using a software algorithm. The mechanism used is defined by HID0[STEN].
+
+By default (CPU reset) HID0 is set to 0x80000000 (BTW, another small bug: QEMU doesn't set the hardwired MSB), hence
+the software-table lookup feature is *disabled*. However, the default (and immutable) 'mmu_model' for this CPU family is POWERPC_MMU_SOFT_74XX, which chooses the soft TLB replacement scheme.
+
+To fix this:
+
+1) the initial mmu_model for the 7450 family (includes 7441, 7445, 7451, 7455, 7457, 7447, 7448) should be: POWERPC_MMU_32B
+2) when HID0[STEN] is written, the mmu_model should be changed accordingly (I'm not familiar enough with the QEMU internal state to judge if any cached state would have to be updated).
+
+Looking through old bug tickets... is this still an issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+
+From looking at the source code of 5.1.0-rc3 (target/ppc/translate_init.inc.c) it seems that this is still an issue.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+    https://gitlab.com/qemu-project/qemu/-/issues/86
+
+
diff --git a/results/classifier/108/other/813 b/results/classifier/108/other/813
new file mode 100644
index 000000000..183e91e8b
--- /dev/null
+++ b/results/classifier/108/other/813
@@ -0,0 +1,30 @@
+graphic: 0.857
+device: 0.825
+performance: 0.718
+files: 0.672
+PID: 0.627
+semantic: 0.486
+permissions: 0.484
+vnc: 0.483
+boot: 0.467
+debug: 0.463
+other: 0.427
+socket: 0.404
+network: 0.268
+KVM: 0.164
+
+On Windows, preallocation=full qcow2 not creatable, qcow2 not resizable
+Description of problem:
+Not possible to create a fixed-virtual-disk qcow2 as one may do on Linux.
+One sometimes may want to create a fixed-size qcow2, as can be done with the fixed variants of VHDX, VMDK, and VDI.
+
+The advantage of a fixed virtual-disk format, such as fixed-VHDX, fixed-VMDK, or fixed-VDI, is that it keeps the disk metadata as a header bundled along with what is essentially a raw image, allowing for seamless tooling and management of virtual disks.
+
+Workaround: use a raw file as disk image.
(see workaround given below) + +To be very general, the implementation of this may need to factor in what underlying operations (fallocate, fallocate_punchhole, truncate, sparse) are supported by what filesystems (NTFS, ExFAT, ext4), choice of filesystem-driver (sometimes the driver may not have yet implemented an underlying operation), and operating systems (Linux/Win), and possible workarounds to achieve the same effect in the absence of underlying-operation. +Steps to reproduce: +1. open command shell +2. run the qemu-img command. In my case, qcow2 file is attempted to be created on a drive with ExFAT filesystem. +Additional information: + diff --git a/results/classifier/108/other/813546 b/results/classifier/108/other/813546 new file mode 100644 index 000000000..1b3638801 --- /dev/null +++ b/results/classifier/108/other/813546 @@ -0,0 +1,31 @@ +graphic: 0.844 +device: 0.831 +other: 0.817 +semantic: 0.685 +performance: 0.625 +network: 0.548 +socket: 0.507 +vnc: 0.500 +debug: 0.482 +permissions: 0.473 +PID: 0.433 +boot: 0.420 +KVM: 0.385 +files: 0.373 + +option to disable PS/2 mouse + +Adds an option to disable the PS/2 mouse. + +This is useful to work around bugs in PS/2 drivers in some system or testing system without a PS/2 mouse present. + + + +Sorry, we don't take any patches from the bug tracker... Could you please post patches on the qemu-devel mailing list instead? Thanks! + +The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting older bugs to "Incomplete" now. +If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has been solved with a newer version of QEMU already. Thank you and sorry for the inconvenience. 
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/814 b/results/classifier/108/other/814
new file mode 100644
index 000000000..3e7c2e197
--- /dev/null
+++ b/results/classifier/108/other/814
@@ -0,0 +1,52 @@
+graphic: 0.824
+other: 0.814
+permissions: 0.811
+debug: 0.808
+device: 0.787
+semantic: 0.785
+network: 0.773
+socket: 0.772
+PID: 0.769
+files: 0.764
+KVM: 0.746
+boot: 0.745
+vnc: 0.743
+performance: 0.738
+
+On Windows, qcow2 is corrupted on expansion
+Description of problem:
+On Windows, the qcow2 loses blocks, on account of which the filesystem within it is corrupted as data is copied to it, just the same way as in #727, where VHDX is corrupted on expansion on both Linux and Windows.
+
+After filing a bug for WNBD https://github.com/cloudbase/wnbd/issues/63 , I was suggested to try raw and qcow2. In the process I found that qcow2 is also affected. But it is also true that the kernel-5.15.4 ... 5.15.13 series have also been buggy https://bugzilla.kernel.org/show_bug.cgi?id=215460 .
+On Linux, qcow2 never showed any signs of corruption.
+On Windows, however, qcow2 does corrupt.
+
+It is possible that, as Linux is so much more efficient at files and disk I/O, the kernel block code, QEMU block code and QEMU qcow2 code do not hit the bug, and so the corruption does not show up as easily on Linux. Windows, being a little slower at this, might be causing the bug to show up in this qcow2 test. Possibly, the issue is more likely to show up on slower machines. I am using a 2013-era Intel 4th-gen i7-4700MQ Haswell machine.
+
+It is possible that the resolution for this issue and that for #727 could be the same or very closely related. The bug may not be in qcow2.c or vhdx.c but maybe in the qemu/block subsystem. If a data block that arrives from the VM/NBD interface and has to be written to the file never gets to the virtual-disk code, and is never allocated and written, then the data block is lost.
+Steps to reproduce: +1. Prepare virtual-disk1 as empty qcow2. In my-setup, the qcow2 file resides on an 150 GiB ExFAT partition on 512 GiB SSD. I use ExFAT as the ExFAT-filesystem does not have a concept of sparse files, eliminating that factor from troubleshooting. + ```qemu-img.exe create -f qcow2 H:\gkpics01.qcow2 99723771904``` +2. Prepare virtual-disk2 VHDX with synthetic generated data (sgdata). Scriptlets to recreate sgdata are described in https://gitlab.com/qemu-project/qemu/-/issues/727#note_739930694 . In my-setup, the vhdx file resides on an 1 TiB NTFS partition on a 2 TiB HDD. +3. Start qemu with arguments as given above. +4. Inside VM, boot and bringup livecd desktop, close the installer and open a terminal +5. Use gdisk to put an ext4 partition on /dev/sda +6. Put ext4 partition on sda1 ```mkfs.ext4 -L fs_gkpics01 /dev/sda1``` +7. Create mount directories ```mkdir /mnt/a /mnt/b``` +8. Mount the empty partition from virtual-disk-1 ```mount -t ext4 /dev/sda1 /mnt/a``` +9. Mount the sgdata partition from virtual-disk-2 ```mount.ntfs-3g /dev/sdb2 /mnt/b``` or ```mount -t ntfs3 /dev/sdb2 /mnt/b``` +10. Keep a terminal tab open with ```dmesg -w``` running +11. Rsync sgdata ```( sdate=`date` ; cd /mnt/b ; rsync -avH ./photos001 /mnt/a | tee /tmp/rst.txt ; echo $sdate ; date )``` +12. Check sha256sum ```( sdate=`date` ; cd /mnt/a/photos001 ; shas256sum -c ./find.CHECKSUM --quiet ; echo $sdate ; date )``` + corruption will show even without needing to unmount-remount or reboot-remount. + +- About 1.4 GiB free-space left on the ext4 partition. 
+- Compared to #727, fewer files are corrupted: ```sha256sum: WARNING: 31 computed checksums did not match```
+- After a VM guest OS warm reboot, a recheck of the sha256sums shows the same 31 files as corrupted
+- After qemu poweroff, qemu restart and a VM guest OS cold boot, a recheck of the sha256sums shows the same 31 files as corrupted
+- df shows: sda1 has 95271336 1k-blocks, of which 88840860 are used, 1544820 available, 99% used. The numbers don't add up. Either file blocks are lost in lost clusters, or the ext4 filesystem has a large journal, or the filesystem metadata is too large, or the ext4 filesystem has a large cluster size which results in inefficient space usage.
+- An ```umount /dev/sda1 ; fsck -y /dev/sda1 ; mount -t ext4 /dev/sda1 /mnt/a``` did not find any lost clusters.
+
+The reason I don't think this is a kernel bug is that a raw file as virtual-disk-1 doesn't show this issue. Also, it happens regardless of whether sgdata is on ntfs-3g or ntfs3-paragon.
+Additional information:
+
diff --git a/results/classifier/108/other/814222 b/results/classifier/108/other/814222
new file mode 100644
index 000000000..4eb096740
--- /dev/null
+++ b/results/classifier/108/other/814222
@@ -0,0 +1,260 @@
+debug: 0.675
+semantic: 0.641
+other: 0.555
+permissions: 0.529
+PID: 0.513
+graphic: 0.500
+KVM: 0.499
+files: 0.421
+vnc: 0.413
+socket: 0.410
+device: 0.408
+boot: 0.391
+network: 0.382
+performance: 0.307
+
+kvm cannot use vhd files over 127GB
+
+The primary use case for using VHDs with KVM is to perform a conversion to a raw image file so that one could move from Hyper-V to Linux-KVM.
See more on this http://blog.allanglesit.com/2011/03/linux-kvm-migrating-hyper-v-vhd-images-to-kvm/ + +# kvm-img convert -f raw -O vpc /root/file.vhd /root/file.img + +The above works great if you have VHDs smaller than 127GB, however if it is larger, then no error is generated during the conversion process, but it appears to just process up to that 127GB barrier and no more. Also of note. VHDs can also be run directly using KVM if they are smaller than 127GB. VHDs can be read and function well using virtualbox as well as hyper-v, so I suspect the problem lies not with the VHD format (since that has a 2TB limitation). But instead with how qemu-kvm is interpreting them. + +BORING VERSION INFO: +# cat /etc/issue +Ubuntu 11.04 \n \l +# uname -rmiv +2.6.38-8-server #42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011 x86_64 x86_64 +# apt-cache policy kvm +kvm: + Installed: 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1 + Candidate: 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1 + Version table: + *** 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1 0 + 500 http://apt.sonosite.com/ubuntu/ natty-updates/main amd64 Packages + 500 http://apt.sonosite.com/ubuntu/ natty-security/main amd64 Packages + 100 /var/lib/dpkg/status + 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4 0 + 500 http://apt.sonosite.com/ubuntu/ natty/main amd64 Packages +# apt-cache policy libvirt-bin +libvirt-bin: + Installed: 0.8.8-1ubuntu6.2 + Candidate: 0.8.8-1ubuntu6.2 + Version table: + *** 0.8.8-1ubuntu6.2 0 + 500 http://apt.sonosite.com/ubuntu/ natty-updates/main amd64 Packages + 500 http://apt.sonosite.com/ubuntu/ natty-security/main amd64 Packages + 100 /var/lib/dpkg/status + 0.8.8-1ubuntu6 0 + 500 http://apt.sonosite.com/ubuntu/ natty/main amd64 Packages + +qemu-img version 0.14.0 + +# vboxmanage -v +4.0.12r72916 + + +REPRODUCTION STEPS (requires Windows 7 or Windows 2008 R2 with < 1GB of free space) + +## WINDOWS MACHINE ## + +Use Computer Management > Disk Management +-Create 2 VHD files, both dynamically expanding 
120GB and 140GB respectively. +-Do not initialize or format. + +These files will need to be transferred to an Ubuntu KVM machine (pscp is what I used but usb would work as well). + +## UBUNTU KVM MACHINE ## + +# ls *.vhd +120g-dyn.vhd 140g-dyn.vhd +# kvm-img info 120g-dyn.vhd +image: 120g-dyn.vhd +file format: vpc +virtual size: 120G (128847052800 bytes) +disk size: 244K +# kvm-img info 140g-dyn.vhd +image: 140g-dyn.vhd +file format: vpc +virtual size: 127G (136899993600 bytes) +disk size: 284K +# kvm-img info 120g-dyn.vhd | grep "virtual size" +virtual size: 120G (128847052800 bytes) +# kvm-img info 140g-dyn.vhd | grep "virtual size" +virtual size: 127G (136899993600 bytes) + +Regardless of how big the second vhd is I always get a virtual size of 127G + +Now if we use virtualbox to view the vhds we see markedly different results. + +# VBoxManage showhdinfo 120g-dyn.vhd +UUID: e63681e0-ff12-4114-85de-7d13562b36db +Accessible: yes +Logical size: 122880 MBytes +Current size on disk: 0 MBytes +Type: normal (base) +Storage format: VHD +Format variant: dynamic default +Location: /root/120g-dyn.vhd +# VBoxManage showhdinfo 140g-dyn.vhd +UUID: 94531905-46b4-469f-bb44-7a7d388fb38f +Accessible: yes +Logical size: 143360 MBytes +Current size on disk: 0 MBytes +Type: normal (base) +Storage format: VHD +Format variant: dynamic default +Location: /root/140g-dyn.vhd + +# kvm-img convert -f vpc -O raw 120g-dyn.vhd 120g-dyn.img +# +# kvm-img convert -f vpc -O raw 140g-dyn.vhd 140g-dyn.img +# + +# kvm-img info 120g-dyn.img +image: 120g-dyn.img +file format: raw +virtual size: 120G (128847052800 bytes) +disk size: 0 +# kvm-img info 120g-dyn.img | grep "virtual size" +virtual size: 120G (128847052800 bytes) +# kvm-img info 140g-dyn.img +image: 140g-dyn.img +file format: raw +virtual size: 127G (136899993600 bytes) +disk size: 0 +# kvm-img info 140g-dyn.img | grep "virtual size" +virtual size: 127G (136899993600 bytes) + +Notice after the conversion the raw image will the taken on the 
partial geometry of the vhd, thus rendering that image invalid. + +vboxmanage has a clonehd option which allows you to successfully convert vhd to a raw image, which kvm then sees properly. + +For giggles I also tested with a 140GB fixed VHD (in the same manner as above) and it displayed the virtual size as correct, so a good work around is to convert your VHDs to fixed, then use kvm-img to convert them. + +Keep in mind that these reproduction steps will not have a file systems therefore no valid data, if there were for example NTFS with a text file the problem would still occur but more importantly the guest trying to use it would not be able to open the disk because of it being unable to find the final sector. + +So long story short I think we are dealing with 2 issues here. + +1) kvm not being able to deal with dynamic VHD files larger than 127GB +2) kvm-img not generating an error when it "fails" at converting or displaying information on dynamic VHDs larger than 127GB. The error should be something like "qemu-kvm does not support dynamic VHD files larger that 127GB..." + +Thanks for taking the time to submit this bug. + +It looks like the 127G limit is known. A recent patch went in to help with the symptom you are seeing, but unfortunately it only makes the failure detectable :) It's a start at least. + +The following commit should be pulled in: + +commit 6e9ea0c0629fe25723494a19498bedf4b781cbfa +Author: aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-71466251a162> +Date: Wed Apr 15 14:42:46 2009 +0000 + + block-vpc: Don't silently create smaller image than requested + + The algorithm from the VHD specification for CHS calculation silently limits + images to 127 GB which may confuse a user who requested a larger image. Better + output an error message and abort. 
+ + Signed-off-by: Kevin Wolf <email address hidden> + Signed-off-by: Aurelien Jarno <email address hidden> + + git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7109 c046a42c-6fe2-441c-8c8c-71466251a162 + + +Hm, no - that patch is already in natty and oneiric's qemu-kvm. + +I'm afraid I'll need time to find a place where I can reproduce this. + +As converting to fixed is listed as a workaround, I'm changing the priority to low per priority definitions. + +Are the priority definitions documented somewhere? + +I personally think you were right on when you had the priority at medium. + +Primarily because of the fact that no error is generated. It can't just silently fail. If it generated an error (so that people knew they needed to look for a work around) then I would agree that fixing the bug itself would be a low priority, as the work around is simple for anyone to implement. + +@Matthew, + +yes, the definitions are at https://wiki.ubuntu.com/Bugs/Importance. + +Your reasoning makes sense. I'll bump it back up, thanks. 
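To make the limit concrete: the CHS algorithm cited in the commit message above caps the footer geometry at 65535 cylinders x 16 heads x 255 sectors of 512 bytes, and that clamp reproduces exactly the sizes seen in this report. A sketch of the large-disk path only (the spec's small-disk geometry branches are omitted):

```python
SECTOR = 512
MAX_CHS_SECTORS = 65535 * 16 * 255  # largest sector count the VHD footer geometry can address

def vhd_chs_capacity(size_bytes):
    """Capacity actually representable after the VHD CHS clamp is applied.

    Large-disk path only: sectors-per-track=255, heads=16, cylinders rounded
    down. Disks small enough to use other geometries are not modeled here.
    """
    total = size_bytes // SECTOR
    if total > MAX_CHS_SECTORS:
        total = MAX_CHS_SECTORS          # the silent 127 GiB "barrier"
    spt, heads = 255, 16
    cylinders = (total // spt) // heads  # both divisions round down
    return cylinders * heads * spt * SECTOR
```

A 120 GB image (128847052800 bytes) survives the round-trip unchanged, while a 140 GB image (150323855360 bytes) collapses to 136899993600 bytes, which is exactly the "127G (136899993600 bytes)" that kvm-img info prints above.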
+ +This could be tough to reproduce, as I don't seem to have a way to +create a vpc image > 127G in the first place: + +root@ip-10-36-186-165:/mnt# qemu-img create -f vpc vpc2.img 130G +Formatting 'vpc2.img', fmt=vpc size=139586437120 +qemu-img: The image size is too large for file format 'vpc' + +root@ip-10-36-186-165:/mnt# qemu-img create -f raw raw1.img 130G +Formatting 'raw1.img', fmt=raw size=139586437120 +root@ip-10-36-186-165:/mnt# qemu-img convert -f raw -O vpc raw1.img vpc1.img +qemu-img: The image size is too large for file format 'vpc' + +root@ip-10-36-186-165:/mnt# qemu-img create -f vpc vpc1.img 127G +Formatting 'vpc1.img', fmt=vpc size=136365211648 +root@ip-10-36-186-165:/mnt# qemu-img convert -f vpc -O raw vpc1.img raw2.img +root@ip-10-36-186-165:/mnt# qemu-img info raw2.img +image: raw2.img +file format: raw +virtual size: 127G (136365219840 bytes) +disk size: 0 + + + + +This is a dynamically expanding VHD file created using the reproduction steps above on Windows 7. This one is 120GB and converts correctly. + +This has not been formatted or even initialized. + +"kvm-img info 120g-dynamic.vhd" shows the proper geometry. + + +This is a dynamically expanding VHD file created using the reproduction steps above on Windows 7. This one is 140GB and silently errors on conversion. + +This has not been formatted or even initialized. + +"kvm-img info 140g-dynamic.vhd" does not show the proper geometry. + +I have attached a couple of VHDs that I created with Windows 7. These should be helpful in your reproduction. + +Also looking at your notes it looks like that previous patch which was committed only affected the creation. So perhaps the same sort of check can be incorporated into the conversion process as well, so that you don't have the silent error. + +@Matthew + +thanks for the attachments. 
+ + + +This bug was fixed in the package qemu-kvm - 0.14.1+noroms-0ubuntu5 + +--------------- +qemu-kvm (0.14.1+noroms-0ubuntu5) oneiric; urgency=low + + * debian/patches/vpc.patch: detect vpc files which are too big + (LP: #814222) + -- Serge Hallyn <email address hidden> Mon, 12 Sep 2011 11:28:36 -0500 + +I came here from : http://lists.gnu.org/archive/html/qemu-devel/2011-07/msg02806.html + +Actually, I experience an issue which may be useful to you. + +I have a corrupted VHD file (as explained in that thread : https://forums.virtualbox.org/viewtopic.php?f=7&t=20614 ). I wanted to follow that procedure to solve my issue : + + qemu-img convert -O raw miimagen.vhd miimagen.bin + VBoxManage convertdd miimagen.bin miimagen.vdi + + +but qemu-img convert -O raw miimagen.vhd miimagen.bin triggers the qemu-img: Could not open 'img.VHD': File too large error message. + +Since, my file is 52,6 Go and the output is raw format. I guess it should not trigger that exception? or is that the normal behavior? Is there a way to bypass this limit? I use qemu-img 1.0 version. + +Hope it can help your development (and it can help me back) +Thanks, +simon + +Looks like Serge's fix has been included here: +http://git.qemu.org/?p=qemu.git;a=commitdiff;h=efc8243d00ab4cf4fa05a9b +... so let's close this bug now. + diff --git a/results/classifier/108/other/815 b/results/classifier/108/other/815 new file mode 100644 index 000000000..1a3b4400d --- /dev/null +++ b/results/classifier/108/other/815 @@ -0,0 +1,16 @@ +performance: 0.911 +device: 0.610 +network: 0.535 +graphic: 0.358 +semantic: 0.345 +files: 0.051 +debug: 0.049 +vnc: 0.035 +other: 0.032 +boot: 0.026 +permissions: 0.015 +socket: 0.007 +PID: 0.004 +KVM: 0.002 + +Using spdk Vhost to accelerate QEMU, which QEMU version is the most appropriate? 
diff --git a/results/classifier/108/other/816860 b/results/classifier/108/other/816860 new file mode 100644 index 000000000..2c8a2e4ea --- /dev/null +++ b/results/classifier/108/other/816860 @@ -0,0 +1,46 @@ +KVM: 0.909 +performance: 0.878 +device: 0.772 +network: 0.741 +graphic: 0.713 +semantic: 0.651 +PID: 0.591 +vnc: 0.546 +permissions: 0.539 +socket: 0.508 +boot: 0.476 +files: 0.436 +other: 0.358 +debug: 0.231 + +Guest machine freezes when NFS mount goes offline + +I have a virtual KVM machine that has 2 CDROM units with ISOs mounted from a NFS mount point. When NFS server goes offline the virtual machine blocks completely instead of throwing read errors for the CDROM device. + +Host: Proxmox VE 1.8-11 (Debian GNU/Linux 5.0) +KVM commandline version: QEMU emulator version 0.14.1 (qemu-kvm-devel) +Guest: Windows 7 professional SP 1 + +On Wed, Jul 27, 2011 at 10:09 AM, Igor Blanco <email address hidden> wrote: +> Public bug reported: +> +> I have a virtual KVM machine that has 2 CDROM units with ISOs mounted +> from a NFS mount point. When NFS server goes offline the virtual machine +> blocks completely instead of throwing read errors for the CDROM device. +> +> Host: Proxmox VE 1.8-11 (Debian GNU/Linux 5.0) +> KVM commandline version: QEMU emulator version 0.14.1 (qemu-kvm-devel) +> Guest: Windows 7 professional SP 1 + +Thanks for reporting this. There are instances where QEMU performs +blocking operations in a thread that will prevent the guest from +running. I suspect you are hitting this case and refactoring work +needs to be done to ensure that QEMU threads never block. + +Stefan + + +Can you still reproduce this problem with the latest version of QEMU (currently version 2.9.0)? + +[Expired for QEMU because there has been no activity for 60 days.] 
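
Stefan's diagnosis above — a blocking operation in a QEMU thread preventing the guest from running — can be illustrated with a small sketch. This is not QEMU code; it is a generic model of the refactoring he describes, where slow I/O is handed to a worker so the main loop never stalls:

```python
# A blocking read() stuck on a dead NFS server would freeze an event loop
# that calls it inline.  Offloading it to a worker thread keeps the loop
# turning while the read is stuck.
import threading
import time
import queue

results = queue.Queue()

def blocking_read():
    time.sleep(0.2)             # stands in for a read() hung on a dead NFS mount
    results.put("read done")

worker = threading.Thread(target=blocking_read)
worker.start()

ticks = 0
while worker.is_alive():        # the "main loop" keeps servicing events
    ticks += 1
    time.sleep(0.01)
worker.join()

print(results.get())            # -> read done
print(ticks > 0)                # -> True: the loop ran while the read blocked
```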
+
diff --git a/results/classifier/108/other/81775929 b/results/classifier/108/other/81775929
new file mode 100644
index 000000000..0c1809c02
--- /dev/null
+++ b/results/classifier/108/other/81775929
@@ -0,0 +1,245 @@
+other: 0.877
+permissions: 0.849
+PID: 0.847
+performance: 0.831
+vnc: 0.825
+semantic: 0.825
+graphic: 0.818
+KVM: 0.815
+socket: 0.810
+debug: 0.799
+files: 0.788
+device: 0.777
+network: 0.759
+boot: 0.742
+
+[Qemu-devel] [BUG] Monitor QMP is broken ?
+
+Hello!
+
+ I have updated my qemu to the recent version and it seems to have lost
+compatibility with libvirt. The error message is:
+--- cut ---
+internal error: unable to execute QEMU command 'qmp_capabilities': QMP input
+object member 'id' is unexpected
+--- cut ---
+ What does it mean? Is it intentional or not?
+
+Kind regards,
+Pavel Fedin
+Expert Engineer
+Samsung Electronics Research center Russia
+
+Hello!
+
+> I have updated my qemu to the recent version and it seems to have lost
+> compatibility with libvirt. The error message is:
+> --- cut ---
+> internal error: unable to execute QEMU command 'qmp_capabilities': QMP input
+> object member 'id' is unexpected
+> --- cut ---
+> What does it mean? Is it intentional or not?
+
+I have found the problem. It is caused by commit
+65207c59d99f2260c5f1d3b9c491146616a522aa. libvirt does not seem to use the
+removed asynchronous interface but it still feeds in JSONs with 'id' field set
+to something. So i think the related fragment in qmp_check_input_obj() function
+should be brought back
+
+Kind regards,
+Pavel Fedin
+Expert Engineer
+Samsung Electronics Research center Russia
+
+On Fri, Jun 05, 2015 at 04:58:46PM +0300, Pavel Fedin wrote:
+> Hello!
+>
+> > I have updated my qemu to the recent version and it seems to have lost
+> > compatibility with libvirt. The error message is:
+> > --- cut ---
+> > internal error: unable to execute QEMU command 'qmp_capabilities': QMP
+> > input object member 'id' is unexpected
+> > --- cut ---
+> > What does it mean? Is it intentional or not?
+>
+> I have found the problem. It is caused by commit
+> 65207c59d99f2260c5f1d3b9c491146616a522aa. libvirt does not seem to use the
+> removed asynchronous interface but it still feeds in JSONs with 'id' field
+> set to something. So i think the related fragment in qmp_check_input_obj()
+> function should be brought back
+
+If QMP is rejecting the 'id' parameter that is a regression bug.
+
+[quote]
+The QMP spec says
+
+2.3 Issuing Commands
+--------------------
+
+The format for command execution is:
+
+{ "execute": json-string, "arguments": json-object, "id": json-value }
+
+ Where,
+
+- The "execute" member identifies the command to be executed by the Server
+- The "arguments" member is used to pass any arguments required for the
+  execution of the command, it is optional when no arguments are
+  required. Each command documents what contents will be considered
+  valid when handling the json-argument
+- The "id" member is a transaction identification associated with the
+  command execution, it is optional and will be part of the response if
+  provided. The "id" member can be any json-value, although most
+  clients merely use a json-number incremented for each successive
+  command
+
+
+2.4 Commands Responses
+----------------------
+
+There are two possible responses which the Server will issue as the result
+of a command execution: success or error.
+
+2.4.1 success
+-------------
+
+The format of a success response is:
+
+{ "return": json-value, "id": json-value }
+
+ Where,
+
+- The "return" member contains the data returned by the command, which
+  is defined on a per-command basis (usually a json-object or
+  json-array of json-objects, but sometimes a json-number, json-string,
+  or json-array of json-strings); it is an empty json-object if the
+  command does not return data
+- The "id" member contains the transaction identification associated
+  with the command execution if issued by the Client
+
+[/quote]
+
+And as such, libvirt chose to /always/ send an 'id' parameter in all
+commands it issues.
+
+We don't however validate the id in the reply, though arguably we
+should have done so.
+
+Regards,
+Daniel
+--
+|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
+|: http://libvirt.org -o- http://virt-manager.org :|
+|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
+|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
+
+"Daniel P. Berrange" <address@hidden> writes:
+
+> On Fri, Jun 05, 2015 at 04:58:46PM +0300, Pavel Fedin wrote:
+> > Hello!
+> >
+> > > I have updated my qemu to the recent version and it seems to have
+> > > lost compatibility with libvirt. The error message is:
+> > > --- cut ---
+> > > internal error: unable to execute QEMU command 'qmp_capabilities':
+> > > QMP input object member 'id' is unexpected
+> > > --- cut ---
+> > > What does it mean? Is it intentional or not?
+> >
+> > I have found the problem. It is caused by commit
+> > 65207c59d99f2260c5f1d3b9c491146616a522aa. libvirt does not seem to
+> > use the removed asynchronous interface but it still feeds in JSONs
+> > with 'id' field set to something. So i think the related fragment in
+> > qmp_check_input_obj() function should be brought back
+>
+> If QMP is rejecting the 'id' parameter that is a regression bug.
+It is definitely a regression, my fault, and I'll get it fixed a.s.a.p. + +[...] + diff --git a/results/classifier/108/other/818 b/results/classifier/108/other/818 new file mode 100644 index 000000000..5fd163f14 --- /dev/null +++ b/results/classifier/108/other/818 @@ -0,0 +1,20 @@ +device: 0.837 +network: 0.546 +graphic: 0.431 +vnc: 0.415 +debug: 0.391 +performance: 0.381 +other: 0.166 +boot: 0.166 +semantic: 0.166 +PID: 0.151 +permissions: 0.142 +files: 0.133 +socket: 0.133 +KVM: 0.020 + +qemu with invalid arg will cause monitor error +Steps to reproduce: +``` +qemu-system-ppc.exe -m 1024M -monitor +``` diff --git a/results/classifier/108/other/818673 b/results/classifier/108/other/818673 new file mode 100644 index 000000000..b9f1053e6 --- /dev/null +++ b/results/classifier/108/other/818673 @@ -0,0 +1,797 @@ +permissions: 0.892 +PID: 0.891 +other: 0.888 +graphic: 0.883 +vnc: 0.875 +socket: 0.874 +semantic: 0.873 +performance: 0.870 +debug: 0.863 +files: 0.861 +boot: 0.859 +device: 0.855 +network: 0.847 +KVM: 0.802 + +virtio: trying to map MMIO memory + +Qemu host is Core i7, running Linux. Guest is Windows XP sp3. +Often, qemu will crash shortly after starting (1-5 minutes) with a statement "qemu-system-x86_64: virtio: trying to map MMIO memory" +This has occured with qemu-kvm 0.14, qemu-kvm 0.14.1, qemu-0.15.0-rc0 and qemu 0.15.0-rc1. +Qemu is started as such: +qemu-system-x86_64 -cpu host -enable-kvm -pidfile /home/rick/qemu/hds/wxp.pid -drive file=/home/rick/qemu/hds/wxp.raw,if=virtio -m 768 -name WinXP -net nic,model=virtio -net user -localtime -usb -vga qxl -device virtio-serial -chardev spicevmc,name=vdagent,id=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -spice port=1234,disable-ticketing -daemonize -monitor telnet:localhost:12341,server,nowait +The WXP guest has virtio 1.1.16 drivers for net and scsi, and the most current spice binaries from spice-space.org. 
+ +On Sun, Jul 31, 2011 at 12:01 AM, Rick Vernam <email address hidden> wrote: +> Public bug reported: +> +> Qemu host is Core i7, running Linux. Guest is Windows XP sp3. +> Often, qemu will crash shortly after starting (1-5 minutes) with a statement "qemu-system-x86_64: virtio: trying to map MMIO memory" +> This has occured with qemu-kvm 0.14, qemu-kvm 0.14.1, qemu-0.15.0-rc0 and qemu 0.15.0-rc1. +> Qemu is started as such: +> qemu-system-x86_64 -cpu host -enable-kvm -pidfile /home/rick/qemu/hds/wxp.pid -drive file=/home/rick/qemu/hds/wxp.raw,if=virtio -m 768 -name WinXP -net nic,model=virtio -net user -localtime -usb -vga qxl -device virtio-serial -chardev spicevmc,name=vdagent,id=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -spice port=1234,disable-ticketing -daemonize -monitor telnet:localhost:12341,server,nowait +> The WXP guest has virtio 1.1.16 drivers for net and scsi, and the most current spice binaries from spice-space.org. + +This is probably a guest virtio driver bug. + +Vadim: Any known issues like this with 1.1.16? + +Stefan + + +On Sun, 2011-07-31 at 18:54 +0100, Stefan Hajnoczi wrote: +> On Sun, Jul 31, 2011 at 12:01 AM, Rick Vernam <email address hidden> wrote: +> > Public bug reported: +> > +> > Qemu host is Core i7, running Linux. Guest is Windows XP sp3. +> > Often, qemu will crash shortly after starting (1-5 minutes) with a statement "qemu-system-x86_64: virtio: trying to map MMIO memory" +> > This has occured with qemu-kvm 0.14, qemu-kvm 0.14.1, qemu-0.15.0-rc0 and qemu 0.15.0-rc1. 
+> > Qemu is started as such: +> > qemu-system-x86_64 -cpu host -enable-kvm -pidfile /home/rick/qemu/hds/wxp.pid -drive file=/home/rick/qemu/hds/wxp.raw,if=virtio -m 768 -name WinXP -net nic,model=virtio -net user -localtime -usb -vga qxl -device virtio-serial -chardev spicevmc,name=vdagent,id=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -spice port=1234,disable-ticketing -daemonize -monitor telnet:localhost:12341,server,nowait +> > The WXP guest has virtio 1.1.16 drivers for net and scsi, and the most current spice binaries from spice-space.org. +> +> This is probably a guest virtio driver bug. +> +> Vadim: Any known issues like this with 1.1.16? +No, it something new to me. +Will try to reproduce and fix it. +Thank you, +Vadim. +> +> Stefan + + + + +Seems to only crash the first time qemu is started after booting the host machine. +After the first crash, qemu will run solid for days if the host machine is not rebooted. +If I have an opportunity, I'll test if it also crashes after first start when kvm and/or kvm_intel modules are unloaded and reloaded. + +I have the same problem. I'm using the packages from the Sergei ppa with spice enabled on a server with 9 windows xp machines and 1 linux (ubuntu 10.04) one. Ubuntu is rock solid and never crash, but the windows machines do randomnly. I've updated everything i could (using the version from spice-space.org), i've disabled the memory ballooning, disabled spice etc... but it's always the same. It just crash with that message. + +I guess it's a driver problem. I've searched on google and found some clues (there's a guy porting spice drivers to BSD and got that problem and could resolve it). + +If i can do a test that helps, here i am. I've tried a few things but i'm out of ideas. + +With my test, the only thing that all the machines had always enabled is the virtio storage driver. 
I've tried disabling the network one and the memory one but no luck, so maybe it's related to the harddisk controller. (i can't disable it because windows doesn't like to mess with the hard disk controller, you know). + +Thanks a lot ;-) + +Continues to occur with recently updated qxl, vdagent & virtio serial windows binaries from spice-space.org. +Also continues with qemu-kvm-0.15.0-rc1, qemu-0.15.0-rc1 & qemu-0.15.0-rc2 + +Vadim, + +Have you been able to reproduce this? +Do you require any additional information? + +Thanks, +-Rick + +Continues with Qemu 0.15.0 and Qemu-KVM 0.15.0 + +It's something related to Windows. I have in the same machine a linux server working with spice enabled and is rock solid. The windows machines crash with that error randomnly. + +So that would point to virtio. This appears to be the place for virtio bugs, correct? +Should I be doing anything to help usher this along? + +On Sun, Aug 14, 2011 at 7:11 AM, Rick Vernam <email address hidden> wrote: +> So that would point to virtio. This appears to be the place for virtio bugs, correct? +> Should I be doing anything to help usher this along? + +Either we need to help Vadim reproduce this so he can take a look. +Vadim: were you able to reproduce this? + +Or someone interested in Windows driver debugging can see if they can +debug this. The symptom is that the guest driver is placing invalid +descriptors into the vring. QEMU tries to map the memory and finds +the address is in a memory-mapped I/O region instead of a RAM region. +Normally the vring descriptors only point into RAM, so this is a guest +driver bug where the vring is being corrupted somehow. If anyone +wants to take a look I can try to help guide them along the +virtio-specifics. + +Stefan + + +Do you know if it's something related to the virtio net driver? anyone tried going to the e1000 only? 
i have some machines with e1000 and some of them with virtio-net, but i have crash no matter what driver is using (but the virtio driver is installed anyway, despite i'm using the e1000). + +I was searching in the git repository for windows drivers, (http://git.kernel.org/?p=virt/kvm/kvm-guest-drivers-windows.git;a=history;f=NetKVM/Common/ndis56common.h;hb=HEAD ) but couldn't find anything related to this. + +Any news? i can't debug the driver, i would do it if i knew how. + +David Rando. + +On Thu, Aug 25, 2011 at 3:53 PM, David Rando <email address hidden> wrote: +> Do you know if it's something related to the virtio net driver? anyone +> tried going to the e1000 only? i have some machines with e1000 and some +> of them with virtio-net, but i have crash no matter what driver is using +> (but the virtio driver is installed anyway, despite i'm using the +> e1000). +> +> I was searching in the git repository for windows drivers, +> (http://git.kernel.org/?p=virt/kvm/kvm-guest-drivers- +> windows.git;a=history;f=NetKVM/Common/ndis56common.h;hb=HEAD ) but +> couldn't find anything related to this. +> +> Any news? i can't debug the driver, i would do it if i knew how. +> +> David Rando. +> +> -- +> You received this bug notification because you are a member of qemu- +> devel-ml, which is subscribed to QEMU. +> https://bugs.launchpad.net/bugs/818673 +> +> Title: +> virtio: trying to map MMIO memory +> +> Status in QEMU: +> New +> +> Bug description: +> Qemu host is Core i7, running Linux. Guest is Windows XP sp3. +> Often, qemu will crash shortly after starting (1-5 minutes) with a statement "qemu-system-x86_64: virtio: trying to map MMIO memory" +> This has occured with qemu-kvm 0.14, qemu-kvm 0.14.1, qemu-0.15.0-rc0 and qemu 0.15.0-rc1. 
+> Qemu is started as such: +> qemu-system-x86_64 -cpu host -enable-kvm -pidfile /home/rick/qemu/hds/wxp.pid -drive file=/home/rick/qemu/hds/wxp.raw,if=virtio -m 768 -name WinXP -net nic,model=virtio -net user -localtime -usb -vga qxl -device virtio-serial -chardev spicevmc,name=vdagent,id=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -spice port=1234,disable-ticketing -daemonize -monitor telnet:localhost:12341,server,nowait +> The WXP guest has virtio 1.1.16 drivers for net and scsi, and the most current spice binaries from spice-space.org. +> +> To manage notifications about this bug go to: +> https://bugs.launchpad.net/qemu/+bug/818673/+subscriptions + +Vadim, +Have you had any luck reproducing this issue or any advice for David? + +Thanks, +Stefan + + +On Thu, 2011-08-25 at 16:54 +0100, Stefan Hajnoczi wrote: +> On Thu, Aug 25, 2011 at 3:53 PM, David Rando <email address hidden> wrote: +> > Do you know if it's something related to the virtio net driver? anyone +> > tried going to the e1000 only? i have some machines with e1000 and some +> > of them with virtio-net, but i have crash no matter what driver is using +> > (but the virtio driver is installed anyway, despite i'm using the +> > e1000). +> > +> > I was searching in the git repository for windows drivers, +> > (http://git.kernel.org/?p=virt/kvm/kvm-guest-drivers- +> > windows.git;a=history;f=NetKVM/Common/ndis56common.h;hb=HEAD ) but +> > couldn't find anything related to this. +> > +> > Any news? i can't debug the driver, i would do it if i knew how. +> > +> > David Rando. +> > +> > -- +> > You received this bug notification because you are a member of qemu- +> > devel-ml, which is subscribed to QEMU. +> > https://bugs.launchpad.net/bugs/818673 +> > +> > Title: +> > virtio: trying to map MMIO memory +> > +> > Status in QEMU: +> > New +> > +> > Bug description: +> > Qemu host is Core i7, running Linux. Guest is Windows XP sp3. 
+> > Often, qemu will crash shortly after starting (1-5 minutes) with a statement "qemu-system-x86_64: virtio: trying to map MMIO memory" +> > This has occured with qemu-kvm 0.14, qemu-kvm 0.14.1, qemu-0.15.0-rc0 and qemu 0.15.0-rc1. +> > Qemu is started as such: +> > qemu-system-x86_64 -cpu host -enable-kvm -pidfile /home/rick/qemu/hds/wxp.pid -drive file=/home/rick/qemu/hds/wxp.raw,if=virtio -m 768 -name WinXP -net nic,model=virtio -net user -localtime -usb -vga qxl -device virtio-serial -chardev spicevmc,name=vdagent,id=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -spice port=1234,disable-ticketing -daemonize -monitor telnet:localhost:12341,server,nowait +> > The WXP guest has virtio 1.1.16 drivers for net and scsi, and the most current spice binaries from spice-space.org. +> > +> > To manage notifications about this bug go to: +> > https://bugs.launchpad.net/qemu/+bug/818673/+subscriptions +> +> Vadim, +> Have you had any luck reproducing this issue or any advice for David? +Guys, I'm sorry. Not yet. I'll try my best to trace this problem on the +following weekend. + +Best, +Vadim. + +> +> Thanks, +> Stefan + + + + +We're affected by this bug, too. Trying to find a workaround, last friday we changed VGA model to cirrus, and the machine is working properly without new entries in log. + +ping... + +I understand that Vadim must be very busy that he can't look at this - I can relate. +But is there really only one person in all of Qemu and/or Spice who can be addressed to look into this? + +So that I can plan around the viability of Qemu for my users, I need to know if the technologies I seek will be well maintained. + +Thanks, +-Rick + +I've made several unsuccessful attempts to reproduce this problem, +running VMs on top of F14 and RHEL6.2 + +The only one relatively close problem, reported by our QE, was +https://bugzilla.redhat.com/show_bug.cgi?id=727034 +It must be fixed in our internal repository. 
(Public repository +is out of sync, but I'm going to update it soon) + +I put our recent (WHQL candidates) drivers here: +http://people.redhat.com/vrozenfe/virtio-win-prewhql-0.1.zip + +Please give them a try and share your experience. + +Best, +Vadim. + +Still crashes just the same. +I updated the drivers for virt net, scsi & serial from the XP and WXp folders in the zip file that you referenced. +Then I shutdown the VM. +Because it only seems to happen every other time that Qemu is started, I started it back up and shut it down again. +Then the VM was started a third time and left idle prior to crashing. + +Thanks, and sorry that I didn't have better news. +(also, note that I've built qemu-kvm straight from www.linux-kvm.org, and qemu straight from qemu.org). + +-Rick + +so if I use -vga std instead of -vga qxl (and of course take out the -spice stuff), I don't crash. +perhaps this is spice/qxl related? + +sorry, scratch that last about -vga std ... it still crashed just the same using -vga std. + +Thank you, Rick. + +Could you help me to narrow this problem down? + +As I see, you have three virtio drivers installed on your system - block, net, and virtio serial. +Technically, anyone of them can create "trying to map MMIO memory" problem. +The best way to find a buggy driver ( or drivers) will be to isolate one from the other. +If you can, please try running only one virtio device every time to see which driver +sends incorrect scatter/gather list element to QEMU. + +Another question. You said, the problem happens after every second or third restart. +Do you shutdown your VM, or just restart it? How does it work after going through +several hibernate/resume, and/or suspend/resume cycles. + +Best regards, +Vadim. + +On Wednesday 14 September 2011 14:42:09 vrozenfe wrote: +> Thank you, Rick. +> +> Could you help me to narrow this problem down? +Absolutely. + +> +> As I see, you have three virtio drivers installed on your system - block, +> net, and virtio serial. 
Technically, anyone of them can create "trying to +> map MMIO memory" problem. The best way to find a buggy driver ( or +> drivers) will be to isolate one from the other. If you can, please try +> running only one virtio device every time to see which driver sends +> incorrect scatter/gather list element to QEMU. +Sure, no problem. I'll have that in the next few days. + +> +> Another question. You said, the problem happens after every second or third +> restart. Do you shutdown your VM, or just restart it? +Have to shut down the VM guest so that the qemu process exits. + +> How does it work +> after going through several hibernate/resume, and/or suspend/resume +> cycles. +I often will suspend with or without pausing qemu (via monitor commands 'stop' +and 'cont'). I have never experienced any problem with the qemu process that +was running prior to the suspend. + +> +> Best regards, +> Vadim. + +Thanks, +-Rik + + +On Wednesday 14 September 2011 16:30:11 Rick Vernam wrote: +> On Wednesday 14 September 2011 14:42:09 vrozenfe wrote: +> > Thank you, Rick. +> > +> > Could you help me to narrow this problem down? +> +> Absolutely. +> +> > As I see, you have three virtio drivers installed on your system - block, +> > net, and virtio serial. Technically, anyone of them can create "trying to +> > map MMIO memory" problem. The best way to find a buggy driver ( or +> > drivers) will be to isolate one from the other. If you can, please try +> > running only one virtio device every time to see which driver sends +> > incorrect scatter/gather list element to QEMU. +> +> Sure, no problem. I'll have that in the next few days. 
+I started qemu without any of the virt-serial stuff, specfically: +qemu-system-x86_64 -cpu host -enable-kvm -pidfile /home/rick/qemu/hds/wxp.pid - +drive file=/home/rick/qemu/hds/wxp.raw,if=virtio,aio=native -m 1536 -name WinXP +-net nic,model=virtio -net user -localtime -usb -vga qxl -spice +port=1234,disable-ticketing -monitor stdio + +It's been running for around 2 hours and no crash yet. + +Thanks, +-Rick + +> +> > Another question. You said, the problem happens after every second or +> > third restart. Do you shutdown your VM, or just restart it? +> +> Have to shut down the VM guest so that the qemu process exits. +> +> > How does it work +> > after going through several hibernate/resume, and/or suspend/resume +> > cycles. +> +> I often will suspend with or without pausing qemu (via monitor commands +> 'stop' and 'cont'). I have never experienced any problem with the qemu +> process that was running prior to the suspend. +> +> > Best regards, +> > Vadim. +> +> Thanks, +> -Rik + + +Thank you, Rick. + +I will start checking virtio-serial driver tomorrow. + +Best, +Vadim. + + +On Thursday 15 September 2011 11:23:53 Rick Vernam wrote: +> On Wednesday 14 September 2011 16:30:11 Rick Vernam wrote: +> > On Wednesday 14 September 2011 14:42:09 vrozenfe wrote: +> > > Thank you, Rick. +> > > +> > > Could you help me to narrow this problem down? +> > +> > Absolutely. +> > +> > > As I see, you have three virtio drivers installed on your system - +> > > block, net, and virtio serial. Technically, anyone of them can create +> > > "trying to map MMIO memory" problem. The best way to find a buggy +> > > driver ( or drivers) will be to isolate one from the other. If you +> > > can, please try running only one virtio device every time to see which +> > > driver sends incorrect scatter/gather list element to QEMU. +> > +> > Sure, no problem. I'll have that in the next few days. 
+> +> I started qemu without any of the virt-serial stuff, specfically: +> qemu-system-x86_64 -cpu host -enable-kvm -pidfile +> /home/rick/qemu/hds/wxp.pid - drive +> file=/home/rick/qemu/hds/wxp.raw,if=virtio,aio=native -m 1536 -name WinXP +> -net nic,model=virtio -net user -localtime -usb -vga qxl -spice +> port=1234,disable-ticketing -monitor stdio +> +> It's been running for around 2 hours and no crash yet. +So without virt-serial, the machine ran until I rebooted the guest OS, then +crashed with the same error message. Without virt-serial it seemed to be +stable so long as it was just left running. + +Now I'll run it without virt-net, and let you know how that goes. + +> +> Thanks, +> -Rick +> +> > > Another question. You said, the problem happens after every second or +> > > third restart. Do you shutdown your VM, or just restart it? +> > +> > Have to shut down the VM guest so that the qemu process exits. +> > +> > > How does it work +> > > after going through several hibernate/resume, and/or suspend/resume +> > > cycles. +> > +> > I often will suspend with or without pausing qemu (via monitor commands +> > 'stop' and 'cont'). I have never experienced any problem with the qemu +> > process that was running prior to the suspend. +> > +> > > Best regards, +> > > Vadim. +> > +> > Thanks, +> > -Rik + + +On Friday 16 September 2011 03:52:34 hkran wrote: +[snip] +> +> I have tried many times with many restarts or shutdown-and-boot xp guest +> but failed to meet the crashing. +> (I am using the virtio drivers referenced in the earlier mail list.) 
+> my command: +> +> /home/huikai/qemu15/bin/qemu --enable-kvm -m 768 -drive +> file=/home/huikai/winxp_dev.img,if=virtio -net nic,model=virtio -net +> user -usb -usbdevice tablet -localtime -vga qxl -device virtio-serial +> -chardev spicevmc,name=vdagent,id=vdagent -device +> virtserialport,chardev=vdagent,name=spice0 -spice +> port=1234,disable-ticketing -monitor telnet:localhost:12341,server,nowait + +Okay, I tried a variation of that: +qemu-system-x86_64 -cpu host -enable-kvm -m 1536 -pidfile +/home/rick/qemu/hds/wxp.pid -drive file=/home/rick/qemu/hds/wxp.raw,if=virtio +-net nic,model=virtio -net user -localtime -usb -vga qxl -device virtio-serial +-chardev spicevmc,name=vdagent,id=vdagent -device +virtserialport,chardev=vdagent,name=spice0 -spice port=1234,disable-ticketing +-monitor telnet:localhost:12341,server,nowait + +And it's been running stable all day. +The differences between the command line that crashes and yours are: +- yours doesn't have "aio=native" in the -drive declaration. +- yours has some differences in the virtio-serial device declaration. +- yours has some differences in the virtserialport device declaration. + +As time permits I'm going to try each of those differences individually. + +Thanks, +-Rick + + +On Friday 16 September 2011 12:42:02 Rick Vernam wrote: +> On Friday 16 September 2011 03:52:34 hkran wrote: +> [snip] +> +> > I have tried many times with many restarts or shutdown-and-boot xp guest +> > but failed to meet the crashing. +> > (I am using the virtio drivers referenced in the earlier mail list.) 
+> > my command: +> > +> > /home/huikai/qemu15/bin/qemu --enable-kvm -m 768 -drive +> > file=/home/huikai/winxp_dev.img,if=virtio -net nic,model=virtio -net +> > user -usb -usbdevice tablet -localtime -vga qxl -device virtio-serial +> > -chardev spicevmc,name=vdagent,id=vdagent -device +> > virtserialport,chardev=vdagent,name=spice0 -spice +> > port=1234,disable-ticketing -monitor +> > telnet:localhost:12341,server,nowait +> +> Okay, I tried a variation of that: +> qemu-system-x86_64 -cpu host -enable-kvm -m 1536 -pidfile +> /home/rick/qemu/hds/wxp.pid -drive +> file=/home/rick/qemu/hds/wxp.raw,if=virtio -net nic,model=virtio -net user +> -localtime -usb -vga qxl -device virtio-serial -chardev +> spicevmc,name=vdagent,id=vdagent -device +> virtserialport,chardev=vdagent,name=spice0 -spice +> port=1234,disable-ticketing -monitor telnet:localhost:12341,server,nowait +> +> And it's been running stable all day. +> The differences between the command line that crashes and yours are: +> - yours doesn't have "aio=native" in the -drive declaration. +> - yours has some differences in the virtio-serial device declaration. +> - yours has some differences in the virtserialport device declaration. + +I added "aio=native" and it crashed. +If it helps, I ran config like so: +./configure --target-list=x86_64-softmmu --disable-curses --disable-curl -- +audio-drv-list=alsa --audio-card-list=sb16,ac97,hda --enable-vnc-thread -- +disable-bluez --enable-vhost-net --enable-spice + +and I've attached config.log, as well as the output of configure. + +Thanks, +-Rick + + +On Friday 16 September 2011 12:42:02 Rick Vernam wrote: +> On Friday 16 September 2011 03:52:34 hkran wrote: +> [snip] +> +> > I have tried many times with many restarts or shutdown-and-boot xp guest +> > but failed to meet the crashing. +> > (I am using the virtio drivers referenced in the earlier mail list.) 
+> > my command: +> > +> > /home/huikai/qemu15/bin/qemu --enable-kvm -m 768 -drive +> > file=/home/huikai/winxp_dev.img,if=virtio -net nic,model=virtio -net +> > user -usb -usbdevice tablet -localtime -vga qxl -device virtio-serial +> > -chardev spicevmc,name=vdagent,id=vdagent -device +> > virtserialport,chardev=vdagent,name=spice0 -spice +> > port=1234,disable-ticketing -monitor +> > telnet:localhost:12341,server,nowait +> +> Okay, I tried a variation of that: +> qemu-system-x86_64 -cpu host -enable-kvm -m 1536 -pidfile +> /home/rick/qemu/hds/wxp.pid -drive +> file=/home/rick/qemu/hds/wxp.raw,if=virtio -net nic,model=virtio -net user +> -localtime -usb -vga qxl -device virtio-serial -chardev +> spicevmc,name=vdagent,id=vdagent -device +> virtserialport,chardev=vdagent,name=spice0 -spice +> port=1234,disable-ticketing -monitor telnet:localhost:12341,server,nowait +> +> And it's been running stable all day. +> The differences between the command line that crashes and yours are: +> - yours doesn't have "aio=native" in the -drive declaration. +> - yours has some differences in the virtio-serial device declaration. +> - yours has some differences in the virtserialport device declaration. +> +> As time permits I'm going to try each of those differences individually. +Without "aio=native" ... +in the definition of virtserialport, I changed "name=spice0" to +"name=com.redhat.spice.0" - with this change, the guest vdagent works, but it +crashed... + +> +> Thanks, +> -Rick + + +On Friday 23 September 2011 14:07:17 Alon Levy wrote: +> On Thu, Sep 22, 2011 at 02:10:04PM -0500, Rick Vernam wrote: +> > On Friday 16 September 2011 12:42:02 Rick Vernam wrote: +> > > On Friday 16 September 2011 03:52:34 hkran wrote: +> > > [snip] +> > > +> > > > I have tried many times with many restarts or shutdown-and-boot xp +> > > > guest but failed to meet the crashing. +> > > > (I am using the virtio drivers referenced in the earlier mail list.) 
+> > > > my command: +> > > > +> > > > /home/huikai/qemu15/bin/qemu --enable-kvm -m 768 -drive +> > > > file=/home/huikai/winxp_dev.img,if=virtio -net nic,model=virtio -net +> > > > user -usb -usbdevice tablet -localtime -vga qxl -device +> > > > virtio-serial -chardev spicevmc,name=vdagent,id=vdagent -device +> > > > virtserialport,chardev=vdagent,name=spice0 -spice +> > > > port=1234,disable-ticketing -monitor +> > > > telnet:localhost:12341,server,nowait +> > > +> > > Okay, I tried a variation of that: +> > > qemu-system-x86_64 -cpu host -enable-kvm -m 1536 -pidfile +> > > /home/rick/qemu/hds/wxp.pid -drive +> > > file=/home/rick/qemu/hds/wxp.raw,if=virtio -net nic,model=virtio -net +> > > user -localtime -usb -vga qxl -device virtio-serial -chardev +> > > spicevmc,name=vdagent,id=vdagent -device +> > > virtserialport,chardev=vdagent,name=spice0 -spice +> > > port=1234,disable-ticketing -monitor +> > > telnet:localhost:12341,server,nowait +> > > +> > > And it's been running stable all day. +> > > The differences between the command line that crashes and yours are: +> > > - yours doesn't have "aio=native" in the -drive declaration. +> > > - yours has some differences in the virtio-serial device declaration. +> > > - yours has some differences in the virtserialport device declaration. +> > > +> > > As time permits I'm going to try each of those differences +> > > individually. +> > +> > Without "aio=native" ... +> > in the definition of virtserialport, I changed "name=spice0" to +> > "name=com.redhat.spice.0" - with this change, the guest vdagent works, +> > but it crashed... +> +> If you provide details on the crash maybe someone can help. +This email thread has details early on the thread, and there is a bug report +here: https://bugs.launchpad.net/bugs/818673 +All the details of the crash that are available to me are previously +described. + +> +> > > Thanks, +> > > -Rick + + +Vadim, + +Did you see comment #27? 
Is that helpful, would you like any additional info? Are there other things you would like for me to try?
+
+Thanks,
+-Rick
+
+So I've built qemu with --enable-debug and tried running with an attached GDB, but got nothing.
+I've never tried to debug Qemu before, but I know it's not quite as simple as debugging other apps.
+I am honestly clueless about how to further debug this problem.
+
+Should I give up on using virtio-serial for spice vdagent integration?
+It seems that nobody really has any interest in this problem.
+
+Are other people using qemu with spice (including functional guest agent support, copy/paste, etc)? How?
+I know hkran posted how (s)he uses qemu without hitting this bug, but when I use qemu in that way, I lose the guest agent.
+
+I like to fix things myself, and I hate to be asking about this when everybody clearly has more interesting things to do.
+I just need some input so that I can have realistic expectations.
+
+Thanks,
+-Rick
+
+I've been dealing with this bug for some time on Fedora. Until recently, I was using the VirtIO drivers from RHEV 2.2, which don't suffer from this problem. As of Fedora 16, however, that isn't an option, because they cause the guest to blue-screen early in the boot process.
+
+So ... 
I've been doing some more testing with the following setup: + + Host: + Intel DQ67SW motherboard with Q67 chipset (including IOMMU) + BIOS version SWQ6710H.86A.0050.2011.0401.1409 (release date 04/01/2011) + Intel Core i7 2600, 4-cores, 8 threads, 3.4 GHz + 16GB memory + Fedora 15 64-bit, fully updated including updates-testing repo + qemu-kvm-0.14.0-8.fc15.x86_64 + libvirt-0.8.8-7.fc15.x86_64 + kernel-2.6.41.6-1.fc15.x86_64 + + Guest: + Windows 7 Professional 32-bit, fully updated + 2 VCPUs + 3.5GB memory + Red Hat VirtIO Ethernet Adapter driver version 6.0.209.605 (9/20/2010) + Red Hat VirtIO SCSI Controller driver version 6.0.0.10 (9/20/2010) + (No VirtIO serial ports or channels defined) + +(The VirtIO drivers are from http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/.) + +I have determined that disabling the Intel IOMMU has no effect; the problem still occurs. + +Perhaps more interestingly, it seems that the problem only occurs when I am using the VirtIO SCSI *and* the VirtIO Ethernet drivers. It seems that the problem does not occur if I only use one of the drivers; an IDE disk with a VirtIO NIC seems to be stable, as does a VirtIO disk with an e1000 NIC. + +Now to the big question ... what the heck can be done to get this problem fixed? I hope that everyone agrees that it's totally unacceptable for a problem like this to sit unfixed for so long. I am more than willing to test any patches, enable +debugging, etc.; just tell me what to do. + +Thanks! + +Two other observations: + +* The problem is also present in the latest drivers in the RHEL 6.2 virtio-win package (both driver versions 60.62.102.3000, dates 9/12/2011). +* The problem does not seem to occur if the guest has only 1 VCPU. + +So the problem only occurs when using 2 VirtIO devices with 2 VCPUs. This leads me to speculate that there is some sort of VirtIO "core" that is shared between the 2 devices, and that there is some sort of race condition or locking problem in that core. 
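If anyone wants to chase the suspected race in a shared virtio core, a backtrace of every QEMU thread at the moment of failure would narrow it down considerably. Below is a rough sketch of one way to capture that with gdb; the pidfile path is taken from the command line quoted earlier in the thread, and the assumption that the failure path goes through exit()/abort() is mine, not something confirmed here:

```shell
# Hypothetical helper: write a gdb command file that, when attached to the
# running qemu process, stops on abort()/exit() and dumps a backtrace of
# every thread before letting qemu die.
cat > /tmp/qemu-bt.gdb <<'EOF'
set pagination off
break abort
break exit
continue
thread apply all bt
detach
quit
EOF

# Then attach from the host (same user that started qemu, or root), e.g.:
#   gdb -batch -x /tmp/qemu-bt.gdb -p "$(cat /home/rick/qemu/hds/wxp.pid)"
```

A QEMU built with --enable-debug (as Rick describes above) keeps the symbols needed for the backtraces to be readable.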
+
+In reply to comment #32, I encounter this problem with 1 VCPU - see the original description of the bug.
+Also note that after qemu quits with the error, the subsequent execution of the same qemu invocation will run stable.
+
+
+And I have this bug!
+Linux test-2 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 x86_64 GNU/Linux
+In the container I have Windows XP SP3.
+
+In the log:
+LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 2048 -smp 2 -name boss_xp -uuid 9041090d-acee-da4a-921d-238f2a43be64 -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/boss_xp.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/root/boss_xp.qcow2,if=virtio,index=0,boot=on,format=raw,cache=none -drive file=/home/admino/virtio-win-1.1.16.iso,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=00:16:36:49:83:6c,vlan=0,model=virtio,name=virtio.0 -net tap,fd=43,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:2 -k en-us -vga cirrus
+char device redirected to /dev/pts/3
+pci_add_option_rom: failed to find romfile "pxe-virtio.bin"
+virtio_ioport_write: unexpected address 0x13 value 0x1
+virtio: trying to map MMIO memory
+
+After this, Windows shuts down. 
+I tried this (in log): + +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 2048 -sm +p 2 -name boss_xp -uuid 9041090d-acee-da4a-921d-238f2a43be64 -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/boss_xp.monitor,server,nowait -monitor cha +rdev:monitor -localtime -boot c -drive file=/root/boss_xp.qcow2,if=virtio,index=0,boot=on,format=raw,cache=none -drive file=/home/admino/virtio-win-1.1.16.is +o,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=00:16:36:49:83:6c,vlan=0,model=rtl8139,name=rtl8139.0 -net tap,fd=43,vlan=0,name=tap.0 -chardev pty, +id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:2 -k en-us -vga cirrus +char device redirected to /dev/pts/3 +pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin" +virtio: trying to map MMIO memory +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 2048 -sm +p 2 -name boss_xp -uuid 9041090d-acee-da4a-921d-238f2a43be64 -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/boss_xp.monitor,server,nowait -monitor cha +rdev:monitor -localtime -boot c -drive file=/root/boss_xp.qcow2,if=virtio,index=0,boot=on,format=raw,cache=none -drive file=/home/admino/virtio-win-1.1.16.is +o,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=00:16:36:49:83:6c,vlan=0,model=rtl8139,name=rtl8139.0 -net tap,fd=43,vlan=0,name=tap.0 -chardev pty, +id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:2 -k en-us -vga cirrus +char device redirected to /dev/pts/3 +pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin" +LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 2048 -sm +p 1 -name boss_xp -uuid 9041090d-acee-da4a-921d-238f2a43be64 -chardev 
socket,id=monitor,path=/var/lib/libvirt/qemu/boss_xp.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/root/boss_xp.qcow2,if=virtio,index=0,boot=on,format=raw,cache=none -drive file=/home/admino/virtio-win-1.1.16.iso,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=00:16:36:49:83:6c,vlan=0,model=rtl8139,name=rtl8139.0 -net tap,fd=43,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:2 -k en-us -vga cirrus
+char device redirected to /dev/pts/3
+pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin"
+virtio: trying to map MMIO memory
+
+I got this log within 15 minutes.
+
+Now I'm trying this config:
+LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 2048 -smp 1 -name boss_xp -uuid 9041090d-acee-da4a-921d-238f2a43be64 -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/boss_xp.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/root/boss_xp.qcow2,if=virtio,index=0,boot=on,format=raw,cache=none -drive file=/home/admino/virtio-win-1.1.16.iso,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=00:16:36:49:83:6c,vlan=0,model=rtl8139,name=rtl8139.0 -net tap,fd=43,vlan=0,name=tap.0 -serial none -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:2 -k en-us -vga cirrus
+pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin"
+
+I will write later with the result.
+
+PS: sorry for my English
+
+One more thing: I have two more virtual PCs with Windows XP SP3 and one CPU, but they don't have any problems. Maybe this bug depends on 2 or more CPUs?
+
+I experience this on uni-processor.
+
+On Tuesday 24 January 2012 16:48:04 Vitalis wrote:
+> One more thing: I have two more virtual PCs with Windows XP SP3 and
+> one CPU, but they don't have any problems. Maybe this bug depends on
+> 2 or more CPUs? 
+
+
+Is this bug similar to https://bugzilla.redhat.com/show_bug.cgi?id=771390 ?
+
+Yes, I would say it is the same bug. I will test the driver that Vadim linked in Comment 33 (https://bugzilla.redhat.com/show_bug.cgi?id=771390#c33) and report back.
+
+Thanks, Mike, for posting here.
+
+Well, the link in the Red Hat bug, comment 33, is no good apparently. I will follow that bug, and test when I see Vadim has posted a new driver to test.
+
+I now have the following config and the bug persists:
+/usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 2048 -smp 1 -name boss_xp -uuid 9041090d-acee-da4a-921d-238f2a43be64 -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/boss_xp.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/root/boss_xp.qcow2,if=virtio,index=0,boot=on,format=raw,cache=none -drive file=/home/admino/virtio-win-1.1.16.iso,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=00:16:36:49:83:6c,vlan=0,model=rtl8139,name=rtl8139.0 -net tap,fd=43,vlan=0,name=tap.0 -serial none -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:2 -k en-us -vga cirrus
+
+
+It was a long journey.
+But now it seems like we've managed to fix this problem.
+https://bugzilla.redhat.com/show_bug.cgi?id=771390#c45
+
+I put the new drivers here:
+http://people.redhat.com/vrozenfe/vioscsi.vfd
+
+Best regards,
+Vadim.
+
+Thanks! Where can I get an ISO of the new driver pack for Ubuntu 10.04?
+
+Vadim, could this be related to the hangs during boot with qxl and virtio-serial in a single Windows VM?
+
+Alon
+
+I have no idea regarding Ubuntu, but you can find the new drivers
+at the Fedora project site:
+http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-22.iso
+
+Hi Alon,
+Unfortunately, the qxl and virtio-serial
+hang is a different problem.
+
+
+Hello with bad news! 
I have:
+
+virtio_ioport_write: unexpected address 0x13 value 0x1
+
+on this config:
+
+LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 3072 -smp 1 -name nata_xp -uuid da607499-1d8f-e7ef-d1d2-381c1839e4ba -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/nata_xp.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/root/nata_xp.qcow2,if=virtio,index=0,boot=on,format=raw,cache=none -drive file=/home/admino/virtio-win-0.1-22.iso,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=00:16:36:06:02:69,vlan=0,model=virtio,name=virtio.0 -net tap,fd=43,vlan=0,name=tap.0 -serial none -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:3 -k en-us -vga cirrus
+pci_add_option_rom: failed to find romfile "pxe-virtio.bin"
+
+with kernel 2.6.32-40-generic #87-Ubuntu SMP Tue Mar 6 00:56:56 UTC 2012 x86_64 GNU/Linux
+qemu drivers are virtio-win-0.1-22.iso
+
+Can anybody help me?
+
+
+According to comment 41, this bug has been fixed, so I'm setting the status to "Fix released" now ... Vitalis, your problem from comment 46 sounds different - if it still persists today, please open a new bug ticket for this instead.
+
diff --git a/results/classifier/108/other/819 b/results/classifier/108/other/819
new file mode 100644
index 000000000..83fd7093d
--- /dev/null
+++ b/results/classifier/108/other/819
@@ -0,0 +1,90 @@
+KVM: 0.773
+other: 0.643
+vnc: 0.628
+permissions: 0.588
+debug: 0.578
+semantic: 0.575
+files: 0.575
+PID: 0.575
+graphic: 0.568
+device: 0.566
+network: 0.510
+performance: 0.509
+socket: 0.437
+boot: 0.411
+
+watchdog: BUG: soft lockup - CPU#1 stuck for 22s! 
[swapper/1:0]
+Description of problem:
+During virtual disk live move/migration, VMs get severe stuttering and even CPU soft lockups, as described here:
+
+https://bugzilla.kernel.org/show_bug.cgi?id=199727
+
+This also happens on some of our virtual machines when the I/O load inside the VM is high or the workload is fsync-centric.
+
+I'm searching for a way to mitigate this problem; i.e., I can live with stuttering/delays of several seconds, but getting CPU soft lockups of 22 s or longer is unacceptable.
+
+I have searched the web for a long, long time now, but did not find a solution, nor did I find a way to troubleshoot this more in depth to find the real root cause.
+
+If this issue report is not accepted because of "non-native qemu" (i.e. the Proxmox platform), please tell me which qemu/distro I can/should use instead (one with an easily usable live-migration feature) to try to reproduce the problem.
+Steps to reproduce:
+1. do a live migration of one or more virtual machine disks
+2. watch "ioping -WWWYy test.dat" inside the VM (being moved) for disk latency
+3. the disk latency varies heavily; from time to time it goes up to values of tens of seconds, even leading to kernel messages like "kernel:[ 2155.520846] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! 
[swapper/1:0]" + +``` +4 KiB >>> test.dat (ext4 /dev/sda1): request=55 time=1.07 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=56 time=1.24 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=57 time=567.4 ms (fast) +4 KiB >>> test.dat (ext4 /dev/sda1): request=58 time=779.0 ms (fast) +4 KiB >>> test.dat (ext4 /dev/sda1): request=59 time=589.0 ms (fast) +4 KiB >>> test.dat (ext4 /dev/sda1): request=60 time=1.57 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=61 time=847.7 ms (fast) +4 KiB >>> test.dat (ext4 /dev/sda1): request=62 time=933.0 ms +4 KiB >>> test.dat (ext4 /dev/sda1): request=63 time=891.4 ms (fast) +4 KiB >>> test.dat (ext4 /dev/sda1): request=64 time=820.8 ms (fast) +4 KiB >>> test.dat (ext4 /dev/sda1): request=65 time=1.02 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=66 time=2.44 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=67 time=620.7 ms (fast) +4 KiB >>> test.dat (ext4 /dev/sda1): request=68 time=1.03 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=69 time=1.24 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=70 time=1.42 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=71 time=1.36 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=72 time=1.41 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=73 time=1.33 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=74 time=2.36 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=75 time=1.46 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=76 time=1.45 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=77 time=1.28 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=78 time=1.41 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=79 time=2.33 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=80 time=1.39 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=81 time=1.35 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=82 time=1.54 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=83 time=1.52 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=84 time=1.50 s +4 KiB >>> test.dat (ext4 /dev/sda1): request=85 time=2.00 s +4 KiB >>> test.dat 
(ext4 /dev/sda1): request=86 time=1.47 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=87 time=1.26 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=88 time=1.29 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=89 time=2.05 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=90 time=1.44 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=91 time=1.43 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=92 time=1.72 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=93 time=1.77 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=94 time=2.56 s
+
+Message from syslogd@iotest2 at Jan 14 14:51:12 ...
+ kernel:[ 2155.520846] watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [swapper/1:0]
+4 KiB >>> test.dat (ext4 /dev/sda1): request=95 time=22.5 s (slow)
+4 KiB >>> test.dat (ext4 /dev/sda1): request=96 time=3.56 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=97 time=1.52 s (fast)
+4 KiB >>> test.dat (ext4 /dev/sda1): request=98 time=1.69 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=99 time=1.90 s
+4 KiB >>> test.dat (ext4 /dev/sda1): request=100 time=1.15 s (fast)
+4 KiB >>> test.dat (ext4 /dev/sda1): request=101 time=890.0 ms (fast)
+4 KiB >>> test.dat (ext4 /dev/sda1): request=102 time=959.6 ms (fast)
+4 KiB >>> test.dat (ext4 /dev/sda1): request=103 time=926.5 ms (fast)
+4 KiB >>> test.dat (ext4 /dev/sda1): request=104 time=791.5 ms (fast)
+4 KiB >>> test.dat (ext4 /dev/sda1): request=105 time=577.8 ms (fast)
+4 KiB >>> test.dat (ext4 /dev/sda1): request=106 time=867.7 ms (fast)
+```
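The ioping measurement above can be approximated on guests where ioping is not installed. The sketch below (file name and request count are arbitrary choices, not from the report) times synchronous 4 KiB writes the same way `ioping -WWWYy` does: each `dd` with `oflag=dsync` forces the write to the disk before returning, so the elapsed time per request reflects the guest's real write latency:

```shell
# Rough stand-in for "ioping -WWWYy test.dat": time 4 KiB synchronous
# writes and log the per-request latency in milliseconds.
rm -f fsync_probe.log
for i in $(seq 1 8); do
  t0=$(date +%s%N)                              # nanosecond timestamp (GNU date)
  dd if=/dev/zero of=test.dat bs=4096 count=1 \
     oflag=dsync conv=notrunc status=none       # synchronous 4 KiB write
  t1=$(date +%s%N)
  echo "request=$i time=$(( (t1 - t0) / 1000000 )) ms"
done | tee fsync_probe.log
```

Run inside the VM while the disk is being live-migrated; per-request times in the tens of seconds would match the soft-lockup window reported above.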