| | | |
|---|---|---|
| author | Christian Krinitsin <mail@krinitsin.com> | 2025-06-16 16:59:00 +0000 |
| committer | Christian Krinitsin <mail@krinitsin.com> | 2025-06-16 16:59:33 +0000 |
| commit | 9aba81d8eb048db908c94a3c40c25a5fde0caee6 (patch) | |
| tree | b765e7fb5e9a3c2143c68b0414e0055adb70e785 /results/classifier/118/user-level/1259499 | |
| parent | b89a938452613061c0f1f23e710281cf5c83cb29 (diff) | |
add 18th iteration of classifier
Diffstat (limited to 'results/classifier/118/user-level/1259499')
| -rw-r--r-- | results/classifier/118/user-level/1259499 | 132 |
1 file changed, 132 insertions, 0 deletions
diff --git a/results/classifier/118/user-level/1259499 b/results/classifier/118/user-level/1259499
new file mode 100644
index 000000000..1786866c6
--- /dev/null
+++ b/results/classifier/118/user-level/1259499
@@ -0,0 +1,132 @@
+user-level: 0.861
+KVM: 0.811
+permissions: 0.804
+peripherals: 0.786
+files: 0.775
+mistranslation: 0.775
+hypervisor: 0.765
+ppc: 0.753
+x86: 0.753
+register: 0.752
+VMM: 0.747
+network: 0.740
+virtual: 0.740
+device: 0.733
+arm: 0.733
+vnc: 0.720
+boot: 0.719
+risc-v: 0.719
+graphic: 0.712
+assembly: 0.709
+socket: 0.705
+PID: 0.690
+performance: 0.689
+kernel: 0.688
+TCG: 0.684
+debug: 0.680
+architecture: 0.675
+semantic: 0.636
+i386: 0.573
+--------------------
+x86: 0.977
+virtual: 0.963
+hypervisor: 0.941
+user-level: 0.864
+socket: 0.419
+KVM: 0.304
+boot: 0.109
+debug: 0.049
+PID: 0.040
+TCG: 0.030
+files: 0.027
+kernel: 0.021
+VMM: 0.020
+device: 0.015
+register: 0.014
+network: 0.005
+semantic: 0.005
+assembly: 0.003
+vnc: 0.003
+risc-v: 0.002
+architecture: 0.002
+performance: 0.002
+graphic: 0.002
+peripherals: 0.002
+ppc: 0.001
+permissions: 0.001
+i386: 0.001
+mistranslation: 0.000
+arm: 0.000
+
+QEmu 1.7.0 cannot restore a 1.6.0 live snapshot made in qemu-system-x86_64
+
+I have upgraded to QEmu 1.7.0 (Debian 1.7.0+dfsg-2), but now when I try to restore a live snapshot made in QEmu 1.6.0 (Debian 1.6.0+dfsg-1) I see that the VM boots from scratch instead of starting directly in the snapshot's running state.
+
+Furthermore, if the VM is already running and I try to revert to the snapshot again, I get the following message:
+
+$ virsh --connect qemu:///system snapshot-revert fgtbbuild wtb; echo $?
+error: operation failed: Error -22 while loading VM state
+1
+
+I have test VMs with live snapshots corresponding to different testing configurations, so I typically revert the VMs to one of the live snapshots and run the tests. It would be pretty annoying to have to recreate all these live snapshots any time I upgrade QEmu, but it looks like I'll have to do it again.
+
+This all sounds very much like bug 1123975, where QEmu 1.3 broke compatibility with previous versions' live snapshots :-(
+
+Here is the command being run by libvirt:
+
+/usr/bin/qemu-system-x86_64 -name fgtbbuild -S -machine pc-1.1,accel=kvm,usb=off -m 512 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid f510955c-17de-9907-1e33-dfe1ef7a08b6 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/fgtbbuild.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/mnt/storage1/qemu/fgtbbuild.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0a:3c:e8,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -loadvm wtb
+
+ipxe-qemu 1.0.0+git-20120202.f6840ba-3
+qemu 1.7.0+dfsg-2
+qemu-keymaps 1.7.0+dfsg-2
+qemu-slof 20130430+dfsg-1
+qemu-system 1.7.0+dfsg-2
+qemu-system-arm 1.7.0+dfsg-2
+qemu-system-common 1.7.0+dfsg-2
+qemu-system-mips 1.7.0+dfsg-2
+qemu-system-misc 1.7.0+dfsg-2
+qemu-system-ppc 1.7.0+dfsg-2
+qemu-system-sparc 1.7.0+dfsg-2
+qemu-system-x86 1.7.0+dfsg-2
+qemu-user 1.7.0+dfsg-2
+qemu-utils 1.7.0+dfsg-2
+libvirt-bin 1.1.4-2
+libvirt0 1.1.4-2
+libvirtodbc0 6.1.6+dfsg-4
+
+Hi Francois,
+ I've managed to reproduce this; in my log file (/var/log/libvirt/qemu/machinename.log) I see:
+
+Unknown ramblock "0000:02.0/qxl.vram", cannot accept migration
+qemu: warning: error while loading state for instance 0x0 of device 'ram'
+qemu-system-x86_64: Error -22 while loading VM state
+
+Do you also see that unknown ramblock warning?
+
+(I'm running on F20, using 1.6.0 and 1.7.0 QEMUs built from source, running minimal F20 guests.)
+
+Dave
+
+Hi Francois,
+ I've done some more digging.
+It looks like the problem you've hit is related to the same one that's fixed by:
+
+http://lists.gnu.org/archive/html/qemu-devel/2013-11/msg00513.html
+
+However, that only fixes older restores; there is a workaround, which is to pass to QEMU:
+
+-global i440FX-pcihost.short_root_bus=1
+
+when loading the snapshot. This can be done by editing the snapshot XML files, but it is obviously a bit messy.
+
+Thanks for digging into this.
+I am indeed getting the same ramblock error, so it's good that there appears to be a fix for it.
+Also, if I understand it correctly, this particular issue only affects the 1.6.0 snapshots, so given that most of my snapshots are still on 1.3.x, a direct upgrade to 1.7+ will hopefully let me avoid the issue.
+
+Yes, my understanding of the bug is that 1.7+ should load your 1.3.x images, and snapshots then taken on 1.7.x should be OK into the future.
+
+I don't think there's currently a way of fixing those 1.6.0 snapshots; that workaround will let you load them in 1.7, but I think if you were then to take a snapshot on 1.7 with that flag, the snapshot would have the same problem.
+
+Setting status to "Won't fix" since there is no good solution to this problem according to comment #4.
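The workaround discussed in the thread (passing `-global i440FX-pcihost.short_root_bus=1` when loading the snapshot) can be sketched as a small shell wrapper. This is a hypothetical illustration only: the disk path and snapshot name are taken from the report, the base command line is heavily abbreviated, and the script prints the command instead of executing it so it can be inspected first.

```shell
# Sketch of the workaround from the thread: append the compatibility flag so
# QEMU 1.7 accepts a live snapshot taken with QEMU 1.6.0. Paths and names
# below come from the bug report; adjust them for a real machine.
VM_DISK=/mnt/storage1/qemu/fgtbbuild.qcow2   # disk image from the report
SNAPSHOT=wtb                                 # snapshot name from the report
WORKAROUND="-global i440FX-pcihost.short_root_bus=1"

# Abbreviated invocation; the real libvirt command line has many more options.
QEMU_CMD="qemu-system-x86_64 -machine pc-1.1,accel=kvm \
-drive file=$VM_DISK,if=virtio,format=qcow2 \
-loadvm $SNAPSHOT $WORKAROUND"

# Print rather than exec, so the command can be reviewed before running it.
echo "$QEMU_CMD"
```

With libvirt-managed guests the same effect requires editing the snapshot XML (as Dave notes), since libvirt builds the QEMU command line itself.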