Diffstat (limited to 'results/classifier/118/all/686')
| -rw-r--r-- | results/classifier/118/all/686 | 69 |
1 file changed, 69 insertions, 0 deletions
diff --git a/results/classifier/118/all/686 b/results/classifier/118/all/686
new file mode 100644
index 000000000..50d285af8
--- /dev/null
+++ b/results/classifier/118/all/686
@@ -0,0 +1,69 @@
+graphic: 0.978
+debug: 0.975
+performance: 0.974
+device: 0.974
+permissions: 0.974
+user-level: 0.972
+network: 0.969
+assembly: 0.968
+register: 0.968
+socket: 0.968
+ppc: 0.967
+architecture: 0.967
+semantic: 0.966
+hypervisor: 0.965
+arm: 0.964
+mistranslation: 0.964
+virtual: 0.962
+files: 0.962
+kernel: 0.962
+boot: 0.959
+TCG: 0.955
+peripherals: 0.953
+PID: 0.953
+x86: 0.952
+vnc: 0.944
+KVM: 0.941
+risc-v: 0.938
+VMM: 0.936
+i386: 0.891
+
+QEMU crashes if it is paused and migrated twice
+Description of problem:
+If the VM is in PAUSED state (in OpenStack parlance; I believe libvirt also calls this paused, but uses the command `virsh suspend`) and live-migrated twice, the QEMU process terminates during the second migration.
+
+This is perfectly repeatable.
+
+If the VM is unpaused and re-paused after the first migration, the problem does not occur on the next migration.
+Steps to reproduce:
+See also the referenced bug report to openstack, above.
+1. `$ openstack stack create ....`
+2. `$ openstack server pause <UUID>`
+(wait until done)
+3. `$ openstack server migrate --live-migration <UUID>`
+(wait until done)
+4. `$ openstack server migrate --live-migration <UUID>`
+
+The VM is now in ERROR state because it has disappeared: `libvirt.libvirtError: Domain not found: no domain with matching uuid '<UUID>'`
+Additional information:
+The last few lines from instance-00000ba2.log seem pertinent (this is from the receiving QEMU instance):
+```
+2021-10-22 15:32:53.829+0000: initiating migration
+qemu-system-x86_64: /build/qemu-lb4V37/qemu-4.2/block.c:5523: bdrv_inactivate_recurse: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
+2021-10-22 15:32:59.122+0000: shutting down, reason=crashed
+```
+This is logged by libvirt (also on the receiving side):
+```
+Oct 22 15:29:04 ybk140931 ovs-vsctl[20174]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port tap3a71aa63-6a
+Oct 22 15:31:31 ybk140931 ovs-vsctl[21412]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port tap3a71aa63-6a -- add-port br-int tap3a71aa63-6a -- set Interface tap3a71aa63-6a "external-ids:attached-mac=\"fa:16:3e:da:03:56\"" -- set Interface tap3a71aa63-6a "external-ids:iface-id=\"3a71aa63-6a39-41d8-9602-04b84834db9e\"" -- set Interface tap3a71aa63-6a "external-ids:vm-id=\"de2b27d2-345c-45fc-8f37-2fa0ed1a1151\"" -- set Interface tap3a71aa63-6a external-ids:iface-status=active
+Oct 22 15:32:58 ybk140931 libvirtd[3237]: Unable to read from monitor: Connection reset by peer
+Oct 22 15:32:59 ybk140931 ovs-vsctl[22001]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port tap3a71aa63-6a
+Oct 22 15:32:59 ybk140931 libvirtd[3237]: operation failed: domain is not running
+Oct 22 15:32:59 ybk140931 libvirtd[3237]: internal error: qemu unexpectedly closed the monitor: 2021-10-22T15:32:58.845667Z qemu-system-x86_64: Failed to load virtio_pci/modern_queue_state:used
+ 2021-10-22T15:32:58.845687Z qemu-system-x86_64: Failed to load virtio_pci/modern_state:vqs
+ 2021-10-22T15:32:58.845690Z qemu-system-x86_64: Failed to load virtio/extra_state:extra_state
+ 2021-10-22T15:32:58.845692Z qemu-system-x86_64: Failed to load virtio-rng:virtio
+ 2021-10-22T15:32:58.845695Z qemu-system-x86_64: error while loading state for instance 0x0 of device '0000:00:06.0/virtio-rng'
+ 2021-10-22T15:32:58.847860Z qemu-system-x86_64: load of migration failed: Input/output error
+Oct 22 15:32:59 ybk140931 libvirtd[3237]: operation failed: domain 'instance-00000ba2' is not running
+```
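The assertion in the log enforces QEMU's block-layer invariant that an image must not be inactivated while it is already flagged `BDRV_O_INACTIVE`. A plausible reading of the report is that after the first (incoming) migration the destination's images stay inactive because the paused guest is never resumed, so the second (outgoing) migration trips the assertion when it tries to inactivate them again; unpausing reactivates the images, which matches the reported workaround. The sketch below models that state machine; it is illustrative only, with a simplified struct, invented function names, and a made-up flag value, not QEMU's actual code.
```
#include <assert.h>

#define BDRV_O_INACTIVE 0x0800   /* illustrative flag value */

typedef struct BlockState {
    int open_flags;
} BlockState;

/* Destination of a migration: images arrive inactive and stay that
 * way until the guest is started or resumed. */
static void incoming_migration_finished(BlockState *bs)
{
    bs->open_flags |= BDRV_O_INACTIVE;
}

/* Resuming (unpausing) the guest reactivates the images. */
static void activate(BlockState *bs)
{
    bs->open_flags &= ~BDRV_O_INACTIVE;
}

/* Source of a migration: inactivate the image, asserting it is not
 * already inactive -- the invariant that fails in the report. */
static void inactivate(BlockState *bs)
{
    assert(!(bs->open_flags & BDRV_O_INACTIVE));
    bs->open_flags |= BDRV_O_INACTIVE;
}

int main(void)
{
    BlockState bs = { .open_flags = 0 };

    incoming_migration_finished(&bs); /* first live migration lands here */
    /* VM is left paused, so activate(&bs) never runs ... */
    inactivate(&bs);                  /* second migration: assertion fires */
    return 0;
}
```
Run as written, this aborts at the `assert()` in `inactivate()`, mirroring the `bdrv_inactivate_recurse` failure; inserting `activate(&bs);` between the two migration steps, the analogue of the unpause/re-pause workaround, lets it run to completion.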