Diffstat (limited to 'results/classifier/zero-shot/108/other/1358619')
| -rw-r--r-- | results/classifier/zero-shot/108/other/1358619 | 120 |
1 file changed, 120 insertions, 0 deletions
diff --git a/results/classifier/zero-shot/108/other/1358619 b/results/classifier/zero-shot/108/other/1358619
new file mode 100644
index 000000000..db322e1bc
--- /dev/null
+++ b/results/classifier/zero-shot/108/other/1358619
@@ -0,0 +1,120 @@
+KVM: 0.689
+device: 0.671
+permissions: 0.630
+vnc: 0.627
+other: 0.625
+boot: 0.573
+semantic: 0.561
+files: 0.527
+graphic: 0.491
+performance: 0.489
+PID: 0.485
+network: 0.458
+debug: 0.449
+socket: 0.365
+
+Mixing savevm/loadvm and migration causes a snapshot crash
+
+--Version: qemu 2.1.0 public release
+--OS: CentOS release 6.4
+--gcc: 4.4.7
+
+Hi:
+  I found problems when doing some tests on qemu migration and savevm/loadvm.
+In my experiment, a guest is migrated back and forth between two identical hosts.
+The source host runs savevm after the migration completes, and the incoming host runs loadvm before the migration arrives.
+
+image=./image/centos-1.qcow2
+
+====Migration Part====
+qemu-system-x86_64 \
+    -qmp tcp:$host_ip:4451,server,nowait \
+    -drive file=$image \
+    --enable-kvm \
+    -monitor stdio \
+    -m 8192 \
+    -device virtio-net-pci,netdev=net0,mac=$mac \
+    -netdev tap,id=net0,script=./qemu-ifup
+
+====Incoming Part====
+qemu-system-x86_64 \
+    -qmp tcp:$host_ip:4451,server,nowait \
+    -incoming tcp:0:4449 \
+    --enable-kvm \
+    -monitor stdio \
+    -drive file=$image \
+    -m 8192 \
+    -device virtio-net-pci,netdev=net0,mac=$mac \
+    -netdev tap,id=net0,script=./qemu-ifup
+
+
+Command lines:
+
+host1 $: qemu-system-x86_64 ........        //migration part
+host2 $: qemu-system-x86_64 ...incoming...  //incoming part
+
+After the guest finishes booting:
+
+host1 (qemu) : migrate tcp:host2:4449
+host1 (qemu) : savevm s1
+host1 (qemu) : quit
+host1 $: qemu-system-x86_64 ...incoming...  //incoming part
+host1 (qemu) : loadvm s1
+
+host2 (qemu) : migrate tcp:host1:4449
+host2 (qemu) : savevm s2
+host2 (qemu) : quit
+host2 $: qemu-system-x86_64 ...incoming...  //incoming part
+host2 (qemu) : loadvm s2
+
+host1 (qemu) : migrate tcp:host2:4449
+host1 (qemu) : savevm s3
+host1 (qemu) : quit
+host1 $: qemu-system-x86_64 ...incoming...  //incoming part
+host1 (qemu) : loadvm s3
+
+I expect these operations to succeed every time.
+However, the failure happens irregularly on loadvm, and the error messages are not always the same:
+
+host1 (qemu) : loadvm s3
+qcow2: Preventing invalid write on metadata (overlaps with active L1 table); image marked as corrupt.
+Error -5 while activating snapshot 's3' on 'ide0-hd0'
+
+or
+
+host2 (qemu) : loadvm s2
+Error -22 while loading VM state
+
+
+I have done some simple tests of savevm/loadvm alone,
+on the same host:
+(qemu) savevm test1
+(qemu) loadvm test1
+(qemu) savevm test2
+(qemu) loadvm test2
+(qemu) savevm test3
+(qemu) loadvm test3
+(qemu) savevm test4
+(qemu) loadvm test4
+(qemu) savevm test5
+(qemu) loadvm test5
+(qemu) savevm test6
+(qemu) loadvm test6
+
+This is fine; no problems at all.
+
+
+Any idea?
+I think it is related to migration.
+
+Hi,
+  Can I just check: when you do the incoming migrate, do you wait for the incoming migrate to finish before you do the loadvm, or do you do the loadvm during the incoming migrate?
+
+I execute the incoming migration command and wait there. Then I do loadvm. After loadvm finishes, I type the migrate command on the source host to start the migration.
+
+In fact, this workflow does not change the VM state before migration; I was just modifying some code when I found this bug.
+
+Triaging old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+[Expired for QEMU because there has been no activity for 60 days.]
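+
+A minimal sketch for catching this corruption earlier in the loop above (a suggested check, not from the original ticket; the image path and snapshot names follow the reporter's placeholders). Both are standard qemu-img subcommands that only read the image, so they are safe to run between every migrate/savevm round:
+
+host1 $: qemu-img check ./image/centos-1.qcow2        //reports corrupt qcow2 metadata if the L1-table overlap was hit
+host1 $: qemu-img snapshot -l ./image/centos-1.qcow2  //lists the internal snapshots (s1/s2/s3)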