Diffstat (limited to 'results/classifier/deepseek-2-tmp/output/hypervisor/1358619')
-rw-r--r--  results/classifier/deepseek-2-tmp/output/hypervisor/1358619  94
1 files changed, 0 insertions, 94 deletions
diff --git a/results/classifier/deepseek-2-tmp/output/hypervisor/1358619 b/results/classifier/deepseek-2-tmp/output/hypervisor/1358619
deleted file mode 100644
index 12cd4efae..000000000
--- a/results/classifier/deepseek-2-tmp/output/hypervisor/1358619
+++ /dev/null
@@ -1,94 +0,0 @@
-
-Repeated savevm/loadvm combined with migration causes snapshot crashes
-
---Version: qemu 2.1.0 public release
---OS: CentOS release 6.4
---gcc: 4.4.7
-
-Hi:
-   I ran into problems while testing qemu migration together with savevm/loadvm.
-In my experiment, a guest is migrated back and forth between two identical hosts.
-The source host runs savevm after the migration completes, and the host waiting for the incoming migration runs loadvm before the migration arrives.
-
-image=./image/centos-1.qcow2
-
-====Migration Part====
-qemu-system-x86_64 \
- -qmp tcp:$host_ip:4451,server,nowait \
- -drive file=$image \
- --enable-kvm \
- -monitor stdio \
- -m 8192 \
- -device virtio-net-pci,netdev=net0,mac=$mac \
- -netdev tap,id=net0,script=./qemu-ifup
-
-====Incoming Part====
-qemu-system-x86_64 \
- -qmp tcp:$host_ip:4451,server,nowait \
- -incoming tcp:0:4449 \
- --enable-kvm \
- -monitor stdio \
- -drive file=$image \
- -m 8192 \
- -device virtio-net-pci,netdev=net0,mac=$mac \
- -netdev tap,id=net0,script=./qemu-ifup
-
-
-Command lines:
-
-host1 $: qemu-system-x86_64 ........       //migration part
-host2 $: qemu-system-x86_64 ...incoming... //incoming part
-
-After the guest finishes booting:
-
-host1 (qemu) : migrate tcp:host2:4449
-host1 (qemu) : savevm s1
-host1 (qemu) : quit
-host1 $: qemu-system-x86_64 ...incoming... //incoming part
-host1 (qemu) : loadvm s1
-
-host2 (qemu) : migrate tcp:host1:4449
-host2 (qemu) : savevm s2
-host2 (qemu) : quit
-host2 $: qemu-system-x86_64 ...incoming... //incoming part
-host2 (qemu) : loadvm s2
-
-host1 (qemu) : migrate tcp:host2:4449
-host1 (qemu) : savevm s3
-host1 (qemu) : quit
-host1 $: qemu-system-x86_64 ...incoming... //incoming part
-host1 (qemu) : loadvm s3
-
-I expect these operations to succeed every time.
-However, loadvm fails irregularly, and the error message is not always the same:
-
-host1 (qemu) : loadvm s3
-qcow2: Preventing invalid write on metadata (overlaps with active L1 table); image marked as corrupt.
-Error -5 while activating snapshot 's3' on 'ide0-hd0'
-
-or
-
-host2 (qemu) : loadvm s2
-Error -22 while loading VM state
-
-
-I have also done a simple savevm/loadvm test
-on the same host, without any migration:
-(qemu) savevm test1
-(qemu) loadvm test1
-(qemu) savevm test2
-(qemu) loadvm test2
-(qemu) savevm test3
-(qemu) loadvm test3
-(qemu) savevm test4
-(qemu) loadvm test4
-(qemu) savevm test5
-(qemu) loadvm test5
-(qemu) savevm test6
-(qemu) loadvm test6
-
-This works fine, with no problems at all.
-
-
-Any ideas?
-I think it is related to migration.
\ No newline at end of file
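
====Repro Script (sketch)====
The ping-pong procedure above is easier to drive from a script. The following is a minimal sketch of one round, not taken from the report: it assumes the guest was started with "-monitor unix:/tmp/qemu-mon.sock,server,nowait" instead of "-monitor stdio" so the HMP commands can be sent non-interactively, that socat is installed, and that $mac and ./qemu-ifup are set up as in the report. The hmp helper, the socket path, and the use of "migrate -d" plus status polling are assumptions added here; the report's QMP socket is left out.

#!/bin/bash
# One ping-pong round, run on the current source host.
# Assumptions (not from the report): Unix-socket HMP monitor, socat installed,
# peer already running the "incoming part" and listening on port 4449.
MON=/tmp/qemu-mon.sock
PEER=$1            # e.g. host2
SNAP=$2            # e.g. s1
image=./image/centos-1.qcow2

hmp() {            # hypothetical helper: send one HMP command to the local monitor
  echo "$1" | socat - "UNIX-CONNECT:$MON"
}

# 1. Start the migration to the peer and wait until it has really completed.
hmp "migrate -d tcp:$PEER:4449"
until hmp "info migrate" | grep -q "Migration status: completed"; do
  sleep 1
done

# 2. Snapshot the now-stopped source and shut it down, as in the report.
hmp "savevm $SNAP"
hmp "quit"
sleep 2
rm -f "$MON"

# 3. Relaunch locally in incoming mode and load the snapshot before the peer
#    migrates back (same -drive/-device set as the report's incoming part).
qemu-system-x86_64 \
  -incoming tcp:0:4449 \
  --enable-kvm \
  -monitor unix:$MON,server,nowait \
  -drive file=$image \
  -m 8192 \
  -device virtio-net-pci,netdev=net0,mac=$mac \
  -netdev tap,id=net0,script=./qemu-ifup &

sleep 2
hmp "loadvm $SNAP"

Keeping the -drive/-device options identical to the command line the snapshot was taken with matters here, since loadvm generally needs the same machine configuration as the one that was saved.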