Repeated savevm/loadvm combined with migration causes snapshot corruption

--Version: qemu 2.1.0 public release
--OS:      CentOS release 6.4
--gcc:     4.4.7

Hi:
     I found problems while doing some tests on qemu migration and savevm/loadvm.
In my experiment, a guest is migrated back and forth between two identical hosts.
After each migration completes, the source host saves a snapshot with savevm; it is then restarted as the incoming side and loads that snapshot with loadvm before the return migration arrives.

image=./image/centos-1.qcow2

====Migration Part====
qemu-system-x86_64 \
        -qmp tcp:$host_ip:4451,server,nowait \
        -drive file=$image \
        --enable-kvm \
        -monitor stdio \
        -m 8192 \
        -device virtio-net-pci,netdev=net0,mac=$mac \
        -netdev tap,id=net0,script=./qemu-ifup

====Incoming Part====
qemu-system-x86_64 \
        -qmp tcp:$host_ip:4451,server,nowait \
        -incoming tcp:0:4449 \
        --enable-kvm \
        -monitor stdio \
        -drive file=$image \
        -m 8192 \
        -device virtio-net-pci,netdev=net0,mac=$mac \
        -netdev tap,id=net0,script=./qemu-ifup
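
The -qmp option above opens a control socket on each host. In case it helps with reproducing, migration progress can also be watched through that socket instead of the monitor; the lines below are only a rough sketch (nc is just one way to talk to the socket, the port is the one given to -qmp, and the JSON replies are omitted):

host1 $: nc $host_ip 4451
{ "execute": "qmp_capabilities" }
{ "execute": "query-migrate" }        // "status" becomes "completed" once the migration has finished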


Command lines:

host1 $:  qemu-system-x86_64 ........  //migration part
host2 $:  qemu-system-x86_64 ...incoming... //incoming part

After the guest finishes booting:

host1 (qemu) : migrate tcp:host2:4449
host1 (qemu) : savevm s1
host1 (qemu) : quit
host1 $: qemu-system-x86_64 ...incoming... //incoming part
host1 (qemu) : loadvm s1

host2 (qemu) : migrate tcp:host1:4449
host2 (qemu) : savevm s2
host2 (qemu) : quit
host2 $:  qemu-system-x86_64 ...incoming... //incoming part
host2 (qemu) : loadvm s2

host1 (qemu) : migrate tcp:host2:4449
host1 (qemu) : savevm s3
host1 (qemu) : quit
host1 $: qemu-system-x86_64 ...incoming... //incoming part
host1 (qemu) : loadvm s3
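
Between each migrate and the following savevm I only continue once the migration has finished; the extra monitor line below is just an illustration of that check, not part of the scripted test:

host1 (qemu) : info migrate        // wait until it reports "Migration status: completed" before savevm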

I would expect these operations to succeed every time.
However, loadvm fails irregularly, and the error messages are not always the same.

host1 (qemu) :  loadvm s3
qcow2: Preventing invalid write on metadata (overlaps with active L1 table); image marked as corrupt.
Error -5 while activating snapshot 's3' on 'ide0-hd0'

or

host2 (qemu) : loadvm s2
Error -22 while loading VM state
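
When the qcow2 corruption error above appears, the image itself can also be checked offline; this is only a generic sketch using the image path from the setup above:

host1 $: qemu-img check -f qcow2 ./image/centos-1.qcow2    // reports errors/corruptions if the image is damaged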


I have also run some simple savevm/loadvm tests on a single host, without migration:
(qemu) savevm test1
(qemu) loadvm test1
(qemu) savevm test2
(qemu) loadvm test2
(qemu) savevm test3
(qemu) loadvm test3
(qemu) savevm test4
(qemu) loadvm test4
(qemu) savevm test5
(qemu) loadvm test5
(qemu) savevm test6
(qemu) loadvm test6

This works fine; no problems at all.


Any ideas?
I think the problem is related to migration.