migrate: with the --tls option in virsh, migration fails
version:
libvirt-3.4.0 + qemu-2.9.90 (latest)
domain:
any
steps:
1. Generate TLS certificates in /etc/pki/libvirt-migrate (a sketch follows this list).
2. Start the VM.
3. Migrate the VM with this command line:
virsh migrate rh7.1-3 --live --undefinesource --persistent --verbose --tls qemu+ssh://IP/system
4. The migration then fails and reports:
Migration: [ 64 %]error: internal error: qemu unexpectedly closed the monitor: Domain pid=5288, libvirtd pid=49634
kvm: warning: CPU(s) not present in any NUMA nodes: 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
kvm: error while loading state section id 2(ram)
kvm: load of migration failed: Input/output error
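For step 1, the certificate setup might look roughly like the following. This is a minimal sketch assuming GnuTLS certtool is available and that the template files (ca.info, server.info) are filled in as in the libvirt TLS documentation; the file names and template contents here are illustrative, not taken from the report:

certtool --generate-privkey > ca-key.pem
certtool --generate-self-signed --load-privkey ca-key.pem --template ca.info --outfile ca-cert.pem
certtool --generate-privkey > server-key.pem
certtool --generate-certificate --load-privkey server-key.pem --load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem --template server.info --outfile server-cert.pem
cp ca-cert.pem server-cert.pem server-key.pem /etc/pki/libvirt-migrate/

Client certificates are generated analogously, and the same directory has to be populated on both the source and destination hosts, with each server certificate's CN matching that host's name.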
other:
Analysis of the QEMU code and debugging yields this backtrace:
#0 ram_save_page (f=0x55ca8be370e0, pss=0x7fefdfc7b9a0, last_stage=false, bytes_transferred=0x55ca885ec1d8)
#1 0x000055ca87b00b21 in ram_save_target_page (ms=0x55ca885b8d80, f=0x55ca8be370e0, pss=0x7fefdfc7b9a0, last_stage=false, bytes_transferred=0x55ca885ec1d8, dirty_ram_abs=0)
#2 0x000055ca87b00bda in ram_save_host_page (ms=0x55ca885b8d80, f=0x55ca8be370e0, pss=0x7fefdfc7b9a0, last_stage=false, bytes_transferred=0x55ca885ec1d8, dirty_ram_abs=0)
#3 0x000055ca87b00d39 in ram_find_and_save_block (f=0x55ca8be370e0, last_stage=false, bytes_transferred=0x55ca885ec1d8)
#4 0x000055ca87b020b8 in ram_save_iterate (f=0x55ca8be370e0, opaque=0x0)
#5 0x000055ca87b07a9a in qemu_savevm_state_iterate (f=0x55ca8be370e0, postcopy=false)
#6 0x000055ca87e404e5 in migration_thread (opaque=0x55ca885b8d80)
This is the QEMU bug tracker. Please report libvirt bugs to the libvirt project instead (see http://libvirt.org/bugs.html). Thanks!
When testing with libvirt, QEMU itself reports the error, so this is a QEMU problem; that is why I am reporting it here!
But then please follow this recommendation from http://www.qemu.org/contribute/report-a-bug/:
"Reproduce the problem directly with a QEMU command-line. Avoid frontends and management stacks, to ensure that the bug is in QEMU itself and not in a frontend."
So can you reproduce the problem by starting QEMU directly from the command line?
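For reference, a direct reproduction might look roughly like the following. This is a minimal sketch with placeholder guest options, host address, port and certificate directory, using the tls-creds-x509 object and the tls-creds migration parameter that QEMU 2.9 provides:

# on the destination host
qemu-system-x86_64 -m 1024 [guest options] -object tls-creds-x509,id=tls0,dir=/etc/pki/libvirt-migrate,endpoint=server -incoming defer
(qemu) migrate_set_parameter tls-creds tls0
(qemu) migrate_incoming tcp:0.0.0.0:4444

# on the source host
qemu-system-x86_64 -m 1024 [guest options] -object tls-creds-x509,id=tls0,dir=/etc/pki/libvirt-migrate,endpoint=client
(qemu) migrate_set_parameter tls-creds tls0
(qemu) migrate -d tcp:DEST_IP:4444

If the same "error while loading state section id 2(ram)" appears on the destination, that would confirm the bug is in QEMU itself.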
ok, thank you!
chan: The backtrace you show there - how did you capture it? Was it from a core dump? If so, then please do a 'bt full' to get more detail.
Even if you can't reproduce it with QEMU without libvirt, I'd be interested in debugging it - but the libvirt logs from /var/log/libvirt/qemu/.... on both the source and the destination hosts would help.
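In case it helps, capturing that from a core dump could be as simple as the following; the binary and core file paths are illustrative (the pid 5288 is the domain pid from the error above), and the matching qemu debuginfo packages should be installed so the frames resolve:

gdb /usr/bin/qemu-system-x86_64 core.5288
(gdb) set pagination off
(gdb) thread apply all bt full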
[Expired for QEMU because there has been no activity for 60 days.]