virsh backup-begin crashes guest - qcow2_get_specific_info: Assertion `false' failed.
Description of problem:
I run a daily backup job covering around 350 guests, scattered across different host machines.
Each day, around 1-2 guests crash on virsh backup-begin, with the following error in /var/log/libvirt/qemu/$GUEST.log:
```
qemu-system-x86_64: ../../block/qcow2.c:5175: qcow2_get_specific_info: Assertion `false' failed.
```
(source: https://github.com/qemu/qemu/blob/0c8022876f2183f93e23a7314862140c94ee62e7/block/qcow2.c)
From the linked source, that assertion is the fallback branch of qcow2_get_specific_info(): the function fills in format-specific information for qcow_version 2 or 3 and asserts false for anything else, which is odd, since both of my images are compat 1.1, i.e. version 3.
It hits different guests every day, with no pattern that I can see.
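For reference, each nightly cycle suspends the guest and then kicks off the backup; it boils down to roughly the following (the backup.xml/checkpoint.xml file names are illustrative, but the XML contents match the backupXML/checkpointXML that show up in the debug log further down):
```
$ cat backup.xml
<domainbackup>
  <incremental>1680930625</incremental>
</domainbackup>
$ cat checkpoint.xml
<domaincheckpoint>
  <name>1680986240</name>
</domaincheckpoint>
$ virsh suspend guest-1
$ virsh backup-begin guest-1 backup.xml checkpoint.xml
```
In other words, each run pulls the delta since the previous checkpoint (named by Unix timestamp) and creates a new checkpoint for the next run.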
I'm using a top image with a base backing image and incremental backups, both qcow2 compat 1.1. Output of qemu-img info for the base and top images:
```
qemu-img info base.qcow2
image: base.qcow2
file format: qcow2
virtual size: 5 GiB (5368709120 bytes)
disk size: 1.9 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false

qemu-img info -U top.qcow2
image: top.qcow2
file format: qcow2
virtual size: 60 GiB (64424509440 bytes)
disk size: 1.36 GiB
cluster_size: 65536
backing file: base.qcow2
backing file format: qcow2
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    bitmaps:
        [0]:
            flags:
                [0]: in-use
                [1]: auto
            name: 1680670811
            granularity: 65536
    refcount bits: 16
    corrupt: false
    extended l2: false
```
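For completeness, the top/base relationship is a plain qcow2 backing-file chain, i.e. the kind of setup you would get from something like this (illustrative, not the exact command my provisioning uses):
```
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 top.qcow2 60G
```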
I know I'm not using the latest qemu and that this is difficult to reproduce. The bug happens in production, and upgrading qemu would be a huge task, given that I would have to upgrade the entire production environment. I would of course be willing to do that if deemed necessary, but at this point I'm just looking for directions on how to pinpoint this bug.
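In the meantime I can try to catch a core dump from qemu on the affected hosts so the next crash yields a backtrace; roughly like this (exact knobs depend on the distro and systemd setup):
```
# let qemu processes started by libvirt dump core (/etc/libvirt/qemu.conf):
#   max_core = "unlimited"
# and make the kernel keep the dumps somewhere findable:
sysctl -w kernel.core_pattern='/var/crash/core.%e.%p'
# after the next crash, pull a backtrace:
gdb /usr/bin/qemu-system-x86_64 /var/crash/core.<name>.<pid> -ex 'bt' -ex 'quit'
```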
A "guest-1" grepped version of libvirt debug logs during the seconds this happened:
```
2023-04-08 20:37:20.453+0000: 431153: debug : virDomainLookupByName:413 : conn=0x7fbff000ca30, name=guest-1
2023-04-08 20:37:20.453+0000: 431153: debug : virDomainDispose:348 : release domain 0x7fc068021c60 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.454+0000: 431155: debug : virDomainGetState:2493 : dom=0x7fc068024330, (VM: name=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f), state=0x7fc08c052cf0, reason=0x7fc08c052cf4, flags=0x0
2023-04-08 20:37:20.454+0000: 431155: debug : virDomainDispose:348 : release domain 0x7fc068024330 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.483+0000: 431152: debug : virDomainLookupByName:413 : conn=0x7fc070014e90, name=guest-1
2023-04-08 20:37:20.483+0000: 431152: debug : virDomainDispose:348 : release domain 0x7fc0500075f0 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.483+0000: 431148: debug : virDomainListAllCheckpoints:292 : dom=0x7fc0ac002380, (VM: name=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f), checkpoints=0x7fc0b79018a8, flags=0x0
2023-04-08 20:37:20.483+0000: 431148: debug : virDomainDispose:348 : release domain 0x7fc0ac002380 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.484+0000: 431151: debug : virDomainDispose:348 : release domain 0x7fc0b0006950 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.516+0000: 431150: debug : virDomainLookupByName:413 : conn=0x7fc0a80027a0, name=guest-1
2023-04-08 20:37:20.516+0000: 431150: debug : virDomainDispose:348 : release domain 0x7fc08c007c60 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.516+0000: 431152: debug : virDomainGetState:2493 : dom=0x7fc068021e90, (VM: name=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f), state=0x7fc0a47c64d0, reason=0x7fc0a47c64d4, flags=0x0
2023-04-08 20:37:20.516+0000: 431152: debug : virDomainDispose:348 : release domain 0x7fc068021e90 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.544+0000: 431156: debug : virDomainLookupByName:413 : conn=0x7fc0a80025a0, name=guest-1
2023-04-08 20:37:20.544+0000: 431156: debug : virDomainDispose:348 : release domain 0x7fc068029d00 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.544+0000: 431149: debug : virDomainSuspend:623 : dom=0x7fc050007500, (VM: name=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f)
2023-04-08 20:37:20.544+0000: 431149: debug : qemuDomainObjBeginJobInternal:831 : Starting job: job=suspend agentJob=none asyncJob=none (vm=0x7fc0a4033a10 name=guest-1, current job=none agentJob=none async=none)
2023-04-08 20:37:20.544+0000: 431149: debug : qemuDomainObjBeginJobInternal:883 : Started job: suspend (async=none vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.544+0000: 431149: debug : qemuDomainObjEnterMonitorInternal:5872 : Entering monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.580+0000: 1882669: debug : qemuProcessHandleStop:660 : Transitioned guest guest-1 to paused state, reason user, event detail 0
2023-04-08 20:37:20.580+0000: 1882669: debug : virLockManagerLogParams:90 : key=name type=string value=guest-1
2023-04-08 20:37:20.580+0000: 1882669: debug : virDomainLockManagerAddImage:90 : Add disk /home/vm/domains/guest-1/disk.qcow2
2023-04-08 20:37:20.580+0000: 1882669: debug : virLockManagerAddResource:325 : lock=0x7fbf8801fdc0 type=0 name=/home/vm/domains/guest-1/disk.qcow2 nparams=0 params=(nil) flags=0x0
2023-04-08 20:37:20.581+0000: 431149: debug : qemuDomainObjExitMonitor:5902 : Exited monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.581+0000: 431149: debug : virLockManagerLogParams:90 : key=name type=string value=guest-1
2023-04-08 20:37:20.581+0000: 431149: debug : virDomainLockManagerAddImage:90 : Add disk /home/vm/domains/guest-1/disk.qcow2
2023-04-08 20:37:20.581+0000: 431149: debug : virLockManagerAddResource:325 : lock=0x7fc0a8968e60 type=0 name=/home/vm/domains/guest-1/disk.qcow2 nparams=0 params=(nil) flags=0x0
2023-04-08 20:37:20.582+0000: 431149: debug : qemuDomainObjEndJob:1135 : Stopping job: suspend (async=none vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.582+0000: 431149: debug : virDomainDispose:348 : release domain 0x7fc050007500 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.608+0000: 431148: debug : virDomainLookupByName:413 : conn=0x7fbff000cc30, name=guest-1
2023-04-08 20:37:20.608+0000: 431148: debug : virDomainDispose:348 : release domain 0x7fc07001e330 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.608+0000: 431151: debug : virDomainGetState:2493 : dom=0x7fc050007550, (VM: name=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f), state=0x7fc0a003d640, reason=0x7fc0a003d644, flags=0x0
2023-04-08 20:37:20.608+0000: 431151: debug : virDomainDispose:348 : release domain 0x7fc050007550 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.634+0000: 431150: debug : virDomainLookupByName:413 : conn=0x7fc0a8002ea0, name=guest-1
2023-04-08 20:37:20.634+0000: 431150: debug : virDomainDispose:348 : release domain 0x7fbfc00072e0 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.634+0000: 431152: debug : virDomainBackupBegin:13040 : dom=0x7fc0500075f0, (VM: name=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f), backupXML=<domainbackup><incremental>1680930625</incremental></domainbackup>, checkpointXML=<domaincheckpoint><name>1680986240</name></domaincheckpoint>, flags=0x0
2023-04-08 20:37:20.667+0000: 431152: debug : qemuDomainObjBeginJobInternal:831 : Starting job: job=none agentJob=none asyncJob=backup (vm=0x7fc0a4033a10 name=guest-1, current job=none agentJob=none async=none)
2023-04-08 20:37:20.667+0000: 431152: debug : qemuDomainObjBeginJobInternal:892 : Started async job: backup (vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.668+0000: 431152: debug : virStringMatch:662 : match '/home/vm/domains/guest-1/qemu.agent' for '^/var/lib/libvirt/qemu/channel/target/([^/]+\.)|(domain-[^/]+/)org\.qemu\.guest_agent\.0$'
2023-04-08 20:37:20.669+0000: 431152: debug : virStringMatch:662 : match '/home/vm/domains/guest-1/qemu.agent' for '^/var/lib/libvirt/qemu/channel/target/([^/]+\.)|(domain-[^/]+/)org\.qemu\.guest_agent\.0$'
2023-04-08 20:37:20.670+0000: 431152: debug : virStringMatch:662 : match '/home/vm/domains/guest-1/qemu.agent' for '^/var/lib/libvirt/qemu/channel/target/([^/]+\.)|(domain-[^/]+/)org\.qemu\.guest_agent\.0$'
2023-04-08 20:37:20.670+0000: 431152: debug : qemuDomainObjBeginJobInternal:831 : Starting job: job=async nested agentJob=none asyncJob=none (vm=0x7fc0a4033a10 name=guest-1, current job=none agentJob=none async=backup)
2023-04-08 20:37:20.670+0000: 431152: debug : qemuDomainObjBeginJobInternal:883 : Started job: async nested (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.670+0000: 431152: debug : qemuDomainObjEnterMonitorInternal:5872 : Entering monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.671+0000: 1882669: debug : qemuMonitorJSONIOProcessLine:222 : Line [{"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 32212254720, "filename": "/home/vm/domains/guest-1/disk.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 7361290240, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "dirty-bitmaps": [{"name": "1680930625", "recording": true, "persistent": true, "busy": false, "granularity": 65536, "count": 458293248}], "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/home/vm/domains/guest-1/disk.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 7882014720, "filename": "/home/vm/domains/guest-1/disk.qcow2", "format": "file", "actual-size": 7361290240, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/home/vm/domains/guest-1/disk.qcow2"}], "id": "libvirt-39597736"}]
2023-04-08 20:37:20.671+0000: 1882669: debug : virJSONValueFromString:1691 : string={"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 32212254720, "filename": "/home/vm/domains/guest-1/disk.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 7361290240, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "dirty-bitmaps": [{"name": "1680930625", "recording": true, "persistent": true, "busy": false, "granularity": 65536, "count": 458293248}], "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/home/vm/domains/guest-1/disk.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 7882014720, "filename": "/home/vm/domains/guest-1/disk.qcow2", "format": "file", "actual-size": 7361290240, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/home/vm/domains/guest-1/disk.qcow2"}], "id": "libvirt-39597736"}
2023-04-08 20:37:20.672+0000: 1882669: info : qemuMonitorJSONIOProcessLine:241 : QEMU_MONITOR_RECV_REPLY: mon=0x7fc0480048b0 reply={"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 32212254720, "filename": "/home/vm/domains/guest-1/disk.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 7361290240, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "compression-type": "zlib", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false, "extended-l2": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "dirty-bitmaps": [{"name": "1680930625", "recording": true, "persistent": true, "busy": false, "granularity": 65536, "count": 458293248}], "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/home/vm/domains/guest-1/disk.qcow2"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 7882014720, "filename": "/home/vm/domains/guest-1/disk.qcow2", "format": "file", "actual-size": 7361290240, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "file", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/home/vm/domains/guest-1/disk.qcow2"}], "id": "libvirt-39597736"}
2023-04-08 20:37:20.672+0000: 431152: debug : qemuDomainObjExitMonitor:5902 : Exited monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.672+0000: 431152: debug : qemuDomainObjEndJob:1135 : Stopping job: async nested (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.672+0000: 431152: debug : virStorageFileBackendFileInit:57 : initializing FS storage file 0x7fc0704b3740 (file:/home/vm/domains/guest-1/disk.qcow2.1680986240)[64055:108]
2023-04-08 20:37:20.672+0000: 431152: debug : qemuDomainStorageSourceAccessModify:7767 : src='/home/vm/domains/guest-1/disk.qcow2.1680986240' readonly=0 force_ro=0 force_rw=1 revoke=0 chain=0
2023-04-08 20:37:20.672+0000: 431152: debug : virLockManagerLogParams:90 : key=name type=string value=guest-1
2023-04-08 20:37:20.672+0000: 431152: debug : virDomainLockManagerAddImage:90 : Add disk /home/vm/domains/guest-1/disk.qcow2.1680986240
2023-04-08 20:37:20.672+0000: 431152: debug : virLockManagerAddResource:325 : lock=0x7fc0a460ca40 type=0 name=/home/vm/domains/guest-1/disk.qcow2.1680986240 nparams=0 params=(nil) flags=0x0
2023-04-08 20:37:20.683+0000: 431152: debug : virCommandRunAsync:2630 : About to run LIBVIRT_LOG_OUTPUTS=3:stderr /usr/lib/libvirt/virt-aa-helper -r -u libvirt-29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f -F /home/vm/domains/guest-1/disk.qcow2.1680986240
2023-04-08 20:37:20.796+0000: 431151: debug : virDomainLookupByName:413 : conn=0x7fc0a80020a0, name=guest-1
2023-04-08 20:37:20.893+0000: 431152: debug : qemuSetupImagePathCgroup:74 : Allow path /home/vm/domains/guest-1/disk.qcow2.1680986240, perms: rw
2023-04-08 20:37:20.894+0000: 431152: debug : qemuDomainObjBeginJobInternal:831 : Starting job: job=async nested agentJob=none asyncJob=none (vm=0x7fc0a4033a10 name=guest-1, current job=none agentJob=none async=backup)
2023-04-08 20:37:20.894+0000: 431152: debug : qemuDomainObjBeginJobInternal:883 : Started job: async nested (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.894+0000: 431152: debug : qemuDomainObjEnterMonitorInternal:5872 : Entering monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.894+0000: 431152: debug : qemuDomainObjExitMonitor:5902 : Exited monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.894+0000: 431152: debug : qemuDomainObjEndJob:1135 : Stopping job: async nested (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.894+0000: 431152: debug : qemuDomainObjBeginJobInternal:831 : Starting job: job=async nested agentJob=none asyncJob=none (vm=0x7fc0a4033a10 name=guest-1, current job=none agentJob=none async=backup)
2023-04-08 20:37:20.894+0000: 431152: debug : qemuDomainObjBeginJobInternal:883 : Started job: async nested (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.894+0000: 431152: debug : qemuDomainObjEnterMonitorInternal:5872 : Entering monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.894+0000: 431151: debug : virDomainDispose:348 : release domain 0x7fc014007240 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.894+0000: 431152: info : qemuMonitorSend:914 : QEMU_MONITOR_SEND_MSG: mon=0x7fc0480048b0 msg={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/home/vm/domains/guest-1/disk.qcow2.1680986240","node-name":"libvirt-78-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-39597737"}
2023-04-08 20:37:20.894+0000: 1882669: info : qemuMonitorIOWrite:402 : QEMU_MONITOR_IO_WRITE: mon=0x7fc0480048b0 buf={"execute":"blockdev-add","arguments":{"driver":"file","filename":"/home/vm/domains/guest-1/disk.qcow2.1680986240","node-name":"libvirt-78-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-39597737"}
2023-04-08 20:37:20.895+0000: 1385058: debug : virDomainGetInfo:2444 : dom=0x7fc0ac001b80, (VM: name=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f), info=0x7fc009ffa880
2023-04-08 20:37:20.895+0000: 1385058: debug : virDomainDispose:348 : release domain 0x7fc0ac001b80 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:20.895+0000: 431149: debug : virDomainGetBlockInfo:6284 : dom=0x7fc068024100, (VM: name=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f), info=0x7fc0b7100890, flags=0x0
2023-04-08 20:37:20.895+0000: 431149: debug : qemuDomainObjBeginJobInternal:831 : Starting job: job=query agentJob=none asyncJob=none (vm=0x7fc0a4033a10 name=guest-1, current job=async nested agentJob=none async=backup)
2023-04-08 20:37:20.895+0000: 431149: debug : qemuDomainObjBeginJobInternal:867 : Waiting for job (vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.895+0000: 431152: debug : qemuDomainObjExitMonitor:5902 : Exited monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.895+0000: 431152: debug : qemuDomainObjEndJob:1135 : Stopping job: async nested (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.896+0000: 431152: debug : qemuDomainObjBeginJobInternal:831 : Starting job: job=async nested agentJob=none asyncJob=none (vm=0x7fc0a4033a10 name=guest-1, current job=none agentJob=none async=backup)
2023-04-08 20:37:20.896+0000: 431152: debug : qemuDomainObjBeginJobInternal:883 : Started job: async nested (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.896+0000: 431152: debug : qemuDomainObjEnterMonitorInternal:5872 : Entering monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.896+0000: 431149: debug : qemuDomainObjBeginJobInternal:867 : Waiting for job (vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.896+0000: 431152: info : qemuMonitorSend:914 : QEMU_MONITOR_SEND_MSG: mon=0x7fc0480048b0 msg={"execute":"blockdev-create","arguments":{"job-id":"create-libvirt-78-format","options":{"driver":"qcow2","file":"libvirt-78-storage","size":32212254720,"cluster-size":65536,"backing-file":"/home/vm/domains/guest-1/disk.qcow2","backing-fmt":"qcow2"}},"id":"libvirt-39597738"}
2023-04-08 20:37:20.896+0000: 1882669: info : qemuMonitorIOWrite:402 : QEMU_MONITOR_IO_WRITE: mon=0x7fc0480048b0 buf={"execute":"blockdev-create","arguments":{"job-id":"create-libvirt-78-format","options":{"driver":"qcow2","file":"libvirt-78-storage","size":32212254720,"cluster-size":65536,"backing-file":"/home/vm/domains/guest-1/disk.qcow2","backing-fmt":"qcow2"}},"id":"libvirt-39597738"}
2023-04-08 20:37:20.898+0000: 1882669: debug : qemuProcessHandleJobStatusChange:956 : job 'create-libvirt-78-format'(domain: 0x7fc0a4033a10,guest-1) state changed to 'created'(1)
2023-04-08 20:37:20.898+0000: 1882669: debug : qemuProcessHandleJobStatusChange:956 : job 'create-libvirt-78-format'(domain: 0x7fc0a4033a10,guest-1) state changed to 'running'(2)
2023-04-08 20:37:20.898+0000: 431152: debug : qemuDomainObjExitMonitor:5902 : Exited monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.898+0000: 431152: debug : qemuDomainObjEndJob:1135 : Stopping job: async nested (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.899+0000: 431149: debug : qemuDomainObjBeginJobInternal:883 : Started job: query (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:20.899+0000: 431149: debug : qemuDomainObjEnterMonitorInternal:5872 : Entering monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:21.432+0000: 1882669: debug : qemuProcessHandleAgentEOF:147 : Received EOF from agent on 0x7fc0a4033a10 'guest-1'
2023-04-08 20:37:21.432+0000: 1882669: debug : qemuMonitorIO:576 : Error on monitor Unable to read from monitor: Connection reset by peer mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1
2023-04-08 20:37:21.432+0000: 1882669: debug : qemuMonitorIO:609 : Triggering error callback mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1
2023-04-08 20:37:21.432+0000: 1882669: debug : qemuProcessHandleMonitorError:355 : Received error on 0x7fc0a4033a10 'guest-1'
2023-04-08 20:37:21.432+0000: 431149: debug : qemuMonitorSend:927 : Send command resulted in error Unable to read from monitor: Connection reset by peer mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1
2023-04-08 20:37:21.433+0000: 431149: debug : qemuDomainObjExitMonitor:5902 : Exited monitor (mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:21.433+0000: 1882669: debug : qemuMonitorIO:576 : Error on monitor Unable to read from monitor: Connection reset by peer mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1
2023-04-08 20:37:21.433+0000: 431149: debug : qemuDomainObjEndJob:1135 : Stopping job: query (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:21.433+0000: 1882669: debug : qemuMonitorIO:598 : Triggering EOF callback mon=0x7fc0480048b0 vm=0x7fc0a4033a10 name=guest-1
2023-04-08 20:37:21.433+0000: 1882669: debug : qemuProcessHandleMonitorEOF:310 : Received EOF on 0x7fc0a4033a10 'guest-1'
2023-04-08 20:37:21.433+0000: 431149: debug : virDomainDispose:348 : release domain 0x7fc068024100 guest-1 29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f
2023-04-08 20:37:21.433+0000: 620333: debug : qemuProcessKill:7931 : vm=0x7fc0a4033a10 name=guest-1 pid=1882665 flags=0x1
2023-04-08 20:37:21.633+0000: 620333: debug : qemuDomainObjBeginJobInternal:831 : Starting job: job=destroy agentJob=none asyncJob=none (vm=0x7fc0a4033a10 name=guest-1, current job=none agentJob=none async=backup)
2023-04-08 20:37:21.633+0000: 620333: debug : qemuDomainObjBeginJobInternal:883 : Started job: destroy (async=backup vm=0x7fc0a4033a10 name=guest-1)
2023-04-08 20:37:21.634+0000: 620333: debug : processMonitorEOFEvent:4025 : Monitor connection to 'guest-1' closed without SHUTDOWN event; assuming the domain crashed
2023-04-08 20:37:21.634+0000: 620333: debug : qemuProcessStop:8014 : Shutting down vm=0x7fc0a4033a10 name=guest-1 id=814 pid=1882665, reason=crashed, asyncJob=none, flags=0x0
2023-04-08 20:37:21.634+0000: 620333: debug : qemuDomainLogAppendMessage:6740 : Append log message (vm='guest-1' message='2023-04-08 20:37:21.634+0000: shutting down, reason=crashed
2023-04-08 20:37:22.617+0000: 620333: debug : qemuProcessKill:7931 : vm=0x7fc0a4033a10 name=guest-1 pid=1882665 flags=0x5
2023-04-08 20:37:22.617+0000: 620333: debug : qemuDomainCleanupRun:7321 : driver=0x7fc07015c730, vm=guest-1
2023-04-08 20:37:22.617+0000: 620333: debug : qemuProcessAutoDestroyRemove:8416 : vm=guest-1
2023-04-08 20:37:22.617+0000: 620333: debug : virCloseCallbacksUnset:145 : vm=guest-1, uuid=29ac5dd8-6eb9-4140-a9d1-cdcbae01ac0f, cb=0x7fc09e3f9ba0
```
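Boiled down, the interesting QMP traffic right before the crash is just these two commands (copied verbatim from the log above):
```
{"execute":"blockdev-add","arguments":{"driver":"file","filename":"/home/vm/domains/guest-1/disk.qcow2.1680986240","node-name":"libvirt-78-storage","auto-read-only":true,"discard":"unmap"},"id":"libvirt-39597737"}
{"execute":"blockdev-create","arguments":{"job-id":"create-libvirt-78-format","options":{"driver":"qcow2","file":"libvirt-78-storage","size":32212254720,"cluster-size":65536,"backing-file":"/home/vm/domains/guest-1/disk.qcow2","backing-fmt":"qcow2"}},"id":"libvirt-39597738"}
```
The monitor connection resets about half a second after the blockdev-create job transitions to 'running', while thread 431149 (handling a virDomainGetBlockInfo at 20:37:20.895) was still sending a query on the monitor; presumably that query is what tripped the assertion while the create job was in flight.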
If you need any further information, just let me know. As per request, ping @pipo.sk
Additional information: