<think>
Alright, let's try to figure out the category for this bug report. The issue is that a virsh live migration with --copy-storage-all fails and the guest VM shuts down on the source host. The error message mentions "Unable to read from monitor: Connection reset by peer."
First, I'll consider what each category means. Categories like 'network' or 'socket' relate to connection issues. Since the problem occurs during migration and involves a connection reset, network issues seem plausible. But let's dig deeper.
Looking at the steps, the user set up a QCOW2 image on both source and destination, then initiated live migration using virsh with --copy-storage-all. The guest was running, and they were monitoring it via VNC. During the process, the VM on the source shut down, and migration failed.
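For reference, and not taken verbatim from the report, the setup roughly corresponds to commands like the following; the image path, size, guest name, and destination URI are illustrative assumptions:

    # on the destination host: pre-create a qcow2 image of the same size as the source disk
    qemu-img create -f qcow2 /var/lib/libvirt/images/guest.qcow2 20G

    # on the source host: live-migrate the running guest and copy its local disk to the destination
    virsh migrate --live --copy-storage-all --verbose guest qemu+ssh://dest-host/system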
The error message points to a connection reset, which often relates to network problems. But I also notice that when using NFS storage, the live migration worked fine. So maybe the issue isn't with the storage protocol itself but something else related to the connection between the source and destination hosts during migration.
Looking at the provided debug logs, there are mentions of virEventPollInterruptLocked and virEventPollAddTimeout, which suggest some event handling issues in libvirt, possibly related to monitoring the QEMU process. The fact that the VM shut down on the source indicates a problem with how the hypervisor or QEMU is handling the migration.
Given that KVM is involved (as seen in the guest XML), and the issue occurs during live migration which heavily relies on proper communication between the source and destination, I'm leaning towards 'hypervisor' as the category. The connection reset might be a symptom of an underlying problem in how QEMU or libvirt handles the migration process at the hypervisor level.
Alternatively, it could be a network issue, since a connection reset is involved; but given that migration over NFS storage worked and that the guest actually shut down on the source, 'hypervisor' seems more accurate. The failure occurs during live migration with QCOW2 and --copy-storage-all, which involves more than just the network; it touches on how QEMU handles the state and disk transfer between hosts.
So, after weighing the options, I think the most appropriate category here is 'hypervisor'.
</think>
hypervisor