VM will not resume on GlusterFS

oVirt uses libvirt to run QEMU. Images are passed to QEMU as files, not file descriptors. When running images from GlusterFS, the file descriptors may be invalidated because of network problems or because the glusterfs process is restarted. In this case the VM goes into the paused state. When trying to resume the VM ('cont' command), QEMU uses the same invalidated file descriptors and throws:

"block I/O error in device 'drive-virtio-disk0': Transport endpoint is not connected (107)"

Please check the file descriptors and reopen the image file on the 'cont' event in QEMU. Thanks.

References:
[1] http://lists.nongnu.org/archive/html/qemu-devel/2015-03/msg01269.html
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1058300

We can't just reopen the files; we don't know what state they are in. Any data written to the image between the last flush and the point where gluster invalidated the fd may be there or may be missing. If any data is missing, we can't continue the guest, or you'll get data corruption.

The correct fix for resuming after I/O errors is on the gluster side. As long as it invalidates the fd without providing a way to resume, there is no way for QEMU to correctly continue after an error.

Hi Kevin,

I understand. In this case (where the gluster process was killed or crashed) I guess the best option is to power off and restart the VM, which can be done client-side (oVirt + libvirt). Please mark as "Won't Fix". Thanks.

Marking as "Won't Fix" according to the last comment.
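For reference, a minimal sketch of the client-side "power off and restart" workaround using the libvirt Python bindings. The connection URI and the domain name 'myvm' are placeholders; oVirt drives this through its own engine, so this is only illustrative of the libvirt calls involved, not how oVirt actually does it.

    # Sketch: force off a paused guest whose image fd was invalidated,
    # then start it again from its persistent definition.
    import libvirt

    conn = libvirt.open('qemu:///system')   # local QEMU driver (placeholder URI)
    dom = conn.lookupByName('myvm')         # hypothetical domain name

    if dom.isActive():
        # Hard power-off: the paused guest cannot be resumed safely
        # once gluster has invalidated the file descriptor.
        dom.destroy()

    # Boot the domain again; QEMU reopens the image file from scratch.
    dom.create()

    conn.close()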