path: root/results/classifier/108/other/1569
Diffstat (limited to '')
-rw-r--r--  results/classifier/108/other/1569       42
-rw-r--r--  results/classifier/108/other/1569053    60
2 files changed, 102 insertions, 0 deletions
diff --git a/results/classifier/108/other/1569 b/results/classifier/108/other/1569
new file mode 100644
index 000000000..d1316d15a
--- /dev/null
+++ b/results/classifier/108/other/1569
@@ -0,0 +1,42 @@
+performance: 0.883
+debug: 0.873
+semantic: 0.856
+vnc: 0.850
+boot: 0.830
+graphic: 0.828
+device: 0.759
+PID: 0.740
+permissions: 0.679
+files: 0.646
+network: 0.617
+socket: 0.599
+other: 0.405
+KVM: 0.213
+
+NVMe FS operations hang after suspending and resuming both guest and host
+Description of problem:
+Hello and thank you for your work on QEMU!
+
+Using the NVMe driver with my Seagate FireCuda 530 2TB M.2 works fine until I encounter this problem, which is reliably reproducible for me.
+
+When I suspend the guest and then suspend (s2idle) my host, all is well until I resume the guest (manually with `virsh dompmwakeup $VMNAME`, after the host has resumed). Although the guest resumes and is interactive, anything involving filesystem operations seems to hang forever and never returns.
+
+Suspending and resuming the Linux guest seems to work perfectly if I don't suspend/resume the host.
+
+Ultimately, what I want to do is share the drive between VMs with qemu-storage-daemon. I can reproduce the problem in that scenario in much the same way. Using PCI passthrough with the same VM and device works fine and doesn't exhibit this problem.
+
+Hopefully that's clear enough - let me know if there's anything else I can provide.
+Steps to reproduce:
+1. Create a VM with a dedicated NVMe disk.
+2. Boot an ISO and install to the disk.
+3. Verify that suspend and resume works when not suspending the host.
+4. Suspend the guest.
+5. Suspend the host.
+6. Wake the host.
+7. Wake the guest.
+8. Try just about anything that isn't likely already cached somewhere: `du -s /etc`.
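The libvirt side of steps 4-7 can be sketched as a small script (a minimal sketch only; the domain name `myguest` and the `run`/`DRY_RUN` wrapper are placeholders of mine, not from the report):

```shell
#!/bin/sh
# Sketch of reproduction steps 4-7, assuming a libvirt-managed guest whose
# domain name is in $VMNAME. With DRY_RUN=1 (the default) the script only
# prints each command instead of executing it.
set -eu

VMNAME="${VMNAME:-myguest}"            # placeholder domain name

run() { echo "+ $*"; [ "${DRY_RUN:-1}" = "1" ] || "$@"; }

run virsh dompmsuspend "$VMNAME" mem   # step 4: suspend the guest to RAM
run systemctl suspend                  # step 5: suspend the host (s2idle)
# ...the host is woken manually (step 6)...
run virsh dompmwakeup "$VMNAME"        # step 7: wake the guest
run du -s /etc                         # step 8: an uncached FS operation, run inside the guest
```

Unset `DRY_RUN` on a real host/guest pair to drive the actual sequence.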
+Additional information:
+I've attached the libvirt domain XML[1] and the libvirtd debug logs for QEMU[2] ("1:qemu"), which cover suspending the guest & host, resuming the host & guest, and doing something to cause a hang. I tried to leave enough time afterwards for any timeout to occur.
+
+1. [nvme-voidlinux.xml](/uploads/1dea47af096ce58175f7aa526eca455e/nvme-voidlinux.xml)
+2. [nvme-qemu-debug.log](/uploads/42d3bed456a795069023a61d38fa5ccd/nvme-qemu-debug.log)
diff --git a/results/classifier/108/other/1569053 b/results/classifier/108/other/1569053
new file mode 100644
index 000000000..e0c8d94f0
--- /dev/null
+++ b/results/classifier/108/other/1569053
@@ -0,0 +1,60 @@
+other: 0.910
+debug: 0.875
+semantic: 0.868
+device: 0.862
+performance: 0.861
+vnc: 0.858
+graphic: 0.856
+boot: 0.852
+KVM: 0.851
+PID: 0.842
+socket: 0.841
+files: 0.840
+permissions: 0.836
+network: 0.828
+
+Qemu crashes when I start a second VM from command line
+
+I am using QEMU on a 64-bit x86 platform running Ubuntu 14.04. I cloned the latest version of QEMU from GitHub and installed it on my system.
+
+I booted the 1st VM with this command:
+
+sudo qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /var/lib/libvirt/images/vm1p4.img -boot c -enable-kvm -no-reboot -net none -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc
+
+It was launched successfully.
+Then I launched the second VM with this command:
+
+sudo qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /var/lib/libvirt/images/vm1p4-2.img -boot c -enable-kvm -no-reboot -net none -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user2 -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2 -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc
+
+QEMU crashed right away. Please see the log below.
+
+
+
+sudo qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda /var/lib/libvirt/images/vm1p4-2.img -boot c -enable-kvm -no-reboot -net none -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user2 -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2 -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -mem-prealloc
+KVM internal error. Suberror: 1
+emulation failure
+EAX=000cc765 EBX=00000007 ECX=000cc6ac EDX=0000df00
+ESI=1ff00000 EDI=0000d5d7 EBP=ffffffff ESP=0000f9ce
+EIP=d5d70000 EFL=00010012 [----A--] CPL=0 II=0 A20=1 SMM=0 HLT=0
+ES =df00 000df000 ffffffff 00809300
+CS =f000 000f0000 ffffffff 00809b00
+SS =df00 000df000 ffffffff 00809300
+DS =df00 000df000 ffffffff 00809300
+FS =0000 00000000 ffffffff 00809300
+GS =0000 00000000 ffffffff 00809300
+LDT=0000 00000000 0000ffff 00008200
+TR =0000 00000000 0000ffff 00008b00
+GDT=     00000000 00000000
+IDT=     00000000 000003ff
+CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000
+DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
+DR6=00000000ffff0ff0 DR7=0000000000000400
+EFER=0000000000000000
+Code=00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+
+When the second VM crashed, what happened to the first VM? It seems that you are using vhost-user as your backend type. Do the backends of the 1st and 2nd VM connect to the same switch?
+
+Can you still reproduce the problem with the latest upstream version of QEMU and the latest version of the upstream Linux kernel?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+