Diffstat (limited to 'results/classifier/118/peripherals/1863200')
| -rw-r--r-- | results/classifier/118/peripherals/1863200 | 122 |
1 files changed, 122 insertions, 0 deletions
diff --git a/results/classifier/118/peripherals/1863200 b/results/classifier/118/peripherals/1863200
new file mode 100644
index 000000000..8b4714353
--- /dev/null
+++ b/results/classifier/118/peripherals/1863200
@@ -0,0 +1,122 @@
+user-level: 0.933
+peripherals: 0.920
+mistranslation: 0.913
+vnc: 0.907
+ppc: 0.901
+KVM: 0.884
+register: 0.875
+risc-v: 0.874
+TCG: 0.869
+hypervisor: 0.865
+x86: 0.861
+virtual: 0.859
+VMM: 0.853
+permissions: 0.851
+performance: 0.847
+network: 0.839
+graphic: 0.834
+debug: 0.831
+architecture: 0.816
+socket: 0.811
+device: 0.805
+semantic: 0.805
+arm: 0.801
+assembly: 0.801
+files: 0.799
+PID: 0.762
+boot: 0.761
+kernel: 0.712
+i386: 0.566
+
+Reconnect failed with loopback virtio 1.1 server mode test
+
+Issue description:
+Packed ring server mode is a new feature that lets the virtio-user or virtio-pmd (in the VM) act as the server and vhost as the client, so that when the vhost-user is killed and re-launched, it can connect back to virtio-user/virtio-pmd again. Tested with DPDK 20.02, the virtio-pmd loopback reconnect from vhost-user failed.
+
+Test environment:
+DPDK version: DPDK v20.02
+Other software versions: virtio 1.1
+QEMU version: 4.2.0
+OS: Linux 4.15.0-20-generic
+Compiler: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
+Hardware platform: R2208WFTZSR
+
+The reproduce steps are:
+
+Test Case: vhost-user/virtio-pmd loopback reconnect from vhost-user
+===================================================================
+Flow: Vhost-user --> Virtio --> Vhost-user
+
+1. Launch vhost-user in client mode with the commands below::
+
+    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=/tmp/vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    testpmd>set fwd mac
+
+2. Start the VM with 1 virtio device, and set qemu as server mode::
+
+    ./qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 100 -m 8G \
+    -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem -mem-prealloc -drive file=/home/xuan/dpdk_project/shell/u18.img \
+    -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -chardev socket,id=char0,path=/tmp/vhost-net,server \
+    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
+    -vnc :10
+
+3. In the VM, bind the virtio net device to igb_uio and run testpmd::
+
+    ./testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Send packets from vhost-user and check that packets can be RX/TX with virtio-pmd::
+
+    testpmd>start tx_first 32
+    testpmd>show port stats all
+
+5. On the host, quit vhost-user, then re-launch vhost-user with the commands below::
+
+    testpmd>quit
+    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=/tmp/vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    testpmd>set fwd mac
+    testpmd>start tx_first 32
+
+6. Check whether the reconnection works: keep sending packets from vhost-user and check that packets can be RX/TX with virtio-pmd::
+
+    testpmd>show port stats all
+
+Expected result::
+
+After the vhost-user is killed and re-launched, the VM can connect back to vhost-user again and throughput resumes.
+
+Real result::
+
+After the vhost-user is killed and re-launched, there is no throughput with PVP.
+
+Analysis::
+
+QEMU has its own way of handling the reconnect function for virtio server mode. However, for the packed ring, when reconnecting to virtio, vhost cannot get the status of the descriptors from the descriptor ring. The bug arises because reconnection with the packed ring needs an additional reset operation.
+
+The QEMU project is currently considering moving its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report is valid, please switch the state
+back to "New" within the next 60 days; otherwise this report will be
+marked as "Expired". Or please mark it as "Fix Released" if the
+problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+  https://gitlab.com/qemu-project/qemu/-/issues/248
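
The analysis in the report comes down to where the ring state lives: in the packed ring layout, each side's ring wrap counter is kept privately rather than in shared memory, so a backend that reconnects with freshly initialised counters cannot tell which descriptors are currently available. The following self-contained C sketch (not QEMU or DPDK source; the struct layout, flag names and wrap-counter handling are simplified assumptions loosely modelled on the virtio 1.1 packed-ring format) illustrates why a plain re-attach without a reset leaves the re-launched vhost with a stale view of the ring::

    /*
     * Illustrative sketch only: why a packed-ring backend cannot simply
     * re-attach after a reconnect without a ring reset.  All names and
     * layouts here are simplified assumptions, not real QEMU/DPDK code.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DESC_F_AVAIL (1u << 7)   /* AVAIL flag bit in desc->flags */
    #define DESC_F_USED  (1u << 15)  /* USED flag bit in desc->flags  */

    struct packed_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;              /* carries the AVAIL/USED bits   */
    };

    /* Device-side check: a descriptor is available when its AVAIL bit
     * matches the device's expected wrap counter and its USED bit does
     * not.  The wrap counter itself is private state, not in the ring. */
    static bool desc_is_avail(const struct packed_desc *d, bool device_wrap)
    {
        bool avail = d->flags & DESC_F_AVAIL;
        bool used  = d->flags & DESC_F_USED;
        return avail == device_wrap && used != device_wrap;
    }

    int main(void)
    {
        struct packed_desc ring[4] = {0};
        bool driver_wrap = true;     /* both sides start with wrap = 1 */
        bool device_wrap = true;

        /* Driver marks descriptor 0 available under the current wrap:
         * AVAIL bit = wrap counter, USED bit = inverse of it.         */
        ring[0].flags = driver_wrap ? DESC_F_AVAIL : DESC_F_USED;
        printf("before reconnect: desc 0 available to device? %s\n",
               desc_is_avail(&ring[0], device_wrap) ? "yes" : "no");

        /* vhost is killed and re-launched; meanwhile the driver has
         * wrapped, but the fresh backend comes back with the initial
         * wrap counter and nothing in the ring tells it otherwise.    */
        driver_wrap = false;
        ring[0].flags = driver_wrap ? DESC_F_AVAIL : DESC_F_USED;
        bool reconnected_device_wrap = true;   /* fresh backend default */
        printf("after reconnect, no reset: desc 0 available? %s (stale view)\n",
               desc_is_avail(&ring[0], reconnected_device_wrap) ? "yes" : "no");

        /* Only a reset/renegotiation of the ring brings the wrap
         * counters back into agreement - the extra step the bug report
         * says the packed-ring reconnect path is missing.             */
        return 0;
    }

Running the sketch prints "yes" before the simulated reconnect and "no (stale view)" after it, even though the driver did make a descriptor available, which mirrors the "no throughput after re-launch" symptom described above.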