Network performance regression with vde_switch

I've noticed a significant network performance regression when using
vde_switch, starting about one week ago (10/05/2012); before that date
I used to get about 1.5 Gbit/s host to guest, but now I can only get
about 320 Mbit/s. I didn't find any modification in net/vde.*, just in
hw/virtio*.

My command line:

qemu-system-i386 -cdrom /bpd/bpd.iso -m 512 -boot d -enable-kvm \
    -localtime -ctrl-grab -usbdevice tablet \
    -device virtio-net-pci,mac=52:54:00:18:01:01,netdev=vde0,tx=bh,ioeventfd=on,x-txburst=32 \
    -netdev vde,id=vde0 -vga std -tb-size 2M -cpu host -clock unix

My host runs kernel 3.6.1 and my guest runs kernel 3.5.4; the same
problem happens with other host and guest versions, too.

I know there are better ways of running a guest, but using vde I get a
cleaner environment on the host (just one tun/tap interface to
manage...), which is quite good when running some academic experiments.

Interestingly, at the same time I've noticed a performance improvement
of about 25-30% when using a tun/tap interface, bridged or not.

Thank you very much.

Edivaldo de Araujo Pereira


On Fri, Oct 12, 2012 at 05:34:23PM -0000, Edivaldo de Araujo Pereira wrote:
> I've noticed a significant network performance regression when using
> vde_switch, starting about one week ago (10/05/2012); before that date
> I used to get about 1.5 Gbit/s host to guest, but now I can only get
> about 320 Mbit/s. I didn't find any modification in net/vde.*, just in
> hw/virtio*.
>
> My command line:
> qemu-system-i386 -cdrom /bpd/bpd.iso -m 512 -boot d -enable-kvm \
>     -localtime -ctrl-grab -usbdevice tablet \
>     -device virtio-net-pci,mac=52:54:00:18:01:01,netdev=vde0,tx=bh,ioeventfd=on,x-txburst=32 \
>     -netdev vde,id=vde0 -vga std -tb-size 2M -cpu host -clock unix
>
> My host runs kernel 3.6.1 and my guest runs kernel 3.5.4; the same
> problem happens with other host and guest versions, too.
>
> I know there are better ways of running a guest, but using vde I get a
> cleaner environment on the host (just one tun/tap interface to
> manage...), which is quite good when running some academic experiments.
>
> Interestingly, at the same time I've noticed a performance improvement
> of about 25-30% when using a tun/tap interface, bridged or not.

Hi Edivaldo,

It would be great if you could help find the commit that caused this
regression. The basic process is:

1. Identify a QEMU release or git tree that gives you 1.5 Gbit/s.
2. Double-check that qemu.git/master suffers reduced performance.
3. git bisect start <bad> <good>
   where <bad> and <good> are the git commits that show differing
   performance (for example, bad=HEAD good=v1.1.0).

Then git will step through the commit history and ask you to test at
each step. (This is a binary search, so even finding regressions that
happened many commits ago requires few steps.)

You can read more about git-bisect(1) here:
http://git-scm.com/book/en/Git-Tools-Debugging-with-Git#Binary-Search
http://www.kernel.org/pub/software/scm/git/docs/git-bisect.html

The end result is the commit that introduced the regression. Please
post what you find!

Stefan


Hi Stefan,

Thank you very much for taking the time to help me, and excuse me for
not seeing your answer earlier... I've run the procedure you pointed
me to, and the result is:

0d8d7690850eb0cf2b2b60933cf47669a6b6f18f is the first bad commit
commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
Author: Amit Shah
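
[Editor's note: for reference, the bisect procedure Stefan outlines above
would look roughly like the following sketch. The known-good tag, the
iperf measurement and the guest address are assumptions chosen for
illustration; only the first-bad commit hash comes from the thread itself.]

    # Inside an up-to-date qemu.git checkout; v1.1.0 is assumed known good.
    git bisect start HEAD v1.1.0

    # At each step git checks out a candidate commit; rebuild and measure.
    ./configure --target-list=i386-softmmu && make -j4
    # Boot the guest with the same -device virtio-net-pci / -netdev vde
    # command line as above, then measure host-to-guest throughput,
    # e.g. guest: iperf -s    host: iperf -c <guest-ip>
    git bisect good     # if throughput is still ~1.5 Gbit/s
    git bisect bad      # if throughput has dropped to ~320 Mbit/s

    # Repeat until git reports "<sha1> is the first bad commit", then:
    git bisect reset

Because bisect is a binary search, the number of build-and-test
iterations grows only logarithmically with the number of commits between
the good and bad endpoints, as Stefan notes above.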