Diffstat:
 results/classifier/118/performance/597    | 49 ++++++++++++++++
 results/classifier/118/performance/597351 | 67 ++++++++++++++++++++++
 2 files changed, 116 insertions(+), 0 deletions(-)
diff --git a/results/classifier/118/performance/597 b/results/classifier/118/performance/597
new file mode 100644
index 000000000..920d7906c
--- /dev/null
+++ b/results/classifier/118/performance/597
@@ -0,0 +1,49 @@
+performance: 0.931
+virtual: 0.876
+device: 0.815
+graphic: 0.769
+network: 0.682
+hypervisor: 0.660
+mistranslation: 0.570
+boot: 0.550
+semantic: 0.540
+register: 0.533
+PID: 0.513
+kernel: 0.492
+architecture: 0.487
+socket: 0.471
+arm: 0.425
+user-level: 0.398
+vnc: 0.388
+i386: 0.378
+VMM: 0.368
+debug: 0.348
+ppc: 0.345
+TCG: 0.317
+x86: 0.311
+permissions: 0.302
+peripherals: 0.286
+risc-v: 0.272
+KVM: 0.232
+files: 0.183
+assembly: 0.055
+
+sunhme sometimes causes the VM to hang forever
+Description of problem:
+When using sunhme, sometimes on receiving traffic (and doing disk IO?) it will get slower and slower until it becomes entirely unresponsive, which does not happen on the real hardware I have sitting next to me (Sun Netra T1, running the same OS+kernel, though not the same image)
+
+virtio-net-pci does not, so far, demonstrate the problem, and neither does just sending a lot of traffic out over the sunhme interface, so it appears to require receiving or some more complex interaction.
+
+It doesn't always happen immediately, it sometimes takes a couple of tries with the command, but when it does, it's gone.
+
+Output logged to console below.
+Steps to reproduce:
+1. Log into VM (rich/omgqemu)
+2. sudo apt clean;sudo apt update;
+3. If it doesn't lock up the VM, repeat step 2 a few times.
+Additional information:
+Disk image can be found [here](https://www.dropbox.com/s/0oosyf7xej44v9n/sunhme_repro_disk.tgz?dl=0) (tarred in the hope that it does something reasonable with sparseness)
+
+Console output can be found [here](https://www.dropbox.com/s/t1wxx41vzv8p3l6/sunhme%20sadness.txt?dl=0)
+
+Ah yes, [the initrd and vmlinux](https://www.dropbox.com/s/t7i4gs7poqaeanz/oops_boot.tgz?dl=0) would help, wouldn't they, though I imagine the ones in the VM itself would boot...
diff --git a/results/classifier/118/performance/597351 b/results/classifier/118/performance/597351
new file mode 100644
index 000000000..57278a7d2
--- /dev/null
+++ b/results/classifier/118/performance/597351
@@ -0,0 +1,67 @@
+performance: 0.968
+graphic: 0.682
+device: 0.648
+semantic: 0.548
+user-level: 0.514
+architecture: 0.456
+peripherals: 0.440
+network: 0.415
+mistranslation: 0.389
+virtual: 0.332
+socket: 0.313
+x86: 0.190
+PID: 0.182
+permissions: 0.171
+ppc: 0.163
+hypervisor: 0.148
+debug: 0.136
+assembly: 0.134
+i386: 0.112
+KVM: 0.111
+register: 0.085
+boot: 0.082
+TCG: 0.072
+kernel: 0.063
+VMM: 0.055
+risc-v: 0.051
+arm: 0.048
+vnc: 0.043
+files: 0.036
+
+Slow UDP performance with virtio device
+
+I'm working on an app that is very sensitive to round-trip latency
+between the guest and host, and qemu/kvm seems to be significantly
+slower than it needs to be.
+
+The attached program is a ping/pong over UDP. Call it with a single
+argument to start a listener/echo server on that port. With three
+arguments it becomes a counted "pinger" that will exit after a
+specified number of round trips for performance measurements. For
+example:
+
+  $ gcc -o udp-pong udp-pong.c
+  $ ./udp-pong 12345 &                       # start a listener on port 12345
+  $ time ./udp-pong 127.0.0.1 12345 1000000  # time a million round trips
+
+When run on the loopback device on a single machine (true on the host
+or within a guest), I get about 100k/s.
+
+When run across a port forward using "user" networking on qemu (or
+kvm, the performance is the same) and the default rtl8139 driver (both
+the host and guest are Ubuntu Lucid), I get about 10k/s. This seems
+very slow, but perhaps unavoidably so?
+
+When run in the same configuration using the "virtio" driver, I get
+only 2k/s. This is almost certainly a bug in the virtio driver, given
+that it's a paravirtualized device that is 5x slower than the "slow"
+hardware emulation.
+
+I get no meaningful change in performance between kvm/qemu.
+
+
+
+Triaging old bug tickets ... can you still reproduce this issue with the latest version of QEMU? Have you already tried vhost?
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
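For reference, the udp-pong.c attachment mentioned in the second report (results/classifier/118/performance/597351) is not part of this diff. Below is a minimal sketch of an equivalent tool, reconstructed only from the usage described in that report; the function names, the 1-byte payload, and the error handling are assumptions, not the original source.

```c
/* Minimal sketch of a UDP ping/pong tool matching the usage described in the
 * report: "./udp-pong <port>" starts an echo server, while
 * "./udp-pong <host> <port> <count>" performs <count> round trips and exits.
 * This is NOT the original udp-pong.c attachment; details are assumed. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static int run_server(int port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("server setup");
        return 1;
    }
    for (;;) {
        /* Echo every datagram back to whoever sent it. */
        struct sockaddr_in peer;
        socklen_t peerlen = sizeof(peer);
        char buf[64];
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &peerlen);
        if (n > 0)
            sendto(fd, buf, n, 0, (struct sockaddr *)&peer, peerlen);
    }
}

static int run_client(const char *host, int port, long count)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { 0 };
    char buf[64] = "x";

    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    if (fd < 0 || inet_pton(AF_INET, host, &addr.sin_addr) != 1) {
        perror("client setup");
        return 1;
    }
    for (long i = 0; i < count; i++) {
        /* One send followed by one blocking receive is one round trip. */
        if (sendto(fd, buf, 1, 0, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            recv(fd, buf, sizeof(buf), 0) < 0) {
            perror("round trip");
            return 1;
        }
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc == 2)
        return run_server(atoi(argv[1]));
    if (argc == 4)
        return run_client(argv[1], atoi(argv[2]), atol(argv[3]));
    fprintf(stderr, "usage: %s <port> | %s <host> <port> <count>\n",
            argv[0], argv[0]);
    return 2;
}
```

Because exactly one datagram is in flight at a time, the reported rate (round trips per second) is essentially the inverse of the guest/host round-trip latency, which is why this kind of test makes the rtl8139 vs. virtio difference described in the report so visible.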