Diffstat (limited to 'results/classifier/zero-shot/118/performance/1725707')
-rw-r--r--  results/classifier/zero-shot/118/performance/1725707  114
1 file changed, 114 insertions, 0 deletions
diff --git a/results/classifier/zero-shot/118/performance/1725707 b/results/classifier/zero-shot/118/performance/1725707
new file mode 100644
index 00000000..a1acd295
--- /dev/null
+++ b/results/classifier/zero-shot/118/performance/1725707
@@ -0,0 +1,114 @@
+performance: 0.925
+debug: 0.918
+semantic: 0.909
+architecture: 0.905
+graphic: 0.894
+network: 0.889
+virtual: 0.886
+register: 0.880
+device: 0.872
+assembly: 0.864
+vnc: 0.858
+arm: 0.853
+peripherals: 0.836
+mistranslation: 0.819
+permissions: 0.818
+PID: 0.817
+boot: 0.810
+VMM: 0.789
+KVM: 0.788
+user-level: 0.786
+risc-v: 0.781
+files: 0.780
+hypervisor: 0.770
+TCG: 0.754
+socket: 0.745
+ppc: 0.725
+kernel: 0.715
+x86: 0.570
+i386: 0.561
+
+QEMU sends excess VNC data to websockify even when network is poor
+
+Description of problem
+-------------------------
+In my latest topic, I reported a bug related to QEMU's websocket support:
+https://bugs.launchpad.net/qemu/+bug/1718964
+
+It has been fixed, but someone mentioned that they hit the same problem when using QEMU with a standalone websocket proxy.
+That confused me, because in that scenario QEMU only sees a plain "RAW" VNC connection.
+So I ran a test and found that there is indeed a problem:
+
+When the client's network is poor (on a low-speed WAN), QEMU still sends a lot of data to the websocket proxy, and the client gets stuck. Only QEMU seems to have this problem; other VNC servers work fine.
+
+Environment
+-------------------------
+All of the following versions have been tested:
+
+QEMU: 2.8.1.1 / 2.9.1 / 2.10.1 / master (Up to date)
+Host OS: Ubuntu 16.04 Server LTS / CentOS 7 x86_64_1611
+Websocket Proxy: websockify 0.6.0 / 0.7.0 / 0.8.0 / master
+VNC Web Client: noVNC 0.5.1 / 0.61 / 0.62 / master
+Other VNC Servers: TigerVNC 1.8 / x11vnc 0.9.13 / TightVNC 2.8.8
+
+Steps to reproduce:
+-------------------------
+100% reproducible.
+
+1. Launch a QEMU instance (no websocket option needed):
+qemu-system-x86_64 -enable-kvm -m 6G ./win_x64.qcow2 -vnc :0
+
+2. Launch websockify on a separate host and connect to QEMU's VNC port
+
+3. Open VNC Web Client (noVNC/vnc.html) in browser and connect to websockify
+
+4. Play a video (e.g. Watch YouTube) on VM (To produce a lot of frame buffer update)
+
+5. Limit (e.g. Use NetLimiter) the client inbound bandwidth to 300KB/S (To simulate a low speed WAN)
+
+6. The client's output gets stuck (less than 1 fps); the cursor is almost impossible to move
+
+7. Monitor network traffic on the proxy server
+
+Current result:
+-------------------------
+Monitor Downlink/Uplink network traffic on the proxy server
+(Refer to the attachments for more details).
+
+1. Used with QEMU
+- D: 5.9 MB/s U: 5.7 MB/s (Client on LAN)
+- D: 4.3 MB/s U: 334 KB/s (Client on WAN)
+
+2. Used with other VNC servers
+- D: 5.9 MB/s U: 5.6 MB/s (Client on LAN)
+- D: 369 KB/s U: 328 KB/s (Client on WAN)
+
+When the client's network is poor, all the other VNC servers (tigervnc/x11vnc/tightvnc)
+reduce the VNC data they send to the websocket proxy (uplink and downlink stay symmetric).
+QEMU, however, never drops any frames and keeps sending data at full rate. The client cannot consume data that fast, so more and more of it accumulates in websockify until it crashes.
+
+Expected results:
+-------------------------
+When the client's network is poor (WAN), QEMU should reduce the VNC data it sends to the websocket proxy.
+
+
+
+This is nothing specific to websockets AFAIK. Even using regular VNC, QEMU doesn't try to dynamically throttle data / quality settings.
+
+NB, if websockify crashes, that is a serious flaw in websockify: it shouldn't read an unbounded amount of data from QEMU if it is unable to send it on to the client. If websockify stopped reading data from QEMU, then QEMU would in turn stop sending once the TCP buffer was full.
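+The backpressure mechanism described above can be sketched in a few lines. This is not websockify's actual code, just a minimal relay loop showing how blocking sends translate into TCP flow control toward the upstream sender:
+
+```python
+import socket
+
+def relay(src: socket.socket, dst: socket.socket, bufsize: int = 65536) -> None:
+    """Copy bytes from src to dst, relying on TCP flow control for backpressure."""
+    while True:
+        data = src.recv(bufsize)
+        if not data:
+            break
+        # sendall() blocks while dst's send buffer is full; during that time
+        # we are not calling recv() on src, so src's receive window fills up
+        # and the upstream sender (here: QEMU) is throttled by ordinary TCP.
+        dst.sendall(data)
+```
+
+A proxy that instead reads eagerly and queues unsent data in its own memory defeats this mechanism, which is how unbounded buffering (and an eventual crash) can happen.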
+
+
+Reference:
+https://github.com/novnc/noVNC/issues/431#issuecomment-71883085
+
+QEMU uses many more (30x) write operations, each with a much smaller amount of data, than the other VNC servers; perhaps this explains the difference.
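+If that many-small-writes pattern is the culprit, one mitigation on the proxy side would be to coalesce small reads into fewer, larger writes. A hypothetical sketch (the class name and threshold are illustrative, not part of websockify's API):
+
+```python
+import socket
+
+class CoalescingWriter:
+    """Buffer small writes and flush them downstream in larger chunks."""
+
+    def __init__(self, sock: socket.socket, flush_threshold: int = 16384):
+        self.sock = sock
+        self.buf = bytearray()
+        self.flush_threshold = flush_threshold
+
+    def write(self, data: bytes) -> None:
+        self.buf += data
+        if len(self.buf) >= self.flush_threshold:
+            self.flush()
+
+    def flush(self) -> None:
+        if self.buf:
+            self.sock.sendall(self.buf)  # one send call instead of many small ones
+            self.buf.clear()
+```
+
+Coalescing reduces per-operation overhead, but it does not fix the underlying issue of QEMU not adapting its frame rate to the client's bandwidth.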
+
+The QEMU project is currently considering moving its bug tracking to another system. For this we need to know which bugs are still valid and which can already be closed, so we are setting older bugs to "Incomplete" now.
+If you still think this bug report is valid, please switch the state back to "New" within the next 60 days; otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has already been solved in a newer version of QEMU. Thank you, and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+