QEMU VNC websocket proxy requires non-standard 'binary' subprotocol

When running a machine with "-vnc" and the "websocket" option, QEMU requires the WebSocket subprotocol named 'binary'. This subprotocol does not exist in the WebSocket specification and never has: it was briefly mentioned in one of the very early drafts of WebSockets, but it never made it into the final version.

When a WebSocket server requires a non-standard subprotocol, any correctly behaving WebSocket client will be unable to connect. One example of such a client is noVNC, which tells the server that it does not want to use any subprotocol. QEMU's WebSocket proxy therefore refuses noVNC's connection. If noVNC is modified to ask for 'binary' it works, but that is incorrect behavior on noVNC's part.

Looking at the code in "io/channel-websock.c", 'binary' is hard-coded. See lines 58 and 433 here: https://git.qemu.org/?p=qemu.git;a=blob;f=io/channel-websock.c

This code should be made more dynamic and should not require 'binary'. It is not mandatory to use a standardized subprotocol; all that is required is that the client and server agree. From https://developer.mozilla.org/en-US/docs/Web/HTTP/Protocol_upgrade_mechanism : "The subprotocols may be selected from the IANA WebSocket Subprotocol Name Registry or may be a custom name jointly understood by the client and the server."

QEMU used/required 'binary' because that is what noVNC used when the QEMU websockets code was first implemented. noVNC was later changed to stop sending the 'binary' subprotocol, in commit f8318361b1b62c4d76b091132d4a8ccfdd2957e4 (Author: Pierre Ossman).
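To illustrate the expected behavior, here is a minimal sketch (not QEMU's actual code) of RFC 6455 subprotocol negotiation. The helper name select_subprotocol is hypothetical; the point is that when the client offers no subprotocol, the server should complete the handshake without a Sec-WebSocket-Protocol header rather than reject the connection:

```python
def select_subprotocol(client_offered, server_supported):
    """RFC 6455 negotiation sketch: pick the first subprotocol the client
    offered that the server also supports."""
    for proto in client_offered:
        if proto in server_supported:
            return proto
    # No match, or the client offered none: the server should reply
    # WITHOUT a Sec-WebSocket-Protocol header instead of failing.
    return None

# A client like noVNC that offers no subprotocol must still be accepted:
print(select_subprotocol([], ["binary"]))          # None -> omit the header
# An older client that still asks for 'binary' keeps working:
print(select_subprotocol(["binary"], ["binary"]))  # 'binary'
```

With logic like this, QEMU could keep accepting legacy clients that request 'binary' while no longer rejecting spec-compliant clients such as current noVNC.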