virtio-serial loses writes when used over virtio-mmio

virtio-serial appears to lose writes, but only when used on top of
virtio-mmio.  The scenario is this:

  /home/rjones/d/qemu/arm-softmmu/qemu-system-arm \
      -global virtio-blk-device.scsi=off \
      -nodefconfig \
      -nodefaults \
      -nographic \
      -M vexpress-a15 \
      -machine accel=kvm:tcg \
      -m 500 \
      -no-reboot \
      -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1001/kernel.27944 \
      -dtb /home/rjones/d/libguestfs/tmp/.guestfs-1001/dtb.27944 \
      -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1001/initrd.27944 \
      -device virtio-scsi-device,id=scsi \
      -drive file=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/scratch.1,cache=unsafe,format=raw,id=hd0,if=none \
      -device scsi-hd,drive=hd0 \
      -drive file=/home/rjones/d/libguestfs/tmp/.guestfs-1001/root.27944,snapshot=on,id=appliance,cache=unsafe,if=none \
      -device scsi-hd,drive=appliance \
      -device virtio-serial-device \
      -serial stdio \
      -chardev socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0 \
      -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
      -append 'panic=1 mem=500M console=ttyAMA0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color'

After the guest starts up, a daemon in the guest writes 4 bytes to a
virtio-serial socket.  The host side reads these 4 bytes correctly and
writes back a 64 byte message.  The guest never sees this message.

I enabled virtio-mmio debugging, and this is what is printed
(## = my comment):

  ## guest opens the socket:
  trying to open virtio-serial channel '/dev/virtio-ports/org.libguestfs.channel.0'
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x3
  opened the socket, sock = 3
  udevadm settle

  ## guest writes 4 bytes to the socket:
  virtio_mmio: virtio_mmio_write offset 0x50 value 0x5
  virtio_mmio: virtio_mmio setting IRQ 1
  virtio_mmio: virtio_mmio_read offset 0x60
  virtio_mmio: virtio_mmio_write offset 0x64 value 0x1
  virtio_mmio: virtio_mmio setting IRQ 0
  sent magic GUESTFS_LAUNCH_FLAG

  ## host reads 4 bytes successfully:
  main_loop
  libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
  libguestfs: [14605ms] appliance is up
  Guest launched OK.

  ## host writes 64 bytes to socket:
  libguestfs: writing the data to the socket (size = 64)
  waiting for next request
  libguestfs: data written OK

  ## hangs here forever with guest in read() call never receiving any data

I am using qemu from git today (2d1fe1873a984).

strace -f of qemu when it fails.  Notes:

- fd 6 is the Unix domain socket connected to virtio-serial.
- Only one 4 byte write occurs on this socket (the expected
  guest -> host communication).
- The socket is never read at all (even though the library on the
  other side has written to it).
- The socket is never added to any poll/ppoll syscall, so it's no
  wonder that qemu never sees any data on the socket.

Recall this bug only happens intermittently.  This is an strace -f of
qemu when it happens to work.  Notes:

- fd 6 is the Unix domain socket.
- There are the expected number of recvmsg & write calls, all with the
  correct sizes.
- This time qemu adds the socket to ppoll.

I can reproduce this bug on a second ARM machine which doesn't have KVM
(i.e. using TCG).  Note it's still linked to virtio-mmio.
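[For anyone who wants to poke at this without the full libguestfs
stack, here is a minimal, self-contained sketch of the host side of the
exchange described above.  The socket path is made up, and the real
libguestfs protocol framing (and the actual GUESTFS_LAUNCH_FLAG value)
are not reproduced; only the 4-byte-in / 64-byte-out shape of the
handshake matches the log.]

/* Minimal host-side stand-in for the libguestfs end of the channel:
 * pre-create the UNIX socket, let qemu connect to it via -chardev
 * socket,path=...  then read the guest's 4-byte launch flag and write
 * a 64-byte message back.  A sketch for reproducing the hang, not the
 * real libguestfs protocol. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main (void)
{
  const char *path = "/tmp/guestfsd.sock";    /* hypothetical path */
  struct sockaddr_un addr;
  char flag[4], msg[64], reply[64];
  int lsock, sock;

  lsock = socket (AF_UNIX, SOCK_STREAM, 0);
  memset (&addr, 0, sizeof addr);
  addr.sun_family = AF_UNIX;
  strncpy (addr.sun_path, path, sizeof addr.sun_path - 1);
  unlink (path);
  if (lsock == -1 ||
      bind (lsock, (struct sockaddr *) &addr, sizeof addr) == -1 ||
      listen (lsock, 1) == -1) {
    perror ("socket/bind/listen");
    exit (EXIT_FAILURE);
  }

  sock = accept (lsock, NULL, NULL);          /* qemu connects here */

  if (read (sock, flag, sizeof flag) != 4) {  /* guest -> host: 4 bytes */
    perror ("read launch flag");
    exit (EXIT_FAILURE);
  }
  fprintf (stderr, "received launch flag\n");

  memset (msg, 0, sizeof msg);                /* host -> guest: 64 bytes */
  if (write (sock, msg, sizeof msg) != sizeof msg) {
    perror ("write");
    exit (EXIT_FAILURE);
  }
  fprintf (stderr, "data written OK\n");

  /* In the failing case this read blocks forever: qemu never ppoll()s
   * the socket, so the 64 bytes are never delivered to the guest and
   * the guest never answers. */
  read (sock, reply, sizeof reply);
  return 0;
}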
On 09/12/13 14:04, Richard Jones wrote:
> +     -chardev socket,path=/home/rjones/d/libguestfs/tmp/libguestfsLa9dE2/guestfsd.sock,id=channel0 \

Is this a socket that libguestfs pre-creates on the host side?

> the socket is never added to any poll/ppoll syscall, so it's no
> wonder that qemu never sees any data on the socket

This should be happening:

  qemu_chr_open_socket()              [qemu-char.c]
    unix_connect_opts()               [util/qemu-sockets.c]
      qemu_socket()
      connect()
      qemu_set_nonblock()             [util/oslib-posix.c]
    qemu_chr_open_socket_fd()
      socket_set_nodelay()            [util/osdep.c]
      io_channel_from_socket()
        g_io_channel_unix_new()
      tcp_chr_connect()
        io_add_watch_poll()
          g_source_new()
          g_source_attach()
          g_source_unref()
        qemu_chr_be_generic_open()

io_add_watch_poll() should make sure the fd is polled starting with the
next main loop iteration.

Interestingly, even in the "successful" case, there's a slew of ppoll()
calls between connect() returning 6 and the first ppoll() that actually
covers fd 6.

Laszlo


> Is this a socket that libguestfs pre-creates on the host side?

Yes it is:
https://github.com/libguestfs/libguestfs/blob/master/src/launch-direct.c#L208

You mention a scenario that might cause this, but that appears to be
when the socket is opened.  Note that the guest did send 4 bytes
successfully (received OK at the host).  The lost write occurs when the
host next tries to send a message back to the guest.


On 09/16/13 16:39, Richard Jones wrote:
>> Is this a socket that libguestfs pre-creates on the host side?
>
> Yes it is:
> https://github.com/libguestfs/libguestfs/blob/master/src/launch-direct.c#L208
>
> You mention a scenario that might cause this, but that appears to be
> when the socket is opened.  Note that the guest did send 4 bytes
> successfully (received OK at the host).  The lost write occurs when
> the host next tries to send a message back to the guest.

Which is the first time ever that a GLib event loop context messed up
only for *reading* would be exposed.  In other words, if the action
"register fd 6 for reading in the GLib main loop context" fails, that
wouldn't prevent qemu from *writing* to the UNIX domain socket.

In both traces, the IO thread (thread-id 8488 in the successful case,
and thread-id 7586 in the failing case) is the one opening /
registering etc. fd 6.  The IO thread is also the one calling ppoll().
However, all write(6, ...) syscalls are issued by one of the VCPU
threads (thread-id 8490 in the successful case, and thread-id 7588 in
the failing case).  Hmmmm.

Normally (as in, virtio-pci), when a VCPU thread (running KVM) executes
guest code that sends data to the host via virtio, KVM kicks the "host
notifier" eventfd.  Once this "host notifier" eventfd is kicked, the IO
thread should do:

  virtio_queue_host_notifier_read()
    virtio_queue_notify_vq()
      vq->handle_output()
        handle_output()               [hw/char/virtio-serial-bus.c]
          do_flush_queued_data()
            vsc->have_data()
              flush_buf()             [hw/char/virtio-console.c]
                qemu_chr_fe_write()
                  ... goes to the unix domain socket ...

When virtio-mmio is used though, the same seems to happen in the VCPU
thread:

  virtio_mmio_write()
    virtio_queue_notify()
      virtio_queue_notify_vq()
        ...same as above...
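[For readers less familiar with the GLib side of this: the registration
that tcp_chr_connect() relies on boils down to attaching a watch source
for the fd to the main loop context.  The toy program below is not qemu
code -- io_add_watch_poll() builds its own GSource with custom source
funcs rather than using g_io_create_watch() -- but it shows the same
attach-then-poll sequence, and the key property: the fd only enters the
main loop's ppoll() set on a main loop iteration that happens *after*
g_source_attach().]

/* Toy GLib illustration: attach a read watch for a pipe fd to the
 * default main context, then run the loop.  Data written before the
 * loop runs is still picked up, because the first loop iteration polls
 * the newly attached fd. */
#include <glib.h>
#include <unistd.h>

static gboolean
fd_readable (GIOChannel *chan, GIOCondition cond, gpointer user_data)
{
  char c;
  read (g_io_channel_unix_get_fd (chan), &c, 1);
  g_print ("fd became readable\n");
  g_main_loop_quit ((GMainLoop *) user_data);
  return FALSE;                  /* remove the watch */
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  int fds[2];

  if (pipe (fds) == -1)
    return 1;

  /* The same steps io_add_watch_poll() performs in spirit: wrap the
   * fd, create a watch source, attach it to the context. */
  GIOChannel *chan = g_io_channel_unix_new (fds[0]);
  GSource *src = g_io_create_watch (chan, G_IO_IN | G_IO_HUP);
  g_source_set_callback (src, (GSourceFunc) fd_readable, loop, NULL);
  g_source_attach (src, NULL);   /* NULL = default main context */
  g_source_unref (src);          /* the context now owns the source */

  write (fds[1], "x", 1);        /* data arrives before the loop runs */
  g_main_loop_run (loop);        /* ...but is still seen here */
  return 0;
}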
A long shot:

(a) With virtio-pci:

    (a1) guest writes to virtio-serial port,
    (a2) KVM sets the host notifier eventfd "pending",
    (a3) the IO thread sees that in the main loop / ppoll(), and copies
         the data to the UNIX domain socket (the backend),
    (a4) host-side libguestfs reads the data and responds,
    (a5) the IO thread reads the data from the UNIX domain socket,
    (a6) the IO thread pushes the data to the guest.

(b) With virtio-mmio:

    (b1) guest writes to virtio-serial port,
    (b2) the VCPU thread in qemu reads the data (virtio-mmio) and
         copies it to the UNIX domain socket,
    (b3) host-side libguestfs reads the data and responds,
    (b4) the IO thread is not (yet?) ready to read the data from the
         UNIX domain socket.

I can't quite pin it down, but I think that in the virtio-pci case, the
fact that everything runs through the IO thread automatically
serializes the connection to the UNIX domain socket (and its addition
to the GLib main loop context) with the message from the guest.  Due to
the KVM eventfd (the "host notifier"), everything goes through the same
ppoll().  Maybe it doesn't enforce any theoretical serialization; it
might just add a sufficiently long delay that there's never a problem
in practice.

Whereas in the virtio-mmio case, the initial write to the UNIX domain
socket and the response from host-side libguestfs run unfettered.  I
imagine something like:

- (IO thread)        connect to socket
- (IO thread)        add fd to main loop context
- (guest)            write to virtio-serial port
- (VCPU thread)      copy data to UNIX domain socket
- (host libguestfs)  read request, write response to UNIX domain socket
- (IO thread)        "I should probably check readiness on that socket
                     sometime"

I don't know why the IO thread doesn't get there *eventually*.

What happens if you add a five second delay to libguestfs, before
writing the response?

Laszlo
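[To make the suggested experiment concrete, a hypothetical wrapper
around the host-side write might look like the sketch below.  The
function name and call site are made up; in libguestfs the actual write
happens in its protocol code, not in a helper like this.]

/* Hypothetical illustration of the suggested experiment: delay the
 * host-side response by five seconds, so that qemu's IO thread has
 * certainly completed several main loop iterations (and thus had every
 * chance to add the chardev fd to its ppoll() set) before the 64-byte
 * message arrives on the socket. */
#include <unistd.h>

static ssize_t
delayed_write (int sock, const void *buf, size_t len)
{
  sleep (5);                /* give the IO thread time to register fd */
  return write (sock, buf, len);
}

[If the guest still never sees the 64 bytes even after the delay, then
the "IO thread hasn't gotten around to it yet" theory is wrong, and the
fd registration itself must be failing.]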