path: root/scripts/tracetool/backend/syslog.py

Commit log (date, subject, author, files changed, lines removed/added):
2025-06-02  hw/i386/pc_piix: Fix RTC ISA IRQ wiring of isapc machine  (Bernhard Beschow, 1 file, -0/+5)
Commit 56b1f50e3c10 ("hw/i386/pc: Wire RTC ISA IRQs in south bridges") attempted to refactor the RTC IRQ wiring, which was previously done in pc_basic_device_init(), but forgot about the isapc machine. Fix this by wiring the IRQ in the code section dedicated exclusively to the isapc machine.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2961
Fixes: 56b1f50e3c10 ("hw/i386/pc: Wire RTC ISA IRQs in south bridges")
cc: qemu-stable
Signed-off-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Mark Cave-Ayland <mark.caveayland@nutanix.com>
Message-Id: <20250526203820.1853-1-shentey@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
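A minimal C sketch of the failure mode, with hypothetical helper names rather than QEMU's actual pc_piix.c code: once per-board IRQ wiring is moved into the south-bridge init path, a machine without a south bridge (isapc) silently loses the wiring unless its dedicated code section repeats it.

    /* Hypothetical sketch -- names do not match QEMU's real pc_piix.c. */
    #include <stdbool.h>
    #include <stdio.h>

    static void wire_rtc_isa_irq(const char *machine)
    {
        printf("%s: RTC ISA IRQ wired\n", machine);
    }

    static void south_bridge_init(const char *machine)
    {
        /* After the refactor, the wiring lives here... */
        wire_rtc_isa_irq(machine);
    }

    static void machine_init(const char *machine, bool has_south_bridge)
    {
        if (has_south_bridge) {
            south_bridge_init(machine);   /* pc/piix-style machines */
        } else {
            /* ...so the isapc-only section must call it explicitly,
             * otherwise the RTC IRQ is left unwired (the reported bug). */
            wire_rtc_isa_irq(machine);
        }
    }

    int main(void)
    {
        machine_init("pc-i440fx", true);
        machine_init("isapc", false);
        return 0;
    }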
2025-06-02  vdpa: move memory listener register to vhost_vdpa_init  (Eugenio Pérez, 1 file, -7/+28)
Memory operations like pinning may take a lot of time at the destination. Currently they are done after the source of the migration is stopped, and before the workload is resumed at the destination. This is a period where neither traffic can flow nor the VM workload can continue (downtime).

We can do better, as we know the memory layout of the guest RAM at the destination from the moment that all devices are initialized. Moving that operation earlier allows QEMU to communicate the maps to the kernel while the workload is still running on the source, so Linux can start mapping them.

As a small drawback, there is a window during initialization where QEMU cannot respond to QMP etc. In testing, this time is about 0.2 seconds. It may be further reduced (or increased) depending on the vdpa driver and the platform hardware, and it is dominated by the cost of memory pinning. This matches the time that is moved out of the so-called downtime window.

The downtime is measured as the elapsed trace time between the last vhost_vdpa_suspend on the source and the last vhost_vdpa_set_vring_enable_one on the destination. In other words, from "guest CPUs freeze" to the instant the final Rx/Tx queue pair is able to start moving data.

Using ConnectX-6 Dx (MLX5) NICs in vhost-vDPA mode with 8 queue pairs, the series reduces guest-visible downtime during back-to-back live migrations by more than half:
- 39G VM:  4.72s -> 2.09s (-2.63s, ~56% improvement)
- 128G VM: 14.72s -> 5.83s (-8.89s, ~60% improvement)

Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
Message-Id: <20250522145839.59974-8-jonah.palmer@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
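As a quick sanity check of the figures quoted above, the following standalone C snippet recomputes the absolute and relative downtime reductions from the before/after values (the numbers themselves are taken from the commit message):

    #include <stdio.h>

    int main(void)
    {
        /* {before, after} guest-visible downtime in seconds, as reported */
        const struct { const char *vm; double before, after; } r[] = {
            { "39G VM",   4.72, 2.09 },
            { "128G VM", 14.72, 5.83 },
        };

        for (unsigned i = 0; i < sizeof(r) / sizeof(r[0]); i++) {
            double delta = r[i].before - r[i].after;
            double pct   = 100.0 * delta / r[i].before;
            printf("%-8s %.2fs -> %.2fs (-%.2fs, ~%.0f%% improvement)\n",
                   r[i].vm, r[i].before, r[i].after, delta, pct);
        }
        return 0;
    }

Running it reproduces the ~56% and ~60% improvements stated above.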
2025-06-02  vdpa: move iova_tree allocation to net_vhost_vdpa_init  (Eugenio Pérez, 2 files, -34/+18)
As we are moving to keep the mapping for the whole vdpa device lifetime instead of resetting it at VirtIO reset, we need to move all its dependencies to the initialization too. In particular, devices with x-svq=on need a valid iova_tree from the beginning.

Simplify the code by also consolidating the two creation points: the first data vq in case SVQ is active, and CVQ start in case only CVQ uses it.

Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Suggested-by: Si-Wei Liu <si-wei.liu@oracle.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
Message-Id: <20250522145839.59974-7-jonah.palmer@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
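A schematic C illustration of the consolidation, using made-up types and function names rather than QEMU's actual vhost-vdpa code: instead of lazily creating the IOVA tree in two different start paths (first data vq with SVQ, or CVQ only), it is allocated once when the net client is initialized, so it exists for the device's whole life.

    #include <stdbool.h>
    #include <stdlib.h>

    /* Stand-ins for the real structures. */
    typedef struct IOVATree { int dummy; } IOVATree;
    typedef struct VdpaNetClient {
        bool shadow_vqs_enabled;   /* x-svq=on */
        IOVATree *iova_tree;
    } VdpaNetClient;

    static IOVATree *iova_tree_new(void) { return calloc(1, sizeof(IOVATree)); }

    /* Before: two creation points, one in the data-vq start path and one in
     * the CVQ start path.  After: a single creation point at init time. */
    static VdpaNetClient *net_client_init(bool svq)
    {
        VdpaNetClient *n = calloc(1, sizeof(*n));
        n->shadow_vqs_enabled = svq;
        if (svq) {
            n->iova_tree = iova_tree_new();   /* was done at data/CVQ start */
        }
        return n;
    }

    int main(void)
    {
        VdpaNetClient *n = net_client_init(true);
        free(n->iova_tree);
        free(n);
        return 0;
    }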
2025-06-02  vdpa: reorder listener assignment  (Eugenio Pérez, 1 file, -1/+1)
Since commit f6fe3e333f ("vdpa: move memory listener to vhost_vdpa_shared") this piece of code repeatedly assigns the shared->listener members. This was not a problem as the listener was not used until device start. However, the next patches move the listener registration to this vhost_vdpa_init function. When the listener is registered it is added to an embedded linked list, so setting its members again would corrupt the linked list node.

Do the right thing and only set it in the first vdpa device.

Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
Message-Id: <20250522145839.59974-6-jonah.palmer@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
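The hazard comes from the listener living on an intrusive (embedded) linked list: once the node is linked, overwriting its members wipes the list's internal pointers. The sketch below uses <sys/queue.h>, the BSD macros QEMU's queue.h is modeled on; the struct names are illustrative only, not the real MemoryListener type.

    #include <stdio.h>
    #include <sys/queue.h>

    /* Stand-in for a memory listener with an embedded list entry. */
    struct listener {
        int id;
        LIST_ENTRY(listener) link;   /* intrusive node, like QLIST_ENTRY */
    };

    LIST_HEAD(listener_list, listener);

    int main(void)
    {
        struct listener_list all = LIST_HEAD_INITIALIZER(all);
        struct listener shared = { .id = 1 };

        /* First vdpa device: set up and register the shared listener. */
        LIST_INSERT_HEAD(&all, &shared, link);

        /* Second vdpa device: re-assigning the shared listener's members
         * (as the old code did) would wipe the embedded link pointers
         * while the node is still on the list -- the head still points at
         * it, but it no longer points back, so traversal and removal
         * break. */
        /* memset(&shared, 0, sizeof(shared));   <-- the bug; don't do this */

        /* The fix: only the first device assigns shared->listener. */
        struct listener *l;
        LIST_FOREACH(l, &all, link) {
            printf("registered listener id=%d\n", l->id);
        }
        return 0;
    }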
2025-06-02  vdpa: add listener_registered  (Eugenio Pérez, 2 files, -1/+12)
Track whether the listener has already been registered, so that at start we know whether it needs to be registered again.

Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
Message-Id: <20250522145839.59974-5-jonah.palmer@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
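A minimal sketch of how such a flag is typically used, with hypothetical names rather than the real vhost_vdpa structures: registration may now happen at init, so the start path consults a listener_registered boolean instead of unconditionally registering again.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct VhostVdpaShared {
        bool listener_registered;
    } VhostVdpaShared;

    static void register_listener(VhostVdpaShared *s)
    {
        printf("memory listener registered\n");
        s->listener_registered = true;
    }

    static void vdpa_init(VhostVdpaShared *s)
    {
        /* With the follow-up patches, registration can happen here. */
        register_listener(s);
    }

    static void vdpa_start(VhostVdpaShared *s)
    {
        /* Only register at start if init did not (or could not) do it. */
        if (!s->listener_registered) {
            register_listener(s);
        }
    }

    int main(void)
    {
        VhostVdpaShared s = { .listener_registered = false };
        vdpa_init(&s);
        vdpa_start(&s);   /* no double registration */
        return 0;
    }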
2025-06-02  vdpa: set backend capabilities at vhost_vdpa_init  (Eugenio Pérez, 1 file, -1/+6)
The backend does not reset them until the vdpa file descriptor is closed, so there is no harm in doing it only once.

This allows the destination of a live migration to premap memory in batches, using VHOST_BACKEND_F_IOTLB_BATCH.

Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
Message-Id: <20250522145839.59974-4-jonah.palmer@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
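The idea can be sketched in plain C with hypothetical wrapper names (not the actual vhost_vdpa_set_backend_cap code or the kernel's vhost UAPI): because the backend keeps the negotiated features until the fd is closed, sending them once at init is enough, and having the batching capability negotiated early is what lets a migration destination premap memory in batches.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for the real VHOST_BACKEND_F_IOTLB_BATCH flag. */
    #define F_IOTLB_BATCH (1ULL << 1)

    typedef struct Backend {
        uint64_t features;   /* kept by the kernel until the fd is closed */
    } Backend;

    static void set_backend_features(Backend *b, uint64_t features)
    {
        b->features = features;
        printf("backend features set: 0x%llx\n", (unsigned long long)features);
    }

    static void vdpa_init(Backend *b)
    {
        /* Done once here instead of at every device start: the backend
         * does not reset the features until the fd goes away, and having
         * the batch capability this early lets the destination batch its
         * IOTLB (memory map) updates while premapping guest RAM. */
        set_backend_features(b, F_IOTLB_BATCH);
    }

    int main(void)
    {
        Backend b = { 0 };
        vdpa_init(&b);
        return 0;
    }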
2025-06-02  vdpa: reorder vhost_vdpa_set_backend_cap  (Eugenio Pérez, 1 file, -30/+30)
It will be used directly by vhost_vdpa_init.

Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
Message-Id: <20250522145839.59974-3-jonah.palmer@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2025-06-02  vdpa: check for iova tree initialized at net_client_start  (Eugenio Pérez, 1 file, -1/+3)
To map the guest memory while it is migrating we need to create the iova_tree, as long as the destination uses x-svq=on. Check so it is not overridden if it already exists.

The function vhost_vdpa_net_client_stop clears it if the device is stopped. If the guest starts the device again, the iova tree is recreated by vhost_vdpa_net_data_start_first or vhost_vdpa_net_cvq_start if needed, so the old behavior is kept.

Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
Message-Id: <20250522145839.59974-2-jonah.palmer@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
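A rough sketch of the lifecycle this preserves, again with hypothetical names: the start path only allocates the tree when it is still missing (so a tree created early for migration is not overridden), while stop still clears it and a later restart recreates it.

    #include <stdlib.h>

    typedef struct IOVATree { int dummy; } IOVATree;
    typedef struct NetClient { IOVATree *iova_tree; } NetClient;

    static IOVATree *iova_tree_new(void) { return calloc(1, sizeof(IOVATree)); }

    static void net_client_start(NetClient *n)
    {
        /* Migration may already have created the tree to map guest memory
         * early; do not override it in that case. */
        if (!n->iova_tree) {
            n->iova_tree = iova_tree_new();
        }
    }

    static void net_client_stop(NetClient *n)
    {
        /* Stopping the device clears the tree, as before. */
        free(n->iova_tree);
        n->iova_tree = NULL;
    }

    int main(void)
    {
        NetClient n = { 0 };
        net_client_start(&n);   /* creates the tree */
        net_client_stop(&n);    /* clears it */
        net_client_start(&n);   /* a restart recreates it: old behavior kept */
        net_client_stop(&n);
        return 0;
    }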
2025-06-02  vhost: Don't set vring call if guest notifier is unused  (Huaitong Han, 3 files, -2/+8)
The vring call fd is set even when the guest does not use MSI-X (e.g., in the case of virtio PMD), leading to unnecessary CPU overhead for processing interrupts.

Commit 96a3d98d2c ("vhost: don't set vring call if no vector") optimized the case where MSI-X is enabled but the queue vector is unset. However, there is an additional case where the guest uses INTx and the INTx disable bit in the PCI config is set, meaning that no interrupt notifier will actually be used. In such cases, the vring call fd should also be cleared to avoid redundant interrupt handling.

Fixes: 96a3d98d2c ("vhost: don't set vring call if no vector")
Reported-by: Zhiyuan Yuan <yuanzhiyuan@chinatelecom.cn>
Signed-off-by: Jidong Xia <xiajd@chinatelecom.cn>
Signed-off-by: Huaitong Han <hanht2@chinatelecom.cn>
Message-Id: <20250522100548.212740-1-hanht2@chinatelecom.cn>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
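The condition can be summarized as a small predicate, sketched below with simplified stand-in types (the real check lives in QEMU's virtio-pci/vhost code and uses its own structures): the call fd is only useful when an interrupt can actually reach the guest, so it is skipped both when MSI-X has no vector assigned and when INTx is disabled in the PCI command word.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define VECTOR_UNSET      0xffff   /* "no MSI-X vector assigned" marker */
    #define CMD_INTX_DISABLE  0x0400   /* INTx disable bit in the PCI command word */

    typedef struct QueueIrqState {
        bool     msix_enabled;
        uint16_t msix_vector;   /* meaningful only when MSI-X is enabled */
        uint16_t pci_command;   /* PCI config space command register */
    } QueueIrqState;

    /* True when no interrupt notifier will actually be used, so setting a
     * vring call fd would only add interrupt-processing overhead. */
    static bool guest_notifier_unused(const QueueIrqState *q)
    {
        if (q->msix_enabled) {
            return q->msix_vector == VECTOR_UNSET;   /* already handled */
        }
        return q->pci_command & CMD_INTX_DISABLE;    /* the additional case */
    }

    int main(void)
    {
        QueueIrqState pmd = { .msix_enabled = false,
                              .pci_command  = CMD_INTX_DISABLE };
        printf("skip vring call fd: %s\n",
               guest_notifier_unused(&pmd) ? "yes" : "no");
        return 0;
    }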