path: root/include/hw/virtio/vhost-vdpa.h
Commit log (each entry: subject, author, date, files changed, -deleted/+added)
* vdpa: define SVQ transitioning state for mode switching (Si-Wei Liu, 2024-03-12, 1 file, -0/+9)

  Will be used in following patches. DISABLING(-1) means SVQ is being
  switched off to passthrough mode. ENABLING(1) means passthrough VQs are
  being switched to SVQ. DONE(0) means SVQ switching is completed.

  Message-Id: <1707910082-10243-11-git-send-email-si-wei.liu@oracle.com>
  Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
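The three values above map naturally onto a small enum, with the "done" state
as 0 so a zero-initialized field means "no transition in progress". A minimal
sketch of what such a type could look like; the identifier names here are
illustrative and may not match the ones actually used in QEMU:

    /* Illustrative sketch only, not the actual QEMU definition. */
    typedef enum SVQTransitionState {
        SVQ_TSTATE_DISABLING = -1, /* switching SVQ off, back to passthrough */
        SVQ_TSTATE_DONE = 0,       /* no transition in progress */
        SVQ_TSTATE_ENABLING = 1,   /* switching passthrough VQs to SVQ */
    } SVQTransitionState;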
* vdpa: move memory listener to vhost_vdpa_shared (Eugenio Pérez, 2023-12-26, 1 file, -1/+1)

  Next patches will register the vhost_vdpa memory listener while the VM
  is migrating at the destination, so we can map the memory to the device
  before stopping the VM at the source. The main goal is to reduce the
  downtime.

  However, the destination QEMU is unaware of which vhost_vdpa device
  will register its memory_listener. If the source guest has CVQ enabled,
  it will be the CVQ device. Otherwise, it will be the first one.

  Move the memory listener to a common place rather than always in the
  first / last vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-14-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: use dev_shared in vdpa_iommu (Eugenio Pérez, 2023-12-26, 1 file, -1/+1)

  The memory listener functions can call these too. Make vdpa_iommu work
  with VhostVDPAShared.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-13-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: use VhostVDPAShared in vdpa_dma_map and unmap (Eugenio Pérez, 2023-12-26, 1 file, -2/+2)

  The callers only have the shared information by the end of this series.
  Start converting these functions.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-12-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: move iommu_list to vhost_vdpa_shared (Eugenio Pérez, 2023-12-26, 1 file, -1/+1)

  Next patches will register the vhost_vdpa memory listener while the VM
  is migrating at the destination, so we can map the memory to the device
  before stopping the VM at the source. The main goal is to reduce the
  downtime.

  However, the destination QEMU is unaware of which vhost_vdpa device
  will register its memory_listener. If the source guest has CVQ enabled,
  it will be the CVQ device. Otherwise, it will be the first one.

  Move the iommu_list member to VhostVDPAShared so all vhost_vdpa can use
  it, rather than always in the first / last vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-11-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: remove msg type of vhost_vdpa (Eugenio Pérez, 2023-12-26, 1 file, -1/+0)

  It is always VHOST_IOTLB_MSG_V2. We can always bring it back per
  vhost_dev if needed.

  This change makes it easier for vhost_vdpa_map and unmap to depend only
  on VhostVDPAShared instead of on vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-10-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: move backend_cap to vhost_vdpa_shared (Eugenio Pérez, 2023-12-26, 1 file, -0/+3)

  Next patches will register the vhost_vdpa memory listener while the VM
  is migrating at the destination, so we can map the memory to the device
  before stopping the VM at the source. The main goal is to reduce the
  downtime.

  However, the destination QEMU is unaware of which vhost_vdpa device
  will register its memory_listener. If the source guest has CVQ enabled,
  it will be the CVQ device. Otherwise, it will be the first one.

  Move the backend_cap member to VhostVDPAShared so all vhost_vdpa can
  use it, rather than always in the first / last vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-9-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: move iotlb_batch_begin_sent to vhost_vdpa_shared (Eugenio Pérez, 2023-12-26, 1 file, -1/+2)

  Next patches will register the vhost_vdpa memory listener while the VM
  is migrating at the destination, so we can map the memory to the device
  before stopping the VM at the source. The main goal is to reduce the
  downtime.

  However, the destination QEMU is unaware of which vhost_vdpa device
  will register its memory_listener. If the source guest has CVQ enabled,
  it will be the CVQ device. Otherwise, it will be the first one.

  Move the iotlb_batch_begin_sent member to VhostVDPAShared so all
  vhost_vdpa can use it, rather than always in the first / last
  vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-8-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: move file descriptor to vhost_vdpa_shared (Eugenio Pérez, 2023-12-26, 1 file, -1/+1)

  Next patches will register the vhost_vdpa memory listener while the VM
  is migrating at the destination, so we can map the memory to the device
  before stopping the VM at the source. The main goal is to reduce the
  downtime.

  However, the destination QEMU is unaware of which vhost_vdpa device
  will register its memory_listener. If the source guest has CVQ enabled,
  it will be the CVQ device. Otherwise, it will be the first one.

  Move the file descriptor to VhostVDPAShared so all vhost_vdpa can use
  it, rather than always in the first / last vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-7-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: move shadow_data to vhost_vdpa_shared (Eugenio Pérez, 2023-12-26, 1 file, -2/+3)

  Next patches will register the vhost_vdpa memory listener while the VM
  is migrating at the destination, so we can map the memory to the device
  before stopping the VM at the source. The main goal is to reduce the
  downtime.

  However, the destination QEMU is unaware of which vhost_vdpa device
  will register its memory_listener. If the source guest has CVQ enabled,
  it will be the CVQ device. Otherwise, it will be the first one.

  Move the shadow_data member to VhostVDPAShared so all vhost_vdpa can
  use it, rather than always in the first or last vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-5-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: move iova_range to vhost_vdpa_shared (Eugenio Pérez, 2023-12-26, 1 file, -1/+2)

  Next patches will register the vhost_vdpa memory listener while the VM
  is migrating at the destination, so we can map the memory to the device
  before stopping the VM at the source. The main goal is to reduce the
  downtime.

  However, the destination QEMU is unaware of which vhost_vdpa device
  will register its memory_listener. If the source guest has CVQ enabled,
  it will be the CVQ device. Otherwise, it will be the first one.

  Move the iova range to VhostVDPAShared so all vhost_vdpa can use it,
  rather than always in the first or last vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-4-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: move iova tree to the shared struct (Eugenio Pérez, 2023-12-26, 1 file, -2/+2)

  Next patches will register the vhost_vdpa memory listener while the VM
  is migrating at the destination, so we can map the memory to the device
  before stopping the VM at the source. The main goal is to reduce the
  downtime.

  However, the destination QEMU is unaware of which vhost_vdpa device
  will register its memory_listener. If the source guest has CVQ enabled,
  it will be the CVQ device. Otherwise, it will be the first one.

  Move the iova tree to VhostVDPAShared so all vhost_vdpa can use it,
  rather than always in the first or last vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-3-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: add VhostVDPAShared (Eugenio Pérez, 2023-12-26, 1 file, -0/+5)

  It will hold properties shared among all vhost_vdpa instances
  associated with the same device. For example, we just need one
  iova_tree or one memory listener for the entire device.

  Next patches will register the vhost_vdpa memory listener at the
  beginning of the VM migration at the destination. This enables QEMU to
  map the memory to the device before stopping the VM at the source,
  instead of doing it while both source and destination are stopped,
  thus minimizing the downtime.

  However, the destination QEMU is unaware of which vhost_vdpa struct
  will register its memory_listener. If the source guest has CVQ
  enabled, it will be the one associated with the CVQ. Otherwise, it
  will be the first one.

  Save the memory-operation-related members in a common place rather
  than always in the first / last vhost_vdpa.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20231221174322.3130442-2-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
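Together with the "move ... to vhost_vdpa_shared" entries above, this series
converges on one shared struct per device plus thin per-instance wrappers. A
rough sketch of that shape, with member names taken from the commit subjects;
the exact definition in QEMU's vhost-vdpa.h may differ, and the types
(MemoryListener, VhostIOVATree, struct vhost_vdpa_iova_range) come from QEMU
and Linux uapi headers:

    /*
     * Rough sketch only, not the actual QEMU definition: state that every
     * vhost_vdpa instance of the same device shares (member names follow
     * the commit subjects in this log).
     */
    typedef struct VhostVDPAShared {
        int device_fd;                        /* the /dev/vhost-vdpa-* fd      */
        MemoryListener listener;              /* one listener per device       */
        struct vhost_vdpa_iova_range iova_range;
        VhostIOVATree *iova_tree;             /* IOVA translations for SVQ     */
        uint64_t backend_cap;                 /* negotiated backend features   */
        bool iotlb_batch_begin_sent;          /* BATCH_BEGIN already sent?     */
        bool shadow_data;                     /* track mappings in iova_tree?  */
    } VhostVDPAShared;

    /* Each per-virtqueue-group instance then points at the shared part. */
    struct vhost_vdpa {
        int index;
        VhostVDPAShared *shared;
        /* ... per-instance members ... */
    };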
* vdpa: export vhost_vdpa_set_vring_ready (Eugenio Pérez, 2023-10-04, 1 file, -0/+1)

  The vhost-vdpa net backend needs to enable vrings in a different order
  than the default, so export it.

  No functional change intended except for tracing, which now includes
  the (virtio) index being enabled and the return value of the ioctl.

  The return value of this function is still ignored if called from
  vhost_vdpa_dev_start, as reorganizing the calling code around it is
  out of the scope of this series.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20230822085330.3978829-3-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost-vdpa: Add support for vIOMMU. (Cindy Lu, 2023-05-19, 1 file, -0/+11)

  1. vIOMMU support lets vDPA work in IOMMU mode, which fixes security
  issues present in the no-IOMMU mode. To support this feature we need
  to add new functions for IOMMU MR adds and deletes.

  Also, since SVQ does not support vIOMMU yet, add a check for IOMMU in
  vhost_vdpa_dev_start; if SVQ and the IOMMU are enabled at the same
  time, the function will return failure.

  2. Skip the iova_max check in vhost_vdpa_listener_skipped_section()
  while the MR is an IOMMU region; move this check to
  vhost_vdpa_iommu_map_notify().

  Verified in the vp_vdpa and vdpa_sim_net drivers.

  Signed-off-by: Cindy Lu <lulu@redhat.com>
  Message-Id: <20230510054631.2951812-5-lulu@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
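For the MR add/delete bookkeeping mentioned in point 1, something like the
following per-IOMMU-region record is needed so that the notifier registered on
region_add can be found and removed again on region_del. This is an
illustrative sketch built on QEMU's IOMMUMemoryRegion, IOMMUNotifier, and
QLIST types; the field names are not guaranteed to match the actual code:

    /* Illustrative sketch: one entry per IOMMU memory region being tracked. */
    struct vdpa_iommu {
        struct vhost_vdpa *dev;          /* owning vhost-vdpa instance      */
        IOMMUMemoryRegion *iommu_mr;     /* the IOMMU region being watched  */
        hwaddr iommu_offset;             /* offset of the section within it */
        IOMMUNotifier n;                 /* fires on IOMMU map/unmap events */
        QLIST_ENTRY(vdpa_iommu) iommu_next;
    };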
* vdpa net: block migration if the device has CVQ (Eugenio Pérez, 2023-03-07, 1 file, -0/+1)

  Devices with CVQ need to migrate state beyond vq state. Leaving this to
  future series.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Message-Id: <20230303172445.1089785-11-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: add vhost_vdpa->suspended parameter (Eugenio Pérez, 2023-03-07, 1 file, -0/+2)

  This allows vhost_vdpa to track whether it is safe to get the vring
  base from the device or not. If it is not, vhost can fall back to
  fetching the idx from the guest buffer again.

  No functional change intended in this patch; later patches will use
  this field.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Message-Id: <20230303172445.1089785-6-eperezma@redhat.com>
  Tested-by: Lei Yang <leiyang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa-dev: get iova range explicitly (Longpeng, 2023-01-08, 1 file, -0/+2)

  In commit a585fad26b ("vdpa: request iova_range only once") we removed
  GET_IOVA_RANGE from vhost_vdpa_init, so the generic vdpa device will
  start without iova_range populated and the device won't work. Let's
  call the GET_IOVA_RANGE ioctl explicitly.

  Fixes: a585fad26b2e6ccc ("vdpa: request iova_range only once")
  Signed-off-by: Longpeng <longpeng2@huawei.com>
  Message-Id: <20221224114848.3062-2-longpeng2@huawei.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
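For reference, fetching the range boils down to one ioctl on the vhost-vdpa
fd. A minimal sketch, assuming the uapi definitions from <linux/vhost.h>
(VHOST_VDPA_GET_IOVA_RANGE and struct vhost_vdpa_iova_range); the helper name
is mine:

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>   /* VHOST_VDPA_GET_IOVA_RANGE, iova_range struct */

    /* Sketch: query the usable IOVA window of an already-open vhost-vdpa fd. */
    static int get_iova_range(int device_fd, struct vhost_vdpa_iova_range *range)
    {
        if (ioctl(device_fd, VHOST_VDPA_GET_IOVA_RANGE, range) < 0) {
            perror("VHOST_VDPA_GET_IOVA_RANGE");
            return -1;
        }
        printf("iova range: [0x%llx, 0x%llx]\n",
               (unsigned long long)range->first,
               (unsigned long long)range->last);
        return 0;
    }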
* vdpa: add shadow_data to vhost_vdpa (Eugenio Pérez, 2022-12-21, 1 file, -0/+2)

  The memory listener that tells the device how to convert GPA to qemu's
  VA is registered against the CVQ vhost_vdpa. Memory listener
  translations are always ASID 0; CVQ ones are ASID 1 if supported.

  Let's tell the listener whether it needs to register them on the iova
  tree or not.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20221215113144.322011-12-eperezma@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: add asid parameter to vhost_vdpa_dma_map/unmap (Eugenio Pérez, 2022-12-21, 1 file, -3/+11)

  So the caller can choose which ASID the mapping is destined for.

  No need to update the batch functions, as they will always be called
  from memory listener updates at the moment. Memory listener updates
  will always update ASID 0, as it's the passthrough ASID.

  All vhost devices' ASIDs are 0 at this moment.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20221215113144.322011-10-eperezma@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
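A sketch of how the prototypes could look after this change; the exact
declarations in vhost-vdpa.h may differ, and hwaddr is QEMU's physical-address
type:

    /* Illustrative prototypes: the ASID selects which address space the
     * mapping lands in (0 = passthrough; 1 = shadow CVQ when supported). */
    int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
                           hwaddr size, void *vaddr, bool readonly);
    int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
                             hwaddr size);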
* vdpa: Delete CVQ migration blocker (Eugenio Pérez, 2022-09-02, 1 file, -1/+0)

  We can restore the device state in the destination via CVQ now. Remove
  the migration blocker.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
* vdpa: Add device migration blocker (Eugenio Pérez, 2022-07-20, 1 file, -0/+1)

  Since the vhost-vdpa device is exposing _F_LOG, add a migration
  blocker if it uses CVQ. However, qemu is able to migrate simple
  devices with no CVQ as long as they use SVQ.

  To allow that, add a placeholder error to vhost_vdpa, and only add it
  to vhost_dev when used. The vhost_dev machinery places the migration
  blocker if needed.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
* vdpa: manual forward CVQ buffers (Eugenio Pérez, 2022-07-20, 1 file, -0/+3)

  Do a simple forwarding of CVQ buffers, the same work SVQ could do but
  through callbacks. No functional change intended.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
* vdpa: Export vhost_vdpa_dma_map and unmap calls (Eugenio Pérez, 2022-07-20, 1 file, -0/+4)

  Shadow CVQ will copy buffers in qemu's VA, so we avoid TOCTOU attacks
  from the guest that could set a different state in the qemu device
  model and the vdpa device. To do so, it needs to be able to map these
  new buffers to the device.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
* vdpa: Expose VHOST_F_LOG_ALL on SVQ (Eugenio Pérez, 2022-03-15, 1 file, -0/+1)

  SVQ is able to log the dirty bits by itself, so let's use it to not
  block migration.

  Also, ignore set and clear of VHOST_F_LOG_ALL on set_features if SVQ
  is enabled. Even if the device supports it, the reports would be
  nonsense because SVQ memory is in the qemu region.

  The log region is still allocated. Future changes might skip that, but
  this series is already long enough.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
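A minimal sketch of the idea, with a helper name of my own invention: the
dirty-log feature bit is claimed by QEMU because SVQ logs dirty pages itself,
instead of being passed through to a device that may not support it:

    #include <stdint.h>
    #include <stdbool.h>
    #include <linux/vhost.h>   /* VHOST_F_LOG_ALL */

    /* Hypothetical helper: claim VHOST_F_LOG_ALL when SVQ does the logging,
     * so migration is not blocked by a device without dirty-page logging. */
    static uint64_t add_svq_logging_feature(bool shadow_vqs_enabled,
                                            uint64_t device_features)
    {
        if (shadow_vqs_enabled) {
            device_features |= (1ULL << VHOST_F_LOG_ALL);
        }
        return device_features;
    }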
* vdpa: Add custom IOTLB translations to SVQ (Eugenio Pérez, 2022-03-15, 1 file, -0/+3)

  Use the translations added in VhostIOVATree in SVQ.

  Only introduce usage here, not allocation and deallocation. As with
  previous patches, we use the dead code paths of shadow_vqs_enabled to
  avoid committing too many changes at once. These paths are impossible
  to take at the moment.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
* vhost: Add Shadow VirtQueue kick forwarding capabilities (Eugenio Pérez, 2022-03-15, 1 file, -0/+4)

  In this mode no buffer forwarding is performed in SVQ: QEMU just
  forwards the guest's kicks to the device.

  Host memory notifier regions are left out for simplicity, and they
  will not be addressed in this series.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Acked-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
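Conceptually, kick forwarding just drains the guest's kick eventfd and
re-signals the device's kick eventfd, without touching descriptors. A
simplified sketch with plain eventfds; the real SVQ code works through QEMU's
EventNotifier wrappers and an event-loop handler:

    #include <stdint.h>
    #include <unistd.h>

    /* Simplified sketch of SVQ kick forwarding: consume the guest's kick
     * and relay it to the device, leaving the descriptors untouched. */
    static void svq_forward_guest_kick(int guest_kick_fd, int device_kick_fd)
    {
        uint64_t n;

        /* Drain the guest notification (an eventfd read resets the counter). */
        if (read(guest_kick_fd, &n, sizeof(n)) != sizeof(n)) {
            return;  /* spurious wakeup or error; nothing to forward */
        }

        /* Re-signal the device so it processes the available buffers. */
        n = 1;
        (void)write(device_kick_fd, &n, sizeof(n));
    }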
* vhost-vdpa: classify one time request (Jason Wang, 2021-10-20, 1 file, -0/+1)

  Vhost-vdpa multiqueue uses a model of one device with multiple queue
  pairs, so we need to classify the one-time requests (e.g. SET_OWNER)
  and make sure those requests are only issued once per device.

  This is used for multiqueue support.

  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20211020045600.16082-3-jasowang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vdpa: Check for iova range at mappings changes (Eugenio Pérez, 2021-10-20, 1 file, -0/+2)

  Check the vdpa device range before updating memory regions so we don't
  add any outside of it, and report the invalid change if any.

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Message-Id: <20211014141236.923287-4-eperezma@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
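The check itself is an interval-containment test against the range reported by
the device. A small sketch (the helper name is mine), written to avoid
unsigned overflow:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: true if [iova, iova + size) fits inside the device's usable
     * IOVA window [first, last], with both window ends inclusive. */
    static bool iova_range_contains(uint64_t first, uint64_t last,
                                    uint64_t iova, uint64_t size)
    {
        return size > 0 && iova >= first && iova <= last &&
               size - 1 <= last - iova;
    }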
* vhost-vdpa: Do not send empty IOTLB update batches (Eugenio Pérez, 2021-09-04, 1 file, -0/+1)

  With the introduction of batch hinting, meaningless batches can be
  created with no IOTLB updates if the memory region was skipped by
  vhost_vdpa_listener_skipped_section. This is the case of host notifier
  memory regions, device un/realize, and others.

  This causes the vdpa device to receive dma mapping settings with no
  changes, a possibly expensive operation for nothing.

  To avoid that, the VHOST_IOTLB_BATCH_BEGIN hint is delayed until we
  have a meaningful (not skipped section) mapping or unmapping
  operation, and VHOST_IOTLB_BATCH_END is not written unless at least
  one of _UPDATE / _INVALIDATE has been issued.

  v3:
  * Use a bool instead of a counter, avoiding potential number wrapping
  * Fix bad check on _commit
  * Move VHOST_BACKEND_F_IOTLB_BATCH check to
    vhost_vdpa_iotlb_batch_begin_once

  v2 (from RFC):
  * Rename misleading name
  * Abstract start batching function for listener_add/del

  Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
  Message-Id: <20210812140933.226288-1-eperezma@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
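A sketch of the resulting "begin once" pattern with a bool flag, as described
above. struct batch_state and send_iotlb_msg are placeholders standing in for
the real vhost-vdpa state and message write, not QEMU names:

    #include <stdbool.h>
    #include <linux/vhost.h>   /* VHOST_IOTLB_BATCH_BEGIN / _END */

    struct batch_state {           /* placeholder for the real vhost-vdpa state */
        int device_fd;
        bool iotlb_batch_begin_sent;
    };

    void send_iotlb_msg(int fd, int type);   /* placeholder message writer */

    /* Only emit BATCH_BEGIN right before the first real update of a batch. */
    static void iotlb_batch_begin_once(struct batch_state *s)
    {
        if (s->iotlb_batch_begin_sent) {
            return;
        }
        send_iotlb_msg(s->device_fd, VHOST_IOTLB_BATCH_BEGIN);
        s->iotlb_batch_begin_sent = true;
    }

    /* Only emit BATCH_END if a begin was actually sent; skip empty batches. */
    static void iotlb_batch_end(struct batch_state *s)
    {
        if (!s->iotlb_batch_begin_sent) {
            return;
        }
        send_iotlb_msg(s->device_fd, VHOST_IOTLB_BATCH_END);
        s->iotlb_batch_begin_sent = false;
    }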
* vhost-vdpa: map virtqueue notification area if possible (Jason Wang, 2021-06-11, 1 file, -0/+6)

  This patch implements vq notification mapping support for vhost-vDPA.
  This is simply done by using mmap()/munmap() on the vhost-vDPA fd
  during device start/stop.

  For devices without notification mapping support, we fall back to
  eventfd-based notification gracefully.

  Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
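Roughly, the doorbell page of each virtqueue is mapped with mmap() on the
vhost-vdpa fd. A hedged sketch; the per-queue page offset convention is my
assumption here, so check the vhost-vdpa uapi for the authoritative layout:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Sketch: map the notification (doorbell) page of virtqueue 'index'.
     * Assumption: the area lives at a per-queue page offset of the fd.
     * If the device doesn't support it, mmap fails and the caller falls
     * back to eventfd-based kicks. */
    static void *map_vq_notify_area(int device_fd, int index)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        void *addr = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED,
                          device_fd, (off_t)index * page_size);
        if (addr == MAP_FAILED) {
            perror("mmap vq notify area");
            return NULL;
        }
        return addr;   /* munmap(addr, page_size) on device stop */
    }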
* vhost-vdpa: Remove redundant declaration of address_space_memory (Xie Yongji, 2021-06-05, 1 file, -1/+0)

  The symbol address_space_memory is already declared in
  include/exec/address-spaces.h. So let's add this header file and
  remove the redundant declaration in include/hw/virtio/vhost-vdpa.h.

  Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
  Message-Id: <20210517123246.999-1-xieyongji@bytedance.com>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
* vhost-vdpa: Make vhost_vdpa_get_device_id() static (Zenghui Yu, 2021-05-14, 1 file, -2/+0)

  As it's only used inside hw/virtio/vhost-vdpa.c.

  Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
  Message-Id: <20210413133737.1574-1-yuzenghui@huawei.com>
  Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
* vhost-vdpa: batch updating IOTLB mappings (Jason Wang, 2020-09-29, 1 file, -0/+1)

  To speed up memory mapping updates between vhost-vDPA and the vDPA
  device driver, this patch passes IOTLB batching flags via the IOTLB
  API. Two new flags were introduced: VHOST_IOTLB_BATCH_BEGIN is a hint
  that a batched IOTLB update may be initiated from userspace;
  VHOST_IOTLB_BATCH_END is a hint that userspace has finished the
  update:

      VHOST_IOTLB_BATCH_BEGIN
      VHOST_IOTLB_UPDATE/VHOST_IOTLB_INVALIDATE
      ...
      VHOST_IOTLB_BATCH_END

  Vhost-vDPA can then know that all mappings have been set and can do
  optimizations like passing all the mappings to the vDPA device driver.

  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20200907104903.31551-4-jasowang@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
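In terms of the vhost IOTLB API, a batched update is a sequence of struct
vhost_msg_v2 writes to the vhost-vdpa fd, and is only valid once the
VHOST_BACKEND_F_IOTLB_BATCH backend feature has been negotiated. A hedged
sketch of one mapping wrapped in the batch hints, using the uapi definitions
as I understand them:

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <linux/vhost.h>   /* VHOST_IOTLB_MSG_V2, struct vhost_msg_v2 */

    /* Sketch: send one IOTLB message of the given type to the vhost-vdpa fd. */
    static int send_iotlb(int fd, uint8_t type, uint64_t iova, uint64_t size,
                          uint64_t uaddr, uint8_t perm)
    {
        struct vhost_msg_v2 msg;

        memset(&msg, 0, sizeof(msg));
        msg.type = VHOST_IOTLB_MSG_V2;
        msg.iotlb.type = type;
        msg.iotlb.iova = iova;
        msg.iotlb.size = size;
        msg.iotlb.uaddr = uaddr;
        msg.iotlb.perm = perm;
        return write(fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
    }

    /* A batched mapping then looks like BEGIN, one or more UPDATEs, END. */
    static void map_one_batched(int fd, uint64_t iova, uint64_t size, void *vaddr)
    {
        send_iotlb(fd, VHOST_IOTLB_BATCH_BEGIN, 0, 0, 0, 0);
        send_iotlb(fd, VHOST_IOTLB_UPDATE, iova, size, (uintptr_t)vaddr,
                   VHOST_ACCESS_RW);
        send_iotlb(fd, VHOST_IOTLB_BATCH_END, 0, 0, 0, 0);
    }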
* vhost-vdpa: introduce vhost-vdpa backend (Cindy Lu, 2020-07-07, 1 file, -0/+26)

  Currently we have 2 types of vhost backends in QEMU: vhost kernel and
  vhost-user. The kernel vDPA framework provides a generic device for
  vDPA purposes; this vDPA device exposes to user space a
  non-vendor-specific configuration interface for setting up a vhost HW
  accelerator. This patch set introduces a third vhost backend called
  vhost-vdpa, based on the vDPA interface.

  Vhost-vdpa usage:

      qemu-system-x86_64 -cpu host -enable-kvm \
          ...... \
          -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-id,id=vhost-vdpa0 \
          -device virtio-net-pci,netdev=vhost-vdpa0,page-per-vq=on \

  Signed-off-by: Lingshan zhu <lingshan.zhu@intel.com>
  Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
  Signed-off-by: Cindy Lu <lulu@redhat.com>
  Signed-off-by: Jason Wang <jasowang@redhat.com>
  Message-Id: <20200701145538.22333-14-lulu@redhat.com>
  Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
  Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  Acked-by: Jason Wang <jasowang@redhat.com>