path: root/hw/intc/omap_intc.c
Date | Commit message | Author | Files | Lines
2020-11-02 | configure: Test that gio libs from pkg-config work | Peter Maydell | 1 | -1/+9
On some hosts (eg Ubuntu Bionic) pkg-config returns a set of libraries for gio-2.0 which don't actually work when compiling statically. (Specifically, the returned library string includes -lmount, but not -lblkid which -lmount depends upon, so linking fails due to missing symbols.) Check that the libraries work, and don't enable gio if they don't, in the same way we do for gnutls. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20200928160402.7961-1-peter.maydell@linaro.org
2020-11-02 | target/arm: Get correct MMU index for other-security-state | Peter Maydell | 1 | -1/+2
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el(). This is incorrect when the security state being queried is not the current one, because arm_current_el() uses the current security state to determine which of the banked CONTROL.nPRIV bits to look at. The effect was that if (for instance) Secure state was in privileged mode but Non-Secure was not then we would return the wrong MMU index. The only places where we are using this function in a way that could trigger this bug are for the stack loads during a v8M function-return and for the instruction fetch of a v8M SG insn. Fix the bug by expanding out the M-profile version of the arm_current_el() logic inline so it can use the passed in secstate rather than env->v7m.secure. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
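A minimal sketch of the shape of this fix, assuming QEMU's v7M helpers and CONTROL field macros (reproduced from memory, not the literal patch):

    ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
    {
        /* Derive 'priv' from the *queried* security state rather than
         * calling arm_current_el(), which looks at the current security
         * state's banked CONTROL.nPRIV bit. */
        bool priv = arm_v7m_is_handler_mode(env) ||
            !(env->v7m.control[secstate] & R_V7M_CONTROL_NPRIV_MASK);

        return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
    }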
2020-11-02 | hw/display/exynos4210_fimd: Fix potential NULL pointer dereference | AlexChen | 1 | -1/+3
In exynos4210_fimd_update(), the pointer s is dereferenced before being checked for validity, which may lead to a NULL pointer dereference. So move the assignment to global_width to after the check that s is valid. Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: Alex Chen <alex.chen@huawei.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 5F9F8D88.9030102@huawei.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
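The shape of the fix, as a self-contained illustrative sketch (the struct and names here are hypothetical, not the exynos4210_fimd code):

    #include <stddef.h>

    struct fb_state {
        int enabled;
        int left, right;
    };

    static int update_width(const struct fb_state *s)
    {
        /* Validate the pointer first ... */
        if (s == NULL || !s->enabled) {
            return 0;
        }
        /* ... and only then compute values that dereference it. */
        return s->right - s->left + 1;
    }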
2020-11-02 | hw/display/omap_lcdc: Fix potential NULL pointer dereference | AlexChen | 1 | -3/+7
In omap_lcd_interrupts(), the pointer omap_lcd is dereferenced before being checked for validity, which may lead to a NULL pointer dereference. So move the assignment to surface to after the check that omap_lcd is valid, and move the surface_bits_per_pixel(surface) call to after the surface assignment. Reported-by: Euler Robot <euler.robot@huawei.com> Signed-off-by: AlexChen <alex.chen@huawei.com> Message-id: 5F9CDB8A.9000001@huawei.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | hw/arm/boot: fix SVE for EL3 direct kernel boot | Rémi Denis-Courmont | 1 | -0/+3
When booting a CPU with EL3 using the -kernel flag, set up CPTR_EL3 so that SVE will not trap to EL3. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030151541.11976-1-remi@remlab.net Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | hw/arm/smmuv3: Fix potential integer overflow (CID 1432363) | Philippe Mathieu-Daudé | 1 | -1/+2
Use the BIT_ULL() macro to ensure we use 64-bit arithmetic. This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN): CID 1432363 (#1 of 1): Unintentional integer overflow: overflow_before_widen: Potentially overflowing expression 1 << scale with type int (32 bits, signed) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type hwaddr (64 bits, unsigned). Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Eric Auger <eric.auger@redhat.com> Message-id: 20201030144617.1535064-1-philmd@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
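A reduced example of the overflow class being fixed, assuming BIT_ULL(n) is the usual '1ULL << (n)' helper (illustrative, not the smmuv3 code):

    #include <stdint.h>

    typedef uint64_t hwaddr;

    /* Buggy: '1 << scale' is computed in 32-bit int arithmetic, so it
     * overflows before being widened to the 64-bit hwaddr type. */
    static hwaddr mask32(unsigned scale)
    {
        return (1 << scale) - 1;
    }

    /* Fixed: force 64-bit arithmetic for the shift, i.e. BIT_ULL(scale). */
    static hwaddr mask64(unsigned scale)
    {
        return (1ULL << scale) - 1;
    }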
2020-11-02 | disas/capstone: Fix monitor disassembly of >32 bytes | Peter Maydell | 1 | -1/+1
If we're using the capstone disassembler, disassembly of a run of instructions more than 32 bytes long disassembles the wrong data for instructions beyond the 32 byte mark:

    (qemu) xp /16x 0x100
    0000000000000100: 0x00000005 0x54410001 0x00000001 0x00001000
    0000000000000110: 0x00000000 0x00000004 0x54410002 0x3c000000
    0000000000000120: 0x00000000 0x00000004 0x54410009 0x74736574
    0000000000000130: 0x00000000 0x00000000 0x00000000 0x00000000
    (qemu) xp /16i 0x100
    0x00000100:  00000005  andeq   r0, r0, r5
    0x00000104:  54410001  strbpl  r0, [r1], #-1
    0x00000108:  00000001  andeq   r0, r0, r1
    0x0000010c:  00001000  andeq   r1, r0, r0
    0x00000110:  00000000  andeq   r0, r0, r0
    0x00000114:  00000004  andeq   r0, r0, r4
    0x00000118:  54410002  strbpl  r0, [r1], #-2
    0x0000011c:  3c000000  .byte   0x00, 0x00, 0x00, 0x3c
    0x00000120:  54410001  strbpl  r0, [r1], #-1
    0x00000124:  00000001  andeq   r0, r0, r1
    0x00000128:  00001000  andeq   r1, r0, r0
    0x0000012c:  00000000  andeq   r0, r0, r0
    0x00000130:  00000004  andeq   r0, r0, r4
    0x00000134:  54410002  strbpl  r0, [r1], #-2
    0x00000138:  3c000000  .byte   0x00, 0x00, 0x00, 0x3c
    0x0000013c:  00000000  andeq   r0, r0, r0

Here the disassembly of 0x120..0x13f is using the data that is in 0x104..0x123. This is caused by passing the wrong value to the read_memory_func(). The intention is that at this point in the loop the 'cap_buf' buffer already contains 'csize' bytes of data for the instruction at guest addr 'pc', and we want to read in an extra 'tsize' bytes. Those extra bytes are therefore at 'pc + csize', not 'pc'. On the first time through the loop 'csize' happens to be zero, so the initial read of 32 bytes into cap_buf is correct and as long as the disassembly never needs to read more data we return the correct information. Use the correct guest address in the call to read_memory_func(). Cc: qemu-stable@nongnu.org Fixes: https://bugs.launchpad.net/qemu/+bug/1900779 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201022132445.25039-1-peter.maydell@linaro.org
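A simplified model of the buffer-top-up step described above (types and names are stand-ins, not the actual disas/capstone.c code):

    #include <stdint.h>
    #include <stddef.h>

    typedef void (*read_memory_fn)(uint64_t addr, uint8_t *dst, size_t len);

    /* cap_buf already holds 'csize' bytes of guest code starting at 'pc';
     * the extra 'tsize' bytes therefore live at pc + csize, not at pc. */
    static size_t top_up(read_memory_fn read_memory, uint64_t pc,
                         uint8_t *cap_buf, size_t bufsize,
                         size_t csize, size_t tsize)
    {
        if (tsize > bufsize - csize) {
            tsize = bufsize - csize;
        }
        read_memory(pc + csize, cap_buf + csize, tsize);
        return csize + tsize;
    }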
2020-11-02 | target/arm: fix LORID_EL1 access check | Rémi Denis-Courmont | 1 | -14/+5
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the future HCR_EL2.TLOR when S-EL2 is enabled. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: fix handling of HCR.FB | Rémi Denis-Courmont | 1 | -3/+2
HCR should be applied when NS is set, not when it is cleared. Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Fix VUDOT/VSDOT (scalar) on big-endian hosts | Peter Maydell | 1 | -2/+2
The helper functions for performing the udot/sdot operations against a scalar were not using an address-swizzling macro when converting the index of the scalar element into a pointer into the vm array. This had no effect on little-endian hosts but meant we generated incorrect results on big-endian hosts. For these insns, the index is indexing over groups of four 8-bit values, so 32 bits per indexed entity, and H4() is therefore what we want. (For Neon the only possible input indexes are 0 and 1.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
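An illustrative sketch of the indexing issue, with an assumed H4() definition along the lines of QEMU's host-endian swizzle macros (identity on little-endian hosts, an index XOR on big-endian hosts):

    #include <stdint.h>

    #ifdef HOST_WORDS_BIGENDIAN
    #define H4(x)  ((x) ^ 1)   /* assumed definition, for illustration */
    #else
    #define H4(x)  (x)
    #endif

    /* The scalar operand indexes a group of four 8-bit lanes, i.e. one
     * 32-bit entity, so the index must be swizzled with H4() when it is
     * turned into a pointer into the vm operand. */
    static uint32_t load_dot_scalar(const void *vm, int index)
    {
        const uint32_t *m = vm;
        return m[H4(index)];   /* plain m[index] broke big-endian hosts */
    }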
2020-11-02 | target/arm: Fix float16 pairwise Neon ops on big-endian hosts | Peter Maydell | 1 | -4/+4
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error meant we were using the H4() address swizzler macro rather than the H2() which is required for 2-byte data. This had no effect on little-endian hosts but meant we put the result data into the destination Dreg in the wrong order on big-endian hosts. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
2020-11-02 | target/arm: Improve do_prewiden_3d | Richard Henderson | 2 | -31/+45
We can use proper widening loads to extend 32-bit inputs, and skip the "widenfn" step. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-12-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Simplify do_long_3d and do_2scalar_long | Richard Henderson | 1 | -14/+9
In both cases, we can sink the write-back and perform the accumulate into the normal destination temps. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-11-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Rename neon_load_reg64 to vfp_load_reg64 | Richard Henderson | 2 | -46/+46
The only uses of this function are for loading VFP double-precision values, and nothing to do with NEON. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-10-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Add read/write_neon_element64 | Richard Henderson | 2 | -47/+73
Replace all uses of neon_load/store_reg64 within translate-neon.c.inc. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-9-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Rename neon_load_reg32 to vfp_load_reg32 | Richard Henderson | 2 | -94/+94
The only uses of this function are for loading VFP single-precision values, and nothing to do with NEON. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-8-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Expand read/write_neon_element32 to all MemOp | Richard Henderson | 2 | -84/+37
We can then use this to improve VMOV (scalar to gp) and VMOV (gp to scalar) so that we simply perform the memory operation that we wanted, rather than inserting or extracting from a 32-bit quantity. These were the last uses of neon_load/store_reg, so remove them. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-7-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Add read/write_neon_element32 | Richard Henderson | 2 | -99/+183
Model these off the aa64 read/write_vec_element functions. Use them within translate-neon.c.inc. The new functions do not allocate or free temps, so this rearranges the calling code a bit. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Use neon_element_offset in vfp_reg_offset | Richard Henderson | 1 | -9/+4
This seems a bit more readable than using offsetof CPU_DoubleU. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-5-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Use neon_element_offset in neon_load/store_reg | Richard Henderson | 1 | -12/+2
These are the only users of neon_reg_offset, so remove that. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-4-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Move neon_element_offset to translate.c | Richard Henderson | 2 | -19/+20
This will shortly have users outside of translate-neon.c.inc. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-3-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2020-11-02 | target/arm: Introduce neon_full_reg_offset | Richard Henderson | 3 | -23/+31
This function makes it clear that we're talking about the whole register, and not the 32-bit piece at index 0. This fixes a bug when running on a big-endian host. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-2-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
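A self-contained toy model of what the new helper expresses (the real code computes an offset into QEMU's CPUARMState; the layout and names here are illustrative only):

    #include <stddef.h>
    #include <stdint.h>

    /* Each 128-bit Q register is stored as two 64-bit halves, and
     * D<2n>/D<2n+1> alias the halves of Q<n>. */
    typedef struct {
        uint64_t d[2];
    } QReg;

    typedef struct {
        QReg zregs[16];
    } VFPState;

    /* Offset of the whole 64-bit D register 'reg', as opposed to the
     * 32-bit piece at element index 0 that some call sites used before. */
    static long neon_full_reg_offset(unsigned reg)
    {
        return (long)offsetof(VFPState, zregs[reg >> 1].d[reg & 1]);
    }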
2020-11-01 | vfio: fix incorrect print type | Zhengui li | 1 | -2/+2
The type of the input variable is unsigned int, while the print format type is int. So fix the incorrect print type. Signed-off-by: Zhengui li <lizhengui@huawei.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | hw/vfio: Use lock guard macros | Amey Narkhede | 1 | -5/+2
Use qemu LOCK_GUARD macros in hw/vfio. Saves manual unlock calls Signed-off-by: Amey Narkhede <ameynarkhede03@gmail.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
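A before/after fragment showing the idea, assuming QEMU's QEMU_LOCK_GUARD() from "qemu/lockable.h"; the surrounding names (vdev, ready, do_work) are hypothetical:

    /* Before: every early-return path needs its own unlock call. */
    qemu_mutex_lock(&vdev->lock);
    if (!vdev->ready) {
        qemu_mutex_unlock(&vdev->lock);
        return;
    }
    do_work(vdev);
    qemu_mutex_unlock(&vdev->lock);

    /* After: the guard releases the mutex automatically whenever the
     * enclosing scope is left, on every path. */
    QEMU_LOCK_GUARD(&vdev->lock);
    if (!vdev->ready) {
        return;
    }
    do_work(vdev);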
2020-11-01 | s390x/pci: get zPCI function info from host | Matthew Rosato | 6 | -7/+202
We use the capability chains of the VFIO_DEVICE_GET_INFO ioctl to retrieve the CLP information that the kernel exports. For compatibility with previous kernel versions, we fall back on the previously predefined values (the same as the emulation values) when the ioctl is found not to support capability chains. If individual CLP capabilities are not found, we fall back on default values for only those capabilities missing from the chain. This patch is based on work previously done by Pierre Morel. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> [aw: non-Linux build fixes] Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Add routine for finding VFIO_DEVICE_GET_INFO capabilities | Matthew Rosato | 2 | -0/+12
Now that VFIO_DEVICE_GET_INFO supports capability chains, add a helper function to find specific capabilities in the chain. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
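A sketch of what such a capability-chain lookup looks like; struct vfio_info_cap_header comes from <linux/vfio.h>, while the helper's exact name and signature here are assumptions:

    #include <linux/vfio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* 'next' is a byte offset from the start of the info structure;
     * a zero offset terminates the chain. */
    static struct vfio_info_cap_header *
    find_info_cap(void *info, size_t cap_offset, uint16_t id)
    {
        size_t off = cap_offset;

        while (off) {
            struct vfio_info_cap_header *hdr =
                (struct vfio_info_cap_header *)((char *)info + off);

            if (hdr->id == id) {
                return hdr;
            }
            off = hdr->next;
        }
        return NULL;
    }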
2020-11-01 | s390x/pci: use a PCI Function structure | Pierre Morel | 3 | -6/+15
We use a ClpRspQueryPci structure to hold the information related to a zPCI Function. This allows us to be ready to support different zPCI functions and to retrieve the zPCI function information from the host. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | s390x/pci: clean up s390 PCI groups | Matthew Rosato | 1 | -0/+12
Add a step to remove all stashed PCI groups to avoid stale data between machine resets. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | s390x/pci: use a PCI Group structure | Pierre Morel | 3 | -9/+66
We use a S390PCIGroup structure to hold the information related to a zPCI Function group. This allows us to be ready to support multiple groups and to retrieve the group information from the host. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | s390x/pci: create a header dedicated to PCI CLP | Pierre Morel | 3 | -196/+212
To have a clean separation between s390-pci-bus.h and s390-pci-inst.h headers we export the PCI CLP instructions in a dedicated header. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | s390x/pci: Honor DMA limits set by vfio | Matthew Rosato | 6 | -11/+116
When an s390 guest is using lazy unmapping, it can result in a very large number of outstanding DMA requests, far beyond the default limit configured for vfio. Let's track DMA usage similarly to vfio in the host, and trigger the guest to flush its DMA mappings before vfio runs out. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> [aw: non-Linux build fixes] Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | s390x/pci: Add routine to get the vfio dma available count | Matthew Rosato | 3 | -0/+79
Create new files for separating out vfio-specific work for s390 pci. Add the first such routine, which issues VFIO_IOMMU_GET_INFO ioctl to collect the current dma available count. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> [aw: Fix non-Linux build with CONFIG_LINUX] Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Find DMA available capability | Matthew Rosato | 2 | -0/+33
The underlying host may be limiting the number of outstanding DMA requests for type 1 IOMMU. Add helper functions to check for the DMA available capability and retrieve the current number of DMA mappings allowed. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> [aw: vfio_get_info_dma_avail moved inside CONFIG_LINUX] Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
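A sketch of consuming the new capability, assuming the 5.10 UAPI in <linux/vfio.h> (VFIO_IOMMU_INFO_CAPS, cap_offset, VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL) and the hypothetical find_info_cap() helper sketched earlier:

    #include <linux/vfio.h>
    #include <stdbool.h>

    static bool get_dma_avail(struct vfio_iommu_type1_info *info,
                              unsigned int *avail)
    {
        struct vfio_info_cap_header *hdr;
        struct vfio_iommu_type1_info_dma_avail *cap;

        if (!(info->flags & VFIO_IOMMU_INFO_CAPS)) {
            return false;      /* kernel too old: no capability chain */
        }
        hdr = find_info_cap(info, info->cap_offset,
                            VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL);
        if (!hdr) {
            return false;      /* type1 driver doesn't report the count */
        }
        cap = (struct vfio_iommu_type1_info_dma_avail *)hdr;
        *avail = cap->avail;
        return true;
    }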
2020-11-01 | vfio: Create shared routine for scanning info capabilities | Matthew Rosato | 1 | -8/+13
Rather than duplicating the same loop in multiple locations, create a static function to do the work. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | s390x/pci: Move header files to include/hw/s390x | Matthew Rosato | 6 | -5/+6
Seems a more appropriate location for them. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | linux-headers: update against 5.10-rc1 | Matthew Rosato | 28 | -16/+294
commit 3650b228f83adda7e5ee532e2b90429c03f7b9ec Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> [aw: drop pvrdma_ring.h changes to avoid revert of d73415a31547 ("qemu/atomic.h: rename atomic_ to qatomic_")] Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | update-linux-headers: Add vfio_zdev.h | Matthew Rosato | 1 | -1/+1
vfio_zdev.h is used by s390x zPCI support to pass device-specific CLP information between host and userspace. Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com> Acked-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | qapi: Add VFIO devices migration stats in Migration stats | Kirti Wankhede | 6 | -0/+71
Added amount of bytes transferred to the VM at destination by all VFIO devices Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Make vfio-pci device migration capable | Kirti Wankhede | 2 | -21/+8
If the device is not a failover primary device, call vfio_migration_probe() to enable migration support for those devices that support it, and vfio_migration_finalize() to tear it down again. Remove the migration blocker from the VFIO PCI device-specific structure and use the migration blocker from the generic VFIO device structure. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Add ioctl to get dirty pages bitmap during dma unmap | Kirti Wankhede | 1 | -4/+93
With vIOMMU, an IO virtual address range can get unmapped while in the pre-copy phase of migration. In that case, the unmap ioctl should return the pages pinned in that range, and QEMU should find the corresponding guest physical addresses and report those dirty. Suggested-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> [aw: fix error_report types, fix cpu_physical_memory_set_dirty_lebitmap() cast] Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Dirty page tracking when vIOMMU is enabled | Kirti Wankhede | 2 | -6/+83
When vIOMMU is enabled, register MAP notifier from log_sync when all devices in container are in stop and copy phase of migration. Call replay and get dirty pages from notifier callback. Suggested-by: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Add vfio_listener_log_sync to mark dirty pages | Kirti Wankhede | 2 | -0/+117
vfio_listener_log_sync gets the list of dirty pages from the container using the VFIO_IOMMU_GET_DIRTY_BITMAP ioctl and marks those pages dirty when all devices are stopped and saving state. Return early for the RAM block section of a mapped MMIO region. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> [aw: fix error_report types, fix cpu_physical_memory_set_dirty_lebitmap() cast] Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Add function to start and stop dirty pages tracking | Kirti Wankhede | 1 | -0/+36
Call VFIO_IOMMU_DIRTY_PAGES ioctl to start and stop dirty pages tracking for VFIO devices. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
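A minimal sketch of driving this ioctl on the container fd, using the 5.10 UAPI in <linux/vfio.h>; error handling is trimmed and the function name is illustrative:

    #include <linux/vfio.h>
    #include <sys/ioctl.h>
    #include <stdbool.h>

    static int set_dirty_page_tracking(int container_fd, bool start)
    {
        struct vfio_iommu_type1_dirty_bitmap dirty = {
            .argsz = sizeof(dirty),
            .flags = start ? VFIO_IOMMU_DIRTY_PAGES_FLAG_START
                           : VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP,
        };

        return ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, &dirty);
    }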
2020-11-01 | vfio: Get migration capability flags for container | Kirti Wankhede | 3 | -9/+91
Added helper functions to get IOMMU info capability chain. Added function to get migration capability information from that capability chain for IOMMU container. Similar change was proposed earlier: https://lists.gnu.org/archive/html/qemu-devel/2018-05/msg03759.html Disable migration for devices if IOMMU module doesn't support migration capability. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Cc: Eric Auger <eric.auger@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | memory: Set DIRTY_MEMORY_MIGRATION when IOMMU is enabled | Kirti Wankhede | 1 | -1/+1
mr->ram_block is NULL when mr->is_iommu is true, so fr.dirty_log_mask wasn't set correctly and the memory listener's log_sync didn't get called. This patch returns a log_mask with DIRTY_MEMORY_MIGRATION set when the IOMMU is enabled. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Yan Zhao <yan.y.zhao@intel.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Add load state functions to SaveVMHandlers | Kirti Wankhede | 2 | -0/+199
Sequence during the _RESUMING device state: while data for this device is available, repeat the steps below:
a. read data_offset - this is where the user application should write the data.
b. write data of data_size to the migration region starting at data_offset.
c. write data_size, which indicates to the vendor driver that the data has been written to the staging buffer.

For the user, the data is opaque. The user should write data in the same order as it was received. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
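A sketch of one resume chunk following steps a-c above, assuming struct vfio_device_migration_info from the 5.10 <linux/vfio.h> sits at offset 'region_off' of the device fd and that data_offset is relative to the start of the migration region; error handling is omitted:

    #include <linux/vfio.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <unistd.h>

    static void load_chunk(int dev_fd, off_t region_off,
                           const void *buf, uint64_t len)
    {
        uint64_t data_offset;

        /* a. where in the region should this chunk be written? */
        pread(dev_fd, &data_offset, sizeof(data_offset),
              region_off + offsetof(struct vfio_device_migration_info,
                                    data_offset));
        /* b. stage the opaque data at that offset */
        pwrite(dev_fd, buf, len, region_off + data_offset);
        /* c. tell the vendor driver how much data was staged */
        pwrite(dev_fd, &len, sizeof(len),
               region_off + offsetof(struct vfio_device_migration_info,
                                     data_size));
    }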
2020-11-01 | vfio: Add save state functions to SaveVMHandlers | Kirti Wankhede | 3 | -0/+283
Added .save_live_pending, .save_live_iterate and .save_live_complete_precopy functions. These functions handle the pre-copy and stop-and-copy phases.

In the _SAVING|_RUNNING device state or pre-copy phase:
- read pending_bytes. If pending_bytes > 0, go through the steps below.
- read data_offset - this tells the kernel driver to write data to the staging buffer.
- read data_size - the amount of data in bytes written by the vendor driver in the migration region.
- read data_size bytes of data from data_offset in the migration region.
- Write the data packet to the file stream as below: {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data, VFIO_MIG_FLAG_END_OF_STATE}

In the _SAVING device state or stop-and-copy phase:
a. read the config space of the device and save it to the migration file stream. This doesn't need to come from the vendor driver; any other special config state from the driver can be saved as data in a following iteration.
b. read pending_bytes. If pending_bytes > 0, go through the steps below.
c. read data_offset - this tells the kernel driver to write data to the staging buffer.
d. read data_size - the amount of data in bytes written by the vendor driver in the migration region.
e. read data_size bytes of data from data_offset in the migration region.
f. Write the data packet as below: {VFIO_MIG_FLAG_DEV_DATA_STATE, data_size, actual data}
g. iterate through steps b to f while (pending_bytes > 0)
h. Write {VFIO_MIG_FLAG_END_OF_STATE}

When the data region is mapped, it is the user's responsibility to read the data from data_offset of data_size before moving to the next steps. Added a fix suggested by Artem Polyakov to reset pending_bytes in vfio_save_iterate(). Added a fix suggested by Zhi Wang to add 0 as the data size in the migration stream and add an END_OF_STATE delimiter to indicate phase complete. Suggested-by: Artem Polyakov <artemp@nvidia.com> Suggested-by: Zhi Wang <zhi.wang.linux@gmail.com> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> Reviewed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
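A sketch of the per-iteration framing in the pre-copy path, using QEMUFile helpers; VFIO_MIG_FLAG_DEV_DATA_STATE / VFIO_MIG_FLAG_END_OF_STATE are the delimiters introduced by this series (values not shown) and read_region_chunk() is a hypothetical stand-in for the data_offset/data_size reads above:

    static int save_one_iteration(QEMUFile *f, VFIODevice *vbasedev)
    {
        uint64_t data_size = 0;
        uint8_t *buf = read_region_chunk(vbasedev, &data_size);

        qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
        qemu_put_be64(f, data_size);
        if (data_size) {
            qemu_put_buffer(f, buf, data_size);
        }
        qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);
        return qemu_file_get_error(f);
    }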
2020-11-01 | vfio: Register SaveVMHandlers for VFIO device | Kirti Wankhede | 2 | -0/+104
Define flags to be used as delimiter in migration stream for VFIO devices. Added .save_setup and .save_cleanup functions. Map & unmap migration region from these functions at source during saving or pre-copy phase. Set VFIO device state depending on VM's state. During live migration, VM is running when .save_setup is called, _SAVING | _RUNNING state is set for VFIO device. During save-restore, VM is paused, _SAVING state is set for VFIO device. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Add migration state change notifier | Kirti Wankhede | 3 | -0/+31
Added migration state change notifier to get notification on migration state change. These states are translated to VFIO device state and conveyed to vendor driver. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
2020-11-01 | vfio: Add VM state change handler to know state of VM | Kirti Wankhede | 3 | -0/+166
VM state change handler is called on change in VM's state. Based on VM state, VFIO device state should be changed. Added read/write helper functions for migration region. Added function to set device_state. Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by: Neo Jia <cjia@nvidia.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> [aw: lx -> HWADDR_PRIx, remove redundant parens] Signed-off-by: Alex Williamson <alex.williamson@redhat.com>