Two questions about VFIO device live migration

Description of problem:

For my own PCIe device, I have implemented system memory and device memory dirty bitmap tracking, and pre-copy live migration works well.

First question:
For the system memory dirty bitmap sync, I noticed that the last sync comes earlier than I expected. Reading the QEMU code, I found that QEMU calls every savevm_state.handlers->save_live_complete_precopy callback in "qemu_savevm_state_complete_precopy_iterable", and the "vfio" handler always runs after "ram". So here is the question: my VFIO device is only halted once the "vfio" handler enters save_live_complete_precopy, but the last system memory dirty bitmap sync happens in "ram"'s save_live_complete_precopy, so any system memory dirtied between those two points is missed. Should we add one more system memory dirty bitmap sync after "vfio"'s save_live_complete_precopy? (See the sketch under "Additional information" below.)

Second question:
I noticed that QEMU cleans up the migration by calling every savevm_state.handlers->save_cleanup callback, and on this path QEMU only calls the VFIO listener's log_global_stop callback when the VM is running (vm_is_running). But for my VFIO device the runstate is already paused (postmigrate) by the time it gets here, so QEMU never releases the resources created by my device's kernel-mode driver. Where should I put the "stop migration resources" logic?

Thanks ^_^

Steps to reproduce:
1.
2.
3.

Additional information:
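To make the ordering in the first question concrete, here is a self-contained toy model of the completion loop. It only mirrors the QEMU concepts involved (the "ram" and "vfio" save_live_complete_precopy handlers and their registration order); it is not the real QEMU code, and the "extra sync" placement at the end is the hypothetical change being asked about, not an existing hook.

```c
/*
 * Toy model of the save_live_complete_precopy ordering, not QEMU code.
 * It shows why RAM dirtied between "ram"'s final sync and "vfio"'s
 * device stop is never picked up, and where an extra sync would go.
 */
#include <stdio.h>
#include <stdbool.h>

struct handler {
    const char *name;
    void (*save_live_complete_precopy)(void);
};

static bool device_stopped;

static void ram_complete(void)
{
    /* Final system memory dirty bitmap sync happens here, while the
     * passthrough device may still be doing DMA. */
    printf("[ram ] final dirty bitmap sync, device_stopped=%d\n",
           device_stopped);
}

static void vfio_complete(void)
{
    /* Only now is the passthrough device actually halted. */
    device_stopped = true;
    printf("[vfio] device stopped, device state saved\n");
}

int main(void)
{
    /* Registration order: "ram" is always completed before "vfio". */
    struct handler handlers[] = {
        { "ram",  ram_complete  },
        { "vfio", vfio_complete },
    };

    for (size_t i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++) {
        printf("completing handler \"%s\"\n", handlers[i].name);
        handlers[i].save_live_complete_precopy();
    }

    /*
     * Question 1: should one more system memory dirty bitmap sync (plus a
     * final pass over the newly dirtied pages) happen here, after the vfio
     * handler has stopped the device?  Hypothetical placement only.
     */
    printf("[??? ] extra dirty bitmap sync would go here\n");
    return 0;
}
```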