path: root/migration/migration.c
* migration: cpr-exec mode
  Steve Sistare, 2025-10-03 (1 file, -1/+9)

  Add the cpr-exec migration mode.  Usage:

      qemu-system-$arch -machine aux-ram-share=on ...

      migrate_set_parameter mode cpr-exec
      migrate_set_parameter cpr-exec-command \
          <arg1> <arg2> ... -incoming <uri-1> \
      migrate -d <uri-1>

  The migrate command stops the VM, saves state to uri-1, directly exec's a
  new version of QEMU on the same host, replacing the original process while
  retaining its PID, and loads state from uri-1.  Guest RAM is preserved in
  place, albeit with new virtual addresses.

  The new QEMU process is started by exec'ing the command specified by the
  @cpr-exec-command parameter.  The first word of the command is the binary,
  and the remaining words are its arguments.  The command may be a direct
  invocation of new QEMU, or may be a non-QEMU command that exec's the new
  QEMU binary.

  This mode creates a second migration channel that is not visible to the
  user.  At the start of migration, old QEMU saves CPR state to the second
  channel, and at the end of migration, it tells the main loop to call
  cpr_exec.  New QEMU loads CPR state early, before objects are created.

  Because old QEMU terminates when new QEMU starts, one cannot stream data
  between the two, so uri-1 must be a type, such as a file, that accepts all
  data before old QEMU exits.  Otherwise, old QEMU may quietly block writing
  to the channel.

  Memory-backend objects must have the share=on attribute, but
  memory-backend-epc is not supported.  The VM must be started with the
  '-machine aux-ram-share=on' option, which allows anonymous memory to be
  transferred in place to the new process.  The memfds are kept open across
  exec by clearing the close-on-exec flag, their values are saved in CPR
  state, and they are mmap'd in new QEMU.

  Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
  Acked-by: Markus Armbruster <armbru@redhat.com>
  Link: https://lore.kernel.org/r/1759332851-370353-7-git-send-email-steven.sistare@oracle.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
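  The last paragraph's trick of keeping the memfds alive across exec() boils
  down to clearing the close-on-exec flag; a minimal POSIX-level sketch of the
  idea (illustrative only, not the actual QEMU helper):

      #include <fcntl.h>

      /* Keep a descriptor open across exec() by clearing FD_CLOEXEC. */
      static int keep_fd_across_exec(int fd)
      {
          int flags = fcntl(fd, F_GETFD);

          if (flags < 0) {
              return -1;
          }
          return fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);
      }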
* migration: multi-mode notifier
  Steve Sistare, 2025-10-03 (1 file, -13/+47)

  Allow a notifier to be added for multiple migration modes.  To allow a
  notifier to appear on multiple per-mode lists, use a generic list type.  We
  can no longer use NotifierWithReturnList, because it shoehorns the notifier
  onto a single list.

  Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Link: https://lore.kernel.org/r/1759332851-370353-2-git-send-email-steven.sistare@oracle.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
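  A rough, self-contained sketch of that "generic list type" idea, letting one
  notifier own one link node per migration mode so it can sit on several lists
  at once (types and names here are illustrative, not QEMU's actual API):

      #include <sys/queue.h>

      typedef struct MigModeNotifier MigModeNotifier;

      /* One link per mode, so the same notifier can appear on several lists. */
      typedef struct MigModeNotifierLink {
          MigModeNotifier *notifier;
          LIST_ENTRY(MigModeNotifierLink) node;
      } MigModeNotifierLink;

      typedef LIST_HEAD(, MigModeNotifierLink) MigModeNotifierList;

      struct MigModeNotifier {
          int (*notify)(MigModeNotifier *n, void *data);
      };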
* migration: Fix state transition in postcopy_start() error handling
  Juraj Marcin, 2025-10-03 (1 file, -2/+3)

  Commit 48814111366b ("migration: Always set DEVICE state") introduced
  DEVICE state to postcopy, which moved the actual state transition that
  leads to POSTCOPY_ACTIVE.  However, the error handling part of the
  postcopy_start() function still expects the state POSTCOPY_ACTIVE, but
  depending on where an error happens, now the state can be either ACTIVE,
  DEVICE or CANCELLING, but never POSTCOPY_ACTIVE, as this transition now
  happens just before a successful return from the function.

  Instead, accept any state except CANCELLING when transitioning to FAILED
  state.

  Cc: qemu-stable@nongnu.org
  Fixes: 48814111366b ("migration: Always set DEVICE state")
  Signed-off-by: Juraj Marcin <jmarcin@redhat.com>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Link: https://lore.kernel.org/r/20250826115145.871272-1-jmarcin@redhat.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
* migration: Make migration_has_failed() work even for CANCELLING
  Peter Xu, 2025-10-03 (1 file, -1/+2)

  No issue I hit; the change is only from code observation while looking at a
  TLS premature termination issue.

  We set CANCELLED very late, which means migration_has_failed() may not work
  correctly if it's invoked before CANCELLING is updated to CANCELLED.
  Allowing that state makes migration_has_failed() work as expected even if
  it's invoked slightly earlier.

  One current user is the multifd code for the TLS graceful termination,
  where it is invoked before the update to CANCELLED.

  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Link: https://lore.kernel.org/r/20250918203937.200833-3-peterx@redhat.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
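  Conceptually the change just treats CANCELLING as an already-failed state; a
  hedged sketch of what the predicate amounts to (the real function may differ
  in detail):

      bool migration_has_failed(MigrationState *s)
      {
          return s->state == MIGRATION_STATUS_FAILED ||
                 s->state == MIGRATION_STATUS_CANCELLING ||  /* newly accepted */
                 s->state == MIGRATION_STATUS_CANCELLED;
      }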
* migration: push Error **errp into loadvm_process_enable_colo()
  Arun Menon, 2025-10-03 (1 file, -6/+6)

  This is an incremental step in converting the vmstate loading code to
  report errors via Error objects instead of directly printing them to the
  console/monitor.  It is ensured that loadvm_process_enable_colo() reports
  an error in errp in case of failure.

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Signed-off-by: Arun Menon <armenon@redhat.com>
  Tested-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
  Link: https://lore.kernel.org/r/20250918-propagate_tpm_error-v14-21-36f11a6fb9d3@redhat.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
* migration: Update qemu_file_get_return_path() docs and remove dead checks
  Arun Menon, 2025-10-03 (1 file, -10/+2)

  The documentation of qemu_file_get_return_path() states that it can return
  NULL on failure.  However, a review of the current implementation reveals
  that it is guaranteed to always succeed and will never return NULL.  As a
  result, the NULL checks after calling the function are redundant.  This
  commit updates the documentation for the function and removes all NULL
  checks throughout the migration code.

  Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Signed-off-by: Arun Menon <armenon@redhat.com>
  Tested-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
  Link: https://lore.kernel.org/r/20250918-propagate_tpm_error-v14-12-36f11a6fb9d3@redhat.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
* migration: push Error **errp into qemu_loadvm_state()
  Arun Menon, 2025-10-03 (1 file, -2/+12)

  This is an incremental step in converting the vmstate loading code to
  report errors via Error objects instead of directly printing them to the
  console/monitor.  It is ensured that qemu_loadvm_state() reports an error
  in errp in case of failure.

  When postcopy live migration runs, the device states are loaded by both the
  qemu coroutine process_incoming_migration_co() and the
  postcopy_ram_listen_thread().  Therefore, it is important that the
  coroutine also reports the error in case of failure, with
  error_report_err().  Otherwise, the source qemu will not display any errors
  before going into the postcopy pause state.

  Suggested-by: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
  Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Signed-off-by: Arun Menon <armenon@redhat.com>
  Tested-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Akihiko Odaki <odaki@rsg.ci.i.u-tokyo.ac.jp>
  Link: https://lore.kernel.org/r/20250918-propagate_tpm_error-v14-7-36f11a6fb9d3@redhat.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
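  The caller-side pattern described above would look roughly like this; the
  converted signature (int return plus Error **errp) is assumed from the
  description rather than copied from the tree:

      Error *local_err = NULL;

      if (qemu_loadvm_state(mis->from_src_file, &local_err) < 0) {
          /* Print on dest; otherwise the source only sees the pause state. */
          error_report_err(local_err);
          /* ...fail the incoming migration / enter postcopy-paused... */
      }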
* migration: qemu_file_set_blocking(): add errp parameter
  Vladimir Sementsov-Ogievskiy, 2025-09-19 (1 file, -3/+5)

  qemu_file_set_blocking() is a wrapper on qio_channel_set_blocking(), so
  let's pass through the errp.

  Note the migration code should not be using &error_abort in these calls;
  however, this is done to expedite the API conversion.  The original code
  would have eventually ended up calling either qemu_socket_set_nonblock,
  which would assert on Linux, or g_unix_set_fd_nonblocking, which would
  propagate errors.  We never saw asserts in practice, and conceptually they
  should not happen, but ideally this code will later be adapted to remove
  the use of &error_abort.

  Acked-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
  Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
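  Since the function is described as a thin wrapper, the conversion is
  essentially a passthrough of errp; a sketch under that assumption (the real
  return type and internals may differ):

      int qemu_file_set_blocking(QEMUFile *f, bool block, Error **errp)
      {
          /* qio_channel_set_blocking() already takes an Error **errp. */
          return qio_channel_set_blocking(qemu_file_get_ioc(f), block, errp);
      }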
* migration/postcopy: Push blocktime start/end into page req mutex
  Peter Xu, 2025-07-11 (1 file, -10/+15)

  The postcopy blocktime feature was tricky in that it used quite a few
  atomic operations over quite a few arrays and vars, without explaining how
  that would be thread safe.  The thread safety here is about concurrency
  between the fault thread and the fault resolution threads, which can
  possibly access the same chunk of data.  All these atomic ops can be
  expensive too before knowing clearly how it works.

  OTOH, postcopy has one page_request_mutex used to serialize the received
  bitmap updates.  So far it's ok - we don't yet have a lot of threads
  contending the lock.  It might change after multifd will be supported, but
  that's a separate story.  What is important is, with that mutex, it's
  pretty lightweight to move all the blocktime maintenance into the mutex
  critical section.  It's because the blocktime layer is lightweight: almost
  "remember which vcpu faulted on which address", and "ok we get some fault
  resolved, calculate how long it takes".  It's also an optional feature for
  now (but I have thought of changing that, maybe in the future).

  Let's push the blocktime layer into the mutex, so that it's always
  thread-safe even without any atomic ops.

  To achieve that, I'll need to add a tid parameter on the fault path so that
  it'll start to pass the faulted thread ID deeper into the stack, but not
  too deep.  When at it, add a comment for the shared fault handler (for
  example, vhost-user devices running with postcopy), to mention a TODO.  One
  reason it might not be trivial is that vhost-user's userfaultfds should be
  opened by the vhost-user process, so it's pretty hard to control making
  sure the TID feature will be around.  It wasn't supported before, so keep
  it like that for now.

  Now we should be at ease when everything is protected by a mutex that we
  always take anyway.

  One side effect: we can finally remove one ramblock_recv_bitmap_test() in
  mark_postcopy_blocktime_begin(), which was pretty weird and which also
  includes a weird (but maybe necessary.. but maybe not?) operation to inject
  a blocktime entry then quickly erase it..  When we're with the mutex, and
  when we make sure it's invoked after checking the receive bitmap, it's not
  needed anymore.  Instead, we assert.

  As another side effect, this paves the way for removing all atomic ops in
  all the mem accesses in the blocktime layer.

  Note that we need a stub for mark_postcopy_blocktime_begin() for Windows
  builds.

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Link: https://lore.kernel.org/r/20250613141217.474825-3-peterx@redhat.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Rewrite the migration complete detect logic
  Peter Xu, 2025-07-11 (1 file, -15/+42)

  There are a few things off in that logic; rewrite it.  While at it, add
  rich comments to explain each of the decisions.  Since this is a very
  sensitive path for migration, below is the list of things changed, with
  their reasoning.

  (1) Exact pending size is only needed for precopy, not postcopy

  Fundamentally it's because the "exact" version only does one more deep sync
  to fetch the pending results, while in postcopy's case it's never going to
  sync anything more than the estimate, as the VM on the source is stopped.

  (2) Do _not_ rely on threshold_size anymore to decide whether postcopy
      should complete

  threshold_size was calculated from the expected downtime and bandwidth only
  during precopy, as an efficient way to decide when to switch over.  It's
  not sensible to rely on threshold_size in postcopy.

  For precopy, if switchover is decided, the migration will complete soon.
  It's not true for postcopy.  Logically speaking, postcopy should only
  complete the migration if all pending data is flushed.

  Here it used to work because save_complete() used to implicitly contain
  save_live_iterate() when there's pending size.

  Even if that looks benign, having RAM to be migrated in postcopy's
  save_complete() has other bad side effects:

  (a) Since save_complete() needs to be run once at a time, it means when
      moving RAM there's no way to move other things (rather than round-robin
      iterating the vmstate handlers like what we do with the ITERABLE
      phase).  Not an immediate concern, but it may stop working in the
      future when there's more than one iterable (e.g. vfio postcopy).

  (b) postcopy recovery, unfortunately, only works during the ITERABLE
      phase.  IOW, if src QEMU moves RAM during postcopy's save_complete()
      and the network failed, then it'll crash both QEMUs... OTOH if it
      failed during iteration it'll still be recoverable.  IOW, this change
      should further reduce the window where QEMU can split brain and crash
      in extreme cases.

  If we enable the ram_save_complete() tracepoints, we'll see this before
  this patch:

      1267959@1748381938.294066:ram_save_complete dirty=9627, done=0
      1267959@1748381938.308884:ram_save_complete dirty=0, done=1

  It means in this migration there are 9627 pages migrated at complete() of
  the postcopy phase.

  After this change, all the postcopy RAM should be migrated in the iterable
  phase, rather than in save_complete():

      1267959@1748381938.294066:ram_save_complete dirty=0, done=0
      1267959@1748381938.308884:ram_save_complete dirty=0, done=1

  (3) Adjust when to decide to switch to postcopy

  This shouldn't be super important; the movement makes sure there's only one
  in_postcopy check, then we are clear on what we do with the two completely
  different use cases (precopy v.s. postcopy).

  (4) Trivial touch up on the threshold_size comparison

  Which changes:

      "(!pending_size || pending_size < s->threshold_size)"

  into:

      "(pending_size <= s->threshold_size)"

  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Link: https://lore.kernel.org/r/20250613140801.474264-11-peterx@redhat.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
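  Points (1), (2) and (4) can be condensed into roughly the following shape;
  this is paraphrased pseudologic under assumed names (pending_estimate,
  pending_exact, postcopy_should_start), not the literal code:

      uint64_t pending = pending_estimate();

      if (!in_postcopy && pending <= s->threshold_size) {
          /* Precopy only: pay for one deep sync to get the exact number. */
          pending = pending_exact();
      }

      if (in_postcopy) {
          /* Postcopy completes only once every pending byte is flushed. */
          if (pending == 0) {
              migration_completion(s);
          }
      } else if (pending <= s->threshold_size) {
          migration_completion(s);
      } else if (postcopy_should_start(s)) {
          postcopy_start(s);
      }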
* migration/bg-snapshot: Do not check for SKIP in iterator
  Peter Xu, 2025-07-11 (1 file, -3/+2)

  It's not possible to happen in bg-snapshot case.

  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250613140801.474264-5-peterx@redhat.com
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Replace QemuSemaphore with QemuEvent
  Akihiko Odaki, 2025-06-06 (1 file, -12/+9)

  pause_event can utilize qemu_event_reset() to discard events.

  Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Link: https://lore.kernel.org/r/20250529-event-v5-7-53b285203794@daynix.com
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
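  For reference, the QemuEvent primitive that replaces the semaphore is used
  roughly like this (illustrative usage, not the patch itself):

      static QemuEvent pause_event;

      qemu_event_init(&pause_event, false);   /* starts out not signalled */

      /* waiter side */
      qemu_event_reset(&pause_event);         /* discard any stale wakeup */
      qemu_event_wait(&pause_event);          /* block until it is set */

      /* waker side, typically another thread */
      qemu_event_set(&pause_event);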
* migration: Add save_postcopy_prepare() savevm handler
  Peter Xu, 2025-05-02 (1 file, -0/+4)

  Add a savevm handler for a module to opt in to sending extra sections right
  before postcopy starts, and before the VM is stopped.  RAM will start to
  use this new savevm handler in the next patch to do flush and sync for
  multifd pages.

  Note that we choose to do it before the VM is stopped because the only
  current potential user is not sensitive to the VM status, so doing it
  before the VM is stopped is preferred, to avoid enlarging the postcopy
  downtime.

  It is still a bit unfortunate that we need to introduce such a new savevm
  handler just for the only use case, however it's so far the cleanest.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Message-ID: <20250411114534.3370816-4-ppandit@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: refactor channel discovery mechanism
  Prasad Pandit, 2025-05-02 (1 file, -60/+70)

  The various logical migration channels don't have a standardized way of
  advertising themselves, and their connections may be seen out of order by
  the migration destination.  When a new connection arrives, the incoming
  migration currently makes use of heuristics to determine which channel it
  belongs to.

  The next few patches will need to change how the multifd and postcopy
  capabilities interact, and that affects the channel discovery heuristic.

  Refactor the channel discovery heuristic to make it less opaque and to
  simplify the subsequent patches.

  Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Message-ID: <20250411114534.3370816-3-ppandit@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: check RDMA and capabilities are compatible on both sides
  Li Zhijian, 2025-05-02 (1 file, -10/+20)

  Depending on the order of starting RDMA and setting capability, they can be
  categorized into the following scenarios:

  Source:
      S1: [set capabilities] -> [Start RDMA outgoing]
  Destination:
      D1: [set capabilities] -> [Start RDMA incoming]
      D2: [Start RDMA incoming] -> [set capabilities]

  Previously, compatibility between RDMA and capabilities was verified only
  in scenario D1, potentially causing migration failures in other situations.

  For scenarios S1 and D1, we can seamlessly incorporate
  migration_transport_compatible() to address compatibility between channels
  and capabilities vs transport.

  For scenario D2, ensure compatibility within migrate_caps_check().

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
  Message-ID: <20250305062825.772629-3-lizhijian@fujitsu.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* qom: Have class_init() take a const data argument
  Philippe Mathieu-Daudé, 2025-04-25 (1 file, -1/+1)

  Mechanical change using gsed, then style manually adapted to pass the
  checkpatch.pl script.

  Suggested-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20250424194905.82506-4-philmd@linaro.org>
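  For migration.c this one-line change is just the class_init callback's data
  parameter becoming const; a sketch of the new shape (the function name here
  is assumed, not quoted from the tree):

      static void migration_class_init(ObjectClass *klass, const void *data)
      {
          /* body unchanged; only the 'data' parameter gained const */
      }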
* migration: Fix UAF for incoming migration on MigrationState
  Peter Xu, 2025-03-10 (1 file, -2/+38)

  On the incoming migration side, QEMU uses a coroutine to load all the VM
  states.  Inside, it may reference MigrationState on global states like
  migration capabilities, parameters, error state, shared mutexes and more.
  However there's nothing yet to make sure MigrationState won't get destroyed
  (e.g. after migration_shutdown()).  Meanwhile there's also no API available
  to remove the incoming coroutine in migration_shutdown(), preventing it
  from accessing the freed elements.

  There's a bug report showing this can happen and crash dest QEMU when
  migration is cancelled on source.

  When it happens, the dest main thread is trying to clean up everything:

      #0  qemu_aio_coroutine_enter
      #1  aio_dispatch_handler
      #2  aio_poll
      #3  monitor_cleanup
      #4  qemu_cleanup
      #5  qemu_default_main

  Then it found the migration incoming coroutine, scheduled it (even after
  migration_shutdown()), causing the crash:

      #0  __pthread_kill_implementation
      #1  __pthread_kill_internal
      #2  __GI_raise
      #3  __GI_abort
      #4  __assert_fail_base
      #5  __assert_fail
      #6  qemu_mutex_lock_impl
      #7  qemu_lockable_mutex_lock
      #8  qemu_lockable_lock
      #9  qemu_lockable_auto_lock
      #10 migrate_set_error
      #11 process_incoming_migration_co
      #12 coroutine_trampoline

  To fix it, take a refcount after an incoming setup is properly done when
  qmp_migrate_incoming() succeeded the 1st time.  As it's during a QMP
  handler which needs BQL, it means the main loop is still alive (without
  going into cleanups, which also need BQL).

  The refcount is released only when the incoming migration coroutine
  finishes or fails.  Hence the refcount is valid for both (1) the setup
  phase of incoming ports, mostly IO watches (e.g.
  qio_channel_add_watch_full()), and (2) the incoming coroutine itself
  (process_incoming_migration_co()).

  Note that we can't unref in migration_incoming_state_destroy(), because
  both qmp_xen_load_devices_state() and load_snapshot() will use it without
  an incoming migration.  Those hold BQL so they're not prone to this issue.
  PS: I suspect nobody uses Xen's command at all, as it didn't register yank,
  hence AFAIU the command should crash on master when trying to unregister
  yank in migration_incoming_state_destroy()..  but that's another story.

  Also note that in some incoming failure cases we may not always unref the
  MigrationState refcount, which is a trade-off to keep things simple.  We
  could make it accurate, but it can be an overkill.  Some examples:

  - Unlike most of the rest of the protocols, socket_start_incoming_migration()
    may create a net listener after the incoming port is set up successfully.
    It means we can't unref in migration_channel_process_incoming() as a
    generic path, because the socket protocol might keep using MigrationState.

  - For either socket or file, multiple IO watches might be created; it means
    logically each IO watch needs to take one refcount for MigrationState so
    as to be 100% accurate on ownership of the refcount taken.

  In general, we at least need per-protocol handling to make it accurate,
  which can be an overkill if we know incoming failed after all.  Add a short
  comment to explain that when taking the refcount in qmp_migrate_incoming().

  Bugzilla: https://issues.redhat.com/browse/RHEL-69775
  Tested-by: Yan Fu <yafu@redhat.com>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Message-ID: <20250220132459.512610-1-peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
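  The refcounting described above maps onto plain QOM references on the
  MigrationState object; a simplified sketch of the two ends (placement and
  exact conditions are condensed here):

      /* qmp_migrate_incoming(): after the first successful setup, under BQL */
      object_ref(OBJECT(migrate_get_current()));

      /* end of process_incoming_migration_co(), on success or failure */
      object_unref(OBJECT(migrate_get_current()));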
* migration: Add thread pool of optional load threads
  Maciej S. Szmigiero, 2025-03-06 (1 file, -1/+1)

  Some drivers might want to make use of auxiliary helper threads during VM
  state loading, for example to make sure that their blocking (sync) I/O
  operations don't block the rest of the migration process.

  Add a migration core managed thread pool to facilitate this use case.

  The migration core will wait for these threads to finish before
  (re)starting the VM at destination.

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Link: https://lore.kernel.org/qemu-devel/b09fd70369b6159c75847e69f235cb908b02570c.1741124640.git.maciej.szmigiero@oracle.com
  Signed-off-by: Cédric Le Goater <clg@redhat.com>
* migration: Always take BQL for migration_incoming_state_destroy()
  Maciej S. Szmigiero, 2025-03-06 (1 file, -0/+13)

  All callers to migration_incoming_state_destroy() other than
  postcopy_ram_listen_thread() do this call with BQL held.

  Since migration_incoming_state_destroy() ultimately calls "load_cleanup"
  SaveVMHandlers and it will soon call BQL-sensitive code, it makes sense to
  always call that function under BQL rather than to have it deal with both
  cases (with BQL and without BQL).  Add the necessary bql_lock() and
  bql_unlock() to postcopy_ram_listen_thread().

  qemu_loadvm_state_main() in postcopy_ram_listen_thread() could call
  "load_state" SaveVMHandlers that are expecting BQL to be held.  In
  principle, the only devices that should be arriving on the migration
  channel serviced by postcopy_ram_listen_thread() are those that are
  postcopiable and whose load handlers are safe to be called without BQL
  being held.

  But nothing currently prevents the source from sending data for "unsafe"
  devices, which would cause trouble there.  Add a TODO comment there so it's
  clear that it would be good to improve handling of such an (erroneous) case
  in the future.

  Acked-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Link: https://lore.kernel.org/qemu-devel/21bb5ca337b1d5a802e697f553f37faf296b5ff4.1741193259.git.maciej.szmigiero@oracle.com
  Signed-off-by: Cédric Le Goater <clg@redhat.com>
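  The addition in postcopy_ram_listen_thread() is essentially bracketing the
  teardown with the Big QEMU Lock; a minimal sketch using the bql_lock() /
  bql_unlock() helpers:

      /* postcopy_ram_listen_thread(), once the load loop has finished */
      bql_lock();
      migration_incoming_state_destroy();
      bql_unlock();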
* migration: Add MIG_CMD_SWITCHOVER_START and its load handler
  Maciej S. Szmigiero, 2025-03-06 (1 file, -0/+2)

  This QEMU_VM_COMMAND sub-command and its switchover_start SaveVMHandler is
  used to mark the switchover point in the main migration stream.

  It can be used to inform the destination that all pre-switchover main
  migration stream data has been sent/received, so it can start to process
  post-switchover data that it might have received via other migration
  channels, like the multifd ones.

  Also add the relevant MigrationState bit stream compatibility property and
  its hw_compat entry.

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Zhang Chen <zhangckid@gmail.com> # for the COLO part
  Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Link: https://lore.kernel.org/qemu-devel/311be6da85fc7e49a7598684d80aa631778dcbce.1741124640.git.maciej.szmigiero@oracle.com
  Signed-off-by: Cédric Le Goater <clg@redhat.com>
* migration: Don't set FAILED state when cancelling
  Fabiano Rosas, 2025-02-14 (1 file, -4/+11)

  The expected outcome from qmp_migrate_cancel() is that the source migration
  goes to the terminal state MIGRATION_STATUS_CANCELLED.  Anything different
  from this is a bug when cancelling.

  Make sure there is never a state transition from an unspecified state into
  FAILED.  Code that sets FAILED should always either make sure that the old
  state is not CANCELLING or specify the old state.

  Note that the destination is allowed to go into FAILED, so there's no issue
  there.

  (I don't think this is relevant as a backport because cancelling does work,
  it just doesn't show the right state at the end)

  Fixes: 3dde8fdbad ("migration: Merge precopy/postcopy on switchover start")
  Fixes: d0edb8a173 ("migration: Create the postcopy preempt channel asynchronously")
  Fixes: 8518278a6a ("migration: implementation of background snapshot thread")
  Fixes: bf78a046b9 ("migration: refactor migrate_fd_connect failures")
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Message-ID: <20250213175927.19642-7-farosas@suse.de>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Reject qmp_migrate_cancel after postcopy
  Fabiano Rosas, 2025-02-14 (1 file, -1/+12)

  After postcopy has started, it's not possible to recover the source machine
  in case a migration error occurs, because the destination has already been
  changing the state of the machine.  For that same reason, it doesn't make
  sense to try to cancel the migration after postcopy has started.  Reject
  the cancel command during postcopy.

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Message-ID: <20250213175927.19642-6-farosas@suse.de>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
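  The guard presumably amounts to an early check at the top of the QMP
  handler; a sketch of the idea (the exact error message is illustrative):

      void qmp_migrate_cancel(Error **errp)
      {
          if (migration_in_postcopy()) {
              error_setg(errp, "Cannot cancel migration while in postcopy");
              return;
          }
          /* ...existing cancel path... */
      }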
* migration: Change migrate_fd_ to migration_
  Fabiano Rosas, 2025-02-14 (1 file, -15/+15)

  Remove all instances of _fd_ from the migration generic code.  These
  functions have grown over time and the _fd_ part is now just confusing.

  migration_fd_error() -> migration_error() makes it a little vague.  Since
  it's only used for migration_connect() failures, change it to
  migration_connect_set_error().

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Message-ID: <20250213175927.19642-4-farosas@suse.de>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Unify migration_cancel and migrate_fd_cancel
  Fabiano Rosas, 2025-02-14 (1 file, -11/+7)

  There's no need for two separate functions and this _fd_ is a historic
  artifact that makes little sense nowadays.

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Message-ID: <20250213175927.19642-3-farosas@suse.de>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Set migration error outside of migrate_cancel
  Fabiano Rosas, 2025-02-14 (1 file, -5/+2)

  There's no point passing the error into migration cancel only for it to
  call migrate_set_error().

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Message-ID: <20250213175927.19642-2-farosas@suse.de>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging
  Stefan Hajnoczi, 2025-02-10 (1 file, -8/+0)

  Block layer patches

  - Managing inactive nodes (enables QSD migration with shared storage)
  - Fix swapped values for BLOCK_IO_ERROR 'device' and 'qom-path'
  - vpc: Read images exported from Azure correctly
  - scripts/qemu-gdb: Support coroutine dumps in coredumps
  - Minor cleanups

  # -----BEGIN PGP SIGNATURE-----
  #
  # iQJFBAABCAAvFiEE3D3rFZqa+V09dFb+fwmycsiPL9YFAmek34IRHGt3b2xmQHJl
  # ZGhhdC5jb20ACgkQfwmycsiPL9bDpxAAnTvwmdazAXG0g9GzqvrEB/+6rStjAsqE
  # 9MTWV4WxyN41d0RXxN8CYKb8CXSiTRyw6r3CSGNYEI2eShe9e934PriSkZm41HyX
  # n9Yh5YxqGZqitzvPtx62Ii/1KG+PcjQbfHuK1p4+rlKa0yQ2eGlio1JIIrZrCkBZ
  # ikZcQUrhIyD0XV8hTQ2+Ysa+ZN6itjnlTQIG3gS3m8f8WR7kyUXD8YFMQFJFyjVx
  # NrAIpLnc/ln9+5PZR9tje8U7XEn2KCgI5pgGaQnrd0h0G1H4ig8ogzYYnKTLhjU/
  # AmQpS8np8Tyg6S1UZTiekEq0VuAhThEQc5b3sGbmHWH/R2ABMStyf18oCBAkPzZ7
  # s6h+3XzTKKY2Q5Q3ZG/ANkUJjTNBhdj1fcaARvbSWsqsuk5CWX/I3jzvgihFtCSs
  # eGu+b/bLeW6P7hu4qPHBcgLHuB1Fc7Rd2t4BoIGM1wcO2CeC9DzUKOiIMZOEJIh0
  # GGqCkEWDHgckDTakD4/vSqm0UDKt6FSlQC9ga/ILBY3IB5HpHoArY58selymy28i
  # X7MgAvbjdsmNuUuXDZZOiObcFt3j8jlmwPJpPyzXPQIiPX1RXeBPRhVAEeZCKn6Z
  # tfHr72SJdMeVOGXVTvOrJ2iW+4g03rPdmkDFCUhpOwo62RODq7ahvCIXsNf3nEFR
  # rSB3T1M/8EM=
  # =iQLP
  # -----END PGP SIGNATURE-----
  # gpg: Signature made Thu 06 Feb 2025 11:12:50 EST
  # gpg: using RSA key DC3DEB159A9AF95D3D7456FE7F09B272C88F2FD6
  # gpg: issuer "kwolf@redhat.com"
  # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>" [full]
  # Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

  * tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (25 commits)
    block: remove unused BLOCK_OP_TYPE_DATAPLANE
    iotests: Add (NBD-based) tests for inactive nodes
    iotests: Add qsd-migrate case
    iotests: Add filter_qtest()
    nbd/server: Support inactive nodes
    block/export: Add option to allow export of inactive nodes
    block: Drain nodes before inactivating them
    block/export: Don't ignore image activation error in blk_exp_add()
    block: Support inactive nodes in blk_insert_bs()
    block: Add blockdev-set-active QMP command
    block: Add option to create inactive nodes
    block: Fix crash on block_resize on inactive node
    block: Don't attach inactive child to active node
    migration/block-active: Remove global active flag
    block: Inactivate external snapshot overlays when necessary
    block: Allow inactivating already inactive nodes
    block: Add 'active' field to BlockDeviceInfo
    block-backend: Fix argument order when calling 'qapi_event_send_block_io_error()'
    scripts/qemu-gdb: Support coroutine dumps in coredumps
    scripts/qemu-gdb: Simplify fs_base fetching for coroutines
    ...

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  * migration/block-active: Remove global active flag  (via the merge above)
    Kevin Wolf, 2025-02-06 (1 file, -8/+0)

    Block devices have an individual active state, a single global flag can't
    cover this correctly.  This becomes more important as we allow users to
    manually manage which nodes are active or inactive.

    Now that it's allowed to call bdrv_inactivate_all() even when some nodes
    are already inactive, we can remove the flag and just unconditionally call
    bdrv_inactivate_all() and, more importantly, bdrv_activate_all() before we
    make use of the nodes.

    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    Acked-by: Fabiano Rosas <farosas@suse.de>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    Message-ID: <20250204211407.381505-5-kwolf@redhat.com>
    Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* qapi: Move include/qapi/qmp/ to include/qobject/
  Daniel P. Berrangé, 2025-02-10 (1 file, -1/+1)

  The general expectation is that header files should follow the same
  file/path naming scheme as the corresponding source file.  There are
  various historical exceptions to this practice in QEMU, with one of the
  most notable being the include/qapi/qmp/ directory.  Most of the headers
  there correspond to source files in qobject/.

  This patch corrects most of that inconsistency by creating include/qobject/
  and moving the headers for qobject/ there.

  This also fixes MAINTAINERS for include/qapi/qmp/dispatch.h:
  scripts/get_maintainer.pl now reports "QAPI" instead of "No maintainers
  found".

  Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
  Acked-by: Halil Pasic <pasic@linux.ibm.com> #s390x
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Message-ID: <20241118151235.2665921-2-armbru@redhat.com>
  [Rebased]
* migration: Merge precopy/postcopy on switchover start
  Peter Xu, 2025-01-29 (1 file, -30/+32)

  Now after all the cleanups, finally we can merge the switchover startup
  phase into one single function for precopy/postcopy.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-16-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Always set DEVICE state
  Peter Xu, 2025-01-29 (1 file, -31/+51)

  DEVICE state was introduced back in 2017:

      https://lore.kernel.org/qemu-devel/20171020090556.18631-1-dgilbert@redhat.com/

  Quoting from Dave's cover letter, when the pre-switchover phase is enabled,
  the state transition looks like this:

      The precopy flow is:
        active -> pre-switchover -> device -> completed
      The postcopy flow is:
        active -> pre-switchover -> postcopy-active -> completed

  To supplement the above, when the cap is not enabled:

      The precopy flow is:
        active -> completed
      The postcopy flow is:
        active -> postcopy-active -> completed

  It works for us, though we have some code just to special-case these state
  transitions, so the DEVICE state currently is special only to precopy, and
  only conditionally.

  I had a quick discussion with Libvirt developers, and it turns out this may
  not be necessary.  IOW, it seems okay to make the DEVICE state generic, so
  that we don't have over-complicated state machines.  It not only helps
  align all the migration state machines and helps clean up the code path,
  especially on pre-switchover handling (see the patch itself); another side
  benefit is we can unconditionally have a specific state to mark the
  switchover phase, which might be helpful for debugging too.

  This patch makes the DEVICE state be present always, marking that the
  source QEMU is switching over.  Then the state machine is always as simple
  as:

      active -> [pre-switchover ->] device -> [postcopy-active ->] complete

  After the change, no matter whether pre-switchover or postcopy is enabled
  or not, we always have DEVICE state showing the switchover phase.  When
  pre-switchover is enabled, we'll have an extra stage before that.  When
  postcopy is enabled, we'll have an extra stage after that.

  A few qtests need touch-ups in the QEMU tree for this change:

  - A few iotest outputs (194, 203, 234, 262, 280)
  - Teach libqos's migrate() on "device" state

  Cc: Jiri Denemark <jdenemar@redhat.com>
  Cc: Daniel P. Berrangé <berrange@redhat.com>
  Cc: Dr. David Alan Gilbert <dave@treblig.org>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-15-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Unwrap qemu_savevm_state_complete_precopy() in postcopy
  Peter Xu, 2025-01-29 (1 file, -2/+11)

  Postcopy has invoked qemu_savevm_state_complete_precopy() twice for a long
  time, and that has caused way too much confusion.  Let's clean this up and
  make postcopy easier to read.

  It's actually fairly straightforward: postcopy starts with saving
  non-postcopiable iterables, then later it saves again with non-iterable
  only.  Moving these two calls out makes everything much easier to follow.
  Otherwise it's very unclear what qemu_savevm_state_complete_precopy() did
  in either of the calls.

  No functional change intended.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-13-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Notify COMPLETE once for postcopy
  Peter Xu, 2025-01-29 (1 file, -0/+15)

  Postcopy invokes qemu_savevm_state_complete_precopy() twice, which means
  it'll invoke the COMPLETE notification twice.. and also, twice, the
  tracepoints marking precopy complete.

  Move that notification (along with the tracepoint) out to the caller, so
  that postcopy will only notify once, right at the start of the switchover
  phase from precopy.  While at it, rename it to suit the file it now lives
  in.

  For precopy, there should be no functional change except that the
  tracepoint has a name change.

  For the other two users of qemu_savevm_state_complete_precopy(), namely
  qemu_savevm_state() and qemu_savevm_live_state(): the notifier shouldn't
  matter because they're not precopy at all.  Now in these two contexts (aka
  "savevm" and "colo") sometimes the precopy notifiers will still be invoked,
  but that's outside the scope of this patch.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-12-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Take BQL slightly longer in postcopy_start()
  Peter Xu, 2025-01-29 (1 file, -3/+2)

  This paves the way for a follow-up patch to modify migration states at the
  end of postcopy_start(), which should better be done with the BQL held so
  that there's no way of concurrent cancellation.

  So we'll do slightly more with the BQL held, but it's really trivial;
  hopefully nothing will really change with this.  A side benefit is we can
  drop another explicit lock() in the failure path.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-11-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Drop cached migration state in migration_maybe_pause()
  Peter Xu, 2025-01-29 (1 file, -19/+8)

  I can't see why we must cache the state now that we avoided the possible
  CANCEL race: that's the only thing I can think of that could modify the
  migration state concurrently with the migration thread itself.

  Make all the state updates happen always; then we don't need to cache the
  state anymore.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-10-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Adjust locking in migration_maybe_pause()
  Peter Xu, 2025-01-29 (1 file, -2/+2)

  In migration_maybe_pause() QEMU may yield the BQL before waiting for a
  semaphore.  However it yields the BQL too early, which logically gives the
  main thread a chance to quickly take the BQL and modify the state to
  CANCELLING.

  To avoid such a race condition from happening at all, always update the
  migration states within the BQL.  It'll make sure no concurrent
  cancellation can ever happen.

  With that, IIUC there's a chance we can remove the extra parameter in
  migration_maybe_pause() to update the active state, but that'll be done
  separately later.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-9-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Adjust postcopy bandwidth during switchover
  Peter Xu, 2025-01-29 (1 file, -7/+9)

  Precopy always uses unlimited bandwidth during switchover; that makes sense
  because this is so critical and no one would like to throttle bandwidth
  during the VM blackout.

  OTOH, postcopy surprisingly didn't do that.  There's one line in the middle
  of the postcopy switchover that tries to switch to postcopy's specified
  max-postcopy-bandwidth, but even so it sits somewhere in the middle, which
  is strange.

  This patch brings the two modes to always use unlimited bandwidth for
  switchover, and only applies the postcopy max bandwidth after the
  switchover is completed.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-8-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Synchronize all CPU states only for non-iterable dump
  Peter Xu, 2025-01-29 (1 file, -5/+1)

  Do a one-shot CPU sync in qemu_savevm_state_complete_precopy_non_iterable(),
  instead of coding it separately in two places.

  Note that in the context of qemu_savevm_state_complete_precopy(), this
  patch is also an optimization for the postcopy path, in that we can avoid
  syncing the CPUs twice during switchover: before this patch,
  postcopy_start() invokes qemu_savevm_state_complete_precopy() twice, and
  each of them will try to sync the CPU info.  In reality, only one of them
  is enough.

  For background snapshot, there's no intended functional change.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-7-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Drop inactivate_disk param in qemu_savevm_state_complete*
  Peter Xu, 2025-01-29 (1 file, -6/+16)

  This parameter is only used by one caller, which is the genuine precopy
  complete path (migration_completion_precopy).

  The parameter was introduced in a1fbe750fd ("migration: Fix race of image
  locking between src and dst") to make sure the inactivate will happen
  before EOF, to make sure dest will always be able to activate the disk
  properly.  However there's no limitation on how early we inactivate the
  disk.  For the precopy completion path, we can always do that as long as
  the VM is stopped.

  Move the disk inactivate there, then we can remove this inactivate_disk
  parameter in the whole call stack, because all the rest of the users pass
  in false always.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-6-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Avoid two src-downtime-end tracepoints for postcopy
  Peter Xu, 2025-01-29 (1 file, -2/+1)

  Postcopy can trigger this tracepoint twice, while only the 1st one is
  valid.  Avoid triggering the 2nd tracepoint just like what we do with
  recording the total downtime.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-5-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Optimize postcopy on downtime by avoiding JSON writer
  Peter Xu, 2025-01-29 (1 file, -2/+15)

  postcopy_start() is the entry point where postcopy is destined to start.
  It also means the source QEMU will not dump the VM description, aka, the
  JSON writer is garbage now.  We could leave it to be cleaned up when
  migration completes; however, while the JSON writer object is present,
  vmstate_save() will still try to construct the JSON objects for the VM
  descriptions, even though they'll never be used later if it's postcopy.

  To save those cycles, release the JSON writer earlier for postcopy.  Then
  vmstate_save() later will be smart enough to skip the JSON object
  constructions completely.  This can logically reduce downtime because all
  such JSON constructions happen during the postcopy blackout.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-4-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Do not construct JSON description if suppressed
  Peter Xu, 2025-01-29 (1 file, -3/+6)

  The QEMU machine has a property "suppress-vmdesc".  When it is enabled,
  QEMU will stop attaching the JSON VM description at the end of the precopy
  migration stream (postcopy is never affected because postcopy never
  attaches it).

  However even if it's suppressed by the user, the source QEMU will still
  construct the JSON descriptions, which is a complete waste of CPU and
  memory resources.  To avoid it, only create the JSON writer object if
  suppress-vmdesc is not specified.

  Luckily, vmstate_save() already supports vmdesc==NULL, so only a few spots
  are left to be prepared for vmdesc possibly being NULL now.

  While at it, move the init / destroy of the JSON writer object to the start
  / end of the migration - the JSON writer object is a sub-struct of the
  migration state, and it looks like the only object that was dynamically
  allocated / destroyed within the migration process.  Make it the same as
  the rest of the objects that migration uses.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Tested-by: Jiri Denemark <jdenemar@redhat.com>
  Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
  Link: https://lore.kernel.org/r/20250114230746.3268797-3-peterx@redhat.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
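  A minimal sketch of the gating described above, assuming the machine
  property is reachable via qdev_get_machine() and that the writer lives in
  MigrationState (placement at migration start/end is simplified):

      MachineState *machine = MACHINE(qdev_get_machine());

      if (!machine->suppress_vmdesc) {
          /* Only build the JSON description when it will actually be sent. */
          s->vmdesc = json_writer_new(false);
      }

      /* ...later, at the end of migration... */
      g_clear_pointer(&s->vmdesc, json_writer_free);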
* migration: cpr-transfer mode
  Steve Sistare, 2025-01-29 (1 file, -1/+105)

  Add the cpr-transfer migration mode, which allows the user to transfer a
  guest to a new QEMU instance on the same host with minimal guest pause
  time, by preserving guest RAM in place, albeit with new virtual addresses
  in new QEMU, and by preserving device file descriptors.  Pages that were
  locked in memory for DMA in old QEMU remain locked in new QEMU, because the
  descriptor of the device that locked them remains open.

  cpr-transfer preserves memory and device descriptors by sending them to new
  QEMU over a unix domain socket using SCM_RIGHTS.  Such CPR state cannot be
  sent over the normal migration channel, because devices and backends are
  created prior to reading the channel, so this mode sends CPR state over a
  second "cpr" migration channel.  New QEMU reads the cpr channel prior to
  creating devices or backends.  The user specifies the cpr channel in the
  channel arguments on the outgoing side, and in a second -incoming
  command-line parameter on the incoming side.

  The user must start old QEMU with the '-machine aux-ram-share=on' option,
  which allows anonymous memory to be transferred in place to the new process
  by transferring a memory descriptor for each ram block.  Memory-backend
  objects must have the share=on attribute, but memory-backend-epc is not
  supported.

  The user starts new QEMU on the same host as old QEMU, with command-line
  arguments to create the same machine, plus the -incoming option for the
  main migration channel, like normal live migration.  In addition, the user
  adds a second -incoming option with channel type "cpr".  This CPR channel
  must support file descriptor transfer with SCM_RIGHTS, i.e. it must be a
  UNIX domain socket.

  To initiate CPR, the user issues a migrate command to old QEMU, adding a
  second migration channel of type "cpr" in the channels argument.  Old QEMU
  stops the VM, saves state to the migration channels, and enters the
  postmigrate state.  New QEMU mmap's memory descriptors, and execution
  resumes.

  The implementation splits qmp_migrate into start and finish functions.
  Start sends CPR state to new QEMU, which responds by closing the CPR
  channel.  Old QEMU detects the HUP then calls finish, which connects the
  main migration channel.

  In summary, the usage is:

      qemu-system-$arch -machine aux-ram-share=on ...

      start new QEMU with "-incoming <main-uri> -incoming <cpr-channel>"

      Issue commands to old QEMU:
          migrate_set_parameter mode cpr-transfer

          {"execute": "migrate", ...
              {"channel-type": "main"...},
              {"channel-type": "cpr"...} ... }

  Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Acked-by: Markus Armbruster <armbru@redhat.com>
  Link: https://lore.kernel.org/r/1736967650-129648-17-git-send-email-steven.sistare@oracle.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: incoming channel
  Steve Sistare, 2025-01-29 (1 file, -5/+16)

  Extend the -incoming option to allow an @MigrationChannel to be specified.
  This allows channels other than 'main' to be described on the command line,
  which will be needed for CPR.

  Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
  Acked-by: Peter Xu <peterx@redhat.com>
  Link: https://lore.kernel.org/r/1736967650-129648-13-git-send-email-steven.sistare@oracle.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: enhance migrate_uri_parse
  Steve Sistare, 2025-01-29 (1 file, -0/+11)

  Export migrate_uri_parse for use outside migration internals, and define a
  method migrate_is_uri that indicates when migrate_uri_parse should be used.

  Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Link: https://lore.kernel.org/r/1736967650-129648-12-git-send-email-steven.sistare@oracle.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: cpr-state
  Steve Sistare, 2025-01-29 (1 file, -0/+1)

  CPR must save state that is needed after QEMU is restarted, when devices
  are realized.  Thus the extra state cannot be saved in the migration
  channel, as objects must already exist before that channel can be loaded.
  Instead, define auxiliary state structures and vmstate descriptions, not
  associated with any registered object, and serialize the aux state to a
  cpr-specific channel in cpr_state_save.  Deserialize in cpr_state_load
  after QEMU restarts, before devices are realized.

  Provide accessors for clients to register file descriptors for saving.  The
  mechanism for passing the fds to the new process will be specific to each
  migration mode, and added in subsequent patches.

  Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Link: https://lore.kernel.org/r/1736967650-129648-8-git-send-email-steven.sistare@oracle.com
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration/block: Rewrite disk activation
  Peter Xu, 2025-01-09 (1 file, -61/+19)

  This patch proposes a flag to maintain disk activation status globally.  It
  mostly rewrites disk activation management for QEMU, including COLO and the
  QMP command xen_save_devices_state.

  Backgrounds
  ===========

  We have two problems with disk activations, one resolved, one not.

  Problem 1: disk activation recovery (for switchover interruptions)
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  When migration is either cancelled or failed during switchover, especially
  after the disks are inactivated, QEMU needs to remember to re-activate the
  disks again before the VM starts.

  It used to be done separately in two paths: one in qmp_migrate_cancel(),
  the other one in the failure path of migration_completion().

  It used to be fixed in different commits, all over the places in QEMU.  So
  these are the relevant changes I saw; I'm not sure if it's the complete
  list:

  - In 2016, commit fe904ea824 ("migration: regain control of images when
    migration fails to complete")

  - In 2017, commit 1d2acc3162 ("migration: re-active images while migration
    been canceled after inactive them")

  - In 2023, commit 6dab4c93ec ("migration: Attempt disk reactivation in more
    failure scenarios")

  Now since we have a slightly better picture, maybe we can unify the
  reactivation in a single path.

  One side benefit of doing so is that we can move the disk operation outside
  the QMP command "migrate_cancel".  It's possible that in the future we may
  want to make "migrate_cancel" be OOB-compatible, while that requires the
  command not to need BQL in the first place.  This will already do that and
  make the migrate_cancel command lightweight.

  Problem 2: disk invalidation on top of invalidated disks
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

  This is an unresolved bug for current QEMU.  Link in "Resolves:" at the
  end.

  It turns out besides the src switchover phase (problem 1 above), QEMU also
  needs to remember block activation on the destination.

  Consider two continuous migrations in a row, where the VM was always
  paused.  In that scenario, the disks are not activated even until migration
  completed in the 1st round.  When the 2nd round starts, if QEMU doesn't
  know the status of the disks, it needs to try to inactivate the disks
  again.

  Here the issue is the block layer API bdrv_inactivate_all() will crash QEMU
  if invoked on already inactive disks for the 2nd migration.  For detail,
  see the bug link at the end.

  Implementation
  ==============

  This patch proposes to maintain disk activation with a global flag, so we
  know:

  - If we used to inactivate disks for migration, but migration got
    cancelled, or failed, QEMU will know it should reactivate the disks.

  - On the incoming side, if the disks are never activated but then another
    migration is triggered, QEMU should be able to tell that inactivation is
    not needed for the 2nd migration.

  We used to have disk_inactive, but it only solves the 1st issue, not the
  2nd.  Also, it's done in completely separate paths, so it's extremely hard
  to follow either how the flag changes, or the duration that the flag is
  valid, and when we will reactivate the disks.

  Convert the existing disk_inactive flag into that global flag (also invert
  its naming), and maintain the disk activation status for the whole
  lifecycle of QEMU.  That includes the incoming QEMU.

  Put both of the error cases of source migration (failure, cancelled)
  together into migration_iteration_finish(), which will be invoked for
  either of the scenarios.  So from that part QEMU should behave the same as
  before.

  However with such global maintenance of disk activation status, we not only
  clean up quite a few temporary paths that we used to maintain the disk
  activation status (e.g. in postcopy code); it also fixes the crash for
  problem 2 in one shot.

  For freshly started QEMU, the flag is initialized to TRUE, showing that
  QEMU owns the disks by default.

  For incoming migrated QEMU, the flag will be initialized to FALSE once and
  for all, showing that the dest QEMU doesn't own the disks until switchover.
  That is guaranteed by the "once" variable.

  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2395
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Message-Id: <20241206230838.1111496-7-peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration/block: Fix possible race with block_inactive
  Peter Xu, 2025-01-09 (1 file, -6/+3)

  Src QEMU sets block_inactive=true very early, before the invalidation takes
  place.  It means if something went wrong between setting the flag and
  reaching qemu_savevm_state_complete_precopy_non_iterable(), where the
  invalidation work is done, the block_inactive flag would be inconsistent.

  For example, think about when
  qemu_savevm_state_complete_precopy_iterable() can fail: it will have
  block_inactive set to true even if all block drives are active.

  Fix that by only updating the flag after the invalidation is done.

  No Fixes for any commit, because it's not an issue if bdrv_activate_all()
  is re-entrant upon all-active disks - a false positive block_inactive can
  bring nothing more than "trying to activate the blocks but they're already
  active".  However let's still do it right to avoid the flag being
  inconsistent with reality.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Message-Id: <20241206230838.1111496-6-peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration/block: Make late-block-active the default
  Peter Xu, 2025-01-09 (1 file, -19/+19)

  The migration capability 'late-block-active' controls when the block drives
  will be activated.  If enabled, block drive activation is postponed until
  the VM starts: either the src runstate was "live" (RUNNING, or SUSPENDED),
  or it'll be postponed until qmp_cont().

  Let's do this unconditionally.  There's no harm in delaying activation of
  the block drives.  Meanwhile there's no ABI breakage if the dest does it,
  because src QEMU has nothing to do with it, so it's no concern on ABI
  breakage.

  IIUC we could have avoided introducing this cap when it was introduced, but
  now it's still not too late to just always do it.  The cap is now prone to
  removal, but that'll be for later patches.

  Signed-off-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Message-Id: <20241206230838.1111496-4-peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
* migration: Add helper to get target runstate
  Peter Xu, 2025-01-09 (1 file, -4/+17)

  In 99% of cases, after QEMU migrates to the dest host, it tries to detect
  the target VM runstate using global_state_get_runstate().

  There's one outlier so far, which is Xen, that won't send the global state.
  That's the major reason why the global_state_received() check was always
  there together with global_state_get_runstate().

  However it's utterly confusing why global_state_received() has anything to
  do with "let's start the VM or not".

  Provide a helper to explain it; then we have a unified entry for getting
  the target dest QEMU runstate after migration.

  Suggested-by: Fabiano Rosas <farosas@suse.de>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Message-Id: <20241206230838.1111496-2-peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
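  The helper presumably reads along these lines (a sketch based on the
  description above; the name and the RUNNING default for the Xen case are
  assumptions):

      static RunState migration_get_target_runstate(void)
      {
          /* Xen is the one outlier that never sends global state. */
          if (!global_state_received()) {
              return RUN_STATE_RUNNING;
          }
          return global_state_get_runstate();
      }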
* migration: Unexport migration_is_active()
  Avihai Horon, 2024-12-26 (1 file, -8/+8)

  After being removed from VFIO and dirty limit, migration_is_active() no
  longer has any users outside the migration subsystem, and in fact, it's
  only used in migration.c.

  Unexport it and also relocate it so it can be made static.

  Signed-off-by: Avihai Horon <avihaih@nvidia.com>
  Reviewed-by: Cédric Le Goater <clg@redhat.com>
  Acked-by: Peter Xu <peterx@redhat.com>
  Tested-by: Joao Martins <joao.m.martins@oracle.com>
  Link: https://lore.kernel.org/r/20241218134022.21264-8-avihaih@nvidia.com
  Signed-off-by: Cédric Le Goater <clg@redhat.com>