path: root/migration
...

* migration: Move compression_counters cleanup ram-compress.c
  Author: Juan Quintela   Date: 2023-10-17   Files: 2   Lines: -3/+6

  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Lukas Straub <lukasstraub2@web.de>
  Message-ID: <20230613145757.10131-6-quintela@redhat.com>

* migration: RDMA is not compatible with anything else
  Author: Juan Quintela   Date: 2023-10-17   Files: 1   Lines: -0/+12

  So give an error instead of just ignoring the other methods.

  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Reviewed-by: Lukas Straub <lukasstraub2@web.de>
  Message-ID: <20230613145757.10131-4-quintela@redhat.com>

* migration/multifd: Clarify Error usage in multifd_channel_connect
  Author: Fabiano Rosas   Date: 2023-10-17   Files: 2   Lines: -33/+30

  The function is currently called from two sites: one always gives it a
  NULL Error and the other always gives it a non-NULL Error. In the
  non-NULL case, all it does is trace the error and return. One of the
  callers already has tracing, so add a tracepoint to the other and stop
  passing the error into the function.

  Cc: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231012134343.23757-4-farosas@suse.de>

* migration/multifd: Unify multifd_send_thread error paths
  Author: Fabiano Rosas   Date: 2023-10-17   Files: 1   Lines: -9/+3

  The preferred usage of the Error type is to always set both the return
  code and the error when a failure happens. As all code called from the
  send thread follows this pattern, we'll always have the return code
  and the error set at the same time.

  Aside from the convention, in this piece of code this must be the
  case, otherwise the if (ret != 0) would be exiting the thread without
  calling multifd_send_terminate_threads(), which is incorrect.

  Unify both paths to make it clear that both are taken when there's an
  error.

  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231012134343.23757-3-farosas@suse.de>

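  As an editorial aside, the convention the message refers to can be
  sketched as follows. This is a minimal illustration assuming QEMU's
  qapi/error.h; it is not the actual multifd code:

      /* Minimal sketch of the "return code and Error set together" rule:
       * a failing callee fills in *errp and returns non-zero at the same
       * time, so "ret != 0" and "*errp is set" always agree. */
      #include "qemu/osdep.h"
      #include "qapi/error.h"

      static int send_one_packet(bool io_ok, Error **errp)
      {
          if (!io_ok) {
              error_setg(errp, "sending packet failed");
              return -1;    /* error and return code are set together */
          }
          return 0;         /* success: no error set, zero returned */
      }
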
* migration/multifd: Remove direct "socket" references
  Author: Fabiano Rosas   Date: 2023-10-17   Files: 1   Lines: -6/+16

  We're about to enable support for other transports in multifd, so
  remove direct references to sockets.

  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231012134343.23757-2-farosas@suse.de>

* migration/ram: Merge save_zero_page functions
  Author: Fabiano Rosas   Date: 2023-10-17   Files: 1   Lines: -33/+13

  We don't need to do this in two pieces. One single function makes it
  easier to grasp, especially since it removes the indirection on the
  return value handling.

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011184604.32364-7-farosas@suse.de>

* migration/ram: Move xbzrle zero page handling into save_zero_page
  Author: Fabiano Rosas   Date: 2023-10-17   Files: 1   Lines: -15/+20

  It makes a bit more sense to have the zero page handling of xbzrle
  right where we save the zero page. Also invert the exit condition to
  remove one level of indentation which makes the next patch easier to
  grasp.

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011184604.32364-6-farosas@suse.de>

* migration/ram: Stop passing QEMUFile around in save_zero_page
  Author: Fabiano Rosas   Date: 2023-10-17   Files: 1   Lines: -5/+6

  We don't need the QEMUFile when we're already passing the
  PageSearchStatus.

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011184604.32364-5-farosas@suse.de>

* migration/ram: Remove RAMState from xbzrle_cache_zero_page
  Author: Fabiano Rosas   Date: 2023-10-17   Files: 1   Lines: -3/+2

  'rs' is not used in that function. It's a leftover from commit
  9360447d34 ("ram: Use MigrationStats for statistics").

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011184604.32364-4-farosas@suse.de>

* migration/ram: Refactor precopy ram loading code
  Author: Nikolay Borisov   Date: 2023-10-17   Files: 1   Lines: -64/+82

  Extract the ramblock parsing code into a routine that operates on the
  sequence of headers from the stream, and another that parses the
  individual ramblock. This makes ram_load_precopy() easier to
  comprehend.

  Signed-off-by: Nikolay Borisov <nborisov@suse.com>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011184604.32364-3-farosas@suse.de>

* multifd: reset next_packet_len after sending pages
  Author: Elena Ufimtseva   Date: 2023-10-17   Files: 1   Lines: -0/+1

  Sometimes multifd sends just a sync packet with no pages (normal_num
  is 0). In this case the old value is preserved and accounted for,
  while only packet_len is actually transferred. Reset it to 0 after it
  has been sent and accounted for.

  Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011184358.97349-5-elena.ufimtseva@oracle.com>

* multifd: fix counters in multifd_send_thread
  Author: Elena Ufimtseva   Date: 2023-10-17   Files: 1   Lines: -4/+4

  Previous commit cbec7eb76879d419e7dbf531ee2506ec0722e825
  ("migration/multifd: Compute transferred bytes correctly") removed
  accounting for packet_len in the non-rdma case, but next_packet_size
  only accounts for the pages (normal_pages * PAGE_SIZE), not for the
  header packet that is sent as iov[0]. The packet_len part should be
  added back to account for the size of MultiFDPacket and the array of
  offsets.

  Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011184358.97349-4-elena.ufimtseva@oracle.com>

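  For illustration, the accounting the fix restores amounts to counting
  both the fixed-size packet header (iov[0]) and the page payload. The
  sketch below is standalone; the struct and field names are assumptions
  for the example, not the real MultiFDSendParams layout:

      #include <stdint.h>

      /* Assumed, simplified stand-in for the multifd per-channel state. */
      typedef struct {
          uint32_t packet_len;        /* MultiFDPacket header + offsets array */
          uint32_t next_packet_size;  /* normal pages * page size             */
      } SendChannelSketch;

      /* Both parts travel over the wire, so both must be added to the
       * transferred-bytes counter. */
      static uint64_t bytes_on_wire(const SendChannelSketch *p)
      {
          return (uint64_t)p->packet_len + p->next_packet_size;
      }
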
* migration: check for rate_limit_max for RATE_LIMIT_DISABLED
  Author: Elena Ufimtseva   Date: 2023-10-17   Files: 1   Lines: -4/+5

  In migration rate limiting, atomic operations are used to read the
  rate limit variables and the transferred bytes, and they are
  expensive. Check first if rate_limit_max is equal to
  RATE_LIMIT_DISABLED and return false immediately if so.

  Note that with this patch we will also stop flushing, by not calling
  qemu_fflush() from migration_transferred_bytes() if the migration rate
  is not exceeded. This should be fine, since the migration thread loop
  calls migration_update_counters() from migration_rate_limit(), which
  calls migration_transferred_bytes() and flushes there.

  Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011184358.97349-2-elena.ufimtseva@oracle.com>

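  The shape of the optimization is roughly the following sketch; the
  value of RATE_LIMIT_DISABLED and the function names are assumptions
  made for the illustration only:

      #include <stdbool.h>
      #include <stdint.h>

      #define RATE_LIMIT_DISABLED 0   /* assumed sentinel, illustration only */

      /* Sketch: when rate limiting is disabled, return immediately and
       * skip the expensive atomic read of the transferred-bytes counter
       * (and the flush that used to happen behind it). */
      static bool rate_exceeded_sketch(uint64_t rate_limit_max,
                                       uint64_t rate_limit_start,
                                       uint64_t (*read_transferred)(void))
      {
          if (rate_limit_max == RATE_LIMIT_DISABLED) {
              return false;               /* no atomic read, no flush */
          }
          return read_transferred() - rate_limit_start > rate_limit_max;
      }
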
* migration/rdma: Remove all "ret" variables that are used only onceJuan Quintela2023-10-171-21/+8
| | | | | | | | | | | | | | | | | | | Change code that is: int ret; ... ret = foo(); if (ret[ < 0]?) { to: if (foo()[ < 0]) { Reviewed-by: Fabiano Rosas <farosas@suse.de> Reviewed-by: Li Zhijian <lizhijian@fujitsu.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Message-ID: <20231011203527.9061-14-quintela@redhat.com>
* migration/rdma: Declare for index variables local
  Author: Juan Quintela   Date: 2023-10-17   Files: 1   Lines: -26/+18

  Declare all variables that are only used inside a for loop inside the
  for statement. This makes it clear that they are not used outside of
  the for loop.

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-13-quintela@redhat.com>

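  The transformation is mechanical; a small standalone before/after
  example of the pattern (not the actual rdma.c code):

      /* Before: the index is visible in the whole function even though
       * it is only meaningful inside the loop. */
      static void clear_before(int *wr, int n)
      {
          int idx;

          for (idx = 0; idx < n; idx++) {
              wr[idx] = 0;
          }
      }

      /* After: declaring the index in the for statement documents that
       * it is not used outside the loop. */
      static void clear_after(int *wr, int n)
      {
          for (int i = 0; i < n; i++) {
              wr[i] = 0;
          }
      }
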
* migration/rdma: Use i as for index instead of idx
  Author: Juan Quintela   Date: 2023-10-17   Files: 1   Lines: -27/+22

  Once there, all the uses are local to the for, so declare the variable
  inside the for statement.

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-12-quintela@redhat.com>

* migration/rdma: Check sooner if we are in postcopy for save_page()
  Author: Juan Quintela   Date: 2023-10-17   Files: 1   Lines: -5/+1

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-11-quintela@redhat.com>

* migration/rdma: Remove qemu_ prefix from exported functions
  Author: Juan Quintela   Date: 2023-10-17   Files: 4   Lines: -48/+46

  Functions are long enough even without this.

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-10-quintela@redhat.com>

* migration/rdma: Move rdma constants from qemu-file.h to rdma.h
  Author: Juan Quintela   Date: 2023-10-17   Files: 3   Lines: -18/+17

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-9-quintela@redhat.com>

* qemu-file: Remove QEMUFileHooks
  Author: Juan Quintela   Date: 2023-10-17   Files: 3   Lines: -19/+0

  The only user was rdma, and its use is gone.

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-8-quintela@redhat.com>

* migration/rdma: Create rdma_control_save_page()
  Author: Juan Quintela   Date: 2023-10-17   Files: 5   Lines: -35/+30

  The only user of ram_control_save_page() and the save_page() hook was
  rdma. Just move the function to rdma.c, rename it to
  rdma_control_save_page().

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-7-quintela@redhat.com>

* migration/rdma: Unfold hook_ram_load()
  Author: Juan Quintela   Date: 2023-10-17   Files: 5   Lines: -46/+18

  It is only ever called with one flag: RAM_CONTROL_BLOCK_REG.

  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-6-quintela@redhat.com>

* migration/rdma: Remove all uses of RAM_CONTROL_HOOK
  Author: Juan Quintela   Date: 2023-10-17   Files: 4   Lines: -7/+14

  Instead of going through ram_control_load_hook(), call
  qemu_rdma_registration_handle() directly.

  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-5-quintela@redhat.com>

* migration/rdma: Unfold ram_control_after_iterate()
  Author: Juan Quintela   Date: 2023-10-17   Files: 5   Lines: -22/+21

  Once there:
  - Remove unused data parameter
  - unfold it in its callers
  - change all callers to call qemu_rdma_registration_stop()
  - We need to call QIO_CHANNEL_RDMA() after we check for migrate_rdma()

  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-4-quintela@redhat.com>

* migration/rdma: Unfold ram_control_before_iterate()
  Author: Juan Quintela   Date: 2023-10-17   Files: 5   Lines: -25/+25

  Once there:
  - Remove unused data parameter
  - unfold it in its callers
  - change all callers to call qemu_rdma_registration_start()
  - We need to call QIO_CHANNEL_RDMA() after we check for migrate_rdma()

  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-3-quintela@redhat.com>

* migration: Create migrate_rdma()
  Author: Juan Quintela   Date: 2023-10-17   Files: 5   Lines: -1/+14

  Helper to say if we are doing a migration over rdma.

  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011203527.9061-2-quintela@redhat.com>

* migration: Non multifd migration don't care about multifd flushes
  Author: Juan Quintela   Date: 2023-10-17   Files: 1   Lines: -6/+9

  RDMA was having trouble because
  migrate_multifd_flush_after_each_section() can only be true or false,
  but we don't want to send any flush when we are not in a multifd
  migration.

  CC: Fabiano Rosas <farosas@suse.de>
  Fixes: 294e5a4034e81 ("multifd: Only flush once each full round of memory")
  Reported-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011205548.10571-2-quintela@redhat.com>

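  A hedged sketch of the resulting guard: only a multifd migration ever
  considers sending a flush, and only then does the per-section setting
  matter. migrate_multifd() is assumed to be the usual capability
  helper; the combined wrapper below is invented for illustration and is
  not the actual call site:

      /* Sketch only: gate every flush decision on being in a multifd
       * migration at all, so RDMA or plain precopy never emits a
       * multifd flush. */
      static bool want_flush_after_each_section(void)
      {
          return migrate_multifd() &&
                 migrate_multifd_flush_after_each_section();
      }
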
* migration: hold the BQL during setup
  Author: Fiona Ebner   Date: 2023-10-17   Files: 5   Lines: -13/+9

  This is intended to be a semantic revert of commit 9b09503752
  ("migration: run setup callbacks out of big lock"). There have been so
  many changes since that commit (e.g. a new setup callback
  dirty_bitmap_save_setup() that also needs to be adapted now), it's
  easier to do the revert manually.

  For snapshots, the bdrv_writev_vmstate() function is used during setup
  (in QIOChannelBlock backing the QEMUFile), but not holding the BQL
  while calling it could lead to an assertion failure. To understand
  how, first note the following:

  1. Generated coroutine wrappers for block layer functions spawn the
     coroutine and use AIO_WAIT_WHILE()/aio_poll() to wait for it.
  2. If the host OS switches threads at an inconvenient time, it can
     happen that a bottom half scheduled for the main thread's
     AioContext is executed as part of a vCPU thread's aio_poll().

  An example leading to the assertion failure is as follows:

  main thread:
  1. A snapshot-save QMP command gets issued.
  2. snapshot_save_job_bh() is scheduled.

  vCPU thread:
  3. aio_poll() for the main thread's AioContext is called (e.g. when
     the guest writes to a pflash device, as part of blk_pwrite which is
     a generated coroutine wrapper).
  4. snapshot_save_job_bh() is executed as part of aio_poll().
  5. qemu_savevm_state() is called.
  6. qemu_mutex_unlock_iothread() is called. Now
     qemu_get_current_aio_context() returns 0x0.
  7. bdrv_writev_vmstate() is executed during the usual savevm setup via
     qemu_fflush(). But this function is a generated coroutine wrapper,
     so it uses AIO_WAIT_WHILE. There, the assertion
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
     will fail.

  To fix it, ensure that the BQL is held during setup. While it would
  only be needed for snapshots, adapting migration too avoids additional
  logic for conditional locking/unlocking in the setup callbacks.
  Writing the header could (in theory) also trigger qemu_fflush() and
  thus bdrv_writev_vmstate(), so the locked section also covers the
  qemu_savevm_state_header() call, even for migration, for consistency.

  The section around multifd_send_sync_main() needs to be unlocked to
  avoid a deadlock. In particular, the multifd_save_setup() function
  calls socket_send_channel_create() using
  multifd_new_send_channel_async() as a callback and then waits for the
  callback to signal via the channels_ready semaphore. The connection
  happens via qio_task_run_in_thread(), but the callback is only
  executed via qio_task_thread_result(), which is scheduled for the main
  event loop. Without unlocking the section, the main thread would never
  get to process the task result and run the callback, meaning there
  would be no signal via the channels_ready semaphore.

  The comment in ram_init_bitmaps() was introduced by 4987783400
  ("migration: fix incorrect memory_global_dirty_log_start outside BQL")
  and is removed, because it referred to the qemu_mutex_lock_iothread()
  call.

  Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231013105839.415989-1-f.ebner@proxmox.com>

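  A condensed, illustrative sketch of the locking shape described in the
  last paragraphs; the surrounding function is invented, and only the
  unlock/lock pair around the multifd sync reflects the pattern the
  message describes:

      /* Illustration only: setup now runs with the BQL held, and the
       * multifd sync is wrapped in an unlock/lock pair so the main loop
       * can deliver the qio_task channel-creation callbacks (see the
       * deadlock explanation above). */
      static int save_setup_sketch(QEMUFile *f)
      {
          /* ... setup work done under the BQL ... */

          qemu_mutex_unlock_iothread();
          int ret = multifd_send_sync_main(f);
          qemu_mutex_lock_iothread();

          return ret;
      }
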
* migration: Add the configuration vmstate to the json writer
  Author: Nikolay Borisov   Date: 2023-10-17   Files: 2   Lines: -4/+17

  Make the migration json writer part of the MigrationState struct,
  allowing the 'configuration' object to be serialized to json. This
  will facilitate the parsing of the 'configuration' object in the next
  patch that fixes analyze-migration.py for arm.

  Signed-off-by: Nikolay Borisov <nborisov@suse.com>
  Signed-off-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231009184326.15777-2-farosas@suse.de>

* migration: fix RAMBlock add NULL check
  Author: Dmitry Frolov   Date: 2023-10-17   Files: 1   Lines: -0/+5

  qemu_ram_block_from_host() may return NULL, which would then be
  dereferenced without a check. The return value of this function is
  usually checked.

  Found by Linux Verification Center (linuxtesting.org) with SVACE.

  Signed-off-by: Dmitry Frolov <frolov@swemel.ru>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231010104851.802947-1-frolov@swemel.ru>

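  The fix boils down to a guard along these lines; the three-argument
  qemu_ram_block_from_host() form is the commonly used one, while the
  surrounding function and error handling are invented for the sketch:

      /* Sketch of the added guard: never dereference the result of
       * qemu_ram_block_from_host() without checking it for NULL. */
      static int resolve_block_sketch(void *host_addr)
      {
          ram_addr_t offset;
          RAMBlock *block = qemu_ram_block_from_host(host_addr, false, &offset);

          if (!block) {
              error_report("RAM block not found for host address %p",
                           host_addr);
              return -EINVAL;
          }
          /* block is safe to use from here on */
          return 0;
      }
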
* migration: Allow user to specify available switchover bandwidth
  Author: Peter Xu   Date: 2023-10-17   Files: 6   Lines: -5/+66

  Migration bandwidth is a very important value for live migration,
  because it is one of the major factors in deciding when to switch over
  to the destination in a precopy process.

  This value is currently estimated by QEMU during the whole live
  migration process by monitoring how fast we were sending the data.
  This can be the most accurate bandwidth in an ideal world, where we're
  always feeding unlimited data to the migration channel and are then
  only limited by the bandwidth that is available.

  However, in reality it may be very different, e.g., over a 10Gbps
  network we can see query-migrate showing a migration bandwidth of only
  a few tens of MB/s just because there are plenty of other things the
  migration thread might be doing. For example, the migration thread can
  be busy scanning zero pages, or it can be fetching the dirty bitmap
  from other external dirty sources (like vhost or KVM). It means we may
  not be pushing data as much as possible into the migration channel, so
  the bandwidth estimated from "how much data we sent in the channel"
  can sometimes be dramatically inaccurate.

  With that, the decision to switch over will be affected, by assuming
  that we may not be able to switch over at all with such a low
  bandwidth, when in reality we can. The migration may not even converge
  at all with the specified downtime, because of that wrong bandwidth
  estimation, keeping iterations going forever.

  The issue is that QEMU itself may not be able to avoid those
  uncertainties in measuring the real "available migration bandwidth".
  At least not something I can think of so far.

  One way to fix this is that, when the user is fully aware of the
  available bandwidth, we can allow the user to help by providing an
  accurate value. For example, if the user has a dedicated channel of
  10Gbps for migration for this specific VM, the user can specify this
  bandwidth so QEMU can always do the calculation based on this fact,
  trusting the user as long as it is specified. It may not be the exact
  bandwidth when switching over (in which case qemu will push migration
  data as fast as possible), but it is much better than QEMU trying to
  wildly guess, especially when the guess is very wrong.

  A new parameter "avail-switchover-bandwidth" is introduced just for
  this. When the user specifies this parameter, instead of trusting the
  value estimated by QEMU itself (based on the QEMUFile send speed), it
  trusts the user more, by using this value to decide when to switch
  over, assuming that we'll have such bandwidth available then.

  Note that specifying this value will not throttle the bandwidth for
  switchover yet, so QEMU will always use the full bandwidth possible
  for sending switchover data, assuming that should always be the most
  important way to use the network at that time.

  This can resolve issues like migrations that never converge, caused by
  a spuriously low "migration bandwidth" being detected for whatever
  reason.

  Reported-by: Zhiyi Guo <zhguo@redhat.com>
  Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231010221922.40638-1-peterx@redhat.com>

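  In heavily simplified terms, the switchover decision the parameter
  influences looks roughly like the sketch below; the names and the
  formula are assumptions for illustration, not the actual migration.c
  logic:

      #include <stdbool.h>
      #include <stdint.h>

      /* If the user supplied avail-switchover-bandwidth (non-zero here),
       * trust it instead of the possibly inaccurate estimate, then check
       * whether the remaining dirty data fits in the allowed downtime. */
      static bool can_switchover_sketch(uint64_t pending_bytes,
                                        uint64_t estimated_bw,        /* bytes/s */
                                        uint64_t avail_switchover_bw, /* bytes/s, 0 = unset */
                                        uint64_t downtime_limit_ms)
      {
          uint64_t bw = avail_switchover_bw ? avail_switchover_bw
                                            : estimated_bw;

          return pending_bytes <= bw * downtime_limit_ms / 1000;
      }
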
* migration: Use g_autofree to simplify ram_dirty_bitmap_reload()
  Author: Philippe Mathieu-Daudé   Date: 2023-10-17   Files: 1   Lines: -11/+6

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231011023627.86691-1-philmd@linaro.org>

* migration: refactor migration_completion
  Author: Wei Wang   Date: 2023-10-17   Files: 1   Lines: -72/+93

  The current migration_completion function is a bit long. Refactor the
  long implementation into different subfunctions:
  - migration_completion_precopy: completion code related to precopy
  - migration_completion_postcopy: completion code related to postcopy

  Rename await_return_path_close_on_source to
  close_return_path_on_source: it is renamed to match with
  open_return_path_on_source.

  This improves readability and is easier for future updates (e.g.
  adding new subfunctions when completion code related to new features
  is needed). No functional changes intended.

  Signed-off-by: Wei Wang <wei.w.wang@intel.com>
  Reviewed-by: Peter Xu <peterx@redhat.com>
  Reviewed-by: Isaku Yamahata <isaku.yamahata@intel.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230804093053.5037-1-wei.w.wang@intel.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>

* block: Mark bdrv_first_blk() and bdrv_is_root_node() GRAPH_RDLOCK
  Author: Kevin Wolf   Date: 2023-10-12   Files: 2   Lines: -0/+4

  This adds GRAPH_RDLOCK annotations to declare that callers of
  bdrv_first_blk() and bdrv_is_root_node() need to hold a reader lock
  for the graph. These functions are the only functions in
  block-backend.c that access the parent list of a node.

  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  Message-ID: <20230929145157.45443-5-kwolf@redhat.com>
  Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>

* migration: Add migration_rp_wait|kick()
  Author: Peter Xu   Date: 2023-10-11   Files: 3   Lines: -11/+34

  It's just a simple wrapper for rp_sem, on either wait() or kick(), to
  make it even clearer how it is used. It is prepared to be used for
  other things as well.

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Message-ID: <20231004220240.167175-8-peterx@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>

* migration: Remember num of ramblocks to sync during recovery
  Author: Peter Xu   Date: 2023-10-11   Files: 1   Lines: -3/+14

  Instead of only relying on the count of rp_sem, make the counter be
  part of RAMState so it can be used in both threads to synchronize on
  the process.

  rp_sem will be further reused in follow up patches, as a way to kick
  the main thread, e.g., on recovery failures.

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231004220240.167175-7-peterx@redhat.com>

* qemufile: Always return a verbose error
  Author: Peter Xu   Date: 2023-10-11   Files: 1   Lines: -3/+12

  There are a lot of cases where we only have an errno set in last_error
  but without a detailed error description. When this happens, try to
  generate an error that contains the errno as a descriptive error.

  This will be helpful in cases where one relies on the Error*; e.g.,
  the migration state only caches an Error* in MigrationState.error.
  With this, we'll display correct error messages in e.g. query-migrate
  when the error was only set by qemu_file_set_error().

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231004220240.167175-6-peterx@redhat.com>

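  The mechanism amounts to synthesizing an Error from the stored errno
  when only the errno is available. The sketch below uses QEMU's
  error_setg_errno() and error_copy(); the two stored values are passed
  in explicitly because the real QEMUFile field names are not shown
  here:

      /* Sketch only: prefer the cached Error object, otherwise build a
       * descriptive one out of the bare (negative) errno value. */
      static Error *file_error_sketch(int last_error,
                                      const Error *last_error_obj)
      {
          Error *err = NULL;

          if (last_error_obj) {
              return error_copy(last_error_obj);
          }
          if (last_error) {
              error_setg_errno(&err, -last_error, "Channel error");
          }
          return err;
      }
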
* migration: Introduce migrate_has_error()
  Author: Peter Xu   Date: 2023-10-11   Files: 2   Lines: -0/+8

  Introduce a helper to detect whether MigrationState.error is set for
  whatever reason.

  This is preparation work for any thread (e.g. the source return path
  thread) to set up errors in a unified way in MigrationState, rather
  than relying on its own way to set errors (mark_source_rp_bad()).

  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231004220240.167175-3-peterx@redhat.com>

* migration: Display error in query-migrate irrelevant of status
  Author: Peter Xu   Date: 2023-10-11   Files: 1   Lines: -3/+5

  Display it as long as it is set, regardless of the FAILED status.
  E.g., it may also be applicable to the PAUSED stage of postcopy, to
  provide a hint on what has gone wrong.

  The error_mutex seems to have been overlooked when referencing the
  error; take it to be very safe.

  This will change QAPI behavior by showing the error message in states
  other than FAILED, but this is intended and isn't expected to break
  anyone.

  Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2018404
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Peter Xu <peterx@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20231004220240.167175-2-peterx@redhat.com>

* migration/rdma: Replace flawed device detail dump by tracing
  Author: Markus Armbruster   Date: 2023-10-11   Files: 2   Lines: -15/+10

  qemu_rdma_dump_id() dumps RDMA device details to stdout.
  rdma_start_outgoing_migration() calls it via qemu_rdma_source_init()
  and qemu_rdma_resolve_host() to show source device details.
  rdma_start_incoming_migration() arranges its call via
  rdma_accept_incoming_migration() and qemu_rdma_accept() to show
  destination device details.

  Two issues:

  1. rdma_start_outgoing_migration() can run in HMP context. The
     information should arguably go to the monitor, not stdout.

  2. ibv_query_port() failure is reported as error. Its callers remain
     unaware of this failure (qemu_rdma_dump_id() can't fail), so
     reporting this to the user as an error is problematic.

  Fixable, but the device detail dump is noise, except when
  troubleshooting. Tracing is a better fit. Similar function
  qemu_rdma_dump_gid() was converted to tracing in commit 733252deb8b
  (Tracify migration/rdma.c). Convert qemu_rdma_dump_id(), too.

  While there, touch up qemu_rdma_dump_gid()'s outdated comment.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-54-armbru@redhat.com>

* migration/rdma: Use error_report() & friends instead of stderr
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -23/+21

  error_report() obeys -msg, reports the current error location if any,
  and reports to the current monitor if any. Reporting to stderr
  directly with fprintf() or perror() is wrong, because it loses all
  this. Fix the offenders.

  Bonus: resolves a FIXME about problematic use of errno.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-53-armbru@redhat.com>

* migration/rdma: Downgrade qemu_rdma_cleanup() errors to warnings
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -2/+2

  Functions that use an Error **errp parameter to return errors should
  not also report them to the user, because reporting is the caller's
  job. When the caller does, the error is reported twice. When it
  doesn't (because it recovered from the error), there is no error to
  report, i.e. the report is bogus.

  qemu_rdma_source_init(), qemu_rdma_connect(),
  rdma_start_incoming_migration(), and rdma_start_outgoing_migration()
  violate this principle: they call error_report() via
  qemu_rdma_cleanup().

  Moreover, qemu_rdma_cleanup() can't fail. It is called on error paths,
  and QIOChannel close and finalization. Are the conditions it reports
  really errors? I doubt it.

  Downgrade qemu_rdma_cleanup()'s errors to warnings.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Fabiano Rosas <farosas@suse.de>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-52-armbru@redhat.com>

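  The principle restated throughout this series can be shown with a tiny
  contrast; the function names below are invented for illustration:

      /* Wrong: reports *and* returns the error. A caller that recovers
       * has produced a bogus report; a caller that reports produces it
       * twice. */
      static int open_thing_bad(const char *name, Error **errp)
      {
          error_report("cannot open %s", name);
          error_setg(errp, "cannot open %s", name);
          return -1;
      }

      /* Right: only fill in *errp; whether and how to report is the
       * caller's job. */
      static int open_thing_good(const char *name, Error **errp)
      {
          error_setg(errp, "cannot open %s", name);
          return -1;
      }
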
* migration/rdma: Silence qemu_rdma_register_and_get_keys()
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -9/+0

  Functions that use an Error **errp parameter to return errors should
  not also report them to the user, because reporting is the caller's
  job. When the caller does, the error is reported twice. When it
  doesn't (because it recovered from the error), there is no error to
  report, i.e. the report is bogus.

  qemu_rdma_write_one() violates this principle: it reports errors to
  stderr via qemu_rdma_register_and_get_keys(). I elected not to
  investigate how callers handle the error, i.e. precise impact is not
  known.

  Clean this up: silence qemu_rdma_register_and_get_keys(). I believe
  the caller's error reports suffice. If they don't, we need to convert
  to Error instead.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-51-armbru@redhat.com>

* migration/rdma: Silence qemu_rdma_block_for_wrid()
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -16/+0

  Functions that use an Error **errp parameter to return errors should
  not also report them to the user, because reporting is the caller's
  job. When the caller does, the error is reported twice. When it
  doesn't (because it recovered from the error), there is no error to
  report, i.e. the report is bogus.

  qemu_rdma_post_send_control(), qemu_rdma_exchange_get_response(), and
  qemu_rdma_write_one() violate this principle: they call
  error_report(), fprintf(stderr, ...), and perror() via
  qemu_rdma_block_for_wrid(), qemu_rdma_poll(), and
  qemu_rdma_wait_comp_channel(). I elected not to investigate how
  callers handle the error, i.e. precise impact is not known.

  Clean this up by dropping the error reporting from qemu_rdma_poll(),
  qemu_rdma_wait_comp_channel(), and qemu_rdma_block_for_wrid(). I
  believe the callers' error reports suffice. If they don't, we need to
  convert to Error instead.

  Bonus: resolves a FIXME about problematic use of errno.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-50-armbru@redhat.com>

* migration/rdma: Don't report received completion events as error
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -2/+2

  When qemu_rdma_wait_comp_channel() receives an event from the
  completion channel, it reports an error "receive cm event while wait
  comp channel,cm event is T", where T is the numeric event type.

  However, the function fails only when T is a disconnect or device
  removal. Events other than these two are not actually an error, and
  reporting them as an error is wrong.

  If we need to report them to the user, we should use something else,
  and what to use depends on why we need to report them to the user.

  For now, report this error only when the function actually fails.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-49-armbru@redhat.com>

* migration/rdma: Silence qemu_rdma_reg_control()
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -1/+0

  Functions that use an Error **errp parameter to return errors should
  not also report them to the user, because reporting is the caller's
  job. When the caller does, the error is reported twice. When it
  doesn't (because it recovered from the error), there is no error to
  report, i.e. the report is bogus.

  qemu_rdma_source_init() and qemu_rdma_accept() violate this principle:
  they call error_report() via qemu_rdma_reg_control(). I elected not to
  investigate how callers handle the error, i.e. precise impact is not
  known.

  Clean this up by dropping the error reporting from
  qemu_rdma_reg_control(). I believe the callers' error reports suffice.
  If they don't, we need to convert to Error instead.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-48-armbru@redhat.com>

* migration/rdma: Silence qemu_rdma_connect()
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -10/+4

  Functions that use an Error **errp parameter to return errors should
  not also report them to the user, because reporting is the caller's
  job. When the caller does, the error is reported twice. When it
  doesn't (because it recovered from the error), there is no error to
  report, i.e. the report is bogus.

  qemu_rdma_connect() violates this principle: it calls error_report()
  and perror(). I elected not to investigate how callers handle the
  error, i.e. precise impact is not known.

  Clean this up: replace perror() by changing error_setg() to
  error_setg_errno(), and drop error_report(). I believe the callers'
  error reports suffice then. If they don't, we need to convert to Error
  instead.

  Bonus: resolves a FIXME about problematic use of errno.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-47-armbru@redhat.com>

* migration/rdma: Silence qemu_rdma_resolve_host()
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -1/+0

  Functions that use an Error **errp parameter to return errors should
  not also report them to the user, because reporting is the caller's
  job. When the caller does, the error is reported twice. When it
  doesn't (because it recovered from the error), there is no error to
  report, i.e. the report is bogus.

  qemu_rdma_resolve_host() violates this principle: it calls
  error_report().

  Clean this up: drop error_report().

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-46-armbru@redhat.com>

* migration/rdma: Convert qemu_rdma_alloc_pd_cq() to Error
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -13/+9

  Functions that use an Error **errp parameter to return errors should
  not also report them to the user, because reporting is the caller's
  job. When the caller does, the error is reported twice. When it
  doesn't (because it recovered from the error), there is no error to
  report, i.e. the report is bogus.

  qemu_rdma_source_init() violates this principle: it calls
  error_report() via qemu_rdma_alloc_pd_cq(). I elected not to
  investigate how callers handle the error, i.e. precise impact is not
  known.

  Clean this up by converting qemu_rdma_alloc_pd_cq() to Error. The
  conversion loses a piece of advice on one of two failure paths:

      Your mlock() limits may be too low. Please check $ ulimit -a # and
      search for 'ulimit -l' in the output

  Not worth retaining.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-45-armbru@redhat.com>

* migration/rdma: Convert qemu_rdma_post_recv_control() to Error
  Author: Markus Armbruster   Date: 2023-10-11   Files: 1   Lines: -12/+10

  Just for symmetry with qemu_rdma_post_send_control(). Error messages
  lose detail I consider of no use to users.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
  Reviewed-by: Juan Quintela <quintela@redhat.com>
  Signed-off-by: Juan Quintela <quintela@redhat.com>
  Message-ID: <20230928132019.2544702-44-armbru@redhat.com>