path: root/target/i386/kvm/hyperv.c
* cpus: properly kick CPUs out of inner execution loop (Paolo Bonzini, 2025-09-17, 1 file, -1/+0)
  Now that cpu_exit() actually kicks all accelerators, use it whenever the
  message to another thread is processed in qemu_wait_io_event().

  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
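A minimal sketch of the pattern this commit describes, not the actual QEMU diff: a thread posts a message for another vCPU and then uses cpu_exit() so that vCPU leaves its inner execution loop and services the message in qemu_wait_io_event(). The flag `my_pending_request` and the helper name are hypothetical.

```c
#include "qemu/osdep.h"
#include "hw/core/cpu.h"

/* Hypothetical flag checked by the target vCPU in qemu_wait_io_event(). */
static bool my_pending_request;

static void signal_vcpu(CPUState *cpu)
{
    qatomic_set(&my_pending_request, true);

    /*
     * cpu_exit() raises the exit request and kicks the vCPU regardless of
     * the accelerator in use (KVM, TCG, ...), so it drops out of the inner
     * execution loop and processes pending work in qemu_wait_io_event().
     */
    cpu_exit(cpu);
}
```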
* exec/cpu-all: remove exec/target_page include (Pierrick Bouvier, 2025-04-23, 1 file, -0/+1)
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* target/i386: Make sure SynIC state is really updated before KVM_RUN (Vitaly Kuznetsov, 2024-10-17, 1 file, -0/+1)
  'hyperv_synic' test from KVM unittests was observed to be flaky on certain
  hardware (hangs sometimes). Debugging shows that the problem happens in
  hyperv_sint_route_new() when the test tries to set up a new SynIC route.
  The function bails out on:

    if (!synic->sctl_enabled) {
        goto cleanup;
    }

  but the test writes to HV_X64_MSR_SCONTROL just before it starts
  establishing SINT routes. Further investigation shows that synic_update()
  (called from async_synic_update()) happens after the SINT setup attempt
  and not before.

  Apparently, the comment before async_safe_run_on_cpu() in
  kvm_hv_handle_exit() does not correctly describe the guarantees
  async_safe_run_on_cpu() gives. In particular, async work added to a CPU is
  actually processed from qemu_wait_io_event(), which is not always called
  before KVM_RUN: kvm_cpu_exec() checks whether an exit request is pending
  for a CPU and, if not, keeps running the vCPU until it meets an exit it
  can't handle internally. Hyper-V specific MSR writes do not automatically
  trigger an exit.

  Fix the issue by simply raising an exit request for the vCPU where the
  SynIC update was queued. This is not a performance critical path as SynIC
  state does not get updated so often (and async_safe_run_on_cpu() is a big
  hammer anyways).

  Reported-by: Jan Richter <jarichte@redhat.com>
  Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
  Link: https://lore.kernel.org/r/20240917160051.2637594-4-vkuznets@redhat.com
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
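A simplified sketch of the fix described above, assuming the exit request is raised with cpu_exit(); the actual one-line change may use a different kick primitive, and the worker body here is a stand-in for the real async_synic_update().

```c
#include "qemu/osdep.h"
#include "hw/core/cpu.h"

/* Stand-in for the real async_synic_update() worker. */
static void async_synic_update(CPUState *cs, run_on_cpu_data data)
{
    /* ... update SynIC state under the BQL ... */
}

static void queue_synic_update(CPUState *cs)
{
    /*
     * The update still runs via the "safe work" mechanism, which is only
     * serviced from qemu_wait_io_event().
     */
    async_safe_run_on_cpu(cs, async_synic_update, RUN_ON_CPU_NULL);

    /*
     * Raise an exit request so the vCPU does not re-enter KVM_RUN before
     * the queued work has run; otherwise kvm_cpu_exec() may keep running
     * the guest while the SynIC state is still stale.
     */
    cpu_exit(cs);
}
```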
* target-i386: hyper-v: Correct kvm_hv_handle_exit return value (donsheng, 2024-05-22, 1 file, -1/+1)
  This bug fix addresses the incorrect return value of kvm_hv_handle_exit
  for KVM_EXIT_HYPERV_SYNIC, which should be EXCP_INTERRUPT.

  Handling of KVM_EXIT_HYPERV_SYNIC in QEMU needs to be synchronous. This
  means that async_synic_update should run in the current QEMU vCPU thread
  before returning to KVM; returning EXCP_INTERRUPT guarantees this.
  Returning 0 can cause async_synic_update to run asynchronously.

  One problem (kvm-unit-tests's hyperv_synic test fails with a timeout
  error) caused by this bug: When a guest VM writes to the
  HV_X64_MSR_SCONTROL MSR to enable Hyper-V SynIC, a VM exit is triggered
  and processed by the kvm_hv_handle_exit function of the QEMU vCPU. This
  function then calls the async_synic_update function to set
  synic->sctl_enabled to true. A true value of synic->sctl_enabled is
  required before creating SINT routes using the hyperv_sint_route_new()
  function.

  If kvm_hv_handle_exit returns 0 for KVM_EXIT_HYPERV_SYNIC, the current
  QEMU vCPU thread may return to KVM and enter the guest VM before running
  async_synic_update. In such a case, the hyperv_synic test's subsequent
  call to synic_ctl(HV_TEST_DEV_SINT_ROUTE_CREATE, ...) immediately after
  writing to HV_X64_MSR_SCONTROL can cause QEMU's hyperv_sint_route_new()
  function to return prematurely (because synic->sctl_enabled is false). If
  the SINT route is not created successfully, the SINT interrupt will not be
  fired, resulting in a timeout error in the hyperv_synic test.

  Fixes: 267e071bd6d6 ("hyperv: make overlay pages for SynIC")
  Suggested-by: Chao Gao <chao.gao@intel.com>
  Signed-off-by: Dongsheng Zhang <dongsheng.x.zhang@intel.com>
  Message-ID: <20240521200114.11588-1-dongsheng.x.zhang@intel.com>
  Cc: qemu-stable@nongnu.org
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
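A simplified sketch of the exit handler, not the verbatim QEMU code: the only point illustrated is that the KVM_EXIT_HYPERV_SYNIC case returns EXCP_INTERRUPT rather than 0, which forces the vCPU thread back to its outer loop so the queued async_synic_update() runs before the next KVM_RUN.

```c
int kvm_hv_handle_exit(X86CPU *cpu, struct kvm_hyperv_exit *exit)
{
    switch (exit->type) {
    case KVM_EXIT_HYPERV_SYNIC:
        /* ... validate the SynIC MSR write and queue async_synic_update() ... */
        return EXCP_INTERRUPT;   /* was: return 0; */
    default:
        return -1;
    }
}
```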
* vmbus: Print a warning when enabled without the recommended set of features (Maciej S. Szmigiero, 2024-03-08, 1 file, -0/+5)
  Some Windows versions crash at boot or fail to enable the VMBus device if
  they don't see the expected set of Hyper-V features (enlightenments).

  Since this provides a poor user experience, let's warn the user if the
  VMBus device is enabled without the recommended set of Hyper-V features.

  The recommended set is the minimum set of Hyper-V features required to
  make the VMBus device work properly in Windows Server versions 2016, 2019
  and 2022.

  Acked-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
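An illustrative sketch of such a check, assuming a subset of Hyper-V enlightenments; the exact recommended feature list and the place where QEMU performs the check are assumptions, not the literal committed code.

```c
#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "cpu.h"

/* Warn if VMBus is enabled but some recommended enlightenments are missing. */
static void warn_if_vmbus_without_recommended_feats(X86CPU *cpu)
{
    if (!hyperv_feat_enabled(cpu, HYPERV_FEAT_VPINDEX) ||
        !hyperv_feat_enabled(cpu, HYPERV_FEAT_RUNTIME) ||
        !hyperv_feat_enabled(cpu, HYPERV_FEAT_TIME) ||
        !hyperv_feat_enabled(cpu, HYPERV_FEAT_SYNIC) ||
        !hyperv_feat_enabled(cpu, HYPERV_FEAT_STIMER)) {
        warn_report("VMBus enabled without the recommended set of Hyper-V "
                    "features; some Windows versions may fail to boot");
    }
}
```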
* system/cpus: rename qemu_mutex_lock_iothread() to bql_lock() (Stefan Hajnoczi, 2024-01-08, 1 file, -2/+2)
  The Big QEMU Lock (BQL) has many names and they are confusing. The actual
  QemuMutex variable is called qemu_global_mutex but it's commonly referred
  to as the BQL in discussions and some code comments. The locking APIs,
  however, are called qemu_mutex_lock_iothread() and
  qemu_mutex_unlock_iothread().

  The "iothread" name is historic and comes from when the main thread was
  split into KVM vCPU threads and the "iothread" (now called the main loop
  thread). I have contributed to the confusion myself by introducing a
  separate --object iothread, a separate concept unrelated to the BQL.

  The "iothread" name is no longer appropriate for the BQL. Rename the
  locking APIs to:
  - void bql_lock(void)
  - void bql_unlock(void)
  - bool bql_locked(void)

  There are more APIs with "iothread" in their names. Subsequent patches
  will rename them. There are also comments and documentation that will be
  updated in later patches.

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paul Durrant <paul@xen.org>
  Acked-by: Fabiano Rosas <farosas@suse.de>
  Acked-by: David Woodhouse <dwmw@amazon.co.uk>
  Reviewed-by: Cédric Le Goater <clg@kaod.org>
  Acked-by: Peter Xu <peterx@redhat.com>
  Acked-by: Eric Farman <farman@linux.ibm.com>
  Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
  Acked-by: Hyman Huang <yong.huang@smartx.com>
  Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
  Message-id: 20240102153529.486531-2-stefanha@redhat.com
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
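A minimal sketch of the renamed API in use; the surrounding function is illustrative only, the lock calls are the ones listed in the commit message.

```c
#include "qemu/osdep.h"
#include "qemu/main-loop.h"

static void touch_global_state(void)
{
    /* Before this series:
     *     qemu_mutex_lock_iothread();
     *     ...
     *     qemu_mutex_unlock_iothread();
     */
    bql_lock();
    assert(bql_locked());
    /* ... code that must run under the Big QEMU Lock ... */
    bql_unlock();
}
```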
* hyperv: fix SynIC SINT assertion failure on guest reset (Maciej S. Szmigiero, 2022-10-18, 1 file, -0/+4)
  Resetting a guest that has Hyper-V VMBus support enabled triggers a QEMU
  assertion failure:
  hw/hyperv/hyperv.c:131: synic_reset: Assertion `QLIST_EMPTY(&synic->sint_routes)' failed.

  This happens both on normal guest reboot and when using the "system_reset"
  HMP command.

  The failing assertion was introduced by commit 64ddecc88bcf ("hyperv:
  SControl is optional to enable SynIc") to catch dangling SINT routes on
  SynIC reset. The root cause of this problem is that the SynIC itself is
  reset before devices using SINT routes have a chance to clean up these
  routes.

  Since there seems to be no existing mechanism to force reset callbacks (or
  methods) to be executed in a specific order, let's use a similar method
  that is already used to reset another interrupt controller (APIC) after
  devices have been reset: invoke the SynIC reset from the machine reset
  handler via a new x86_cpu_after_reset() function co-located with the
  existing x86_cpu_reset() in target/i386/cpu.c. Opportunistically move the
  APIC reset handler there, too.

  Fixes: 64ddecc88bcf ("hyperv: SControl is optional to enable SynIc") # exposed the bug
  Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Message-Id: <cb57cee2e29b20d06f81dce054cbcea8b5d497e8.1664552976.git.maciej.szmigiero@oracle.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
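A sketch of the ordering this commit establishes, with a hypothetical `reset_all_devices()` standing in for the machine's device reset pass; only the ordering (devices first, APIC/SynIC afterwards) is the point being illustrated.

```c
#include "qemu/osdep.h"
#include "hw/core/cpu.h"
#include "cpu.h"

static void reset_all_devices(void);   /* hypothetical stand-in */

static void machine_reset_sketch(void)
{
    CPUState *cs;

    /* Devices are reset first; VMBus devices tear down their SINT routes. */
    reset_all_devices();

    CPU_FOREACH(cs) {
        /*
         * Only then are APIC and SynIC reset, so synic_reset() sees an empty
         * SINT route list and the assertion no longer fires.
         */
        x86_cpu_after_reset(X86_CPU(cs));
    }
}
```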
* hyperv: Add support to process syndbg commands (Jon Doron, 2022-04-06, 1 file, -3/+49)
  SynDbg commands can come from two different flows:
  1. Hypercalls; in this mode the data being sent is fully encapsulated
     network packets.
  2. SynDbg specific MSRs; in this mode only the data that needs to be
     transferred is passed.

  Signed-off-by: Jon Doron <arilou@gmail.com>
  Reviewed-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
  Message-Id: <20220216102500.692781-4-arilou@gmail.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
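A hypothetical dispatch sketch for the two flows listed above; the helper functions `syndbg_handle_packet()` and `syndbg_handle_msr()` are illustrative names, not the actual QEMU functions.

```c
static int handle_syndbg_request(X86CPU *cpu, struct kvm_hyperv_exit *exit)
{
    switch (exit->type) {
    case KVM_EXIT_HYPERV_HCALL:
        /* Flow 1: hypercall; payload is a fully encapsulated network packet. */
        return syndbg_handle_packet(cpu, exit);   /* hypothetical */
    case KVM_EXIT_HYPERV_SYNDBG:
        /* Flow 2: SynDbg-specific MSR access; only the raw data is passed. */
        return syndbg_handle_msr(cpu, exit);      /* hypothetical */
    default:
        return -1;
    }
}
```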
* i386: move kvm accel files into kvm/ (Claudio Fontana, 2020-12-16, 1 file, -0/+101)
  Signed-off-by: Claudio Fontana <cfontana@suse.de>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20201212155530.23098-2-cfontana@suse.de>
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>