path: root/rust/qemu-api-macros/src/utils.rs
2025-01-28  rust: pl011: drop use of ControlFlow  (Paolo Bonzini; 1 file, -21/+18)
It is a poor match for what the code is doing, anyway. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-28  rust: pl011: pull device-specific code out of MemoryRegionOps callbacks  (Paolo Bonzini; 2 files, -26/+15)
read() can now return a simple u64. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-28  rust: pl011: remove duplicate definitions  (Paolo Bonzini; 2 files, -49/+29)
Unify the "Interrupt" enum and the "INT_*" constants with a struct that contains the bits. The "int_level" and "int_enabled" fields could use a crate such as "bitflags". Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-28  rust: pl011: wrap registers with BqlRefCell  (Paolo Bonzini; 2 files, -22/+32)
This is a step towards making memory ops use a shared reference to the device type; it's not yet possible due to the calls to character device functions. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
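A rough sketch of the resulting layout, using std::cell::RefCell as a stand-in for BqlRefCell (which additionally checks that the big QEMU lock is held); field names are illustrative:
```rust
use std::cell::RefCell;

// Mutable register state goes behind the cell...
struct PL011Registers {
    flags: u32,
}

// ...while the rest of the device state stays outside it.
struct PL011State {
    regs: RefCell<PL011Registers>,
    // interrupts, CharBackend, ... (still accessed mutably for now)
}

impl PL011State {
    // MMIO callbacks can then take &self and borrow the registers
    // only for the duration of a single access.
    fn read(&self, _offset: u64) -> u64 {
        u64::from(self.regs.borrow().flags)
    }
}

fn main() {
    let dev = PL011State { regs: RefCell::new(PL011Registers { flags: 0x90 }) };
    assert_eq!(dev.read(0x18), 0x90);
}
```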
2025-01-27  rust: pl011: extract PL011Registers  (Paolo Bonzini; 2 files, -127/+166)
Pull all the mutable fields of PL011State into a separate struct. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-27  rust: pl011: pull interrupt updates out of read/write ops  (Paolo Bonzini; 1 file, -36/+48)
qemu_irqs are not part of the vmstate; therefore they will remain in PL011State. Update them if needed after regs_read()/regs_write(). Apply #[must_use] to functions that return whether the interrupt state could have changed, so that it's harder to forget the call to update(). Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
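A minimal sketch of the pattern (hypothetical names, not the actual pl011 functions): the register-level helper reports whether the interrupt state may have changed, and #[must_use] makes it hard to drop that result on the floor.
```rust
struct PL011Registers {
    int_level: u32,
    int_enabled: u32,
}

impl PL011Registers {
    // Returns true if the interrupt lines may need to be recomputed.
    #[must_use]
    fn regs_write(&mut self, enabled: u32) -> bool {
        let changed = self.int_enabled != enabled;
        self.int_enabled = enabled;
        changed
    }
}

struct PL011State {
    regs: PL011Registers,
    // qemu_irq lines stay here, outside the migrated register state
}

impl PL011State {
    fn update(&self) {
        // raise/lower the qemu_irq lines from the pending interrupt state
        let _pending = self.regs.int_level & self.regs.int_enabled;
    }

    fn write(&mut self, enabled: u32) {
        if self.regs.regs_write(enabled) {
            self.update();
        }
    }
}

fn main() {
    let mut dev = PL011State { regs: PL011Registers { int_level: 0, int_enabled: 0 } };
    dev.write(0x10);
}
```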
2025-01-27  rust: pl011: extract CharBackend receive logic into a separate function  (Paolo Bonzini; 1 file, -6/+9)
Prepare for moving all references to the registers and the FIFO into a separate struct. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-27  rust: pl011: extract conversion to RegisterOffset  (Paolo Bonzini; 2 files, -60/+79)
As an added bonus, this also makes the new function return u32 instead of u64, thus factoring some casts into a single place. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: pl011: hide unnecessarily "pub" items from outside pl011::device  (Paolo Bonzini; 3 files, -7/+10)
The only public interfaces for pl011 are TYPE_PL011 and pl011_create. Remove pub from everything else. Note: the "allow(dead_code)" is removed later. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: pl011: remove unnecessary "extern crate"  (Paolo Bonzini; 1 file, -4/+0)
Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: prefer NonNull::new to assertions  (Paolo Bonzini; 5 files, -47/+35)
Do not use new_unchecked; the effect is the same, but the code is easier to read and unsafe regions become smaller. Likewise, NonNull::new can be used instead of an assertion, followed by as_ref() or as_mut() instead of dereferencing the pointer. Suggested-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
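The pattern looks roughly like this (stand-in types and a made-up callback; the real code operates on device pointers handed over from C):
```rust
use std::ptr::NonNull;

struct PL011State {
    read_trigger: u32,
}

// Callback invoked from C with an opaque pointer to the device.
unsafe fn can_receive(opaque: *mut PL011State) -> u32 {
    // NonNull::new replaces an assert!(!opaque.is_null()) followed by a
    // raw dereference, and keeps the unsafe region small.
    let dev = NonNull::new(opaque).expect("null pointer passed to can_receive");
    unsafe { dev.as_ref().read_trigger }
}

fn main() {
    let mut dev = PL011State { read_trigger: 1 };
    assert_eq!(unsafe { can_receive(&mut dev) }, 1);
}
```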
2025-01-23  rust: vmstate: make order of parameters consistent in vmstate_clock  (Paolo Bonzini; 2 files, -2/+2)
Place struct_name before field_name, similar to offset_of. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: vmstate: remove translation of C vmstate macros  (Paolo Bonzini; 1 file, -251/+23)
Keep vmstate_clock!; because it uses a field of type VMStateDescription, it cannot be converted to the VMState trait without access to the const_refs_static feature. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: pl011: switch vmstate to new-style macros  (Paolo Bonzini; 3 files, -19/+26)
Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: qemu_api: add vmstate_struct  (Paolo Bonzini; 1 file, -0/+33)
It is not type safe, but it's the best that can be done without const_refs_static. It can also be used with BqlCell and BqlRefCell. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: vmstate: add public utility macros to implement VMState  (Paolo Bonzini; 1 file, -3/+58)
Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: vmstate: implement VMState for scalar types  (Paolo Bonzini; 1 file, -2/+126)
Scalar types are those that have their own VMStateInfo. This poses a problem in that references to VMStateInfo can only be included in associated consts starting with Rust 1.83.0, when the const_refs_static feature was stabilized. Removing the requirement is done by placing a limited list of VMStateInfos in an enum, and going from enum to &VMStateInfo only when building the VMStateField. The same thing cannot be done with VMS_STRUCT because the set of VMStateDescriptions extends to structs defined by the devices. Therefore, structs and cells cannot yet use vmstate_of!. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
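A sketch of the workaround, with made-up names and only two entries (the real qemu_api code covers many more types and builds a full VMStateField):
```rust
// Stand-in for the C struct; only the part needed for the sketch.
struct VMStateInfo { name: &'static str }

static INFO_UINT32: VMStateInfo = VMStateInfo { name: "uint32" };
static INFO_BOOL: VMStateInfo = VMStateInfo { name: "bool" };

// The trait stores this small enum instead of a &'static VMStateInfo,
// so the associated const does not need const_refs_static.
#[derive(Clone, Copy)]
enum VMStateFieldType { Uint32, Bool }

impl VMStateFieldType {
    // The enum -> reference step happens only when the field is materialized.
    fn info(self) -> &'static VMStateInfo {
        match self {
            VMStateFieldType::Uint32 => &INFO_UINT32,
            VMStateFieldType::Bool => &INFO_BOOL,
        }
    }
}

trait VMState {
    const TYPE: VMStateFieldType;
}

impl VMState for u32 { const TYPE: VMStateFieldType = VMStateFieldType::Uint32; }
impl VMState for bool { const TYPE: VMStateFieldType = VMStateFieldType::Bool; }

fn vmstate_info_of<T: VMState>() -> &'static VMStateInfo {
    T::TYPE.info()
}

fn main() {
    assert_eq!(vmstate_info_of::<u32>().name, "uint32");
}
```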
2025-01-23  rust: vmstate: implement Zeroable for VMStateField  (Paolo Bonzini; 2 files, -15/+34)
This shortens the constants a bit. Do not bother using it in the vmstate macros, since most of them will go away soon. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
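The idea, sketched with a reduced VMStateField (the real field list is much longer): a trait provides an all-zero constant, and struct-update syntax keeps each constant down to the interesting fields.
```rust
trait Zeroable: Sized {
    const ZERO: Self;
}

#[derive(Clone, Copy, Debug)]
struct VMStateField {
    size: usize,
    num: i32,
    offset: usize,
}

impl Zeroable for VMStateField {
    const ZERO: Self = VMStateField { size: 0, num: 0, offset: 0 };
}

// Only the non-zero fields are spelled out.
const U32_FIELD: VMStateField = VMStateField {
    size: 4,
    ..Zeroable::ZERO
};

fn main() {
    println!("{:?}", U32_FIELD);
}
```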
2025-01-23  rust: vmstate: add varray support to vmstate_of!  (Paolo Bonzini; 1 file, -2/+40)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust: vmstate: implement VMState for non-leaf types  (Paolo Bonzini; 1 file, -1/+78)
Arrays, pointers and cells use a VMStateField that is based on that for the inner type. The implementation therefore delegates to the VMState implementation of the inner type. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
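A sketch of the delegation for arrays (simplified stand-ins; the real implementation fills in a complete VMStateField and also covers pointers and cells):
```rust
// Reduced stand-in for VMStateField: just an element size and count.
#[derive(Clone, Copy)]
struct VMStateField {
    size: usize,
    num: usize,
}

trait VMState {
    const BASE: VMStateField;
}

impl VMState for u32 {
    const BASE: VMStateField = VMStateField { size: 4, num: 1 };
}

// An array is described by taking the inner type's field and
// adjusting the element count.
impl<T: VMState, const N: usize> VMState for [T; N] {
    const BASE: VMStateField = VMStateField {
        size: T::BASE.size,
        num: N,
    };
}

fn main() {
    assert_eq!(<[u32; 8] as VMState>::BASE.num, 8);
}
```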
2025-01-23  rust: vmstate: add new type safe implementation  (Paolo Bonzini; 2 files, -6/+109)
The existing translation of the C macros for vmstate does not make any attempt to type-check vmstate declarations against the struct, so introduce a new system that computes VMStateField based on the actual struct declaration. Macros do not have full access to the type system; therefore, a full implementation of this scheme requires a helper trait to analyze the type and produce a VMStateField from it. A macro "vmstate_of!" accepts arguments similar to "offset_of!" and tricks the compiler into looking up the trait for the right type. The patch introduces not just vmstate_of!, but also the slightly too clever enabling macro call_func_with_field!. The particular trick used here was proposed on the users.rust-lang.org forum, so I take no merit and all the blame. Introduce the trait and some functions to access it; the actual implementation comes later. Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
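The inference trick can be sketched as follows (heavily simplified, with made-up names; the real call_func_with_field!/vmstate_of! machinery is more involved and works in const context):
```rust
trait VMState {
    // Stand-in for the per-type VMStateField data.
    const KIND: &'static str;
}

impl VMState for u64 { const KIND: &'static str = "u64"; }

// The projection function is never called; it only maps a struct reference
// to a field reference, letting the compiler deduce the field type F and
// pick the right VMState implementation.
fn field_kind<S, F: VMState>(_project: fn(&S) -> &F) -> &'static str {
    F::KIND
}

struct Timer {
    ticks: u64,
}

fn main() {
    // The real macro hides this behind a vmstate_of!(Timer, ticks)-style call.
    fn project_ticks(t: &Timer) -> &u64 { &t.ticks }
    let kind = field_kind(project_ticks);
    assert_eq!(kind, "u64");
}
```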
2025-01-23  memattrs: Check the size of MemTxAttrs  (Zhao Liu; 1 file, -0/+2)
Make sure MemTxAttrs is packed into 8 bytes and does not grow beyond that. Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121151322.171832-3-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  memattrs: Convert unspecified member to bool  (Zhao Liu; 1 file, -7/+12)
Convert the `unspecified` member of MemTxAttrs from a bit field to bool, so that bindgen can generate a more ergonomic Rust binding with the bool type. As a result, MemTxAttrs needs to be expanded from 4 bytes to 8 bytes. Therefore, move `unspecified` to after the bit fields and add reserved members to ensure that the whole structure is packed into 8 bytes. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121151322.171832-2-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust/pl011: Avoid bindings::*  (Zhao Liu; 1 file, -3/+10)
List all the necessary bindings to better identify gaps in rust/qapi, and include the bindings wrapped by rust/qapi instead of mapping the raw bindings directly. Inspired-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121140457.84631-3-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  rust/qdev: Make REALIZE safe  (Zhao Liu; 2 files, -6/+6)
A safe REALIZE accepts an immutable reference. Since the current PL011 realize() only calls a char binding function (qemu_chr_fe_set_handlers), it is possible to convert the mutable reference (&mut self) to an immutable reference (&self); this only requires converting the pointers passed to C into mutable pointers. Thus, make REALIZE accept an immutable reference. Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121140457.84631-2-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
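A condensed sketch of the &mut self to &self conversion described here (stand-in types; qemu_chr_fe_set_handlers is replaced by a dummy function that takes a raw pointer):
```rust
struct PL011State {}

// Stand-in for the C-side function that wants a mutable raw pointer
// as its opaque callback argument.
fn set_handlers(opaque: *mut PL011State) {
    let _ = opaque;
}

impl PL011State {
    // realize() can take a shared reference; only the raw pointer handed
    // to C needs to be mutable, and a *const -> *mut cast is enough as
    // long as nothing actually mutates through it here.
    fn realize(&self) {
        set_handlers(self as *const Self as *mut Self);
    }
}

fn main() {
    let dev = PL011State {};
    dev.realize();
}
```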
2025-01-23  stub: Fix build failure with --enable-user --disable-system --enable-tools  (Zhao Liu; 1 file, -2/+2)
Configuring "--enable-user --disable-system --enable-tools" causes the build failure with the following information: /usr/bin/ld: libhwcore.a.p/hw_core_qdev.c.o: in function `device_finalize': /qemu/build/../hw/core/qdev.c:688: undefined reference to `qapi_event_send_device_deleted' collect2: error: ld returned 1 exit status To fix the above issue, add qdev.c stub when build with `have_tools`. With this fix, QEMU could be successfully built in the following cases: --enable-user --disable-system --enable-tools --enable-user --disable-system --disable-tools --enable-user --disable-system Cc: qemu-stable@nongnu.org Fixes: 388b849fb6c3 ("stubs: avoid duplicate symbols in libqemuutil.a") Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2766 Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121154318.214680-1-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  docs: Add GNR, SRF and CWF CPU models  (Tao Su; 1 file, -4/+46)
Update the GraniteRapids, SierraForest and ClearwaterForest CPU models in the section "Preferred CPU models for Intel x86 hosts". Also introduce bhi-no, gds-no and rfds-no in the documentation. Suggested-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Tao Su <tao1.su@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121020650.1899618-5-tao1.su@linux.intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: Add new CPU model ClearwaterForest  (Tao Su; 2 files, -6/+162)
According to table 1-2 in Intel Architecture Instruction Set Extensions and Future Features (rev 056) [1], ClearwaterForest has the following new features, which have already been virtualized: AVX-VNNI-INT16 (CPUID.(EAX=7,ECX=1):EDX[bit 10]), SHA512 (CPUID.(EAX=7,ECX=1):EAX[bit 0]), SM3 (CPUID.(EAX=7,ECX=1):EAX[bit 1]) and SM4 (CPUID.(EAX=7,ECX=1):EAX[bit 2]). Compared with SierraForest, bare-metal ClearwaterForest contains all features of the SierraForest-v2 CPU model and additionally PREFETCHI (CPUID.(EAX=7,ECX=1):EDX[bit 14]), DDPD_U (CPUID.(EAX=7,ECX=2):EDX[bit 3]) and BHI_NO (IA32_ARCH_CAPABILITIES[bit 20]). Add all of the above, plus the features of the SierraForest-v2 CPU model, to the new CPU model ClearwaterForest. [1] https://cdrdv2.intel.com/v1/dl/getContent/671368 Tested-by: Xuelian Guo <xuelian.guo@intel.com> Signed-off-by: Tao Su <tao1.su@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121020650.1899618-4-tao1.su@linux.intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: Export BHI_NO bit to guests  (Tao Su; 1 file, -1/+1)
Branch History Injection (BHI) is a CPU side-channel vulnerability where an attacker may manipulate branch history before transitioning from user to supervisor mode, or from VMX non-root/guest to root mode. CPUs set the BHI_NO bit in MSR IA32_ARCH_CAPABILITIES to indicate that no additional mitigation is required to prevent BHI. Make the BHI_NO bit available to guests. Tested-by: Xuelian Guo <xuelian.guo@intel.com> Signed-off-by: Tao Su <tao1.su@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121020650.1899618-3-tao1.su@linux.intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: Introduce SierraForest-v2 model  (Tao Su; 1 file, -0/+19)
Update the SierraForest CPU model to add LAM, the 4 bits indicating that certain bits of IA32_SPEC_CTRL are supported (intel-psfd, ipred-ctrl, rrsba-ctrl, bhi-ctrl), and the missing features (ss, tsc-adjust, cldemote, movdiri, movdir64b). Also add GDS-NO and RFDS-NO to indicate that the related vulnerabilities are mitigated in stepping 3. Tested-by: Xuelian Guo <xuelian.guo@intel.com> Signed-off-by: Tao Su <tao1.su@linux.intel.com> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250121020650.1899618-2-tao1.su@linux.intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: avoid using s->tmp0 for add to implicit registers  (Paolo Bonzini; 1 file, -7/+14)
For updates to implicit registers (RCX in LOOP instructions, RSI or RDI in string instructions, or the stack pointer), do the add directly using the registers (with no temporary) if 32-bit or 64-bit, or use a temporary created for the occasion if 16-bit. This is more efficient and removes move instructions for the MO_TL case. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-14-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: extract common bits of gen_repz/gen_repz_nz  (Paolo Bonzini; 1 file, -20/+14)
Now that everything has been cleaned up, look at DF and prefixes in a single function, and call that one from gen_repz and gen_repz_nz. Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: pull computation of string update value out of loop  (Paolo Bonzini; 1 file, -28/+26)
This is a common operation that is executed many times in rep movs or rep stos loops. Pulling it out of the loop can improve performance by several percentage points. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://lore.kernel.org/r/20241215090613.89588-13-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: execute multiple REP/REPZ iterations without leaving TB  (Paolo Bonzini; 1 file, -6/+49)
Use a TCG loop so that it is not necessary to go through the setup steps of REP and through the I/O check on every iteration. Interestingly, this is not a particularly effective optimization on its own, though it avoids the cost of correct RF emulation that was added in the previous patch. The main benefit lies in allowing the hoisting of loop invariants outside the loop, which will happen separately. The loop exits when the low 16 bits of CX/ECX/RCX are zero (so generally speaking the string operation runs in 65536 iteration batches) to give the main loop an opportunity to pick up interrupts. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-12-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: optimize CX handling in repeated string operations  (Paolo Bonzini; 1 file, -1/+14)
In a repeated string operation, CX/ECX will be decremented until it is 0, but it never underflows. Use this observation to avoid a deposit or zero-extend operation if the address size of the operation is smaller than MO_TL. As in the previous patch, the patch is structured to include some preparatory work for subsequent changes. In particular, introducing cx_next prepares for when ECX will be decremented *before* calling fn(s, ot), and therefore cannot yet be written back to cpu_regs. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-11-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: do not use gen_op_jz_ecx for repeated string operations  (Paolo Bonzini; 1 file, -1/+2)
Explicitly generate a TSTEQ branch (which is optimized to NE x,0 if possible). This does not make much sense yet, but later we will add more checks and some will use a temporary to check on the decremented value of CX/ECX/RCX; it will be clearer for all checks to share the same logic using TSTEQ(reg, cx_mask). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-10-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: make cc_op handling more explicit for repeated string instructions  (Paolo Bonzini; 1 file, -3/+21)
Since the cost of gen_update_cc_op() must be paid anyway, it is easier to place the calls manually and not rely on spilling that is buried under multiple levels of function calls. While at it, clarify the circumstances in which gen_update_cc_op() is needed, and why it is not needed for REPxx SCAS and REPxx CMPS. And since cc_op will have been spilled at the point of a fault, just make the whole insn CC_OP_DYNAMIC. Once repz_opt is reintroduced, a fault could happen either before or after the first execution of CMPS/SCAS, and CC_OP_DYNAMIC sidesteps the complicated matter of what x86_restore_state_to_opc would do. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://lore.kernel.org/r/20241215090613.89588-9-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: fix RF handling for string instructions  (Paolo Bonzini; 1 file, -1/+21)
RF must be set on traps and interrupts from a string instruction, except if they occur after the last iteration. Ensure it is set before giving the main loop a chance to execute. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-8-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: tcg: move gen_set/reset_* earlier in the file  (Paolo Bonzini; 1 file, -40/+40)
Allow using them in the code that translates REP/REPZ, without forward declarations. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-7-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: reorganize ops emitted by do_gen_rep, drop repz_opt  (Paolo Bonzini; 1 file, -47/+13)
The condition for optimizing repeated instructions is more or less the opposite of what you would imagine: almost always the string instruction was _not_ optimized and optimizing the loop relied on goto_tb. This is obviously not great for performance, due to the cost of the exit-to-main-loop check, but also wrong. In fact, after expanding dc->jmp_opt and simplifying "!!x" to "x", the condition for looping used to be: ((cflags & CF_NO_GOTO_TB) || (flags & (HF_RF_MASK | HF_TF_MASK | HF_INHIBIT_IRQ_MASK))) && !(cflags & CF_USE_ICOUNT) In other words, setting aside RF (it requires special handling for REP instructions and it was completely missing), repeat instructions were being optimized if the TF or interrupt-inhibit flags were set. This is certainly wrong for TF, because string instructions trap after every execution, and probably for the interrupt shadow too. Get rid of repz_opt completely. The next patches will reintroduce the optimization, applying it in the common case instead of the unlikely and wrong one. While at it, place the CX/ECX/RCX=0 case at the end of the function, which saves a label and is clearer when reading the generated ops. For clarity, mark the cc_op explicitly as DYNAMIC even at the end of the translation block; the cc_op can come from either the previous instruction or the string instruction, and currently we rely on a gen_update_cc_op() that is hidden in the bowels of gen_jcc() to spill cc_op and mark it clean. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-6-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: unify choice between single and repeated string instructions  (Paolo Bonzini; 2 files, -37/+17)
The same "if" is present in all generator functions for string instructions. Push it inside gen_repz() and gen_repz_nz() instead. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Link: https://lore.kernel.org/r/20241215090613.89588-5-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: unify REP and REPZ/REPNZ generation  (Paolo Bonzini; 1 file, -19/+20)
The two cases only differ in a single call to gen_jcc, so use a "bool" argument to distinguish them; do not duplicate code. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-4-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: remove trailing 1 from gen_{j, cmov, set}cc1  (Paolo Bonzini; 2 files, -12/+12)
This is not needed anymore now that gen_jcc has been eliminated (merged into the similarly-named gen_Jcc, where the uppercase letter gives away that it is an emission function). Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-3-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-23  target/i386: inline gen_jcc into sole caller  (Paolo Bonzini; 2 files, -9/+4)
The code of gen_Jcc is very similar to gen_LOOP* and gen_JCXZ, but this is hidden by gen_jcc. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Link: https://lore.kernel.org/r/20241215090613.89588-2-pbonzini@redhat.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-22  rust: pl011: fix repr(C) for PL011Class  (Paolo Bonzini; 1 file, -0/+1)
Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-01-19  hw/char/riscv_htif: Convert HTIF_DEBUG() to trace events  (Philippe Mathieu-Daudé; 2 files, -12/+7)
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20250116223609.81594-1-philmd@linaro.org> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2025-01-19  target/riscv: Support Supm and Sspm as part of Zjpm v1.0  (Alexey Baturo; 2 files, -0/+25)
The Zjpm v1.0 spec states there should be Supm and Sspm extensions, which are used in profile specifications. Enabling the Supm extension enables both Ssnpm and Smnpm, while Sspm enables only Smnpm. Signed-off-by: Alexey Baturo <baturo.alexey@gmail.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-ID: <20250113194410.1307494-1-baturo.alexey@gmail.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2025-01-19  hw/riscv/riscv-iommu.c: Introduce a translation tag for the page table cache  (Jason Chien; 1 file, -52/+153)
This commit introduces a translation tag to avoid invalidating an entry that should not be invalidated when the IOMMU executes invalidation commands. E.g. IOTINVAL.VMA with GV=0, AV=0, PSCV=1 invalidates both a mapping of single-stage translation and a mapping of nested translation with the same PSCID, but only the former should be invalidated. Signed-off-by: Jason Chien <jason.chien@sifive.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-ID: <20241108110147.11178-1-jason.chien@sifive.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2025-01-19  target/riscv: Add Smdbltrp ISA extension enable switch  (Clément Léger; 2 files, -0/+12)
Add the switch to enable the Smdbltrp ISA extension and disable it for the max CPU. Indeed, when Smdbltrp is present, M-mode double trap is enabled by default and MSTATUS.MDT needs to be cleared to avoid taking a double trap. OpenSBI does not currently support this, so disable the extension for the max CPU to avoid breaking regression tests. Signed-off-by: Clément Léger <cleger@rivosinc.com> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com> Message-ID: <20250116131539.2475785-1-cleger@rivosinc.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2025-01-19  target/riscv: Implement Smdbltrp behavior  (Clément Léger; 1 file, -16/+41)
When the Smdbltrp ISA extension is enabled, if a trap happens while MSTATUS.MDT is already set, it will trigger an abort, or an NMI if the Smrnmi extension is available. Signed-off-by: Clément Léger <cleger@rivosinc.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-ID: <20250110125441.3208676-9-cleger@rivosinc.com> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>