path: root/target/arm/helper.c
...
* target/arm: Do hflags rebuild in cpsr_write()  (Peter Maydell, 2021-08-26, 1 file, -0/+5)

  Currently we rely on all the callsites of cpsr_write() to rebuild the cached hflags if they change one of the CPSR bits which we use as a TB flag and cache in hflags. This is a bit awkward when we want to change the set of CPSR bits that we cache, because it means we need to re-audit all the cpsr_write() callsites to see which flags they are writing and whether they now need to rebuild the hflags.

  Switch instead to making cpsr_write() call arm_rebuild_hflags() itself if one of the bits being changed is a cached bit.

  We don't do the rebuild for the CPSRWriteRaw write type, because that kind of write is generally doing something special anyway. For the CPSRWriteRaw callsites in the KVM code and inbound migration we definitely don't want to recalculate the hflags; the callsites in boot.c and arm-powerctl.c have to do a rebuild-hflags call themselves anyway because of other CPU state changes they make.

  This allows us to drop explicit arm_rebuild_hflags() calls in a couple of places where the only reason we needed to call it was the CPSR write.

  This fixes a bug where we were incorrectly failing to rebuild hflags in the code path for a gdbstub write to CPSR, which meant that you could make QEMU assert by breaking into a running guest, altering the CPSR to change the value of, for example, CPSR.E, and then continuing.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210817201843.3829-1-peter.maydell@linaro.org
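  As a rough illustration of the approach (a sketch, not the actual patch: cpsr_write(), CPSRWriteRaw and arm_rebuild_hflags() are real names from the message above, while CACHED_CPSR_BITS is a stand-in for whatever mask of cached bits is used):

      /* Sketch only: rebuild hflags inside cpsr_write() when a cached
       * CPSR bit changes, except for raw writes. CACHED_CPSR_BITS is
       * a stand-in mask, not a real QEMU macro. */
      void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
                      CPSRWriteType write_type)
      {
          uint32_t changed = (cpsr_read(env) ^ val) & mask;

          /* ... perform the actual CPSR update as before ... */

          if (write_type != CPSRWriteRaw && (changed & CACHED_CPSR_BITS)) {
              arm_rebuild_hflags(env);
          }
      }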
* target/arm: Implement HSTR.TJDBX  (Peter Maydell, 2021-08-26, 1 file, -0/+17)

  In v7A, the HSTR register has a TJDBX bit which traps NS EL0/EL1 access to the JOSCR and JMCR trivial Jazelle registers, and also BXJ. Implement these traps. In v8A this HSTR bit doesn't exist, so don't trap for v8A CPUs.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210816180305.20137-3-peter.maydell@linaro.org
* target/arm: Implement HSTR.TTEE  (Peter Maydell, 2021-08-26, 1 file, -2/+16)

  In v7, the HSTR register has a TTEE bit which allows EL0/EL1 accesses to the Thumb2EE TEECR and TEEHBR registers to be trapped to the hypervisor. Implement these traps.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210816180305.20137-2-peter.maydell@linaro.org
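  A sketch of the trap-check shape such patches take (the helper name and simplified signature are assumptions; HSTR.TTEE is architecturally bit 16, and secure-state handling is omitted here):

      /* Sketch: NS EL0/EL1 accesses to TEECR/TEEHBR trap to EL2 when
       * HSTR.TTEE is set. Simplified from QEMU's real access-fn shape. */
      #define HSTR_TTEE (1 << 16)

      static CPAccessResult access_ttee(CPUARMState *env)
      {
          if (arm_current_el(env) < 2 && (env->cp15.hstr_el2 & HSTR_TTEE)) {
              return CP_ACCESS_TRAP_EL2;
          }
          return CP_ACCESS_OK;
      }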
* target/arm: Implement M-profile trapping on division by zero  (Peter Maydell, 2021-08-25, 1 file, -2/+17)

  Unlike A-profile, for M-profile the UDIV and SDIV insns can be configured to raise an exception on division by zero, using the CCR DIV_0_TRP bit.

  Implement support for setting this bit by making the helper functions raise the appropriate exception.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210730151636.17254-3-peter.maydell@linaro.org
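  A minimal sketch of the described behaviour (the fault-raising helper is an assumed name; on M-profile the architected result of a division by zero, when not trapping, is 0):

      /* Sketch: raise a UsageFault (DIVBYZERO) instead of returning 0
       * when CCR.DIV_0_TRP is set. raise_div0_usagefault() is assumed. */
      uint32_t do_udiv(CPUARMState *env, uint32_t num, uint32_t den)
      {
          if (den == 0) {
              if (arm_feature(env, ARM_FEATURE_M)
                  && (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_DIV_0_TRP_MASK)) {
                  raise_div0_usagefault(env);   /* assumed helper */
              }
              return 0;   /* architected result when not trapping */
          }
          return num / den;
      }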
* target/arm: Re-indent sdiv and udiv helpers  (Peter Maydell, 2021-08-25, 1 file, -6/+9)

  We're about to make a code change to the sdiv and udiv helper functions, so first fix their indentation and coding style.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210730151636.17254-2-peter.maydell@linaro.org
* target/arm: Export aarch64_sve_zcr_get_valid_len  (Richard Henderson, 2021-07-27, 1 file, -2/+2)

  Rename from sve_zcr_get_valid_len and make accessible from outside of helper.c.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20210723203344.968563-3-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Correctly bound length in sve_zcr_get_valid_len  (Richard Henderson, 2021-07-27, 1 file, -1/+3)

  Currently, our only caller is sve_zcr_len_for_el, which has already masked the length extracted from ZCR_ELx, so the masking done here is a nop. But we will shortly have uses from other locations, where the length will be unmasked.

  Saturate the length to ARM_MAX_VQ instead of truncating to the low 4 bits.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 20210723203344.968563-2-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
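  In sketch form (ARM_MAX_VQ is QEMU's maximum vector-quanta count and MIN() its usual macro; lengths here are in the vq-1 encoding the ZCR LEN field uses):

      /* Sketch: clamp rather than mask. "start_len &= 0xf" would wrap
       * an unmasked length; saturating keeps it at the maximum. */
      static uint32_t bound_sve_len(uint32_t start_len)
      {
          return MIN(start_len, ARM_MAX_VQ - 1);
      }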
* target/arm: Fix offsets for TTBCR  (Richard Henderson, 2021-07-18, 1 file, -4/+7)

  The functions vmsa_ttbcr_write and vmsa_ttbcr_raw_write expect the offset to be for the complete TCR structure, not the offset to the low 32-bits of a uint64_t. Using offsetoflow32 in this case breaks big-endian hosts.

  For TTBCR2, we do want the high 32-bits of a uint64_t. Use cp15.tcr_el[*].raw_tcr as the offsetofhigh32 argument to clarify this.

  Buglink: https://gitlab.com/qemu-project/qemu/-/issues/187
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210709230621.938821-2-richard.henderson@linaro.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
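  For illustration (a sketch of the idea, not QEMU's exact macros): on a big-endian host the low 32 bits of a uint64_t field do not sit at the field's own offset, which is why passing an offsetoflow32() value where a whole-struct offset is expected breaks only there.

      #include <stddef.h>
      #include <stdint.h>

      /* Sketch of low/high-half offset macros for a uint64_t field. */
      #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
      #define offsetoflow32(S, M)  (offsetof(S, M) + sizeof(uint32_t))
      #define offsetofhigh32(S, M) offsetof(S, M)
      #else
      #define offsetoflow32(S, M)  offsetof(S, M)
      #define offsetofhigh32(S, M) (offsetof(S, M) + sizeof(uint32_t))
      #endif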
* target/arm: Correct the encoding of MDCCSR_EL0 and DBGDSCRint  (hnick@vmware.com, 2021-07-09, 1 file, -3/+13)

  Signed-off-by: Nick Hudson <hnick@vmware.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add ID_AA64ZFR0 fields and isar_feature_aa64_sve2  (Richard Henderson, 2021-05-25, 1 file, -2/+1)

  This will be used for SVE2 ISA subset enablement.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210525010358.152808-2-richard.henderson@linaro.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add support for FEAT_TLBIOS  (Rebecca Cran, 2021-05-25, 1 file, -0/+43)

  ARMv8.4 adds the mandatory FEAT_TLBIOS. It provides TLBI maintenance instructions that extend to the Outer Shareable domain.

  Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210512182337.18563-3-rebecca@nuviainc.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add support for FEAT_TLBIRANGE  (Rebecca Cran, 2021-05-25, 1 file, -0/+281)

  ARMv8.4 adds the mandatory FEAT_TLBIRANGE. It provides TLBI maintenance instructions that apply to a range of input addresses.

  Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210512182337.18563-2-rebecca@nuviainc.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Fix tlbbits calculation in tlbi_aa64_vae2is_write()  (Peter Maydell, 2021-05-10, 1 file, -1/+1)

  In tlbi_aa64_vae2is_write() the calculation

      bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_E2 : ARMMMUIdx_SE2,
                                pageaddr)

  has the two arms of the ?: expression reversed. Fix the bug.

  Fixes: b6ad6062f1e5
  Reported-by: Rebecca Cran <rebecca@nuviainc.com>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Rebecca Cran <rebecca@nuviainc.com>
  Message-id: 20210420123106.10861-1-peter.maydell@linaro.org
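  The corrected expression simply swaps the two arms, so the secure case selects the secure regime:

      bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2,
                                pageaddr);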
* target/arm: Add ALIGN_MEM to TBFLAG_ANY  (Richard Henderson, 2021-04-30, 1 file, -2/+17)

  Use this to signal when memory access alignment is required. This value comes from the CCR register for M-profile, and from the SCTLR register for A-profile.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210419202257.161730-11-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Move mode specific TB flags to tb->cs_base  (Richard Henderson, 2021-04-30, 1 file, -4/+6)

  Now that we have all of the proper macros defined, expanding the CPUARMTBFlags structure and populating the two TB fields is relatively simple.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210419202257.161730-7-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Introduce CPUARMTBFlags  (Richard Henderson, 2021-04-30, 1 file, -22/+26)

  In preparation for splitting tb->flags across multiple fields, introduce a structure to hold the value(s). So far this only migrates the one uint32_t and fixes all of the places that require adjustment to match.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210419202257.161730-6-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add wrapper macros for accessing tbflags  (Richard Henderson, 2021-04-30, 1 file, -46/+39)

  We're about to split tbflags into two parts. These macros will ensure that the correct part is used with the correct set of bits.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210419202257.161730-5-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
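  A sketch of what such paired accessor macros can look like (FIELD_DP32/FIELD_EX32 are QEMU's registerfields.h helpers; the exact macro names here are assumptions):

      /* Sketch: tie each tbflag accessor to the word that holds it, so
       * a bit defined for one word cannot be applied to the other. */
      #define DP_TBFLAG_ANY(DST, WHICH, VAL) \
          (DST.flags = FIELD_DP32(DST.flags, TBFLAG_ANY, WHICH, VAL))
      #define EX_TBFLAG_ANY(IN, WHICH) \
          FIELD_EX32(IN.flags, TBFLAG_ANY, WHICH)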
* target/arm: Rename TBFLAG_ANY, PSTATE_SS  (Richard Henderson, 2021-04-30, 1 file, -2/+2)

  We're about to rearrange the macro expansion surrounding tbflags, and this field name will be expanded using the bit definition of the same name, resulting in a token pasting error.

  So PSTATE_SS -> PSTATE__SS in the uses, and document it.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210419202257.161730-4-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Rename TBFLAG_A32, SCTLR_B  (Richard Henderson, 2021-04-30, 1 file, -1/+1)

  We're about to rearrange the macro expansion surrounding tbflags, and this field name will be expanded using the bit definition of the same name, resulting in a token pasting error.

  So SCTLR_B -> SCTLR__B in the 3 uses, and document it.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210419202257.161730-3-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* Revert "target/arm: Make number of counters in PMCR follow the CPU"Peter Maydell2021-04-061-17/+12
| | | | | | | | | | | | | | | | | | | This reverts commit f7fb73b8cdd3f77e26f9fcff8cf24ff1b58d200f. This change turned out to be a bit half-baked, and doesn't work with KVM, which fails with the error: "qemu-system-aarch64: Failed to retrieve host CPU features" because KVM does not allow accessing of the PMCR_EL0 value in the scratch "query CPU ID registers" VM unless we have first set the KVM_ARM_VCPU_PMU_V3 feature on the VM. Revert the change for 6.0. Reported-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Zenghui Yu <yuzenghui@huawei.com> Message-id: 20210331154822.23332-1-peter.maydell@linaro.org
* target/arm: Make number of counters in PMCR follow the CPU  (Peter Maydell, 2021-03-30, 1 file, -12/+17)

  Currently we give all the v7-and-up CPUs a PMU with 4 counters. This means that we don't provide the 6 counters that are required by the Arm BSA (Base System Architecture) specification if the CPU supports the Virtualization extensions.

  Instead of having a single PMCR_NUM_COUNTERS, make each CPU type specify the PMCR reset value (obtained from the appropriate TRM), and use the 'N' field of that value to define the number of counters provided.

  This means that we now supply 6 counters for Cortex-A53, A57, A72, A15 and A9 as well as '-cpu max'; Cortex-A7 and A8 stay at 4; and Cortex-R5 goes down to 3.

  Note that because we now use the PMCR reset value of the specific implementation, we no longer set the LC bit out of reset. This has an UNKNOWN value out of reset for all cores with any AArch32 support, so guest software should be setting it anyway if it wants it.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
  Message-id: 20210311165947.27470-1-peter.maydell@linaro.org
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
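  Extracting the counter count from a PMCR reset value is straightforward, since PMCR.N is architecturally bits [15:11]; a sketch (the function name is an assumption):

      /* Sketch: number of implemented PMU counters from a PMCR value. */
      static inline unsigned pmcr_num_counters(uint64_t pmcr)
      {
          return (pmcr >> 11) & 0x1f;   /* PMCR.N, bits [15:11] */
      }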
* semihosting: Move include/hw/semihosting/ -> include/semihosting/  (Philippe Mathieu-Daudé, 2021-03-10, 1 file, -2/+2)

  We want to move the semihosting code out of hw/ in the next patch.

  This patch contains the mechanical steps, created using:

      $ git mv include/hw/semihosting/ include/
      $ sed -i s,hw/semihosting,semihosting, $(git grep -l hw/semihosting)

  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20210226131356.3964782-2-f4bug@amsat.org>
  Message-Id: <20210305135451.15427-2-alex.bennee@linaro.org>
* target/arm: Use TCF0 and TFSRE0 for unprivileged tag checks  (Peter Collingbourne, 2021-03-05, 1 file, -1/+1)

  Section D6.7 of the ARM ARM states:

      For the purpose of determining Tag Check Fault handling,
      unprivileged load and store instructions are treated as if
      executed at EL0 when executed at either:
      - EL1, when the Effective value of PSTATE.UAO is 0.
      - EL2, when both the Effective value of HCR_EL2.{E2H, TGE} is
        {1, 1} and the Effective value of PSTATE.UAO is 0.

  ARM has confirmed a defect in the pseudocode function AArch64.TagCheckFault that makes it inconsistent with the above wording. The remedy is to adjust references to PSTATE.EL in that function to instead refer to AArch64.AccessUsesEL(acctype), so that unprivileged instructions use SCTLR_EL1.TCF0 and TFSRE0_EL1. The exception type for synchronous tag check faults remains unchanged.

  This patch implements the described change by partially reverting commits 50244cc76abc and cc97b0019bb5.

  Signed-off-by: Peter Collingbourne <pcc@google.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210219201820.2672077-1-pcc@google.com
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add support for FEAT_SSBS, Speculative Store Bypass Safe  (Rebecca Cran, 2021-03-05, 1 file, -0/+37)

  Add support for FEAT_SSBS. SSBS (Speculative Store Bypass Safe) is an optional feature in ARMv8.0, and mandatory in ARMv8.5.

  Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210216224543.16142-2-rebecca@nuviainc.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Correctly initialize MDCR_EL2.HPMN  (Daniel Müller, 2021-02-11, 1 file, -5/+4)

  When working with performance monitoring counters, we look at MDCR_EL2.HPMN as part of the check whether a counter is enabled. This check fails, because MDCR_EL2.HPMN is reset to 0, meaning that no counters are "enabled" for < EL2. That's in violation of the Arm specification, which states that

      > On a Warm reset, this field [MDCR_EL2.HPMN] resets to the value
      > in PMCR_EL0.N

  That's also what a comment in the code acknowledges, but the necessary adjustment seems to have been forgotten when support for more counters was added.

  This change fixes the issue by setting the reset value to PMCR.N, which is four.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Support AA32 DIT by moving PSTATE_SS from cpsr into env->pstate  (Rebecca Cran, 2021-02-11, 1 file, -6/+18)

  cpsr has been treated as being the same as spsr, but it isn't. Since PSTATE_SS isn't in cpsr, remove it and move it into env->pstate.

  This allows us to add support for CPSR_DIT, adding helper functions to merge SPSR_ELx to and from CPSR.

  Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210208065700.19454-3-rebecca@nuviainc.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Add support for FEAT_DIT, Data Independent Timing  (Rebecca Cran, 2021-02-11, 1 file, -0/+22)

  Add support for FEAT_DIT. DIT (Data Independent Timing) is a required feature for ARMv8.4. Since virtual machine execution is largely nondeterministic and TCG is outside of the security domain, it's implemented as a NOP.

  Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210208065700.19454-2-rebecca@nuviainc.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Fix SCR RES1 handling  (Mike Nawrocki, 2021-02-11, 1 file, -2/+14)

  The FW and AW bits of SCR_EL3 are RES1 only in some contexts. Force them to 1 only when there is no support for AArch32 at EL1 or above.

  The reset value will be 0x30 only if the CPU is AArch64-only; if there is support for AArch32 at EL1 or above, it will be reset to 0.

  Also adds helper function isar_feature_aa64_aa32_el1 to check if AArch32 is supported at EL1 or above.

  Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210203165552.16306-2-michael.nawrocki@gtri.gatech.edu
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
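  A sketch of the reset rule, using the helper name the commit introduces (the surrounding function shape is an assumption):

      /* Sketch: SCR_EL3.{AW,FW} (bits [5:4], i.e. 0x30) reset to 1 only
       * when no exception level above EL0 can run AArch32. */
      static uint64_t scr_el3_reset_value(ARMCPU *cpu)
      {
          return isar_feature_aa64_aa32_el1(&cpu->isar) ? 0 : 0x30;
      }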
* target/arm: do not use cc->do_interrupt for KVM directlyClaudio Fontana2021-02-051-0/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | cc->do_interrupt is in theory a TCG callback used in accel/tcg only, to prepare the emulated architecture to take an interrupt as defined in the hardware specifications, but in reality the _do_interrupt style of functions in targets are also occasionally reused by KVM to prepare the architecture state in a similar way where userspace code has identified that it needs to deliver an exception to the guest. In the case of ARM, that includes: 1) the vcpu thread got a SIGBUS indicating a memory error, and we need to deliver a Synchronous External Abort to the guest to let it know about the error. 2) the kernel told us about a debug exception (breakpoint, watchpoint) but it is not for one of QEMU's own gdbstub breakpoints/watchpoints so it must be a breakpoint the guest itself has set up, therefore we need to deliver it to the guest. So in order to reuse code, the same arm_do_interrupt function is used. This is all fine, but we need to avoid calling it using the callback registered in CPUClass, since that one is now TCG-only. Fortunately this is easily solved by replacing calls to CPUClass::do_interrupt() with explicit calls to arm_do_interrupt(). Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Cc: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20210204163931.7358-9-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Replace magic value by MMU_DATA_LOAD definition  (Philippe Mathieu-Daudé, 2021-01-29, 1 file, -1/+1)

  cpu_get_phys_page_debug() uses 'DATA LOAD' MMU access type.

  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Message-id: 20210127232822.3530782-1-f4bug@amsat.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Conditionalize DBGDIDR  (Richard Henderson, 2021-01-29, 1 file, -6/+15)

  Only define the register if it exists for the cpu.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210120031656.737646-1-richard.henderson@linaro.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Implement ID_PFR2  (Richard Henderson, 2021-01-29, 1 file, -2/+2)

  This was defined at some point before ARMv8.4, and will shortly be used by new processor descriptions.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210120204400.1056582-1-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: refactor vae1_tlbmask()  (Rémi Denis-Courmont, 2021-01-19, 1 file, -14/+11)

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-19-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: Implement SCR_EL2.EEL2  (Rémi Denis-Courmont, 2021-01-19, 1 file, -3/+16)

  This adds handling for the SCR_EL3.EEL2 bit.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Message-id: 20210112104511.36576-17-remi.denis.courmont@huawei.com
  [PMM: Applied fixes for review issues noted by RTH:
   - check for FEATURE_AARCH64 before checking sel2 isar feature
   - correct the commit message subject line]
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: set HPFAR_EL2.NS on secure stage 2 faults  (Rémi Denis-Courmont, 2021-01-19, 1 file, -0/+6)

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-15-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: secure stage 2 translation regime  (Rémi Denis-Courmont, 2021-01-19, 1 file, -24/+54)

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-14-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: generalize 2-stage page-walk condition  (Rémi Denis-Courmont, 2021-01-19, 1 file, -7/+6)

  The stage_1_mmu_idx() already effectively keeps track of which translation regimes have two stages. Don't hard-code another test.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-13-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: translate NS bit in page-walks  (Rémi Denis-Courmont, 2021-01-19, 1 file, -0/+12)

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-12-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: do S1_ptw_translate() before address space lookup  (Rémi Denis-Courmont, 2021-01-19, 1 file, -3/+6)

  In the secure stage 2 translation regime, the VSTCR.SW and VTCR.NSW bits can invert the secure flag for pagetable walks. This patchset allows S1_ptw_translate() to change the non-secure bit.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-11-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: handle VMID change in secure state  (Rémi Denis-Courmont, 2021-01-19, 1 file, -4/+9)

  The VTTBR write callback so far assumes that the underlying VM lies in non-secure state. This handles the secure state scenario.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-10-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: add ARMv8.4-SEL2 system registers  (Rémi Denis-Courmont, 2021-01-19, 1 file, -0/+24)

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-9-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: add MMU stage 1 for Secure EL2  (Rémi Denis-Courmont, 2021-01-19, 1 file, -43/+84)

  This adds the MMU indices for EL2 stage 1 in secure state.

  To keep the code contained, as it is largely identical between secure and non-secure modes, the MMU indices are reassigned. The new assignments provide a systematic pattern with a non-secure bit.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: add 64-bit S-EL2 to EL exception table  (Rémi Denis-Courmont, 2021-01-19, 1 file, -5/+5)

  With the ARMv8.4-SEL2 extension, EL2 is a legal exception level in secure mode, though it can only be AArch64. This patch adds the target EL for exceptions from 64-bit S-EL2.

  It also fixes the target EL to EL2 when HCR.{A,F,I}MO are set in secure mode. Those values were never used in practice as the effective value of HCR was always 0 in secure mode.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-7-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: factor MDCR_EL2 common handling  (Rémi Denis-Courmont, 2021-01-19, 1 file, -16/+22)

  This adds a common helper to compute the effective value of MDCR_EL2. That is the actual value if EL2 is enabled in the current security context, or 0 otherwise.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-5-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
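  The shape of such a helper is simple; a sketch (the function name is an assumption, and arm_is_el2_enabled() is the check named in the neighbouring commits):

      /* Sketch: the effective MDCR_EL2 is the register value only when
       * EL2 is enabled in the current security state, else 0. */
      static uint64_t mdcr_el2_eff(CPUARMState *env)
      {
          return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0;
      }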
* target/arm: use arm_hcr_el2_eff() where applicable  (Rémi Denis-Courmont, 2021-01-19, 1 file, -13/+18)

  This will simplify accessing HCR conditionally in secure state.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-4-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: use arm_is_el2_enabled() where applicable  (Rémi Denis-Courmont, 2021-01-19, 1 file, -20/+13)

  Do not assume that EL2 is available in and only in non-secure context. That equivalence is broken by ARMv8.4-SEL2.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-3-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target/arm: remove redundant tests  (Rémi Denis-Courmont, 2021-01-19, 1 file, -6/+4)

  In this context, the HCR value is the effective value, and thus is zero in secure mode. The tests for HCR.{F,I}MO are sufficient.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20210112104511.36576-1-remi.denis.courmont@huawei.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* semihosting: Change common-semi API to be architecture-independent  (Keith Packard, 2021-01-18, 1 file, -2/+3)

  The public API is now defined in hw/semihosting/common-semi.h. do_common_semihosting takes CPUState * instead of CPUARMState *. All internal functions have been renamed common_semi_ instead of arm_semi_ or arm_. Aside from the API change, there are no functional changes in this patch.

  Signed-off-by: Keith Packard <keithp@keithp.com>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
  Message-Id: <20210107170717.2098982-3-keithp@keithp.com>
  Message-Id: <20210108224256.2321-14-alex.bennee@linaro.org>
* target/arm: use official org.gnu.gdb.aarch64.sve layout for registers  (Alex Bennée, 2021-01-18, 1 file, -1/+1)

  While GDB can work with any XML description given to it, there is special handling for SVE registers on the GDB side which makes the user's life a little better. The changes aren't that major, and all the registers except $vg are reported the same as before. All that changes is:

  - report org.gnu.gdb.aarch64.sve
  - use gdb nomenclature for names and types
  - minor re-ordering of the types to match reference
  - re-enable ieee_half (as we know gdb supports it now)
  - $vg is now a 64 bit int
  - check $vN and $zN aliasing in test

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Luis Machado <luis.machado@linaro.org>
  Message-Id: <20210108224256.2321-11-alex.bennee@linaro.org>
* target/arm: ARMv8.4-TTST extension  (Rémi Denis-Courmont, 2021-01-12, 1 file, -2/+13)

  This adds support for the Small Translation tables extension in AArch64 state.

  Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>