path: root/target/arm/ptw.c

Commit history, most recent first (each entry: subject, author, date; files changed, lines -/+):

* target/arm: Implement APPSAA  (Richard Henderson, 2025-10-07; 1 file, -1/+2)

  This bit allows all spaces to access memory above PPS.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250926001134.295547-10-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Fix GPT fault type for address outside PPS  (Richard Henderson, 2025-10-07; 1 file, -1/+1)

  The GPT address size fault is for the table itself. The physical
  address being checked gets a Granule protection fault at Level 0
  (R_JFFHB).

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250926001134.295547-9-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Implement SPAD, NSPAD, RLPAD  (Richard Henderson, 2025-10-07; 1 file, -2/+21)

  These bits disable all access to a particular address space.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250926001134.295547-8-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Implement GPT_NonSecureOnly  (Richard Henderson, 2025-10-07; 1 file, -1/+9)

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250926001134.295547-7-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: GPT_Secure is reserved without FEAT_SEL2  (Richard Henderson, 2025-10-07; 1 file, -4/+8)

  For GPT_Secure, if SEL2 is not enabled, raise a GPCF_Walk exception.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250926001134.295547-6-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Add cur_space to S1Translate  (Richard Henderson, 2025-10-07; 1 file, -18/+19)

  We've been updating in_space and then using hacks to access the
  original space. Instead, update cur_space and leave in_space
  unchanged.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250926001134.295547-5-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Drop ARM_FEATURE_XSCALE handling  (Peter Maydell, 2025-09-16; 1 file, -4/+3)

  We have now removed all the CPU types which had the Intel XScale
  extensions indicated via ARM_FEATURE_XSCALE, so this feature bit is
  never set. Remove all the code that can only be reached when using
  this flag.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-id: 20250828140422.3271703-5-peter.maydell@linaro.org

* target/arm: Skip AF and DB updates for AccessType_AT  (Richard Henderson, 2025-09-16; 1 file, -1/+14)

  We are required to skip DB update for AT instructions, and we are
  allowed to skip AF updates. Choose to skip both.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20250830054128.448363-9-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

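  A minimal sketch of that choice. The struct, the in_at field, and
  the PTE_DB constant below are hypothetical stand-ins for
  illustration, not the actual QEMU patch:

    #include <stdbool.h>
    #include <stdint.h>

    #define PTE_AF (1ull << 10)   /* LPAE Access Flag, descriptor bit 10 */
    #define PTE_DB (1ull << 55)   /* hypothetical software "dirty" bit */

    typedef struct S1TranslateSketch {
        bool in_at;               /* servicing an AT instruction */
    } S1TranslateSketch;

    static uint64_t maybe_update_pte(const S1TranslateSketch *ptw,
                                     uint64_t pte, bool is_write)
    {
        if (ptw->in_at) {
            /* DB update must be skipped for AT; AF update may be.
             * The commit above chooses to skip both. */
            return pte;
        }
        pte |= PTE_AF;            /* record the access */
        if (is_write) {
            pte |= PTE_DB;        /* record the write */
        }
        return pte;
    }
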
* target/arm: Introduce get_phys_addr_for_at  (Richard Henderson, 2025-09-16; 1 file, -7/+14)

  Rename get_phys_addr_with_space_nogpc for its only caller,
  do_ats_write. Drop the MemOp memop argument as it doesn't make sense
  in the new context. Replace the access_type parameter with
  prot_check.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20250830054128.448363-8-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Skip permission check from arm_cpu_get_phys_page_attrs_debug  (Richard Henderson, 2025-09-16; 1 file, -1/+1)

  Do not require read permission when translating addresses for
  debugging purposes.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20250830054128.448363-7-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Add in_prot_check to S1Translate  (Richard Henderson, 2025-09-16; 1 file, -5/+14)

  Separate the access_type from the protection check. Save the trouble
  of modifying all helper functions by passing the new data in the
  control structure.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20250830054128.448363-6-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Add prot_check parameter to pmsav8_mpu_lookup  (Richard Henderson, 2025-09-16; 1 file, -5/+6)

  Separate the access_type from the protection check.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20250830054128.448363-5-richard.henderson@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Clean up of register field definitions  (Gustavo Romero, 2025-08-30; 1 file, -4/+4)

  Clean up the definitions of the NSW and NSA fields in the VTCR
  register. These two fields are already defined properly using
  FIELD(), so the other definitions are duplicates. Also, define the
  NSW and NSA fields in the VSTCR register using FIELD() and remove
  their definitions based on the VTCR fields.

  Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
  Message-id: 20250725014755.2122579-1-gustavo.romero@linaro.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

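  For readers unfamiliar with the pattern: QEMU's FIELD() macro (from
  hw/registerfields.h) generates shift/length/mask constants plus
  typed accessors for a named register field. A sketch of the style
  this commit converges on, assuming a QEMU source tree; the bit
  positions match VTCR_EL2.{NSW,NSA} in the Arm ARM but are shown here
  purely for illustration:

    #include <stdbool.h>
    #include <stdint.h>
    #include "hw/registerfields.h"

    /* Each FIELD() defines R_VTCR_<NAME>_SHIFT/_LENGTH/_MASK. */
    FIELD(VTCR, NSW, 29, 1)
    FIELD(VTCR, NSA, 30, 1)

    static bool vtcr_nsw(uint64_t vtcr)
    {
        /* Typed extraction instead of an open-coded mask. */
        return FIELD_EX64(vtcr, VTCR, NSW);
    }
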
* arm/cpu: Store aa64mmfr0-3 into the idregs array  (Eric Auger, 2025-07-01; 1 file, -3/+3)

  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Sebastian Ott <sebott@redhat.com>
  Signed-off-by: Eric Auger <eric.auger@redhat.com>
  Signed-off-by: Cornelia Huck <cohuck@redhat.com>
  Message-id: 20250617153931.1330449-6-cohuck@redhat.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm/ptw: replace TARGET_AARCH64 by CONFIG_ATOMIC64 from arm_casq_ptw  (Pierrick Bouvier, 2025-05-14; 1 file, -1/+1)

  This function needs a 64-bit compare-exchange, so we hide the
  implementation from hosts that do not support it (some 32-bit hosts,
  which don't run 64-bit guests anyway).

  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250512180502.2395029-31-pierrick.bouvier@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

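  The gating pattern, sketched with compiler builtins rather than the
  real arm_casq_ptw. CONFIG_ATOMIC64 is QEMU's configure-time test for
  usable 64-bit host atomics; the helper below is illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #if defined(CONFIG_ATOMIC64)
    /* Returns the value observed at *ptr (old_val when the swap won). */
    static uint64_t casq_sketch(uint64_t *ptr, uint64_t old_val,
                                uint64_t new_val)
    {
        __atomic_compare_exchange_n(ptr, &old_val, new_val, false,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return old_val;
    }
    #endif
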
* target/arm/ptw: replace target_ulong with int64_t  (Pierrick Bouvier, 2025-05-14; 1 file, -2/+2)

  sextract64 returns a signed value.

  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250512180502.2395029-30-pierrick.bouvier@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

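  Why the return type matters: storing sextract64's result in a 32-bit
  unsigned target_ulong silently drops the sign extension. A
  standalone illustration (the local helper mirrors the semantics of
  QEMU's sextract64; the example address is made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extend bits [start, start+length) of value, as QEMU's
     * sextract64 does. */
    static int64_t sextract64_demo(uint64_t value, int start, int length)
    {
        return ((int64_t)(value << (64 - length - start))) >> (64 - length);
    }

    int main(void)
    {
        uint64_t va = 0xffffffc010000000ull;        /* kernel-space VA */
        int64_t good = sextract64_demo(va, 0, 48);  /* stays negative */
        uint32_t bad = sextract64_demo(va, 0, 48);  /* 32-bit target_ulong:
                                                     * truncates to 0x10000000 */
        printf("%llx vs %x\n", (unsigned long long)good, bad);
        return 0;
    }
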
* Merge tag 'pull-target-arm-20250506' of https://git.linaro.org/people/pmaydell/qemu-arm into staging  (Stefan Hajnoczi, 2025-05-07; 1 file, -17/+48)

  target-arm queue:
  * hw/arm/npcm8xx_boards: Correct valid_cpu_types setting of NPCM8XX SoC
  * arm/hvf: fix crashes when using gdbstub
  * target/arm/ptw: fix arm_cpu_get_phys_page_attrs_debug
  * hw/arm/virt: Remove deprecated old versions of 'virt' machine
  * tests/functional: Add test for imx8mp-evk board with USDHC coverage
  * hw/arm: Attach PSPI module to NPCM8XX SoC
  * target/arm: Don't assert() for ISB/SB inside IT block
  * docs: Don't define duplicate label in qemu-block-drivers.rst.inc
  * target/arm/kvm: Drop support for kernels without KVM_ARM_PREFERRED_TARGET
  * hw/pci-host/designware: Fix viewport configuration
  * hw/gpio/imx_gpio: Fix interpretation of GDIR polarity

  # gpg: Signature made Tue 06 May 2025 10:41:33 EDT
  # gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
  # gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full]
  # Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE

  * tag 'pull-target-arm-20250506' of https://git.linaro.org/people/pmaydell/qemu-arm: (32 commits)
    hw/arm/virt: Remove deprecated virt-4.0 machine
    hw/arm/virt: Remove deprecated virt-3.1 machine
    hw/arm/virt: Remove deprecated virt-3.0 machine
    hw/arm/virt: Update comment about Multiprocessor Affinity Register
    hw/gpio/imx_gpio: Fix interpretation of GDIR polarity
    hw/pci-host/designware: Fix viewport configuration
    hw/pci-host/designware: Remove unused include
    target/arm/kvm: Drop support for kernels without KVM_ARM_PREFERRED_TARGET
    docs: Don't define duplicate label in qemu-block-drivers.rst.inc
    target/arm: Don't assert() for ISB/SB inside IT block
    hw/arm: Attach PSPI module to NPCM8XX SoC
    tests/functional: Add test for imx8mp-evk board with USDHC coverage
    hw/arm/virt: Remove VirtMachineClass::no_highmem_ecam field
    hw/arm/virt: Remove deprecated virt-2.12 machine
    hw/arm/virt: Remove VirtMachineClass::smbios_old_sys_ver field
    hw/arm/virt: Remove deprecated virt-2.11 machine
    hw/arm/virt: Remove deprecated virt-2.10 machine
    hw/arm/virt: Remove deprecated virt-2.9 machine
    hw/arm/virt: Remove VirtMachineClass::claim_edge_triggered_timers field
    hw/arm/virt: Remove deprecated virt-2.8 machine
    ...

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

| * target/arm/ptw: fix arm_cpu_get_phys_page_attrs_debug  (Pierrick Bouvier, 2025-05-06; 1 file, -1/+21)

  It was reported that QEMU monitor command gva2gpa was reporting
  unmapped memory for a valid access (qemu-system-aarch64), during a
  copy from kernel to user space (__arch_copy_to_user symbol in Linux)
  [1]. This was affecting cpu_memory_rw_debug also, which is used in
  numerous places in our codebase.

  After investigating, the problem was specific to
  arm_cpu_get_phys_page_attrs_debug. When performing user access from
  a privileged space, we need to do a second lookup for user mmu idx,
  following what get_a64_user_mem_index is doing at translation time.

  [1] https://lists.nongnu.org/archive/html/qemu-discuss/2025-04/msg00013.html

  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250414153027.1486719-5-pierrick.bouvier@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

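  The shape of the fix, sketched with QEMU-style types; every helper
  name below is a simplified hypothetical stand-in, not the real API:

    /* If the privileged lookup fails, retry with the unprivileged MMU
     * index, mirroring get_a64_user_mem_index at translation time.
     * lookup_page, current_mmu_idx, cpu_is_privileged and user_mmu_idx
     * are illustrative stand-ins. */
    static hwaddr debug_translate_sketch(CPUState *cs, vaddr addr)
    {
        hwaddr phys = lookup_page(cs, addr, current_mmu_idx(cs));
        if (phys == -1 && cpu_is_privileged(cs)) {
            /* e.g. the kernel inside __arch_copy_to_user touching a
             * user-space buffer */
            phys = lookup_page(cs, addr, user_mmu_idx(cs));
        }
        return phys;
    }
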
| * target/arm/ptw: extract arm_cpu_get_phys_page  (Pierrick Bouvier, 2025-05-06; 1 file, -10/+14)

  Allow that function to be called easily several times in the next
  commit.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250414153027.1486719-4-pierrick.bouvier@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

| * target/arm/ptw: get current security_space for current mmu_idx  (Pierrick Bouvier, 2025-05-06; 1 file, -1/+1)

  This should be equivalent to the previous code, and allows calling a
  common function to get a page address later.

  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250414153027.1486719-3-pierrick.bouvier@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

| * target/arm/ptw: extract arm_mmu_idx_to_security_space  (Pierrick Bouvier, 2025-05-06; 1 file, -7/+14)

  We'll reuse this function later.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Message-id: 20250414153027.1486719-2-pierrick.bouvier@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* | include: Remove 'exec/exec-all.h'  (Philippe Mathieu-Daudé, 2025-04-30; 1 file, -1/+0)

  "exec/exec-all.h" is now fully empty, let's remove it.

  Mechanical change running:

    $ sed -i '/exec\/exec-all.h/d' $(git grep -wl exec/exec-all.h)

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Mark Cave-Ayland <mark.caveayland@nutanix.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-ID: <20250424202412.91612-14-philmd@linaro.org>

* | accel/tcg: Extract probe API out of 'exec/exec-all.h'  (Philippe Mathieu-Daudé, 2025-04-30; 1 file, -0/+1)

  Declare probe methods in "accel/tcg/probe.h" to emphasize they are
  specific to TCG accelerator.

  Suggested-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Mark Cave-Ayland <mark.caveayland@nutanix.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-ID: <20250424202412.91612-13-philmd@linaro.org>

* exec/cpu-all: remove exec/target_page include  (Pierrick Bouvier, 2025-04-23; 1 file, -0/+1)

  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* exec/cpu-all: extract tlb flags defines to exec/tlb-flags.h  (Pierrick Bouvier, 2025-04-23; 1 file, -0/+1)

  Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-ID: <20250320223002.2915728-3-pierrick.bouvier@linaro.org>

* tcg: Remove TCG_OVERSIZED_GUEST  (Richard Henderson, 2025-02-18; 1 file, -34/+0)

  This is now prohibited in configuration.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* arm/ptw: Honour WXN/UWXN and SIF in short-format descriptors  (Pavel Skripkin, 2024-11-19; 1 file, -31/+24)

  Currently the handling of page protection in the short-format
  descriptor is open-coded. This means that we forgot to update it to
  handle some newer architectural features, including:
  * handling of SCTLR.{UWXN,WXN}
  * handling of SCR.SIF

  Make the short-format descriptor code call the same get_S1prot()
  that we already use for the LPAE descriptor format. This makes the
  code simpler and means it now correctly honours the WXN/UWXN and SIF
  bits.

  Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
  Message-id: 20241118152537.45277-1-paskripkin@gmail.com
  [PMM: fixed a couple of checkpatch nits, tweaked commit message]
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* arm/ptw: Make get_S1prot accept decoded AP  (Pavel Skripkin, 2024-11-19; 1 file, -8/+9)

  AP in armv7 short-descriptor mode has 3 bits, and also a domain,
  which makes it incompatible with the other Arm schemes. To make it
  possible to share get_S1prot between armv8, armv7 long format, armv7
  short format and armv6, it's easier to have the caller decode AP.

  Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
  Message-id: 20241118152526.45185-1-paskripkin@gmail.com
  [PMM: fixed checkpatch nit]
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Add new MMU indexes for AArch32 Secure PL1&0  (Peter Maydell, 2024-11-05; 1 file, -0/+4)

  Our current usage of MMU indexes when EL3 is AArch32 is confused.
  Architecturally, when EL3 is AArch32, all Secure code runs under the
  Secure PL1&0 translation regime:
  * code at EL3, which might be Mon, or SVC, or any of the other
    privileged modes (PL1)
  * code at EL0 (Secure PL0)

  This is different from when EL3 is AArch64, in which case EL3 is its
  own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
  have their own regime.

  We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
  do anything special about Secure PL0, which meant it used the same
  ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug
  where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
  controlling register when in Secure PL0, which meant we were
  spuriously generating alignment faults because we were looking at
  the wrong SCTLR control bits.

  The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug
  that we wouldn't honour the PAN bit for Secure PL1, because there's
  no equivalent _PAN mmu index for it.

  Fix this by adding two new MMU indexes:
  * ARMMMUIdx_E30_0 is for Secure PL0
  * ARMMMUIdx_E30_3_PAN is for Secure PL1 when PAN is enabled
  The existing ARMMMUIdx_E3 is used to mean "Secure PL1 without PAN"
  (and would be named ARMMMUIdx_E30_3 in an AArch32-centric scheme).

  These extra two indexes bring us up to the maximum of 16 that the
  core code can currently support.

  This commit:
  * adds the new MMU index handling to the various places where we
    deal in MMU index values
  * adds assertions that we aren't AArch32 EL3 in a couple of places
    that currently use the E10 indexes, to document why they don't
    also need to handle the E30 indexes
  * documents in a comment why regime_has_2_ranges() doesn't need
    updating

  Notes for backporting: this commit depends on the preceding revert
  of 4c2c04746932; that revert and this commit should probably be
  backported to everywhere that we originally backported 4c2c04746932.

  Cc: qemu-stable@nongnu.org
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2588
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Tested-by: Thomas Huth <thuth@redhat.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20241101142845.1712482-3-peter.maydell@linaro.org

* Revert "target/arm: Fix usage of MMU indexes when EL3 is AArch32"Peter Maydell2024-11-051-5/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 4c2c0474693229c1f533239bb983495c5427784d. This commit tried to fix a problem with our usage of MMU indexes when EL3 is AArch32, using what it described as a "more complicated approach" where we share the same MMU index values for Secure PL1&0 and NonSecure PL1&0. In theory this should work, but the change didn't account for (at least) two things: (1) The design change means we need to flush the TLBs at any point where the CPU state flips from one to the other. We already flush the TLB when SCR.NS is changed, but we don't flush the TLB when we take an exception from NS PL1&0 into Mon or when we return from Mon to NS PL1&0, and the commit didn't add any code to do that. (2) The ATS12NS* address translate instructions allow Mon code (which is Secure) to do a stage 1+2 page table walk for NS. I thought this was OK because do_ats_write() does a page table walk which doesn't use the TLBs, so because it can pass both the MMU index and also an ARMSecuritySpace argument we can tell the table walk that we want NS stage1+2, not S. But that means that all the code within the ptw that needs to find e.g. the regime EL cannot do so only with an mmu_idx -- all these functions like regime_sctlr(), regime_el(), etc would need to pass both an mmu_idx and the security_space, so they can tell whether this is a translation regime controlled by EL1 or EL3 (and so whether to look at SCTLR.S or SCTLR.NS, etc). In particular, because regime_el() wasn't updated to look at the ARMSecuritySpace it would return 1 even when the CPU was in Monitor mode (and the controlling EL is 3). This meant that page table walks in Monitor mode would look at the wrong SCTLR, TCR, etc and would generally fault when they should not. Rather than trying to make the complicated changes needed to rescue the design of 4c2c04746932, we revert it in order to instead take the route that that commit describes as "the most straightforward" fix, where we add new MMU indexes EL30_0, EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0", "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN". This revert will re-expose the "spurious alignment faults in Secure PL0" issue #2326; we'll fix it again in the next commit. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Thomas Huth <thuth@redhat.com> Message-id: 20241101142845.1712482-2-peter.maydell@linaro.org Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
* target/arm: Fix alignment fault priority in get_phys_addr_lpae  (Richard Henderson, 2024-10-13; 1 file, -21/+30)

  Now that we have the MemOp for the access, we can order the
  alignment fault caused by memory type before the permission fault
  for the page.

  For subsequent page hits, permission and stage 2 checks are known to
  pass, and so the TLB_CHECK_ALIGNED fault raised in generic code is
  not mis-ordered.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Move device detection earlier in get_phys_addr_lpae  (Richard Henderson, 2024-10-13; 1 file, -24/+25)

  Determine cache attributes, and thence Device vs Normal memory,
  earlier in the function. We have an existing regime_is_stage2 if
  block into which this can be slotted.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Pass MemOp to get_phys_addr_lpae  (Richard Henderson, 2024-10-13; 1 file, -2/+4)

  Pass the value through from get_phys_addr_nogpc.

  Reviewed-by: Helge Deller <deller@gmx.de>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Pass MemOp through get_phys_addr_twostage  (Richard Henderson, 2024-10-13; 1 file, -4/+6)

  Pass memop through get_phys_addr_twostage and its mutual recursion
  with get_phys_addr_nogpc.

  Reviewed-by: Helge Deller <deller@gmx.de>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Pass MemOp to get_phys_addr_nogpc  (Richard Henderson, 2024-10-13; 1 file, -6/+8)

  Zero is the safe do-nothing value for callers to use. Pass the value
  through from get_phys_addr_gpc and get_phys_addr_with_space_nogpc.

  Reviewed-by: Helge Deller <deller@gmx.de>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Pass MemOp to get_phys_addr_gpc  (Richard Henderson, 2024-10-13; 1 file, -5/+6)

  Zero is the safe do-nothing value for callers to use. Pass the value
  through from get_phys_addr.

  Reviewed-by: Helge Deller <deller@gmx.de>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Pass MemOp to get_phys_addr_with_space_nogpc  (Richard Henderson, 2024-10-13; 1 file, -1/+1)

  Zero is the safe do-nothing value for callers to use.

  Reviewed-by: Helge Deller <deller@gmx.de>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Pass MemOp to get_phys_addr  (Richard Henderson, 2024-10-13; 1 file, -1/+1)

  Zero is the safe do-nothing value for callers to use.

  Reviewed-by: Helge Deller <deller@gmx.de>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/arm: Avoid target_ulong for physical address lookups  (Ard Biesheuvel, 2024-10-01; 1 file, -8/+8)

  target_ulong is typedef'ed as a 32-bit integer when building the
  qemu-system-arm target, and this is smaller than the size of an
  intermediate physical address when LPAE is being used.

  Given that Linux may place leaf level user page tables in high
  memory when built for LPAE, the kernel will crash with an external
  abort as soon as it enters user space when running with more than
  ~3 GiB of system RAM.

  So replace target_ulong with vaddr in places where it may carry an
  address value that is not representable in 32 bits.

  Fixes: f3639a64f602ea ("target/arm: Use softmmu tlbs for page table walking")
  Cc: qemu-stable@nongnu.org
  Reported-by: Arnd Bergmann <arnd@arndb.de>
  Tested-by: Arnd Bergmann <arnd@arndb.de>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Message-id: 20240927071051.1444768-1-ardb+git@google.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

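  A standalone illustration of the failure mode (the address is made
  up; any page table placed above 4 GiB triggers it):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t pte_addr = 0x1c0000000ull;      /* table above 4 GiB */
        uint32_t truncated = (uint32_t)pte_addr; /* 32-bit target_ulong */
        /* The walk would read from 0xc0000000 instead of 0x1c0000000,
         * and the guest takes an external abort. */
        printf("%llx -> %x\n", (unsigned long long)pte_addr, truncated);
        return 0;
    }
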
* hvf: arm: Implement and use hvf_get_physical_address_range  (Danny Canter, 2024-09-13; 1 file, -0/+15)

  This patch's main focus is to use the previously added
  hvf_get_physical_address_range to inform VM creation about the IPA
  size we need for the VM, so we can extend the default 36b IPA size
  and support VMs with 64+GB of RAM. This is done by freezing the
  memory map, computing the highest GPA and then (depending on if the
  platform supports an IPA size that large) telling the kernel to use
  a size >= that for the VM.

  In pursuit of this, a couple of things related to how we handle the
  physical address range we expose to guests were altered. For an
  explanation of what we were doing: today, to get the IPA size we
  were reading id_aa64mmfr0_el1's PARange field from a newly made
  vcpu. Unfortunately, HVF just returns the host's PARange directly
  for the initial value and not the IPA size that will actually back
  the VM, so we believe we have much more address space than we
  actually do today.

  Starting in macOS 13.0 some APIs were introduced to be able to query
  the maximum IPA size the kernel supports, and to set the IPA size
  for a given VM. However, this still has a couple of issues on
  < macOS 15. Up until macOS 15 (and if the hardware supported it) the
  max IPA size was 39 bits, which is not a valid PARange value, so we
  can't clamp down what we advertise in the vcpu's id_aa64mmfr0_el1 to
  our IPA size. Starting in macOS 15 however, the maximum IPA size is
  40 bits (if it's supported in the hardware as well), which is also a
  valid PARange value, so we can set our IPA size to the maximum as
  well as clamp down the PARange we advertise to the guest. This
  allows VMs with 64+ GB of RAM and should fix the oddness of the
  PARange situation as well.

  Signed-off-by: Danny Canter <danny_canter@apple.com>
  Message-id: 20240828111552.93482-4-danny_canter@apple.com
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Fix usage of MMU indexes when EL3 is AArch32  (Peter Maydell, 2024-08-13; 1 file, -1/+5)

  Our current usage of MMU indexes when EL3 is AArch32 is confused.
  Architecturally, when EL3 is AArch32, all Secure code runs under the
  Secure PL1&0 translation regime:
  * code at EL3, which might be Mon, or SVC, or any of the other
    privileged modes (PL1)
  * code at EL0 (Secure PL0)

  This is different from when EL3 is AArch64, in which case EL3 is its
  own translation regime, and EL1 and EL0 (whether AArch32 or AArch64)
  have their own regime.

  We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't
  do anything special about Secure PL0, which meant it used the same
  ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug
  where arm_sctlr() incorrectly picked the NonSecure SCTLR as the
  controlling register when in Secure PL0, which meant we were
  spuriously generating alignment faults because we were looking at
  the wrong SCTLR control bits.

  The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug
  that we wouldn't honour the PAN bit for Secure PL1, because there's
  no equivalent _PAN mmu index for it.

  We could fix this in one of two ways:
  * The most straightforward is to add new MMU indexes EL30_0,
    EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0",
    "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN". This
    matches how we use indexes for the AArch64 regimes, and preserves
    properties like being able to determine the privilege level from
    an MMU index without any other information. However it would add
    two MMU indexes (we can share one with ARMMMUIdx_EL3), and we are
    already using 14 of the 16 the core TLB code permits.
  * The more complicated approach is the one we take here. We use the
    same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0 as we
    do for NonSecure PL1&0. This saves on MMU indexes, but means we
    need to check in some places whether we're in the Secure PL1&0
    regime or not before we interpret an MMU index.

  The changes in this commit were created by auditing all the places
  where we use specific ARMMMUIdx_ values, and checking whether they
  needed to be changed to handle the new index value usage.

  Note for potential stable backports: taking also the previous
  (comment-change-only) commit might make the backport easier.

  Cc: qemu-stable@nongnu.org
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Tested-by: Bernhard Beschow <shentey@gmail.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org

* exec/cpu: Extract page-protection definitions to page-protection.h  (Philippe Mathieu-Daudé, 2024-05-06; 1 file, -0/+1)

  Extract page-protection definitions from "exec/cpu-all.h" to
  "exec/page-protection.h".

  The list of files requiring the new header was generated using:

    $ git grep -wE \
      'PAGE_(READ|WRITE|EXEC|RWX|VALID|ANON|RESERVED|TARGET_.|PASSTHROUGH)'

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Acked-by: Nicholas Piggin <npiggin@gmail.com>
  Acked-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20240427155714.53669-3-philmd@linaro.org>

* target/arm: Do memory type alignment check when translation enabled  (Richard Henderson, 2024-03-05; 1 file, -0/+39)

  If translation is enabled, and the PTE memory type is Device, enable
  checking alignment via TLB_CHECK_ALIGNED. While the check is done
  later than it should be per the ARM, it's better than not performing
  the check at all.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20240301204110.656742-7-richard.henderson@linaro.org
  [PMM: tweaks to comment text]
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

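  The idea, reduced to a fragment. The helper name and the
  tlb_fill_flags field are assumptions for illustration;
  TLB_CHECK_ALIGNED is the flag named in this log:

    /* During the page table walk, once cache attributes are known: */
    if (memattr_is_device(cacheattrs.attrs)) {
        /* Tag the TLB entry so generic TCG code enforces natural
         * alignment on accesses through it. */
        result->f.tlb_fill_flags |= TLB_CHECK_ALIGNED;
    }
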
* arm/ptw: Handle atomic updates of page tables entries in MMIO during PTW.  (Jonathan Cameron, 2024-02-27; 1 file, -2/+62)

  I'm far from confident this handling here is correct. Hence RFC. In
  particular not sure on what locks I should hold for this to be even
  moderately safe.

  The function already appears to be inconsistent in what it returns,
  as the CONFIG_ATOMIC64 block returns the endian-converted 'eventual'
  value of the cmpxchg whereas the TCG_OVERSIZED_GUEST case returns
  the previous value.

  Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Message-id: 20240219161229.11776-1-Jonathan.Cameron@huawei.com
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

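  For context, the race-free pattern a page table walker needs when it
  updates a descriptor in place; a standalone sketch of the
  compare-and-swap loop, not the MMIO path this commit adds:

    #include <stdbool.h>
    #include <stdint.h>

    #define DESC_AF (1ull << 10)   /* LPAE Access Flag */

    static void set_access_flag(uint64_t *pte)
    {
        uint64_t old = __atomic_load_n(pte, __ATOMIC_ACQUIRE);
        uint64_t new_val;
        do {
            new_val = old | DESC_AF;
            /* On failure, 'old' is refreshed with the value another
             * agent wrote, and we retry against it. */
        } while (!__atomic_compare_exchange_n(pte, &old, new_val, false,
                                              __ATOMIC_SEQ_CST,
                                              __ATOMIC_SEQ_CST));
    }
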
* target/arm: arm_pamax() no longer needs to do feature propagation  (Peter Maydell, 2024-01-15; 1 file, -8/+6)

  In arm_pamax(), we need to cope with the virt board calling this
  function on a CPU object which has been initialized but not
  realized. We used to do propagation of feature-flag implications
  (such as "V7VE implies LPAE") at realize, so we have some code in
  arm_pamax() which manually checks for both V7VE and LPAE feature
  flags.

  In commit b8f7959f28c4f36 we moved the feature propagation for
  almost all features from realize to post-init. That means that now
  when the virt board calls arm_pamax(), the feature propagation has
  been done. So we can drop the manual propagation handling and check
  only for the feature we actually care about, which is
  ARM_FEATURE_LPAE.

  Retain the comment that the virt board is calling this function with
  a not completely realized CPU object, because that is a potential
  beartrap for later changes which is worth calling out.

  (Note that b8f7959f28c4f36 actually fixed a bug in the arm_pamax()
  handling: arm_pamax() was missing a check for ARM_FEATURE_V8, so it
  incorrectly thought that the qemu-system-arm 'max' CPU did not have
  LPAE and turned off 'highmem' support in the virt board. Following
  b8f7959f28c4f36 qemu-system-arm 'max' is treated the same as
  'cortex-a15' and other v7 LPAE CPUs, because the generic feature
  propagation code does correctly propagate V8 -> V7VE -> LPAE.)

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20240109143804.1118307-1-peter.maydell@linaro.org

* target/arm: Handle FEAT_NV page table attribute changes  (Peter Maydell, 2024-01-09; 1 file, -0/+21)

  FEAT_NV requires that when HCR_EL2.{NV,NV1} == {1,1} the handling of
  some of the page table attribute bits changes for the EL1&0
  translation regime:

  * for block and page descriptors:
    - bit [54] holds PXN, not UXN
    - bit [53] is RES0, and the effective value of UXN is 0
    - bit [6], AP[1], is treated as 0
  * for table descriptors, when hierarchical permissions are enabled:
    - bit [60] holds PXNTable, not UXNTable
    - bit [59] is RES0
    - bit [61], APTable[0] is treated as 0

  Implement these changes to the page table attribute handling.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Tested-by: Miguel Luis <miguel.luis@oracle.com>

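  The block/page-descriptor part of that list, restated as a
  standalone fragment (illustration only, not the QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Effective execute/AP attributes of a block or page descriptor
     * when HCR_EL2.{NV,NV1} == {1,1}. */
    static void nv1_block_attrs(uint64_t desc, bool *pxn, bool *uxn,
                                bool *ap1)
    {
        *pxn = desc & (1ull << 54); /* bit 54 now holds PXN, not UXN */
        *uxn = false;               /* bit 53 is RES0; effective UXN is 0 */
        *ap1 = false;               /* bit 6, AP[1], is treated as 0 */
    }
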
* system/cpus: rename qemu_mutex_lock_iothread() to bql_lock()  (Stefan Hajnoczi, 2024-01-08; 1 file, -3/+3)

  The Big QEMU Lock (BQL) has many names and they are confusing. The
  actual QemuMutex variable is called qemu_global_mutex but it's
  commonly referred to as the BQL in discussions and some code
  comments. The locking APIs, however, are called
  qemu_mutex_lock_iothread() and qemu_mutex_unlock_iothread().

  The "iothread" name is historic and comes from when the main thread
  was split into KVM vcpu threads and the "iothread" (now called the
  main loop thread). I have contributed to the confusion myself by
  introducing a separate --object iothread, a separate concept
  unrelated to the BQL.

  The "iothread" name is no longer appropriate for the BQL. Rename the
  locking APIs to:
  - void bql_lock(void)
  - void bql_unlock(void)
  - bool bql_locked(void)

  There are more APIs with "iothread" in their names. Subsequent
  patches will rename them. There are also comments and documentation
  that will be updated in later patches.

  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  Reviewed-by: Paul Durrant <paul@xen.org>
  Acked-by: Fabiano Rosas <farosas@suse.de>
  Acked-by: David Woodhouse <dwmw@amazon.co.uk>
  Reviewed-by: Cédric Le Goater <clg@kaod.org>
  Acked-by: Peter Xu <peterx@redhat.com>
  Acked-by: Eric Farman <farman@linux.ibm.com>
  Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com>
  Acked-by: Hyman Huang <yong.huang@smartx.com>
  Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
  Message-id: 20240102153529.486531-2-stefanha@redhat.com
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

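  At a call site the rename looks like this (the APIs are the ones
  named in the message above; the function bodies are illustrative):

    /* before */
    static void touch_global_state_old(void)
    {
        qemu_mutex_lock_iothread();
        /* ... access BQL-protected state ... */
        qemu_mutex_unlock_iothread();
    }

    /* after */
    static void touch_global_state_new(void)
    {
        bql_lock();
        /* bql_locked() can be used to assert the invariant. */
        /* ... access BQL-protected state ... */
        bql_unlock();
    }
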
* target/arm: Correctly propagate stage 1 BTI guarded bit in a two-stage walk  (Peter Maydell, 2023-11-02; 1 file, -2/+5)

  In a two-stage translation, the result of the BTI guarded bit should
  be the guarded bit from the first stage of translation, as there is
  no BTI guard information in stage two. Our code tried to do this,
  but got it wrong, because we currently have two fields where the GP
  bit information might live (ARMCacheAttrs::guarded and
  CPUTLBEntryFull::extra::arm::guarded), and we were storing the GP
  bit in the latter during the stage 1 walk but trying to copy the
  former in combine_cacheattrs().

  Remove the duplicated storage, and always use the field in
  CPUTLBEntryFull; correctly propagate the stage 1 value to the output
  in get_phys_addr_twostage().

  Note for stable backports: in v8.0 and earlier the field is named
  result->f.guarded, not result->f.extra.arm.guarded.

  Cc: qemu-stable@nongnu.org
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1950
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20231031173723.26582-1-peter.maydell@linaro.org

* target/arm: Move feature test functions to their own header  (Peter Maydell, 2023-10-27; 1 file, -0/+1)

  The feature test functions isar_feature_*() now take up nearly a
  thousand lines in target/arm/cpu.h. This header file is included by
  a lot of source files, most of which don't need these functions.
  Move the feature test functions to their own header file.

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-id: 20231024163510.2972081-2-peter.maydell@linaro.org

* target/arm: Replace TARGET_PAGE_ENTRY_EXTRA  (Anton Johansson, 2023-10-03; 1 file, -2/+2)

  TARGET_PAGE_ENTRY_EXTRA is a macro that allows guests to specify
  additional fields for caching with the full TLB entry. This macro is
  replaced with a union in CPUTLBEntryFull, thus making CPUTLB
  target-agnostic at the cost of a slightly inflated CPUTLBEntryFull
  for non-arm guests.

  Note, this is needed to ensure that fields in CPUTLB don't vary in
  offset between various targets. (arm is the only guest actually
  making use of this feature.)

  Signed-off-by: Anton Johansson <anjo@rev.ng>
  Message-Id: <20230912153428.17816-2-anjo@rev.ng>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

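  The shape of the change, simplified. Beyond extra.arm.guarded, which
  the BTI entry above references, the field names here are
  illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct CPUTLBEntryFullSketch {
        uint64_t phys_addr;          /* ...common fields elided... */
        /* One union at a fixed offset replaces the per-target
         * TARGET_PAGE_ENTRY_EXTRA macro. */
        union {
            struct {
                uint8_t pte_attrs;   /* hypothetical arm cache field */
                bool guarded;        /* BTI GP bit */
            } arm;
        } extra;
    } CPUTLBEntryFullSketch;
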