path: root/accel/tcg/cputlb.c
...
* accel/tcg/cputlb.c: Widen addr in MMULookupPageData  (Anton Johansson, 2023-06-26; 1 file, -15/+15)

  Functions accessing MMULookupPageData are also updated.

  Signed-off-by: Anton Johansson <anjo@rev.ng>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20230621135633.1649-6-anjo@rev.ng>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg/cputlb.c: Widen CPUTLBEntry access functions  (Anton Johansson, 2023-06-26; 1 file, -4/+4)

  Signed-off-by: Anton Johansson <anjo@rev.ng>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20230621135633.1649-5-anjo@rev.ng>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel: Replace target_ulong in tlb_*()  (Anton Johansson, 2023-06-26; 1 file, -90/+87)

  Replaces target_ulong with vaddr for guest virtual addresses in the
  tlb_*() functions and auxiliary structs.

  Signed-off-by: Anton Johansson <anjo@rev.ng>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20230621135633.1649-2-anjo@rev.ng>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Handle MO_ATOM_WITHIN16 in do_st16_leN  (Richard Henderson, 2023-06-20; 1 file, -0/+1)

  Otherwise we hit the assert-not-reached in the default case.  Handle
  it as MO_ATOM_NONE, because of the size and misalignment.  We already
  handle this correctly in do_ld16_beN.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Split helper-proto.h  (Richard Henderson, 2023-06-05; 1 file, -2/+1)

  Create helper-proto-common.h without the target-specific portion.
  Use that in tcg-op-common.h.  Include helper-proto.h in target/arm
  and target/hexagon before helper-info.c.inc; all other targets are
  already correct in this regard.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Split out tcg/oversized-guest.h  (Richard Henderson, 2023-06-05; 1 file, -0/+1)

  Move a use of TARGET_LONG_BITS out of tcg/tcg.h.  Include the new
  file only where required.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Widen CPUTLBEntry comparators to 64-bits  (Richard Henderson, 2023-06-05; 1 file, -2/+6)

  This makes CPUTLBEntry agnostic to the address size of the guest.
  When 32-bit addresses are in effect, we can simply read the low
  32 bits of the 64-bit field.  Similarly when we need to update the
  field for setting TLB_NOTDIRTY.

  For TCG backends that could in theory be big-endian, but in practice
  are not (arm, loongarch, riscv), use QEMU_BUILD_BUG_ON to document
  and ensure this is not accidentally missed.

  For s390x, which is always big-endian, use HOST_BIG_ENDIAN anyway,
  to document the reason for the adjustment.

  For sparc64 and ppc64, always perform a 64-bit load, and rely on the
  following 32-bit comparison to ignore the high bits.

  Rearrange mips and ppc if ladders for clarity.

  Reviewed-by: Anton Johansson <anjo@rev.ng>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
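  A minimal C sketch of the trick described above, with hypothetical names
  rather than QEMU's actual definitions: the comparator stays 64-bit, and a
  32-bit guest reads only the low half, whose byte offset depends on host
  endianness.

      #include <stdint.h>

      /* Assume the build system defines HOST_BIG_ENDIAN as 0 or 1. */
      #ifndef HOST_BIG_ENDIAN
      #define HOST_BIG_ENDIAN 0
      #endif

      typedef union TLBComparator {
          uint64_t u64;                      /* full comparator, 64-bit guests */
      #if HOST_BIG_ENDIAN
          struct { uint32_t hi, lo; } u32;   /* low half sits at byte offset 4 */
      #else
          struct { uint32_t lo, hi; } u32;   /* low half sits at byte offset 0 */
      #endif
      } TLBComparator;

      static inline uint64_t cmp_read(const TLBComparator *c, int guest_bits)
      {
          /* A 32-bit guest only ever compares the low 32 bits. */
          return guest_bits == 32 ? c->u32.lo : c->u64;
      }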
* accel/tcg: Correctly use atomic128.h in ldst_atomicity.c.inc  (Richard Henderson, 2023-05-23; 1 file, -1/+1)

  Remove the locally defined load_atomic16 and store_atomic16, along
  with HAVE_al16 and HAVE_al16_fast, in favor of the routines defined
  in atomic128.h.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Eliminate #if on HAVE_ATOMIC128 and HAVE_CMPXCHG128  (Richard Henderson, 2023-05-23; 1 file, -1/+1)

  These symbols will shortly become dynamic runtime tests and therefore
  not appropriate for the preprocessor.  Use the matching CONFIG_*
  symbols for that purpose.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Remove prot argument to atomic_mmu_lookup  (Richard Henderson, 2023-05-23; 1 file, -53/+30)

  Now that load/store are gone, we're always passing
  PAGE_READ | PAGE_WRITE for RMW atomic operations.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Unify cpu_{ld,st}*_{be,le}_mmu  (Richard Henderson, 2023-05-23; 1 file, -99/+23)

  With the current structure of cputlb.c, there is no difference between
  the little-endian and big-endian entry points, aside from the assert.
  Unify the pairs of functions.

  The only use of the functions with explicit endianness was in
  target/sparc64, and that was only to satisfy the assert: the correct
  endianness is already built into memop.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
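  A minimal sketch of why the pairs collapse, with hypothetical names: once
  the MemOp encodes the access endianness, a single helper can decide at run
  time whether to byte-swap, and the _le/_be entry points become identical
  wrappers.

      #include <stdbool.h>
      #include <stdint.h>
      #include <string.h>

      /* Hypothetical flag: assume one MemOp bit selects big-endian access. */
      enum { MO_BE_FLAG = 1u << 3 };

      static bool host_is_big_endian(void)
      {
          const uint16_t probe = 1;
          uint8_t first_byte;
          memcpy(&first_byte, &probe, 1);
          return first_byte == 0;
      }

      static uint16_t ld16_unified(const uint8_t *host, unsigned memop)
      {
          uint16_t v;
          memcpy(&v, host, sizeof(v));
          /* Swap iff the endianness requested by memop differs from the host's. */
          bool want_be = (memop & MO_BE_FLAG) != 0;
          return want_be == host_is_big_endian() ? v : __builtin_bswap16(v);
      }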
* tcg: Widen helper_{ld,st}_i128 addresses to uint64_t  (Richard Henderson, 2023-05-16; 1 file, -3/+2)

  Always pass the target address as uint64_t.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Widen tcg-ldst.h addresses to uint64_t  (Richard Henderson, 2023-05-16; 1 file, -13/+13)

  Always pass the target address as uint64_t.
  Adjust tcg_out_{ld,st}_helper_args to match.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Add 128-bit guest memory primitives  (Richard Henderson, 2023-05-16; 1 file, -94/+305)

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Unify helper_{be,le}_{ld,st}*  (Richard Henderson, 2023-05-16; 1 file, -129/+61)

  With the current structure of cputlb.c, there is no difference between
  the little-endian and big-endian entry points, aside from the assert.
  Unify the pairs of functions.  Hoist the qemu_{ld,st}_helpers arrays
  to tcg.c.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Honor atomicity of stores  (Richard Henderson, 2023-05-16; 1 file, -60/+48)

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Honor atomicity of loads  (Richard Henderson, 2023-05-16; 1 file, -39/+136)

  Create ldst_atomicity.c.inc.

  Not required for user-only code loads, because we've ensured that the
  page is read-only before beginning to translate code.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
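  One building block such a file needs, sketched here as an illustration of
  the technique rather than QEMU's code: a misaligned 2-byte guest load can
  still be single-copy atomic by issuing one aligned wider host load and
  extracting the bytes, provided the value does not cross the wider
  alignment boundary (a little-endian host is assumed below).

      #include <stdint.h>

      /* Load 2 bytes atomically on a little-endian host.  Assumes the two
       * bytes do not cross an aligned 8-byte boundary; a real implementation
       * needs a wider (16-byte) or stop-the-world fallback for that case. */
      static uint16_t load_atom_2_le(const void *pv)
      {
          uintptr_t pi = (uintptr_t)pv;

          if ((pi & 1) == 0) {
              /* Naturally aligned: a single 2-byte load is atomic. */
              return __atomic_load_n((const uint16_t *)pv, __ATOMIC_RELAXED);
          }

          /* One atomic 8-byte load covering the value, then shift it out. */
          const uint64_t *aligned = (const uint64_t *)(pi & ~(uintptr_t)7);
          uint64_t word = __atomic_load_n(aligned, __ATOMIC_RELAXED);
          unsigned shift = (pi & 7) * 8;
          return (uint16_t)(word >> shift);
      }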
* accel/tcg: Reorg system mode store helpers  (Richard Henderson, 2023-05-11; 1 file, -208/+186)

  Instead of trying to unify all operations on uint64_t, use
  mmu_lookup() to perform the basic tlb hit and resolution.  Create
  individual functions to handle access by size.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Reorg system mode load helpers  (Richard Henderson, 2023-05-11; 1 file, -209/+412)

  Instead of trying to unify all operations on uint64_t, pull out
  mmu_lookup() to perform the basic tlb hit and resolution.  Create
  individual functions to handle access by size.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Introduce tlb_read_idx  (Richard Henderson, 2023-05-11; 1 file, -71/+33)

  Instead of playing with offsetof in various places, use MMUAccessType
  to index an array.  This is easily defined instead of the previous
  dummy padding array in the union.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
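  A miniature model of the idea, with simplified types rather than QEMU's
  real CPUTLBEntry: alias the three comparator fields with an array so an
  access type selects a field by plain indexing instead of offsetof
  arithmetic.

      #include <stdint.h>

      typedef enum {
          ACC_DATA_LOAD  = 0,
          ACC_DATA_STORE = 1,
          ACC_INST_FETCH = 2,
      } AccessType;   /* mirrors the shape of MMUAccessType */

      typedef union TLBEntry {
          struct {
              uint64_t addr_read;
              uint64_t addr_write;
              uint64_t addr_code;
          };
          uint64_t addr_idx[3];   /* aliases the three fields above, in order */
      } TLBEntry;

      static inline uint64_t tlb_read_idx(const TLBEntry *e, AccessType type)
      {
          return e->addr_idx[type];
      }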
* accel/tcg: Fix atomic_mmu_lookup for reads  (Richard Henderson, 2023-05-11; 1 file, -1/+1)

  A copy-paste bug had us looking at the victim cache for writes.

  Cc: qemu-stable@nongnu.org
  Reported-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Fixes: 08dff435e2 ("tcg: Probe the proper permissions for atomic ops")
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-Id: <20230505204049.352469-1-richard.henderson@linaro.org>
* tcg: Widen helper_*_st[bw]_mmu val arguments  (Richard Henderson, 2023-05-05; 1 file, -3/+3)

  While the old type was correct in the ideal sense, some ABIs require
  the argument to be zero-extended.  Using uint32_t for all such values
  is a decent compromise.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Add cpu_ld*_code_mmu  (Richard Henderson, 2023-05-02; 1 file, -0/+48)

  At least RISC-V has the need to be able to perform a read using
  execute permissions, outside of translation.  Add helpers to
  facilitate this.

  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Acked-by: Alistair Francis <alistair.francis@wdc.com>
  Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
  Tested-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Message-Id: <20230325105429.1142530-9-richard.henderson@linaro.org>
  Message-Id: <20230412114333.118895-9-richard.henderson@linaro.org>
* accel/tcg: Uncache the host address for instruction fetch when tlb size < 1  (Weiwei Li, 2023-05-02; 1 file, -0/+5)

  When a PMP entry overlaps part of the page, we set the tlb_size to 1,
  which causes the address in the tlb entry to be set with
  TLB_INVALID_MASK, so that the next access goes through tlb_fill again.
  However, this does not work in tb_gen_code() =>
  get_page_addr_code_hostp(): the TLB host address will be cached, and
  the following instructions can use this host address directly, which
  may lead to the bypass of PMP-related checks.

  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1542

  Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
  Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
  Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20230422130329.23555-6-liweiwei@iscas.ac.cn>
* accel/tcg: Trigger watchpoints from atomic_mmu_lookup  (Richard Henderson, 2023-03-05; 1 file, -11/+29)

  Fixes a bug whereby we weren't reporting these accesses.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Honor TLB_DISCARD_WRITE in atomic_mmu_lookup  (Richard Henderson, 2023-03-05; 1 file, -1/+1)

  Using an atomic write or read-write insn on ROM is basically a
  happens-never case.  Handle it via stop-the-world, which will
  generate non-atomic serial code, where we can correctly ignore the
  write while producing the correct read result.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Retain prot flags from tlb_fill  (Richard Henderson, 2023-03-05; 1 file, -1/+0)

  While changes are made to prot within tlb_set_page_full, they are an
  implementation detail of softmmu.  Retain the original for any target
  use of probe_access_full.

  Fixes: 4047368938f6 ("accel/tcg: Introduce tlb_set_page_full")
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Add 'size' param to probe_access_full  (Richard Henderson, 2023-02-28; 1 file, -2/+2)

  Change to match the recent change to probe_access_flags.  All
  existing callers updated to supply 0, so no change in behaviour.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Add 'size' param to probe_access_flags()  (Daniel Henrique Barboza, 2023-02-28; 1 file, -3/+14)

  probe_access_flags() as it is today uses probe_access_full(), which
  in turn uses probe_access_internal() with size = 0.
  probe_access_internal() then uses the size to call the tlb_fill()
  callback for the given CPU.  This size param ('fault_size' as
  probe_access_internal() calls it) is ignored by most existing
  .tlb_fill callback implementations, e.g. arm_cpu_tlb_fill(),
  ppc_cpu_tlb_fill(), x86_cpu_tlb_fill() and mips_cpu_tlb_fill(), to
  name a few.

  But RISC-V's riscv_cpu_tlb_fill() actually uses it.  The 'size'
  parameter is used to check for PMP (Physical Memory Protection)
  access.  This is necessary because PMP does not make any guarantees
  about all the bytes of the same page having the same permissions,
  i.e. the same page can have different PMP properties, so we're forced
  to make sub-page range checks.  To allow RISC-V emulation to do a
  probe_access_flags() that covers PMP, we need to either add a 'size'
  param to the existing probe_access_flags() or create a new interface
  (e.g. probe_access_range_flags).

  There are quite a few probe_* APIs already, so let's add a 'size'
  param to probe_access_flags() and re-use this API.  This is done by
  open coding what probe_access_full() does inside probe_access_flags()
  and passing the 'size' param to probe_access_internal().  Existing
  probe_access_flags() callers use size = 0 to not change their current
  API usage.

  'size' is asserted to enforce single-page access like probe_access()
  already does.  No behavioral changes intended.

  Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
  Message-Id: <20230223234427.521114-2-dbarboza@ventanamicro.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
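  The single-page assertion the last paragraph mentions can be modeled as
  below; this is a sketch with assumed page constants, not the exact QEMU
  assertion.

      #include <assert.h>
      #include <stdint.h>

      #define TARGET_PAGE_BITS 12   /* assumed 4 KiB pages */
      #define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

      /* Reject probes whose [addr, addr + size) range would span two pages:
       * such a probe could not be summarized by one TLB entry's flags. */
      static void assert_single_page(uint64_t addr, int size)
      {
          assert(size == 0 ||
                 (addr & TARGET_PAGE_MASK) ==
                 ((addr + size - 1) & TARGET_PAGE_MASK));
      }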
* tcg: Add guest load/store primitives for TCGv_i128  (Richard Henderson, 2023-02-04; 1 file, -0/+112)

  These are not yet considering atomicity of the 16-byte value; this is
  a direct replacement for the current target code which uses a pair of
  8-byte operations.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
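  A minimal model of the interim, non-atomic approach the message describes,
  using a simplified Int128 stand-in and assuming little-endian layout:

      #include <stdint.h>
      #include <string.h>

      typedef struct { uint64_t lo, hi; } I128;   /* stand-in for Int128 */

      /* 16-byte guest load as two 8-byte halves: each half is read
       * correctly, but another CPU may observe a torn 16-byte value. */
      static I128 ld128_le(const uint8_t *host)
      {
          I128 r;
          memcpy(&r.lo, host, 8);
          memcpy(&r.hi, host + 8, 8);
          return r;
      }

      static void st128_le(uint8_t *host, I128 v)
      {
          memcpy(host, &v.lo, 8);
          memcpy(host + 8, &v.hi, 8);
      }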
* accel/tcg: Test CPUJumpCache in tb_jmp_cache_clear_page  (Eric Auger, 2023-02-04; 1 file, -1/+6)

  After commit 4e4fa6c12d ("accel/tcg: Complete cpu initialization
  before registration"), it looks like the CPUJumpCache pointer can be
  NULL.  This causes a SIGSEGV when running the debug-wp-migration kvm
  unit test.

  In the first place it should be clarified why this TCG code is called
  with KVM acceleration.  This may hide another bug.

  Fixes: 4e4fa6c12d ("accel/tcg: Complete cpu initialization before registration")
  Signed-off-by: Eric Auger <eric.auger@redhat.com>
  Message-Id: <20230203171510.2867451-1-eric.auger@redhat.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* bulk: Rename TARGET_FMT_plx -> HWADDR_FMT_plx  (Philippe Mathieu-Daudé, 2023-01-18; 1 file, -1/+1)

  The 'hwaddr' type is defined in "exec/hwaddr.h" as:

      hwaddr is the type of a physical address
      (its size can be different from 'target_ulong').

  All definitions use the 'HWADDR_' prefix, except TARGET_FMT_plx:

      $ fgrep define include/exec/hwaddr.h
      #define HWADDR_H
      #define HWADDR_BITS 64
      #define HWADDR_MAX UINT64_MAX
      #define TARGET_FMT_plx "%016" PRIx64
              ^^^^^^
      #define HWADDR_PRId PRId64
      #define HWADDR_PRIi PRIi64
      #define HWADDR_PRIo PRIo64
      #define HWADDR_PRIu PRIu64
      #define HWADDR_PRIx PRIx64
      #define HWADDR_PRIX PRIX64

  Since hwaddr's size can be *different* from target_ulong, it is very
  confusing to read one of its formats using the 'TARGET_FMT_' prefix,
  normally used for the target_long / target_ulong types:

      $ fgrep TARGET_FMT_ include/exec/cpu-defs.h
      #define TARGET_FMT_lx "%08x"
      #define TARGET_FMT_ld "%d"
      #define TARGET_FMT_lu "%u"
      #define TARGET_FMT_lx "%016" PRIx64
      #define TARGET_FMT_ld "%" PRId64
      #define TARGET_FMT_lu "%" PRIu64

  Apparently this format was missed during commit a8170e5e97 ("Rename
  target_phys_addr_t to hwaddr"), so complete it by doing a bulk rename
  with:

      $ sed -i -e s/TARGET_FMT_plx/HWADDR_FMT_plx/g $(git grep -l TARGET_FMT_plx)

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Message-Id: <20230110212947.34557-1-philmd@linaro.org>
  [thuth: Fix some warnings from checkpatch.pl along the way]
  Signed-off-by: Thomas Huth <thuth@redhat.com>
* accel/tcg: Use QEMU_IOTHREAD_LOCK_GUARD in io_readx/io_writex  (Richard Henderson, 2023-01-04; 1 file, -17/+8)

  Narrow the scope of the lock to the actual read/write, moving the
  cpu_transaction_failed call outside the lock.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Factor tb_invalidate_phys_range_fast() out  (Philippe Mathieu-Daudé, 2022-12-20; 1 file, -4/+1)

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221209093649.43738-5-philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Rename tb_invalidate_phys_page_fast{,__locked}()  (Philippe Mathieu-Daudé, 2022-12-20; 1 file, -1/+1)

  Emphasize this function is called with pages locked.

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221209093649.43738-4-philmd@linaro.org>
  [rth: Use "__locked" suffix, to match other instances.]
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Remove trace events from trace-root.h  (Philippe Mathieu-Daudé, 2022-12-20; 1 file, -1/+1)

  Commit d9bb58e510 ("tcg: move tcg related files into accel/tcg/
  subdirectory") introduced accel/tcg/trace-events, so we don't need to
  use the root trace-events anymore.

  Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20221209093649.43738-3-philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* include/hw/core: Create struct CPUJumpCache  (Richard Henderson, 2022-10-04; 1 file, -4/+5)

  Wrap the bare TranslationBlock pointer into a structure.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Inline tb_flush_jmp_cache  (Richard Henderson, 2022-10-04; 1 file, -11/+14)

  This function has two users, who use it incompatibly.  In
  tlb_flush_page_by_mmuidx_async_0, when flushing a single page, we
  need to flush exactly two pages.  In tlb_flush_range_by_mmuidx_async_0,
  when flushing a range of pages, we need to flush N+1 pages.

  This avoids double-flushing of jmp cache pages in a range.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
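  A sketch of the two call shapes with hypothetical helper names; the extra
  page accounts for a translation block that starts on the preceding page
  and spills into a flushed one.

      #include <stdint.h>

      #define PAGE_SIZE ((uint64_t)1 << 12)   /* assumed 4 KiB pages */

      static void jmp_cache_clear_page(uint64_t page_addr)
      {
          (void)page_addr;   /* stub: drop all jmp-cache slots for this page */
      }

      /* Single-page flush: exactly two pages, since a TB starting on the
       * preceding page may extend into the flushed one. */
      static void flush_one_page(uint64_t addr)
      {
          jmp_cache_clear_page(addr - PAGE_SIZE);
          jmp_cache_clear_page(addr);
      }

      /* Range flush of N pages: N+1 clears, and no page is cleared twice. */
      static void flush_page_range(uint64_t first, uint64_t last)
      {
          for (uint64_t a = first - PAGE_SIZE; a <= last; a += PAGE_SIZE) {
              jmp_cache_clear_page(a);
          }
      }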
* accel/tcg: Do not align tb->page_addr[0]  (Richard Henderson, 2022-10-04; 1 file, -1/+2)

  Let tb->page_addr[0] contain the address of the first byte of the
  translated block, rather than the address of the page containing the
  start of the translated block.  We need to recover this value anyway
  at various points, and it is easier to discard a page offset when it
  is not needed, which happens naturally via the existing find_page
  shift.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
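  The point in a few lines, as a sketch with assumed constants and a
  simplified TB type: keeping the exact start address loses nothing,
  because the page index falls out of a shift.

      #include <stdint.h>

      #define TARGET_PAGE_BITS 12                      /* assumed */

      typedef struct TB { uint64_t page_addr0; } TB;   /* simplified */

      /* Before: page_addr0 = start & TARGET_PAGE_MASK (offset lost).
       * After: keep the full start address ... */
      static void tb_set_start(TB *tb, uint64_t start)
      {
          tb->page_addr0 = start;
      }

      /* ... and discard the offset only where a page index is wanted,
       * which the existing hash-lookup shift already does. */
      static inline uint64_t tb_page_index(const TB *tb)
      {
          return tb->page_addr0 >> TARGET_PAGE_BITS;
      }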
* accel/tcg: Introduce tlb_set_page_full  (Richard Henderson, 2022-10-03; 1 file, -18/+33)

  Now that we have collected all of the page data into CPUTLBEntryFull,
  provide an interface to record that all in one go, instead of using
  4 arguments.  This interface allows CPUTLBEntryFull to be extended
  without having to change the number of arguments.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Introduce probe_access_full  (Richard Henderson, 2022-10-03; 1 file, -18/+29)

  Add an interface to return the CPUTLBEntryFull struct that goes with
  the lookup.  The result is not intended to be valid across multiple
  lookups, so the user must use the results immediately.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Suppress auto-invalidate in probe_access_internal  (Richard Henderson, 2022-10-03; 1 file, -1/+9)

  When PAGE_WRITE_INV is set when calling tlb_set_page, we immediately
  set TLB_INVALID_MASK in order to force tlb_fill to be called on the
  next lookup.  Here in probe_access_internal, we have just called
  tlb_fill and eliminated true misses, thus the lookup must be valid.

  This allows us to remove a warning comment from s390x.  There doesn't
  seem to be a reason to change the code though.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: David Hildenbrand <david@redhat.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
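  The core of the change can be sketched as a single mask clear right after
  the fill; the bit value here is hypothetical.

      #include <stdint.h>

      #define TLB_INVALID_MASK (1u << 0)   /* hypothetical bit assignment */

      /* Called immediately after a successful tlb_fill: the entry is valid
       * by construction, so the eager-invalidate bit that PAGE_WRITE_INV
       * planted must not leak into the flags returned to the caller. */
      static uint32_t flags_after_fill(uint32_t flags)
      {
          return flags & ~TLB_INVALID_MASK;
      }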
* accel/tcg: Drop addr member from SavedIOTLB  (Richard Henderson, 2022-10-03; 1 file, -4/+3)

  This field is only written, not read; remove it.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Rename CPUIOTLBEntry to CPUTLBEntryFull  (Richard Henderson, 2022-10-03; 1 file, -50/+52)

  This structure will shortly contain more than just data for accessing
  MMIO.  Rename the 'addr' member to 'xlat_section' to more clearly
  indicate its purpose.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: use cached CPUClass in our hot-paths  (Alex Bennée, 2022-10-03; 1 file, -9/+6)

  Before: 35.912 s ± 0.168 s
  After:  35.565 s ± 0.087 s

  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Message-Id: <20220811151413.3350684-5-alex.bennee@linaro.org>
  Signed-off-by: Cédric Le Goater <clg@kaod.org>
  Message-Id: <20220923084803.498337-5-clg@kaod.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
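  A model of the optimization with simplified, QOM-free types: resolve the
  class pointer once at CPU creation and dereference the cached copy on the
  hot path, instead of re-deriving it with a dynamic cast on every access.

      typedef struct CPUClass {
          void (*tlb_fill)(void *cpu);   /* one representative hook */
      } CPUClass;

      typedef struct CPUState {
          const CPUClass *cc;            /* cached once at realize time */
      } CPUState;

      static void cpu_realize(CPUState *cpu, const CPUClass *cc)
      {
          cpu->cc = cc;                  /* the only class lookup */
      }

      static inline void tlb_fill_hot(CPUState *cpu)
      {
          cpu->cc->tlb_fill(cpu);        /* no per-call class lookup */
      }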
* accel/tcg: Use probe_access_internal for softmmu get_page_addr_code_hostp  (Richard Henderson, 2022-09-06; 1 file, -50/+26)

  Simplify the implementation of get_page_addr_code_hostp by reusing
  the existing probe_access infrastructure.

  Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
  Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Move qemu_ram_addr_from_host_nofail to physmem.c  (Richard Henderson, 2022-09-06; 1 file, -12/+0)

  The base qemu_ram_addr_from_host function is already in
  softmmu/physmem.c; move the nofail version to be adjacent.

  Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
  Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
  Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Properly implement get_page_addr_code for user-only  (Richard Henderson, 2022-09-06; 1 file, -5/+0)

  The current implementation is a no-op, simply returning addr.  This
  is incorrect, because we ought to be checking the page permissions
  for execution.

  Make get_page_addr_code inline for both implementations.

  Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
  Tested-by: Ilya Leoshkevich <iii@linux.ibm.com>
  Acked-by: Alistair Francis <alistair.francis@wdc.com>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* accel/tcg: Fix unaligned stores to s390x low-address-protected lowcore  (Ilya Leoshkevich, 2022-07-12; 1 file, -3/+5)

  If low-address protection is active, unaligned stores to non-protected
  parts of lowcore lead to protection exceptions.  The reason is that in
  such cases the tlb_fill() call in store_helper_unaligned() covers the
  [0, addr + size) range, which contains the protected portion of
  lowcore.  This range is too large.

  The most straightforward fix would be to make sure we stay within the
  original [addr, addr + size) range.  However, if an unaligned access
  affects a single page, we don't need to call tlb_fill() in
  store_helper_unaligned() at all, since it would be identical to the
  previous tlb_fill() call in store_helper(), and therefore a no-op.
  If an unaligned access covers multiple pages, this situation does not
  occur.

  Therefore simply skip TLB handling in store_helper_unaligned() if we
  are dealing with a single page.

  Fixes: 2bcf018340cb ("s390x/tcg: low-address protection support")
  Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
  Message-Id: <20220711185640.3558813-2-iii@linux.ibm.com>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
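  The page-crossing test the fix hinges on can be written as below (page
  constants assumed); when it returns false, the earlier tlb_fill() already
  covered the whole access and the second fill can be skipped.

      #include <stdbool.h>
      #include <stdint.h>

      #define TARGET_PAGE_BITS 12   /* assumed */
      #define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

      /* True iff the first and last byte of the access live on different
       * pages, i.e. only then does the unaligned path need its own fill. */
      static bool crosses_page(uint64_t addr, unsigned size)
      {
          return ((addr ^ (addr + size - 1)) & TARGET_PAGE_MASK) != 0;
      }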
* accel/tcg: Assert mmu_idx in range before use in cputlb  (Richard Henderson, 2022-04-26; 1 file, -13/+27)

  Coverity reports out-of-bound accesses within cputlb.c.  This should
  be a false positive due to how the index is decoded from MemOpIdx.
  To be fair, nothing is checking the correct bounds during encoding
  either.

  Assert index in range before use, both to catch user errors and to
  pacify static analysis.

  Fixes: Coverity CID 1487120, 1487127, 1487170, 1487196, 1487215, 1487238
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-Id: <20220401170813.318609-1-richard.henderson@linaro.org>
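  The defensive pattern, sketched with a hypothetical MemOpIdx layout (the
  bit positions and the mode count below are assumptions, not QEMU's exact
  encoding):

      #include <assert.h>
      #include <stdint.h>

      #define NB_MMU_MODES 8   /* assumed per-target count; encoding allows 16 */

      typedef uint32_t MemOpIdx;   /* assumed: mmu_idx above a MemOp nibble */

      static inline unsigned get_mmuidx(MemOpIdx oi)
      {
          return (oi >> 4) & 15;
      }

      /* Validate the decoded index before it subscripts per-mmu-idx arrays:
       * catches bad encodings and satisfies static analysis. */
      static unsigned mmu_idx_checked(MemOpIdx oi)
      {
          unsigned mmu_idx = get_mmuidx(oi);
          assert(mmu_idx < NB_MMU_MODES);
          return mmu_idx;
      }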