path: root/accel/tcg/cputlb.c

Commit log (subject, author, date, files and lines changed per commit):
...
* accel/tcg: Remove ATOMIC_MMU_IDX (Richard Henderson, 2022-04-20; 1 file, -1/+0)
  The last use of this macro was removed in f3e182b10013 ("accel/tcg: Push trace info building into atomic_common.c.inc").
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Fix cpu_ldq_be_mmu typo (Richard Henderson, 2022-03-16; 1 file, -1/+1)
  In the conversion to cpu_ld_*_mmu, the retaddr parameter was corrupted in the one case of cpu_ldq_be_mmu.
  Fixes: f83bcecb1 ("accel/tcg: Add cpu_{ld,st}*_mmu interfaces")
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/902
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220315002506.152030-1-richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Thomas Huth <thuth@redhat.com>

* Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20220211' into staging (Peter Maydell, 2022-02-14; 1 file, -0/+9)
  Fix safe_syscall_base for sparc64. Fix host signal handling for sparc64-linux. Speedups for jump cache and work list probing. Fix for exception replays. Raise guest SIGBUS for user-only misaligned accesses.
  # gpg: Signature made Fri 11 Feb 2022 01:27:16 GMT
  # gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
  # gpg: issuer "richard.henderson@linaro.org"
  # gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
  # Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
  * remotes/rth-gitlab/tags/pull-tcg-20220211: (34 commits)
    tests/tcg/multiarch: Add sigbus.c
    tcg/sparc: Support unaligned access for user-only
    tcg/sparc: Add tcg_out_jmpl_const for better tail calls
    tcg/sparc: Use the constant pool for 64-bit constants
    tcg/sparc: Convert patch_reloc to return bool
    tcg/sparc: Improve code gen for shifted 32-bit constants
    tcg/sparc: Add scratch argument to tcg_out_movi_int
    tcg/sparc: Split out tcg_out_movi_imm32
    tcg/sparc: Use tcg_out_movi_imm13 in tcg_out_addsub2_i64
    tcg/mips: Support unaligned access for softmmu
    tcg/mips: Support unaligned access for user-only
    tcg/arm: Support raising sigbus for user-only
    tcg/arm: Reserve a register for guest_base
    tcg/arm: Support unaligned access for softmmu
    tcg/arm: Check alignment for ldrd and strd
    tcg/arm: Remove use_armv6_instructions
    tcg/arm: Remove use_armv5t_instructions
    tcg/arm: Drop support for armv4 and armv5 hosts
    tcg/loongarch64: Support raising sigbus for user-only
    tcg/tci: Support raising sigbus for user-only
    ...
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* accel/tcg: Optimize jump cache flush during tlb range flush (Idan Horowitz, 2022-02-09; 1 file, -0/+9)
  When the length of the range is large enough, clearing the whole cache is faster than iterating over the (possibly extremely large) set of pages contained in the range. This mimics the pre-existing similar optimization done on the flush of the tlb itself.
  Signed-off-by: Idan Horowitz <idan.horowitz@gmail.com> Message-Id: <20220110164754.1066025-1-idan.horowitz@gmail.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

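  A rough sketch of the heuristic described above; the function and macro names below are illustrative assumptions, not the committed code:

      /* Sketch: if the flushed range spans more pages than the jump cache
       * can hold, dropping the whole cache is cheaper than walking the range. */
      static void flush_jmp_cache_for_range(CPUState *cpu, target_ulong addr,
                                            target_ulong len)
      {
          if (len / TARGET_PAGE_SIZE > TB_JMP_CACHE_SIZE) {
              cpu_tb_jmp_cache_clear(cpu);
          } else {
              for (target_ulong a = addr; a < addr + len; a += TARGET_PAGE_SIZE) {
                  tb_jmp_cache_clear_page(cpu, a);
              }
          }
      }
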
* tracing: remove TCG memory access tracing (Alex Bennée, 2022-02-09; 1 file, -2/+0)
  If you really want to trace all memory operations, TCG plugins give you a more flexible interface for doing so.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Cc: Luis Vilanova <vilanova@imperial.ac.uk> Cc: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20220204204335.1689602-19-alex.bennee@linaro.org>

* exec/memop: Adding signedness to quad definitions (Frédéric Pétrot, 2022-01-08; 1 file, -15/+15)
  Renaming defines for quad in their various forms so that their signedness is now explicit. Done using git grep as suggested by Philippe, with a bit of hand edition to keep assignments aligned.
  Signed-off-by: Frédéric Pétrot <frederic.petrot@univ-grenoble-alpes.fr> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 20220106210108.138226-2-frederic.petrot@univ-grenoble-alpes.fr Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

* tcg: Move helper_*_mmu decls to tcg/tcg-ldst.h (Richard Henderson, 2021-10-13; 1 file, -0/+1)
  These functions have been replaced by cpu_*_mmu as the most proper interface to use from target code. Hide these declarations from code that should not use them.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Add cpu_{ld,st}*_mmu interfaces (Richard Henderson, 2021-10-13; 1 file, -266/+126)
  These functions are much closer to the softmmu helper functions, in that they take the complete MemOpIdx, and from that they may enforce required alignment. The previous cpu_ldst.h functions did not have alignment info, and so did not enforce it. Retain this by adding MO_UNALN to the MemOp that we create in calling the new functions. Note that we are not yet enforcing alignment for user-only, but we now have the information with which to do so.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

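  A hedged illustration of the calling pattern this enables; the exact helper names and signatures are assumptions based on the description above:

      /* Illustrative: encode size, endianness, alignment and mmu index in a
       * MemOpIdx, then let the load/store helpers enforce the alignment. */
      MemOpIdx oi = make_memop_idx(MO_BEUL | MO_ALIGN, cpu_mmu_index(env, false));
      uint32_t v = cpu_ldl_be_mmu(env, addr, oi, GETPC());
      cpu_stl_be_mmu(env, addr, v, oi, GETPC());
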
* trace: Split guest_mem_before (Richard Henderson, 2021-10-05; 1 file, -5/+2)
  There is no point in encoding load/store within a bit of the memory trace info operand. Represent atomic operations as a single read-modify-write tracepoint. Use MemOpIdx instead of inventing a form specifically for traces.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* plugins: Reorg arguments to qemu_plugin_vcpu_mem_cb (Richard Henderson, 2021-10-05; 1 file, -2/+2)
  Use the MemOpIdx directly, rather than the rearrangement of the same bits currently done by the trace infrastructure. Pass in enum qemu_plugin_mem_rw so that we are able to treat read-modify-write operations as a single operation.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* trace/mem: Pass MemOpIdx to trace_mem_get_info (Richard Henderson, 2021-10-05; 1 file, -8/+4)
  We (will) often have the complete MemOpIdx handy, so use that.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg: Rename TCGMemOpIdx to MemOpIdx (Richard Henderson, 2021-10-05; 1 file, -39/+39)
  We're about to move this out of tcg.h, so rename it as we did when moving MemOp.
  Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Drop signness in tracing in cputlb.c (Richard Henderson, 2021-10-05; 1 file, -7/+3)
  We are already inconsistent about whether or not MO_SIGN is set in trace_mem_get_info. Dropping it entirely allows some simplification.
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Expand ATOMIC_MMU_LOOKUP_* (Richard Henderson, 2021-07-21; 1 file, -6/+1)
  Unify the parameters of atomic_mmu_lookup between cputlb.c and user-exec.c. Call the function directly, and remove the macros.
  Tested-by: Cole Robinson <crobinso@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Remove ATOMIC_MMU_DECLS (Richard Henderson, 2021-07-21; 1 file, -1/+0)
  All definitions are now empty.
  Tested-by: Cole Robinson <crobinso@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Fold EXTRA_ARGS into atomic_template.h (Richard Henderson, 2021-07-21; 1 file, -1/+0)
  All instances of EXTRA_ARGS are now identical.
  Tested-by: Cole Robinson <crobinso@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Standardize atomic helpers on softmmu api (Richard Henderson, 2021-07-21; 1 file, -32/+0)
  Reduce the amount of code duplication by always passing the TCGMemOpIdx argument to helper_atomic_*. This is not currently used for user-only, but it's easy to ignore.
  Tested-by: Cole Robinson <crobinso@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tcg: Rename helper_atomic_*_mmu and provide for user-only (Richard Henderson, 2021-07-21; 1 file, -3/+5)
  Always provide the atomic interface using TCGMemOpIdx oi and uintptr_t retaddr. Rename from helper_* to cpu_* so as to (mostly) match the exec/cpu_ldst.h functions, and to emphasize that they are not callable from TCG directly.
  Tested-by: Cole Robinson <crobinso@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* plugins: fix-up handling of internal hostaddr for 32 bit (Alex Bennée, 2021-07-14; 1 file, -1/+1)
  The compiler rightly complains when we build on 32 bit that casting a uint64_t into a void * is a bad idea. We are really dealing with a host pointer at this point so treat it as such. This does involve a uintptr_t cast of the result of the TLB addend as we know that has to point to the host memory.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210709143005.1554-28-alex.bennee@linaro.org>

* accel/tcg: Probe the proper permissions for atomic ops (Richard Henderson, 2021-06-19; 1 file, -29/+66)
  We had a single ATOMIC_MMU_LOOKUP macro that probed for read+write on all atomic ops. This is incorrect for plain atomic load and atomic store. For user-only, we rely on the host page permissions.
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/390
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Keep TranslationBlock headers local to TCG (Philippe Mathieu-Daudé, 2021-05-26; 1 file, -1/+1)
  Only the TCG accelerator uses the TranslationBlock API. Move the tb-context.h / tb-hash.h / tb-lookup.h from the global namespace to the TCG one (in accel/tcg).
  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20210524170453.3791436-3-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tlb: Rename tlb_flush_[page_bits > range]_by_mmuidx_async_[2 > 1] (Richard Henderson, 2021-05-25; 1 file, -6/+6)
  Rename to match tlb_flush_range_locked.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-9-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* accel/tcg: Rename tlb_flush_page_bits -> range]_by_mmuidx_async_0 (Richard Henderson, 2021-05-25; 1 file, -6/+5)
  Rename to match tlb_flush_range_locked.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-8-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* accel/tlb: Add tlb_flush_range_by_mmuidx_all_cpus_synced() (Richard Henderson, 2021-05-25; 1 file, -7/+20)
  Forward tlb_flush_page_bits_by_mmuidx_all_cpus_synced to tlb_flush_range_by_mmuidx_all_cpus_synced passing TARGET_PAGE_SIZE.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-7-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* accel/tcg: Add tlb_flush_range_by_mmuidx_all_cpus() (Richard Henderson, 2021-05-25; 1 file, -7/+17)
  Forward tlb_flush_page_bits_by_mmuidx_all_cpus to tlb_flush_range_by_mmuidx_all_cpus passing TARGET_PAGE_SIZE.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-6-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* accel/tcg: Add tlb_flush_range_by_mmuidx() (Richard Henderson, 2021-05-25; 1 file, -5/+15)
  Forward tlb_flush_page_bits_by_mmuidx to tlb_flush_range_by_mmuidx passing TARGET_PAGE_SIZE.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-5-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

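  A hedged example of calling the new range-flush primitive; the exact parameter order and types are assumptions inferred from the descriptions in this series:

      /* Illustrative: flush a 16 KiB guest-virtual range for MMU indexes 0
       * and 1, with all target address bits considered significant. */
      tlb_flush_range_by_mmuidx(cpu, addr & TARGET_PAGE_MASK, 16 * 1024,
                                (1 << 0) | (1 << 1), TARGET_LONG_BITS);
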
* accel/tcg: Remove {encode,decode}_pbm_to_runon (Richard Henderson, 2021-05-25; 1 file, -66/+20)
  We will not be able to fit address + length into a 64-bit packet. Drop this optimization before re-organizing this code.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-10-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> [PMM: Moved patch earlier in the series] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* accel/tlb: Rename TLBFlushPageBitsByMMUIdxData -> TLBFlushRangeData (Richard Henderson, 2021-05-25; 1 file, -12/+12)
  Rename the structure to match the rename of tlb_flush_range_locked.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-4-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* accel/tcg: Pass length argument to tlb_flush_range_locked() (Richard Henderson, 2021-05-25; 1 file, -15/+33)
  Rename tlb_flush_page_bits_locked() -> tlb_flush_range_locked(), and have callers pass a length argument (currently TARGET_PAGE_SIZE) via the TLBFlushPageBitsByMMUIdxData structure.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-3-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* accel/tcg: Replace g_new() + memcpy() by g_memdup() (Richard Henderson, 2021-05-25; 1 file, -11/+4)
  Using g_memdup is a bit more compact than g_new + memcpy.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20210509151618.2331764-2-f4bug@amsat.org Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org> [PMD: Split from bigger patch] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

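  The change is mechanical; a minimal before/after sketch, with variable and type names that are illustrative rather than taken from the patch:

      /* Before: allocate, then copy the flush parameters.
       *   TLBFlushRangeData *p = g_new(TLBFlushRangeData, 1);
       *   memcpy(p, &d, sizeof(d));
       */
      /* After: GLib's g_memdup() allocates and copies in one call. */
      TLBFlushRangeData *p = g_memdup(&d, sizeof(d));
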
* Do not include exec/address-spaces.h if it's not really necessary (Thomas Huth, 2021-05-02; 1 file, -1/+0)
  Stop including exec/address-spaces.h in files that don't need it.
  Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20210416171314.2074665-5-thuth@redhat.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>

* Do not include cpu.h if it's not really necessary (Thomas Huth, 2021-05-02; 1 file, -1/+0)
  Stop including cpu.h in files that don't need it.
  Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <20210416171314.2074665-4-thuth@redhat.com> Signed-off-by: Laurent Vivier <laurent@vivier.eu>

* cpu: tcg_ops: move to tcg-cpu-ops.h, keep a pointer in CPUClass (Claudio Fontana, 2021-02-05; 1 file, -4/+31)
  We cannot in principle make the TCG Operations field definitions conditional on CONFIG_TCG in code that is included by both common_ss and specific_ss modules. Therefore, what we can do safely to restrict the TCG fields to TCG-only builds, is to move all tcg cpu operations into a separate header file, which is only included by TCG, target-specific code. This leaves just a NULL pointer in the cpu.h for the non-TCG builds. This also tidies up the code in all targets a bit, having all TCG cpu operations neatly contained by a dedicated data struct.
  Signed-off-by: Claudio Fontana <cfontana@suse.de> Message-Id: <20210204163931.7358-16-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

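  A hedged sketch of the resulting shape; the field list and exact types are assumptions for illustration, not the full upstream definitions:

      /* TCG-only hooks live in their own struct, defined in a TCG-only
       * header; CPUClass keeps only a pointer, left NULL in non-TCG builds. */
      struct TCGCPUOps {
          void (*initialize)(void);
          bool (*tlb_fill)(CPUState *cpu, vaddr address, int size,
                           MMUAccessType access_type, int mmu_idx,
                           bool probe, uintptr_t retaddr);
          /* ... further TCG-only hooks ... */
      };

      struct CPUClass {
          /* ... */
          const struct TCGCPUOps *tcg_ops;
      };
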
* cpu: Move tlb_fill to tcg_ops (Eduardo Habkost, 2021-02-05; 1 file, -3/+4)
  [claudio: wrapped target code in CONFIG_TCG]
  Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210204163931.7358-7-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Restrict cpu_io_recompile() from other accelerators (Philippe Mathieu-Daudé, 2021-01-23; 1 file, -0/+1)
  As cpu_io_recompile() is only called within the TCG accelerator in cputlb.c, declare it locally.
  Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20210117164813.4101761-6-f4bug@amsat.org> [rth: Adjust vs changed tb_flush_jmp_cache patch.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* accel/tcg: Move tb_flush_jmp_cache() to cputlb.c (Richard Henderson, 2021-01-23; 1 file, -0/+18)
  Move and make the function static, as the only users are here in cputlb.c.
  Suggested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* remove TCG includes from common code (Paolo Bonzini, 2021-01-02; 1 file, -1/+1)
  Enable removing tcg/$tcg_arch from the include path when TCG is disabled. Move translate-all.h to include/exec, since stubs exist for the functions defined therein.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* accel/tcg: Add tlb_flush_page_bits_by_mmuidx* (Richard Henderson, 2020-10-20; 1 file, -9/+266)
  On ARM, the Top Byte Ignore feature means that only 56 bits of the address are significant in the virtual address. We are required to give the entire 64-bit address to FAR_ELx on fault, which means that we do not "clean" the top byte early in TCG. This new interface allows us to flush all 256 possible aliases for a given page, currently missed by tlb_flush_page*.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20201016210754.818257-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

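  For instance, with Top Byte Ignore only the low 56 address bits select the page, so an ARM-side flush might look like this hedged sketch (the signature and argument names are inferred from the description above):

      /* Illustrative: flush every top-byte alias of one page by telling
       * the TLB that only 56 address bits are significant. */
      tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mmu_idx_bitmap, 56);
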
* exec: Remove MemoryRegion::global_locking field (Philippe Mathieu-Daudé, 2020-09-30; 1 file, -2/+2)
  Last uses of memory_region_clear_global_locking() have been removed in commit 7070e085d4 ("acpi: mark PMTIMER as unlocked") and commit 08565552f7 ("cputlb: Move NOTDIRTY handling from I/O path to TLB path"). Remove memory_region_clear_global_locking() and the now unused 'global_locking' field in MemoryRegion.
  Reported-by: Alexander Bulekov <alxndr@bu.edu> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20200806150726.962-1-philmd@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* qemu/atomic.h: rename atomic_ to qatomic_ (Stefan Hajnoczi, 2020-09-23; 1 file, -12/+12)
  clang's C11 atomic_fetch_*() functions only take a C11 atomic type pointer argument. QEMU uses direct types (int, etc) and this causes a compiler error when a QEMU code calls these functions in a source file that also included <stdatomic.h> via a system header file:

    $ CC=clang CXX=clang++ ./configure ... && make
    ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)

  Avoid using atomic_*() names in QEMU's atomic.h since that namespace is used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h and <stdatomic.h> can co-exist. I checked /usr/include on my machine and searched GitHub for existing "qatomic_" users but there seem to be none. This patch was generated using:

    $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
      sort -u >/tmp/changed_identifiers
    $ for identifier in $(</tmp/changed_identifiers); do
        sed -i "s%\<$identifier\>%q$identifier%g" \
          $(git grep -I -l "\<$identifier\>")
      done

  I manually fixed line-wrap issues and misaligned rST tables.
  Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20200923105646.47864-1-stefanha@redhat.com>

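  The rename itself is mechanical at call sites; a hedged before/after illustration with made-up variable names:

      /* Before (clashes with the <stdatomic.h> namespace):
       *   atomic_set(&flag, true);
       *   old = atomic_fetch_inc(&counter);
       */
      /* After: QEMU's wrappers gain a q prefix and coexist with <stdatomic.h>. */
      qatomic_set(&flag, true);
      old = qatomic_fetch_inc(&counter);
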
* cputlb: Make store_helper less fragile to compiler optimizations (Richard Henderson, 2020-09-03; 1 file, -59/+79)
  This has no functional change. The current function structure is:

    inline QEMU_ALWAYSINLINE
    store_memop() {
        switch () {
            ...
        default:
            qemu_build_not_reached();
        }
    }
    inline QEMU_ALWAYSINLINE
    store_helper() {
        ...
        if (span_two_pages_or_io) {
            ...
            helper_ret_stb_mmu();
        }
        store_memop();
    }
    helper_ret_stb_mmu() {
        store_helper();
    }

  Whereas GCC will generate an error at compile-time when an always_inline function is not inlined, Clang does not. Nor does Clang prioritize the inlining of always_inline functions. Both of these are arguably bugs. Both `store_memop` and `store_helper` need to be inlined and allow constant propagations to eliminate the `qemu_build_not_reached` call. However, if the compiler instead chooses to inline helper_ret_stb_mmu into store_helper, then store_helper is now self-recursive and the compiler is no longer able to propagate the constant in the same way. This does not reproduce at current QEMU head, but was reproducible at v4.2.0 with `clang-10 -O2 -fexperimental-new-pass-manager`. The inline recursion problem can be fixed solely by marking helper_ret_stb_mmu as noinline, so the compiler does not make an incorrect decision about which functions to inline. In addition, extract store_helper_unaligned as a noinline subroutine that can be shared by all of the helpers. This saves about 6k code size in an optimized x86_64 build.
  Reported-by: Shu-Chun Weng <scw@google.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* meson: rename included C source files to .c.inc (Paolo Bonzini, 2020-08-21; 1 file, -1/+1)
  With Makefiles that have automatically generated dependencies, your generated includes are set as dependencies of the Makefile, so that they are built before everything else and they are available when first building the .c files. Alternatively you can use a fine-grained dependency, e.g.

    target/arm/translate.o: target/arm/decode-neon-shared.inc.c

  With Meson you have only one choice and it is a third option, namely "build at the beginning of the corresponding target"; the way you express it is to list the includes in the sources of that target. The problem is that Meson decides if something is a source vs. a generated include by looking at the extension: '.c', '.cc', '.m', '.C' are sources, while everything else is considered an include---including '.inc.c'. Use '.c.inc' to avoid this, as it is consistent with our other convention of using '.rst.inc' for included reStructuredText files. The editorconfig file is adjusted.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* trace: switch position of headers to what Meson requires (Paolo Bonzini, 2020-08-21; 1 file, -1/+1)
  Meson doesn't enjoy the same flexibility we have with Make in choosing the include path. In particular the tracing headers are using $(build_root)/$(<D). In order to keep the include directives unchanged, the simplest solution is to generate headers with patterns like "trace/trace-audio.h" and place forwarding headers in the source tree such that for example "audio/trace.h" includes "trace/trace-audio.h". This patch is too ugly to be applied to the Makefiles now. It's only a way to separate the changes to the tracing header files from the Meson rewrite of the tracing logic.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* tcg: update comments for save_iotlb_data in cputlb (Alex Bennée, 2020-07-24; 1 file, -6/+5)
  I missed Emilio's review comments (Message-ID: <20200718205107.GA994221@sff>) and the patch got merged. Correcting the comments now.
  Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200720122358.26881-1-alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* cputlb: ensure we save the IOTLB data in case of reset (Alex Bennée, 2020-07-15; 1 file, -3/+35)
  Any write to a device might cause a re-arrangement of memory triggering a TLB flush and potential re-size of the TLB invalidating previous entries. This would cause users of qemu_plugin_get_hwaddr() to see the warning:

    invalid use of qemu_plugin_get_hwaddr

  because of the failed tlb_lookup which should always succeed. To prevent this we save the IOTLB data in case it is later needed by a plugin doing a lookup.
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20200713200415.26214-7-alex.bennee@linaro.org>

* cputlb: destroy CPUTLB with tlb_destroy (Emilio G. Cota, 2020-06-16; 1 file, -0/+15)
  I was after adding qemu_spin_destroy calls, but while at it I noticed that we are leaking some memory.
  Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Robert Foley <robert.foley@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20200609200738.445-5-robert.foley@linaro.org> Message-Id: <20200612190237.30436-8-alex.bennee@linaro.org>

* accel/tcg: Add endian-specific cpu_{ld, st}* operations (Richard Henderson, 2020-05-11; 1 file, -61/+175)
  We currently have target-endian versions of these operations, but no easy way to force a specific endianness. This can be helpful if the target has endian-specific operations, or a mode that swaps endianness.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

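  A hedged example of the accessor flavour this adds; the names follow the pattern described, but treat the exact signatures as assumptions:

      /* Illustrative: load 32 bits big-endian and store them back
       * little-endian, independent of the target's default endianness. */
      uint32_t v = cpu_ldl_be_data_ra(env, addr, GETPC());
      cpu_stl_le_data_ra(env, addr + 4, v, GETPC());
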
* accel/tcg: Add probe_access_flags (Richard Henderson, 2020-05-11; 1 file, -77/+80)
  This new interface will allow targets to probe for a page and then handle watchpoints themselves. This will be most useful for vector predicated memory operations, where one page lookup can be used for many operations, and one test can avoid many watchpoint checks.
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20200508154359.7494-6-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

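  A hedged sketch of the intended usage pattern; the signature and flag names are reconstructed from the description and may differ in detail:

      /* Illustrative: probe once, without faulting, then reuse the result
       * for a whole predicated vector operation. */
      void *host;
      int flags = probe_access_flags(env, addr, MMU_DATA_LOAD, mmu_idx,
                                     true /* nonfault */, &host, retaddr);
      if (flags & TLB_INVALID_MASK) {
          /* page is not mapped: take the slow path or raise the fault later */
      } else if (host && !(flags & TLB_WATCHPOINT)) {
          /* direct host access is safe and no watchpoint needs checking */
      }
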
* cputlb: Hoist timestamp outside of loops over tlbs (Richard Henderson, 2020-01-21; 1 file, -6/+8)
  Do not call get_clock_realtime() in tlb_mmu_resize_locked, but hoist outside of any loop over a set of tlbs. There are only two (indirect) callers, tlb_flush_by_mmuidx_async_work and tlb_flush_page_locked, so this is not onerous.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* cputlb: Initialize tlbs as flushed (Richard Henderson, 2020-01-21; 1 file, -2/+3)
  There's little point in leaving these data structures half initialized, and relying on a flush to be done during reset.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>