path: root/accel/tcg/cputlb.c
Commit history, most recent first. Each entry: subject (author, date, files changed, lines -removed/+added).
...
* cputlb: Partially merge tlb_dyn_init into tlb_init (Richard Henderson, 2020-01-21, 1 file, -17/+16)
    Merge into the only caller, but at the same time split out tlb_mmu_init
    to initialize a single tlb entry.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
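    A hedged sketch of the resulting shape, reconstructed from the description
    above; CPU_TLB_DYN_DEFAULT_BITS, tlb_window_reset and the CPUTLBDesc /
    CPUTLBDescFast types are from the QEMU tree of this era, but the exact
    body may differ from the patch:

        /* Approximate shape of the split-out per-mmuidx initializer. */
        static void tlb_mmu_init(CPUTLBDesc *desc, CPUTLBDescFast *fast,
                                 int64_t now)
        {
            size_t n_entries = 1 << CPU_TLB_DYN_DEFAULT_BITS;

            tlb_window_reset(desc, now, 0);
            desc->n_used_entries = 0;
            fast->mask = (n_entries - 1) << CPU_TLB_ENTRY_BITS;
            fast->table = g_new(CPUTLBEntry, n_entries);
            desc->iotlb = g_new(CPUIOTLBEntry, n_entries);
        }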
* cputlb: Split out tlb_mmu_flush_locked (Richard Henderson, 2020-01-21, 1 file, -5/+10)
    We will want to be able to flush a tlb without resizing.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Hoist tlb portions in tlb_flush_one_mmuidx_locked (Richard Henderson, 2020-01-21, 1 file, -9/+10)
    No functional change, but the smaller expressions make the code easier
    to read.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Hoist tlb portions in tlb_mmu_resize_locked (Richard Henderson, 2020-01-21, 1 file, -18/+17)
    No functional change, but the smaller expressions make the code easier
    to read.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Pass CPUTLBDescFast to tlb_n_entries and sizeof_tlb (Richard Henderson, 2020-01-21, 1 file, -7/+8)
    We do not need the entire CPUArchState to compute these values.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
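    The payoff shows in the helpers themselves. A sketch consistent with the
    description, where mask is (n_entries - 1) << CPU_TLB_ENTRY_BITS (the
    exact bodies are hedged):

        /* Both values now derive from the per-mmuidx fast data alone. */
        static inline size_t tlb_n_entries(CPUTLBDescFast *fast)
        {
            return (fast->mask >> CPU_TLB_ENTRY_BITS) + 1;
        }

        static inline size_t sizeof_tlb(CPUTLBDescFast *fast)
        {
            return fast->mask + (1 << CPU_TLB_ENTRY_BITS);
        }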
* cputlb: Make tlb_n_entries private to cputlb.c (Richard Henderson, 2020-01-21, 1 file, -0/+5)
    There are no users of this function outside cputlb.c, and its interface
    will change in the next patch.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Merge tlb_table_flush_by_mmuidx into tlb_flush_one_mmuidx_locked (Richard Henderson, 2020-01-21, 1 file, -12/+7)
    There is only one caller for tlb_table_flush_by_mmuidx. Place the
    result at the earlier line number, due to an expected user in the near
    future.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Handle NB_MMU_MODES > TARGET_PAGE_BITS_MIN (Richard Henderson, 2020-01-21, 1 file, -35/+132)
    In target/arm we will shortly have "too many" mmu_idx. The current
    minimum barrier is caused by the way in which tlb_flush_page_by_mmuidx
    is coded. We can remove this limitation by allocating memory for
    consumption by the worker. Let us assume that this is the unlikely
    case, as will be the case for the majority of targets which have so
    far satisfied the BUILD_BUG_ON, and only allocate memory when
    necessary.
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
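    A sketch of the two paths described above; the worker names follow the
    patch's _async_1/_async_2 style but are hedged, as is the exact packing:

        /* Pack the mmu_idx bitmap into the sub-page bits of the (page-
         * aligned) address when it fits; otherwise allocate for the worker. */
        typedef struct {
            target_ulong addr;
            uint16_t idxmap;
        } TLBFlushPageByMMUIdxData;

        static void flush_page_by_mmuidx_async(CPUState *cpu,
                                               target_ulong addr,
                                               uint16_t idxmap)
        {
            if (NB_MMU_MODES <= TARGET_PAGE_BITS_MIN) {
                /* Common case: both values fit in one run_on_cpu_data. */
                async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_1,
                                 RUN_ON_CPU_TARGET_PTR(addr | idxmap));
            } else {
                /* Unlikely case: allocate; the worker frees it. */
                TLBFlushPageByMMUIdxData *d =
                    g_new(TLBFlushPageByMMUIdxData, 1);
                d->addr = addr;
                d->idxmap = idxmap;
                async_run_on_cpu(cpu, tlb_flush_page_by_mmuidx_async_2,
                                 RUN_ON_CPU_HOST_PTR(d));
            }
        }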
* cputlb: Expand cpu_ldst_template.h in cputlb.c (Richard Henderson, 2020-01-15, 1 file, -1/+106)
    Reduce the amount of preprocessor obfuscation by expanding the text of
    each of the functions generated. The result is only slightly smaller
    than the original.
    Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Rename helper_ret_ld*_cmmu to cpu_ld*_code (Richard Henderson, 2020-01-15, 1 file, -71/+23)
    There are no uses of the *_cmmu names other than the bare wrapping
    within the *_code inlines. Therefore rename the functions so we can
    drop the inlines. Use abi_ptr instead of target_ulong in preparation
    for user-only; the two types are identical for softmmu.
    Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
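    The resulting code-load interface, one prototype per access width (as
    the rename leaves it):

        uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr);
        uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr);
        uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr);
        uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr);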
* cputlb: Move body of cpu_ldst_template.h out of line (Richard Henderson, 2020-01-15, 1 file, -0/+116)
    With the tracing hooks, the inline functions are no longer so simple.
    Once out-of-line, the current tlb_entry lookup is redundant with the
    one in the main load/store_helper. This also begins the introduction
    of a new target-facing interface, with suffix *_mmuidx_ra. This is not
    yet official because the interface is not done for user-only. Use
    abi_ptr instead of target_ulong in preparation for user-only; the two
    types are identical for softmmu. What remains in cpu_ldst_template.h
    are the expansions for _code, _data, and MMU_MODE<N>_SUFFIX.
    Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* Remove unassigned_access CPU hook (Peter Maydell, 2019-11-11, 1 file, -2/+0)
    All targets have now migrated away from the old unassigned_access hook
    to the new do_transaction_failed hook. This means we can remove the
    core-code infrastructure for that hook and the code that calls it.
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Message-id: 20191108173732.11816-1-peter.maydell@linaro.org
* Merge remote-tracking branch 'remotes/stsquad/tags/pull-tcg-plugins-281019-4' into staging (Peter Maydell, 2019-10-30, 1 file, -1/+59)
    TCG Plugins initial implementation:
    - use --enable-plugins @ configure
    - low impact introspection (-plugin empty.so to measure overhead)
    - plugins cannot alter guest state
    - example plugins included in source tree (tests/plugins)
    - -d plugin to enable plugin output in logs
    - check-tcg runs extra tests when plugins enabled
    - documentation in docs/devel/plugins.rst
    # gpg: Signature made Mon 28 Oct 2019 15:13:23 GMT
    # gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
    # gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
    # Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
    * remotes/stsquad/tags/pull-tcg-plugins-281019-4: (57 commits)
      travis.yml: enable linux-gcc-debug-tcg cache
      MAINTAINERS: add me for the TCG plugins code
      scripts/checkpatch.pl: don't complain about (foo, /* empty */)
      .travis.yml: add --enable-plugins tests
      include/exec: wrap cpu_ldst.h in CONFIG_TCG
      accel/stubs: reduce headers from tcg-stub
      tests/plugin: add hotpages to analyse memory access patterns
      tests/plugin: add instruction execution breakdown
      tests/plugin: add a hotblocks plugin
      tests/tcg: enable plugin testing
      tests/tcg: drop test-i386-fprem from TESTS when not SLOW
      tests/tcg: move "virtual" tests to EXTRA_TESTS
      tests/tcg: set QEMU_OPTS for all cris runs
      tests/tcg/Makefile.target: fix path to config-host.mak
      tests/plugin: add sample plugins
      linux-user: support -plugin option
      vl: support -plugin option
      plugin: add qemu_plugin_outs helper
      plugin: add qemu_plugin_insn_disas helper
      plugin: expand the plugin_init function to include an info block
      ...
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * cputlb: ensure _cmmu helper functions follow the naming standard (Alex Bennée, 2019-10-28, 1 file, -3/+21)
    We document this in docs/devel/load-stores.rst, so let's follow it.
    The 32-bit and 64-bit access functions have historically not included
    the sign, so we leave those as is. We also introduce some signed
    helpers which are used for loading immediate values in the translator.
    Fixes: 282dffc8
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Message-Id: <20191021150910.23216-1-alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
| * plugins: implement helpers for resolving hwaddr (Alex Bennée, 2019-10-28, 1 file, -0/+42)
    We need to keep a local per-cpu copy of the data as other threads may
    be running. Currently we can provide insight as to whether the access
    was IO or not and give the offset into a given device (usually the
    main RAMBlock). We store enough information to get details such as the
    MemoryRegion which might be useful in later expansions to the API.
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
| * atomic_template: add inline trace/plugin helpers (Emilio G. Cota, 2019-10-28, 1 file, -0/+2)
    In preparation for plugin support.
    Signed-off-by: Emilio G. Cota <cota@braap.org>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
| * cputlb: introduce get_page_addr_code_hostp (Emilio G. Cota, 2019-10-28, 1 file, -1/+13)
    This will be used by plugins to get the host address of instructions.
    Signed-off-by: Emilio G. Cota <cota@braap.org>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
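    The new entry point and its relation to the existing lookup; the wrapper
    shape is a sketch consistent with the description, not a verbatim copy:

        /* Return the guest physical page of the code at addr; if hostp is
         * non-NULL, also hand back the host address for plugin use. */
        tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env,
                                                target_ulong addr,
                                                void **hostp);

        /* Sketch: the existing interface becomes a thin wrapper. */
        tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
        {
            return get_page_addr_code_hostp(env, addr, NULL);
        }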
| * trace: add mmu_index to mem_info (Alex Bennée, 2019-10-28, 1 file, -0/+2)
    We are going to re-use mem_info later for plugins and will need to
    track the mmu_idx for softmmu code.
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
* | cputlb: Fix tlb_vaddr_to_host (Richard Henderson, 2019-10-28, 1 file, -1/+1)
    Using uintptr_t instead of target_ulong meant that, for 64-bit guest
    and 32-bit host, we truncated the guest address comparator and so may
    not hit the tlb when we should.
    Fixes: 4811e9095c0
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
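    A standalone illustration of the failure mode (demo code, not QEMU's):
    on a 32-bit host uintptr_t is 32 bits wide, so a 64-bit guest comparator
    stored in it silently loses its high half.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t cmp = 0x100000f000ULL;   /* 64-bit guest page comparator */
            uint32_t narrow = (uint32_t)cmp;  /* what uintptr_t holds on a
                                                 32-bit host: 0xf000 */

            /* The truncated value no longer matches the guest address... */
            printf("64-bit compare hits: %d\n", cmp == 0x100000f000ULL);
            printf("32-bit compare hits: %d\n", (uint64_t)narrow == cmp);
            /* ...and would falsely match an unrelated page at 0xf000. */
            printf("false hit at 0xf000: %d\n", (uint64_t)narrow == 0xf000ULL);
            return 0;
        }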
* | cputlb: ensure _cmmu helper functions follow the naming standard (Alex Bennée, 2019-10-28, 1 file, -3/+21)
    We document this in docs/devel/load-stores.rst, so let's follow it.
    The 32-bit and 64-bit access functions have historically not included
    the sign, so we leave those as is. We also introduce some signed
    helpers which are used for loading immediate values in the translator.
    Fixes: 282dffc8
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Message-Id: <20191021150910.23216-1-alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Pass retaddr to tb_invalidate_phys_page_fast (Richard Henderson, 2019-09-25, 1 file, -5/+1)
    Rather than rely on cpu->mem_io_pc, pass retaddr down directly. Within
    tb_invalidate_phys_page_range__locked, the is_cpu_write_access
    parameter is non-zero exactly when retaddr would be non-zero, so that
    is a simple replacement. Recognize that current_tb_not_found is true
    only when mem_io_pc (and now retaddr) are also non-zero, so remove a
    redundant test.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Remove cpu->mem_io_vaddr (Richard Henderson, 2019-09-25, 1 file, -2/+0)
    With the merge of notdirty handling into store_helper, the last user
    of cpu->mem_io_vaddr was removed.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Handle TLB_NOTDIRTY in probe_access (Richard Henderson, 2019-09-25, 1 file, -9/+17)
    We can use notdirty_write for the write and return a valid host
    pointer for this case.
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Merge and move memory_notdirty_write_{prepare,complete} (Richard Henderson, 2019-09-25, 1 file, -34/+42)
    Since 9458a9a1df1a, all readers of the dirty bitmaps wait for the rcu
    lock, which means that they wait until the end of any executing
    TranslationBlock. As a consequence, there is no need for the actual
    access to happen in between the _prepare and _complete. Therefore, we
    can improve things by merging the two functions into notdirty_write
    and dropping the NotDirtyInfo structure. In addition, the only users
    of notdirty_write are in cputlb.c, so move the merged function there.
    Pass in the CPUIOTLBEntry from which the ram_addr_t may be computed.
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
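    A structural sketch of the merged helper, pieced together from the
    description; the dirty-bitmap calls named here exist in this era, but
    the body is an approximation, not the patch itself:

        static void notdirty_write(CPUState *cpu, vaddr mem_vaddr,
                                   unsigned size, CPUIOTLBEntry *iotlbentry,
                                   uintptr_t retaddr)
        {
            /* Compute the ram_addr_t from the CPUIOTLBEntry, as described. */
            ram_addr_t ram_addr = mem_vaddr + iotlbentry->addr;

            /* If the page still holds translated code, invalidate it. */
            if (!cpu_physical_memory_get_dirty_flag(ram_addr,
                                                    DIRTY_MEMORY_CODE)) {
                tb_invalidate_phys_page_fast(ram_addr, size, retaddr);
            }

            /* Mark the range dirty; rcu readers re-check afterwards. */
            cpu_physical_memory_set_dirty_range(ram_addr, size,
                                                DIRTY_MEMORY_MIGRATION
                                                | DIRTY_MEMORY_CODE);
        }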
* cputlb: Partially inline memory_region_section_get_iotlb (Richard Henderson, 2019-09-25, 1 file, -24/+42)
    There is only one caller, tlb_set_page_with_attrs. We cannot inline
    the entire function because the AddressSpaceDispatch structure is
    private to exec.c, and cannot easily be moved to
    include/exec/memory-internal.h. Compute is_ram and is_romd once within
    tlb_set_page_with_attrs. Fold the number of tests against these
    predicates. Compute cpu_physical_memory_is_clean outside of the tlb
    lock region.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Move NOTDIRTY handling from I/O path to TLB path (Richard Henderson, 2019-09-25, 1 file, -3/+23)
    Pages that we want to track for NOTDIRTY are RAM. We do not really
    need to go through the I/O path to handle them.
    Acked-by: David Hildenbrand <david@redhat.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Move ROM handling from I/O path to TLB path (Richard Henderson, 2019-09-25, 1 file, -15/+21)
    It does not require going through the whole I/O path in order to
    discard a write.
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Introduce TLB_BSWAP (Richard Henderson, 2019-09-25, 1 file, -29/+43)
    Handle bswap on ram directly in load/store_helper. This fixes a bug
    with the previous implementation in that one cannot use the I/O path
    for RAM.
    Fixes: a26fc6f5152b47f1
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
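    The core idea, sketched (the wrapper function here is hypothetical;
    MO_BSWAP and the load_memop helper come from this same series):

        /* A page marked TLB_BSWAP stays on the RAM fast path; only the
         * requested endianness is inverted before dispatch. */
        static uint64_t load_ram_maybe_bswap(const void *haddr,
                                             target_ulong tlb_addr, MemOp op)
        {
            if (unlikely(tlb_addr & TLB_BSWAP)) {
                op ^= MO_BSWAP;   /* invert guest-visible endianness */
            }
            return load_memop(haddr, op);
        }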
* cputlb: Split out load/store_memop (Richard Henderson, 2019-09-25, 1 file, -52/+55)
    We will shortly be using these more than once.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Use qemu_build_not_reached in load/store_helpers (Richard Henderson, 2019-09-25, 1 file, -3/+2)
    Promote the current runtime assert to a compile-time assert.
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
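    For reference, the mechanism behind qemu_build_not_reached (shape as in
    qemu/osdep.h of this period, hedged): with optimization enabled, a
    reachable call becomes an unresolved symbol and fails the build; without
    optimization it degrades to the runtime assert.

        #if defined(__OPTIMIZE__) && !defined(__NO_INLINE__)
        /* Never defined anywhere: if the compiler cannot prove the call
         * dead, the error attribute (or the link) fails the build. */
        extern void QEMU_NORETURN QEMU_ERROR("code path is reachable")
            qemu_build_not_reached(void);
        #else
        #define qemu_build_not_reached()  g_assert_not_reached()
        #endif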
* cputlb: Disable __always_inline__ without optimization (Richard Henderson, 2019-09-25, 1 file, -2/+2)
    This forced inlining can result in missing symbols, which makes a
    debugging build harder to follow.
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Reported-by: Peter Maydell <peter.maydell@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
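    The fix amounts to making the attribute conditional on optimization; a
    sketch of the pattern (the macro name and the abbreviated load_helper
    signature are hedged):

        #ifdef __OPTIMIZE__
        # define ALWAYS_INLINE  __attribute__((always_inline))
        #else
        # define ALWAYS_INLINE
        #endif

        /* Debug builds now keep a real out-of-line symbol to step into. */
        static inline uint64_t ALWAYS_INLINE
        load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
                    uintptr_t retaddr, MemOp op, bool code_read);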
* tcg: Factor out probe_write() logic into probe_access() (David Hildenbrand, 2019-09-03, 1 file, -11/+32)
    Let's also allow probing other access types.
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Message-Id: <20190830100959.26615-3-david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
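    The generalized entry point (signature as it lands in this series; the
    comment paraphrases the behaviour described across the surrounding
    commits):

        /* Probe size bytes at addr for the given access type, raising the
         * guest fault via tlb_fill if the access is not permitted. On
         * success, return a host pointer when the page is directly
         * addressable, or NULL for MMIO-style pages. */
        void *probe_access(CPUArchState *env, target_ulong addr, int size,
                           MMUAccessType access_type, int mmu_idx,
                           uintptr_t retaddr);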
* tcg: Make probe_write() return a pointer to the host page (David Hildenbrand, 2019-09-03, 1 file, -5/+16)
    ... similar to tlb_vaddr_to_host(); however, allow access to the host
    page except when TLB_NOTDIRTY or TLB_MMIO is set.
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Message-Id: <20190830100959.26615-2-david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Enforce single page access in probe_write() (David Hildenbrand, 2019-09-03, 1 file, -0/+2)
    Let's enforce the interface restriction.
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <20190826075112.25637-5-david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Check for watchpoints in probe_write() (David Hildenbrand, 2019-09-03, 1 file, -2/+13)
    Let size > 0 indicate a promise to write to those bytes. Check for
    write watchpoints in the probed range.
    Suggested-by: Richard Henderson <richard.henderson@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Message-Id: <20190823100741.9621-10-david@redhat.com>
    [rth: Recompute index after tlb_fill; check TLB_WATCHPOINT.]
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Handle watchpoints via TLB_WATCHPOINT (Richard Henderson, 2019-09-03, 1 file, -10/+79)
    The raising of exceptions from check_watchpoint, buried inside of the
    I/O subsystem, is fundamentally broken. We do not have the helper
    return address with which we can unwind guest state. Replace
    PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT. Move the call
    to cpu_check_watchpoint into the cputlb helpers where we do have the
    helper return address. This allows watchpoints on RAM to bypass the
    full i/o access path.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
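    The shape of the check once it lives in the cputlb helpers, sketched as
    the fragment on load_helper's slow path (hedged; the store helper would
    pass BP_MEM_WRITE instead):

        /* With retaddr in hand, a watchpoint hit can unwind guest state
         * properly before raising the debug exception. */
        if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
            cpu_check_watchpoint(env_cpu(env), addr, size,
                                 iotlbentry->attrs, BP_MEM_READ, retaddr);
        }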
* cputlb: Remove double-alignment in store_helper (Richard Henderson, 2019-09-03, 1 file, -2/+1)
    We have already aligned page2 to the start of the next page. There is
    no reason to do that a second time.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Fix size operand for tlb_fill on unaligned store (Richard Henderson, 2019-09-03, 1 file, -1/+4)
    We are currently passing the size of the full write to the tlb_fill
    for the second page. Instead pass the real size of the write to that
    page. This argument is unused within all tlb_fill, except to be logged
    via tracing, so in practice this makes no difference. But in a moment
    we'll need the value of size2 for watchpoints, and if we've computed
    the value we might as well use it.
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK (Richard Henderson, 2019-09-03, 1 file, -63/+23)
    We had two different mechanisms to force a recheck of the tlb. Before
    TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit that would
    immediately set TLB_INVALID_MASK, which automatically means that a
    second check of the tlb entry fails. We can use the same mechanism to
    handle small pages. Conserve TLB_* bits by removing TLB_RECHECK.
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Byte swap memory transaction attribute (Tony Nguyen, 2019-09-03, 1 file, -0/+12)
    Notice the new byte-swap attribute and force the transaction through
    the memory slow path. Required by architectures that can invert
    endianness of memory transactions, e.g. SPARC64 has the Invert Endian
    TTE bit.
    Suggested-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <2a10a1f1c00a894af1212c8f68ef09c2966023c1.1566466906.git.tony.nguyen@bt.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* memory: Single byte swap along the I/O path (Tony Nguyen, 2019-09-03, 1 file, -39/+3)
    Now that MemOp has been pushed down into the memory API, and callers
    are encoding endianness, we can collapse byte swaps along the I/O path
    into the accelerator and target independent adjust_endianness.
    Collapsing byte swaps along the I/O path enables additional endian
    inversion logic, e.g. SPARC64 Invert Endian TTE bit, with redundant
    byte swaps cancelling out.
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Suggested-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
    Message-Id: <911ff31af11922a9afba9b7ce128af8b8b80f316.1566466906.git.tony.nguyen@bt.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Replace size and endian operands for MemOp (Tony Nguyen, 2019-09-03, 1 file, -89/+81)
    Preparation for collapsing the two byte swaps adjust_endianness and
    handle_bswap into the former.
    Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <755b7104410956b743e1f1e9c34ab87db113360f.1566466906.git.tony.nguyen@bt.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* memory: Access MemoryRegion with endianness (Tony Nguyen, 2019-09-03, 1 file, -2/+6)
    Preparation for collapsing the two byte swaps adjust_endianness and
    handle_bswap into the former. Call memory_region_dispatch_{read|write}
    with endianness encoded into the "MemOp op" operand. This patch does
    not change any behaviour as memory_region_dispatch_{read|write} is yet
    to handle the endianness. Once it does handle endianness, callers with
    byte swaps can collapse them into adjust_endianness.
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
    Message-Id: <8066ab3eb037c0388dfadfe53c5118429dd1de3a.1566466906.git.tony.nguyen@bt.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: Access MemoryRegion with MemOp (Tony Nguyen, 2019-09-03, 1 file, -4/+4)
    The memory_region_dispatch_{read|write} operand "unsigned size" is
    being converted into a "MemOp op". Convert interfaces by using no-op
    size_memop. After all interfaces are converted, size_memop will be
    implemented and the memory_region_dispatch_{read|write} operand
    "unsigned size" will be converted into a "MemOp op". As size_memop is
    a no-op, this patch does not change any behaviour.
    Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Message-Id: <c4571c76467ade83660970f7ef9d7292297f1908.1566466906.git.tony.nguyen@bt.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: TCGMemOp is now accelerator independent MemOp (Tony Nguyen, 2019-09-03, 1 file, -1/+1)
    Preparation for collapsing the two byte swaps, adjust_endianness and
    handle_bswap, along the I/O path. Target-dependent attributes are
    conditionalized upon NEED_CPU_H.
    Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
    Acked-by: David Gibson <david@gibson.dropbear.id.au>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Acked-by: Cornelia Huck <cohuck@redhat.com>
    Message-Id: <81d9cd7d7f5aaadfa772d6c48ecee834e9cf7882.1566466906.git.tony.nguyen@bt.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* cputlb: cast size_t to target_ulong before using for address masks (Alex Bennée, 2019-06-12, 1 file, -1/+1)
    While size_t is defined to be wide enough to span the biggest host
    object, that is not wide enough when generating masks for 64-bit
    guests on 32-bit hosts. Otherwise we end up truncating the address
    when we fall back to our unaligned helper.
    Fixes: https://bugs.launchpad.net/qemu/+bug/1831545
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Tested-by: Andrew Randrianasulu <randrianasulu@gmail.com>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
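    A standalone demonstration of the truncation (demo code, not the patch;
    the buggy line mirrors a mask like addr & ~(size - 1) computed in the
    host's 32-bit size_t):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t addr = 0x1234567887654321ULL; /* 64-bit guest address */
            uint32_t size = 8;                     /* size_t on a 32-bit host */

            /* Buggy: ~(size - 1) is a 32-bit mask; addr's high half is lost. */
            uint64_t bad  = addr & ~(size - 1);
            /* Fixed: widen to the guest word size before inverting. */
            uint64_t good = addr & ~((uint64_t)size - 1);

            printf("bad:  %016llx\n", (unsigned long long)bad);  /* ...87654320 */
            printf("good: %016llx\n", (unsigned long long)good); /* 12345678... */
            return 0;
        }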
* cputlb: use uint64_t for interim values for unaligned load (Alex Bennée, 2019-06-12, 1 file, -1/+1)
    When running on 32 bit TCG backends a wide unaligned load ends up
    truncating data before returning to the guest. We specifically have
    the return type as uint64_t to avoid any premature truncation so we
    should use the same for the interim types.
    Fixes: https://bugs.launchpad.net/qemu/+bug/1830872
    Fixes: eed5664238e
    Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
    Tested-by: Laszlo Ersek <lersek@redhat.com>
    Tested-by: Igor Mammedov <imammedo@redhat.com>
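    The principle, illustrated standalone (demo only; the helper's actual
    combine differs in detail): holding the two halves of an unaligned
    64-bit load in 32-bit interim variables loses data before the final OR.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t r1 = 0x1122334455667788ULL, r2 = 0x99aabbccddeeff00ULL;
            int shift = 32;

            uint32_t b1 = r1, b2 = r2;   /* buggy 32-bit interim width */
            uint64_t bad  = ((uint64_t)b1 >> shift)
                          | ((uint64_t)b2 << (64 - shift));
            uint64_t good = (r1 >> shift) | (r2 << (64 - shift));

            printf("bad:  %016llx\n", (unsigned long long)bad);
            printf("good: %016llx\n", (unsigned long long)good);
            return 0;
        }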
* cpu: Replace ENV_GET_CPU with env_cpu (Richard Henderson, 2019-06-10, 1 file, -19/+19)
    Now that we have both ArchCPU and CPUArchState, we can define this
    generically instead of via macro in each target's cpu.h.
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
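    The generic definitions this enables look like the following (shape as
    in include/exec/cpu-all.h of this period; each target's ArchCPU embeds
    its CPUArchState as the member "env" and CPUState as "parent_obj"):

        static inline ArchCPU *env_archcpu(CPUArchState *env)
        {
            return container_of(env, ArchCPU, env);
        }

        static inline CPUState *env_cpu(CPUArchState *env)
        {
            return &env_archcpu(env)->parent_obj;
        }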
* tcg: Create struct CPUTLB (Richard Henderson, 2019-06-10, 1 file, -76/+88)
    Move all softmmu tlb data into this structure. Arrange the members so
    that we are able to place mask+table together and at a smaller
    absolute offset from ENV.
    Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    Acked-by: Alistair Francis <alistair.francis@wdc.com>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
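    A sketch of the consolidation, hedged: the member names below follow
    this series, but the exact fields and their ordering are an
    approximation of the patch, not a copy.

        typedef struct CPUTLB {
            CPUTLBCommon c;                   /* lock plus flush bookkeeping */
            CPUTLBDesc d[NB_MMU_MODES];       /* per-mmuidx dynamic sizing state */
            uintptr_t mask[NB_MMU_MODES];     /* fast path: (n_entries-1) << bits */
            CPUTLBEntry *table[NB_MMU_MODES]; /* fast path: the entry arrays */
        } CPUTLB;

    Keeping the mask and table members adjacent lets the TCG fast path
    reach both with small offsets from the env pointer, which is the
    layout goal the message describes.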
* tcg: Fold CPUTLBWindow into CPUTLBDesc (Richard Henderson, 2019-06-10, 1 file, -12/+12)
    Both structures are allocated once per mmu_idx. There is no reason for
    them to be separate.
    Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>