path: root/scripts/qapi/source.py
2021-02-08  confidential guest support: Update documentation (David Gibson, 2 files, -1/+44)
Now that we've implemented a generic machine option for configuring various confidential guest support mechanisms: 1. Update docs/amd-memory-encryption.txt to reference this rather than the earlier SEV specific option 2. Add a docs/confidential-guest-support.txt to cover the generalities of the confidential guest support scheme Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Greg Kurz <groug@kaod.org>
2021-02-08  confidential guest support: Move SEV initialization into arch specific code (David Gibson, 4 files, -17/+28)
While we've abstracted some (potential) differences between mechanisms for securing guest memory, the initialization is still specific to SEV. Given that, move it into x86's kvm_arch_init() code, rather than the generic kvm_init() code. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Greg Kurz <groug@kaod.org>
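A rough sketch of the resulting call placement (the cgs field name is taken from the rest of this series; arguments and error handling are simplified, not quoted from the patch):

    /* Sketch only: SEV setup now sits in the x86-specific path instead of
     * the generic kvm_init(); details simplified for illustration. */
    int kvm_arch_init(MachineState *ms, KVMState *s)
    {
        /* ... existing x86-specific setup ... */
        if (ms->cgs) {
            /* &error_fatal keeps the sketch short; real code propagates errors */
            sev_kvm_init(ms->cgs, &error_fatal);
        }
        return 0;
    }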
2021-02-08  confidential guest support: Introduce cgs "ready" flag (David Gibson, 3 files, -0/+36)
The platform specific details of mechanisms for implementing confidential guest support may require setup at various points during initialization. Thus, it's not really feasible to have a single cgs initialization hook, but instead each mechanism needs its own initialization calls in arch or machine specific code. However, to make it harder to have a bug where a mechanism isn't properly initialized under some circumstances, we want to have a common place, late in boot, where we verify that cgs has been initialized if it was requested. This patch introduces a ready flag to the ConfidentialGuestSupport base type to accomplish this, which we verify in qemu_machine_creation_done(). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Greg Kurz <groug@kaod.org>
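As a sketch of the pattern being described (simplified, with an invented helper name, not the literal patch), each mechanism's own setup code sets the flag once it has finished, and the late check only verifies it:

    /* Sketch: 'ready' lives in the ConfidentialGuestSupport base struct and is
     * set by each mechanism's own init code; the late check is roughly: */
    static void check_cgs_ready_sketch(MachineState *ms, Error **errp)
    {
        if (ms->cgs && !ms->cgs->ready) {
            error_setg(errp, "confidential guest support was requested, "
                       "but no mechanism finished initializing it");
        }
    }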
2021-02-08  sev: Add Error ** to sev_kvm_init() (David Gibson, 4 files, -19/+20)
This allows failures to be reported richly and idiomatically. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
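A minimal sketch of the idiom this enables (names and checks illustrative, not the actual hunk): callers pass an Error ** and failures are reported with error_setg() instead of a bare return code plus a printed message.

    /* Sketch of the error-reporting idiom described above. */
    static int sev_kvm_init_sketch(ConfidentialGuestSupport *cgs, Error **errp)
    {
        if (!kvm_enabled()) {
            error_setg(errp, "SEV requires KVM");
            return -1;
        }
        /* ... */
        return 0;
    }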
2021-02-08  confidential guest support: Rework the "memory-encryption" property (David Gibson, 6 files, -42/+47)
Currently the "memory-encryption" property is only looked at once we get to kvm_init(). Although protection of guest memory from the hypervisor isn't something that could really ever work with TCG, it's not conceptually tied to the KVM accelerator. In addition, the way the string property is resolved to an object is almost identical to how a QOM link property is handled. So, create a new "confidential-guest-support" link property which sets this QOM interface link directly in the machine. For compatibility we keep the "memory-encryption" property, but now implemented in terms of the new property. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Greg Kurz <groug@kaod.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
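A hedged sketch of what such a link property can look like on the machine class (the TYPE_ macro and the MachineState field name are assumptions drawn from this series, not quoted from the patch):

    static void machine_class_init_sketch(ObjectClass *oc, void *data)
    {
        /* "confidential-guest-support" as a QOM link to the cgs object */
        object_class_property_add_link(oc, "confidential-guest-support",
                                       TYPE_CONFIDENTIAL_GUEST_SUPPORT,
                                       offsetof(MachineState, cgs),
                                       object_property_allow_set_link,
                                       OBJ_PROP_LINK_STRONG);
    }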
2021-02-08  confidential guest support: Move side effect out of machine_set_memory_encryption() (David Gibson, 1 file, -8/+9)
When the "memory-encryption" property is set, we also disable KSM merging for the guest, since it won't accomplish anything. We want that, but doing it in the property set function itself is theoretically incorrect, in the unlikely event of some configuration environment that set the property then cleared it again before constructing the guest. More importantly, it makes some other cleanups we want more difficult. So, instead move this logic to machine_run_board_init(), conditional on the final value of the property. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Greg Kurz <groug@kaod.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
2021-02-08  sev: Remove false abstraction of flash encryption (David Gibson, 8 files, -85/+31)
When AMD's SEV memory encryption is in use, flash memory banks (which are initialized by pc_system_flash_map()) need to be encrypted with the guest's key, so that the guest can read them. That's abstracted via the kvm_memcrypt_encrypt_data() callback in the KVM state... except that it doesn't really abstract much at all. For starters, the only call site is in code specific to the 'pc' family of machine types, so it's obviously specific to those and to x86 to begin with. But it makes a bunch of further assumptions that need not be true about an arbitrary confidential guest system based on memory encryption, let alone one based on other mechanisms:
* it assumes that the flash memory is defined to be encrypted with the guest key, rather than being shared with the hypervisor
* it assumes that the hypervisor has some mechanism to encrypt data into the guest, even though it can't decrypt it out, since that's the whole point
* the interface assumes that this encryption can be done in place, which implies that the hypervisor can write into a confidential guest's memory, even if what it writes isn't meaningful
So really, this "abstraction" is actually pretty specific to the way SEV works. So, this patch removes it and instead has the PC flash initialization code call into a SEV specific callback. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Cornelia Huck <cohuck@redhat.com>
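An illustrative shape of the replacement (helper name and signature are assumptions; only the call-site idea is taken from the message above):

    /* Sketch: the 'pc' flash setup calls an SEV-specific helper directly
     * instead of going through a generic kvm_memcrypt_* hook. */
    static void pc_flash_encrypt_sketch(uint8_t *flash_ptr, uint64_t flash_size,
                                        Error **errp)
    {
        if (sev_enabled()) {
            /* encrypt the flash contents in place with the guest's key */
            sev_encrypt_flash(flash_ptr, flash_size, errp);
        }
    }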
2021-02-08  confidential guest support: Introduce new confidential guest support class (David Gibson, 5 files, -2/+76)
Several architectures have mechanisms which are designed to protect guest memory from interference or eavesdropping by a compromised hypervisor. AMD SEV does this with in-chip memory encryption and Intel's TDX can do similar things. POWER's Protected Execution Framework (PEF) accomplishes a similar goal using an ultravisor and new memory protection features, instead of encryption. To (partially) unify handling for these, this introduces a new ConfidentialGuestSupport QOM base class. "Confidential" is kind of vague, but "confidential computing" seems to be the buzzword about these schemes, and "secure" or "protected" are often used in connection to unrelated things (such as hypervisor-from-guest or guest-from-guest security). The "support" in the name is significant because in at least some of the cases it requires the guest to take specific actions in order to protect itself from hypervisor eavesdropping. Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
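For readers less familiar with QOM, a minimal sketch of what such an abstract base type looks like (standard QOM boilerplate, not the exact declarations from the patch):

    #define TYPE_CONFIDENTIAL_GUEST_SUPPORT "confidential-guest-support"

    typedef struct ConfidentialGuestSupport {
        Object parent;
    } ConfidentialGuestSupport;

    static const TypeInfo confidential_guest_support_info = {
        .name          = TYPE_CONFIDENTIAL_GUEST_SUPPORT,
        .parent        = TYPE_OBJECT,
        .instance_size = sizeof(ConfidentialGuestSupport),
        .abstract      = true,
    };

    static void cgs_register_types(void)
    {
        type_register_static(&confidential_guest_support_info);
    }
    type_init(cgs_register_types);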
2021-02-08  qom: Allow optional sugar props (Greg Kurz, 4 files, -9/+18)
Global properties have an @optional field, which allows a given property to be applied to a given type even if one of its subclasses doesn't support it. This is especially used in the compat code when dealing with the "disable-modern" and "disable-legacy" properties and the "virtio-pci" type. Allow object_register_sugar_prop() to set this field as well. Signed-off-by: Greg Kurz <groug@kaod.org> Message-Id: <159738953558.377274.16617742952571083440.stgit@bahia.lan> Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
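A hedged sketch of the resulting call shape (the new parameter name is an assumption, and the compat values shown are just the properties the message mentions):

    /* Sketch: object_register_sugar_prop() grows a flag so the generated
     * global property can be marked optional. */
    void object_register_sugar_prop(const char *driver, const char *prop,
                                    const char *value, bool optional);

    static void register_virtio_pci_compat_sketch(void)
    {
        object_register_sugar_prop("virtio-pci", "disable-modern", "on",
                                   true /* optional */);
    }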
2021-02-05  accel: introduce AccelCPUClass extending CPUClass (Claudio Fontana, 4 files, -0/+87)
Add a new optional interface to CPUClass, which allows accelerators to extend the CPUClass with additional accelerator-specific initializations. This will allow separating the target CPU code that is specific to each accelerator, and registering it automatically with object hierarchy lookup depending on accelerator code availability, as part of the accel_init_interfaces() initialization step. Signed-off-by: Claudio Fontana <cfontana@suse.de> Message-Id: <20210204163931.7358-19-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  accel: replace struct CpusAccel with AccelOpsClass (Claudio Fontana, 44 files, -163/+361)
This will allow us to centralize the registration of the cpus.c module accelerator operations (in accel/accel-softmmu.c), and trigger it automatically using object hierarchy lookup from the new accel_init_interfaces() initialization step, depending just on which accelerators are available in the code. Rename all tcg-cpus.c, kvm-cpus.c, etc to tcg-accel-ops.c, kvm-accel-ops.c, etc, matching the object type names. Signed-off-by: Claudio Fontana <cfontana@suse.de> Message-Id: <20210204163931.7358-18-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
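A rough sketch of the pattern (type and member names are illustrative rather than quoted): the per-accelerator operations become a QOM class, so the right one can be picked up by type lookup instead of being passed around as a struct.

    /* Sketch: an accelerator's cpus.c operations as a QOM class. */
    typedef struct AccelOpsClass {
        ObjectClass parent_class;
        void (*create_vcpu_thread)(CPUState *cpu);   /* illustrative members */
        void (*kick_vcpu_thread)(CPUState *cpu);
    } AccelOpsClass;

    static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
    {
        AccelOpsClass *ops = (AccelOpsClass *)oc;
        ops->create_vcpu_thread = kvm_start_vcpu_thread;   /* assumed name */
        ops->kick_vcpu_thread   = kvm_kick_vcpu_thread;    /* assumed name */
    }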
2021-02-05  accel: extend AccelState and AccelClass to user-mode (Claudio Fontana, 24 files, -53/+125)
Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> [claudio: rebased on Richard's splitwx work] Signed-off-by: Claudio Fontana <cfontana@suse.de> Message-Id: <20210204163931.7358-17-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: tcg_ops: move to tcg-cpu-ops.h, keep a pointer in CPUClass (Claudio Fontana, 36 files, -306/+582)
we cannot in principle make the TCG Operations field definitions conditional on CONFIG_TCG in code that is included by both common_ss and specific_ss modules. Therefore, what we can do safely to restrict the TCG fields to TCG-only builds, is to move all tcg cpu operations into a separate header file, which is only included by TCG, target-specific code. This leaves just a NULL pointer in the cpu.h for the non-TCG builds. This also tidies up the code in all targets a bit, having all TCG cpu operations neatly contained by a dedicated data struct. Signed-off-by: Claudio Fontana <cfontana@suse.de> Message-Id: <20210204163931.7358-16-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
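In other words (a sketch, reusing the struct name from the "Introduce TCGCpuOperations struct" entry further down): the full member definitions move to a TCG-only header, while cpu.h keeps only an opaque pointer.

    /* include/hw/core/cpu.h (built for every configuration) -- sketch */
    struct TCGCpuOperations;                       /* opaque here */

    struct CPUClass {
        /* ... other members elided ... */
        struct TCGCpuOperations *tcg_ops;          /* NULL in non-TCG builds */
    };

    /* the member definitions live in a TCG-only tcg-cpu-ops.h header */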
2021-02-05  cpu: move debug_check_watchpoint to tcg_ops (Claudio Fontana, 5 files, -17/+12)
commit 568496c0c0f1 ("cpu: Add callback to check architectural") and commit 3826121d9298 ("target-arm: Implement checking of fired") introduced an ARM-specific hack for cpu_check_watchpoint. Make debug_check_watchpoint optional, and move it to tcg_ops. Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20210204163931.7358-15-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: move adjust_watchpoint_address to tcg_ops (Claudio Fontana, 4 files, -9/+10)
commit 40612000599e ("arm: Correctly handle watchpoints for BE32 CPUs") introduced this ARM-specific, TCG-specific hack to adjust the address, before checking it with cpu_check_watchpoint. Make adjust_watchpoint_address optional and move it to tcg_ops. Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20210204163931.7358-14-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  physmem: make watchpoint checking code TCG-only (Claudio Fontana, 1 file, -69/+72)
cpu_check_watchpoint, watchpoint_address_matches are TCG-only. Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20210204163931.7358-13-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: move do_unaligned_access to tcg_ops (Claudio Fontana, 14 files, -19/+23)
make it consistently SOFTMMU-only. Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> [claudio: make the field presence in cpu.h unconditional, removing the ifdefs] Message-Id: <20210204163931.7358-12-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: move cc->transaction_failed to tcg_ops (Claudio Fontana, 12 files, -29/+34)
Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> [claudio: wrap target code around CONFIG_TCG and !CONFIG_USER_ONLY] Note: need to be careful with the use of CONFIG_USER_ONLY, avoiding its use in headers used by common_ss code (should be poisoned). Message-Id: <20210204163931.7358-11-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: move cc->do_interrupt to tcg_ops (Claudio Fontana, 27 files, -42/+41)
Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210204163931.7358-10-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  target/arm: do not use cc->do_interrupt for KVM directly (Claudio Fontana, 2 files, -4/+6)
cc->do_interrupt is in theory a TCG callback used in accel/tcg only, to prepare the emulated architecture to take an interrupt as defined in the hardware specifications, but in reality the _do_interrupt style of functions in targets are also occasionally reused by KVM to prepare the architecture state in a similar way where userspace code has identified that it needs to deliver an exception to the guest. In the case of ARM, that includes: 1) the vcpu thread got a SIGBUS indicating a memory error, and we need to deliver a Synchronous External Abort to the guest to let it know about the error. 2) the kernel told us about a debug exception (breakpoint, watchpoint) but it is not for one of QEMU's own gdbstub breakpoints/watchpoints so it must be a breakpoint the guest itself has set up, therefore we need to deliver it to the guest. So in order to reuse code, the same arm_do_interrupt function is used. This is all fine, but we need to avoid calling it using the callback registered in CPUClass, since that one is now TCG-only. Fortunately this is easily solved by replacing calls to CPUClass::do_interrupt() with explicit calls to arm_do_interrupt(). Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Cc: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20210204163931.7358-9-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
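Concretely, as a sketch (not the literal hunk), the KVM-side code stops going through the CPUClass hook:

    static void kvm_arm_deliver_exception_sketch(CPUState *cs)
    {
        /* before: CPU_GET_CLASS(cs)->do_interrupt(cs);   (TCG-only hook) */
        arm_do_interrupt(cs);   /* helper name as given in the message above */
    }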
2021-02-05  cpu: Move debug_excp_handler to tcg_ops (Eduardo Habkost, 7 files, -9/+9)
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210204163931.7358-8-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: Move tlb_fill to tcg_ops (Eduardo Habkost, 26 files, -38/+42)
[claudio: wrapped target code in CONFIG_TCG] Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210204163931.7358-7-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: Move cpu_exec_* to tcg_ops (Eduardo Habkost, 25 files, -42/+54)
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> [claudio: wrapped target code in CONFIG_TCG] Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20210204163931.7358-6-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: Move synchronize_from_tb() to tcg_ops (Eduardo Habkost, 13 files, -22/+30)
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> [claudio: wrapped target code in CONFIG_TCG, reworded comments] Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20210204163931.7358-5-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  accel/tcg: split TCG-only code from cpu_exec_realizefn (Claudio Fontana, 5 files, -40/+77)
move away TCG-only code, make it compile only on TCG. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> [claudio: moved the prototypes from hw/core/cpu.h to exec/cpu-all.h] Signed-off-by: Claudio Fontana <cfontana@suse.de> Message-Id: <20210204163931.7358-4-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  target/riscv: remove CONFIG_TCG, as it is always TCG (Claudio Fontana, 1 file, -2/+1)
for now only TCG is allowed as an accelerator for riscv, so remove the CONFIG_TCG use. Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20210204163931.7358-3-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  cpu: Introduce TCGCpuOperations struct (Eduardo Habkost, 25 files, -30/+48)
The TCG-specific CPU methods will be moved to a separate struct, to make it easier to move accel-specific code outside generic CPU code in the future. Start by moving tcg_initialize(). The new CPUClass.tcg_ops field may eventually become a pointer, but keep it an embedded struct for now, to make code conversion easier. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> [claudio: move TCGCpuOperations inside include/hw/core/cpu.h] Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-Id: <20210204163931.7358-2-cfontana@suse.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
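A minimal sketch of the shape described (only the one method the message names is shown):

    /* Sketch: TCG-specific methods grouped into their own struct, embedded in
     * CPUClass for now to keep the conversion mechanical. */
    typedef struct TCGCpuOperations {
        void (*initialize)(void);          /* was CPUClass::tcg_initialize */
    } TCGCpuOperations;

    struct CPUClass {
        /* ... unrelated members elided ... */
        TCGCpuOperations tcg_ops;          /* embedded struct, not yet a pointer */
    };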
2021-02-05  tcg/tci: Remove TCG_CONST (Richard Henderson, 4 files, -194/+89)
Restrict all operands to registers. All constants will be forced into registers by the middle-end. Removing the difference in how immediate integers were encoded will allow more code to be shared between 32-bit and 64-bit operations. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Fix TCG_REG_R4 misusage (Richard Henderson, 2 files, -10/+5)
This was removed from tcg_target_reg_alloc_order and tcg_target_call_iarg_regs on the assumption that it was the stack. This was incorrectly copied from i386. For tci, the stack is R15. By adding R4 back to tcg_target_call_iarg_regs, adjust the other entries so that 6 (or 12) entries are still present in the array, and adjust the numbers in the interpreter. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Restrict TCG_TARGET_NB_REGS to 16 (Richard Henderson, 2 files, -53/+5)
As noted in several comments, 8 regs is not enough for 32-bit to perform calls, as currently implemented. Shortly, we will rearrange the encoding which will make 32 regs impossible. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Remove TODO as unused (Richard Henderson, 1 file, -8/+0)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Implement 64-bit division (Richard Henderson, 3 files, -11/+25)
Trivially implemented like other arithmetic. Tested via check-tcg and the ppc64 target. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Remove dead code for TCG_TARGET_HAS_div2_* (Richard Henderson, 2 files, -20/+0)
We do not simultaneously support div and div2 -- it's one or the other. TCI is already using div, so remove div2. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Use g_assert_not_reached (Richard Henderson, 1 file, -8/+7)
Three TODO instances are cases that can never happen. The other uses of tcg_abort also indicate unreachable cases. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Stefan Weil <sw@weilnetz.de> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Merge INDEX_op_{st_i32,st32_i64} (Richard Henderson, 1 file, -6/+1)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Move stack bounds check to compile-time (Richard Henderson, 2 files, -2/+13)
The existing check was incomplete: (1) Only applied to two of the 7 stores, and not to the loads at all. (2) Only checked the upper, but not the lower bound of the stack. Doing this at compile time means that we don't need to do it at runtime as well. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
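The mechanism is presumably QEMU's build-time assertion macro; a purely illustrative example with invented constants (the real bound involves the TCI frame layout, which the patch checks instead):

    /* Invented constants, only to show the compile-time-check pattern. */
    #define TCI_SKETCH_STACK_BYTES   512
    #define TCI_SKETCH_FRAME_BYTES   (64 * sizeof(long))

    /* Fails the build if the frame could not fit in the stack window, so the
     * interpreter needs no per-load/store runtime bounds check. */
    QEMU_BUILD_BUG_ON(TCI_SKETCH_FRAME_BYTES > TCI_SKETCH_STACK_BYTES);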
2021-02-05  tcg/tci: Merge INDEX_op_st16_{i32,i64} (Richard Henderson, 1 file, -7/+1)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Merge INDEX_op_st8_{i32,i64} (Richard Henderson, 1 file, -7/+1)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Merge INDEX_op_{ld_i32,ld32u_i64} (Richard Henderson, 1 file, -6/+1)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Merge INDEX_op_ld16s_{i32,i64} (Richard Henderson, 1 file, -4/+1)
Eliminating a TODO for ld16s_i64. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Merge INDEX_op_ld16u_{i32,i64} (Richard Henderson, 1 file, -8/+5)
Eliminating a TODO for ld16u_i32. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Merge INDEX_op_ld8s_{i32,i64} (Richard Henderson, 1 file, -8/+5)
Eliminating a TODO for ld8s_i32. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Merge INDEX_op_ld8u_{i32,i64} (Richard Henderson, 1 file, -7/+13)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Inline tci_write_reg64 into 64-bit callers (Richard Henderson, 1 file, -33/+27)
Note that we had two functions of the same name: a 32-bit version which took two register numbers and a 64-bit version which was a no-op wrapper for tcg_write_reg. After this, we are left with only the 32-bit version. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Inline tci_write_reg32 into all callers (Richard Henderson, 1 file, -36/+30)
For a 64-bit TCI, the upper bits of a 32-bit operation are undefined (much like a native ppc64 32-bit operation). It simplifies everything if we don't force-extend the result. Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Inline tci_write_reg16 into the only caller (Richard Henderson, 1 file, -9/+1)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Inline tci_write_reg8 into its callers (Richard Henderson, 1 file, -7/+2)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Inline tci_write_reg32s into the only caller (Richard Henderson, 1 file, -9/+1)
Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Implement INDEX_op_ld8s_i64 (Stefan Weil, 1 file, -1/+4)
That TCG opcode is used by debian-buster (arm64) running ffmpeg: qemu-aarch64 /usr/bin/ffmpeg -i theora.mkv theora.webm Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reported-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Stefan Weil <sw@weilnetz.de> Message-Id: <20210128020425.2055454-1-sw@weilnetz.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2021-02-05  tcg/tci: Implement INDEX_op_ld16s_i32 (Stefan Weil, 1 file, -1/+4)
That TCG opcode is used by debian-buster (arm64) running ffmpeg: qemu-aarch64 /usr/bin/ffmpeg -i theora.mkv theora.webm Tested-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reported-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Stefan Weil <sw@weilnetz.de> Message-Id: <20210128024814.2056958-1-sw@weilnetz.de> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>