path: root/tcg/optimize.c
Commit message (Author, Date, Files, Lines)
* tcg/optimize: Fix folding of vector bitsel (WANG Rui, 2025-09-23, 1 file, -1/+2)
  It looks like a typo. When the false value (C) is the constant -1, the
  correct fold should be: R = B | ~A

  Reproducer (LoongArch64 assembly):

      .text
      .globl _start
  _start:
      vldi         $vr1, 3073
      vldi         $vr2, 1023
      vbitsel.v    $vr0, $vr2, $vr1, $vr2
      vpickve2gr.d $a1, $vr0, 1
      xori         $a0, $a1, 1
      li.w         $a7, 93
      syscall      0

  Fixes: e58b977238e3 ("tcg/optimize: Optimize bitsel_vec")
  Link: https://github.com/llvm/llvm-project/issues/159610
  Signed-off-by: WANG Rui <wangrui@loongson.cn>
  Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Message-ID: <20250919124901.2756538-1-wangrui@loongson.cn>
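  For reference, the fold follows from the usual bitwise-select identity
  R = (B & A) | (C & ~A): with C = -1 the second term becomes ~A, and
  (B & A) | ~A simplifies to B | ~A by absorption. A minimal standalone C
  check of that identity (illustrative only, not QEMU code):

      #include <assert.h>
      #include <stdint.h>

      /* Bitwise select: for each bit, pick B where A is 1, else C. */
      static uint64_t bitsel(uint64_t a, uint64_t b, uint64_t c)
      {
          return (b & a) | (c & ~a);
      }

      int main(void)
      {
          uint64_t samples[] = { 0, 1, 0x00ff00ffull, 0xdeadbeefcafef00dull, ~0ull };
          for (int i = 0; i < 5; i++) {
              for (int j = 0; j < 5; j++) {
                  uint64_t a = samples[i], b = samples[j];
                  /* false value C == -1 (all ones): result is B | ~A */
                  assert(bitsel(a, b, ~0ull) == (b | ~a));
              }
          }
          return 0;
      }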
* tcg/optimize: Don't fold INDEX_op_and_vec to extract (Richard Henderson, 2025-07-21, 1 file, -1/+1)
  There is no such thing as vector extract.

  Fixes: 932522a9ddc1 ("tcg/optimize: Fold and to extract during optimize")
  Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3036
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Tested-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
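  The fold in question (see the "Fold and to extract during optimize"
  entry below) can, for example, rewrite an AND with a contiguous low-bit
  mask as an extract of the low field; TCG defines extract only for
  integer types, not vectors, which is what this fix restores. A sketch of
  such a mask test, with illustrative names rather than the actual
  tcg/optimize.c helpers:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /*
       * True if c has the form (1 << len) - 1, i.e. a contiguous run of
       * low bits.  For an integer op, "x & c" is then equivalent to
       * extracting the low 'len' bits of x; no such extract opcode
       * exists for and_vec.
       */
      static bool is_low_mask(uint64_t c, unsigned *len)
      {
          if (c == 0 || (c & (c + 1)) != 0) {
              return false;
          }
          *len = 64 - __builtin_clzll(c);
          return true;
      }

      int main(void)
      {
          unsigned len;
          if (is_low_mask(0x00ff, &len)) {
              printf("and with 0x00ff == extract of low %u bits\n", len);
          }
          return 0;
      }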
* tcg/optimize: Simplify fold_eqv constant checks (Richard Henderson, 2025-06-30, 1 file, -3/+1)
  Both cases are handled by fold_xor after conversion.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Simplify fold_orc constant checks (Richard Henderson, 2025-06-30, 1 file, -5/+5)
  If operand 2 is constant, then the computation of z_mask and a_mask
  will produce the same results as the explicit check via fold_xi_to_i.
  Shift the calls of fold_xx_to_i and fold_ix_to_not down below the
  i2->is_const check.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
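  The reasoning here, and in the fold_andc and fold_and patches below, is
  the usual known-bits argument: once operand 2 is constant its masks are
  exact, so propagating masks already yields whatever constant the
  explicit check would have produced. A standalone sketch of that
  propagation for orc (r = a | ~b), with illustrative names rather than
  the real TempOptInfo fields:

      #include <assert.h>
      #include <stdint.h>

      /*
       * Known-bits summary of a value: z is the set of bits that may be 1,
       * o is the set of bits known to be 1.  The value is fully known
       * (a constant) exactly when z == o.
       */
      typedef struct {
          uint64_t z;   /* maybe-one mask */
          uint64_t o;   /* known-one mask */
      } KnownBits;

      /* Propagate known bits through r = a | ~b. */
      static KnownBits orc_bits(KnownBits a, KnownBits b)
      {
          KnownBits r;
          r.o = a.o | ~b.z;   /* known 1 where a is known 1 or b is known 0 */
          r.z = a.z | ~b.o;   /* may be 1 unless a is known 0 and b is known 1 */
          return r;
      }

      int main(void)
      {
          /* With operand 2 the constant 0, every result bit is known 1:
           * the mask computation alone reproduces the fold to -1. */
          KnownBits a = { .z = 0x1234, .o = 0x0204 };   /* partially known */
          KnownBits zero = { .z = 0, .o = 0 };          /* the constant 0 */
          KnownBits r = orc_bits(a, zero);
          assert(r.z == ~0ull && r.o == ~0ull);
          return 0;
      }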
* tcg/optimize: Simplify fold_andc constant checks (Richard Henderson, 2025-06-30, 1 file, -4/+5)
  If operand 2 is constant, then the computation of z_mask and a_mask
  will produce the same results as the explicit check via fold_xi_to_i.
  Shift the calls of fold_xx_to_i and fold_ix_to_not down below the
  i2->is_const check.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Simplify fold_and constant checks (Richard Henderson, 2025-06-30, 1 file, -4/+3)
  If operand 2 is constant, then the computation of z_mask and a_mask
  will produce the same results as the explicit checks via fold_xi_to_i
  and fold_xi_to_x. Shift the call of fold_xx_to_x down below the
  ti_is_const(t2) check.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Fold and to extract during optimize (Richard Henderson, 2025-06-30, 1 file, -3/+30)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Use fold_and in do_constant_folding_cond[12] (Richard Henderson, 2025-06-30, 1 file, -0/+5)
  When lowering tst comparisons, completely fold the and opcode that we
  generate.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_shift (Richard Henderson, 2025-06-30, 1 file, -2/+4)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_sextract (Richard Henderson, 2025-06-30, 1 file, -24/+6)
  This was the last use of fold_affected_mask, now fully replaced by
  fold_masks_zosa.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_movcond (Richard Henderson, 2025-06-30, 1 file, -2/+3)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_extu (Richard Henderson, 2025-06-30, 1 file, -3/+9)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_exts (Richard Henderson, 2025-06-30, 1 file, -2/+4)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use z_bits and o_bits in fold_extract2 (Richard Henderson, 2025-06-30, 1 file, -13/+25)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_extract (Richard Henderson, 2025-06-30, 1 file, -7/+5)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_deposit (Richard Henderson, 2025-06-30, 1 file, -2/+4)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_bswap (Richard Henderson, 2025-06-30, 1 file, -25/+24)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_xor (Richard Henderson, 2025-06-30, 1 file, -3/+6)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use zero, one and affected bits in fold_orc (Richard Henderson, 2025-06-30, 1 file, -2/+9)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use one and affected bits in fold_or (Richard Henderson, 2025-06-30, 1 file, -2/+8)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use z_bits and o_bits in fold_not (Richard Henderson, 2025-06-30, 1 file, -1/+5)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use z_bits and o_bits in fold_nor (Richard Henderson, 2025-06-30, 1 file, -4/+10)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use z_bits and o_bits in fold_nand (Richard Henderson, 2025-06-30, 1 file, -4/+10)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use z_bits and o_bits in fold_eqv (Richard Henderson, 2025-06-30, 1 file, -2/+12)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_andc (Richard Henderson, 2025-06-30, 1 file, -15/+8)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Build and use o_bits in fold_and (Richard Henderson, 2025-06-30, 1 file, -13/+7)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Introduce fold_masks_zosa (Richard Henderson, 2025-06-30, 1 file, -5/+11)
  Add a new function with an affected mask. This will allow folding to a
  constant to happen before folding to a copy, without having to mind
  the ordering in all users.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
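  The ordering concern can be pictured with a small, standalone finish
  routine: once the zero, one and affected masks are all to hand, the
  all-bits-known (constant) case is tested before the "result is
  identical to operand 1" (copy) case, so callers never have to worry
  which fold wins. Names and structure are illustrative, not the real
  fold_masks_zosa signature:

      #include <stdint.h>

      typedef enum { FOLD_NONE, FOLD_CONST, FOLD_COPY } FoldResult;

      /*
       * z: bits of the result that may be 1
       * o: bits of the result known to be 1
       * a: "affected" mask; a == 0 means the result is known to be
       *    identical to operand 1
       */
      static FoldResult finish_masks(uint64_t z, uint64_t o, uint64_t a,
                                     uint64_t *const_val)
      {
          if (z == o) {             /* every bit known: fold to a constant */
              *const_val = o;
              return FOLD_CONST;
          }
          if (a == 0) {             /* otherwise, fold to a copy of operand 1 */
              return FOLD_COPY;
          }
          return FOLD_NONE;
      }

      int main(void)
      {
          uint64_t v;
          /* Both tests may apply (e.g. a result known to be zero that
           * also equals operand 1); the constant fold is tried first. */
          return finish_masks(0, 0, 0, &v) == FOLD_CONST ? 0 : 1;
      }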
* tcg/optimize: Add one's mask to TempOptInfo (Richard Henderson, 2025-06-30, 1 file, -16/+35)
  Add o_mask mirroring z_mask, but for 1's instead of 0's. Drop is_const
  and val fields, which now logically overlap.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
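  The "logically overlap" point is that the two masks already encode
  constants: when every bit is known, i.e. the maybe-one and known-one
  masks agree, the value itself is just the known-one mask. A minimal
  sketch with illustrative names (not the real TempOptInfo layout):

      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* Illustrative stand-in for a temp's known-bits state. */
      typedef struct {
          uint64_t z_mask;   /* bits that may be 1 (clear bit => known 0) */
          uint64_t o_mask;   /* bits known to be 1 */
      } TempBits;

      /* Constant iff every bit is known, i.e. the masks agree; the
       * constant's value is then o_mask, so separate is_const and val
       * fields carry no extra information. */
      static bool temp_is_const(const TempBits *t)
      {
          return t->z_mask == t->o_mask;
      }

      static uint64_t temp_const_val(const TempBits *t)
      {
          assert(temp_is_const(t));
          return t->o_mask;
      }

      int main(void)
      {
          TempBits c42 = { .z_mask = 42, .o_mask = 42 };    /* the constant 42 */
          TempBits low8 = { .z_mask = 0xff, .o_mask = 0 };  /* low byte unknown */
          assert(temp_is_const(&c42) && temp_const_val(&c42) == 42);
          assert(!temp_is_const(&low8));
          return 0;
      }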
* tcg/optimize: Introduce arg_const_val (Richard Henderson, 2025-06-30, 1 file, -37/+41)
  Use arg_const_val instead of direct access to the TempOptInfo val
  member. Rename both val and is_const to catch all direct accesses.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_{ld,st}_{i32,i64,i128} (Richard Henderson, 2025-04-28, 1 file, -11/+4)
  Merge into INDEX_op_{ld,st,ld2,st2}, where "2" indicates that two
  inputs or outputs are required. This simplifies the processing of
  i64/i128 depending on host word size.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Remove INDEX_op_qemu_st8_* (Richard Henderson, 2025-04-28, 1 file, -1/+0)
  The i386 backend can now check TCGOP_FLAGS to select the correct set
  of constraints.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_st*_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -20/+8)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_ld*_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -14/+13)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Remove add2/sub2 opcodes (Richard Henderson, 2025-04-28, 1 file, -87/+0)
  All uses have been replaced by add/sub carry opcodes.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: With two const operands, prefer 0 in arg1 (Richard Henderson, 2025-04-28, 1 file, -6/+12)
  For most binary operands, two const operands fold. However, the
  add/sub carry opcodes have a third input. Prefer "reg, zero, const"
  since many risc hosts have a zero register that can fit a
  "reg, reg, const" insn format.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg/optimize: Handle add/sub with carry opcodes (Richard Henderson, 2025-04-28, 1 file, -3/+316)
  Propagate known carry when possible, and simplify the opcodes to not
  require carry-in when known. The result will be cleaned up further by
  the subsequent liveness analysis pass.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
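  As an illustration of "propagate known carry": a folding pass can keep
  a three-state record of the carry flag and, whenever the carry into an
  add-with-carry is known, rewrite the op so that it no longer consumes
  carry-in. The opcode names below are made up for the sketch; the real
  TCG opcodes and the liveness clean-up differ:

      #include <stdbool.h>

      /* Tracked state of the carry flag between ops. */
      typedef enum { CARRY_UNKNOWN, CARRY_ZERO, CARRY_ONE } CarryState;

      /* Made-up mini-IR: plain add, add-plus-one, add consuming carry-in. */
      typedef enum { OP_ADD, OP_ADD_PLUS_1, OP_ADD_CARRY_IN } MiniOp;

      typedef struct {
          MiniOp op;
      } MiniInsn;

      /* If the carry into an OP_ADD_CARRY_IN is known, drop the carry-in:
       * carry 0 becomes a plain add, carry 1 an add-plus-one (which a
       * later step can fold into an immediate).  Returns true if the
       * insn was simplified. */
      static bool fold_known_carry(MiniInsn *insn, CarryState carry)
      {
          if (insn->op != OP_ADD_CARRY_IN || carry == CARRY_UNKNOWN) {
              return false;
          }
          insn->op = (carry == CARRY_ZERO) ? OP_ADD : OP_ADD_PLUS_1;
          return true;
      }

      int main(void)
      {
          MiniInsn insn = { OP_ADD_CARRY_IN };
          return fold_known_carry(&insn, CARRY_ZERO) && insn.op == OP_ADD ? 0 : 1;
      }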
* tcg: Add add/sub with carry opcodes and infrastructure (Richard Henderson, 2025-04-28, 1 file, -0/+11)
  Liveness needs to track carry-live state in order to determine if the
  (hidden) output of the opcode is used. Code generation needs to track
  carry-live state in order to avoid clobbering cpu flags when loading
  constants. So far, output routines and backends are unchanged.

  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_extract2_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -5/+5)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_deposit_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -1/+1)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_sextract_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -19/+3)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_extract_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -10/+4)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Rename INDEX_op_bswap64_i64 to INDEX_op_bswap64 (Richard Henderson, 2025-04-28, 1 file, -3/+3)
  Even though bswap64 can only be used with TCG_TYPE_I64, rename the
  opcode to maintain uniformity.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_bswap32_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -4/+3)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_bswap16_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -4/+3)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_movcond_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -1/+1)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_brcond_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -3/+3)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_{neg}setcond_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -24/+8)
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Remove TCG_TARGET_HAS_negsetcond_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -15/+9)
  All targets now provide negsetcond, so remove the conditional.

  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_mulu2_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -8/+9)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
* tcg: Merge INDEX_op_muls2_{i32,i64} (Richard Henderson, 2025-04-28, 1 file, -8/+9)
  Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
  Signed-off-by: Richard Henderson <richard.henderson@linaro.org>