qemu-ppc

Re: [RFC PATCH 9/9] target/ppc: Use tcg_gen_sextract_tl


From: Richard Henderson
Subject: Re: [RFC PATCH 9/9] target/ppc: Use tcg_gen_sextract_tl
Date: Mon, 23 Oct 2023 18:04:01 -0700
User-agent: Mozilla Thunderbird

On 10/23/23 09:09, Philippe Mathieu-Daudé wrote:
> Inspired-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
> RFC: Please double-check 32/64 & bits
> ---
>  target/ppc/translate.c | 22 ++++------------------
>  1 file changed, 4 insertions(+), 18 deletions(-)
> 
> diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> index c6e1f7c2ca..1370db9bd5 100644
> --- a/target/ppc/translate.c
> +++ b/target/ppc/translate.c
> @@ -2892,13 +2892,7 @@ static void gen_slw(DisasContext *ctx)
>      t0 = tcg_temp_new();
>      /* AND rS with a mask that is 0 when rB >= 0x20 */
> -#if defined(TARGET_PPC64)
> -    tcg_gen_shli_tl(t0, cpu_gpr[rB(ctx->opcode)], 0x3a);
> -    tcg_gen_sari_tl(t0, t0, 0x3f);
> -#else
> -    tcg_gen_shli_tl(t0, cpu_gpr[rB(ctx->opcode)], 0x1a);
> -    tcg_gen_sari_tl(t0, t0, 0x1f);
> -#endif
> +    tcg_gen_sextract_tl(t0, cpu_gpr[rB(ctx->opcode)], 5, 1);
>      tcg_gen_andc_tl(t0, cpu_gpr[rS(ctx->opcode)], t0);

Patch looks correct as is, so
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


However:
I'd be tempted to use and+movcond instead of sext+andc.
Also there is a special case of 32-bit shifts with 64-bit shift count on ppc64.

#ifdef TARGET_PPC64
    /* count is the low 6 bits; the ext32u below zeroes the result
       for counts 32..63 */
    tcg_gen_andi_tl(t0, rb, 0x3f);
#else
    tcg_gen_andi_tl(t0, rb, 0x1f);
    /* t1 is a fresh temp, zero = tcg_constant_tl(0); if bit 5 of the
       count is set, shift a zero instead of rs */
    tcg_gen_andi_tl(t1, rb, 0x20);
    tcg_gen_movcond_tl(TCG_COND_NE, t1, t1, zero, zero, rs);
    rs = t1;
#endif
    tcg_gen_shl_tl(ra, rs, t0);
    tcg_gen_ext32u_tl(ra, ra);


It also makes me wonder about adding some TCGCond for bit-test so that this 
could be

    tcg_gen_movcond_tl(TCG_COND_TSTNE, t1, rb, 0x20, 0, 0, rs);

and make use of the "test" vs "cmp" instructions on most hosts, but especially 
x86.


r~


