qemu-riscv

Re: [PATCH 09/13] target/riscv: Adjust vector address with ol


From: Richard Henderson
Subject: Re: [PATCH 09/13] target/riscv: Adjust vector address with ol
Date: Tue, 9 Nov 2021 10:25:50 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0

On 11/9/21 10:05 AM, LIU Zhiwei wrote:
Do you mean we should add this code to riscv_tr_init_disas_context:

    if (ctx->pm_enabled) {
        switch (priv) {
        case PRV_M:
            env->mask = env->mpmmask;
            env->base = env->mpmbase;
            break;
        case PRV_S:
            env->mask = env->spmmask;
            env->base = env->spmbase;
            break;
        case PRV_U:
            env->mask = env->upmmask;
            env->base = env->upmbase;
            break;
        default:
            g_assert_not_reached();
        }
        ctx->pm_mask = pm_mask[priv];
        ctx->pm_base = pm_base[priv];
        ctx->need_mask = true; /* new flag for mask */
    } else if (get_xlen(ctx) < TARGET_LONG_BITS) {
        env->mask = UINT32_MAX;
        env->base = 0;
Certainly we cannot modify env in riscv_tr_init_disas_context.

        ctx->pm_mask = tcg_constant_tl(UINT32_MAX);
        ctx->pm_base = tcg_constant_tl(0);
        ctx->need_mask = true;
    } else {
        env->mask = UINT64_MAX;
        env->base = 0;
    }

I think the code is wrong; perhaps we should modify write_mpmmask instead:

    env->mask = env->mpmmask = value;

Something like that, yes. However, env->mask must be set based on env->priv, etc.; you can't just unconditionally assign it the same value as mpmmask.

Then you also need to update env->mask in a hook like the one you created in patch 11 to switch context (though I would call it from helper_mret and helper_sret directly, rather than creating a new call from TCG). You then need to call that hook on exception entry, on reset, and in vmstate_riscv_cpu.post_load as well.
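A minimal, self-contained sketch of the idea (field and constant names follow the thread; the reduced struct here is a stand-in for QEMU's real CPURISCVState, and update_mask is a hypothetical name for the hook being discussed, not an actual QEMU function):

```c
#include <assert.h>
#include <stdint.h>

/* Reduced stand-ins for QEMU's privilege levels and CPU state. */
enum { PRV_U = 0, PRV_S = 1, PRV_M = 3 };

typedef struct {
    int priv;
    uint64_t mpmmask, mpmbase;
    uint64_t spmmask, spmbase;
    uint64_t upmmask, upmbase;
    uint64_t mask, base;   /* cached values for the current privilege */
} CPUState;

/* Re-derive the cached mask/base from the current privilege level.
 * Privilege-change sites (mret/sret, exception entry, reset,
 * post_load) would call this after changing env->priv. */
static void update_mask(CPUState *env)
{
    switch (env->priv) {
    case PRV_M:
        env->mask = env->mpmmask;
        env->base = env->mpmbase;
        break;
    case PRV_S:
        env->mask = env->spmmask;
        env->base = env->spmbase;
        break;
    case PRV_U:
        env->mask = env->upmmask;
        env->base = env->upmbase;
        break;
    }
}

/* CSR write stores the new value, then recomputes the cache, instead
 * of blindly doing env->mask = env->mpmmask = value. */
static void write_mpmmask(CPUState *env, uint64_t value)
{
    env->mpmmask = value;
    update_mask(env);
}
```

With this shape, writing mpmmask while running in S-mode leaves env->mask tracking spmmask, and the cache only picks up the M-mode value once the privilege level actually changes.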


r~


