qemu-riscv

From: Wu, Fei
Subject: Re: [PATCH v4 1/2] target/riscv: separate priv from mmu_idx
Date: Fri, 24 Mar 2023 09:22:49 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.9.0

On 3/24/2023 9:02 AM, Wu, Fei wrote:
> On 3/23/2023 11:53 PM, Richard Henderson wrote:
>> On 3/22/23 19:44, Fei Wu wrote:
>>> Currently it's assumed that the 2 low bits of mmu_idx map to the
>>> privilege mode; this assumption won't last, as we are about to add
>>> more mmu_idx values.
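To make that assumption concrete: with the current layout, the privilege level can be recovered by masking the low two bits of mmu_idx. Below is a minimal standalone sketch of that old mapping (PRV_* values as in the RISC-V spec; the extra flag bit is purely illustrative):

    /* Old scheme, illustration only: the low two bits of mmu_idx were
     * defined to equal the privilege level, so a simple mask recovered it. */
    #include <assert.h>

    #define PRV_U 0
    #define PRV_S 1
    #define PRV_M 3

    #define TB_FLAGS_PRIV_MMU_MASK 3   /* the macro this patch removes */

    static int old_priv_from_mmu_idx(int mmu_idx)
    {
        return mmu_idx & TB_FLAGS_PRIV_MMU_MASK;
    }

    int main(void)
    {
        int mmu_idx = (1 << 2) | PRV_S;   /* S-mode plus a hypothetical flag bit */
        assert(old_priv_from_mmu_idx(mmu_idx) == PRV_S);
        return 0;
    }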
>>>
>>> Signed-off-by: Fei Wu <fei2.wu@intel.com>
>>> ---
>>>   target/riscv/cpu.h                             | 1 -
>>>   target/riscv/cpu_helper.c                      | 2 +-
>>>   target/riscv/insn_trans/trans_privileged.c.inc | 2 +-
>>>   target/riscv/insn_trans/trans_xthead.c.inc     | 7 +------
>>>   target/riscv/translate.c                       | 3 +++
>>>   5 files changed, 6 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
>>> index 638e47c75a..66f7e3d1ba 100644
>>> --- a/target/riscv/cpu.h
>>> +++ b/target/riscv/cpu.h
>>> @@ -623,7 +623,6 @@ G_NORETURN void riscv_raise_exception(CPURISCVState *env,
>>>   target_ulong riscv_cpu_get_fflags(CPURISCVState *env);
>>>   void riscv_cpu_set_fflags(CPURISCVState *env, target_ulong);
>>>
>>> -#define TB_FLAGS_PRIV_MMU_MASK                3
>>>   #define TB_FLAGS_PRIV_HYP_ACCESS_MASK   (1 << 2)
>>>   #define TB_FLAGS_MSTATUS_FS MSTATUS_FS
>>>   #define TB_FLAGS_MSTATUS_VS MSTATUS_VS
>>> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
>>> index f88c503cf4..76e1b0100e 100644
>>> --- a/target/riscv/cpu_helper.c
>>> +++ b/target/riscv/cpu_helper.c
>>> @@ -762,7 +762,7 @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
>>>        * (riscv_cpu_do_interrupt) is correct */
>>>       MemTxResult res;
>>>       MemTxAttrs attrs = MEMTXATTRS_UNSPECIFIED;
>>> -    int mode = mmu_idx & TB_FLAGS_PRIV_MMU_MASK;
>>> +    int mode = env->priv;
>>
>> This is incorrect.  You must map back from mmu_idx to priv.
>> Recall the semantics of MPRV.
>>
> The following code (~20 lines below) takes MPRV into consideration:
> 
>     if (riscv_cpu_two_stage_lookup(mmu_idx)) {
>         mode = get_field(env->hstatus, HSTATUS_SPVP);
>     } else if (mode == PRV_M && access_type != MMU_INST_FETCH) {
>         if (get_field(env->mstatus, MSTATUS_MPRV)) {
>             mode = get_field(env->mstatus, MSTATUS_MPP);
>         }
>     }
> 
> I think it's the same logic as the original one.
> 
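To make the MPRV point concrete, here is a standalone sketch of the effective privilege used for translation as described in the quoted code: start from the current privilege, and for data accesses made from M-mode with MSTATUS.MPRV set, translate as if running at MSTATUS.MPP (the two-stage HSTATUS.SPVP case is left out). The struct and fields below are simplified stand-ins, not the QEMU code:

    #include <stdbool.h>
    #include <stdio.h>

    #define PRV_U 0
    #define PRV_S 1
    #define PRV_M 3

    /* Simplified stand-in for the relevant CSR state. */
    struct toy_env {
        int priv;    /* current privilege level */
        bool mprv;   /* MSTATUS.MPRV */
        int mpp;     /* MSTATUS.MPP */
    };

    static int translation_priv(const struct toy_env *env, bool is_ifetch)
    {
        int mode = env->priv;            /* the patch's new starting point */

        /* MPRV only redirects loads/stores, never instruction fetches. */
        if (mode == PRV_M && !is_ifetch && env->mprv) {
            mode = env->mpp;
        }
        return mode;
    }

    int main(void)
    {
        struct toy_env env = { .priv = PRV_M, .mprv = true, .mpp = PRV_S };
        printf("data access translates at priv %d\n", translation_priv(&env, false)); /* 1 */
        printf("ifetch translates at priv %d\n", translation_priv(&env, true));       /* 3 */
        return 0;
    }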
>>> diff --git a/target/riscv/insn_trans/trans_xthead.c.inc b/target/riscv/insn_trans/trans_xthead.c.inc
>>> index df504c3f2c..adfb53cb4c 100644
>>> --- a/target/riscv/insn_trans/trans_xthead.c.inc
>>> +++ b/target/riscv/insn_trans/trans_xthead.c.inc
>>> @@ -265,12 +265,7 @@ static bool trans_th_tst(DisasContext *ctx, arg_th_tst *a)
>>>     static inline int priv_level(DisasContext *ctx)
>>>   {
>>> -#ifdef CONFIG_USER_ONLY
>>> -    return PRV_U;
>>> -#else
>>> -     /* Priv level is part of mem_idx. */
>>> -    return ctx->mem_idx & TB_FLAGS_PRIV_MMU_MASK;
>>> -#endif
>>> +    return ctx->priv;
>>>   }
>>
>> I guess we aren't expecting optimization to remove dead system code?
>> That would be the only reason to keep the ifdef.
>>
>> This function should be hoisted to translate.c, or simply replaced by
>> the field access.
>>
> I only want to replace mem_idx with priv here; Zhiwei can decide how to
> adjust the code further.
> 
Sorry if my English above isn't clear; what I mean is that I want to keep
this patch small.

Thanks,
Fei.

> Thanks,
> Fei.
> 
>>> @@ -1162,8 +1163,10 @@ static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
>>>       } else {
>>>           ctx->virt_enabled = false;
>>>       }
>>> +    ctx->priv = env->priv;
>>
>> Incorrect, as Zhiwei pointed out.
>> I gave you the changes required to TB_FLAGS...
>>
>>
>> r~
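The convention at issue here is that anything the generated code depends on has to be encoded in the tb_flags that key a translated block, so the translator should recover priv from those flags rather than read env->priv when the context is initialised. Below is a generic, hypothetical sketch of round-tripping a 2-bit priv field through a flags word; the bit position is invented and this is not the concrete TB_FLAGS change referred to above:

    #include <assert.h>
    #include <stdint.h>

    #define PRV_S 1

    #define PRIV_SHIFT 9                    /* hypothetical free bit position */
    #define PRIV_MASK  (3u << PRIV_SHIFT)   /* two bits cover PRV_U..PRV_M */

    static uint32_t flags_set_priv(uint32_t flags, unsigned priv)
    {
        return (flags & ~PRIV_MASK) | ((priv << PRIV_SHIFT) & PRIV_MASK);
    }

    static unsigned flags_get_priv(uint32_t flags)
    {
        return (flags & PRIV_MASK) >> PRIV_SHIFT;
    }

    int main(void)
    {
        /* Encoding side (what the tb_flags producer would do) ... */
        uint32_t tb_flags = flags_set_priv(0, PRV_S);
        /* ... decoding side: the translator reads the flags, not env. */
        assert(flags_get_priv(tb_flags) == PRV_S);
        return 0;
    }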
> 



