Re: [PATCH] target/riscv: reduce overhead of MSTATUS_SUM change


From: Wu, Fei
Subject: Re: [PATCH] target/riscv: reduce overhead of MSTATUS_SUM change
Date: Wed, 22 Mar 2023 15:04:31 +0800

On 3/22/2023 2:50 PM, LIU Zhiwei wrote:
> 
> On 2023/3/22 14:40, Wu, Fei wrote:
>> On 3/22/2023 11:36 AM, Wu, Fei wrote:
>>> On 3/22/2023 11:31 AM, Richard Henderson wrote:
>>>> On 3/21/23 19:47, Wu, Fei wrote:
>>>>>>> You should be making use of different softmmu indexes, similar
>>>>>>> to how ARM uses a separate index for PAN (privileged access
>>>>>>> never) mode.  If I read the manual properly, PAN == !SUM.
>>>>>>>
>>>>>>> When you do this, you need no additional flushing.
>>>>>> Hi Fei,
>>>>>>
>>>>>> Let's follow Richard's advice.
>>>>> Yes, I'm thinking about how to do it, and thanks to Richard for
>>>>> the advice.
>>>>> My question is:
>>>>> * If we ensure this separate index (S+SUM) has no overlapping tlb
>>>>> entries with S-mode (ignoring M-mode for now), then during SUM=1
>>>>> we have to look into both the (S+SUM) and S indexes for kernel
>>>>> address translation, which is not desirable.
>>>> This is an incorrect assumption.  S+SUM may very well have overlapping
>>>> tlb entries with S.
>>>> With SUM=1, you *only* look in S+SUM index; with SUM=0, you *only* look
>>>> in S index.
>>>>
>>>> The only difference is that the check in get_physical_address is no
>>>> longer against MSTATUS_SUM directly, but against the mmu_index.
>>>>
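
A minimal sketch of the scheme Richard describes, for orientation; the
index numbering and helper name here are illustrative, not taken from
the eventual patch:

    /* One extra softmmu index for "S-mode with SUM in effect";
     * the numbering is hypothetical. */
    #define MMUIdx_U      0
    #define MMUIdx_S      1
    #define MMUIdx_M      3
    #define MMUIdx_S_SUM  4   /* same translations as S, separate TLB */

    /* S-mode simply selects a different TLB when SUM=1, so flipping
     * SUM needs no flush -- the entries live in another index. */
    static int mmu_index_sketch(CPURISCVState *env)
    {
        if (env->priv == PRV_S && get_field(env->mstatus, MSTATUS_SUM)) {
            return MMUIdx_S_SUM;
        }
        return env->priv;    /* PRV_U == 0, PRV_S == 1, PRV_M == 3 */
    }

    /* ...and get_physical_address() derives the U-page permission from
     * the mmu index instead of reading MSTATUS_SUM: */
    bool sum = (mmu_idx == MMUIdx_S_SUM);
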
>>>>> * If all the tlb operations go against (S+SUM) during SUM=1, then
>>>>> (S+SUM) could contain duplicates of kernel-address tlb entries
>>>>> already in the S index; the duplication means extra tlb lookups
>>>>> and fills.
>>>> Yes, if the same address is probed via S and S+SUM, there is a
>>>> duplicated lookup.  But this is harmless.
>>>>
>>>>
>>>>> Also if we want
>>>>> to flush tlb entry of specific addr0, we have to flush both index.
>>>> Yes, this is also true.  But so far target/riscv is making no use of
>>>> per-mmuidx flushing. At the moment you're *only* using tlb_flush(cpu),
>>>> which flushes every mmuidx.  Nor are you making use of per-page
>>>> flushing.
>>>>
>>>> So, really, no change required at all there.
>>>>
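
Concretely, the payoff (a hedged sketch based on the thread, not the
actual patch): the flush this series wants to avoid is the one taken on
an mstatus write when SUM changes, and with a separate mmu index that
term can drop out of the flush condition entirely:

    /* write_mstatus() sketch: once SUM only selects the mmu index,
     * flipping it no longer invalidates cached translations, so it can
     * leave the flush mask.  The surrounding mask is illustrative. */
    if ((val ^ old_mstatus) & (MSTATUS_MPRV | MSTATUS_MXR
                               /* | MSTATUS_SUM -- no longer needed */)) {
        tlb_flush(env_cpu(env));
    }
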
>>> Got it, let me try this method.
>>>
>> There seems to be no room in flags for this extra index; all 3 bits
>> for mem_idx have been used in target/riscv/cpu.h. We need some trick.
>>
>> #define TB_FLAGS_PRIV_MMU_MASK                3
>> #define TB_FLAGS_PRIV_HYP_ACCESS_MASK   (1 << 2)
> 
> #define TB_FLAGS_PRIV_HYP_ACCESS_MASK   (1 << 3)
> 
> Renumber the new mmu index to 5 (probably by extending the function
> riscv_cpu_mmu_index).
> 
Currently mem_idx is also saved in flags (tb_flags), quoted below, which
only has 3 bits for mem_idx and can't expand.
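
For reference, a sketch of the packed layout being discussed,
reconstructed from the FIELD definitions quoted below; the collision
note is an inference, not stated outright in the thread:

    /* tb_flags:   bits 5 4 3 | 2 1 0
     *                [ LMUL ] [MEM_IDX]
     *
     * MEM_IDX is 3 bits wide, but bit 2 already doubles as
     * TB_FLAGS_PRIV_HYP_ACCESS_MASK, and bit 3 belongs to LMUL -- so
     * neither widening MEM_IDX nor moving the HYP bit to (1 << 3)
     * works without renumbering the fields above it.
     */
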

Thanks,
Fei.

> Zhiwei
> 
>> FIELD(TB_FLAGS, MEM_IDX, 0, 3)
>> FIELD(TB_FLAGS, LMUL, 3, 3)
>>
>> Thanks,
>> Fei.
>>
>>> Thanks,
>>> Fei.
>>>
>>>> r~
> 



