Re: [PATCH v2 5/7] target/i386: Fix physical address truncation
From: Zhao Liu
Subject: Re: [PATCH v2 5/7] target/i386: Fix physical address truncation
Date: Mon, 26 Feb 2024 16:31:05 +0800
On Fri, Feb 23, 2024 at 02:09:46PM +0100, Paolo Bonzini wrote:
> Date: Fri, 23 Feb 2024 14:09:46 +0100
> From: Paolo Bonzini <pbonzini@redhat.com>
> Subject: [PATCH v2 5/7] target/i386: Fix physical address truncation
> X-Mailer: git-send-email 2.43.0
>
> The address translation logic in get_physical_address() will currently
> truncate physical addresses to 32 bits unless long mode is enabled.
> This is incorrect when using physical address extensions (PAE) outside
> of long mode, with the result that a 32-bit operating system using PAE
> to access memory above 4G will experience undefined behaviour.
>
> The truncation code was originally introduced in commit 33dfdb5 ("x86:
> only allow real mode to access 32bit without LMA"), where it applied
> only to translations performed while paging is disabled (and so cannot
> affect guests using PAE).
>
> Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
> rearranged the code such that the truncation also applied to the use
> of MMU_PHYS_IDX and MMU_NESTED_IDX. Commit 4a1e9d4 ("target/i386: Use
> atomic operations for pte updates") brought this truncation into scope
> for page table entry accesses, and is the first commit for which a
> Windows 10 32-bit guest will reliably fail to boot if memory above 4G
> is present.
>
> The truncation code however is not completely redundant. Even though the
> maximum address size for any executed instruction is 32 bits, helpers for
> operations such as BOUND, FSAVE or XSAVE may ask get_physical_address()
> to translate an address outside of the 32-bit range, if invoked with an
> argument that is close to the 4G boundary. Likewise for processor
> accesses, for example TSS or IDT accesses, when EFER.LMA==0.
>
> So, move the address truncation in get_physical_address() so that it
> applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and MMU_NESTED_IDX.
>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040
> Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18)
> Co-developed-by: Michael Brown <mcb30@ipxe.org>
> Signed-off-by: Michael Brown <mcb30@ipxe.org>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> target/i386/cpu.h | 6 ++++++
> target/i386/cpu.c | 2 +-
> target/i386/tcg/sysemu/excp_helper.c | 12 +++++-------
> 3 files changed, 12 insertions(+), 8 deletions(-)
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
>
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index ee4ad372021..8a165889de6 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -2326,6 +2326,12 @@ static inline bool is_mmu_index_user(int mmu_index)
> return (mmu_index & ~1) == MMU_USER64_IDX;
> }
>
> +static inline bool is_mmu_index_32(int mmu_index)
> +{
> + assert(mmu_index < MMU_PHYS_IDX);
> + return mmu_index & 1;
> +}
> +
> static inline int cpu_mmu_index_kernel(CPUX86State *env)
> {
> int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 1 : 0;
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 647371198c7..ba6d7b80a7f 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -7732,7 +7732,7 @@ static bool x86_cpu_has_work(CPUState *cs)
> return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
> }
>
> -static int x86_cpu_mmu_index(CPUState *env, bool ifetch)
> +static int x86_cpu_mmu_index(CPUState *cs, bool ifetch)
> {
> CPUX86State *env = cpu_env(cs);
> int mmu_index_32 = (env->hflags & HF_CS64_MASK) ? 1 : 0;
> diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c
> index b2c525e1a92..8bcdd2906d5 100644
> --- a/target/i386/tcg/sysemu/excp_helper.c
> +++ b/target/i386/tcg/sysemu/excp_helper.c
> @@ -558,6 +558,10 @@ static bool get_physical_address(CPUX86State *env, vaddr addr,
> break;
>
> default:
> + if (is_mmu_index_32(mmu_idx)) {
> + addr = (uint32_t)addr;
> + }
> +
> if (likely(env->cr[0] & CR0_PG_MASK)) {
> in.cr3 = env->cr[3];
> in.mmu_idx = mmu_idx;
> @@ -581,14 +585,8 @@ static bool get_physical_address(CPUX86State *env, vaddr addr,
> break;
> }
>
> - /* Translation disabled. */
> + /* No translation needed. */
> out->paddr = addr & x86_get_a20_mask(env);
> -#ifdef TARGET_X86_64
> - if (!(env->hflags & HF_LMA_MASK)) {
> - /* Without long mode we can only address 32bits in real mode */
> - out->paddr = (uint32_t)out->paddr;
> - }
> -#endif
> out->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
> out->page_size = TARGET_PAGE_SIZE;
> return true;
> --
> 2.43.0
>
>
- [PATCH v2 0/7] target/i386: Fix physical address masking bugs, Paolo Bonzini, 2024/02/23
- [PATCH v2 3/7] target/i386: introduce function to query MMU indices, Paolo Bonzini, 2024/02/23
- [PATCH v2 5/7] target/i386: Fix physical address truncation, Paolo Bonzini, 2024/02/23
- Re: [PATCH v2 5/7] target/i386: Fix physical address truncation, Zhao Liu <=
- [PATCH v2 2/7] target/i386: check validity of VMCB addresses, Paolo Bonzini, 2024/02/23
- [PATCH v2 7/7] target/i386: leave the A20 bit set in the final NPT walk, Paolo Bonzini, 2024/02/23
- [PATCH v2 4/7] target/i386: use separate MMU indexes for 32-bit accesses, Paolo Bonzini, 2024/02/23
- [PATCH v2 6/7] target/i386: remove unnecessary/wrong application of the A20 mask, Paolo Bonzini, 2024/02/23
- [PATCH v2 1/7] target/i386: mask high bits of CR3 in 32-bit mode, Paolo Bonzini, 2024/02/23
- Re: [PATCH v2 0/7] target/i386: Fix physical address masking bugs, Michael Brown, 2024/02/23