From: Richard Henderson
Subject: Re: [PATCH 23/30] tcg/loongarch: Add softmmu load/store helpers, implement qemu_ld/qemu_st ops
Date: Mon, 20 Sep 2021 10:10:58 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0
On 9/20/21 1:04 AM, WANG Xuerui wrote:
+static void tcg_out_goto(TCGContext *s, const tcg_insn_unit *target)
+{
+ tcg_out_opc_b(s, 0);
+ bool ok = reloc_sd10k16(s->code_ptr - 1, target);
+ tcg_debug_assert(ok);
+}
Hmm. This is an existing bug in tcg/riscv/. We should have no asserts on relocations
being in range. We should always be able to tell our caller that the relocation failed,
and we'll try again with a smaller TB.
In this case, return the result of reloc_sd10k16 and ...
+
+ tcg_out_goto(s, l->raddr);
+ return true;
+}
... return the result of tcg_out_goto.
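Something like this, keeping the names from your patch (a sketch only):

static bool tcg_out_goto(TCGContext *s, const tcg_insn_unit *target)
{
    tcg_out_opc_b(s, 0);
    /* Report an out-of-range relocation to the caller instead of asserting. */
    return reloc_sd10k16(s->code_ptr - 1, target);
}

and then at the end of the slow path:

    return tcg_out_goto(s, l->raddr);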
+static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl, TCGMemOpIdx oi,
+ tcg_insn_unit **label_ptr, bool is_load)
+{
+ MemOp opc = get_memop(oi);
+ unsigned s_bits = opc & MO_SIZE;
+ unsigned a_bits = get_alignment_bits(opc);
+ tcg_target_long compare_mask;
+ int mem_index = get_mmuidx(oi);
+ int fast_ofs = TLB_MASK_TABLE_OFS(mem_index);
+ int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask);
+ int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table);
+ TCGReg mask_base = TCG_AREG0, table_base = TCG_AREG0;
+
+ tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP0, mask_base, mask_ofs);
+ tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, table_base, table_ofs);
I think we can eliminate the mask_base and table_base variables now. This dates from the
TCG_TARGET_TLB_DISPLACEMENT_BITS thing, where we would need to compute an intermediate
offset, and adjust these base registers.
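I.e. just (sketch):

    /* Load the TLB mask and table pointer directly off env.  */
    tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP0, TCG_AREG0, mask_ofs);
    tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP1, TCG_AREG0, table_ofs);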
+ /* Clear the non-page, non-alignment bits from the address. */
+ compare_mask = (tcg_target_long)TARGET_PAGE_MASK | ((1 << a_bits) - 1);
+ if (compare_mask == sextreg(compare_mask, 0, 12)) {
+ tcg_out_opc_andi(s, TCG_REG_TMP1, addrl, compare_mask);
LoongArch's andi takes an unsigned (zero-extended) immediate, not a signed one, so this
immediate case will never match on LoongArch.
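So drop the andi special case and always materialize the mask, e.g. (sketch, assuming the
else branch of this if already does the movi+and):

    /* compare_mask is negative, so it never fits andi's unsigned immediate.  */
    tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask);
    tcg_out_opc_and(s, TCG_REG_TMP1, TCG_REG_TMP1, addrl);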
+static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
+{
+ TCGMemOpIdx oi = l->oi;
+ MemOp opc = get_memop(oi);
+ TCGReg a0 = tcg_target_call_iarg_regs[0];
+ TCGReg a1 = tcg_target_call_iarg_regs[1];
+ TCGReg a2 = tcg_target_call_iarg_regs[2];
+ TCGReg a3 = tcg_target_call_iarg_regs[3];
Drop these, since you've already named TCG_REG_A0 etc.
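E.g. (sketch), just use the named registers in the body:

    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A0, TCG_AREG0);
    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A1, l->addrlo_reg);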
+
+ /* We don't support oversize guests */
+ if (TCG_TARGET_REG_BITS < TARGET_LONG_BITS) {
+ g_assert_not_reached();
+ }
This is redundant with TCG_TARGET_REG_BITS == 64.
+ tcg_out_call(s, qemu_ld_helpers[opc & MO_SSIZE]);
+ tcg_out_mov(s, (opc & MO_SIZE) == MO_64, l->datalo_reg, a0);
Because you have single-insn sign-extend instructions, it's better to always call the
unsigned load function, and then sign-extend here. See the aarch64 version.
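Sketch, modelled on the aarch64 version (assumes qemu_ld_helpers is trimmed to just the
zero-extending helpers, indexed by MO_SIZE):

    /* Always call the zero-extending helper...  */
    tcg_out_call(s, qemu_ld_helpers[opc & MO_SIZE]);

    /* ...then sign-extend the result ourselves where required.  */
    switch (opc & MO_SSIZE) {
    case MO_SB:
        tcg_out_ext8s(s, l->datalo_reg, TCG_REG_A0);
        break;
    case MO_SW:
        tcg_out_ext16s(s, l->datalo_reg, TCG_REG_A0);
        break;
    case MO_SL:
        tcg_out_ext32s(s, l->datalo_reg, TCG_REG_A0);
        break;
    default:
        tcg_out_mov(s, TCG_TYPE_REG, l->datalo_reg, TCG_REG_A0);
        break;
    }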
+ tcg_out_mov(s, TCG_TYPE_PTR, a2, l->datalo_reg);
+ switch (s_bits) {
+ case MO_8:
+ tcg_out_ext8u(s, a2, a2);
+ break;
+ case MO_16:
+ tcg_out_ext16u(s, a2, a2);
+ break;
+ default:
+ break;
+ }
Do you have a pointer to the LoongArch ABI? Do 32-bit values need to be sign- or
zero-extended in the call arguments?
Anyway, merge the move and extend.
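E.g. (sketch), extending straight from datalo_reg into the argument register:

    switch (s_bits) {
    case MO_8:
        tcg_out_ext8u(s, TCG_REG_A2, l->datalo_reg);
        break;
    case MO_16:
        tcg_out_ext16u(s, TCG_REG_A2, l->datalo_reg);
        break;
    default:
        tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_A2, l->datalo_reg);
        break;
    }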
+ tcg_out_movi(s, TCG_TYPE_PTR, a4, (tcg_target_long)l->raddr);
Oh, just FYI, this is another case where movi wants to handle pc-relative
addresses.
+ if (guest_base == 0) {
+ tcg_out_opc_add_d(s, base, addr_regl, TCG_REG_ZERO);
Don't add zero. RISCV bug that's been fixed recently.
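I.e. something like (sketch; assumes a guest-base register such as TCG_GUEST_BASE_REG is
set up in the prologue, as the riscv backend does):

    if (guest_base != 0) {
        tcg_out_opc_add_d(s, base, addr_regl, TCG_GUEST_BASE_REG);
    } else {
        /* No add needed; use the address register as the base directly.  */
        base = addr_regl;
    }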
+static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo,
+ TCGReg base, MemOp opc)
+{
+ /* Byte swapping is left to middle-end expansion. */
+ tcg_debug_assert((opc & MO_BSWAP) == 0);
+
+ switch (opc & (MO_SSIZE)) {
MO_SIZE here.
+static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
Shouldn't need is_64 argument for store. It's only relevant to load so that TCG_TYPE_I32
values are always sign-extended in the host register.
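I.e. (sketch of the signature and the call sites):

static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args)
{
    /* body unchanged; the operand size comes from get_memop(oi) */
}

    case INDEX_op_qemu_st_i32:
    case INDEX_op_qemu_st_i64:
        tcg_out_qemu_st(s, args);
        break;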
r~