From: Daniel Henrique Barboza
Subject: [PATCH v6 0/9] riscv: set vstart_eq_zero on mark_vs_dirty
Date: Wed, 21 Feb 2024 18:31:31 -0300

Hi,

This version adds two new patches:

- patch 5 eliminates the 'cpu_vl' global, and do_vsetvl() now loads 'vl'
  directly from env. This was suggested by Richard in the v5 review
  (a sketch of the pattern follows this list);

- patch 9 changes how the loops in the ldst helpers are done. This was
  also proposed by Richard, back in v2.
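
For reference, here is a minimal sketch of the kind of change patch 5
makes. This is illustrative only: get_vl() is a made-up helper name,
though tcg_gen_ld_tl(), tcg_env and the 'vl' field of CPURISCVState
are real:

    /* Load 'vl' straight from CPURISCVState at translation time,
     * instead of going through the removed 'cpu_vl' TCG global.
     * get_vl() is a hypothetical name used only for this sketch. */
    static TCGv get_vl(void)
    {
        TCGv vl = tcg_temp_new();
        tcg_gen_ld_tl(vl, tcg_env, offsetof(CPURISCVState, vl));
        return vl;
    }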

Patch 9 is not related to the bug being fixed here, but folding it in
avoids leaving any code suggestions behind. A rough sketch of the loop
shape in question follows this paragraph.
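
To make the vstart handling concrete, this is a simplified sketch of
the general loop shape in the ldst helpers. It is not the literal
patch; the names follow vector_helper.c conventions but are
abbreviated:

    /* Elements below env->vstart already completed before a trap, so
     * resume there. vstart is bumped per element so a fault mid-loop
     * resumes at the right place, and cleared once the whole access
     * has finished. */
    for (uint32_t i = env->vstart; i < env->vl; i++, env->vstart++) {
        for (uint32_t k = 0; k < nf; k++) {
            ldst_elem(env, adjust_addr(env, base + (i * nf + k) * esz),
                      i + k * max_elems, vd, ra);
        }
    }
    env->vstart = 0;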

Series based on alistair/riscv-to-apply.next. 

Patches missing acks/reviews: 5 and 9

Changes from v5:
- patch 5 (new): remove 'cpu_vl' global
- patch 9 (new): change the loop in ldst helpers
- v5 link: 
  https://lore.kernel.org/qemu-riscv/20240221022252.252872-1-dbarboza@ventanamicro.com/

Daniel Henrique Barboza (8):
  trans_rvv.c.inc: mark_vs_dirty() before loads and stores
  trans_rvv.c.inc: remove 'is_store' bool from load/store fns
  target/riscv: remove 'over' brconds from vector trans
  target/riscv/translate.c: remove 'cpu_vstart' global
  target/riscv: remove 'cpu_vl' global
  target/riscv/vector_helper.c: set vstart = 0 in GEN_VEXT_VSLIDEUP_VX()
  trans_rvv.c.inc: remove redundant mark_vs_dirty() calls
  target/riscv/vector_helper.c: optimize loops in ldst helpers

Ivan Klokov (1):
  target/riscv: Clear vstart_eq_zero flag

 target/riscv/insn_trans/trans_rvbf16.c.inc |  18 +-
 target/riscv/insn_trans/trans_rvv.c.inc    | 294 ++++++---------------
 target/riscv/insn_trans/trans_rvvk.c.inc   |  30 +--
 target/riscv/translate.c                   |  11 +-
 target/riscv/vector_helper.c               |   7 +-
 5 files changed, 104 insertions(+), 256 deletions(-)

-- 
2.43.2



