On 1/16/24 09:25, Daniel Henrique Barboza wrote:
Use the new 'vlenb' CPU config to validate fractional LMUL. The original
comparison is done with 'vlen' and 'sew', both in bits. Adjust the shift
to use vlenb.
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
---
target/riscv/vector_helper.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index cb944229b0..9e3ae4b5d3 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -45,9 +45,13 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
                                         xlen - 1 - R_VTYPE_RESERVED_SHIFT);
if (lmul & 4) {
- /* Fractional LMUL - check LMUL * VLEN >= SEW */
+ /*
+ * Fractional LMUL: check VLEN * LMUL >= SEW,
+ * or VLEN * (8 - lmul) >= SEW. Using VLENB we
+ * need 3 less shifts rights.
The last sentence is structured oddly. Perhaps
Using VLENB, we decrease the right shift by 3
or perhaps just show the expansion:
/*
* Fractional LMUL, check
*
* VLEN * LMUL >= SEW
* VLEN >> (8 - lmul) >= sew
* (vlenb << 3) >> (8 - lmul) >= sew
* vlenb >> (8 - 3 - lmul) >= sew
*/