From: Richard Henderson
Subject: Re: [PATCH v4 07/57] accel/tcg: Honor atomicity of stores
Date: Mon, 8 May 2023 11:11:23 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.10.0

On 5/5/23 10:28, Peter Maydell wrote:
On Wed, 3 May 2023 at 08:11, Richard Henderson
<richard.henderson@linaro.org> wrote:

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
  accel/tcg/cputlb.c             | 103 +++----
  accel/tcg/user-exec.c          |  12 +-
  accel/tcg/ldst_atomicity.c.inc | 491 +++++++++++++++++++++++++++++++++
  3 files changed, 540 insertions(+), 66 deletions(-)


+/**
+ * store_atom_insert_al16:
+ * @p: host address
+ * @val: shifted value to store
+ * @msk: mask for value to store
+ *
+ * Atomically store @val to @p masked by @msk.
+ */
+static void store_atom_insert_al16(Int128 *ps, Int128Alias val, Int128Alias msk)
+{
+#if defined(CONFIG_ATOMIC128)
+    __uint128_t *pu, old, new;
+
+    /* With CONFIG_ATOMIC128, we can avoid the memory barriers. */
+    pu = __builtin_assume_aligned(ps, 16);
+    old = *pu;
+    do {
+        new = (old & ~msk.u) | val.u;
+    } while (!__atomic_compare_exchange_n(pu, &old, new, true,
+                                          __ATOMIC_RELAXED, __ATOMIC_RELAXED));
+#elif defined(CONFIG_CMPXCHG128)
+    __uint128_t *pu, old, new;
+
+    /*
+     * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always
+     * defer to libatomic, so we must use __sync_val_compare_and_swap_16
+     * and accept the sequential consistency that comes with it.
+     */
+    pu = __builtin_assume_aligned(ps, 16);
+    do {
+        old = *pu;
+        new = (old & ~msk.u) | val.u;
+    } while (!__sync_bool_compare_and_swap_16(pu, old, new));

Comment says "__sync_val..." but code says "__sync_bool...". Which is right?

Fixed the comment to __sync_*_compare_and_swap_16


r~
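
For readers unfamiliar with the two builtins: they perform the same compare-and-swap and differ only in what they return (__sync_bool_* reports whether the swap happened, __sync_val_* returns the value actually found in memory), so either one can drive the retry loop above. A minimal illustrative sketch of both idioms on a plain uint64_t follows; it is not taken from the patch, and the function names are made up for the example.

/*
 * Illustrative sketch only, not patch code: both __sync CAS flavours
 * can implement the same masked-insert loop; they differ only in
 * their return value.
 */
#include <stdint.h>

static void insert_masked_bool(uint64_t *p, uint64_t val, uint64_t msk)
{
    uint64_t old, new;

    do {
        old = *p;                       /* re-read on each retry */
        new = (old & ~msk) | val;
    } while (!__sync_bool_compare_and_swap(p, old, new));
}

static void insert_masked_val(uint64_t *p, uint64_t val, uint64_t msk)
{
    uint64_t old = *p, cur, new;

    for (;;) {
        new = (old & ~msk) | val;
        cur = __sync_val_compare_and_swap(p, old, new);
        if (cur == old) {
            return;                     /* swap took effect */
        }
        old = cur;                      /* reuse the value we observed */
    }
}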
