bug-hurd
[Top][All Lists]
Advanced

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH 1/2] Implement basic sleeping locks for gnumach.


From: Agustina Arzille
Subject: Re: [PATCH 1/2] Implement basic sleeping locks for gnumach.
Date: Sun, 29 Jan 2017 16:46:45 -0300

On 2017-01-28 21:03, Samuel Thibault wrote:
Hello,

Agustina Arzille, on Thu 19 Jan 2017 10:00:32 -0500, wrote:
+/* Atomically compare *PTR with EXP and set it to NVAL if they're equal. + * Evaluates to a boolean, indicating whether the comparison was successful.*/
+#define atomic_cas(ptr, exp, nval)   \
+  ({   \
+     typeof(exp) __e = (exp);   \
+     __atomic_compare_exchange_n ((ptr), &__e, (nval), 0,   \
+       __ATOMIC_SEQ_CST, __ATOMIC_RELAXED);   \
+   })
+
+/* Atomically exchange the value of *PTR with VAL, evaluating to
+ * its previous value. */
+#define atomic_swap(ptr, val)   \
+  __atomic_exchange_n ((ptr), (val), __ATOMIC_SEQ_CST)
+
+/* Atomically store VAL in *PTR. */
+#define atomic_store(ptr, val)   \
+  __atomic_store_n ((ptr), (val), __ATOMIC_SEQ_CST)

I'd say suffix them with _seq, to express that they have SEQ ordering.
We may later want to define non-SEQ versions.

Actually, for best performance you'd already want to define both the
_acq (for mutex_(try?)lock) and the _rel versions (for mutex_unlock).


Alright, I'll add some more primitives with different memory orderings.
As a first approach, I wanted to err on safety first :)

+int kmutex_lock (struct kmutex *mtxp, int interruptible)
+{
+  if (atomic_cas (&mtxp->state, KMUTEX_AVAIL, KMUTEX_LOCKED))
+    /* Unowned mutex - We're done. */
+    return (0);
+
+  /* The mutex is locked. We may have to sleep. */
+  simple_lock (&mtxp->lock);
+  if (locked_swap (&mtxp->state, KMUTEX_CONTENDED) == KMUTEX_AVAIL)

But this will not be safe against others using atomic operations on
state. Think about:

T1 atomic_cas(), gets the mutex
T2 fails atomic_cas()
T2 simple_lock()
T1 atomic_cas() to release the mutex
T2 reads KMUTEX_AVAIL in locked_swap
T3 atomic_cas(), gets the mutex
T2 writes KMUTEX_CONTENDED, gets KMUTEX_AVAIL

now T2 believes it has the mutex, while it was given to T3.

Ahh, that's a good point. We can use atomic_swap in place of locked_swap.
Do you think that will be enough?

Don't use thread_sleep() as a whole: after failing to acquire simply,
assert_wait() the event, then use atomic_cas to try to put
KMUTEX_CONTENDED, and on success thread_block().

This I don't see. thread_sleep() calls assert_wait() before unlocking the interlock and blocking. I'm not sure how using assert_wait() would improve
things.

+void kmutex_unlock (struct kmutex *mtxp)
+{
+  if (atomic_cas (&mtxp->state, KMUTEX_LOCKED, KMUTEX_AVAIL))
+    /* No waiters - We're done. */
+    return;
+
+  simple_lock (&mtxp->lock);
+
+  if (!thread_wakeup_one ((event_t)mtxp))
+ /* The waiter was interrupted and left - Reset the mutex state. */
+    mtxp->state = KMUTEX_AVAIL;

And what about other threads which were waiting on the mutex?  We need
to make sure that at least one wakes up to get the mutex, otherwise
they'll stay asleep indefinitely.


I modified thread_wakeup_prim() so that it returns TRUE if it managed
to wake up at least a thread. So, if it returns false, we can be assert
that there were no waiters.

I will upload a new patch set shortly. I also changed it so that instead
of using 0 and -1 for success and failure, we use the KERN_* codes.

Thank you for your comments, Samuel :)



reply via email to

[Prev in Thread] Current Thread [Next in Thread]