Re: [PATCH v15] kern: simple futex for gnumach


From: Justus Winter
Subject: Re: [PATCH v15] kern: simple futex for gnumach
Date: Wed, 15 Jan 2014 18:24:54 +0100
User-agent: alot/0.3.4

Quoting Marin Ramesa (2014-01-15 14:12:08)
> On 01/15/2014 10:56:25 AM, Justus Winter wrote:
> > > I tested a simple userspace mutex based on this and multiple
> > > futexes, all tests passed.
> > 
> > Could you put that somewhere? I still have so little "context", I'd
> > like to see how futexes are used in a mutex implementation.
> 
> Here's what I used:

Ok, I've digested 'futexes are tricky'.  So your code is basically
'mutex, take three'. Nice.

> #include "gnumach.h"
> #include <mach_init.h>
> #include <pthread.h>
> #include <stdio.h>
> 
> struct simple_mutex {
> 
>         #define SMUTEX_UNLOCKED 0
>         #define SMUTEX_NO_WAITERS 1
>         #define SMUTEX_WAITERS 2
> 
>         int value;
> 
> };
> 
> void simple_mutex_init(struct simple_mutex *smutex)
> {
>         smutex->value = SMUTEX_UNLOCKED;
> }
> 
> void simple_mutex_lock(struct simple_mutex *smutex)
> {
>         int c;
> 
>         if ((c = __sync_val_compare_and_swap(&smutex->value,  

__sync* is deprecated.  Also, disassemble this function and make sure
that the __atomic* function you use here is in fact a
compare-and-exchange operation.
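
For illustration only (untested), the acquire CAS could look roughly
like this with the __atomic builtins; __atomic_compare_exchange_n
writes the observed value back into 'expected' on failure, so 'c'
ends up holding the old value either way:

int expected = SMUTEX_UNLOCKED;
__atomic_compare_exchange_n(&smutex->value, &expected,
                            SMUTEX_NO_WAITERS, 0 /* strong */,
                            __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
c = expected;   /* old value; SMUTEX_UNLOCKED iff the swap succeeded */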

> SMUTEX_UNLOCKED, SMUTEX_NO_WAITERS)) != SMUTEX_UNLOCKED) {

(Note how your mail client introduced a line break here.  If you paste
code into your mail client, please make sure it does not re-wrap the
lines.)

>                 if (c != SMUTEX_WAITERS)
>                         c = __sync_lock_test_and_set(&smutex->value,
>                                                      SMUTEX_WAITERS);

Likewise.  And check that this is an exchange operation.
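
For what it's worth, __atomic_exchange_n is documented as a plain
atomic exchange, while __sync_lock_test_and_set is only an acquire
barrier and on some targets supports storing nothing but the constant
1.  An (untested) replacement could be:

c = __atomic_exchange_n(&smutex->value, SMUTEX_WAITERS, __ATOMIC_ACQUIRE);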

>                 while (c != SMUTEX_UNLOCKED) {
>                         futex_wait(mach_task_self(),
>                                    (vm_offset_t)&smutex->value,
>                                    SMUTEX_WAITERS, 0, 1);
>                         c = __sync_lock_test_and_set(&smutex->value,
>                                                      SMUTEX_WAITERS);
>                 }
>         }
> }
> 
> void simple_mutex_unlock(struct simple_mutex *smutex)
> {
>         if (__sync_lock_test_and_set(&smutex->value, smutex->value - 1)
>             != SMUTEX_NO_WAITERS) {

This is incorrect. You need an atomic decrement here.
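
A rough sketch of what the unlock path could look like with an atomic
decrement, mirroring the structure above ('mutex, take three' from the
paper; untested, futex_wake arguments as in your test program):

void simple_mutex_unlock(struct simple_mutex *smutex)
{
        /* Atomically decrement; the previous value tells us whether
           there may be waiters that need to be woken up.  */
        if (__atomic_fetch_sub(&smutex->value, 1, __ATOMIC_RELEASE)
            != SMUTEX_NO_WAITERS) {
                __atomic_store_n(&smutex->value, SMUTEX_UNLOCKED,
                                 __ATOMIC_RELEASE);
                futex_wake(mach_task_self(),
                           (vm_offset_t)&smutex->value, 0, 1);
        }
}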

>                 smutex->value = SMUTEX_UNLOCKED;
>                 futex_wake(mach_task_self(),
>                            (vm_offset_t)&smutex->value, 0, 1);
>         }
> }
> 
> int e = 0;
> struct simple_mutex mutex;
> 
> void *thread_f(void *arg)
> {
>         printf("Thread 2 locking the mutex.\n");
>         simple_mutex_lock(&mutex);
>         printf("Thread 2 in critical section.\n");
>         printf("Thread 2 about to sleep.\n");
>         futex_wait(mach_task_self(), (vm_offset_t)&e, e, 1000, 1);
>         printf("Thread 2 waking up.\n");
>         simple_mutex_unlock(&mutex);
>         printf("Thread 2 unlocked the mutex.\n");
>         return NULL;
> }
> 
> int main()
> {
>         simple_mutex_init(&mutex);
> 
>         pthread_t new_thread;
>         pthread_attr_t attr;
> 
>         pthread_attr_init(&attr);
> 
>         pthread_create(&new_thread, &attr, thread_f, NULL);
> 
>         futex_wait(mach_task_self(), (vm_offset_t)&e, e, 1000, 1);
> 
>         printf("Thread 1 trying to lock the mutex.\n");
>         simple_mutex_lock(&mutex);
>         printf("Thread 1 in critical section.\n");
>         simple_mutex_unlock(&mutex);
>         printf("Thread 1 unlocked the mutex.\n");
> 
>         return 0;
> }

Ok, I think you should do the following:

1. Patch the Hurd's libpthread to use your futexes.  Then use it to
   test your implementation using more complex programs.  Richard's
   libpthread test suite of choice seems to be Iceweasel.

2. Consider the other use cases of futexes besides implementing a
   simple mutex.  The paper outlines a few of them.  Also, the Linux
   futex implementation seems to expose a richer interface, most
   likely for this purpose.

3. Futexes should work across process boundaries so that one can build
   inter-process synchronization mechanisms with them.  The paper
   talks extensively about how this is implemented (the kernel
   implementation on Linux seems to use physical addresses to that
   end).  Verify that your implementation allows that as well; see
   the sketch below this list.
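
For illustration only, a cross-process smoke test could look roughly
like the sketch below.  This is untested, and the futex_wait/futex_wake
argument meanings are simply guessed from your test program above:

#define _GNU_SOURCE
#include "gnumach.h"
#include <mach_init.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
        /* One int shared between parent and child via an anonymous
           shared mapping.  */
        int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *shared = 0;

        pid_t pid = fork();
        if (pid == 0) {
                printf("Child waiting on the shared futex.\n");
                futex_wait(mach_task_self(), (vm_offset_t)shared, 0, 0, 1);
                printf("Child woken up, value is %d.\n", *shared);
                return 0;
        }

        sleep(1);
        *shared = 1;
        printf("Parent waking the child.\n");
        futex_wake(mach_task_self(), (vm_offset_t)shared, 0, 1);
        waitpid(pid, NULL, 0);
        return 0;
}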

Cheers,
Justus


