From: Samuel Thibault
Subject: Re: [PATCH hurd 28/30] libports: use protected payloads to optimize the object lookup
Date: Tue, 2 Dec 2014 01:51:36 +0100
User-agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)

Justus Winter, on Thu 27 Nov 2014 14:19:08 +0100, wrote:
> +      /* We are the only one updating generation, so this is safe.  */
> +      old = generation;
> +
> +      /* Update generation.  */
> +      __atomic_store_n (&generation, (old + 1) % 3, __ATOMIC_RELAXED);
> +
> +      /* This is the garbage generation.  As all writers are long
> +      gone, we do not need to bother with atomic operations.  */

It seems unsafe to me: "long" may not be long enough some day; you can
never know with a heavily loaded system. You may see the following:

- thread reads generation, gets preempted for a "long" time
- GC increases and handles generation
- GC increases and handles generation
- thread (at last!) gets to queue its thing
- GC increases and handles generation

The last two steps are then working on the same queue concurrently.
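
To make the window concrete, here is a sequential replay of that
interleaving. The names below (lists[], gc_pass(), and the garbage
index (old + 2) % 3) are my guesses at the patch's three-generation
scheme, not its actual code:

#include <stdio.h>

static int generation;                  /* cycles 0, 1, 2 */

static void gc_pass (void) {
        int old = generation;
        __atomic_store_n (&generation, (old + 1) % 3, __ATOMIC_RELAXED);
        /* Walk the generation all writers are assumed to have left
           (assumed to be (old + 2) % 3 here).  */
        printf ("GC walks lists[%d]\n", (old + 2) % 3);
}

int main (void) {
        int g = generation;             /* writer caches generation = 0 */
        gc_pass ();                     /* writer is preempted... */
        gc_pass ();                     /* ...this pass walks lists[0] */
        /* In a threaded run, the enqueue below can overlap that walk:
           the writer still believes lists[g] is safe to write to.  */
        printf ("writer enqueues onto lists[%d]\n", g);
        return 0;
}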

The GC is not on the critical path anyway, so we can afford an atomic
operation there. We can probably do away with the generation entirely,
and just do something like the following:

GC () {
        while(1) {
                sleep();
                /* Atomically detach the whole list.  */
                d = __atomic_load_n (&list, __ATOMIC_RELAXED);
        retry:
                if (! __atomic_compare_exchange_n (&list, &d, NULL,
                                                   0, __ATOMIC_RELAXED,
                                                   __ATOMIC_RELAXED))
                        goto retry;

                /* No writer can reach the detached items any more.  */
                while (d) {
                        /* ... process d ... */
                        next = d->next;
                        free(d);
                        d = next;
                }
        }
}

queue() {
        d = malloc();
        d->next = __atomic_load_n (&list, __ATOMIC_RELAXED);
retry:
        if (! __atomic_compare_exchange_n (&list, &d->next, d,
                                           0, __ATOMIC_RELAXED,
                                           __ATOMIC_RELAXED))
                goto retry;
}

Now that's still not enough: one very important thing is that the write
to d->next must be seen before the write to list. This always holds on
nice x86, but not on evil alpha & such, so it should rather be
something like this:

GC () {
        while(1) {
                sleep();
                d = __atomic_load_n (&list, __ATOMIC_RELAXED);
        retry:
                if (! __atomic_compare_exchange_n (&list, &d, NULL,
                                                   0, __ATOMIC_ACQUIRE,
                                                   __ATOMIC_RELAXED))
                        goto retry;

                while (d) {
                        /* ... process d ... */
                        next = d->next;
                        free(d);
                        d = next;
                }
        }
}

queue() {
        d = malloc();
        d->next = __atomic_load_n (&list, __ATOMIC_RELAXED);
retry:
        if (! __atomic_compare_exchange_n (&list, &d->next, d,
                                           0, __ATOMIC_RELEASE,
                                           __ATOMIC_RELAXED))
                goto retry;
}

The RELEASE in queue() makes sure that the write to d->next becomes
visible before the write to list, and the ACQUIRE in GC() makes sure
that, once the new head of list is observed, the read from d->next
sees the value that was written before it.
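
Fleshed out into a self-contained program, that would look like the
following. The item type, payload field, and pthread driver are mine
for illustration; only the atomics follow the sketch above:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct item {
        struct item *next;
        int payload;            /* stand-in for the real work item */
};

static struct item *list;      /* lock-free LIFO shared with the GC */

static void queue (int payload) {
        struct item *d = malloc (sizeof *d);
        d->payload = payload;
        d->next = __atomic_load_n (&list, __ATOMIC_RELAXED);
        /* RELEASE: d->next is visible before d becomes the head.  On
           failure the builtin refreshes d->next, so just retry.  */
        while (! __atomic_compare_exchange_n (&list, &d->next, d,
                                              0, __ATOMIC_RELEASE,
                                              __ATOMIC_RELAXED))
                ;
}

static void *gc (void *arg) {
        while (1) {
                sleep (1);
                struct item *d = __atomic_load_n (&list, __ATOMIC_RELAXED);
                /* ACQUIRE pairs with the RELEASE above: once the new
                   head is seen, each d->next written before it is
                   seen as well.  */
                while (! __atomic_compare_exchange_n (&list, &d, NULL,
                                                      0, __ATOMIC_ACQUIRE,
                                                      __ATOMIC_RELAXED))
                        ;
                while (d) {
                        struct item *next = d->next;
                        printf ("collected %d\n", d->payload);
                        free (d);
                        d = next;
                }
        }
        return arg;
}

int main (void) {
        pthread_t tid;
        pthread_create (&tid, NULL, gc, NULL);
        for (int i = 0; i < 10; i++)
                queue (i);
        sleep (2);              /* give the GC one pass to drain */
        return 0;
}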

Samuel


