
Re: RFC: Lightweight synchronization mechanism for gnumach v3


From: Samuel Thibault
Subject: Re: RFC: Lightweight synchronization mechanism for gnumach v3
Date: Tue, 26 Apr 2016 02:37:31 +0200
User-agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)

Hello,

Agustina Arzille, on Mon 25 Apr 2016 12:31:17 -0300, wrote:
> >>In my opinion, the low-level lock stuff should be added to glibc to
> >>replace most (all?) the spin locks.
> >Sure, all of the lock stuff would be extremely welcome!
> 
> OK. This, I think, needs to be in glibc, since that's where it is for other
> platforms.

Yes.

> I haven't patched anything in glibc yet, but I think the files:
> https://github.com/avarzille/hlpt/blob/master/pthread/lowlevellock.c and
> https://github.com/avarzille/hlpt/blob/master/pthread/lowlevellock.h
> 
> should be in the hurd/ directory? or maybe in mach/?

Since it depends on mach features only, it should be in mach.

> >>so feel free to pick what you feel is useful.
> >
> >I'd say the priority order would be:
> >
> >- pthread_spin_lock and mach's spinlock
> >- mach's mutex_lock (and thus libc_lock)
> >- pthread_mutex_lock & pthread_cond
> 
> Mmm. pthread_spin_lock* should be left as they are, since that best
> matches users' expectations of what a spin lock is.

Well, I wouldn't say so. Our current implementation does yield to other
threads, which is not what spin lock implementations usually do: they
really spin on the value, without making system calls, so as to acquire
the lock as fast as possible. Such locks are of course delicate to use:
you have to control where threads are running, otherwise you could be
spinning for a whole scheduling quantum.
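
For contrast, a "pure" spin lock looks something like the sketch below
(untested, in plain C11 atomics rather than glibc's internal
primitives; all the names here are made up):

#include <stdatomic.h>

/* Made-up example of a "pure" spin lock: it spins on the value with
   no system calls, so acquisition is fast when the holder runs on
   another CPU, but it can burn a whole quantum if the holder was
   preempted.  */
typedef atomic_int pure_spinlock_t;

static void
pure_spin_lock (pure_spinlock_t *lock)
{
  for (;;)
    {
      /* Test-and-test-and-set: read cheaply until the lock looks
         free, and only then attempt the atomic exchange.  */
      while (atomic_load_explicit (lock, memory_order_relaxed) != 0)
        ;  /* Busy-wait: no yield, no system call.  */
      if (atomic_exchange_explicit (lock, 1, memory_order_acquire) == 0)
        return;
    }
}

static void
pure_spin_unlock (pure_spinlock_t *lock)
{
  atomic_store_explicit (lock, 0, memory_order_release);
}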

It happens that the current use of spin locks from translators assumes
that the spin locks somehow yield: they really don't control where
threads are running.  These were converted as such from the cthreads
library, which does yield.  We should probably just turn them into
using mutexes, which should become very lightweight with gsync.  But
let's go step by step: for now we have to keep pthread_spin_lock
yielding somehow.  Yielding blindly, as is currently done, is not the
best way to achieve that, though, especially when we have the gsync
facility, which gets us exactly the optimized behavior with little
overhead.  Since we'll want to turn __spin_lock_solid (which is really
supposed to yield somehow) into using gsync anyway, that'll make our
current pthread_spin_lock implementation block with gsync, and get
better performance.
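
Concretely, the gsync-based pattern would look something like the
sketch below.  This is rough and untested: the gsync_wait/gsync_wake
signatures are written from memory of gnumach's gnumach.defs and
should be double-checked, and the lll_* helpers are just made-up names
following the usual futex-style 0/1/2 protocol:

#include <mach.h>          /* mach_task_self */
#include <stdatomic.h>

/* gsync_wait/gsync_wake are the gnumach RPCs; the MIG-generated
   prototypes are assumed here to be
     gsync_wait (task, addr, val1, val2, msec, flags)
     gsync_wake (task, addr, val, flags)
   where gsync_wait blocks only while *addr == val1.  */

/* 0 = free, 1 = locked, 2 = locked with possible waiters.  */
#define LLL_FREE    0
#define LLL_LOCKED  1
#define LLL_WAITERS 2

static void
lll_lock (atomic_uint *ptr)
{
  unsigned int expected = LLL_FREE;

  /* Fast path: uncontended acquisition, no system call at all.  */
  if (atomic_compare_exchange_strong_explicit
        (ptr, &expected, LLL_LOCKED,
         memory_order_acquire, memory_order_relaxed))
    return;

  /* Slow path: mark the lock as contended, then sleep in the kernel
     until the word changes, instead of yielding blindly.  */
  while (atomic_exchange_explicit (ptr, LLL_WAITERS,
                                   memory_order_acquire) != LLL_FREE)
    gsync_wait (mach_task_self (), (vm_offset_t) ptr,
                LLL_WAITERS, 0, 0, 0);
}

static void
lll_unlock (atomic_uint *ptr)
{
  /* Only pay for the wake RPC when someone may be sleeping.  */
  if (atomic_exchange_explicit (ptr, LLL_FREE,
                                memory_order_release) == LLL_WAITERS)
    gsync_wake (mach_task_self (), (vm_offset_t) ptr, 0, 0);
}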

We can turn translators into using mutexes and then fix
pthread_spin_lock into really spinning, but those are independent
steps.

> Regarding mach's mutex_lock, and therefore libc_lock, am I correct in assuming
> that they end up calling pthread functions once the lib gets linked in?

Yes, __mutex_lock_solid gets overridden by the libpthread version.

> If so, it should only be necessary to rewrite the latter.

That'd only fix the performance of multithreaded applications.
Non-multithreaded applications will still be using the non-threaded
__mutex_lock_solid, which calls __spin_lock_solid, which yields. But
again, I believe we want to fix that one into using gsync anyway.
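
The override itself happens at the symbol level: libc provides a
default definition which libpthread replaces at link time, through a
weak symbol or plain symbol interposition.  A toy, self-contained
illustration of the weak-symbol variant, with a placeholder body
rather than the real glibc code:

/* override-demo.c: sketch of one override mechanism (a weak default
   replaced by a strong definition).  The actual glibc/libpthread code
   differs; this only illustrates the linking trick.  */
#include <stdio.h>

/* libc's weak default: what single-threaded programs end up with.  */
__attribute__ ((weak)) void
__mutex_lock_solid (void *lock)
{
  puts ("libc fallback: spin (and yield) until the lock is free");
}

int
main (void)
{
  int lock = 0;
  __mutex_lock_solid (&lock);
  return 0;
}

/* Linking in another object that defines a strong __mutex_lock_solid,
   as libpthread does, silently replaces the weak default above.  */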

> >- pthread_rwlock
> 
> I'm half guessing here, but I think the reason this function gets
> called so often is because the horrible id->descriptor translation
> that is going on in libpthread.

You mean the lock protecting __pthread_threads?  That's only used on
thread creation and in pthread_self calls, which really don't happen
that often, so that's not really the problem.

> Still worth rewriting, because the algorithm in hlpt is very fast :)

We could integrate that optimization, yes. It's just not the most
pressing thing to fix for performance :)

Samuel



reply via email to

[Prev in Thread] Current Thread [Next in Thread]