Re: Concurrency, again
From: Ken Raeburn
Subject: Re: Concurrency, again
Date: Thu, 20 Oct 2016 02:08:14 -0400
On Oct 19, 2016, at 07:57, Eli Zaretskii <address@hidden> wrote:
>>> That should be easy: since a subprocess is locked to a single thread,
>>
>> by default, but if that thread exits, that lock disappears
>
> And the process gets locked to some other thread.
Not that I can see, unless you explicitly call set-process-thread;
update_processes_for_thread_death sets the process thread field to Qnil.
>
>>
>>> SIGCHLD should be delivered to that thread. If we don't have that
>>> already, we should add that, it doesn't sound hard, given the
>>> infrastructure we already have (deliver_thread_signal etc.).
>>
>> It’s not completely trivial. […]
>
> So you are saying this problem was never encountered before in any
> other program out there, and doesn't already have a solution? I find
> that hard to believe.
Not at all, just that it may take some work. Though I would expect that most
such programs we might go look at are committed to a multi-threaded design, and
aren’t written to support both multi-threaded and single-threaded environments.
>
>> On the other hand, perhaps we can create one special thread to do all the
>> waitpid() calls and pass info to the Lisp-running threads.
>
> Sounds like an unnecessary complication, but if that's how others solve
> this problem, so shall we.
I don’t know. Aside from checking a few man pages while writing the previous
email, I haven’t researched it. I think it’s probably the way I’d be most
inclined to do such a thing, if the ability to use such helper threads could be
assumed; it can’t be in this case.
I just took a quick look at some bits of a Thunderbird source tree. In the
Netscape Portable Runtime
(thunderbird-49.0b1/mozilla/nsprpub/pr/src/md/unix/uxproces.c) they’ve got a
“waitpid daemon thread”; whichever thread gets SIGCHLD delivered to it writes a
byte to a pipe to wake up that daemon thread, which then calls waitpid until
there are no more child process status updates to process. It updates global
(mutex-protected) data structures and optionally uses a condition variable to
notify some other thread of the process change.
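For illustration, the pattern described above can be sketched roughly like this
in POSIX C. This is not NSPR’s actual code; all names here are made up, and the
shared state is reduced to a single (pid, status) pair to keep the sketch short:

```c
/* Sketch of a "waitpid daemon thread": whichever thread receives
   SIGCHLD writes a byte to a pipe (write() is async-signal-safe;
   waitpid bookkeeping is not), and a dedicated thread wakes up,
   reaps all pending children, and notifies waiters.
   Hypothetical names throughout; not NSPR's implementation. */
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static int wakeup_pipe[2];                 /* [0] read end, [1] write end */
static pthread_mutex_t status_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t status_cond = PTHREAD_COND_INITIALIZER;
static pid_t last_pid;                     /* mutex-protected shared state */
static int last_status;

/* Runs in whatever thread the kernel picks; just poke the daemon.  */
static void
sigchld_handler (int sig)
{
  char byte = 0;
  (void) write (wakeup_pipe[1], &byte, 1);
}

/* The daemon thread: block on the pipe, then call waitpid until there
   are no more child status changes, updating shared data under the
   lock and broadcasting on the condition variable.  */
static void *
waitpid_daemon (void *arg)
{
  char buf[64];
  for (;;)
    {
      ssize_t n = read (wakeup_pipe[0], buf, sizeof buf);
      if (n < 0 && errno == EINTR)
        continue;                          /* interrupted by a signal */
      if (n <= 0)
        break;                             /* pipe closed; shut down */
      int status;
      pid_t pid;
      while ((pid = waitpid (-1, &status, WNOHANG)) > 0)
        {
          pthread_mutex_lock (&status_lock);
          last_pid = pid;
          last_status = status;
          pthread_cond_broadcast (&status_cond);
          pthread_mutex_unlock (&status_lock);
        }
    }
  return NULL;
}
```

A consumer thread would then lock `status_lock`, wait on `status_cond` until the
process it cares about shows up, and read the recorded status; the single-pair
state here would of course be a keyed table in anything real.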
>
>>>> It’s easy enough to disable stack overflow checking when enabling thread
>>>> support.
>>>
>>> Or add some simple code in the stack overflow handler to check if we
>>> are in the main thread, and if not, punt (i.e. crash).
>>>
>>>> If only one thread is allowed into the image processing code at a time
>>>> (i.e., don’t release the global lock for that code) then that’s probably
>>>> fine for now, and there’s probably other state there that different
>>>> threads shouldn’t be mucking around with in parallel.
>>>
>>> Redisplay runs in the main thread anyway, right? If so, there's no
>>> problem.
>>
>> If some random thread calls (redisplay) or (sit-for …)? I think it’ll run
>> in whichever Lisp-running thread triggers it. But, it’ll be the one holding
>> the lock.
>
> No, sit-for causes a thread switch.
I believe that’s when it’s checking for input, after calling redisplay, though.
I just tried a test with a breakpoint in redisplay_internal; it can get called
from threads other than the main thread. As far as I know, we can’t have
multiple threads in the redisplay code at the same time, though.
>
>>>> The keyboard.c one is the only one I’m a bit concerned about, in part
>>>> because I haven’t looked at it.
>>>
>>> What part(s) of keyboard.c, exactly?
>>
>> Anything looking at getcjmp; that means read_event_from_main_queue and
>> read_char. Like I said, I haven’t looked very closely; if the static
>> storage isn’t ever used across a point where the global lock could be
>> released to allow a thread switch, it may be fine.
>
> That should already be solved, or else threads cannot receive keyboard
> input safely.
I hope so. I haven’t convinced myself one way or the other yet, though.
Ken