qemu-devel
From: Emanuele Giuseppe Esposito
Subject: Re: [PATCH v10 18/21] job.c: enable job lock/unlock and remove Aiocontext locks
Date: Tue, 16 Aug 2022 14:52:48 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.2.0

> 
>>   }
>> @@ -501,8 +481,12 @@ void job_unref_locked(Job *job)
>>           assert(!job->txn);
>>             if (job->driver->free) {
>> +            AioContext *aio_context = job->aio_context;
>>               job_unlock();
>> +            /* FIXME: aiocontext lock is required because cb calls blk_unref */
>> +            aio_context_acquire(aio_context);
>>               job->driver->free(job);
>> +            aio_context_release(aio_context);
> 
> So, job_unref_locked() must be called with the aio_context not locked,
> otherwise we deadlock here? That should be documented in the function
> declaration comment.
> 
> For example, in qemu-img.c in run_block_job() we do exactly that: call
> job_unref_locked() inside an aio-context lock critical section.

No, job_unref_locked also touches status, refcnt and all the other fields
that need to be protected. Only free must be called without the job lock held.

> 
> 
>>               job_lock();
>>           }
>>   @@ -581,21 +565,17 @@ void job_enter_cond_locked(Job *job,
>> bool(*fn)(Job *job))
>>           return;
>>       }
>>   -    real_job_lock();
>>       if (job->busy) {
>> -        real_job_unlock();
>>           return;
>>       }
>>         if (fn && !fn(job)) {
>> -        real_job_unlock();
>>           return;
>>       }
>>         assert(!job->deferred_to_main_loop);
>>       timer_del(&job->sleep_timer);
>>       job->busy = true;
>> -    real_job_unlock();
>>       job_unlock();
>>       aio_co_wake(job->co);
>>       job_lock();
>> @@ -626,13 +606,11 @@ static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
>>   {
>>       AioContext *next_aio_context;
>>   -    real_job_lock();
>>       if (ns != -1) {
>>           timer_mod(&job->sleep_timer, ns);
>>       }
>>       job->busy = false;
>>       job_event_idle_locked(job);
>> -    real_job_unlock();
>>       job_unlock();
>>       qemu_coroutine_yield();
>>       job_lock();
>> @@ -922,6 +900,7 @@ static void job_clean(Job *job)
>>   static int job_finalize_single_locked(Job *job)
>>   {
>>       int job_ret;
>> +    AioContext *ctx = job->aio_context;
>>         assert(job_is_completed_locked(job));
>>   @@ -929,6 +908,7 @@ static int job_finalize_single_locked(Job *job)
>>       job_update_rc_locked(job);
>>         job_unlock();
>> +    aio_context_acquire(ctx);
> 
> Hmm, so this function and all its callers should now be called with the
> aio-context lock not held?

Why not leave it as it is?
> 
> For example job_exit is scheduled as as BH. Aren't BHs called with
> aio-context lock held?

I see no aiocontext acquire call in aio_bh_schedule_oneshot and its callees...

So summing up, no, I don't think I will apply your suggestions for this
patch here (assume the opposite for all the others).

Emanuele
> 
>>         if (!job->ret) {
>>           job_commit(job);
>> @@ -937,6 +917,7 @@ static int job_finalize_single_locked(Job *job)
>>       }
>>       job_clean(job);
>>   +    aio_context_release(ctx);
>>       job_lock();
>>         if (job->cb) {
> 
> [..]
> 
> 
