From: Markus Armbruster
Subject: Re: [PATCH 5/6] hmp: convert handle_hmp_command() to AIO_WAIT_WHILE_UNLOCKED()
Date: Thu, 02 Mar 2023 17:25:41 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

Stefan Hajnoczi <stefanha@redhat.com> writes:

> On Thu, Mar 02, 2023 at 04:02:22PM +0100, Markus Armbruster wrote:
>> Stefan Hajnoczi <stefanha@redhat.com> writes:
>> 
>> > On Thu, Mar 02, 2023 at 08:17:43AM +0100, Markus Armbruster wrote:
>> >> Stefan Hajnoczi <stefanha@redhat.com> writes:
>> >> 
>> >> > The HMP monitor runs in the main loop thread. Calling
>> >> 
>> >> Correct.
>> >> 
>> >> > AIO_WAIT_WHILE(qemu_get_aio_context(), ...) from the main loop thread is
>> >> > equivalent to AIO_WAIT_WHILE_UNLOCKED(NULL, ...) because neither unlocks
>> >> > the AioContext and the latter's assertion that we're in the main loop
>> >> > succeeds.
>> >> >
>> >> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
>> >> > ---
>> >> >  monitor/hmp.c | 2 +-
>> >> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >> >
>> >> > diff --git a/monitor/hmp.c b/monitor/hmp.c
>> >> > index 2aa85d3982..5ecbdac802 100644
>> >> > --- a/monitor/hmp.c
>> >> > +++ b/monitor/hmp.c
>> >> > @@ -1167,7 +1167,7 @@ void handle_hmp_command(MonitorHMP *mon, const char *cmdline)
>> >> >          Coroutine *co = qemu_coroutine_create(handle_hmp_command_co, &data);
>> >> >          monitor_set_cur(co, &mon->common);
>> >> >          aio_co_enter(qemu_get_aio_context(), co);
>> >> > -        AIO_WAIT_WHILE(qemu_get_aio_context(), !data.done);
>> >> > +        AIO_WAIT_WHILE_UNLOCKED(NULL, !data.done);
>> >> >      }
>> >> >  
>> >> >      qobject_unref(qdict);
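
For context, the hunk touches the usual "hand work to a coroutine, then wait for a done flag" idiom. A condensed, hypothetical sketch (simplified names, not the actual monitor/hmp.c code; the relevant headers are qemu/coroutine.h, block/aio.h and block/aio-wait.h):

    typedef struct {
        bool done;
    } WaitData;                               /* stands in for the real per-command data */

    static void coroutine_fn command_co(void *opaque)
    {
        WaitData *data = opaque;
        /* ... run the HMP command ... */
        data->done = true;
        aio_wait_kick();                      /* wake any AIO_WAIT_WHILE*() waiter */
    }

    static void run_and_wait(void)
    {
        WaitData data = { .done = false };
        Coroutine *co = qemu_coroutine_create(command_co, &data);

        aio_co_enter(qemu_get_aio_context(), co);
        /* Main loop thread: poll the main AioContext until the coroutine
         * signals completion; no AioContext lock is taken or released. */
        AIO_WAIT_WHILE_UNLOCKED(NULL, !data.done);
    }
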
>> >> 
>> >> Acked-by: Markus Armbruster <armbru@redhat.com>
>> >> 
>> >> For an R-by, I need to understand this in more detail.  I'm not familiar
>> >> with the innards of AIO_WAIT_WHILE() & friends, so I need to go real
>> >> slow.
>> >> 
>> >> We change
>> >> 
>> >>     ctx from qemu_get_aio_context() to NULL
>> >>     unlock from true to false
>> >> 
>> >> in
>> >> 
>> >>     bool waited_ = false;                                          \
>> >>     AioWait *wait_ = &global_aio_wait;                             \
>> >>     AioContext *ctx_ = (ctx);                                      \
>> >>     /* Increment wait_->num_waiters before evaluating cond. */     \
>> >>     qatomic_inc(&wait_->num_waiters);                              \
>> >>     /* Paired with smp_mb in aio_wait_kick(). */                   \
>> >>     smp_mb();                                                      \
>> >>     if (ctx_ && in_aio_context_home_thread(ctx_)) {                \
>> >>         while ((cond)) {                                           \
>> >>             aio_poll(ctx_, true);                                  \
>> >>             waited_ = true;                                        \
>> >>         }                                                          \
>> >>     } else {                                                       \
>> >>         assert(qemu_get_current_aio_context() ==                   \
>> >>                qemu_get_aio_context());                            \
>> >>         while ((cond)) {                                           \
>> >>             if (unlock && ctx_) {                                  \
>> >>                 aio_context_release(ctx_);                         \
>> >>             }                                                      \
>> >>             aio_poll(qemu_get_aio_context(), true);                \
>> >>             if (unlock && ctx_) {                                  \
>> >>                 aio_context_acquire(ctx_);                         \
>> >>             }                                                      \
>> >>             waited_ = true;                                        \
>> >>         }                                                          \
>> >>     }                                                              \
>> >>     qatomic_dec(&wait_->num_waiters);                              \
>> >>     waited_; })
>> >> 
>> >> qemu_get_aio_context() is non-null here, correct?
>> >
>> > qemu_get_aio_context() always returns the main loop thread's AioContext.
>> 
>> So it's non-null.
>
> Yes. Sorry, I should have answered directly :).
>
>> > qemu_get_current_aio_context() returns the AioContext that was most
>> > recently set in the my_aiocontext thread-local variable for IOThreads,
>> > the main loop's AioContext for BQL threads, or NULL for threads
>> > that don't use AioContext at all.
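
A simplified sketch of the behaviour described here, for reference (the real function lives in util/async.c; get_my_aiocontext() stands for the thread-local lookup and is illustrative):

    AioContext *qemu_get_current_aio_context(void)
    {
        AioContext *ctx = get_my_aiocontext();  /* thread-local, set by IOThreads */

        if (ctx) {
            return ctx;                         /* IOThread: its own AioContext */
        }
        if (qemu_mutex_iothread_locked()) {
            return qemu_get_aio_context();      /* BQL thread: main loop AioContext */
        }
        return NULL;                            /* thread that doesn't use AioContext */
    }
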
>> >
>> >> What's the value of in_aio_context_home_thread(qemu_get_aio_context())?
>> >
>> > This function checks whether the given AioContext is associated with
>> > this thread. In a BQL thread it returns true if the context is the main
>> > loop's AioContext. In an IOThread it returns true if the context is the
>> > IOThread's AioContext. Otherwise it returns false.
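
Again as a simplified sketch of that description (the real helper is a static inline in include/block/aio.h; details may differ):

    static inline bool in_aio_context_home_thread(AioContext *ctx)
    {
        if (ctx == qemu_get_current_aio_context()) {
            /* ctx is this thread's own context: the IOThread's own ctx,
             * or the main loop ctx when called from a BQL thread. */
            return true;
        }
        return false;
    }
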
>> 
>> I guess that means in_aio_context_home_thread(qemu_get_aio_context()) is
>> true in the main thread.
>> 
>
> Yes.
>
>> Before the patch, the if's condition is true, and we execute
>> 
>>            while ((cond)) {                                           \
>>                aio_poll(ctx_, true);                                  \
>>                waited_ = true;                                        \
>>            }                                                          \
>> 
>> Afterwards, it's false, and we instead execute
>> 
>>            assert(qemu_get_current_aio_context() ==                   \
>>                   qemu_get_aio_context());                            \
>>            while ((cond)) {                                           \
>>                if (unlock && ctx_) {                                  \
>>                    aio_context_release(ctx_);                         \
>>                }                                                      \
>>                aio_poll(qemu_get_aio_context(), true);                \
>>                if (unlock && ctx_) {                                  \
>>                    aio_context_acquire(ctx_);                         \
>>                }                                                      \
>>                waited_ = true;                                        \
>>            }                                                          \
>> 
>> The assertion is true: both operands of == are the main loop's
>> AioContext.
>
> Yes.
>
>> The if conditions are false, because unlock is.
>> 
>> Therefore, we execute the exact same code.
>> 
>> All correct?
>
> Yes, exactly.

Thank you!

Reviewed-by: Markus Armbruster <armbru@redhat.com>
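
For reference, substituting the two argument sets into the macro quoted above shows the equivalence directly. A condensed sketch of what each call boils down to in the main loop thread (the identical num_waiters bookkeeping is omitted):

    /* Before: AIO_WAIT_WHILE(qemu_get_aio_context(), !data.done)
     * ctx_ is the main loop context and in_aio_context_home_thread(ctx_)
     * is true there, so the if-branch runs: */
    while (!data.done) {
        aio_poll(qemu_get_aio_context(), true);
    }

    /* After: AIO_WAIT_WHILE_UNLOCKED(NULL, !data.done)
     * ctx_ is NULL, so the else-branch runs; its assertion holds in the
     * main loop thread, and with unlock false and ctx_ NULL the
     * release/acquire branches never execute: */
    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
    while (!data.done) {
        aio_poll(qemu_get_aio_context(), true);
    }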



