qemu-devel

Re: [PATCH v4 06/11] monitor: release the lock before calling close()


From: Markus Armbruster
Subject: Re: [PATCH v4 06/11] monitor: release the lock before calling close()
Date: Mon, 06 Mar 2023 16:29:24 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

marcandre.lureau@redhat.com writes:

> From: Marc-André Lureau <marcandre.lureau@redhat.com>
>
> As per the comment, presumably to avoid a syscall in the critical section.
>
> Fixes: 0210c3b39bef08 ("monitor: Use LOCK_GUARD macros")
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> ---
>  monitor/fds.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/monitor/fds.c b/monitor/fds.c
> index 26b39a0ce6..7daf1064e1 100644
> --- a/monitor/fds.c
> +++ b/monitor/fds.c
> @@ -80,7 +80,8 @@ void qmp_getfd(const char *fdname, Error **errp)
>          return;
>      }
>  
> -    QEMU_LOCK_GUARD(&cur_mon->mon_lock);
> +    /* See close() call below. */
> +    qemu_mutex_lock(&cur_mon->mon_lock);
>      QLIST_FOREACH(monfd, &cur_mon->fds, next) {
>          if (strcmp(monfd->name, fdname) != 0) {
>              continue;
> @@ -88,6 +89,7 @@ void qmp_getfd(const char *fdname, Error **errp)
>  
>          tmp_fd = monfd->fd;
>          monfd->fd = fd;
> +        qemu_mutex_unlock(&cur_mon->mon_lock);
>          /* Make sure close() is outside critical section */
>          close(tmp_fd);
>          return;

Not changed by your patch, but odd: when no fd named @fdname exists, the
command does nothing silently.  Shouldn't it fail then?

> @@ -98,6 +100,7 @@ void qmp_getfd(const char *fdname, Error **errp)
>      monfd->fd = fd;
>  
>      QLIST_INSERT_HEAD(&cur_mon->fds, monfd, next);
> +    qemu_mutex_unlock(&cur_mon->mon_lock);
>  }
>  
>  void qmp_closefd(const char *fdname, Error **errp)

Alex suggested a different way to do this in reply to v3 of this patch.
Please have a look and reply there.



