From: Fabiano Rosas
Subject: Re: [PATCH RFC 6/7] migration: Split multifd pending_job into two booleans
Date: Mon, 23 Oct 2023 12:15:49 -0300

Peter Xu <peterx@redhat.com> writes:

> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/multifd.h | 16 ++++++++++------
>  migration/multifd.c | 33 +++++++++++++++++++++++----------
>  2 files changed, 33 insertions(+), 16 deletions(-)
>
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 2acf400085..ddee7b8d8a 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -101,12 +101,16 @@ typedef struct {
>      uint32_t flags;
>      /* global number of generated multifd packets */
>      uint64_t packet_num;
> -    /* thread has work to do */
> -    int pending_job;
> -    /* array of pages to sent.
> -     * The owner of 'pages' depends of 'pending_job' value:
> -     * pending_job == 0 -> migration_thread can use it.
> -     * pending_job != 0 -> multifd_channel can use it.
> +    /* thread has a request to sync all data */
> +    bool pending_sync;
> +    /* thread has something to send */
> +    bool pending_job;
> +    /*
> +     * Array of pages to send. The owner of 'pages' depends on
> +     * the 'pending_job' value:
> +     *
> +     *   - true -> multifd_channel owns it.
> +     *   - false -> migration_thread owns it.
>       */
>      MultiFDPages_t *pages;
>  
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 8140520843..fe8d746ff9 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -425,7 +425,7 @@ static int multifd_send_pages(QEMUFile *f)
>          p = &multifd_send_state->params[i];
>          qemu_mutex_lock(&p->mutex);
>          if (!p->pending_job) {
> -            p->pending_job++;
> +            p->pending_job = true;
>              next_channel = (i + 1) % migrate_multifd_channels();
>              break;
>          }
> @@ -615,8 +615,7 @@ int multifd_send_sync_main(QEMUFile *f)
>  
>          qemu_mutex_lock(&p->mutex);
>          p->packet_num = multifd_send_state->packet_num++;
> -        p->flags |= MULTIFD_FLAG_SYNC;
> -        p->pending_job++;
> +        p->pending_sync = true;
>          qemu_mutex_unlock(&p->mutex);
>          qemu_sem_post(&p->sem);
>      }
> @@ -747,8 +746,6 @@ static void *multifd_send_thread(void *opaque)
>  
>          qemu_mutex_lock(&p->mutex);
>          if (p->pending_job) {
> -            bool need_sync = p->flags & MULTIFD_FLAG_SYNC;
> -
>              if (!multifd_send_prepare(p, &local_err)) {
>                  assert(local_err);
>                  qemu_mutex_unlock(&p->mutex);
> @@ -764,12 +761,27 @@ static void *multifd_send_thread(void *opaque)
>              qemu_mutex_lock(&p->mutex);
>  
>              /* Send successful, mark the task completed */
> -            p->pending_job--;
> +            p->pending_job = false;
> +
> +        } else if (p->pending_sync) {

Is your intention here to stop sending the SYNC along with the pages?
With this structure, the thread would have to loop once more to send
the sync after it finishes the pending job.



