From: Peter Xu
Subject: Re: [PATCH 2/4] migration: check for rate_limit_max for RATE_LIMIT_DISABLED
Date: Tue, 10 Oct 2023 16:17:42 -0400

On Thu, Sep 21, 2023 at 11:56:23PM -0700, Elena Ufimtseva wrote:
> In migration rate limiting, atomic operations are used
> to read the rate limit variables and the transferred bytes,
> and they are expensive. Check first whether rate_limit_max
> equals RATE_LIMIT_DISABLED and return false immediately if so.
> 
> Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

A couple of trivial comments:

> ---
>  migration/migration-stats.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/migration/migration-stats.c b/migration/migration-stats.c
> index 095d6d75bb..abc31483d5 100644
> --- a/migration/migration-stats.c
> +++ b/migration/migration-stats.c
> @@ -24,14 +24,14 @@ bool migration_rate_exceeded(QEMUFile *f)
>          return true;
>      }
>  
> -    uint64_t rate_limit_start = stat64_get(&mig_stats.rate_limit_start);
> -    uint64_t rate_limit_current = migration_transferred_bytes(f);
> -    uint64_t rate_limit_used = rate_limit_current - rate_limit_start;
>      uint64_t rate_limit_max = stat64_get(&mig_stats.rate_limit_max);

Side note: since we have a helper, this can be migration_rate_get() too.
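
For illustration, that declaration would then collapse to a single line (a
sketch only; the thread names the helper but not its body, so this assumes
migration_rate_get() reads back mig_stats.rate_limit_max):

    uint64_t rate_limit_max = migration_rate_get();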

> -
>      if (rate_limit_max == RATE_LIMIT_DISABLED) {
>          return false;
>      }

An empty line would be nice here.

> +    uint64_t rate_limit_start = stat64_get(&mig_stats.rate_limit_start);
> +    uint64_t rate_limit_current = migration_transferred_bytes(f);
> +    uint64_t rate_limit_used = rate_limit_current - rate_limit_start;
> +
>      if (rate_limit_max > 0 && rate_limit_used > rate_limit_max) {
>          return true;
>      }
> -- 
> 2.34.1
> 
> 
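
For reference, here is a sketch of how migration_rate_exceeded() might read
with both suggestions folded in. This is an illustration of the review
comments, not the committed code, and it assumes migration_rate_get() is the
existing helper returning stat64_get(&mig_stats.rate_limit_max):

bool migration_rate_exceeded(QEMUFile *f)
{
    if (qemu_file_get_error(f)) {
        return true;
    }

    /*
     * Cheap check first: when rate limiting is disabled, return
     * before doing the atomic reads below.
     */
    uint64_t rate_limit_max = migration_rate_get();

    if (rate_limit_max == RATE_LIMIT_DISABLED) {
        return false;
    }

    uint64_t rate_limit_start = stat64_get(&mig_stats.rate_limit_start);
    uint64_t rate_limit_current = migration_transferred_bytes(f);
    uint64_t rate_limit_used = rate_limit_current - rate_limit_start;

    if (rate_limit_max > 0 && rate_limit_used > rate_limit_max) {
        return true;
    }

    return false;
}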

-- 
Peter Xu



