qemu-devel

From: Dr. David Alan Gilbert
Subject: Re: [PATCH 05/14] migration: Yield bitmap_mutex properly when sending/sleeping
Date: Wed, 5 Oct 2022 12:18:00 +0100
User-agent: Mutt/2.2.7 (2022-08-07)

* Peter Xu (peterx@redhat.com) wrote:
> On Tue, Oct 04, 2022 at 02:55:10PM +0100, Dr. David Alan Gilbert wrote:
> > * Peter Xu (peterx@redhat.com) wrote:
> > > Don't take the bitmap mutex when sending pages, or when being throttled
> > > by migration_rate_limit() (which is a bit tricky to call here in ram
> > > code, but seems still helpful).
> > > 
> > > It prepares for the possibility of concurrently sending pages in >1
> > > threads using the function ram_save_host_page(), where all threads may
> > > need the bitmap_mutex to operate on the bitmaps; yielding it ensures
> > > that sendmsg() or any kind of qemu_sem_wait() blocking in one thread
> > > will not block the others from progressing.
> > > 
> > > Signed-off-by: Peter Xu <peterx@redhat.com>
> > 
> > I generally don't like taking locks conditionally; but this kind of looks
> > OK; I think it needs a big comment at the start of the function saying
> > that it's called and left with the lock held but that it might drop it
> > temporarily.
> 
> Right, the code is slightly hard to read; I just haven't seen a good and
> easy solution for it yet.  It's just that we may still want to keep the
> lock held as long as possible for precopy in one shot.
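
For illustration, the contract being discussed (enter and leave with the lock
held, but temporarily drop it around the slow send path when preempt mode is
active) boils down to something like this minimal pthread sketch; the names
are hypothetical stand-ins, not the actual QEMU code:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;
    static bool preempt_active;              /* stand-in for postcopy_preempt_active() */

    static bool clear_dirty_bit(long page) { return page % 2 == 0; }  /* fake bitmap op */
    static void send_page(long page) { /* stand-in for the expensive sendmsg() path */ }

    /*
     * Called with bitmap_mutex held and returns with it held, but may
     * temporarily drop it around the send so that another thread that
     * needs the bitmap (e.g. the return-path thread) can make progress.
     */
    static void save_one_page(long page)
    {
        bool page_dirty = clear_dirty_bit(page);   /* bitmap op: lock must be held */

        if (preempt_active) {
            pthread_mutex_unlock(&bitmap_mutex);   /* yield across the slow path */
        }
        if (page_dirty) {
            send_page(page);
        }
        if (preempt_active) {
            pthread_mutex_lock(&bitmap_mutex);     /* re-take before touching bitmaps */
        }
    }
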
> 
> > 
> > > ---
> > >  migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
> > >  1 file changed, 31 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/migration/ram.c b/migration/ram.c
> > > index 8303252b6d..6e7de6087a 100644
> > > --- a/migration/ram.c
> > > +++ b/migration/ram.c
> > > @@ -2463,6 +2463,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
> > >   */
> > >  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > >  {
> > > +    bool page_dirty, release_lock = postcopy_preempt_active();
> > 
> > Could you rename that to something like 'drop_lock'? You are taking the
> > lock at the end even when you have 'release_lock' set, which makes the
> > naming a bit strange.
> 
> Is there any difference between "drop" and "release"?  I'll change the name
> anyway since I definitely trust you on any English comments, but please
> still let me know - I'd love to learn more about those! :)

I'm not sure 'drop' is much better either; I was struggling to find a
good name.

> > 
> > >      int tmppages, pages = 0;
> > >      size_t pagesize_bits =
> > >          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> > > @@ -2486,22 +2487,41 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > >              break;
> > >          }
> > >  
> > > +        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
> > > +        /*
> > > +         * Properly yield the lock only in postcopy preempt mode because
> > > +         * both migration thread and rp-return thread can operate on the
> > > +         * bitmaps.
> > > +         */
> > > +        if (release_lock) {
> > > +            qemu_mutex_unlock(&rs->bitmap_mutex);
> > > +        }
> > 
> > Shouldn't the unlock/lock move inside the 'if (page_dirty) {' ?
> 
> I think we can move it inside, but it may not be as optimal as keeping it
> as-is.
> 
> Consider a case where we've got a bitmap with continuous zero bits.
> During postcopy, the migration thread could be spinning here with the lock
> held even if it doesn't send a thing.  It could still block the other
> return path thread from sending urgent pages which may be outside the zero
> zones.

OK, that reason needs commenting then - you're going to do a lot of
release/take pairs in that case, which is going to show up as very hot;
so hmm, if it was just for that type of 'yield' behaviour you wouldn't
normally do it for each bit.
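
To make that point concrete: one common way to keep the yield over long zero
runs without paying an unlock/lock pair on every clear bit is to yield only
every N iterations. A minimal pthread sketch of that pattern follows;
YIELD_EVERY and all names here are hypothetical, not from the patch:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;
    static void send_page(size_t page) { /* expensive send, done with lock dropped */ }

    enum { YIELD_EVERY = 256 };   /* arbitrary; tune against lock-churn cost */

    static void scan_and_send(bool *dirty, size_t npages)
    {
        pthread_mutex_lock(&bitmap_mutex);
        for (size_t i = 0; i < npages; i++) {
            bool was_dirty = dirty[i];
            dirty[i] = false;                        /* bitmap update under the lock */

            if (was_dirty) {
                pthread_mutex_unlock(&bitmap_mutex);
                send_page(i);                        /* slow path without the lock */
                pthread_mutex_lock(&bitmap_mutex);
            } else if (i % YIELD_EVERY == 0) {
                /* Periodic yield across runs of clear bits so a waiter is
                 * not starved, without an unlock/lock pair per bit. */
                pthread_mutex_unlock(&bitmap_mutex);
                pthread_mutex_lock(&bitmap_mutex);
            }
        }
        pthread_mutex_unlock(&bitmap_mutex);
    }

Note that a bare unlock/lock pair carries no fairness guarantee on every
mutex implementation; the interval is what keeps the churn off the profile.
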

> > 
> > 
> > >          /* Check the pages is dirty and if it is send it */
> > > -        if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
> > > +        if (page_dirty) {
> > >              tmppages = ram_save_target_page(rs, pss);
> > > -            if (tmppages < 0) {
> > > -                return tmppages;
> > > +            if (tmppages >= 0) {
> > > +                pages += tmppages;
> > > +                /*
> > > +                 * Allow rate limiting to happen in the middle of huge pages if
> > > +                 * something is sent in the current iteration.
> > > +                 */
> > > +                if (pagesize_bits > 1 && tmppages > 0) {
> > > +                    migration_rate_limit();
> > 
> > This feels interesting; I know it's no change from before, and it's
> > difficult to do here, but it seems odd to hold the lock around the
> > sleep in the rate limit.
> 
> Good point.. I think I'll leave it as it is for this patch because it's
> unrelated to this change, but it seems proper in the future to do the
> unlocking for normal precopy too.
> 
> Maybe I'll just attach a patch at the end of this series when I repost.
> That'll be easier to do before things get forgotten again.
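
The follow-up described there would presumably follow the generic pattern of
never sleeping while holding the mutex: drop it across the throttle sleep and
re-take it afterwards. A sketch with stand-in names, not the eventual patch:

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void rate_limit(void)
    {
        /* stand-in for migration_rate_limit(): may sleep while throttled */
        usleep(1000);
    }

    /* Drop the bitmap mutex across the throttle sleep and re-take it,
     * so no thread sleeps holding a lock others need for bitmap ops. */
    static void throttle_without_lock(void)
    {
        pthread_mutex_unlock(&bitmap_mutex);   /* caller holds it on entry */
        rate_limit();
        pthread_mutex_lock(&bitmap_mutex);     /* held again on return */
    }
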

Dave

> -- 
> Peter Xu
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK