qemu-block

Re: [PATCH 08/13] stream: Replace subtree drain with a single node drain


From: Kevin Wolf
Subject: Re: [PATCH 08/13] stream: Replace subtree drain with a single node drain
Date: Thu, 10 Nov 2022 18:27:46 +0100

Am 10.11.2022 um 12:25 hat Vladimir Sementsov-Ogievskiy geschrieben:
> On 11/10/22 13:16, Kevin Wolf wrote:
> > Am 09.11.2022 um 17:52 hat Vladimir Sementsov-Ogievskiy geschrieben:
> > > On 11/8/22 15:37, Kevin Wolf wrote:
> > > > The subtree drain was introduced in commit b1e1af394d9 as a way to avoid
> > > > graph changes between finding the base node and changing the block graph
> > > > as necessary on completion of the image streaming job.
> > > > 
> > > > The block graph could change between these two points because
> > > > bdrv_set_backing_hd() first drains the parent node, which involves
> > > > polling and can do anything.
> > > > 
> > > > Subtree draining was an imperfect way to make this less likely (because
> > > > with it, fewer callbacks are called during this window). Everyone agreed
> > > > that it's not really the right solution, and it was only committed as a
> > > > stopgap solution.
> > > > 
> > > > This replaces the subtree drain with a solution that simply drains the
> > > > parent node before we try to find the base node, and then calls a
> > > > version of bdrv_set_backing_hd() that doesn't drain, but just asserts
> > > > that the parent node is already drained.
> > > > 
> > > > This way, any graph changes caused by draining happen before we start
> > > > looking at the graph and things stay consistent between finding the base
> > > > node and changing the graph.
> > > > 
> > > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > > [..]
> > > 
> > > >        base = bdrv_filter_or_cow_bs(s->above_base);
> > > > -    if (base) {
> > > > -        bdrv_ref(base);
> > > > -    }
> > > > -
> > > >        unfiltered_base = bdrv_skip_filters(base);
> > > >        if (bdrv_cow_child(unfiltered_bs)) {
> > > > @@ -82,7 +85,7 @@ static int stream_prepare(Job *job)
> > > >                }
> > > >            }
> > > > -        bdrv_set_backing_hd(unfiltered_bs, base, &local_err);
> > > > +        bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
> > > >            ret = bdrv_change_backing_file(unfiltered_bs, base_id, base_fmt, false);
> > > If we have yield points / polls during bdrv_set_backing_hd_drained()
> > > and bdrv_change_backing_file(), it's still bad and another
> > > graph-modifying operation may interleave. But b1e1af394d9 reports only
> > > polling in bdrv_set_backing_hd(), so I think it's OK not to worry about
> > > the other cases.
> > At this point in the series, bdrv_replace_child_noperm() can indeed
> > still poll. I'm not sure how bad it is, but at that point we're already
> > reconfiguring the graph with two specific nodes, and somehow this poll
> > hasn't caused problems in the past. Anyway, at the end of the series,
> > there won't be any polling left in bdrv_set_backing_hd_drained(), as far
> > as I can tell.
> > 
> > bdrv_change_backing_file() will certainly poll because it does I/O to
> > the image file. However, the change to the graph is completed at that
> > point, so I don't think it's a problem. Do you think it would be worth
> > putting a comment before bdrv_change_backing_file() that mentions that
> > the graph may change again from here on, but we've completed the graph
> > change?
> > 
> 
> A comment won't hurt. I think it's theoretically possible that we
> 
> 1. change the graph
> 2. yield in bdrv_change_backing_file()
> 3. switch to another graph-modifying operation, which changes the backing
>    file and does another bdrv_change_backing_file()
> 4. return to the bdrv_change_backing_file() of [2] and write the wrong
>    backing file to the metadata
> 
> And the only solution for such things that I can imagine is a kind of
> global graph-modification lock, which would have to be held around the
> whole graph-modifying operation, including writing the metadata.

Actually, I don't think this is the case. The problem you get here is
just that we haven't really defined what happens when there are two
concurrent .bdrv_change_backing_file requests. To solve this, you don't
need to lock the whole graph; you just need to order the updates at the
block driver level instead of doing them in parallel, so that we know
that the last .bdrv_change_backing_file call wins. I think taking
s->lock in qcow2 would already achieve this (though it would still lock
more than is strictly necessary).
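
Just to illustrate what I mean (a rough sketch, not a proper patch: the
coroutine_fn variant and its name are made up here, and the real
qcow2_change_backing_file() also validates its input and updates
bs->backing_file/backing_format, which I'm leaving out):

/*
 * Sketch: order concurrent backing file updates at the driver level by
 * taking s->lock around the header rewrite, so that the last
 * .bdrv_change_backing_file call wins instead of interleaving.
 */
static int coroutine_fn
qcow2_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
                             const char *backing_fmt)
{
    BDRVQcow2State *s = bs->opaque;
    int ret;

    qemu_co_mutex_lock(&s->lock);

    /* Update the in-memory copy of the backing file information */
    g_free(s->image_backing_file);
    g_free(s->image_backing_format);
    s->image_backing_file = g_strdup(backing_file);
    s->image_backing_format = g_strdup(backing_fmt);

    /*
     * Rewrite the header. If this yields, a concurrent caller blocks on
     * s->lock instead of interleaving, so its update is applied strictly
     * after ours and its backing file is what ends up in the metadata.
     */
    ret = qcow2_update_header(bs);

    qemu_co_mutex_unlock(&s->lock);
    return ret;
}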

> Probably we shouldn't care until we get real bug reports about it.
> Actually, I hope the only user who starts stream and commit jobs in
> parallel on the same backing chain is our iotests :)

Yes, it sounds very theoretical. :-)
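
For reference, the ordering that the commit message describes boils down
to something like this in stream_prepare() (a simplified sketch; error
handling and the surrounding checks are left out):

    bdrv_drained_begin(unfiltered_bs);

    /*
     * Any graph change triggered by the polling inside drained_begin()
     * has already happened here, so the base we look up below stays
     * consistent with the graph we modify.
     */
    base = bdrv_filter_or_cow_bs(s->above_base);
    unfiltered_base = bdrv_skip_filters(base);

    /* Only asserts that unfiltered_bs is drained instead of draining it */
    bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);

    /* base_id and base_fmt are derived from unfiltered_base (omitted here).
     * This may poll (it writes the image header), but the graph change is
     * already done at this point. */
    ret = bdrv_change_backing_file(unfiltered_bs, base_id, base_fmt, false);

    bdrv_drained_end(unfiltered_bs);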

Kevin



