qemu-devel

Re: [PATCH 0/1] tests/acceptance/boot_linux: Switch to Fedora 32


From: Daniel P. Berrangé
Subject: Re: [PATCH 0/1] tests/acceptance/boot_linux: Switch to Fedora 32
Date: Fri, 5 Feb 2021 17:28:44 +0000
User-agent: Mutt/1.14.6 (2020-07-11)

On Fri, Feb 05, 2021 at 05:54:24PM +0100, Philippe Mathieu-Daudé wrote:
> Hi Wainer,
> 
> On 1/28/21 11:06 PM, Daniele Buono wrote:
> > On 1/28/2021 3:19 PM, Wainer dos Santos Moschetta wrote:
> >> Hi,
> >>
> >> On 1/26/21 10:09 PM, Daniele Buono wrote:
> >>> Local acceptance tests run with "make check-acceptance" are now
> >>> showing some cases canceled like the following:
> >>>
> >>> (01/39)
> >>> tests/acceptance/boot_linux.py:BootLinuxX8664.test_pc_i440fx_tcg:
> >>> CANCEL: Failed to download/prepare boot image (0.25 s)
> >>>
> >>> Turns out, every full-vm test in boot_linux.py is trying to use a
> >>> Fedora 31 cloud image and is failing, with Avocado refusing to download
> >>> it, presumably because Fedora 31 is EOL.
> >>>
> >>> This patch moves to Fedora 32, which is still supported and seems
> >>> to work fine.
> >>
> >> A while ago there was a discussion about updating the Fedora version
> >> which, in my opinion, ended without a conclusion. Please see the
> >> complete thread at:
> >>
> >> https://www.mail-archive.com/qemu-devel@nongnu.org/msg763986.html
> > 
> > Oops, didn't notice the previous thread. Apologies for the duplicate!
> > 
> >>
> >> I'm CC'ing Daniel Berrangé so that, perhaps, we could resume the
> >> discussion.
> 
> The first question I'd like to figure out is how/where can we archive
> the artifacts being tested by the project. As we maintain stable tags,
> I'm more worried about regressions affecting LTS use rather than
> recently added features which get more coverage and activity.
> Is this too conservative for acceptance testing?

I think there are multiple issues to worry about: one short
term, the others long term.

The most immediate issue is that we are pointing to an EOL
Fedora release with a broken download URL.

While we could update to F32, that doesn't solve the problem.
It will burn us again in just a few months when F34 comes out,
making F32 EOL. We need a long term solution.


The broader long term question is what our goals are for the
acceptance tests.

I think the primary goal is to detect regressions in QEMU, where
we break something which used to work.  To achieve this we don't
need to be chasing distro releases. It is fine for us to be
testing an EOL distro. The fact that it is EOL doesn't invalidate
the test behaviour.

The only problem with EOL distros is the URL breakage. That is
trivially solved in the Fedora case by downloading from the
archive.fedoraproject.org server instead of the main server. Problem
solved forever.
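As a rough sketch of that fix, the test's asset URL could be rewritten to point at the archive server. The exact mirror path layout and image filename below are assumptions based on how archive.fedoraproject.org mirrors old releases, not the actual boot_linux.py code:

```python
# Sketch: point an EOL Fedora image URL at the permanent archive server.
# The path layout (main mirror vs. /pub/archive) and the image filename
# are illustrative assumptions, not verified against boot_linux.py.

MAIN_PREFIX = "https://download.fedoraproject.org/pub/fedora/linux/releases/"
ARCHIVE_PREFIX = "https://archive.fedoraproject.org/pub/archive/fedora/linux/releases/"

def to_archive_url(url: str) -> str:
    """Rewrite a main-mirror release URL to its archived equivalent."""
    if url.startswith(MAIN_PREFIX):
        return ARCHIVE_PREFIX + url[len(MAIN_PREFIX):]
    return url  # not a main-mirror URL; leave it alone

f31 = (MAIN_PREFIX +
       "31/Cloud/x86_64/images/Fedora-Cloud-Base-31-1.9.x86_64.qcow2")
print(to_archive_url(f31))
```

Since archived releases never move again, a URL fixed this way stays valid after the release goes EOL.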


The more important long term question is whether the selection of
distros we are testing gives us coverage which exercises all the
scenarios that we care about.

For example, considering virtio devices, we need

 - Guest OS which *only* has virtio legacy mode implemented
     - pc for PCI 
     - q35 for PCI-e
   Proves devices work in legacy mode on both PCI and PCIe
 
 - Guest OS which has both legacy and modern mode implemented
     - pc for PCI 
     - q35 for PCI-e
   Proves devices work in transitional mode (pc) and
   modern-only mode (PCI-e)


So for virtio coverage, we'll need two guest OSes, each with two
scenarios, as a starting point.  RHEL-6 is an example of a
distro that was legacy only; anything newer covers the other
case.
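The starting matrix can be enumerated mechanically; a minimal sketch, where the guest names are illustrative placeholders rather than real image identifiers:

```python
# Sketch: enumerate the starting virtio test matrix described above.
# Guest names are placeholders, not real cloud-image identifiers.
from itertools import product

guests = [
    "legacy-only-guest",        # e.g. a RHEL-6-era distro
    "legacy-and-modern-guest",  # any newer distro
]
machines = [
    "pc",   # PCI
    "q35",  # PCI-e
]

scenarios = list(product(guests, machines))
for guest, machine in scenarios:
    print(f"boot {guest} on {machine}")
```

Two guest OSes times two machine types gives four scenarios before any non-x86 targets are added.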

But then we need to expand the axis to non-x86 too, most
especially for ppc64 and s390, which have non-PCI based
options as well as PCI.

Periodically a new virtio device will arrive (eg virtiofs)
and that might mean we need to add another test scenario, or
upgrade the more modern OS in an existing test scenario.
This is reasonably infrequent though, so in general we won't
need to be chasing the bleeding edge distros.


The matrix can grow pretty damn fast, and we need to keep it
under control. One way to deal with that is to have one test
case cover multiple features. eg don't run a separate test
for each of virtio-net, virtio-blk, virtio-scsi. Launch a
single VM in which we can test all of them at once.
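A sketch of that consolidation, building one QEMU command line that carries several virtio devices at once. The `-device` type names are real QEMU device types, but the backend ids and the null block backend are illustrative choices, not taken from any existing test:

```python
# Sketch: one QEMU invocation exercising virtio-net, virtio-blk and
# virtio-scsi together, instead of one VM per device. Backend ids
# and the null-co throwaway block backend are illustrative.

def multi_virtio_args(machine: str = "q35") -> list:
    return [
        "qemu-system-x86_64", "-M", machine,
        # network: virtio-net over a user-mode backend
        "-netdev", "user,id=net0",
        "-device", "virtio-net-pci,netdev=net0",
        # block: virtio-blk on a throwaway null backend
        "-drive", "if=none,id=hd0,driver=null-co,read-only=on",
        "-device", "virtio-blk-pci,drive=hd0",
        # SCSI: a virtio-scsi controller
        "-device", "virtio-scsi-pci,id=scsi0",
    ]

print(" ".join(multi_virtio_args()))
```

One boot of this VM covers three device code paths that would otherwise need three separate test cases.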

Also, we don't need to test all of Ubuntu, Fedora, RHEL and SUSE
if those distros are basically exercising the same set
of QEMU device code paths. ie just because the distro is
different doesn't mean it adds value to the test coverage.

We should be strongly driven by whether the distro exercises
a feature that isn't already covered by an existing test.


So the next question is what needs to be in our matrix?
Most downstream vendors will say it needs to be all KVM related
features. QEMU as a project though also cares more broadly
about many more devices for emulating older platforms.
There's some balance to be struck here.


On many previous occasions we've talked about classifying
QEMU features into support tiers. If we had a view of what
our support tiers were for each feature, this would in turn
show us where to spend effort in building up testing coverage.

I made an abortive start at trying to define what the tiers might
mean, but didn't get into really classifying features:

   https://wiki.qemu.org/Support_Tiers

Don't pay attention to the host/guest classifications I did
there - I was mostly just playing with how to represent it.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



