On Mon, Sep 11, 2023 at 07:01:38PM -0300, Daniel Henrique Barboza wrote:
On 9/11/23 16:46, Daniel Henrique Barboza wrote:
On 9/8/23 11:10, Andrew Jones wrote:
On Fri, Sep 08, 2023 at 03:28:12AM -0700, Andrea Bolognani wrote:
Wouldn't that be exposed as a named vendor CPU rather than something
that users have to assemble themselves by layering profiles?
Combining multiple profiles should be considered as creating a union of the
mandatory extensions each profile represents, rather than as layering.
Let's take the hypothetical case of C being dropped from rva24u64 as an
example. When specifying both rva22u64 and rva24u64
(-cpu min64,rva22u64=on,rva24u64=on) the user will get C enabled without
even having to know about it, whereas if the user could only select the
latest profile (-cpu rva24u64), C would not get enabled unless the user
was aware that it needed to be explicitly enabled (-cpu rva24u64,c=on).
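The union behaviour described above can be sketched as follows. Note that the extension sets here are hypothetical and abridged, and dropping C from rva24u64 is only the thought experiment from this thread, not anything the actual profile specs say:

```python
# Hypothetical mandatory-extension sets, abridged for illustration.
# rva24u64 dropping "c" is the thought experiment from the discussion
# above, not a statement about the real (unratified) profile.
PROFILES = {
    "rva22u64": {"i", "m", "a", "f", "d", "c", "zicsr", "zifencei"},
    "rva24u64": {"i", "m", "a", "f", "d", "zicsr", "zifencei"},  # no "c"
}

def enabled_extensions(*profiles):
    """Union of the mandatory extensions of every selected profile."""
    exts = set()
    for p in profiles:
        exts |= PROFILES[p]
    return exts

# Selecting both profiles keeps C enabled, without the user having to
# know it was dropped from the newer one; selecting only rva24u64
# would require "c=on" explicitly.
assert "c" in enabled_extensions("rva22u64", "rva24u64")
assert "c" not in enabled_extensions("rva24u64")
```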
Right, that's what I had in mind when I was talking about layering.
Using the term union is probably more appropriate though :)
If you're working on defining a new CPU model and need fine-grained
control over the exact set of extensions, then you can already do
that by explicitly flipping each one of them on or off, and adding
the ability to layer profiles doesn't add much value except possibly
removing a bit of verbosity. Not particularly compelling IMO.
I think it's compelling, because the extension lists that profiles provide
are long and mostly uninteresting. For example, how often do we want to
type and think about imafd_Zicsr_Zifencei? I think we'd mostly rather take
those for granted, and we can, because we just specify 'g' instead. Indeed
the profile spec even points out that using profiles as a way to deal with
"unwieldy" ISA strings is another motivation for them. A RISC-V QEMU CPU
command line is effectively an ISA string, so I think it's appropriate to
apply profiles to it as well.
I'm not sure this will save a lot of typing unless you're enabling a
lot of profiles which are possibly far apart from each other in terms
of release year, but I also don't have a very strong opposition to
the approach. As you say, we are already effectively implementing it
with virtual extensions like "g", so extending the applicability
further to cover profiles is not too much of a stretch.
I think libvirt will probably need to learn what set of extensions
each virtual extension/profile maps to, but since the definition will
come directly from actual specifications that shouldn't be a big
problem.
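A minimal sketch of what that mapping could look like on the libvirt side, expanding profile names in a '-cpu'-style string into plain extension names before any comparison. The mapping contents, the helper name, and the option-parsing rules are all assumptions for illustration, not libvirt's actual API:

```python
# Hypothetical profile-to-extension table libvirt might maintain,
# derived from the published profile specifications (contents abridged
# and not authoritative).
PROFILE_EXTENSIONS = {
    "rva22u64": ["i", "m", "a", "f", "d", "c",
                 "zicsr", "zifencei", "zba", "zbb"],
}

def expand_cpu(cpu_string):
    """Expand profile names in a '-cpu'-style string into explicit
    extensions, applying later options on top of earlier ones."""
    model, *opts = cpu_string.split(",")
    exts = set()
    for opt in opts:
        name, _, val = opt.partition("=")
        if name in PROFILE_EXTENSIONS and val == "on":
            exts.update(PROFILE_EXTENSIONS[name])
        elif val in ("on", ""):
            exts.add(name)
        elif val == "off":
            exts.discard(name)
    return model, sorted(exts)

# expand_cpu("min64,rva22u64=on,zbc=on") yields the profile's
# extensions plus zbc, against the "min64" base model.
```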
(This also gets really interesting when you start thinking about
extensions that are optional for the profile... Maybe we need
something like '-cpu rva23u64,optional-extensions=off' so that
testing against the baseline for a profile doesn't require combing
through the spec and disabling extensions manually?)
I don't think the profiles (whether they are CPU virtual extensions or
CPU models) should enable the optional extensions by default. When
software targets a profile (or set of profiles with the virtual
extension approach) then it should never assume the optional extensions
will be present. Developers should need to consciously include those
extensions when creating the QEMU platform used for testing.
Fair enough.
Should there be a way to explicitly enable all possible optional
extensions, then? To allow the developer to verify that the
application works correctly on a minimal implementation of the
profile, but also takes advantage of optional extensions if they are
present? Or would that be achieved by enabling the relevant
extensions manually? Or perhaps by using the max CPU model instead?