From: Jørgen Hansen
Subject: Re: [RFC] CXL: TCG/KVM instruction alignment issue discussion
Date: Wed, 1 Mar 2023 14:51:52 +0000

On 2/28/23 11:49, Jonathan Cameron wrote:
>>> Second, there's the performance issue:
>>>
>>> 0) Do we actually care about performance? How likely are users to
>>>      attempt to run software out of CXL memory?
>>>
>>> 1) If we do care, is there a potential for converting CXL away from the
>>>      MMIO design?  The issue is coherency for shared memory. Emulating
>>>      coherency is a) hard, and b) a ton of work for little gain.
>>>
>>>      Presently, marking CXL memory as MMIO basically enforces coherency
>>>      by preventing caching, though it's unclear how (or whether) this is
>>>      actually enforced by KVM - I have to imagine it is.
>>
>> Having the option of doing device specific processing of accesses to a
>> CXL type 3 device (that the MMIO based access allows) is useful for
>> experimentation with device functionality, so I would be sad to see that
>> option go away. Emulating cache line access to a type 3 device would be
>> interesting, and could potentially be implemented in a way that would
>> allow caching of device memory in a shadow page in RAM, but that is a
>> rather large project.
> 
> Absolutely agree.  I can sketch a solution that is entirely in QEMU and
> works with KVM on a whiteboard, but it doesn't feel like a small job
> to actually implement, and I'm sure there are nasty corners
> (persistency is going to be tricky).
> 
> If anyone sees this as a 'fun challenge' and wants to take it on though
> that would be great!

I'd be interested in getting more details on your thoughts on this and
potentially working on it. We'd like to get closer to the CXL.mem traffic
that a physical system would see, ideally seeing read requests only on
LLC cache misses - although that seems hard to achieve. To make the
discussion concrete, I've put a rough sketch of how I read the current
MMIO-trapped path, and of the cached alternative, below.
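
For reference, this is roughly how I picture the MMIO-based path discussed
above. It's a minimal sketch with made-up names (my_ct3d_*), not the actual
hw/mem/cxl_type3.c code: every guest access to the device memory traps into
QEMU and goes through read/write callbacks, which is what gives us both the
device-specific processing and the de-facto coherency today.

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Hypothetical callbacks: every guest load/store traps here, so the
 * device model can apply arbitrary processing before touching the
 * backing store. Assumes a little-endian host for brevity. */
static uint64_t my_ct3d_read(void *opaque, hwaddr addr, unsigned size)
{
    uint8_t *backing = opaque;
    uint64_t val = 0;

    memcpy(&val, backing + addr, size);
    return val;
}

static void my_ct3d_write(void *opaque, hwaddr addr, uint64_t val,
                          unsigned size)
{
    uint8_t *backing = opaque;

    memcpy(backing + addr, &val, size);
}

static const MemoryRegionOps my_ct3d_md_ops = {
    .read = my_ct3d_read,
    .write = my_ct3d_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
    .valid = {
        .min_access_size = 1,
        .max_access_size = 8,
    },
};

/* Register the host-managed device memory as an MMIO region. Because it
 * is not RAM, the guest can't map and cache it, so coherency falls out
 * of trapping every access - at the cost of an exit per load/store. */
void my_ct3d_init_hdm(Object *owner, MemoryRegion *mr,
                      void *backing, uint64_t size)
{
    memory_region_init_io(mr, owner, &my_ct3d_md_ops, backing,
                          "cxl-type3-hdm-sketch", size);
}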
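
The shadow-in-RAM direction mentioned above would instead look something
like this (again hypothetical names, and just a starting point, not a
worked-out design): expose the same backing buffer as RAM, so KVM maps it
straight into the guest and it is cached like ordinary memory. The hard
part, which this deliberately leaves out, is keeping that cached view
coherent with any other view of the device memory (and handling
persistence for pmem); with a plain RAM mapping like this, QEMU doesn't
see the accesses at all, so getting back visibility into something like
LLC-miss traffic is the open question.

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Hypothetical alternative: expose the backing buffer as RAM. KVM then
 * maps it directly into the guest, accesses no longer trap into QEMU,
 * and the memory is cached like ordinary DRAM - which is exactly why
 * emulating coherency becomes the hard problem instead of a side
 * effect of the MMIO path. */
void my_ct3d_init_hdm_as_ram(Object *owner, MemoryRegion *mr,
                             void *backing, uint64_t size)
{
    memory_region_init_ram_ptr(mr, owner, "cxl-type3-hdm-ram-sketch",
                               size, backing);
}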

Thanks,
Jorgen
