qemu-devel
From: David Hildenbrand
Subject: Re: [PATCH v3 04/10] vfio: Query and store the maximum number of DMA mappings
Date: Thu, 7 Jan 2021 13:56:50 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.5.0

On 17.12.20 20:47, Alex Williamson wrote:
> On Thu, 17 Dec 2020 20:04:28 +0100
> David Hildenbrand <david@redhat.com> wrote:
> 
>> On 17.12.20 18:55, Alex Williamson wrote:
>>> On Wed, 16 Dec 2020 15:11:54 +0100
>>> David Hildenbrand <david@redhat.com> wrote:
>>>   
>>>> Let's query the maximum number of DMA mappings by querying the available
>>>> mappings when creating the container.
>>>>
>>>> In addition, count the number of DMA mappings and warn when we would
>>>> exceed it. This is a preparation for RamDiscardMgr, which might
>>>> create a large number of DMA mappings over time; we at least want to
>>>> warn early that the QEMU setup might be problematic. Use "reserved"
>>>> terminology, so we can use this to reserve mappings before they are
>>>> actually created.
>>>
>>> This terminology doesn't make much sense to me, we're not actually
>>> performing any kind of reservation.  
>>
>> I see you spotted the second user which actually performs reservations.
>>
>>>   
>>>> Note: don't reserve vIOMMU DMA mappings - using the vIOMMU region size
>>>> divided by the mapping page size might be a bad indication of what will
>>>> happen in practice - we might end up warning all the time.  
>>>
>>> This suggests we're not really tracking DMA "reservations" at all.
>>> Would something like dma_regions_mappings be a more appropriate
>>> identifier for the thing you're trying to count?  We might as well also  
>>
>> Right now I want to count
>> - Mappings we know we will definitely have (counted in this patch)
>> - Mappings we know we could eventually have later (RamDiscardMgr)
>>
>>> keep a counter for dma_iommu_mappings where the sum of those two should
>>> stay below dma_max_mappings.  
>>
>> We could; however, tracking active IOMMU mappings when removing a memory
>> region from the address space isn't easily possible - we do a single
>> vfio_dma_unmap() which might span multiple mappings. The same applies to
>> RamDiscardMgr. It's hard to count how many mappings we *currently*
>> have using that approach.
> 
> It's actually easy for the RamDiscardMgr regions, the unmap ioctl
> returns the total size of the unmapped extents.  Therefore since we
> only map granule sized extents, simple math should tell us how many
> entries we freed.  OTOH, if there are other ways that we unmap multiple
> mappings where we don't have those semantics, then it would be
> prohibitive.

So, I decided not to track the number of current mappings for now, but
instead to follow your suggestion and sanity-check via

1. Consulting kvm_get_max_memslots() to estimate how many DMA mappings we
might have in the worst case, apart from the ones via RamDiscardMgr.

2. Calculating the maximum number of DMA mappings that could be consumed
by RamDiscardMgr in the given setup by looking at all entries in the
vrdl list.

Looks much cleaner now.

-- 
Thanks,

David / dhildenb
