From: Jason Wang
Subject: Re: [PATCH 0/6] Add debug interface to kick/call on purpose
Date: Thu, 8 Apr 2021 13:59:40 +0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Thunderbird/78.9.0


On 2021/4/8 1:51 PM, Dongli Zhang wrote:

On 4/6/21 7:20 PM, Jason Wang wrote:
On 2021/4/7 7:27 AM, Dongli Zhang wrote:
This should answer your question "Can it bypass the masking?".

For vhost-scsi, virtio-blk, virtio-scsi and virtio-net, writing to the eventfd cannot
bypass masking, because masking unregisters the eventfd; a write to the eventfd
therefore has no effect.

However, it is possible to bypass masking for vhost-net, because vhost-net
registers a dedicated masked_notifier eventfd in order to mask the irq. A write to
the original eventfd still takes effect.

We may leave it to the user to decide whether to write to the 'masked_notifier' or
the original 'guest_notifier' for vhost-net.
My fault here. A write to the masked_notifier will always be masked :(

Only when there's no bug in QEMU.
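
(For reference, a minimal sketch of what "write to eventfd" means at the syscall
level; the fd created here is only a stand-in for the guest/host notifier that QEMU
or vhost would actually have registered:)

    /* Signalling an eventfd is just an 8-byte counter increment.  Whether the
     * guest ever sees it depends on what the fd is wired to: for virtio-blk,
     * virtio-scsi, vhost-scsi and virtio-net a masked vector means the irqfd
     * was unregistered, so the write is a no-op from the guest's point of view. */
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/eventfd.h>

    int main(void)
    {
        int fd = eventfd(0, EFD_NONBLOCK);  /* stand-in for the notifier fd */
        uint64_t one = 1;

        if (fd < 0) {
            return 1;
        }
        if (write(fd, &one, sizeof(one)) != sizeof(one)) {
            return 1;
        }
        close(fd);
        return 0;
    }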


If it is at the EventNotifier level, we will not care whether the EventNotifier is
masked or not. It just provides an interface to write to the EventNotifier.

Yes.
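
(At the EventNotifier level the debug write would presumably boil down to QEMU's
existing event_notifier_set() helper; the wrapper function below and its name are
made up for illustration only:)

    #include "qemu/osdep.h"
    #include "qemu/event_notifier.h"

    /* Hypothetical debug hook: signal whichever notifier the user selected,
     * without looking at the masking state at all.  event_notifier_set() is
     * equivalent to write(fd, &(uint64_t){1}, 8) on the underlying eventfd. */
    static void debug_signal_notifier(EventNotifier *notifier)
    {
        event_notifier_set(notifier);
    }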


Dumping the MSI-X table for both virtio and vfio will help confirm whether the vector
is masked.

That would be helpful as well. It's probably better to extend the "info pci"
command.

Thanks
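
(For the MSI-X part, a rough sketch of what such a dump could look like on the QEMU
side; msix_enabled(), msix_is_masked(), pci_get_long() and the PCI_MSIX_ENTRY_*
offsets already exist in the tree, while the function itself and the monitor
plumbing are only assumptions here:)

    #include "qemu/osdep.h"
    #include "hw/pci/pci.h"
    #include "hw/pci/msix.h"
    #include "monitor/monitor.h"

    /* Walk a device's MSI-X table and print address/data plus the per-vector
     * mask bit, which is what we want to confirm when debugging lost interrupts. */
    static void debug_dump_msix(Monitor *mon, PCIDevice *dev)
    {
        if (!msix_enabled(dev)) {
            monitor_printf(mon, "MSI-X not enabled\n");
            return;
        }
        for (unsigned vec = 0; vec < msix_nr_vectors_allocated(dev); vec++) {
            uint8_t *entry = dev->msix_table + vec * PCI_MSIX_ENTRY_SIZE;

            monitor_printf(mon, "vector %u: addr 0x%08x%08x data 0x%08x %s\n",
                           vec,
                           pci_get_long(entry + PCI_MSIX_ENTRY_UPPER_ADDR),
                           pci_get_long(entry + PCI_MSIX_ENTRY_LOWER_ADDR),
                           pci_get_long(entry + PCI_MSIX_ENTRY_DATA),
                           msix_is_masked(dev, vec) ? "masked" : "unmasked");
        }
    }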
I will check whether to add it to "info pci" (introducing a new argument/option to
"info pci") or to introduce a new command.


It's better to just reuse "info pci".
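
(If it does end up as a flag on "info pci", the hmp-commands-info.hx entry might look
roughly like this; the "-n" option name and help text are invented here, only the
entry layout follows the existing file:)

    {
        .name       = "pci",
        .args_type  = "notifiers:-n",
        .params     = "[-n]",
        .help       = "show PCI info (-n: also dump MSI-X table and notifiers)",
        .cmd        = hmp_info_pci,
    },

and hmp_info_pci() would read the flag with something like
qdict_get_try_bool(qdict, "notifiers", false).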



Regarding the EventNotifiers, I will classify them as guest notifiers or host notifiers
so that it is much easier for the user to tell whether an eventfd is for injecting an
IRQ or for kicking the doorbell.


Sounds good.
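
(Just to spell out the naming used above; the enum is purely illustrative:)

    /* "guest notifier" = irqfd / call path: device -> guest interrupt injection.
     * "host notifier"  = ioeventfd / kick path: guest doorbell write -> device. */
    typedef enum {
        DEBUG_NOTIFIER_GUEST,   /* write here to emulate a "call" (inject IRQ)    */
        DEBUG_NOTIFIER_HOST,    /* write here to emulate a "kick" (ring doorbell) */
    } DebugNotifierType;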



Thank you very much for all the suggestions!

Dongli Zhang


Thanks



Thank you very much!

Dongli Zhang




