From: LIU Zhiwei
Subject: Re: [PATCH v5 07/11] hw/char: Initial commit of Ibex UART
Date: Thu, 4 Jun 2020 09:59:18 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.8.1
On 2020/6/3 23:56, Alistair Francis wrote:
> On Wed, Jun 3, 2020 at 3:33 AM LIU Zhiwei <zhiwei_liu@c-sky.com> wrote:
>> On 2020/6/3 1:54, Alistair Francis wrote:
>>> On Tue, Jun 2, 2020 at 5:28 AM LIU Zhiwei <zhiwei_liu@c-sky.com> wrote:
>>>> Hi Alistair,
>>>>
>>>> There are still some questions I don't understand.
>>>>
>>>> 1. Is the baud rate or FIFO a necessary feature to simulate?
>>>>
>>>> As you can see, qemu_chr_fe_write() will send the byte as soon as
>>>> possible. When you want to transmit a byte through WDATA, you can
>>>> call qemu_chr_fe_write() directly.
>>> So qemu_chr_fe_write() will send the data straight away. This doesn't
>>> match what the hardware does though. So by modelling a FIFO and a
>>> delay in sending we can better match the hardware.
>> I see many UARTs have similar features. Does the software really care
>> about these features? Usually I just want to print something to the
>> terminal through the UART.
> In this case Tock (which is the OS used for OpenTitan) does care about
> these features, as it relies on interrupts generated by the HW to
> complete the serial send task. It also just makes the QEMU model more
> accurate.
Fair enough. I see: the "tx_watermark" interrupt needs the FIFO. At
least it can verify the ISR.
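The tx_watermark point above can be made concrete with a minimal, self-contained sketch. All names here (ToyUartTx, toy_tx_push, etc.) are invented for illustration; this is not the actual hw/char/ibex_uart.c code. The point it shows: a watermark interrupt can only exist if the model tracks a FIFO fill level, which an immediate qemu_chr_fe_write() on every WDATA write would never expose to the guest.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified TX-side model (not the real Ibex UART). */
#define TX_FIFO_DEPTH 32

typedef struct {
    uint8_t fifo[TX_FIFO_DEPTH];
    unsigned level;          /* bytes currently queued */
    unsigned watermark;      /* interrupt threshold */
    bool tx_watermark_irq;   /* latched interrupt state */
} ToyUartTx;

static void toy_update_irq(ToyUartTx *s)
{
    /* The interrupt asserts when the FIFO is below the watermark,
     * telling the driver it may queue more data. */
    s->tx_watermark_irq = (s->level < s->watermark);
}

static bool toy_tx_push(ToyUartTx *s, uint8_t byte)
{
    if (s->level >= TX_FIFO_DEPTH) {
        return false;            /* FIFO full, write dropped */
    }
    s->fifo[s->level++] = byte;
    toy_update_irq(s);
    return true;
}

static void toy_tx_drain_one(ToyUartTx *s)
{
    /* In a real model this would run from a timer callback and push
     * the byte to the chardev backend. */
    if (s->level > 0) {
        s->level--;
    }
    toy_update_irq(s);
}
```

An interrupt-driven driver (as Tock is described above) sleeps until the watermark interrupt fires; a model that prints everything immediately has no FIFO level to compare against, so it could never raise that interrupt at the right time.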
>> I see the UART can work with many different backends, such as pty,
>> file, socket and so on. I wonder if there is a backend which has some
>> requirements on the baud rate.
>> Most simulation in QEMU is for running software, not exactly the
>> details of the hardware. For example, we will not simulate the 16x
>> oversampling in this UART.
> Agreed. Lots of UARTs don't bother modelling the delay from the
> hardware as generally it doesn't matter. In this case it does make a
> difference for the software and it makes the QEMU model more accurate,
> which is always a good thing.
>>>> There is no error here. Personally I think it is necessary to
>>>> simulate the FIFO and baud rate, maybe for supporting some backends.
>>> So baud rate doesn't need to be modelled as we aren't actually
>>> sending UART data, just pretending and then printing it.
>> Can someone give a reasonable answer for this question?
> Which question?

You can ignore it, as it doesn't matter.
>>>> 2. The baud rate calculation method is not strictly right. I think
>>>> when a byte is written to the FIFO, char_tx_time * 8 is the correct
>>>> time to send the byte instead of char_tx_time * 4.
>>> Do you mind explaining why 8 is correct instead of 4?
>> Usually a write of a byte to WDATA will trigger uart_write_tx_fifo().
>> Transmitting one bit takes char_tx_time, so it will take
>> char_tx_time * 8 to transmit a byte.
> I see your point. I just used the 4 as that is what the Cadence one
> does. I don't think it matters too much as it's just the delay for a
> timer (that isn't used as an accurate timer).

Got it. Just a way to send the bytes at some time later.
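The arithmetic being debated can be written out as a tiny self-contained sketch. The names are invented; only the *8 vs *4 factor comes from the thread, and real 8-N-1 framing would in fact need 10 bit times (start + 8 data + stop), which is a further nuance neither factor captures.

```c
#include <assert.h>
#include <stdint.h>

#define NANOSECONDS_PER_SECOND 1000000000LL

/* Time to shift out one bit at the given baud rate. */
static int64_t char_tx_time_ns(int64_t baud_rate)
{
    return NANOSECONDS_PER_SECOND / baud_rate;
}

/* Delay before a queued byte is considered to have left the FIFO.
 * bits_per_byte is the disputed factor: 8 matches the data bits,
 * 4 is what the Cadence model uses. */
static int64_t byte_tx_delay_ns(int64_t baud_rate, int bits_per_byte)
{
    return char_tx_time_ns(baud_rate) * bits_per_byte;
}
```

At 115200 baud this gives roughly 8.68 us per bit, so the *8 and *4 choices differ by about 35 us per byte, which only matters for how soon the completion interrupt fires, not for what reaches the terminal.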
I tried to boot a RISC-V Linux and set a breakpoint on a watch callback
function.

>>>> 3. Why add a watch here?
>>> This is based on the Cadence UART implementation in QEMU (which does
>>> the same thing). This will trigger a callback when we can write more
>>> data or when the backend has hung up.
>> Many other serial devices do the same thing, like virtio-console and
>> serial. So it may be a common interface here. I will try to
>> understand it (not yet).
> Yep, it's just a more complete model of what the HW does.

The breakpoint was never hit. I just wonder if there is a case that
really needs the callback function.

Zhiwei
> Alistair
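The watch mechanism discussed in point 3 can be sketched with a toy stand-in. The types and functions below are invented for illustration, not the real qemu_chr_fe_add_watch() API; the shape is the same, though: when the frontend cannot push a whole buffer, it registers a one-shot callback that fires once the backend can accept data again.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the chardev watch mechanism. */
typedef bool (*toy_watch_cb)(void *opaque);

typedef struct {
    bool writable;
    toy_watch_cb cb;     /* one-shot callback, NULL when none pending */
    void *opaque;
} ToyBackend;

static void toy_add_watch(ToyBackend *be, toy_watch_cb cb, void *opaque)
{
    be->cb = cb;
    be->opaque = opaque;
}

/* Called when the backend (pty, socket, ...) drains and can take more
 * bytes: fire the pending watch, if any. */
static void toy_backend_writable(ToyBackend *be)
{
    be->writable = true;
    if (be->cb) {
        toy_watch_cb cb = be->cb;
        be->cb = NULL;       /* one-shot: clear before invoking */
        cb(be->opaque);
    }
}

static bool resume_tx(void *opaque)
{
    /* In the UART model this would restart draining the TX FIFO. */
    *(int *)opaque += 1;
    return false;            /* do not keep the watch */
}
```

This may also explain the unhit breakpoint above, though this is a guess rather than something verified against the model: the callback only runs if the frontend first fails to write and registers a watch, and a backend like stdio is almost always writable, so that path is rarely taken.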