[RFC PATCH v1 0/3] Introduce ssi_txfifo_transfer


From: Francisco Iglesias
Subject: [RFC PATCH v1 0/3] Introduce ssi_txfifo_transfer
Date: Tue, 19 Jan 2021 14:01:52 +0100

Dear all,

This small RFC patch series attempts to make it possible to support SPI
commands that require dummy clock cycles in SPI controllers that currently
do not support them.

There are two ways SPI controllers transfer dummy clock cycles. In the first,
the number of dummy clock cycles to use with a command is configured in a
register; in the second, the dummy clock cycles are generated from dummy
bytes pushed into a txfifo. Since these two ways work differently, this patch
series introduces ssi_txfifo_transfer to be used by the controllers that make
use of a txfifo. Furthermore, since the QEMU SPI controller models that
transfer through a txfifo require the SPI flash (m25p80) to operate with
dummy byte accuracy (instead of dummy clock cycle accuracy), this patch
series also makes m25p80 support toggling the accuracy from dummy clock
cycles to dummy bytes. This is done automatically inside
ssi_txfifo_transfer.
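
To make the idea above concrete, here is a minimal, hypothetical C sketch of
what ssi_txfifo_transfer could look like; it is not the code from patch 1/3.
It reuses the existing ssi_transfer() from include/hw/ssi/ssi.h and assumes a
helper for switching the attached flash into dummy-byte accuracy (the name
ssi_set_dummy_byte_accuracy is purely illustrative):

  /*
   * Hypothetical sketch only (not the actual patch 1/3): a txfifo-based
   * controller pushes dummy *bytes*, so the helper first asks the attached
   * peripheral (m25p80) to count dummies with byte accuracy and then
   * forwards the byte as an ordinary SSI transfer.
   */
  #include "qemu/osdep.h"
  #include "hw/ssi/ssi.h"

  /* Assumed/illustrative prototype; the real series presumably adds
   * something similar to include/hw/ssi/ssi.h. */
  void ssi_set_dummy_byte_accuracy(SSIBus *bus, bool enable);

  uint32_t ssi_txfifo_transfer(SSIBus *bus, uint32_t val)
  {
      /* Switch the attached flash to dummy-byte accuracy (placeholder). */
      ssi_set_dummy_byte_accuracy(bus, true);

      /* Shift the byte out exactly as ssi_transfer() does today. */
      return ssi_transfer(bus, val);
  }

With something along those lines in place, the change in hw/ssi/xilinx_spi.c
would plausibly be a one-line swap of ssi_transfer() for ssi_txfifo_transfer()
at the point where the txfifo contents are shifted out, which is consistent
with the two-line diffstat below.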

Lastly, one SPI controller that transfers through a txfifo is modified to use
the new function and has been tested to work with the FAST_READ command
(using 8 dummy clock cycles). For testing the first way of transferring dummy
clock cycles (when they are configured in a register), the Xilinx ZynqMP
GQSPI has been used, and the controller works as well as it did previously.

Best regards,
Francisco Iglesias


Francisco Iglesias (3):
  hw: ssi: Introduce ssi_txfifo_transfer
  hw: block: m25p80: Support dummy byte accuracy
  hw: ssi: xilinx_spi: Change to use ssi_txfifo_transfer

 hw/block/m25p80.c    | 112 +++++++++++++++++++++++++++++++++++--------
 hw/ssi/ssi.c         |  22 +++++++++
 hw/ssi/xilinx_spi.c  |   2 +-
 include/hw/ssi/ssi.h |   3 ++
 4 files changed, 118 insertions(+), 21 deletions(-)

-- 
2.20.1



