RE: [PATCH v3 06/15] hw/block/nvme: Add support for Namespace Types


From: Dmitry Fomichev
Subject: RE: [PATCH v3 06/15] hw/block/nvme: Add support for Namespace Types
Date: Fri, 18 Sep 2020 22:22:35 +0000

> -----Original Message-----
> From: Klaus Jensen <its@irrelevant.dk>
> Sent: Friday, September 18, 2020 5:30 PM
> To: Dmitry Fomichev <Dmitry.Fomichev@wdc.com>
> Cc: Keith Busch <kbusch@kernel.org>; Klaus Jensen
> <k.jensen@samsung.com>; Kevin Wolf <kwolf@redhat.com>; Philippe
> Mathieu-Daudé <philmd@redhat.com>; Maxim Levitsky
> <mlevitsk@redhat.com>; Fam Zheng <fam@euphon.net>; Niklas Cassel
> <Niklas.Cassel@wdc.com>; Damien Le Moal <Damien.LeMoal@wdc.com>;
> qemu-block@nongnu.org; qemu-devel@nongnu.org; Alistair Francis
> <Alistair.Francis@wdc.com>; Matias Bjorling <Matias.Bjorling@wdc.com>
> Subject: Re: [PATCH v3 06/15] hw/block/nvme: Add support for Namespace Types
> 
> On Sep 14 07:14, Dmitry Fomichev wrote:
> > From: Niklas Cassel <niklas.cassel@wdc.com>
> >
> > Namespace Types introduce a new command set, "I/O Command Sets",
> > that allows the host to retrieve the command sets associated with
> > a namespace. Introduce support for the command set and enable
> > detection for the NVM Command Set.
> >
> > The new workflows for identify commands rely heavily on zero-filled
> > identify structs. E.g., certain CNS commands are defined to return
> > a zero-filled identify struct when an inactive namespace NSID
> > is supplied.
> >
> > Add a helper function in order to avoid code duplication when
> > reporting zero-filled identify structures.
> >
> > Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
> > Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com>
> > ---
> >  hw/block/nvme.c | 235 +++++++++++++++++++++++++++++++++++++++++++-----
> >  1 file changed, 215 insertions(+), 20 deletions(-)
> >
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index 4bd88f4046..d01c1c1e06 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -595,6 +595,33 @@ static inline uint16_t nvme_check_bounds(NvmeCtrl *n, NvmeNamespace *ns,
> >      return NVME_SUCCESS;
> >  }
> >
> > +static void nvme_fill_data(QEMUSGList *qsg, QEMUIOVector *iov,
> > +                           uint64_t offset, uint8_t pattern)
> > +{
> > +    ScatterGatherEntry *entry;
> > +    uint32_t len, ent_len;
> > +
> > +    if (qsg->nsg > 0) {
> > +        entry = qsg->sg;
> > +        for (len = qsg->size; len > 0; len -= ent_len) {
> > +            ent_len = MIN(len, entry->len);
> > +            if (offset > ent_len) {
> > +                offset -= ent_len;
> > +            } else if (offset != 0) {
> > +                dma_memory_set(qsg->as, entry->base + offset,
> > +                               pattern, ent_len - offset);
> > +                offset = 0;
> > +            } else {
> > +                dma_memory_set(qsg->as, entry->base, pattern, ent_len);
> > +            }
> > +            entry++;
> > +        }
> > +    } else if (iov->iov) {
> > +        qemu_iovec_memset(iov, offset, pattern,
> > +                          iov_size(iov->iov, iov->niov) - offset);
> > +    }
> > +}
> > +
> >  static void nvme_rw_cb(void *opaque, int ret)
> >  {
> >      NvmeRequest *req = opaque;
> > @@ -1153,6 +1180,19 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
> >      return NVME_SUCCESS;
> >  }
> >
> > +static uint16_t nvme_rpt_empty_id_struct(NvmeCtrl *n, uint64_t prp1,
> > +                                         uint64_t prp2, NvmeRequest *req)
> > +{
> > +    uint16_t status;
> > +
> > +    status = nvme_map_prp(n, prp1, prp2, NVME_IDENTIFY_DATA_SIZE, req);
> > +    if (status) {
> > +        return status;
> > +    }
> > +    nvme_fill_data(&req->qsg, &req->iov, 0, 0);
> > +    return NVME_SUCCESS;
> > +}
> > +
> 
> Instead of doing the filling, why not just directly call nvme_dma_prp
> with an empty NvmeIdCtrl/NvmeIdNs stack allocated variable?

Yes, this should work too, and it would likely be simpler.
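
For reference, a rough sketch of that approach (untested, and assuming the
nvme_dma_prp() helper in this tree takes the buffer, length, PRPs, DMA
direction and request, as the existing identify handlers do):

    static uint16_t nvme_rpt_empty_id_struct(NvmeCtrl *n, uint64_t prp1,
                                             uint64_t prp2, NvmeRequest *req)
    {
        /* zero-filled buffer the size of one identify data structure */
        uint8_t id[NVME_IDENTIFY_DATA_SIZE] = {};

        /* copy the zeroes out to the host through the supplied PRPs */
        return nvme_dma_prp(n, id, sizeof(id), prp1, prp2,
                            DMA_DIRECTION_FROM_DEVICE, req);
    }

With something like that, the explicit nvme_fill_data() pass over the mapped
buffer would no longer be needed for the identify path.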
