qemu-devel

Re: Fwd: VirtioSound device emulation implementation


From: Gerd Hoffmann
Subject: Re: Fwd: VirtioSound device emulation implementation
Date: Fri, 16 Apr 2021 13:32:52 +0200

  Hi,

> Starting off with the get config function for pcm streams. Initially I
> thought I was supposed to store those configs in the VirtIOSound
> struct but then I realized these configurations should be queried from
> the backend and then be presented to the driver/guest.

No.  The device can completely ignore the backend capabilities.

We have mixing-engine and fixed-settings.

With mixing-engine=on and fixed-settings=on (default) qemu will mix all
streams, resample if needed, and pass a single stream to the backend.

mixing-engine=off disables the qemu mixer; the streams are then passed
as-is to the backend.  This requires fixed-settings=off too
(see documentation comments in qapi/audio.json).  The backend must
handle the settings the device asks for, either by configuring the audio
hardware accordingly (oss/alsa), or by passing on the settings to the
sound daemon (pulseaudio).
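For reference, the combinations described above might look like this on
the command line (an illustrative alsa backend; the trailing `...` stands
for the rest of the invocation, and the option names follow the
documentation comments in qapi/audio.json):

```shell
# default: qemu mixes all streams, resamples if needed, and hands the
# backend a single stream
qemu-system-x86_64 -audiodev alsa,id=snd0 ...

# fixed settings made explicit for the mixed output stream
qemu-system-x86_64 -audiodev alsa,id=snd0,out.frequency=48000,out.channels=2,out.format=s16 ...

# pass streams to the backend unmodified: both switches must be off
qemu-system-x86_64 -audiodev alsa,id=snd0,out.mixing-engine=off,out.fixed-settings=off ...
```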

> Now, the virtio sound card supports multiple formats and bit rates. If
> we have fixed settings turned on in the audiodev, the virtio sound
> card should support only a single freq and frame rate depending upon
> what was passed to the command line which we can get from
> audio_get_pdo_out.  Is this correct?

No, see above, qemu will resample if needed.

> Secondly if fixed settings was not set, what should the get config
> query return with for supported formats and bitrates? For now I am
> returning the formats defined in the enum for the qemu audio
> subsystem.

The device can support every format and sample rate which is supported
by both qemu and the virtio spec.  Whether it actually makes sense to
support outdated formats like S8 is questionable.  On the other hand not
supporting it will not simplify the code much.  Your choice.

> Thirdly, for the set params function, how do I change the params for
> an SWVoiceOut stream?  I could not find a function for changing the
> audsettings for a stream. Should I close the stream and reopen it?

Call AUD_open_out() again with the new settings.

> I learned that the callback passed in AUD_open_out (let's call it the
> write audio callback) is supposed to mix and write the
> buffers to HWVoiceOut. I have written that, the basic algorithm being:
> 
> 1. Pop element from tx virtqueue.
> 2. Get the xfer header from the elem->out_sg (iov_to_buf(elem->out_sg, 1,
> 0, &hdr, sizeof(hdr)))
> 3. Get the buffer from elem->out_sg (iov_to_buf(elem->out_sg, 1,
> sizeof(hdr), &mixbuf, period_bytes))
> 4. AUD_write the buffer

AUD_write returns the number of bytes actually accepted.

In case the audio backend consumed the complete buffer you can go ahead
as described.  Otherwise stop here and resume (retry AUD_write() with
the remaining data) when the callback is called again.

> Also I do not understand what the tx virtqueue handler is supposed to
> do. I have written a handler for the control queue. But I don't know
> what to do about the tx queue for now. I thought it would be something
> similar to what the callback does, it wouldn't play the audio though.

The tx handler probably doesn't need to do much, if anything, if you
do the virtqueue processing in the audio callback as described above.

> Also since the callback does so many things, I do not understand how I
> can implement the pcm stream prepare, start, stop and release
> functions. The prepare function is supposed to allocate resources for
> the stream, but we already do that in the realize_fn for the device
> (AUD_open_out). So should I move that part out of the realize function
> and into the prepare stream function?

start/stop maps to AUD_set_active_out(sw, true/false);
You can probably just ignore prepare + release.

> Another thing that I wanted to ask was about the hda codec. The
> specification mentions that the virtio sound card has a single codec
> device in it. I saw a lot of codec device related code in hda-codec.c
> which I think can be re-used for this. But there were no headers that
> exposed the code elsewhere. After reading through the hda
> specification I realized that these function group nids all come under
> the codec, so the jacks will be pin widgets attached to this codec.
> And the streams will be the streams associated with this codec. But I
> do not understand how I should go about implementing the codec, or if
> I need to implement it considering the already existing source from
> intel-hda and hda-codec.c.

I don't think you can reuse much code, if any.

The AC_* #defines in intel-hda-defs.h should be useful though (jack
colors etc).  Moving them to a separate header file is probably a good
idea.

> Also sorry for the late response, I had fallen ill. Also I had to move
> thrice in the past month, so I couldn't really work on this a lot, and
> I didn't want to write a mail without having any work to show to you
> guys. Thanks a lot for being patient with me. :)

No problem.  I'm likewise busy or on (easter) vacation at times and fail
to send timely answers (sorry for that).

HTH & take care,
  Gerd



