
Re: GStreamer xwidget


From: Yuri Khan
Subject: Re: GStreamer xwidget
Date: Mon, 29 Nov 2021 14:31:20 +0700

On Mon, 29 Nov 2021 at 10:02, Richard Stallman <rms@gnu.org> wrote:

>   > > +         && gst_registry_find_feature (registry, "videotestsrc",
>   > > +                                       GST_TYPE_ELEMENT_FACTORY)
>
>   > > +      xw->gst_source = gst_element_factory_make ("videotestsrc", NULL);
>   ...
>
> You seem to know something about this code.

Disclaimer: I have not worked specifically with GStreamer, but I come
from a Windows background where I used to work with DirectShow. They
are technically similar.

> Can you explain the
> meaning of it?  What the data structures are, and what these
> operations actually do?

I think the easiest way to explain media processing frameworks is to
make an analogy with UNIX pipes.

A typical UNIX process has a single input descriptor (stdin) and two
output descriptors (stdout and stderr). You combine programs into
pipelines by connecting an output of one program to the input of another.

Shell pipelines are typically linear or tree-like.

Media processing frameworks generalize this. Each “filter” or
“element” can have an arbitrary number of inputs and outputs, and
those are typed, so the framework can reject a connection that makes
no sense. For example, you cannot meaningfully connect an audio source
to a video sink.

Filters and connections form a directed acyclic graph.
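
To make the type checking concrete (untested, and I only know
GStreamer from its documentation): gst_element_link () is the call
that connects two elements, and it should refuse a connection whose
media types cannot match:

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    gst_init (&argc, &argv);

    /* An element with one audio output and one with one video input.  */
    GstElement *src = gst_element_factory_make ("audiotestsrc", NULL);
    GstElement *sink = gst_element_factory_make ("ximagesink", NULL);
    if (!src || !sink)
      return 1;               /* plugin not installed on this system */

    /* The framework compares the declared media types of the two ends
       and refuses the connection, so this should print the message.  */
    if (!gst_element_link (src, sink))
      g_print ("link refused: an audio output cannot feed a video input\n");

    return 0;
  }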

The videotestsrc element is a source. That is, it has no inputs, and
has one video output. For this particular filter, the output is a
generated test pattern.

There is a sink filter that can display video. By connecting a
videotestsrc to a sink, you get a minimal complete graph that does
something.
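
Untested, but judging from the documentation, that minimal graph looks
roughly like this in GStreamer's C API (autovideosink stands in for
whatever video sink the system has):

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    gst_init (&argc, &argv);

    GstElement *pipeline = gst_pipeline_new ("test");
    GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
    GstElement *sink = gst_element_factory_make ("autovideosink", NULL);
    if (!src || !sink)
      return 1;               /* plugin not installed on this system */

    /* Put both elements into the graph and connect source to sink.  */
    gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
    gst_element_link (src, sink);

    /* Start the graph; a window with the test pattern should appear.  */
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));  /* runs until killed */
    return 0;
  }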

A full graph for playing a video or audio clip will typically contain
(a C sketch follows the list):

* A file or URL source. This knows the location of the input (a file
name or URL) and outputs a byte stream.

* A splitter or demultiplexer. This takes the byte stream and outputs
separate streams for any video, audio or subtitle tracks included.
Different container formats (e.g. AVI, MP4, Matroska, Ogg) require
different demux filters. (Closest UNIX analogy: zip and tar are
different containers, and use different programs.)

* Decoders for a subset of streams in the clip. (The user may not want
all streams rendered — e.g. when watching a film that contains audio
tracks in English and Spanish, they will want just one.) Each specific
video or audio compression format requires a different decoder filter.
(Analogy: deflate, xz, and bzip2 are different compression formats.) A
decoder takes a compressed stream from the demultiplexer and outputs
raw RGB or YCbCr video or PCM audio.

* “Renderers” or “sinks” for the decoded video and audio. These differ
by target video or audio subsystem (e.g. pulseaudio), hardware
acceleration capabilities, etc.
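
Putting those pieces together, GStreamer can also build such a graph
from a textual description via gst_parse_launch (). Here is an
untested sketch for a hypothetical Ogg clip named clip.ogv with Theora
video and Vorbis audio; every “!” is a connection, and “demux.” refers
back to the named demultiplexer so that both branches start from it:

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    gst_init (&argc, &argv);

    GError *error = NULL;
    GstElement *graph = gst_parse_launch (
      "filesrc location=clip.ogv ! oggdemux name=demux "
      "demux. ! queue ! theoradec ! videoconvert ! autovideosink "
      "demux. ! queue ! vorbisdec ! audioconvert ! audioresample "
      "! autoaudiosink",
      &error);
    if (!graph)
      {
        g_printerr ("cannot build graph: %s\n", error->message);
        return 1;
      }

    gst_element_set_state (graph, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));  /* runs until killed */
    return 0;
  }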


> I'm trying to find out how Emacs determines which plug-ins to use
> for any given argument supplied by the user.  And what do they depend on?
> Does a given file format fully determine the plug-ins to use?

A given file format (a combination of the container format and the
compression formats of the individual streams within it) constrains
the set of possible rendering graphs.

A system may have multiple filters capable of filling each of the
roles listed above. In that case, the application (e.g. Emacs) can
choose between possibilities.
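
This is where the quoted hunk fits in: gst_registry_find_feature ()
asks GStreamer's registry of installed plug-ins whether an element
with a given name is available, so an application can check that every
role can be filled before it builds a graph. An untested sketch using
the same call as the patch:

  #include <gst/gst.h>

  static gboolean
  ogg_theora_playable (void)
  {
    GstRegistry *registry = gst_registry_get ();

    /* The same kind of check the quoted patch does for "videotestsrc":
       are a demuxer for Ogg and a decoder for Theora installed?  */
    GstPluginFeature *demux
      = gst_registry_find_feature (registry, "oggdemux",
                                   GST_TYPE_ELEMENT_FACTORY);
    GstPluginFeature *dec
      = gst_registry_find_feature (registry, "theoradec",
                                   GST_TYPE_ELEMENT_FACTORY);
    gboolean ok = demux && dec;

    if (demux)
      gst_object_unref (demux);
    if (dec)
      gst_object_unref (dec);
    return ok;
  }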

Frameworks have a way to try to automatically complete a rendering
graph for a source; I do not know all the details on how they resolve
ambiguities.
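
In GStreamer the high-level playbin element is that mechanism: you
hand it a URI and it assembles the demuxer, decoders and sinks by
itself (as far as I understand, choosing among candidates by the rank
their plug-ins register). Untested sketch with a hypothetical file URI:

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    gst_init (&argc, &argv);

    GstElement *play = gst_element_factory_make ("playbin", NULL);
    if (!play)
      return 1;

    /* Hypothetical clip; playbin builds the rest of the graph itself.  */
    g_object_set (play, "uri", "file:///tmp/clip.ogv", NULL);

    gst_element_set_state (play, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));  /* runs until killed */
    return 0;
  }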


