
Re: [Discuss-gnuradio] Packet Radio


From: dlapsley
Subject: Re: [Discuss-gnuradio] Packet Radio
Date: Sat, 1 Apr 2006 11:09:18 -0500

Eric,

Thank you for your comments. Greatly appreciated. See my comments inline.

Cheers,

David.

On Apr 1, 2006, at 2:42 AM, Eric Blossom wrote:

On Fri, Mar 31, 2006 at 11:42:37PM -0500, dlapsley wrote:

The document is available at

    http://acert.ir.bbn.com/downloads/adroit/gr-arch-changes-1.pdf

We would appreciate feedback, sent to gnuradio-discuss, or feel free
to email us privately if there's some reason gnuradio-discuss isn't
appropriate.


I think the basic m-block idea looks reasonable, and achieves the goal
of extending GNU Radio without disturbing the existing framework.

Great!

In section 4.5, "Two stage, quasi-real time, hybrid scheduler":

FYI, a given flow graph currently may be evaluated with more than one
thread if it can be partitioned into disjoint subgraphs.  I don't
think that fundamentally changes anything with regard to embedding a
flow graph in an m-block.

Thanks for the pointer. We had missed the partition_graph step in the
scheduler class. As you say, it won't affect the flow graph embedding,
apart from the possibility of having more than one thread spawned
from inside an m-block.

Section 4.5.4, second bullet: "profile header portion".  Limiting the
kind and profile lengths to 8 bits each seems like you're asking for
trouble.  For example, when combining many m-blocks from many
different sub-projects, the universe of kinds could easily exceed 256.

Good point.

Are you assuming that this gets passed across the air, or just within
a given node?  If within a node, for the kind I'd suggest something
reminiscent of interned symbols.  16 bits would probably be big
enough, if each block mapped its arbitrary kind name (string) into
an interned 16-bit value at block init time.

We are assuming that this gets passed between elements of the software
radio, so just within a node. 16 bits sounds good for this.
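
As a rough sketch of what we have in mind for the interning (the class
and names below are purely illustrative, not something from the
document):

    // Hypothetical sketch: intern arbitrary "kind" name strings into
    // 16-bit values at block init time.
    #include <map>
    #include <stdexcept>
    #include <string>
    #include <stdint.h>

    class kind_registry {
    public:
        // Return the existing id for name, or assign the next free one.
        uint16_t intern(const std::string &name)
        {
            std::map<std::string, uint16_t>::const_iterator it = d_ids.find(name);
            if (it != d_ids.end())
                return it->second;
            if (d_ids.size() >= 0xFFFF)
                throw std::runtime_error("kind space exhausted");
            uint16_t id = static_cast<uint16_t>(d_ids.size());
            d_ids[name] = id;
            return id;
        }

    private:
        std::map<std::string, uint16_t> d_ids;
    };

    // Usage: each m-block interns its kind names once at init time, e.g.
    //   static kind_registry registry;
    //   uint16_t my_kind = registry.intern("my-subproject/status-msg");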

I'd also make sure you've got some way to ensure that the data portion
is aligned on the most restrictive architectural boundary (16 bytes on
x86 / x86-64).

Good idea.
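
For example (just a sketch; the constant and function below are made
up for illustration), the header length could be rounded up so the
data portion always lands on a 16-byte boundary:

    // Hypothetical sketch: pad the profile/metadata header so the data
    // portion starts on a 16-byte boundary (the restrictive case on
    // x86 / x86-64).
    #include <cstddef>

    static const std::size_t DATA_ALIGNMENT = 16;

    // Round a header length up to the next multiple of DATA_ALIGNMENT.
    inline std::size_t padded_header_size(std::size_t header_bytes)
    {
        return (header_bytes + DATA_ALIGNMENT - 1) & ~(DATA_ALIGNMENT - 1);
    }

    // e.g. padded_header_size(10) == 16, padded_header_size(16) == 16.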

Section 4.5.5 Standardized Time:

In reading what's there, I don't see how you're going to solve the
problems that I think we've got.  Perhaps an end-to-end example would
help illustrate your proposal?

We'll work on getting a good example into the document.

For example, Table 4.2 says that "Timestamp" carries the value of the
transmit-side "sampling-clock" at the time this message was
transmitted.  If I'm a "source m-block" generating, say a test
pattern, what do I put in the Timestamp field?  Where do I get the
value?  Consider the case where the "real" sampling-clock is across
USB or ethernet.

One option would be to have a sample counter in the source m-block
that is incremented for every data sample that is transmitted.
The value of that sample counter when you transmit a message is what
would be written into the timestamp field.

The timing message ties wall-clock time to this sample count. Every block
in the flow graph would know the relationship between wall-clock time
and sample count from the periodic timing messages, which contain an NTP
timestamp and the equivalent RTP timestamp (i.e. sample count). Given a
single timing/synchronization message, the sampling frequency can then be
used to work out the wall-clock time corresponding to any given sample.
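
A small sketch of the arithmetic we have in mind (the struct and
function names are illustrative only): given the most recent timing
message and the sampling frequency, any block can recover the
wall-clock time of an arbitrary sample:

    // Hypothetical sketch: a timing message pairs an NTP (wall-clock)
    // timestamp with the equivalent sample count (RTP timestamp).
    #include <stdint.h>

    struct timing_message {
        double   ntp_seconds;   // wall-clock time, in seconds
        uint64_t sample_count;  // sample counter value at that instant
    };

    // Wall-clock time of an arbitrary sample, given the most recent
    // timing message and the nominal sampling frequency in Hz.
    inline double wall_clock_of_sample(const timing_message &tm,
                                       uint64_t sample,
                                       double sampling_freq_hz)
    {
        // Signed delta handles samples both before and after the message.
        double delta = static_cast<double>(
            static_cast<int64_t>(sample - tm.sample_count));
        return tm.ntp_seconds + delta / sampling_freq_hz;
    }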

If I want to tell the ultimate downstream end of the pipeline not to
transmit the first sample of the modulated packet until time t, how do
I do that?  That's essential for any kind of TDMA mechanism.

The most direct way is through the signaling interface. The MAC layer
(or other client) can send a signal enabling/disabling transmission at
the end of the pipeline at the appropriate point in time.

Another way to do it would be to have some form of "playout buffer" at
the end of the pipeline that buffers packets until it is time for them
to be sent. In this case the timing transfer mechanism would be used to
enable each block to measure the latency from the time a packet entered
the top of the pipeline until it arrived at (or left) that block. These
latencies would be exposed to the top-level m-block scheduler, which
could then allocate processing time to blocks based on these latencies
in order to ensure that some threshold is not exceeded. Typically, you
could imagine the scheduler just looking at the end-to-end delay and
scheduling processing to keep that below a certain threshold. In a
sense, the scheduler does coarse-grained scheduling to ensure the
end-to-end delay does not go beyond some tolerance, while the playout
buffer does fine-grained scheduling.
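
A very rough sketch of such a playout buffer (all names are made up for
illustration; this is not a proposed interface). Packets carry a
computed transmit time and are held in a priority queue until their
deadline arrives:

    // Hypothetical sketch: hold packets until their scheduled transmit
    // time, then hand them to the transmit path.
    #include <queue>
    #include <vector>

    struct tx_packet {
        double            tx_time;   // wall-clock time at which to send
        std::vector<char> payload;
    };

    // Comparator so the packet with the earliest deadline sits on top.
    struct later_first {
        bool operator()(const tx_packet &a, const tx_packet &b) const
        {
            return a.tx_time > b.tx_time;
        }
    };

    class playout_buffer {
    public:
        void enqueue(const tx_packet &p) { d_queue.push(p); }

        // Send every packet whose deadline has arrived; `send' is any
        // callable taking a const tx_packet&.
        template <typename SendFn>
        void run_once(double now, SendFn send)
        {
            while (!d_queue.empty() && d_queue.top().tx_time <= now) {
                send(d_queue.top());
                d_queue.pop();
            }
        }

    private:
        std::priority_queue<tx_packet,
                            std::vector<tx_packet>,
                            later_first> d_queue;
    };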

In general, I'm not following this section.  I'm not sure if you're
trying to figure out the real time required through each m-block
and/or if you're trying to figure out the algorithmic delay through
each block, and/or if you're trying to figure out the NET to NET
delay between multiple nodes, ...

It's the first two. The initial thought is just to re-use the semantics
and format of RTP/RTCP for transferring timing information between
elements of a radio. There are a couple of options. One option would
be to figure out the wall-clock delay between two blocks within a
flow graph (you could imagine that typically they would be the endpoints
of a pipeline). This way we can make sure the delay through a flow graph
stays within limits by scheduling blocks appropriately. Another option
would be to measure the end-to-end delay between some process in
the MAC (or other controlling entity) and the bottom of a pipeline in
the PHY. There could also be a control loop here to ensure that the
end-to-end delay requirements are not exceeded.
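
As a sketch of the first option (re-using the timing_message and
wall_clock_of_sample() sketch above; again, the names are illustrative),
a downstream block could convert a message's sample-clock timestamp to
wall-clock time and subtract it from the current time to get the
accumulated delay:

    // Hypothetical sketch: accumulated pipeline delay for a message whose
    // header carries the sample-clock timestamp from the top of the
    // pipeline. Reuses timing_message/wall_clock_of_sample() from above.
    inline double pipeline_delay(const timing_message &tm,
                                 uint64_t msg_timestamp,  // from the message header
                                 double sampling_freq_hz,
                                 double now)              // current wall-clock time
    {
        return now - wall_clock_of_sample(tm, msg_timestamp, sampling_freq_hz);
    }

    // A coarse-grained control loop could then compare this delay against
    // a budget and adjust how processing time is allocated to blocks, e.g.
    //   if (pipeline_delay(tm, ts, fs, now) > delay_budget)
    //       /* reprioritize or shed load */;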

We'll work on making this section clearer and get a new revision out
next week.

Also, an example of how we'd map whatever you're thinking about on to
something that looked like a USRP or other h/w would be useful.

Will do.

I guess I'm missing the overall statement of intention.  I.e., what do
the higher layers care about, and how does your proposal help them
realize their goals?

The main goal of section 4.5.5 is to provide mechanisms that will bound
the time it takes for a request to make it all the way to the bottom of
the PHY, and to enable real-time scheduling/playout of data at the
bottom of the PHY.


Meta data:

General questions about meta-data: Does an m-block just "copy-through"
meta-data that it doesn't understand?

Yes. Ideally, we would just be passing references/pointers so there
wouldn't need to be any copying. You could also imagine blocks
"popping" off profiles/sections of metadata specific to them.

Or in the general case, why not just make it *all* key/value pairs?
Why restrict yourself to a single distinguished "data portion"?

Sure. That's a nice way to think about it. It would also be nice to
maintain a hierarchy of metadata so that there was some structure
to it (e.g. grouping by profiles or block type).
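
A sketch of what that hierarchy might look like (the types and names
are illustrative, not a proposed format): metadata as nested key/value
maps, passed by reference so a block can forward subtrees it doesn't
understand untouched:

    // Hypothetical sketch: hierarchical key/value metadata, passed around
    // by reference-counted pointer rather than copied.
    #include <map>
    #include <string>
    #include <boost/shared_ptr.hpp>

    struct metadata_node;
    typedef boost::shared_ptr<metadata_node> metadata_ref;

    struct metadata_node {
        std::map<std::string, std::string>  values;    // scalar entries
        std::map<std::string, metadata_ref> children;  // nested groups,
                                                       // e.g. one per profile
    };

    // Usage sketch:
    //   metadata_ref md(new metadata_node);
    //   metadata_ref phy(new metadata_node);
    //   phy->values["center_freq"] = "2.4e9";
    //   md->children["phy-profile"] = phy;   // grouped by profile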


Section 4.5.8: Scheduler.

I'm not sure I follow Figure 4.8.  Perhaps once I understand the
timing stuff it'll make more sense.

We'll work on making this clearer.

Section 4.5.9: Memory Mgmt

With regard to reference counting, we've had good luck with the
boost::shared_ptr stuff.  It's transparent, interacts well with
Python, and just works.

Thanks for the pointer.
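
For the archives, a minimal illustration of the pattern (the
m_block_sptr typedef below is hypothetical, following the sptr typedef
convention already used in GNU Radio):

    // Hypothetical sketch: a reference-counted handle to an m-block.
    #include <boost/shared_ptr.hpp>

    class m_block {
    public:
        virtual ~m_block() {}
    };

    typedef boost::shared_ptr<m_block> m_block_sptr;

    // The object lives as long as anyone holds a copy of the sptr and is
    // deleted automatically when the last copy goes out of scope:
    //   m_block_sptr blk(new m_block);
    //   m_block_sptr alias = blk;   // refcount == 2; no manual delete anywhere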

Section 4.5.10: Implementation Considerations

* Reentrancy:  I think we need to distinguish between multiple
instances of a block each running in a separate thread, vs. a given
single instance running in multiple threads.  I don't see an
overwhelming need to have a given instance be reentrant, with the
possible exception of communicating commands to it at runtime.  But in
that case, a thread-safe queue of commands might suffice.

Yes. I agree.
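
A minimal sketch of such a thread-safe command queue (all names are
illustrative; boost::mutex is used for brevity): other threads post
commands, and the block drains them from its own thread, so the block
itself never needs to be reentrant:

    // Hypothetical sketch: a thread-safe queue of runtime commands.
    #include <queue>
    #include <string>
    #include <boost/thread/mutex.hpp>

    struct command {
        std::string name;   // e.g. "set_center_freq"
        double      arg;    // single numeric argument, for brevity
    };

    class command_queue {
    public:
        // Called from any thread (e.g. the MAC or control code).
        void post(const command &cmd)
        {
            boost::mutex::scoped_lock lock(d_mutex);
            d_queue.push(cmd);
        }

        // Called from the block's own thread; returns false if empty.
        bool try_pop(command &cmd)
        {
            boost::mutex::scoped_lock lock(d_mutex);
            if (d_queue.empty())
                return false;
            cmd = d_queue.front();
            d_queue.pop();
            return true;
        }

    private:
        boost::mutex        d_mutex;
        std::queue<command> d_queue;
    };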

That's it for now!
Eric

Thanks so much for the feedback. That is great!




