Re: [Discuss-gnuradio] High Flowgraph Latency in 3.6.4.1


From: Daniele Nicolodi
Subject: Re: [Discuss-gnuradio] High Flowgraph Latency in 3.6.4.1
Date: Tue, 02 Jun 2015 22:21:55 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:31.0) Gecko/20100101 Thunderbird/31.7.0

Hello,

I'm resurrecting this old thread because I'm in almost exactly the situation
Matt describes below, but I'm having a hard time coming up with a solution.

I'm working with an N210. In my application I generate a modulation
signal that is sent to a system; the system's response is demodulated
and processed, and the modulation signal is adjusted according to the
result of this processing.

Because of the modulation-demodulation scheme, I would like to keep a
constant phase relation between the TX and RX channels. For this purpose
I use set_start_time() calls on the TX and RX channels. The result of the
signal processing is communicated to the modulation source via
asynchronous messages. The sample rate for both TX and RX is 1.0 MHz.
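
For reference, my setup looks roughly like this (the device address and
the 1 s start offset are placeholders):

    from gnuradio import uhd

    samp_rate = 1e6

    usrp_src = uhd.usrp_source("addr=192.168.10.2",
                               uhd.stream_args(cpu_format="fc32", channels=[0]))
    usrp_snk = uhd.usrp_sink("addr=192.168.10.2",
                             uhd.stream_args(cpu_format="fc32", channels=[0]))
    usrp_src.set_samp_rate(samp_rate)
    usrp_snk.set_samp_rate(samp_rate)

    # Zero the device clock, then start both streams at the same device
    # time so that TX and RX keep a fixed phase relation.
    usrp_src.set_time_now(uhd.time_spec(0.0))
    start = uhd.time_spec(1.0)  # far enough ahead to cover tuning/setup
    usrp_src.set_start_time(start)
    usrp_snk.set_start_time(start)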

What I observe is a ~250 ms delay between the asynchronous messages and
changes in the modulation signal; at 1 MS/s that corresponds to roughly
250,000 buffered samples. The delay scales linearly with the TX sample
rate, so I conclude it is due to buffering on the TX side.

I haven't managed to find a way to route any signal from the RX side to
the TX side without causing buffer underruns, I think because the start
times of the TX and RX sides are fixed.

Does anyone have an idea how to solve this problem?


On 17/10/14 19:16, Matt Ettus wrote:
> We see this issue a lot with applications that only transmit, and which
> transmit continuously.  The problem is that you end up generating
> samples far in advance of when you really know what you want to
> transmit, because there is no rate-limiting on the production side.
> 
> Some general principles -- Large buffers *allow* you to deal with high
> latency.  Large buffers do not *create* high latency unless the
> application is not designed properly.  A properly designed application
> will work with infinitely large buffers as well as it does with
> minimally sized ones.
> 
> Shrinking buffers may allow your application to work, but that isn't
> really the best way to solve this problem.  The best way to solve the
> problem is to modify your head-end source block to understand wall-clock
> time.  The easiest way to do that if you are using a USRP is to
> instantiate a UHD source (i.e. a receiver) at a relatively low sample
> rate and feed it into the source you have created.  
> 
> Your source block should then look at timestamps on the incoming samples
> (it can throw the samples themselves away).  It should generate only
> enough samples to cover the maximum latency you want, and it should
> timestamp those transmit samples.  For example, if it receives samples
> timestamped with T1, it should generate samples with timestamps from
> T1+L1 to T1+L1+L2, where L1 is the worst-case flowgraph and device
> latency, and L2 is the worst case reaction time you are looking for. 
> Thus, if you suddenly get a message from your operator to send a
> message, you know that you will never need to wait for more than L2
> seconds.  Thus, you can bound your worst case reaction time.
>
> I think we should generate an example app to do this, because the issue
> comes up periodically, especially among the space communications crowd. 
> It is a design pattern we really should document.
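
In case it helps the discussion, here is my attempt at a minimal sketch of
that pattern as a Python block. It is written against the 3.7-series
Python block API; the block name, the L1/L2 defaults, the forecast policy
and the constant payload are my own placeholders, not a tested design:

    import numpy as np
    import pmt
    from gnuradio import gr

    class paced_source(gr.basic_block):
        """Head-end source whose production is paced by RX timestamps."""

        def __init__(self, tx_rate=1e6, rx_rate=100e3, L1=0.050, L2=0.020):
            gr.basic_block.__init__(self, name="paced_source",
                                    in_sig=[np.complex64],   # low-rate RX clock
                                    out_sig=[np.complex64])  # TX samples
            self.tx_rate = tx_rate
            self.rx_rate = rx_rate
            self.L1 = L1         # worst-case flowgraph + device latency (s)
            self.L2 = L2         # worst-case reaction time we accept (s)
            self.t0 = None       # device time of RX item 0
            self.tx_t0 = None    # device time of TX item 0
            self.tx_count = 0    # TX items produced so far

        def forecast(self, noutput_items, ninput_items_required):
            # Any amount of RX input lets us make progress.
            for i in range(len(ninput_items_required)):
                ninput_items_required[i] = 1

        def general_work(self, input_items, output_items):
            nin = len(input_items[0])
            nread = self.nitems_read(0)

            # The UHD source tags the stream with rx_time = (full_secs, frac_secs).
            for tag in self.get_tags_in_range(0, nread, nread + nin):
                if pmt.symbol_to_string(tag.key) == "rx_time":
                    full = pmt.to_uint64(pmt.tuple_ref(tag.value, 0))
                    frac = pmt.to_double(pmt.tuple_ref(tag.value, 1))
                    self.t0 = full + frac - tag.offset / self.rx_rate

            self.consume(0, nin)  # the RX payload itself is thrown away
            if self.t0 is None:
                return 0          # no time reference yet

            # Current device time T1, inferred from the RX sample count.
            t_now = self.t0 + float(nread + nin) / self.rx_rate
            if self.tx_t0 is None:
                # Schedule the first TX sample at T1 + L1 and tell the
                # USRP sink about it with a tx_time tag.
                self.tx_t0 = t_now + self.L1
                t = self.tx_t0
                self.add_item_tag(0, self.nitems_written(0),
                                  pmt.string_to_symbol("tx_time"),
                                  pmt.make_tuple(pmt.from_uint64(int(t)),
                                                 pmt.from_double(t - int(t))))

            # Never produce beyond t_now + L1 + L2.
            limit = int((t_now + self.L1 + self.L2 - self.tx_t0) * self.tx_rate)
            n = min(len(output_items[0]), limit - self.tx_count)
            if n <= 0:
                return 0

            output_items[0][:n] = 1.0 + 0.0j  # placeholder modulation payload
            self.tx_count += n
            return n

The point is that production is paced by consumption of the RX stream, so
the TX buffers never fill more than L1 + L2 seconds ahead of the device
clock, and a message from the processing chain takes effect within L2 at
worst.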


Any news about this example app?

Thank you! Cheers,
Daniele




