
Re: [Discuss-gnuradio] Re: Problem with FIR filter


From: John Wilson
Subject: Re: [Discuss-gnuradio] Re: Problem with FIR filter
Date: Thu, 22 Jul 2010 19:45:22 +0100



On Thu, Jul 22, 2010 at 6:48 PM, Eric Blossom <address@hidden> wrote:
On Thu, Jul 22, 2010 at 05:13:09PM +0100, John Wilson wrote:
> So does no one have any idea about this? It doesn't seem like it should be
> too difficult to solve. I've worked around the problem for the simulations
> I'm currently running, but it's a shame, as it reveals a fundamental
> incapacity for GNU Radio to perform some other interesting simulations and
> techniques, notably (for me at least) in the area of repeat requests.
> Currently I'm implementing a type II HARQ repeat request system by
> dynamically reconfiguring a flow graph during operation. This makes use of
> the run() command a lot, but if every filter in the system is reset each
> time I call run(), it's going to be almost impossible to implement.

Part of the challenge in implementing dynamic reconfiguration is what
to do with any intermediate samples left over in the buffers.  This
brings up questions such as do the blocks own the buffers?  If so,
does the upstream block own the buffer, or the downstream block?  What
about the cases where there are multiple downstream readers of a
buffer?  What happens if you replace the upstream block?  What about
the downstream block?

With regards to the FIRs, I think what you're seeing is a side effect
of our decision to implement the delay line implicitly. That is, it's
handled using the "history" mechanism, and involves pre-stuffing the
input buffer with zeros.  The chief advantage of this strategy is
performance.  This idea, combined with our zero-copy MMU circular
buffer trick, means there are no corner cases in the implementation of
the FIRs (no "end-of-buffer wrap around" etc.) and no cycles are
expended storing anything in any delay line.  (Having versions of the
FIRs that explicitly handle a delay line is definitely possible.)
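To make the trade-off concrete, here is a stylized sketch (not GNU Radio's actual implementation) contrasting the two strategies: the "history" approach pre-stuffs the input stream with zeros once, so the filter can always read backwards in the same linear buffer, while an explicit delay line spends cycles shifting state on every sample.

```python
def fir_with_history(taps, samples):
    """Implicit delay line: prepend len(taps)-1 zeros ("history") once,
    then every output is a straight dot product over the linear buffer."""
    history = [0.0] * (len(taps) - 1)
    buf = history + list(samples)          # pre-stuffed input buffer
    out = []
    for n in range(len(samples)):
        # taps[0] multiplies the newest sample, taps[-1] the oldest
        acc = sum(taps[k] * buf[n + len(taps) - 1 - k]
                  for k in range(len(taps)))
        out.append(acc)
    return out

def fir_with_delay_line(taps, samples):
    """Equivalent filter that explicitly shifts a delay line each sample."""
    delay = [0.0] * len(taps)
    out = []
    for x in samples:
        delay = [x] + delay[:-1]           # cycles spent moving state around
        out.append(sum(t * d for t, d in zip(taps, delay)))
    return out
```

Both produce identical output; the history version just never touches a separate state buffer, which is the performance win described above.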


I understand, so I'm guessing the operation of the FIRs is more 'fundamental' than that of the other common blocks, in that it's intimately tied up with the lower-level buffer management of the flow graph itself. I've read about the history mechanism, and I understand that reconfiguring flow graph edges without resetting it could cause problems. However, in my case I don't need the graph edges connecting to the FIR to change at all; in fact I need them to stay in exactly the state they were left in when wait() was called, to maintain data integrity through these buffers (and therefore the FIR). To put this into context, the part of the graph that's causing a problem is part of the channel model, a Watterson model, which goes something like this:

                                                                  Signal In
                                                                      |
                                                                      v
WGN -> Gaussian Filter -> Interpolation -> Anti-Aliasing filter -> Multiply
                                                                      |
                                                                      v
                                                                    AWGN
                                                                      |
                                                                      v
                                                                 Signal Out

The anti-aliasing filter has a large number of taps compared to each packet length. I was hoping to send one packet at a time, check it, and re-run the graph, starting the model from where it left off. The model takes a few thousand bits sent through it to fill up its buffers, so the first few packets sent through it are received as rubbish; when sending one packet at a time between resets, it's always rubbish! I fixed this by sending all my packets in one transmission and checking them at the block level with a custom block, but the repeat request techniques I have require that the run() method be called.
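The effect described above can be sketched in a few lines (stylized, not GNU Radio code): a streaming FIR whose state either persists across runs or is zeroed between them. With persistence, processing the stream in two runs matches a single long run exactly; with a reset between runs (the current behaviour), every run starts over in its transient and the output diverges.

```python
class StreamingFIR:
    def __init__(self, taps):
        self.taps = taps
        self.delay = [0.0] * (len(taps) - 1)

    def reset(self):
        # What the implicit-history filter effectively does on each run():
        # the pre-stuffed zeros reappear, discarding residual samples.
        self.delay = [0.0] * (len(self.taps) - 1)

    def process(self, chunk):
        buf = self.delay + list(chunk)
        out = [sum(self.taps[k] * buf[n + len(self.taps) - 1 - k]
                   for k in range(len(self.taps)))
               for n in range(len(chunk))]
        # carry the last len(taps)-1 samples into the next call
        self.delay = buf[len(buf) - (len(self.taps) - 1):]
        return out

taps = [0.25, 0.5, 0.25]
stream = [float(n % 5) for n in range(8)]

# One long run: the reference output.
one_shot = StreamingFIR(taps).process(stream)

# Two runs with persistent state reproduce the single run exactly...
f = StreamingFIR(taps)
persistent = f.process(stream[:4]) + f.process(stream[4:])

# ...but resetting between runs corrupts the start of every run.
g = StreamingFIR(taps)
first = g.process(stream[:4])
g.reset()
broken = first + g.process(stream[4:])
```

With a filter whose tap count is large relative to the packet length, that corrupted start covers the whole packet, which is exactly the "always rubbish" symptom.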

Looking back at my ARQ test code (which I wrote a while back) I can see that it was a bit of a hack: I'm instructing my transmit source to 'send nothing' when a repeat is needed, by which I mean the start point is asked to run its work function again, copy nothing to the buffer, and return 0. The repeat packet generator then generates the actual data to be transmitted. This fools GNU Radio into believing that it's sending something from the start point of the graph, and is clearly related to this issue. So me calling this a 'serious bug' might have been a little melodramatic!
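A stylized mock of that 'send nothing' hack (the class and names here are illustrative, not the GNU Radio block API): the head-of-graph source is asked for output items and, when a repeat is pending, copies nothing and returns 0, so the scheduler sees a source that simply produced no data while a downstream repeat-packet generator injects the retransmission.

```python
class DummySource:
    """Head-of-graph source that can be told to produce nothing."""

    def __init__(self, packets):
        self.packets = list(packets)
        self.send_nothing = False

    def work(self, output_items):
        if self.send_nothing or not self.packets:
            return 0                     # produce nothing this call
        pkt = self.packets.pop(0)
        output_items[:len(pkt)] = pkt    # copy one packet into the buffer
        return len(pkt)                  # number of items produced
```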


I understand that you're relating to this like it's a serious bug,
but I think it would be more productive if you would specify exactly
the behavior that you want, see if that spec is consistent with
expectations that other folks may have (for their own good reasons),
and then we could discuss where in the code the modifications would
go.

A related question is when (if ever) should runtime state (both
external to a block, like the buffers) and internal state (say PLL
state inside of a block) be reset.


This is a good question, and I think it would be nice to build a little more user involvement into this. I think there needs to be a further state that can be entered on request, in which the graph can be paused under certain conditions without resetting some or all of the buffers on the graph edges. For instance, when the graph detects that no further data is being 'pulled' towards the end point (I'm guessing that's what causes the wait() function to return), could the graph, or certain pre-defined edges of it, be put into a 'paused' state that maintains residual data within the buffers of those edges? This could perhaps be specified by the user through a modified connect() function, for example connect_persistent().
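A hypothetical sketch of that proposal (connect_persistent() is not an existing GNU Radio call, and this edge class is invented for illustration): an edge buffer that, when marked persistent, keeps its residual samples across a stop so the next run consumes them first, instead of being flushed like a normal edge.

```python
from collections import deque

class Edge:
    """Toy model of a flow graph edge buffer with an opt-in 'persistent' flag."""

    def __init__(self, persistent=False):
        self.persistent = persistent
        self.buf = deque()

    def write(self, items):
        self.buf.extend(items)

    def read(self, n):
        # hand over up to n buffered items, oldest first
        return [self.buf.popleft() for _ in range(min(n, len(self.buf)))]

    def on_stop(self):
        # A persistent edge survives wait()/stop(); a normal edge is flushed.
        if not self.persistent:
            self.buf.clear()
```

Under this model a connect_persistent() variant would do nothing more than set the flag on the edges it creates, marking which buffers skip the flush.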

 
I can see use cases arguing for a number of models.  It basically
depends on what you are trying to do.

So, how would you want it to behave?

Some of the primitive operations to consider are tb.start(),
tb.stop(), tb.wait() and tb.run() == { tb.start(); tb.wait(); }

Adding a "reset" method to gr_basic_block is also possible.  Of course
then you need to figure out when the runtime (or user) should call it.
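As a minimal mock of the primitives listed above (illustrative only, not the gr_top_block implementation): run() is just start() followed by wait(), and stop() is the natural place where the runtime would decide whether to call a per-block reset(); the reset_on_stop flag here stands in for exactly the policy question being raised.

```python
class MockBlock:
    def __init__(self):
        self.state = []        # stand-in for internal state, e.g. PLL state

    def reset(self):
        self.state.clear()

class MockTopBlock:
    def __init__(self, blocks, reset_on_stop=True):
        self.blocks = blocks
        self.reset_on_stop = reset_on_stop
        self.running = False

    def start(self):
        self.running = True

    def wait(self):
        self.running = False   # stand-in for blocking until the graph drains

    def stop(self):
        self.running = False
        if self.reset_on_stop:             # the open policy question
            for b in self.blocks:
                b.reset()

    def run(self):
        # tb.run() == { tb.start(); tb.wait(); }
        self.start()
        self.wait()
```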
 
When using lock/unlock for dynamic reconfiguration, the internal
representation of the graph is actually reconfigured in the midst of
the final unlock, and that code uses start() and stop() to pause the
action in the graph while it shuffles everything around.

I looked at the lock() and unlock() functions but I didn't find them useful, presumably because of the stop() call, which resets everything and seems to cause every block to reinitialise.
 
Again, I think your question/complaint is valid, and figuring out the
desired behavior at all the corner cases is important.  Then of course
there's figuring out how to implement it :-)

If you get a chance, please have a go at it.

Eric

Where would I start looking at these things? In particular for two use cases: one being a modification to the generic FIR functions, whereby the filter could be instructed to copy its input buffer into an internal buffer when a graph finishes, and then copy this back in lieu of using the add_history() function; the second being the additional connect_persistent() function described above.

As a side note, I have a load of code relating to LDPC, ARQ and turbo equalisation that I'd like to share. I thought I'd put a message on this list about this a while ago and assumed I was being ignored, but (as you know :) I was being a fool and sending it via the wrong email address!

Cheers,

John
