Re: [Discuss-gnuradio] Understanding produce, consume, and data streams


From: Michael Dickens
Subject: Re: [Discuss-gnuradio] Understanding produce, consume, and data streams
Date: Thu, 13 Jul 2017 10:24:06 -0400

Hi AB - If I recall correctly, the scheduler does not clear or zero the output stream, so for all practical purposes it contains random data on entry to ::general_work (whether in C++ or Python). Also IIRC, the scheduler only guarantees that there is enough output buffer space for the amount of input it provides, based on the block's forecast; it makes no guarantee about where in the output buffer output_items will point. So you cannot rely on the output buffer pointer staying put between calls to ::general_work: it can move by more than the number of items you produced. To guarantee data integrity, you must assign output data with "="; using "+=" reads back whatever stale data happens to be in the buffer, so it cannot produce reliable results. I'd have to check the actual code to verify my recollection, but I believe the above is correct. Does this answer your query, directly or indirectly? Hope this is useful! - MLD
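
For concreteness, a minimal sketch of the safe pattern Michael describes: overwrite the output with "=" rather than accumulating into it. The block name "passthrough" is mine; otherwise it mirrors Arjun's block quoted below.

import numpy as np
from gnuradio import gr

class passthrough(gr.basic_block):
    def __init__(self):
        gr.basic_block.__init__(self,
            name="passthrough",
            in_sig=[np.float32],
            out_sig=[np.float32])

    def forecast(self, noutput_items, ninput_items_required):
        # One input item is needed per requested output item.
        for i in range(len(ninput_items_required)):
            ninput_items_required[i] = noutput_items

    def general_work(self, input_items, output_items):
        in0 = input_items[0]
        out = output_items[0]
        common = min(len(in0), len(out))
        # Assign with "=": the output buffer is uninitialized and may
        # hold stale data, so never read-modify-write it.
        out[:common] = in0[:common]
        self.consume_each(common)
        return common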

On Thu, Jul 13, 2017, at 10:01 AM, Bakshi, Arjun wrote:

Apologies for a possible duplicate message. 


I've made a few OOT blocks and thought I had a handle on the process, but I've found something I don't understand. I have a general block that "passes" the input to the output stream. However, instead of doing something like out[:] = in0[:], I did out[:] += in0[:] and found something strange. The full code is as follows:
 
import numpy as np
from gnuradio import gr

class check(gr.basic_block):
    def __init__(self):
        gr.basic_block.__init__(self,
            name="check",
            in_sig=[np.float32],
            out_sig=[np.float32])

    def forecast(self, noutput_items, ninput_items_required):
        # Require one input item per requested output item.
        for i in range(len(ninput_items_required)):
            ninput_items_required[i] = noutput_items

    def general_work(self, input_items, output_items):
        in0 = input_items[0]
        out = output_items[0]
        # Process only as many items as both buffers can hold.
        common = min(in0.shape[0], out.shape[0])
        out[:common] += in0[:common]  # changing += to = fixes/hides the problem
        self.consume_each(common)     # consume 'common' input items
        return common                 # report 'common' output items produced

 
I thought that by calling consume_each and returning common, I'd be telling the scheduler to advance the input and output streams by "common" items. However, in this case the scheduler doesn't seem to do that, and I think it reuses the same region of the output stream. I've attached a plot of the input and output.

What's really going on here?

I've simplified the block here to focus on the issue. My actual application was a filter that selected parts of the input stream and wrote the filtered version to the corresponding parts of the output stream. I ran into similar issues there as well.
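
The same rule applies to a filter like that: assign (or explicitly zero) every output item you return, and never accumulate into the buffer. A sketch under that assumption, with a hypothetical amplitude-threshold rule standing in for the real selection logic:

import numpy as np
from gnuradio import gr

class select_filter(gr.basic_block):
    def __init__(self, threshold=0.5):  # hypothetical selection parameter
        gr.basic_block.__init__(self,
            name="select_filter",
            in_sig=[np.float32],
            out_sig=[np.float32])
        self.threshold = threshold

    def forecast(self, noutput_items, ninput_items_required):
        for i in range(len(ninput_items_required)):
            ninput_items_required[i] = noutput_items

    def general_work(self, input_items, output_items):
        in0 = input_items[0]
        out = output_items[0]
        common = min(len(in0), len(out))
        keep = np.abs(in0[:common]) >= self.threshold  # hypothetical rule
        # Every returned item is assigned exactly once; unselected
        # positions are explicitly zeroed instead of left stale.
        out[:common] = np.where(keep, in0[:common], 0.0)
        self.consume_each(common)
        return common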

