
[Discuss-gnuradio] Fwd: Correlation Estimator in 3.7.10


From: Steven Knudsen
Subject: [Discuss-gnuradio] Fwd: Correlation Estimator in 3.7.10
Date: Tue, 27 Sep 2016 09:37:06 -0600

Alright, after some more thought I believe I understand what is going on.

By looking at the cross-correlation power of the current input samples, the threshold can be derived from a desired false-alarm rate. An “arbitrary” threshold can then be set independently of the received signal power.

Unfortunately, for the test_corr_est.grc example, you just can’t set the threshold high enough to avoid false alarms. I tried the CE block with a threshold as high as 0.999999 and the output was still a mess, though much improved.

That said, for my own application/flowgraph, setting the CE block threshold to 0.9999 appears to work. I can set it as “low” as 0.999, but that’s the limit.

This makes me wonder why there is a threshold parameter at all. Maybe just fix the calculated d_pfa value at 10 or something and be done with it. Of course, we all hate magic numbers, and doing so would introduce a second magic number into the algorithm (the first being the multiplier of 4 in the peak-thresholding logic).

Thanks for your patience…


Steven Knudsen, Ph.D., P.Eng.

From a certain point onward there is no turning back. That is the point that must be reached. - Franz Kafka

Begin forwarded message:

From: Steven Knudsen <address@hidden>
Subject: Correlation Estimator in 3.7.10
Date: September 27, 2016 at 07:40:17 MDT
To: GNURadio Discussion List <address@hidden>
Cc: Knud <address@hidden>

Hi All,

I am pretty confused by some of the changes to the threshold detection algorithm in corr_est_cc_impl.cc.

Previously, _set_threshold() computed the zero-offset autocorrelation of the reference symbols when the correlation estimator was instantiated and scaled it by the desired relative threshold, giving a fixed (constant for the run) detection threshold, d_thresh. That makes sense to me and appeared to work pretty well.

Now, in _set_threshold(), a false-alarm scale factor is calculated as d_pfa = -logf(1.0f - threshold). In the work function, the detection threshold is recalculated on each invocation as the mean of the squared magnitude of the current correlation of the reference symbols with the input symbols, scaled by d_pfa.

How can the threshold for detecting the peak of the correlation against the reference symbols be a function of the input symbols? I know that using an absolute threshold based on only the reference symbols may be a problem because you can’t know the magnitude of the received input symbols, but I think that’s why there is a block callback for set_threshold().

The net result is that when one runs the correlation estimator, a bunch of peaks are output in the vicinity of the sync sequence in the input stream. The test_corr_est.grc example in gr-digital shows this well.

I checked the commit history and the only relevant comment is “Works with arbitrary scaling”. This suggests the changes are meant to make the threshold track arbitrary input signal power, which does make some sense. Again, I thought that was what the callback was for, but even that is clunky, and I could be wrong.

Anyway, all I know is that the correlation estimator used to work really well for me and is now badly broken in my applications. Maybe I’m not setting it up correctly, but as far as I can tell the block parameters are the same.

Thanks for your consideration of this question,

steven


Steven Knudsen, Ph.D., P.Eng.

All the wires are cut, my friends
Live beyond the severed ends.  Louis MacNeice


