
Re: [Discuss-gnuradio] Using DSP for precise zero crossing measurement?


From: Bob McGwier
Subject: Re: [Discuss-gnuradio] Using DSP for precise zero crossing measurement?
Date: Tue, 19 Sep 2006 12:21:56 -0400
User-agent: Thunderbird 1.5 (X11/20051201)

Lee:

The marginal distributions for the parameters never contain as much information as the joint probability distribution, except in those cases where the underlying problem is truly separable. From an information-theoretic and probabilistic point of view, it is almost always better to use the joint conditional distribution of the parameters given the observations and do the estimation jointly.

In this case, however, we are actually making measurements on two independent signals, and the observations are r1 and r2, as you say. Since the two problems are truly separate, it is better here to measure each independently. A joint observation process would arise only if John measured the difference signal through a mixer, a time-interval counter, etc.
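
(In symbols: when the observation channels are independent, the joint likelihood factors as p(r1, r2 | theta1, theta2) = p(r1 | theta1) p(r2 | theta2), so maximizing it jointly and maximizing each factor separately give the same estimates; nothing is lost by separating.)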

Achilleas identified incorrect parameters given John's statement of the problem: he used amplitude and phase as the parameters, whereas John's original statement fixes the amplitude at a constant 1 V peak-to-peak and leaves frequency and phase as the unknowns. Your formulation of the problem is correct, but it is more general than John's statement, since you allow A1 and A2 to be (possibly) different while John has A1 = A2 = 1. Each channel then reduces to the model

r(t) = sin(w t + phi) + n(t)

and determining w and phi in the presence of noise n(t) is just about the oldest problem in town. Let us consider John's original problem, given the system he claims he has. Since John states that he makes the measurements on each signal separately using a coherent system, he can repeatedly estimate w and phi using FFTs and downsampling.

One way to reduce the impact of the noise, given a fixed-size FFT, is to exploit the stated coherence and form long-term autocorrelations: compute the autocorrelations using FFTs and then simply add them, complex bin by bin. This coherent addition of the correlations produces a very long-term autocorrelation whose estimation accuracy goes up like N, where N is the number of FFTs added. THIS ASSUMES THE SYSTEM IS REALLY COHERENT FOR THAT N * FFTsize SAMPLES and THE NOISE REALLY IS ZERO-MEAN GAUSSIAN. Phase noise, drift, non-Gaussian noise, etc. will destroy the coherence assumption and the Gaussian properties used in the analysis.

He can reduce the ultimate computational complexity by mixing, downsampling, and doing the FFT again, then mixing, downsampling, and doing the FFT again, and so on, until the final bin traps his w to sufficient accuracy for his needs; phi is then simply read off from the FFT coefficient. Mixing and downsampling is a usable approach, but careful bookkeeping must be done on the phase (group) delay through the downsampling filters, or phi will be in error by the neglected delay.
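
For concreteness, here is a minimal numpy sketch of the bin-by-bin coherent addition described above. It averages the raw block FFTs rather than forming explicit autocorrelations, and it assumes the tone sits essentially on an FFT bin so that successive blocks add in phase; the array r, the sample rate fs, and the block parameters are placeholders, not anything from the thread:

    import numpy as np

    def coherent_tone_estimate(r, fs, nfft=8192, nblocks=64):
        # Add the FFTs of successive blocks, complex bin by bin.
        # Valid only if the system stays coherent over nblocks*nfft
        # samples and the noise is zero-mean Gaussian (see above).
        acc = np.zeros(nfft, dtype=complex)
        for k in range(nblocks):
            acc += np.fft.fft(r[k*nfft:(k+1)*nfft])
        k0 = 1 + np.argmax(np.abs(acc[1:nfft//2]))  # peak bin, skipping DC
        w_hat = 2*np.pi * k0 * fs / nfft            # frequency in rad/s
        # For sin(w t + phi), the angle of the peak bin is phi - pi/2.
        phi_hat = np.angle(acc[k0]) + np.pi/2
        return w_hat, phi_hat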

This is one approach that I believe John can take, and it is pretty simple to put together, even if it is not necessarily the most computationally benign. He can grab tons of samples and do this in Octave on his favorite Linux computer. If the signals are not actually 1 V pk-pk, this will also yield the amplitude, since the power of the sinusoid as measured by the FFTs above gives the amplitude directly. If this is to be done in real time, then a cascade of power-of-2 reductions in sample rate and frequency offset can be applied until the parameters are trapped sufficiently for the exercise, as sketched below.
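
A sketch of one stage of that mix/downsample/FFT-again cascade, under the same caveats; the decimation factor, FFT size, and the use of scipy.signal.decimate are illustrative choices, not anything from the thread:

    import numpy as np
    from scipy.signal import decimate

    def zoom_refine(r, fs, w_coarse, q=8, nfft=8192):
        # Mix the coarse estimate down to DC, lowpass and downsample,
        # then FFT again at q-times finer bin spacing.
        n = np.arange(len(r))
        bb = r * np.exp(-1j * w_coarse * n / fs)
        # zero_phase=True filters forward and backward, sidestepping
        # the filter group delay that would otherwise bias phi.
        bb = decimate(bb, q, ftype='fir', zero_phase=True)
        X = np.fft.fft(bb[:nfft])
        k0 = np.argmax(np.abs(X))
        dw = 2*np.pi * np.fft.fftfreq(nfft, d=q/fs)[k0]  # residual, rad/s
        return w_coarse + dw

Each pass shrinks the bin width by the decimation factor; iterate until w is trapped tightly enough, then read phi from the angle of the final peak coefficient.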

Bob


Lee Patton wrote:
On Mon, 2006-09-18 at 12:32 -0400, Achilleas Anastasopoulos wrote:
John,

If you want to measure the time difference between two sine waves in noise and accuracy is your primary objective, then you should start with the "optimal" solution to the problem and not with an ad-hoc technique such as measuring the zero crossings.

In the simplest scenario, if your model looks like:
s(t) = s(t; A1, A2, t1, t2) = A1 sin(w (t-t1)) + A2 sin(w (t-t2))
r(t) = s(t) + n(t)

John,
Does your model look like the one Achilleas described above, or is it
like the following?
s1(t) = A1*sin(w1*(t-t1) + phi1);  r1(t) = s1(t) + n1(t)
s2(t) = A2*sin(w2*(t-t2) + phi2);  r2(t) = s2(t) + n2(t)

In this model, you have separate observations of each sinusoid, i.e., r1(t) and r2(t) respectively.

Bob, Achilleas -

On an intuitive level, it seems to me that (ML) estimating the
parameters of s1(t) and s2(t) from r1(t) and r2(t) respectively, instead
of jointly from r(t) = s1(t)+s2(t)+n(t), would result in better
accuracy.  Do you agree?  At the very least, I think an adaptive
technique would converge faster if only three parameters needed to be
estimated instead of six. (Obviously, you would estimate both signals in
parallel to get dphi = phi1-phi2).
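
A self-contained numpy sketch of that separate-then-difference idea; the sample rate, tone frequency, phases, and noise level below are made up purely for illustration:

    import numpy as np

    def tone_phase(r, fs, nfft=65536):
        # Single-tone w and phi (sin convention) from one channel's FFT peak.
        X = np.fft.fft(r[:nfft])
        k0 = 1 + np.argmax(np.abs(X[1:nfft//2]))   # skip the DC bin
        return 2*np.pi*k0*fs/nfft, np.angle(X[k0]) + np.pi/2

    # Hypothetical test signals: same frequency, phases 0.3 and 0.7 rad.
    fs = 65536.0                    # chosen so 1 kHz lands exactly on a bin
    t = np.arange(262144) / fs
    rng = np.random.default_rng(0)
    r1 = np.sin(2*np.pi*1000*t + 0.3) + 0.1*rng.standard_normal(t.size)
    r2 = np.sin(2*np.pi*1000*t + 0.7) + 0.1*rng.standard_normal(t.size)

    # Estimate each channel in parallel, then difference the phases.
    w1, phi1 = tone_phase(r1, fs)
    w2, phi2 = tone_phase(r2, fs)
    dphi = np.angle(np.exp(1j*(phi1 - phi2)))      # wrapped to (-pi, pi]

With these parameters, dphi should come out near the constructed -0.4 rad.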

-Lee






--
Robert W. McGwier, Ph.D.
Center for Communications Research
805 Bunn Drive
Princeton, NJ 08540
(609)-924-4600
(sig required by employer)





