Hi Ben,
I love this topic of how to match hardware clocks just as much as
you do, but I personally think that solving the two-clock problem
between an SDR receiver and an audio device is a bit out of scope
for a GSoC project on a broadcast standard implementation. It's
also not part of the milestones / deliverables that Luca set in
cooperation with the community, and it would be a considerable
effort, so I wouldn't advise Luca to do it;
however, I'd really like to see this happen in another context.
Maybe we can help you get it working, preferably with an analog
audio modulation first? Also, haven't we discussed this
extensively before? I don't see how anything has changed about the
fact that your PC simply can't measure what you call t_A with
sufficient accuracy for this approach to work without much
higher-level tracking. But that's a discussion we really shouldn't
be having (again) within the context of an unrelated GSoC project.
Background: in digital modulation, you have to recover the TX
sample clock anyway, and then this problem boils down to matching
the studio's audio sample rate to your soundcard's sample rate.
Studio equipment typically has rather good oscillators (I think
better than 10 ppm offset), and even the cheapest USB codec from
Texas Instruments advises you to use a <=25 ppm oscillator. That
leads to a total worst-case rate offset of 35 ppm; with a 48 kHz
sample clock, that's an offset of about 1.7 Hz, or one sample
every 0.6 s. Thus, assuming your ALSA device uses 1024-sample
periods, it'd take some ten minutes for a whole period to go
missing or accumulate. And even that isn't the point where you get
an audible problem; it's just the first point at which the number
of samples you can hand to the audio driver differs from the
number that would be correct.
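To make the worst-case arithmetic easy to check, here is a short
back-of-the-envelope script; the 35 ppm total offset and 48 kHz
sample clock are the figures discussed in this thread, and the
1024-sample ALSA period is an assumed value:

```python
# Worst-case clock-offset arithmetic for the SDR/soundcard problem.
fs = 48_000    # nominal audio sample rate in Hz
ppm = 10 + 25  # assumed worst case: 10 ppm studio + 25 ppm soundcard
period = 1024  # assumed ALSA period size in samples

rate_offset = fs * ppm * 1e-6           # samples gained/lost per second
secs_per_sample = 1.0 / rate_offset     # time until one sample of drift
secs_per_period = period / rate_offset  # time until a whole period drifts

print(f"rate offset:      {rate_offset:.2f} samples/s")
print(f"one sample every: {secs_per_sample:.2f} s")
print(f"one period every: {secs_per_period / 60:.1f} min")
```

This prints a rate offset of 1.68 samples/s, i.e. one sample of
drift roughly every 0.6 s, and about ten minutes until a whole
1024-sample period has accumulated or gone missing.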
In analog audio modulation, you don't get anything that inherently
transports the transmitter's sampling clock, and thus your SDR's
frequency error can't be corrected.
Best regards,
Marcus
On 06.06.2017 18:02, Benny Alexandar
wrote:
Hi Luca,
Nice to see your progress so far. Once you have the
DAB receiver audio listening in place, I would
suggest adding audio synchronization for continuous
playback without any buffer overflows or under-runs.
The DAB+ audio super frame length is 120 ms according to the DAB+
standard (ETSI TS 102 563). Each audio super frame is carried in
five consecutive logical DAB frames, which means 120 ms of audio
is mapped to 5 DAB frames.
If I add a timestamp at the receiver when the first DAB frame
sample arrives, I can check the maximum latency by the time the
audio reaches the renderer, i.e. after buffering to absorb the
variable decoding time of the compressed audio:
t_D = t_A - t_B,
where
t_A = time at audio out,
t_B = time at input baseband sample,
t_D = maximum system delay.
The difficulty is to estimate the slow clock drift correctly
and separate it from the short-time channel/decoding jitter.
Add a delay, say D, to buffer audio at the audio output, chosen
larger than the maximum system delay. Whenever the audio reaches
the audio output, check the delay to separate out the clock drift:
drift = t_D - D
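The scheme above could be sketched roughly as follows. The class
and method names are purely illustrative (not from any real API),
and the exponential smoothing is one possible way to separate the
slow drift from short-time jitter; the thread doesn't prescribe a
particular filter:

```python
class DriftEstimator:
    """Illustrative sketch of the delay-based drift estimate:
    compare the measured system delay t_D = t_A - t_B against a
    fixed buffering delay D and low-pass filter the residual to
    suppress short-time channel/decoding jitter."""

    def __init__(self, D, alpha=0.01):
        self.D = D              # fixed buffer delay, > max system delay
        self.alpha = alpha      # smoothing factor for the jitter filter
        self.smoothed_drift = 0.0
        self.t_B = None

    def on_baseband_sample(self, t_B):
        # t_B: time at which the first baseband sample of a frame arrives
        self.t_B = t_B

    def on_audio_out(self, t_A):
        # t_A: time at which the decoded audio reaches the audio output
        t_D = t_A - self.t_B    # system delay, t_D = t_A - t_B
        drift = t_D - self.D    # residual against the fixed delay D
        # exponential smoothing separates slow clock drift from jitter
        self.smoothed_drift += self.alpha * (drift - self.smoothed_drift)
        return self.smoothed_drift
```

In practice t_A and t_B would come from a monotonic clock (e.g.
Python's time.monotonic()); passing them in explicitly just keeps
the sketch testable.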
Please let me know if you need any more details.
-ben
From:
Discuss-gnuradio
<address@hidden>
on behalf of Moritz Luca Schmid
<address@hidden>
Sent: Friday, May 26, 2017 6:19:31 PM
To: GNURadio Discussion List
Subject: [Discuss-gnuradio] [GSoC 17] DAB: updates of
the week
Hi everyone,
I just published my latest updates of my DAB project in a new
blog post.
This week, I created a source block for the Fast Information
Channel and started to build a reception chain for the Main
Service Channel (where the audio data is transmitted).
Read more about it in my post.
Cheers
Luca
_______________________________________________
Discuss-gnuradio mailing list
address@hidden
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio