
RE: [Discuss-gnuradio] inband timestamp issues


From: Eric Schneider
Subject: RE: [Discuss-gnuradio] inband timestamp issues
Date: Tue, 26 Aug 2008 13:21:29 -0600

Thanks for the comments Brian, my replies are inline.

> -----Original Message-----
> From: Brian Padalino [mailto:address@hidden
> Sent: Tuesday, August 26, 2008 11:41 AM

> Maximum throughput for 16-bit IQ is a decimation by 8, so 6 clock
> cycles for header setup should be fine.  Even for decimation rates of
> 4, which drops IQ down to 8-bits each, the scheme should probably work
> fine if you write both I and Q in the same clock cycle.

I agree that in normal operations there shouldn't be a problem.  However,
in the pull design it isn't even a consideration: both sides run at their
maximum clock rates, and overflows are the only limit.
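As a back-of-the-envelope check of the decimation limits quoted above (the 64 MS/s ADC rate and the ~32 MB/s of usable USB 2.0 bandwidth are assumptions for the USRP1, not figures from this thread):

```python
# Rough USB throughput check for the decimation limits discussed above.
# Assumes a 64 MS/s ADC chain and ~32 MB/s of usable USB 2.0 bandwidth
# (both figures are assumptions, not stated in this thread).

ADC_RATE = 64e6          # complex samples/s out of the ADC chain
USB_LIMIT = 32e6         # usable USB 2.0 payload, bytes/s (approximate)

def usb_rate(decim, bytes_per_iq_pair):
    """Bytes/s the host must absorb for a given decimation."""
    return (ADC_RATE / decim) * bytes_per_iq_pair

# Decimation by 8, 16-bit I and 16-bit Q (4 bytes per complex sample):
print(usb_rate(8, 4))   # 32e6 -> right at the assumed USB limit

# Decimation by 4, 8-bit I and 8-bit Q (2 bytes per complex sample):
print(usb_rate(4, 2))   # 32e6 -> also right at the limit
```

Both cases land exactly at the assumed bus limit, which is why those decimations are the stated maximum-throughput configurations.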
 
 
> You need the time the first sample comes out of the halfband filter as
> the timestamp on the whole packet, so you really have to write the
> header, and then wait for the first sample.  When the first sample is
> strobed in, get and write the timestamp and then write the IQ sample.
> Do this until the entire packet has been filled and repeat N times or
> infinitely (depending on setting).

Just to be clear, the pull method does save the timestamp at the beginning
of a packet; it just doesn't write it into the header until the packet is
read via the USB/FX2 interface.
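A minimal host-side sketch of that pull idea (hypothetical names; the real logic would live in the FPGA): the timestamp is latched into a metadata FIFO when the first sample of a packet arrives, but the header is only assembled when the packet is read out.

```python
from collections import deque

class PullPacketizer:
    """Sketch of the pull scheme: latch metadata at the first sample of
    each packet, assemble the header only at read time.  This is a
    hypothetical software model of the proposed FPGA behaviour."""

    def __init__(self, payload_len):
        self.payload_len = payload_len
        self.samples = deque()     # sample FIFO
        self.meta = deque()        # metadata FIFO, one entry per packet
        self._count = 0            # position within the current packet

    def push_sample(self, iq, timestamp):
        # Latch the timestamp of the first sample of each packet.
        if self._count == 0:
            self.meta.append({"timestamp": timestamp})
        self.samples.append(iq)
        self._count = (self._count + 1) % self.payload_len

    def read_packet(self):
        # The header is constructed here, at read time, not at capture
        # time -- the crux of the pull design.
        if len(self.samples) < self.payload_len or not self.meta:
            return None
        header = self.meta.popleft()
        payload = [self.samples.popleft() for _ in range(self.payload_len)]
        return header, payload
```

For example, pushing eight samples stamped 100..107 into a `PullPacketizer(4)` and reading twice yields headers stamped 100 and 104.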

> This sounds like a decent idea, but the smallest block RAM in the FPGA
> is 256x16.  If you wanted to make this just in the fabric, you'll have
> to deal with crossing clock domains and that can get a bit hairy with
> just flops used as memory.

A pair of 256x16 FIFOs doesn't sound exorbitant; do you think this will be
an issue?  We are getting rid of the packet buffer altogether.

> The packet size is always the same size since the ADC is always
> running.  There won't ever really be a lack of samples to be pushed to
> the host.  The RSSI might be interesting, but since it's reported with
> every packet, you can get a granularity at the packet level which
> should be pretty sufficient.  For the padding, I don't think that will
> be required since the ADC is always running and there isn't a lack of
> samples to send to the host.
> 
> This is kind of why I wanted to be able to send down a command to say
> "At time=X, receive N packets and then stop" as it builds in the
> limiting factor to the command.

I agree that for channel data a less-than-full packet is unlikely, and it
doesn't even need to be supported.  Using pull, we could easily support
variable payload sizes, even when we don't know the final size at the time
the first sample arrives.  The benefit is of questionable value, however.

Inband communications would normally have a lot of padding.  But I suppose
that the control channel is so different that it could have a different
structure than the rx channels, so padding in rx channels is mostly a moot
point.  Either way, I think the pull design would be basically the same
for both.

More important is the general idea that we could include information in
the header that is not available at the time of the first sample, because
we don't actually construct the header until all the data is available and
the packet is being read via the FX2/USB interface.  For example, how
would you push a packet's max or average RSSI into a header when the data
hasn't even arrived yet?  I'm not saying that this is something we want to
do, but rather that we could, if desired.  The timestamp is a moot point,
as it is by definition always available at the first sample.  I'm just
trying to think beyond this particular issue.
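To make the "information not available at the first sample" point concrete, here is a small sketch (hypothetical, not from the thread): per-packet RSSI statistics are accumulated while samples arrive, and only folded into the header at read time, which a push design that emits the header first cannot do.

```python
# Sketch: accumulate a per-packet RSSI maximum and average as samples
# arrive, and produce the header fields only at read time, after the
# whole packet exists.  Names and structure are hypothetical.

class PacketStats:
    def __init__(self):
        self.max_rssi = float("-inf")
        self.sum_rssi = 0.0
        self.n = 0

    def update(self, rssi):
        # Called once per sample, as it is strobed in.
        self.max_rssi = max(self.max_rssi, rssi)
        self.sum_rssi += rssi
        self.n += 1

    def header_fields(self):
        # Called at read time, after all samples of the packet exist.
        return {"rssi_max": self.max_rssi,
                "rssi_avg": self.sum_rssi / self.n}

stats = PacketStats()
for r in (10, 14, 12):
    stats.update(r)
print(stats.header_fields())  # {'rssi_max': 14, 'rssi_avg': 12.0}
```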

> The last issue I have is when dealing with sample overruns.  Can this
> scheme easily recover if the sample FIFO is full when a new sample
> comes in, but the metadata FIFO has header information pushed into it?
>  In the packet push situation, there will be a discontinuity in the
> timestamps inherently within the system.

In either push or pull, it would probably be a good idea to not start a
packet if the receiving buffer doesn't have enough room to hold all of it.
As soon as there is enough room, start a new packet.  Packets would then
always contain contiguous samples.  The number of lost samples could be
easily identified by the timestamp on the next packet.
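If packets always contain contiguous samples, the lost-sample count falls out of simple timestamp arithmetic. A sketch, assuming the timestamp counts samples (an assumption, since the thread doesn't fix the timestamp units):

```python
# Lost samples between two consecutive packets, assuming contiguous
# samples within each packet and a timestamp that counts samples
# (the units are an assumption here).

def lost_samples(prev_timestamp, payload_len, next_timestamp):
    """Samples dropped between a packet and its successor."""
    expected = prev_timestamp + payload_len
    return next_timestamp - expected

# A 512-sample packet stamped at t=1000, next packet stamped at t=1600:
print(lost_samples(1000, 512, 1600))  # 88 samples were dropped

# No overrun: the next packet starts exactly where the last one ended.
print(lost_samples(0, 512, 512))      # 0
```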

When to set the overrun flag?  On the next complete packet after the
overrun?  What about the USB rx_overrun signal?  As soon as the overrun
occurs?  How does (or will) this affect the good packets that have already
been queued?

> I prefer the packet pushing idea, but feel free to do what you feel is
> the better idea.

What do you see as the advantage of the push design?

--ets
