
From: MacLeod, Matthew
Subject: [Discuss-gnuradio] RE: Discuss-gnuradio Digest, Vol 4, Issue 19
Date: Fri, 14 Mar 2003 10:11:25 -0500


-----Original Message-----
From: address@hidden
[mailto:address@hidden
Sent: March 13, 2003 4:07 PM
To: address@hidden
Subject: Discuss-gnuradio Digest, Vol 4, Issue 19


Send Discuss-gnuradio mailing list submissions to
        address@hidden

To subscribe or unsubscribe via the World Wide Web, visit
        http://mail.gnu.org/mailman/listinfo/discuss-gnuradio
or, via email, send a message with subject or body 'help' to
        address@hidden

You can reach the person managing the list at
        address@hidden

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Discuss-gnuradio digest..."


Today's Topics:

   1. Re: Salon article -- off topic (Steve Schear)
   2. Re: information theory -- follow up (MacLeod, Matthew)
   3. RE: Re: information theory -- follow up (Ettus, Matt)
   4. synchronizing sound cards in a cluster (fwd) (Eugen Leitl)
   5. Re: synchronizing sound cards in a cluster (fwd) (Eugen Leitl)
   6. RE: information theory -- follow up (off topic)
   7. RE: Re: synchronizing sound cards in a cluster (fwd) (Matthew Kaufman)


----------------------------------------------------------------------

Date: Thu, 13 Mar 2003 09:33:45 -0800
From: Steve Schear <address@hidden>
To: address@hidden
Cc: David Reed <address@hidden>
Subject: Re: [Discuss-gnuradio] Salon article -- off topic
Message-ID: <address@hidden>
In-Reply-To: <address@hidden>
Content-Type: text/plain; charset="us-ascii"; format=flowed
MIME-Version: 1.0
Precedence: list
Message: 1

At 08:35 AM 3/13/2003 -0800, Dewayne Hendricks wrote:
>[Note:  David Reed is not on this list and he has asked me to forward to 
>the list his comments on Steve Schear's recent post.  DLH]
>
>At 6:43 -0800 3/13/03, David P. Reed wrote:
>>From: "David P. Reed" <address@hidden>
>>To: Dewayne Hendricks <address@hidden>, Steve Schear
<address@hidden>
>>Subject: Re: Fwd: Re: [Discuss-gnuradio] Salon article -- off topic
>>Date: Thu, 13 Mar 2003 06:43:16 -0800
>>MIME-Version: 1.0
>>
>>At 05:33 AM 3/13/2003 -0800, Steve Schear wrote:
>>>David Reed's physics is not strictly correct.  He maintains that 
>>>"Photons in free space act almost exclusively as waves. Therefore, when 
>>>they cross paths they merely set up an interference pattern for the very 
>>>brief time of their interaction. No energy is exchanged and the quantum 
>>>state of each photon is unchanged after they pass each other."
>>
>>Steve, Dewayne -
>>
>>I never said this.  (and I wouldn't have)
>>
>>Here's what I said in my mail:
>>
>>>Steve - of course you are right that when two photons interact with an 
>>>electron they can produce other photons with surprising behaviors.
>>>
>>>We see these interactions every day.   That is how signals interact 
>>>inside antennas. (which are seas of electrons).  That is how signals 
>>>reflect and refract.   You can get frequency doubling, you can get 
>>>tropospheric tunneling, etc.
>>>
>>>QED explains everything about radio, in theory, and it is remarkably 
>>>precise. and predictive.   It's one of the most well established parts 
>>>of science.

David's right.  The misquote was due to my misinterpretation of the quoted 
text exchanged between David, another party and myself.

steve



------------------------------

Date: Thu, 13 Mar 2003 14:14:45 -0500
From: "MacLeod, Matthew" <address@hidden>
To: "'address@hidden'" <address@hidden>
Subject: [Discuss-gnuradio] Re: information theory -- follow up
Message-ID:
<address@hidden>
Content-Type: text/plain;
        charset="iso-8859-1"
MIME-Version: 1.0
Precedence: list
Message: 2

> Eric -- I haven't been here very long, but my assumption about this code
> to demod two FM stations at once is NOT an example of how you can demod
> two FM stations "on a single frequency" as stated by David Weinberger in
> this Salon article.

I had that basic problem with the article as well.

But my main problem is the colour analogy. It doesn't really account for a
lot of things. If you're looking at a 60W blue light bulb flashing a signal
at you, and then someone turns on a 1 000 000W blue light source behind it,
you're not going to be picking up much useful information any more, now are
you? Even if the two lights were different colours it would be difficult.
Basically, any situation where you get washed-out images, glare, partial
reflections, or any other visual problem shows that just because colours
behave mostly like radio waves, and we can usually still get useful
information from them, doesn't mean there's infinite capacity there.

Also consider what some people do with university crib sheets. To get more
on the page they will sometimes write in two different colours, and wear 3D
glasses that let them separate the green and the red, or whatever colours
they choose, by closing one eye or the other. But without the glasses it's
really quite hard to read. All the lenses really are is a pair of band-pass
filters, which is something we already do with radio waves.

Although I agree with a lot of the arguments, I don't think the colour
analogy is all that useful, and it doesn't hold a lot of insight. I like
the multiple-voices idea a lot better, which is more akin to CDMA, or the
point that freeing the spectrum gets rid of losses to guard bands.

What I would also see as a really interesting project for people looking for
dead zones (or illicit transmission) would be glasses that upconverted RF
into the visible spectrum, allowing you to look around and find the 'colour'
of wave you're looking for from different locations.

Matt MacLeod



------------------------------

Date: Thu, 13 Mar 2003 11:25:13 -0800
From: "Ettus, Matt" <address@hidden>
To: "'MacLeod, Matthew'" <address@hidden>, 
        "'address@hidden'" <address@hidden>
Subject: RE: [Discuss-gnuradio] Re: information theory -- follow up
Message-ID: <address@hidden>
Content-Type: text/plain;
        charset="iso-8859-1"
MIME-Version: 1.0
Precedence: list
Message: 3

> But my main problem is the colour analogy. It doesn't really account for
> a lot of things. If you're looking at a 60W blue light bulb flashing
> signal at you, then someone turns on a 1 000 000W blue light source
> behind it, you're not going to be picking up much useful information any
> more, now are you?

This is kind of a loaded analogy, but even so, it is not true.  If your bulb
is flashing, it produces sidebands.  If you filter out the carriers (both
yours and the megawatt one), you can still receive the sidebands.
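The sideband effect is easy to demonstrate numerically; here is a quick
sketch (the sample rate and frequencies are arbitrary choices, not figures
from the thread):

```python
import numpy as np

fs = 1000                      # sample rate in Hz (arbitrary)
t = np.arange(fs) / fs         # one second of samples
carrier = np.cos(2 * np.pi * 100 * t)                     # the "blue light"
keying = (np.sin(2 * np.pi * 10 * t) > 0).astype(float)   # flashing at 10 Hz
signal = keying * carrier      # on/off-keyed carrier: a flashing bulb

spectrum = np.abs(np.fft.rfft(signal))
# With 1 Hz bins, energy shows up not just at the 100 Hz carrier but in
# sidebands at 90 and 110 Hz; notching out the carrier bin alone would
# still leave the flashing information receivable.
print(spectrum[100], spectrum[90], spectrum[110])
```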

It is also misleading.  There are petawatt (maybe stronger) light sources
all around us (i.e. the sun, stars, etc.).  That doesn't mean that I can't
see an LED.  It just means I have to be closer to the LED to see it.

Besides, I don't think anyone is saying that everyone should be allowed to
transmit with a megawatt.  Just because spectrum might be open to all
doesn't mean there won't be rules to limit your power.

> Even if the two lights were different colours it would be difficult.
> Basically any situation where you get washed out images, glare, partial
> reflections, or any other visual problem show that just because colours
> behave mostly like radio waves, and that we can still get usually get
> useful information from them, doesn't mean there's infinite capacity
> there.

Very false.  Filtering separates color/frequencies.  
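That separation is worth making concrete; a toy sketch with an idealized
frequency-domain band-pass (the numbers are my own, nothing from the
thread):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
wanted = np.sin(2 * np.pi * 440 * t)            # weak desired "colour"
jammer = 100.0 * np.sin(2 * np.pi * 1200 * t)   # far stronger, different "colour"

# An idealized band-pass filter, implemented by zeroing out-of-band bins:
spectrum = np.fft.rfft(wanted + jammer)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
spectrum[(freqs < 400) | (freqs > 480)] = 0     # keep only the 400-480 Hz band
recovered = np.fft.irfft(spectrum)

error = np.max(np.abs(recovered - wanted))      # essentially zero: the jammer
                                                # never touched the wanted band
```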

> Also consider what some people do with university crib sheets. To get
> more on the page they will sometimes write in two different colours, and
> wear 3D glasses to let them separate the green and the red, or whatever
> colours they choose, by closing one eye or the other. But without the
> glasses it's really quite hard to read. All the lenses really are are
> band pass filters, which we already do with radio waves.

Exactly the point he's trying to make.  If you can't read the signal then
you need better glasses.



------------------------------

Date: Thu, 13 Mar 2003 21:15:37 +0100 (CET)
From: Eugen Leitl <address@hidden>
To: address@hidden
Subject: [Discuss-gnuradio] synchronizing sound cards in a cluster (fwd)
Message-ID: <address@hidden>
Content-Type: TEXT/PLAIN; charset=US-ASCII
MIME-Version: 1.0
Precedence: list
Message: 4

---------- Forwarded message ----------
Date: Thu, 13 Mar 2003 11:56:19 -0800
From: Jim Lux <address@hidden>
To: address@hidden
Subject: synchronizing sound cards in a cluster

Anybody have any good ideas on how to synchronize the sampling from
multiple sound cards in a cluster using Ethernet as the interconnect? The
application would grab data from the sound cards (notionally at 100
ksamples/second total, for two channels) and do a ton of signal
processing.  At some point in the processing, the streams of data need to
be shared between processors (i.e. to do beamforming), and so need to be
time-registered.
The bandwidth isn't a real challenge here (with, say, 16 processors, that's 
only about 32 Mbps total), nor is latency, but synchronization is.

One can fairly easily synchronize to a millisecond over Ethernet, but this 
application needs sync to, at worst, 1 sample time (20 microseconds) 
although order of a microsecond would be nice.
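For what it's worth, the arithmetic behind those numbers (the 16-bit sample
size is an assumption; it isn't stated above):

```python
total_rate = 100_000                # samples/s, total across two channels
per_channel_rate = total_rate / 2   # 50 kS/s on each channel
sample_time_us = 1e6 / per_channel_rate
print(sample_time_us)               # 20.0 microseconds: the 1-sample budget

bits_per_node = total_rate * 16     # assuming 16-bit samples
print(16 * bits_per_node / 1e6)     # 25.6 Mb/s raw for 16 nodes, the same
                                    # order as the "about 32 Mbps" above
```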



James Lux, P.E.
Spacecraft Telecommunications Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875

_______________________________________________
Beowulf mailing list, address@hidden
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf




------------------------------

Date: Thu, 13 Mar 2003 22:01:30 +0100 (CET)
From: Eugen Leitl <address@hidden>
To: address@hidden
Subject: [Discuss-gnuradio] Re: synchronizing sound cards in a cluster (fwd)
Message-ID: <address@hidden>
Content-Type: TEXT/PLAIN; charset=US-ASCII
MIME-Version: 1.0
Precedence: list
Message: 5

---------- Forwarded message ----------
Date: Thu, 13 Mar 2003 14:32:52 -0500 (EST)
From: Robert G. Brown <address@hidden>
To: Jim Lux <address@hidden>
Cc: address@hidden
Subject: Re: synchronizing sound cards in a cluster

On Thu, 13 Mar 2003, Jim Lux wrote:

> Anybody have any good ideas on how to synchronize the sampling from
> multiple sound cards in a cluster using Ethernet as the interconnect. The
> application would grab data from the sound card (notionally at 100
> ksamples/second total, for two channels) and do a ton of signal
> processing.  At some point in the processing, the streams of data need to
> be shared between processors (i.e. to do beamforming), and so, needs to
> be time registered.
> The bandwidth isn't a real challenge here (with, say, 16 processors,
> that's only about 32 Mbps total), nor is latency, but synchronization is.
>
> One can fairly easily synchronize to a millisecond over Ethernet, but
> this application needs sync to, at worst, 1 sample time (20 microseconds)
> although order of a microsecond would be nice.

a) Check out the documentation on http://www.ntp.org/documentation.html.
From what it says, you can synchronize at roughly the level of network
latency with ntp alone, so you can (I would expect) get an otherwise
quiet LAN sync'd to a millisecond or even less.  NTP does correction
over a long time and damps to a common clock, so you MIGHT get down
below the 1 ms mark over time.  I doubt that ntp alone would make 10
usec.

b) Do you get to spend money?  Can you purchase each node its own GPS
clock?  The ntp docs suggest that if you have any sort of reference
clock (atomic, GPS, time pulse) your resolution is limited only by the
reference (and, probably things like gettimeofday, which are no better
than 2 usec as it is and can easily be worse).

c) If you don't get to spend money then you could TRY to use the onboard
tsc instead of coarsely adjusting the system clock per se.  I'm using it
as a timer in my benchmark code and can give you a wrapped assembler
fragment for reading it.  This clock is accurate to an inverse clock
(typically sub-nanosecond these days) BUT by the time you add the
overhead of reading it you diminish to perhaps 40-60 nanoseconds.
Still, in principle you have access to a clock with sub-usec resolution
(you can even measure and correct on average for the time required for
the wrapped call).

This clock is not configured for computing anything like absolute
systems time, so you'd have to do things like pingpong between systems
on the network a million times or so making slow adjustments to a
subtraction base until your "clocks" match within some resolution across
the entire network.

I actually don't think this would be horribly difficult.  It's sort of
like you and I looking at our watches: I say "10:02:02" (you adjust a
tick), you say "10:02:04" (and I adjust a tick), and eventually we get to
the point where we are PREDICTING what the other person will say, so that
given an average latency MEASURED to within some precision, we can say
that our clocks match within that precision.  I'm sure NTP does
something like this now with a coarser-grained clock, and you might be
able to steal it and just hack it to use the tsc and get what you want.
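The pingpong described here uses the same four-timestamp algebra as NTP; a
minimal sketch with made-up timestamps:

```python
def ntp_offset(t1, t2, t3, t4):
    """Classic four-timestamp offset estimate (the same algebra NTP uses).

    t1: local send, t2: remote receive, t3: remote send, t4: local receive,
    with t2/t3 read from the remote clock. Assumes a symmetric path delay;
    averaging many exchanges beats down the jitter.
    """
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Toy example: remote clock runs 5 units ahead, one-way delay is 2 units.
true_offset, delay = 5.0, 2.0
t1 = 100.0
t2 = t1 + delay + true_offset    # 107.0 on the remote clock
t3 = t2 + 0.5                    # remote processing time before replying
t4 = t3 - true_offset + delay    # 104.5 back on the local clock
print(ntp_offset(t1, t2, t3, t4))  # 5.0
```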

  rgb

-- 
Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:address@hidden



_______________________________________________
Beowulf mailing list, address@hidden
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf




------------------------------

Date: Thu, 13 Mar 2003 13:02:46 -0800
From: address@hidden
To: address@hidden
Cc: address@hidden
Subject: RE: [Discuss-gnuradio] information theory -- follow up (off topic
        )
Message-ID: <address@hidden>
Content-Type: text/plain
MIME-Version: 1.0
Precedence: list
Message: 6


== snippets selected from one of Matt Ettus' emails ====
== /* my comments like this */ ====

Basically, nobody is saying there is anything here which contradicts
Shannon. 
You just need to realize what Shannon's theorem says and what it doesn't. 
Shannon's theorem, for example, says nothing about interference -- it talks
about AWGN.  A very strong interferer is no big deal if you can filter it
away, right?  It says nothing about antennas, shared channels, path loss,
spatial diversity, directional diversity, etc.

/* (bw) I absolutely agree, except I don't know what you mean by 'shared
channels'... Matt's comment here about interferers supports the idea of
there not being any 'inherent interference' in signals of different
frequencies. If you can't see a weak blue light in the presence of a
super-bright violet one, it's because you need a better filter mechanism.
This is referred to as an out-of-channel jammer, or out-of-band jammer,
problem.
  The on-channel jammer issue (strong/weak blue light given earlier) is
another issue -- here you need some other mechanism to distinguish the two
signals... if you know the jammer is on all the time, and the desired is
pulsing, then you look for the AM sidelobes... otherwise you hope you can
fall back on spatial/directional diversity or some other way to separate the
two.
  I agree that just because a jammer is on-frequency with our desired signal
does not mean that we can't decode the desired. 
  If I understand correctly, the basis of Dave's thoughts referred to in the
Salon article is that we need modern-day systems to use more sophisticated
tricks to encode channels than just "you get this 30kHz, and you get this
other 30kHz" -- the classic FDMA approach of 100 years ago. 
*/

The idea is that given a network in which everyone talks at will on the same
frequency, the SNR can become significantly negative.  This does not
preclude communication; it merely requires a different way of communicating.
We know this intuitively if we've ever been to a football game.  Everyone is
talking at the same time, in the audio band.  There is more interference
than signal.  Yet we can still communicate.  (This observation/metaphor is
due to Tim Shepard, BTW)

/* Not a fair representation of what we're talking about -- it is not
straightforward to describe the information transferred when talking over a
crowd at a football game. Furthermore, you have many other cues to help you
decode the words spoken, if we restrict our definition to this -- facial
expressions, known speech patterns from a familiar person. All of this is a
form of spreading -- you're including lots of known/predictable 'stuff'
along with the actual 'message' you're trying to transmit, and in doing so
you can listen well under the system noise floor (ie, negative SNR).
  Example:  You hear your wife singing a familiar song -- at one point you
can't hear a damn thing, but you see her lips mouth the word
"supercalifragilisticexpialidocious", so you know that's what she said...
Alternatively, you listen to two Japanese fellows with heavy accents (seated
in front of you) discuss their company's business -- at an SNR of 3.0dB you
still might not be able to make out what is being said.
  It all comes down to how 'information' and SNR are defined -- Shannon had
a very particular definition where we have to reduce the symbol set to
remove redundancy and such -- difficult to do in discussions and analogies
like this.
*/

We also know this from spread spectrum systems like CDMA cellphones.  The
difference is that we are talking about much bigger networks here, and they
are decentralized.

There seems to be the belief that SNR would become "too negative to
communicate".  This is false.  It is the same as Olbers' paradox -- if
there are infinitely many stars in an infinite universe, why isn't the sky
bright at night?

So given a non-zero SNR, we can always communicate, albeit slowly, or more
accurately, at fewer bits per second per hertz.  It might be 1 bit per
second per 100 Hertz.
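That figure drops straight out of Shannon's C = B*log2(1+SNR); a quick
sketch (the helper name is mine):

```python
import math

def required_snr_db(bits_per_s_per_hz):
    """Minimum SNR, in dB, to support a given spectral efficiency,
    found by inverting C = B * log2(1 + SNR)."""
    snr = 2 ** bits_per_s_per_hz - 1
    return 10 * math.log10(snr)

# 1 bit per second per 100 Hz is 0.01 b/s/Hz:
print(required_snr_db(0.01))   # about -21.6 dB: deeply negative, yet usable
```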


/* (bw) you mean non-zero SNR as a ratio, not in dB, right?  This is the
part where my head explodes. Shannon tells us that there is indeed a
minimum SNR required to support a given bandwidth efficiency (measured in
bits-per-second-per-Hz -- units look like the timed acceleration of a bit,
funny, huh?)... with a theoretical asymptote at -1.6dB -- that is, with
less than this, you can't transmit anything reliably.
  Of course CDMA (and spreading in general) offers you a 'spreading gain'
-- for CDMA, it's just the ratio of the chip rate to the data rate --
something like 10dB-20dB. This means that the RAKE receiver can still
decode one channel when it looks like it's buried 10dB in the noise floor.
  This appears to preclude the idea of not having a fundamental limit on
the SNR required until you remember how Shannon talks about 'information'
-- these extra bits we're using to spread the signal are NOT information...
they're a known, predictable sequence -- just a mechanism we use to share a
given frequency band and reduce the 'deadbands' required by
frequency-division multiple access methods, in an effort to increase the
spectral efficiency.
*/
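Both of those numbers are easy to check. A sketch, using IS-95-style rates
purely as an illustration (the 1.2288 Mchip/s and 9.6 kb/s figures are not
from this thread):

```python
import math

def spreading_gain_db(chip_rate, data_rate):
    """CDMA processing gain: the chip-rate-to-data-rate ratio, in dB."""
    return 10 * math.log10(chip_rate / data_rate)

print(round(spreading_gain_db(1_228_800, 9600), 1))   # 21.1 dB

# And the asymptote mentioned above: as bandwidth goes to infinity, the
# minimum Eb/N0 tends to ln(2), i.e. about -1.59 dB.
print(round(10 * math.log10(math.log(2)), 2))         # -1.59
```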


If you are really interested in this, you should check out Tim Shepard's PhD
thesis (it's very readable, and not too long).  You can find it at:

ftp://ftp.lcs.mit.edu/pub/lcs-pubs/tr.outbox/MIT-LCS-TR-670.ps.gz

After that, you might want to read my MS thesis which extended Tim's work to
higher-order path-loss environments.  You can find it at
http://ettus.com/thesis.ps.gz

/* Thanks for these links -- I'll be sure to read them. I'm sure these will
provide some insight to help me fill in some of the holes in my
understanding.
*/

Matt

/*
Brian Whitaker
Maxim RF Applications
*/


------------------------------

Date: Thu, 13 Mar 2003 13:03:36 -0800
From: "Matthew Kaufman" <address@hidden>
To: "'Eugen Leitl'" <address@hidden>, <address@hidden>
Subject: RE: [Discuss-gnuradio] Re: synchronizing sound cards in a cluster
        (fwd)
Message-ID: <address@hidden>
In-Reply-To: <address@hidden>
Content-Type: text/plain;
        charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Precedence: list
Message: 7

This isn't really the hard part. The hard part is getting multiple sound
cards to share the same sample clock. If the sample clocks all ran at the
same rate, then you could fairly easily use known inputs to synchronize
up the results, but the sample clocks aren't nearly close enough
together for that. In fact, the sample clocks in most sound cards are
*terrible*.

Even synchronizing multiple sound cards inside the same PC is a
difficult problem for this reason.
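To put a number on "not nearly close enough" (the 100 ppm figure is an
assumed, typical consumer-crystal tolerance, not a measurement):

```python
def drift_samples_per_sec(sample_rate_hz, ppm_error):
    """Samples of misalignment accumulated each second between two
    free-running converters whose clocks differ by ppm_error ppm."""
    return sample_rate_hz * ppm_error / 1e6

# Two cards nominally at 48 kHz, 100 ppm apart: they slip almost five
# samples every second, so a one-time alignment is useless.
print(drift_samples_per_sec(48000, 100))   # 4.8
```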

So, for starters, go get some sound cards that have an external clock
input for the sampling. (If you can find such a thing)

Matthew




------------------------------

_______________________________________________
Discuss-gnuradio mailing list
address@hidden
http://mail.gnu.org/mailman/listinfo/discuss-gnuradio


End of Discuss-gnuradio Digest, Vol 4, Issue 19
***********************************************




