Re: [Discuss-gnuradio] Scalability


From: Eric Blossom
Subject: Re: [Discuss-gnuradio] Scalability
Date: Mon, 31 Jul 2006 13:49:55 -0700
User-agent: Mutt/1.5.9i

On Mon, Jul 31, 2006 at 06:13:16AM -0700, jjw wrote:
> 
> I have a few questions related to the scalability of GNU Radio.  Any insight
> would be most appreciated.
> 
> 1)  I am unfamiliar with IPC, but understand how it could be useful to
> increase computing power.  What would be the first step in setting it up to
> do distributed computing with GNU Radio?

First off, you'd need to develop an understanding of the various
interprocess communication primitives that are available and the pros
and cons of each: TCP/IP, shared memory, the various locking
primitives, and so on.

The most obvious solution for IPC, TCP, is already available.
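
For a concrete picture, here's a rough sketch of pushing raw complex
samples from one process to another over TCP using plain Python
sockets.  It isn't a particular GNU Radio block; the port number and
the interleaved 32-bit float I/Q wire format are just illustrative
assumptions.

    import socket
    import struct

    PORT = 9000                     # hypothetical port for the sample stream

    def send_samples(host, samples):
        # Producer side: connect and write interleaved 32-bit float I/Q.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((host, PORT))
        for s in samples:
            sock.sendall(struct.pack("<ff", s.real, s.imag))
        sock.close()

    def recv_samples(count):
        # Consumer side: accept one connection and read `count` samples.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        samples = []
        while len(samples) < count:
            data = b""
            while len(data) < 8:            # 8 bytes = one complex sample
                chunk = conn.recv(8 - len(data))
                if not chunk:
                    raise EOFError("sender closed the connection early")
                data += chunk
            i, q = struct.unpack("<ff", data)
            samples.append(complex(i, q))
        conn.close()
        srv.close()
        return samples

In practice you'd hang a source or sink block on each end of a pipe
like this; whether that's a file-descriptor-, UDP-, or TCP-style block
depends on what your GNU Radio version provides.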

Are you assuming an SMP environment (shared address space), or are you
thinking about solutions distributed across multiple machines?  Both
are possible.  Particularly in the multiple-machine case, the I/O
capacity of the interconnect will need to be taken into account.
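
A quick back-of-the-envelope calculation like the one below tells you
whether a given link can keep up with the sample stream at all; the
sample rate here is just an example.

    # Back-of-the-envelope interconnect check (the numbers are illustrative).
    sample_rate    = 8e6      # 8 Msps of complex baseband
    bytes_per_item = 8        # two 32-bit floats (I and Q)
    mbit_per_sec   = sample_rate * bytes_per_item * 8 / 1e6
    print(mbit_per_sec)       # 512 Mbit/s: too much for 100 Mbit Ethernet,
                              # workable on gigabit if little else shares it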

Lots of choices, depending on the underlying hardware architecture.

> 2)  Does data get buffered in between flowgraph blocks?  If so, what is the
> block size, and can it be adjusted dynamically to optimize the data flow for
> a given computer?

Yes, the data is buffered.  The buffer size is a function of many
things (the item size, the system page size, and what the blocks
require).  Yes, it could be adjusted, but currently it isn't; it
wouldn't be hard to make it adjustable.
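
To make that concrete, here's a toy Python sketch of the kind of
bounded circular buffer that sits between an upstream and a downstream
block.  It's purely conceptual: the real buffers are circular buffers
implemented in C++ inside the runtime, and the names here are made up.

    class RingBuffer:
        # Conceptual stand-in for the buffer between two connected blocks.
        def __init__(self, nitems):
            self.buf = [None] * nitems      # capacity fixed at allocation time
            self.read_idx = 0
            self.write_idx = 0
            self.count = 0

        def space_available(self):
            return len(self.buf) - self.count   # what the upstream block may write

        def items_available(self):
            return self.count                   # what the downstream block may read

        def write(self, items):
            assert len(items) <= self.space_available()
            for x in items:
                self.buf[self.write_idx] = x
                self.write_idx = (self.write_idx + 1) % len(self.buf)
            self.count += len(items)

        def read(self, n):
            assert n <= self.items_available()
            out = []
            for _ in range(n):
                out.append(self.buf[self.read_idx])
                self.read_idx = (self.read_idx + 1) % len(self.buf)
            self.count -= n
            return out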

> 3)  Is it possible to launch multiple fg.start() functions (i.e. Is it
> possible to run multiple threads/demodulations at the same time)?

Yes, but that's not what I would suggest.  fg.start already uses
multiple threads in certain cases.  Also, multiple threads are not
needed to run multiple demodulations at the same time.
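
A single flow graph can hold several independent demodulation chains,
and one fg.start() services all of them.  The sketch below is written
against the classic gr.flow_graph API; the specific blocks and
parameters are just illustrative choices, so adjust for whatever your
installed version provides.

    from gnuradio import gr

    fg = gr.flow_graph()

    # Chain 1: complex source -> quadrature (FM-style) demod -> sink
    src1   = gr.sig_source_c(256e3, gr.GR_SIN_WAVE, 1e3, 1.0, 0)
    demod1 = gr.quadrature_demod_cf(1.0)
    sink1  = gr.null_sink(gr.sizeof_float)
    fg.connect(src1, demod1, sink1)

    # Chain 2: a second, completely independent chain in the same graph
    src2   = gr.sig_source_c(256e3, gr.GR_SIN_WAVE, 2e3, 1.0, 0)
    demod2 = gr.quadrature_demod_cf(1.0)
    sink2  = gr.null_sink(gr.sizeof_float)
    fg.connect(src2, demod2, sink2)

    fg.start()     # one call runs both chains
    # ... later: fg.stop(); fg.wait()

The point is that parallelism inside one graph is the scheduler's job,
not something you get by spawning extra fg.start() calls yourself.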

A more transparent solution would be to have the GNU Radio scheduler
dynamically partition the graph across multiple SMP processors.  This
would mean that user code would "just go faster" with no user code
changes required.  Getting good performance in most of these cases
requires consideration of memory allocation across processors, cache
characteristics, locking overhead for shared data structures, etc.
There's a mountain of literature available.

> I am sorry if some of these questions may seem pedestrian, however I am
> coming from a more hardware-centric background and am trying to improve my
> software knowledge.

No problem.  There are lots of ways to distribute the processing.

> Thanks in advance for any responses,
> John

You're welcome!

Eric



