
Re: [Discuss-gnuradio] Block's maximum Samples per second


From: Marcus Müller
Subject: Re: [Discuss-gnuradio] Block's maximum Samples per second
Date: Mon, 15 Aug 2016 12:07:48 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0

Hey Joe,

On 15.08.2016 02:26, Joe D wrote:
> Referring to numerous papers and previous discussions, GNU Radio's design is such that a block will run as fast as the CPU allows (unless limited by the source/sink front end's speed, or by introducing a Throttle block).
Exactly.

> But practically, how fast will it go? Is there a known correlation with the CPU/machine specs that would allow one to determine an upper limit on the maximum sampling rate for a given block?

Of course there's a general correlation: the faster your machine is, the faster your algorithms finish their work, the faster a block can take on the next chunk of work, and that is effectively equivalent to a processing speed.
But no other statement can be made in general.

Obviously, things are very different for different classes of problems – for example, a simple multiplier is usually memory-bandwidth limited, whereas an eigenvector-decomposition-based frequency estimator will be CPU limited. The number of different blocks influences how the blocks can be scheduled and how much of their input/output stays in CPU caches; the number, architecture and size of caches and RAM are of course critical, just as much as your OS, etc.
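
To make "processing speed" concrete, you can simply time a block between a null source and a null sink. A minimal sketch, assuming the 3.7-era Python API, with multiply_const_cc standing in for whatever block you actually care about:

import time
from gnuradio import gr, blocks

N = int(100e6)                       # samples to push through the block
tb = gr.top_block()
src = blocks.null_source(gr.sizeof_gr_complex)
head = blocks.head(gr.sizeof_gr_complex, N)  # stop after N samples
dut = blocks.multiply_const_cc(0.5)          # block under test
sink = blocks.null_sink(gr.sizeof_gr_complex)
tb.connect(src, head, dut, sink)

t0 = time.time()
tb.run()                             # returns once head has seen N samples
print("%.1f Msps" % (N / (time.time() - t0) / 1e6))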

> Is there a documented / known empirical way to determine the specs of a machine (CPU/cores/RAM) based on the maximum samples per second we would like to maintain, taking into account the type/number of blocks in the flowgraph?

No; there can't be. The blocks do different things, and these different things will work differently on different machines.

Really, this is the good old "benchmarking" problem: no benchmark can represent all use cases of something that is effectively a library, and as soon as a system hits a certain level of complexity, the only meaningful measurement is the one done on the system you're actually interested in. Multi-core, speed-adaptive, RAM-caching architectures running multi-threaded applications on general-purpose operating systems definitely hit that level of complexity.
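
So the practical advice is: wrap the timing above in a small harness, run it with the actual blocks, item sizes and parameters from your flowgraph on the actual target machine, and repeat it a few times to see the variance. A sketch along those lines (the function name, trial count and the two example blocks are arbitrary choices of mine; note that fft_vcc works on vectors, so its rate comes out in vectors per second):

import time
from gnuradio import gr, blocks, fft
from gnuradio.fft import window

def benchmark(make_dut, itemsize, nitems, trials=5):
    """Best observed rate (items/s) over several runs."""
    rates = []
    for _ in range(trials):
        tb = gr.top_block()
        head = blocks.head(itemsize, nitems)
        tb.connect(blocks.null_source(itemsize), head,
                   make_dut(), blocks.null_sink(itemsize))
        t0 = time.time()
        tb.run()
        rates.append(nitems / (time.time() - t0))
    return max(rates)  # best run is the one least disturbed by the OS

# memory-bound multiply vs. a considerably heavier 1024-point FFT:
print(benchmark(lambda: blocks.multiply_const_cc(0.5),
                gr.sizeof_gr_complex, int(50e6)))
print(benchmark(lambda: fft.fft_vcc(1024, True,
                                    window.blackmanharris(1024)),
                gr.sizeof_gr_complex * 1024, int(200e3)))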

Best regards,
Marcus
