Re: [Discuss-gnuradio] gr_file_descriptor_source blocking


From: Tom Rondeau
Subject: Re: [Discuss-gnuradio] gr_file_descriptor_source blocking
Date: Mon, 6 May 2013 10:01:13 -0400

On Sat, May 4, 2013 at 12:21 PM, Nico Otterbach
<address@hidden> wrote:
> Hi all,
>
> I just ran into a couple of problems concerning blocking calls and wanted to
> discuss possible solutions with some GNU Radio experts.
>
> I'm trying to exchange samples between different flowgraphs on different
> computers over the network. For this I'm using the TCP blocks (grc_blks2),
> which actually use the gr_file_descriptor_* blocks to access the created
> socket file descriptors.
>
> The problem occurs almost every time I try to stop the top block of a
> "relaying" (tcp_source - do some processing - tcp_sink) flowgraph in order
> to rebuild the whole flowgraph. It hangs in the wait() call made right
> after stop(), which suggests that some thread(s) could not be shut down.
>
> Here's the relevant gdb output after calling the wait() method; you can find
> a more extensive one at http://pastebin.com/1mtgi8jZ :
>
> (gdb) t 57
> [Switching to thread 57 (Thread 0x7fff4dfeb700 (LWP 5839))]
> #0  0x00007ffff7bcbd2d in read () from /lib/x86_64-linux-gnu/libpthread.so.0
> (gdb) bt
> #0  0x00007ffff7bcbd2d in read () from /lib/x86_64-linux-gnu/libpthread.so.0
> #1  0x00007ffff41f3b24 in gr_file_descriptor_source::read_items(char*, int)
> () from /usr/local/lib/libgnuradio-core-3.6.5git.so.0.0.0
> #2  0x00007ffff41f3bc5 in gr_file_descriptor_source::work(int,
> std::vector<void const*, std::allocator<void const*> >&, std::vector<void*,
> std::allocator<void*> >&) ()
>    from /usr/local/lib/libgnuradio-core-3.6.5git.so.0.0.0
> #3  0x00007ffff40eeb54 in gr_sync_block::general_work(int, std::vector<int,
> std::allocator<int> >&, std::vector<void const*, std::allocator<void const*>
>>&, std::vector<void*, std::allocator<void*> >&) ()
>    from /usr/local/lib/libgnuradio-core-3.6.5git.so.0.0.0
> #4  0x00007ffff40cd3a3 in gr_block_executor::run_one_iteration() () from
> /usr/local/lib/libgnuradio-core-3.6.5git.so.0.0.0
> #5  0x00007ffff40fb347 in
> gr_tpb_thread_body::gr_tpb_thread_body(boost::shared_ptr<gr_block>, int) ()
> from /usr/local/lib/libgnuradio-core-3.6.5git.so.0.0.0
> #6  0x00007ffff40ec1b6 in
> boost::detail::function::void_function_obj_invoker0<gruel::thread_body_wrapper<tpb_container>,
> void>::invoke(boost::detail::function::function_buffer&) ()
>    from /usr/local/lib/libgnuradio-core-3.6.5git.so.0.0.0
> #7  0x00007ffff3dea6be in boost::detail::thread_data<boost::function0<void>
>>::run() () from /usr/local/lib/libgruel-3.6.5git.so.0.0.0
> #8  0x00007ffff3483da9 in ?? () from /usr/lib/libboost_thread.so.1.48.0
> #9  0x00007ffff7bc4e9a in start_thread () from
> /lib/x86_64-linux-gnu/libpthread.so.0
> #10 0x00007ffff69b1ccd in clone () from /lib/x86_64-linux-gnu/libc.so.6
> #11 0x0000000000000000 in ?? ()
>
>
> It looks like the gr_file_descriptor_source is blocking, which seems to be
> expected if you take a look at the relevant code section.
>
> Is this (still) the expected behavior, or did I make a mistake?
> If it isn't my mistake, do you have any advice on how to solve this cleanly
> without a rewrite? Perhaps the blocking call could be a problem for other
> users, too.
>
> As a temporary workaround I switched to the UDP blocks, whose implementation
> looks like it takes Boost thread handling into account. However, this is not
> my preferred solution because of the possible sample loss.
>
> Thanks,
> Nico

Nico,

Yes, the 'read' function is a blocking call that doesn't exit when the
interrupt signal is sent by 'stop()'. The UDP blocks were redone with
Boost ASIO partly to make them (more easily) portable across more OSes
and partly to use the asynchronous receive code, which makes them easy
to interrupt.
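
For illustration, here's a minimal sketch of that interruptible
async-receive pattern. This is not the actual udp_source code; the port,
buffer size, and handler name are placeholders:

#include <boost/asio.hpp>
#include <boost/array.hpp>
#include <iostream>

static void handle_receive(const boost::system::error_code &ec, std::size_t n)
{
    if (ec == boost::asio::error::operation_aborted)
        std::cout << "receive cancelled (shutting down)" << std::endl;
    else if (!ec)
        std::cout << "got " << n << " bytes" << std::endl;
}

int main()
{
    boost::asio::io_service io;
    boost::asio::ip::udp::socket sock(
        io, boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 12345));

    boost::array<char, 1500> buf;
    boost::asio::ip::udp::endpoint sender;

    // Post an asynchronous receive; the handler runs when data arrives
    // or when the pending operation is cancelled.
    sock.async_receive_from(boost::asio::buffer(buf), sender, &handle_receive);

    // A stop() path can call sock.cancel() (or io.stop()) from another
    // thread to abort the pending receive instead of blocking forever.
    io.run();
    return 0;
}

The point is that a pending asynchronous receive can be cancelled from
the stop() path, whereas a plain blocking read() cannot.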

For the file_descriptor_source, you'll want to implement the use of
'select' to better handle this.
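
Something along these lines (illustrative only -- the timeout, names,
and error handling here are placeholders, not the actual block code):

#include <sys/select.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

/* Sketch: wait up to 100 ms for the descriptor to become readable,
 * then read.  Returns bytes read, 0 on timeout or interruption, -1 on
 * error.  A work() loop built this way returns control regularly, so
 * the scheduler thread can be interrupted by stop()/wait(). */
static ssize_t read_with_timeout(int fd, char *buf, size_t len)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = 0;
    tv.tv_usec = 100 * 1000;    /* 100 ms */

    int r = select(fd + 1, &rfds, NULL, NULL, &tv);
    if (r < 0) {
        if (errno == EINTR)
            return 0;           /* interrupted: let the caller retry */
        perror("select");
        return -1;
    }
    if (r == 0)
        return 0;               /* timeout: nothing to read yet */

    return read(fd, buf, len);
}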

Tom


