
[lwip-users] TCP server app, using select


From: mick s
Subject: [lwip-users] TCP server app, using select
Date: Thu, 29 May 2008 15:21:08 +0800

Hi

I'm having a problem using select() in my TCP server application, and I hope someone can point out where I'm going wrong: select() always seems to mark my accepted client socket as readable, even when there is no data to be read.

I'd like my task to:
- periodically service some of my own functions,
- accept incoming TCP connections, and
- service any connection that has already been accepted.
It should drop the current connection in favour of a new one if the current connection has no data pending.

I've designed my code to first do a select on the server socket after the bind() and a listen().  The select has a timeout.
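For reference, the setup before that first select looks roughly like this (simplified and abbreviated from my real code, with error checking stripped; the function name and the port/backlog values are just placeholders, and the select call itself is shown further down):

   #include <string.h>
   #include "lwip/sockets.h"

   #define SERVER_PORT   5001           /* placeholder port */

   static int iServerSock = -1;
   static int iClientSock = -1;

   static void ServerSetup(void)
   {
      struct sockaddr_in addr;

      /* create the listening socket */
      iServerSock = lwip_socket(AF_INET, SOCK_STREAM, 0);

      memset(&addr, 0, sizeof(addr));
      addr.sin_family      = AF_INET;
      addr.sin_port        = htons(SERVER_PORT);
      addr.sin_addr.s_addr = htonl(INADDR_ANY);

      /* bind and listen; the task loop then does the select shown below */
      lwip_bind(iServerSock, (struct sockaddr *)&addr, sizeof(addr));
      lwip_listen(iServerSock, 1);
   }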

Once the server socket is readable, I do an lwip_accept() to get the client socket number.  I then use the SO_RCVTIMEO socket option to set the read timeout for the socket.

I try an lwip_recvfrom() on the client socket, which returns 0, since there is no data to read yet.
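The accept and timeout part looks roughly like this (again simplified; the function name and the 100 ms value are just placeholders, and if I've read sockets.c correctly the SO_RCVTIMEO value is an int in milliseconds rather than a struct timeval):

   static void AcceptClient(void)
   {
      struct sockaddr_in clientAddr;
      socklen_t addrLen = sizeof(clientAddr);
      int recvTimeout = 100;              /* ms, placeholder value */
      char buf[64];
      int len;

      /* get the client socket number for the pending connection */
      iClientSock = lwip_accept(iServerSock, (struct sockaddr *)&clientAddr, &addrLen);

      /* set the read timeout on the accepted socket */
      lwip_setsockopt(iClientSock, SOL_SOCKET, SO_RCVTIMEO,
                      &recvTimeout, sizeof(recvTimeout));

      /* returns 0 here, as the client hasn't sent anything yet */
      len = lwip_recvfrom(iClientSock, buf, sizeof(buf), 0, NULL, NULL);
      (void)len;
   }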

If I then call lwip_select() with both the server and client sockets in the readset, it always returns with the client socket marked in the readset. A subsequent recv on the client socket returns 0, unless there actually is data to read.

I can't follow the lwip_select() code entirely, but it seems that the problem may be in lwip_accept(). There is a line:

nsock->rcvevent += -1 - newconn->socket;

This affects the subsequent select, which tests the socket with

 if (p_sock && (p_sock->lastdata || p_sock->rcvevent))


My call to select is as follows:

      FD_ZERO(&readset);
      FD_SET(iServerSock, &readset);
      iNumSocks = iServerSock + 1;
      if ( iClientSock >= 0 )
      {
         FD_SET(iClientSock, &readset);
         if ( iClientSock + 1 > iNumSocks )
            iNumSocks = iClientSock + 1;
      }

      selectTimeout.tv_sec  = 0;
      selectTimeout.tv_usec = POLL_TIMEOUT * 1000;

      if ( lwip_select(iNumSocks, &readset, NULL, NULL, &selectTimeout) == 0 )
         return 0;

Once a client socket has been accepted, lwip_select always returns 1.  I then test the client socket with

  if ( iClientSock >= 0 && FD_ISSET(iClientSock, &readset) )

This condition is always true once the client socket has been accepted.

I can read data once it has actually been sent, and I can send on the socket. The SO_RCVTIMEO option doesn't seem to change this behaviour. I've tried lwIP 1.3.0 and the version from CVS today. I'm using FreeRTOS on a small ARM chip, the Atmel AT91SAM7X256.


Thanks in advance


