
[lwip-users] Re: [lwip] Re: lwIP on DSPs


From: Bill Knight
Subject: [lwip-users] Re: [lwip] Re: lwIP on DSPs
Date: Thu, 09 Jan 2003 01:06:36 -0000

All of the other code in the stack is written expecting an 8-bit
char.  The struct definition I gave supplies that.  This avoids
problems with a structure like:

struct foo {
  u8_t  value1;
  u16_t value2;
};

If you packed everything into 16-bit words, then the high-order
byte of value2 would have to share a word location with value1.

The method I used would create a struct like:

struct foo {
  u8_t value1;
  u8_t value2[2];
};

While this takes up more RAM, it uses the same number of chars
as the original.  Not elegant, but it should work and be reasonably
easy to implement.
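
To read and write value2 you then need helpers that combine and split
the two octets.  Something like this should do (just a sketch - the
FOO_GET_VALUE2/FOO_SET_VALUE2 names are made up, and it assumes the
octets are stored high byte first, i.e. network byte order):

/* u8_t/u16_t as in the lwIP port's cc.h; on this DSP a "char" is
   16 bits wide but only the low 8 bits are ever used. */
typedef unsigned char  u8_t;
typedef unsigned short u16_t;

/* Made-up helper: combine value2's two octets back into a u16_t. */
#define FOO_GET_VALUE2(f) \
  ((u16_t)((((f)->value2[0] & 0xffu) << 8) | ((f)->value2[1] & 0xffu)))

/* Made-up helper: split a u16_t into value2's two octets. */
#define FOO_SET_VALUE2(f, v) do {               \
    (f)->value2[0] = (u8_t)(((v) >> 8) & 0xff); \
    (f)->value2[1] = (u8_t)((v) & 0xff);        \
  } while (0)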

BTW- I forgot to include the trailing #endif in the checksum code.

-Bill


On Monday 03 December 2001 20.53, you wrote:
> Adam
>   I've been doing some more thinking on the problem and it might not
> be as bad as I originally thought.  My solution is similar to yours
> with a change to the header structure definitions.  The problem with 16
> bit chars is that only the lower 8 bits actually get sent to the
> network controller.  What is needed for sending to the network interface
> is still an array (pbuf) of chars with only the lower 8 bits used in
> each char. Also in <limits.h> there is a #define for CHAR_BIT
> which is 16 on the processor I am working with.  So:

I'm afraid I don't really follow you. Are the upper 8 bits of each 16-bit 
char unused? Wouldn't it be possible to use a 16-bit type instead? I was 
thinking of using u16_t's for most of the stuff and shifting the 8 bits 
around.
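
For instance, packing two octets into each 16-bit word and shifting
them out again when handing the data to the driver.  Roughly (just a
sketch - send_octet() is a made-up driver hook):

extern void send_octet(u8_t octet);   /* made-up driver hook */

/* Unpack two octets per 16-bit word when feeding the driver. */
static void send_words(const u16_t *buf, int nwords)
{
  int i;
  for (i = 0; i < nwords; i++) {
    send_octet((u8_t)((buf[i] >> 8) & 0xff));  /* high octet first */
    send_octet((u8_t)(buf[i] & 0xff));         /* then the low octet */
  }
}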

The CHAR_BIT #define seems to be the right way to go.
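
I.e. something like this to select between the two struct layouts at 
compile time (sketch only; u8_t/u16_t as defined in the port's cc.h):

#include <limits.h>

#if CHAR_BIT == 16
/* DSP port: 16-bit chars, split multi-octet fields into octet arrays. */
struct foo {
  u8_t value1;
  u8_t value2[2];
};
#else
/* Ordinary 8-bit chars: keep the packed layout. */
struct foo {
  u8_t  value1;
  u16_t value2;
};
#endif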

About the bit fields - they are gone in the latest code. They were just too 
much hassle with too little gain.






