
Re: [lwip-users] inet_chksum.c misbehaving with compiler optimisation?


From: FreeRTOS Info
Subject: Re: [lwip-users] inet_chksum.c misbehaving with compiler optimisation?
Date: Wed, 23 Nov 2011 21:17:40 +0000
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:8.0) Gecko/20111105 Thunderbird/8.0

> Ok, as would hopefully be expected, checksumming dummy arrays produces
> the same result with and without the function in question being
> optimised.  That is good, so I should stop going down that dead end.
> Thanks for the suggestion, I should have thought to do that myself, doh!
> 
> I have attached the logs as they look now, and they don't show any
> strange data at all, but do show the problem with retransmissions.
> Good.pcap is without optimisation in the checksumming, bad.pcap is with
> optimisation in the checksumming.  192.168.0.200 is the target.
> 
> I would be grateful if you could elaborate on the timing theory.  I
> presume the extra speed is causing some resource to be exhausted, but
> where should I look first?  I checked the stats array some time back
> and could not see any error counts incremented.  I also tried to get
> some debug prints out, but unfortunately the target I have does not
> make that easy without the CPU being held in debug state for the
> entire printout (I could write a UART driver and shove the messages
> out there, though).
> 
> The DMA buffers are squeezed into 16K of RAM.  I could try increasing
> those, but there is very little room.


Just in case anybody is still looking at this: as various people have
suggested, speed does seem to be the issue.  To add to the evidence from
checksumming dummy buffers with and without optimisation, I have now also
added a forced delay in the checksum function, and everything then runs
correctly with full optimisation enabled.
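
The delay itself is nothing clever; for anyone wanting to repeat the
experiment, a busy loop at the top of lwip_standard_chksum() in
inet_chksum.c is enough.  Something along these lines (the helper name
and loop count are just what I would pick for illustration, nothing
magic about them):

/* Test-only delay, called at the start of lwip_standard_chksum().
 * 'volatile' stops the optimiser deleting the loop even at full
 * optimisation; the count of 500 is arbitrary. */
static void chksum_test_delay(void)
{
  volatile u32_t i;
  for (i = 0; i < 500; i++) {
    /* burn cycles only, no side effects */
  }
}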

I would still be interested in any elaboration on where to look for the
part that falls over at the higher execution speed, as mentioned above.
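
On the debug print side, the plan (as per the quoted mail) would be to
route lwIP's diagnostic output to a UART rather than the debugger, and
dump the stats periodically.  A rough sketch, assuming LWIP_STATS and
LWIP_STATS_DISPLAY are switched on in lwipopts.h, and where
uart_printf() is a blocking driver function I would still have to write:

/* In the port's cc.h - send lwIP's diagnostic output to the UART: */
#define LWIP_PLATFORM_DIAG(x) do { uart_printf x; } while (0)

/* In lwipopts.h - compile the counters and stats_display() in: */
#define LWIP_STATS          1
#define LWIP_STATS_DISPLAY  1

/* Called from a low priority task or the main loop: */
#include "lwip/stats.h"

void dump_lwip_stats(void)
{
  stats_display(); /* prints pbuf/memp/TCP counters via LWIP_PLATFORM_DIAG */
}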

I am concentrating on the driver again right now, but none of the error
bits are being set (out of descriptors, overrun, underrun, etc.).  I
have also managed to double the number of DMA buffers by cutting out
other parts of the code.

If the stack is queueing packets for transmission because of the higher
execution speed, is there a limit on the queue length, or is it just
determined by RAM availability?  If a resource were starved - a pbuf
pool or whatever - would that show up by inspecting the stats array?
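
For what it is worth, these are the lwipopts.h options I am assuming
bound the transmit queueing, and the counters I am watching for
starvation (lwIP 1.4.x stats layout) - corrections welcome if these are
not the right knobs:

/* lwipopts.h - illustrative values only, not what is in my build: */
#define TCP_SND_BUF       (4 * TCP_MSS)               /* bytes buffered per pcb */
#define TCP_SND_QUEUELEN  (4 * TCP_SND_BUF / TCP_MSS) /* max queued segments    */
#define MEMP_NUM_TCP_SEG  TCP_SND_QUEUELEN            /* pool the segments use  */
#define PBUF_POOL_SIZE    16                          /* RX pbuf pool           */

/* Checked from the application, with LWIP_STATS enabled: */
#include "lwip/stats.h"

void check_for_starvation(void)
{
  if (lwip_stats.memp[MEMP_TCP_SEG].err != 0 ||
      lwip_stats.memp[MEMP_PBUF_POOL].err != 0) {
    /* a pool has run dry at some point */
  }
}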

Regards,
Richard.


