
[lwip-users] [lwip] Forcing PBUF_POOL


From: Wurmsdobler, Peter
Subject: [lwip-users] [lwip] Forcing PBUF_POOL
Date: Wed, 08 Jan 2003 23:37:21 -0000

Hello,

We are considering using lwIP for a control application, mainly UDP for
control messages, but also TCP for other services. Therefore speed is
important, and PBUF_POOL buffers are preferable, both for input and output.
Second, since control messages are rather small, it is unlikely that they
will have to be chained. Another issue is that the underlying link layer
will be FireWire, with a worst-case payload of 512 bytes
(at 100 Mbit/s).

If I understand the current implementation correctly, what happens for UDP
is: the application allocates a pbuf for the data, fills it in and can then
use udp_send to send the pbuf over a udp_pcb. The data can certainly be
longer than PBUF_SIZE, in which case there will be a chain. In any case,
udp_send will chain a header pbuf in front and finally, via ip_output_if
and netif->output, pass it on to the driver code. There the pbufs have to
be copied into a linear buffer which can be written to the device driver
function.
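
For reference, here is roughly what that driver-side copy looks like in
our case; fw_write_packet() is just a made-up stand-in for the real device
call, so treat this as a sketch rather than working driver code:

    #include <string.h>

    #include "lwip/err.h"
    #include "lwip/pbuf.h"
    #include "lwip/netif.h"

    /* Hypothetical device write routine, declared here only for the sketch. */
    extern void fw_write_packet(const u8_t *buf, u16_t len);

    /* Flatten a (possibly chained) pbuf into a linear buffer before
     * handing it to the device. */
    static err_t
    low_level_output(struct netif *netif, struct pbuf *p)
    {
      static u8_t frame[512];          /* worst-case FireWire payload */
      struct pbuf *q;
      u16_t offset = 0;

      (void)netif;
      if (p->tot_len > sizeof(frame)) {
        return ERR_BUF;                /* does not fit one link packet */
      }
      for (q = p; q != NULL; q = q->next) {
        memcpy(&frame[offset], q->payload, q->len);   /* the extra copy */
        offset += q->len;
      }
      fw_write_packet(frame, offset);  /* hypothetical device write */
      return ERR_OK;
    }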

In our case, messages are much smaller than PBUF_SIZE (or PBUF_SIZE can be
adapted to 512). So it would be preferable for us to use a pre-canned pbuf
that already contains a link header, use those, and have udp_send not chain
another pbuf in front. Going further, the link layer would not be obliged
to copy all pbuf payload into a local buffer; instead a pointer to the
payload (including all headers) could be passed to the low-level output
function. No copying would take place anywhere in the stack. For bigger
data there would be the overhead that every pbuf in the chain has to be
populated with the same header.
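
One way to approximate this with the stock code (if I read pbuf.c
correctly; the details may differ in the current tree) would be to
allocate at the PBUF_TRANSPORT layer, so that headroom for the UDP, IP and
link headers is already reserved in the first pbuf and udp_send has no
reason to chain a header pbuf. A rough sketch, with the function name and
the single-pbuf assumption purely illustrative:

    #include <string.h>

    #include "lwip/pbuf.h"
    #include "lwip/udp.h"

    /* Allocate with header room already reserved, fill, send.
     * Assumes the message fits in a single pool buffer. */
    void
    send_control_msg(struct udp_pcb *pcb, const void *msg, u16_t len)
    {
      struct pbuf *p = pbuf_alloc(PBUF_TRANSPORT, len, PBUF_POOL);
      if (p == NULL) {
        return;                        /* pool exhausted */
      }
      memcpy(p->payload, msg, len);    /* the only copy on the send path */
      udp_send(pcb, p);                /* headers grow into the headroom */
      pbuf_free(p);                    /* caller still owns the pbuf */
    }

With that, the driver could pass p->payload straight to the device
whenever p->next == NULL.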

What do you think, and how could I achieve this with the current code?
Would I need to write a modified udp_send function?

In the case of TCP, tcp_write in the current implementation will enqueue
the stream to be cut into segments. For each segment a MEMP_TCP_SEG and a
PBUF_RAM of the segment size are allocated, and the byte-stream chunks are
copied into the respective pbufs. If (pcb->mss) is always negotiated to be
smaller than (512 - (sum of all header sizes)), _and_ PBUF_POOL pbufs are
used, then the same concept would also work: the driver could just use
p->payload to stream the link packet to the device. Only one copy takes
place.
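
In lwipopts.h terms, the sizing I have in mind would look something like
the following; FW_HLEN is a placeholder I am inventing for the FireWire
encapsulation header, and the 20 + 20 assumes IP and TCP without options,
so the numbers are only illustrative:

    /* lwipopts.h sketch: make one TCP segment fit one 512-byte
     * FireWire payload. */
    #define FW_HLEN             16                 /* placeholder link header */
    #define PBUF_POOL_BUFSIZE   (512 + FW_HLEN)    /* whole frame in one pool pbuf */
    #define TCP_MSS             (512 - 20 - 20)    /* IP + TCP headers inside the 512 */
    #define TCP_SND_BUF         (4 * TCP_MSS)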

Is this reasonable? What changes, other than those suggested above, would
I have to make so as not to break the stack?

peter

Peter Wurmsdobler
Eurotherm Drives Ltd.
Littlehampton, BN17 7RZ, UK
TEL: +44 19 03 73 73 58
FAX: +44 19 03 73 71 07
[This message was sent through the lwip discussion list.]



