
RE: [lwip-users] mem_malloc(): memory fragmentation


From: Goldschmidt Simon
Subject: RE: [lwip-users] mem_malloc(): memory fragmentation
Date: Thu, 26 Oct 2006 09:43:11 +0200

> On Mon, 2006-10-23 at 13:42 +0100, Jonathan Larmour wrote:
> > Goldschmidt Simon wrote:
> > > maybe a better solution might be to implement
> > > mem_malloc() as different pools and leave the PBUF_RAM 
> > > implementation since it would be allocated from pools then.
> > 
> > Note that despite what's implied in that bug, IMHO you can't
> > actually let it be the current pbuf_alloc(..., PBUF_POOL), otherwise
> > if you use up all the pbufs with TX data, you won't have room for
> > any RX packets, including the TCP ACK packets which will allow you
> > to free some of your TX packets.
> > So either RX and TX packets should be allocated from different
> > pools, or there should be a low water mark on the pool for TX
> > allocations in order to reserve a minimum number of packets for RX.
> 
> The latter would be my preference as it is much more efficient 
> on memory usage where you have unidirectional traffic (which 
> is, to a first approximation, quite common for bulk transfers).

That would imply that the current lwIP implementation uses PBUF_POOL for
RX frames only, while all TX frames use PBUF_RAM or no-copy pbufs. As
far as I know, that's not true: SNMP & PPP use PBUF_POOL in places that
I don't think are on the input side. But it would certainly be a good
idea! And if PBUF_RAM were allocated from pools as well, there would be
no difference between using PBUF_POOL and PBUF_RAM.
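
To illustrate the low-water-mark idea in code (just a sketch of what I
have in mind, not lwIP code: pool_free_count(), pool_get() and
RX_PBUF_RESERVE are made-up names):

/* TX allocations from a shared pbuf pool fail once fewer than
 * RX_PBUF_RESERVE elements remain, so RX (and with it the TCP ACKs
 * that free TX pbufs) can always make progress. */

#define RX_PBUF_RESERVE 4  /* elements kept back for incoming frames */

extern unsigned pool_free_count(void); /* hypothetical: free elements */
extern void *pool_get(void);           /* hypothetical: take one element */

void *pool_get_tx(void)
{
  /* refuse TX allocations that would eat into the RX reserve */
  if (pool_free_count() <= RX_PBUF_RESERVE) {
    return NULL;
  }
  return pool_get();
}

void *pool_get_rx(void)
{
  /* RX may drain the pool completely, including the reserve */
  return pool_get();
}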

> 
> > One good solution if using 2^n sized pools is to use a buddy
> > allocator[1] to divide up larger contiguous space, so it may not be
> > as wasteful as you think. One difference with a normal buddy
> > allocator is that a normal one would normally e.g. return a 2Kbyte
> > buffer if you request 1025 bytes. An lwIP implementation could work
> > for maximum efficiency instead and allocate that as a 1024 byte
> > buffer plus a 64 byte buffer (or whatever the lowest granularity
> > would be) chained together.
> 
> I think that would definitely be the only way such a change 
> would be acceptable given that lwIP tries to have a low 
> memory footprint, and indeed it's the only way I'd even 
> thought of it being done.

I don't really understand what you mean by the buddy allocator... Do you
mean chaining 2 pbufs together? The way it's done in the wikipedia
example does not really suppress fragmentation, does it? Chaining a pbuf
from two fragments would be easy to implement with pools, but I don't
know whether it would be slower or cause any other problems.
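
To make sure we're talking about the same thing, here's roughly how I'd
picture the chained allocation (a sketch only; the pool sizes and the
helpers pool_buf_alloc()/chain_free() are invented for illustration):

#include <stddef.h>

/* descending pool element sizes, picked only for illustration */
static const size_t pool_sizes[] = { 1024, 512, 256, 128, 64 };
#define NUM_POOLS (sizeof(pool_sizes) / sizeof(pool_sizes[0]))

struct chain_buf {
  struct chain_buf *next;
  size_t len;        /* usable payload size of this element */
};

/* hypothetical: take one element from the pool of the given size */
extern struct chain_buf *pool_buf_alloc(size_t pool_size);
extern void chain_free(struct chain_buf *head);  /* hypothetical */

/* Cover 'len' bytes with chained pool elements, e.g. 1088 -> 1024 + 64,
 * instead of rounding the whole request up to 2048. */
struct chain_buf *chain_alloc(size_t len)
{
  struct chain_buf *head = NULL;
  struct chain_buf **tail = &head;

  while (len > 0) {
    size_t pick = pool_sizes[0];  /* largest pool as the fallback */
    size_t i;
    struct chain_buf *b;

    /* smallest pool element that still covers the remainder */
    for (i = 0; i < NUM_POOLS; i++) {
      if (pool_sizes[i] >= len) {
        pick = pool_sizes[i];
      }
    }
    b = pool_buf_alloc(pick);
    if (b == NULL) {
      chain_free(head);  /* out of elements: undo and fail */
      return NULL;
    }
    b->next = NULL;
    b->len = pick;
    *tail = b;
    tail = &b->next;
    len = (len > pick) ? len - pick : 0;
  }
  return head;
}

For a 1088-byte request this yields a 1024-byte element chained to a
64-byte one, instead of a single 2048-byte buffer.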

> 
> > But all this would be a non-trivial bit of coding so I'm sure
> > people would be grateful if you have time to do it.

I don't know about the triviality... I implemented 5 pools of different
sizes (using the memp.c interface, just adding some pools). Normal calls
to mem_malloc() get the size they ask for (maybe from a pool that's too
big, but only DHCP, SNMP & loopif use mem_malloc(), and I suggest they
could be reworked to use memp_malloc()). pbuf_alloc(PBUF_RAM) would then
construct a pbuf chain of sizes just like you suggested.
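
Roughly like this (again only a sketch; mem_pool_get() and the pool
sizes are placeholders, the real version would go through the memp.c
interface):

#include <stddef.h>

/* ascending pool element sizes, placeholders for the five pools */
static const size_t mem_pool_sizes[] = { 64, 128, 256, 512, 1024 };
#define MEM_NUM_POOLS (sizeof(mem_pool_sizes) / sizeof(mem_pool_sizes[0]))

/* hypothetical: take one element from pool 'i', NULL if empty */
extern void *mem_pool_get(size_t i);

void *mem_malloc_pools(size_t size)
{
  size_t i;
  /* mem_malloc() callers need contiguous memory, so take the smallest
   * pool element that fits; unlike a pbuf chain, the request cannot
   * be split across pools here */
  for (i = 0; i < MEM_NUM_POOLS; i++) {
    if (mem_pool_sizes[i] >= size) {
      void *p = mem_pool_get(i);
      if (p != NULL) {
        return p;
      }
      /* pool empty: fall through and try the next bigger pool */
    }
  }
  return NULL;  /* no pool element is large enough or all are empty */
}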

> 
> Exactly. These days, changes to lwIP generally only get 
> included by someone who uses it taking the time to write the 
> code - the maintainers do their best to maintain it and fix 
> bugs, but a rework such as this is unlikely to ever reach the 
> top of their list of things to do.
> 
> > I could also believe the result will use a fair bit more code 
> > space than the present mem_malloc.
> 
> Certainly true.

About code size, I'm not sure, but with (external) fragmentation
suppressed you can calculate the amount of data RAM you need much more
precisely, and thus save RAM compared to the current heap; that should
make up for the bigger code size. Last but not least, you get a _much_
better feeling about running lwIP applications for years without
rebooting (at least I do).
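
For example, with pools of 16 x 64, 8 x 256 and 4 x 1024 bytes (numbers
picked only for illustration), the worst-case data RAM is exactly
16*64 + 8*256 + 4*1024 = 7168 bytes, whereas a heap needs fragmentation
headroom you can only guess at.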

I would be open to fragmentation tests, too (like Christiaan suggested),
but I'm not sure that makes much sense. Since the memory usage is almost
always caused by clients contacting my board, fragmentation would depend
on the network traffic my app sees... So I'd rather reason about the
fragmentation issue theoretically than try to prove it with examples.

Simon.



