bug-cvs

From: Larry Jones
Subject: Re: [Cvs-test-results] CVS trunk testing results (BSDI BSD/OS)
Date: Fri, 4 Nov 2005 17:09:03 -0500 (EST)

Derek Price writes:
> 
> It is possible. My Linux mmap man page says: ENOMEM is set when "No
> memory is available, or the process’s maximum number of mappings would
> have been exceeded." Could you count the number of times mmap is called
> (probably minus calls to munmap) before failing and see if it looks to
> be a consistent number?

Not only is it consistent, it's extremely suspicious: 1000 exactly.

> Before you try to do this, it might be instructive to verify the values
> for HAVE_MMAP & HAVE_MAP_ANONYMOUS from your BSDI config.h to make sure
> we really understand the code path. :)

They're both 1, of course.  :-)

> >I note that on my system, malloc() claims to page align large requests
> >-- maybe we're trying too hard.
> 
> Maybe in what sense? 

Maybe in the sense that if it were worth doing, malloc() should do it. 
So if malloc() doesn't do it, it's probably not worth doing.

> I gather that some systems don't automatically
> align large malloc requests, or this GNULIB module probably would not
> exist.

According to the comments in the module, you're the author!  It would
seem that it was written specifically for CVS as a replacement for
valloc().

> Are you suggesting that we shouldn't care and CVS should simply
> run slower on any systems that don't align large requests to the page
> boundary automatically? I'm not personally sure how drastic the speed
> decrease would be on uncooperative systems, but I would guess in the
> worst-case double the number of page faults, and twice the disk access
> could be a big deal with the amount of data CVS transfers.

I expect that the speed decrease would be minuscule; such micro-
optimization is rarely of any significant value.  Given the amount of
I/O that's occurring to read the file and the amount of data being moved
around (since the file is read into one big buffer and then copied into
a bunch of comparatively little buffers), any additional paging would
probably be undetectable.  Moreover, the buffers are allocated
sequentially, so even if they're not page aligned, the tail of one is
probably in the same page as the head of the next, which would mean only
one additional page fault in total.

A much more effective optimization would be to enhance the buffer system
so that you could simply hand-off the original (non-page aligned!) big
buffer to it rather than having to copy all the data.

> Of course, I seem to recall
> the reason that GNULIB preferred mmap to posix_memalign in the first
> place was that posix_memalign on most systems just allocated (size +
> pagesize - 1) or more bytes and returned the first page boundary within
> the allocated memory, which can waste up to a page of memory

That would indicate that posix_memalign() is built on top of the pre-
existing allocation system rather than being integrated into it.  That's
not entirely unreasonable since posix_memalign() allows arbitrary
alignment to any power of two rather than just the page size and most
realtime processes don't mind wasting a little space but may well be
unable to tolerate an unexpected page fault without jeopardizing their
realtime requirements.  CVS is not a realtime application.

-Larry Jones

I'm not a vegetarian!  I'm a dessertarian. -- Calvin




