
Re: Lock contention executing cvs log


From: Larry Jones
Subject: Re: Lock contention executing cvs log
Date: Fri, 28 Feb 2003 15:44:22 -0500 (EST)

MacDonald, Ian writes:
> 
> While several of my build processes were running, I was monitoring the CVS
> server with 'top'.  I noticed a strange occurrence with two of the cvs
> processes that were currently executing a log command - both processes grew
> to consume nearly all available memory (in my case 512MB).  While this was
> occurring, the cvs clients associated with these growing cvs processes
> started dumping the 'waiting for blahs lock' messages to stdout.  This went
> on for about 15 minutes before the log commands completed.  Does anyone
> know if this is normal behavior?

Probably.  CVS can be quite memory intensive.  Now, that's generally
*virtual* memory, so it doesn't matter so much.  If you were running a
version of top that's similar to the one I have, SIZE is the amount of
virtual memory a process is using, RES is the amount of real memory. 
You had to have been looking at SIZE rather than RES since it isn't
possible for two processes to both be using nearly all the physical
memory.  Don't forget that virtual memory is paged rather than processes
being swapped, so the two processes don't interfere with each other --
it's not necessary for one to wait for the other to release some virtual
memory before it can proceed.  In fact, I suspect you'll find that the
processes *never* release virtual memory; they start out growing, they
may or may not eventually stabilize; if they do, they stay that size
until they end; they never shrink.

I think what you're seeing is, indeed, lock contention.  Whichever
process starts first has to actually read the data off the disk and is
constantly stopping and waiting for that to happen.  The second process,
however, finds all the data it needs in the filesystem cache (courtesy
of the first process) and thus can proceed much faster until it finally
catches up.  Once that happens, the two processes proceed more-or-less in
lockstep and end up running into each other the next time they try to
create read locks.  Waiting for a lock is a very time-consuming process
-- the current code just sleeps for 30 seconds and then tries again.
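As a rough illustration (not the actual CVS source), that old behavior
amounts to a loop like the following, where try_read_lock() and the
repository path are stand-ins that simulate a briefly busy lock:

/* Illustrative sketch only, not the CVS implementation: if the lock is
 * busy, print a message and sleep a full 30 seconds before trying again.
 * try_read_lock() just pretends the lock frees up after a few attempts. */
#include <stdio.h>
#include <unistd.h>

#define LOCK_SLEEP_SECONDS 30

static int busy_attempts = 3;           /* simulated contention */

static int
try_read_lock(const char *repository)
{
    (void) repository;
    return (--busy_attempts > 0) ? -1 : 0;   /* -1 = lock busy */
}

int
main(void)
{
    const char *repository = "/cvsroot/module";

    while (try_read_lock(repository) != 0)
    {
        fprintf(stderr, "waiting for someone's lock in %s\n", repository);
        sleep(LOCK_SLEEP_SECONDS);
    }
    return 0;
}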

I just checked in a change that modifies the algorithm slightly.  When
contention is encountered, the code now makes a few retry attempts with
a very short wait time (initially 2 us with exponential backoff)
before giving up and going into the full-blown wait with messages and the
30 second wait.  That should significantly reduce the impact of
contention.  In my test with two simultaneous log processes, there was
lots of contention, but the second process usually got through after a
single retry and never needed more than 3 (I allow up to 9 retries, which
is a total delay of about 1 ms).  Of course, if there's enough
contention, you'll still end up in the old "30 second wait with a
message" code.  Better would be to make the whole waiting process use a
random wait with exponential backoff; that should help avoid processes
that have gotten into lockstep all trying to grab the master lock at the
same time.
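
Here is a hedged sketch of that retry strategy -- again, not the code
actually checked in: a handful of quick retries starting at 2 microseconds
and doubling each time (nine retries is roughly 1 ms total), with a small
random jitter added to each step to illustrate the randomized-backoff idea
for breaking up lockstep, before falling back to the old 30-second wait
loop.  try_read_lock() and the path are stand-ins:

/* Illustrative sketch only, not the change actually committed: try a few
 * quick lock attempts with exponential backoff (2 us doubling, about 1 ms
 * total over 9 retries) plus random jitter, then fall back to the old
 * message-and-30-second-sleep loop.  try_read_lock() just simulates a
 * briefly contended lock. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MAX_QUICK_RETRIES   9
#define INITIAL_BACKOFF_US  2
#define LOCK_SLEEP_SECONDS  30

static int busy_attempts = 4;           /* simulated contention */

static int
try_read_lock(const char *repository)
{
    (void) repository;
    return (--busy_attempts > 0) ? -1 : 0;   /* -1 = lock busy */
}

int
main(void)
{
    const char *repository = "/cvsroot/module";
    useconds_t backoff = INITIAL_BACKOFF_US;
    int i;

    srand((unsigned) getpid());         /* different jitter per process */

    /* Fast path: short sleeps with exponential backoff and jitter. */
    for (i = 0; i < MAX_QUICK_RETRIES; i++)
    {
        if (try_read_lock(repository) == 0)
            return 0;
        usleep(backoff + (useconds_t) (rand() % (int) (backoff + 1)));
        backoff *= 2;
    }

    /* Slow path: the old 30-second wait with a message. */
    while (try_read_lock(repository) != 0)
    {
        fprintf(stderr, "waiting for someone's lock in %s\n", repository);
        sleep(LOCK_SLEEP_SECONDS);
    }
    return 0;
}

Seeding the jitter from the process ID means two processes that reach the
lock at the same instant will usually pick different delays and stop
colliding.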

-Larry Jones

I've never seen a sled catch fire before. -- Hobbes



