[Monotone-devel] Netsync problems


From: Nathaniel Smith
Subject: [Monotone-devel] Netsync problems
Date: Wed, 17 Nov 2004 05:49:54 -0800
User-agent: Mutt/1.5.6i

In staring at the netsync code to figure out the attach map assertion
failure thingie, I think I found a more serious bug.  As far as I can
tell, this is the basic logic of netsync pull:
  - Use spiffykeen algorithm to synchronize cert collections
  - Take all the revisions mentioned in the server's certs.  Then
    remove all the ones that we already have.  This gives the list of
    revisions we want to fetch.
  - Fetch the basic changeset for each such revision.  Now we have
    ancestry information locally, but still need to get manifests.
  - To do so, first find the heads (for purposes of netsync, a "head"
    is any revision in the collection which has no children; unlike
    the normal monotone definition, certs and branches and trust play
    no role).
  - Create a data structure called the "attachment map".  The purpose
    of this structure is to allow us to fetch revisions in two
    different ways, depending on whether we have some ancestor of them
    or not.  If we have an ancestor, we only need to fetch deltas
    versus that ancestor, working forward; if we don't have an
    ancestor, we have to fetch a full version, and work backwards.
    Therefore, we recursively scan the ancestry tree, starting at the
    heads, and build up a table saying for each revision whether we
    already have an ancestor of it or not (see the sketch after this
    list).
    [this is where the problem that was triggering assertion failures
    today came from; the logic here was aborting the scan early, and
    leaving this table only partially filled]
  - Having created this structure, we are now ready to fetch
    revisions.  We do this by iterating over the heads, and for each
    head, either:
    - if the head is attached, do a depth-first traversal of that
      head's attached ancestors, requesting forward deltas for each
      manifest.  This is cheap network-wise, because we only ever
      request deltas, which are small, but it is expensive CPU-wise
      for the server, because it has to reconstruct each manifest
      using the backwards deltas that it stores, and then compute the
      forward deltas from that.  This can be quadratic, I think, as
      the server rechains delta application on every client request?
    - if the head is not attached, fetch the manifest for the head,
      and do a breadth-first traversal requesting backwards deltas for
      each manifest.  This is expensive network-wise, because we have
      to request a full manifest; but it is cheap CPU-wise, because
      the server just sends out the backwards deltas that it is
      already storing.
    In both cases, the traversal stops whenever it hits a revision
    that the attachment map says should be handled by the other case.
    In both cases, files are also handled simultaneously and
    identically to manifests.
    In both cases, we keep a lookaside table of all revisions that
    have been processed, so that when we have a graph like
              A
             / \
            B   C
    then A won't be fetched twice, even though we start traversals from
    both B and C.
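
To make that concrete, here is a rough Python sketch of the attachment
map and the per-head dispatch as described above.  All the names
(build_attachment_map, parents_of, have_locally, the toy graph) are
mine, invented for illustration; this is not the actual netsync code:

    def build_attachment_map(heads, parents_of, have_locally):
        # For every revision reachable from the heads that we still need
        # to fetch, record whether we already hold at least one of its
        # ancestors ("attached") or not.
        attached = {}

        def scan(rev):
            if rev in attached:          # memoised: each revision scanned once
                return attached[rev]
            result = False
            for parent in parents_of(rev):
                if have_locally(parent):
                    result = True        # a parent we already hold anchors us
                elif scan(parent):
                    result = True        # ... or an anchored parent does
            attached[rev] = result
            return result

        for head in heads:
            scan(head)   # the buggy version aborted this scan early,
                         # leaving the table only partially filled
        return attached

    # Toy graph: P -> A -> {B, C}, plus an unrelated root D; we hold P.
    parents = {"B": ["A"], "C": ["A"], "A": ["P"], "D": []}
    have = {"P"}
    amap = build_attachment_map(["B", "C", "D"],
                                lambda r: parents[r],
                                lambda r: r in have)
    for rev, is_attached in sorted(amap.items()):
        if is_attached:
            print(rev, "-> forward deltas (depth-first, anchored)")
        else:
            print(rev, "-> full version + backward deltas (breadth-first)")

This prints A, B, and C as attached (they all have P, which we hold, as
an ancestor) and D as unattached, which is the split the two traversal
strategies above rely on.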

Important cases:
  - Fetching from a server into an empty database: in this case we are
    fetching the entire revision graph by the breadth-first unanchored
    method.  This is good; it means we don't have to make the server
    compute thousands and thousands of xdeltas.  However, if there are
    multiple heads, we will fetch full manifests and files for each,
    which can be bad, and is certainly silly: having fetched the entire
    upstream of one head, we have probably already fetched 99% of the
    upstream of all the others, and shouldn't waste network traffic
    downloading them again in full.
  - Fetching from a server that we've been following: in this case we
    are fetching a small part of the revision graph, almost entirely
    by the depth-first anchored method.  This is good: downloading a
    whole manifest would completely overwhelm the deltas required
    here, so it's well worth some server CPU.
  - Fetching from a server that we've fallen far behind on: on a busy
    project -- think GCC, or the Linux kernel -- I think it's pretty
    easy to get hundreds or even thousands of revisions behind; just
    go on vacation for a week or two, or be a casual contributor who
    only pays attention once a month or so.  Here the cost of the
    depth-first anchored method may be a problem.  It's hard to
    estimate CPU/network trade-offs properly, but we're also demanding
    much more I/O and CPU from the server, which has to be shared by
    everyone (though I guess monotone does make it trivial to build
    mirrors).

Correctness:
  - If there is a node A that is anchored (attached, in the terms used
    above), we are guaranteed that there is some node B that is a
    descendant of A such that B is a head.  Because anchored nodes have
    the property that all of their descendants are also anchored, we
    infer that B is anchored.  Thus when we do the traversal starting
    at B, we will reach A, so A will in fact be fetched.
  - If there is some node B that is not anchored, we are _not_
    guaranteed that it will be fetched.  Consider the following graph:
              A   B
               \ /
                C
    where we assume that A is shared by both parties from a previous
    netsync, but B and C are not.  C is anchored, B is not.  C is the
    only head.  So we will start a traversal at C: it will fetch C, it
    will consider A but stop because A does not need fetching, and it
    will consider B but stop because B is not anchored.  Having run a
    traversal from C, we stop, because we have traversed all the heads.
    B remains unfetched.
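
A tiny Python sketch of that failure, reusing the invented names from
the sketch above (again, not the real code; it only tracks which
revisions would be requested):

    def anchored_traversal(rev, parents_of, have_locally, attached, fetched):
        # Depth-first walk over anchored revisions, as described above:
        # stop at revisions we already have, and stop at unanchored ones
        # on the theory that "the other case" will handle them.
        if rev in fetched or have_locally(rev):
            return
        fetched.add(rev)
        for parent in parents_of(rev):
            if have_locally(parent):
                continue                    # nothing to fetch there
            if not attached.get(parent, False):
                continue                    # left for the unanchored case,
                                            # but no traversal ever starts there
            anchored_traversal(parent, parents_of, have_locally,
                               attached, fetched)

    parents = {"C": ["A", "B"], "A": [], "B": []}
    have = {"A"}                            # A is shared from a previous sync
    attached = {"C": True, "B": False}      # what the attachment map says
    fetched = set()
    anchored_traversal("C", lambda r: parents[r],
                       lambda r: r in have, attached, fetched)
    print(sorted(fetched))                  # ['C'] -- B is never fetched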

Solutions to this problem:
  - Just always do the breadth-first unanchored thing.  This is
    simple, obviously correct, and cheap for the server.  However,
    fetching full manifests is prohibitively expensive when updating.
  - Just always do the depth-first anchored thing; the trick here is
    that instead of scanning for unanchored nodes, we scan for root
    nodes that we don't have.  Then we first fetch those root nodes in
    their entirety -- we can't expect to do much better than that
    anyway -- and then proceed with the depth-first approach, secure
    in our knowledge that all nodes are anchored.  This is fairly
    simple, obviously correct, and very expensive for the server.
    It is probably unacceptable for initial fetches of the full
    revision graph.
  - Hybrid model: choose which of the previous two versions to use
    based on, say, the number of nodes to fetch, or a similar sort of
    cut-off.  The idea is, if we have a lot to fetch anyway, the
    breadth-first version is attractive, because the full fetch is not
    prohibitive, and we're cheap on the server.  OTOH, if we don't
    have much to fetch, then we know we're not going to hurt the
    server much by asking it to do some work.  This is not terribly
    simple, but it is obviously correct and combines the best parts of
    the previous two models.  It might be simpler code-wise than the
    current mixed-up system.  It has a magic constant in it.
  - Always do the breadth-first unanchored thing, but with a twist --
    figure out if our heads are anchored like we do now, and if they
    are, don't do a complete fetch; fetch a delta against the nearest
    ancestor that both sides have.  I think this can't do worse than
    double the network traffic involved (on the theory that
    diff(A, Z) should be <= diff(A, B) + diff(B, C) + ... + diff(Y, Z)),
    and it means that the server only has to do reconstruction and
    diffing for one revision pair, not some arbitrarily large number.
    I think this is the simplest solution that might be acceptable.
    Doubling the network traffic might not be acceptable, though; I'm
    not sure.  I'm also not sure if it comes close to doubling in
    practice.
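
For that last idea, the only thing that changes is how the starting
request for each head is chosen; a rough Python sketch, with invented
names again (nearest_shared_ancestor, initial_request_for_head) and the
same parents_of/have_locally callbacks as in the sketches above:

    from collections import deque

    def nearest_shared_ancestor(rev, parents_of, have_locally):
        # Breadth-first walk up the ancestry until we hit something both
        # sides already have; that is the base we ask for a delta against.
        seen = {rev}
        queue = deque([rev])
        while queue:
            current = queue.popleft()
            for parent in parents_of(current):
                if have_locally(parent):
                    return parent           # closest shared ancestor found
                if parent not in seen:
                    seen.add(parent)
                    queue.append(parent)
        return None                         # no shared ancestor at all

    def initial_request_for_head(head, parents_of, have_locally):
        base = nearest_shared_ancestor(head, parents_of, have_locally)
        if base is None:
            return ("full", head)           # truly unanchored: full manifest
        # One reconstruction + diff on the server, and the network cost is
        # bounded (roughly) by the chained-diff argument above:
        #   diff(base, head) <= diff(base, r1) + ... + diff(rk, head)
        return ("delta", base, head)

    # With the counterexample graph from the Correctness section:
    parents = {"C": ["A", "B"], "A": [], "B": []}
    print(initial_request_for_head("C", lambda r: parents[r],
                                   lambda r: r in {"A"}))
    # -> ('delta', 'A', 'C'): one delta against A instead of a full manifest

Everything below the head would then proceed by the usual breadth-first
backward-delta walk (which also sweeps up nodes like B in the
counterexample above), so the server only does reconstruction and
diffing for one revision pair per head.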

There, now that I've written all that, hopefully it will serve as bait
for someone (hi Graydon!) to point out my silly errors, come up with
different ideas, etc.?

-- Nathaniel

-- 
"...these, like all words, have single, decontextualized meanings: everyone
knows what each of these words means, everyone knows what constitutes an
instance of each of their referents.  Language is fixed.  Meaning is
certain.  Santa Claus comes down the chimney at midnight on December 24."
  -- The Language War, Robin Lakoff



