
Re: [Gnu-arch-users] RFC: arch protocol, smart server, and tla implementation prototypes


From: Aaron Bentley
Subject: Re: [Gnu-arch-users] RFC: arch protocol, smart server, and tla implementation prototypes
Date: Sat, 31 Jan 2004 11:49:12 -0500
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.6b) Gecko/20031205 Thunderbird/0.4

Tom Lord wrote:

   > From: Aaron Bentley <address@hidden>

   >> In general, streaming is going to require the client-side application
   >> logic to generate lots of requests before consuming the data they
   >> return.  That's awkward, client-side, and there's only a few special
   >> cases where it would be worthwhile.  More importantly: I think that
   >> those special cases will most often have better solutions than
   >> (literal) streaming.

   > I disagree: solving the latency problem for archd is nice, but
   > solving it for all protocols is nicer.  And I think we'll
   > get close to that functionality in the backwards-builder anyhow.

Um... I _think_ you're talking nonsense.  The backwards-builder is
nice but it's orthogonal to what we're talking about.
You said "I'm not so sure that general purpose streaming is a
particularly useful strategy for most arch operations."

arch_build_revision is
- frequently used
- sensitive to latency
- adaptable to streaming
- about to be worked on

"Solving for all protocols [using streaming]" would mean that the
revision builder, whether forwards or backwards, would have to issue N
getpatch requests all at once, then read N replies after that.

Knowing what N revisions to request is no big deal.   You figure that
out up-front going forwards or backwards.

But how do I process those N incoming replies?  One idea is that I
read each one, apply it, then read the next.  Another idea is that I
read them all up front, store them away, and then apply them one at a
time.  Both ideas are losing for two reasons: (1) you _can't_ do
server-side changeset composition that way; (2) you're making N calls
to apply_changeset.  The first of the two ideas is additionally lame
because, over TCP, at least, it won't really be streaming after all.

The third idea is that you add a DELTA function to the archive.h
vtable.  _That_ solves the problem for all protocols.  It can truly
stream to dumb-fs'es just fine (assuming that compose changesets isn't
_too_ expensive).  It can additionally benefit from server-side
changeset composition by a smart server.
I think that because archd is a smart server, it's at a different level from the dumb servers.
For example, you could implement arch_delta() as a plain old function:
if (archive_is_smart)
  {
    send_message ("DELTA ...", tx_fd);
    arch_unzip_delta (rx_fd, delta_dir);
  }
else
  arch_get_local_delta (..., delta_dir);
arch_apply_delta (delta_dir);

Then you could implement archd as
switch (message_type)
  {
  case DELTA:
    arch_get_local_delta (..., delta_dir);
    arch_zip_delta (tx_fd, delta_dir);
    break;
  }



   > Making build_revision work backwards looks relatively easy.
   > Just find out how many revisions away you have a library or a
   > cacherev, and build in that direction.

Sure. Glad to hear it. Never expected anything different.
I _suspect_ you'll find that it will take some tweaking to get the
heuristics exactly right.  Ancestry-distance isn't a great heuristic
-- you want to approximate weighting for archive latency and bandwidth
too.
Sorry, I meant "making build_revision work backwards *without* crossing tag boundaries" looks relatively easy. In that case, you're working with a single archive, so there are no differences in latency. Since this is a temporary stage, I don't want to spend too much time on its heuristics.

   > Crossing tag boundaries will make the problem harder.  While tla
   > implicitly uses the call stack to build forwards, crossing tag
   > boundaries requires tla to map out several paths, and determining the
   > best one will require a cost assessment based on

   > 1. aggregate download size
   > 2. download cost for a given archive

Are you aware of the previous threads about these kinds of heuristics?
I think it was mostly between Miles and me.  If not, I can try to find
it again.
I'd like to read it.  Is this it?
http://mail.gnu.org/archive/html/gnu-arch-users/2003-09/msg01419.html

Looks like it.  5x to 25x speedups and half the size for "summary deltas" of 200 KiB does sound worthwhile.  My concern is that the speedup on the client side might be produced by pushing that CPU cost onto the server.  Do we know that composing changesets will be significantly cheaper?  If so, we should be doing it client-side too.

   > The archd pfs can merge the requests for "patch-1, patch-2, patch-3"
   > into "delta patch-1 patch-3" with little difficulty.  Since pfs-archd
   > will be alone in supporting this functionality, it makes sense to
   > special-case for archd instead of special-casing the current supported
   > protocols.

Please don't make the archive.h protocol asynchronous -- there's just
no need for it.
I meant that you'd submit the jobs in batches, and let pfs-foo worry about the optimal way to implement them -- streaming (http?), parallel (ftp?), or serial (fs).

e.g., you'd have

int pfs_get_files (struct arch_pfs_session *p, int num_files, int *out_fd,
                   t_uchar **path, int soft_errors);

It would return when all requests were satisfied or when one proved impossible.

   > >     If from-revision is not * or
   > >     is not the immediate ancestor of to-revision, then implementations
   > >     MAY instead return an error.

   > I don't believe this provision is required.  Instead:

   > >     If from-revision is not * or is not the immediate ancestor of
   > >     to-revision, the server may return more than one changeset.
   > >     The composition of the changesets returned describes the
   > >     differences between the two revisions.

   > Hmm.  Looks awfully like streaming to me.

Yes, _like_ streaming in its benefits but _not_ streaming in how it
works.

Maybe we're just using words differently.
I think so. I haven't been making a clear distinction between streaming and pipelining. Pipelining is what you're describing up there, and I should have said that. My bad.

The question, though, is whether streaming should be exposed as a property
of the archive.h vtable, and the answer is "no".
Okay, but I think it should be fine to allow some or all requests to be batched. This keeps the funky code isolated and low-level, leaving a pretty clear and usable interface. And implementing pfs_file_exists in terms of pfs_files_exist would be easy.

   > I'm not sure that there's any value in composing the changesets
   > before applying them, though.  It would probably be better to
   > say arch_archive_delta can return any number of changesets, and
   > apply them directly.

It takes pressure off the need to find (not obviously perfectable)
inventory-traversal-elimination optimizations in apply_changeset.
Perhaps it is ideally better.   But compose_changesets
- will be new, buggy code
- affects a core function
- may, depending on implementation, require users to install interdiff
- looks about as complex as those optimizations

   >> Other merge commands can take good advantage of
   >> arch_compose_changesets as well.

   > If you're thinking of tla delta, I suspect that
   > compose-changesets will be implemented in terms of replay and
   > make-changeset, not the other way around.

That's just completely wrong.   `replay' needs a tree to operate on.
compose-changesets most certainly does not.
It depends how quickly you need it.  I can imagine an initial or reference implementation based on replay.

Aaron



