
[Monotone-devel] Re: Future of monotone


From: Graydon Hoare
Subject: [Monotone-devel] Re: Future of monotone
Date: Mon, 28 Jan 2008 00:20:54 -0800
User-agent: Thunderbird 2.0.0.11pre (X11/20071204)

Thomas Keller wrote:

You've probably read Graydon's recent message; I did, and I have to say it made me a little sad. Not because Graydon "officially" stopped working on the project - he wasn't doing much lately anyway, besides the recent attempt at redoing netsync with something smarter in nvm.nuskool. (What's the status of that, Graydon? Do you think it's worth somebody else picking up your work there?)

I think it's a good branch, as these things go. I just ran out of interest. It contains a variety of interrelated new code. Anyone else is welcome to pick it up, of course.

The changes are, primarily:

  - a JSON printer/parser and querying system
  - an SCGI i/o facility such that monotone can serve SCGI requests
    from a webserver or a raw socket (see the wire-format sketch after
    this list)
  - a sketch of the long-planned upgrade to certificates
  - a micro HTTP client
  - a much simpler synchronization system that works over HTTP+JSON+SCGI
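
For readers unfamiliar with SCGI, here is a rough TypeScript/Node sketch of the framing involved in the second item: the webserver hands the backend a netstring of NUL-terminated header pairs (CONTENT_LENGTH first, plus SCGI=1), followed by the raw body. The /monotone path and the JSON payload are illustrative assumptions on my part, not the actual nvm.nuskool wire format.

  function encodeScgiRequest(headers: Record<string, string>, body: string): Buffer {
    const bodyBuf = Buffer.from(body, "utf8");
    // CONTENT_LENGTH must come first, and an SCGI=1 header must be present.
    const pairs: string[] = ["CONTENT_LENGTH", String(bodyBuf.length), "SCGI", "1"];
    for (const [k, v] of Object.entries(headers)) {
      pairs.push(k, v);
    }
    const headerBlock = Buffer.from(pairs.map((s) => s + "\0").join(""), "utf8");
    // Netstring framing: "<length>:<header block>," then the body.
    return Buffer.concat([
      Buffer.from(`${headerBlock.length}:`),
      headerBlock,
      Buffer.from(","),
      bodyBuf,
    ]);
  }

  // Example: a JSON query posted to a hypothetical /monotone endpoint.
  const scgiReq = encodeScgiRequest(
    { REQUEST_METHOD: "POST", REQUEST_URI: "/monotone" },
    JSON.stringify({ command: "heads", args: ["net.venge.monotone"] })
  );
  console.log(scgiReq.length, "bytes of SCGI request");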

My plan here was to rip out netsync, packets, basic_io, netio, and the old cert system; put in the new cert system and the new sync system; and replace the automate commands with JSON-speaking equivalents. That would do away with all the legacy i/o stuff and provide an interface that's actually easy to speak.
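
To make "JSON-speaking equivalents" concrete, here is a minimal sketch of what a request/reply pair might look like; the field names and the command are assumptions for illustration, not the format used in the branch.

  // Hypothetical shapes only; the real wire format may differ.
  interface AutomateRequest {
    command: string;   // e.g. "heads" or "get_revision"
    args: string[];    // positional arguments, as the CLI would take them
  }

  interface AutomateResponse {
    status: "ok" | "error";
    result?: unknown;  // command-specific JSON payload
    message?: string;  // error text when status is "error"
  }

  // A request and the kind of reply it might get (placeholder revision id):
  const req: AutomateRequest = { command: "heads", args: ["net.venge.monotone"] };
  const reply: AutomateResponse = {
    status: "ok",
    result: ["0123456789abcdef0123456789abcdef01234567"],
  };
  console.log(JSON.stringify(req), JSON.stringify(reply));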

The idea was that this would make hosting and deploying much easier -- HTTP is more friendly -- remove some of the nonsense that makes debugging netsync difficult, and make it easier to write browser-based frontends, since you could just code them in JS and have them do XHR to the server to pluck out JSON objects describing database structures directly. Plus, lots of scripting languages can just slurp in JSON, so it might make scripting a little easier too.
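
A minimal browser-side sketch of that kind of frontend, using fetch rather than raw XHR; the /monotone endpoint and the reply shape are my own assumptions, not part of the branch.

  interface CertView {
    name: string;     // e.g. "branch", "author", "date"
    value: string;
    signer: string;   // key id of the signer
  }

  // Ask the server for the certs attached to a revision and return them
  // as plain JSON objects, ready to render in the page.
  async function fetchCerts(revision: string): Promise<CertView[]> {
    const resp = await fetch("/monotone", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ command: "certs", args: [revision] }),
    });
    if (!resp.ok) {
      throw new Error(`server returned ${resp.status}`);
    }
    const reply = await resp.json();
    return reply.result as CertView[];
  }

  // Usage: dump the certs on a (placeholder) revision to the console.
  fetchCerts("0123456789abcdef0123456789abcdef01234567").then((certs) => {
    for (const c of certs) {
      console.log(`${c.name}=${c.value} (signed by ${c.signer})`);
    }
  });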

The one key point was that this required a small bit of policy-branch work to connect each branch to a corresponding certificate-storage lineage. It's really just a branch-ID -> lineage-root-revid mapping you need to store, nothing fancy. This is because the new sync system, being completely DAG-based, was to transmit certs by transferring and then trivially auto-merging a "filesystem representation" of the set of certs associated with a branch. The details actually make sense, and it's way simpler than using merkle tries; if anyone cares, I'm happy to explain further. Early experiments with the new sync system suggest it would work well and be simpler than netsync.
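
One way to picture the "filesystem representation" idea (under my own naming assumptions, not the actual branch code): give each cert a path derived from its revision, cert name, and a hash of its contents, so that two peers' cert trees merge by plain set union and identical certs can never conflict.

  import { createHash } from "crypto";

  interface Cert {
    revision: string;  // revision id the cert is attached to
    name: string;      // e.g. "branch", "author", "date"
    value: string;
    signer: string;    // key id of the signer
  }

  // Each cert maps to a unique, content-derived path; the same cert always
  // lands at the same path on every peer.
  function certPath(c: Cert): string {
    const h = createHash("sha1")
      .update(`${c.name}\0${c.value}\0${c.signer}`)
      .digest("hex");
    return `${c.revision}/${c.name}/${h}`;
  }

  // Build the tree for a set of certs before syncing.
  function certTree(certs: Cert[]): Map<string, Cert> {
    return new Map(certs.map((c) => [certPath(c), c]));
  }

  // Merging two cert trees is just a union of path -> cert maps: a path
  // collision means the certs are identical, so there is nothing to resolve.
  function mergeCertTrees(a: Map<string, Cert>, b: Map<string, Cert>): Map<string, Cert> {
    const merged = new Map(a);
    for (const [path, cert] of b) {
      merged.set(path, cert);
    }
    return merged;
  }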

-Graydon




