
[Monotone-devel] Re: results of mercurial user survey


From: Bruce Stephens
Subject: [Monotone-devel] Re: results of mercurial user survey
Date: Fri, 28 Apr 2006 15:49:34 +0100
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.0.50 (gnu/linux)

Markus Schiltknecht <address@hidden> writes:

[...]

> To me (and a lot of others) it seems that monotone is doing its
> error checking in the wrong place. AFAICT netsync already guarantees
> to deliver the data exactly as it was sent by the other peer, so
> consistency checking just after syncing seems unnecessary.

Except if the sending end has bogus data (as happened with the
venge.net server).

[...]

> If a peer S is serving a repository and a peer C is pulling it,
> monotone currently puts the burden of consistency checking on the
> shoulders of C.  IMHO this is generally wrong: every peer should
> check itself for correct operation.  In case of filesystem
> corruption or the like on S, the admin of S needs to know, not the
> user on C.  Probably S should even stop serving its repository, as
> it is corrupt.

Maybe.  On the other hand, the current symmetry is nice: that servers
and clients really aren't *that* different.  And it's a nice property
that my monotone checks everything that it gets before the data goes
into its database.  

And would we really want to rely on a possibly damaged peer to
reliably detect that it's damaged?

(And that's not considering deliberately malign peers.)
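The check the client performs is essentially content-addressing: each object arrives together with its SHA-1 id, and the receiver recomputes the hash locally before letting the data into its database, so a damaged (or hostile) sender is caught regardless of what it claims about itself. A minimal sketch of the idea, with hypothetical names rather than monotone's actual code:

```python
import hashlib

def verify_and_store(db: dict, claimed_id: str, payload: bytes) -> None:
    """Recompute the content hash locally; never trust the sender's claim."""
    actual = hashlib.sha1(payload).hexdigest()
    if actual != claimed_id:
        # A corrupt or malicious peer is detected here, before the
        # bad data can enter the local database.
        raise ValueError(f"hash mismatch: claimed {claimed_id}, got {actual}")
    db[claimed_id] = payload

# Usage: a good object is accepted, a tampered one is rejected.
store = {}
good = b"revision data"
verify_and_store(store, hashlib.sha1(good).hexdigest(), good)
```

The design point is that the check needs nothing from the sender beyond the data and its advertised id, which is why it works symmetrically for clients and servers.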

[...]

> This would allow a user to pull a repository and have a look at it.
> And since a lot of users only want an up-to-date read-only copy
> (i.e. they never commit anything), that's a huge gain, IMHO.

How much does verification cost?  IIRC, njs measured it and found that
the answer's not that much.  It seems likely that if we assumed
verification was never needed, then netsync could be restructured to
be quite a bit faster.

I'd want to see some estimates before I found that convincing, though.
For example, we know it's going to be slower than the raw network
speed because we're taking data and sticking it into a database in
little chunks.  For the initial pull we could just copy the whole
database, but that suggests the bottleneck may not be verification as
such so much as using the database in this way.
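The chunk-by-chunk point can be made concrete: monotone's database is SQLite, and committing each small object separately forces a disk sync per object, whereas wrapping the whole pull in one transaction approaches bulk-copy speed. A rough sketch of the two write patterns (hypothetical schema, not monotone's):

```python
import sqlite3

def store_chunks(conn: sqlite3.Connection, chunks, batch: bool = False) -> None:
    """Simulate netsync writing many small objects into a database."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS objs (id INTEGER PRIMARY KEY, data BLOB)")
    if batch:
        # One transaction for the whole pull: close to raw copy speed.
        with conn:
            cur.executemany("INSERT INTO objs (data) VALUES (?)",
                            ((c,) for c in chunks))
    else:
        # One commit per chunk: each commit forces its own sync,
        # so the pull runs well below raw network speed.
        for c in chunks:
            cur.execute("INSERT INTO objs (data) VALUES (?)", (c,))
            conn.commit()

conn = sqlite3.connect(":memory:")
store_chunks(conn, [b"x" * 100 for _ in range(1000)], batch=True)
count = conn.execute("SELECT COUNT(*) FROM objs").fetchone()[0]
```

If the batched variant is nearly as fast as copying the database file, that would suggest the cost lies in the write pattern, not in the hash verification itself.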

[...]

> What do you think? Is it feasible to implement such a suspicion list?

It sounds too complex to be worthwhile without being more sure about
the benefits, IMHO.



