Re: Revision control

From: Arne Babenhauserheide
Subject: Re: Revision control
Date: Mon, 23 Jun 2008 11:41:17 +0200
User-agent: KMail/1.9.9

On Monday 16 June 2008 19:08:00, olafBuddenhagen@gmx.net wrote:
> Something feeling intuitive depends solely on previous experience. It is
> *always* subjective.

Some usability people say quite the contrary. Have a look at 
http://openusability.org, for instance. 

Whether a program feels intuitive depends on the people you write it *for*, and 
on the workflows for the most frequently used actions. 

> "git-checkout -f" is definitely not a command one uses regularily. It's
> roughly the equivalent of "cvs co -C".

What do you use instead? I regularly update all files to see how my changes 
interact with the tree. I almost never update only my part. 
"hg up" stays in your branch, too, by the way. 

> I don't know what "hg up" does -- but if it indeed follows CVS, it is
> something entirely different... "cvs up" roughly translates to
> "git-pull". (Or "git-fetch && git-rebase origin" if you want to avoid
> clobbering history when you have local changes.)

It does what "svn up" does, but locally. 

"hg pull" gets the changes from somewhere else. 

"hg up" updates the files you see. 

"hg pull -u" does both in one step. 
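For illustration, here is a throwaway sketch of that mapping on the git side (assuming git is installed; the repository names "upstream" and "clone" are made up): "hg pull" corresponds to fetching, "hg up" to updating the files you see, and "hg pull -u" to doing both in one command.

```shell
# Sketch of the hg <-> git mapping described above, using throwaway
# repositories under a temp dir. Assumes git is installed.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# An upstream repository with one commit.
git init -q upstream
(cd upstream &&
 git config user.email arne@example.com &&
 git config user.name Arne &&
 echo one > file &&
 git add file &&
 git commit -q -m first)

git clone -q upstream clone

# A second commit appears upstream after the clone.
(cd upstream && echo two >> file && git commit -q -a -m second)

cd clone
git fetch -q origin        # "hg pull": get the changes; working tree untouched
grep -q two file || echo "fetch alone did not touch the working tree"
git merge -q FETCH_HEAD    # "hg up": update the files you see
grep -q two file && echo "merge updated the working tree"
# git pull                 # "hg pull -u": both steps in one command
```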

> > > And in fact, the git developers never cease to point out that such
> > > interfaces can be easily created for git. The fact that very little
> > > actually exists in this way, 

Uh, what about Cogito? 

It isn't developed anymore, but it was created and used, so the problem did 
exist; and even though the situation has become better, git still requires 
developers to learn many new commands (by your own words). 

> > > only proves that in spite of any 
> > > initial grunting, people are quite fine using the less shiny but
> > > fully powerful standard interface in the end.
> >
> > No. It just proves that people can get used to any kind of workflow,
> > however inefficient.
> Maybe it would prove that, if we make the assumption that git workflow
> is inefficient... Which is something I totally fail to agree with.

I'm sorry. My wording wasn't fair here. 

Differently put: 
"People can get used to any workflow, and once they got used to it, the 
workflow will feel efficient to them." 

Until they try out other workflows, some of which may be more efficient. 

> > Forcing users to learn things which aren't strictly necessary for
> > using a tool just steals their time.
> I don't agree here either. Understanding the fundamental concepts *is*
> strictly necessary to use the tool properly. Without such understanding,
> users can never be quite sure what the commands really do, and this
> uncertainty is very frustrating, as well as time-consuming whenever in
> a less familiar situation.

This may be true for git, but I never had that problem in Mercurial, and 
that's what "intuitive" means to me. You quickly get a feeling for what 
commands do (if you know svn and put "distributed" (pull + push + merge) on 
top of that, you have it), and your feeling is right most of the time (for me 
it has never been wrong yet). 

> Eh? Why would accessing a few objects in a single pack *ever* be less
> efficient than accessing the same objects in per-file structures?

You have to open the whole pack to get at a few objects. 
When the changes touch only a few files, getting those files requires reading 
far less data. 

> > - Git offers some things only few workflows need and offers you
> > flexibility,
> Sanitizing history is important in any distributed project, regardless
> of the workflow.

How often? 

For the Linux kernel you need to "sanitize" your history yourself, but in 
smaller projects the individual history might be very interesting to others: 

you can see the changes over time, and you can prove in the end when the code 
really entered the repository. 

> > but gets in your way with necessary garbage collection (for active
> > projects),
> I doubt that really gets in your way... IMHO you are artificially
> inflating this minor issue.

It is something I have to remember that has nothing to do with what I want to 
do: adding changes. 

I want my tools not to bother me. A tool is my interface to the changes, but 
git requires me to think about something unrelated to the changes I could do 
or find: I have to think about the way they are stored. 

You can see it like this: 

I want to access changes. 

In Mercurial I can just access the changes and don't have to think about the 
way they are stored; Mercurial makes sure that it stays efficient. 

In Git I have to care for the store from time to time to keep it from becoming 
inefficient, so it gets in my way. 

I don't want to have to care for my tool. 

It's not my child. It's a tool. 

I want to care for the programs I write with it, instead. 
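To make that concrete, here is a small sketch (assuming git; the repository is a throwaway one, and the object counts are illustrative): loose objects accumulate with every commit until you run the maintenance step yourself.

```shell
# Sketch of the maintenance step under discussion.
# Assumes git is installed; the repository is a throwaway one.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email arne@example.com
git config user.name Arne

# Every commit leaves loose objects behind under .git/objects/...
for i in 1 2 3; do
  echo "$i" > file
  git add file
  git commit -q -m "change $i"
done
git count-objects            # several loose objects by now

# ...until you remember to care for the store yourself:
git gc --quiet
git count-objects            # the loose objects have moved into a pack
```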

> > and its syntax and use are confusing at times and almost always badly
> > documented.
> Must be subjective as well -- I for my part didn't have any negative
> experience with git documentation so far.

As I wrote, I did. So sure, it's subjective. But when many people share the 
same subjective experience, what they think may well be quite objective. 

> > And it is inconvenient, that when you just repacked the project I am
> > following, but you didn't garbage collect for a month or so, I have to
> > access the pack from the whole month in order to get the last few
> > changes.
> Eh? Unless using a dumb transport, you *never* get anything but the
> necessary changes.

Differently worded: The other side has to open the whole pack on disk, even 
when we're dealing with an intelligent transport. 

> And anyways, this is an internal detail -- I don't see how it is related
> to covenience at all.

It's related to necessary garbage collection not being a good idea in my 
opinion :) 

> > If at some time the repository grows too big and you didn't ever gc
> > before, I will have to access the whole project history to just get
> > the few changes you did after I last pulled your code.
> I have a growing suspicion that you do not really understand how git's
> object store works...

As far as I know, the whole repository will be packed; that's why. 

I read up on how it works, but maybe they have changed it since then. 

For network access they seem to have fixed the issue: today git creates a 
custom pack whenever you pull changes. 

It still has to open the whole pack on disk, though (which mostly matters for 
local operations, and only as long as the network is slower than the disks). 

> > And garbage collecting all the time destroys its advantage, as it will
> > just create as many packs as you have revisions (as far as I
> > understand it).
> Indeed... Though even that wouldn't be terribly inefficient in most
> cases. But anyways, if you really need to, there is an option to repack
> existing packs.

And create a single big pack with the disadvantages I wrote about above. 

> I only can say that in spite of various attempts, AFAIK versioned
> filesystems never got implemented in any mainstream system except VMS...
> And this is still totally unrelated to the question which version
> control system to use for the Hurd repository...

It is related to the question, why I think that having to gc is a bad idea. 

> The point is that in git, like in UNIX shell, the interface is not
> abstracted from the internals -- which means a somewhat steeper learning
> curve, but offers a lot of advantages in the long run -- not excepting
> usability.

Usability for whom? 

Best wishes, 
-- Weblog: http://blog.draketo.de
-- Infinite Hands: http://infinite-hands.draketo.de - singing a part of the 
history of free software. 
-- Ein Würfel System: http://1w6.org - simply clean (role-playing) rules

-- My public key (PGP/GnuPG): 

Attachment: signature.asc
Description: This is a digitally signed message part.
