
Re: [task #4633] GPG-Signed Commits

From: Jim Hyslop
Subject: Re: [task #4633] GPG-Signed Commits
Date: Wed, 05 Oct 2005 16:51:43 -0400
User-agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)

Derek Price wrote:
> Jim Hyslop wrote:
>> This does complicate things somewhat on the client side. Instead of
>> simply signing the file and creating a detached signature, the client
>> has to sign [header info]+[file].
>
> I'm afraid that is unavoidable if replay attacks are going to be detected.

Agreed - perhaps I was simply pointing out the obvious ;=)
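To make that client-side step concrete, here is a hypothetical sketch (the function name and header layout are mine, not a proposal for the actual wire format) of assembling the [header info]+[file] byte string that the client would then feed to GPG for a detached signature:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical: concatenate the server-supplied header (e.g. the
   sequence id and commit metadata) with the file contents, producing
   the exact byte string the client would detach-sign. */
static char *build_signing_buffer(const char *header, size_t header_len,
                                  const char *file, size_t file_len,
                                  size_t *out_len)
{
    char *buf = malloc(header_len + file_len);
    if (buf == NULL)
        return NULL;
    memcpy(buf, header, header_len);
    memcpy(buf + header_len, file, file_len);
    *out_len = header_len + file_len;
    return buf;
}
```

The point of signing the concatenation, rather than the file alone, is that the header (and thus the sequence id) becomes part of what the signature attests to, which is what defeats the replay.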

> I think we are discussing replay attacks, inserted at any of a number of

Yes, in this branch of discussion, we are. I meant in the more general case of overall documentation for this feature.

I'll start pulling together a list of attacks we've identified. Should I put that up on the CVS Wiki at ximbiot? (which reminds me, I haven't updated the other discussion paper I already put up with your comments).

>    1. The first and most obvious is an authenticated malicious client
>       (compromised or otherwise) attempting to replay an earlier commit
>       session (its own or someone else's), presumably because it
>       contains exploitable code.
>    2. Next would be someone with access to a compromised server
>       attempting to replace archive revisions with old, signed revisions
>       (effectively equivalent to a compromised server which just sends
>       the wrong, replayed data in response to client requests).
>
> It occurs to me that in my replay-attack analysis so far, I have mostly
> neglected to consider a compromised server (case 2 above).  In this
> case, the server cannot be relied on to enforce that each client signs
> the same sequence id it was passed because the server cannot be trusted.

I'm not sure whether a CVS client can defend itself against *any* attack from a compromised server. The client is at the mercy of the server.

Maybe we need to start thinking about ways of detecting and foiling a compromised server: keeping a GPG signature of the CVS executable, running an independent utility that scans the repository checking revision signatures, etc.

> This question remains: is there enough value to only handing out
> sequence ids on commit rather than handing them out with every session
> and clients only using them for commits?  i.e., is there really a value
> to having seq ids (S), or even signed revision numbers, like scenario 1:
>
> 1.1 (S1) --- 1.2 (S2) --- 1.3 (S4) --- 1.4 (S6)
>               \- (S3) --- (S5)
>
> Over a simpler-to-implement but still always-increasing scenario 2 like:
>
> 1.1 (S1) --- 1.2 (S21) --- 1.3 (S47) --- 1.4 (S68)
>               \- (S31) --- (S52)

Seems to me scenario 1 is more robust against the DoS attack. But, OTOH, how long would such an attack have to go on before the ID wraps? If we assume an attacker makes, say, 100 connections per second (just picking an arbitrary figure for discussion purposes), and assuming an unsigned 32 bit integer, then the counter will wrap after somewhere in the neighbourhood of 500 days, or approximately 16 months.

The question is: is 100 connections per second a reasonable assumption at current technology levels? i.e. could an attacker mount a much more aggressive attack, with perhaps thousands of connections per second? I don't know the answer to that one.

Actually, unless the attacker is extremely patient, the DoS can be foiled by using a 64-bit number. Now, before you go pointing out that not all platforms support 64-bit integers, it doesn't have to be that complex. After all, we aren't really doing 64-bit math: two 32-bit integers will suffice. Whenever the low-order long rolls over, increment the high-order long.
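That carry logic amounts to very little code. A minimal sketch, with illustrative names (not from the actual CVS sources):

```c
/* Sketch of the two-32-bit-halves counter described above. */
struct seq_counter {
    unsigned long high;   /* high-order 32 bits */
    unsigned long low;    /* low-order 32 bits  */
};

/* Advance the sequence id; when the low half would pass 2^32 - 1,
   it wraps to zero and the carry goes into the high half. */
static void seq_increment(struct seq_counter *c)
{
    if (c->low == 0xFFFFFFFFUL) {
        c->low = 0;
        c->high++;
    } else {
        c->low++;
    }
}
```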

Generating the text to sign is as simple as:

   char sequence_id[22];  /* two 10-digit values, a ':', and the NUL */
   snprintf(sequence_id, sizeof sequence_id, "%lu:%lu",
            (unsigned long) high_order, (unsigned long) low_order);

Even at 1 million connections per second, it will take (if my calculations are correct) close to 600,000 years to exhaust the number space. So I say go for the easy route, scenario 2.
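For what it's worth, both figures check out. A throwaway helper (names mine) to redo the arithmetic:

```c
/* Back-of-the-envelope: years to exhaust a sequence-id space at a
   fixed connection rate. Purely illustrative. */
static double exhaustion_years(double id_space, double conns_per_second)
{
    double seconds_per_year = 365.25 * 24.0 * 3600.0;
    return id_space / conns_per_second / seconds_per_year;
}
```

exhaustion_years(4294967296.0, 100.0) comes out around 1.36 years, i.e. the ~16 months above, and exhaustion_years(2^64, 1e6) comes out around 585,000 years.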


> I still think that the simplest way to verify the repository integrity
> after the fact is to maintain a mirror of changes via commitinfo/loginfo
> and compare it to the contents of the repository via some external
> toolset periodically.

I agree, with one caveat: if the server is compromised, I don't think we can trust any commitinfo/loginfo messages it generates.

>> Signing the delta makes each commit atomic, with no dependencies on
>> the validity of previous versions. In effect, Alice says "These are
>> the specific changes I made, and this is the result."
>
> Again, not very valuable to a single client, but potentially valuable to
> a forensic analysis.  You are wrong about having no dependencies on
> previous versions, however.  A signed delta cannot be used to construct
> its revision of a file unless all other deltas in the chain are valid.

True, if the file has been compromised you won't be able to correctly reconstruct previous images. I wasn't looking at it from that point of view, though.

My point of view is: when I'm signing something, I don't like signing things that someone else assures me are correct. Unless I go through and validate the entire signature chain before committing, that's effectively what I'm doing: the server says "Here's someone else's signature, please sign it to acknowledge that it is correct." If, OTOH, I am signing the file I'm checking in, and the changes I have made to that file, then I will stand by everything that I've signed.

Actually, this raises a separate concern: my signature is either valid or it isn't. As I recall, the proposal is not necessarily to store the exact thing I've signed, but just the signature itself. This raises the question: if my signature encompasses both the delta and the entire file, and it suddenly becomes invalidated, what got tampered with: my file as checked in, or the delta? Or do we just not worry about it, and simply retrieve the latest backup file that validates the signatures properly?

