sks-devel

From: Yaron Minsky
Subject: Re: [Sks-devel] Re: Debian SKS packages was: [pgp-keyserver-folk] new pgp key server
Date: Mon, 20 Sep 2004 08:38:59 -0400

On Sun, 19 Sep 2004 23:45:43 -0400, David Shaw <address@hidden> wrote:
>  [ Much sensible discussion of the difficulties of packet parsing discussed ]

I guess the problem is that it's not clear where to stop in your
analysis of the packets, how deep into the structure to go.  Here's
what SKS does do:

First, it parses the key into a sequence of packets, i.e., content
tags and message bodies.  For this, it only needs to parse the outside
of the packets.  That is, it needs to pull out the first byte, figure
out whether it's a new- or old-style packet, get the length, and then
get the body.
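To make that "outside only" parse concrete, here is a minimal sketch in
Python (SKS itself is OCaml; this is just an illustration).  It follows the
old- and new-format header rules from the OpenPGP spec; partial body lengths
and indeterminate lengths are deliberately left unhandled to keep it short.

```python
def parse_packets(data: bytes):
    """Split raw key material into (tag, body) pairs, never looking inside bodies."""
    packets = []
    i = 0
    while i < len(data):
        first = data[i]
        if not first & 0x80:
            raise ValueError("bit 7 of a packet's first octet must be set")
        if first & 0x40:                       # new-format packet
            tag = first & 0x3F
            o1 = data[i + 1]
            if o1 < 192:                       # one-octet length
                length, i = o1, i + 2
            elif o1 < 224:                     # two-octet length
                length, i = ((o1 - 192) << 8) + data[i + 2] + 192, i + 3
            elif o1 == 255:                    # five-octet length
                length, i = int.from_bytes(data[i + 2:i + 6], "big"), i + 6
            else:
                raise ValueError("partial body lengths not handled in this sketch")
        else:                                  # old-format packet
            tag = (first >> 2) & 0x0F
            ltype = first & 0x03
            if ltype == 3:
                raise ValueError("indeterminate length not handled in this sketch")
            n = {0: 1, 1: 2, 2: 4}[ltype]      # size of the length field
            length = int.from_bytes(data[i + 1:i + 1 + n], "big")
            i += 1 + n
        packets.append((tag, data[i:i + length]))
        i += length
    return packets
```

For example, the old-format header byte 0x98 decodes as tag 6 (a public key
packet) with a one-octet length, so `bytes([0x98, 0x03, 1, 2, 3])` yields a
single packet `(6, b'\x01\x02\x03')`.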

Next, the packet grammar is parsed.  That is, the key is analyzed into
a reasonable structure so that it can be merged in with any existing
keys.  Here's the basic structure I require all keys to fit into:

type sigpair = packet * packet list

type pkey = { key : packet;
              selfsigs : packet list;  (* revocations only in v3 keys *)
              uids : sigpair list;
              subkeys : sigpair list;
            }

The parsing is based on the content types.  So, the key packet has
to be a Public_Key_Packet, the uids have to be either User_Id_Packets
or User_Attribute_Packets followed by Signature_Packets, etc.
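As an illustration of that grammar check, here is a hypothetical Python
mirror of the OCaml pkey structure, grouping a flat (tag, body) stream by
content type.  The tag numbers follow the OpenPGP spec; the grouping rules
are my sketch of the description above, not actual SKS code.

```python
from dataclasses import dataclass, field

# OpenPGP packet tags used by the grammar below.
PUBLIC_KEY, SIGNATURE, PUBLIC_SUBKEY, USER_ID, USER_ATTR = 6, 2, 14, 13, 17

@dataclass
class PKey:
    key: tuple                                   # the Public_Key_Packet
    selfsigs: list = field(default_factory=list)  # direct sigs / v3 revocations
    uids: list = field(default_factory=list)      # (uid_packet, [sig, ...]) pairs
    subkeys: list = field(default_factory=list)   # (subkey_packet, [sig, ...]) pairs

def parse_pkey(packets):
    """Group a flat packet list into the pkey shape; reject anything else."""
    if not packets or packets[0][0] != PUBLIC_KEY:
        raise ValueError("key must start with a Public_Key_Packet")
    pkey = PKey(key=packets[0])
    current = None   # the sig list that subsequent Signature_Packets attach to
    for pkt in packets[1:]:
        tag = pkt[0]
        if tag in (USER_ID, USER_ATTR):
            current = []
            pkey.uids.append((pkt, current))
        elif tag == PUBLIC_SUBKEY:
            current = []
            pkey.subkeys.append((pkt, current))
        elif tag == SIGNATURE:
            (pkey.selfsigs if current is None else current).append(pkt)
        else:
            raise ValueError(f"packet tag {tag} does not fit the grammar")
    return pkey
```

Signatures seen before any uid or subkey land in selfsigs; every later
signature attaches to the most recent uid or subkey, matching the sigpair
shape of the OCaml type.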

Any key that passes these two steps can be accepted into the
keyserver.  That's because these two structural steps are the minimum
required in order to merge the keys in question.  If I have any two
keys that fit into the pkey structure above, I can merge them into a
new key.  And something like GPG should be able to throw away any bad
or inconsistent packets.  The idea would have to be that whenever you
find a bad packet, you toss out anything that depends on it.  So, if
you throw out a uid, you would also toss out all the signatures on
that uid.  Similarly for a subkey.  And if you had to toss out the key
packet itself, the entire key is junk.
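The merge invariant can be sketched as follows, again in Python rather than
OCaml, and again as an illustration of the idea rather than the actual SKS
algorithm: two keys with the same key packet merge by taking the union of
their self-signatures and the union of the signatures under each matching
uid or subkey.  Here keys are plain dicts with the fields of the pkey type
above, and uids/subkeys are (lead_packet, [sig, ...]) pairs.

```python
def merge_sigpairs(a, b):
    """Union two (lead_packet, [sig, ...]) lists, keyed by the lead packet."""
    merged = [(pkt, list(sigs)) for pkt, sigs in a]
    index = {pkt: sigs for pkt, sigs in merged}
    for pkt, sigs in b:
        if pkt in index:
            # Same uid/subkey appears in both keys: union the signatures.
            index[pkt].extend(s for s in sigs if s not in index[pkt])
        else:
            merged.append((pkt, list(sigs)))
    return merged

def merge_pkeys(x, y):
    """Merge two copies of a key that both fit the pkey structure."""
    if x["key"] != y["key"]:
        raise ValueError("can only merge copies of the same key")
    return {
        "key": x["key"],
        "selfsigs": x["selfsigs"]
                    + [s for s in y["selfsigs"] if s not in x["selfsigs"]],
        "uids": merge_sigpairs(x["uids"], y["uids"]),
        "subkeys": merge_sigpairs(x["subkeys"], y["subkeys"]),
    }
```

The point is that nothing here needs to look inside a packet body: fitting
the structure is, by itself, enough for the merge to be well-defined.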

> 
> I am not fully convinced that having GnuPG try and massage garbage
> into a usable key is a good idea.

I'm not talking about garbage that comes from keyserver bugs.  SKS,
after all, does work in a reasonable way here, as does GPG.  The
problem is that they're not quite compatible with each other.  SKS
will stick garbage into keys, but only in a well-controlled way that
should be consistently repairable.  The kinds of problems generated by
PKS's subkey mangling should not come up with SKS, because the merging
algorithm has been built to be consistent with the RFC.

The problem, in some sense, is that we need a standard describing what
kinds of problems GPG will repair.  Then we can make sure that SKS
filters out everything that GPG can't recover from.  I wrote an
algorithm based on my understanding of how PKS already worked.
Basically, I thought to myself, "for this system to make sense,
merging has to work this way."  But I never engaged in a conversation
with the developers of GPG or PGP, so I lacked a close understanding
of what kinds of merging said software would actually tolerate.  If
we can come to an agreement about that, I think we'll be in pretty
good shape.

One thing I feel pretty strongly about, as to where that boundary
lies, is that SKS's understanding of OpenPGP should be kept pretty
minimal.  We don't want SKS being too "smart", both because that would
be expensive (once we get to the point of requiring cryptography), and
because the keyserver should be fairly tolerant of deviations from the
spec.

(It's worth noting that SKS does already understand a fair bit of the
PGP spec.  It parses things fairly deeply for generating indices.  But
when accepting data, most of that parsing is not done.  Only the
minimal analysis described above is.)

y

> To a certain degree, I want this to be the responsibility of the
> storage (i.e. the keyservers).  The keyservers are charged with
> delivering keys safely.  If someone is uploading garbage, then the
> keyservers should not accept it.
> 
> I have been forced into key repair in the past with the PKS multiple
> subkey nonsense, and there is also code in GnuPG to undo the PKS
> duplicated user ID problem.  Where, though, does it end?  Every time
> there is a new form of mangling on the keyservers, I don't want to add
> code to GnuPG to try and de-mangle it.  I don't know of many pieces of
> software that are expected to clean up after broken storage.
> Photoshop doesn't try to reassemble bits and pieces of JPEG after the
> file gets mangled.
> 
> To have GnuPG fix this particular key is actually pretty trivial.
> I've attached a patch for the curious.  It's against CVS GnuPG, though
> you could probably massage it into 1.2.x.  I'm just concerned that in
> an effort to de-mangle a key, GnuPG might accept something it should
> not.
> 
> There is a lot going for what you say.  This is something I need to
> think about more.
> 
> David